| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
21,117,851 | https://en.wikipedia.org/wiki/Composite%20laminate | In materials science, a composite laminate is an assembly of layers of fibrous composite materials which can be joined to provide required engineering properties, including in-plane stiffness, bending stiffness, strength, and coefficient of thermal expansion.
The individual layers consist of high-modulus, high-strength fibers in a polymeric, metallic, or ceramic matrix material. Typical fibers used include cellulose, graphite, glass, boron, and silicon carbide, and some matrix materials are epoxies, polyimides, aluminium, titanium, and alumina.
Layers of different materials may be used, resulting in a hybrid laminate. The individual layers generally are orthotropic (that is, with principal properties in orthogonal directions) or transversely isotropic (with isotropic properties in the transverse plane), with the laminate then exhibiting anisotropic (with variable direction of principal properties), orthotropic, or quasi-isotropic properties. Quasi-isotropic laminates exhibit isotropic (that is, independent of direction) in-plane response but are not restricted to isotropic out-of-plane (bending) response. Depending upon the stacking sequence of the individual layers, the laminate may exhibit coupling between in-plane and out-of-plane response. An example of bending-stretching coupling is the presence of curvature developing as a result of in-plane loading.
Classical laminate analysis
Composite laminates may be regarded as a type of plate or thin-shell structure, and as such their stiffness properties may be found by integration of in-plane stress in the direction normal to the laminate's surface. The broad majority of ply or lamina materials obey Hooke's law and hence all of their stresses and strains may be related by a system of linear equations. Laminates are assumed to deform by developing three strains of the mid-plane/surface and three changes in curvature,

$\boldsymbol{\varepsilon}^0 = (\varepsilon_x^0,\ \varepsilon_y^0,\ \gamma_{xy}^0)$

and

$\boldsymbol{\kappa} = (\kappa_x,\ \kappa_y,\ \kappa_{xy}),$

where $x$ and $y$ define the co-ordinate system at the laminate level. Individual plies have local co-ordinate axes which are aligned with the material's characteristic directions, such as the principal directions of its elasticity tensor. Uni-directional plies, for example, always have their first axis aligned with the direction of the reinforcement. A laminate is a stack of individual plies having a set of ply orientations

$(\theta_1,\ \theta_2,\ \ldots,\ \theta_n),$
which have a strong influence on both the stiffness and strength of the laminate as a whole. Rotating an anisotropic material results in a variation of its elasticity tensor. If in its local co-ordinates a ply is assumed to behave according to the stress–strain law

$\boldsymbol{\sigma}_{12} = \mathbf{Q}\,\boldsymbol{\varepsilon}_{12},$

then under a rotation transformation (see transformation matrix) it has the modified elasticity terms

$\bar{\mathbf{Q}} = \mathbf{T}^{-1}\,\mathbf{Q}\,\mathbf{T}^{-\mathsf{T}}.$

Hence

$\boldsymbol{\sigma}_{xy} = \bar{\mathbf{Q}}\,\boldsymbol{\varepsilon}_{xy}.$
An important assumption in the theory of classical laminate analysis is that the strains resulting from curvature vary linearly in the thickness direction, and that the total in-plane strains are a sum of those derived from membrane loads and bending loads. Hence

$\boldsymbol{\varepsilon}(z) = \boldsymbol{\varepsilon}^0 + z\,\boldsymbol{\kappa},$

where $z$ is the distance from the mid-plane. Furthermore, the three-dimensional stress field is replaced by six stress resultants: three membrane forces $\mathbf{N}$ (forces per unit length) and three bending moments $\mathbf{M}$ per unit length. It is assumed that if these quantities are known at any location $(x, y)$, then the stresses may be computed from them. Once part of a laminate, the transformed elasticity is treated as a piecewise-constant function of the thickness direction, hence the integration operation may be treated as the sum of a finite series, giving

$\mathbf{N} = \mathbf{A}\,\boldsymbol{\varepsilon}^0 + \mathbf{B}\,\boldsymbol{\kappa}, \qquad \mathbf{M} = \mathbf{B}\,\boldsymbol{\varepsilon}^0 + \mathbf{D}\,\boldsymbol{\kappa},$

where

$\mathbf{A} = \sum_{k=1}^{n} \bar{\mathbf{Q}}^{(k)} (z_k - z_{k-1}), \quad \mathbf{B} = \tfrac{1}{2} \sum_{k=1}^{n} \bar{\mathbf{Q}}^{(k)} (z_k^2 - z_{k-1}^2), \quad \mathbf{D} = \tfrac{1}{3} \sum_{k=1}^{n} \bar{\mathbf{Q}}^{(k)} (z_k^3 - z_{k-1}^3),$

with $z_{k-1}$ and $z_k$ the through-thickness co-ordinates of the bottom and top of ply $k$.
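These sums are straightforward to evaluate numerically. The sketch below is a minimal illustration, not part of the source: it assumes every ply shares one reduced stiffness matrix Q in its local axes and that engineering shear strain conventions are used.

```python
import numpy as np

def rotated_stiffness(Q, theta):
    """Rotate a ply's reduced stiffness matrix Q (3x3, local axes) into
    laminate axes: Q_bar = T^{-1} Q T^{-T} (engineering shear strain)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[c * c, s * s, 2 * c * s],
                  [s * s, c * c, -2 * c * s],
                  [-c * s, c * s, c * c - s * s]])  # stress transformation
    Tinv = np.linalg.inv(T)
    return Tinv @ Q @ Tinv.T

def abd_matrices(Q_local, angles_deg, thicknesses):
    """Sum the finite series for the extensional (A), coupling (B) and
    bending (D) stiffness matrices of a laminate."""
    h = sum(thicknesses)
    z = np.concatenate(([-h / 2], -h / 2 + np.cumsum(thicknesses)))
    A, B, D = (np.zeros((3, 3)) for _ in range(3))
    for k, angle in enumerate(angles_deg):
        Qb = rotated_stiffness(Q_local, np.radians(angle))
        A += Qb * (z[k + 1] - z[k])
        B += Qb * (z[k + 1] ** 2 - z[k] ** 2) / 2
        D += Qb * (z[k + 1] ** 3 - z[k] ** 3) / 3
    return A, B, D
```

For a stacking sequence that is symmetric about the mid-plane, the B matrix vanishes, removing the bending-stretching coupling discussed above.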
See also
Carbon-fiber-reinforced polymer
Composite material
High-pressure laminate
Laminate
Lay-up process
Void (composites)
References
External links
Advanced Composites Centre for Innovation and Science
Composite materials
Fibre-reinforced polymers | Composite laminate | Physics | 758 |
565,905 | https://en.wikipedia.org/wiki/Disassortative%20mating | Disassortative mating (also known as negative assortative mating or heterogamy) is a mating pattern in which individuals with dissimilar phenotypes mate with one another more frequently than would be expected under random mating. Disassortative mating reduces the mean genetic similarities within the population and produces a greater number of heterozygotes. The pattern is character specific, but does not affect allele frequencies. This nonrandom mating pattern will result in deviation from the Hardy-Weinberg principle (which states that genotype frequencies in a population will remain constant from generation to generation in the absence of other evolutionary influences, such as "mate choice" in this case).
Disassortative mating is different from outbreeding, which refers to mating patterns in relation to genotypes rather than phenotypes.
Due to homotypic preference (bias toward the same type), assortative mating occurs more frequently than disassortative mating. This is because homotypic preference increases relatedness between mates, and between parents and offspring, which promotes cooperation and increases inclusive fitness. With disassortative mating, heterotypic preference (bias toward different types) has in many cases been shown to increase overall fitness. When this preference is favored, it allows a population to generate and/or maintain polymorphism (genetic variation within a population).
The fitness advantage of disassortative mating seems straightforward, but the selective forces driving its evolution are still largely unknown in natural populations.
Types of disassortative mating
Imprinting is one example of disassortative mating. A model shows that individuals imprint on a genetically transmitted trait during early ontogeny and choosy females later use those parental images as a basis of mate choice. A viability-reducing trait may be maintained even without the fertility cost of same-type matings. With imprinting, preference can be established even if it is initially rare, when there is a fertility cost of same-type matings.
One uncommon type of disassortative mating is the female preference on rare (or novel) male phenotypes. A study on guppies, Poecilia reticulata, revealed that the female preference was sufficient to tightly maintain polymorphism in male traits. This type of mate choice shows that costly preferences can persist at higher frequencies if mate choice is hindered, which would allow the alleles to approach fixation.
Effects
Disassortative mating may result in balancing selection and the maintenance of high genetic variation in the population. This is due to the excess heterozygotes that are produced from disassortative mating relative to a randomly mating population.
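This heterozygote excess is easy to demonstrate numerically. The following sketch is an illustration, not from the source: it models one biallelic locus, takes the "phenotype" to be the genotype itself, and requires mating pairs to differ under the disassortative regime.

```python
import random

def offspring(g1, g2):
    """One offspring: each parent contributes one randomly chosen allele."""
    return tuple(sorted((random.choice(g1), random.choice(g2))))

def heterozygote_frequency(pop, disassortative, n_offspring=10_000):
    kids = []
    while len(kids) < n_offspring:
        a, b = random.sample(pop, 2)
        # Under disassortative mating, only dissimilar pairs mate.
        if disassortative and a == b:
            continue
        kids.append(offspring(a, b))
    return sum(k == (0, 1) for k in kids) / len(kids)

# Start at Hardy-Weinberg proportions with allele frequencies p = q = 0.5.
base = [(0, 0)] * 2500 + [(0, 1)] * 5000 + [(1, 1)] * 2500
print("random mating:        ", heterozygote_frequency(base, False))
print("disassortative mating:", heterozygote_frequency(base, True))
```

Starting from Hardy-Weinberg proportions with p = q = 0.5, the random-mating run stays near 2pq = 0.5 while the disassortative run rises to about 0.6, and allele frequencies remain unchanged, consistent with the pattern described above.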
In humans
The best-known example of disassortative mating in humans is preference for genes in the major histocompatibility complex (MHC) region on chromosome 6. Individuals feel more attracted to odors of individuals who are genetically different in this region. This promotes MHC heterozygosity in the children, making them less vulnerable to pathogens.
In non-human species
Evidence from research on coloration in Heliconius butterflies suggests that disassortative mating is more likely to emerge when phenotypic variation is based on self-referencing (mate preference depends on the phenotype of the choosing individual; dominance relationships therefore influence the evolution of disassortative mating).
Disassortative mating has been found for traits such as body symmetry in the snail Amphidromus inversus. In most snails, individuals of the opposite coil are rarely able to mate with individuals of the normal coil pattern; in this species, however, matings between individuals of opposing coils are frequent. The chirality of the spermatophore and of the female's reproductive tract is said to give such matings a greater chance of producing offspring. This example of disassortative mating promotes polymorphism within the population.
In the scale-eating predatory fish Perissodus microlepis, disassortative mating allows individuals with the rare mouth-opening-direction phenotype to have better success as predators.
House mice conduct disassortative mating as they prefer mates genetically dissimilar to themselves. Specifically, odor profiles in mice are strongly linked to genotypes at the MHC loci controlling changes in the immune response. When MHC-heterozygous offspring are produced, it enhances their immunocompetence because of their ability to recognize a large range of pathogens. Thus, the mice tend to prefer providing "good genes" to their offspring so they will mate with individuals with differences at the MHC loci.
In the seaweed fly, Coelopa frigida, heterozygotes at the locus alcohol dehydrogenase (Adh) have been shown to express better fitness by having higher larval density and relative viability. Females displayed disassortative mating in respect to the Adh locus because they would only mate with males of the opposite Adh genotype. It is suspected that they do this to maintain genetic variation in the population.
White-throated sparrows, Zonotrichia albicollis, show strong disassortative mating with respect to the color of their head stripe. The single locus that controls this expression is observed only in heterozygotes. Additionally, the heterozygous arrangement of chromosome 2 produced by disassortative mating yields offspring with high aggression, a social behavior that allows them to dominate their opponents.
References
Mating
Mating systems
Population genetics
Ecology | Disassortative mating | Biology | 1,141 |
22,497 | https://en.wikipedia.org/wiki/OpenGL | OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering.
Silicon Graphics, Inc. (SGI) began developing OpenGL in 1991 and released it on June 30, 1992. It is used for a variety of applications, including computer-aided design (CAD), video games, scientific visualization, virtual reality, and flight simulation. Since 2006, OpenGL has been managed by the non-profit technology consortium Khronos Group.
Design
The OpenGL specification describes an abstract application programming interface (API) for drawing 2D and 3D graphics. It is designed to be implemented mostly or entirely using hardware acceleration such as a GPU, although it is possible for the API to be implemented entirely in software running on a CPU.
The API is defined as a set of functions which may be called by the client program, alongside a set of named integer constants (for example, the constant GL_TEXTURE_2D, which corresponds to the decimal number 3553). Although the function definitions are superficially similar to those of the programming language C, they are language-independent. As such, OpenGL has many language bindings, some of the most noteworthy being the JavaScript binding WebGL (API, based on OpenGL ES 2.0, for 3D rendering from within a web browser); the C bindings WGL, GLX and CGL; the C binding provided by iOS; and the Java and C bindings provided by Android.
In addition to being language-independent, OpenGL is also cross-platform. The specification says nothing on the subject of obtaining and managing an OpenGL context, leaving this as a detail of the underlying windowing system. For the same reason, OpenGL is purely concerned with rendering, providing no APIs related to input, audio, or windowing.
Development
OpenGL is no longer in active development. Between 2001 and 2014, the OpenGL specification was updated mostly on a yearly basis, with two releases (3.1 and 3.2) taking place in 2009 and three (3.3, 4.0 and 4.1) in 2010. The latest OpenGL specification, 4.6, was released in 2017 after a three-year break, and was limited to the inclusion of eleven existing ARB and EXT extensions into the core profile.
Active development of OpenGL was dropped in favor of the Vulkan API, which was released in 2016 and codenamed glNext during initial development. In 2017, the Khronos Group announced that OpenGL ES would not have new versions, and it has since concentrated on the development of Vulkan and other technologies. As a result, certain capabilities offered by modern GPUs, e.g. ray tracing, are not supported by the OpenGL standard. However, support for newer features might be provided through vendor-specific OpenGL extensions.
New versions of the OpenGL specifications are released by the Khronos Group, each of which extends the API to support various new features. The details of each version are decided by consensus between the Group's members, including graphics card manufacturers, operating system designers, and general technology companies such as Mozilla and Google.
In addition to the features required by the core API, graphics processing unit (GPU) vendors may provide additional functionality in the form of extensions. Extensions may introduce new functions and new constants, and may relax or remove restrictions on existing OpenGL functions. Vendors can use extensions to expose custom APIs without needing support from other vendors or the Khronos Group as a whole, which greatly increases the flexibility of OpenGL. All extensions are collected in, and defined by, the OpenGL Registry.
Each extension is associated with a short identifier, based on the name of the company which developed it. For example, Nvidia's identifier is NV, which is part of the extension name GL_NV_half_float, the constant GL_HALF_FLOAT_NV, and the function glVertex2hNV(). If multiple vendors agree to implement the same functionality using the same API, a shared extension may be released, using the identifier EXT. In such cases, it could also happen that the Khronos Group's Architecture Review Board gives the extension their explicit approval, in which case the identifier ARB is used.
The features introduced by each new version of OpenGL are typically formed from the combined features of several widely implemented extensions, especially extensions of type ARB or EXT.
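The extension mechanism can be seen from the application side in a few lines. The following sketch is an illustration: it assumes a PyOpenGL installation and an OpenGL 3.0+ context that has already been created and made current, and the extension names queried are merely examples.

```python
from OpenGL.GL import (glGetIntegerv, glGetStringi,
                       GL_NUM_EXTENSIONS, GL_EXTENSIONS)

def supported_extensions():
    """Return the set of extension names exposed by the current context."""
    count = glGetIntegerv(GL_NUM_EXTENSIONS)
    return {glGetStringi(GL_EXTENSIONS, i).decode() for i in range(count)}

exts = supported_extensions()
# Vendor (NV), multi-vendor (EXT) and ARB-approved extensions all appear
# in the same list, distinguished only by their identifier prefix.
print("GL_ARB_direct_state_access" in exts)
print(sorted(e for e in exts if e.startswith("GL_NV_"))[:5])
```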
Documentation
The OpenGL Architecture Review Board released a series of manuals along with the specification which have been updated to track changes in the API. These are commonly referred to by the colors of their covers:
The Red Book
OpenGL Programming Guide, 9th Edition.
The Official Guide to Learning OpenGL, Version 4.5 with SPIR-V
The Orange Book
OpenGL Shading Language, 3rd edition.
A tutorial and reference book for GLSL.
Historic books (pre-OpenGL 2.0):
The Green Book
OpenGL Programming for the X Window System.
A book about X11 interfacing and OpenGL Utility Toolkit (GLUT).
The Blue Book
OpenGL Reference manual, 4th edition.
Essentially a hard-copy printout of the Unix manual (man) pages for OpenGL.
Includes a poster-sized fold-out diagram showing the structure of an idealised OpenGL implementation.
The Alpha Book (white cover)
OpenGL Programming for Windows 95 and Windows NT.
A book about interfacing OpenGL with Microsoft Windows.
OpenGL's documentation is also accessible via its official webpage.
Associated libraries
The earliest versions of OpenGL were released with a companion library called the OpenGL Utility Library (GLU). It provided simple, useful features which were unlikely to be supported in contemporary hardware, such as tessellating, and generating mipmaps and primitive shapes. The GLU specification was last updated in 1998 and depends on OpenGL features which are now deprecated.
Context and window toolkits
Given that creating an OpenGL context is quite a complex process, and given that it varies between operating systems, automatic OpenGL context creation has become a common feature of several game-development and user-interface libraries, including SDL, Allegro, SFML, FLTK, and Qt. A few libraries have been designed solely to produce an OpenGL-capable window. The first such library was OpenGL Utility Toolkit (GLUT), later superseded by freeglut. GLFW is a newer alternative.
These toolkits are designed to create and manage OpenGL windows, and manage input, but little beyond that.
GLFW – A cross-platform windowing and keyboard-mouse-joystick handler; is more game-oriented
freeglut – A cross-platform windowing and keyboard-mouse handler; its API is a superset of the GLUT API, and it is more stable and up to date than GLUT
OpenGL Utility Toolkit (GLUT) – An old windowing handler, no longer maintained.
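As an illustration of what such toolkits automate, the sketch below assumes the third-party glfw Python bindings together with PyOpenGL (any of the toolkits above could play the same role). It creates an OpenGL-capable window, makes its context current, and runs a minimal event loop.

```python
import glfw
from OpenGL.GL import glClear, glClearColor, GL_COLOR_BUFFER_BIT

def main():
    if not glfw.init():
        raise RuntimeError("GLFW initialisation failed")
    # GLFW performs the platform-specific context creation described above.
    window = glfw.create_window(640, 480, "OpenGL via GLFW", None, None)
    if not window:
        glfw.terminate()
        raise RuntimeError("window or context creation failed")
    glfw.make_context_current(window)
    glClearColor(0.2, 0.3, 0.3, 1.0)
    while not glfw.window_should_close(window):
        glClear(GL_COLOR_BUFFER_BIT)
        glfw.swap_buffers(window)
        glfw.poll_events()
    glfw.terminate()

if __name__ == "__main__":
    main()
```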
Several "multimedia libraries" can create OpenGL windows, in addition to input, sound and other tasks useful for game-like applications
Allegro 5 – A cross-platform multimedia library with a C API focused on game development
Simple DirectMedia Layer (SDL) – A cross-platform multimedia library with a C API
SFML – A cross-platform multimedia library with a C++ API and multiple other bindings to languages such as C#, Java, Haskell, and Go
Widget toolkits
FLTK – A small cross-platform C++ widget library
Qt – A cross-platform C++ widget toolkit. It provides many OpenGL helper objects, which even abstract away the difference between desktop GL and OpenGL ES
wxWidgets – A cross-platform C++ widget toolkit
Extension loading libraries
Given the high workload involved in identifying and loading OpenGL extensions, a few libraries have been designed which load all available extensions and functions automatically. Examples include OpenGL Easy Extension library (GLEE), OpenGL Extension Wrangler Library (GLEW) and glbinding. Extensions are also loaded automatically by most language bindings, such as Java OpenGL, PyOpenGL and WebGL.
Implementations
Mesa 3D is an open-source implementation of OpenGL. It can do pure software rendering, and it may also use hardware acceleration on BSD, Linux, and other platforms by taking advantage of the Direct Rendering Infrastructure. As of version 20.0, it implements version 4.6 of the OpenGL standard.
History
In the 1980s, developing software that could function with a wide range of graphics hardware was a challenge without a cross-platform library. Software developers wrote custom interfaces and drivers for each piece of hardware. This was expensive and resulted in multiplication of effort.
By the early 1990s, Silicon Graphics (SGI) was a leader in 3D graphics for workstations. Their IRIS GL API became the industry standard, as IRIS GL was considered easier to use, and it supported immediate mode rendering, therefore being faster than competitors like PHIGS.
SGI's competitors (including Sun Microsystems, Hewlett-Packard and IBM) were also able to bring to market 3D hardware supported by extensions made to the PHIGS standard, which pressured SGI to open source a version of IRIS GL as a public standard called OpenGL.
However, SGI had many customers for whom the change from IRIS GL to OpenGL would demand significant investment. Moreover, IRIS GL had API functions that were irrelevant to 3D graphics. For example, it included a windowing, keyboard and mouse API, in part because it was developed before the X Window System and Sun's NeWS. IRIS GL libraries also were unsuitable for opening due to licensing and patent issues. These factors required SGI to continue to support the advanced and proprietary Iris Inventor and Iris Performer programming APIs while market support for OpenGL matured.
One of the restrictions of IRIS GL was that it only provided access to features supported by the underlying hardware. If the graphics hardware did not support a feature natively, then the application could not use it. OpenGL overcame this problem by providing software implementations of features unsupported by hardware, allowing applications to use advanced graphics on relatively low-powered systems. OpenGL standardized access to hardware, pushed the development responsibility of hardware interface programs (device drivers) to hardware manufacturers, and delegated windowing functions to the underlying operating system. With so many different kinds of graphics hardware, getting them all to speak the same language in this way had a remarkable impact by giving software developers a higher-level platform for 3D-software development.
In 1992, SGI led the creation of the OpenGL Architecture Review Board (OpenGL ARB), the group of companies that would maintain and expand the OpenGL specification in the future. Two years later, they also played with the idea of releasing something called "OpenGL++" which included elements such as a scene-graph API (presumably based on their Performer technology). The specification was circulated among a few interested parties – but never turned into a product.
Released in 1996, Microsoft's Direct3D eventually became the main competitor of OpenGL. Over 50 game developers signed an open letter to Microsoft, released on June 12, 1997, calling on the company to actively support OpenGL. On December 17, 1997, Microsoft and SGI initiated the Fahrenheit project, which was a joint effort with the goal of unifying the OpenGL and Direct3D interfaces (and adding a scene-graph API too). In 1998, Hewlett-Packard joined the project. It initially showed some promise of bringing order to the world of interactive 3D computer graphics APIs, but on account of financial constraints at SGI, strategic reasons at Microsoft, and a general lack of industry support, it was abandoned in 1999.
In July 2006, the OpenGL Architecture Review Board voted to transfer control of the OpenGL API standard to the Khronos Group.
Industry support
Despite the emergence of newer graphics APIs like its successor Vulkan or Metal, OpenGL continues to be a widely used standard. This continued relevance is supported by several factors: ongoing development with new extensions and driver optimizations, its cross-platform compatibility, and the availability of compatibility layers like ANGLE and Zink. These layers allow OpenGL to run efficiently on top of Vulkan and Metal, offering a pathway for continued use or gradual transitions for developers.
However, the graphics API landscape has been shifting, and some companies are moving away from OpenGL. In June 2018, Apple deprecated the OpenGL APIs on all of its platforms (iOS, macOS and tvOS), strongly encouraging developers to use its proprietary Metal API, which was introduced in 2014.
Game developers have also begun to adopt newer APIs. id Software, which had used OpenGL since the late 1990s in games such as GLQuake and several titles in the Doom franchise, moved to OpenGL's successor Vulkan with its id Tech 7 engine; it first supported Vulkan in a 2016 update to the id Tech 6 engine. The company's first licensed use of OpenGL was in its Quake II engine, also known as id Tech 2. In March 2023, Valve removed OpenGL support from Dota 2 in favor of Vulkan. Atypical Games, with support from Samsung, updated its game engine to use Vulkan, rather than OpenGL, across all non-Apple platforms.
The Khronos Group, the consortium responsible for OpenGL's development, has stopped providing support for OpenGL. The standard has not received a number of modern graphics technologies, such as ray tracing, on-GPU video decoding, or deep-learning-based anti-aliasing algorithms such as Nvidia DLSS and AMD FSR.
Google's Fuchsia OS, while using Vulkan natively and requiring a Vulkan-conformant GPU, still intends to support OpenGL on top of Vulkan via the ANGLE translation layer.
Version history
The first version of OpenGL, version 1.0, was released on June 30, 1992, by Mark Segal and Kurt Akeley. Since then, OpenGL has occasionally been extended by releasing a new version of the specification. Such releases define a baseline set of features which all conforming graphics cards must support, and against which new extensions can more easily be written. Each new version of OpenGL tends to incorporate several extensions which have widespread support among graphics-card vendors, although the details of those extensions may be changed.
OpenGL 2.0
Release date: September 7, 2004
OpenGL 2.0 was originally conceived by 3Dlabs to address concerns that OpenGL was stagnating and lacked a strong direction. 3Dlabs proposed a number of major additions to the standard. Most of these were, at the time, rejected by the ARB or otherwise never came to fruition in the form that 3Dlabs proposed. However, their proposal for a C-style shading language was eventually completed, resulting in the current formulation of the OpenGL Shading Language (GLSL or GLslang). Like the assembly-like shading languages it was replacing, it allowed replacing the fixed-function vertex and fragment pipe with shaders, though this time written in a C-like high-level language.
The design of GLSL was notable for making relatively few concessions to the limits of the hardware then available. This harked back to the earlier tradition of OpenGL setting an ambitious, forward-looking target for 3D accelerators rather than merely tracking the state of currently available hardware. The final OpenGL 2.0 specification includes support for GLSL.
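A minimal sketch shows the shape of the GLSL workflow the specification added. The fragment below is illustrative: it assumes PyOpenGL and a current context supporting OpenGL 2.0, and the shaders use the legacy built-ins available in GLSL 1.10.

```python
from OpenGL.GL import (
    glCreateShader, glShaderSource, glCompileShader, glGetShaderiv,
    glGetShaderInfoLog, glCreateProgram, glAttachShader, glLinkProgram,
    GL_VERTEX_SHADER, GL_FRAGMENT_SHADER, GL_COMPILE_STATUS)

VERTEX_SRC = """
#version 110
void main() {
    // Replaces fixed-function vertex transformation.
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
"""

FRAGMENT_SRC = """
#version 110
void main() {
    // Replaces fixed-function fragment colouring.
    gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0);
}
"""

def compile_shader(source, shader_type):
    shader = glCreateShader(shader_type)
    glShaderSource(shader, source)
    glCompileShader(shader)
    if not glGetShaderiv(shader, GL_COMPILE_STATUS):
        raise RuntimeError(glGetShaderInfoLog(shader).decode())
    return shader

program = glCreateProgram()
glAttachShader(program, compile_shader(VERTEX_SRC, GL_VERTEX_SHADER))
glAttachShader(program, compile_shader(FRAGMENT_SRC, GL_FRAGMENT_SHADER))
glLinkProgram(program)
```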
Longs Peak and OpenGL 3.0
Before the release of OpenGL 3.0, the new revision had the codename Longs Peak. At the time of its original announcement, Longs Peak was presented as the first major API revision in OpenGL's lifetime. It consisted of an overhaul to the way that OpenGL works, calling for fundamental changes to the API.
The draft introduced a change to object management. The GL 2.1 object model was built upon the state-based design of OpenGL. That is, to modify an object or to use it, one needs to bind the object to the state system, then make modifications to the state or perform function calls that use the bound object.
Because of OpenGL's use of a state system, objects must be mutable. That is, the basic structure of an object can change at any time, even if the rendering pipeline is asynchronously using that object. A texture object can be redefined from 2D to 3D. This requires any OpenGL implementations to add a degree of complexity to internal object management.
Under the Longs Peak API, object creation would become atomic, using templates to define the properties of an object which would be created with one function call. The object could then be used immediately across multiple threads. Objects would also be immutable; however, they could have their contents changed and updated. For example, a texture could change its image, but its size and format could not be changed.
To support backwards compatibility, the old state based API would still be available, but no new functionality would be exposed via the old API in later versions of OpenGL. This would have allowed legacy code bases, such as the majority of CAD products, to continue to run while other software could be written against or ported to the new API.
Longs Peak was initially due to be finalized in September 2007 under the name OpenGL 3.0, but the Khronos Group announced on October 30 that it had run into several issues that it wished to address before releasing the specification. As a result, the spec was delayed, and the Khronos Group went into a media blackout until the release of the final OpenGL 3.0 spec.
The final specification proved far less revolutionary than the Longs Peak proposal. Instead of removing all immediate mode and fixed functionality (non-shader mode), the spec included them as deprecated features. The proposed object model was not included, and no plans have been announced to include it in any future revisions. As a result, the API remained largely the same with a few existing extensions being promoted to core functionality. Among some developer groups this decision caused something of an uproar, with many developers professing that they would switch to DirectX in protest. Most complaints revolved around the lack of communication by Khronos to the development community and multiple features being discarded that were viewed favorably by many. Other frustrations included the requirement of DirectX 10 level hardware to use OpenGL 3.0 and the absence of geometry shaders and instanced rendering as core features.
Other sources reported that the community reaction was not quite as severe as originally presented, with many vendors showing support for the update.
OpenGL 3.0
Release date: August 11, 2008
OpenGL 3.0 introduced a deprecation mechanism to simplify future revisions of the API. Certain features, marked as deprecated, could be completely disabled by requesting a forward-compatible context from the windowing system. OpenGL 3.0 features could still be accessed alongside these deprecated features, however, by requesting a full context.
Deprecated features include:
All fixed-function vertex and fragment processing
Direct-mode rendering, using glBegin and glEnd
Display lists
Indexed-color rendering targets
OpenGL Shading Language versions 1.10 and 1.20
OpenGL 3.1
Release date: March 24, 2009
OpenGL 3.1 fully removed all of the features which were deprecated in version 3.0, with the exception of wide lines. From this version onwards, it is not possible to access new features using a full context, or to access deprecated features using a forward-compatible context. An exception to the former rule is made if the implementation supports the ARB_compatibility extension, but this is not guaranteed.
Hardware support: Mesa supports ARM Panfrost with Version 21.0.
OpenGL 3.2
Release date: August 3, 2009
OpenGL 3.2 further built on the deprecation mechanisms introduced by OpenGL 3.0, by dividing the specification into a core profile and compatibility profile. Compatibility contexts include the previously removed fixed-function APIs, equivalent to the ARB_compatibility extension released alongside OpenGL 3.1, while core contexts do not. OpenGL 3.2 also included an upgrade to GLSL version 1.50.
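Which profile an application receives is chosen at context-creation time. The sketch below uses the glfw Python bindings (an assumption carried over from the earlier windowing example; equivalent hints exist in the C API) to request a 3.2 core-profile context:

```python
import glfw

glfw.init()
# Request a 3.2 context that excludes the deprecated fixed-function APIs.
glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3)
glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 2)
glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE)
# A forward-compatible context additionally disables everything marked
# deprecated (required for core contexts on macOS).
glfw.window_hint(glfw.OPENGL_FORWARD_COMPAT, True)
window = glfw.create_window(640, 480, "Core profile", None, None)
```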
OpenGL 3.3
Release date: March 11, 2010
Hardware support: Mesa supports OpenGL 3.3 via the software drivers SWR and softpipe, and for older Nvidia cards via the NV50 driver.
OpenGL 4.0
Release date: March 11, 2010
OpenGL 4.0 was released alongside version 3.3. It was designed for hardware able to support Direct3D 11.
As in OpenGL 3.0, this version of OpenGL contains a high number of fairly inconsequential extensions, designed to thoroughly expose the abilities of Direct3D 11-class hardware. Only the most influential extensions are listed below.
Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Ivy Bridge processors and newer.
OpenGL 4.1
Release date: July 26, 2010
Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Ivy Bridge processors and newer.
Minimum "maximum texture size" is 16,384 × 16,384 for GPUs implementing this specification.
OpenGL 4.2
Release date: August 8, 2011
Support for shaders with atomic counters and load-store-atomic read-modify-write operations to one level of a texture
Drawing multiple instances of data captured from GPU vertex processing (including tessellation), to enable complex objects to be efficiently repositioned and replicated
Support for modifying an arbitrary subset of a compressed texture, without having to re-download the whole texture to the GPU for significant performance improvements
Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), and Intel HD Graphics in Intel Haswell processors and newer. (Linux Mesa: Ivy Bridge and newer)
OpenGL 4.3
Release date: August 6, 2012
Compute shaders leveraging GPU parallelism within the context of the graphics pipeline
Shader storage buffer objects, allowing shaders to read and write buffer objects like image load/store from 4.2, but through the language rather than function calls.
Image format parameter queries
ETC2/EAC texture compression as a standard feature
Full compatibility with OpenGL ES 3.0 APIs
Debug abilities to receive debugging messages during application development
Texture views to interpret textures in different ways without data replication
Increased memory security and multi-application robustness
Hardware support: AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Haswell processors and newer. (Linux Mesa: Ivy Bridge without stencil texturing, Haswell and newer), Nvidia GeForce 400 series and newer. VIRGL Emulation for virtual machines supports 4.3+ with Mesa 20.
OpenGL 4.4
Release date: July 22, 2013
Enforced buffer object usage controls
Asynchronous queries into buffer objects
Expression of more layout controls of interface variables in shaders
Efficient binding of multiple objects simultaneously
Hardware support: AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Broadwell processors and newer (Linux Mesa: Haswell and newer), Nvidia GeForce 400 series and newer, Tegra K1.
OpenGL 4.5
Release date: August 11, 2014
Direct State Access (DSA) – object accessors enable state to be queried and modified without binding objects to contexts, for increased application and middleware efficiency and flexibility (a short sketch follows below).
Flush Control – applications can control flushing of pending commands before context switching – enabling high-performance multithreaded applications;
Robustness – providing a secure platform for applications such as WebGL browsers, including preventing a GPU reset affecting any other running applications;
OpenGL ES 3.1 API and shader compatibility – to enable the easy development and execution of the latest OpenGL ES applications on desktop systems.
Hardware support: AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Broadwell processors and newer (Linux Mesa: Haswell and newer), Nvidia GeForce 400 series and newer, Tegra K1, and Tegra X1.
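The practical difference made by Direct State Access is visible in a short comparison. The sketch below is illustrative: it assumes PyOpenGL and a current OpenGL 4.5 context, and the exact return type of glCreateTextures may vary between PyOpenGL versions.

```python
from OpenGL.GL import (
    glGenTextures, glBindTexture, glTexParameteri,
    glCreateTextures, glTextureParameteri,
    GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)

# Pre-4.5 bind-to-edit style: the object must occupy a context bind
# point before its state can be changed.
tex = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, tex)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)

# OpenGL 4.5 DSA style: the object is addressed directly by name,
# with no binding required.
tex_dsa = int(glCreateTextures(GL_TEXTURE_2D, 1))  # may return a length-1 array
glTextureParameteri(tex_dsa, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
```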
OpenGL 4.6
Release date: July 31, 2017
more efficient, GPU-sided, geometry processing
more efficient shader execution
more information through statistics, overflow query and counters
higher performance through no error handling contexts
clamping of polygon offset function, solves a shadow rendering problem
SPIR-V shaders
Improved anisotropic filtering
Hardware support: AMD Radeon HD 7000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel Haswell and newer, Nvidia GeForce 400 series and newer.
Driver support:
Mesa 19.2 on Linux supports OpenGL 4.6 for Intel Broadwell and newer. Mesa 20.0 supports AMD Radeon GPUs, while support for Nvidia Kepler+ is in progress. The Zink emulation driver supports OpenGL 4.6 as of Mesa 21.1, and the LLVMpipe software driver as of Mesa 21.0.
AMD Adrenalin 18.4.1 Graphics Driver on Windows 7 SP1, 10 version 1803 (April 2018 update) for AMD Radeon HD 7700+, HD 8500+ and newer. Released April 2018.
Intel 26.20.100.6861 graphics driver on Windows 10. Released May 2019.
NVIDIA GeForce 397.31 Graphics Driver on Windows 7, 8, 10 x86-64 bit only, no 32-bit support. Released April 2018
Alternative implementations
Apple deprecated OpenGL in iOS 12 and macOS 10.14 Mojave in favor of Metal, but it is still available as of macOS 14 Sonoma (including on Apple silicon devices). The latest OpenGL version supported is 4.1, from 2011. MoltenGL, a proprietary library from Molten (the authors of MoltenVK), can translate OpenGL calls to Metal.
There are several projects that attempt to implement OpenGL on top of Vulkan. The Vulkan backend for Google's ANGLE achieved OpenGL ES 3.1 conformance in July 2020. The Mesa3D project also includes such a driver, called Zink.
Microsoft's Windows 11 on Arm added support for OpenGL 3.3 via GLon12, an open-source OpenGL implementation on top of DirectX 12 using Mesa Gallium.
Vulkan
Vulkan, formerly named the "Next Generation OpenGL Initiative" (glNext), is a ground-up redesign effort to unify OpenGL and OpenGL ES into one common API that will not be backwards compatible with existing OpenGL versions.
The initial version of Vulkan API was released on February 16, 2016.
See also
ARB assembly language – OpenGL's legacy low-level shading language
Direct3D – main competitor of OpenGL
Glide (API) – a graphics API once used on 3dfx Voodoo cards
Metal (API) – a graphics API for iOS, macOS, tvOS, watchOS
OpenAL – cross-platform audio library, designed to resemble OpenGL
OpenGL ES – OpenGL for embedded systems
OpenSL ES – API for audio on embedded systems, developed by the Khronos Group
OpenVG – API for accelerated 2D graphics, developed by the Khronos Group
RenderMan Interface Specification (RISpec) – Pixar's open API for photorealistic off-line rendering
VOGL – a debugger for OpenGL
Vulkan – low-overhead, cross-platform 2D and 3D graphics API, the "next generation OpenGL initiative"
Graphics pipeline
WebGL
WebGPU
Notes
References
Further reading
External links
OpenGL Overview and OpenGL.org's Wiki with more information on OpenGL Language bindings
SGI's OpenGL website
Khronos Group, Inc.
1992 software
3D graphics APIs
Application programming interfaces
Cross-platform software
Graphics libraries
Graphics standards
Video game development
Video game development software
Virtual reality
Augmented reality
Mixed reality
Metaverse | OpenGL | Technology | 6,042 |
10,353,862 | https://en.wikipedia.org/wiki/Agaricus%20augustus | Agaricus augustus, known commonly as the prince, is a basidiomycete fungus of the genus Agaricus.
Taxonomy
According to Heinemann's (1978) popular division of Agaricus, A. augustus belongs to section Arvenses. The system proposed by Wasser (2002) classifies A. augustus within subgenus Flavoagaricus, section Majores, subsection Flavescentes. Moreover, there have been attempts to recognise distinct varieties, namely A. augustus var. augustus Fr., and A. augustus var. perrarus (Schulzer) Bon & Cappelli. The specific epithet augustus is a Latin adjective meaning noble.
Description
The fruiting bodies of Agaricus augustus are large and distinctive agarics. The cap shape is hemispherical during the so-called button stage, and then expands, becoming convex and finally flat, with a diameter from . The cap cuticle is dry, and densely covered with concentrically arranged, brown-coloured scales on a white to yellow background.
The flesh is thick, firm and white and may discolour yellow when bruised. The gills are crowded and pallid at first, and turn pink then dark brown with maturity. The gills are not attached to the stem—they are free. Immature specimens bear a delicate white partial veil with darker-coloured warts, extending from the stem to the cap periphery.
The stem is clavate and tall, and thick. In mature specimens, the partial veil is torn and left behind as a pendulous ring adorning the stem. Above the ring, the stem is white to yellow and smooth. Below, it is covered with numerous small scales. Its flesh is thick, white and sometimes has a narrow central hollow. The stem base extends deeply into the substrate.
The mushroom's odour is strong and sweet, similar to almond extract, marzipan or maraschino cherry, due to the presence of benzaldehyde and benzyl alcohol. Its taste has been described as not distinctive.
Under a microscope, the ellipsoid-shaped spores are seen characteristically large at 7–10 by 4.5–6.5 μm. The basidia are 4-spored. The spore mass is coloured chocolate-brown.
A species initially reported from North America, A. subrufescens closely resembles A. augustus in appearance. However, A. subrufescens produces smaller spores, sized 6–7.5 by 4–5 μm.
Identification
Agaricus augustus shows a red positive Schaeffer's test reaction. The cap cuticle turns yellow when a 10% potassium hydroxide solution is applied.
Toxic lookalikes include Amanita species, which stain yellow when bruised or emit a bad odor. Another similar-looking toxic species is Agaricus moelleri.
Distribution and habitat
Agaricus augustus has a widespread distribution, occurring throughout Europe, North America, North Africa and Asia. This mushroom is found in deciduous and coniferous woods and in gardens and by roadside verges. The fungus is saprotrophic and terrestrial—it acquires nutrients from decaying dead organic matter and its fruiting bodies occur on humus-rich soil. The species seems adapted to thriving near human activity, for it also emerges from disturbed ground. In Europe, A. augustus fruits in late summer and autumn.
Edibility
This mushroom is a choice edible, and is collected widely for consumption in Eurasia, the United States, Canada and some parts of Mexico. A. augustus has been implicated in specifically bioaccumulating the metal cadmium, with a quantity of 2.44 mg per kilogram of fresh weight as recorded in one Swiss study. The same phenomenon is true for other edible species of Agaricus, namely A. arvensis, A. macrosporus and A. silvicola, though quantities may vary greatly depending on species, which part of the fruiting body is analysed, and the level of contamination of the substrate. Specimens collected near metal smelters and urban areas have a higher cadmium content. The hymenium contains the highest concentration of metal, followed by the rest of the cap, while the lower part of the stem contains the least.
See also
List of Agaricus species
References
External links
Tom Volk's fungus of the month - Agaricus augustus
Electronic Atlas of the Plants of British Columbia - Agaricus augustus
Mykoweb - Agaricus augustus
augustus
Edible fungi
Fungi of Europe
Fungi of North America
Fungi described in 1838
Taxa named by Elias Magnus Fries
Fungus species | Agaricus augustus | Biology | 942 |
54,724,614 | https://en.wikipedia.org/wiki/Watching-eye%20effect | The watching-eye effect says that people behave more altruistically and exhibit less antisocial behavior in the presence of images that depict eyes, because these images insinuate that they are being watched. Eyes are strong signals of perception for humans. They signify that our actions are being seen and paid attention to even through mere depictions of eyes.
It has been demonstrated that these effects are so pronounced that even depictions of eyes are enough to trigger them. This means that people need not actually be watched, but that a simple photograph of eyes is enough to elicit feelings that individuals are being watched which can impact their behavior to be more pro-social and less antisocial. Empirical psychological research has continually shown that the visible presence of images depicting eyes nudges people towards slightly, but measurably more honest and more pro-social behavior.
The concept is part of the psychology of surveillance and has implications for the areas of crime reduction and prevention without increasing actual surveillance, just by psychological measures alone. By simply inserting signs depicting eyes and leading others to believe they are being watched, crime can be reduced, as it leads to behavior that is more socially acceptable.
Similar effects
The effect differs from the psychic staring effect: the latter describes the conscious feeling of being watched, whereas the watching-eye effect usually influences behaviour at a subconscious level.
Evidence of effects on behavior
Effects on pro-social behavior
There is evidence that the presence of images of eyes causes people to behave pro-socially. Pro-social behavior is acting in a way, or with an intent, that benefits others. Two forms of motivation support this. The first is negative motivation: people want to avoid behavior that is wrong and violates norms, in order to keep up a positive social image, or to be seen improving their image rather than worsening it. The second is positive motivation to gain a reward or future benefits: participants believed that, under watching eyes, if they behaved in a positive manner that benefited others, they were likely to be paid back for it in the future.
Pro-social experiments
Certain studies have shown that under the influence of eye images people behave more honestly. In control groups with no images of eyes present, people were more likely to behave antisocially and to lie for the benefit of others. In these situations people lean toward honesty rather than acting generously in order to keep a good image and avoid violating norms; honesty is often chosen since it is seen as the most pro-social behavior.
There are further studies showing that pro-social behavior is more likely under watchful eyes. People were more likely to share things such as money in economic games when presented with images of eyes. People were also shown to be more likely to pick up trash at bus stops and to clean up after themselves in a cafeteria; they were less likely to commit bicycle theft; and they were much more likely to pay the full amount for their coffee on days when images of eyes were put up nearby.
In an experiment on littering funded by the School of Psychology at Newcastle University, it was found that places that already had trash on the ground tended to see increased littering, showing that people tend to behave in ways that seem socially acceptable. Likewise, it was discovered that images of eyes that suggested watching reduced littering; however, the reduction was mainly present when larger groups of people were also around. The findings of this study added to the idea that watching eyes reduce antisocial behavior and lead people to behave more pro-socially.
Donation experiment
In situations where images of eyes were present, people were also more likely to be generous and to give larger donations. One study testing this was done at the University of Virginia by Caroline Kelsey. The study took place at a children's museum with a donation box at the front desk. Data was collected from this setting for 28 weeks, covering more than 34,100 people who visited over that span. Each week the sign over the box, which usually read "Donations would be appreciated", was changed to show primarily images of eyes or control images such as chairs or noses, accompanied by some wording. Throughout each week the number of people who visited the museum was recorded along with the total amount of donations made. By the end of the study it was found that patrons donated more when the signs showed eyes rather than the other images.
More on studies
Other studies related to the watching-eye effect show that people are more cooperative and self-aware when their identity is exposed than when they act anonymously. They act more respectfully and appropriately because their reputation is at risk when they are, or feel they are, being watched by others. Even in some studies that assured participants that their actions were anonymous, participants were still more generous, because the eyes made them feel identified.
Some studies argue that it may not be the eyes themselves that give people an incentive to be more generous, but rather the number of people around them, which creates peer pressure to conform to more pro-social behavior.
See also
Decision-making
Evil eye
Eye contact
Fake security camera
Gaze
Hawthorne effect
Security theater
Situation awareness
Subject-expectancy effect
References
"No effect of ‘watching eyes’: An attempted replication and extension investigating individual differences", Rotella et al 2021
Psychological effects
Cognition
Cognitive biases
Cognitive psychology | Watching-eye effect | Biology | 1,112 |
3,202,164 | https://en.wikipedia.org/wiki/Orion%20Electric | was a Japanese consumer electronics company that was established in 1958 in Osaka, Japan. Their devices were branded as "Orion".
History
Orion Co., Ltd. was founded as Orion Electric Co., Ltd. in 1958 in Osaka, Japan, by Shigemasa Otake. The company initially produced transistor radios, audiocassette recorders, and CB radio transceivers. Later audio products included 8-track players, car stereos, and home stereo systems.
From 1984 until their acquisition, their headquarters were based in Echizen, Fukui, Japan. Before their acquisition, they were one of the world's largest OEM television and video equipment manufacturers, primarily supplying major-brand OEM customers, with Toshiba being their major customer in the 2000s. Orion produced around six million televisions and twelve million DVD player and TV combo units each year until 2019. Most of their products were manufactured in Thailand.
The Orion Group employed in excess of 9,000 workers. They had factories and offices in Japan, Thailand, Poland, the United Kingdom, and the United States. Orion's flagship factories in Thailand were among Thailand's top exporters, and they were recognized with an award from the Thai Government for their contribution.
Orion manufactured products primarily for Emerson, Memorex, Hitachi, JVC, and Sansui. In the North American market, Orion manufactured many televisions and VCRs for Emerson Radio during the 1980s and 1990s, but when Emerson Radio filed for bankruptcy in 2000, rights to the Emerson brand were sold to Orion's primary competitor, Funai, for use in home video equipment. During the 1990s, Orion and another of their brand names, World, were exclusively sold by Wal-Mart. The products sold consisted of discounted televisions, TV/VCR combos, and VHS players. In 2001, at its peak, Orion partnered with Toshiba and Sumitomo to manufacture smaller CRT and LCD televisions, combo televisions, and DVD/VCR combos in Indonesia for the North American market, continuing until 2009. After Toshiba exited, Orion's production numbers dropped by more than 90%, the company ran into financial trouble, and most workers were laid off after 2010.
In 2011, Orion licensed the JVC name for televisions. Until 2019, all JVC televisions were designed, produced, and supported by Orion. Orion also manufactured OEM televisions for Hitachi. Most of these TVs were sold at Wal-Mart and Sam's Club stores. Orion also operated Orion Sales, headquartered in Olney, Illinois, for the North American market, using their privately-owned Sansui brand, and their recently licensed JVC television brand. Due to declining sales, Orion Sales ceased to exist in 2016, and was sold to Elitelux Technologies.
On March 31, 2015, Orion Electric Co., Ltd. reached insolvency and appointed provisional liquidators, owing to poor sales amid severe low-price competition worldwide. However, on April 1, 2015, a "new" Orion Electric Co., Ltd. was established and took over the previous Orion Electric business. On January 19, 2019, Orion Co., Ltd., a subsidiary of the Doshisha Corporation in Osaka, took over the Orion brands and businesses. On May 20, 2019, Orion Electric Co., Ltd. once again reached insolvency. Lacking continuing funds, Orion Electric Co., Ltd. ceased to exist, with all assets and holdings presumably owned by the Doshisha Corporation.
References
External links
Electronics companies established in 1958
Electronics companies disestablished in 2019
Display technology companies
Japanese companies established in 1958
Japanese companies disestablished in 2019
Electronics companies of Japan
Radio manufacturers | Orion Electric | Engineering | 768 |
56,091,345 | https://en.wikipedia.org/wiki/NGC%20513 | NGC 513, also occasionally referred to as PGC 5174 or UGC 953, is a spiral galaxy in the constellation Andromeda. It is located approximately 262 million light-years from the Solar System and was discovered on 13 September 1784 by astronomer William Herschel.
Observation history
Herschel discovered the object and simply noted "stellar"; the galaxy was therefore probably mistaken for a star. Herschel discovered this galaxy along with many other objects in a single observing session, using Beta Andromedae as a reference star. The position noted is essentially correct, off by only approximately 30" from UGC 953, thus the objects are generally viewed as equivalent. John Louis Emil Dreyer, creator of the New General Catalogue, described the galaxy as "faint, small, stellar", still indicating the misidentification of NGC 513 as a star.
One supernova has been observed in NGC 513: SN 2023jpp (type Ia, mag. 17.6).
Description
The galaxy has an apparent size of 0.9 × 0.6 arcmins and a recessional velocity of approximately 5807 kilometers per second. The redshift of 0.01956 allows an estimate of the galaxy's distance using Hubble's law, which puts the object at roughly 260 million light-years from the Sun.
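The arithmetic behind that estimate is a one-line application of Hubble's law, d = v / H0. The sketch below is illustrative only: it assumes a Hubble constant of roughly 73 km/s/Mpc, a commonly used local value, and the exact result shifts with the chosen H0.

```python
# Hubble's law: distance = recessional velocity / Hubble constant.
v_kms = 5807            # recessional velocity in km/s (from the text)
H0 = 73                 # assumed Hubble constant, km/s per megaparsec
d_mpc = v_kms / H0      # distance in megaparsecs, ~80 Mpc
d_mly = d_mpc * 3.2616  # 1 Mpc is about 3.26 million light-years
print(f"{d_mpc:.1f} Mpc = {d_mly:.0f} million light-years")  # ~259, matching the quoted ~260
```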
See also
List of NGC objects (1–1000)
References
External links
SEDS
Spiral galaxies
Andromeda (constellation)
0513
5174
0953
Astronomical objects discovered in 1784
Discoveries by William Herschel | NGC 513 | Astronomy | 318 |
71,649,659 | https://en.wikipedia.org/wiki/FARSIDE%20telescope | FARSIDE (Farside Array for Radio Science Investigations of the Dark Ages and Exoplanets) is a concept for a low-frequency interferometric array that would be placed on the farside of the Moon.
FarView
FarView is another concept, for a 20 × 20 km radio observatory with a total collecting area of 400 square kilometers, that would also be located on the Moon's far side.
References
Proposed telescopes
Telescopes | FARSIDE telescope | Astronomy | 86 |
20,335,837 | https://en.wikipedia.org/wiki/Graph%20dynamical%20system | In mathematics, the concept of graph dynamical systems can be used to capture a wide range of processes taking place on graphs or networks. A major theme in the mathematical and computational analysis of GDSs is to relate their structural properties (e.g. the network connectivity) and the global dynamics that result.
The work on GDSs considers finite graphs and finite state spaces. As such, the research typically involves techniques from, e.g., graph theory, combinatorics, algebra, and dynamical systems rather than differential geometry. In principle, one could define and study GDSs over an infinite graph (e.g. cellular automata or probabilistic cellular automata over $\mathbb{Z}^k$, or interacting particle systems when some randomness is included), as well as GDSs with an infinite state space (e.g. as in coupled map lattices); see, for example, Wu. In the following, everything is implicitly assumed to be finite unless stated otherwise.
Formal definition
A graph dynamical system is constructed from the following components:
A finite graph Y with vertex set v[Y] = {1,2, ... , n}. Depending on the context the graph can be directed or undirected.
A state $x_v$ for each vertex $v$ of $Y$, taken from a finite set $K$. The system state is the $n$-tuple $x = (x_1, x_2, \ldots, x_n)$, and $x[v]$ is the tuple consisting of the states associated to the vertices in the 1-neighborhood of $v$ in $Y$ (in some fixed order).
A vertex function $f_v$ for each vertex $v$. The vertex function maps the state of vertex $v$ at time $t$ to the vertex state at time $t+1$ based on the states associated to the 1-neighborhood of $v$ in $Y$.
An update scheme specifying the mechanism by which the mapping of individual vertex states is carried out so as to induce a discrete dynamical system with map $F \colon K^n \to K^n$.
The phase space associated to a dynamical system with map $F \colon K^n \to K^n$ is the finite directed graph with vertex set $K^n$ and directed edges $(x, F(x))$. The structure of the phase space is governed by the properties of the graph $Y$, the vertex functions $(f_i)_i$, and the update scheme. The research in this area seeks to infer phase space properties based on the structure of the system constituents. The analysis has a local-to-global character.
Generalized cellular automata (GCA)
If, for example, the update scheme consists of applying the vertex functions synchronously, one obtains the class of generalized cellular automata (GCA). In this case, the global map $F \colon K^n \to K^n$ is given by

$F(x) = \big(f_1(x[1]),\ f_2(x[2]),\ \ldots,\ f_n(x[n])\big).$

This class is referred to as generalized cellular automata since the classical or standard cellular automata are typically defined and studied over regular graphs or grids, and the vertex functions are typically assumed to be identical.
Example: Let $Y$ be the circle graph on vertices $\{1,2,3,4\}$ with edges $\{1,2\}$, $\{2,3\}$, $\{3,4\}$ and $\{1,4\}$, denoted $\mathrm{Circ}_4$. Let $K = \{0,1\}$ be the state space for each vertex and use the function $\mathrm{nor}_3 \colon K^3 \to K$ defined by $\mathrm{nor}_3(x,y,z) = (1+x)(1+y)(1+z)$ with arithmetic modulo 2 for all vertex functions. Then, for example, the system state $(0,1,0,0)$ is mapped to $(0,0,0,1)$ using a synchronous update. All the transitions are shown in the phase space below.
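This example is small enough to compute exhaustively. The following sketch is an illustration, not part of the source: vertices are 0-indexed and each closed 1-neighborhood is read in the fixed order used in the text.

```python
from itertools import product

# Circ4: the closed 1-neighborhood of each vertex (itself plus its
# two neighbors on the circle), read in a fixed order.
neighbors = {0: (0, 1, 3), 1: (0, 1, 2), 2: (1, 2, 3), 3: (0, 2, 3)}

def nor3(x, y, z):
    # nor3(x, y, z) = (1 + x)(1 + y)(1 + z) mod 2: equals 1 exactly
    # when all three inputs are 0.
    return ((1 + x) * (1 + y) * (1 + z)) % 2

def synchronous_step(state):
    """Apply every vertex function simultaneously (generalized CA)."""
    return tuple(nor3(*(state[u] for u in neighbors[v])) for v in range(4))

print(synchronous_step((0, 1, 0, 0)))  # -> (0, 0, 0, 1), as in the text

# Phase space: one directed edge x -> F(x) for each of the 16 states.
phase_space = {x: synchronous_step(x) for x in product((0, 1), repeat=4)}
```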
Sequential dynamical systems (SDS)
If the vertex functions are applied asynchronously in the sequence specified by a word w = (w₁, w₂, ..., wₘ) or a permutation π = (π₁, π₂, ..., πₙ) of v[Y], one obtains the class of sequential dynamical systems (SDS). In this case it is convenient to introduce the Y-local maps Fᵢ constructed from the vertex functions by Fᵢ(x) = (x₁, x₂, ..., xᵢ₋₁, fᵢ(x[i]), xᵢ₊₁, ..., xₙ).
The SDS map F = [F_Y, w] : Kⁿ → Kⁿ is the function composition F = F_{wₘ} ∘ F_{wₘ₋₁} ∘ ⋯ ∘ F_{w₂} ∘ F_{w₁}.
If the update sequence is a permutation one frequently speaks of a permutation SDS to emphasize this point.
Example: Let Y be the circle graph on vertices {1, 2, 3, 4} with edges {1,2}, {2,3}, {3,4} and {1,4}, denoted Circ₄. Let K = {0, 1} be the state space for each vertex and use the function nor₃ : K³ → K defined by nor₃(x, y, z) = (1 + x)(1 + y)(1 + z) with arithmetic modulo 2 for all vertex functions. Using the update sequence (1, 2, 3, 4), the system state (0, 1, 0, 0) is mapped to (0, 0, 1, 0). All the system state transitions for this sequential dynamical system are shown in the phase space below.
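The sequential case differs only in that the Y-local maps are applied one at a time, each seeing the updates already made. A sketch under the same conventions as the synchronous example above:

    # Sequential dynamical system on Circ4 with nor3 and update order (1, 2, 3, 4).
    neighbors = {1: (4, 1, 2), 2: (1, 2, 3), 3: (2, 3, 4), 4: (3, 4, 1)}

    def nor3(x, y, z):
        return ((1 + x) * (1 + y) * (1 + z)) % 2

    def sds_step(state, order):
        # Compose the Y-local maps F_w1, F_w2, ...; each updates a single coordinate.
        s = list(state)
        for v in order:
            s[v - 1] = nor3(*(s[u - 1] for u in neighbors[v]))
        return tuple(s)

    print(sds_step((0, 1, 0, 0), (1, 2, 3, 4)))  # (0, 0, 1, 0), matching the example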
Stochastic graph dynamical systems
From, e.g., the point of view of applications it is interesting to consider the case where one or more of the components of a GDS contains stochastic elements. Motivating applications could include processes that are not fully understood (e.g. dynamics within a cell) and where certain aspects for all practical purposes seem to behave according to some probability distribution. There are also applications governed by deterministic principles whose description is so complex or unwieldy that it makes sense to consider probabilistic approximations.
Every element of a graph dynamical system can be made stochastic in several ways. For example, in a sequential dynamical system the update sequence can be made stochastic. At each iteration step one may choose the update sequence w at random from a given distribution of update sequences with corresponding probabilities. The matching probability space of update sequences induces a probability space of SDS maps. A natural object to study in this regard is the Markov chain on state space induced by this collection of SDS maps. This case is referred to as update sequence stochastic GDS and is motivated by, e.g., processes where "events" occur at random according to certain rates (e.g. chemical reactions), synchronization in parallel computation/discrete event simulations, and in computational paradigms described later.
This specific example with stochastic update sequence illustrates two general facts for such systems: when passing to a stochastic graph dynamical system one is generally led to (1) a study of Markov chains (with specific structure governed by the constituents of the GDS), and (2) the resulting Markov chains tend to be large having an exponential number of states. A central goal in the study of stochastic GDS is to be able to derive reduced models.
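To make the exponential state-space growth concrete: even the four-vertex nor₃ example above has |K|⁴ = 16 states. The following sketch (the two update orders and their equal probabilities are illustrative assumptions, not from the literature) builds the induced Markov transition matrix by enumeration:

    import itertools

    neighbors = {1: (4, 1, 2), 2: (1, 2, 3), 3: (2, 3, 4), 4: (3, 4, 1)}

    def nor3(x, y, z):
        return ((1 + x) * (1 + y) * (1 + z)) % 2

    def sds_step(state, order):
        s = list(state)
        for v in order:
            s[v - 1] = nor3(*(s[u - 1] for u in neighbors[v]))
        return tuple(s)

    # Hypothetical distribution over update sequences.
    orders = {(1, 2, 3, 4): 0.5, (4, 3, 2, 1): 0.5}
    states = list(itertools.product((0, 1), repeat=4))
    index = {s: i for i, s in enumerate(states)}
    P = [[0.0] * len(states) for _ in states]
    for s in states:
        for order, prob in orders.items():
            P[index[s]][index[sds_step(s, order)]] += prob
    # Each row of P sums to 1: the Markov chain of the update sequence stochastic GDS.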
One may also consider the case where the vertex functions are stochastic, i.e., function stochastic GDS. For example, random Boolean networks are examples of function stochastic GDS using a synchronous update scheme and where the state space is K = {0, 1}. Finite probabilistic cellular automata (PCA) are another example of function stochastic GDS. In principle the class of interacting particle systems (IPS) covers finite and infinite PCA, but in practice the work on IPS is largely concerned with the infinite case, since this allows one to introduce more interesting topologies on state space.
Applications
Graph dynamical systems constitute a natural framework for capturing distributed systems such as biological networks and epidemics over social networks, many of which are frequently referred to as complex systems.
See also
Chemical reaction network theory
Dynamic network analysis (a social science topic)
Finite-state machine
Hopfield network
Petri net
References
Further reading
External links
Graph Dynamical Systems – A Mathematical Framework for Interaction-Based Systems, Their Analysis and Simulations by Henning Mortveit
Dynamical systems
Graph theory
Combinatorics | Graph dynamical system | Physics,Mathematics | 1,682 |
73,959,740 | https://en.wikipedia.org/wiki/Elio%20Morillo | Elio Morillo Baquerizo is an Ecuadorian-Boricua aerospace engineer at NASA's Jet Propulsion Laboratory in Pasadena, California. Morillo won the SHPE-EL Poder en Ti Scholarship while attending the University of Michigan, where he pursued his degree in mechanical engineering.
Morillo is known for his work on the Mars 2020 mission and its rover, Perseverance, the fifth rover sent to Mars after Sojourner, Spirit, Opportunity, and Curiosity. Mars 2020 was the first mission designed to collect rocks, small fragments, dust, and sand from the Martian surface.
Education
Morillo studied at the University of Michigan, where he obtained a degree in mechanical engineering. Motivated by his passion for space exploration, he continued his education at the same university and earned a master's degree specializing in space systems design. During his time at the University of Michigan he was awarded the SHPE-EL Poder en Ti Scholarship.
Career
Morillo began his NASA career in 2016, joining as a young engineer working on the Mars 2020 system testbed. He was promoted to Mars 2020 Engineering Operations Mechanisms Lead and later to Operations Mechanisms Chair. During this time, he also served as an operator on the Mars 2020 Ingenuity helicopter project.
Morillo's contributions to the Mars 2020 program have been widely recognized. The mission deployed the Perseverance rover, which made a seven-month journey to Mars. On February 18, 2021, the rover landed safely, at the time the largest and most advanced rover ever sent to another planet. Perseverance joined the ranks of the previous successful Mars rovers, including Sojourner, Spirit, Opportunity, and Curiosity.
The primary objective of the Mars 2020 mission was to collect rocks, small fragments, dust, and sand from the Martian surface. Equipped with a robotic arm, the rover could analyze samples immediately upon contact. Notably, Perseverance also carried an instrument called MOXIE, the Mars Oxygen In-Situ Resource Utilization Experiment, which demonstrated the capability to convert carbon dioxide into oxygen on the Red Planet.
References
Living people
Year of birth missing (living people)
Aerospace engineers
University of Michigan alumni
NASA people
Mechanical engineers
Ecuadorian engineers
Hispanic and Latino American scientists | Elio Morillo | Engineering | 496 |
11,421,969 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD35 | In molecular biology, snoRNA U35 (also known as SNORD35) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA U35 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
U35 is encoded in intron 6 of ribosomal protein L13A and intron 3 of ribosomal protein S11 in humans, and at homologous positions in mouse and chicken ribosomal protein genes. U35 is predicted to guide the 2'-O-ribose methylation of 28S ribosomal RNA (rRNA) residue C4506.
References
External links
Small nuclear RNA | Small nucleolar RNA SNORD35 | Chemistry | 256 |
18,046,910 | https://en.wikipedia.org/wiki/Phenylacetaldehyde | Phenylacetaldehyde is an organic compound used in the synthesis of fragrances and polymers. It is an aldehyde consisting of acetaldehyde bearing a phenyl substituent, and is the parent member of the phenylacetaldehyde class of compounds. It has a role as a human metabolite, a Saccharomyces cerevisiae metabolite, an Escherichia coli metabolite and a mouse metabolite. It is an alpha-CH2-containing aldehyde.
Phenylacetaldehyde is an important oxidation-related aldehyde. Exposure to styrene gives phenylacetaldehyde as a secondary metabolite; styrene has been implicated as a reproductive toxicant, neurotoxicant, or carcinogen in vivo and in vitro. Phenylacetaldehyde can also be formed by diverse thermal reactions during cooking, and together with C8 compounds it has been identified as a major aroma-active compound in cooked pine mushroom. Phenylacetaldehyde is readily oxidized to phenylacetic acid, which is excreted primarily in the urine in conjugated form.
Natural occurrence
Phenylacetaldehyde occurs extensively in nature because it can be biosynthetically derived from the amino acid phenylalanine. Natural sources of the compound include chocolate, buckwheat, flowers, and communication pheromones from various insect orders. It is notable for being a floral attractant for numerous species of Lepidoptera; for example, it is the strongest floral attractor for the cabbage looper moth.
Uses
Fragrances and flavors
The aroma of the pure substance can be described as honey-like, sweet, rose, green, and grassy, and it is added to fragrances to impart hyacinth, narcissi, or rose nuances. For similar reasons the compound can sometimes be found in flavored cigarettes and beverages.
Historically, before biotechnology approaches were developed, phenylacetaldehyde was also used to produce phenylalanine via the Strecker reaction as a step in the production of aspartame sweetener.
Polymers
Phenylacetaldehyde is used in the synthesis of polyesters where it serves as a rate-controlling additive during polymerization.
Natural medicine
Phenylacetaldehyde is responsible for the antibiotic activity of maggot therapy.
MAOI
Theoretically, hydrazone formation and subsequent reduction of the phenylethylidenehydrazine gives phenelzine.
Preparation
Phenylacetaldehyde can be obtained via various synthetic routes and precursors. Notable examples include:
Isomerization of styrene oxide.
Dehydrogenation of 2-phenylethanol over silver or gold catalysts.
Darzens reaction between benzaldehyde and chloroacetate esters.
Wacker oxidation of styrene.
Hofmann rearrangement of cinnamamide ((2E)-3-phenylacrylamide).
Oxidation of cyclooctatetraene with aqueous mercury(II) sulfate.
Strecker degradation of phenylalanine.
Reactivity
Phenylacetaldehyde is often contaminated with polystyrene oxide polymer because of the particular lability of the benzylic alpha proton and the reactivity of the aldehyde. Aldol condensation of the initial dimer gives rise to a range of Michael acceptors and donors.
References
Insect pheromones
Aldehydes
Phenyl compounds | Phenylacetaldehyde | Chemistry | 789 |
49,273,219 | https://en.wikipedia.org/wiki/Boletopsis%20smithii | Boletopsis smithii is a species of hydnoid fungus in the family Bankeraceae. It was described as new to science in 1975 by mycologist Keith A. Harrison, from collections made in Washington.
References
External links
Fungi described in 1975
Fungi of the United States
Thelephorales
Fungi without expected TNC conservation status
Fungus species | Boletopsis smithii | Biology | 72 |
2,849,620 | https://en.wikipedia.org/wiki/Lumber%20room | A lumber room is a room, most often in the attic of a house, used for storing unused possessions such as furniture and other items the household has been "lumbered with", or has accumulated over time. In this sense, to "lumber" someone is to give them something of little use or worth that cannot easily be gotten rid of.
References
Rooms | Lumber room | Engineering | 69 |
3,300,424 | https://en.wikipedia.org/wiki/Ammonium%20perrhenate | Ammonium perrhenate (APR) is the ammonium salt of perrhenic acid, NH4ReO4. It is the most common form in which rhenium is traded. It is a white salt, soluble in ethanol and water and mildly soluble in NH4Cl solution. It was first described soon after the discovery of rhenium.
Structure
The crystal structure of APR resembles that of scheelite, with the metal cation replaced by ammonium. The pertechnetate (NH4TcO4), periodate (NH4IO4), tetrachlorothallate (NH4TlCl4), and tetrachloroindate (NH4InCl4) salts follow this motif. It undergoes a molecular orientational ordering transition on cooling without change of space group, but with a highly anisotropic change in the shape of the unit cell, resulting in the unusual property of a positive temperature and pressure coefficient for the Re NQR frequency. APR does not form hydrates.
Preparation
Ammonium perrhenate may be prepared from virtually all common sources of rhenium. The metal, oxides, and sulfides can be oxidized with nitric acid and the resulting solution treated with aqueous ammonia. Alternatively an aqueous solution of Re2O7 can be treated with ammonia followed by crystallisation.
Reactions
Ammonium perrhenate is a weak oxidizer. It slowly reacts with hydrochloric acid:
NH4ReO4 + 6 HCl → NH4[ReCl4O] + Cl2 ↑ + 3 H2O.
It is reduced to metallic Re upon heating under hydrogen:
2 NH4ReO4 + 7 H2 → 2 Re + 8 H2O + 2 NH3
Ammonium perrhenate decomposes to volatile Re2O7 starting at 250 °C. When heated in a sealed tube at 500 °C, it decomposes instead to rhenium dioxide:
2 NH4ReO4 → 2 ReO2 + N2 + 4 H2O
The ammonium ion can be displaced by some concentrated nitrates, e.g. potassium nitrate, silver nitrate, etc.:
NH4ReO4 + KNO3 → KReO4 ↓ + NH4NO3
It can be reduced to nonahydridorhenate with sodium in ethanol:
NH4ReO4 + 18 Na + 13 C2H5OH → Na2[ReH9] + 13 NaC2H5O + 3 NaOH + NH3·H2O.
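As a back-of-the-envelope check on why APR is a convenient source of the metal (for example via the hydrogen reduction above), the rhenium mass fraction follows directly from standard atomic weights; a minimal sketch, with weights rounded:

    # Mass fraction of rhenium in NH4ReO4, relevant to its reduction to Re metal.
    M = {"N": 14.007, "H": 1.008, "Re": 186.207, "O": 15.999}  # g/mol
    m_apr = M["N"] + 4 * M["H"] + M["Re"] + 4 * M["O"]         # molar mass of NH4ReO4
    print(f"M(NH4ReO4) = {m_apr:.2f} g/mol")                   # ~268.24 g/mol
    print(f"Re content = {100 * M['Re'] / m_apr:.1f} %")       # ~69.4 % by mass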
References
Inorganic compounds
Perrhenates
Ammonium compounds | Ammonium perrhenate | Chemistry | 542 |
11,146,362 | https://en.wikipedia.org/wiki/Coprecipitation | In chemistry, coprecipitation (CPT) or co-precipitation is the carrying down by a precipitate of substances normally soluble under the conditions employed. Analogously, in medicine, coprecipitation (referred to as immunoprecipitation) is specifically "an assay designed to purify a single antigen from a complex mixture using a specific antibody attached to a beaded support".
Coprecipitation is an important topic in chemical analysis, where it can be undesirable, but can also be usefully exploited. In gravimetric analysis, which consists of precipitating the analyte and measuring its mass to determine its concentration or purity, coprecipitation is a problem because undesired impurities often coprecipitate with the analyte, resulting in excess mass. This problem can often be mitigated by "digestion" (waiting for the precipitate to equilibrate and form larger and purer particles) or by redissolving the sample and precipitating it again.
On the other hand, in the analysis of trace elements, as is often the case in radiochemistry, coprecipitation is often the only way of separating an element. Since the trace element is too dilute (sometimes less than a part per trillion) to precipitate by conventional means, it is typically coprecipitated with a carrier, a substance that has a similar crystalline structure that can incorporate the desired element. An example is the separation of francium from other radioactive elements by coprecipitating it with caesium salts such as caesium perchlorate. Otto Hahn is credited for promoting the use of coprecipitation in radiochemistry.
There are three main mechanisms of coprecipitation: inclusion, occlusion, and adsorption. An inclusion (incorporation in the crystal lattice) occurs when the impurity occupies a lattice site in the crystal structure of the carrier, resulting in a crystallographic defect; this can happen when the ionic radius and charge of the impurity are similar to those of the carrier. An adsorbate is an impurity that is weakly, or strongly, bound (adsorbed) to the surface of the precipitate. An occlusion occurs when an adsorbed impurity gets physically trapped inside the crystal as it grows.
Besides its applications in chemical analysis and in radiochemistry, coprecipitation is also important to many environmental issues related to water resources, including acid mine drainage, radionuclide migration around waste repositories, toxic heavy metal transport at industrial and defense sites, metal concentrations in aquatic systems, and wastewater treatment technology.
Coprecipitation is also used as a method of magnetic nanoparticle synthesis.
Distribution between precipitate and solution
There are two models describing the distribution of the tracer compound between the two phases (the precipitate and the solution):
Doerner–Hoskins law (logarithmic): ln(a / (a − x)) = λ ln(b / (b − y))
Berthelot–Nernst law: x / (a − x) = D · y / (b − y)
where:
a and b are the initial concentrations of the tracer and carrier, respectively;
a − x and b − y are the concentrations of tracer and carrier after separation;
x and y are the amounts of the tracer and carrier on the precipitate;
D and λ are the distribution coefficients.
For D and λ greater than 1, the precipitate is enriched in the tracer.
Depending on the co-precipitation system and conditions, either λ or D may be constant.
The derivation of the Doerner–Hoskins law assumes that there is no mass exchange between the interior of the precipitating crystals and the solution. When this assumption is fulfilled, the content of the tracer in the crystal is non-uniform (the crystals are said to be heterogeneous). When the Berthelot–Nernst law applies, the concentration of the tracer in the interior of the crystal is uniform (and the crystals are said to be homogeneous). This is the case when diffusion in the interior is possible (as in liquids) or when the initial small crystals are allowed to recrystallize. Kinetic effects (such as the speed of crystallization and the presence of mixing) play a role.
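To illustrate the practical difference between the two laws, the fraction of tracer carried down can be written in closed form in terms of the fraction p = y/b of carrier precipitated; a minimal sketch with illustrative values of D and λ:

    def berthelot_nernst_fraction(D, p):
        # x/a from x/(a - x) = D * y/(b - y) with p = y/b (homogeneous crystals)
        return D * p / (1.0 - p + D * p)

    def doerner_hoskins_fraction(lam, p):
        # x/a from ln(a/(a - x)) = lam * ln(b/(b - y)) (heterogeneous crystals)
        return 1.0 - (1.0 - p) ** lam

    for p in (0.25, 0.50, 0.90):
        print(p, round(berthelot_nernst_fraction(10, p), 3),
              round(doerner_hoskins_fraction(10, p), 3))
    # For D = lam = 10 > 1 both models enrich the precipitate in the tracer,
    # but the Doerner-Hoskins law carries down more tracer at low p.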
See also
Fajans–Paneth–Hahn Law
References
Chemical processes
Analytical chemistry
Radiochemistry | Coprecipitation | Chemistry | 917 |
69,696,582 | https://en.wikipedia.org/wiki/Wellbee | Wellbee was an American cartoon character and public health mascot that first appeared in 1962. He was an anthropomorphic bumblebee created by Hollywood artist Harold M. Walker at the request of Centers for Disease Control and Prevention's (CDC) public information officer George M. Stenhouse. The character became CDC's national symbol of public health at the time, and was widely used to promote immunization and other public health campaigns in the United States following the Vaccination Assistance Act of 1962.
Origin
Wellbee, a standing cartoon character bumblebee with a smiling round face representing "well-being", was created by the Hollywood artist Harold M. Walker, at the request of CDC's public information officer George M. Stenhouse. Referred to by the CDC as "he", Wellbee was first revealed in The Atlanta Journal-Constitution newspaper on March 11, 1962, following a press release that described the character as "a pleasant-faced, bright–eyed, happy cartoon character, who is the personification of good health."
The purpose of the character was the promotion of preventive health measures and the importance of vaccination. At the time, the US government had substantially increased funding and new programs in public health, and with the support of the Vaccination Assistance Act of 1962, sponsored the CDC in its educational efforts, the symbol of which became Wellbee.
Campaigns
The marketing campaign by the CDC planned appearances of Wellbee at public health events and in leaflets, newspapers and posters, and on radio and television, beginning with promoting Sabin's oral polio vaccine in Atlanta and across the United States. Local health departments used the character Wellbee. In Atlanta and Tampa, a smiling Wellbee appeared on posters encouraging children to "drink the free polio vaccine", stating it "tastes good, works fast, prevents polio". In Chicago, its image appeared on pin-back buttons and billboards. A person dressed as Wellbee posed with baseball players Bill Monbouquette, Dick Radatz and Eddie Bressoud of the Boston Red Sox at Fenway Park. Also in Boston, Wellbee stood alongside mayor John F. Collins, who had been affected by polio.
The bee visited schools in Honolulu, appeared on a dog sled in Anchorage, and in Dallas it cautioned against being "Illbee". Subsequent immunization campaigns included promoting vaccines against diphtheria and tetanus, and the character was used to emphasize the benefits of hand-washing, exercise, oral health, and injury prevention, becoming familiar to children and the national symbol of public health. In 1964 posters encouraged the vaccinated to get boosted.
Effect
Within a year, Stenhouse noted "Wellbee, the 'health educator's friend', had a busy year. He was particularly active in promoting community polio programs. He spoke Spanish in New Mexico; he came to life in costume in Hawaii and led a parade."
As a result of the Vaccination Assistance Act, 50 million people were vaccinated against polio between 1962 and 1964 and seven million children received the vaccine that prevents diphtheria, tetanus and whooping cough, resulting in a fall in cases of polio and diphtheria. In 1965 the Vaccination Assistance Act was extended.
Several vaccine mascots have been created since Wellbee. According to Heidi Larson, director of the Vaccine Confidence Project at the London School of Hygiene & Tropical Medicine, vaccine mascots are "humorous, playful", and a mascot "makes it seem less clinical, less government-driven, less 'You have to take this'", thereby engaging young and older groups.
Gallery
Public health posters featuring Wellbee:
See also
Zé Gotinha
References
Advertising campaigns
1962 in the United States
Public health in the United States
American advertising slogans
Public health education
Health promotion
Health campaigns
Cartoon mascots
American mascots
Fictional bees
Insect mascots
Mascots introduced in 1962
Vaccination advocates
Anthropomorphic insects | Wellbee | Biology | 816 |
64,266,300 | https://en.wikipedia.org/wiki/Schwartz%20topological%20vector%20space | In functional analysis and related areas of mathematics, Schwartz spaces are topological vector spaces (TVS) whose neighborhoods of the origin have a property similar to the definition of totally bounded subsets. These spaces were introduced by Alexander Grothendieck.
Definition
A Hausdorff locally convex space X with continuous dual space X′ is called a Schwartz space if it satisfies any of the following equivalent conditions:
For every closed convex balanced neighborhood U of the origin in X, there exists a neighborhood V of 0 in X such that for all real r > 0, V can be covered by finitely many translates of rU.
Every bounded subset of X is totally bounded, and for every closed convex balanced neighborhood U of the origin in X, there exists a neighborhood V of 0 in X such that for all real r > 0, there exists a bounded subset B of X such that V ⊆ B + rU.
Properties
Every quasi-complete Schwartz space is a semi-Montel space.
Every Fréchet Schwartz space is a Montel space.
The strong dual space of a complete Schwartz space is an ultrabornological space.
Examples and sufficient conditions
Vector subspaces of Schwartz spaces are Schwartz spaces.
The quotient of a Schwartz space by a closed vector subspace is again a Schwartz space.
The Cartesian product of any family of Schwartz spaces is again a Schwartz space.
A vector space endowed with the weak topology induced by a family of linear maps valued in Schwartz spaces is a Schwartz space if that topology is Hausdorff.
The locally convex strict inductive limit of any countable sequence of Schwartz spaces (with each space TVS-embedded in the next space) is again a Schwartz space.
Counter-examples
No infinite-dimensional normed space is a Schwartz space.
There exist Fréchet spaces that are not Schwartz spaces and there exist Schwartz spaces that are not Montel spaces.
See also
References
Bibliography
Functional analysis
Topological vector spaces | Schwartz topological vector space | Mathematics | 360 |
602,528 | https://en.wikipedia.org/wiki/Great%20Falls%20%28Passaic%20River%29 | The Great Falls of the Passaic River is a prominent waterfall, 77 feet (23 m) high, on the Passaic River in the city of Paterson in Passaic County, New Jersey. One of the United States' largest waterfalls, it played a significant role in the early industrial development of New Jersey starting in the earliest days of the nation. The falls and surrounding area are protected as part of the Paterson Great Falls National Historical Park, administered by the National Park Service. Congress authorized its establishment in 2009.
In 1967 it was designated a National Natural Landmark together with Garret Mountain Reservation. The falls and the surrounding neighborhood have also been designated as a National Historic Landmark District since 1976. The Great Falls' raceway and power systems were designated a National Historic Civil Engineering Landmark and a National Historic Mechanical Engineering Landmark in 1977.
History
Formation and early history
Geologically, the falls were formed at the end of the last ice age approximately 13,000 years ago. Formerly the Passaic had followed a shorter course through the Watchung Mountains near present-day Summit. As the glacier receded, the river's previous course was blocked by a newly formed moraine. A large lake, called Glacial Lake Passaic, formed behind the Watchungs. As the ice receded, the river found a new circuitous route around the north end of the Watchungs, carving the spectacular falls through the underlying basalt, which was formed approximately 200 million years ago.
The falls later became the site of a habitation of the historic Lenape Native Americans, who followed earlier indigenous cultures in the region. Later, in the colonial era, Dutch settlers developed a community here beginning in the 1690s.
Industrial development
In 1778, Alexander Hamilton visited the falls and was impressed by their potential for industry. Later, as the nation's first Secretary of the Treasury, Hamilton selected the site for the nation's first planned industrial city, which he called a "national manufactory." In 1791, Hamilton helped found the Society for the Establishment of Useful Manufactures (S.U.M.), a state-chartered private corporation to fulfill this vision. The town of Paterson was founded by the society and named after New Jersey Governor William Paterson, in appreciation of his efforts to promote the society.
Hamilton commissioned civil engineer Pierre Charles L'Enfant, responsible for the layout of the new capital at Washington, D.C., to design the system of canals known as raceways to supply the power for the watermills in the new town. As a result, Paterson became the nucleus for a burgeoning mill industry. In 1792, David Godwin was commissioned to build the first water-powered cotton spinning mill in New Jersey. He subsequently built the first dam on the falls; it was a structure made of wood.
In 1812, this was the site of the state's first continuous roll paper mill. Other 19th-century industries that produced goods using the falls as a power source include the Rogers Locomotive Works (1832), Colt's Manufacturing Company, for the Colt revolver (1837), and the construction of the USS Holland (SS-1) (1898). The oldest extant structure in the historic district is the Phoenix Mill, built in 1813.
Workers, especially new immigrants from Europe who often did not speak English, were exploited and began to seek better working conditions. The industrial area became a site of labor unrest, and it was the center of the 1913 Paterson silk strike. Facing harsh conditions in the factories, immigrant workers staged numerous strikes during and after the First World War, adding to the social tensions of the time, and they organized some of the earliest labor movements in the United States.
The SUM society continued operation until 1945, when its charter and property were sold to the city of Paterson. The area fell into disuse during a period of restructuring that resulted in a steep decline of industry in the region during the mid to late 20th century. In 1971, concerned residents established the Great Falls Preservation and Development Corporation to restore and redevelop the historic mill buildings and raceways as artifacts of industrial history.
Great Falls State Park
The State of New Jersey announced plans for a new urban state park in Paterson surrounding the Great Falls, called Great Falls State Park, in 2007. The master plan for the park called for utilizing surrounding industrial areas for parklands that include a trail network and recreation areas, and creating new areas to view the falls. These plans were superseded by the establishment of Great Falls National Historical Park.
National Historical Park
On March 30, 2009, President Obama signed the Omnibus Public Land Management Act authorizing the falls as a national historical park, which would provide additional federal protections for the 77-foot waterfall. By 2011, Great Falls State Park and other land along the Passaic River were transferred to the federal government for the creation of the Paterson Great Falls National Historical Park. Formal establishment as a unit of the National Park System required action by the Secretary of the Interior, which took place November 7, 2011, when Secretary Salazar formally accepted lands on behalf of the United States, and dedicated the park as the nation's 397th park system unit.
Viewing the falls
The Falls are viewable from Haines Overlook Park on the south and Mary Ellen Kramer Park on the north. Drive-by viewing is available from McBride Avenue where it crosses the river just above the Falls. A footbridge over the Falls gorge (historically, the eighth such bridge to span this chasm) also serves as an outlook point. A visitor's center at the corner of Spruce and McBride avenues, in the Great Falls Historic District, provides a historical overview of the falls and the industrial and cultural history of Paterson. A record 177,000 visitors went to the Great Falls in 2016.
National Natural Landmark
The Great Falls of Paterson – Garret Mountain is a National Natural Landmark designated in January 1967 and expanded in April 1967 to include nearby Garret Mountain. Together they help demonstrate how jointed basaltic lava flow shaped the geology of the area during the Early Mesozoic period through both extrusion and intrusion.
The designation protects the site from federal development, but not from local and state development. Redevelopment of the decayed adjacent industrial areas has been an ongoing controversial topic. An attempt in the 1990s to redevelop the adjacent Allied Textile Printing Co. (ATP) facility, destroyed by fire in the 1980s, into prefabricated townhouses was initially approved by the city but later repelled by a coalition of local citizens seeking to preserve the historic character of the district.
Hydroelectric facility
The hydroelectric plant at the falls is operated by Eagle Creek Renewable Energy, which is considering commissioning another facility downstream at the Dundee Dam.
The Great Falls hydroelectric plant has three Kaplan-type turbines with a total capacity of 10.95 MWe. Flow through each turbine is 710 cfs, for a total flow of 2,130 cfs (1,377 million US gallons per day). Three penstocks, each 8.5 ft in diameter, feed the turbines, with a flow velocity of 12.5 ft/s (about 8.5 mph).
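These figures are consistent with the standard hydropower relation P = ρgQhη. A rough cross-check (the 77 ft head is the height of the falls quoted above, and the 0.8 overall efficiency is an assumed round number):

    # Sanity check of the 10.95 MWe rating from flow and head.
    CFS_TO_M3S, FT_TO_M = 0.0283168, 0.3048
    rho, g = 1000.0, 9.81            # water density (kg/m^3), gravity (m/s^2)
    Q = 2130 * CFS_TO_M3S            # total flow, m^3/s (~60.3)
    h = 77 * FT_TO_M                 # gross head, m (~23.5)
    eta = 0.8                        # assumed overall turbine/generator efficiency
    P = rho * g * Q * h * eta        # hydraulic power in watts
    print(f"{P / 1e6:.1f} MW")       # ~11.1 MW, close to the 10.95 MWe nameplate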
In popular culture
The unique history of the Great Falls and the city were described in the five-volume philosophical poem Paterson by William Carlos Williams. Among the episodes described in Williams' poem is the 1827 leap over the falls by Sam Patch, who later became the first known person to perform a stunt at Niagara Falls. The 2016 film Paterson, directed by Jim Jarmusch, is partly inspired by the works of Williams and features the falls as a primary location.
The Great Falls were also featured in the pilot of the HBO crime drama The Sopranos, as well as in the series' sixth episode, in which Mikey Palmice and another associate throw a drug dealer off the bridge and into the falls to his death.
See also
List of waterfalls
List of National Natural Landmarks in New Jersey
Garret Mountain Reservation
National Register of Historic Places listings in Passaic County, New Jersey
Paterson Museum and Rogers Locomotive and Machine Works
Old Great Falls Historic District
Lambert Castle
References
External links
Paterson Great Falls National Historical Park
National Park Service: On Designation of the Area as part of the National Park System
Paterson Friends of the Great Falls
Passaic County, NJ Passaic County Board of Chosen Freeholders
Hamilton Partnership for Paterson
From Local Landmark to National Site
Landforms of Passaic County, New Jersey
Historic Civil Engineering Landmarks
Historic districts on the National Register of Historic Places in New Jersey
National Historic Landmarks in New Jersey
National Historical Parks in New Jersey
Passaic River
National Natural Landmarks in New Jersey
Protected areas of Passaic County, New Jersey
Watchung Mountains
Waterfalls of New Jersey
Parks in Passaic County, New Jersey
National Register of Historic Places in Passaic County, New Jersey
Block waterfalls
Protected areas established in 2009
2009 establishments in New Jersey
United Water
Paterson, New Jersey
Hydroelectric power plants in New Jersey
Energy infrastructure on the National Register of Historic Places
Geography of Passaic County, New Jersey
National historical parks of the United States | Great Falls (Passaic River) | Engineering | 1,798 |
3,868,142 | https://en.wikipedia.org/wiki/Ortho-McNeil%20Pharmaceutical | Ortho-McNeil Pharmaceutical (now operating under Janssen Pharmaceuticals) was a pharmaceutical company based in Raritan, New Jersey, that was formed from the merger of Ortho Pharmaceutical and McNeil Pharmaceutical in 1993. These pharmaceutical companies were pioneers and leaders in areas such as pain management, acid reflux disease, and infectious diseases.
Ortho-McNeil and Janssen Pharmaceuticals together composed the Ortho-McNeil-Janssen group within Johnson & Johnson before a decision to operate under the Janssen Pharmaceuticals name in 2011.
Products
Amongst its many prescription drugs are:
Ortho Tri-cyclen
Ortho-Evra
Doribax
Elmiron
Levaquin
Ultram ER
Aciphex
Concerta
Lawsuits
Topamax
False claims federal case
In 2010, Ortho-McNeil pled guilty in U.S. District Court to one count of misdemeanor violation of the Food, Drug & Cosmetic Act for illegally promoting its epilepsy drug Topamax for uses that were not approved by the FDA. The company was charged with using a program called "Doctor for a Day" to promote Topamax to psychiatrists for treatment of mental health conditions, despite never applying for FDA approval of Topamax for any psychiatric indication. The company was sentenced to pay a fine of $6.14 million.
Ortho-McNeil's parent company, Johnson and Johnson, also paid $75.37 million to resolve civil allegations under the False Claims Act that it caused false claims to be submitted to government health care programs for a variety of psychiatric uses that were not FDA approved.
Civil lawsuits
Ortho-McNeil was found liable in two 2013 civil suits by women who gave birth to children with birth defects after taking Topamax while pregnant. The jury found that they negligently failed to warn the patients and their doctors of the risks associated with Topamax when used by patients during pregnancy. They awarded $11 million in damages to one family and $4 million to the other.
As a result of these and other patient reports, the FDA ordered that a warning be added to the prescribing information for Topamax detailing the risk of birth defects such as cleft lip and cleft palate.
See also
Biotech and pharmaceutical companies in the New York metropolitan area
Cilag
Janssen Pharmaceutica
References
External links
Official website
Johnson & Johnson subsidiaries
Pharmaceutical companies based in New Jersey
Companies based in New Jersey
Pharmaceutical companies established in 1993
American companies established in 1993 | Ortho-McNeil Pharmaceutical | Chemistry | 514 |
18,579,693 | https://en.wikipedia.org/wiki/A%20value | A-values are numerical values used in the determination of the most stable orientation of atoms in a molecule (conformational analysis), as well as a general representation of steric bulk. A-values are derived from energy measurements of the different cyclohexane conformations of a monosubstituted cyclohexane.
Substituents on a cyclohexane ring prefer the equatorial to the axial position. The difference in Gibbs free energy (ΔG) between the higher-energy conformation (axial substitution) and the lower-energy conformation (equatorial substitution) is the A-value for that particular substituent.
Utility
A-values help predict the conformation of cyclohexane rings. The most stable conformation will be the one which has the substituent or substituents equatorial. When multiple substituents are taken into consideration, the conformation where the substituent with the largest A-value is equatorial is favored.
The utility of A-values can be generalized for use outside of cyclohexane conformations. A-values can help predict the steric effect of a substituent. In general, the larger a substituent's A-value, the larger the steric effect of that substituent. A methyl group has an A-value of 1.74 while tert-butyl group has an A-value of ~5. Because the A-value of tert-butyl is higher, tert-butyl has a larger steric effect than methyl. This difference in steric effects can be used to help predict reactivity in chemical reactions.
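Because an A-value is a Gibbs free-energy difference, the equatorial/axial equilibrium it implies follows from the Boltzmann relation K = exp(A/RT). A minimal sketch (room temperature assumed, using the A-values quoted above):

    import math

    R, T = 1.987e-3, 298.0  # gas constant in kcal/(mol*K), room temperature in K

    def equatorial_fraction(a_value):
        # a_value in kcal/mol; K = [equatorial]/[axial] = exp(A/RT)
        K = math.exp(a_value / (R * T))
        return K / (1.0 + K)

    for name, A in (("methyl", 1.74), ("tert-butyl", 4.9)):
        print(name, f"{100 * equatorial_fraction(A):.2f}% equatorial")
    # methyl: ~95% equatorial; tert-butyl: >99.9%, effectively locking the ring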
Free energy considerations
Steric effects play a major role in the assignment of configurations in cyclohexanes. One can use steric hindrances to determine the propensity of a substituent to reside in the axial or equatorial plane. It is known that axial bonds are more hindered than the corresponding equatorial bonds. This is because substituents in the axial position are relatively close to two other axial substituents. This makes it very crowded when bulky substituents are oriented in the axial position. These types of steric interactions are commonly known as 1,3 diaxial interactions. These types of interactions are not present with substituents at the equatorial position.
There are generally considered to be three principal contributions to the conformational free energy:
Baeyer strain, defined as the strain arising from deformation of bond angles.
Pitzer strain, defined as the torsional strain arising from 1,2-interactions between groups attached to contiguous carbons.
Van der Waals interactions, which are similar to 1,3-diaxial interactions.
Enthalpic components
When comparing relative stability, 6- and 7-atom interactions can be used to approximate differences in enthalpy between conformations. Each 6-atom interaction is worth and each 7-atom interaction is worth .
Entropic components
Entropy also plays a role in a substituent's preference for the equatorial position. The entropic component is determined by the following formula: S = R ln σ,
where σ is equal to the number of microstates available for each conformation.
Due to the larger number of possible conformations of ethyl cyclohexane, the A value is reduced from what would be predicted based purely on enthalpic terms. Due to these favorable entropic conditions, the steric relevance of an ethyl group is similar to that of a methyl substituent.
Table of A-values
Applications
Predicting reactivity
One of the original experiments performed by Winstein and Holness was measuring the rate of oxidation in trans- and cis-substituted rings using a chromium(VI) oxidant. The large tert-butyl group locks the conformation of each molecule, placing itself equatorial.
It was observed that the cis compound underwent oxidation at a much faster rate than the trans compound. The proposal was that the hydroxyl group in the axial position was disfavored, so the cis compound formed the carbonyl more readily to relieve this strain. The trans compound reacted at rates identical to those found for the monosubstituted cyclohexanol.
Approximating intramolecular force strength using A-values
Using the A-values of the hydroxyl and isopropyl subunit, the energetic value of a favorable intramolecular hydrogen bond can be calculated.
Limitations
A-values are measured using a monosubstituted cyclohexane ring, and are an indication only of the sterics a particular substituent imparts on the molecule. This leads to a problem when there are possible stabilizing electronic factors in a different system. For example, a carboxylic acid substituent can be axial in the ground state despite a positive A-value. From this observation, it is clear that other electronic interactions can stabilize the axial conformation.
Other considerations
A-values do not predict the physical size of a molecule, only the steric effect. For example, the tert-butyl group (A-value = 4.9) has a larger A-value than the trimethylsilyl group (A-value = 2.5), yet the tert-butyl group actually occupies less space. This difference can be attributed to the longer carbon–silicon bond as compared to the carbon–carbon bonds of the tert-butyl group. The longer bond allows fewer interactions with neighboring substituents, which effectively makes the trimethylsilyl group less sterically hindering and thus lowers its A-value. This can also be seen when comparing the halogens: bromine, iodine, and chlorine all have similar A-values even though their atomic radii differ. A-values, then, predict the apparent size of a substituent, and the relative apparent sizes determine the differences in steric effects between compounds. Thus, A-values are useful tools for predicting compound reactivity in chemical reactions.
References
Isomerism
Physical organic chemistry | A value | Chemistry | 1,258 |
3,871,014 | https://en.wikipedia.org/wiki/Rainbow | A rainbow is an optical phenomenon caused by refraction, internal reflection and dispersion of light in water droplets resulting in a continuous spectrum of light appearing in the sky. The rainbow takes the form of a multicoloured circular arc. Rainbows caused by sunlight always appear in the section of sky directly opposite the Sun. Rainbows can be caused by many forms of airborne water. These include not only rain, but also mist, spray, and airborne dew.
Rainbows can be full circles. However, the observer normally sees only an arc formed by illuminated droplets above the ground, and centered on a line from the Sun to the observer's eye.
In a primary rainbow, the arc shows red on the outer part and violet on the inner side. This rainbow is caused by light being refracted when entering a droplet of water, then reflected inside on the back of the droplet and refracted again when leaving it.
In a double rainbow, a second arc is seen outside the primary arc, and has the order of its colours reversed, with red on the inner side of the arc. This is caused by the light being reflected twice on the inside of the droplet before leaving it.
Visibility
Rainbows can be observed whenever there are water drops in the air and sunlight shining from behind the observer at a low altitude angle. Because of this, rainbows are usually seen in the western sky during the morning and in the eastern sky during the early evening. The most spectacular rainbow displays happen when half the sky is still dark with raining clouds and the observer is at a spot with clear sky in the direction of the Sun. The result is a luminous rainbow that contrasts with the darkened background. During such good visibility conditions, the larger but fainter secondary rainbow is often visible. It appears about 10° outside of the primary rainbow, with inverse order of colours.
The rainbow effect is also commonly seen near waterfalls or fountains. In addition, the effect can be artificially created by dispersing water droplets into the air during a sunny day. Rarely, a moonbow, lunar rainbow or nighttime rainbow, can be seen on strongly moonlit nights. As human visual perception for colour is poor in low light, moonbows are often perceived to be white.
It is difficult to photograph the complete semicircle of a rainbow in one frame, as this would require an angle of view of 84°. For a 35 mm camera, a wide-angle lens with a focal length of 19 mm or less would be required. Now that software for stitching several images into a panorama is available, images of the entire arc and even secondary arcs can be created fairly easily from a series of overlapping frames.
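The focal-length figure follows from the usual angle-of-view relation, FOV = 2·arctan(w / 2f). A quick sketch (assuming the 36 mm horizontal width of the 35 mm frame):

    import math

    def horizontal_fov_deg(focal_mm, frame_width_mm=36.0):
        # Angle of view across the long side of a 35 mm frame
        return math.degrees(2 * math.atan(frame_width_mm / (2 * focal_mm)))

    print(f"{horizontal_fov_deg(19):.0f} deg")  # ~87 deg, just covering the 84 deg arc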
From above the Earth such as in an aeroplane, it is sometimes possible to see a rainbow as a full circle. This phenomenon can be confused with the glory phenomenon, but a glory is usually much smaller, covering only 5–20°.
The sky inside a primary rainbow is brighter than the sky outside of the bow. This is because each raindrop is a sphere and it scatters light over an entire circular disc in the sky. The radius of the disc depends on the wavelength of light, with red light being scattered over a larger angle than blue light. Over most of the disc, scattered light at all wavelengths overlaps, resulting in white light which brightens the sky. At the edge, the wavelength dependence of the scattering gives rise to the rainbow.
The light of a primary rainbow arc is 96% polarised tangential to the arc. The light of the second arc is 90% polarised.
Number of colours in a spectrum or a rainbow
For colours seen by the human eye, the most commonly cited and remembered sequence is Isaac Newton's sevenfold red, orange, yellow, green, blue, indigo and violet, remembered by the mnemonic Richard Of York Gave Battle In Vain, or as the name of a fictional person (Roy G. Biv). The initialism is sometimes referred to in reverse order, as VIBGYOR. More modernly, the rainbow is often divided into red, orange, yellow, green, cyan, blue and violet. The apparent discreteness of main colours is an artefact of human perception and the exact number of main colours is a somewhat arbitrary choice.
Newton, who admitted his eyes were not very critical in distinguishing colours, originally (1672) divided the spectrum into five main colours: red, yellow, green, blue and violet. Later he included orange and indigo, giving seven main colours by analogy to the number of notes in a musical scale. Newton chose to divide the visible spectrum into seven colours out of a belief derived from the beliefs of the ancient Greek sophists, who thought there was a connection between the colours, the musical notes, the known objects in the Solar System, and the days of the week. Scholars have noted that what Newton regarded at the time as "blue" would today be regarded as cyan, and what Newton called "indigo" would today be considered blue.
The colour pattern of a rainbow is different from a spectrum, and the colours are less saturated. There is spectral smearing in a rainbow since, for any particular wavelength, there is a distribution of exit angles, rather than a single unvarying angle. In addition, a rainbow is a blurred version of the bow obtained from a point source, because the disk diameter of the sun (0.533°) cannot be neglected compared to the width of a rainbow (2.36°). Further, the red of the first supplementary rainbow overlaps the violet of the primary rainbow, so rather than the final colour being a variant of spectral violet, it is actually a purple. The number of colour bands of a rainbow may therefore be different from the number of bands in a spectrum, especially if the droplets are particularly large or small. Therefore, the number of colours of a rainbow is variable. If, however, the word rainbow is used inaccurately to mean spectrum, it is the number of main colours in the spectrum.
Moreover, rainbows have bands beyond red and violet in the near-infrared and ultraviolet regions respectively; however, these bands are not visible to humans. Only the frequencies of these regions that lie near the visible spectrum are included in rainbows, since water and air become increasingly opaque to frequencies further out, scattering the light. The UV band is sometimes visible to cameras using black and white film.
The question of whether everyone sees seven colours in a rainbow is related to the idea of linguistic relativity. Suggestions have been made that there is universality in the way that a rainbow is perceived. However, more recent research suggests that the number of distinct colours observed and what these are called depend on the language that one uses, with people whose language has fewer colour words seeing fewer discrete colour bands.
Explanation
When sunlight encounters a raindrop, part of the light is reflected and the rest enters the raindrop. The light is refracted at the surface of the raindrop. When this light hits the back of the raindrop, some of it is reflected off the back. When the internally reflected light reaches the surface again, once more some is internally reflected and some is refracted as it exits the drop. (The light that reflects off the drop, exits from the back, or continues to bounce around inside the drop after the second encounter with the surface, is not relevant to the formation of the primary rainbow.) The overall effect is that part of the incoming light is reflected back over the range of 0° to 42°, with the most intense light at 42°. This angle is independent of the size of the drop, but does depend on its refractive index. Seawater has a higher refractive index than rain water, so the radius of a "rainbow" in sea spray is smaller than that of a true rainbow. This is visible to the naked eye by a misalignment of these bows.
The reason the returning light is most intense at about 42° is that this is a turning point – light hitting the outermost ring of the drop gets returned at less than 42°, as does the light hitting the drop nearer to its centre. There is a circular band of light that all gets returned right around 42°. If the Sun were a laser emitting parallel, monochromatic rays, then the luminance (brightness) of the bow would tend toward infinity at this angle if interference effects are ignored. But since the Sun's luminance is finite and its rays are not all parallel (it covers about half a degree of the sky), the luminance does not go to infinity. Furthermore, the amount by which light is refracted depends upon its wavelength, and hence its colour. This effect is called dispersion. Blue light (shorter wavelength) is refracted at a greater angle than red light, but due to the reflection of light rays from the back of the droplet, the blue light emerges from the droplet at a smaller angle to the original incident white light ray than the red light. Due to this angle, blue is seen on the inside of the arc of the primary rainbow, and red on the outside. The result of this is not only to give different colours to different parts of the rainbow, but also to diminish the brightness. (A "rainbow" formed by droplets of a liquid with no dispersion would be white, but brighter than a normal rainbow.)
The light at the back of the raindrop does not undergo total internal reflection, and most of the light emerges from the back. However, light coming out the back of the raindrop does not create a rainbow between the observer and the Sun because spectra emitted from the back of the raindrop do not have a maximum of intensity, as the other visible rainbows do, and thus the colours blend together rather than forming a rainbow.
A rainbow does not exist at one particular location. Many rainbows exist; however, only one can be seen depending on the particular observer's viewpoint, as droplets of water illuminated by the sun. All raindrops refract and reflect the sunlight in the same way, but only the light from some raindrops reaches the observer's eye. This light is what constitutes the rainbow for that observer. The whole system composed of the Sun's rays, the observer's head, and the (spherical) water drops has an axial symmetry around the axis through the observer's head and parallel to the Sun's rays. The rainbow is curved because the set of all the raindrops that have the right angle between the observer, the drop, and the Sun, lie on a cone pointing at the sun with the observer at the tip. The base of the cone forms a circle at an angle of 40–42° to the line between the observer's head and their shadow, but 50% or more of the circle is below the horizon, unless the observer is sufficiently far above the earth's surface to see it all, for example in an aeroplane (see below). Alternatively, an observer with the right vantage point may see the full circle in a fountain or waterfall spray. Conversely, at lower latitudes near midday (specifically, when the sun's elevation exceeds 42 degrees) a rainbow will not be visible against the sky.
Mathematical derivation
It is possible to determine the perceived angle which the rainbow subtends as follows.
Given a spherical raindrop, and defining the perceived angle of the rainbow as 2φ, and the angle of the internal reflection as 2β, then the angle of incidence of the Sun's rays with respect to the drop's surface normal is 2β − φ. Since the angle of refraction is β, Snell's law gives us
sin(2β − φ) = n sin β,
where n = 1.333 is the refractive index of water. Solving for φ, we get
φ = 2β − arcsin(n sin β).
The rainbow will occur where the angle φ is maximum with respect to the angle β. Therefore, from calculus, we can set dφ/dβ = 0, and solve for β, which yields
cos β_max = (2/n) √((n² − 1)/3).
Substituting back into the earlier equation for φ yields 2φ_max ≈ 42° as the radius angle of the rainbow.
For red light (wavelength 750 nm, n = 1.330 based on the dispersion relation of water), the radius angle is 42.5°; for blue light (wavelength 350 nm, n = 1.343), the radius angle is 40.6°.
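These angles are easy to verify numerically. A minimal sketch of the computation just derived, using the refractive indices quoted above:

    import math

    def rainbow_radius_angle(n):
        # Radius angle 2*phi of the primary bow for refractive index n
        beta = math.acos((2.0 / n) * math.sqrt((n * n - 1.0) / 3.0))  # refraction angle at the turning point
        phi = 2.0 * beta - math.asin(n * math.sin(beta))              # phi = 2*beta - arcsin(n sin beta)
        return math.degrees(2.0 * phi)

    for label, n in (("red, n = 1.330", 1.330), ("mean, n = 1.333", 1.333), ("blue, n = 1.343", 1.343)):
        print(label, f"{rainbow_radius_angle(n):.1f} deg")
    # red: 42.5, mean: 42.0, blue: 40.6 -- matching the angles in the text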
Variations
Double rainbows
A secondary rainbow, at a greater angle than the primary rainbow, is often visible. The term double rainbow is used when both the primary and secondary rainbows are visible. In theory, all rainbows are double rainbows, but since the secondary bow is always fainter than the primary, it may be too weak to spot in practice.
Secondary rainbows are caused by a double reflection of sunlight inside the water droplets. Technically the secondary bow is centred on the sun itself, but since its angular size is more than 90° (about 127° for violet to 130° for red), it is seen on the same side of the sky as the primary rainbow, about 10° outside it at an apparent angle of 50–53°. As a result of the "inside" of the secondary bow being "up" to the observer, the colours appear reversed compared to those of the primary bow.
The secondary rainbow is fainter than the primary because more light escapes from two reflections compared to one and because the rainbow itself is spread over a greater area of the sky. Each rainbow reflects white light inside its coloured bands, but that is "down" for the primary and "up" for the secondary. The dark area of unlit sky lying between the primary and secondary bows is called Alexander's band, after Alexander of Aphrodisias, who first described it.
Twinned rainbow
Unlike a double rainbow that consists of two separate and concentric rainbow arcs, the very rare twinned rainbow appears as two rainbow arcs that split from a single base. The colours in the second bow, rather than reversing as in a secondary rainbow, appear in the same order as the primary rainbow. A "normal" secondary rainbow may be present as well. Twinned rainbows can look similar to, but should not be confused with supernumerary bands. The two phenomena may be told apart by their difference in colour profile: supernumerary bands consist of subdued pastel hues (mainly pink, purple and green), while the twinned rainbow shows the same spectrum as a regular rainbow.
The cause of a twinned rainbow is believed to be the combination of different sizes of water drops falling from the sky. Due to air resistance, raindrops flatten as they fall, and flattening is more prominent in larger water drops. When two rain showers with different-sized raindrops combine, they each produce slightly different rainbows which may combine and form a twinned rainbow.
A numerical ray tracing study showed that a twinned rainbow on a photo could be explained by a mixture of 0.40 and 0.45 mm droplets. That small difference in droplet size resulted in a small difference in flattening of the droplet shape, and a large difference in flattening of the rainbow top.
Meanwhile, the even rarer case of a rainbow split into three branches was observed and photographed in nature.
Full-circle rainbow
In theory, every rainbow is a circle, but from the ground, usually only its upper half can be seen. Since the rainbow's centre is diametrically opposed to the Sun's position in the sky, more of the circle comes into view as the sun approaches the horizon, meaning that the largest section of the circle normally seen is about 50% during sunset or sunrise. Viewing the rainbow's lower half requires the presence of water droplets below the observer's horizon, as well as sunlight that is able to reach them. These requirements are not usually met when the viewer is at ground level, either because droplets are absent in the required position, or because the sunlight is obstructed by the landscape behind the observer. From a high viewpoint such as a high building or an aircraft, however, the requirements can be met and the full-circle rainbow can be seen. Like a partial rainbow, the circular rainbow can have a secondary bow or supernumerary bows as well. It is possible to produce the full circle when standing on the ground, for example by spraying a water mist from a garden hose while facing away from the sun.
A circular rainbow should not be confused with the glory, which is much smaller in diameter and is created by different optical processes. In the right circumstances, a glory and a (circular) rainbow or fog bow can occur together. Another atmospheric phenomenon that may be mistaken for a "circular rainbow" is the 22° halo, which is caused by ice crystals rather than liquid water droplets, and is located around the Sun (or Moon), not opposite it.
Supernumerary rainbows
In certain circumstances, one or several narrow, faintly coloured bands can be seen bordering the violet edge of a rainbow; i.e., inside the primary bow or, much more rarely, outside the secondary. These extra bands are called supernumerary rainbows or supernumerary bands; together with the rainbow itself the phenomenon is also known as a stacker rainbow. The supernumerary bows are slightly detached from the main bow, become successively fainter along with their distance from it, and have pastel colours (consisting mainly of pink, purple and green hues) rather than the usual spectrum pattern. The effect becomes apparent when water droplets are involved that have a diameter of about 1 mm or less; the smaller the droplets are, the broader the supernumerary bands become, and the less saturated their colours. Due to their origin in small droplets, supernumerary bands tend to be particularly prominent in fogbows.
Supernumerary rainbows cannot be explained using classical geometric optics. The alternating faint bands are caused by interference between rays of light following slightly different paths with slightly varying lengths within the raindrops. Some rays are in phase, reinforcing each other through constructive interference, creating a bright band; others are out of phase by up to half a wavelength, cancelling each other out through destructive interference, and creating a gap. Given the different angles of refraction for rays of different colours, the patterns of interference are slightly different for rays of different colours, so each bright band is differentiated in colour, creating a miniature rainbow. Supernumerary rainbows are clearest when raindrops are small and of uniform size. The very existence of supernumerary rainbows was historically a first indication of the wave nature of light, and the first explanation was provided by Thomas Young in 1804.
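The interference argument can be illustrated with a minimal two-ray model. This is only a schematic (the quantitative treatment is Airy's theory, mentioned under Scientific history below), and the 550 nm wavelength below is an arbitrary choice for green light: two equal-amplitude rays whose paths differ by Δ superpose with relative intensity cos²(πΔ/λ), giving a bright band when Δ is a whole number of wavelengths and a gap when it is an odd half-number.

```python
import math

WAVELENGTH = 550e-9  # green light, metres (illustrative choice)

def two_ray_intensity(path_difference):
    """Relative intensity of two equal-amplitude interfering rays:
    |1 + exp(i*phi)|^2 / 4 = cos^2(phi / 2), with phi = 2*pi*delta/lambda."""
    phi = 2 * math.pi * path_difference / WAVELENGTH
    return math.cos(phi / 2.0) ** 2

for m in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{m:4.2f} wavelengths -> intensity {two_ray_intensity(m * WAVELENGTH):.2f}")
# 0 and 1 wavelengths give intensity 1 (bright band); 0.5 gives 0 (gap).
```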
Reflected rainbow, reflection rainbow
When a rainbow appears above a body of water, two complementary mirror bows may be seen below and above the horizon, originating from different light paths. Their names are slightly different.
A reflected rainbow may appear on the water surface below the horizon. The sunlight is first deflected by the raindrops, and then reflected off the body of water, before reaching the observer. The reflected rainbow is frequently visible, at least partially, even in small puddles.
A reflection rainbow may be produced where sunlight reflects off a body of water before reaching the raindrops, if the water body is large, quiet over its entire surface, and close to the rain curtain. The reflection rainbow appears above the horizon. It intersects the normal rainbow at the horizon, and its arc reaches higher in the sky, with its centre as high above the horizon as the normal rainbow's centre is below it. Reflection bows are usually brightest when the sun is low because at that time its light is most strongly reflected from water surfaces. As the sun gets lower the normal and reflection bows are drawn closer together. Due to the combination of requirements, a reflection rainbow is rarely visible.
Up to eight separate bows may be distinguished if the reflected and reflection rainbows happen to occur simultaneously: the normal (non-reflection) primary and secondary bows above the horizon (1, 2) with their reflected counterparts below it (3, 4), and the reflection primary and secondary bows above the horizon (5, 6) with their reflected counterparts below it (7, 8).
Monochrome rainbow
Occasionally a shower may happen at sunrise or sunset, where the shorter wavelengths like blue and green have been scattered and essentially removed from the spectrum. Further scattering may occur due to the rain, and the result can be the rare and dramatic monochrome or red rainbow.
Higher-order rainbows
In addition to the common primary and secondary rainbows, it is also possible for rainbows of higher orders to form. The order of a rainbow is determined by the number of light reflections inside the water droplets that create it: One reflection results in the first-order or primary rainbow; two reflections create the second-order or secondary rainbow. More internal reflections cause bows of higher orders—theoretically unto infinity. As more and more light is lost with each internal reflection, however, each subsequent bow becomes progressively dimmer and therefore increasingly difficult to spot. An additional challenge in observing the third-order (or tertiary) and fourth-order (quaternary) rainbows is their location in the direction of the sun (about 40° and 45° from the sun, respectively), causing them to become drowned in its glare.
For these reasons, naturally occurring rainbows of an order higher than 2 are rarely visible to the naked eye. Nevertheless, sightings of the third-order bow in nature have been reported, and in 2011 it was photographed definitively for the first time. Shortly after, the fourth-order rainbow was photographed as well, and in 2014 the first ever pictures of the fifth-order (or quinary) rainbow were published. The quinary rainbow lies partially in the gap between the primary and secondary rainbows and is far fainter than even the secondary. In a laboratory setting, it is possible to create bows of much higher orders. Felix Billet (1808–1882) depicted angular positions up to the 19th-order rainbow, a pattern he called a "rose of rainbows". In the laboratory, it is possible to observe higher-order rainbows by using extremely bright and well collimated light produced by lasers. Up to the 200th-order rainbow was reported by Ng et al. in 1998 using a similar method but an argon ion laser beam.
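The stated positions of these bows follow from the same stationary-deviation construction sketched under the double rainbow section above, with k set to the number of internal reflections; a self-contained check, again with an assumed water index of 1.333:

```python
import math

def bow_angle(n, k):
    """k-th order bow: angle from the antisolar point in degrees, or None
    if no stationary ray exists (requires n <= k + 1)."""
    c2 = (n * n - 1.0) / (k * (k + 2.0))
    if c2 > 1.0:
        return None
    i = math.acos(math.sqrt(c2))
    r = math.asin(math.sin(i) / n)
    dev = math.degrees(2 * (i - r) + k * (math.pi - 2 * r)) % 360.0
    dev = 360.0 - dev if dev > 180.0 else dev
    return 180.0 - dev

for k in range(1, 6):
    a = bow_angle(1.333, k)
    if a > 90.0:  # closer to the sun than to the antisolar point
        print(f"order {k}: ~{180 - a:.0f} degrees from the sun")
    else:
        print(f"order {k}: ~{a:.0f} degrees from the antisolar point")
# Orders 3 and 4 come out ~42 and ~44 degrees from the sun (lost in its
# glare); order 5 lands back near the secondary bow, as described above.
```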
Tertiary and quaternary rainbows should not be confused with "triple" and "quadruple" rainbows—terms sometimes erroneously used to refer to the (much more common) supernumerary bows and reflection rainbows.
Rainbows under moonlight
Like most atmospheric optical phenomena, rainbows can be caused by light from the Sun, but also from the Moon. In the case of the latter, the rainbow is referred to as a lunar rainbow or moonbow. Moonbows are much dimmer and rarer than solar rainbows, requiring the Moon to be near-full in order for them to be seen. For the same reason, moonbows are often perceived as white and may be thought of as monochrome. The full spectrum is present, but the human eye is not normally sensitive enough to see the colours. Long-exposure photographs will sometimes show the colour in this type of rainbow.
Fogbow
Fogbows form in the same way as rainbows, but they are formed by much smaller cloud and fog droplets that diffract light extensively. They are almost white with faint reds on the outside and blues inside; often one or more broad supernumerary bands can be discerned inside the inner edge. The colours are dim because the bow in each colour is very broad and the colours overlap. Fogbows are commonly seen over water when air in contact with the cooler water is chilled, but they can be found anywhere if the fog is thin enough for the sun to shine through and the sun is fairly bright. They are very large—almost as big as a rainbow and much broader. They sometimes appear with a glory at the bow's centre.
Fog bows should not be confused with ice halos, which are very common around the world and visible much more often than rainbows (of any order), yet are unrelated to rainbows.
Sleetbow
A sleetbow forms in the same way as a typical rainbow, with the exception that it occurs when light passes through falling sleet (ice pellets) instead of liquid water. As the light passes through the sleet, it is refracted, causing this rare phenomenon. Sleetbows have been documented across the United States, with the earliest publicly documented and photographed sleetbow being seen in Richmond, Virginia on 21 December 2012. Just like regular rainbows, these can also come in various forms, with a monochrome sleetbow being documented on 7 January 2016 in Valparaiso, Indiana.
Circumhorizontal and circumzenithal arcs
The circumzenithal and circumhorizontal arcs are two related optical phenomena similar in appearance to a rainbow, but unlike the latter, their origin lies in light refraction through hexagonal ice crystals rather than liquid water droplets. This means that they are not rainbows, but members of the large family of halos.
Both arcs are brightly coloured ring segments centred on the zenith, but in different positions in the sky: The circumzenithal arc is notably curved and located high above the Sun (or Moon) with its convex side pointing downwards (creating the impression of an "upside down rainbow"); the circumhorizontal arc runs much closer to the horizon, is more straight and located at a significant distance below the Sun (or Moon). Both arcs have their red side pointing towards the Sun and their violet part away from it, meaning the circumzenithal arc is red on the bottom, while the circumhorizontal arc is red on top.
The circumhorizontal arc is sometimes referred to by the misnomer "fire rainbow". In order to view it, the Sun or Moon must be at least 58° above the horizon, making it a rare occurrence at higher latitudes. The circumzenithal arc, visible only at a solar or lunar elevation of less than 32°, is much more common, but often missed since it occurs almost directly overhead.
Extraterrestrial rainbows
It has been suggested that rainbows might exist on Saturn's moon Titan, as it has a wet surface and humid clouds. The radius of a Titan rainbow would be about 49° instead of 42°, because the fluid in that cold environment is methane instead of water. Although visible rainbows may be rare due to Titan's hazy skies, infrared rainbows may be more common, but an observer would need infrared night vision goggles to see them.
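The 49° figure can be reproduced with the Descartes construction used for water rainbows, swapping in the refractive index of the droplet liquid. The value of 1.29 below is an assumed round figure for cold liquid methane, not one taken from this article:

```python
import math

def primary_bow_angle(n):
    """Primary-bow (one internal reflection) angle from the antisolar point."""
    i = math.acos(math.sqrt((n * n - 1.0) / 3.0))  # Descartes' stationary ray
    r = math.asin(math.sin(i) / n)
    deviation = 2 * (i - r) + (math.pi - 2 * r)
    return 180.0 - math.degrees(deviation)

print(round(primary_bow_angle(1.333), 1))  # ~42.0 for water
print(round(primary_bow_angle(1.29), 1))   # ~48.7 for liquid methane, near 49
```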
Rainbows with different materials
Droplets (or spheres) composed of materials with different refractive indices than plain water produce rainbows with different radius angles. Since salt water has a higher refractive index, a sea-spray bow does not perfectly align with the ordinary rainbow, if seen at the same spot. Tiny plastic or glass marbles may be used in road markings as reflectors to enhance their visibility to drivers at night. Due to a much higher refractive index, rainbows observed on such marbles have a noticeably smaller radius. One can easily reproduce such phenomena by sprinkling liquids of different refractive indices in the air, as illustrated in the photo.
The displacement of the rainbow due to different refractive indices can be pushed to a peculiar limit. For a material with a refractive index larger than 2, there is no angle fulfilling the requirements for the first-order rainbow. For example, the index of refraction of diamond is about 2.4, so diamond spheres would produce rainbows starting from the second order, omitting the first order. In general, as the refractive index exceeds the number n + 1, where n is a natural number, the critical incidence angle for n times internally reflected rays escapes the domain [0°, 90°]. This results in a rainbow of the n-th order shrinking to the antisolar point and vanishing.
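The vanishing condition is easy to check numerically: a stationary ray with k internal reflections needs cos²i = (n² - 1)/(k² + 2k) ≤ 1, which rearranges to n ≤ k + 1. A minimal sketch, taking 2.4 for diamond as stated in the text:

```python
def lowest_bow_order(n):
    """Smallest number k of internal reflections for which a k-th order
    bow can form in spheres of refractive index n (requires n <= k + 1)."""
    k = 1
    while (n * n - 1.0) / (k * (k + 2.0)) > 1.0:  # cos^2(i) must be <= 1
        k += 1
    return k

print(lowest_bow_order(1.33))  # 1: water shows a first-order (primary) bow
print(lowest_bow_order(2.4))   # 2: diamond spheres start at the second order
```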
Scientific history
The classical Greek scholar Aristotle (384–322 BC) was first to devote serious attention to the rainbow. According to Raymond L. Lee and Alistair B. Fraser, "Despite its many flaws and its appeal to Pythagorean numerology, Aristotle's qualitative explanation showed an inventiveness and relative consistency that was unmatched for centuries. After Aristotle's death, much rainbow theory consisted of reaction to his work, although not all of this was uncritical."
In Book I of his Naturales Quaestiones, the Roman philosopher Seneca the Younger discusses various theories of the formation of rainbows extensively, including those of Aristotle. He notices that rainbows always appear opposite the Sun, that they appear in water sprayed by a rower, in the water spat by a fuller on clothes stretched on pegs or by water sprayed through a small hole in a burst pipe. He even speaks of rainbows produced by small rods (virgulae) of glass, anticipating Newton's experiments with prisms. He takes into account two theories: one, that the rainbow is produced by the Sun reflecting in each water drop, the other, that it is produced by the Sun reflected in a cloud shaped like a concave mirror; he favours the latter. He also discusses other phenomena related to rainbows: the mysterious "virgae" (rods), halos and parhelia.
According to Hüseyin Gazi Topdemir, the Arab physicist and polymath Ibn al-Haytham (965–1039 AD) attempted to provide a scientific explanation for the rainbow phenomenon. In his Maqala fi al-Hala wa Qaws Quzah (On the Rainbow and Halo), al-Haytham "explained the formation of rainbow as an image, which forms at a concave mirror. If the rays of light coming from a farther light source reflect to any point on axis of the concave mirror, they form concentric circles in that point. When it is supposed that the sun as a farther light source, the eye of viewer as a point on the axis of mirror and a cloud as a reflecting surface, then it can be observed the concentric circles are forming on the axis." He was not able to verify this because his theory that "light from the sun is reflected by a cloud before reaching the eye" did not allow for a possible experimental verification. This explanation was repeated by Averroes, and, though incorrect, provided the groundwork for the correct explanations later given by Kamāl al-Dīn al-Fārisī in 1309 and, independently, by Theodoric of Freiberg (c. 1250–c. 1311)—both having studied al-Haytham's Book of Optics.
In Song dynasty China (960–1279), a polymath scholar-official named Shen Kuo (1031–1095) hypothesised—as a certain Sun Sikong (1015–1076) did before him—that rainbows were formed by a phenomenon of sunlight encountering droplets of rain in the air. Paul Dong writes that Shen's explanation of the rainbow as a phenomenon of atmospheric refraction "is basically in accord with modern scientific principles."
According to Nader El-Bizri, the Persian astronomer Qutb al-Din al-Shirazi (1236–1311) gave a fairly accurate explanation for the rainbow phenomenon. This was elaborated on by his student, Kamāl al-Dīn al-Fārisī (1267–1319), who gave a more mathematically satisfactory explanation of the rainbow. He "proposed a model where the ray of light from the sun was refracted twice by a water droplet, one or more reflections occurring between the two refractions." An experiment with a water-filled glass sphere was conducted, and al-Farisi showed that the additional refractions due to the glass could be ignored in his model. As he noted in his Kitab Tanqih al-Manazir (The Revision of the Optics), al-Farisi used a large clear vessel of glass in the shape of a sphere, which was filled with water, in order to have an experimental large-scale model of a raindrop. He then placed this model within a camera obscura that had a controlled aperture for the introduction of light. He projected light onto the sphere and ultimately deduced, through several trials and detailed observations of reflections and refractions of light, that the colours of the rainbow are phenomena of the decomposition of light.
In Europe, Ibn al-Haytham's Book of Optics was translated into Latin and studied by Robert Grosseteste. His work on light was continued by Roger Bacon, who wrote in his Opus Majus of 1268 about experiments with light shining through crystals and water droplets showing the colours of the rainbow. In addition, Bacon was the first to calculate the angular size of the rainbow. He stated that the rainbow summit cannot appear higher than 42° above the horizon. Theodoric of Freiberg is known to have given an accurate theoretical explanation of both the primary and secondary rainbows in 1307. He explained the primary rainbow, noting that "when sunlight falls on individual drops of moisture, the rays undergo two refractions (upon ingress and egress) and one reflection (at the back of the drop) before transmission into the eye of the observer." He explained the secondary rainbow through a similar analysis involving two refractions and two reflections.
Descartes' 1637 treatise, Discourse on Method, further advanced this explanation. Knowing that the size of raindrops did not appear to affect the observed rainbow, he experimented with passing rays of light through a large glass sphere filled with water. By measuring the angles at which the rays emerged, he concluded that the primary bow was caused by a single internal reflection inside the raindrop and that a secondary bow could be caused by two internal reflections. He supported this conclusion with a derivation of the law of refraction (subsequently to, but independently of, Snell) and correctly calculated the angles for both bows. His explanation of the colours, however, was based on a mechanical version of the traditional theory that colours were produced by a modification of white light.
Isaac Newton demonstrated that white light was composed of the light of all the colours of the rainbow, which a glass prism could separate into the full spectrum of colours, rejecting the theory that the colours were produced by a modification of white light. He also showed that red light is refracted less than blue light, which led to the first scientific explanation of the major features of the rainbow. Newton's corpuscular theory of light was unable to explain supernumerary rainbows, and a satisfactory explanation was not found until Thomas Young realised that light behaves as a wave under certain conditions, and can interfere with itself.
Young's work was refined in the 1820s by George Biddell Airy, who explained the dependence of the strength of the colours of the rainbow on the size of the water droplets. Modern physical descriptions of the rainbow are based on Mie scattering, work published by Gustav Mie in 1908. Advances in computational methods and optical theory continue to lead to a fuller understanding of rainbows. For example, Nussenzveig provides a modern overview.
Experiments
Experiments on the rainbow phenomenon using artificial raindrops, i.e. water-filled spherical flasks, go back at least to Theodoric of Freiberg in the 14th century. Later, Descartes also studied the phenomenon using a Florence flask. A flask experiment known as Florence's rainbow is still often used today as an imposing and intuitively accessible demonstration experiment of the rainbow phenomenon. It consists in illuminating (with parallel white light) a water-filled spherical flask through a hole in a screen. A rainbow will then appear projected back onto the screen, provided the screen is large enough. Due to the finite wall thickness and the macroscopic character of the artificial raindrop, several subtle differences exist as compared to the natural phenomenon, including slightly changed rainbow angles and a splitting of the rainbow orders.
A very similar experiment consists in using a cylindrical glass vessel filled with water or a solid transparent cylinder and illuminated either parallel to the circular base (i.e. light rays remaining at a fixed height while they transit the cylinder) or under an angle to the base. Under these latter conditions the rainbow angles change relative to the natural phenomenon since the effective index of refraction of water changes (Bravais' index of refraction for inclined rays applies).
Other experiments use small liquid drops (see text above).
Culture and mythology
Rainbows occur frequently in mythology, and have been used in the arts. The first literary occurrence of a rainbow is in the Book of Genesis chapter 9, as part of the flood story of Noah, where it is a sign of God's covenant to never destroy all life on Earth with a global flood again. In Norse mythology, the rainbow bridge Bifröst connects the world of men (Midgard) and the realm of the gods (Asgard). Cuchavira was the god of the rainbow for the Muisca in present-day Colombia and when the regular rains on the Bogotá savanna were over the people thanked him, offering gold, snails and small emeralds. Some forms of Tibetan Buddhism or Dzogchen reference a rainbow body. The Irish leprechaun's secret hiding place for his pot of gold is usually said to be at the end of the rainbow. This place is appropriately impossible to reach, because the rainbow is an optical effect which cannot be approached. In Greek mythology, the goddess Iris is the personification of the rainbow, a messenger goddess who, like the rainbow, connects the mortal world with the gods through messages. In Albanian folk beliefs the rainbow is regarded as the belt of the goddess Prende, and oral legend has it that anyone who jumps over the rainbow changes their sex.
In heraldry, the rainbow proper consists of 4 bands of colour (argent, gules, or, and vert) with the ends resting on clouds. Generalised examples in coats of arms include those of the towns of Regen and Pfreimd, both in Bavaria, Germany; of Bouffémont, France; and of the 69th Infantry Regiment (New York) of the United States Army National Guard.
Rainbow flags have been used for centuries: as a symbol of the Cooperative movement in the German Peasants' War in the 16th century, of peace in Italy, and of LGBT pride and LGBT social movements, the last since Gilbert Baker designed the rainbow pride flag in 1978; it is now also associated with the June pride month. In 1994, Archbishop Desmond Tutu and President Nelson Mandela described newly democratic post-apartheid South Africa as the rainbow nation. The rainbow has also been used in technology product logos, including the Apple computer logo. Many political alliances spanning multiple political parties have called themselves a "Rainbow Coalition".
Pointing at rainbows has been considered a taboo in many cultures.
In Saudi Arabia and other like-minded countries, authorities seize children's clothing (including hats, hair clips, pencil cases, etc.) and toys if they are rainbow-coloured, claiming that such items can encourage homosexuality, and selling such items is illegal.
See also
Atmospheric optics
Circumzenithal arc
Circumhorizontal arc
Glory (optical phenomenon)
Iridescent colours in soap bubbles
Sun dog
Fog bow
Moonbow
Notes
References
Further reading
(Large format handbook for the Summer 1976 exhibition The Rainbow Art Show which took place primarily at the De Young Museum but also at other museums. The book is divided into seven sections, each coloured a different colour of the rainbow.)
External links
The Mathematics of Rainbows, article from the American mathematical society
Interactive simulation of light refraction in a drop (java applet)
Rainbow seen through infrared filter and through ultraviolet filter
Atmospheric Optics website by Les Cowley – Description of multiple types of bows, including: "bows that cross, red bows, twinned bows, coloured fringes, dark bands, spokes", etc.
Creating Circular and Double Rainbows! – video explanation of basics, shown artificial rainbow at night, second rainbow and circular one.
Atmospheric optical phenomena
Lucky symbols
Heraldic charges
LGBTQ symbols
Atmospheric sciences | Rainbow | Physics | 8,223 |
8,271,578 | https://en.wikipedia.org/wiki/Urediniospore | Urediniospores (or uredospores) are thin-walled spores produced by the uredium, a stage in the life-cycle of rusts.
Development
Urediniospores develop in the uredium, generally on a leaf's under surface.
Morphology
Urediniospores are usually dikaryotic, containing two nuclei within one cell. In mass, they are usually pale brown, in contrast to teliospores, which are generally dark brown.
See also
Chlamydospore
Urediniomycetes
Pycniospore
Aeciospore
Teliospore
Ustilaginomycetes
Rust fungus: Spores
References
C.J. Alexopolous, Charles W. Mims, M. Blackwell, Introductory Mycology, 4th ed. (John Wiley and Sons, Hoboken NJ, 2004)
Germ cells
Fungal morphology and anatomy
Mycology | Urediniospore | Biology | 185 |
68,543,797 | https://en.wikipedia.org/wiki/Arsenate%20arsenite | An arsenate arsenite is a chemical compound or salt that contains arsenate and arsenite anions (AsO4^3- and AsO3^3-). These are mixed anion compounds or mixed valence compounds. Some contain a third anion as well. Most known substances are minerals, but a few artificial arsenate arsenite compounds have been made. Many of the minerals are in the Hematolite Group.
An arsenate arsenite compound may also be called an arsenite arsenate.
Properties
Some members of this group of materials, such as mcgovernite, have an extremely large unit cell dimension of 204 Å.
Related
Mixed valence pnictide compounds related to the arsenate arsenites include the nitrite nitrates, and phosphate phosphites.
List
References
Arsenates
Arsenites
Mixed anion compounds | Arsenate arsenite | Physics,Chemistry | 178 |
24,432,905 | https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Liechtenstein | As a member of the EFTA, Liechtenstein (LI) is included in the Nomenclature of Territorial Units for Statistics (NUTS). The three NUTS levels all correspond to the country itself:
NUTS-1: LI0 Liechtenstein
NUTS-2: LI00 Liechtenstein
NUTS-3: LI000 Liechtenstein
Below the NUTS levels, there are LAU: municipalities.
See also
Subdivisions of Liechtenstein
Electoral District of Oberland
Electoral District of Unterland
ISO 3166-2 codes of Liechtenstein
FIPS region codes of Liechtenstein
Sources
Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe
Overview map of EFTA countries - Statistical regions at level 1
LIECHTENSTEIN - Statistical regions at level 2
LIECHTENSTEIN - Statistical regions at level 3
Correspondence between the regional levels and the national administrative units
Communes of Liechtenstein, Statoids.com
Liechtenstein
Subdivisions of Liechtenstein | NUTS statistical regions of Liechtenstein | Mathematics | 171 |
87,806 | https://en.wikipedia.org/wiki/Cornucopia | In classical antiquity, the cornucopia, also called the horn of plenty, was a symbol of abundance and nourishment, commonly a large horn-shaped container overflowing with produce, flowers, or nuts. In Greek, it was called the "horn of Amalthea", after Amalthea, a nurse of Zeus, who is often part of stories of the horn's origin.
Baskets or panniers of this form were traditionally used in western Asia and Europe to hold and carry newly harvested food products. The horn-shaped basket would be worn on the back or slung around the torso, leaving the harvester's hands free for picking.
In Greek/Roman mythology
Mythology offers multiple explanations of the origin of the cornucopia. One of the best-known involves the birth and nurturance of the infant Zeus, who had to be hidden from his devouring father Cronus. In a cave on Mount Ida on the island of Crete, baby Zeus was cared for and protected by a number of divine attendants, including the goat Amalthea ("Nourishing Goddess"), who fed him with her milk. The suckling future king of the gods had unusual abilities and strength, and in playing with his nursemaid accidentally broke off one of her horns, which then had the divine power to provide unending nourishment, as the foster mother had to the god.
In another myth, the cornucopia was created when Heracles (Roman Hercules) wrestled with the river god Achelous and ripped off one of his horns; river gods were sometimes depicted as horned. This version is represented in the Achelous and Hercules mural painting by the American Regionalist artist Thomas Hart Benton.
The cornucopia became the attribute of several Greek and Roman deities, particularly those associated with the harvest, prosperity, or spiritual abundance, such as personifications of Earth (Gaia or Terra); the child Plutus, god of riches and son of the grain goddess Demeter; the nymph Maia; and Fortuna, the goddess of luck, who had the power to grant prosperity. In Roman Imperial cult, abstract Roman deities who fostered peace (pax Romana) and prosperity were also depicted with a cornucopia, including Abundantia, "Abundance" personified, and Annona, goddess of the grain supply to the city of Rome. Hades, the classical ruler of the underworld in the mystery religions, was a giver of agricultural, mineral and spiritual wealth, and in art often holds a cornucopia.
Modern depictions
In modern depictions, the cornucopia is typically a hollow, horn-shaped wicker basket filled with various kinds of festive fruit and vegetables. In most of North America, the cornucopia has come to be associated with Thanksgiving and the harvest. Cornucopia is also the name of the annual November Food and Wine celebration in Whistler, British Columbia, Canada. Two cornucopias are seen in the flag and state seal of Idaho. The Great Seal of North Carolina depicts Liberty standing and Plenty holding a cornucopia. The coats of arms of Colombia, Panama, Peru, Venezuela, Victoria, Australia and Kharkiv, Ukraine, also feature the cornucopia, symbolizing prosperity.
Cornucopia motifs appear in some modern literature, such as Terry Pratchett's Wintersmith, and Suzanne Collins's The Hunger Games.
The horn of plenty is used for body art and at Thanksgiving, as it is a symbol of fertility, fortune and abundance.
Gallery
See also
Akshaya Patra
Ark of the Covenant
Chalice of Doña Urraca
Cup of Jamshid
Drinking horn
Holy Chalice
Holy Grail
List of mythological objects
Nanteos Cup
Relic
Sampo
Venus of Laussel
Śarīra
Cintamani
Mani stone
Ashtamangala
Yasakani no Magatama
Kaustubha Gem
Luminous gemstones
Philosopher's stone
Sendai Daikannon statue
Syamantaka Gem
Eight Treasures
Cornucopian
Notes
References
External links
Food storage containers
Heraldic charges
Iconography
Magic items
Mythological objects
Objects in Greek mythology
Ornaments
Ornaments (architecture)
Roman mythology
Symbols
Thanksgiving
Visual motifs | Cornucopia | Physics,Mathematics | 862 |
531,231 | https://en.wikipedia.org/wiki/UAN | UAN is a solution of urea and ammonium nitrate in water used as a fertilizer. The combination of urea and ammonium nitrate has an extremely low critical relative humidity (18% at 30 °C) and can therefore only be used in liquid fertilizers. The most commonly used grade of these fertilizer solutions is UAN 32.0.0 (32%N) known as UN32 or UN-32, which consists of 45% ammonium nitrate, 35% urea and only 20% water. Other grades are UAN 28, UAN 30 and UAN 18.
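The grade number is simply the total nitrogen mass fraction of the mixture, and it can be recomputed from the stated composition together with standard molar masses; a quick sanity check in Python:

```python
# Nitrogen mass fraction of each component (two N atoms per molecule of each):
N = 14.007                  # molar mass of nitrogen, g/mol
urea_N = 2 * N / 60.06      # urea CO(NH2)2, ~60.06 g/mol      -> ~46.6% N
an_N = 2 * N / 80.04        # ammonium nitrate NH4NO3, ~80.04 g/mol -> ~35.0% N

# UAN 32: 45% ammonium nitrate, 35% urea, 20% water by mass (per the text).
total_N = 0.45 * an_N + 0.35 * urea_N
print(f"{total_N:.1%}")     # ~32.1% N, matching the UAN 32 (32% N) grade
```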
The solutions are quite corrosive towards mild steel (up to per year on C1010 steel) and are therefore generally equipped with a corrosion inhibitor to protect tanks, pipelines, nozzles, etc. Urea–ammonium nitrate solutions should not be combined with calcium ammonium nitrate (CAN-17) or other solutions prepared from calcium nitrate. A thick, milky-white insoluble precipitate forms that may plug nozzles.
Physical and chemical characteristics of urea ammonium nitrate solutions
The solutions contain a remarkably low amount of water and nevertheless have a low salt-out temperature.
References
UNIDO and International Fertilizer Development Center (1998), Fertilizer Manual, Kluwer Academic Publishers, .
Simplot UAN-32 Product Data Sheet
Poole Chemical UAN 32% Solution
Tom Dorn, Extension Educator, '' Nitrogen Sources University of Nebraska Fact Sheet 288-01
Agricultural chemicals
Fertilizers
Nitrogen cycle
Soil improvers | UAN | Chemistry | 323 |
30,973,489 | https://en.wikipedia.org/wiki/Foster%20Provost | Foster Provost is an American computer scientist, information systems researcher, and Professor of Data Science, Professor of Information Systems and Ira Rennert Professor of Entrepreneurship at New York University's Stern School of Business. He is also the Director for the Data Science and AI Initiative at Stern's Fubon Center for Technology, Business and Innovation. Professor Provost has a Bachelor of Science from Duquesne University in physics and mathematics and a Master of Science and Ph.D. in computer science from the University of Pittsburgh.
Professor Provost is known for his work on evaluating machine learning algorithms and AI systems, for his work on applying ROC analysis to AI systems, for his work on social network data analysis, for his work on combining humans and machine learning, and for his work on machine learning for targeted marketing, online advertising, and activity monitoring.
He has won awards for his work, including:
The 2020 ACM SIGKDD Test of Time Award
The 2017 European Research Paper of the Year (AIS & CIONET).
The best paper in the journal Information Systems Research in 2015.
The 2009 INFORMS Design Science award for social network-based marketing,
IBM Faculty Awards for outstanding research in data mining and machine learning,
A President’s Award from NYNEX Science and Technology
Best Paper Awards from the ACM SIGKDD conference in 1997, 2008, and 2012, and
Awards in SIGKDD’s annual KDDCUP data mining competition.
Professor Provost was on the founding teams for five startups, including Dstillery, Integral Ad Science (IAS), Everyscreen Media, Predicube, and Detectica.
Professor Provost is coauthor (with Tom Fawcett) of the book, Data Science for Business, which often tops Amazon's best-seller lists in data mining and data modeling.
Professor Provost was a Scientific Advisor for the ISI Foundation (which awards the Lagrange Prize) and served as Editor-in-Chief of the journal Machine Learning for more than six years. He is a member of the editorial boards of the Journal of Machine Learning Research (JMLR) and the journal Data Mining and Knowledge Discovery (DMKD/DAMI). He was elected as a founding board member of the International Machine Learning Society.
Sources
External links
Year of birth missing (living people)
Living people
Duquesne University alumni
Information systems researchers
New York University faculty
University of Pittsburgh alumni | Foster Provost | Technology | 491 |
1,582,743 | https://en.wikipedia.org/wiki/Closteroviridae | Closteroviridae is a family of viruses. Plants serve as natural hosts. There are four genera and 59 species in this family, seven of which are unassigned to a genus. Diseases associated with this family include: yellowing and necrosis, particularly affecting the phloem.
Taxonomy
Genome type and transmission vector are two of the most important traits used for classification. Ampeloviruses and closteroviruses have monopartite genomes and are transmitted by pseudococcid mealybugs (and soft scale insects) and aphids, respectively, while criniviruses are bipartite and transmitted by whiteflies.
Genera:
Ampelovirus
Closterovirus
Crinivirus
Velarivirus
Unassigned species:
Actinidia virus 1
Alligatorweed stunting virus
Blueberry virus A
Megakespama mosaic virus
Mint vein banding-associated virus
Olive leaf yellowing-associated virus
Persimmon virus B
Structure
Viruses in the family Closteroviridae are non-enveloped, with flexuous and filamentous geometries. The diameter is around 10–13 nm, with a length of 950–2200 nm. Genomes are linear, monopartite or bipartite depending on the genus, and around 20 kb in length.
Life cycle
Viral replication is cytoplasmic. Entry is achieved by penetration into the host cell. Replication follows the positive-stranded RNA virus replication model, and transcription follows the positive-stranded RNA virus transcription method. The virus exits the host cell by tubule-guided viral movement.
Plants serve as the natural host. Transmission routes are mechanical.
References
External links
ICTV Report: Closteroviridae
Viralzone: Closteroviridae
Virus families
Riboviria | Closteroviridae | Biology | 357 |
47,303,276 | https://en.wikipedia.org/wiki/Emergency%20response%20system | Emergency response systems are means for emergency response teams to locate and move resources to emergency sites.
The Russian Federation
ERA-GLONASS is the modern Russian system of emergency response, similar to the European standard eCall/E112. The system is designed for use with the Russian global satellite navigation system GLONASS on behalf of the Government of the Russian Federation.
Since 2018, the Russian Federation has been a party to UNECE Regulation 144, which relates to accident emergency call components (AECC), accident emergency call devices (AECD) and accident emergency call systems (AECS).
United States
Since 2001, authorities have implemented project E911, which tries to automatically associate a location with the origin of calls to 9-1-1 emergency services. In 2006, the Next Generation 9-1-1 (NG 9-1-1) initiative was introduced. The purpose of the initiative is to afford any emergency caller the opportunity to use any communication means for connection to the emergency services operator, which in turn can receive location data from fixed and mobile phones, as well as automatic sensor-activated devices in case of accidents. In 2010, the system was tested and has since become widely implemented. Planning and progress toward more digital technology are underway under the National 911 Program, supervised by the National Highway Traffic Safety Administration Office of Emergency Medical Services.
Several states are deploying response systems for various issues. Georgia has Peer2Peer Warm Lines that offer support from trained specialists to people facing challenges who may not require a severe-emergency response. Oklahoma provides first responders with tablets equipped with crisis de-escalation tools. Florida offers a full-care service for mental health crisis intervention that includes treatment from home, drug addiction services and child care.
European Union
In 2001, countries within the European Union implemented the eCall program. eCall is an initiative to bring rapid assistance to motorists involved in collisions and is not designed to allow vehicle tracking outside of emergencies. Some European countries equip trucks with similar devices, containing navigational and communication components. In 2005, Germany began installing eCall devices on trucks with a carrying capacity exceeding 12 tonnes. Trucks in Sweden greater than 3.5 tonnes install the automatic connection devices. The European Commission's proposals for legislative acts predicted eCall would be seamlessly functioning in most European vehicles by the end of 2015. The deadlines for implementation will most likely be delayed to the end of 2017 or early 2018, as the adoption procedure of these legislative acts by the European Parliament and the Council is not complete.
An eCall equipped car transmits a 1-1-2 emergency call through the Private GSM frequency, to the closest radio tower, to ensure the signal is sent to the appropriate PSAP, as fast as possible. If none of the passengers involved in the collision are able to speak, a minimum data set is sent, including the coordinates of the vehicle.
Since 2018, the European Union has been a party to UNECE Regulation 144, which relates to accident emergency call components (AECC), accident emergency call devices (AECD) and accident emergency call systems (AECS).
Kazakhstan
Kazakhstan has developed an analogue of ERA-GLONASS called "EVAK", an emergency call system for use in emergencies and disasters. It operates using signals from the GLONASS and GPS navigation satellite systems. The system was expected to be fitted in 2016 to passenger vehicles weighing over 2.5 tonnes, buses, trucks and special vehicles for the transport of dangerous goods, and in 2017 to all other vehicles.
Since 2018, Kazakhstan has been a party to UNECE Regulation 144, which relates to accident emergency call components (AECC), accident emergency call devices (AECD) and accident emergency call systems (AECS).
References
External links
ERA-GLONASS
Emergency communication
Emergency telephone numbers
GLONASS
N11 codes | Emergency response system | Technology | 770 |
39,906,388 | https://en.wikipedia.org/wiki/LG%20Optimus%20L5%20II | LG Optimus L5 II is a middle range slate-format smartphone designed and manufactured by LG Electronics. The Optimus L5 II phone runs Android 4.1 Jelly Bean.
Hardware
The Optimus L5 II runs on a 1 GHz MediaTek MT6575 CPU and has 512 MB of RAM. There also exists a dual-SIM variant called the LG Optimus L5 II Dual (E455).
See also
LG Optimus
List of LG mobile phones
Comparison of smartphones
References
Android (operating system) devices
LG Electronics smartphones
Discontinued smartphones | LG Optimus L5 II | Technology | 119 |
43,937,725 | https://en.wikipedia.org/wiki/Urban%20Culture%20Lab | Urban Culture Lab is an interdisciplinary forum for studies in urban culture, established in spring 2014. The Lab is based at the Faculty of Humanities, which is a part of the University of Copenhagen. The main focus of Urban Culture Lab is to bring together scholars of urban culture, cities, livability and urbanity. The Lab is headed by associate professor Henrik Reeh.
Activities
Since its formation, Urban Culture Lab has conducted a number of public seminars showcasing current studies in urban culture within the Faculty of Humanities. Researchers affiliated with the Lab have also made public appearances as experts within the field of urban culture and livability - examples of such appearances include radio shows and videos.
During Euroscience Open Forum 2014 in Copenhagen (ESOF2014), Urban Culture Lab contributed a number of panels and sessions focusing on themes such as urbanity and livability. In relation to Copenhagen being named the world's most liveable city by Monocle in 2014, Urban Culture Lab arranged a cross-disciplinary scientific panel at ESOF2014 in order to discuss livability as a concept. The Lab also conducted a project during the conference named Sense of Cycling (Sense of Cycling). The project examined a diversity of bicycling practices and perceptions by inviting a number of participants to conduct fieldwork within the city of Copenhagen. The key findings were presented at a public seminar during ESOF2014.
Urban Culture Lab is also part of the interdisciplinary and international master programme 4Cities. The programme is a UNICA Euromaster with a focus on urban studies and is organised as a collaboration between 6 European universities. Students enrolled in the programme spend a semester in four different cities, and Urban Culture Lab's chairman Henrik Reeh and affiliated researcher and professor Martin Zerlang are responsible for the University of Copenhagen's part in this programme.
Organisation
Urban Culture Lab is run by a steering committee with representatives from eight departments at the Faculty of Humanities and the deanery. Associate professor Henrik Reeh is the chairman of this committee. The Lab also has a number of affiliated researchers conducting research in a number of different areas related to urban culture.
References
External links
Official website
Sense of Cycling
University of Copenhagen
Faculty of Humanities (University of Copenhagen)
University of Copenhagen
Education in Copenhagen
Urban planning
2014 establishments in Denmark
Research institutes established in 2014 | Urban Culture Lab | Engineering | 461 |
16,267,934 | https://en.wikipedia.org/wiki/Actinorhizal%20plant | Actinorhizal plants are a group of angiosperms characterized by their ability to form a symbiosis with the nitrogen fixing actinomycetota Frankia. This association leads to the formation of nitrogen-fixing root nodules.
Actinorhizal plants are distributed within three clades, and are characterized by nitrogen fixation. They are distributed globally, and are pioneer species in nitrogen-poor environments. Their symbiotic relationships with Frankia evolved independently over time, and the symbiosis occurs in the root nodule infection site.
Classification
Actinorhizal plants are dicotyledons distributed within 3 orders, 8 families and 26 genera of the angiosperm clade.
All nitrogen-fixing plants are classified under the "Nitrogen-Fixing Clade", which consists of the three actinorhizal plant orders as well as the order Fabales. The most well-known nitrogen-fixing plants are the legumes, but they are not classified as actinorhizal plants. The actinorhizal species are either trees or shrubs, except for those in the genus Datisca, which are herbs. Other species of actinorhizal plants are common in temperate regions, like alder, bayberry, sweetfern, avens, mountain misery and coriaria. Some Elaeagnus species, such as sea-buckthorns, produce edible fruit. What characterizes an actinorhizal plant is the symbiotic relationship it forms with the bacterium Frankia, which infects the roots of the plant. This relationship is responsible for the nitrogen-fixing qualities of the plants, and it is what makes them important to nitrogen-poor environments.
Distribution and ecology
Actinorhizal plants are found on all continents except for Antarctica. Their ability to form nitrogen-fixing nodules confers a selective advantage in poor soils, and are therefore pioneer species where available nitrogen is scarce, such as moraines, volcanic flows or sand dunes. Being among the first species to colonize these disturbed environments, actinorhizal shrubs and trees play a critical role, enriching the soil and enabling the establishment of other species in an ecological succession. Actinorhizal plants like alders are also common in the riparian forest.
They are also major contributors to nitrogen fixation in broad areas of the world, and are particularly important in temperate forests. The nitrogen fixation rates measured for some alder species are as high as 300 kg of N2/ha/year, close to the highest rate reported in legumes.
Evolutionary origin
No fossil records are available concerning nodules, but fossil pollen of plants similar to modern actinorhizal species has been found in sediments deposited 87 million years ago. The origin of the symbiotic association remains uncertain. The ability to associate with Frankia is a polyphyletic character and has probably evolved independently in different clades. Nevertheless, actinorhizal plants and Legumes, the two major nitrogen-fixing groups of plants share a relatively close ancestor, as they are all part of a clade within the rosids which is often called the nitrogen-fixing clade. This ancestor may have developed a "predisposition" to enter into symbiosis with nitrogen fixing bacteria and this led to the independent acquisition of symbiotic abilities by ancestors of the actinorhizal and Legume species. The genetic program used to establish the symbiosis has probably recruited elements of the arbuscular mycorrhizal symbioses, a much older and widely distributed symbiotic association between plants and fungi.
The symbiotic nodules
As in legumes, nodulation is favored by nitrogen deprivation and is inhibited by high nitrogen concentrations. Depending on the plant species, two mechanisms of infection have been described. The first is observed in casuarinas or alders and is called root hair infection. In this case the infection begins with the intracellular penetration of a root hair by a Frankia hypha, and is followed by the formation of a primitive symbiotic organ known as a prenodule. The second mechanism of infection is called intercellular entry and is well described in Discaria species. In this case bacteria penetrate the root extracellularly, growing between epidermal cells and then between cortical cells. Later on, Frankia becomes intracellular, but no prenodule is formed. In both cases the infection leads to cell divisions in the pericycle and the formation of a new organ consisting of several lobes anatomically similar to a lateral root. Cortical cells of the nodule are invaded by Frankia filaments coming from the site of infection/the prenodule. Actinorhizal nodules generally have indeterminate growth; new cells are therefore continually produced at the apex and successively become infected. Mature cells of the nodule are filled with bacterial filaments that actively fix nitrogen. No equivalent of the rhizobial nod factors has been found, but several genes known to participate in the formation and functioning of legume nodules (coding for haemoglobin and other nodulins) are also found in actinorhizal plants, where they are supposed to play similar roles. The lack of genetic tools in Frankia and in actinorhizal species was the main factor explaining such a poor understanding of this symbiosis, but the recent sequencing of 3 Frankia genomes and the development of RNAi and genomic tools in actinorhizal species should help to develop a far better understanding in the following years.
Notes
References
External links
Frankia and Actinorhizal plant Website
Biogeochemical cycle
Cycle
Nitrogen cycle
Soil biology
Symbiosis | Actinorhizal plant | Chemistry,Biology | 1,194 |
18,595,173 | https://en.wikipedia.org/wiki/Mohamed%20Gad-el-Hak | Mohamed Gad-el-Hak (born 1945) is an engineering scientist. He is currently the Inez Caudill Eminent Professor of biomedical engineering and professor of mechanical and nuclear engineering at Virginia Commonwealth University.
Biography
Gad-el-Hak was born on 11 February 1945 in Tanta, Egypt.
Gad-el-Hak was senior research scientist and program manager at Flow Research Company in Seattle, Washington, and then professor of aerospace and mechanical engineering at the University of Notre Dame, finally coming to Virginia Commonwealth University in 2002 as chair of mechanical engineering, subsequently expanded to mechanical and nuclear engineering.
Scientific work
Gad-el-Hak has developed diagnostic tools for turbulent flows, including the laser-induced fluorescence (LIF) technique for flow visualization, and discovered the efficient mechanism by which a turbulent region rapidly grows by destabilizing a surrounding laminar flow. He has also published on Reynolds number effects in turbulent boundary layers and on the fluid mechanics of microdevices.
Gad-el-Hak is the author of the book Flow Control: Passive, Active, and Reactive Flow Management, and editor of the books Frontiers in Experimental Fluid Mechanics, Advances in Fluid Mechanics Measurements, Flow Control: Fundamentals and Practices, The MEMS Handbook (three volumes), and Large-Scale Disasters: Prediction, Control, and Mitigation.
Honors
Gad-el-Hak has been a member of several advisory panels for DOD, DOE, NASA, and NSF. During the 1991/1992 academic year, he was a visiting professor at Institut de Mécanique de Grenoble, France. During the summers of 1993, 1994, and 1997, he was, respectively, a distinguished faculty fellow at Naval Undersea Warfare Center, Newport, Rhode Island, a visiting exceptional professor at Université de Poitiers, France, and a Gastwissenschaftler (guest scientist) at Forschungszentrum Rossendorf, Dresden, Germany.
Gad-el-Hak is a fellow of the American Academy of Mechanics, a fellow of the American Physical Society, a fellow of the American Institute of Physics, a fellow of the American Society of Mechanical Engineers, a fellow of the American Association for the Advancement of Science, an associate fellow of the American Institute of Aeronautics and Astronautics, and a member of the European Mechanics Society. Gad-el-Hak served as editor of eight international journals, including AIAA Journal, Applied Mechanics Reviews, and Bulletin of the Polish Academy of Sciences. He is additionally a contributing editor for Springer-Verlag's Lecture Notes in Engineering and Lecture Notes in Physics, for McGraw-Hill's Year Book of Science and Technology, and for CRC Press's Mechanical Engineering Series.
An editorial in honor of Gad-el-Hak titled "Homage to a Legendary Dynamicist on His Seventy-Fifth Birthday" appeared in the July 2020 issue of the Journal of Fluids Engineering.
In 1998, Gad-el-Hak was named the 14th American Society of Mechanical Engineers (ASME) Freeman Scholar. In 1999, he was awarded the Alexander von Humboldt Prize as well as the Japanese Government Research Award for Foreign Scholars. In 2002, he was named ASME Distinguished Lecturer. Gad-el-Hak has also been awarded the ASME Medal for contributions to the discipline of fluids engineering, as well as a Certificate of Appreciation.
Selected publications
Gad-el-Hak, M., and Bandyopadhyay, P.R. (1994) "Reynolds Number Effects in Wall-Bounded Flows," Applied Mechanics Reviews, vol. 47, pp. 307–365.
Sen, M., Wajerski, D., and Gad-el-Hak, M. (1996) "A Novel Pump for MEMS Applications," Journal of Fluids Engineering, vol. 118, pp. 624–627.
Gad-el-Hak, M. (1999) "The Fluid Mechanics of Microdevices—The Freeman Scholar Lecture," Journal of Fluids Engineering, vol. 121, pp. 5–33.
Hemeda, A.A., Esteves, R.J.A., McLeskey, J.T., Gad-el-Hak, M., Khraisheh, M., and Vahedi Tafreshi, H. (2018) "Molecular Dynamic Simulations of Fibrous Distillation Membranes," International Communications in Heat and Mass Transfer, vol. 98, pp. 304–309.
Ullah, R., Khraisheh, M., Esteves, R.J., McLeskey, J.T., AlGhouti, M., Gad-el-Hak, M., and Vahedi, Tafreshi, H. (2018) "Energy Efficiency of Direct Contact Membrane Distillation," Desalination, vol. 433, pp. 56–67.
Zhu, Y., Lee, C., Chen, X., Wu, J., Chen, S., and Gad-el-Hak, M. (2018) "Newly Identified Principle for Aerodynamic Heating in Hypersonic Flows," Journal of Fluid Mechanics, vol. 855, pp. 152–180.
Gad-el-Hak, M. (2019) "Coherent Structures and Flow Control: Genesis and Prospect," Bulletin of the Polish Academy of Sciences, vol. 67, pp. 411–444.
References
External links
1945 births
American physicists
Fluid dynamicists
American mechanical engineers
Aerospace engineers
Egyptian mechanical engineers
Egyptian emigrants to the United States
Engineers from Virginia
Ain Shams University alumni
Fellows of the American Association for the Advancement of Science
Fellows of the American Physical Society
Fellows of the American Society of Mechanical Engineers
Johns Hopkins University alumni
Living people
People from Tanta
Scientists from Virginia
Whiting School of Engineering alumni
University of Southern California faculty
University of Virginia faculty
University of Notre Dame faculty
Virginia Commonwealth University faculty | Mohamed Gad-el-Hak | Chemistry,Engineering | 1,232 |
19,575,563 | https://en.wikipedia.org/wiki/Linear%20inequality | In mathematics a linear inequality is an inequality which involves a linear function. A linear inequality contains one of the symbols of inequality:
< less than
> greater than
≤ less than or equal to
≥ greater than or equal to
≠ not equal to
A linear inequality looks exactly like a linear equation, with the inequality sign replacing the equality sign.
Linear inequalities of real numbers
Two-dimensional linear inequalities
Two-dimensional linear inequalities are expressions in two variables of the form:
ax + by < c or ax + by ≤ c,
where the inequalities may either be strict or not. The solution set of such an inequality can be graphically represented by a half-plane (all the points on one "side" of a fixed line) in the Euclidean plane. The line that determines the half-planes (ax + by = c) is not included in the solution set when the inequality is strict. A simple procedure to determine which half-plane is in the solution set is to calculate the value of ax + by at a point (x0, y0) which is not on the line and observe whether or not the inequality is satisfied.
For example, to draw the solution set of x + 3y < 9, one first draws the line with equation x + 3y = 9 as a dotted line, to indicate that the line is not included in the solution set since the inequality is strict. Then, pick a convenient point not on the line, such as (0,0). Since 0 + 3(0) = 0 < 9, this point is in the solution set, so the half-plane containing this point (the half-plane "below" the line) is the solution set of this linear inequality.
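The test-point procedure is mechanical enough to express directly in code. A minimal sketch (the function name is illustrative) using the worked example above:

```python
def in_solution_set(a, b, c, point, strict=True):
    """Is point (x, y) in the solution half-plane of ax + by < c
    (or of ax + by <= c when strict is False)?"""
    x, y = point
    value = a * x + b * y
    return value < c if strict else value <= c

# Worked example from the text: x + 3y < 9 with test point (0, 0).
print(in_solution_set(1, 3, 9, (0, 0)))  # True: 0 < 9, so (0,0) is a solution
print(in_solution_set(1, 3, 9, (9, 0)))  # False: 9 + 0 = 9 fails "< 9"
```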
Linear inequalities in general dimensions
In Rn linear inequalities are the expressions that may be written in the form
f(x1, x2, ..., xn) < b or f(x1, x2, ..., xn) ≤ b,
where f is a linear form (also called a linear functional), and b a constant real number.
More concretely, this may be written out as
a1x1 + a2x2 + ... + anxn < b
or
a1x1 + a2x2 + ... + anxn ≤ b.
Here x1, x2, ..., xn are called the unknowns, and a1, a2, ..., an are called the coefficients.
Alternatively, these may be written as
g(x) < 0 or g(x) ≤ 0,
where g is an affine function. That is
a0 + a1x1 + a2x2 + ... + anxn < 0
or
a0 + a1x1 + a2x2 + ... + anxn ≤ 0.
Note that any inequality containing a "greater than" or a "greater than or equal" sign can be rewritten with a "less than" or "less than or equal" sign, so there is no need to define linear inequalities using those signs.
Systems of linear inequalities
A system of linear inequalities is a set of linear inequalities in the same variables:
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm
Here x1, x2, ..., xn are the unknowns, the aij are the coefficients of the system, and b1, b2, ..., bm are the constant terms.
This can be concisely written as the matrix inequality
Ax ≤ b,
where A is an m×n matrix, x is an n×1 column vector of variables, and b is an m×1 column vector of constants.
In the above systems both strict and non-strict inequalities may be used.
Not all systems of linear inequalities have solutions.
Variables can be eliminated from systems of linear inequalities using Fourier–Motzkin elimination.
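In practice, feasibility of a system of non-strict inequalities can be checked by posing it as a linear program with a zero objective. A small sketch using NumPy and SciPy follows; the example system is an assumption for illustration, and any point returned is one solution of Ax ≤ b.

```python
import numpy as np
from scipy.optimize import linprog

# System A x <= b:  x + y <= 4,  x >= 0,  y >= 0
A = np.array([[1.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0]])
b = np.array([4.0, 0.0, 0.0])

# Zero objective: we only ask whether any point satisfies all inequalities.
res = linprog(c=np.zeros(2), A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print(res.success)  # True: this system has at least one solution
print(res.x)        # a point satisfying all three inequalities
```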
Applications
Polyhedra
The set of solutions of a real linear inequality constitutes a half-space of the n-dimensional real space, one of the two defined by the corresponding linear equation.
The set of solutions of a system of linear inequalities corresponds to the intersection of the half-spaces defined by individual inequalities. It is a convex set, since the half-spaces are convex sets, and the intersection of a set of convex sets is also convex. In the non-degenerate cases this convex set is a convex polyhedron (possibly unbounded, e.g., a half-space, a slab between two parallel half-spaces or a polyhedral cone). It may also be empty or a convex polyhedron of lower dimension confined to an affine subspace of the n-dimensional space Rn.
Linear programming
A linear programming problem seeks to optimize (find a maximum or minimum value) a function (called the objective function) subject to a number of constraints on the variables which, in general, are linear inequalities. The list of constraints is a system of linear inequalities.
Generalization
The above definition requires well-defined operations of addition, multiplication and comparison; therefore, the notion of a linear inequality may be extended to ordered rings, and in particular to ordered fields.
References
Sources
External links
Khan Academy: Linear inequalities, free online micro lectures
Linear algebra
Linear programming
Polyhedra | Linear inequality | Mathematics | 929 |
54,160,625 | https://en.wikipedia.org/wiki/NGC%207015 | NGC 7015 is a spiral galaxy located about 203 million light-years away from Earth in the constellation Equuleus. NGC 7015's calculated velocity is . NGC 7015 was discovered by French astronomer Édouard Stephan on September 29, 1878.
NGC 7015 has two symmetric inner arms, with multiple long and continuous outer arms. It is also host to a supermassive black hole with an estimated mass of 8.2 × 10⁷ M☉. NGC 7015 is a member of a group of galaxies known as the NGC 7042 Group. This group contains ten galaxies, with the group named after its brightest member, NGC 7042. Besides NGC 7015, the group also contains NGC 7025, NGC 7042, NGC 7043, and IC 1359.
See also
NGC 1300
Barred spiral galaxy
NGC 7003
List of NGC objects (7001–7840)
References
External links
Spiral galaxies
Equuleus
7015
11674
066076
18780929
Discoveries by Édouard Stephan
NGC 7042 Group | NGC 7015 | Astronomy | 209 |
59,708,753 | https://en.wikipedia.org/wiki/Statice%20limonium | Statice limonium may refer to:
Statice limonium Bigelow
Statice limonium Cav. ex Willk. & Lange
Statice limonium L., accepted as Limonium vulgare Mill.
Statice limonium Pall.
Statice limonium Thunb.
Statice limonium var. californica (Boiss. ex DC.) A.Gray, accepted as Limonium californicum (Boiss.) A.Heller
Statice limonium var. caroliniana (Walter) A.Gray, accepted as Limonium carolinianum (Walter) Britton
References | Statice limonium | Biology | 127 |
50,721,357 | https://en.wikipedia.org/wiki/Interleukin-38 | Interleukin-38 (IL-38) is a member of the interleukin-1 (IL-1) family and the interleukin-36 (IL-36) subfamily. It is important for inflammation and host defense. This cytokine is named IL-1F10 in humans and has a three-dimensional structure similar to that of the IL-1 receptor antagonist (IL-1Ra). The organisation of the IL-1F10 gene within chromosome 2q13 is conserved with other members of the IL-1 family. IL-38 produced by mammalian cells may bind the IL-1 receptor type I. It is expressed in the basal epithelia of the skin, in proliferating B cells of the tonsil, in the spleen and in other tissues. This cytokine plays an important role in the regulation of innate and adaptive immunity.
Discovery
IL-38 probably originated from a common ancestral gene, an ancient IL-1RN gene. This cytokine has 41% homology with IL-1Ra and 43% homology with IL-36Ra. IL-38 is expressed in skin, spleen, tonsil, thymus, heart, placenta and fetal liver. In tissues that do not play a special role in the immune response, IL-38 is expressed in low quantities, similar to other members of the IL-1 family. In disease settings, especially when the activation of the inflammatory response is dysregulated, the expression of IL-38 is altered; examples include ankylosing spondylitis, cardiovascular disease, rheumatoid arthritis and hidradenitis suppurativa.
Processing and signaling
According to the consensus cleavage site of the IL-1 family, it is predicted that two amino acids (AA) should be removed to generate a processed 3–152AA IL-38 protein. The protease that cleaves IL-38 is still unknown, as is the form of IL-38 that occurs as the natural variant in the human body. The 20–152AA form of IL-38 has been reported to have increased biological activity.
IL-38 has a non-characteristic dose-response curve, and it binds to IL-36R (IL-1R6). This cytokine blocks the Candida-induced interleukin-17 (IL-17) response better at low concentrations than at higher concentrations, even though induction of the cytokine is not blocked. It is therefore possible that IL-38 released by apoptotic cells binds to the Three Immunoglobulin Domain-containing IL-1 receptor-related 2 (TIGIRR-2, gene name IL1RAPL1, also known as IL-1R9), in which case IL-38 would have an antagonistic effect on the induction of inflammatory cytokines. IL-38 may thus be the first ligand of TIGIRR-2, a former orphan receptor of the IL-1 family.
Role in disease
Studies have shown that IL-38 could play an important role in rheumatic diseases. IL-38 is also one of five proteins related to C-reactive protein (CRP) levels in the serum. The association of IL-38 with CRP could mean that IL-38 also plays a role in inflammatory diseases such as cardiovascular disease.
Function
Knockdown of IL-38 with siRNA in peripheral blood mononuclear cells increased the production of interleukin-6 (IL-6), APRIL and CCL-2 in response to TLR ligands, so in this case IL-38 acted as an antagonist. Other studies show an agonistic effect. One study compared the function of full-length and truncated IL-38 and showed that high concentrations of the truncated form decreased the production of IL-6 in response to interleukin-1β (IL-1β) in human macrophages, while the full-length form increased IL-6 at the same concentrations. IL-38 can therefore have agonistic as well as antagonistic effects, depending on its processing and concentration.
Also, when a spontaneous murine model of systemic lupus erythematosus (SLE) was treated with recombinant IL-38, the mice had fewer symptoms, such as proteinuria and skin lesions. Serum levels of IL-17 and interleukin-22 were also lower in these mice, which supports the in vitro observation that IL-38 can inhibit Th17 responses. Patients with SLE had higher serum concentrations of IL-38 than healthy controls, and patients with active disease had higher serum concentrations than patients with the inactive form.
Sjögren's syndrome is a disease related to SLE. Gland biopsies of patients with primary Sjögren's syndrome show increased expression of IL-38. The IL-36 axis is important for the modulation of this disease. IL-38 is probably an antagonist of IL-36 signaling, similar to IL-36Ra, which may play an important role in the pathogenesis of this autoimmune disease.
IL-38 has also been found in the synovium of patients with rheumatoid arthritis, as well as in mice with collagen-induced arthritis (CIA). IL-38 concentrations correlated with IL-1β. Overexpression of IL-38 in murine models of arthritis (CIA and serum transfer-induced arthritis) ameliorated these diseases, but not antigen-induced arthritis. TNF production and IL-17 responses were decreased in these models. These data show that IL-38 could have anti-inflammatory properties in rheumatoid arthritis and could probably be used in a therapeutic strategy.
References
Immunology
Cytokines | Interleukin-38 | Chemistry,Biology | 1,195 |
3,859,111 | https://en.wikipedia.org/wiki/National%20Institutes%20of%20Health%20Director%27s%20Pioneer%20Award | National Institutes of Health Director's Pioneer Award is a research initiative first announced in 2004 designed to support individual scientists' biomedical research. The focus is specifically on "pioneering" research that is highly innovative and has a potential to produce paradigm shifting results.
The awards, made annually from the National Institutes of Health common fund, are each worth $500,000 per year, or $2,500,000 for five years.
Recipients
2004
Source: NIH
Larry Abbott
George Q. Daley
Homme W. Hellinga
Joseph McCune
Steven L. McKnight
Rob Phillips
Stephen R. Quake
Chad Mirkin
Xiaoliang Sunney Xie
2005
Source: NIH
Vicki L. Chandler
Hollis T. Cline
Leda Cosmides
Titia de Lange
Karl Deisseroth
Pehr A.B. Harbury
Erich D. Jarvis
Thomas A. Rando
Derek J. Smith
Giulio Tononi
Clare M. Waterman-Storer
Nathan Wolfe
Junying Yuan
2006
Source: NIH
Kwabena A. Boahen
Arup K. Chakraborty
Lila M. Gierasch
Rebecca W. Heald
Karla Kirkegaard
Thomas J. Kodadek
Cheng Chi Lee
Evgeny A. Nudler
Gary J. Pielak
David A. Relman
Rosalind A Segal
James L. Sherley
Younan Xia
2007
Source: NIH
Lisa Feldman Barrett
Peter Bearman
Emery N. Brown
Thomas R. Clandinin
James J. Collins
Margaret Gardel
Takao K. Hensch
Marshall S. Horwitz
Rustem F. Ismagilov
Frances E. Jensen
Mark J. Schnitzer
Gina Turrigiano
2008
Source: NIH
James K. Chen, Ph.D., Stanford University
Ricardo Dolmetsch, Ph.D., Stanford University
James Eberwine, Ph.D., University of Pennsylvania
Joshua M. Epstein, Ph.D., Brookings Institution
Bruce A. Hay, Ph.D., California Institute of Technology
Ann Hochschild, Ph.D., Harvard Medical School
Charles M. Lieber, Ph.D., Harvard University
Barry London, M.D., Ph.D., University of Pittsburgh
Tom Maniatis, Ph.D., Harvard University
Teri W. Odom, Ph.D., Northwestern University
Hongkun Park, Ph.D., Harvard University
Aviv Regev, Ph.D., Massachusetts Institute of Technology/Broad Institute
Aravinthan D.T. Samuel, Ph.D., Harvard University
Saeed Tavazoie, Ph.D., Princeton University
Alice Y. Ting, Ph.D., Massachusetts Institute of Technology
Alexander van Oudenaarden, Ph.D., Massachusetts Institute of Technology
2009
Source: NIH
Ivor J. Benjamin, University of Utah School of Medicine
Ajay Chawla, Stanford University
Chang-Zheng Chen, Stanford University
Hilde Cheroutre, La Jolla Institute for Immunology
Markus W. Covert, Stanford University
Joseph M. DeSimone, University of North Carolina at Chapel Hill/North Carolina State University
Sylvia M. Evans, University of California, San Diego
Joseph R. Fetcho, Cornell University
Timothy E. Holy, Washington University School of Medicine
Tannishtha Reya, Duke University
Gene E. Robinson, University of Illinois at Urbana-Champaign
Susan M. Rosenberg, Baylor College of Medicine
Leona D. Samson, Massachusetts Institute of Technology
Nirao M. Shah, University of California, San Francisco
Krishna V. Shenoy, Stanford University
Sarah A. Tishkoff, University of Pennsylvania
Alexander J. Travis, Cornell University State College of Veterinary Medicine
Jin Zhang, Johns Hopkins University School of Medicine
2010
Source: NIH
Carlos F. Barbas III, Ph.D., Scripps Research
Pamela J. Bjorkman, Ph.D., California Institute of Technology
Valentin Dragoi, Ph.D., University of Texas, Health Science Center at Houston
Stephen W. Fesik, Ph.D., Vanderbilt University School of Medicine
Tamas L. Horvath, D.V.M., Ph.D., Yale School of Medicine
J. Keith Joung, M.D., Ph.D., Massachusetts General Hospital / Harvard Medical School
David Kleinfeld, Ph.D., University of California, San Diego
Haifan Lin, Ph.D., Yale University
Jun O. Liu, Ph.D., Johns Hopkins University School of Medicine
Andres Villu Maricq, M.D., Ph.D., University of Utah
Joseph H. Nadeau, Ph.D., Institute for Systems Biology
Miguel A. L. Nicolelis, M.D., Ph.D., Duke University
Lalita Ramakrishnan, M.D., Ph.D., University of Washington
Lorna W. Role, Ph.D., Stony Brook University
Michael L. Roukes, Ph.D., California Institute of Technology
Ram Samudrala, Ph.D., University of Washington
Bruce A. Yankner, M.D., Ph.D., Harvard Medical School
2011
Source: NIH
Utpal Banerjee, Ph.D., University of California, Los Angeles
Brenda L. Bass, Ph.D., University of Utah
Jean Bennett, Ph.D., University of Pennsylvania
William M. Clemons, Ph.D., California Institute of Technology
Florian Engert, Ph.D., Harvard University
Andrew P. Feinberg, M.D., M.P.H., Johns Hopkins University
James E.K. Hildreth, M.D., Ph.D., University of California, Davis
Tao Pan, Ph.D., University of Chicago
Sharad Ramanathan, Ph.D., Harvard University
David S. Schneider, Ph.D., Stanford University
Thanos Siapas, Ph.D., California Institute of Technology
Andreas S. Tolias, Ph.D., Baylor College of Medicine
Mehmet Fatih Yanik, Ph.D., Massachusetts Institute of Technology
2012
Source: NIH
Anne Brunet, Ph.D., Stanford University
Edward Marcotte, Ph.D., University of Texas at Austin
Hidde Ploegh, Ph.D., Whitehead Institute
Christina Smolke, Ph.D., Stanford University
Yi Tang, Ph.D., University of California
Doris Ying Tsao, Ph.D., California Institute of Technology
Lihong V. Wang, Ph.D., Washington University in St. Louis
Chao-Ting Wu, Ph.D., Harvard University Medical School
Gary Yellen, Ph.D., Harvard University Medical School
Feng Zhang, Ph.D., Broad Institute
2013
Source: NIH
Amy Arnsten, Ph.D., Yale University, New Haven, CT
Edward S. Boyden, Ph.D., Massachusetts Institute of Technology, Boston, MA
Vadim N. Gladyshev, Ph.D., Brigham and Women's Hospital and Harvard Medical School, Boston, MA
Baljit S. Khakh, Ph.D., University of California Los Angeles, David Geffen School of Medicine, CA
Michael Z. Lin, M.D., Ph.D., Stanford University, Stanford, CA
Jay Shendure, M.D., Ph.D., University of Washington, Seattle, WA
Natalia A. Trayanova, Ph.D., The Johns Hopkins University, Baltimore, MD
Fan Wang, Ph.D., Duke University Medical Center, Durham, NC
Leor S Weinberger, Ph.D., Gladstone Institutes and University of California, San Francisco, CA
Xiaoliang Sunney Xie, Ph.D., Harvard University, Cambridge, MA
Rafael Yuste, M.D., Ph.D., Columbia University, New York, NY
Mark J Zylka, Ph.D., University of North Carolina, Chapel Hill, NC
2014
Source: NIH
Jayakrishna Ambati, M.D., University of Kentucky
Chenghua Gu, D.V.M., Ph.D., Harvard medical School
Cato T. Laurencin, M.D., Ph.D., University of Connecticut
Denise J. Montell, Ph.D., University of California Santa Barbara
Carl D. Novina, M.D., Ph.D., Dana-Farber Cancer Institute
Amy Palmer, Ph.D., University of Colorado
Dana Pe'er, Ph.D., Columbia University
Oliver Rando, M.D., Ph.D., University of Massachusetts Medical School
Donna L. Spiegelman, Sc.D., Harvard School of Public Health
Sean Wu, M.D., Ph.D., Stanford University
2015
Source: 2015 NIH
Giovanni Bosco, Dartmouth Geisel School of Medicine
Jeffery S. Cox, University of California San Francisco
Matthew David Disney, The Scripps Research Institute
Zemer Gitai, Princeton University
Jonathon Howard, Yale University
Craig Montell, University of California Santa Barbara
Coleen T. Murphy, Princeton University
Gwendalyn J. Randolph, Washington University School of Medicine
Steven J. Schiff, The Pennsylvania State University
Hao Wu, Boston Children’s Hospital and Harvard Medical School
Tony Wyss-Coray, Stanford University School of Medicine and VA Palo Alto
Ryohei Yasuda, Max Planck Florida Institute for Neuroscience
Sheng Zhong, University of California San Diego
2016
Source: NIH
Kristin Baldwin, The Scripps Research Institute
Bradley Bernstein, Massachusetts General Hospital and Broad Institute
Michael Fischbach, University of California, San Francisco
Uri Hasson, Princeton University
Juan Carlos Izpisua Belmonte, The Salk Institute for Biological Studies
Nancy Kanwisher, Massachusetts Institute of Technology
Stephen D. Liberles, Harvard Medical School
Christine Mayr, Memorial Sloan Kettering Cancer Center
Joshua D. Rabinowitz, Princeton University
Meng Wang, Baylor College of Medicine
Sing Sing Way, Cincinnati Children’s Hospital
Seok-Hyun "Andy" Yun, Massachusetts General Hospital and Harvard Medical School
2017
Source: NIH
Hongjie Dai, Stanford University
Amit Etkin, Stanford University
Howard A. Fine, Weill Cornell College of Medicine
Charles M. Lieber, Harvard University
Jeffrey D. Macklis, Harvard University
Luciano A. Marraffini, Rockefeller University
Alex Schier, Harvard University
Ramin Shiekhattar, University of Miami
David A. Sinclair, Harvard Medical School
Justin L. Sonnenburg, Stanford University
Kay M. Tye, MIT
Feng Zhang, Broad Institute, MIT
2018
Source: NIH
Janelle S. Ayres, Salk Institute
Daniel A. Colón-Ramos, Yale University School of Medicine
Christina Curtis, Stanford University School of Medicine
Viviana Gradinaru, Caltech
Jonathan Kipnis, University of Virginia School of Medicine
Hyungbae Kwon, Max Planck Florida Institute for Neuroscience
Michelle Monje, Stanford University
Gabriel D. Victora, Rockefeller University
Amy J. Wagers, Harvard Medical School
Peng Yin, Harvard University
2019
Source: NIH
Mark Andermann, Beth Israel Deaconess Medical Center, Harvard Medical School
James Eberwine, University of Pennsylvania Perelman School of Medicine
Jennifer H. Elisseeff, Johns Hopkins University
Valentina Greco, Yale University
Christophe Herman, Baylor College of Medicine
Sun Hur, Boston Children’s Hospital
Rob Knight, University of California San Diego
Jin Hyung Lee, Stanford University
Marina R. Picciotto, Yale University
Hidde Ploegh, Boston Children's Hospital, Harvard Medical School
Simon Scheuring, Weill Cornell Medicine
2020
Source: NIH
Annelise E. Barron, Stanford University, Schools of Medicine and of Engineering
Kathleen Collins, University of California at Berkeley
Christopher D. Harvey, Harvard Medical School
Peter S. Kim, Stanford University
Brian Litt, University of Pennsylvania
Shu-Bing Qian, Cornell University
Susan M. Rosenberg, Baylor College of Medicine
John Schoggins, University of Texas Southwestern Medical Center
David Veesler, University of Washington School of Medicine
Magdalena Zernicka-Goetz, California Institute of Technology and University of Cambridge
See also
List of medicine awards
References
External links
Official website
National Institutes of Health
American science and technology awards
Medicine awards
Awards established in 2004 | National Institutes of Health Director's Pioneer Award | Technology | 2,531 |
7,007,010 | https://en.wikipedia.org/wiki/Inverted%20bell | The inverted bell is a metaphorical name for a geometric shape that resembles a bell upside-down.
By context
In architecture, the term is applied to describe the shape of the capitals of Corinthian columns.
The inverted bell is used in shape classification in pottery, often featured in archaeology as well as in modern times.
In statistics, a bimodal distribution is sometimes called an inverted bell curve.
References
Geometric shapes | Inverted bell | Mathematics | 86 |
62,909,047 | https://en.wikipedia.org/wiki/Maria%20Girone | Maria Girone is the Head of CERN openlab. She leads the development of High Performance Computing (HPC) technologies for particle physics experiments.
Early life and education
Girone studied physics at the University of Bari. She earned her doctoral degree in particle physics in 1994. She soon became a research fellow on the ALEPH experiment, supporting analysis and acting as liaison for the accelerator. She was awarded a Marie Curie Fellowship and joined Imperial College London, where she worked on the hardware development for both the LHCb and ALEPH experiments.
Career and research
CERN openlab was established in 2001 and supports academics at CERN in their collaborations with independent companies. Girone moved into scientific computing in 2002, working for the Worldwide LHC Computing Grid (WLCG). The WLCG stores, shares and assists in the analysis of data from the Large Hadron Collider; there she developed a persistence framework. The WLCG is the largest assembly of computing resources ever collected for a scientific endeavour. In the Large Hadron Collider's experiment detectors there are around one billion beam collisions per second. The WLCG analyses billions of beam crossings and tries to predict the detector response.
In 2009, whilst at the WLCG, Girone founded and led the Operations Coordination team. She was appointed coordinator of the software and computing for the Compact Muon Solenoid (CMS) in 2014. In this capacity, she was responsible for the operation of seventy computing centres across five continents. She joined CERN openlab as chief technology officer (CTO) in 2016, and has led it since 2023.
She has worked on the upgrade of the Large Hadron Collider (the High Luminosity Large Hadron Collider), which will require up to one hundred times more computing capacity than it did originally. This increase in capacity will come through access to commercial cloud computing platforms, data analytics, deep learning and new computing architectures.
References
Living people
Year of birth missing (living people)
Italian women physicists
Particle physicists
21st-century Italian physicists
University of Bari alumni
Academics of Imperial College London
People associated with CERN | Maria Girone | Physics | 439 |
27,665,428 | https://en.wikipedia.org/wiki/Geochimica%20et%20Cosmochimica%20Acta | Geochimica et Cosmochimica Acta is a biweekly peer-reviewed scientific journal published by Elsevier. It was established in 1950 and is sponsored by the Geochemical Society and the Meteoritical Society. The editor-in-chief is Jeffrey Catalano (Washington University in St. Louis). The journal covers topics in Earth geochemistry, planetary geochemistry, cosmochemistry and meteoritics.
Publishing formats include original research articles and invited reviews and occasional editorials, book reviews, and announcements. In addition, the journal publishes short comments (4 pages) targeting specific articles and designed to improve understanding of the target article by advocating a different interpretation supported by the literature, followed by a response by the author.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2021 impact factor of 5.921.
References
External links
Geochemistry journals
Planetary science journals
Academic journals established in 1950
Quarterly journals
English-language journals
Elsevier academic journals
Meteoritics publications
Academic journals associated with learned and professional societies
Geochemical Society | Geochimica et Cosmochimica Acta | Chemistry | 215 |
70,234,658 | https://en.wikipedia.org/wiki/System%20for%20Differential%20Corrections%20and%20Monitoring | The System for Differential Corrections and Monitoring (SDCM), is the satellite-based navigation augmentation system operated by Russia's Roscosmos space agency to augment the precision of the GLONASS satellite navigation system. It uses the Luch Multifunctional Space Relay System to transmit correction data.
SDCM's service area covers the Russian Federation. It had not yet been certified for use in public aviation.
References
External links
Design summary for SDCM, 2012
Navigation satellite constellations
Satellite-based augmentation systems | System for Differential Corrections and Monitoring | Astronomy | 110 |
49,384,183 | https://en.wikipedia.org/wiki/Sulfometuron%20methyl | Sulfometuron methyl is an organic compound used as an herbicide. It is classed as a sulfonylurea. It functions via the inhibition of the enzyme acetolactate synthase, which catalyzes the first step in biosynthesis of the branched-chain amino acids valine, leucine, and isoleucine.
References
Benzenesulfonylureas
Herbicides | Sulfometuron methyl | Biology | 86 |
69,122,776 | https://en.wikipedia.org/wiki/Vector%20overlay | Vector overlay is an operation (or class of operations) in a geographic information system (GIS) for integrating two or more vector spatial data sets. Terms such as polygon overlay, map overlay, and topological overlay are often used synonymously, although they are not identical in the range of operations they include. Overlay has been one of the core elements of spatial analysis in GIS since its early development. Some overlay operations, especially Intersect and Union, are implemented in all GIS software and are used in a wide variety of analytical applications, while others are less common.
Overlay is based on the fundamental principle of geography known as areal integration, in which different topics (say, climate, topography, and agriculture) can be directly compared based on a common location. It is also based on the mathematics of set theory and point-set topology.
The basic approach of a vector overlay operation is to take in two or more layers composed of vector shapes, and output a layer consisting of new shapes created from the topological relationships discovered between the input shapes. A range of specific operators allows for different types of input, and different choices in what to include in the output.
History
Prior to the advent of GIS, the overlay principle had developed as a method of literally superimposing different thematic maps (typically an isarithmic map or a chorochromatic map) drawn on transparent film (e.g., cellulose acetate) to see the interactions and find locations with specific combinations of characteristics. The technique was largely developed by landscape architects. Warren Manning appears to have used this approach to compare aspects of Billerica, Massachusetts, although his published accounts only reproduce the maps without explaining the technique. Jacqueline Tyrwhitt published instructions for the technique in an English textbook in 1950, including:
Ian McHarg was perhaps most responsible for widely publicizing this approach to planning in Design with Nature (1969), in which he gave several examples of projects on which he had consulted, such as transportation planning and land conservation.
The first true GIS, the Canada Geographic Information System (CGIS), developed during the 1960s and completed in 1971, was based on a rudimentary vector data model, and one of the earliest functions was polygon overlay. Another early vector GIS, the Polygon Information Overlay System (PIOS), developed by ESRI for San Diego County, California in 1971, also supported polygon overlay. It used the Point in polygon algorithm to find intersections quickly. Unfortunately, the results of overlay in these early systems were often prone to error.
Carl Steinitz, a landscape architect, helped found the Harvard Laboratory for Computer Graphics and Spatial Analysis, in part to develop GIS as a digital tool to implement McHarg's methods. In 1975, Thomas Peucker and Nicholas Chrisman of the Harvard Lab introduced the POLYVRT data model, one of the first to explicitly represent topological relationships and attributes in vector data. They envisioned a system that could handle multiple "polygon networks" (layers) that overlapped by computing Least Common Geographic Units (LCGU), the area where a pair of polygons overlapped, with attributes inherited from the original polygons. Chrisman and James Dougenik implemented this strategy in the WHIRLPOOL program, released in 1979 as part of the Odyssey project to develop a general-purpose GIS. This system implemented several improvements over the earlier approaches in CGIS and PIOS, and its algorithm became part of the core of GIS software for decades to come.
Algorithm
The goal of all overlay operations is to take in vector layers, and create a layer that integrates both the geometry and the attributes of the inputs. Usually, both inputs are polygon layers, but lines and points are allowed in many operations, with simpler processing.
Since the original implementation, the basic strategy of the polygon overlay algorithm has remained the same, although the vector data structures that are used have evolved.
Given the two input polygon layers, extract the boundary lines.
Cracking part A: In each layer, identify edges shared between polygons. Break each line at the junction of shared edges and remove duplicates to create a set of topologically planar connected lines. In early topological data structures such as POLYVRT and the ARC/INFO coverage, the data was natively stored this way, so this step was unnecessary.
Cracking part B: Find any intersections between lines from the two inputs. At each intersection, split both lines. Then merge the two line layers into a single set of topologically planar connected lines.
Assembling part A: Find each minimal closed ring of lines, and use it to create a polygon. Each of these will be a least common geographic unit (LCGU), with at most one "parent" polygon from each of the two inputs.
Assembling part B: Create an attribute table that includes the columns from both inputs. For each LCGU, determine its parent polygon from each input layer, and copy its attributes into the LCGU's row in the new table; if the LCGU was not in any of the polygons for one of the input layers, leave the values as null.
Parameters are usually available to allow the user to calibrate the algorithm for a particular situation. One of the earliest was the snapping or fuzzy tolerance, a threshold distance. Any pair of lines that stay within this distance of each other are collapsed into a single line, avoiding unwanted narrow sliver polygons that can occur when lines that should be coincident (for example, a river and a boundary that should follow it de jure) are digitized separately with slightly different vertices.
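For a single pair of input polygons, the outcome of the cracking and assembling steps can be sketched with the shapely Python library. This is a toy illustration under the assumption that each layer contains one polygon; production overlay operates on whole topological layers, with attribute tables and snapping tolerances.

```python
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])  # polygon from input layer A
b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])  # polygon from input layer B

# Least common geographic units (LCGUs), each tagged with its parentage.
lcgus = [
    (a.intersection(b), {"parent_a": True, "parent_b": True}),
    (a.difference(b), {"parent_a": True, "parent_b": False}),
    (b.difference(a), {"parent_a": False, "parent_b": True}),
]
for geom, parents in lcgus:
    # In a full overlay, attributes would be copied from each parent here.
    print(parents, geom.area)
```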
Operators
The basic algorithm can be modified in a number of ways to return different forms of integration between the two input layers. These different overlay operators are used to answer a variety of questions, although some are far more commonly implemented and used than others. The most common are closely analogous to operators in set theory and boolean logic, and have adopted their terms. As in these algebraic systems, the overlay operators may be commutative (giving the same result regardless of order) and/or associative (more than two inputs giving the same result regardless of the order in which they are paired).
Intersect (ArcGIS, QGIS, Manifold, TNTmips; AND in GRASS): The result includes only the LCGUs where the two input layers intersect (overlap); that is, those with both "parents." This is identical to the set theoretic intersection of the input layers. Intersect is probably the most commonly used operator in this list. Commutative, associative
Union (ArcGIS, QGIS, Manifold, TNTmips; or in GRASS): The result includes all of the LCGUs, both those where the inputs intersect and where they do not. This is identical to the set theoretic union of the input layers. Commutative, associative
Subtract (TNTmips; Erase in ArcGIS; Difference in QGIS; not in GRASS; missing from Manifold): The result includes only the portions of polygons in one layer that do not overlap with the other layer; that is, the LCGUs that have no parent from the other layer. Non-commutative, non-associative
Exclusive or (Symmetrical Difference in ArcGIS, QGIS; Exclusive Union in TNTmips; XOR in GRASS; missing from Manifold): The result includes the portions of polygons in both layers that do not overlap; that is, all LCGUs that have one parent. This could also be achieved by computing the intersection and the union, then subtracting the intersection from the union, or by subtracting each layer from the other, then computing the union of the two subtractions. Commutative, associative
Clip (ArcGIS, QGIS, GRASS, Manifold; Extract Inside in TNTmips): The result includes the portions of polygons of one layer where they intersect the other layer. The outline is the same as the intersection, but the interior only includes the polygons of one layer rather than computing the LCGUs. Non-commutative, non-associative
Cover (Update in ArcGIS and Manifold; Replace in TNTmips; not in QGIS or GRASS): The result includes one layer intact, with the portions of the polygons of the other layer only where the two layers do not intersect. It is called "cover" because the result looks like one layer is covering the other; it is called "update" in ArcGIS because the most common use is when the two layers represent the same theme, but one represents recent changes (e.g., new parcels) that need to replace the older ones in the same location. It can be replicated by subtracting one layer from the other, then computing the union of that result with the original first layer. Non-commutative, non-associative
Divide (Identity in ArcGIS and Manifold; not in QGIS, TNTmips, or GRASS): The result includes all of the LCGUs that cover one of the input layers, excluding those that are only in the other layer. It is called "divide" because it has the appearance of one layer being used to divide the polygons of the other layer. It can be replicated by computing the intersection, then subtracting one layer from the other, then computing the union of these two results. Non-commutative, non-associative
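The set-theoretic flavor of these operators is visible in general-purpose geometry libraries. The following sketch applies the most common operators to two overlapping squares using shapely; the variable names are illustrative, and dissolved single geometries stand in for whole layers.

```python
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])

intersect = a.intersection(b)          # Intersect (AND)
union = a.union(b)                     # Union (OR)
subtract = a.difference(b)             # Subtract / Erase (AND NOT)
xor = a.symmetric_difference(b)        # Exclusive or (XOR)
cover = b.union(a.difference(b))       # Cover / Update of a by b
# Note: cover has the same outline as union; the operators differ in
# which parent attributes each output polygon keeps, not in geometry alone.
print(intersect.area, union.area, subtract.area, xor.area, cover.area)
```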
Boolean overlay algebra
One of the most common uses of polygon overlay is to perform a suitability analysis, also known as a suitability model or multi-criteria evaluation. The task is to find the region that meets a set of criteria, each of which can be represented by a region. For example, the habitat of a species of wildlife might need to be A) within certain vegetation cover types, B) within a threshold distance of a water source (computed using a buffer), and C) not within a threshold distance of significant roads. Each of the criteria can be considered boolean in the sense of Boolean logic, because for any point in space, each criterion is either present or not present, and the point is either in the final habitat area or it is not (acknowledging that the criteria may be vague, but this requires more complex fuzzy suitability analysis methods). That is, which vegetation polygon the point is in is not important, only whether it is suitable or not suitable. This means that the criteria can be expressed as a Boolean logic expression, in this case, H = A and B and not C.
In a task such as this, the overlay procedure can be simplified because the individual polygons within each layer are not important, and can be dissolved into a single boolean region (consisting of one or more disjoint polygons but no adjacent polygons) representing the region that meets the criterion. With these inputs, each of the operators of Boolean logic corresponds exactly to one of the polygon overlay operators: intersect = AND, union = OR, subtract = AND NOT, exclusive or = XOR. Thus, the above habitat region would be generated by computing the intersection of A and B, and subtracting C from the result.
Thus, this particular use of polygon overlay can be treated as an algebra that is homomorphic to Boolean logic. This enables the use of GIS to solve many spatial tasks that can be reduced to simple logic.
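The habitat example translates directly into code under this homomorphism. A minimal sketch with shapely, using toy buffer regions as stand-ins for the dissolved boolean criteria (the shapes are assumptions for illustration):

```python
from shapely.geometry import Point

A = Point(0, 0).buffer(5)  # suitable vegetation cover types
B = Point(2, 0).buffer(5)  # within threshold distance of a water source
C = Point(0, 3).buffer(2)  # too close to a significant road

# H = A AND B AND NOT C  ->  intersect, then subtract
H = A.intersection(B).difference(C)
print(H.is_empty, H.area)  # the remaining region is the candidate habitat
```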
Lines and points
Vector overlay is most commonly performed using two polygon layers as input and creating a third polygon layer. However, it is possible to perform the same algorithm (parts of it at least) on points and lines. The following operations are typically supported in GIS software:
Intersect: The output will be of the same dimension as the lower of the inputs: Points * {Points, Lines, Polygons} = Points, Lines * {Lines, Polygons} = Lines. This is often used as a form of spatial join, as it merges the attribute tables of the two layers analogous to a table join. An example of this would be allocating students to school districts. Because it is rare for a point to exactly fall on a line or another point, the fuzzy tolerance is often used here. QGIS has separate operations for computing a line intersection as lines (to find coincident lines) and as points.
Subtract: The output will be of the same dimension as the primary input, with the subtraction layer being of the same or lesser dimension: Points - {Points, Lines, Polygons} = Points, Lines - {Lines, Polygons} = Lines
Clip: While the primary input can be points or lines, the clipping layer is usually required to be polygons, producing the same geometry as the primary input, but only including those features (or parts of lines) that are within the clipping polygons. This operation might also be considered a form of spatial query, as it retains the features of one layer based on its topological relationship to another.
Union: Normally, both input layers are expected to be of the same dimensionality, producing an output layer including both sets of features. ArcGIS and GRASS do not allow this option with points or lines.
Implementations
Vector Overlay is included in some form in virtually every GIS software package that supports vector analysis, although the interface and underlying algorithms vary significantly.
Esri GIS software has included polygon overlay since the first release of ARC/INFO in 1982. Each generation of Esri software (ARC/INFO, ArcGIS, ArcGIS Pro) has included a set of separate tools for each of the overlay operators (Intersect, Union, Clip, etc.). The current implementation in ArcGIS Pro recently added an alternative set of "Pairwise Overlay" tools (as of v2.7) that uses parallel processing to more efficiently process very large datasets.
GRASS GIS (open source), although it was originally raster-based, has included overlay as part of its vector system since GRASS 3.0 (1988). Most of the polygon overlay operators are collected into a single v.overlay command, with v.clip as a separate command.
QGIS (open source) originally incorporated GRASS as its analytical engine, but has gradually developed its own processing framework, including vector overlay.
Manifold System implements overlay in its transformation system.
The Turf Javascript API includes the most common overlay methods, although these operate on individual input polygon objects, not on entire layers.
TNTmips includes several tools for overlay among its vector analysis process.
References
External links
The Overlay toolset documentation in Esri ArcGIS
v.overlay command documentation in GRASS GIS
Vector Overlay documentation in QGIS
Topology Overlays documentation in Manifold
GIS software
Geographic information systems | Vector overlay | Technology | 3,084 |
47,512,346 | https://en.wikipedia.org/wiki/Helvella%20semiobruta | Helvella semiobruta is a species of fungus in the family Helvellaceae. Originally found in France, it was described as new to science in 1976. It has also been collected in Greece and Cyprus, where it grows in maquis shrubland.
References
Further reading
External links
semiobruta
Fungi described in 1976
Fungi of Europe
Fungus species | Helvella semiobruta | Biology | 81 |
38,567,368 | https://en.wikipedia.org/wiki/Matthew%20O.%20Jackson | Matthew Owen Jackson is the William D. Eberle Professor of Economics at Stanford University, an external faculty member of the Santa Fe Institute, and a fellow of CIFAR.
Jackson's research concerns game theory, microeconomic theory, and the study of social and economic networks. Jackson was one of the founders of the study of networks in economics. His work has analyzed the formation of networks and the sources and effects of homophily in social relationships. He has also made important contributions to the study of how networks mediate access to jobs and information as well as the contagion of financial distress.
He received his Ph.D. from Stanford University in 1988, and has taught at Northwestern University and the California Institute of Technology.
He has served as co-editor of Games and Economic Behavior, the Review of Economic Design, and Econometrica. Jackson co-teaches a popular game theory course on Coursera.org, along with Kevin Leyton-Brown and Yoav Shoham.
Jackson has been honored with the Social Choice and Welfare Prize, the B.E.Press Arrow Prize for Senior Economists, and a Guggenheim Fellowship. He has been elected to the National Academy of Sciences, the American Academy of Arts and Sciences, and is a Fellow of the Econometric Society. For 2021 he was awarded the BBVA Foundation Frontiers of Knowledge Award in Economics, Finance and Management.
Selected publications
References
External links
Jackson's home page at Stanford University
1962 births
Living people
Stanford University Department of Economics faculty
Game theorists
20th-century American economists
21st-century American economists
Fellows of the Econometric Society
Santa Fe Institute people
Fellows of the American Academy of Arts and Sciences
Network scientists | Matthew O. Jackson | Mathematics | 338 |
999,536 | https://en.wikipedia.org/wiki/Product%20design | Product design is the process of creating new products for businesses to sell to their customers. It involves the generation and development of ideas through a systematic process that leads to the creation of innovative products. Thus, it is a major aspect of new product development.
The product design process is a set of strategic and tactical activities, from idea generation to commercialization, used to create a product design. In a systematic approach, product designers conceptualize and evaluate ideas, turning them into tangible inventions and products. The product designer's role is to combine art, science, and technology to create new products that people can use. Their evolving role has been facilitated by digital tools that now allow designers to communicate, visualize, analyze, 3D model and actually produce tangible ideas in a way that would have taken greater human resources in the past.
Product design is sometimes confused with (and certainly overlaps with) industrial design, and has recently become a broad term inclusive of service, software, and physical product design. Industrial design is concerned with bringing artistic form and usability, usually associated with craft design and ergonomics, together in order to mass-produce goods. Other aspects of product design and industrial design include engineering design, particularly when matters of functionality or utility (e.g. problem-solving) are at issue, though such boundaries are not always clear.
Product design process
There are various product design processes and many focus on different aspects. One example formulation/model of the process is described by Don Koberg and Jim Bagnel in "The Seven Universal Stages of Creative Problem-Solving." The process is usually completed by a group of people with different skills and training—e.g. industrial designers, field experts (prospective users), engineers (for engineering design aspects), depending upon the nature and type of the product involved. The process often involves figuring out what is required, brainstorming possible ideas, creating mock prototypes and then generating the product. However, that is not the end. Product designers would still need to execute the idea, making it into an actual product and evaluating its success (seeing if any improvements are necessary).
The product design process has experienced huge leaps in evolution over the last few years with the rise and adoption of 3D printing. New consumer-friendly 3D printers can produce three-dimensional objects and print upwards with a plastic-like substance, as opposed to traditional printers that spread ink across a page.
The product design process, as expressed by Koberg and Bagnell, typically involves three main aspects:
Analysis
Concept
Synthesis
Depending on the kind of product being designed, the latter two sections are most often revisited (e.g. depending on how often the design needs revision, to improve it or to better fit the criteria). This is a continuous loop, where feedback is the main component. Koberg and Bagnell offer more specifics on the process: In their model, "analysis" consists of two stages, "concept" is only one stage, and "synthesis" encompasses the other four. (These terms notably vary in usage in different design frameworks. Here, they are used in the way they're used by Koberg and Bagnell.)
Analysis
Accept Situation: Here, the designers decide on committing to the project and finding a solution to the problem. They pool their resources into figuring out how to solve the task most efficiently.
Analyze: In this stage, everyone in the team begins research. They gather general and specific materials which will help to figure out how their problem might be solved. This can range from statistics, questionnaires, and articles, among many other sources.
Concept
Define: This is where the key issue of the matter is defined. The conditions of the problem become objectives, and restraints on the situation become the parameters within which the new design must be constructed.
Synthesis
Ideate: The designers here brainstorm different ideas, solutions for their design problem. The ideal brainstorming session does not involve any bias or judgment, but instead builds on original ideas.
Select: By now, the designers have narrowed down their ideas to a select few, which can be guaranteed successes and from there they can outline their plan to make the product.
Implement: This is where the prototypes are built, the plan outlined in the previous step is realized and the product starts to become an actual object.
Evaluate: In the last stage, the product is tested, and from there, improvements are made. Although this is the last stage, it does not mean that the process is over. The finished prototype may not work as well as hoped so new ideas need to be brainstormed.
Double Diamond Framework
The Double Diamond Framework is a widely used approach for product discovery, which emphasizes a structured method for problem-solving and solution development, encouraging teams to diverge (broad exploration) before converging (focused decision-making).
The framework is divided into two primary stages: diverging and converging, each with its own steps and considerations.
Diverging Stage:
During the diverging stage, teams explore the problem space broadly without predefined solutions. This phase involves engaging with core personas, conducting open-ended conversations, and gathering unfiltered input from customer-facing teams. The goal is to identify and document various problem areas, allowing themes and key issues to emerge naturally.
Converging Stage:
As insights emerge, teams transition to the converging stage, where they narrow down problem areas and prioritize solutions. This phase involves defining the problem, understanding major pain points, and advocating for solutions within the organization. Effective convergence requires clear articulation of the problem's significance and consideration of business strategies and feasibility.
Iterative Process:
The Double Diamond Framework is iterative, allowing teams to revisit stages as needed based on feedback and outcomes. Moving back to earlier stages may be necessary if solutions fail to address underlying issues or elicit negative user responses. Success lies in the team's ability to adapt and refine their approach over time.
Creative visualization
In design, Creative Visualization refers to the process by which computer generated imagery, digital animation, three-dimensional models, and two-dimensional representations, such as architectural blueprints, engineering drawings, and sewing patterns are created and used in order to visualize a potential product prior to production. Such products include prototypes for vehicles in automotive engineering, apparel in the fashion industry, and buildings in architectural design.
Demand-pull innovation and invention-push innovation
Most product designs fall under one of two categories: demand-pull innovation or invention-push innovation.
Demand-pull happens when there is an opportunity in the market to be explored by the design of a product. This product design attempts to solve a design problem. The design solution may be the development of a new product or developing a product that's already on the market, such as developing an existing invention for another purpose.
Invention-push innovation happens when there is an advancement in intelligence. This can occur through research or it can occur when the product designer comes up with a new product design idea.
Product design expression
Design expression comes from the combined effect of all elements in a product. Colour tone, shape and size should direct a person's thoughts towards buying the product. Therefore, it is in the product designer's best interest to consider the audiences who are most likely to be the product's end consumers. Keeping in mind how consumers will perceive the product during the design process helps steer the product towards success in the market. However, even within a specific audience, it is challenging to cater to each possible personality within that group.
One solution to that is to create a product that, in its designed appearance and function, expresses a personality or tells a story. Products that carry such attributes are more likely to give off a stronger expression that will attract more consumers. On that note, it is important to keep in mind that design expression does not only concern the appearance of a product, but also its function. For example, as humans our appearance as well as our actions are subject to people's judgment when they are making a first impression of us. People usually do not appreciate a rude person even if they are good looking. Similarly, a product can have an attractive appearance, but if its function does not follow through, it will most likely drop in consumer interest. In this sense, designers are like communicators: they use the language of different elements in the product to express something.
Trends in product design
Product designers must consider every detail: how people use and misuse objects, potential flaws in products, errors in the design process, and the ideal ways people wish they could interact with those objects. Many new designs will fail and many won't even make it to market. Some designs eventually become obsolete. The design process itself can be quite frustrating, usually taking 5 or 6 tries to get the product design right. A product that fails in the marketplace the first time may be re-introduced to the market 2 more times. If it continues to fail, the product is then considered to be dead because the market believes it to be a failure. Most new products fail, even if there's a great idea behind them.
All types of product design are clearly linked to the economic health of manufacturing sectors. Innovation provides much of the competitive impetus for the development of new products, with new technology often requiring a new design interpretation. It only takes one manufacturer to create a new product paradigm to force the rest of the industry to catch up—fueling further innovation. Products designed to benefit people of all ages and abilities—without penalty to any group—accommodate our swelling aging population by extending independence and supporting the changing physical and sensory needs we all encounter as we grow older.
See also
Axiomatic product development lifecycle (APDL)
Industrial design
Sustainable design
Transgenerational design
Virtual product development
Universal design
Inclusive design
References
Design for X | Product design | Engineering | 1,996 |
25,887,525 | https://en.wikipedia.org/wiki/Psilocybe%20banderillensis | Psilocybe banderillensis is a species of psilocybin mushroom in the family Hymenogastraceae known from the states of Veracruz and Oaxaca in Mexico. It is in the Psilocybe fagicola complex with Psilocybe fagicola, Psilocybe oaxacana, Psilocybe columbiana, Psilocybe herrerae, Psilocybe keralensis, Psilocybe neoxalapensis, and Psilocybe teofiloi.
See also
List of Psilocybe species
List of psilocybin mushrooms
References
Entheogens
Psychoactive fungi
banderillensis
Psychedelic tryptamine carriers
Fungi of North America
Fungi described in 1978
Taxa named by Gastón Guzmán
Fungus species | Psilocybe banderillensis | Biology | 157 |
5,439,007 | https://en.wikipedia.org/wiki/Yttrium%28III%29%20arsenide | Yttrium arsenide is an inorganic compound of yttrium and arsenic with the chemical formula YAs. It can be prepared by reacting yttrium and arsenic at high temperature. Some studies have investigated the eutectic system of yttrium arsenide and zinc arsenide.
It reacts with iron, iron(III) arsenide, iron(III) oxide and yttrium(III) fluoride (for doping) at high temperature to give the superconducting material YFeAsO0.9F0.1 (Tc = 10.2 K).
References
External reading
Arsenides
Yttrium compounds
Rock salt crystal structure | Yttrium(III) arsenide | Chemistry | 135 |
2,332,464 | https://en.wikipedia.org/wiki/High-%CE%BA%20dielectric | In the semiconductor industry, the term high-κ dielectric refers to a material with a high dielectric constant (κ, kappa), as compared to silicon dioxide. High-κ dielectrics are used in semiconductor manufacturing processes where they are usually used to replace a silicon dioxide gate dielectric or another dielectric layer of a device. The implementation of high-κ gate dielectrics is one of several strategies developed to allow further miniaturization of microelectronic components, colloquially referred to as extending Moore's Law.
Sometimes these materials are called "high-k" (pronounced "high kay"), instead of "high-κ" (high kappa).
Need for high-κ materials
Silicon dioxide () has been used as a gate oxide material for decades. As metal–oxide–semiconductor field-effect transistors (MOSFETs) have decreased in size, the thickness of the silicon dioxide gate dielectric has steadily decreased to increase the gate capacitance (per unit area) and thereby drive current (per device width), raising device performance. As the thickness scales below 2 nm, leakage currents due to tunneling increase drastically, leading to high power consumption and reduced device reliability. Replacing the silicon dioxide gate dielectric with a high-κ material allows increased gate thickness thus decreasing gate capacitance without the associated leakage effects.
First principles
The gate oxide in a MOSFET can be modeled as a parallel plate capacitor. Ignoring quantum mechanical and depletion effects from the Si substrate and gate, the capacitance C of this parallel plate capacitor is given by
$C = \frac{\kappa \varepsilon_0 A}{t},$
where
$A$ is the capacitor area
$\kappa$ is the relative dielectric constant of the material (3.9 for silicon dioxide)
$\varepsilon_0$ is the permittivity of free space
$t$ is the thickness of the capacitor oxide insulator
Since leakage limitation constrains further reduction of $t$, an alternative method to increase gate capacitance is to alter κ by replacing silicon dioxide with a high-κ material. In such a scenario, a thicker gate oxide layer might be used which can reduce the leakage current flowing through the structure as well as improving the gate dielectric reliability.
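The trade-off can be made concrete with the parallel-plate formula above. A small Python sketch comparing capacitance density at equal thickness; the value κ = 3.9 for silicon dioxide is from the text, while κ ≈ 25 for hafnium dioxide is an assumed, typical literature value.

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def capacitance_density(kappa, thickness_m):
    """Gate capacitance per unit area: C/A = kappa * eps0 / t."""
    return kappa * EPS0 / thickness_m

t = 2e-9  # 2 nm, near the leakage-limited thickness for SiO2
c_sio2 = capacitance_density(3.9, t)
c_hfo2 = capacitance_density(25.0, t)  # assumed kappa for HfO2
print(c_hfo2 / c_sio2)  # ~6.4: same thickness, much higher capacitance
# Equivalently, the high-k film can be ~6.4x thicker at the same
# capacitance, suppressing tunneling leakage.
```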
Gate capacitance impact on drive current
The drain current for a MOSFET can be written (using the gradual channel approximation) as
$I_{D,\mathrm{sat}} = \frac{W}{L} \, \mu \, C_{\mathrm{inv}} \, \frac{(V_G - V_T)^2}{2},$
where
$W$ is the width of the transistor channel
$L$ is the channel length
$\mu$ is the channel carrier mobility (assumed constant here)
$C_{\mathrm{inv}}$ is the capacitance density associated with the gate dielectric when the underlying channel is in the inverted state
$V_G$ is the voltage applied to the transistor gate
$V_T$ is the threshold voltage
The term $V_G - V_T$ is limited in range due to reliability and room temperature operation constraints, since a too large $V_G$ would create an undesirable, high electric field across the oxide. Furthermore, $V_T$ cannot easily be reduced below about 200 mV, because leakage currents due to increased oxide leakage (that is, assuming high-κ dielectrics are not available) and subthreshold conduction raise stand-by power consumption to unacceptable levels. (See the industry roadmap, which limits threshold to 200 mV, and Roy et al.). Thus, according to this simplified list of factors, an increased $I_{D,\mathrm{sat}}$ requires a reduction in the channel length or an increase in the gate dielectric capacitance.
Materials and considerations
Replacing the silicon dioxide gate dielectric with another material adds complexity to the manufacturing process. Silicon dioxide can be formed by oxidizing the underlying silicon, ensuring a uniform, conformal oxide and high interface quality. As a consequence, development efforts have focused on finding a material with a requisitely high dielectric constant that can be easily integrated into a manufacturing process. Other key considerations include band alignment to silicon (which may alter leakage current), film morphology, thermal stability, maintenance of a high mobility of charge carriers in the channel and minimization of electrical defects in the film/interface. Materials which have received considerable attention are hafnium silicate, zirconium silicate, hafnium dioxide and zirconium dioxide, typically deposited using atomic layer deposition.
It is expected that defect states in the high-κ dielectric can influence its electrical properties. Defect states can be measured for example by using zero-bias thermally stimulated current, zero-temperature-gradient zero-bias thermally stimulated current spectroscopy, or inelastic electron tunneling spectroscopy (IETS).
Use in industry
Industry has employed oxynitride gate dielectrics since the 1990s, wherein a conventionally formed silicon oxide dielectric is infused with a small amount of nitrogen. The nitride content subtly raises the dielectric constant and is thought to offer other advantages, such as resistance against dopant diffusion through the gate dielectric.
In 2000, Gurtej Singh Sandhu and Trung T. Doan of Micron Technology initiated the development of atomic layer deposition high-κ films for DRAM memory devices. This helped drive cost-effective implementation of semiconductor memory, starting with 90-nm node DRAM.
In early 2007, Intel announced the deployment of hafnium-based high-κ dielectrics in conjunction with a metallic gate for components built on 45 nanometer technologies, and shipped them in the 2007 processor series codenamed Penryn. At the same time, IBM announced plans to transition to high-κ materials, also hafnium-based, for some products in 2008. While not identified, the most likely dielectric used in such applications is some form of nitrided hafnium silicate (HfSiON), since hafnium dioxide and hafnium silicate are susceptible to crystallization during dopant activation annealing. NEC Electronics has also announced the use of an HfSiON dielectric in their 55 nm UltimateLowPower technology. However, even HfSiON is susceptible to trap-related leakage currents, which tend to increase with stress over device lifetime. This leakage effect becomes more severe as hafnium concentration increases. There is no guarantee, however, that hafnium will serve as a de facto basis for future high-κ dielectrics. The 2006 ITRS roadmap predicted the implementation of high-κ materials to be commonplace in the industry by 2010.
See also
Low-κ dielectric
Silicon–germanium
Silicon on insulator
References
Further reading
Review article by Wilk et al. in the Journal of Applied Physics
Houssa, M. (Ed.) (2003) High-k Dielectrics Institute of Physics CRC Press Online
Huff, H.R., Gilmer, D.C. (Ed.) (2005) High Dielectric Constant Materials : VLSI MOSFET applications Springer
Demkov, A.A, Navrotsky, A., (Ed.) (2005) Materials Fundamentals of Gate Dielectrics Springer
"High dielectric constant gate oxides for metal oxide Si transistors" Robertson, J. (Rep. Prog. Phys. 69 327-396 2006) Institute Physics Publishing High dielectric constant gate oxides]
Media coverage of the early 2007 Intel/IBM announcements: BBC News Technology, "Chips push through nano-barrier"; NY Times article (1/27/07)
Gusev, E. P. (Ed.) (2006) "Defects in High-k Gate Dielectric Stacks: Nano-Electronic Semiconductor Devices", Springer
Electronic engineering
Transistors
Semiconductor fabrication materials
MOSFETs | High-κ dielectric | Technology,Engineering | 1,535 |
1,289,909 | https://en.wikipedia.org/wiki/Washburn%27s%20equation | In physics, Washburn's equation describes capillary flow in a bundle of parallel cylindrical tubes; with some caveats it is also extended to imbibition into porous materials. The equation is named after Edward Wight Washburn; it is also known as the Lucas–Washburn equation, considering that Richard Lucas wrote a similar paper three years earlier, or the Bell–Cameron–Lucas–Washburn equation, considering J.M. Bell and F.K. Cameron's discovery of the form of the equation in 1906.
Derivation
In its most general form the Lucas–Washburn equation describes the penetration length L of a liquid into a capillary pore or tube with time t as L = √(Dt), where D is a simplified diffusion coefficient. This relationship, which holds true for a variety of situations, captures the essence of Lucas and Washburn's equation and shows that capillary penetration and fluid transport through porous structures exhibit diffusive behaviour akin to that which occurs in numerous physical and chemical systems. The diffusion coefficient D is governed by the geometry of the capillary as well as the properties of the penetrating fluid.
A liquid having a dynamic viscosity η and surface tension γ will penetrate a distance L into the capillary whose pore radius is r following the relationship:
L = √(γrt·cos(φ)/(2η))
where φ is the contact angle between the penetrating liquid and the solid (tube wall).
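A minimal Python sketch of this relationship follows (the pore radius and contact angle are assumed example values; the fluid properties are those of room-temperature water):

import math

def washburn_length(gamma, r, t, phi, eta):
    # Penetration length (m): L = sqrt(gamma * r * t * cos(phi) / (2 * eta))
    return math.sqrt(gamma * r * t * math.cos(phi) / (2 * eta))

gamma = 0.0728  # surface tension of water, N/m
eta = 1.0e-3    # dynamic viscosity of water, Pa*s
r = 1.0e-6      # assumed pore radius, 1 micrometre
phi = 0.0       # assumed perfectly wetting, cos(phi) = 1

for t in (0.1, 1.0, 10.0):
    print(f"t = {t:5.1f} s -> L = {washburn_length(gamma, r, t, phi, eta) * 1e3:.2f} mm")
# The printed lengths grow as sqrt(t), the diffusive behaviour noted above.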
Washburn's equation is also used commonly to determine the contact angle of a liquid to a powder using a force tensiometer.
In the case of porous materials, many issues have been raised both about the physical meaning of the calculated pore radius and the real possibility to use this equation for the calculation of the contact angle of the solid.
The equation is derived for capillary flow in a cylindrical tube in the absence of a gravitational field, but is sufficiently accurate in many cases when the capillary force is still significantly greater than the gravitational force.
In his paper from 1921 Washburn applies Poiseuille's law for fluid motion in a circular tube. Inserting the expression for the differential volume in terms of the length l of fluid in the tube, dV = πr²dl, one obtains
dl/dt = ΣP·(r⁴ + 4εr³)/(8ηlr²)
where ΣP is the sum over the participating pressures, such as the atmospheric pressure P_A, the hydrostatic pressure P_h and the equivalent pressure due to capillary forces P_c. η is the viscosity of the liquid, and ε is the coefficient of slip, which is assumed to be 0 for wetting materials. r is the radius of the capillary. The pressures in turn can be written as
P_h = hgρ − lgρ·sin(ψ)
P_c = (2γ/r)·cos(φ)
where ρ is the density of the liquid and γ its surface tension. ψ is the angle of the tube with respect to the horizontal axis. φ is the contact angle of the liquid on the capillary material. Substituting these expressions leads to the first-order differential equation for the distance l the fluid penetrates into the tube:
dl/dt = [P_A + gρ(h − l·sin(ψ)) + (2γ/r)·cos(φ)]·(r⁴ + 4εr³)/(8ηlr²)
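Under the usual simplifying assumptions — a horizontal tube (ψ = 0), no slip (ε = 0), and atmospheric and hydrostatic terms negligible against the capillary pressure — this reduces to
dl/dt = (2γ·cos(φ)/r)·r⁴/(8ηlr²) = γr·cos(φ)/(4ηl)
so that l·dl = (γr·cos(φ)/(4η))·dt, and integrating from l = 0 at t = 0 recovers
l² = γrt·cos(φ)/(2η)
which is both the relationship quoted above and the diffusive form l = √(Dt) with D = γr·cos(φ)/(2η).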
Washburn's constant
The Washburn constant may be included in Washburn's equation.
Fluid inertia
In the derivation of Washburn's equation, the inertia of the liquid is ignored as negligible. This is apparent in the dependence of length on the square root of time, L ∝ √t, which gives an arbitrarily large velocity dL/dt for small values of t. An improved version of Washburn's equation, called the Bosanquet equation, takes the inertia of the liquid into account.
Applications
Inkjet printing
The penetration of a liquid into the substrate flowing under its own capillary pressure can be calculated using a simplified version of Washburn's equation:
l = √(γrt·cos(φ)/(2η))
where the surface tension-to-viscosity ratio γ/η represents the speed of ink penetration into the substrate. In reality, the evaporation of solvents limits the extent of liquid penetration in a porous layer and thus, for the meaningful modelling of inkjet printing physics, it is appropriate to utilise models which account for evaporation effects in limited capillary penetration.
Food
According to physicist and Ig Nobel prize winner Len Fisher, the Washburn equation can be extremely accurate for more complex materials including biscuits. Following an informal celebration called national biscuit dunking day, some newspaper articles quoted the equation as Fisher's equation.
Novel capillary pump
The flow behaviour in a traditional capillary follows Washburn's equation. Recently, novel capillary pumps with a constant pumping flow rate independent of the liquid viscosity were developed; these have a significant advantage over the traditional capillary pump, whose flow behaviour is Washburn behaviour, namely that the flow rate is not constant. These new concepts of capillary pump are of great potential to improve the performance of lateral flow tests.
See also
Bosanquet equation
Mercury intrusion porosimetry (MIP)
References
External links
Powder wettability measurement with the Washburn method
Equations of fluid dynamics
Porous media | Washburn's equation | Physics,Chemistry,Materials_science,Engineering | 972 |
28,537,869 | https://en.wikipedia.org/wiki/Inland%20Customs%20Line | The Inland Customs Line, incorporating the Great Hedge of India (or Indian Salt Hedge), was a customs barrier built by the British colonial rulers of India to prevent smuggling of salt from coastal regions in order to avoid the substantial salt tax.
The customs line was begun under the East India Company and continued into direct British rule. The line had its beginnings in a series of customs houses established in Bengal in 1803 to prevent the smuggling of salt to avoid the tax. These customs houses were eventually formed into a continuous barrier that was brought under the control of the Inland Customs Department in 1843.
The line was gradually expanded as more territory was brought under British control until it covered more than 2,500 miles (4,000 km), often running alongside rivers and other natural barriers. It ran from the Punjab in the northwest to the princely states of Orissa, near the Bay of Bengal, in the southeast. The line was initially made of dead, thorny material such as the Indian plum but eventually evolved into a living hedge that grew up to 12 feet (3.7 m) high and was compared to the Great Wall of China. The Inland Customs Department employed customs officers, jemadars and men to patrol the line and apprehend smugglers, reaching a peak of more than 14,000 staff in 1872.
The line and hedge were abandoned in 1879 when the British seized control of the Sambhar Salt Lake in Rajasthan and applied tax at the point of manufacture. The salt tax itself remained in place until 1946.
Origins
When the Inland Customs Line was first conceived, British India was governed by the East India Company. This situation lasted until 1858 when the responsibility for government of the colony was transferred to the Crown following the events of the Indian Rebellion of 1857. By 1780 Warren Hastings, the company's Governor-General of India, had brought all salt manufacture in the Bengal Presidency under company control. This allowed him to increase the ancient salt tax in Bengal from 0.3 rupees per maund (37 kg) to 3.25 rupees per maund by 1788, a rate that it remained at until 1879. This brought in 6,257,470 rupees for the 1784–85 financial year, at a cost to an average Indian family of around two rupees per year (two months' income for a labourer). There were taxes on salt in the other British India territories but the tax in Bengal was the highest, with the other taxes at less than a third of the Bengal tax rate.
It was possible to avoid paying the salt tax by extracting salt illegally in salt pans, stealing it from warehouses or smuggling salt from the princely states which remained outside of direct British rule. The latter was the greatest threat to the company's salt revenues. Much of the smuggled salt came into Bengal from the west and the company decided to act to prevent this trade. In 1803 a series of customs houses and barriers were constructed across major roads and rivers in Bengal to collect the tax on traded salt as well as duties on tobacco and other imports. These customs houses were backed up by "preventative customs houses" located near salt works and the coast in Bengal to collect the tax at source.
These customs houses alone did little to prevent the mass avoidance of the salt tax. This was due to the lack of a continuous barrier, corruption within the customs staff and the westward expansion of Bengal towards salt-rich states. In 1823 the Commissioner of Customs for Agra, George Saunders, installed a line of customs posts along the Ganges and Yamuna rivers from Mirzapur to Allahabad that would eventually evolve into the Inland Customs Line. The main aim was to prevent salt from being smuggled from the south and west but there was also a secondary line running from Allahabad to Nepal to prevent smuggling from the Northwest frontier. The annexation of Sindh and the Punjab allowed the line to be extended north-west by G. H. Smith, who had become Commissioner of Customs in 1834. Smith exempted items such as tobacco and iron from taxation to concentrate on salt and was responsible for expanding and improving the line, increasing its budget to 790,000 rupees per year and the staff to 6,600 men. Under Smith, the line saw many reforms and was officially named the Inland Customs Line in 1843.
Inland Customs Line
Smith's new Inland Customs Line was first concentrated between Agra and Delhi and consisted of a series of customs posts at one mile intervals, linked by a raised path with gateways (known as "chokis") to allow people to cross the line every four miles. Policing of the barrier and the surrounding land was the responsibility of the Inland Customs Department, headed by a Commissioner of Inland Customs. The department staffed each post with an Indian Jemadar (approximately equivalent to a British Warrant Officer) and ten men, backed up by patrols operating 2–3 miles behind the line. The line was mainly concerned with the collection of the salt tax but also collected tax on sugar exported from Bengal and functioned as a deterrent against opium, bhang and cannabis smuggling.
The end of company rule in 1858 allowed the British government to expand Bengal through territorial acquisitions, updating the line as needed. In 1869 the government in Calcutta ordered the connection of sections of the line into a continuous customs barrier stretching from the Himalayas to Orissa, near the Bay of Bengal, a distance said to be the equivalent of London to Constantinople. The northern section from Tarbela to Multan was lightly guarded, with posts spread further apart, as the wide Indus River was judged to provide a sufficient barrier to smuggling. The more heavily guarded section began at Multan and ran along the rivers Sutlej and Yamuna before terminating south of Burhanpur. The final section reverted to longer distances between customs posts and ran east to Sonapur.
In the 1869–70 financial year the line collected 12.5 million rupees in salt tax and 1 million rupees in sugar duties at a cost of 1.62 million rupees in maintenance. In this period the line employed around 12,000 men and maintained 1,727 customs posts. By 1877 the salt tax was worth £6.3 million (approx 29.1 million rupees) to the British government in India, with the majority being collected in the Madras and Bengal provinces, lying on either side of the customs line.
Great Hedge
It is not known when an actual live hedge was first grown along the customs line but it is likely that it began in the 1840s when thorn bushes, cut and laid along the line as a barrier (known as the "dry hedge", see also dead hedge), took root. By 1868 it had become a "thoroughly impenetrable" hedge. The original dry hedge consisted mainly of samples of the dwarf Indian plum fixed to the line with stakes. This hedge was at risk of attack by white ants, rats, fire, storms, locusts, parasitic creepers, natural decay and strong winds which could destroy furlongs at a time and necessitated constant maintenance. Allan Octavian Hume, Commissioner of Inland Customs from 1867 to 1870, estimated that each mile of dry hedge required 250 tons of material to construct and that this material had to be carried to the line over long distances. The amount of labour involved in such a task was one of the reasons that a live hedge was encouraged, particularly as damage required the replacement of around half of the dry hedge each year.
In 1869 Hume, in preparation for a rapid expansion of the live hedge, began trials of various indigenous thorny shrubs to see which would be suited to different soil and rainfall conditions. The result was that the main body of the hedge was composed of Indian plum, babool, karonda and several species of Euphorbia. The prickly pear was used where conditions meant that nothing else could grow, as was found in parts of the Hisar district, and in other places bamboo was planted. Where the soil was poor it was dug out and replaced or overlain with better soil and in flood plains the hedge was planted on a raised bank to protect it. The hedge was watered from nearby wells or rainwater collected in large, purpose-built trenches and a "well made" road was constructed along its entire length.
Hume was responsible for transforming the hedge from "a mere line of persistently dwarf seedlings, or of irregularly scattered, disconnected bushes" into a formidable barrier that, by the end of his tenure as commissioner, contained long runs of "perfect" hedge together with hedge rated "strong and good", though not impenetrable. Even at its weakest the hedge remained substantial in height and thickness, and in places considerably more so. Hume himself remarked that his barrier was "in its most perfect form, ... utterly impassable to man or beast".
Hume also substantially realigned the Inland Customs Line, joining separate sections and removing some of the spurs that were no longer necessary. Where this happened, whole runs of hedge were abandoned, and the men would have to construct a hedge from scratch on the new alignment. The living hedge was terminated at Burhanpur in the south, beyond which it could not grow, and at Layyah in the north where it met the River Indus, whose strong current was judged sufficient to deter smugglers. Historian Henry Francis Pelham compared the use of the Indus in this way to that of the River Main, in modern Germany, for the Roman Limes Germanicus fortifications.
Hume was replaced as Commissioner of Customs in 1870 by G. H. M. Batten who would hold the post for the next six years. His administration saw little realignment of the hedge but extensive strengthening of the existing run, including the building of stone walls and ditch and bank systems where the hedge could not be grown. By the end of Batten's first year he had increased the length of "perfect" hedge considerably, and by 1873 the central portion between Agra and Delhi was said to be almost impregnable. The line was altered slightly in 1875–6 to run alongside the newly built Agra Canal, which was judged a sufficient obstacle to allow the distance between guard posts to be increased.
Batten's replacement as Commissioner was W. S. Halsey, who was the last to be in charge of the Great Hedge. Under Halsey's control the hedge grew to its greatest extent, reaching its peak mileage of "perfect" and "good" live hedge by 1878, with further stretches of inferior hedge, dry hedge or stone wall. In places the live hedge was backed up with an additional dry hedge barrier. All maintenance work on the hedge was halted in 1878 after a decision was made that the line would be abandoned in 1879.
Tree and plants
Carissa carandas, an easy-to-grow, drought-resistant, sturdy shrub that grows in a variety of soils and produces berry-sized fruits rich in iron and vitamin C which are used for pickle, was one of the shrubs used because it is ideal for hedges, growing rapidly and densely and needing little attention. Senegalia catechu, Zizyphus jujube, prickly pear, and Euphorbia were some of the other shrubs, plants and trees used for the hedge.
Staff
The customs line and hedge required a large number of staff to patrol and maintain it. The majority of the staff were Indian, with their officers coming mainly from the British. In 1869 the Inland Customs Department employed 136 officers, 2,499 petty officers and 11,288 men on the line, reaching a peak of 14,188 men of all ranks in 1872, after which staff numbers declined to around 10,000 as expansion slowed and the hedge matured. The Indian staff were recruited disproportionately from the Muslim population, who constituted 42 per cent of the customs men. The men were intentionally stationed in areas away from their home towns which, together with their removal of local wood for the hedge, made them unpopular among local people. To encourage co-operation, those Indians who lived in villages near the line were allowed to carry small quantities of salt across for free.
The job of customs man was highly desirable due to its high pay of five rupees per month (agricultural wages were around three rupees a month), which could be topped up with the proceeds from the sale of seized salt. However the men were forced to live away from their families in order to minimise distractions and were not provided with houses, being expected to build their own from mud or wood. In 1868 the Inland Customs department allowed the men's families to join them on the line, as the previous order had led to customs men straying from their posts and associating too closely with local women. The men worked twelve-hour days consisting of two equal day and night shifts. The principal tasks were patrolling and maintaining the hedge; in 1869 alone the customs men carried out 18 million miles (29 million km) of patrols, dug 2 million cubic feet (57,000 cubic metres) of earth and carried over 150,000 tons of thorny material for the hedge. There was a fairly high level of turnover in the staff; for example, in 1876-7 more than 800 men left the service. This included 115 customs men who died on the line, 276 dismissed, 30 deserted on duty, 360 failing to rejoin after leave and 23 removed for being unfit.
The officer corps was almost entirely British; attempts to attract Indian men to the post proved unsuccessful, as the officers were required to be fluent in English, and such men could easily find better paid work in other fields. The job was tough, with each officer responsible for 100 men on a long beat of the line, and working through Sundays and holidays. The officers undertook at least one customs excursion per day on average, weighing and applying tax to large quantities of goods, in addition to personally patrolling part of the line. The only other British men they would meet while on the line would typically be officers of adjacent beats and senior officers who visited about three times a year.
Abandonment
Several British viceroys considered dismantling the line, as it was a major obstacle to free travel and trade across the subcontinent. This was partly due to the use of the line for the collection of taxes on sugar (which made up 10 per cent of the revenues) as well as salt, meaning that traffic had to be stopped and searched in both directions. In addition the line had created a confusing number of different customs jurisdictions in the area surrounding it. The viceroys were also displeased with the corruption and bribery which was present in the Inland Customs system, and the way the line came to serve as a symbol of unjust taxes (parts were set on fire during the Indian Rebellion of 1857). However, the government could not afford to lose the revenue generated by the line and hence, before they could abolish it, needed to take control of all salt production in India, so that tax could be applied at the point of manufacture.
The Viceroy from 1869 to 1872, Lord Mayo, took the first steps towards abolition of the line, instructing British officials to negotiate agreements with the rulers of princely states to take control of salt production. The process was speeded up by Mayo's successor, Lord Northbrook, and by the loss of revenue caused by the Great Famine of 1876–78 that reduced the land tax and killed 6.5 million people. Having secured salt production, British India's Finance Minister, Sir John Strachey, led a review of the tax system and his recommendations, implemented by Viceroy Lord Lytton, resulted in the increase of the salt tax in Madras, Bombay and northern India to 2.5 rupees per maund and a reduction in Bengal to 2.9 rupees. This reduced difference in tax between neighbouring territories made smuggling uneconomical and allowed for the abandonment of the Inland Customs Line on 1 April 1879. The tax on sugar and 29 other commodities had been abolished a year earlier. Strachey's tax reforms continued, and he brought an end to import duties, achieving almost complete free trade in India by 1880. In 1882 Viceroy Lord Ripon finally standardised the salt tax across most of India at a rate of two rupees per maund. However the trans-Indus districts of India continued to be taxed at eight annas (half a rupee) per maund until 23 July 1896 and Burma maintained its reduced rate of just three annas. The equalisation of tax cost the government 1.2 million rupees of lost revenue. The potential for salt to be smuggled from the Kohat (trans-Indus) region meant that the north-western section of the line, some 325 miles long from Layyah to Torbela, continued to be policed by the Department of Salt Revenue in Northern India until at least 1895.
Impact
On health
The use of the customs line to maintain the higher salt tax in Bengal is likely to have had a detrimental effect on the health of Indians through salt deprivation. The higher prices within the area enclosed by the line meant that average annual salt consumption there was markedly lower than outside the line. Indeed, the British government's own figures showed that the barrier directly affected salt consumption, reducing it to below the level that regulations prescribed for English soldiers serving in India and that supplied to prisoners in British jails. The consumption of salt was further lowered during the periods of famine that affected India in the 19th century.
It is impossible to know how many died from salt deprivation in India as a result of the salt tax as salt deficiency was not often recorded as a cause of death and was instead more likely to worsen the effects of other diseases and hinder recoveries. It is known that the equalisation of tax made salt cheaper on the whole, decreasing the tax imposed on 130 million people and increasing it on just 47 million, leading to an increase in the use of the mineral. Consumption grew by 50 per cent between 1868 and 1888 and doubled by 1911, by which time salt had become cheaper (relatively).
The rate of salt tax was increased to 2.5 rupees per maund in 1888 to compensate for the loss of revenue from falling silver prices, but this had no adverse effect on salt consumption. The salt tax remained a controversial means of collecting revenue and became the subject of the 1930 Salt Satyagraha, a civil disobedience movement led by Mohandas Gandhi against British rule. During the Satyagraha Gandhi and others marched to the salt producing area of Dandi and defied the salt laws, leading to the imprisonment of 80,000 Indians. The march drew significant publicity to the Indian independence movement but failed to get the tax repealed. The salt tax would finally be abolished by the Interim Government of India, led by Jawaharlal Nehru, in October 1946. The government of Indira Gandhi overlaid much of the old route with roads.
On liberty
Sir John Strachey, the minister whose tax review led to the abolition of the line, was quoted in 1893 describing the line as "a monstrous system, to which it would be almost impossible to find a parallel in any tolerably civilised country".
This has been echoed by modern writers such as journalist Madeleine Bunting, who wrote in The Guardian in February 2001 that the line was "one of the most grotesque and least well known achievements of the British in India".
The massive scale of the undertaking has also been commented upon, with both Hume, the customs commissioner, and M. E. Grant Duff, who was Under-Secretary of State for India from 1868 to 1874, comparing the hedge to the Great Wall of China. The abolition of the line and equalisation of tax has generally been viewed as a good move, with one writer of 1901 stating that it "relieved the people and the trade along a broad belt of country, 2,000 miles long, from much harassment". Sir Richard Temple, governor of the Bengal and Bombay Presidencies, wrote in 1882 that "the inland customs line for levying the salt-duties has been at length swept away" and that care must be taken to ensure that the "evils of the obsolete transit-duties" did not return. However, the same year, the India Salt Act of 1882 explicitly prohibited Indians from collecting or selling salt and continued to limit access to the vital product at affordable prices.
On smuggling
The Line was intended to prevent smuggling, and in this respect it was fairly successful. Smugglers who were caught by customs men were arrested and fined around 8 rupees, those that could not pay being imprisoned for around six weeks. The number of smugglers caught increased as the line extended and was built up. In 1868 2,340 people were convicted of smuggling after being caught on the line, this rose to 3,271 smugglers in 1873–74 and to 6,077 convicted in 1877–78.
Several methods of smuggling were employed. Early on, when patrols were patchy, large scale smuggling was common, with armed gangs breaking through the line with herds of salt-laden camels or cattle. As the line was strengthened, smugglers changed tactics and would try to disguise salt and bring it through the line or throw it over the hedge. Sometimes smugglers hid salt within the jurisdiction of the customs department to collect the 50 per cent finders fee.
Clashes between smugglers and customs men were often violent. Customs officials "harassed Indian people and extorted bribes". Many of the smugglers died, with examples including one drowning while trying to escape by swimming an irrigation tank and another accidentally killed by other smugglers during a fight with customs men. In September 1877, one large skirmish occurred when two customs men attempted to apprehend 112 smugglers and were both killed. Many of the gang were later caught and either imprisoned or transported.
Rediscovery
Despite its scale, the customs line and associated hedge were not widely known in either Britain or India, the standard histories of the period neglecting to mention them. Roy Moxham, a conservator at the University of London library, wrote a book on the customs line and his search for its remains that was published in 2001. This followed his finding, in 1995, of a passing mention of the hedge in Major-General Sir William Henry Sleeman's work Rambles and Recollections of an Indian Official. Moxham looked up the hedge in the India Office Records of the British Library and determined to locate its remnants.
Moxham conducted extensive research in London before making three trips to India to look for any remains of the line. In 1998 he located a small raised embankment in the Etawah district in Uttar Pradesh which may be all that remains of the Great Hedge of India. Moxham's book, which he claims to be the first on the subject, details the history of the line and his attempts to locate its modern remains. The book was translated into Marathi by Anand Abhyankar in 2007 and into Tamil by Cyril Alex in 2015.
In July 2015, the Children's BBC channel outlined the hedge on Horrible Histories, watched that week by 207,000 viewers.
Artist Sheila Ghelani and Sue Palmer produced live art performances of a piece called "Common Salt", about the hedge. Their book on the subject was published in July 2021 by Live Art Development Agency.
In August 2021, journalist Kamala Thiagarajan wrote about the hedge on BBC Future's "Lost Index" series.
See also
The Great Green Wall of Aravalli, a 1,600 km long and 5 km wide green ecological corridor of India
Great Green Wall, across North Africa in Sahara desert
Three-North Shelter Forest Program, a Chinese anti-desertification program started in 1978
References
Bibliography
Economic history of India
Economic history of Pakistan
Walls
Salt tax
Economy of British India
Customs duties
Fences
Separation barriers
Internal borders of India
Border barriers
Salt industry in India | Inland Customs Line | Engineering | 4,838 |
996,410 | https://en.wikipedia.org/wiki/Forwarder | A forwarder is a forestry vehicle that carries big felled logs cut by a harvester from the stump to a roadside landing for later acquisition. Forwarders can use rubber tires or tracks. Unlike a skidder, a forwarder carries logs clear of the ground, which can reduce soil impacts but tends to limit the size of the logs it can move. Forwarders are typically employed together with harvesters in cut-to-length logging operations. Forwarders originated in Scandinavia.
Load capacity
Forwarders are commonly categorized by their load-carrying capacity. Other classifications include whether they are wheeled or tracked and the axle arrangement. The smallest are trailers designed for towing behind all-terrain vehicles which can carry a load between 1 and 3 tonnes. Agricultural self-loading trailers designed to be towed by farm tractors can handle load weights of up to around 12 to 15 tonnes. Lightweight purpose-built machines utilised in commercial logging and early thinning operations can handle payloads of up to 8 tonnes. Medium-sized forwarders used in clearfells and later thinnings carry between 12 and 16 tonnes. The largest class, specialized for clearfells, handles up to 25 tonnes. Forwarders also carry their load at least 2 feet above the ground.
Manufacturers
Barko Hydraulics, LLC
Caterpillar Inc.
John Deere (Timberjack)
EcoLog
Fabtek
HSM (Hohenloher Spezial Maschinenbau GmbH, Germany)
Komatsu Forest (Valmet)
Kronos
Logset
Malwa
Neuson Forest
PM Pfanzelt Maschinenbau
Ponsse
Rottne
Strojirna Novotny
Tigercat
Timber Pro
Zanello
References
External links
Engineering vehicles
Log transport
Forestry equipment | Forwarder | Engineering | 347 |
66,445,586 | https://en.wikipedia.org/wiki/Young%20Ladies%20Don%27t%20Play%20Fighting%20Games | Young Ladies Don't Play Fighting Games is a Japanese manga series by Eri Ejima. It has been serialized in Media Factory's seinen manga magazine Monthly Comic Flapper since January 2020 and has been collected in eight tankōbon volumes. The manga is licensed in North America by Seven Seas Entertainment. A live-action web drama adaptation aired from May to July 2023. An anime adaptation has been announced.
Plot
At a prestigious all-girls academy where video games are banned, young schoolgirls share a love of fighting games. Despite the ban, they enter Japan's biggest fighting game tournament.
Characters
Media
Manga
The manga series is written and illustrated by Eri Ejima and has been serialized in Media Factory's seinen manga magazine Monthly Comic Flapper since January 4, 2020. Eight tankōbon volumes were released as of October 2024. Seven Seas Entertainment licensed the manga for a North American release.
Drama
A live-action web drama adaptation was announced on October 21, 2022. It is directed by Ryoma Ouchida and written by Anna Kawahara. It aired on the Lemino streaming service from May 19 to July 7, 2023 and ran for eight episodes.
Anime
An anime adaptation was announced on January 21, 2021.
References
External links
2023 Japanese television series debuts
2023 Japanese television series endings
2020s LGBTQ literature
Anime series based on manga
Comedy-drama anime and manga
Japanese girls' love television series
Media Factory manga
Anime and manga set in schools
Seinen manga
Seven Seas Entertainment titles
Works about video games
Yuri (genre) anime and manga | Young Ladies Don't Play Fighting Games | Technology | 311 |
8,334,762 | https://en.wikipedia.org/wiki/Edward%20Orton%20Jr. | Professor Edward Orton Jr. (October 8, 1863 in Chester, New York, United States – February 10, 1932 in Columbus, Ohio, USA) was an American academic administrator, businessman, ceramic engineer, geologist, and philanthropist.
Biography
Early life
Orton Jr. was the son of Edward Orton Sr., a Harvard-educated geologist, and Mary Jennings Orton. In 1865, shortly after his birth, his family relocated to Yellow Springs, Ohio, when his father became principal of the preparatory school of Antioch College. In 1873, he began attending public school in Columbus after his father relocated the family upon being appointed first President of The Ohio State Agricultural and Mechanical College.
Career
Orton Jr. graduated from Ohio State University with an Engineer of Mines degree in 1884. From 1884 to 1888, he was chemist and superintendent of blast furnaces. The regular manufacture of high silicon alloy of iron, "ferro-silicon," in the United States was introduced first by him, at the Bessie Furnace, New Straitsville, Ohio, 1887–88. In the latter year, he entered the ceramic industries of Ohio, managing several plants until 1893.
In 1894, Orton was appointed the first chairman of a school of ceramic engineering at Ohio State University, the first ceramic engineering school in the United States. This school for instruction in the technology of clay, glass and cement industries was established largely through his efforts. Following in his father's footsteps, Orton was the State Geologist of Ohio from 1899 until 1906. Orton also served as the Dean of the Ohio State College of Engineering from 1902 to 1906 and again from 1910 to 1915.
Orton served as the secretary of the American Ceramic Society from 1899 to 1917 and later as president from 1930 to 1931. From this role he led the organization from its inception and played an important role in its early growth and establishment as a scientific organization.
Orton honored his father with the Orton Memorial Library of Geology, inside Orton Hall at Ohio State University, for perusing the theories and records of earthly change. Orton Hall would later house the Orton Geological Museum.
World War I
In 1916, Orton aided in the drafting of the US National Defense Act. Later that year, during World War I, Orton entered the United States military service. In 1917, he was commissioned a Major in the Officer's Reserve Corps. By 1919, he became a Brigadier General in the Quartermaster's Officers Reserve Corp. On June 2, 1919, he was awarded a Distinguished Service Medal by the United States Congress.
He purchased, created and donated Camp Mary Orton (named after his first wife) to the Godman Guild of Columbus which operated it as a summer camp and retreat for young mothers and their babies.
Later career
He was elected President of the Columbus Chamber of Commerce in 1921 and re-elected for a second term in 1922 (only the second citizen to succeed himself). In 1922, he received a Doctor of Science from Rutgers College. In 1931, he received an honorary degree of Doctor of Laws from Alfred University. Later in 1931, he received the professional degree of Ceramic Engineer from The Ohio State University.
Orton developed a series of pyrometric cones and established the Standard Pyrometric Cone Company to manufacture the cones, which continue to be used. He died in 1932, and in accordance with his will the Edward Orton Jr. Ceramic Foundation was formed as a charitable trust to operate of the Standard Pyrometric Cone Company.
Personal life
Orton married twice, first to Mary Princess Anderson (1888 until her death in 1927) and later to Mina Althea Orton (1928 until his death in 1932).
Publications
Clays of Ohio and the Industries Established Upon Them, in Ohio Geological Survey, v. V, published by The Ohio State University in 1884.
Ceramics 6 (Clay manufacture — pottery) lectures (April 11 – June 6, 1902) (1902)
The Progress of the Ceramic Industry (1903)
with Samuel Vernon Peppel: Limestone Resources & the Lime Industry (1906).
He also published a number of technical articles and reports in periodicals.
References
External links
Orton information, Department of Geological Sciences, OSU
1863 births
1932 deaths
People from Chester, Orange County, New York
American academic administrators
Engineers from New York (state)
American geologists
Philanthropists from New York (state)
Ohio State University College of Engineering alumni
Ohio State University faculty
Businesspeople from Columbus, Ohio
People from Yellow Springs, Ohio
Ceramic engineering
Engineers from Ohio
Scientists from New York (state) | Edward Orton Jr. | Engineering | 890 |
1,927,959 | https://en.wikipedia.org/wiki/Seventh%20Generation%20Inc. | Seventh Generation, Inc. is an American company selling eco-friendly cleaning, paper, and personal care products. Established in 1988, the Burlington, Vermont-based company distributes products to natural food stores, supermarkets, mass merchants, and online retailers. In 2016, Anglo-Dutch consumer goods company Unilever acquired Seventh Generation for an estimated $700 million.
Seventh Generation focuses its marketing and product development on sustainability and the conservation of natural resources. The company uses recycled and post-consumer materials in its packaging, as well as biodegradable, plant-based phosphate-free and chlorine-free ingredients in its products.
The company attributes the name "Seventh Generation" to the "Great Law of the Iroquois". Per the company, the document states, "In our every deliberation, we must consider the impact of our decisions on the next seven generations."
History
1988–1990
In 1988, Alan Newman acquired Renew America, a mail-order catalog that sells energy-, water- and resource-saving products. After giving the catalog a new look, an enhanced mix of products, and a new name, Seventh Generation, Newman embarked on a campaign to raise funding for the venture. The next year, entrepreneur and author of How to Make the World a Better Place, Jeffrey Hollender, joined Newman and helped secure much-needed capital, and a mention in the New York Times increased orders seven-fold within a year.
1991–2000
Newman left Seventh Generation in 1992 to start Magic Hat Brewing Company. Seventh Generation went public the next year on 8 November, raising $7 million.
In 1994, Seventh Generation entered the mass retail market with three products: dishwasher detergent, non-chlorine bleach, and liquid laundry detergent. And in 1995, the company's mail-order catalog business sold to Gaiam, Inc. and Seventh Generation began focusing solely on its wholesale products business.
2001–2021
Hollender stepped aside as CEO in 2009, and former PepsiCo division president Chuck Maniscalco joined the company and took over the role. John Replogle took over as president and CEO in February 2011. Joey Bergstein was named CEO in 2017 after Replogle became chairman of the Seventh Generation Social Mission Board.
In September 2016, Unilever Plc. purchased Seventh Generation for an estimated $700 million. In July 2021, Alison Whritenour became Seventh Generation's first female CEO.
Awards
Seventh Generation has received multiple awards.
2004 Corporate Stewardship Award for Small Business from the United States Chamber of Commerce Center for Corporate Citizenship. Award recipients were selected based on "a demonstration of ethical leadership and corporate stewardship, making a difference in their communities, and contributions to the advancement of important economic and social goals."
Fastest Growing Company in Vermont - 5x5x5 Award from Vermont Business Magazine and KeyBank for "achievements in keeping true to its mission to create healthy products that preserve the environment, every year since 2004."
Ceres-ACCA North American Awards for Sustainability Reporting - Best Small or Medium Enterprise Corporate Responsibility Report, April 2006 - the international competition was sponsored by Ceres (organization), a national network of investment funds, environmental organizations and other public interest groups working to advance environmental stewardship on the part of businesses, in partnership with the Association of Chartered Certified Accountants, and CoVeris, an independent corporate verification firm. Ceres called Seventh Generation's report "a pioneering effort in transparency for a privately owned company."
In 2007, Seventh Generation was named the second fastest growing company in Vermont over the past ten years.
Fast Company Social Capitalist Award 2007 – Fast Company magazine and Monitor Group.
The Microsoft Excellence in Environmental Sustainability Award 2008 - Seventh Generation was recognized as a customer who is "using their business management system in an innovative way to track their initiatives around becoming more environmentally sustainable."
In 2009, the IT department at Seventh Generation was named number eight in Computerworld's "Top Green-IT Organizations."
In 2018, Seventh Generation was recognized as one of "the 50 most sustainable companies in the world" at the SEAL Business Sustainability Awards.
References
External links
Unilever
Chemical companies of the United States
Cleaning products
Chemical companies established in 1988
Manufacturing companies based in Vermont
Benefit corporations | Seventh Generation Inc. | Chemistry | 860 |
765,457 | https://en.wikipedia.org/wiki/Diethyl%20ether | Diethyl ether, or simply ether, is an organic compound with the chemical formula (C₂H₅)₂O, sometimes abbreviated as Et₂O. It is a colourless, highly volatile, sweet-smelling ("ethereal odour"), extremely flammable liquid. It belongs to the ether class of organic compounds. It is a common solvent. It was formerly used as a general anesthetic.
Production
Most diethyl ether is produced as a byproduct of the vapor-phase hydration of ethylene to make ethanol. This process uses solid-supported phosphoric acid catalysts and can be adjusted to make more ether if the need arises: Vapor-phase dehydration of ethanol over some alumina catalysts can give diethyl ether yields of up to 95%.
Diethyl ether can be prepared both in laboratories and on an industrial scale by the acid ether synthesis.
Uses
The dominant use of diethyl ether is as a solvent. One particular application is in the production of cellulose plastics such as cellulose acetate.
Laboratory solvent
It is a common solvent for the Grignard reaction in addition to other reactions involving organometallic reagents. These uses exploit its basicity. Diethyl ether is a popular non-polar solvent in liquid-liquid extraction. As an extractant, it is immiscible with and less dense than water.
Although immiscible, it has significant solubility in water (6.05 g/(100 ml) at 25 °C) and dissolves 1.5 g/(100 g) (1.0 g/(100 ml)) water at 25 °C.
Fuel
Diethyl ether has a high cetane number of 85–96 and, in combination with petroleum distillates for gasoline and diesel engines, is used as a starting fluid because of its high volatility and low flash point. Ether starting fluid is sold and used in countries with cold climates, as it can help with cold starting an engine at sub-zero temperatures. For the same reason it is also used as a component of the fuel mixture for carbureted compression ignition model engines.
Chemical reactions
Triethyloxonium tetrafluoroborate is prepared from boron trifluoride, diethyl ether, and epichlorohydrin.
Diethyl ether is a common laboratory aprotic solvent. It is susceptible to the formation of hydroperoxides.
Metabolism
A cytochrome P450 enzyme is proposed to metabolize diethyl ether.
Diethyl ether inhibits alcohol dehydrogenase, and thus slows the metabolism of ethanol. It also inhibits metabolism of other drugs requiring oxidative metabolism.
For example, diazepam requires hepatic oxidization whereas its oxidized metabolite oxazepam does not.
Safety, stability, regulations
Diethyl ether is extremely flammable and may form explosive vapour/air mixtures.
Since ether is heavier than air it can collect low to the ground and the vapour may travel considerable distances to ignition sources. Ether will ignite if exposed to an open flame, though due to its high flammability, an open flame is not required for ignition. Other possible ignition sources include – but are not limited to – hot plates, steam pipes, heaters, and electrical arcs created by switches or outlets. Vapour may also be ignited by the static electricity which can build up when ether is being poured from one vessel into another. The autoignition temperature of diethyl ether is 160 °C (320 °F). The diffusion coefficient of diethyl ether in air is 9.18 × 10⁻⁶ m²/s (298 K, 101.325 kPa).
Ether is sensitive to light and air, tending to form explosive peroxides. Ether peroxides have a higher boiling point than ether and are contact explosives when dry. Commercial diethyl ether is typically supplied with trace amounts of the antioxidant butylated hydroxytoluene (BHT), which reduces the formation of peroxides. Storage over sodium hydroxide precipitates the intermediate ether hydroperoxides. Water and peroxides can be removed by either distillation from sodium and benzophenone, or by passing through a column of activated alumina.
Due to its application in the manufacturing of illicit substances, it is listed as a Table II precursor under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances, alongside substances such as acetone, toluene and sulfuric acid.
History
The compound may have been synthesised by either Jābir ibn Hayyān in the 8th century or Ramon Llull in 1275. It was synthesised in 1540 by Valerius Cordus, who called it "sweet oil of vitriol" (oleum dulce vitrioli) – the name reflects the fact that it is obtained by distilling a mixture of ethanol and sulfuric acid (then known as oil of vitriol) – and noted some of its medicinal properties. At about the same time, Paracelsus discovered the analgesic properties of the molecule in dogs. The name ether was given to the substance in 1729 by August Sigmund Frobenius.
It was considered to be a sulfur compound until the idea was disproved in about 1800.
The synthesis of diethyl ether by a reaction between ethanol and sulfuric acid has been known since the 13th century.
Anesthesia
William T. G. Morton participated in a public demonstration of ether anesthesia on October 16, 1846, at the Ether Dome in Boston, Massachusetts. Morton had called his ether preparation, with aromatic oils to conceal its smell, "Letheon" after the Lethe River (Λήθη, meaning "forgetfulness, oblivion"). However, Crawford Williamson Long is now known to have demonstrated its use privately as a general anesthetic in surgery to officials in Georgia, as early as March 30, 1842, and Long publicly demonstrated ether's use as a surgical anesthetic on six occasions before the Boston demonstration. British doctors were aware of the anesthetic properties of ether as early as 1840 where it was widely prescribed in conjunction with opium. Diethyl ether was preferred by some practitioners over chloroform as a general anesthetic due to ether's more favorable therapeutic index, that is, a greater difference between an effective dose and a potentially toxic dose.
Diethyl ether does not depress the myocardium but rather it stimulates the sympathetic nervous system leading to hypertension and tachycardia. It is safely used in patients with shock as it preserves the baroreceptor reflex. Its minimal effect on myocardial depression and respiratory drive, as well as its low cost and high therapeutic index allows it to see continued use in developing countries. Diethyl ether could also be mixed with other anesthetic agents such as chloroform to make C.E. mixture, or chloroform and alcohol to make A.C.E. mixture. In the 21st century, ether is rarely used. The use of flammable ether was displaced by nonflammable fluorinated hydrocarbon anesthetics. Halothane was the first such anesthetic developed and other currently used inhaled anesthetics, such as isoflurane, desflurane, and sevoflurane, are halogenated ethers. Diethyl ether was found to have undesirable side effects, such as post-anesthetic nausea and vomiting. Modern anesthetic agents reduce these side effects.
Prior to 2005, it was on the World Health Organization's List of Essential Medicines for use as an anesthetic.
Medicine
Ether was once used in pharmaceutical formulations. A mixture of alcohol and ether, one part of diethyl ether and three parts of ethanol, was known as "Spirit of ether", Hoffman's Anodyne or Hoffman's Drops. In the United States this concoction was removed from the Pharmacopeia at some point prior to June 1917, as a study published by William Procter, Jr. in the American Journal of Pharmacy as early as 1852 showed that there were differences in formulation to be found between commercial manufacturers, between international pharmacopoeia, and from Hoffman's original recipe. It is also used to treat hiccups through instillation into the nasal cavity.
Recreational abuse
The recreational use of ether also took place at organised parties in the 19th century called ether frolics, where guests were encouraged to inhale therapeutic amounts of diethyl ether or nitrous oxide, producing a state of excitation. Long, as well as fellow dentists Horace Wells, William Edward Clarke and William T. G. Morton observed that during these gatherings, people would often experience minor injuries but appear to show no reaction to the injury, nor memory that it had happened, demonstrating ether's anaesthetic effects.
In the 19th century and early 20th century ether drinking was popular among Polish peasants. It is a traditional and still relatively popular recreational drug among Lemkos. It is usually consumed in a small quantity (kropka, or "dot") poured over milk, sugar water, or orange juice in a shot glass. As a drug, it has been known to cause psychological dependence, sometimes referred to as etheromania.
See also
The Great Moment – film about William T.G. Morton and ether
Flurothyl – fluorinated derivative
Explanatory notes
References
External links
Michael Faraday's announcement of ether as an anesthetic in 1818
Calculation of vapor pressure, liquid density, dynamic liquid viscosity, surface tension of diethyl ether, ddbonline.ddbst.de
CDC – NIOSH Pocket Guide to Chemical Hazards
Dialkyl ethers
General anesthetics
Dissociative drugs
Euphoriants
Fuels
Ether solvents
GABAA receptor positive allosteric modulators
NMDA receptor antagonists
Glycine receptor agonists
Symmetrical ethers
Sweet-smelling chemicals | Diethyl ether | Chemistry | 2,050 |
14,875,238 | https://en.wikipedia.org/wiki/BBS1 | Bardet–Biedl syndrome 1 protein is a protein that in humans is encoded by the BBS1 gene.
BBS1 is part of the BBSome complex, which is required for ciliogenesis.
Mutations in this gene have been observed in patients with the major form (type 1) of Bardet–Biedl syndrome.
History
Research results have indicated that the encoded protein may play a role in eye, limb, cardiac and reproductive system development.
References
External links
GeneReviews/NIH/NCBI/UW entry on Bardet–Biedl syndrome
Further reading | BBS1 | Chemistry | 123 |
1,552,505 | https://en.wikipedia.org/wiki/Vickers%20hardness%20test | The Vickers hardness test was developed in 1921 by Robert L. Smith and George E. Sandland at Vickers Ltd as an alternative to the Brinell method to measure the hardness of materials. The Vickers test is often easier to use than other hardness tests since the required calculations are independent of the size of the indenter, and the indenter can be used for all materials irrespective of hardness. The basic principle, as with all common measures of hardness, is to observe a material's ability to resist plastic deformation from a standard source.
The Vickers test can be used for all metals and has one of the widest scales among hardness tests.
The unit of hardness given by the test is known as the Vickers Pyramid Number (HV) or Diamond Pyramid Hardness (DPH). The hardness number can be converted into units of pascals, but should not be confused with pressure, which uses the same units. The hardness number is determined by the load over the surface area of the indentation and not the area normal to the force, and is therefore not pressure.
Implementation
It was decided that the indenter shape should be capable of producing geometrically similar impressions, irrespective of size; the impression should have well-defined points of measurement; and the indenter should have high resistance to self-deformation. A diamond in the form of a square-based pyramid satisfied these conditions. It had been established that the ideal size of a Brinell impression was of the ball diameter. As two tangents to the circle at the ends of a chord 3d/8 long intersect at 136°, it was decided to use this as the included angle between plane faces of the indenter tip. This gives an angle from each face normal to the horizontal plane normal of 22° on each side. The angle was varied experimentally and it was found that the hardness value obtained on a homogeneous piece of material remained constant, irrespective of load. Accordingly, loads of various magnitudes are applied to a flat surface, depending on the hardness of the material to be measured. The HV number is then determined by the ratio F/A, where F is the force applied to the diamond in kilograms-force and A is the surface area of the resulting indentation in square millimeters.
The surface area is calculated from the indent diagonals as
A = d²/(2·sin(136°/2))
which can be approximated by evaluating the sine term to give
A ≈ d²/1.8544
where d is the average length of the diagonal left by the indenter in millimeters. Hence,
HV = F/A ≈ 1.8544·F/d² [kgf/mm²]
where F is in kgf and d is in millimeters.
The corresponding unit of HV is then the kilogram-force per square millimeter (kgf/mm2). In the above equation, F could be in N and d in mm, giving HV in the SI unit of MPa. To calculate the Vickers hardness number (VHN) using SI units one needs to convert the force applied from newtons to kilogram-force by dividing by 9.806 65 (standard gravity). This leads to the following equation:

HV ≈ 0.1891 F/d2,

where F is in N and d is in millimeters. A common error is to assume that the above formula yields a number with the unit newton per square millimeter (N/mm2); it results directly in the Vickers hardness number (usually given without units), which is in fact in kilogram-force per square millimeter (1 kgf/mm2).
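For illustration, a minimal Python sketch of these formulas (the helper names are ours, not part of any standard):

def vickers_hv(force_kgf, diagonal_mm):
    # HV = F/A with A approximated by d**2 / 1.8544
    return 1.8544 * force_kgf / diagonal_mm ** 2

def vickers_hv_si(force_n, diagonal_mm):
    # Same number from a load in newtons: divide by standard gravity first
    return 1.8544 * (force_n / 9.80665) / diagonal_mm ** 2  # ~ 0.1891 F/d**2

# Example: a 30 kgf load leaving a 0.38 mm mean diagonal gives about 385 HV30
print(round(vickers_hv(30, 0.38)))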
Vickers hardness numbers are reported as xxxHVyy, e.g. 440HV30, or as xxxHVyy/zz if the duration of force application differs from the standard 10 s to 15 s, e.g. 440HV30/20, where:
440 is the hardness number,
HV names the hardness scale (Vickers),
30 indicates the load used in kgf, and
20 indicates the loading time if it differs from the standard 10 s to 15 s.
Precautions
When doing the hardness tests, the minimum distance between indentations and the distance from the indentation to the edge of the specimen must be taken into account to avoid interaction between the work-hardened regions and effects of the edge. These minimum distances are different for ISO 6507-1 and ASTM E384 standards.
Vickers values are generally independent of the test force: they will come out the same for 500 gf and 50 kgf, as long as the force is at least 200 gf. However, lower load indents often display a dependence of hardness on indent depth known as the indentation size effect (ISE). Small indent sizes will also have microstructure-dependent hardness values.
For thin samples, indentation depth can be an issue due to substrate effects. As a rule of thumb, the sample thickness should be kept greater than 2.5 times the indent diameter. Alternatively, the indent depth, h, can be calculated from the indenter geometry as h = d/(2√2 tan(136°/2)) ≈ d/7.0006.
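A short Python sketch of both checks, assuming the d/7 depth approximation above and taking the mean diagonal as the indent diameter:

import math

def indent_depth_mm(diagonal_mm):
    # h = d / (2 * sqrt(2) * tan(136/2 degrees)), approximately d / 7.0006
    return diagonal_mm / (2 * math.sqrt(2) * math.tan(math.radians(68)))

def thickness_sufficient(sample_thickness_mm, diagonal_mm):
    # Rule of thumb from the text: thickness greater than 2.5 x indent diameter
    return sample_thickness_mm > 2.5 * diagonal_mm

print(round(indent_depth_mm(0.38), 3))   # 0.054 mm deep indent
print(thickness_sufficient(1.0, 0.38))   # True: 1.0 mm > 0.95 mm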
Conversion to SI units
To convert the Vickers hardness number to SI units the hardness number in kilograms-force per square millimeter (kgf/mm2) has to be multiplied by the standard gravity, g0 = 9.806 65 m/s2, to get the hardness in MPa (N/mm2), and furthermore divided by 1000 to get the hardness in GPa.
Vickers hardness can also be converted to an SI hardness based on the projected area of the indent rather than the surface area. The projected area, Ap, is defined for a Vickers indenter geometry as Ap = d2/2, where d is the mean diagonal of the indent.
This hardness is sometimes referred to as the mean contact area or Meyer hardness, and ideally can be directly compared with other hardness tests also defined using projected area. Care must be used when comparing other hardness tests due to various size scale factors which can impact the measured hardness.
Estimating tensile strength
If HV is first expressed in N/mm2 (MPa), or otherwise by converting from kgf/mm2, then the tensile strength (in MPa) of the material can be approximated as σTS ≈ HV/c, where c is a constant determined by yield strength, Poisson's ratio, work-hardening exponent and geometrical factors, usually ranging between 2 and 4. In other words, if HV is expressed in N/mm2 (i.e. in MPa), and c is taken as 3, then the tensile strength (in MPa) ≈ HV/3. This empirical law depends variably on the work-hardening behavior of the material.
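A rough numerical sketch of the approximation in Python (c = 3 is only one common choice within the stated 2–4 range):

def tensile_strength_mpa(hv_kgf_mm2, c=3.0):
    # Convert HV (kgf/mm2) to MPa via standard gravity, then divide by c
    return hv_kgf_mm2 * 9.80665 / c

# Example: a 200 HV material is estimated at roughly 650 MPa tensile strength
print(round(tensile_strength_mpa(200)))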
Application
The fin attachment pins and sleeves in the Convair 580 airliner were specified by the aircraft manufacturer to be hardened to a Vickers Hardness specification of 390HV5, the '5' meaning five kiloponds. However, on the aircraft flying Partnair Flight 394 the pins were later found to have been replaced with sub-standard parts, leading to rapid wear and finally loss of the aircraft. On examination, accident investigators found that the sub-standard pins had a hardness value of only some 200–230HV5.
See also
Indentation hardness
Leeb Rebound Hardness Test
Hardness comparison
Knoop hardness test
Meyer hardness test
Mohs scale
Rockwell scale
Vickers toughness test of ceramics
Superhard material
References
Further reading
ASTM E92: Standard method for Vickers hardness of metallic materials (withdrawn and replaced by E384-10e2)
ASTM E384: Standard Test Method for Knoop and Vickers Hardness of Materials
ISO 6507-1: Metallic materials – Vickers hardness test – Part 1: Test method
ISO 6507-2: Metallic materials – Vickers hardness test – Part 2: Verification and calibration of testing machines
ISO 6507-3: Metallic materials – Vickers hardness test – Part 3: Calibration of reference blocks
ISO 6507-4: Metallic materials – Vickers hardness test – Part 4: Tables of hardness values
ISO 18265: Metallic materials – Conversion of Hardness Values
External links
Video on the Vickers hardness test
Vickers hardness test
Conversion table – Vickers, Brinell, and Rockwell scales
Hardness tests
de:Härte#Härteprüfung nach Vickers (HV) | Vickers hardness test | Materials_science | 1,612 |
33,934,518 | https://en.wikipedia.org/wiki/C26H35N3O2 | The molecular formula C26H35N3O2 may refer to:
1H-LSD
Mazapertine | C26H35N3O2 | Chemistry | 42 |
44,571,310 | https://en.wikipedia.org/wiki/Contextual%20searching | Contextual search is a form of optimizing web-based search results based on context provided by the user and the computer being used to enter the query. Contextual search services differ from current search engines based on traditional information retrieval that return lists of documents based on their relevance to the query. Rather, contextual search attempts to increase the precision of results based on how valuable they are to individual users.
Basic contextual search
The basic form of contextual search is the process of scanning the full text of a query in order to understand what the user needs. Web search engines scan HTML pages for content and return an index rating based on how relevant the content is to the entered query. HTML pages that have a higher occurrence of query keywords within their content are rated higher. Users have limited control over the context of their query based on the words they use to search with. For example, users looking for the menu portion of a website can add “menu” to the end of their query to provide the search engine with context of what they need. The next step in contextualizing search is for the search service itself to request information that narrows down the results, such as Google asking for a time range to search within.
Explicitly supplied context
Certain search services, including many metasearch engines, request individual contextual information from users to increase the precision of returned documents. Inquirus 2 is a metasearch engine that acts as a mediator between the user query and other search engines. When searching on Inquirus 2, users enter a query and specify constraints such as the information need category, maximum number of hits, and display formats. For example, a user looking for research papers can specify documents with “references” or “abstracts” to be rated higher. If another user is searching for general information on the topic rather than research papers, they can specify the GenScore attribute to have a heavier weight.
Explicitly supplied context effectively increases the precision of results, however, these search services tend to suffer from poor user-experience. Learning the interface of programs like Inquirus can prove challenging for general users without knowledge of search metrics. Aspects of supplied context do appear on major search engines with better user-interaction such as Google and Bing. Google allows users to filter by type: Images, Maps, Shopping, News, Videos, Books, Flights, and Apps. Google has an extensive list of search operators that allow users to explicitly limit results to fit their needs such as restricting certain file types or removing certain words. Bing also uses a similar set of search operators to assist users in explicitly narrowing down the context of their queries. Bing allows users to search within a time range, by file type, by location, language, and more.
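As a rough illustration of how these operators narrow a query's context before it is submitted, a client could assemble the query string programmatically; the helper below is hypothetical, while the site:, filetype: and minus-exclusion operators are the documented ones mentioned above:

def with_context(query, site=None, filetype=None, exclude=()):
    parts = [query]
    if site:
        parts.append("site:" + site)              # restrict results to one domain
    if filetype:
        parts.append("filetype:" + filetype)      # restrict results to a file type
    parts.extend("-" + word for word in exclude)  # remove unwanted words
    return " ".join(parts)

print(with_context("contextual search", filetype="pdf", exclude=["shopping"]))
# contextual search filetype:pdf -shopping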
Automatically inferred context
There are other systems being developed that are working on automatically inferring the context of user queries based on the content of other documents they view or edit. IBM's Watson Project aims to create a cognitive technology that dynamically learns as it processes user queries. When presented with a query Watson creates a hypothesis that is evaluated against its present bank of knowledge based on previous questions. As related terms and relevant documents are matched against the query, Watson's hypothesis is modified to reflect the new information provided through unstructured data based on information it has obtained in previous situations. Watson's ability to build off previous knowledge allows queries to be automatically filtered for similar contexts in order to supply precise results.
Major search services such as Google, Bing, and Yahoo also have a system of automatically inferring the context of particular user queries. Google tracks users' previous queries and selected results to further personalize results for those individuals. For example, if a user consistently searches for articles related to animals, wild animals, or animal care, a search for "jaguar" would rank an article on jaguar cats higher than links to Jaguar Cars. Similar to Watson, search services strive to learn from users based on previous experiences to automatically provide context on current queries. Bing also provides automatic context for particular queries based on the content of the query itself. A search of "pizza" returns an interactive list of restaurants and their ratings based on the approximate location of the user's computer. The Bing server automatically infers that when a user searches for a food item, they are interested in documents within the context of purchasing that food item or finding restaurants that sell that particular item.
Contextual mobile search
The drive to develop better contextualized search coincides with the increasing popularity of using mobile phones to complete searches. The marketing research firm BIA/Kelsey projected that by 2015 mobile local search would "exceed local search by more than 27 billion queries". Mobile phones provide the opportunity to supply search services with a broader range of contextual information, particularly for location services, but also for personalized searches based on the wealth of information stored locally on the phone, including contact information, geometric data such as speed and elevation, and installed apps.
References
Internet search engines
Semantic Web
Information retrieval techniques
Internet terminology | Contextual searching | Technology | 1,016 |
38,213,645 | https://en.wikipedia.org/wiki/Centre%20for%20Earthquake%20Studies | The Centre for Earthquake Studies (CES) () is a federally funded research institute and national laboratory dedicated to the advancement in understanding of natural vibration, seismology, and yield-based energy measurement of seismic waves.
The CES was established through federal funding as a direct response to the devastating 2005 Kashmir earthquake, in order to study earthquakes and provide scientific predictions of quakes that improve earthquake preparedness. The CES is the only national site in Pakistan working on earthquake precursors.
The national laboratory is headquartered in the campus area of the National Centre for Physics (NCP) and conducts mathematical research in earth sciences, in close coordination with the NCP.
History
The national site was founded by the Government of Pakistan on the advice of the science adviser Dr. Ishfaq Ahmad. The establishment of the national site came in response to Pakistan's deadliest earthquake, the 2005 Kashmir earthquake of 8 October 2005. Initially created as the Earthquake Studies Department at the National Centre for Physics, it gained independence shortly after its establishment. The CES undertakes research studies to develop expertise in anomalous geophysical phenomena prior to seismic activity. The CES primarily produces its research outcomes by using computer simulation and mathematical modelling to interpret seismic activity and give earthquake predictions.
The CES's campus also includes the ATROPATENA stations network, and supports its research and development in close collaboration with the Global Network for the Forecasting of Earthquakes. Its first and founding director was Dr. Ahsan Mubarak, who is still designated as the CES's senior scientist. Dr. Muhammad Qaisar is the CES's current administrator.
Galleries
See also
2005 Pakistan earthquake
Notes
Official links
Official website
Nuclear weapons programme of Pakistan
Pakistan federal departments and agencies
2005 establishments in Pakistan
Geology organizations
Laboratories in Pakistan
Earth science research institutes
International research institutes
Research institutes in Pakistan
Science parks in Pakistan
2005 Kashmir earthquake
Earthquake engineering | Centre for Earthquake Studies | Engineering | 385 |
31,978,886 | https://en.wikipedia.org/wiki/Wiener%20amalgam%20space | In mathematics, amalgam spaces categorize functions with regard to their local and global behavior. While the concept of function spaces treating local and global behavior separately was already known earlier, Wiener amalgams, as the term is used today, were introduced by Hans Georg Feichtinger in 1980. The concept is named after Norbert Wiener.
Let $B$ be a normed space with norm $\|\cdot\|_B$. Then the Wiener amalgam space $W(B, C)$ with local component $B$ and global component $C$, a weighted $L^p$ space with non-negative weight $w$, is defined by

$$W(B, L^p_w) = \left\{ f : \left( \int_{\mathbb{R}^d} \| f \cdot T_x \psi \|_B^p \, w(x)^p \, dx \right)^{1/p} < \infty \right\},$$

where $T_x$ denotes translation by $x$ and $\psi$ is a continuously differentiable, compactly supported function such that $\sum_{n \in \mathbb{Z}^d} \psi(x - n) = 1$ for all $x \in \mathbb{R}^d$. Again, the space defined is independent of the choice of $\psi$. As the definition suggests, Wiener amalgams are useful to describe functions showing characteristic local and global behavior.
References
Function spaces | Wiener amalgam space | Mathematics | 157 |
5,077,059 | https://en.wikipedia.org/wiki/Jack%20PC | Jack PC is a thin client device that is approximately the size of a network wall port. Its design allows a monitor, keyboard, and mouse to plug straight into the wall-mounted unit. Jack PC operates in an SBC (Server Based Computing) environment.
The Jack PC thin client computers are connected at the back side through Ethernet cables to the building's LAN and receive Power over Ethernet (or 802.3af) through the existing enterprise infrastructure.
Jack PC is also notable in that it consumes very little power. Some tests have found it to consume as little as 5 W, not counting monitor and other external peripherals.
See also
Thin clients
Chip PC
External links
Chip PC Technologies' Website
Chip PC Thin Clients
The Jack PC Thin Client
HD PC+ Thin Client
Remote desktop
Thin clients | Jack PC | Technology | 162 |
24,876,073 | https://en.wikipedia.org/wiki/Critical%20Manufacturing | Critical Manufacturing is a subsidiary of ASMPT. It was founded in 2009 and is focused on providing automation and manufacturing software for high-tech industries, such as semiconductor, electronics, medical devices and industrial equipment. It has offices in Portugal, China, Germany, Malaysia, Mexico and USA. In 2018, it became a subsidiary of ASM Pacific Technology Limited.
Products
The company's flagship product is Critical Manufacturing MES, a next-generation manufacturing operations management system.
Critical Manufacturing MES uses technologies from Microsoft, providing an Internet application user experience.
References
External links
Official Website
Critical Software
Software companies of Portugal
Industrial automation | Critical Manufacturing | Engineering | 122 |
76,591,897 | https://en.wikipedia.org/wiki/The%20Jacobaeus%20Prize | The Jacobaeus Prize (also known as the "Jacobæus Prize") is regarded as a prestigious recognition within the field of medical research. It is an annual award given to individuals who have made significant contributions to the advancement of medical science, particularly in the areas of physiology or endocrinology.
Background
Named after Hans Christian Jacobæus, a pioneering Swedish physician and researcher known for his contributions to the development of laparoscopy and thoracoscopy, the prize aims to honor those who continue to push the boundaries of medical research.
The prize was established to commemorate the legacy of Hans Christian Jacobæus, whose innovative work in the early 20th century laid the groundwork for minimally invasive surgery techniques. The award is sponsored by the Novo Nordisk Foundation.
Recipients
List of recipients of The Jacobæus Prize over the years:
References
Medicine awards
Awards established in 1985
Danish science and technology awards | The Jacobaeus Prize | Technology | 185 |
54,181,978 | https://en.wikipedia.org/wiki/Coma%20Filament | The Coma Filament is a galaxy filament. The filament contains the Coma Supercluster of galaxies and forms a part of the CfA2 Great Wall.
See also
Abell catalogue
Large-scale structure of the universe
Supercluster
References
Galaxy filaments
Large-scale structure of the cosmos
Astronomical objects discovered in 1985 | Coma Filament | Astronomy | 71 |
71,719,880 | https://en.wikipedia.org/wiki/Time%20in%20the%20Central%20African%20Republic | The Central African Republic (CAR) observes a single time zone year-round, denoted as West Africa Time (WAT; UTC+01:00).
IANA time zone database
In the IANA time zone database, the Central African Republic is given one zone in the file zone.tab—Africa/Bangui. "CF" refers to the country's ISO 3166-1 alpha-2 country code. The data for the Central African Republic comes directly from zone.tab of the IANA time zone database.
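As an illustration, the zone name can be used directly with any library backed by the IANA database; a minimal sketch using Python's standard zoneinfo module (Python 3.9 or later):

from datetime import datetime
from zoneinfo import ZoneInfo

bangui = ZoneInfo("Africa/Bangui")   # the country's single IANA zone
now = datetime.now(bangui)
print(now.isoformat())               # offset is +01:00 year-round
print(now.tzname())                  # "WAT"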
See also
Time in Africa
List of time zones by country
References
External links
Current time in the Central African Republic at Time.is
Time in the Central African Republic at TimeAndDate.com
Time by country
Geography of the Central African Republic
Time in Africa | Time in the Central African Republic | Physics | 167 |
5,086,112 | https://en.wikipedia.org/wiki/Beaker%20%28laboratory%20equipment%29 | In laboratory equipment, a beaker is generally a cylindrical container with a flat bottom. Most also have a small spout (or "beak") to aid pouring, as shown in the picture. Beakers are available in a wide range of sizes, from one milliliter up to several liters. A beaker is distinguished from a flask by having straight rather than sloping sides. The exception to this definition is a slightly conical-sided beaker called a Philips beaker. The beaker shape in general drinkware is similar.
Beakers are commonly made of glass (today usually borosilicate glass), but can also be made of metal (such as stainless steel or aluminum) or certain plastics (notably polythene, polypropylene, and PTFE). A common use for polypropylene beakers is gamma spectral analysis of liquid and solid samples.
Construction and use
Standard or "low-form" (A) beakers typically have a height about 1.4 times the diameter. The common low form with a spout was devised by John Joseph Griffin and is therefore sometimes called a Griffin beaker. These are the most universal character and are used for various purposes—from preparing solutions and decanting supernatant fluids to holding waste fluids prior to disposal to performing simple reactions. Low form beakers are likely to be used in some way when performing a chemical experiment. "Tall-form" (B) beakers have a height about twice their diameter. These are sometimes called Berzelius beakers, after Jöns Jacob Berzelius, and are mostly used for titration. Flat beakers (C) are often called "crystallizers" because most are used to perform crystallization, but they are also often used as a vessel for use in hot-bath heating. These beakers usually do not have a flat scale.
The presence of a spout means that the beaker cannot have a lid. However, when in use, beakers may be covered by a watch glass to prevent contamination or loss of the contents, but allowing venting via the spout. Alternatively, a beaker may be covered with another larger beaker that has been inverted, though a watch glass is preferable.
Beakers are often graduated, that is, marked on the side with lines indicating the volume contained. For instance, a 250 mL beaker might be marked with lines to indicate 50, 100, 150, 200, and 250 mL of volume. These marks are not intended for obtaining a precise measurement of volume (a graduated cylinder or a volumetric flask would be a more appropriate instrument for such a task), but rather an estimation. Most beakers are accurate to within ~10%.
Standards
DIN EN ISO 3819:2015-12 defines the following types and sizes:
See also
Beaker (drinkware)
Beaker (archaeology)
Beaker (disambiguation)
Volumetric flask
Schott bottle
Stirring rod
Test tube
Graduated cylinder
Scoop
References
Further reading
ASTM E960 - 93 (2008) Standard Specification for Laboratory Glass Beakers
External links
Volumetric instruments
Laboratory glassware
Drinkware | Beaker (laboratory equipment) | Technology,Engineering | 641 |
17,387,711 | https://en.wikipedia.org/wiki/Animal%20pound | An animal pound is a place where stray livestock were impounded. Animals were kept in a dedicated enclosure, until claimed by their owners, or sold to cover the costs of impounding.
Etymology
The terms "pinfold" and "pound" are Saxon in origin. Pundfald and pund both mean an enclosure. There appears to be no difference between a pinfold and a village pound.
The person in charge of the pinfold was the "pinder", giving rise to the surname Pinder.
Village pound or pinfold
The village pound was a feature of most English medieval villages, and they were also found in the English colonies of North America and in Ireland.
A high-walled and lockable structure served several purposes; the most common use was to hold stray sheep, pigs and cattle until they were claimed by the owners, usually for the payment of a fine or levy. The pound could be as small as or as big as and may be circular or square. Early pounds had just briar hedges, but most were built in stone or brick, making them more stock-proof.
The size and shape of village pounds varies. Some are four-sided—rectangular, square and irregular—while others are circular. In size they vary from a few square metres (some square feet) to over . Pounds are known to date from the medieval period. By the 16th century most villages and townships would have had a pound. Most of what remain today would date from the 16th and 17th centuries. Some are listed buildings, but most have fallen into disrepair.
The Sussex County Magazine in 1930 stated:
Although pounds are most common to England, there are also examples in other countries. In Americans and Their Forests: a Historical Geography, author Michael Williams writes:
"There was hardly a town in eighteenth-century New England without its town pound..."
In some mountainous areas of northern Spain (such as Cantabria or Asturias) some similar enclosures are traditionally used to protect beehives from bear attacks.
Cultural references
The artist Andy Goldsworthy has produced a series of sculptures in several of the pinfolds in Cumbria.
See also
Kraal
Pen (enclosure)
Scarisbrick, Lancashire, in which is the hamlet of Pinfold
List of extant pinfolds in Cheshire
Village lock-up
Poundmaster
Notes
References
External links
photos of examples of village pounds today on geograph
Google maps aerial view of a pinfold in Hougham, Lincolnshire
Agricultural buildings
Animal equipment
Animal welfare
Buildings and structures used to confine animals
Society in medieval England | Animal pound | Biology | 515 |
28,846,445 | https://en.wikipedia.org/wiki/2MASS%201507%E2%88%921627 | 2MASS J15074769−1627386 (also abbreviated to 2MASS 1507−1627) is a brown dwarf in the constellation Libra, located about 23.9 light-years from Earth. It was discovered in 1999 by I. Neill Reid et al. It belongs to the spectral class L5; its surface temperature is 1,300 to 2,000 kelvins. As with other brown dwarfs of spectral type L, its spectrum is dominated by metal hydrides and alkali metals. Its spectrum also has a weak silicate absorption band and highly variable water absorption band, indicating complicated clouds and haze structures.
The brown dwarf is suspected to have a substellar companion (planet) on a wide orbit with a period of over 10 years.
References
External links
Entry at DwarfArchives.org
Libra (constellation)
Brown dwarfs
J15074769-1627386
L-type brown dwarfs
Hypothetical planetary systems | 2MASS 1507−1627 | Astronomy | 198 |
55,130,585 | https://en.wikipedia.org/wiki/Colterol | Colterol is a short-acting β2-adrenoreceptor agonist. Bitolterol, a prodrug for colterol, is used in the management of bronchospasm in asthma and chronic obstructive pulmonary disease (COPD).
References
Beta2-adrenergic agonists | Colterol | Chemistry | 75 |
36,732,217 | https://en.wikipedia.org/wiki/Parasoft%20C/C%2B%2Btest | Parasoft C/C++test is an integrated set of tools for testing C and C++ source code that software developers use to analyze, test, find defects, and measure the quality and security of their applications. It supports software development practices that are part of development testing, including static code analysis, dynamic code analysis, unit test case generation and execution, code coverage analysis, regression testing, runtime error detection, requirements traceability, and code review. It's a commercial tool that supports operation on Linux, Windows, and Solaris platforms as well as support for on-target embedded testing and cross compilers.
Overview
Parasoft C/C++test is a combined set of tools that helps developers test their software. It's delivered as a standalone application that runs from the command line, or as a plug-in to Eclipse or Microsoft Visual Studio. Various modules in the set assist software developers in performing static and dynamic analysis, creating, executing and maintaining unit tests, measuring code coverage and other software metrics, and executing regression tests.
The errors that C/C++test discovers include uninitialized or invalid memory, null pointer dereferencing, array and buffer overflow, division by zero, memory and resource leaks, duplicate code, and various types of dead or unreachable code.
C/C++test customers include Samsung Electronics, Wipro, NEC, and SELEX Sistemi Integrati. It is also used by Lockheed Martin for the F-35 Joint Strike Fighter (JSF) program. Inomed uses it to achieve IEC 62304 certification for their medical device software.
Basic functionality
Code coverage
When testing software, code coverage is a measure of which parts of the code have been executed during a test, and which have not. There are many different methods for measuring coverage, each with different criteria for how it is calculated. Depending on the application's needs, developers can choose the method that is the best fit.
C/C++test includes options for line coverage (whether each line has been executed), block coverage, statement coverage, path coverage, decision coverage, branch coverage, and simple condition coverage. It also supports modified condition/decision coverage (MC/DC), because projects that require safe, reliable software, such as aircraft and cars, tend to require this form of coverage, as it is believed to be a better measure of whether the code has been thoroughly exercised.
Regression testing
Regression testing verifies that software continues to operate correctly, even as changes are made and new versions are released. C/C++test automatically generates tests that capture the current state of an application's behavior by recording what happens while the application is running. Later test runs are compared against stored results from earlier runs, which helps determine what problems changes in the code may have introduced. Having a robust regression test suite is especially critical in areas with short release cycles and high degrees of test automation, such as agile software development or extreme programming, to help ensure that changes aren't introducing bugs into the software.
Runtime error detection
C/C++test includes a lightweight form of runtime error detection that is suitable for use in embedded systems, including running on a target board or host. It helps find serious runtime defects such as memory leaks, null pointers, uninitialized memory, and buffer overflows.
Software metrics
Software metrics are used to help assess and improve software quality. Some metrics are used to help determine where bug-prone code might be, while others help understand maintainability and proper construction. C/C++test provides a variety of software metrics including traditional counting metrics of lines, files, comments, methods, etc. as well as industry standards like fan out, cyclomatic complexity, cohesion, and various Halstead metrics.
Users can configure which metrics they want to run and where applicable can set thresholds for what's an acceptable value for a particular metric. This allows users to flag code that is outside the expected range as an error to be reviewed or fixed. Graphic reports are provided to show values and trends in the metrics.
Static analysis
Static code analysis is the process of analyzing source code without executing the software. It helps developers find bugs early and write code according to best practices. This helps create code that is less susceptible to bugs by avoiding potentially dangerous code styles and constructs. In industries where software performance is critical, there are often requirements to run static analysis tools or even particular static analysis rules.
Static analysis in C/C++test includes different types of analysis including pattern-based, abstract interpretation, flow analysis, and metrics. This helps detect code responsible for memory leaks, erratic behavior, crashes, deadlocks, and security vulnerabilities.
C/C++test comes with pre-configured templates to assist enforcing static analysis rules for a variety of industry standards such as:
ANSI IEC 62304 for medical devices
DO-178B for airborne systems
IEC 61508 & Safety Integrity Level for functional safety of electronic systems
U.S. FDA general principles of software validation for medical software
ISO 26262 & ASIL for automotive software
Joint Strike Fighter Program for fighter aircraft
Safety-critical software development
Motor Industry Software Reliability Association (MISRA) for automotive software
PCI DSS Payment Card Industry data security standard
DISA STIG for defense industry systems and software
Traceability
When working in industries where there are strict coding requirements or regulatory standards, it is necessary to be able to prove that an application was developed according to the required steps. Traceability means having all the information necessary to prove in a software audit that the proper process was followed. Commonly this means being able to prove what code belongs to a particular requirement, as well as who reviewed it and what the outcome of such a review was. It also encompasses any tests and analysis performed on the code and what was done for any tests that failed. C/C++test keeps track of testing and links it back to the requirements system, source control system, and bug tracking systems. This provides full traceability into each step of the software development process.
Unit testing
The purpose of unit testing is to make sure that all of the individual pieces of a software application work properly by themselves before integration. In programming languages like C and C++ this usually consists of a single file, or a small number of files that all perform a related function. Unit testing encompasses the creation of tests, execution of tests to see the results, and maintenance of tests for long term use. Because unit testing is often associated with code coverage which shows exactly what lines of code were executed by a test, both functionalities are included in C/C++test.
C++test helps you create unit tests that are compatible with xUnit testing frameworks. It also provides tracing functionality that lets you monitor a system under test and generate test cases based on actual paths and data used during the execution. It also provides functionality to handle isolating the code necessary to allow it to function without the rest of the application, also called stubbing, as well as an object repository to store, share, and reuse software objects initialized with the necessary test data. Stubs allow you to remove dependent parts of the full application such as a database or API but still run the application as if the component were still there. C/C++test allows you to create the necessary stubs to run your code in isolation.
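The stubbing pattern itself is language-independent; as a hedged sketch in Python's standard unittest framework (the sensor functions are hypothetical stand-ins for a hardware dependency):

import unittest
from unittest.mock import patch

def read_sensor():
    # The real dependency: unavailable off the target hardware
    raise RuntimeError("needs real hardware")

def sensor_fahrenheit():
    # Unit under test: converts the sensor's Celsius reading
    return read_sensor() * 9 / 5 + 32

class TestSensorFahrenheit(unittest.TestCase):
    def test_freezing_point(self):
        # Replace the dependency with a stub so the unit runs in isolation
        with patch(__name__ + ".read_sensor", return_value=0.0):
            self.assertEqual(sensor_fahrenheit(), 32.0)

if __name__ == "__main__":
    unittest.main()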
The capability to alter and extend test data is provided through a variety of means, such as a data source interface that allows test inputs to be read from files, spreadsheets, and databases. Tests can also be run simultaneously with runtime error detection turned on, so as to find serious programming flaws that won't necessarily cause assertion failures during testing but are likely to cause software instability when deployed. Execution on embedded systems is supported, whether on a host, target, or simulator, including cross-compilation, loading tests to the target, and loading results from a remote execution back into the GUI.
History
Parasoft C/C++test was originally introduced in 1995 as a static analysis tool based on guidelines found in the book Effective C++ by Scott Meyers. Later, when unit test creation and execution were added, the product was renamed C++test. Eventually the product name was modified to include both C and C++ to reflect the languages actually covered.
Parasoft C/C++test won Software Test and Performances’ 2008 Testers Choice Award in the best embedded/mobile test/performance category. It was selected as VDC's Software Embeddy "Best in Show" award winner in 2012.
Parasoft received TUV certification as an automotive functional safety tool in 2011 according to IEC 61508 and ISO 26262 standards.
Supported systems
Supported compilers
Supported IDEs
ARM Development Studio
ARM Workbench IDE for RVDS
Eclipse IDE for Developers
Green Hills MULTI
IAR Embedded Workbench
Keil μVision IDE
Keil RealView
Microsoft eMbedded Visual C++
Microsoft Visual Studio
Microsoft Visual Studio Code
QNX Momentics IDE
Texas Instruments Code Composer Studio
Wind River Tornado
Wind River Workbench
See also
Development Testing
DevOps
Software development
Software engineering
Software quality
References
External links
Parasoft C/C++test page
Abstract interpretation
Computer security software
Security testing tools
Software review
Software testing tools
Static program analysis tools
Unit testing
Unit testing frameworks | Parasoft C/C++test | Engineering | 1,927 |
24,145,749 | https://en.wikipedia.org/wiki/C27H44O3 | The molecular formula C27H44O3 (molar mass: 416.63 g/mol, exact mass: 416.329045) may refer to:
1,25-Dihydroxycholecalciferol
24,25-Dihydroxycholecalciferol
Paricalcitol, an analog of vitamin D
Sarsasapogenin
Tacalcitol | C27H44O3 | Chemistry | 100 |
65,757,854 | https://en.wikipedia.org/wiki/GreenPAK | GreenPAK™ is a Renesas Electronics family of mixed-signal integrated circuits and development tools. GreenPAK circuits are classified as configurable mixed-signal ICs. This category is characterized by analogue and digital blocks that can be configured through programmable non-volatile memory. These devices also have a "Connection Matrix", which supports routing signals between the various blocks. These devices can include multiple components within a single IC.
The company also developed the Go Configure™ Software Hub for IC design creation, chip emulation, and programming.
History
The GreenPAK technology was developed by Silego Technology Inc. The company was established in 2001. The GreenPAK product line was introduced in April 2010. Then, the first generation of ICs was released. Later, Silego was acquired by Dialog Semiconductor PLC in 2017. Officially, the trademark for the GreenPAK title was registered in 2019.
The sixth generation of GreenPAK ICs has now been released to the market. Over 6 billion GreenPAK ICs have been shipped to Dialog's customers all over the world.
In 2021, Dialog was acquired by Renesas Electronics, therefore the GreenPAK technology is currently officially owned by Renesas.
GreenPAK Integrated Circuits
There are a few categories of ICs developed within the GreenPAK technology:
Dual Supply GreenPAK – provides level translation from higher or lower voltage domains.
GreenPAK with Power Switches – includes single and dual power switches up to 2A.
GreenPAK with Asynchronous State Machine – allows the developing of customized state machine IC designs.
GreenPAK with Low Power Dropout Regulators – enables a user to divide power loads using the unique concept of "Flexible Power Islands" devoted to wearable devices.
GreenPAK with In-System Programmability – can be reprogrammed up to 1000 times using the I2C serial interface.
Automotive GreenPAK – allows multiple system functions in a single IC used for automotive circuit designs.
GreenPAK with High Voltage Features – contains both mixed-signal logic and high-voltage H-bridge functionality.
GreenPAK Designer Software
GreenPAK Designer Software is a free GUI-based platform that enables users to create IC designs without prior programming-language skills.
The software functions include:
Access to a library of GreenPAK ICs with a description of available elements for each device as well as example application cases and technical documentation
Designing integral circuits using schematic-oriented capturing of elements and their connection
Simulation of created designs
Samples programming
Development Tools
Two development boards allow engineers to conduct different procedures mentioned in the table below.
The development boards are compatible with different GreenPAK ICs that can be checked on Dialog Semiconductor's website.
Circuit Design Applications
Over 300 application notes were developed to showcase IC designs created in the GreenPAK Designer Software and provide complete project instructions.
Origin of the Title
The "GreenPAK" title indicates the circuit's environmentally friendly nature. These circuits consume low power and use less lead during production, reducing their environmental impact. The "PAK" suffix stands for "Programmable Analog Kit," which highlights the device family's aim to provide a suite of analog resources, along with digital resources, that can be utilized to address various real-world application challenges.
See also
Configurable mixed-signal IC
Silego Technology Inc.
Dialog Semiconductor PLC
References
Integrated circuits
Application-specific integrated circuits | GreenPAK | Technology,Engineering | 704 |
101,679 | https://en.wikipedia.org/wiki/VRML | VRML (Virtual Reality Modeling Language, pronounced vermal or by its initials, originally—before 1995—known as the Virtual Reality Markup Language) is a standard file format for representing 3-dimensional (3D) interactive vector graphics, designed particularly with the World Wide Web in mind. It has been superseded by X3D.
WRL file format
VRML is a text file format where, e.g., vertices and edges for a 3D polygon can be specified along with the surface color, UV-mapped textures, shininess, transparency, and so on. URLs can be associated with graphical components so that a web browser might fetch a webpage or a new VRML file from the Internet when the user clicks on the specific graphical component. Animations, sounds, lighting, and other aspects of the virtual world can interact with the user or may be triggered by external events such as timers. A special Script Node allows the addition of program code (e.g., written in Java or ECMAScript) to a VRML file.
VRML files are commonly called "worlds" and have the extension (for example, ). VRML files are in plain text and generally compress well using gzip, useful for transferring over the Internet more quickly (some gzip compressed files use the extension). Many 3D modeling programs can save objects and scenes in VRML format.
Standardization
The Web3D Consortium has been formed to further the collective development of the format. VRML (and its successor, X3D), have been accepted as international standards by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
The first version of VRML was specified in November 1994. This version was specified from, and very closely resembled, the API and file format of the Open Inventor software component, originally developed by SGI. Version 2.0 development was guided by the ad hoc VRML Architecture Group (VAG). A working draft was published in August 1996. Formal collaboration between the VAG and SC24 of ISO/IEC began in 1996 and VRML 2.0 was submitted to ISO for adoption as an international standard. The current and functionally complete version is VRML97 (ISO/IEC 14772-1:1997). VRML has now been superseded by X3D (ISO/IEC 19775-1).
Emergence, popularity, and rival technical upgrade
The term VRML was coined by Dave Raggett in a paper called "Extending WWW to support Platform Independent Virtual Reality" submitted to the First World Wide Web Conference in 1994, and first discussed at the WWW94 VRML BOF established by Tim Berners-Lee, where Mark Pesce presented the Labyrinth demo he developed with Tony Parisi and Peter Kennard. VRML was introduced to a wider audience in the SIGGRAPH Course, VRML: Using 3D to Surf the Web in August 1995. In October 1995, at Internet World, Template Graphics Software (TGS) demonstrated a 3D/VRML plug-in for the beta release of Netscape 2.0 by Netscape Communications.
In 1997, a new version of the format was finalized, as VRML97 (also known as VRML2 or VRML 2.0), and became an ISO/IEC standard. VRML97 was used on the Internet on some personal homepages and sites such as "CyberTown", which offered 3D chat using Blaxxun Software, as well as Sony's SAPARi program, which was pre-installed on Vaio computers from 1997 to 2001. The format was championed by SGI's Cosmo Software; when SGI restructured in 1998, the division was sold to the VREAM Division of Platinum Technology, which was then taken over by Computer Associates, which did not develop or distribute the software. To fill the void a variety of proprietary Web 3D formats emerged over the next few years, including Microsoft Chrome and Adobe Atmosphere, neither of which is supported today. VRML's capabilities remained largely the same while realtime 3D graphics kept improving. The VRML Consortium changed its name to the Web3D Consortium, and began work on the successor to VRML—X3D.
SGI ran a web site at vrml.sgi.com on which was hosted a string of regular short performances of a character called "Floops" who was a VRML character in a VRML world. Floops was a creation of a company called Protozoa.
H-Anim is a standard for animated Humanoids, which is based around VRML, and later X3D. The initial version 1.0 of the H-Anim standard was scheduled for submission at the end of March 1998.
VRML has never seen much serious widespread use. One reason for this may have been the lack of available bandwidth. At the time of VRML's popularity, a majority of users, both business and personal, were using slow dial-up Internet access.
VRML experimentation was primarily in education and research where an open specification is most valued. It has now been re-engineered as X3D. The MPEG-4 Interactive Profile (ISO/IEC 14496) was based on VRML (now on X3D), and X3D is largely backward-compatible with it. VRML is also widely used as a file format for interchange of 3D models, particularly from CAD systems.
A free cross-platform runtime implementation of VRML is available in OpenVRML. Its libraries can be used to add both VRML and X3D support to applications, and a GTK+ plugin is available to render VRML/X3D worlds in web browsers.
In the 2000s, companies such as Bitmanagement improved the quality of virtual effects in VRML to the level of DirectX 9.0c, but at the expense of using proprietary solutions. All main features like game modeling are already complete. They include multi-pass rendering with low-level settings for Z-buffer, BlendOp, AlphaOp, Stencil, Multi-texture, Shader with HLSL and GLSL support, realtime Render To Texture, Multiple Render Targets (MRT) and PostProcessing. Many demos show that VRML already supports lightmap, normalmap, SSAO, CSM, and Realtime Environment Reflection along with other virtual effects.
Example
This example defines a single triangle as an indexed face set:
#VRML V2.0 utf8
# One shape node: a single triangle built as an indexed face set
Shape {
  geometry IndexedFaceSet {
    coordIndex [ 0, 1, 2 ]                    # one face using vertices 0, 1 and 2
    coord Coordinate {
      point [ 0, 0, 0, 1, 0, 0, 0.5, 1, 0 ]  # three x,y,z vertices
    }
  }
}
Early criticism
In a March 1998 ACM essay, "Playfulness in 3D Spaces -- Why Quake is better than VRML, and what it means for software design", Clay Shirky sharply criticised VRML as a "technology in search of a problem", whereas "Quake does something well instead of many things poorly...The VRML community has failed to come up with anything this compelling -- not despite the community's best intentions, but because of them. Every time VRML practitioners approach the problem of how to represent space on the screen, they have no focused reason to make any particular trade-off of detail versus rendering speed, or making objects versus making spaces, because VRML isn't for anything except itself. Many times, having a particular, near-term need to solve brings a project's virtues into sharp focus, and gives it enough clarity to live on its own."
Alternatives
3DMLW: 3D Markup Language for Web
COLLADA: managed by the Khronos Group
O3D: developed by Google
U3D: Ecma International standard ECMA-363
X3D: successor of VRML
glTF: created by the Khronos Group, successor of Collada
See also
Active Worlds virtual reality – multi-user 3D chat platform
A-Frame (virtual reality framework) - Entity Component System VR platform base on threejs and WebXR
Additive Manufacturing File Format
Blaxxun virtual reality – multi-user 3D chat platform
Flux – freely downloadable VRML/X3D editor/browser, now discontinued
List of vector graphics markup languages
MeshLab – open source mesh processing system that can export VRML/X3D
OZ Virtual
Seamless3d – free Open Source 3D modeling software for Microsoft Windows
STL – STereoLithography or Standard Tessellation Language, common to CAD software and 3D printing.
Virtual Environment Software
Virtual tour
Web3D
WebGL
WebVR
WebXR - Successor to WebVR
References
External links
Code samples
VRML examples from the VRML Sourcebook (to get the example VRML code, click on a chapter, then on a figure)
Documentation
VRML ISO/IEC 14772 standard document
3D graphics file formats
Graphics standards
ISO/IEC standards
Open formats
Vector graphics markup languages
Virtual reality
Web 1.0 | VRML | Technology | 1,839 |
55,404,218 | https://en.wikipedia.org/wiki/%28574372%29%202010%20JO179 | (provisional designation ) is a large, high-order resonant trans-Neptunian object in the outermost regions of the Solar System, approximately in diameter. Long-term observations suggest that the object is in a meta-stable 5:21 resonance with Neptune. Other sources classify it as a scattered disc object.
It is possibly large enough to be a dwarf planet.
First observation and orbit
The Minor Planet Center credits the object's first official observation on 10 May 2010 to Pan-STARRS at Haleakala Observatory, Hawaii, United States. The observations were made by the Pan-STARRS Outer Solar System Survey. Precovery images from 4 February 1951, taken during the Palomar Observatory Sky Survey, extend the observation arc by approximately 60 years. The precovery images are from the same year the object came to perihelion (closest approach to the Sun).
orbits the Sun at a distance of 39.6–118 AU once every 699 years and 5 months (semi-major axis of 78.8 AU). Its orbit has a high eccentricity of 0.50 and an inclination of 32° with respect to the ecliptic.
Numbering and naming
This minor planet was numbered by the Minor Planet Center on 10 August 2021, receiving the number in the minor planet catalog (). , it has not been named.
Physical characteristics
Photometry
Photometric observations of this object gave a monomodal lightcurve with a slow rotation period of 30.6 hours, suggesting a rather spherical shape with significant albedo patchiness. An alternative period solution of a bimodal lightcurve is considered less likely. It would double the period and imply an ellipsoidal shape with an axis ratio of at least 1.58.
Diameter and albedo
The object's mean diameter has been estimated to measure 574 and 735 kilometers, with an assumed albedo of 0.09, by Michael Brown and the Johnston's Archive respectively, while the discoverers estimate a diameter of 600–900 kilometers with an estimated albedo of 0.21 to 0.07.
References
External links
MPEC, 18 September 2017
1951 precovery images
Trans-Neptunian objects
574372
Trans-Neptunian objects in a 5:21 resonance
574372
574372
20100517 | (574372) 2010 JO179 | Physics,Astronomy | 476 |
170,384 | https://en.wikipedia.org/wiki/Copal | Copal is a tree resin, particularly the aromatic resins from the copal tree Protium copal (Burseraceae) used by the cultures of pre-Columbian Mesoamerica as ceremonially burned incense and for other purposes. More generally, copal includes resinous substances in an intermediate stage of polymerization and hardening between "gummier" resins and amber. Copal that is partly mineralized is known as copaline.
It is available in different forms; the hard, amber-like yellow copal is a less expensive version, while the milky-white copal is more expensive.
Etymology
The word "copal" is derived from the Nahuatl language word , meaning "incense".
History and uses
Subfossil copal is well known from New Zealand (kauri gum from Agathis australis (Araucariaceae)), Japan, the Dominican Republic, Colombia, and Madagascar. It often has inclusions and is sometimes sold as "young amber". When it is treated or enhanced in an autoclave (as is sometimes done to industrialized Baltic amber) it is used for jewelry. In its natural condition copal can be easily distinguished from old amber by its lighter citrine colour and its surface getting tacky with a drop of acetone or chloroform. Copal resin from Hymenaea verrucosa (Fabaceae) is found in East Africa and is used in incense. East Africa apparently had a higher amount of subfossil copal, which is found one or two meters below living copal trees, from roots of trees that may have lived thousands of years earlier. This subfossil copal produces a harder varnish.
By the 18th century, Europeans found it to be a valuable ingredient in making a good wood varnish. It became widely used in the manufacture of furniture and carriages. It was also sometimes used as a picture varnish. By the late 19th and early 20th century, varnish manufacturers in England and America were using it on train carriages, greatly swelling its demand. In 1859, Americans consumed 68% of the East African trade, which was controlled through the Sultan of Zanzibar, with Germany receiving 24%. The American Civil War and the creation of the Suez Canal led to Germany, India, and Hong Kong taking the majority by the end of that century.
Copal is still used by a number of indigenous peoples of Mexico and Central America as an incense, during sweat lodge ceremonies and sacred mushroom ceremonies.
References
Sources
Further reading
Visual arts materials
Fossil resins
Incense material
Mesoamerican society
Natural history of Mesoamerica
Resins
Organic gemstones
Kauri gum | Copal | Physics | 551 |
40,173,942 | https://en.wikipedia.org/wiki/Nuclear%20Science%20and%20Techniques | Nuclear Science and Techniques is a monthly peer-reviewed, scientific journal that is published by Science Press and Springer. This journal was established in 1990. The editor-in-chief is Yu-Gang Ma. The journal covers all theoretical and experimental aspects of nuclear physics and technology, including synchrotron radiation applications, beam line technology, accelerator, ray technology and applications, nuclear chemistry, radiochemistry, and radiopharmaceuticals and nuclear medicine, nuclear electronics and instrumentation, nuclear energy science and engineering.
Abstracting and indexing
The journal is indexed in the Science Citation Index Expanded. According to the Journal Citation Reports, the journal has a 2017 impact factor of 1.085, ranking it 18th out of 33 journals in the category "Nuclear Science and Technology" and 18th out of 20 journals in the category "Physics, Nuclear".
References
External links
Nuclear physics journals
Academic journals established in 1990
Bimonthly journals
English-language journals | Nuclear Science and Techniques | Physics | 193 |
419,820 | https://en.wikipedia.org/wiki/LAN%20eXtensions%20for%20Instrumentation | LAN eXtensions for Instrumentation (LXI) is a standard which defines the communication protocols for instrumentation and data acquisition systems using Ethernet.
Overview
Proposed in 2005 by Agilent Technologies (whose test-and-measurement business is now Keysight) and VTI Instruments, the LXI standard adapts the Ethernet and World Wide Web standards to test and measurement applications. The standard defines how existing standards should be used in instrumentation applications to provide a consistent feel and ensure compatibility between equipment.
The LXI standard does not define a mechanical format. LXI products can be modular, rack mounted, bench mounted or take any other physical form. LXI products may have no front panel or display, or they may include embedded displays and keyboards.
Use of Ethernet allows instrument systems to be spread over large distances. An optional Extended Function based on IEEE 1588 Precision Timing Protocol allows instruments to communicate on a time basis, initiating events at specified times or intervals and time stamping events to indicate when these events occurred.
Interoperability and IVI
LXI devices can coexist with Ethernet devices that are not themselves LXI compliant. They can also be present in test systems which include products based on the GPIB, VXI, and PXI standards.
The standard mandates that every LXI instrument must have an Interchangeable Virtual Instrument (IVI) driver. The IVI Foundation defines a standard driver application programming interface (API) for programmable instruments. IVI driver formats include IVI-COM for working with COM-based development environments, IVI-C for working in traditional programming languages, and IVI.NET for use with the .NET Framework.
Most LXI instruments can be programmed with methods other than IVI, so it is not mandatory to work with an IVI driver. Developers can use other driver technologies or work directly with SCPI commands.
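As a minimal sketch of the direct-SCPI route in Python (the instrument address is hypothetical; port 5025 is the conventional raw-socket SCPI port on LXI instruments):

import socket

def scpi_query(host, command, port=5025):
    # Open a raw TCP socket, send one SCPI command, read one reply line
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        return sock.makefile("r").readline().strip()

# Example: ask the instrument to identify itself
print(scpi_query("192.168.1.42", "*IDN?"))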
Standardization
The LXI Standard has three major elements:
A standardized LAN interface that provides a framework for web-based interfacing and programmatic control. The LAN interface can include wireless connectivity, as well as physically connected interfaces. The interface supports peer-to-peer operation, as well as master/slave operation. Devices can optionally support IPv6.
An optional trigger facility based on the IEEE 1588 Precision Timing Protocol that enables modules to have a sense of time, which allows modules to time stamp actions and initiate triggered events over the LAN interface.
An optional physical wired trigger system based on a Multipoint Low-Voltage Differential Signaling (M-LVDS) electrical interface that tightly synchronizes the operation of multiple LXI instruments.
The specification is organized into a set of documents which describe:
The LXI Device Core Specification which contains the requirements for the LAN interface which all LXI Devices must adhere to
A set of optional Extended Functions which LXI devices can adhere to. If a device claims conformance it must have been tested under the LXI Consortium Conformance regime. As of March 2016, there are 7 Extended Functions specified
HiSLIP
IPv6
LXI Wired Trigger Bus
LXI Event Messaging
LXI Clock Synchronization (based on IEEE1588)
LXI Time Stamped Data
LXI Event Log
Specification History
In 2005, the LXI Consortium released Version 1.0 of the LXI Standard. Version 1.1 followed with minor corrections and clarifications. In 2007, the Consortium adopted Version 1.2; its major focus was discovery mechanisms. Specifically, LXI 1.2 included enhancements to support mDNS discovery of LXI devices. Version 1.3 incorporates the 2008 version of IEEE 1588 for synchronizing time among instruments. As of November 2016, the standard is at Revision 1.5.
Conformance testing
The LXI Consortium requires LXI Devices to go through standard testing.
To support this compliance regime an LXI Test Suite is available. After a vendor joins the LXI Consortium they can gain access to the Consortium's Conformance Test Suite software, which they can use as a pre-test before submitting the product to the Consortium for compliance testing. Once a product is ready to submit, a vendor can choose to have their product tested at a PlugFest or an approved test house. A Technical Justification route allows vendors to certify compliance of derivative products by submitting test results to the Consortium to show that the device has been tested on the LXI Test Suite. The consortium provides guidance on when the Technical Justification route can be used and when a new formal test is required.
References
Interchangeable Virtual Instrument (IVI) Foundation website
External links
Networking standards
Electronic test equipment | LAN eXtensions for Instrumentation | Technology,Engineering | 903 |
22,555,803 | https://en.wikipedia.org/wiki/PncA | PncA is a gene encoding pyrazinamidase in Mycobacterium species. Pyrazinamidase converts the drug pyrazinamide to the active form pyrazinoic acid. There is a strong correlation between mutations in pncA and resistance of M. tuberculosis to pyrazinamide.
See also
Pyrazinamide
References
Enzymes
Prokaryote genes
Tuberculosis | PncA | Biology | 84 |
32,023,896 | https://en.wikipedia.org/wiki/Birkhoff%20factorization | In mathematics, Birkhoff factorization or Birkhoff decomposition, introduced by George David Birkhoff in 1909, is the factorization of an invertible matrix \( M \) with coefficients that are Laurent polynomials in \( z \) into a product \( M = M^{+} M^{0} M^{-} \), where \( M^{+} \) has entries that are polynomials in \( z \), \( M^{0} \) is diagonal, and \( M^{-} \) has entries that are polynomials in \( z^{-1} \). There are several variations where the general linear group is replaced by some other reductive algebraic group.
Birkhoff factorization implies the Birkhoff–Grothendieck theorem of Grothendieck (1957) that vector bundles over the projective line are sums of line bundles.
Birkhoff factorization follows from the Bruhat decomposition for affine Kac–Moody groups (or loop groups), and conversely the Bruhat decomposition for the affine general linear group follows from Birkhoff factorization together with the Bruhat decomposition for the ordinary general linear group.
See also
Birkhoff decomposition (disambiguation)
Riemann–Hilbert problem
References
Matrices | Birkhoff factorization | Mathematics | 194 |
37,861,466 | https://en.wikipedia.org/wiki/Southfield%20Furnace%20Ruin | The Southfield Furnace Ruin in Southfields, New York, was a longtime smelting site for iron ore mined from nearby veins in what is now Sterling Forest State Park. It is located on the north side of Orange County Route 19, 0.7 miles northwest of the junction with New York State Route 17.
It was added to the National Register of Historic Places on November 2, 1973 for its significance in industry.
History
It was built by Peter Townsend II, who also owned the mines. In addition to the furnace, the Southfield Ironworks included a stamping mill, gristmill, sawmill, smith shop, wheelwright shop, coal shed, store, and stables.
The furnace was shut down in September 1887.
Gallery
See also
National Register of Historic Places in Orange County, New York
Clove Furnace Ruin
References
External links
Hike passes by ruin
National Register of Historic Places in Orange County, New York
Industrial buildings completed in 1804
Buildings and structures in Orange County, New York
Industrial furnaces | Southfield Furnace Ruin | Chemistry | 201 |
416,779 | https://en.wikipedia.org/wiki/Scenic%20design | Scenic design, also known as stage design or set design, is the creation of scenery for theatrical productions including plays and musicals. The term can also be applied to film and television productions, where it may be referred to as production design. Scenic designers create sets and scenery to support the overall artistic goals of the production. Scenic design is an aspect of scenography, which includes theatrical set design as well as light and sound.
History
The origins of scenic design may be found in the outdoor amphitheaters of ancient Greece, where acts were staged using basic props and scenery. Improvements in stage equipment and perspective drawing during the Renaissance made it possible to create more complex and realistic sets. Scenic design continued to evolve in conjunction with technological and theatrical improvements over the 19th and 20th centuries.
Elements of Scenic Design
Scenic design involves several key elements:
Set Pieces: These are physical structures, such as platforms, walls, and furniture, that define the spatial environment of the performance.
Props: Objects used by actors during a performance, which help to establish the setting and enhance the narrative.
Backdrops: Painted or digitally projected backdrops and flat scenery that create the illusion of depth and perspective on stage.
Lighting: Setting the tone, ambiance, and focal point of the performance, lighting design is an essential component of scenic design.
Functionality: In order to meet the demands of the actors, the crew, and the technical specifications of the show, sets must be useful and practical. When building the set, designers have to take accessibility, perspectives, entrances, and exits into account.
Scenic Designer
A scenic designer works with the theatre director and other members of the creative team to establish a visual concept for the production and to design the stage environment. They are responsible for developing a complete set of design drawings that include:
Basic floor plan showing all stationary scenic elements;
Composite floor plan showing all moving scenic elements, indicating both their onstage and storage positions;
Complete floor plan of the stage space incorporating all elements; and
Front elevations of every scenic element and additional elevations of sections of units as required.
In planning, scenic designers often make multiple scale models and renderings. Models are often made before final drawings are completed for construction. These precise drawings help the scenic designer effectively communicate with other production staff, especially the technical director, production manager, charge scenic artist, and prop master.
In Europe and Australia, many scenic designers are also responsible for costume design, lighting design and sound design. They are commonly referred to as theatre designers, scenographers, or production designers.
Scenic design often involves skills such as carpentry, architecture, textual analysis, and budgeting.
Many modern scenic designers use 3D CAD models to produce design drawings that used to be done by hand.
Notable scenic designers
Some notable scenic designers include: Adolphe Appia, Boris Aronson, Alexandre Benois, Alison Chitty, Antony McDonald, Barry Kay, Caspar Neher, Cyro Del Nero, Aleksandra Ekster, David Gallo, Edward Gordon Craig, Es Devlin, Ezio Frigerio, Christopher Gibbs, Franco Zeffirelli, George Tsypin, Howard Bay, Inigo Jones, Jean-Pierre Ponnelle, Jo Mielziner, John Lee Beatty, Josef Svoboda, Ken Adam, Léon Bakst, Luciano Damiani, Maria Björnson, Ming Cho Lee, Philip James de Loutherbourg, Natalia Goncharova, Nathan Altman, Nicholas Georgiadis, Oliver Smith, Ralph Koltai, Emanuele Luzzati, Neil Patel, Robert Wilson, Russell Patterson, Brian Sidney Bembridge, Santo Loquasto, Sean Kenny, Todd Rosenthal, Robin Wagner, Tony Walton, Louis Daguerre, Ralph Funicello, and Roger Kirk.
See also
Prop design
Film sculptor
Scenic painting
Scenographer
Scenography
Set construction
Stage machinery
Theatrical scenery
References
Further reading
Brockett, Oscar G., Margaret Mitchell, and Linda Hardberger. Making the Scene: A History of Stage Design and Technology in Europe and the United States, Tobin Theatre Arts Fund, distributed by University of Texas Press, 2010. Traces the history of scene design since the ancient Greeks.
Pecktal, Lynn. Designing and Painting for the Theater, McGraw-Hill, 1995. Details production design processes for theater, opera, and ballet. This foundational text provides a professional picture and comprehensive references for the design process. Well illustrated with detailed line drawings and photographs to convey the beauty and craft of scenic and production design.
External links
Prague Quadrennial of Performance Design and Space Largest scenography event in the world.
What is Scenography Article illustrating the differences between US and European theatre design practices.
Design
Theatrical occupations
Stagecraft
Film production | Scenic design | Engineering | 951 |
44,569,825 | https://en.wikipedia.org/wiki/Skin%20friction%20drag | Skin friction drag is a type of aerodynamic or hydrodynamic drag, the resistive force exerted on an object moving in a fluid. Skin friction drag is caused by the viscosity of fluids and develops from laminar drag to turbulent drag as a fluid moves along the surface of an object. Skin friction drag is generally expressed in terms of the Reynolds number, which is the ratio between inertial force and viscous force.
Total drag can be decomposed into a skin friction drag component and a pressure drag component, where pressure drag includes all other sources of drag including lift-induced drag. In this conceptualisation, lift-induced drag is an artificial abstraction, part of the horizontal component of the aerodynamic reaction force. Alternatively, total drag can be decomposed into a parasitic drag component and a lift-induced drag component, where parasitic drag is all components of drag except lift-induced drag. In this conceptualisation, skin friction drag is a component of parasitic drag.
Flow and effect on skin friction drag
Laminar flow over a body occurs when layers of the fluid move smoothly past each other in parallel lines. In nature, this kind of flow is rare. As the fluid flows over an object, it applies frictional forces to the surface of the object which works to impede forward movement of the object; the result is called skin friction drag. Skin friction drag is often the major component of parasitic drag on objects in a flow.
The flow over a body may begin as laminar. As a fluid flows over a surface, shear stresses within the fluid slow additional fluid particles, causing the boundary layer to grow in thickness. At some point along the flow direction, the flow becomes unstable and turns turbulent. Turbulent flow has a fluctuating and irregular pattern of flow, made obvious by the formation of vortices. While the turbulent layer grows, the laminar layer thickness decreases. This results in a thinner laminar boundary layer which, relative to laminar flow, reduces the magnitude of the friction force as fluid flows over the object.
Skin friction coefficient
Definition
The skin friction coefficient is defined as:
\( C_f = \frac{\tau_w}{\frac{1}{2} \rho_\infty U_\infty^2} \)
where:
\( C_f \) is the skin friction coefficient.
\( \rho_\infty \) is the density of the free stream (far from the body's surface).
\( U_\infty \) is the free stream speed, which is the velocity magnitude of the fluid in the free stream.
\( \tau_w \) is the skin shear stress on the surface.
\( \frac{1}{2} \rho_\infty U_\infty^2 \) is the dynamic pressure of the free stream.
The skin friction coefficient is a dimensionless skin shear stress which is nondimensionalized by the dynamic pressure of the free stream. The skin friction coefficient is defined at any point of a surface that is subjected to the free stream. It will vary at different positions. A fundamental fact in aerodynamics states that
\( C_{f,\mathrm{laminar}} < C_{f,\mathrm{turbulent}} \).
This immediately implies that laminar skin friction drag is smaller than turbulent skin friction drag, for the same inflow.
The skin friction coefficient is a strong function of the Reynolds number \( Re \); as \( Re \) increases, \( C_f \) decreases.
Laminar flow
Blasius solution
\( C_f = \frac{0.664}{\sqrt{Re_x}} \)
where:
\( Re_x = \frac{\rho U_\infty x}{\mu} \), which is the Reynolds number.
\( x \) is the distance from the reference point at which a boundary layer starts to form.
The above relation, derived from the Blasius boundary layer, assumes constant pressure throughout the boundary layer and a thin boundary layer. It shows that the skin friction coefficient decreases as the Reynolds number (\( Re_x \)) increases.
Transitional flow
The Computational Preston Tube Method (CPM)
CPM, suggested by Nitsche, estimates the skin shear stress of transitional boundary layers by fitting a law-of-the-wall velocity profile to a measured velocity profile of the transitional boundary layer. The Kármán constant \( \kappa \) and the skin shear stress \( \tau_w \) are determined numerically during the fitting process. In this profile:
\( y \) is the distance from the wall.
\( u \) is the speed of the flow at a given \( y \).
\( \kappa \) is the Kármán constant, which in transitional boundary layers is lower than 0.41, the value for turbulent boundary layers.
\( A \) is the Van Driest constant, which is set to 26 in both transitional and turbulent boundary layers.
A pressure parameter, defined in terms of the streamwise pressure gradient \( \mathrm{d}p/\mathrm{d}x \), also enters the profile, where \( p \) is the pressure and \( x \) is the coordinate along the surface where the boundary layer forms.
Turbulent flow
Prandtl's one-seventh-power law
\( C_f = \frac{0.0592}{Re_x^{1/5}} \)
The above equation, which is derived from Prandtl's one-seventh-power law, provides a reasonable approximation of the drag coefficient of low-Reynolds-number turbulent boundary layers. Compared to laminar flows, the skin friction coefficient of turbulent flows decreases more slowly as the Reynolds number increases.
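As a numerical illustration of the statements above, the following Python sketch evaluates the laminar (Blasius) and turbulent (one-seventh-power-law) relations quoted earlier at a few local Reynolds numbers; the sample Reynolds numbers are arbitrary illustrative values.

# Minimal sketch: compare laminar and turbulent local skin friction
# coefficients at the same Reynolds number, using the relations above.
def cf_laminar(re_x: float) -> float:
    return 0.664 / re_x ** 0.5   # Blasius solution

def cf_turbulent(re_x: float) -> float:
    return 0.0592 / re_x ** 0.2  # one-seventh-power-law estimate

for re_x in (1e5, 1e6, 5e6):
    print(f"Re_x = {re_x:.0e}: laminar Cf = {cf_laminar(re_x):.5f}, "
          f"turbulent Cf = {cf_turbulent(re_x):.5f}")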
Skin friction drag
A total skin friction drag force can be calculated by integrating skin shear stress on the surface of a body.
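In symbols, one minimal way to write this integration (assuming the wall shear stress is aligned with the flow direction; for a general body only the streamwise component of the shear contributes) is:
\( D_f = \int_S \tau_w \, \mathrm{d}A \)
where \( S \) is the wetted surface of the body and \( \mathrm{d}A \) is a surface area element.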
Relationship between skin friction and heat transfer
From an engineering point of view, calculating skin friction is useful in estimating not only the total frictional drag exerted on an object but also the convective heat transfer rate on its surface. This relationship is well developed in the concept of the Reynolds analogy, which links two dimensionless parameters: the skin friction coefficient (Cf), which is a dimensionless frictional stress, and the Nusselt number (Nu), which indicates the magnitude of convective heat transfer. Turbine blades, for example, require the analysis of heat transfer in their design process, since they are exposed to high-temperature gas that can damage them. Here, engineers calculate the skin friction on the surface of turbine blades to predict the heat transfer through the surface.
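One common statement of the Reynolds analogy (valid under the assumption of a Prandtl number near one, a condition not spelled out above) links the two parameters through the Stanton number \( St \):
\( St = \frac{Nu}{Re \, Pr} \approx \frac{C_f}{2} \)
so a skin friction coefficient obtained from measurement or computation yields the estimate \( Nu \approx \tfrac{1}{2} C_f \, Re \, Pr \) for the convective heat transfer.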
Effects of skin friction drag
A 1974 NASA study found that for subsonic aircraft, skin friction drag is the largest component of drag, causing about 45% of the total drag. For supersonic and hypersonic aircraft, the figures are 35% and 25% respectively.
A 1992 NATO study found that for a typical civil transport aircraft, skin friction drag accounted for almost 48% of total drag, followed by induced drag at 37%.
Reducing skin friction drag
There are two main techniques for reducing skin friction drag: delaying the boundary layer transition, and modifying the turbulence structures in a turbulent boundary layer.
One method to modify the turbulence structures in a turbulent boundary layer is the use of riblets. Riblets are small grooves in the surface of the aircraft, aligned with the direction of flow. Tests on an Airbus A320 found riblets caused a drag reduction of almost 2%. Another method is the use of large eddy break-up (LEBU) devices. However, some research into LEBU devices has found a slight increase in drag.
See also
Parasitic drag
Pressure drag
References
Fundamentals of Flight by Richard Shepard Shevell
Drag (physics) | Skin friction drag | Chemistry | 1,312 |
78,924,917 | https://en.wikipedia.org/wiki/Wave%20Transmitter | The Wave Transmitter was a radio transmitter/receiver, described in a patent by Roberto Landell de Moura in 1904, capable of transmitting audio via radio waves as well as light (similar to a photophone). It was developed after many years of Landell de Moura experimenting with multi-function devices that were combinations of megaphones, photophones, and radio telegraphs.
History
Background
Landell de Moura began experiments in wireless communication in the mid 1890s. He worked with electrically powered megaphones, photophones, and when radio technology came along, incorporated radio telegraphy. By 1900 Landell de Moura was giving public demonstration of a device that seemed to use light (a photophone), a device he received a Brazilian patent for in 1901.
On 14 June 1901, he boarded the steamship Piemonte for Europe, from where he then went to the United States, where he sought to patent his inventions, and set up a workshop in New York. During his stay in the U.S., he received patents for a Photophone with radio wave bell or buzzer to alert the user at the other end and a stand-alone wireless telegraph using light or radio waves. He had to change his patent descriptions several times due to the requirements of the Patent Office.
In 1904, he received a patent for his wave transmitter, patent no. 771917. According to the American technicians who analyzed his research during the patent issuance process, his wireless transmission system was superior to what had already been developed, and regarding radiotelephony itself, he was "the discoverer and creator of the principles on which it is based." With the public announcement about his patents, several entrepreneurs offered to buy the rights, but Landell de Moura refused them, declaring: "'The inventions no longer belong to me. By the grace of God, I am merely their custodian. I will take them to my homeland, Brazil, which will be responsible for delivering them to humanity.'"
At the end of 1904, he had to return to Brazil with a debt of US$4,000, hoping to return to New York within a short time and, according to Ernani Fornari, to patent six other inventions, but he had to abandon his plans. He sought support from the Brazilian government to demonstrate his equipment offshore. However, when meeting with a government representative, he stated that the ships could be at any distance from each other, even suggesting the possibility of interplanetary communication, which was not well received by the government official. He also sought support from the Legislative Assembly of São Paulo to finance the commercialization of his invention, without success. According to Alencar, after the Brazilian government's refusal, Landell de Moura reportedly destroyed his experiments and gave up scientific research. After these events, the federal government began to invest in radio telegraphy for the Armed Forces.
Operation
The Wave Transmitter, originally called the Gouradphone and built in an artisanal way, used an electromechanical microphone, invented by Landell, capable of collecting, according to Claudia Zaltrão, "sound waves through a resonance chamber," whose metallic diaphragm "opened and closed the primary of a Ruhmkorff coil, and induced a high voltage in the secondary of that coil which was radiated either through an antenna or two sparking spheres." The two spheres are called the "phonetic interrupter," which, when exposed to the vibrations of the human voice or other sounds, creates a series of electric sparks or flashes of light that, when they reach the receiver, are made understandable through a telephone, a lamp, or a Morse code device. However, the radiated voice did not retain the characteristics of the speaker's timbre, requiring training to understand the content of the messages. At the same time, the signal contained many harmonics, allowing it to be detected over a wide range of frequencies.
According to the Cientec team, who replicated the invention, the "phonetic interrupter" was Landell's true innovation, as the other parts of the equipment were already known. According to a report from A Federação, 1905, the transmission of light waves would reach 30 to 50 kilometers and would not have climatic interference, as the beam of light would be modified both by mechanical vibration and by the electrical vibrations produced by the voice.
The device also used both radio waves and light beams, in addition to using continuous waves, with Landell advocating the use of short waves for long-distance communication, something that Marconi only recognized as useful in 1916. D'Arisbo notes that the device for transmitting light waves would be different from that for transmitting electromagnetic waves, while the patent issued in the US explains that the vibrations of the phonetic interrupter are transformed into both electrical and light waves, with Bruscato explaining that the "telephone" and the "wireless telegraph" are those that used light beams. The biographer Hamilton Almeida reports that Landell took more than 10 years to develop his equipment, having started to develop his ideas around 1886, after returning from his studies in Rome. Essentially, Landell sought to establish a point-to-point connection with electromagnetic waves, with the Wave Transmitter radiating information in all directions. Researchers César Augusto dos Santos and Otto Albuquerque have compared the experiments of Landell and Marconi.
Landell's patent, in 1901, according to Albuquerque, had priority of speech transmission in a photonic-electronic system, while Marconi's patent focused only on the transmission of signals in Morse code. Both researchers agree that Marconi and Landell conducted similar experiments, but with different aims, with Santos explaining that "the priest-scientist was the first radio amateur in voice telegraphy and the first broadcaster with continued contacts in the country and abroad."
Legacy
In the 1980s, a working group from Telebrás, when analyzing the patents issued by the United States, considered that Landell was the first to carry out continuous wave transmissions, using a valve equivalent to the three-electrode valve patented by Lee De Forest in 1907. At the same time, Edson Benedicto Ramos Féris, then an engineer at the Telebrás Research and Development Center and a professor at USP, explained, after analyzing the patents, that the luminous system used by Landell was a predecessor of fiber optics, as, despite the differences, they are based on the same principle. When discussing the importance of these patents, Hamilton Almeida, in a 1983 book, states that "the wave transmitter patented by Father Landell in the United States is the precursor of the radio."
Regarding Landell's experiments, in 1993 the Italian work "Tu piccola scatola... La radio: fatti, cose, persone," by Laura De Luca and Walter Lobina, states that he conducted the "first radio transmission of which there is any record. The municipality of São Paulo witnessed the emission and reception of electromagnetic and luminous waves. Radio was born, but no one noticed." In the authors' view, radio did not find an environment in the country where it could develop. According to Professor Luiz Artur Ferraretto of UFRGS, with the experiences of 1899 and 1900, "Father Landell came close to what, more than a decade later, was named broadcasting." Meanwhile, as Claudia Zaltrão acknowledges, Landell's name and his work remain forgotten in his own country and abroad, while Almeida notes that during his lifetime Landell's invention received recognition from inventors in the USA.
Replicas
In 1984, the Fundação de Ciência e Tecnologia (Cientec), from Rio Grande do Sul, after three months of work by engineer Antonio Carlos Solano and technicians José Clóvis Totel and Antônio Felipe Pepe, presented a functional replica of the Wave Transmitter. One of the many difficulties they faced was understanding the scale of the device and what materials to use in its construction. On 7 September of that year, at the closing of the Semana da Pátria (Week of the Motherland), the replica was presented to the public at an event where Governor Jair Soares transmitted the words "Porto Alegre". In 2004, another functional replica was made by Marco Aurélio Cardoso Moura, after two years of work.
The 1984 replica had a range of up to 50 meters over a wide frequency band, including FM, with Ferraretto noting that in Landell's time the result would have been better due to the absence of external interference. However, it had difficulty reproducing the intonation of the human voice, a problem that led Landell to suggest a code of words for better communication. The 2004 replica had better reception on medium waves, around 540 kHz, in addition to receiving FM; otherwise, its performance was similar to that of the 1984 replica.
References
Notes
Bibliography
Primary source
Articles
Newspapers and magazines from the time period
Monography
Books
Additional reading
Radio in Brazil
Radio technology
History of radio
Brazilian inventions
1899 in Brazil
19th-century inventions
Discovery and invention controversies | Wave Transmitter | Technology,Engineering | 1,864 |
52,784,231 | https://en.wikipedia.org/wiki/Q%20Carinae | The names q Carinae and Q Carinae are the Bayer designations of two different giant stars in the constellation Carina.
For the variable star q Carinae, see V337 Carinae.
For the orange star Q Carinae, see HD 61248.
Carinae, q
Carina (constellation) | Q Carinae | Astronomy | 64 |
455,186 | https://en.wikipedia.org/wiki/Planarian | Planarians (triclads) are free-living flatworms of the class Turbellaria, order Tricladida, which includes hundreds of species, found in freshwater, marine, and terrestrial habitats. Planarians are characterized by a three-branched intestine, including a single anterior and two posterior branches. Their body is populated by adult stem cells called neoblasts, which planarians use for regenerating missing body parts. Many species are able to regenerate any missing organ, which has made planarians a popular model in research of regeneration and stem cell biology. The genome sequences of several species are available, as are tools for molecular biology analysis.
The order Tricladida is split into three suborders, according to their phylogenetic relationships: Maricola, Cavernicola and Continenticola. Formerly, the Tricladida was split according to their habitat: Maricola (marine planarians); Paludicola (freshwater planarian); and Terricola (land planarians).
Planarians move by beating cilia on the ventral dermis, allowing them to glide along on a film of mucus. Some also can move by undulations of the whole body by the contractions of muscles built into the body membrane.
Triclads play an important role in watercourse ecosystems and are often very important as bio-indicators.
Phylogeny and taxonomy
Phylogeny
Phylogenetic supertree after Sluys et al., 2009.
Taxonomy
Linnaean ranks after Sluys et al., 2009:
Order Tricladida
Suborder Maricola
Superfamily Cercyroidea
Family Centrovarioplanidae
Family Cercyridae
Family Meixnerididae
Superfamily Bdellouroidea
Family Uteriporidae
Family Bdellouridae
Superfamily Procerodoidea
Family Procerodidae
Suborder Cavernicola
Family Dimarcusidae
Suborder Continenticola
Superfamily Planarioidea
Family Planariidae
Family Dendrocoelidae
Family Kenkiidae
Superfamily Geoplanoidea
Family Dugesiidae
Family Geoplanidae
Anatomy and physiology
Planarians are bilaterian flatworms that lack a fluid-filled body cavity, and the space between their organ systems is filled with parenchyma. Planarians lack a circulatory system and absorb oxygen through their body wall. They take up food into their gut using a muscular pharynx, and nutrients diffuse to internal tissues. A three-branched intestine runs across almost the entire body and includes a single anterior and two posterior branches. The planarian intestine is a blind sac with no exit cavity, so planarians take in food and egest waste through the same orifice, located near the middle of the ventral body surface.
The excretory system is made of a network of tubes bearing flame cells and excretory pores. Flame cells remove unwanted liquids from the body by passing them through ducts that lead to excretory pores, where waste is released on the dorsal surface of the planarian.
The triclads have an anterior end or head where sense organs, such as eyes and chemoreceptors, are usually found. Some species have auricles that protrude from the margins of the head. The auricles can contain chemical and mechanical sensory receptors.
The number of eyes in the triclads is variable depending on the species. While many species have two eyes (e.g. Dugesia or Microplana), others have many more distributed along the body (e.g. most Geoplaninae). Sometimes, those species with two eyes may present smaller accessory or supernumerary eyes. The subterranean triclads are often eyeless or blind.
The body of the triclads is covered by a ciliated epidermis that contains rhabdites. Between the epidermis and the gastrodermis there is a parenchymatous tissue or mesenchyme.
Nervous system
The planarian nervous system consists of a bilobed cerebral ganglion referred to as the planarian brain. Longitudinal ventral nerve cords extend from the brain to the tail. Transverse nerves (commissures) connect the ventral nerve cords, forming a ladder-like nervous system. The brain has been shown to exhibit spontaneous electrophysiological oscillations, similar to the electroencephalographic (EEG) activity of other animals.
The planarian has a soft, flat, wedge-shaped body that may be black, brown, blue, gray, or white. The blunt, triangular head has two ocelli (eyespots), pigmented areas that are sensitive to light. There are two auricles (earlike projections) at the base of the head, which are sensitive to touch and the presence of certain chemicals. The mouth is located in the middle of the underside of the body, which is covered with hairlike projections (cilia). There are no circulatory or respiratory systems; oxygen enters and carbon dioxide leaves the planarian's body by diffusing through the body wall.
Reproduction
Triclads reproduce sexually and asexually, and different species may be able to reproduce by one or both modes. Planarians are hermaphrodites. In sexual reproduction, the mating generally involves mutual insemination.
Thus, one of their gametes will combine with the gamete of another planarian. Each planarian transports its secretion to the other planarian, giving and receiving sperm. Eggs develop inside the body and are shed in capsules. Weeks later, the eggs hatch and grow into adults. In asexual reproduction, the planarian fissions and each fragment regenerates its missing tissues, generating complete anatomy and restoring function. Asexual reproduction, like regeneration following injury, requires neoblasts, adult stem cells, which proliferate and produce differentiated cells. Some researchers claim that the products derived from bisecting a planarian are similar to the products of planarian asexual reproduction; however, debates about the nature of asexual reproduction in planarians and its effect on the population are ongoing. Some species of planarian are exclusively asexual, whereas some can reproduce both sexually and asexually. In most cases, sexual reproduction involves two individuals; self-fertilization (autofecundation) has rarely been reported (e.g. in Cura foremanii).
Neoblasts
Neoblasts are abundant adult stem cells found in the parenchyma across the planarian body. They are small, round cells, 5 to 10 μm, characterized by a large nucleus surrounded by little cytoplasm. Neoblasts are required for regenerating missing tissues and organs, and they continuously replenish tissues by producing new cells. Neoblasts can self-renew and generate progenitors for different cell types. In contrast to adult vertebrate stem cells (e.g., hematopoietic stem cells), neoblasts are pluripotent (i.e., producing all somatic cell types). Moreover, they give rise to differentiating, post-mitotic cells directly, and not by producing rapidly dividing transit-amplifying cells. Consequently, neoblasts divide frequently and apparently lack a large sub-population of dormant or slow-cycling cells.
As a model system in biological and biomedical research
The life history of planarians make them a model system for investigating a number of biological processes, many of which may have implications for human health and disease. Advances in molecular genetic technologies has made the study of gene function possible in these animals and scientists are studying them worldwide. Like other invertebrate model organisms, for example C. elegans and D. melanogaster, the relative simplicity of planarians facilitates experimental study.
Planarians have a number of cell types, tissues and simple organs that are homologous to our own cells, tissues and organs. However, regeneration has attracted the most attention. Thomas Hunt Morgan was responsible for some of the first systematic studies (that still underpin modern research) before the advent of molecular biology as a discipline.
Planarians are also an emerging model organism for aging research. These animals have an apparently limitless regenerative capacity, and asexual Schmidtea mediterranea has been shown to maintain its telomere length through regeneration.
Live planarians are increasingly used in toxicological research due to their regenerative capabilities, simple anatomy, and sensitivity to environmental changes. Their ability to regenerate lost body parts provides a unique model to study the effects of chemical exposures on cellular processes, while their rapid response to toxins makes them an efficient tool for screening potential environmental and pharmaceutical hazards. An example of this application is a fluorescence-based skin irritability assay, where planaria are exposed to various chemicals, and fluorescence dye is used to evaluate their epithelial damage in response to irritation, providing an effective screening method.
Regeneration
Planarian regeneration combines new tissue production with reorganization of the existing anatomy (morphallaxis). The rate of tissue regrowth varies between species, but in frequently used lab species, functional regenerated tissues are available as early as 7–10 days following tissue amputation. Regeneration starts following an injury that requires the growth of new tissue. Neoblasts localized near the injury site proliferate to generate a structure of differentiating cells called a blastema. Neoblasts are required for new cell production, and they therefore provide the cellular basis for planarian regeneration. Cell signaling mechanisms provide positional information that regulates the cell types and tissues that are produced from the neoblasts in regeneration. Many signaling molecules that provide positional information to neoblasts, in regeneration and homeostasis, are expressed in muscle cells. Following injury, muscle cells throughout the body can alter the expression of genes that encode molecules that provide positional information. Therefore, the activities of neoblasts and muscle cells following injuries are essential for successful regeneration.
Historically, planarians have been considered "immortal under the edge of a knife." Very small pieces of the planarian, estimated to be as little as 1/279th of the organism it is cut from, can regenerate back into a complete organism over the course of a few weeks. New tissues can grow due to pluripotent stem cells that have the ability to create all the various cell types. These adult stem cells are called neoblasts, and comprise 20% or more of the cells in the adult animal. They are the only proliferating cells in the worm, and they differentiate into progeny that replace older cells. In addition, existing tissue is remodeled to restore symmetry and proportion of the new planaria that forms from a piece of a cut up organism.
The organism itself does not have to be completely cut into separate pieces for the regeneration phenomenon to be witnessed. In fact, if the head of a planarian is cut in half down its center, and each side retained on the organism, it is possible for the planarian to regenerate two heads and continue to live. Researchers, including those from Tufts University in the U.S., sought to determine how microgravity and micro-geomagnetic fields would affect the growth and regeneration of planarian flatworms, Dugesia japonica. They discovered that one of the amputated fragments sent to space regenerated into a double-headed worm. The majority of such amputated worms (95%) did not do so, however. One amputated worm regenerated into a double-headed creature after spending five weeks aboard the International Space Station (ISS), though double-headed heteromorphosis in regenerating amputated worms is not a phenomenon unique to a microgravity environment. Two-headed planarian regenerates can also be induced by exposing amputated fragments to electric fields; such exposure with opposite polarity can induce a planarian with two tails. Two-headed planarian regenerates can likewise be induced by treating amputated fragments with pharmacological agents that alter levels of calcium, cyclic AMP, and protein kinase C activity in cells, as well as by genetic expression blocks (RNA interference) of the canonical Wnt/β-catenin signalling pathway.
Biochemical memory experiments
In 1955, Robert Thompson and James V. McConnell conditioned planarian flatworms by pairing a bright light with an electric shock. After repeating this several times, they took away the electric shock and only exposed the worms to the bright light. The flatworms would react to the bright light as if they had been shocked. Thompson and McConnell found that if they cut a worm in two and allowed both halves to regenerate, each half would develop the light-shock reaction. In 1963, McConnell repeated the experiment, but instead of cutting the trained flatworms in two, he ground them into small pieces and fed them to other flatworms. He reported that the flatworms learned to associate the bright light with a shock much faster than flatworms that had not been fed trained worms.
This experiment was intended to test whether memory could be transferred chemically. It was repeated with mice, fish, and rats, but it always failed to produce the same results. The perceived explanation was that, rather than memory being transferred to the other animals, it was the hormones in the ingested ground animals that changed the behavior. McConnell believed that this was evidence of a chemical basis for memory, which he identified as memory RNA. McConnell's results are now attributed to observer bias. No blinded experiment has ever reproduced his results of planarians scrunching when exposed to light. A subsequent explanation for the scrunching behaviour associated with cannibalism of trained planarian worms was that the untrained flatworms were only following tracks left on the dirty glassware, rather than absorbing the memory of their fodder.
In 2012, Tal Shomrat and Michael Levin have shown that planarians exhibit evidence of long-term memory retrieval after regenerating a new head.
Planarian species used for research and education
Several planarian species are commonly used for biological research. Popular experimental species are Schmidtea mediterranea, Schmidtea polychroa, and Dugesia japonica, which in addition to excellent regenerative abilities, are easy to culture in the lab. In recent decades, S. mediterranea has emerged as the species of choice for modern molecular biology research, due to its diploid chromosomes and the availability of both asexual and sexual strains.
The most frequently used planarian in high school and first-year college laboratories is the brownish Girardia tigrina. Other common species used are the blackish Planaria maculata and Girardia dorotocephala.
See also
References
External links
More information on freshwater planarians and their biology
More information on the genetic screen to identify regeneration genes
YouTube videos: Planaria eating worm segment, Planarian
Schmidtea mediterranea, facts, anatomy, image at GeoChemBio.com
Alejandro Sanchez-Alvarado's Seminar: Regeneration in Planarians
Link to an article discussing some work on planarian immortality
A user-friendly visualization tool and database of planarian regeneration experiments
Tricladida on the Encyclopedia of Life (EOL)
Land planarians on the UF / IFAS Featured Creatures Web site
Rhabditophora
Animal models
Negligibly senescent organisms
Articles containing video clips
Invertebrate common names | Planarian | Biology | 3,198 |
30,798,432 | https://en.wikipedia.org/wiki/Kuniumi | In Japanese mythology, Kuniumi is the traditional and legendary history of the emergence of the Japanese archipelago, the birth of its islands, as narrated in the Kojiki and Nihon Shoki. According to this legend, after the creation of Heaven and Earth, the gods Izanagi and Izanami were given the task of forming a series of islands that would become what is now Japan. In Japanese mythology, these islands make up the known world. The creation of Japan is followed by the creation of the gods (kamiumi).
Creation story
According to the Kojiki
After the formation of Heaven and Earth, Heaven was above and Earth was still a drifting, soft mush. The first five gods to be named, the Kotoamatsukami, were lone deities without sex and did not reproduce. Then came the Kamiyonanayo, the seven divine generations, consisting of two lone deities followed by five couples. The elder gods delegated the youngest couple, Izanagi and Izanami, to carry out their venerable mandate: to reach down from heaven and give solid form to the earth.
This they did with the use of a precious stone-covered spear named Amenonuhoko (the heavenly jeweled spear), given to them by the elders. Standing over the Ame-no-ukihashi (the floating bridge of heaven), they churned the chaotic mass with the spear. When drops of salty water fell from the tip, they formed into the first island, Onogoroshima. In forming this island, both gods came down from heaven and spontaneously built a central support column called the Ame-no-mihashira, which upheld the "hall measuring eight fathoms" that the gods caused to appear afterwards.
Then they initiated a conversation, inquiring about each other's anatomy, which led to a mutual decision to mate and reproduce.
Izanami accepted the offer, and Izanagi proposed that both should circle around the column Ame-no-mihashira in opposite directions, Izanami going right and Izanagi going left, and that on meeting each other they would perform sexual intercourse. However, when they met on the other side of the pillar, Izanami was the first to speak, saying: "Oh, indeed you are a beautiful and kind youth!", to which Izanagi replied: "Oh, what a most beautiful and kind youth!". Izanagi then rebuked Izanami, saying: "It is wrong for the wife to speak first."
However, they mated anyway and fathered a child, Hiruko (lit. "leech child"), who was placed in a reed boat and dragged away by the current. Afterwards they gave birth to the island Awashima. Neither Hiruko nor Awashima were considered legitimate children of Izanagi and Izanami.
Izanagi and Izanami decided to ascend to heaven and consult the primordial gods at Takamagahara about the ill-formed children that resulted from their union. The gods determined through divination that the female speaking first during the ceremony was the cause. So the couple returned to Onogoroshima island and repeated the rite encircling the column, only making sure Izanagi was the first to speak out in greeting. When finished, they performed the union successfully and lands began to be born.
Birth of the islands
According to the legend, the formation of Japan began with the creation of eight large islands by Izanagi and Izanami, listed here by their present-day names in order of birth:
Awaji Island;
Shikoku, an island with one body and four faces, corresponding to: Iyo Province; Sanuki Province; Awa Province; and Tosa Province;
the Oki Islands;
Kyūshū, an island with one body and four faces, corresponding to: Tsukushi Province; Toyo Province; Hi Province; and Kumaso;
Iki Island;
Tsushima Island;
Sado Island;
Honshu.
Traditionally these islands are known as Ōyashima (lit. eight large islands) and as a whole are what is currently known as Japan. In the myth neither Hokkaidō nor the Ryukyu Islands are mentioned as these were not known to the Japanese at the time of compiling the Kojiki.
Additionally, Izanagi and Izanami then gave birth to six islands:
an island of Kibi (now in Okayama);
Shōdoshima;
Suō-Ōshima;
Himeshima;
the Gotō Islands;
the Danjo Islands.
According to the Nihon Shoki
The account in this book differs only in that Izanagi and Izanami volunteered to consolidate the earth. In addition, the two deities are described as the "god of yang" (陽神 youshin, male deity) and the "goddess of yin" (陰神 inshin, female deity), influenced by the ideas of yin and yang. The rest of the story is identical, except that the other celestial gods (Kotoamatsukami) do not appear, and the last six smaller islands born to Izanagi and Izanami are not mentioned.
Notes
References
Bibliography
Japanese mythology
Creation myths | Kuniumi | Astronomy | 1,055 |
26,248,738 | https://en.wikipedia.org/wiki/List%20of%20megaprojects | This is a list of megaprojects, which may be defined in the following categories:
Projects that cost more than US$1 billion and attract a large amount of public attention because of substantial impacts on communities, the natural and built environment, and budgets.
Projects with "initiatives that are physical, very expensive, and public".
Some examples include bridges, tunnels, highways, railways, hospitals, airports, seaports, power plants, dams, wastewater projects, Special Economic Zones (SEZ), oil and natural gas extraction projects, public buildings, information technology systems, aerospace projects, and weapons systems. This list identifies a wide variety of examples of major historic and contemporary projects that meet one or both megaproject criteria identified above.
Legend
Aerospace projects
Disaster cleanup
While most megaprojects are planned and undertaken with careful forethought, some are undertaken out of necessity after a natural disaster occurs. There have also been a few human-made disasters. Major restoration was necessary after the destruction caused by World War I and II, some of which was paid for by German reparations for World War I and for World War II.
Energy projects
Science projects
Research and development efforts
Physics and Astronomy infrastructure
Spacecraft
Other spaceflight projects
Sports and culture projects
Every Olympic Games and FIFA World Cup in the latter part of the twentieth century and into the 21st century has cost more than $1 billion in arenas, hotels, etc., and usually several billion. The Olympic Games are considered to be the world's foremost international sporting event, with over 200 nations participating. Sports-related costs for the Summer Games since 1960 average $5.2 billion (USD), and for the Winter Games $393.1 million. The highest recorded total cost was that of the 2014 Sochi Winter Olympics, at approximately US$55 billion. The International Olympic Committee requires a minimum of 40,000 hotel rooms available for visiting spectators and an Olympic Village able to house 15,000 athletes, referees, and officials.
Roads and transport infrastructure
Ground transportation systems like roads, tunnels, bridges, terminals, railways, and mass transit systems are often megaprojects. Numerous large airports and terminals used for airborne passenger and cargo transportation are built as megaprojects.
Africa
Asia
Europe
North America
Oceania
Planned cities and urban renewal projects
Africa
Asia
Europe
North America
Oceania
South America
Water-related
Ports, waterways, canals, and locks for ships carrying passengers and cargo are built as megaprojects.
Africa
Asia
Europe
North America
South America
Hospitals
Europe
See also
List of most expensive U.S. public works projects
References
External links
Megaprojects in Dubai
Lists of most expensive things
Megaprojects
Infrastructure-related lists | List of megaprojects | Engineering | 540 |
62,756,782 | https://en.wikipedia.org/wiki/List%20of%20train-surfing%20injuries%20and%20deaths | This is a list of train-surfing injuries and deaths.
Data of train-surfing injuries and deaths
Train-surfing injuries and deaths
See also
Car surfing
Elevator surfing
List of graffiti and street-art injuries and deaths
List of selfie-related injuries and deaths
Rail suicide
Skitching
Train surfing
References
Train surfing
Train surfing | List of train-surfing injuries and deaths | Technology | 71 |
40,155,925 | https://en.wikipedia.org/wiki/3-Aminophthalic%20acid | 3-Aminophthalic acid is a product of the oxidation of luminol. The reaction requires the presence of a catalyst. A mixture of luminol and hydrogen peroxide is used in forensics: when the mixture is sprayed on an area that contains blood, the iron in the hemoglobin of the blood catalyzes the reaction between luminol and hydrogen peroxide, producing 3-aminophthalate, which gives out light by chemiluminescence.
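Schematically (unbalanced, and intended only as an illustration of the mechanism described above), the forensic reaction can be written:
\( \text{luminol} + \mathrm{H_2O_2} \xrightarrow{\ \text{Fe (heme) catalyst}\ } \text{3-aminophthalate}^{*} + \mathrm{N_2} \)
\( \text{3-aminophthalate}^{*} \longrightarrow \text{3-aminophthalate} + h\nu \)
where the asterisk marks an electronically excited state and \( h\nu \) is the emitted light.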
References
Forensic chemicals
Anilines
Carboxylic acids | 3-Aminophthalic acid | Chemistry | 107 |
65,791,339 | https://en.wikipedia.org/wiki/Diversity-generating%20retroelement | Diversity-generating retroelements (DGRs) are a family of retroelements that were first found in a Bordetella phage (BPP-1) and have since been found in bacteria (e.g. Treponema denticola and Legionella pneumophila), Archaea, archaeal viruses (e.g. ANMV-1), temperate phages (e.g. Hankyphage and crAss-like phages), and lytic phages. DGRs benefit their host by mutating particular regions of specific target proteins, for instance the phage tail fiber in BPP-1, a lipoprotein in Legionella pneumophila (the pathogen behind Legionnaires' disease), and TvpA in Treponema denticola (an oral-associated periopathogen). An error-prone reverse transcriptase is responsible for generating these hypervariable regions in target proteins (mutagenic retrohoming). In mutagenic retrohoming, a mutagenized cDNA (containing substantial A-to-N mutations) is reverse transcribed from a template region (TR) and replaces a similar segment of the target gene called the variable region (VR). The accessory variability determinant (Avd) protein is another component of DGRs, and its complex formation with the error-prone RT is important to mutagenic retrohoming.
DGRs are beneficial to the evolution and survival of their host. A large fraction of Faecalibacterium prausnitzii phages contain DGRs that are believed to have a role in phage adaptability to the digestive system: patients with inflammatory bowel disease (IBD) have more phages but less F. prausnitzii in their stool samples compared to healthy individuals, suggesting that these phages activate during the illness and may trigger F. prausnitzii depletion. Several tools have been implemented to identify DGRs, such as DiGReF, DGRscan, MetaCSST, and myDGR.
See also
Retron
References
Mobile genetic elements
Molecular biology | Diversity-generating retroelement | Chemistry,Biology | 451 |
40,333,801 | https://en.wikipedia.org/wiki/JFLAP | JFLAP (Java Formal Languages and Automata Package) is interactive educational software written in Java for experimenting with topics in the computer science area of formal languages and automata theory, primarily intended for use at the undergraduate level or as an advanced topic for high school. JFLAP allows one to create and simulate structures, such as programming a finite-state machine, and experiment with proofs, such as converting a nondeterministic finite automaton (NFA) to a deterministic finite automaton (DFA).
JFLAP is developed and maintained at Duke University, with support from the National Science Foundation since 1993. It is freeware and the source code of the most recent version is available, but under some restrictions. JFLAP runs as a Java application.
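As a sketch of the kind of construction JFLAP animates, the following Python fragment implements the textbook subset construction for converting an NFA to a DFA. The example NFA, which accepts binary strings ending in "01", is a hypothetical illustration and is not taken from JFLAP itself; epsilon transitions are omitted for brevity.

from collections import deque

# Hypothetical NFA over {0, 1} accepting strings that end in "01":
# state -> input symbol -> set of successor states.
nfa = {
    "q0": {"0": {"q0", "q1"}, "1": {"q0"}},
    "q1": {"1": {"q2"}},
    "q2": {},
}
start, accepting, alphabet = "q0", {"q2"}, ("0", "1")

def subset_construction(nfa, start, accepting, alphabet):
    # Each DFA state is a frozenset of NFA states.
    start_set = frozenset({start})
    dfa, worklist = {}, deque([start_set])
    while worklist:
        state_set = worklist.popleft()
        if state_set in dfa:
            continue
        dfa[state_set] = {}
        for sym in alphabet:
            target = frozenset(s for q in state_set
                               for s in nfa.get(q, {}).get(sym, set()))
            dfa[state_set][sym] = target
            if target not in dfa:
                worklist.append(target)
    # A DFA state is accepting if it contains any accepting NFA state.
    dfa_accepting = {s for s in dfa if s & accepting}
    return dfa, start_set, dfa_accepting

dfa, dfa_start, dfa_accepting = subset_construction(nfa, start, accepting, alphabet)
print(len(dfa), "DFA states; accepting:", sorted(map(sorted, dfa_accepting)))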
History
Before JFLAP, several software tools related to automata theory were developed by Susan H. Rodger and her students, starting around 1990 in the Computer Science Department at Rensselaer Polytechnic Institute. In 1992, the first published paper, presented at a DIMACS workshop, described a related tool called NPDA (the paper was published later, in 1994, in a DIMACS series). NPDA then evolved into FLAP, which also included finite-state machines and Turing machines. In 1993, a paper on the Formal Languages and Automata Package (FLAP) was published. At that time, the tool was written in C++ and X Window. Around 1994, Rodger moved to Duke University and continued tool development. Around 1996, FLAP was converted to Java, and the first paper mentioning JFLAP was published in 1996.
Along the way, other tools were developed as stand-alone tools and then later integrated into JFLAP. For example, a paper in 1999 described how JFLAP now allowed one to experiment with construction-type proofs, such as converting an NFA to a DFA to a minimal-state DFA and, as another example, converting an NPDA to a CFG and vice versa. In 2002, JFLAP was converted to Swing. In 2005–2007, a study was run with fourteen institutions using JFLAP. A paper on this study in 2009 showed that students using JFLAP felt more engaged in the class and found learning the concepts easier.
The history of JFLAP is covered on the jflap.org site, and includes over 35 students from Rensselaer Polytechnic Institute and Duke University who have worked on JFLAP and related tools since 1990.
A paper by Chakraborty, Saxena and Katti entitled "Fifty years of automata simulation: a review" in ACM Inroads magazine in December 2011 stated the following about JFLAP:
"The effort put into developing this tool is unparalleled in the field of simulation of automata. As a result, today it is the most sophisticated tool for simulating automata. It now covers a large number of topics on automata and related fields. The tool is also the best documented among the tools for simulation of automata." and "The tool uses state of the art graphics and is one of the easiest to use. The tool is undoubtedly the most widely used tool for simulation of automata developed to date. Thousands of students have used it at numerous universities in more than a hundred countries."
Topics covered in JFLAP
Topics on regular language include:
finite-state machine
regular grammar
regular expression
Proof on nondeterministic finite automaton to deterministic finite automaton
Proof on deterministic finite automaton to regular grammar
Proof on deterministic finite automaton to regular expression
pumping lemma for regular languages
Topics on context-free language include:
pushdown automata
context-free grammar
proof on nondeterministic pushdown automaton to context-free grammar
proof on context-free grammar to pushdown automaton
pumping lemma for context-free language
CYK parser
LL parser
SLR parser
Topics on recursively enumerable language:
Turing machine
unrestricted grammar
Other related topics:
Moore machine
Mealy machine
L-system
Releases
JFLAP is currently released as Version 7.1.
Awards
In 2007, Rodger and her students were a Finalist in the NEEDS Premier Award for Excellence in Engineering Education Courseware for the software JFLAP.
In 2014, Rodger was awarded the ACM Karl V. Karlstrom Outstanding Educator Award for her contributions to CS education, including the development of JFLAP.
Books on JFLAP
Rodger and Thomas Finley wrote a book on JFLAP in 2006 that can be used as a supplemental book with an automata theory course. Gopalakrishnan wrote a book on Computation Engineering in which he encourages the use of JFLAP for experimenting with machines; JFLAP is also suggested for use in exercises. Mordechai Ben-Ari wrote a book entitled Principles of the SPIN Model Checker in which JFLAP is referenced; in particular, the Visualizing Nondeterminism (VN) software the book is about reads finite automata in the JFLAP file format. Maxim Mozgovoy wrote an automata theory textbook in which he uses screenshots from JFLAP. Other people have written books that refer to the use of JFLAP in some way; several are mentioned on the JFLAP web site.
References
External links
JFLAP web site
FLAP web site
Java (programming language) software
Educational programming languages
Computer science education | JFLAP | Technology | 1,117 |
24,314,125 | https://en.wikipedia.org/wiki/Romidepsin | Romidepsin, sold under the brand name Istodax, is an anticancer agent used in cutaneous T-cell lymphoma (CTCL) and other peripheral T-cell lymphomas (PTCLs). Romidepsin is a natural product obtained from the bacterium Chromobacterium violaceum, and works by blocking enzymes known as histone deacetylases, thus inducing apoptosis. It is sometimes referred to as depsipeptide, after the class of molecules to which it belongs. Romidepsin is branded and owned by Gloucester Pharmaceuticals, a part of Celgene.
History
Romidepsin was first reported in the scientific literature in 1994, by a team of researchers from Fujisawa Pharmaceutical Company (now Astellas Pharma) in Tsukuba, Japan, who isolated it in a culture of Chromobacterium violaceum from a soil sample obtained in Yamagata Prefecture. It was found to have little to no antibacterial activity, but was potently cytotoxic against several human cancer cell lines, with no effect on normal cells; studies on mice later found it to have antitumor activity in vivo as well.
The first total synthesis of romidepsin was accomplished by Harvard researchers and published in 1996. Its mechanism of action was elucidated in 1998, when researchers from Fujisawa and the University of Tokyo found it to be a histone deacetylase inhibitor with effects similar to those of trichostatin A.
Clinical trials
Phase I studies of romidepsin, initially codenamed FK228 and FR901228, began in 1997. Phase II and phase III trials were conducted for a variety of indications. The most significant results were found in the treatment of cutaneous T-cell lymphoma (CTCL) and other peripheral T-cell lymphomas (PTCLs).
In 2004, romidepsin received Fast Track designation from the FDA for the treatment of cutaneous T-cell lymphoma, and orphan drug status from the FDA and the European Medicines Agency for the same indication.
The FDA approved romidepsin for CTCL in November 2009 and approved romidepsin for other peripheral T-cell lymphomas (PTCLs) in June 2011.
A randomised phase III trial of romidepsin plus CHOP chemotherapy versus CHOP alone for patients with peripheral T-cell lymphoma returned negative results, with no significant impact on progression-free survival or overall survival.
Pre-clinical HIV study
In 2014, PLOS Pathogens published a study involving romidepsin in a trial designed to reactivate latent HIV in order to deplete the HIV reservoir. Latently infected T-cells were exposed in vitro and ex vivo to romidepsin, leading to an increase in detectable levels of cell-associated HIV RNA. The trial also compared the effect of romidepsin to that of another histone deacetylase inhibitor, vorinostat.
Autism study in animal model
An animal study showed that a brief treatment with low doses of romidepsin could reverse social deficits in a mouse model of autism.
Pharmacodynamics
In a Phase II trial of romidepsin involving patients with CTCL or PTCL, there was evidence of increased histone acetylation in peripheral blood mononuclear cells (PBMCs) extending 4–48 hours. Expression of the ABCB1 gene, a marker of romidepsin-induced gene expression, was also increased in both PBMCs and tumor biopsy samples. Increased gene expression following increased histone acetylation is an expected effect of an HDAC inhibitor. Increased hemoglobin F (another surrogate marker for gene-expression changes resulting from HDAC inhibition) was also detected in blood after romidepsin administration, and persistent histone acetylation was inversely associated with drug clearance and directly associated with patient response to therapy.
Dosage and administration
The approved dosage of romidepsin in both CTCL and PTCL is a four-hour i.v. administration of 14 mg/m2 on days 1, 8, and 15 of a 28-day treatment cycle. This cycle should be repeated as long as the patient continues to benefit and tolerate the therapy. A dose reduction to 10 mg/m2 is possible in some patients who experience high-grade toxicities.
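To make the body-surface-area arithmetic concrete, the sketch below computes the absolute dose implied by the 14 mg/m2 figure. It assumes the Mosteller formula for estimating BSA, which is only one of several estimates used in practice, so treat the result as illustrative:

```python
from math import sqrt

def romidepsin_dose(height_cm, weight_kg, dose_per_m2=14.0):
    """Absolute dose (mg) for a BSA-based regimen of `dose_per_m2` mg/m2.
    BSA is estimated with the Mosteller formula; clinics may use a
    different estimate."""
    bsa_m2 = sqrt(height_cm * weight_kg / 3600.0)  # Mosteller (1987)
    return dose_per_m2 * bsa_m2

# A 170 cm, 70 kg patient: BSA ~ 1.82 m2, so roughly 25.5 mg per infusion.
print(round(romidepsin_dose(170, 70), 1))
```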
Pharmacokinetics
In trials involving patients with advanced cancers, romidepsin exhibited linear pharmacokinetics across doses ranging from 1.0 to 24.9 mg/m2 when administered intravenously over four hours. Age, race, sex, mild-to-severe renal impairment, and mild-to-moderate hepatic impairment had no effect on romidepsin pharmacokinetics. No accumulation of plasma concentration was observed after repeated dosing.
Mechanism of action
Romidepsin acts as a prodrug with the disulfide bond undergoing reduction within the cell to release a zinc-binding thiol. The thiol binds to a zinc atom in the binding pocket of Zn-dependent histone deacetylase to block its activity. Thus it is an HDAC inhibitor. Many HDAC inhibitors are potential treatments for cancer through the ability to epigenetically restore normal expression of tumor suppressor genes, which may result in cell cycle arrest, differentiation, and apoptosis.
Adverse effects
The use of romidepsin is uniformly associated with adverse effects. In clinical trials, the most common were nausea and vomiting, fatigue, infection, loss of appetite, and blood disorders (including anemia, thrombocytopenia, and leukopenia). It has also been associated with infections, and with metabolic disturbances (such as abnormal electrolyte levels), skin reactions, altered taste perception, and changes in cardiac electrical conduction.
References
Antineoplastic drugs
Drugs developed by Bristol Myers Squibb
Orphan drugs
Prodrugs
Histone deacetylase inhibitors
Depsipeptides
Astellas Pharma | Romidepsin | Chemistry | 1,277 |
36,959,022 | https://en.wikipedia.org/wiki/Mu%20Persei | Mu Persei, Latinised from μ Persei, is a binary star system in the northern constellation of Perseus. It is visible to the naked eye as a point of light with a combined apparent visual magnitude of +4.16. The distance to this system is approximately 900 light-years based on parallax measurements. It is drifting further away with a radial velocity of +26 km/s.
Mu Persei is a spectroscopic binary with an orbital period of 284 days and an eccentricity of about 0.06. The primary component is a yellow G-type supergiant star; with a radius of 53 solar radii, it has a luminosity about 2,030 times that of the Sun. The companion is a B-type star of class B9.5.
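The quoted radius and luminosity fix the primary's effective temperature through the Stefan–Boltzmann law, L = 4πR²σT⁴. A quick back-of-the-envelope check in Python (the solar reference temperature is the IAU nominal value; the implied figure is an estimate, not a measured value from the article):

```python
# Invert the Stefan-Boltzmann relation in solar units:
#   T = T_sun * (L / L_sun)**0.25 / (R / R_sun)**0.5
T_SUN = 5772.0          # K, IAU nominal solar effective temperature
radius_rsun = 53.0      # radius from the article
lum_lsun = 2030.0       # luminosity from the article

t_eff = T_SUN * lum_lsun**0.25 / radius_rsun**0.5
print(f"implied T_eff ~ {t_eff:.0f} K")  # ~5300 K, consistent with a G supergiant
```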
Mu Persei is moving through the galaxy at a speed of 35.6 km/s relative to the Sun. Its projected galactic orbit carries it between 23,900 and 32,400 light-years from the center of the galaxy.
Mu Persei came closest to the Sun 5.6 million years ago when it had brightened to magnitude 3.25 from a distance of 600 light-years.
Naming
In Chinese, the asterism meaning Celestial Boat consists of μ Persei, η Persei, γ Persei, α Persei, ψ Persei, δ Persei, 48 Persei and HD 27084. Consequently, μ Persei itself is known as the Seventh Star of Celestial Boat.
References
G-type supergiants
Spectroscopic binaries
Perseus (constellation)
Persei, Mu
Durchmusterung objects
Persei, 51
026630
019812
1303 | Mu Persei | Astronomy | 361 |
21,505,797 | https://en.wikipedia.org/wiki/List%20of%20Windows%20Mobile%20devices | Windows Mobile is a mobile operating system developed by Microsoft, based on Windows CE; it is the successor to Pocket PC 2002 and the predecessor of Windows Phone. New devices running Windows Mobile were released between 2003 and 2010, produced by many different companies. The table below groups devices into two categories: those with cellular capability and those without. The version of Windows Mobile 5.x called "Smartphone" and the version of Windows Mobile 6.x called "Standard" are designed to run on devices without a touch screen; all other devices listed have touch screens.
Windows Mobile 2003
Windows Mobile 2003 Second Edition (SE)
Windows Mobile 5.0
Windows Mobile 6.0
Windows Mobile 6.1
Windows Mobile 6.5
See also
List of Windows Phone devices (Windows Mobile is not to be confused with Windows Phone)
References
Directory of devices based on Windows Mobile
Windows | List of Windows Mobile devices | Technology | 178 |
381,585 | https://en.wikipedia.org/wiki/George%20de%20Hevesy | George Charles de Hevesy (born György Bischitz; ; ; 1 August 1885 – 5 July 1966) was a Hungarian radiochemist and Nobel Prize in Chemistry laureate, recognized in 1943 for his key role in the development of radioactive tracers to study chemical processes such as in the metabolism of animals. He also co-discovered the element hafnium.
Biography
Early years
Hevesy György was born in Budapest, Hungary, to a wealthy and ennobled family of Hungarian-Jewish descent, the fifth of eight children of his parents Lajos Bischitz and Baroness Eugénia (Jenny) Schossberger (ennobled as "De Tornya"). Grandparents on both sides of the family had served as presidents of the Jewish community of Pest. His parents converted to Roman Catholicism. George grew up in Budapest and graduated from high school in 1903 at the Piarist Gimnázium. In 1904 the family name was changed to Hevesy-Bischitz, and Hevesy later changed his own.
De Hevesy began his studies in chemistry at the University of Budapest for one year, and at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin) for several months, but transferred to the University of Freiburg. There he met Ludwig Gattermann. In 1906, he started his Ph.D. thesis with Georg Franz Julius Meyer, acquiring his doctorate in physics in 1908. In 1908, Hevesy was offered a position at the ETH Zürich, Switzerland, yet being independently wealthy, he was able to choose his research environment. He worked first with Fritz Haber in Karlsruhe, Germany, then with Ernest Rutherford in Manchester, England, where he also met Niels Bohr. Back at home in Budapest, he was appointed professor in physical chemistry in 1918. In 1920, he settled in Copenhagen.
Research
In 1922, de Hevesy co-discovered (with Dirk Coster) the element hafnium (72Hf) (Latin Hafnia for "Copenhagen", the home town of Niels Bohr). Mendeleev's 1869 periodic table arranged the chemical elements into a logical system, but a chemical element with 72 protons was missing. Hevesy determined to look for that element on the basis of Bohr's atomic model. The mineralogical museum of Norway and Greenland in Copenhagen furnished the material for the research. Characteristic X-ray spectra recordings made of the sample indicated that a new element was present. The accepted account has been disputed by Mansel Davies and Eric Scerri who attribute the prediction that element 72 would be a transition element to the chemist Charles Bury.
Supported financially by the Rockefeller Foundation, Hevesy had a very productive year. He developed the X-ray fluorescence analytical method, and discovered the samarium alpha-ray. It was here he began the use of radioactive isotopes in studying the metabolic processes of plants and animals, by tracing chemicals in the body by replacing part of stable isotopes with small quantities of the radioactive isotopes. In 1923, Hevesy published the first study on the use of the naturally radioactive 212Pb as radioactive tracer to follow the absorption and translocation in the roots, stems and leaves of Vicia faba, also known as the broad bean. Later, in 1943, the work on radioactive tracing would earn Hevesy the Nobel Prize in Chemistry.
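The practicality of ²¹²Pb as a tracer is governed by its short half-life, roughly 10.6 hours (the exact value used here is an assumption for illustration). A short sketch of the decay arithmetic shows why such experiments had to be completed quickly:

```python
from math import exp, log

HALF_LIFE_H = 10.64  # approximate half-life of Pb-212 in hours (assumed value)

def fraction_remaining(hours, half_life=HALF_LIFE_H):
    """Fraction of the original Pb-212 activity left after `hours`."""
    return exp(-log(2) * hours / half_life)

# After a 24-hour uptake experiment only about 21% of the tracer remains.
print(f"{fraction_remaining(24):.2%}")
```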
In 1924, Hevesy returned to Freiburg as Professor of Physical Chemistry. In 1930, he went to Cornell University, Ithaca, as Baker Lecturer. In 1934, after the Nazis came to power in Germany, he returned to Niels Bohr's institute at the University of Copenhagen. In 1936, he invented neutron activation analysis. In 1943, he fled to Stockholm (Sweden being neutral during the war), where he was an associate of the Institute of Research in Organic Chemistry. In 1949, he was elected Franqui Professor at the University of Ghent. In his retirement, he remained an active scientific associate of the University of Stockholm.
World War II and beyond
Prior to the onset of World War II, Max von Laue and James Franck had sent their gold Nobel Prize medals to Denmark to keep them from being confiscated by the Nazis. After the Nazi invasion of Denmark this placed them in danger; it was illegal at the time to send gold out of Germany, and were it discovered that Laue and Franck had done so, they could have faced prosecution. To prevent this, de Hevesy concealed the medals by dissolving them in aqua regia and placing the resulting solution on a shelf in his laboratory at the Niels Bohr Institute in Copenhagen. After the war, he returned to find the solution undisturbed and precipitated the gold out of the acid. The Nobel Society then recast the medals using the recovered gold and returned them to the two laureates.
By 1943, Copenhagen was no longer safe for a Jewish scientist and de Hevesy fled to Sweden, where he worked at the University of Stockholm until 1961. In Stockholm, de Hevesy was received at the department of chemistry by the Swedish professor and Nobel Prize winner Hans von Euler-Chelpin, who remained strongly pro-German throughout the war. Despite this, de Hevesy and von Euler-Chelpin collaborated on many scientific papers during and after the war.
While in Stockholm, de Hevesy received the Nobel Prize in chemistry. He was later inducted into the Royal Swedish Academy of Sciences and received the Copley Medal, of which he was particularly proud. De Hevesy stated: "The public thinks the Nobel Prize in chemistry is the highest honor that a scientist can receive, but it is not so. Forty or fifty have received Nobel chemistry prizes, but there are only ten foreign members of the Royal Swedish Academy, and only two have received a Copley." (Bohr was the other one.) He received the Atoms for Peace Award in 1958 for his peaceful use of radioactive isotopes.
Family life and death
De Hevesy married Pia Riis in 1924. They had one son and three daughters together, one of whom (Eugenie) married a grandson of the Swedish Nobel laureate Svante Arrhenius. De Hevesy died in 1966 at the age of eighty and was buried in Freiburg. In 2000, his body was moved to the Kerepesi Cemetery in Budapest, Hungary. He had published a total of 397 scientific documents, one of which was the Becquerel-Curie Memorial Lecture, in which he had reminisced about the careers of pioneers of radiochemistry. At his family's request, his ashes were interred at his birthplace in Budapest on 19 April 2001.
On 10 May 2005 the Hevesy Laboratory was founded at Risø National Laboratory for Sustainable Energy, now Technical University of Denmark, DTU Nutech. It was named after George de Hevesy as the father of the isotope tracer principle under the initiative of the lab's first director, Prof. Mikael Jensen.
See also
August Krogh
List of Jewish Nobel laureates
Johanna Bischitz de Heves
10444 de Hevesy
Hevesy (crater)
The Martians (scientists)
Hungarian Nobel Prize winners
References
External links
including the Nobel Lecture on 12 December 1944 Some Applications of Isotopic Indicators
Annotated bibliography for George de Hevesy from the Alsos Digital Library for Nuclear Issues
1885 births
1966 deaths
Nobel laureates in Chemistry
Hungarian Nobel laureates
Nobel laureates from Austria-Hungary
Jewish Nobel laureates
Jewish chemists
Scientists from Budapest
Hungarian Jews
Hungarian Roman Catholics
Hungarian physical chemists
Recipients of the Copley Medal
Atoms for Peace Award recipients
Members of the Royal Swedish Academy of Sciences
University of Freiburg alumni
Technische Universität Berlin alumni
Nobility from Budapest
Hungarian expatriates in Sweden
Foreign members of the Royal Society
Discoverers of chemical elements
Burials at Kerepesi Cemetery
Niels Bohr International Gold Medal recipients
Recipients of the Pour le Mérite (civil class)
Medicinal radiochemistry
People from Tura, Hungary
Jews who emigrated to escape Nazism
Hungarian expatriates in Denmark
Recipients of the Cothenius Medal | George de Hevesy | Chemistry | 1,668 |
34,947,463 | https://en.wikipedia.org/wiki/IT%20as%20a%20service | IT as a service (ITaaS) is an operational model where the information technology (IT) service provider delivers an information technology service to a business. The IT service provider can be an internal IT organization or an external IT services company. The recipients of ITaaS can be a line of business (LOB) organization within an enterprise or a small and medium business (SMB). The information technology is typically delivered as a managed service with a clear IT services catalog and pricing associated with each of the catalog items. At its core, ITaaS is a competitive business model where businesses have many options for IT services and the internal IT organization has to compete against those other external options in order to be the selected IT service provider to the business. Options for providers other than the internal IT organization may include IT outsourcing companies and public cloud providers.
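As a rough illustration of a priced service catalog driving consumption-based pricing, consider the Python sketch below; every service name and unit price in it is hypothetical, and real ITaaS catalogs are far larger:

```python
from dataclasses import dataclass

@dataclass
class CatalogItem:
    name: str
    unit: str          # billing unit, e.g. "VM-month" or "mailbox-month"
    unit_price: float  # internal price in dollars (hypothetical values)

CATALOG = [
    CatalogItem("Standard virtual machine", "VM-month", 85.00),
    CatalogItem("Managed mailbox", "mailbox-month", 6.50),
    CatalogItem("1 TB backup storage", "TB-month", 22.00),
]

def showback(usage: dict) -> float:
    """Monthly showback bill for a line of business: price each consumed
    catalog item so costs are transparently tied to consumption."""
    prices = {item.name: item.unit_price for item in CATALOG}
    return sum(prices[name] * qty for name, qty in usage.items())

# An LOB consuming 12 VMs, 250 mailboxes, and 4 TB of backup:
print(showback({"Standard virtual machine": 12,
                "Managed mailbox": 250,
                "1 TB backup storage": 4}))  # -> 2733.0
```

A bill like this is what lets a line of business compare the internal IT organization's price against external providers, which is the competitive core of the ITaaS model.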
Under an ITaaS model, the IT service provider will place great emphasis on the needs and the outcomes required by the business to improve employee productivity, the top line (revenue), and the bottom line (profitability). Such services will have a deep industry focus to fully enable industry-specific use cases. The benefits to the business sought by using the ITaaS model include the standardization and simplification of products delivered by IT, improved financial transparency and more direct association of costs to consumption, and increased IT operational efficiency resulting from the need to compare the price of internally produced products to those available from external providers. The transformation of an internal IT organization from operating as a cost center to an ITaaS model is also believed to produce improved levels of business agility for the enterprise as a whole.
Not a cloud service model
According to The NIST Definition of Cloud Computing, there are three service models associated with cloud computing: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). The concept of ITaaS as an operating model is not limited to or dependent on cloud computing. Several proponents of ITaaS as an operating model will insist that the ability for an IT organization to deliver ITaaS is enabled by underlying technology models such as IaaS, PaaS, and SaaS. Vendors who are proponents of the concept of ITaaS as an operating model include EMC, Citrix, and VMware.
ITaaS is not a technology shift - such as a move to increase the use of virtualization. Rather, it is an operational and organizational shift to running IT like a business and optimizing IT production for business consumption. IT organizations that adopt ITaaS are most likely to use the best practices for IT service management as defined in ITIL.
Transformation to IT as a service
Several vendors who are proponents of ITaaS describe the transition of an IT organization to the ITaaS model as a journey which includes the adoption of such models as:
New technology models founded on the use of private, public, and hybrid clouds; employing controls, trust and compliance up and down the stack; introducing infrastructure standardization and automation wherever possible.
New consumption models leveraging self-service catalogs offering both internal and external services; providing IT financial transparency for costs and pricing; offering consumerized IT – such as bring your own device (BYoD) – to meet the needs of users. All of which simplify and encourage consumption of services.
New operational models which imply a revised organization, with new business and technical skills and roles; creation of more horizontal, service-oriented processes; explicit IT alignment with lines-of-business.
A business-driven IT solution is represented as a repeatable business activity with a specified outcome. The service acts as a self-contained logical unit, may be composed of other services (choreography), and is generally a "black box" to typical consumers while remaining highly transparent to leadership.
ITaaS can also include services such as a fractional CIO for the business, whereby the provider's fiduciary duty is to the client.
See also
Desktop outsourcing
as a service
References
As a service
Information technology management
| IT as a service | Technology | 816 |