Columns: id (int64, 580–79M), url (string, 31–175 chars), text (string, 9–245k chars), source (string, 1–109 chars), categories (string, 160 classes), token_count (int64, 3–51.8k)
48,707
https://en.wikipedia.org/wiki/GNU%20Octave
GNU Octave is a programming language for scientific computing and numerical computation. Octave helps in solving linear and nonlinear problems numerically and in performing other numerical experiments, using a language that is mostly compatible with MATLAB. It may also be used as a batch-oriented language. As part of the GNU Project, it is free software under the terms of the GNU General Public License. History The project was conceived around 1988. At first it was intended to be a companion to a chemical reactor design course. Full development was started by John W. Eaton in 1992. The first alpha release dates back to 4 January 1993, and version 1.0 was released on 17 February 1994. Version 9.2.0 was released on 7 June 2024. The program is named after Octave Levenspiel, a former professor of the principal author. Levenspiel was known for his ability to perform quick back-of-the-envelope calculations. Developments In addition to use on desktops for personal scientific computing, Octave is used in academia and industry. For example, Octave was used on a massively parallel computer at Pittsburgh Supercomputing Center to find vulnerabilities related to guessing social security numbers. Acceleration with OpenCL or CUDA is also possible with use of GPUs. Technical details Octave is written in C++ using the C++ standard library. Octave uses an interpreter to execute the Octave scripting language. Octave is extensible using dynamically loadable modules. The Octave interpreter has an OpenGL-based graphics engine to create plots, graphs and charts and to save or print them. Alternatively, gnuplot can be used for the same purpose. Octave includes a graphical user interface (GUI) in addition to the traditional command-line interface (CLI); see the User interfaces section below for details. Octave, the language The Octave language is an interpreted programming language. It is a structured programming language (similar to C) and supports many common C standard library functions, and also certain UNIX system calls and functions. However, it does not support passing arguments by reference, although function arguments are copy-on-write to avoid unnecessary duplication. Octave programs consist of a list of function calls or a script. The syntax is matrix-based and provides various functions for matrix operations. It supports various data structures and allows object-oriented programming. Its syntax is very similar to MATLAB, and careful programming of a script will allow it to run on both Octave and MATLAB. Because Octave is made available under the GNU General Public License, it may be freely changed, copied and used. The program runs on Microsoft Windows and most Unix and Unix-like operating systems, including Linux, Android, and macOS. Notable features Command and variable name completion Typing a TAB character on the command line causes Octave to attempt to complete variable, function, and file names (similar to Bash's tab completion). Octave uses the text before the cursor as the initial portion of the name to complete. Command history When running interactively, Octave saves the commands typed in an internal buffer so that they can be recalled and edited. Data structures Octave includes a limited amount of support for organizing data in structures.
In this example, we see a structure with elements a, b, and c (an integer, an array, and a string, respectively): octave:1> x.a = 1; x.b = [1, 2; 3, 4]; x.c = "string"; octave:2> x.a ans = 1 octave:3> x.b ans = 1 2 3 4 octave:4> x.c ans = string octave:5> x x = scalar structure containing the fields: a = 1 b = 1 2 3 4 c = string Short-circuit Boolean operators Octave's && and || logical operators are evaluated in a short-circuit fashion (like the corresponding operators in the C language), in contrast to the element-by-element operators & and |. Increment and decrement operators Octave includes the C-like increment and decrement operators ++ and -- in both their prefix and postfix forms. Octave also does augmented assignment, e.g. x += 5. Unwind-protect Octave supports a limited form of exception handling modelled after the unwind_protect of Lisp. The general form of an unwind_protect block looks like this: unwind_protect body unwind_protect_cleanup cleanup end_unwind_protect As a general rule, GNU Octave recognizes as termination of a given block either the keyword end (which is compatible with the MATLAB language) or a more specific keyword endblock or, in some cases, end_block. As a consequence, an unwind_protect block can be terminated either with the keyword end_unwind_protect as in the example, or with the more portable keyword end. The cleanup part of the block is always executed. In case an exception is raised by the body part, cleanup is executed immediately before propagating the exception outside the block unwind_protect. GNU Octave also supports another form of exception handling (compatible with the MATLAB language): try body catch exception_handling end This latter form differs from an unwind_protect block in two ways. First, exception_handling is only executed when an exception is raised by body. Second, after the execution of exception_handling the exception is not propagated outside the block (unless a rethrow( lasterror ) statement is explicitly inserted within the exception_handling code). Variable-length argument lists Octave has a mechanism for handling functions that take an unspecified number of arguments without an explicit upper limit. To specify a list of zero or more arguments, use the special argument varargin as the last (or only) argument in the list. varargin is a cell array containing all the input arguments. function s = plus (varargin) if (nargin==0) s = 0; else s = varargin{1} + plus (varargin{2:nargin}); end end Variable-length return lists A function can be set up to return any number of values by using the special return value varargout. For example: function varargout = multiassign (data) for k=1:nargout varargout{k} = data(:,k); end end C++ integration It is also possible to execute Octave code directly in a C++ program. For example, here is a code snippet for calling rand([10,1]): #include <octave/oct.h> ... ColumnVector NumRands(2); NumRands(0) = 10; NumRands(1) = 1; octave_value_list f_arg, f_ret; f_arg(0) = octave_value(NumRands); f_ret = feval("rand", f_arg, 1); Matrix unis(f_ret(0).matrix_value()); C and C++ code can be integrated into GNU Octave by creating oct files, or using the MATLAB-compatible MEX files. MATLAB compatibility Octave has been built with MATLAB compatibility in mind, and shares many features with MATLAB: Matrices as fundamental data type. Built-in support for complex numbers. Powerful built-in math functions and extensive function libraries. Extensibility in the form of user-defined functions.
Octave treats incompatibility with MATLAB as a bug; therefore, it could be considered a software clone, which does not infringe software copyright as per the Lotus v. Borland court case. MATLAB scripts from MathWorks' FileExchange repository are in principle compatible with Octave. However, while they are often provided and uploaded by users under an Octave-compatible and proper open-source BSD license, the FileExchange Terms of use prohibit any use other than with MathWorks' proprietary MATLAB. Syntax compatibility There are a few purposeful, albeit minor, syntax additions: Comment lines can be prefixed with the # character as well as the % character; Various C-based operators ++, --, +=, *=, /= are supported; Elements can be referenced without creating a new variable by cascaded indexing, e.g. [1:10](3); Strings can be defined with the double-quote " character as well as the single-quote ' character; When the variable type is single (a single-precision floating-point number), Octave calculates the "mean" in the single-domain (MATLAB in the double-domain), which is faster but gives less accurate results; Blocks can also be terminated with more specific control structure keywords, i.e., endif, endfor, endwhile, etc.; Functions can be defined within scripts and at the Octave prompt; Presence of a do-until loop (similar to do-while in C). Function compatibility Many, but not all, of the numerous MATLAB functions are available in GNU Octave, some of them accessible through packages in Octave Forge. The functions available as part of either core Octave or Forge packages are listed online. A list of unavailable functions is included in the Octave function __unimplemented.m__. Unimplemented functions are also listed under many Octave Forge packages in the Octave Wiki. When an unimplemented function is called, the following error message is shown: octave:1> guide warning: the 'guide' function is not yet implemented in Octave Please read <http://www.octave.org/missing.html> to learn how you can contribute missing functionality. error: 'guide' undefined near line 1 column 1 User interfaces Octave comes with an official graphical user interface (GUI) and an integrated development environment (IDE) based on Qt. It has been available since Octave 3.8, and has become the default interface (over the command-line interface) with the release of Octave 4.0. It was well-received by an EDN contributor, who wrote "[Octave] now has a very workable GUI" in reviewing the then-new GUI in 2014. Several third-party graphical front-ends have also been developed, like ToolboX for coding education. GUI applications With Octave code, the user can create GUI applications. See GUI Development (GNU Octave (version 7.1.0)).
Below are some examples: Button, edit control, checkbox: # create figure and panel on it f = figure; # create a button (default style) b1 = uicontrol (f, "string", "A Button", "position",[10 10 150 40]); # create an edit control e1 = uicontrol (f, "style", "edit", "string", "editable text", "position",[10 60 300 40]); # create a checkbox c1 = uicontrol (f, "style", "checkbox", "string", "a checkbox", "position",[10 120 150 40]); Textbox: prompt = {"Width", "Height", "Depth"}; defaults = {"1.10", "2.20", "3.30"}; rowscols = [1,10; 2,20; 3,30]; dims = inputdlg (prompt, "Enter Box Dimensions", rowscols, defaults); Listbox with message boxes: my_options = {"An item", "another", "yet another"}; [sel, ok] = listdlg ("ListString", my_options, "SelectionMode", "Multiple"); if (ok == 1) msgbox ("You selected:"); for i = 1:numel (sel) msgbox (sprintf ("\t%s", my_options{sel(i)})); endfor else msgbox ("You cancelled."); endif Radiobuttons: # create figure and panel on it f = figure; # create a button group gp = uibuttongroup (f, "Position", [ 0 0.5 1 1]) # create buttons in the group b1 = uicontrol (gp, "style", "radiobutton", "string", "Choice 1", "Position", [ 10 150 100 50 ]); b2 = uicontrol (gp, "style", "radiobutton", "string", "Choice 2", "Position", [ 10 50 100 30 ]); # create a button not in the group b3 = uicontrol (f, "style", "radiobutton","string", "Not in the group","Position", [ 10 50 100 50 ]); Packages Octave also has many packages available. Those packages are located at Octave Forge or at GitHub Octave Packages. It is also possible for anyone to create and maintain packages. Comparison with other similar software Open-source alternatives to GNU Octave (and to the aforementioned MATLAB) include Scilab and FreeMat. Octave is more compatible with MATLAB than Scilab is, and FreeMat has not been updated since June 2013. The Julia programming language and its plotting capabilities also have similarities with GNU Octave. See also List of numerical-analysis software Comparison of numerical-analysis software List of statistical packages List of numerical libraries Notes References Further reading External links Array programming languages Articles with example MATLAB/Octave code Cross-platform free software Data analysis software Data mining and machine learning software Free educational software Free mathematics software Free software programmed in C++ Octave Numerical analysis software for Linux Numerical analysis software for macOS Numerical analysis software for Windows Numerical programming languages Science software that uses Qt Software that uses Qt
GNU Octave
Mathematics
2,935
22,125,656
https://en.wikipedia.org/wiki/Strecker%20degradation
The Strecker degradation is a chemical reaction which converts an α-amino acid into an aldehyde containing the side chain, by way of an imine intermediate. It is named after Adolph Strecker, a German chemist. The original observation by Strecker involved the use of alloxan as the oxidant in the first step, followed by hydrolysis. The reaction can take place using a variety of organic and inorganic reagents. References Organic reactions Food chemistry Name reactions Degradation reactions
Strecker degradation
Chemistry,Biology
103
2,583,733
https://en.wikipedia.org/wiki/Bridgman%20effect
The Bridgman effect (named after P. W. Bridgman), also called the internal Peltier effect, is a phenomenon that occurs when an electric current passes through an anisotropic crystal – there is an absorption or liberation of heat because of the non-uniformity in current distribution. The Bridgman effect is observable in geology. It describes stick-slip behavior of materials under very high pressure. References Electricity
Bridgman effect
Materials_science
92
2,342,852
https://en.wikipedia.org/wiki/Aza-Diels%E2%80%93Alder%20reaction
The Aza-Diels–Alder reaction is a modification of the Diels–Alder reaction wherein a nitrogen atom replaces an sp2 carbon. The nitrogen atom can be part of the diene or the dienophile. Mechanism The aza Diels-Alder (IDA) reaction may occur either by a concerted or a stepwise process. The lowest-energy transition state for the concerted process places the imine lone pair (or coordinated Lewis acid) in an exo position. Thus, (E) imines, in which the lone pair and larger imine carbon substituent are cis, tend to give exo products. When the imine nitrogen is protonated or coordinated to a strong Lewis acid, the mechanism shifts to a stepwise, Mannich-Michael pathway. Attaching an electron-withdrawing group to the imine nitrogen increases the rate. The exo isomer usually predominates (particularly when cyclic dienes are used), although selectivities vary. Scope and limitations Stereoselective variants In many cases, cyclic dienes give higher diastereoselectivities than acyclic dienes. Use of amino-acid-based chiral auxiliaries, for instance, leads to good diastereoselectivities in reactions of cyclopentadiene, but not in reactions of acyclic dienes. Asymmetric variants Chiral auxiliaries have been employed on either the imino nitrogen or imino carbon to effect diastereoselection. In the enantioselective Diels–Alder reaction of an aniline, formaldehyde and a cyclohexenone catalyzed by (S)-proline, even the diene is masked. In situ generated imines The imine is often generated in situ from an amine and formaldehyde. An example is the reaction of cyclopentadiene with benzylamine to give an aza-norbornene. The catalytic cycle starts with the reaction of the aromatic amine with formaldehyde to give the imine and the reaction of the ketone with proline to give the diene. The second step, an endo trig cyclisation, is driven to one of the two possible enantiomers (99% ee) because the imine nitrogen atom forms a hydrogen bond with the carboxylic acid group of proline on the Si face. Hydrolysis of the final complex releases the product and regenerates the catalyst. Tosylimines may be generated in situ from tosylisocyanate and aldehydes. Cycloadditions of these intermediates with dienes give single constitutional isomers, but proceed with moderate stereoselectivity. Lewis-acid catalyzed reactions of sulfonyl imines also exhibit moderate stereoselectivity. Simple unactivated imines react with hydrocarbon dienes only with the help of a Lewis acid; however, both electron-rich and electron-poor dienes react with unactivated imines when heated. Vinylketenes, for instance, afford dihydropyridones upon [4+2] cycloaddition with imines. Regio- and stereoselectivity are unusually high in reactions of this class of dienes. Vinylallenes react similarly in the presence of a Lewis acid, often with high diastereoselectivity. Acyliminium substrates Acyliminium ions also participate in cycloadditions. These cations are generated by removal of chloride from chloromethylated amides. The resulting acyl iminium cations serve as heterodienes as well as dienophiles. Use in natural products synthesis The aza-Diels–Alder reaction has been applied to the synthesis of a number of alkaloid natural products. Danishefsky's diene is used to form a six-membered ring en route to phyllanthine. See also Oxo-Diels–Alder reaction References Cycloadditions Nitrogen heterocycle forming reactions Name reactions
Aza-Diels–Alder reaction
Chemistry
844
1,619,127
https://en.wikipedia.org/wiki/Shot%20peening
Shot peening is a cold working process used to produce a compressive residual stress layer and modify the mechanical properties of metals and composites. It entails striking a surface with shot (round metallic, glass, or ceramic particles) with force sufficient to create plastic deformation. In machining, shot peening is used to strengthen and relieve stress in components like steel automobile crankshafts and connecting rods. In architecture it provides a muted finish to metal. Shot peening is similar mechanically to sandblasting, though its purpose is not to remove material; rather, it employs the mechanism of plasticity to achieve its goal, with each particle functioning as a ball-peen hammer. Details Peening a surface spreads it plastically, causing changes in the mechanical properties of the surface. Its main application is to avoid the propagation of microcracks in a surface. By putting a material under compressive stress, shot peening prevents such cracks from propagating. Shot peening is often called for in aircraft repairs to relieve tensile stresses built up in the grinding process and replace them with beneficial compressive stresses. Depending on the part geometry, part material, shot material, shot quality, shot intensity, and shot coverage, shot peening can increase fatigue life by up to 1000%. Plastic deformation induces a residual compressive stress in a peened surface, along with tensile stress in the interior. Surface compressive stresses confer resistance to metal fatigue and to some forms of stress corrosion. The tensile stresses deep in the part are not as troublesome as tensile stresses on the surface because cracks are less likely to start in the interior. Intensity is a key parameter of the shot peening process. After some development of the process, a way to measure the effects of shot peening was needed. John Almen noticed that shot peening made the exposed side of a piece of sheet metal bend and stretch. He created the Almen strip to measure the compressive stresses created in the strip by the shot peening operation. The "intensity of the blast stream" is obtained by measuring the deformation (arc height) of an Almen strip exposed in the shot peening operation: the strip is peened, then peened again at the same intensity for twice the exposure time; when doubling the exposure time increases the strip's deformation by only about 10%, the measured deformation is taken as the intensity of the blast stream. Another way to gauge the intensity of a shot peening process is the use of an Almen round, developed by R. Bosshard. Coverage, the percentage of the surface indented once or more, is subject to variation due to the angle of the shot blast stream relative to the workpiece surface. The stream is cone-shaped; thus, shot arrives at varying angles. Processing the surface with a series of overlapping passes improves coverage, although variation in "stripes" will still be present. Alignment of the axis of the shot stream with the axis of the Almen strip is important. A continuous compressively stressed surface of the workpiece has been shown to be produced at less than 50% coverage but falls as 100% is approached. Optimizing coverage level for the process being performed is important for producing the desired surface effect. SAE International publishes several standards for shot peening in aerospace and other industries. Process and equipment Popular methods for propelling shot media include air blast systems and centrifugal blast wheels.
In air blast systems, media are introduced by various methods into the path of high-pressure air and accelerated through a nozzle directed at the part to be peened. The centrifugal blast wheel consists of a high-speed paddle wheel. Shot media are introduced at the center of the spinning wheel and propelled toward the part by the centrifugal force of the spinning paddles; adjusting the media entrance location effectively times the release of the media. Other methods include ultrasonic peening, wet peening, and laser peening (which does not use media). Media choices include spherical cast steel shot, ceramic bead, glass bead or conditioned (rounded) cut wire. Cut wire shot is preferred because it maintains its roundness as it degrades, unlike cast shot, which tends to break up into sharp pieces that can damage the workpiece. Cut wire shot can last five times longer than cast shot. Because peening demands well-graded shot of consistent hardness, diameter, and shape, a mechanism for removing shot fragments throughout the process is desirable. Equipment is available that includes separators to clean and recondition shot and feeders to add new shot automatically to replace the damaged material. Wheel blast systems include satellite rotation models, rotary throughfeed components, and various manipulator designs. There are overhead monorail systems as well as reverse-belted models. Workpiece holding equipment includes rotating index tables, loading and unloading robots, and jigs that hold multiple workpieces. For larger workpieces, manipulators are available to reposition them to expose features to the shot blast stream. Cut wire shot Cut wire shot is a metal shot used for shot peening, where small particles are fired at a workpiece by a compressed air jet. It is a low-cost manufacturing process, as the basic feedstock is inexpensive. As-cut particles are an effective abrasive due to the sharp edges created in the cutting process; however, as-cut shot is not a desirable shot peening medium, as its sharp edges are not suited to the process. Cut shot is manufactured from high-quality wire in which each particle is cut to a length about equal to its diameter. If required, the particles are conditioned (rounded) to remove the sharp corners produced during the cutting process. Depending on application, various hardness ranges are available, with harder media generally having lower durability. Other cut-wire shot applications include tumbling and vibratory finishing. Coverage Factors affecting coverage density include: number of impacts (shot flow), exposure time, shot properties (size, chemistry), and workpiece properties. Coverage is monitored by visual examination to determine the percent coverage (0–100%). Coverage beyond 100% cannot be determined. The number of individual impacts is linearly proportional to shot flow, exposure area, and exposure time. Coverage is not linearly proportional because of the random nature of the process. When 100% coverage is achieved, locations on the surface have been impacted multiple times. At 150% coverage, 5 or more impacts occur at 52% of locations. At 200% coverage, 5 or more impacts occur at 84% of locations. Coverage is affected by shot geometry and the shot and workpiece chemistry. The size of the shot controls how many impacts there are per pound; smaller shot produces more impacts per pound and therefore requires less exposure time.
Soft shot impacting hard material will take more exposure time to reach acceptable coverage compared to hard shot impacting a soft material (since the harder shot can penetrate deeper, thus creating a larger impression). Coverage and intensity (measured by Almen strips) can have a profound effect on fatigue life. This holds across the variety of materials typically shot peened. Incomplete or excessive coverage and intensity can result in reduced fatigue life. Over-peening will cause excessive cold working on the surface of the workpiece, which can also cause fatigue cracks. Diligence is required when developing parameters for coverage and intensity, especially when using materials having different properties (e.g. a softer metal versus a harder metal). Testing fatigue life over a range of parameters would reveal a "sweet spot": fatigue life grows nearly exponentially to a peak (x = peening intensity or media stream energy, y = time-to-crack or fatigue strength) and then decays rapidly as more intensity or coverage is added. The "sweet spot" will directly correlate with the kinetic energy transferred and the material properties of the shot media and workpiece. Applications Shot peening is used on gear parts, cams and camshafts, clutch springs, coil springs, connecting rods, crankshafts, gearwheels, leaf and suspension springs, rock drills, and turbine blades. It is also used in foundries for sand removal, decoring, descaling, and surface finishing of castings such as engine blocks and cylinder heads. Its descaling action can be used in the manufacturing of steel products such as strip, plates, sheets, wire, and bar stock. Shot peening is a crucial process in spring making. Types of springs include leaf springs, extension springs, and compression springs. The most widespread application is engine valve springs (compression springs), due to their high cyclic fatigue loading. In an OEM valve spring application, the mechanical design combined with some shot peening ensures longevity. Automotive makers are shifting to higher-performance, more highly stressed valve spring designs as engines evolve. In aftermarket high-performance valve spring applications, controlled, multi-step shot peening is a requirement to withstand extreme surface stresses that sometimes exceed material specifications. The fatigue life of an extreme performance spring (NHRA, IHRA) can be as short as two passes on a 1/4 mile drag racing track before relaxation or failure occurs. Shot peening may be used for cosmetic effect. The surface roughness resulting from the overlapping dimples causes light to scatter upon reflection. Because peening typically produces larger surface features than sand-blasting, the resulting effect is more pronounced. Shot peening and abrasive blasting can apply materials to metal surfaces. When the shot or grit particles are blasted through a powder or liquid containing the desired surface coating, the impact plates or coats the workpiece surface. The process has been used to embed ceramic coatings, though the coverage is random rather than coherent. 3M developed a process where a metal surface was blasted with particles with a core of alumina and an outer layer of silica. The result was fusion of the silica to the surface. The process known as peen plating was developed by NASA. Fine powders of metals or non-metals are plated onto metal surfaces using glass bead shot as the blast medium. The process has evolved to applying solid lubricants such as molybdenum disulphide to surfaces.
Biocompatible ceramics have been applied this way to biomedical implants. Peen plating subjects the coating material to high heat in the collisions with the shot, and the coating must also be available in powder form, limiting the range of materials that can be used. To overcome the problem of heat, a process called temperature moderated-collision mediated coating (TM-CMC) has allowed the use of polymers and antibiotic materials as peened coatings. The coating is presented as an aerosol directed to the surface at the same time as a stream of shot particles. The TM-CMC process is still in the R&D phase of development. Compressive residual stress A sub-surface compressive residual stress profile is measured using techniques such as x-ray diffraction and hardness profile testing. In such a profile plot, the X-axis is depth in mm or inches and the Y-axis is residual stress in ksi or MPa. The maximum residual stress profile can be affected by the factors of shot peening, including: part geometry, part material, shot material, shot quality, shot intensity, and shot coverage. For example, shot peening a hardened steel part with a process and then using the same process for another unhardened part could result in over-peening, causing a sharp decrease in surface residual stresses but not affecting sub-surface stresses. This is critical because maximum stresses are typically at the surface of the material. Mitigation of these lower surface stresses can be accomplished by a multi-stage post process with varied shot diameters and other surface treatments that remove the low residual stress layer. The compressive residual stress in a metal alloy is produced by the transfer of kinetic energy (K.E.) from a moving mass (shot particle or ball peen) into the surface of a material with the capacity to plastically deform. The residual stress profile is also dependent on coverage density. The mechanics of the collisions involve the shot's hardness, shape, and structure, as well as the properties of the workpiece. Factors for process development and for controlling K.E. transfer in shot peening are: shot velocity (wheel speed or air pressure/nozzle design), shot mass, shot chemistry, impact angle, and workpiece properties. Example: to obtain very high residual stresses, one would likely use large-diameter cut-wire shot, a high-intensity process, a direct blast onto the workpiece, and a very hard workpiece material. See also Autofrettage, which produces compressive residual stresses in pressure vessels. Case hardening Differential hardening Steel abrasives Shot peening of steel belts High-frequency impact treatment after-treatment of weld transitions Suncorite Trimite References
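As a supplement to the Almen intensity discussion earlier in this article, here is a minimal Python sketch of the saturation rule (the exposure time beyond which doubling the time adds no more than about 10% to the arc height); the exponential arc-height model, its constants, and the function names are illustrative assumptions, not measured data or a standardized procedure.

import numpy as np

def arc_height(t, a=0.012, b=0.15):
    # Assumed saturation model for Almen arc height (inches) versus exposure time t (seconds).
    return a * (1.0 - np.exp(-b * t))

def saturation_time(times):
    # Earliest candidate exposure time T for which doubling the exposure time
    # raises the arc height by 10% or less.
    for t in times:
        if arc_height(2 * t) <= 1.10 * arc_height(t):
            return t
    return None  # no saturation point within the candidate range

candidates = np.linspace(1.0, 60.0, 600)
T = saturation_time(candidates)
intensity = arc_height(T)  # the arc height at saturation is quoted as the intensity for this assumed curve

With the assumed constants the saturation point falls around 15 seconds; in practice the curve would be fitted to measured Almen strip data rather than taken from a formula.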
Shot peening
Materials_science
2,659
77,069,742
https://en.wikipedia.org/wiki/Gower%27s%20distance
In statistics, Gower's distance between two mixed-type objects is a dissimilarity measure that can handle different types of data within the same dataset and is particularly useful in cluster analysis or other multivariate statistical techniques. Data can be binary, ordinal, or continuous variables. It works by normalizing the differences between each pair of variables and then computing a weighted average of these differences. The distance was defined in 1971 by Gower and it takes values between 0 and 1, with smaller values indicating higher similarity. Definition For two objects $i$ and $j$ having $p$ descriptors, the similarity is defined as $S_{ij} = \frac{\sum_{k=1}^{p} w_{ijk} s_{ijk}}{\sum_{k=1}^{p} w_{ijk}}$, where the $w_{ijk}$ are non-negative weights usually set to $1$ and $s_{ijk}$ is the similarity between the two objects regarding their $k$-th variable. If the variable is binary or ordinal, the values of $s_{ijk}$ are 0 or 1, with 1 denoting equality. If the variable is continuous, $s_{ijk} = 1 - \frac{|x_{ik} - x_{jk}|}{R_k}$, with $R_k$ being the range of the $k$-th variable and thus ensuring $0 \le s_{ijk} \le 1$. As a result, the overall similarity between two objects is the weighted average of the similarities calculated for all their descriptors. In its original exposition, the distance does not treat ordinal variables in a special manner. In the 1990s, first Kaufman and Rousseeuw and later Podani suggested extensions where the ordering of an ordinal feature is used. For example, Podani obtains relative rank differences as $s_{ijk} = 1 - \frac{|r_{ik} - r_{jk}|}{\max_i r_{ik} - \min_i r_{ik}}$, with $r_{ik}$ being the ranks corresponding to the ordered categories of the $k$-th variable. Software implementations Many programming languages and statistical packages, such as R, Python, etc., include implementations of Gower's distance. The implementations may follow Kaufman and Rousseeuw's extensions, which modify the similarity used for continuous variables. References Statistical distance Similarity measures
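To make the definition above concrete, here is a minimal Python sketch of Gower's similarity for a mix of categorical/binary and continuous variables; the function name and the per-variable handling are illustrative assumptions, not a reference to any particular library's implementation.

import numpy as np

def gower_similarity(x, y, kinds, ranges, weights=None):
    # kinds[k] is 'cat' (binary/categorical, similarity 1 on equality, else 0)
    # or 'num' (continuous, normalized by the variable's range R_k);
    # weights default to 1 for every variable.
    p = len(x)
    w = np.ones(p) if weights is None else np.asarray(weights, dtype=float)
    s = np.empty(p)
    for k in range(p):
        if kinds[k] == 'cat':
            s[k] = 1.0 if x[k] == y[k] else 0.0
        else:
            s[k] = 1.0 - abs(x[k] - y[k]) / ranges[k]
    return float(np.sum(w * s) / np.sum(w))

# Example: one categorical and two continuous variables (ranges taken over the data set)
x = ('red', 1.8, 70.0)
y = ('blue', 1.6, 60.0)
sim = gower_similarity(x, y, kinds=('cat', 'num', 'num'), ranges=(None, 0.5, 40.0))
dist = 1.0 - sim  # Gower's distance, between 0 and 1

In this example the per-variable similarities are 0, 0.6, and 0.75, giving an overall similarity of 0.45 and a Gower distance of 0.55.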
Gower's distance
Physics
342
23,633,964
https://en.wikipedia.org/wiki/Screen%20reading
Screen reading is the act of reading a text on a computer screen, smartphone, e-book reader, or other electronic display device. Discovery Louis Émile Javal, a French ophthalmologist and founder of an ophthalmology laboratory in Paris, is credited with the introduction of the term saccades into eye movement research. Javal discovered that while reading, one's eyes tend to jump across the text in saccades, and stop intermittently along each line in fixations. Because of the lack of technology at the time, naked-eye observations were used to study eye movement; later, in the late 19th and mid-20th centuries, eye-tracking experiments were conducted in an attempt to discover a pattern in eye fixations while reading. Research F-Pattern In a 1997 study conducted by Jakob Nielsen, a web usability expert who co-founded usability consulting company Nielsen Norman Group with Donald Norman, it was discovered that generally people read 25% slower on a computer screen in comparison with a printed page. The researchers state that this is only true when reading on an older type of computer screen with a low scan rate. In an additional study done in 2006, Nielsen also discovered that people read Web pages in an F-shaped pattern that consists of two horizontal stripes followed by a vertical stripe. He had 232 participants fitted with eye-tracking cameras to trace their eye movements as they read online texts and webpages. The findings showed that people do not read the text on webpages word-by-word, but instead generally read horizontally across the top of the webpage, then in a second horizontal movement slightly lower on the page, and lastly scan vertically down the left side of the screen. The Software Usability Research Laboratory at Wichita State University did a subsequent study in 2007 testing eye gaze patterns while searching versus browsing a website, and the results confirmed that users appeared to follow Nielsen's ‘F’ pattern while browsing and searching through text-based pages. A group of German researchers conducted a study that examined the Web browsing behavior of 25 participants over the course of around one hundred days. The researchers concluded that "browsing is a rapidly interactive activity" and that Web pages are mostly viewed for 10 seconds or less. Nielsen analyzed this data in 2008 and found that, on average, users read 20–28% of the content on a webpage. Google Golden Triangle A technical report from Eyetools, DidIt and Enquiro, using search results from the Google search engine, indicated that readers primarily looked at a triangular area of the top and left side of the screen. This corresponds to the Nielsen F-shaped pattern, and was dubbed the Google Golden Triangle. A 2014 Mediative blog post showed evidence of the decline of the Golden Triangle phenomenon since 2005, as users view more search result listings than before. Comparisons to reading printed text Since the first notion of screen reading, many studies have been performed to discern any differences between reading from an electronic device and reading from paper. In a 2013 study, a group of 72 high school students in Norway were randomly assigned into one of two groups: one that read using PDF files on a computer and one that used standard paper. The students were put through various tests involving reading comprehension and vocabulary. The results indicated that those who read using the PDF files performed much worse than those reading on paper.
A conclusion was reached that certain aspects of screen reading, such as scrolling, can impede comprehension. However, not all experiments have concluded that reading from a digital screen can be detrimental. The same year, another experiment was conducted on 90 undergraduates at a college in Western New York involving paper reading, computer reading, and e-book reading. Like the participants in the Norwegian experiment, the students were tested for comprehension after reading a number of passages: five focused on facts and information and the other five based on narratives. No significant difference was found between any of the different forms of reading for either type of passage. However, the researchers noted that due to the participants being college students who were accustomed to using technology, they may react differently to reading on electronic devices than older individuals. A study conducted in 2014 by Tirza Lauterman and Rakefet Ackerman allowed subjects the option to choose between reading digitally or reading printed pages. The results found that those who chose to read digitally performed worse than those who used print. However, by practicing with PDF files, subjects that preferred to read on computers were able to overcome what researchers labeled as “screen inferiority” and managed to score just as well as paper readers, who did not improve with practice. Lauterman and Ackerman concluded that the study supported the idea that screen reading is shallower than paper reading, but that with practice the shallowness can be removed as an impediment. No conclusion has yet been reached among professionals regarding whether or not reading on a screen is significantly different from reading printed text. Criticism Critics have voiced concerns about screen reading, though some have taken a more positive stance. Kevin Kelly believes that we are transitioning from "book fluency to screen fluency, from literacy to visuality". Anne Mangen holds that because of the material nature of a printed book the reader is more engaged with a text, while the opposite is true with a digital text in which the reader is engaged in a "shallower, less focused way". Nicholas Carr, author of The Shallows, says that “the ability to skim text is every bit as important as the ability to read deeply. What is… troubling, is that skimming is becoming our dominant mode of reading” (138). Studies have shown that prolonged exposure to computer screens can have negative effects on the eyes, causing symptoms of computer vision syndrome (CVS) that include strained eyes and blurred vision. The occurrence of CVS has grown greatly over the past few years, affecting a large majority of American workers who spend over three hours a day on computers in some form. See also Computer literacy References Reading (process) Display technology
Screen reading
Engineering
1,228
12,076,921
https://en.wikipedia.org/wiki/Water-repellent%20glass
Water-repellent glass (WRG) is a transparent coating film fabricated onto glass, enabling the glass to exhibit hydrophobicity and durability. WRGs are often manufactured out of materials including derivatives from per- and polyfluoroalkyl substances (PFAS), tetraethylorthosilicate (TEOS), polydimethylsilicone (PDMS), and fluorocarbons. In order to prepare WRGs, sol-gel processes involving dual-layer enrichments of large-size glasses are commonly implemented. Glasses enriched with WRG coatings prevent water droplets from sticking to the surface due to hydrophobic properties. These properties are achieved through a high water-sliding property and high contact angles with water drops (over 100°). Additionally, durability against both chemical and mechanical attack allows the coating to protect the glass from abrasion due to windshield wipers, rainwater, and other weather conditions. WRGs are most commonly used commercially for automobile windows to increase visibility during precipitation and nighttime driving. In industry, WRGs were first used by Volvo Cars on their late-2005 vehicles, and have also been used by Japanese automobile makers such as Toyota, Honda, and Mazda. Additionally, WRG has other practical applications such as eyewear and photocatalysts. Properties Hydrophobicity Hydrophobic properties of WRG glass windows are crucial to their repellency. A high water-sliding property of WRG films is necessary for hydrophobicity. The water-sliding angle is the tilt angle at which a water droplet begins to slide down the surface; a lower sliding angle means a drop slides off the film more easily. A film's water-sliding angle is often dependent on the film coating substance. For instance, a study revealed that coating a WRG film with fluoroalkylsilane (FAS) produced a higher water-sliding angle than coating with polydimethylsilicone (PDMS). High contact angles of over 100 degrees are associated with more effective water-repellency properties. The greater the contact angle between the water droplet and glass surface, the less the contact between the water and the glass, and the easier the water droplet can slide off the glass. This can be achieved by increasing surface roughness, since the contact angle becomes larger as surface particles become larger. Mechanical durability Water-repellent films' mechanical durability properties are dependent on the surface roughness of the film and the density of adsorbed water-repellent molecules. Mechanical durability of WRG can be characterized as wear and weather resistance, an important attribute for manufacturing automobile windows. The greater the surface roughness, the more resistant the film will be to abrasion. Average surface roughness of a glass substrate indicates the size of surface particles, is measured using an atomic force microscope (AFM), and is recorded in nanometers. A study analyzed different samples of silica films and found minimum and maximum surface roughnesses of 0.4 and 16.1 nm, respectively. Surface roughnesses greater than 8 nm are considered large. After rubbing each sample with a flannel cloth, the study was able to determine each sample's resistance to wear. Films with higher surface roughness exhibited the highest mechanical durability. Additionally, films formed on top of silica were more durable than films formed on soda-lime glass. The WRG's mechanical durability can also be increased by a larger density of reaction sites per surface area.
An increased density of reaction sites on the film is also a result of a higher surface roughness. This works to increase durability since a higher density means more rigid chemical bonds. For instance, forming a WRG film out of polyfluoroalkyl isocyanate creates a surface with siloxane bonding. There exists a direct correlation between the density of silanol groups on the film surface and the adhesion density of the film. Production Sol-gel process The sol-gel process is a common method of preparing water-repellent glass coating films, carried out with various materials and often resulting in dual-layer films. This process is advantageous for automobile window applications since it works with large, curved safety glasses and allows qualities such as durability and hydrophobicity to be controlled. In a study done by the University of Massachusetts, the sol-gel process was employed to prepare a dual-layer film using layers composed of silica and fluorocarbon. The silica layer was selected to enhance durability and placed at the glass-film interface, while the fluorocarbon layer was placed at the film-air interface and incorporated a specific surface roughness into the design. The process involved the following distinct steps: preparing both the silica sol and water-repellent solutions, spraying the solution onto the glass, treating the glass through drying, and treating through heating. In addition, the Nippon Sheet Glass Co. in Japan discussed a sol-gel treatment involving fluoroalkylsilane (FAS) and polydimethylsilicone (PDMS). Both materials were mixed with catalysts in solvents, fabricated onto glasses, and dried. The use of the sol-gel treatment allowed for flexibility in experimenting with contact angle, sliding angle, and durability. The study pointed out that this process could also be used in the automobile industry. Applications Some notable applications of WRG films are described below. Automobile windows and mirrors WRG is commonly used as a coating for windows and mirrors of automobiles in order to increase visibility by repelling rainwater, snow, and dirt. Several million WRG windows have already been manufactured and installed in industry. For instance, Central Glass Company developed a hydrophobic glass film exhibiting excellent repellency, durability, and transparency. Many Japanese automobile companies including Honda and Mazda are selling cars with these glass films. Additionally, water-repellent coatings are being applied to automobile side mirrors. Eyewear The eyeglass industry is also moving toward implementing water- and dust-repellent glasses to decrease fogging due to rain, sweat, and other water sources. When glasses experience condensation, the small water droplets begin scattering light, impairing the wearer's vision. The eyecare company Nasho is innovating toward WRG technology to improve vision, but is currently limited financially in its research and development. Photocatalysts Photocatalyst coatings allow for the self-cleaning of surfaces of road signs, building materials, and solar panels. Multiple photocatalyst WRG films such as CLEARTECT and HYDRAP have been commercialized. A WRG film can be added on top of solar panels in order to increase their efficiency. The cover glass technology is self-cleaning, allowing for maximized light transmission into the solar cell. The hydrophobic film acts as a barrier that causes water droplets to roll off the solar panel, rather than adhering and blocking sunlight from being absorbed.
Solar panels enhanced with anti-reflective, water-repellent layers show a 6.6% increase in performance when compared to those without a coating. References Glass coating and surface modification Volvo Cars
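As supplementary background to the hydrophobicity discussion above (this relation is not stated in the article itself), the way roughness amplifies the contact angle of an already hydrophobic surface is commonly summarized by Wenzel's relation:

$\cos\theta^{*} = r \cos\theta$

where $\theta$ is the contact angle on a smooth surface of the same material, $\theta^{*}$ is the apparent contact angle on the rough surface, and $r \ge 1$ is the ratio of actual to projected surface area; for $\theta > 90^{\circ}$, increasing the roughness factor $r$ increases the apparent contact angle $\theta^{*}$.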
Water-repellent glass
Chemistry
1,493
37,871,496
https://en.wikipedia.org/wiki/Giovanni%20Roncagli
Giovanni Roncagli (1857–1929) was an Italian naval officer and hydrographer. Biography Roncagli was born in 1857. He enlisted in the navy in 1875, attending the Royal Naval School of Naples. In his biography Vita di mare he tells of his early career, at a time when sailing ships were only slowly being replaced by steam. Roncagli was the hydrographer for the Italian expedition to explore Patagonia and Tierra del Fuego of 1881–1882, led by Giacomo Bove. Decio Vinciguerra was officially both zoologist and botanist, but in fact Carlos Luigi Spegazzini from Buenos Aires handled the botanical work. The geologist Domenico Lovisato made up the scientific party. Giovanni Roncagli became a navy captain, an expert in commercial geography and a member of the Italian Naval League. He was secretary general of the Italian Geographic Society from 1897 until World War I (1914–1918). He also became director of the Navy's historical section. He was a pioneer in aeronautical topography in Italy, which quickly turned out to be of great importance in World War I. In 1913 Roncagli presented a report to the Tenth International Geographical Congress in Rome in which he discussed the need for an international aeronautical map. Bibliography Notes and references Citations Sources 1857 births 1929 deaths Hydrographers Hydrologists
Giovanni Roncagli
Environmental_science
270
54,637,700
https://en.wikipedia.org/wiki/Teknomo%E2%80%93Fernandez%20algorithm
The Teknomo–Fernandez algorithm (TF algorithm) is an efficient algorithm for generating the background image of a given video sequence. By assuming that the background image is shown in the majority of the video, the algorithm is able to generate a good background image of a video in $O(R)$ time, where $R$ is the image resolution, using only a small number of binary and Boolean bit operations, which require a small amount of memory and have built-in operators found in many programming languages such as C, C++, and Java. History People tracking from videos usually involves some form of background subtraction to segment foreground from background. Once foreground images are extracted, then desired algorithms (such as those for motion tracking, object tracking, and facial recognition) may be executed using these images. However, background subtraction requires that the background image is already available and unfortunately, this is not always the case. Traditionally, the background image is searched for manually or automatically from the video images when there are no objects. More recently, automatic background generation through object detection, median filtering, medoid filtering, approximated median filtering, linear predictive filtering, non-parametric models, Kalman filtering, and adaptive smoothening has been suggested; however, most of these methods have high computational complexity and are resource-intensive. The Teknomo–Fernandez algorithm is also an automatic background generation algorithm. Its advantage, however, is its computational speed of only $O(R)$ time, depending only on the resolution $R$ of an image, and its accuracy gained within a manageable number of frames. At least three frames from a video are needed to produce the background image, assuming that for every pixel position, the background occurs in the majority of the frames. Furthermore, it can be performed for both grayscale and colored videos. Assumptions The camera is stationary. The light of the environment changes only slowly relative to the motions of the people in the scene. The people do not occupy the same place in the scene for most of the time. Generally, however, the algorithm will certainly work whenever the following single important assumption holds: For each pixel position, the majority of the pixel values in the entire video contain the pixel value of the actual background image (at that position). As long as each part of the background is shown in the majority of the video, the entire background image need not appear in any single frame. The algorithm is expected to work accurately. Background image generation Equations For three frames $F_1$, $F_2$, and $F_3$ of the image sequence, the background image $B$ is obtained bitwise as $B = (F_1 \wedge F_2) \vee (F_2 \wedge F_3) \vee (F_1 \wedge F_3)$, the Boolean mode (majority) of the three frames. The Boolean mode of a column of bits is 1 when the number of 1 entries is larger than half of the number of images, and 0 otherwise; for three images, the background bit can therefore be taken as the value shared by at least two of the three corresponding bits. Background generation algorithm At the first level, three frames are selected at random from the image sequence to produce a background image by combining them using the equation above. This yields a better background image at the second level. The procedure is repeated until the desired level $L$ is reached. Theoretical accuracy At level $\ell$, the probability $p_\ell$ that the predicted modal bit is the actual modal bit satisfies the majority-of-three recurrence $p_\ell = p_{\ell-1}^3 + 3p_{\ell-1}^2(1 - p_{\ell-1}) = 3p_{\ell-1}^2 - 2p_{\ell-1}^3$. Computing these probabilities across several levels for specific initial probabilities shows how quickly the accuracy improves.
It can be observed that even if the modal bit at the considered position occurs in only 60% of the frames, the probability of accurate modal bit determination is already more than 99% at 6 levels. Space complexity The space requirement of the Teknomo–Fernandez algorithm depends on the resolution of the image, the number of frames taken from the video, and the desired number of levels. However, the fact that the number of levels will probably not exceed 6 keeps the space requirement modest in practice. Time complexity The entire algorithm runs in $O(R)$ time, depending only on the resolution $R$ of the image. Computing the modal bit for each bit position can be done in constant time, while the computation of the resulting image from three given images can be done in $O(R)$ time. The number of images to be processed across $L$ levels grows with $3^L$; since $L$ does not exceed 6, this is bounded by a constant, and thus the algorithm runs in $O(R)$ time. Variants A variant of the Teknomo–Fernandez algorithm that incorporates the Monte-Carlo method, named CRF, has been developed. Two different configurations of CRF were implemented: CRF9,2 and CRF81,1. Experiments on some colored video sequences showed that the CRF configurations outperform the TF algorithm in terms of accuracy. However, the TF algorithm remains more efficient in terms of processing time. Applications Object detection Face detection Face recognition Pedestrian detection Video surveillance Motion capture Human-computer interaction Content-based video coding Traffic monitoring Real-time gesture recognition References Further reading External links Background Image Generation Using Boolean Operations – describes the TF algorithm, its assumptions, processes, accuracy, time and space complexity, and sample results. A Monte-Carlo-based Algorithm for Background Generation – a variant of the Teknomo–Fernandez algorithm that incorporates the Monte-Carlo method was developed in this study. Mathematical examples Image processing Computer vision
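To make the bit-level combination concrete, here is a minimal Python sketch (using NumPy) of the Boolean-mode step and one possible reading of the level-wise scheme described above; the function names, the recursive structure, and the synthetic frames are illustrative assumptions, not the authors' reference implementation.

import numpy as np

def combine_three(f1, f2, f3):
    # Bitwise Boolean mode (majority) of three equally sized integer frames:
    # each output bit is 1 exactly when at least two of the three input bits are 1.
    return (f1 & f2) | (f1 & f3) | (f2 & f3)

def tf_background(frames, level, rng):
    # A level-1 estimate combines three randomly chosen frames;
    # a level-L estimate combines three independent level-(L-1) estimates.
    if level == 1:
        i, j, k = rng.choice(len(frames), size=3, replace=False)
        return combine_three(frames[i], frames[j], frames[k])
    return combine_three(*(tf_background(frames, level - 1, rng) for _ in range(3)))

# Example with synthetic 8-bit "frames"; a real video would supply many more frames.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(120, 160), dtype=np.uint8) for _ in range(30)]
background = tf_background(frames, level=3, rng=rng)

Because the combination is purely bitwise, the same code works per channel for colored video, matching the article's note that the method applies to both grayscale and colored sequences.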
Teknomo–Fernandez algorithm
Mathematics,Engineering
1,016
32,115,510
https://en.wikipedia.org/wiki/Arginine%20repressor%20ArgR
In molecular biology, the arginine repressor (ArgR) is a repressor of prokaryotic arginine deiminase pathways. The arginine dihydrolase (AD) pathway is found in many prokaryotes and some eukaryotes, an example of the latter being Giardia lamblia (Giardia intestinalis). The three-enzyme anaerobic pathway breaks down L-arginine to form 1 mol of ATP, carbon dioxide and ammonia. In some bacteria, the first enzyme, arginine deiminase, can account for up to 10% of total cell protein. Most prokaryotic arginine deiminase pathways are under the control of a repressor gene, termed ArgR. This is a negative regulator, and will only release the arginine deiminase operon for expression in the presence of arginine. The crystal structure of apo-ArgR from Bacillus stearothermophilus has been determined to 2.5 Å by means of X-ray crystallography. The protein exists as a hexamer of identical subunits, and is shown to have six DNA-binding domains, clustered around a central oligomeric core when bound to arginine. It predominantly interacts with A.T residues in ARG boxes. This hexameric protein binds DNA at its N terminus to repress arginine biosynthesis or activate arginine catabolism. Some species have several ArgR paralogs. In a neighbour-joining tree, some of these paralogous sequences show long branches and differ significantly from the well-conserved C-terminal region. References Protein domains
Arginine repressor ArgR
Biology
357
42,304,791
https://en.wikipedia.org/wiki/Green%20Party%20Korea
Green Party Korea is a political party in South Korea. The party was established in March 2012. It is a continuation of the Korea Greens, created following initial discussions in 2011. The party was established in response to the Fukushima Nuclear Crisis of Japan. Green Party Korea is a member of the Global Greens and the Asia Pacific Greens Federation. History As a result of the party only getting 0.48% in the 19th national parliamentary election in April 2012, the party was disbanded by the National Election Administration Office. However, the paragraph 4 of article 41 and the subparagraph 3 of paragraph 1 of article 44 of the Political Parties Act, which had revoked registration of parties and banned use of the titles of the parties whose obtained numbers of votes had been less than 2% of the total number of effective votes, were ruled unconstitutional by the Constitutional Court of Korea on 28 January 2014. As a result, Green Party Korea recovered its title. Green Party Korea, together with the Basic Income Youth Network, began a two-week tour on 6 July 2015 to discover the opinions citizens in South Korea have about basic income, and to introduce the concept of basic income to the community. The party has also adopted basic income as part of their party platform. The party joined an electoral alliance with social-democratic Justice Party to participate in 2024 South Korean legislative election. Due to South Korean electoral law, which doesn't allow electoral alliances officially, the Justice Party changed its name to the Green Justice Party, and candidates of the Green Party individually joined the Green Justice Party. Membership demographics The party has more females than males. About 38.2% of party members are in their 40s. 24.8% of party members are 50 years of age or older. Leadership Lee Hyun-joo and Ha Seung-soo (co-serving; 2012–2014) Lee You-jin and Ha Seung-soo (co-serving; 2014–2016) Kim Ju-on and Choi Hyeok-bong (co-serving; 2016–2018) Ha Seung-soo and Shin Ji-ye (co-serving; 2018–2020) Sung Mi-sun (2020–2021) Kim Ye-won and Lee Jae-hyeok (co-serving; 2021) Kim Ye-won and Kim Chan-hwi (co-serving; 2021–2023) Kim Chan-hwi (2023–2024) Kim Soon-ae and Lee Chi-seon (co-serving; 2024–present) Election results Legislature Local See also Air pollution in South Korea Energy in South Korea Environment of South Korea List of environmental organizations Politics of South Korea Political parties in South Korea Pollution in Korea References External links Korea Greens on Daum Cafe 2004 establishments in South Korea Anti-nationalism in Korea Anti-nuclear organizations Direct democracy parties Environment of South Korea Feminist organizations in South Korea Feminist parties in Asia Global Greens member parties Green parties in Asia Immigration political advocacy groups in South Korea LGBTQ political advocacy groups in South Korea Participatory democracy Political parties established in 2004 Political parties supporting universal basic income Progressive parties in South Korea Universal basic income in South Korea
Green Party Korea
Engineering
633
189,726
https://en.wikipedia.org/wiki/Java%20APIs%20for%20Integrated%20Networks
Java APIs for Integrated Networks (JAIN) is an activity within the Java Community Process, developing APIs for the creation of telephony (voice and data) services. Originally, JAIN stood for Java APIs for Intelligent Network. The name was later changed to Java APIs for Integrated Networks to reflect the widening scope of the project. The JAIN activity consists of a number of "Expert Groups", each developing a single API specification. Trend JAIN is part of a general trend to open up service creation in the telephony network so that, by analogy with the Internet, openness should result in a growing number of participants creating services, in turn creating more demand and better, more targeted services. Goal A goal of the JAIN APIs is to abstract the underlying network, so that services can be developed independent of network technology, be it traditional PSTN or Next Generation Network. API The JAIN effort has produced around 20 APIs, in various stages of standardization, ranging from Java APIs for specific network protocols, such as SIP and TCAP, to more abstract APIs such as for call control and charging, and even including a non-Java effort for describing telephony services in XML. Parlay X There is overlap between JAIN and Parlay/OSA because both address similar problem spaces. However, as originally conceived, JAIN focused on APIs that would make it easier for network operators to develop their own services within the framework of Intelligent Network (IN) protocols. As a consequence, the first JAIN APIs focused on methods for building and interpreting SS7 messages and it was only later that JAIN turned its attention to higher-level methods for call control. Meanwhile, at about the same time JAIN was getting off the ground, work on Parlay began with a focus on APIs to enable development of network services by non-operator third parties. Standardized APIs From around 2001 to 2003, there was an effort to harmonize the not yet standardized JAIN APIs for call control with the comparable and by then standardized Parlay APIs. A number of difficulties were encountered, but perhaps the most serious was not technical but procedural. The Java Community Process requires that a reference implementation be built for every standardized Java API. Parlay does not have this requirement. Not surprisingly, given the effort that would have been needed to build a reference implementation of JAIN call control, the standards community decided, implicitly if not explicitly, that the Parlay call control APIs were adequate and work on JAIN call control faded off. Nonetheless, the work on JAIN call control did have an important impact on Parlay since it helped to drive the definition of an agreed-upon mapping of Parlay to the Java language. See also NGIN Parlay Group External links The JAIN APIs. JAIN-SIP. JAIN-SIP (new site). Books Integrated Networks Telecommunications standards Computer standards
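As a concrete illustration of the protocol-level APIs described above, the following is a minimal, hypothetical sketch using the JAIN SIP API (javax.sip); the stack name, address, port and the choice of the NIST reference implementation path are illustrative assumptions, not requirements of the specification.

import java.util.Properties;
import javax.sip.ListeningPoint;
import javax.sip.SipFactory;
import javax.sip.SipProvider;
import javax.sip.SipStack;

public class JainSipExample {
    public static void main(String[] args) throws Exception {
        // Obtain the factory and point it at an implementation
        // (the package path of the NIST reference implementation is assumed here).
        SipFactory factory = SipFactory.getInstance();
        factory.setPathName("gov.nist");

        // Stack-wide configuration is supplied as Properties; STACK_NAME is mandatory.
        Properties props = new Properties();
        props.setProperty("javax.sip.STACK_NAME", "exampleStack");
        SipStack stack = factory.createSipStack(props);

        // A listening point binds the stack to an address, port and transport.
        ListeningPoint lp = stack.createListeningPoint("127.0.0.1", 5060, "udp");

        // The SipProvider sends and receives SIP messages; an application would
        // register a SipListener with it to be called back on requests and responses.
        SipProvider provider = stack.createSipProvider(lp);
        System.out.println("JAIN SIP stack initialized: " + provider);
    }
}

In a real service the application would go on to register a SipListener and build messages with the MessageFactory, HeaderFactory and AddressFactory obtained from the same SipFactory; the factory/provider/listener pattern sketched here is typical of the JAIN protocol-level APIs.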
Java APIs for Integrated Networks
Technology
578
71,242,004
https://en.wikipedia.org/wiki/Wiliot
Wiliot is a startup company developing Internet of Things technology for supply-chains and asset management, founded in 2017 and based in Caesarea, Israel, with customer operations in San Diego, US. Wiliot develops battery-free printable sensor tags to monitor products like groceries, apparel and pharmaceuticals from their sources to stores and homes. The company's business model is to sell the use of its cloud software. History Wiliot was founded in 2017 by Tal Tamir, Yaron Elboim, and Alon Yehezkely, following the sale of their previous startup Wilocity to Qualcomm in 2014. In 2019, Wiliot closed a $30 million series B round of funding from Amazon, Avery Dennison, Samsung and its previous series A investors Norwest Venture Partners, 83North Venture Capital, Grove Ventures, Qualcomm Ventures, and M Ventures. Other early investors include PepsiCo, NTT Docomo Ventures, and Vintage Investment Partners. In 2021 Wiliot raised $200 million in a series C funding round led by SoftBank Vision Fund 2 and backed by all previous investors. Technology Wiliot's tags, called IoT Pixels, are postage stamp-sized printed computers that power themselves by harvesting the energy from surrounding Wi-Fi, cellular and Bluetooth radio signals. The IoT Pixel tags have sensors for temperature, fill level, motion, location changes, humidity, and proximity. The tags cost less than 10 cents apiece. The IoT Pixel includes an ARM Cortex-M0+ processor core, Bluetooth Low Energy connectivity, 1 kB of non-volatile memory, and antennas for Bluetooth and energy harvesting. Dual-band models include connectivity in the ISM bands. In June 2022, Wiliot launched a business card-sized battery-assisted version of the IoT Pixel providing continuous connectivity. Data from the sensors is fed into a Wiliot Cloud server, where algorithms help its customers make decisions through a software as a service subscription. As of 2022, Wiliot is the assignee of 66 patents that relate to harvesting energy from very weak sources, running a computer element on tiny amounts of energy, producing a computer element in a thin, flexible form factor and the cloud services that enable sensing from such a system. Applications Wiliot's tags are designed for use in the many crates that agriculture shippers use to get their products to markets. The tags can provide information about the safety of the journey and the condition of perishable goods, to better manage inventory and reduce waste. Its first large public customer was Israeli supermarket chain Shufersal in June 2022. The chain has experimented with reducing food loss throughout the food supply chain from producers to stores and in consumers’ homes by visualizing product information using Wiliot's tags. The company hopes to extend more broadly to sectors like pharmaceuticals and apparel. Their tags can sense when a consumable is nearing end of life, or when a non-perishable consumable is almost used up, or how many washings a garment has been given. Recognition The industry recognition received by Wiliot includes: Winner of the 2019 CableLabs Innovation Showcase Winner of the FDA’s Low- or No-Cost Food Traceability Challenge 2021 Frost & Sullivan’s 2022 North American Battery-free Bluetooth Low Energy Tag Technology Innovation Leadership Award Frost & Sullivan's 2022 European Passive BLE-based IoT Solutions Customer Value Leadership Award 2022 SXSW Innovation Awards Finalist in the Smart Cities, Transportation & Delivery category See also Ambient IoT References External links Wiliot: Mr.
Beacon podcast 2017 establishments in Israel Cloud applications Internet of things companies Israeli companies established in 2017 Supply chain software companies Wireless sensor network Companies based in Caesarea Software companies of Israel
Wiliot
Technology
762
52,638,014
https://en.wikipedia.org/wiki/R-U-Dead-Yet
R.U.D.Y., short for R-U-Dead-Yet, is a Denial of Service (DoS) tool used by hackers to perform slow-rate, a.k.a. “low and slow”, attacks by submitting long form fields to the targeted server. It is known to have an interactive console, making it a user-friendly tool. It opens relatively few connections to the targeted website over a long period and keeps the sessions open as long as possible. The accumulation of open sessions exhausts the server, making the website unavailable to legitimate visitors. The data is sent in small packets at an extremely slow rate; normally there is a gap of about ten seconds between each byte, but these intervals are not fixed and may vary to avoid detection. Victims of these attacks may find themselves unable to access a particular website, have their connections disrupted, or experience drastically slowed network performance. Hackers can use such attacks for different purposes while targeting different servers or hosts; these purposes include, but are not limited to, blackmail, vengeance or sometimes even activism. The RUDY attack opens concurrent HTTP POST connections to the HTTP server and delays sending the body of the POST request to the point that the server resources are saturated. This attack sends numerous small packets at a very slow rate to keep the connections open and the server busy. This low-and-slow attack behavior makes it relatively difficult to detect, compared to flooding DoS attacks that raise the traffic volume abnormally. See also Fork bomb High Orbit Ion Cannon LAND Ping of death ReDoS Slowloris Zemra References External links r-u-dead-yet on google code r-u-dead-yet on sourceforge Denial-of-service attacks
R-U-Dead-Yet
Technology
360
32,494,671
https://en.wikipedia.org/wiki/Capsulan
Capsulan is the exopolysaccharide which makes up the thick capsule surrounding the unicellular alga Prasinococcus capsulatus. Extraction Capsulan is extracted from cell cultures of P. capsulatus using the French press method to burst the cells. Alternatively, the cells may be boiled. Composition Capsulan has been found to be mostly carbohydrate (70%) with some protein and sulfur. The main sugars making up the carbohydrate are galactose and glucose, while other sugars such as xylose, arabinose and mannose are also present in smaller quantities. Sugar acids are common in plant and algal polysaccharides, but there is disagreement in the literature concerning capsulan's sugar acid content. Kurano claims that capsulan contains none, while Myklestad maintains that both galacturonic and glucuronic acids are present. Notes Polysaccharides
Capsulan
Chemistry
203
77,423,022
https://en.wikipedia.org/wiki/NGC%205508
NGC 5508 is a very large and distant spiral galaxy located in the constellation Boötes. Its velocity relative to the cosmic microwave background is 11,615 ± 15 km/s, which corresponds to a Hubble distance of 171 ± 12 Mpc (∼558 million light-years). It was discovered by French astronomer Édouard Stephan in 1882. This galaxy is classified by all sources consulted, except Professor Seligman, as a lenticular galaxy. However, the image obtained from the SDSS survey clearly shows that it is a spiral galaxy. According to the Simbad database, NGC 5508 is a LINER galaxy, i.e. a galaxy whose nucleus exhibits an emission spectrum characterized by broad lines of weakly ionized atoms. The Hubble distance of neighboring galaxy PGC 50725 is 237.51 ± 16.63 Mpc (∼775 million light-years), well beyond NGC 5508. Although they appear as neighbors on the celestial sphere, they do not form a physical galaxy pair. See also List of NGC objects (5001–6000) References External links NGC 5508 on the site of Professor C. Seligman NGC 5508 on the SEDS website NGC 5508 on the NASA/IPAC Extragalactic Database 5508 05450 Boötes 09094 Astronomical objects discovered in 1882 Discoveries by Édouard Stephan LINER galaxies Spiral galaxies Lenticular galaxies
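For readers who want to see how the quoted recession velocity maps onto the quoted distance, the following is a rough worked application of Hubble's law; the value of the Hubble constant used (about 67.8 km/s/Mpc) is an assumption chosen only to be consistent with the figures above, not a value stated in the article.

% Hubble's law: v = H_0 d, hence d = v / H_0
\[
d = \frac{v}{H_0} \approx \frac{11\,615~\text{km/s}}{67.8~\text{km/s/Mpc}} \approx 171~\text{Mpc} \approx 5.6\times10^{8}~\text{light-years}
\]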
NGC 5508
Astronomy
281
29,062,023
https://en.wikipedia.org/wiki/List%20of%20ancient%20woods%20in%20England
This list of ancient woods in England contains areas of ancient woodland in England larger than . The list is arranged alphabetically by ceremonial county. Natural England lists 53,636 ancient woodlands in its database , comprising 39,223 ancient and semi-natural woodlands (ASNW), 14,339 ancient replanted woodlands (PAWS) and 64 ancient wood pastures (AWP). Most of these are small, with 45,445 of the woods being below 10 ha in size. The breakdown by size (in logarithmic steps) for larger woods is: B Bedfordshire The woodlands of Bedfordshire cover 6.2% of the county. Some two thirds of this () is broad-leaved woodland, principally oak and ash. A Woodland Trust estimate of all ancient woodland in Bedfordshire (dating back to at least the year 1600), including woods of and upward suggests an area of . This list of Bedfordshire's ancient woodland shows only those woods of over , all of which have SSSI status, and cover a total of . Of the eight woods shown, five fall roughly on the line of heavily wooded sandstone that runs diagonally across the county south of Bedford. Berkshire Berkshire has woodland covering , which is 14.5% of its land area. The woodlands listed below are all ancient woods of or more, and these cover some . A major proportion of the area is the area of woodland along the Surrey and Buckinghamshire borders. This is Windsor Great Park and Forest, and as well as the woodland area listed here, it has vast tracts of heath and parkland. Also in the east of the county are woodlands on the southern end of the Chiltern Hills. The great majority of the woods listed are in West Berkshire and follow the line of the chalk hills across the county. Bristol There is only one sizeable area of Ancient Woodland within Bristol. The Avon Gorge SSSI is partly within the city boundary, but the woodland is mainly in Somerset, so is covered under that county. Buckinghamshire 9.4% of the land area of Buckinghamshire is Woodland. Bernwood Forest Burnham Beeches Hollington Wood Jones Hill Wood C Cambridgeshire The ancient woods listed here are those over . With one exception, these are all SSSIs. The woods are distributed very unevenly. Large areas of the fenland in the north-eastern side of the county have none. There are significant numbers in the south, toward Suffolk. More of the woods are found in the western half of the county, with three near Peterborough. Cheshire Cheshire has some 4% of its area under woodland - around half the national average. Since 1994 the Mersey Community Forest has been promoting new woodland planting within the Merseyside and Cheshire region to alleviate this deficit, and also better manage the existing woodland to secure its future. Cheshire has less ancient woodland, and in smaller units than most counties. Many of the ancient woodlands survive in steep valleys or cloughs, of small extent. Taylor's Rough, Wellmeadow Wood, Warburton's Wood And Well Wood are examples of clough woodland too small for inclusion in this list. Most of the ancient woodland in the county is in units smaller than and 65% of the area is in woods smaller than . The list below is of ancient woodland larger than . City of London No Ancient Woodland remains in the City of London although the City of London Corporation are directly responsible for large areas of woodland elsewhere, notably Epping Forest (Essex), Highgate Wood (Greater London) and Burnham Beeches (Bucks) Cornwall The county of Cornwall has woodland representing 7.5% of the Land Area. 
Steeple Woods -16.2 Ha (40 acres) Devichoys Wood -16 Ha (40 acres) Cumbria 9.5% of the land area of Cumbria is woodland. Whinfell Forest D Derbyshire Shining Cliff Wood Devon Wistman's Wood Dorset Duncliffe Wood Holt Heath Powerstock Common Thorncombe Wood Durham Brignall Banks SSSI Castle Eden Dene SSSI and NNR Deepdale Wood Derwent Gorge SSSI and NNR Great High Wood Hawthorn Dene SSSI Hesleden Dene Pontburn Woods E East Riding of Yorkshire Burton Bushes in Beverley Westwood East Sussex 16.7% of the land area of East Sussex is woodland. Essex Epping Forest Hadleigh Woods Hockley Woods SSSI Hatfield Forest SSSI Nevendon Bushes Norsey Wood G Gloucestershire 11.2% of the land area of Gloucestershire is woodland. Forest of Dean Lower Woods Greater London Bluebell Wood Cherry Tree Wood Coldfall Wood Highgate Wood Lesnes Abbey Woods Oxleas Wood SSSI Queen's Wood Ruislip Woods NNR Great North Wood Scratchwood Greater Manchester Borsdane Wood H Hampshire 17.7% of the Land Area of Hampshire is woodland. New Forest Herefordshire Queenswood Hertfordshire 9.5% of Hertfordshire's land area is woodland. Ashridge Estate Benington High Wood Birchanger Wood, near Bishop's Stortford Broxbourne Woods NNR, near Broxbourne Bush Wood Knebworth Woods Northaw Great Wood Sherrardspark Wood, near Welwyn Garden City Whippendell Wood, , Watford I Isle of Wight In 2012 the Isle of Wight Biodiversity Partnership commissioned a revised Ancient Woodland Inventory for the island, and this was completed in 2014. This has a list of all identified ancient woodland sites on the Isle of Wight. Brading Wood, part of the Brading Marshes RSPB reserve Parkhurst Forest K Kent 10.6% of Kent's land area is wooded, and it has more ancient woodland than any other county. Barrows Wood, Trundle Wood and High Wood around Wormshill Chattenden Woods and Lodge Hill SSSI Cobham Woods Combwell Wood Darenth Wood SSSI East Blean Woods NNR Ham Street Woods NNR Parsonage Wood SSSI Robins Wood SSSI South Blean West Blean NNR Westerham Wood SSSI Yockletts Bank SSSI L Lancashire Boilton, Nab, Red Scar & Tun Brook woods, Preston Leicestershire It is estimated that 2% of Leicestershire's land area is ancient woodland, of which half has been replaced by new plantings in recent times. There are over 100 woods in Leicestershire believed to be ancient. The sites listed below are those over in size, and with one exception, all have SSSI status. With one group of woods near Hinckley, in the south-west, the remainder fall into three broad areas. In East Leicestershire, close to the border with Rutland, are the woods near Leighfield Forest, an extensive Royal Forest which straddled the two counties. North west of Leicester are the woods of Charnwood Forest. Further west are the woods of the coal measures toward the border with Derbyshire. Lincolnshire Bradley and Dixon Woods, Grimsby Legbourne Wood, Legbourne, Louth Stapleford Woods, Stapleford, North Kesteven Reddings Woods, Kirkby on Bain, Lincolnshire, East Lindsey M Merseyside Dibbinsdale, Wirral Hundred, Merseyside N Norfolk Foxley Wood Wayland Wood North Yorkshire Grass Wood, Wharfedale Nidd Gorge, Knaresborough Northamptonshire The ancient woods of Northants are concentrated towards the south and west of the county, to that region bordering Bucks, Oxford and Beds. Many are managed by the Forestry Commission, although others are in private hands. They tend to occur on limestone soils in elevated country, and exhibit a diversity of habitats. 
Hazleborough Wood, part of Whittlewood Forest Royal Forest of Rockingham Salcey Forest Whittlewood Forest Yardley Chase SSSI Northumberland Allen Banks and Steward Gorge Whittle Dene Nottinghamshire Sherwood Forest O Oxfordshire The ancient woods of Oxfordshire are concentrated in three distinct areas. In the south are woods of the Chiltern Hills. A second cluster lies to the east of Oxford. The Cotswolds woods on the western side of the county include those in the Royal Forest of Wychwood. Oxfordshire has nearly of woodland in total (6.9% of its area), two-thirds of which are in woods of over . of woodland is represented in the 17 ancient woods listed below. Some of woodland is split among the 3,390 woods smaller than 10 ha. Many of these smaller woods may be ancient, but are not covered by this list. The list here covers woods of over 10 ha with SSSI status. R Rutland Burley Wood Prior's Coppice S Shropshire Wyre Forest NNR (also in Worcestershire) Somerset Somerset is a rural county of rolling hills such as the Blackdown Hills, Mendip Hills, Quantock Hills and Exmoor National Park, and large flat expanses of land including the Somerset Levels. Many of the woodland areas have been designated as SSSIs with some being managed by the Avon Wildlife Trust or Somerset Wildlife Trust. Woodland covers seven per cent of the land area of the county. South Yorkshire Bagger Wood Beeley Wood Watchley Crags Staffordshire Cannock Chase SSSI Suffolk Arger Fen and Spouses Grove Assington Thicks Bradfield Woods NNR Bull's Wood Calves Wood Foxburrow Wood (Suffolk) Palant's Grove Snakes Wood Staverton Park and the Thicks Wolves Wood Surrey 22.4% of the Land Area of Surrey is woodland this makes it the most wooded county in England. T Tyne and Wear Thornley Wood SSSI Derwent Walk Country Park woods Stanley Burn Wood Snipes Dene Wood, part of Gibside SSSI Lands Wood, Winlaton Mill W Warwickshire Bush Wood Rough Hill Wood Ryton Wood SSSI West Midlands Sutton Park SSSI Rough Wood West Sussex 18.9% of West Sussex's land area is woodland. Titnore Wood Kingley Vale NNR Worth Forest West Yorkshire Batty's Wood Wiltshire Savernake forest SSSI Vincients Wood Worcestershire Grafton Wood Laight Rough Pepper Wood Shrawley Wood Wyre Forest (also in Shropshire) See also Ancient Woodland English land law Forest of Lyme Notes References External links What is Ancient Woodland? Ancient Woods in England Ancient Woods Old-growth forests Ancient Woods
List of ancient woods in England
Biology
2,054
13,244,136
https://en.wikipedia.org/wiki/World%20Rainforest%20Movement
The World Rainforest Movement (WRM) is an international initiative created to strengthen the global movement in defense of forests, in order to fight deforestation and forest degradation. It was founded in 1986 by activists from around the world. WRM believes that this goal can only be achieved by fighting for social and ecological justice, by respecting the collective rights of traditional communities and the right to self-determination of peoples who depend on the forests for their livelihoods. For this reason, WRM's actions are oriented to support the struggles of indigenous peoples and peasant communities in defense of their territories. WRM's International Secretariat is composed of a small team with members from different countries. The head office is in Uruguay. Main areas of work Expansion of monoculture tree plantations for the production of timber, cellulose, palm oil, rubber or biomass. Industrial tree plantations pose a major threat to communities beyond tropical forest areas. Impacts of corporations that extract timber, minerals, water and fossil fuels from forest territories, and of the infrastructure that supports this exploitation. Initiatives that are presented as "solutions" but in fact only exacerbate forest loss and climate change. These include certification of forest management concessions, monoculture tree plantations, carbon offsets, environmental compensation programmes, among others. New trends related to corporate tactics and national and international policies that facilitate the appropriation of community forests. Local struggles and resistance strategies of movements, organisations and communities in the defence of their territories and forests. The differentiated impacts that women face when their lands are encroached and appropriated: sexual violence, harassment, persecution and deprivation of livelihood, among others. Activities Mutual learning and support for community struggles Visiting communities that are struggling against the destruction of their forests for tree plantations and other corporate projects, to exchange experiences and to jointly decide on forms of support. Supporting meetings elaborated collectively with people from communities, organisations and social movements on the causes of forest destruction, global trends, threats and local resistance. Promoting exchanges between activists and organisations that resist against similar threats to their livelihoods. Creating spaces of trust and political connection to strengthen communities' struggles. Showing solidarity with local and community struggles, based on demands presented by the organisations, communities and activists involved. Production and dissemination of information and analyses Participating in debates and international campaigns to give visibility to community struggles and to expose the private and state tactics of land grabbing. Producing analyses and exposing violations – in local and international spaces – on the impacts of false solutions to the destruction of forests and climate change for communities. Producing analyses about new trends and international policies related to climate and biodiversity with forest dwellers threatened by these initiatives. Facilitating the flow of information among groups in different regions of the world, for example with translations of texts, petitions and action alerts into local languages. Publishing the WRM bulletin, an e-newsletter, since 1997. 
It exposes struggles, threats and resistance in forests, as well as false policy solutions at international and local level. Articles are written by activists and organizations from all over the world. The bulletin is distributed to more than 10,000 individuals and organizations in 131 countries around the world. Producing diverse materials for activists and communities on specific topics. Maintaining an online library with WRM materials since 1996, available in Spanish, French, English and Portuguese. Some are also translated into other languages, such as Bahasa Indonesian, Lingala, Malagasy, Swahili and Thai. References External links WRM site Forests Indigenous peoples and the environment Food sovereignty Forestry Peasants Organizations based in Uruguay 1986 establishments in Uruguay Environmental organizations established in 1986 Forest conservation organizations
World Rainforest Movement
Biology
716
60,100,869
https://en.wikipedia.org/wiki/NGC%204294
NGC 4294 is a barred spiral galaxy with flocculent spiral arms located about 55 million light-years away in the constellation Virgo. The galaxy was discovered by astronomer William Herschel on March 15, 1784, and is a member of the Virgo Cluster. NGC 4294 appears to be undergoing ram-pressure stripping edge-on. Physical characteristics NGC 4294 hosts many H II regions. Interaction with NGC 4299 NGC 4294 appears to be in a pair with NGC 4299 and may be tidally interacting with it. Effects of a tidal interaction on NGC 4294 are evident as the galaxy has a disturbed optical and HI morphology, a high global star formation rate, and an observed asymmetry in polarized radio continuum emission. HI tail Chung et al. identified that NGC 4294 has a one-sided tail of neutral atomic hydrogen (HI). The tail points to the southwest and appears to be a result of ram pressure. The tail has no optical counterpart and is oriented parallel to the HI tail found in NGC 4299. As the tail has no optical counterpart, the probability that it was caused by a tidal interaction is low. However, NGC 4299 lies from NGC 4294 and the two galaxies have almost the same velocity, with a difference of 120 km/s. This means that the scenario of the tail originating from a tidal interaction cannot be ruled out entirely. Black Hole NGC 4294 may harbor an intermediate-mass black hole with an estimated mass ranging from 3,000 (3×10^3) to 20,000 (2×10^4) solar masses. References External links 4294 7407 39925 +02-32-009 J122117.82+113037.6 Virgo (constellation) Astronomical objects discovered in 1784 Barred spiral galaxies Flocculent spiral galaxies Virgo Cluster Interacting galaxies Discoveries by William Herschel
NGC 4294
Astronomy
395
1,342,447
https://en.wikipedia.org/wiki/Eye%20drop
Eye drops or eyedrops are liquid drops applied directly to the surface of the eye, usually in small amounts such as a single drop or a few drops. Eye drops usually contain saline to match the salinity of the eye. Drops containing only saline and sometimes a lubricant are often used as artificial tears to treat dry eyes or simple eye irritation such as itching or redness. Eye drops may also contain one or more medications to treat a wide variety of eye diseases. Depending on the condition being treated, they may contain steroids, antihistamines, sympathomimetics, beta receptor blockers, parasympathomimetics, parasympatholytics, prostaglandins, nonsteroidal anti-inflammatory drugs (NSAIDs), antibiotics, antifungals, or topical anesthetics. Eye drops have less of a risk of side effects than do oral medicines, and such risk can be minimized by occluding the lacrimal punctum (i.e. pressing on the inner corner of the eye) for a short while after instilling drops. Prior to the development of single-use pre-loaded sterile plastic applicators, eye drops were administered using an eye dropper, a glass pipette with a rubber bulb. Shelf life Although most bottles of eye drops contain preservatives to inhibit contamination once opened, these will not prevent contamination indefinitely. Ophthalmologists recommend keeping bottles for no longer than three months after opening. Eye drops that contain no preservatives are usually packaged in single-use tubes. Dispensers typically oversize the drops; the human eye can only handle about 25 microlitres. Types and uses Different pharmacological classes of eye drops can be recognized by patients by their different colored tops. For instance, the tops of dilating drops are a different color than those of anti-allergy drops. Dry eyes Eye drops sometimes do not have medications in them and are only lubricating and tear-replacing solutions. There is a wide variety of artificial tear eye drops that provide different surface healing strategies: some contain bicarbonate ions, some are hypotonic, some are high-viscosity gels or ointments, and some are preservative-free. They all act differently and therefore one may have to try different artificial tears to find the one that works the best. Steroid and antibiotic eye drops Steroid and antibiotic eye drops are used to treat eye infections. They also have prophylactic properties and are used to prevent infections after eye surgeries. They should be used for the entire time prescribed without interruptions. The infection may relapse if the use of the medication is stopped. Pink eye Antibiotic eye drops are prescribed when conjunctivitis is caused by bacteria but not when it is caused by a virus. In the case of allergic conjunctivitis, artificial tears can help dilute irritating allergens present in the tear film. Allergies Some eye drops may contain histamine antagonists or nonsteroidal anti-inflammatory drugs (NSAIDs), which suppress the ocular mast cell responses to allergens including (but not limited to) aerosolized dust particles. Glaucoma Eye drops used in managing glaucoma help the eye's fluid to drain better and decrease the amount of fluid made by the eye, which decreases eye pressure. They are classified by their active ingredient and they include: prostaglandin analogs, beta blockers, alpha agonists, and carbonic anhydrase inhibitors. There are also combination drugs available for those patients who require more than one type of medication. 
Mydriatic eye drops These make the eye's pupil widen to its maximum, to let an optometrist have the best view inside the eyeball behind the iris. Afterwards, in sunny weather, they can cause dazzle and photophobia until the effect of the mydriatic has worn off. In some countries including Russia and Italy, tropicamide, a mydriatic eye drop, is used to some degree as an inexpensive recreational drug. Like other anticholinergics, when taken recreationally, tropicamide acts as a deliriant. When injected intravenously, as is most often the case, tropicamide may cause problems such as slurred speech, unconsciousness, unresponsiveness, hallucinations, kidney pain, dysphoria, hyperthermia, tremors, suicidal tendency, convulsions, psychomotor agitation, tachycardia and headache. Injectable medication Syringe-designed saline drops (e.g. Wallace Cameron Ultra Saline Minipod) are distributed in modern needle-exchange programmes, as they can be used efficiently either by injection or, if the drug is potent in small doses, by the ophthalmic route of administration, which is comparable to intravenous use; as a demonstration, the elimination of latanoprost acid from plasma is rapid (half-life 17 minutes) after either ophthalmic or intravenous administration. Side effects Steroid and antibiotic eye drops may cause stinging for one or two minutes when first used, and if stinging continues, medical advice should be sought. Also, one should tell their doctor if vision changes occur or if they experience persistent sore throat, fever, easy bleeding or bruising when using drops with chloramphenicol. Also, one should be aware of symptoms of an allergic reaction, such as: rash, itching, swelling, dizziness, and trouble breathing. Long-term steroid use can cause many adverse effects including steroid-induced glaucoma and cataract. Prostaglandin analogs may cause changes in iris color and eyelid skin, growth of eyelashes, stinging, blurred vision, eye redness, itching, and burning. Beta blockers' side effects include low blood pressure, reduced pulse rate, fatigue, shortness of breath, and on rare occasions, reduced libido and depression. Alpha agonists can cause burning or stinging, fatigue, headache, drowsiness, dry mouth and nose, and they also have a higher likelihood of causing an allergic reaction. Carbonic anhydrase inhibitors may cause stinging, burning, and eye discomfort. Lubricant eye drops may cause some side effects and one should consult a doctor if pain in the eye or changes in vision occur. Furthermore, if redness occurs and lasts more than 3 days, one should immediately consult a doctor. See also Artificial tears Carboxymethyl cellulose Mydriasis Refractive error Tetrahydrozoline hydrochloride Visine References External links Dosage forms Drug delivery devices Ophthalmology drugs Ophthalmic drug administration
Eye drop
Chemistry
1,394
2,134,103
https://en.wikipedia.org/wiki/Periodic%20acid
Periodic acid is the highest oxoacid of iodine, in which the iodine exists in oxidation state +7. It can exist in two forms: orthoperiodic acid, with the chemical formula H5IO6, and metaperiodic acid, which has the formula HIO4. Periodic acid was discovered by Heinrich Gustav Magnus and C. F. Ammermüller in 1833. Synthesis Modern industrial scale production involves the oxidation of a solution of sodium iodate under alkaline conditions, either electrochemically at an anode, IO3− + 6 OH− → IO6^5− + 3 H2O + 2 e− (E° = −1.6 V), or by treatment with chlorine, IO3− + 6 OH− + Cl2 → IO6^5− + 2 Cl− + 3 H2O (counter ions omitted for clarity in both equations). A standard laboratory preparation involves treating barium periodate with nitric acid. Upon concentrating the mixture, the barium nitrate, which is less soluble, is separated from periodic acid: Ba5(IO6)2 + 10 HNO3 → 2 H5IO6 + 5 Ba(NO3)2. Properties Orthoperiodic acid has a number of acid dissociation constants. The pKa of metaperiodic acid has not been determined. H5IO6 ⇌ H4IO6− + H+, pKa = 3.29; H4IO6− ⇌ H3IO6^2− + H+, pKa = 8.31; H3IO6^2− ⇌ H2IO6^3− + H+, pKa = 11.60. There being two forms of periodic acid, it follows that two types of periodate salts are formed. For example, sodium metaperiodate, NaIO4, can be synthesised from HIO4, while sodium orthoperiodate, Na5IO6, can be synthesised from H5IO6. Structure Orthoperiodic acid forms monoclinic crystals (space group P21/n) consisting of slightly deformed IO6 octahedra interlinked via bridging hydrogens. Five I–O bond distances are in the range 1.87–1.91 Å and one I–O bond is 1.78 Å. The structure of metaperiodic acid also includes IO6 octahedra; however, these are connected via cis-edge-sharing with bridging oxygens to form one-dimensional infinite chains. Reactions Orthoperiodic acid can be dehydrated to give metaperiodic acid by heating to 100 °C under reduced pressure. Further heating to around 150 °C gives iodine pentoxide (I2O5) rather than the expected anhydride diiodine heptoxide (I2O7). Metaperiodic acid can also be prepared from various orthoperiodates by treatment with dilute nitric acid. Like all periodates, periodic acid can be used to cleave various 1,2-difunctional compounds. Most notably, periodic acid will cleave vicinal diols into two aldehyde or ketone fragments (Malaprade reaction). This can be useful in determining the structure of carbohydrates as periodic acid can be used to open saccharide rings. This process is often used in labeling saccharides with fluorescent molecules or other tags such as biotin. Because the process requires vicinal diols, periodate oxidation is often used to selectively label the 3′-termini of RNA (ribose has vicinal diols) instead of DNA as deoxyribose does not have vicinal diols. Periodic acid is also used as an oxidising agent of moderate strength, as exemplified in the Babler oxidation of secondary allyl alcohols, which are oxidised to enones by stoichiometric amounts of orthoperiodic acid with PCC as catalyst. Other oxyacids Periodic acid is part of a series of oxyacids in which iodine can assume oxidation states of −1, +1, +3, +5, or +7. A number of neutral iodine oxides are also known. See also Compounds with a similar structure: Perchloric acid, perbromic acid, the related perhalogenic acids Telluric acid and perxenic acid, the isoelectronic oxoacids of tellurium and xenon Compounds with similar chemistry: lead tetraacetate (Criegee oxidation) References Halogen oxoacids Oxidizing acids Periodates
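As a worked illustration of the dissociation constants quoted above (standard acid–base arithmetic, not a result taken from the source), comparing a solution pH with the pKa values shows which form of orthoperiodic acid dominates; at pH 7, for example, the singly deprotonated anion is the main species:

% Henderson–Hasselbalch ratios at pH 7, using pKa1 = 3.29 and pKa2 = 8.31 from the article
\[
\frac{[\mathrm{H_4IO_6^-}]}{[\mathrm{H_5IO_6}]} = 10^{\,\mathrm{pH}-pK_{a1}} = 10^{7-3.29} \approx 5\times10^{3},
\qquad
\frac{[\mathrm{H_3IO_6^{2-}}]}{[\mathrm{H_4IO_6^-}]} = 10^{\,7-8.31} \approx 0.05
\]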
Periodic acid
Chemistry
828
75,041,845
https://en.wikipedia.org/wiki/Tremella%20imshaugiae
Tremella imshaugiae is a lichenicolous (lichen-dwelling) fungus that is parasitic on the lichen Imshaugia aleurites. It is a species of Basidiomycota belonging to the family Tremellaceae. Description The fungus is typically found on the thallus of Imshaugia aleurites, with amber-colored fruiting bodies 0.1–1 mm in diameter. Like other fungi in the family Tremellaceae, it has two- to four-celled septate basidia that average 15.5–21.5 × 13–16.5 μm. Unlike others in the family Tremellaceae, it has somewhat spherical basidiospores averaging 6.5–9 × 6.5–8.5 μm. Its closest relative is Tremella diploschistina. Habitat and distribution The species has been documented in four areas across the globe: Scotland, Spain, the USA, and Canada. The first documented occurrence was in 2012 on the Great Wass Island Preserve in Maine, USA. The fungus is recorded in habitats that contain Imshaugia aleurites, including conifer forests, particularly pines and maples. References imshaugiae Lichenicolous fungi Fungi described in 2020 Fungi of Europe Fungi of North America Taxa named by Paul Diederich Taxa named by Brian John Coppins Taxa named by Mats Wedin Fungus species Taxa named by Richard Clinton Harris
Tremella imshaugiae
Biology
301
23,450,595
https://en.wikipedia.org/wiki/350.org
350.org is an international environmental organization addressing the climate crisis. Its stated goal is to end the use of fossil fuels and transition to renewable energy by building a global, grassroots movement. The 350 in the name stands for 350 parts per million (ppm) of carbon dioxide (CO2), which has been identified as a safe upper limit to avoid a climate tipping point. By the end of 2007, the year 350.org was founded, atmospheric CO2 had already exceeded this threshold, reaching 383 ppm; as of July 2022, the concentration had reached 421 ppm, a level 50% higher than pre-industrial levels. Through online campaigns, grassroots organizing, mass public actions, and collaboration with an extensive network of partner groups and organizations, 350.org mobilized thousands of volunteer organizers in over 188 countries. It was one of the many organizers of the September 2019 Global Climate Strike, which evolved from the Fridays for Future movement. Campaigns 350.org runs a variety of campaigns, from the local to the global scale. Fossil fuel divestment The fossil fuel divestment campaign, also known as "Fossil Free", borrows activist tactics from other social movements, notably the successful campaign for disinvestment from South Africa over apartheid. From its inception in 2012 through October 2021, over 1500 institutions with more than US$40.43 trillion in assets under management had committed to divest from fossil fuels. 350.org explains that the reasoning behind this campaign is simple: "If it is wrong to wreck the climate, then it is wrong to profit from that wreckage." 350.org states their demand as the following: "We want institutions to immediately freeze any new investment in fossil fuel companies and divest from direct ownership and any commingled funds that include fossil-fuel public equities and corporate bonds." The campaign has grown from colleges and universities around the United States to now include other kinds of public and private institutions, such as the City of New York, major Japanese banks, development banks, religious institutions, and more. Campaigns for divestment are active and growing around the world. From 2013 to 2020, Australian members built a network of local groups across the country advocating for institutions to divest. Keystone XL pipeline 350.org named the Keystone XL pipeline as a critical issue and turning point for the environmental movement, as well as for then-President Barack Obama's legacy. NASA climatologist James Hansen labeled the Keystone XL pipeline as "game over" for the planet and called the amount of carbon stored in Canadian bitumen sands a "fuse to the largest carbon bomb on the planet". 350.org cited oil spills along the proposed pipeline route, which would pass near Texas' Carrizo-Wilcox Aquifer, which supplies drinking water to more than 12 million people, as one important reason to reject the pipeline. They argued that it could also pose a danger to the Ogallala Aquifer, the largest aquifer in western North America that supplies drinking water and irrigation to millions of people and agricultural businesses. 350.org has opposed the economic argument that has been made by proponents of the pipeline, arguing that Keystone XL would create only a few thousand temporary jobs during construction. The State Department estimated that ultimately the pipeline will create 35 permanent jobs. 
Additionally, the Natural Resources Defense Council (NRDC) has said that the Keystone XL pipeline will increase gas prices instead of lowering them as oil industry proponents claimed. The NRDC's study also rebutted the claim that the pipeline will lead to energy independence because the pipeline would carry tar sands from Canada to Texas for export to the global market. Partly due to efforts from 350.org and other organizations, President Obama officially rejected the building of Keystone XL on November 6, 2015. This marked the end of a seven-year review of the pipeline. Speaking on the decision, Bill McKibben said, "President Obama is the first world leader to reject a project because of its effect on the climate. That gives him new stature as an environmental leader, and it eloquently confirms the five years and millions of hours of work that people of every kind put into this fight." In response, proponent TC Energy filed a US$15 billion lawsuit under NAFTA's Chapter 11. On January 24, 2017, President Donald Trump took action intended to permit the pipeline's completion, whereupon TC Energy suspended their NAFTA Chapter 11 action. On January 18, 2018, TransCanada Pipelines (now TC PipeLines) announced they had secured commitments from oil companies to ship of dilbit per day for 20 years, meeting the threshold to make the project economically viable. On January 20, 2021, President Joe Biden revoked the permit for the pipeline on his first day in office. On June 9, 2021, the project was abandoned by TC Energy. In its coverage of the abandonment, The Wall Street Journal highlighted the role of 350.org in the project's failure. Mountain Valley pipeline On January 31, 2024, 350.org and international multifaith organization GreenFaith gathered in Charlotte, North Carolina, to oppose the Mountain Valley Pipeline which would carry gasoline and methane "more than from West Virginia to Southern Virginia", as well as the pipeline's Southgate Extension. Fossil Fuel bans Local campaigns in jurisdictions around the world have passed laws limiting or banning fossil fuel production. These include 410 municipal bans for fracking in Brazil and two state bans: Santa Catarina and Paraná. International Day of Climate Action An "International Day of Climate Action" on October 24, 2009, was organized by 350.org to influence the delegates going to the United Nations Framework Convention on Climate Change meeting in December 2009 (COP15). This was the first global campaign ever organized around a scientific data point. The actions organized by 350.org included gigantic depictions of the number "350", walks, marches, rallies, teach-ins, bike rides, sing-a-thons, carbon-free dinners, retrofitting houses to save energy, tree plantings, mass dives at the Great Barrier Reef, solar-cooked bake-outs, church bell ringings, underwater cabinet meetings (Maldives), and armband distributions to athletes. The group reported that they organized the world's "most widespread day of political action" on that Saturday, reporting 5,245 actions in 181 countries. Global Work Party As a follow-up to 2009's International Day of Climate Action, 350.org and the 10:10 Climate Campaign joined forces to help coordinate another global day of action, which occurred on October 10, 2010. The 2010 campaign was focused on concrete actions that can be taken locally to help combat climate change. 
Actions from tree-plantings to solar panel installations to huge electricity service-provider switching parties occurred in almost every country around the world. Connect the dots The organization's efforts continued into 2012 with a planned May 5 worldwide series of rallies under the slogan "Connect the Dots" to draw attention to the links between climate change and extreme weather. Per the 350.org website, the day is called "Climate Impacts Day". Global Power Shift Phase 1 of Global Power Shift was a convergence in Istanbul, Turkey, in June 2013 of about five-hundred climate organizers from 135 countries. Stated objectives include sharing and developing skills to organize movements, building upon existing plans to organize in-country Power Shift events after the kickoff event in Turkey, building political alignment and a clear theory of change, sharing experiences from different countries, formulating strategies to overcome challenges, and building relationships to strengthen regional and international cooperation and collaboration. Phase 2 of Global Power Shift involves the organizers who were in Turkey in June 2013 to bring home what they learned to organize summits, events, and mobilizations. Summer Heat 350.org launched the Summer Heat campaign in the summer of 2013, a wave of mass mobilizations across the USA. Summer Heat actions took place at eleven locations: Richmond, California; Vancouver, Washington; Green River, Utah; Albuquerque, New Mexico; Houston, Texas; St. Ignace, Michigan; Warren, Ohio; Washington, D.C.; Camp David, Maryland; Somerset, Massachusetts; and Sebago Lake, Maine. Participants included grassroots organizers, labor unions, farmers, ranchers, environmental justice groups, and others. The slogan that was used for the Summer Heat campaign was As The Temperature Rises, So Do We. People's Climate March 350.org helped organize the People's Climate March, which took place on September 21, 2014. 2,000 events took place around the world. Global Climate Strike 350.org was one of the leading organizers of the Global Climate Strike, September 20–27, 2019. Strike actions were planned in more than 150 countries. Worn by a broad coalition of NGOs, unions, and social movements, the strikes were inspired by the school strikes of the Fridays for Future movement. Also supported is the digital climate strike, which calls for a shutdown or 'go green' of websites with redirection to coverage of the physical mobilizations. The aim of the Global Climate Strike was to draw attention to the emergency climate crisis and to create pressure on politics, the media and the fossil fuel industry. The strikes were intended as a prelude to a permanent mass mobilization. Over 7.6 million people across 185 countries participated in the mass mobilization event, making the Global Climate Strike the largest climate mobilization in history. Other activities Apart from special events, 350.org organizes actions on an ongoing basis to promote its message. These activities include tree plantings (350 trees in each instance) for biosequestration, promoting the term "350", publishing adverts in major newspapers calling for the target level of carbon dioxide to be lowered to 350 ppm, conducting polls on the subject of climate change, educating youth leaders, lobbying governments on the issue of carbon targets, and joining a campaign to establish a .eco top-level domain or "tld". 
In December 2009, the group petitioned the United States Environmental Protection Agency to set national limits for greenhouse gases using the Clean Air Act, asking the agency to cap atmospheric concentrations of carbon dioxide at 350 parts per million. The organization created and distributed a time-lapse video showing the recent retreat of Mendenhall Glacier in Alaska, graphically depicting the impacts of warming climates. Do The Math movie The Do The Math movie is a 42-minute documentary film about the rising movement to change the terrifying math of the climate crisis and challenge the fossil fuel industry. The math revolves around these three numbers: to stay below 2 degrees Celsius of global warming we can emit only 565 more gigatons of carbon dioxide versus the 2,795 gigatons held in proven reserves by fossil fuel corporations. This warming rise was agreed to in the 2009 Copenhagen Summit as a limit. NASA scientist James Hansen says "2 degrees of warming is actually a prescription for long-term disaster." "Rise: From One Island to Another" poem "Rise: From One Island to Another" is a poem and video project that showcases the impacts of sea level rise and the ways the climate crisis spans across national borders. The poem is written by two islanders, Kathy Jetn̄il-Kijiner from the Marshall Islands and Aka Niviâna from Greenland. Through their poetry, they draw connections between their realities of melting glaciers and rising sea levels. 350.org founder Bill McKibben writes that "[climate change] science is uncontroversial. But science alone can't make change, because it appeals only to the hemisphere of the brain that values logic and reason. We're also creatures of emotion, intuition, spark – which is perhaps why we should mount more poetry expeditions, put more musicians on dying reefs, make sure that novelists can feel the licking heat of wildfire." "Rise" was created in 2018. The "Rise" film project team included photographer and photojournalist Dan Lin, freelance filmmaker Nick Stone, visual storyteller Rob Lau, and filmmaker Oz Go. Origins 350.org was founded by American environmentalist Bill McKibben and a group of students from Middlebury College in Vermont. Their 2007 "Step It Up" campaign involved 1,400 demonstrations at famous sites across the United States. McKibben credits these activities with making Hillary Clinton and Barack Obama change their energy policies during the 2008 United States presidential campaign. Starting in 2008, 350.org built upon the "Step It Up" campaign and made it into a global organization. McKibben is an American environmentalist and writer who wrote one of the first books on global warming for the general public, and frequently writes about climate change, alternative energy, and the need for more localized economies. As of 2022, McKibben was a senior advisor to 350.org and May Boeve is the Executive Director. Rajendra Pachauri, the UN's "top climate scientist" and leader of the Intergovernmental Panel on Climate Change (IPCC), has come out, as have others, in favor of reducing atmospheric concentrations of carbon dioxide to 350 ppm. McKibben called news of Pachauri's embrace of the 350 ppm target "amazing". Some media have indicated that Pachauri's endorsement of the 350 ppm target was a victory for 350.org's activism. The organization had a lift in prominence after McKibben appeared on The Colbert Report television show on August 17, 2009. 
McKibben promotes the organization on speaking tours and by writing articles about it for many major newspapers and media, such as the Los Angeles Times and The Guardian. In 2012, the organization was presented with the Katerva Award for Behavioural Change. Science of 350 NASA climate scientist James Hansen contended that any atmospheric concentration of CO2 above 350 parts per million (ppm) was unsafe. Hansen opined in 2009 that "if humanity wishes to preserve a planet similar to that on which civilization developed and to which life on Earth is adapted, paleoclimate evidence and ongoing climate change suggest that CO2 will need to be reduced from its current 400 ppm to at most 350 ppm, but likely less than that." Hansen has noted that nuclear energy is a viable solution to lower CO2 in the atmosphere, a position at odds with 350.org. Carbon dioxide, the main greenhouse gas, rose by 2.6 ppm to 396 ppm in 2013 from the previous year (annual global averages). In May 2013, two independent teams of scientists measuring CO2 near the summit of Mauna Loa in Hawaii recorded that the amount of carbon dioxide in the atmosphere exceeded 400 ppm, probably for the first time in more than 3 million years of Earth history. It crossed 415 ppm in May 2019 and the amount continues to rise. A warming of 2 °C was agreed upon during the 2009 Copenhagen Accord as a limit for global temperature rise. In the 2015 Paris Agreement, 1.5 °C of warming was introduced as a limit, reflecting the significant difference in impacts between 2 °C and 1.5 °C, especially for climate-vulnerable areas. This was reaffirmed in the 2018 report by the Intergovernmental Panel on Climate Change, where the world's leading scientists urged action to limit warming to 1.5 °C. In order to stay below a 2 °C increase, scientists have estimated that humans can pour roughly 565 more gigatons of carbon dioxide into the atmosphere. Fossil-fuel companies have about 2,795 gigatons of carbon already contained in their proven coal and oil and gas reserves, which is the amount of fossil fuels they are currently planning to burn. 2,795 gigatons is roughly five times the limit of 565 gigatons that would keep Earth under a global temperature increase of 2 °C, which is already unsafe according to the latest science. Membership 350.org claims alliance with 300 organizations around the world. Many notable figures have publicly allied themselves with the organization or its goal to spread the movement, including Archbishop Desmond Tutu, Alex Steffen, Bianca Jagger, David Suzuki, and Colin Beavan. 1Sky merged into 350.org in 2011. See also 2010 United Nations Climate Change Conference Air pollution reduction efforts Climate change mitigation Climate change policy of the United States Climate Reality Project Conservation (ethic) Criticism of non-governmental organizations Environmental movement Individual and political action on climate change List of environmental issues NGO-ization Politics of global warming Stern Review References External links Official website Check the current level of CO2 in the Earth's atmosphere International environmental organizations Climate change organizations based in the United States Environmental policies organizations Emissions reduction International climate change organizations Anti-consumerist groups Environmental advocacy groups
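To make the "roughly five times" comparison explicit, here is the simple arithmetic implied by the two figures quoted above (a back-of-the-envelope restatement, not an additional source):

\[
\frac{2795~\text{Gt (proven reserves)}}{565~\text{Gt (remaining budget for }2\,^{\circ}\mathrm{C}\text{)}} \approx 4.9
\]

i.e. burning all proven reserves would emit nearly five times the estimated remaining budget.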
350.org
Chemistry
3,394
65,888
https://en.wikipedia.org/wiki/Electromagnetic%20induction
Electromagnetic or magnetic induction is the production of an electromotive force (emf) across an electrical conductor in a changing magnetic field. Michael Faraday is generally credited with the discovery of induction in 1831, and James Clerk Maxwell mathematically described it as Faraday's law of induction. Lenz's law describes the direction of the induced field. Faraday's law was later generalized to become the Maxwell–Faraday equation, one of the four Maxwell equations in his theory of electromagnetism. Electromagnetic induction has found many applications, including electrical components such as inductors and transformers, and devices such as electric motors and generators. History Electromagnetic induction was discovered by Michael Faraday, who published his findings in 1831. It was discovered independently by Joseph Henry in 1832. In Faraday's first experimental demonstration (August 29, 1831), he wrapped two wires around opposite sides of an iron ring or "torus" (an arrangement similar to a modern toroidal transformer). Based on his understanding of electromagnets, he expected that, when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. He plugged one wire into a galvanometer, and watched it as he connected the other wire to a battery. He saw a transient current, which he called a "wave of electricity", when he connected the wire to the battery and another when he disconnected it. This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected. Within two months, Faraday found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk"). Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. An exception was James Clerk Maxwell, who used Faraday's ideas as the basis of his quantitative electromagnetic theory. In Maxwell's model, the time varying aspect of electromagnetic induction is expressed as a differential equation, which Oliver Heaviside referred to as Faraday's law even though it is slightly different from Faraday's original formulation and does not describe motional emf. Heaviside's version (see Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations. In 1834 Heinrich Lenz formulated the law named after him to describe the "flux through the circuit". Lenz's law gives the direction of the induced emf and current resulting from electromagnetic induction. Theory Faraday's law of induction and Lenz's law Faraday's law of induction makes use of the magnetic flux ΦB through a region of space enclosed by a wire loop. The magnetic flux is defined by a surface integral: ΦB = ∫∫Σ B · dA, where dA is an element of the surface Σ enclosed by the wire loop and B is the magnetic field. The dot product B·dA corresponds to an infinitesimal amount of magnetic flux. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic field lines that pass through the loop. When the flux through the surface changes, Faraday's law of induction says that the wire loop acquires an electromotive force (emf). 
The most widespread version of this law states that the induced electromotive force in any closed circuit is equal to the rate of change of the magnetic flux enclosed by the circuit: where is the emf and ΦB is the magnetic flux. The direction of the electromotive force is given by Lenz's law which states that an induced current will flow in the direction that will oppose the change which produced it. This is due to the negative sign in the previous equation. To increase the generated emf, a common approach is to exploit flux linkage by creating a tightly wound coil of wire, composed of N identical turns, each with the same magnetic flux going through them. The resulting emf is then N times that of one single wire. Generating an emf through a variation of the magnetic flux through the surface of a wire loop can be achieved in several ways: the magnetic field B changes (e.g. an alternating magnetic field, or moving a wire loop towards a bar magnet where the B field is stronger), the wire loop is deformed and the surface Σ changes, the orientation of the surface dA changes (e.g. spinning a wire loop into a fixed magnetic field), any combination of the above Maxwell–Faraday equation In general, the relation between the emf in a wire loop encircling a surface Σ, and the electric field E in the wire is given by where dℓ is an element of contour of the surface Σ, combining this with the definition of flux we can write the integral form of the Maxwell–Faraday equation It is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism. Faraday's law and relativity Faraday's law describes two different phenomena: the motional emf generated by a magnetic force on a moving wire (see Lorentz force), and the transformer emf that is generated by an electric force due to a changing magnetic field (due to the differential form of the Maxwell–Faraday equation). James Clerk Maxwell drew attention to the separate physical phenomena in 1861. This is believed to be a unique example in physics of where such a fundamental law is invoked to explain two such different phenomena. Albert Einstein noticed that the two situations both corresponded to a relative movement between a conductor and a magnet, and the outcome was unaffected by which one was moving. This was one of the principal paths that led him to develop special relativity. Applications The principles of electromagnetic induction are applied in many devices and systems, including: Electrical generator The emf generated by Faraday's law of induction due to relative movement of a circuit and a magnetic field is the phenomenon underlying electrical generators. When a permanent magnet is moved relative to a conductor, or vice versa, an electromotive force is created. If the wire is connected through an electrical load, current will flow, and thus electrical energy is generated, converting the mechanical energy of motion to electrical energy. For example, the drum generator is based upon the figure to the bottom-right. A different implementation of this idea is the Faraday's disc, shown in simplified form on the right. In the Faraday's disc example, the disc is rotated in a uniform magnetic field perpendicular to the disc, causing a current to flow in the radial arm due to the Lorentz force. Mechanical work is necessary to drive this current. 
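As a worked illustration of the relation described above (emf equal to minus the rate of change of the flux linkage N ΦB), the sketch below evaluates the voltage of a flat coil spinning at constant angular velocity in a uniform field. The coil parameters are illustrative assumptions, not values taken from the text.

import numpy as np

# Illustrative (assumed) parameters for a small rotating coil generator
N = 100                  # number of turns
B = 0.5                  # uniform magnetic field, tesla
A = 0.01                 # loop area, square metres
omega = 2 * np.pi * 50   # angular speed, rad/s (50 turns per second)

t = np.linspace(0.0, 0.02, 5)                 # sample times over one period, seconds
flux_linkage = N * B * A * np.cos(omega * t)  # N * Phi_B(t) for the rotating loop
emf = N * B * A * omega * np.sin(omega * t)   # emf = -d(N * Phi_B)/dt

for ti, e in zip(t, emf):
    print(f"t = {ti:.4f} s   emf = {e:7.2f} V")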
When the generated current flows through the conducting rim, a magnetic field is generated by this current through Ampère's circuital law (labelled "induced B" in the figure). The rim thus becomes an electromagnet that resists rotation of the disc (an example of Lenz's law). On the far side of the figure, the return current flows from the rotating arm through the far side of the rim to the bottom brush. The B-field induced by this return current opposes the applied B-field, tending to decrease the flux through that side of the circuit, opposing the increase in flux due to rotation. On the near side of the figure, the return current flows from the rotating arm through the near side of the rim to the bottom brush. The induced B-field increases the flux on this side of the circuit, opposing the decrease in flux due to the rotation. The energy required to keep the disc moving, despite this reactive force, is exactly equal to the electrical energy generated (plus energy wasted due to friction, Joule heating, and other inefficiencies). This behavior is common to all generators converting mechanical energy to electrical energy. Electrical transformer When the electric current in a loop of wire changes, the changing current creates a changing magnetic field. A second wire in reach of this magnetic field will experience this change in magnetic field as a change in its coupled magnetic flux. Therefore, an electromotive force is set up in the second loop called the induced emf or transformer emf. If the two ends of this loop are connected through an electrical load, current will flow. Current clamp A current clamp is a type of transformer with a split core which can be spread apart and clipped onto a wire or coil to either measure the current in it or, in reverse, to induce a voltage. Unlike conventional instruments, the clamp does not make electrical contact with the conductor or require it to be disconnected during attachment of the clamp. Magnetic flow meter Faraday's law is used for measuring the flow of electrically conductive liquids and slurries. Such instruments are called magnetic flow meters. The induced voltage ε generated in the magnetic field B due to a conductive liquid moving at velocity v is thus given by ε = Bℓv, where ℓ is the distance between electrodes in the magnetic flow meter. Eddy currents Electrical conductors moving through a steady magnetic field, or stationary conductors within a changing magnetic field, will have circular currents induced within them by induction, called eddy currents. Eddy currents flow in closed loops in planes perpendicular to the magnetic field. They have useful applications in eddy current brakes and induction heating systems. However, eddy currents induced in the metal magnetic cores of transformers and AC motors and generators are undesirable since they dissipate energy (called core losses) as heat in the resistance of the metal. Cores for these devices use a number of methods to reduce eddy currents: Cores of low frequency alternating current electromagnets and transformers, instead of being solid metal, are often made of stacks of metal sheets, called laminations, separated by nonconductive coatings. These thin plates reduce the undesirable parasitic eddy currents, as described below. Inductors and transformers used at higher frequencies often have magnetic cores made of nonconductive magnetic materials such as ferrite or iron powder held together with a resin binder.
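For the magnetic flow meter relation ε = Bℓv given above, the flow velocity can be recovered directly from the measured voltage. A minimal sketch with assumed example values (not taken from the text):

# Assumed example values for a magnetic flow meter
B = 0.2        # applied magnetic field, tesla
ell = 0.05     # electrode spacing, metres
emf = 0.003    # measured induced voltage, volts

v = emf / (B * ell)                     # invert emf = B * ell * v
print(f"flow velocity = {v:.2f} m/s")   # 0.30 m/s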
Electromagnet laminations Eddy currents occur when a solid metallic mass is rotated in a magnetic field, because the outer portion of the metal cuts more magnetic lines of force than the inner portion; hence the induced electromotive force is not uniform; this tends to cause electric currents between the points of greatest and least potential. Eddy currents consume a considerable amount of energy and often cause a harmful rise in temperature. Only five laminations or plates are shown in this example, so as to show the subdivision of the eddy currents. In practical use, the number of laminations or punchings ranges from 40 to 66 per inch (16 to 26 per centimetre), and brings the eddy current loss down to about one percent. While the plates can be separated by insulation, the voltage is so low that the natural rust/oxide coating of the plates is enough to prevent current flow across the laminations. This is a rotor approximately 20 mm in diameter from a DC motor. Note the laminations of the electromagnet pole pieces, used to limit parasitic inductive losses. Parasitic induction within conductors In this illustration, a solid copper bar conductor on a rotating armature is just passing under the tip of the pole piece N of the field magnet. Note the uneven distribution of the lines of force across the copper bar. The magnetic field is more concentrated and thus stronger on the left edge of the copper bar (a,b) while the field is weaker on the right edge (c,d). Since the two edges of the bar move with the same velocity, this difference in field strength across the bar creates whorls or current eddies within the copper bar. High-current power-frequency devices, such as electric motors, generators and transformers, use multiple small conductors in parallel to break up the eddy flows that can form within large solid conductors. The same principle is applied to transformers used at higher than power frequency, for example, those used in switch-mode power supplies and the intermediate frequency coupling transformers of radio receivers. See also Inductance Moving magnet and conductor problem References Notes References Further reading Maxwell, James Clerk (1881), A treatise on electricity and magnetism, Vol. II, Chapter III, §530, p. 178. Oxford, UK: Clarendon Press. External links The Laws of Induction - The Feynman Lectures on Physics A free Java simulation on motional EMF Electrodynamics Physical phenomena Michael Faraday Maxwell's equations
Electromagnetic induction
Physics,Mathematics
2,596
7,382,070
https://en.wikipedia.org/wiki/Hourman%20%28Rex%20Tyler%29
Hourman (Rex Tyler) is a fictional superhero appearing in comics published by DC Comics. He is known as the original Hourman (spelled Hour-Man in his earliest appearances, also referred to as The Hour-Man, and The Hourman). He was created by writer Ken Fitch and artist Bernard Baily in Adventure Comics #48 (April 1940), during the Golden Age of Comic Books. He continued to appear in Adventure Comics until issue #83 (Feb 1943). Rex Tyler made his live-action debut in the first season of DC's Legends of Tomorrow before becoming a guest star in the second season, portrayed by Patrick J. Adams. Rex Tyler also appeared in the first season of the DC Universe series Stargirl, portrayed by Lou Ferrigno Jr. Fictional character biography Scientist Rex Tyler, raised in upstate New York, developed an affinity for chemistry, particularly biochemistry. Working his way through college, he landed a job researching vitamins and hormone supplements at Bannermain Chemical. A series of discoveries and accidents led him to the "miraculous vitamin" Miraclo. He found that concentrated doses of the "miraclo" given to test mice increased their strength and vitality several times that of normal, but only for one hour. After taking a dose himself, Rex found he could have superhuman strength and speed for an hour, before returning to human levels. Keeping the discovery of Miraclo a secret, Tyler decided that human trials would be limited to the only subject he could trust: himself. Feeling that the Miraclo-induced abilities should be used for good purposes, he decided to use the abilities to help those in need; in other words, he would become a superhero, based in Appleton City. He received his first mission by placing an ad stating that "The Man of The Hour" would help the needy. Tracking down one responder to the ad, he aided a housewife whose husband was falling in with the wrong crowd, and stopped a robbery. Using a costume he found in an abandoned costume shop, he started to adventure as The Hour-Man (later dropping the hyphen). In November 1940 Hourman became one of the founding members of the first superhero team, the Justice Society of America. After leaving the JSA in mid-1941 Tyler became one of Uncle Sam's initial group of Freedom Fighters. He later became part of the wartime All-Star Squadron. According to Jess Nevins' Encyclopedia of Golden Age Superheroes, "Hourman fights a variety of Doctors: the robot-wielding Dr. Darrk, the hypnotist Dr. Feher, the big-headed genius Dr. Glisten; the occultist and alchemist Dr. Iker; and the bio-engineer Dr. Togg. There is also the 90-Minute Man, who gains Hourman-like powers for 90 minutes from his radium armor". Hourman was one of many heroes whose popularity began to decline in the post-war years. Eventually, his adventures ended, but with the resurgence of super-heroes in the mid-1950s and early 1960s, interest in the Golden Age heroes returned, and Hourman was soon appearing as a guest star in issues of Justice League of America. Like all the other Golden Agers, he was now considered an elder statesman of the super-hero set. It is later revealed that Miraclo is addictive and that Rex is struggling with its effects. In Zero Hour: Crisis in Time!, Extant kills Hourman before the Hourman android rescues him and transports him to a pocket dimension. Rex is later resurrected, retires, and provides technical support for the JSA All-Stars, of whom his son is a member. 
In Doomsday Clock, Hourman is erased from existence when Doctor Manhattan alters the timeline, but is resurrected when Superman convinces Manhattan to undo his actions. Powers and abilities Through the use of Miraclo, Hourman can possess superhuman strength, speed, stamina, and durability, night vision, underwater survival, and expert martial arts skills for one full hour. In other media Television Rex Tyler appears in the Batman: The Brave and the Bold episode "The Golden Age of Justice!", voiced by Lex Lang. This version uses an hourglass-shaped device to fuel his powers instead of Miraclo and appears as a member of an aged Justice Society of America. Rex Tyler appears in the Robot Chicken episode "Tapping a Hero", voiced by Seth Green. Rex Tyler appears in Legends of Tomorrow, portrayed by Patrick J. Adams. This version is the leader of the Justice Society of America, who were active in the 1940s. At the end of the first season, he warns the Legends not to travel to 1942 due to their impending deaths, only to vanish shortly afterwards. In the second season, the team meets Tyler's past self when they ignore his warning. Tyler is later killed by the Reverse-Flash, erasing his future self who had discovered the Reverse-Flash's plans and warned the Legends from existence. Before his death, Rex was in a relationship with Vixen, who goes after and later joins the Legends to avenge Rex. Rex Tyler appears in Stargirl, portrayed by Lou Ferrigno Jr. This version is a member of the Justice Society of America whose powers are derived from an hourglass amulet. Ten years prior to the series, Rex and his wife Wendi are killed by Solomon Grundy. In the present, Rick Tyler assumes his father's mantle and amulet to avenge his parents' deaths. Film Rex Tyler appears in the opening credits of Justice League: The New Frontier, in which he falls to his death while running from police officers due to a ban on vigilantes. An alternate universe incarnation of Rex Tyler appears in Justice Society: World War II, voiced by Matthew Mercer. This version hails from Earth-2 and is a founding member of the Justice Society of America, who were active during their Earth's version of the titular war. References External links Grand Comics Database Hourman at Don Markstein's Toonopedia. Archived from the original on February 5, 2016. Comics Archives: JSA Fact File: Hourman I DC Indexes: Earth-2 Hourman I Comics characters introduced in 1940 DC Comics characters with superhuman durability or invulnerability DC Comics characters with superhuman strength DC Comics characters who can move at superhuman speeds DC Comics male superheroes DC Comics scientists DC Comics titles Earth-Two DC Comics characters with accelerated healing DC Comics metahumans DC Comics martial artists Golden Age superheroes
Hourman (Rex Tyler)
Chemistry
1,339
58,946,850
https://en.wikipedia.org/wiki/NGC%20681
NGC 681 is an intermediate spiral galaxy in the constellation of Cetus, located approximately 66.5 million light-years from Earth. NGC 681 is a member of the MCG -02-05-053 group (also known as LGG 33), which contains four galaxies, including NGC 701 and IC 1738. Observation history NGC 681 was discovered by the German-born British astronomer William Herschel on 28 November 1785 and was later also observed by William's son, John Herschel. John Louis Emil Dreyer, compiler of the first New General Catalogue of Nebulae and Clusters of Stars, described NGC 681 as being a "pretty faint, considerably large, round, small (faint) star 90 arcsec to [the] west" that becomes "gradually a little brighter [in the] middle". Physical characteristics NGC 681 shares many structural similarities with the Sombrero Galaxy, M104, although it is smaller, less luminous, and less massive. Its thin, dusty disc is seen almost perfectly edge-on and features a small, very bright nucleus in the center of a very pronounced bulge. Distinctly unlike M104, NGC 681's disc contains many H II regions, where star formation is likely to be occurring. The galaxy has a mass of M☉, a mass-to-light ratio of 3.6, and a spiral pattern which is asymmetrical. The SIMBAD database lists NGC 681 as a Seyfert II galaxy, i.e. it has a quasar-like nucleus with very high surface brightness whose spectrum reveals strong, high-ionisation emission lines, but unlike quasars, the host galaxy is clearly detectable. Supernova One supernova has been observed in NGC 681. SN 2024abup (type Ib/c, mag. 17.018) was discovered by ATLAS on 22 November 2024. Image gallery See also List of NGC objects (1–1000) Sombrero Galaxy Messier object List of spiral galaxies References External links Galaxies discovered in 1785 Intermediate spiral galaxies Cetus Discoveries by William Herschel 0681 006671 01467-1040 Seyfert galaxies
NGC 681
Astronomy
454
17,519,721
https://en.wikipedia.org/wiki/Entropic%20vector
The entropic vector or entropic function is a concept arising in information theory. It represents the possible values of Shannon's information entropy that subsets of one set of random variables may take. Understanding which vectors are entropic is a way to represent all possible inequalities between entropies of various subsets. For example, for any two random variables , their joint entropy (the entropy of the random variable representing the pair ) is at most the sum of the entropies of and of : Other information-theoretic measures such as conditional information, mutual information, or total correlation can be expressed in terms of joint entropy and are thus related by the corresponding inequalities. Many inequalities satisfied by entropic vectors can be derived as linear combinations of a few basic ones, called Shannon-type inequalities. However, it has been proven that already for variables, no finite set of linear inequalities is sufficient to characterize all entropic vectors. Definition Shannon's information entropy of a random variable is denoted . For a tuple of random variables , we denote the joint entropy of a subset as , or more concisely as , where . Here can be understood as the random variable representing the tuple . For the empty subset , denotes a deterministic variable with entropy 0. A vector h in indexed by subsets of is called an entropic vector of order if there exists a tuple of random variables such that for each subset . The set of all entropic vectors of order is denoted by . Zhang and Yeung proved that it is not closed (for ), but its closure, , is a convex cone and hence characterized by the (infinitely many) linear inequalities it satisfies. Describing the region is thus equivalent to characterizing all possible inequalities on joint entropies. Example Let X,Y be two independent random variables with discrete uniform distribution over the set . Then (since each is uniformly distributed over a two-element set), and (since the two variables are independent, which means the pair is uniformly distributed over .) The corresponding entropic vector is thus: On the other hand, the vector is not entropic (that is, ), because any pair of random variables (independent or not) should satisfy . Characterizing entropic vectors: the region Γn* Shannon-type inequalities and Γn For a tuple of random variables , their entropies satisfy: ,     for any In particular, , for any . The Shannon inequality says that an entropic vector is submodular: ,     for any It is equivalent to the inequality stating that the conditional mutual information is non-negative: (For one direction, observe this the last form expresses Shannon's inequality for subsets and of the tuple ; for the other direction, substitute , , ). Many inequalities can be derived as linear combinations of Shannon inequalities; they are called Shannon-type inequalities or basic information inequalities of Shannon's information measures. The set of vectors that satisfies them is called ; it contains . Software has been developed to automate the task of proving Shannon-type inequalities. Given an inequality, such software is able to determine whether the given inequality is a valid Shannon-type inequality (i.e., whether it contains the cone ). Non-Shannon-type inequalities The question of whether Shannon-type inequalities are the only ones, that is, whether they completely characterize the region , was first asked by Te Su Han in 1981 and more precisely by Nicholas Pippenger in 1986. 
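The worked example above can be checked numerically: for two independent fair bits the entropic vector is (1, 1, 2), and subadditivity H(X,Y) ≤ H(X) + H(Y) holds. A minimal sketch (the helper names are arbitrary):

from math import log2

# Joint distribution of two independent uniform bits X and Y
joint = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(dist, index):
    """Marginal distribution of one coordinate of the joint outcome."""
    out = {}
    for outcome, p in dist.items():
        out[outcome[index]] = out.get(outcome[index], 0.0) + p
    return out

hX = entropy(marginal(joint, 0))   # 1.0
hY = entropy(marginal(joint, 1))   # 1.0
hXY = entropy(joint)               # 2.0
print((hX, hY, hXY))               # the entropic vector (1.0, 1.0, 2.0)
print(hXY <= hX + hY)              # True: subadditivity holds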
It is not hard to show that this is true for two variables, that is, . For three variables, Zhang and Yeung proved that ; however, it is still asymptotically true, meaning that the closure is equal: . In 1998, Zhang and Yeung showed that for all , by proving that the following inequality on four random variables (in terms of conditional mutual information) is true for any entropic vector, but is not Shannon-type: Further inequalities and infinite families of inequalities have been found. These inequalities provide outer bounds for better than the Shannon-type bound . In 2007, Matus proved that no finite set of linear inequalities is sufficient (to deduce all as linear combinations), for variables. In other words, the region is not polyhedral. Whether they can be characterized in some other way (allowing to effectively decide whether a vector is entropic or not) remains an open problem. Analogous questions for von Neumann entropy in quantum information theory have been considered. Inner bounds Some inner bounds of are also known. One example is that contains all vectors in which additionally satisfy the following inequality (and those obtained by permuting variables), known as Ingleton's inequality for entropy: Entropy and groups Group-characterizable vectors and quasi-uniform distributions Consider a group and subgroups of . Let denote for ; this is also a subgroup of . It is possible to construct a probability distribution for random variables such that . (The construction essentially takes an element of uniformly at random and lets be the corresponding coset ). Thus any information-theoretic inequality implies a group-theoretic one. For example, the basic inequality implies that It turns out the converse is essentially true. More precisely, a vector is said to be group-characterizable if it can be obtained from a tuple of subgroups as above. The set of group-characterizable vectors is denoted . As said above, . On the other hand, (and thus ) is contained in the topological closure of the convex closure of . In other words, a linear inequality holds for all entropic vectors if and only if it holds for all vectors of the form , where goes over subsets of some tuple of subgroups in a group . Group-characterizable vectors that come from an abelian group satisfy Ingleton's inequality. Kolmogorov complexity Kolmogorov complexity satisfies essentially the same inequalities as entropy. Namely, denote the Kolmogorov complexity of a finite string as (that is, the length of the shortest program that outputs ). The joint complexity of two strings , defined as the complexity of an encoding of the pair , can be denoted . Similarly, the conditional complexity can be denoted (the length of the shortest program that outputs given ). Andrey Kolmogorov noticed these notions behave similarly to Shannon entropy, for example: In 2000, Hammer et al. proved that indeed an inequality holds for entropic vectors if and only if the corresponding inequality in terms of Kolmogorov complexity holds up to logarithmic terms for all tuples of strings. See also Inequalities in information theory References Thomas M. Cover, Joy A. Thomas. Elements of information theory New York: Wiley, 1991. Raymond Yeung. A First Course in Information Theory, Chapter 12, Information Inequalities, 2002, Print Information theory
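Returning to the two-variable remark near the start of this section: since for two variables the Shannon-type inequalities already describe the entropic region, membership of an order-2 vector in Γ2 reduces to non-negativity, monotonicity, and subadditivity. A small checker, offered as a sketch:

def in_gamma2(h1, h2, h12):
    """Check the basic Shannon-type inequalities for an order-2 vector (h1, h2, h12)."""
    nonneg = h1 >= 0 and h2 >= 0 and h12 >= 0
    monotone = h12 >= h1 and h12 >= h2   # H(X,Y) >= H(X) and H(X,Y) >= H(Y)
    submodular = h1 + h2 >= h12          # H(X) + H(Y) >= H(X,Y)
    return nonneg and monotone and submodular

print(in_gamma2(1, 1, 2))   # True: two independent fair bits
print(in_gamma2(1, 1, 3))   # False: violates subadditivity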
Entropic vector
Mathematics,Technology,Engineering
1,466
31,853,491
https://en.wikipedia.org/wiki/Horticultural%20building%20system
Horticultural Building Systems are defined as the instance where vegetation and an architectural/architectonic system exist in a mutually defined and intentionally designed relationship that supports plant growth and an architectonic concept. The most common forms of these systems in contemporary vernacular are the green wall, vertical garden, green roof, roof garden, and building-integrated agriculture (BIA), yet the history of these systems may be traced back through greenhouse technology, hydroponicums, horticultural growth chambers, and beyond. These horticultural building systems evolved from a reciprocal relationship between plant cultural requirements and architectural technology. Notes References Weiler, Susan K., and Katrin Scholz-Barth. 2009. Green roof systems: a guide to the planning, design, and construction of landscapes over structure. Hoboken, N.J.: John Wiley & Sons. Werthmann, Christian. 2007. Green roof: a case study. New York: Princeton Architectural Press. Zimmermann, Astrid. 2009. Constructing landscape: materials, techniques, structural components. Basel: Birkhäuser. Hix, John. 1974. The glass house. Cambridge, Massachusetts: MIT Press. Hindle, Richard L. Horticultural Building Systems: Evolution and Research Futures. In: Urban Nature CELA 2011. Figueroa Press, Los Angeles, CA. 2011, p. 78. Horticulture
Horticultural building system
Engineering
268
2,618,270
https://en.wikipedia.org/wiki/Gravitational%20instanton
In mathematical physics and differential geometry, a gravitational instanton is a four-dimensional complete Riemannian manifold satisfying the vacuum Einstein equations. They are so named because they are analogues in quantum theories of gravity of instantons in Yang–Mills theory. In accordance with this analogy with self-dual Yang–Mills instantons, gravitational instantons are usually assumed to look like four dimensional Euclidean space at large distances, and to have a self-dual Riemann tensor. Mathematically, this means that they are asymptotically locally Euclidean (or perhaps asymptotically locally flat) hyperkähler 4-manifolds, and in this sense, they are special examples of Einstein manifolds. From a physical point of view, a gravitational instanton is a non-singular solution of the vacuum Einstein equations with positive-definite, as opposed to Lorentzian, metric. There are many possible generalizations of the original conception of a gravitational instanton: for example one can allow gravitational instantons to have a nonzero cosmological constant or a Riemann tensor which is not self-dual. One can also relax the boundary condition that the metric is asymptotically Euclidean. There are many methods for constructing gravitational instantons, including the Gibbons–Hawking Ansatz, twistor theory, and the hyperkähler quotient construction. Introduction Gravitational instantons are interesting, as they offer insights into the quantization of gravity. For example, positive definite asymptotically locally Euclidean metrics are needed as they obey the positive-action conjecture; actions that are unbounded below create divergence in the quantum path integral. A four-dimensional Ricci-flat Kähler manifold has anti-self-dual Riemann tensor with respect to the complex orientation. Consequently, a simply-connected anti-self-dual gravitational instanton is a four-dimensional complete hyperkähler manifold. Gravitational instantons are analogous to self-dual Yang–Mills instantons. Several distinctions can be made with respect to the structure of the Riemann curvature tensor, pertaining to flatness and self-duality. These include: Einstein (non-zero cosmological constant) Ricci flatness (vanishing Ricci tensor) Conformal flatness (vanishing Weyl tensor) Self-duality Anti-self-duality Conformally self-dual Conformally anti-self-dual Taxonomy By specifying the 'boundary conditions', i.e. the asymptotics of the metric 'at infinity' on a noncompact Riemannian manifold, gravitational instantons are divided into a few classes, such as asymptotically locally Euclidean spaces (ALE spaces), asymptotically locally flat spaces (ALF spaces). They can be further characterized by whether the Riemann tensor is self-dual, whether the Weyl tensor is self-dual, or neither; whether or not they are Kähler manifolds; and various characteristic classes, such as Euler characteristic, the Hirzebruch signature (Pontryagin class), the Rarita–Schwinger index (spin-3/2 index), or generally the Chern class. The ability to support a spin structure (i.e. to allow consistent Dirac spinors) is another appealing feature. List of examples Eguchi et al. list a number of examples of gravitational instantons. These include, among others: Flat space , the torus and the Euclidean de Sitter space , i.e. the standard metric on the 4-sphere. The product of spheres . The Schwarzschild metric and the Kerr metric . The Eguchi–Hanson instanton , given below. The Taub–NUT solution, given below. 
The Fubini–Study metric on the complex projective plane Note that the complex projective plane does not support well-defined Dirac spinors. That is, it is not a spin structure. It can be given a spinc structure, however. The Page space, which exhibits an explicit Einstein metric on the connected sum of two oppositely oriented complex projective planes . The Gibbons–Hawking multi-center metrics, given below. The Taub-bolt metric and the rotating Taub-bolt metric. The "bolt" metrics have a cylindrical-type coordinate singularity at the origin, as compared to the "nut" metrics, which have a sphere coordinate singularity. In both cases, the coordinate singularity can be removed by switching to Euclidean coordinates at the origin. The K3 surfaces. The ALE (asymptotically locally Euclidean) anti-self-dual manifolds. Among these, the simply connected ones are all hyper-Kähler, and each one is asymptotic to a flat cone over modulo a finite subgroup. Each finite sub-group of actually occurs. The complete list of possibilities consists of the cyclic groups together with the inverse images of the dihedral groups, the tetrahedral group, the octahedral group, and the icosahedral group under the double cover . Note that corresponds to the Eguchi–Hanson instanton, while for higher k, the cyclic group corresponds to the Gibbons–Hawking multi-center metrics, each of which diffeomorphic to the space obtained from the disjoint union of k copies of by using the Dynkin diagram as a plumbing diagram. This is a very incomplete list; there are many other possibilities, not all of which have been classified. Examples It will be convenient to write the gravitational instanton solutions below using left-invariant 1-forms on the three-sphere S3 (viewed as the group Sp(1) or SU(2)). These can be defined in terms of Euler angles by Note that for cyclic. Taub–NUT metric Eguchi–Hanson metric The Eguchi–Hanson space is defined by a metric the cotangent bundle of the 2-sphere . This metric is where . This metric is smooth everywhere if it has no conical singularity at , . For this happens if has a period of , which gives a flat metric on R4; However, for this happens if has a period of . Asymptotically (i.e., in the limit ) the metric looks like which naively seems as the flat metric on R4. However, for , has only half the usual periodicity, as we have seen. Thus the metric is asymptotically R4 with the identification , which is a Z2 subgroup of SO(4), the rotation group of R4. Therefore, the metric is said to be asymptotically R4/Z2. There is a transformation to another coordinate system, in which the metric looks like where (For a = 0, , and the new coordinates are defined as follows: one first defines and then parametrizes , and by the R3 coordinates , i.e. ). In the new coordinates, has the usual periodicity One may replace V by For some n points , i = 1, 2..., n. This gives a multi-center Eguchi–Hanson gravitational instanton, which is again smooth everywhere if the angular coordinates have the usual periodicities (to avoid conical singularities). The asymptotic limit () is equivalent to taking all to zero, and by changing coordinates back to r, and , and redefining , we get the asymptotic metric This is R4/Zn = C2/Zn, because it is R4 with the angular coordinate replaced by , which has the wrong periodicity ( instead of ). In other words, it is R4 identified under , or, equivalently, C2 identified under zi ~ zi for i = 1, 2. 
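Several of the expressions in this passage refer to the Eguchi–Hanson line element; a commonly quoted form of that metric, written with the left-invariant 1-forms σi introduced earlier, is given below. This is the standard literature form, supplied for reference rather than taken from the text, so the conventions may differ in detail.

ds^2 = \left(1 - \frac{a^4}{r^4}\right)^{-1} dr^2
     + \frac{r^2}{4}\left(1 - \frac{a^4}{r^4}\right)\sigma_3^2
     + \frac{r^2}{4}\left(\sigma_1^2 + \sigma_2^2\right), \qquad r \ge a

With the angle ψ given period 2π, the apparent singularity at r = a is removable and the space is asymptotically R4/Z2, as described above.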
To conclude, the multi-center Eguchi–Hanson geometry is a Kähler Ricci flat geometry which is asymptotically C2/Zn. According to Yau's theorem this is the only geometry satisfying these properties. Therefore, this is also the geometry of a C2/Zn orbifold in string theory after its conical singularity has been smoothed away by its "blow up" (i.e., deformation). Gibbons–Hawking multi-centre metrics The Gibbons–Hawking multi-center metrics are given by where Here, corresponds to multi-Taub–NUT, and is flat space, and and is the Eguchi–Hanson solution (in different coordinates). FLRW-metrics as gravitational instantons In 2021 it was found that if one views the curvature parameter of a foliated maximally symmetric space as a continuous function, the gravitational action, as a sum of the Einstein–Hilbert action and the Gibbons–Hawking–York boundary term, becomes that of a point particle. Then the trajectory is the scale factor and the curvature parameter is viewed as the potential. For the solutions restricted like this, general relativity takes the form of a topological Yang–Mills theory. See also Gravitational anomaly Hyperkähler manifold References Riemannian manifolds Quantum gravity Mathematical physics 4-manifolds
Gravitational instanton
Physics,Mathematics
1,860
2,760,182
https://en.wikipedia.org/wiki/Tau%20Eridani
Tau Eridani (τ Eridani, τ Eri) is a group of fairly widely scattered stars in the constellation Eridanus. τ1 Eridani (1 Eridani) τ2 Eridani (2 Eridani) τ3 Eridani (11 Eridani) τ4 Eridani (16 Eridani) τ5 Eridani (19 Eridani) τ6 Eridani (27 Eridani) τ7 Eridani (28 Eridani) τ8 Eridani (33 Eridani) τ9 Eridani (36 Eridani) All of them were members of the asterism 天苑 (Tiān Yuàn), the Celestial Meadows, in the Hairy Head mansion. See also Map analysis of the 1961 Zeta Reticuli Incident References Eridanus (constellation) Eridani, Tau
Tau Eridani
Astronomy
177
31,207,598
https://en.wikipedia.org/wiki/Algophagy
Algophagy is a feeding behaviour whereby an animal eats algae as a food source. Algae is a group of photosynthetic organisms that mostly rely on aquatic environments. They grow low to the ground as they lack vascular tissue, an adaptation postdating their origin. While the group of algal species is large, it is generally accepted that algae is high in nutritional value and often contain a variety of concentrated vitamins and minerals. Algophagy as a feeding behaviour was first noted in literature by Deonier (1972) in their explanation of feeding habits of shore flies (Ephydridae). In this context, this term was used to describe the behaviour of these flies consuming and digesting algal matter. This feeding style has also been noted in other animals in recent literature. While this behaviour has been noted in a variety of insects (specifically Ameletus mayflies), it has also been observed in other invertebrates such as the crab Carcinus maenas and the Nanorchestes mite. Additionally, this behaviour has been noted in vertebrates such as the chimpanzee Pan troglodytes, the sheep Ovis aries, and the chicken Gallus gallus domesticus. This feeding behaviour has more recently been adopted by humans as well. Examples in invertebrates Algophagy is a feeding behaviour found commonly amongst many invertebrate species. Some examples of these observations include the mayfly, mites, and certain species of crab. Mayflies are a group of insects found to feed off of epilithic algae from near streams in New Mexico, United States. In a study to examine ingestion and digestion of algae by larval insects, Peterson (1998) analyzed the fecal composition of varying insect larvae and nymphs. All species studied showed epilithic algae in their fecal matter, markedly in the multiple species of mayfly. This study outlines the feeding behaviours used by specifically the Ameletus mayflies to feed off of and digest algae as a source of food. This behaviour has also been noted in species of mites. The Nanorchestes mite is a small invertebrate of the genus Pachygnathoid that lives in the ground and is often found in extremophilic conditions. Krantz and Lindquist (1979) made observations of these mites feeding and surviving off of green algae, while also delving into the background theory behind this. The authors argue that algal microflora predates that of vascular plants, a step to understanding the evolutionary pathway that follows algophagy. Because of this flora timeline, the mites relied on algae as an early source of nutrition. Algophagy also occurs in certain species of crab. The green crab is a highly invasive species found on nearly all continents of Earth. This littoral crab is an omnivore with a large array of preferred foods, forming an important ecological connection with many ocean environments. In a study performed by Ropes (1968), 3,979 green crabs were sampled and their gut contents were analyzed to reveal that algae was one of the two consumed plant foods. This was replicated in other studies such as that of Baeta, Cabral, Marques, and Pardal (2006) who also found these results nearly 40 years later. Examples in non-human vertebrates Algophagy has been observed in a variety of vertebrate species, such as the chimpanzee, species of sheep, and also in the common chicken. The chimpanzee is a primate in the same family as humans and are native to sub-Saharan Africa. 
While many chimpanzees are naturally hydrophobic, Sakamaki (1998) found that those in Mahale have been observed to submerge themselves into freshwater and eat algae. This observation is the first documentation of a primate using algae in the wild as a food source and is an important marker of possible adaptation in the species. While the chimpanzee in question, Sally, was one of the only algae-eaters in her group, it was assumed that she had adopted this behaviour from her natal group prior to immigrating to this new environment. Nonetheless, this anecdotal field study highlights the act of eating algae in chimpanzees. Another example here is found in certain species of sheep. The North Ronaldsay sheep is native to the island of North Ronaldsay in Orkney, off Scotland, and had been bred for wool until recently being listed as a vulnerable population. This species relies heavily on tidal algae as outlined by Paterson and Coleman (1982). The researchers here observed the sheep feeding largely on brown algae, commonly known as seaweed. The sheep relied on the tides to expose the nutrient-rich algae and, when the tides made the food inaccessible, the sheep supported their diet with other forms of grazing. Algophagy has also been observed in the common chicken. When the Poultry Department of the University of Maryland did an assay of dried Chlorella pyrenoidosa, they found it to be a rich nutrient source that could be substituted into the diet of chickens. The researcher behind this outlined the benefits of using this food replacement for chickens in that it improved the growth and wellbeing of the chickens. While this example is not a natural one, it does outline the use of algae as a food source for domestic chickens, an important consideration in the future of both algophagy and agriculture. Algophagy in humans While this feeding behaviour is not commonly associated with human evolution or adaptation, it has gained some momentum in recent application. New dieting and food trends have veered towards the inclusion of spirulina into supplements. Spirulina is a bacterium, but is mostly referred to as blue-green algae; it is used to supplement a variety of nutrients including essential proteins, vitamins, and minerals. In a past review of spirulina by Belay, Ota, Miyakawa, and Shimamatsu (1993), it was outlined that the algae could even be correlated with reduced risk of cholesterol problems, cancer, and heavy metal nephrotoxicity. Spirulina is relatively popular among dietary supplement enthusiasts and can be used in a variety of forms, from capsules to smoothies to baked goods. This outlines a contemporary example of algophagy in humans. See also Glossary of entomology terms Eating behaviour in Insects Animal behaviour Feeding behaviour References Eating behaviors
Algophagy
Biology
1,328
1,047,605
https://en.wikipedia.org/wiki/Recursive%20definition
In mathematics and computer science, a recursive definition, or inductive definition, is used to define the elements in a set in terms of other elements in the set (Aczel 1977:740ff). Some examples of recursively-definable objects include factorials, natural numbers, Fibonacci numbers, and the Cantor ternary set. A recursive definition of a function defines values of the function for some inputs in terms of the values of the same function for other (usually smaller) inputs. For example, the factorial function is defined by the rules This definition is valid for each natural number , because the recursion eventually reaches the base case of 0. The definition may also be thought of as giving a procedure for computing the value of the function , starting from and proceeding onwards with etc. The recursion theorem states that such a definition indeed defines a function that is unique. The proof uses mathematical induction. An inductive definition of a set describes the elements in a set in terms of other elements in the set. For example, one definition of the set of natural numbers is: 1 is in If an element n is in then is in is the smallest set satisfying (1) and (2). There are many sets that satisfy (1) and (2) – for example, the set satisfies the definition. However, condition (3) specifies the set of natural numbers by removing the sets with extraneous members. Properties of recursively defined functions and sets can often be proved by an induction principle that follows the recursive definition. For example, the definition of the natural numbers presented here directly implies the principle of mathematical induction for natural numbers: if a property holds of the natural number 0 (or 1), and the property holds of whenever it holds of , then the property holds of all natural numbers (Aczel 1977:742). Form of recursive definitions Most recursive definitions have two foundations: a base case (basis) and an inductive clause. The difference between a circular definition and a recursive definition is that a recursive definition must always have base cases, cases that satisfy the definition without being defined in terms of the definition itself, and that all other instances in the inductive clauses must be "smaller" in some sense (i.e., closer to those base cases that terminate the recursion) — a rule also known as "recur only with a simpler case". In contrast, a circular definition may have no base case, and even may define the value of a function in terms of that value itself — rather than on other values of the function. Such a situation would lead to an infinite regress. That recursive definitions are valid – meaning that a recursive definition identifies a unique function – is a theorem of set theory known as the recursion theorem, the proof of which is non-trivial. Where the domain of the function is the natural numbers, sufficient conditions for the definition to be valid are that the value of (i.e., base case) is given, and that for , an algorithm is given for determining in terms of , (i.e., inductive clause). More generally, recursive definitions of functions can be made whenever the domain is a well-ordered set, using the principle of transfinite recursion. The formal criteria for what constitutes a valid recursive definition are more complex for the general case. An outline of the general proof and the criteria can be found in James Munkres' Topology. 
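The factorial rules referred to above are the base case 0! = 1 together with the recursive clause n! = n · (n − 1)!. A short sketch of that definition as a program (Python chosen only for illustration):

def factorial(n):
    """Recursive definition: base case 0! = 1, inductive clause n! = n * (n - 1)!."""
    if n == 0:                       # base case terminates the recursion
        return 1
    return n * factorial(n - 1)      # defined in terms of a smaller input

print([factorial(n) for n in range(6)])   # [1, 1, 2, 6, 24, 120]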
However, a specific case (domain is restricted to the positive integers instead of any well-ordered set) of the general recursive definition will be given below. Principle of recursive definition Let be a set and let be an element of . If is a function which assigns to each function mapping a nonempty section of the positive integers into , an element of , then there exists a unique function such that Examples of recursive definitions Elementary functions Addition is defined recursively based on counting as Multiplication is defined recursively as Exponentiation is defined recursively as Binomial coefficients can be defined recursively as Prime numbers The set of prime numbers can be defined as the unique set of positive integers satisfying 2 is a prime number, any other positive integer is a prime number if and only if it is not divisible by any prime number smaller than itself. The primality of the integer 2 is the base case; checking the primality of any larger integer by this definition requires knowing the primality of every integer between 2 and , which is well defined by this definition. That last point can be proved by induction on , for which it is essential that the second clause says "if and only if"; if it had just said "if", the primality of, for instance, the number 4 would not be clear, and the further application of the second clause would be impossible. Non-negative even numbers The even numbers can be defined as consisting of 0 is in the set of non-negative evens (basis clause), For any element in the set , is in (inductive clause), Nothing is in unless it is obtained from the basis and inductive clauses (extremal clause). Well formed formula The notion of a well-formed formula (wff) in propositional logic is defined recursively as the smallest set satisfying the three rules: is a wff if is a propositional variable. is a wff if is a wff. is a wff if and are wffs and • is one of the logical connectives ∨, ∧, →, or ↔. The definition can be used to determine whether any particular string of symbols is a wff: is a wff, because the propositional variables and are wffs and is a logical connective. is a wff, because is a wff. is a wff, because and are wffs and is a logical connective. Recursive definitions as logic programs Logic programs can be understood as sets of recursive definitions. For example, the recursive definition of even number can be written as the logic program: even(0). even(s(s(X))) :- even(X). Here :- represents if, and s(X) represents the successor of X, namely X+1, as in Peano arithmetic. The logic programming language Prolog uses backward reasoning to solve goals and answer queries. For example, given the query ?- even(s(s(0))) it produces the answer true. Given the query ?- even(s(0)) it produces the answer false. The program can be used not only to check whether a query is true, but also to generate answers that are true. For example: ?- even(X). X = 0 X = s(s(0)) X = s(s(s(s(0)))) X = s(s(s(s(s(s(0)))))) ..... Logic programs significantly extend recursive definitions by including the use of negative conditions, implemented by negation as failure, as in the definition: even(0). even(s(X)) :- not(even(X)). See also Definition Logic programming Mathematical induction Recursive data types Recursion Recursion (computer science) Structural induction Notes References Definition Mathematical logic Theoretical computer science Recursion
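The non-negative even numbers defined above (basis clause 0, inductive clause n → n + 2) can be mirrored as a recursive membership test, much like the even/2 logic program; a minimal sketch, not part of the original examples:

def is_even(n):
    """Follows the recursive definition: 0 is even; n is even if n - 2 is even."""
    if n == 0:                 # basis clause
        return True
    if n < 0:                  # not produced by the basis and inductive clauses
        return False
    return is_even(n - 2)      # inductive clause applied to a smaller case

print([k for k in range(10) if is_even(k)])   # [0, 2, 4, 6, 8]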
Recursive definition
Mathematics
1,559
6,866,874
https://en.wikipedia.org/wiki/Powder%20of%20sympathy
Powder of sympathy was a form of early pseudoscientific navigation and alchemy, in the 17th century in Europe, whereby a remedy was applied to the weapon that had caused a wound with the aim of healing the injury it had made. Weapon salve was a preparation, again applied to the weapon, but based on material from the wounded patient rather than on any remedy for the wound. History The powder is said to have consisted of green vitriol, first dissolved in water and afterward recrystallized or calcined in the sun. The Duke of Buckingham testified that Kenelm Digby had healed his secretary of a gangrenous wound by simply soaking the bloody bandage in a solution of the powder (possibly due to the oligodynamic effect). Digby claimed to have got the secret remedy from a Carmelite friar in Florence, and attributed its potency to the fact that the sun's rays extracted the spirits of the blood and the vitriol, while, at the same time, the heat of the wound caused the healing principle thus produced to be attracted to it by means of a current of air — a sort of wireless therapy. The powder was also applied to solve the longitude problem in the suggestion of an anonymous pamphlet of 1687 entitled Curious Enquiries. The pamphlet theorised that a wounded dog could be put aboard a ship, with the knife used to injure the dog left in the trust of a timekeeper on shore, who would then dip said knife into the powder at a predetermined time and cause the creature to yelp, thus giving the captain of the ship an accurate knowledge of the time. See also Sympathetic magic References Alchemical substances Magic powders Obsolete medical theories Superstitions
Powder of sympathy
Chemistry
356
306,081
https://en.wikipedia.org/wiki/Biochemist
Biochemists are scientists who are trained in biochemistry. They study chemical processes and chemical transformations in living organisms. Biochemists study DNA, proteins and cell parts. The word "biochemist" is a portmanteau of "biological chemist." Biochemists also research how certain chemical reactions happen in cells and tissues and observe and record the effects of products in food additives and medicines. Biochemist researchers focus on playing and constructing research experiments, mainly for developing new products, updating existing products and analyzing said products. It is also the responsibility of a biochemist to present their research findings and create grant proposals to obtain funds for future research. Biochemists study aspects of the immune system, the expressions of genes, isolating, analyzing, and synthesizing different products, mutations that lead to cancers, and manage laboratory teams and monitor laboratory work. Biochemists also have to have the capabilities of designing and building laboratory equipment and devise new methods of producing correct results for products. The most common industry role is the development of biochemical products and processes. Identifying substances' chemical and physical properties in biological systems is of great importance, and can be carried out by doing various types of analysis. Biochemists must also prepare technical reports after collecting, analyzing and summarizing the information and trends found. In biochemistry, researchers often break down complicated biological systems into their component parts. They study the effects of foods, drugs, allergens and other substances on living tissues; they research molecular biology, the study of life at the molecular level and the study of genes and gene expression; and they study chemical reactions in metabolism, growth, reproduction, and heredity, and apply techniques drawn from biotechnology and genetic engineering to help them in their research. About 75% work in either basic or applied research; those in applied research take basic research and employ it for the benefit of medicine, agriculture, veterinary science, environmental science, and manufacturing. Each of these fields allows specialization; for example, clinical biochemists can work in hospital laboratories to understand and treat diseases, and industrial biochemists can be involved in analytical research work, such as checking the purity of food and beverages. Biochemists in the field of agriculture research the interactions between herbicides with plants. They examine the relationships of compounds, determining their ability to inhibit growth, and evaluate the toxicological effects surrounding life. Biochemists also prepare pharmaceutical compounds for commercial distribution. Modern biochemistry is considered a sub-discipline of the biological sciences, due to its increased reliance on, and training, in accord with modern molecular biology. Historically, even before the term biochemist was formally recognized, initial studies were performed by those trained in basic chemistry, but also by those trained as physicians. Training Some of the job skills and abilities that one needs to attain to be successful in this field of work include science, mathematics, reading comprehension, writing, and critical thinking. These skills are critical because of the nature of the experimental techniques of the occupation. One will also need to convey trends found in research in written and oral forms. 
A degree in biochemistry or a related science such as chemistry is the minimum requirement for any work in this field. This is sufficient for a position as a technical assistant in industry or in academic settings. A Ph.D. (or equivalent) is generally required to pursue or direct independent research. To advance further in commercial environments, one may need to acquire skills in management. Biochemists must pass a qualifying exam or a preliminary exam to continue their studies when receiving a Ph.D. in biochemistry. Biochemistry requires an understanding of organic and inorganic chemistry. All types of chemistry are required, with emphasis on biochemistry, organic chemistry and physical chemistry. Basic classes in biology, including microbiology, molecular biology, molecular genetics, cell biology, and genomics, are focused on. Some instruction in experimental techniques and quantification is also part of most curricula. In private industry, it is imperative to possess strong business management skills as well as communication skills. Biochemists must also be familiar with regulatory rules and management techniques. Because the work relies on the principles of the basic science of biochemistry, early contemporary physicians were informally qualified to perform research on their own in this field (and, today, in the related biomedical sciences). Employment Biochemists are typically employed in the life sciences, where they work in the pharmaceutical or biotechnology industry in a research role. They are also employed in academic institutes, where in addition to pursuing their research, they may also be involved with teaching undergraduates, training graduate students, and collaborating with post-doctoral fellows. The U.S. Bureau of Labor Statistics (BLS) estimates that jobs in the biochemist field, combined with those of biophysicists, would increase by 31% between 2004 and 2014 because of the demand in medical research and development of new drugs and products, and the preservation of the environment. Because of biochemists' background in both biology and chemistry, they may also be employed in the medical, industrial, governmental, and environmental fields. Slightly more than half of biological scientists are employed by Federal, State, and local governments. The field of medicine includes nutrition, genetics, biophysics, and pharmacology; industry includes beverage and food technology, toxicology, and vaccine production; while the governmental and environmental fields include forensic science, wildlife management, marine biology, and viticulture. The average income of a biochemist was $82,150 in 2017. Reported salaries in 2017 ranged from about $44,640 to $153,810. The Federal Government in 2005 reported the average salaries in different fields associated with biochemistry and being a biochemist. General biological scientists in nonsupervisory, supervisory, and managerial positions earned an average salary of $69,908; microbiologists, $80,798; ecologists, $72,021; physiologists, $93,208; geneticists, $85,170; zoologists, $101,601; and botanists, $62,207. See also List of biochemists References External links Biochemist Career Profile Science occupations
Biochemist
Chemistry,Biology
1,273
644,485
https://en.wikipedia.org/wiki/BALANCE%20Act
The Benefit Authors without Limiting Advancement or Net Consumer Expectations (BALANCE) Act of 2003 was a bill that would have amended Title 17 of the United States Code, "to safeguard the rights and expectations of consumers who lawfully obtain digital entertainment." The bill was proposed in the 108th Congress as H.R. 1066 by Congresswoman Zoe Lofgren (D-CA). In the 109th Congress, the bill was reintroduced as H.R. 4536. It was not introduced in the 110th Congress. External links (108th Congress) (109th Congress) United States proposed federal intellectual property legislation United States federal copyright legislation Proposed legislation of the 108th United States Congress Proposed legislation of the 109th United States Congress Digital media
BALANCE Act
Technology
155
77,727,764
https://en.wikipedia.org/wiki/4-HO-TMT
4-HO-TMT, or 4-OH-TMT, also known as 4-hydroxy-N,N,N-trimethyltryptammonium or as dephosphorylated aeruginascin, is a substituted tryptamine derivative and the active form of aeruginascin (4-PO-TMT), analogously to how psilocin (4-HO-DMT) is the active form of psilocybin (4-PO-DMT). 4-HO-TMT is closely related to bufotenidine, the N-trimethyl analogue of serotonin. Like psilocin, 4-HO-TMT shows affinity for the serotonin 5-HT1A, 5-HT2A, and 5-HT2B receptors. However, its affinities for these receptors are lower than those of psilocin (by 8-, 6-, and 26-fold, respectively). Additionally, in another study, the potency of 4-HO-TMT in activating the serotonin 5-HT2A receptor was 324-fold lower than that of psilocin (6,800 and 21 nM, respectively). Similarly to psilocin, 4-HO-TMT does not bind to the serotonin 5-HT3 receptor. This was in contrast to predictions, as the related compound bufotenidine is a strong and selective serotonin 5-HT3 receptor agonist. 4-HO-TMT is a quaternary trimethylammonium compound, and as a result, is less likely to be able to cross the blood–brain barrier (BBB) and enter the central nervous system than other tryptamines. Accordingly, 4-HO-TMT showed no ability to cross an artificial BBB-like membrane in a study. In rodents, 4-HO-TMT showed no head-twitch response (a behavioral proxy of psychedelic effects), hypolocomotion, or hypothermia, in contrast to psilocin and norpsilocin, but similarly to aeruginascin. A synthetic prodrug of 4-HO-TMT, 4-AcO-TMT, has been developed. It is analogous to psilacetin (4-AcO-DMT), a prodrug of psilocin. References External links 4-OH-TMT+ - isomer design Dimethylamino compounds Human drug metabolites Hydroxyarenes Mycotoxins Non-hallucinogenic 5-HT2A receptor agonists Peripherally selective drugs Serotonin receptor agonists Tryptamine alkaloids Quaternary ammonium compounds
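As a quick arithmetic check of the fold-difference quoted above (a back-of-the-envelope calculation added here for clarity, not an additional result from the cited studies): 6,800 nM / 21 nM ≈ 324, which is consistent with the stated 324-fold lower potency.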
4-HO-TMT
Chemistry
579
21,109,488
https://en.wikipedia.org/wiki/Minimum%20ignition%20energy
The minimum ignition energy (MIE) is a safety characteristic in explosion protection and prevention which determines the ignition capability of fuel-air mixtures, where the fuel may be combustible vapor, gas or dust. It is defined as the minimum electrical energy stored in a capacitor, which, when discharged, is sufficient to ignite the most ignitable mixture of fuel and air under specified test conditions. The MIE is one of the assessment criteria for the effectiveness of ignition, e.g. the discharge of electrostatic energy, mechanical ignition sources or electromagnetic radiation. It is an important parameter for the design of the protective measure of "avoidance of effective ignition sources". References Combustion Safety
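Because the MIE is defined in terms of the energy stored in a discharging capacitor, the comparison can be illustrated with the standard relation E = ½·C·V². The short Python sketch below is an illustration added here; the function name and the example figures (100 pF, 10 kV, and the roughly 0.25 mJ reference MIE for common hydrocarbon vapour–air mixtures) are assumptions for demonstration, not values taken from this article.

# Energy stored in a charged capacitor, E = 0.5 * C * V**2, in joules.
def capacitor_energy_joules(capacitance_farads, voltage_volts):
    return 0.5 * capacitance_farads * voltage_volts ** 2

# Illustrative example: a 100 pF capacitor charged to 10 kV stores 5 mJ,
# well above the ~0.25 mJ often cited as the MIE of common hydrocarbon
# vapour-air mixtures (an assumed reference value, not from the article).
energy_joules = capacitor_energy_joules(100e-12, 10e3)
print(f"{energy_joules * 1e3:.2f} mJ")  # prints "5.00 mJ"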
Minimum ignition energy
Chemistry
142
12,535,065
https://en.wikipedia.org/wiki/Wildlife%20of%20Mali
The wildlife of Mali, composed of its flora and fauna, varies widely from the Saharan desert zone (covering about 33% of the country) to the Sahelian east–west zone. Mali is a landlocked francophone country in West Africa; large swathes of Mali remain unpopulated, but the country has three sub-equal vegetation zones: the Sahara Desert in the north, the Niger River Basin at its center and the Senegal River in the south. The vegetation zones are the Saharan, the Sahel, and the Sudan–Guinea Savanna. Mali has many protected areas which include two national parks, one biosphere reserve, six faunal reserves, two partial faunal reserves, two sanctuaries (one is a UNESCO designated World Heritage Site), one chimp sanctuary, six game reserves, and three Ramsar Sites. Protected areas in Mali, under legal acts and regulations (Law No. 86-43/AN-RM for trade and conservation of parks and reserves and Law No. 86-42/AN-RM for the forest code), cover about , which is 4.7% of the area of the country. Adding the buffer zone and the peripheral zone of the Biosphere of Baoulé, this becomes 6.2% of the total area of the country. The rich biodiversity of the country is reflected in its more than 1,700 plant species and about 1,000 animal species. Geography Niger River valley The Niger River valley, which dominates the topography of Mali, is drained by the Niger River and its tributaries. Along its course, the central southern region is the narrowest and is known as the Inner Delta or the Inundation Zone of the Niger, formed of of flood-plains, along a river length of ; these form its wetlands of great ornithological interest. Saharan zone Habitat-wise, the Saharan zone occupies a third of the country, and is made up of the Sahara Desert and the Sahel (which is a zone of transition between the two). There is hardly any vegetation as the habitat comprises "unvegetated regs, hamadas, dunes and wadis" and also a few oases. On the south-eastern part of this zone is the Adrar des Iforhas Massif rising to a height of , which is part of the Ahaggar Massif in southern Algeria. Average precipitation in the zone is reported to be less than . Sahelian zone The Sahelian zone, widest in an east–west direction, has the Dogon plateau ( elevation) and the Hombori mountains (, highest location in Mali) with the Inundation Zone of the Niger River located to its west. Average annual rainfall varies from about in the south to under in the north; the vegetation also changes accordingly from acacia-wooded grassland and deciduous bushland to thin coverage of annual grasslands (of Cenchrus biflorus). Sudan–Guinea zone The Sudan–Guinea zone is part of the south-western region of Mali. The Senegal River and the Bafing and Baoulé Rivers rise here, and the basin is known for the lowest-lying land in the country, an area which is below the contour. The zone also includes the Manding plateau (near Bamako, the capital of Mali) which is part of the Fouta Djallon Mountains ( elevation) of Guinea. This forms the upper region of the catchment area that lies between the Senegal and Niger River systems. The geological formation reported is sandstone. Vegetation in this zone is mainly of Isoberlinia sp. Protected areas There is very little wildlife, and there are only a few national parks, in Mali. The largest national park and reserve is the Boucle du Baoulé National Park (), located to the northeast of Bamako. There is hardly any wildlife left in this park due to intense poaching of elephants, giraffes, buffalo, chimpanzees and lions. Monkeys are the only animals seen now.
The Reserve de Ansongo Menaka is in the southeast, near the border with Niger. The Reserve de Douentza is the most interesting in terms of wildlife. Bafing National Park (). is in the south west bordering with Guinea which is a dry area between Mopti and Gao; it is home for desert elephants which move with change of seasons. The other notable parks are the Wongo National Park and the Kouroufing National Park. The Bafing Biosphere Reserve covers an area of and the Bafing Chimpanzee Sanctuary is exclusive to conserve chimpanzees. Flora The dominant vegetation in the inland delta of the Niger consists of hygrophilous grassland species of Eragrostis atrovirens, Panicum anabaptistum, Panicum fluviicola, Vetiveria nigritana, Echinochloa stagnina, wild rice Oryza barthii, Andropogon gayanus, Cynodon dactylon and Hyparrhenia dissolute. The many tree species reported are in patches. Dominant species of grasses in the transition zone between the higher levels of flood plains and its flooded zones are Acacia nilotica with Mimosa and Ziziphus spp. and Guiera senegalensis, Borassus and Hyphaene. Cram cram grasses are scattered in Mali. Fauna Mammals There are 146 species of mammals in Mali of which 2 are critically endangered (CR), 3 are endangered (EN), 10 are vulnerable (VU), and 3 are near-threatened (NT). The threatened species are the following. Addax nasomaculatus (addax) (CR) Gazella dama (dama gazelle) (CR) Pan troglodytes (common chimpanzee) (EN) Gazella leptoceros (rhim gazelle) (EN) Lycaon pictus (African wild dog) (EN) Ammotragus lervia (Barbary sheep) (VU) Acinonyx jubatus (cheetah) (VU) Loxodonta africana (African bush elephant) (VU) Trichechus senegalensis (African manatee) (VU) Profelis aurata (African golden cat) (VU) Panthera leo (lion) (VU) Gazella dorcas (dorcas gazelle) (VU) Hippopotamus amphibius (hippopotamus) (VU) Gazella rufifrons (red-fronted gazelle) (VU) Hipposideros jonesi (Jones's roundleaf bat) (NT) Felis margarita (sand cat) (NT) Crocuta crocuta (spotted hyena) (NT) Chimpanzees are found in the southernmost forests and monkeys are found in the Parc national de la Boucle du Baoule. Elephants in the Gourma region, known as the Sahelian herds of 360 to 630 numbers, migrate over (round trip) during the dry season between Burkina Faso and Mali to lake areas and return to Mali during the rainy season. Mali lions are found only around the Faleme River in the far west of cerde of Kenieba. Papio papio (Guinea baboon) and Massoutiera mzabi (Mzab gundi) are also reported. The African manatee (also known as the sea cow and West African manatee) is found all along the Niger River was hunted for meat in the past but its meat is now not marketed, which may be due its decreasing numbers or due to the legal protection given for its conservation. Birds Seventeen Important Bird Areas (IBAs) have been designated in Mali, encompassing an area of (about 2.3% of the surface area of the country). Ten include wetlands, nine are in the Inner Delta of the Niger river (key bird area), four (under the A3 criterion) include the Sudan–Guinea ecoregion of the Savanna biome, four are in the Sahel biome, and two in the Sahara–Sindian biome. The Kulicoro firefinch, also known as the Mali firefinch (Lagonostica virata), is the only endemic bird of Mali, found in rocky and grassy areas near Mopti and Bamako. In these IBAs, 622 bird species have been recorded; there have been 335 resident species noted, with 202 of these breeding in Mali. 
Additionally, 137 species of the noted 243 migratory species are of Palearctic origin. There are 12 species which are of global conservation concern, with seven being vagrants (Palearctic migrants). These are Marmaronetta angustirostris (VU), Aythya nyroca (VU), Circus macrourus (NT), Falco naumanni (VU), Neotis nuba (Nubian bustard) (NT), Gallinago media (NT), Glareola nordmanni (NT), Acrocephalus paludicola (VU), Lagonosticta virata (NT), Prinia fluviatilis (DD), Ceratogymna elata (NT). The Inner Delta is also rich in waterfowl and heron species, particularly the cosmopolitan Bubulcus ibis (LC) and Casmerodius albus (LC), as well as the world’s largest heron, Ardea goliath (Goliath heron, LC). The Goliath heron, which has small populations in South Asia as well, can stand upwards of 152 cm/5 ft tall, with a 230 cm/7 ft wingspan. Poicephalus senegalus (Senegal parrot, LC), Serinus mozambicus (yellow-fronted canary, LC) and Haliaeetus vocifer (African fish eagle, LC) are some of the other, more common, avian species reported in Mali. Reptiles and amphibians A few reptile species reported are Cerastes cerastes (desert horned viper) and Geochelone sulcata (African spurred tortoise). Other species of snakes or cobra are: Bitis arietans (puff adder), Cerastes cerastes (horned viper), Dispholidus typus (boomslang), Echis jogeri (Joger's carpet viper), Echis leucogaster (white-bellied carpet viper), Echis ocellatus (West African carpet viper), Naja katiensis (West African brown spitting cobra), Naja melanoleuca (forest cobra), Naja nigricollis (black-necked spitting cobra), and Naja senegalensis (Senegalese cobra). The Mali Uromastyx, Uromastyx maliensis, is a widely known species of lizard in Mali. Skink genera in Mali include Chalcides and Trachylepis. Other lizard species known in the Dogon Country of Mali are Agama sankaranica, Uromastyx geyri, Sphenops delislei, Trachylepis perrotetii, Trachylepis quinquetaeniata, Chalcides ocellatus, Chamaeleo africanus, Ptyodactylus ragazzii, Tarentola ephippiata, Tarentola annularis, Tropiocolotes tripolitanus, Latastia longicaudata, Varanus griseus, and Varanus niloticus. Turtles include Pelomedusa subrufa, and crocodiles include the Nile crocodile and Mecistops cataphractus (African slender-snouted crocodile). The Dogon are also familiar with the colubrid species Psammophis sibilans, Psammophis elegans, and Bamanophis dorri, and the cobra species Naja nigricollis (black spitting cobra) and Naja haje (Egyptian cobra). Other snake species of the Dogon Country in Mali include Atractaspis watsoni, Eryx muelleri, Python regius, Python sebae, Telescopus obtusus, and Cerastes vipera. Most fatal snakebites in Mali are due to encounters with Bitis spp. (particularly Bitis arietans) and Echis spp. (particularly Echis leucogaster and others). Untreated bites from Bitis species can result in hemorrhage (bleeding) within 5 hours, and Echis bites can cause hemorrhage in 1-2 days. Among the amphibians, Tomopterna milletihorsini (Mali screeching frog) and Bufo chudeaui (Bata marsh toad) are notable in Mali. In Dogon country, amphibian species that are well known by the local Dogon people include the edible bullfrog Hoplobatrachus occipitalis, Amnirana galamensis (Galam white-lipped frog), Phrynobatrachus accraensis, Hyperolius nitidulus (or Hyperolius viridiflavus), Amietophrynus xeros, Amietophrynus regularis, and Sclerophrys pentoni. Amietophrynus channingi and Hildebrandtia ornata are also reported from Dogon Country. 
Fish There are approximately 200 fish species in Mali. Fishing is a common practice in the Niger and other rivers in Mali, and the most popular variety of fish is capitaine. Many species of fish are found in the Niger River and its tributaries. Along the northern bend of the river in the eastern half of Mali, reported fish species include Alestes baremoze, Alestes dentex sethente, Brycinus macrolepidotus, Brycinus nurse, Hydrocynus forskahlii, Micralestes elongatus, Marcusenius senegalensis, Bagrus bajad, Synodontis schall, Schilbe intermedius, Clarias anguillaris/Clarias gariepinus, Malapterurus electricus, Mastacembelus nigromarginatus, Lates, Oreochromis niloticus, Citharinus latus, Labeo senegalensis, Mormyrus rume, Protopterus annectens, Tetraodon lineatus, Brycinus leuciscus, Sarotherodon galilaeus, Polypterus senegalus, Hydrocynus brevis, Heterotis niloticus, Hyperopisus bebe, Gymnarchus niloticus, Hepsetus odoe, Distichodus brevipinnis, Citharidium ansorgei, Citharinops distichodoides, Citharinus citharus, Labeo coubie, Bagrus docmac, Clarotes laticeps, Chrysichthys auratus, Chrysichthys nigrodigitatus, Auchenoglanis biscutatus, Auchenoglanis occidentalis, Schilbe mystus, Heterobranchus bidorsalis, Synodontis batensoda, Synodontis membranacea, Synodontis resupinata, Synodontis clarias, Synodontis budgetti, Synodontis vermiculata, Synodontis sorex, Synodontis gobroni, Synodontis filamentosa, Synodontis melanoptera, Synodontis nigrita, Arius gigas, Hemichromis, and Tilapia zillii. Many villages along the Niger River export fish and fish products to neighbouring regions. For example, in Mopti Region, fishing villages near Konna regularly export fish to market towns in the dry inland Dogon country, such as Douentza. In Douentza, fish species that are commonly found in the market are catfish (especially Clarias and Bagrus spp.), carp (Sarotherodon and Oreochromis spp.), capitaine (Lates spp.), dogfish (Hydrocynus spp.), Mormyrus, Marcusenius, and Labeo fish species. Although most Dogon villages do not have direct local access to fresh fish, some villages are located near pools at the bottoms of escarpments that harbour fish species belonging to genera such as Clarias, Marcusenius, and Brycinus. Invertebrates Termites are a unique feature of Mali found in many uncleared locations. Their habitat is notably along with specific trees and plants, and alates or flying ants are the species housed in the ant hills. A documentary on these termite hills has been made under the title "Termites: Castles of Clay", which is about the "Soul of the White ant". Other insects reported are Dracunculus medinensis (Guinea worm) and Necator americanus (hookworm). Scorpions are noted; the female Anopheles mosquito carries malaria. Threats The threats to the wildlife of Mali are on account of deforestation (in 1997, the economic damage amounted to an estimated 5.35 per cent of GDP,) intensive hunting pressure, proliferation of livestock farming, extension of agricultural land and also due to desertification (Sahara desert extending, erosion and drought due to climate change). In the past, droughts in the 1970s and 1980s (last great drought was in 1984) have also contributed to the decline of wildlife resources of the country. Increased anthropogenic and livestock pressures, due to people moving to the southern part of the country and settling on river banks, has also compounded the threats. Particular mention of effect on the fauna in the wild is of antelope species which are threatened. 
Other significant contributors to biodiversity degradation relate to pollution, mining, crop cultivation and also indiscriminate traditional slash-and-burn farming. Another past shortcoming was the concentration of protection measures only in the southwestern savannah region. Conservation The conservation of the protected areas is the responsibility of the National Parks Department of Mali. However, conservation and preservation of forest lands (including gazetted forest) rests with the Forest Service, and both these agencies fall under the purview of the Department of Water and Forests of the Ministry of Natural Resources and Livestock. In the past, the traditional practice of protecting the forests and their flora and fauna rested with the village elders. However, with Islam making inroads into the country, traditional rules have taken a back seat, and this has resulted in over-exploitation of the forest resources, which has been further aggravated by increased anthropogenic pressures. A major conservation effort has been launched with funding provided by the Global Environmental Facility (under the aegis of the UNDP), to be completed by 2014, with the objective of substantially increasing the protected area estate and reinforcing management instruments to achieve effective protection, particularly in the southwest region, for the endangered Derby eland and the western chimpanzee. Gallery References Biota of Mali Mali
Wildlife of Mali
Biology
3,960
6,055,645
https://en.wikipedia.org/wiki/Phosphoribosyl%20pyrophosphate
Phosphoribosyl pyrophosphate (PRPP) is a pentose phosphate. It is a biochemical intermediate in the formation of purine nucleotides via inosine-5-monophosphate, as well as in pyrimidine nucleotide formation. Hence it is a building block for DNA and RNA. The vitamins thiamine and cobalamin, and the amino acid tryptophan, also contain fragments derived from PRPP. It is formed from ribose 5-phosphate (R5P) by the enzyme ribose-phosphate diphosphokinase. It plays a role in transferring phospho-ribose groups in several reactions, some of which are salvage pathways: In de novo generation of purines, the enzyme amidophosphoribosyltransferase acts upon PRPP to create phosphoribosylamine. The histidine biosynthesis pathway involves the reaction between PRPP and ATP, which activates the latter to ring cleavage. Carbon atoms from ribose in PRPP form the linear chain and part of the imidazole ring in histidine. The same is true for the biosynthesis of tryptophan, with the first step being N-alkylation of anthranilic acid catalysed by the enzyme anthranilate phosphoribosyltransferase. Increased PRPP Increased levels of PRPP lead to the overproduction and accumulation of uric acid, causing hyperuricemia and hyperuricosuria. It is one of the causes of gout. Increased levels of PRPP are present in Lesch–Nyhan Syndrome. Decreased levels of hypoxanthine guanine phosphoribosyl transferase (HGPRT) cause this accumulation, as PRPP is a substrate used by HGPRT during purine salvage. See also 5-Aminoimidazole ribotide Purine biosynthesis Pyrimidine biosynthesis References Organophosphates Monosaccharides
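The formation reaction referred to above can be summarised as follows (a standard textbook formulation supplied here for clarity, since the equation itself is not reproduced in the text): ribose 5-phosphate + ATP → 5-phosphoribosyl 1-pyrophosphate (PRPP) + AMP, in which the pyrophosphoryl group of ATP is transferred to the C1 hydroxyl of ribose 5-phosphate.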
Phosphoribosyl pyrophosphate
Chemistry
435
3,733,920
https://en.wikipedia.org/wiki/Program%20comprehension
Program comprehension (also program understanding or [source] code comprehension) is a domain of computer science concerned with the ways software engineers maintain existing source code. The cognitive and other processes involved are identified and studied. The results are used to develop tools and training. Software maintenance tasks have five categories: adaptive maintenance, corrective maintenance, perfective maintenance, code reuse, and code leverage. Theories of program comprehension Titles of works on program comprehension include Using a behavioral theory of program comprehension in software engineering The concept assignment problem in program understanding, and Program Comprehension During Software Maintenance and Evolution. Computer scientists pioneering program comprehension include Ruven Brooks, Ted J. Biggerstaff, and Anneliese von Mayrhauser. See also Program analysis (computer science) Program slicing References Computer programming
Program comprehension
Technology,Engineering
157
456,781
https://en.wikipedia.org/wiki/Bruck%E2%80%93Ryser%E2%80%93Chowla%20theorem
The Bruck–Ryser–Chowla theorem is a result on the combinatorics of block designs that implies nonexistence of certain kinds of design. It states that if a (v, b, r, k, λ)-design exists with v = b (a symmetric block design), then: if v is even, then k − λ is a square; if v is odd, then the following Diophantine equation has a nontrivial solution: x^2 − (k − λ)y^2 − (−1)^((v−1)/2) λz^2 = 0. The theorem was proved in the case of projective planes by Bruck and Ryser (1949). It was extended to symmetric designs by Chowla and Ryser (1950). Projective planes In the special case of a symmetric design with λ = 1, that is, a projective plane, the theorem (which in this case is referred to as the Bruck–Ryser theorem) can be stated as follows: If a finite projective plane of order q exists and q is congruent to 1 or 2 (mod 4), then q must be the sum of two squares. Note that for a projective plane, the design parameters are v = b = q^2 + q + 1, r = k = q + 1, λ = 1. Thus, v is always odd in this case. The theorem, for example, rules out the existence of projective planes of orders 6 and 14 but allows the existence of planes of orders 10 and 12. Since a projective plane of order 10 has been shown not to exist using a combination of coding theory and large-scale computer search, the condition of the theorem is evidently not sufficient for the existence of a design. However, no stronger general non-existence criterion is known. Connection with incidence matrices The existence of a symmetric (v, b, r, k, λ)-design is equivalent to the existence of a v × v incidence matrix R with elements 0 and 1 satisfying R R^T = (k − λ)I + λJ, where I is the v × v identity matrix and J is the v × v all-1 matrix. In essence, the Bruck–Ryser–Chowla theorem is a statement of the necessary conditions for the existence of a rational v × v matrix R satisfying this equation. In fact, the conditions stated in the Bruck–Ryser–Chowla theorem are not merely necessary, but also sufficient for the existence of such a rational matrix R. They can be derived from the Hasse–Minkowski theorem on the rational equivalence of quadratic forms. References van Lint, J.H., and R.M. Wilson (1992), A Course in Combinatorics. Cambridge, Eng.: Cambridge University Press. External links Theorems in combinatorics Theorems in projective geometry Theorems in statistics Design of experiments
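As a minimal illustration of the projective-plane case (a sketch added for clarity; the function names and the chosen range of orders are choices made here, not part of the theorem's literature), the following Python snippet applies the Bruck–Ryser test to small orders:

def is_sum_of_two_squares(n):
    # True if n can be written as a^2 + b^2 with non-negative integers a, b.
    a = 0
    while a * a <= n:
        b = n - a * a
        r = int(b ** 0.5)
        if r * r == b:
            return True
        a += 1
    return False

def excluded_by_bruck_ryser(q):
    # A projective plane of order q is ruled out when q is congruent to
    # 1 or 2 (mod 4) and q is not a sum of two squares.
    return q % 4 in (1, 2) and not is_sum_of_two_squares(q)

# Orders 6 and 14 are excluded; 10 and 12 pass the test (although a plane
# of order 10 is known not to exist for other reasons).
print([q for q in range(2, 15) if excluded_by_bruck_ryser(q)])  # [6, 14]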
Bruck–Ryser–Chowla theorem
Mathematics
564
55,332,942
https://en.wikipedia.org/wiki/Ceriporia%20purpurea
Ceriporia purpurea is a species of crust fungus in the family Irpicaceae. It was first described by Swedish mycologist Elias Magnus Fries in 1821 as Polyporus purpureus. Marinus Anton Donk gave the fungus its current name when he transferred it to the genus Ceriporia in 1971. A 2016 study identified six similar Ceriporia species, referred to as the Ceriporia purpurea group: Ceriporia bresadolae, the European species C. torpida and C. triumphalis, and the North American species C. manzanitae and C. occidentalis. Ceriporia purpurea is widely distributed in the temperate zone of Eurasia, where it grows exclusively on the decomposing wood of deciduous trees, and also in the American North-East. References Fungi described in 1821 Fungi of Asia Fungi of Europe Fungi of North America Irpicaceae Taxa named by Elias Magnus Fries Fungus species
Ceriporia purpurea
Biology
202
34,106,536
https://en.wikipedia.org/wiki/Karlino%20oil%20eruption
The Karlino oil eruption was an oil well blowout that took place on 9 December 1980, near Karlino, a town located in Pomerania in northern Poland, near the Baltic Sea coast. The eruption and the fire that followed it put an end to the hope of Poland becoming a "second Kuwait". It took more than a month for Polish, Soviet and Hungarian firefighters to completely extinguish the fire. The eruption was the result of an extensive search for underground oil deposits that took place in the area in 1980. Background In 1980, the town of Karlino became a symbol of Polish hopes for a "new Kuwait" because of the discovery of oil deposits surrounding the town. At that time Poland was in a severe economic crisis, foreign debt was mounting, and both the Communist authorities and the nation hoped to be able to sell oil from Karlino to the West and pay off the debt with the proceeds. The oil deposits took on a symbolic role as a further sign of a better future, after the election of John Paul II and the creation of Solidarity. Among disappointed officials who visited the site after the fire were First Secretary of the Communist Party Stanisław Kania and Solidarity chairman Lech Wałęsa. The eruption At 5:30 p.m. on 9 December 1980, in a 2800-m deep drill hole designated Daszewo - 1, located in the village of Krzywopłoty (4.5 kilometers from Karlino), a giant eruption of oil and natural gas took place. Soon afterwards, a fire broke out, with flames reaching up to 130 m. The temperature of the burning mix of gas and oil reached 900 °C, and in spite of sub-zero temperatures, leaves appeared on frozen trees nearby. During the night of 9/10 December, four workers were burned and had to be taken to hospital in Białogard. Eighteen teams of firefighters arrived at the site, and nearby households were evacuated. The Szczecin-bound lanes of the main East - West National Route Number 6 (Droga Krajowa nr 6) adjacent to the fire were closed. Workers' equipment and shacks were destroyed. Oil pressure reached 560 atmospheres, and the fire was visible from several km away. At that time, it was one of the biggest oil eruptions in the history of Europe. Extinguishing operation On 11 December, specialized units of the Polish Army, firefighters, police, a mining search and rescue unit from Kraków, and specialists from Kraków's AGH University of Science and Technology began working on protecting the local area from the flames. News of the fire was front-page news throughout Poland, with headlines such as "Days hot like oil", "Karlino without a Christmas break", "Karlino's torch", and "Artillery and sappers in Karlino". Five days later, on 16 December, firefighters from Hungary and the Soviet Union joined the Poles, and command of the operation was taken by engineer Adam Kilar and Soviet expert Leon Kalyna from Poltava, who himself was of Polish ancestry. By 18 December, the area adjacent to the drill hole was cleared of burned and destroyed steel parts of the oil well and a road had been constructed to the locations of two new wells. The operation to extinguish the fire was very elaborate and preparation lasted almost a month, until January 2, 1981. A special pipeline was constructed, and the population of Poland was vividly interested in these events. Every day, hundreds of letters came to Karlino with suggestions on how to put the fire out. After long consideration, the specialists came up with the following solution. First, remnants of the destroyed oil well and blowout preventer were removed. 
The well was removed by a specially constructed tractor, equipped with a 30-m hook arm. The preventer, however, was destroyed by a cannon from a distance of 25 m. The task was to aim at a small crack between the flanges. First an 85 mm cannon was used, then a 122 mm howitzer, and finally on 28 December, a 152 mm cannon-howitzer managed to destroy the preventer. After all the preparations, the fire was extinguished on 8 January 1981, almost a month after the eruption. The local Karlino Chronicle wrote: "At 10:42 a.m. a stream of water from 23 cannons was aimed at the burning geyser of oil. After 16 minutes, the fire went out, and specialists removed the damaged flanges". On 10 January 1981, at 3:38 p.m., a new preventer was installed, and the stream of oil was stopped. That was the end of the operation, after 32 days. Altogether, some 1000 people were involved in the mission. The new preventer was constructed at May 1 Works in Ploiești, Romania. It weighed 11 tons, and was lifted by a 100-ton crane with a 60-meter arm. The entire operation cost about 300 million zlotys, and during the fire, some 20,000–30,000 tons of oil and some 30–50 million cubic meters of natural gas burned. There was one injury in the extinguishing operation: a soldier was slightly burned. Aftermath The investigation established that the primary cause of the eruption was looseness in the blowout preventer. Most likely, gaskets were damaged and the preventer was not locked in time. The fire was caused by gasoline engines driving the pumps. Another reason of the eruption was an inaccurate geologic assessment: specialists had expected to encounter oil at a depth of 2,952 m underground, but it turned out to be at 2,792 meters, 160 m higher than expected. On 16 January 1981, at 3:55 p.m., a cargo train with seventeen tank cars filled with oil left Karlino rail station, heading towards a refinery in Trzebinia. Hopes were high, but it turned out that the oil deposits were sparse, and after a few years, the well ran dry. Altogether, more than 850 tons of oil were extracted and transported to Trzebinia—much less than expected. As experts later stated, if the fire had lasted for two more weeks, all the oil would have burned out. Also, the Karlino oil was assessed as of "average quality", with 0.58% sulphur content. Since October 2002, the Daszewo - 1 drill hole has been used for natural gas production. The drill hole currently serves as an underground natural gas storage facility, with a capacity of 30 million cubic metres. In December 2010, the mayor of Karlino announced plans for the opening of an interactive museum dedicated to the 1980 eruption. See also Oil well fire Kuwaiti oil fires Romania oil fires References External links Photos of the fire at Karlino 1980 disasters in Poland Explosions in 1980 1980 industrial disasters Polish People's Republic Gas explosions in Poland Engineering failures December 1980 events in Europe Białogard County History of West Pomeranian Voivodeship 1980 fires 1980s fires in Europe
Karlino oil eruption
Technology,Engineering
1,432
46,308,607
https://en.wikipedia.org/wiki/Design%20fiction
Design fiction is a design practice aiming at exploring and criticising possible futures by creating speculative, and often provocative, scenarios narrated through designed artifacts. It is a way to facilitate and foster debates, as explained by futurist Scott Smith: "... design fiction as a communication and social object creates interactions and dialogues around futures that were missing before. It helps make it real enough for people that you can have a meaningful conversation with". By inspiring new imaginaries about the future, Design Fiction moves forward innovation perspectives, as conveyed by author Bruce Sterling's own definition: "Design Fiction is the deliberate use of diegetic prototypes to suspend disbelief about change". Reflecting the diversity of media used to create design fictions and the breadth of concepts that are prototyped in the associated fictional worlds, researchers Joseph Lindley and Paul Coulton propose that design fiction be defined as: "(1) something that creates a story world, (2) has something being prototyped within that story world, (3) does so in order to create a discursive space", where 'something' may mean 'anything'. Examples of the media used to create design fiction storyworlds include physical prototypes, prototypes of user manuals, digital applications, videos, short stories, comics, fictional crowdfunding videos, fictional documentaries, catalogues or newspapers and pastiches of academic papers and abstracts. History Design fiction is part of the speculative design discipline, itself a relative of critical design. Although the term design fiction was coined by Bruce Sterling in 2005, where he says it is similar to science fiction but "makes more sense on the page", it was Julian Bleecker's 2009 essay that firmly established the idea. Bleecker brought together Sterling's original idea and combined it with David A. Kirby's notion of the diegetic prototype and a paper written by influential researchers Paul Dourish and Genevieve Bell which argued reading science fiction alongside Ubiquitous Computing research would shed further light on both areas. Since Bleecker's essay was published design fiction has become increasingly popular as demonstrated by the adoption of design fiction in a wide variety of academic research. Characteristics Although design fiction shows a lot of overlaps with other Discursive Design practices such as critical design, Adversarial Design, Interrogative Design, Design for Debate, reflective design, and contestational design, it is possible to draw some of its special features. Design fiction draws its inspiration from weak signals of our everyday lives, such as innovations in new technologies or new cultural trends, and use extrapolation to build disruptive visions of society. Through challenging the status quo, this practice aims at making ourselves question our current uses, norms, ethics or values, whether leading innovation, or consuming it at the other end of the line. Design fictions tend to stand aside from manichaean utopian/dystopian depictions, and rather dig into more ambiguous grey areas of the explored subjects. As explained by Fabien Girardin, co-founder of The Near Future Laboratory: "Design Fiction doesn't so much 'predict' the future. It is a way to consider the future differently; a way to tell stories about alternatives and unexpected trajectories; a way to discuss a different kind of future than the typical bifurcation into utopian and dystopian". 
Design fictions focus on the everyday, exploring and questioning interactions between people or HCI, habits, social behaviors, casual failures or rituals. Fabien Girardin on this point: "To contrast with other similar design approaches, we think Design Fiction is a bit different from critical design, which is a bit more abstract and theoretical compared to our own interest in design happening outside of galleries or museums. Design Fiction is about exploring a future mundane". Another approach to design fiction is through live action role-playing games (larps). Malthe Stavning Erslev argues that the research larp Civilisation's Waiting Room, which explores a future society run by an AI, is a form of design fiction using what he calls a mimetic method that is "making the technology appear" in "deeply embodied, ephemeral encounters of enactment". In recent pop culture, design fiction might be bonded to the Black Mirror anticipation series, each episode portraying a disturbing alternative present or near future where characters have to deal with the unexpected consequences of emerging technologies. Methodology and process Design fiction is an open and evolving practice, demonstrating a variety of approaches from designers and studios. However it is possible to draw some common lines: "What if?" questions Design fictions often rely on a question: "What if?", creating a provocative framework for speculation from the start. This questioning format stimulates the exploration of tensions and sticking points, leading to the construction of the new fictional universe, in an alternative present or near future, which includes a new set of morals and values: "The New Normal". Diegetic prototypes The speculative scenario and the fictional world in which it takes place are made tangible thanks to design tools and methods, to conceive what David A. Kirby was the first to call "diegetic prototypes". The term diegetic stands for their narrative attribute, made to be self-explanatory of the world they come from. At the same time, they purposely leave narrative spaces for the viewer's imagination to fill in: they "tell worlds rather than stories". As explained by Julian Bleecker: "Design fiction objects are totems through which a larger story can be told, or imagined or expressed. They are like artifacts from someplace else, telling stories about other worlds". These prototypes are effective entry points into complex topics subject to socio-technological controversies such as digital technologies, Internet of things, ubiquitous computing, biotechnology, synthetic biology, transhumanism, artificial intelligence, data or algorithms. They "help make things visceral and real enough to jump to discussions and get to decisions". Discussion and debate generation Design fictions are meant to be displayed in order to create a space for discussion and debate. They can be exposed in various contexts depending on the targeted audiences: online – video platforms, social media, dedicated websites,... – or offline – galleries or museums, convenient stores, forums, ... – unveiling or not their fictional nature. In 2013, the project 99¢ FUTURES driven by the Extrapolation Factory studio showed that provoked discussions and debates could happen successfully in non-institutional places, such as a convenient store: they shelved artefacts – previously imagined and conceived during a workshop - among "real" current consumption objects. 
Customers passing by started to discuss about these pieces of futures, even purchasing the one they liked the most for a few dollars. Application scope Public policy-making Design Fiction is a helpful tool used to discuss and move forward public policy-making processes. In 2015, ProtoPolicy, a co-design project led by the Design Friction studio, All-Party Parliamentary Design and Innovation Group (APDIG) and Age UK aimed at building a shared understanding of the constraints and opportunities of political issues around Ageing in Place and loneliness through design fictions. A series of creative workshops involving older people communities led to the conception of "Soulaje", a provocative self-administered euthanasia wearable designed to start discussion around the taboo issues of death and freedom of choice. A second scenario staged "The Smart Home Therapist", a new kind of therapist who, through human psychology and artificial intelligence expertise, facilitates and improves older people's relationship with their smart homes and eases their access to personalized domestic products and services. Innovative companies Design fiction can be a powerful tool for companies showing prospective approaches or interests within changing or emerging industries. It can be used to help inspiring new imaginaries about the future, collecting insights and qualitative data that will help to formulate strategic directions and decisions, anticipating risks, social and cultural obstacles, enabling discussion between stakeholders, involving internal teams and external audiences in future orientations, bringing out unexpected feedbacks, frictions, misuses, misappropriations or reappropriations of new technologies and highlight their multiple impacts on potential users and more broadly speaking on the society. The Near Future Laboratory on its approach of design fiction towards companies: "Design Fiction is one approach among others, but its contribution focus on the near future and is tangible. For instance, instead of participating to workshops of multidisciplinary experts with a powerpoint filled with ideas for a technology, we propose to create the user manual for the envisioned product or produce a video that showcases how an employee appropriates the technologies with its features and limitations. These artifacts are meant to materialize changes, opportunities and implication in the use of technologies. They particularly point out details in situations of use with the objective to avoid a "general discussion". ... For our clients a successful Design Fiction means that they can feel, touch and understand near future opportunities and with convincing material of potential changes of their customers, markets, technologies, or competition." General public Design fiction gets closer to activism when it comes to raising the awareness of the general public on emerging social, legal, political or economic issues. Oniria is a project developed by A Parede studio in 2016 in reaction to the Statute of the Unborn, a Brazilian law project settling the beginning of life at the stage of egg fertilisation, therefore prohibiting the Morning-After Pill in a country where abortion is already illegal under most circumstances. As a critique, designers imagined a scenario in which a company launches a contraceptive technology in line with this new measure. 
People were invited to share their own visions through various social media platforms on how their life would be affected and how they would bond to this new device in their everyday life. Publications The Manual of Design Fiction by Julian Bleecker, Nick Foster, Fabien Girardin, and Nicolas Nova, 2022 Speculative Everything: Design, Fiction and Social Dreaming by Dunne and Raby, MIT Press, 2013 2050: "Designing our Tomorrow", Architectural Design, Volume 85, Issue 4, July/August 2015. Edited by Chris Luebkeman with contributions from Tim Maughan, Dan Hill, Liam Young, Mitchell Joachim, et al. Ecotopia 2121: Visions of Our Future Green Utopia--in 100 Cities, written and illustrated by Alan Marshall, , an outcome of the Ecotopia 2121 Project "Design Fiction", A short essay on design, science fact and fiction by Julian Bleecker, 2009 Little Book of Design Fiction for the Internet of Things by Paul Coulton, Joe lindley, and Rachel Cooper, 2018 See also Critical design Critical making Dystopia Scenario-based design Science fiction prototyping Speculative design Superfiction Utopia References Critical design Critical theory Inquiry Design Narrative techniques
Design fiction
Technology,Engineering
2,221
63,967,359
https://en.wikipedia.org/wiki/Peninsular%20Malaysian%20rain%20forests
The Peninsular Malaysian rain forests is an ecoregion on the Malay Peninsula and adjacent islands. It is in the tropical and subtropical moist broadleaf forests biome. Geography The ecoregion covers most of the southern Malay Peninsula in Malaysia and southern Thailand, and extends southwards to Singapore, the Riau Archipelago, and Lingga Islands, and east to the Anamba Islands. The peninsula is fringed with mangroves, including the Indochina mangroves on the eastern shore, and the Myanmar coast mangroves on the western shore. The ecologically distinct Peninsular Malaysian peat swamp forests ecoregion are found in waterlogged lowlands on the east and west sides of the peninsula. The Titiwangsa Mountains form the mountainous backbone of the peninsula, and the range's higher elevations are home to the Peninsular Malaysian montane rain forests ecoregion. Flora The predominant trees are Cinnamomum cassia, Durio zibethinus, Garcinia mangostana, Artocarpus heterophyllus, Ficus benghalensis, Gnetum gnemon, Mangifera indica, Toona ciliata, Toona sinensis, Cocos nucifera, Tetrameles nudiflora, Ginkgo biloba, Shorea robusta, Prunus serrulata, Camphora officinarum, Tsuga dumosa, Ulmus lanceifolia, Tectona grandis, Terminalia elliptica, Terminalia bellirica, Quercus acutissima, and dipterocarps, including species of Anisoptera, Dipterocarpus, Dryobalanops, Hopea, and Shorea. The forests are home to over 15,000 tree species, and trees of the families Burseraceae and Sapotaceae are also common. Trees form a canopy 24-36 meters high, with emergent trees rising up 45 meters or more. The tallest emergent is Koompassia excelsa, known as tualang, which can grow more than 76 meters high. Fauna The ecoregion home to 195 mammal species, including several large and endangered species – Asian elephant (Elephas maximus), gaur (Bos gaurus), tiger (Panthera tigris), Malayan tapir (Tapirus indicus), and clouded leopard (Neofelis nebulosa). The Sumatran rhinoceros (Dicerorhinus sumatrensis) once inhabited the forests, but Malaysia's last rhinoceroses died in 2019, and the species' few remaining members survive only in Sumatra. Conservation A 2017 assessment found that 20,113 km2, or 16%, of the ecoregion is in protected areas. Protected areas in the ecoregion include Endau Rompin National Park (1191.59 km²), Endau-Kota Tinggi (West) Wildlife Reserve (805.49 km²), Endau-Kota Tinggi (East) Wildlife Reserve (106.5 km²), Krau Wildlife Reserve (623.96 km²), Mersin Wildlife Reserve (74.13 km²), and Ulu Muda Wildlife Reserve (1152.57 km²). Taman Negara National Park (4524.54 km²) and Royal Belum State Park (2072.0 km²) include portions of the ecoregion along with portions of the Peninsular Malaysian montane rain forests. References External links Biota of Singapore Ecoregions of Asia Ecoregions of Indonesia Ecoregions of Malaysia Ecoregions of Malesia Ecoregions of Thailand Environment of Singapore Flora of Malaya Indomalayan ecoregions Tropical and subtropical moist broadleaf forests
Peninsular Malaysian rain forests
Biology
750
71,232,367
https://en.wikipedia.org/wiki/Moty%20Heiblum
Mordehai "Moty" Heiblum (Hebrew: מוטי הייבלום – sometimes called Moti Heiblum, born May 25, 1947, in Holon) is an Israeli electrical engineer and condensed matter physicist, known for his research in mesoscopic physics. Biography Moty Heiblum was born and raised in Holon. His mother was the only Holocaust survivor in her immediate family, and most of his father's family perished in the Holocaust. From 1967 to 1971 Moty Heiblum served in the Israeli Defense Force (IDF) in the IDF Communications Corps and was an instructor at the IDF Air Force Technical School. Heiblum graduated in electrical engineering from the Technion with a bachelor's degree in 1973 and from Carnegie-Mellon University with a master's degree in 1974. He received in 1978 his Ph.D. with thesis Characteristics of metal-oxide-metal devices supervised by John Roy Whinnery. After completing his Ph.D., Heiblum joined the IBM Thomas J. Watson Research Center. After working at the IBM Thomas J. Watson Research Center for 12 years, Heiblum returned in 1990 to Israel and established at the Weizmann Institute, with the support of Professor Yoseph Imry, the Joseph H. and Belle R. Braun Center for Submicron Research with the mission to "study and develop submicron semiconductor structures working in the mesoscopic regime." The initial investment for the Submicron Center was approximately $16 million. Heiblum has headed the Submicron Center since its founding in 1990. That same year he was appointed a full professor at the Weizmann Institute. He established the Department of Condensed Matter Physics at the Weizmann Institute and was its first director from 1993 to 1996 and from 2007 to 2012 he was again its director. In 2000 he was appointed to the Alex and Ida Susan Professorial Chair of Submicron Studies. From 1991 to 1992, Heiblum headed a government committee that advised the Minister of Science on how to encourage the microelectronics industry in the State of Israel. Since 2001, he has chaired the board of directors of Braude College of Engineering. From 1993 to 1996, he was a visiting professor for several weeks each summer at the Vienna University of Technology. From 1996 to 1997 he was on sabbatical as a visiting professor at Stanford University in combination with Hewlett Packard Labs in Palo Alto, California. He was an editor for the journal Semiconductor Science and Technology and is now an editor for the journal Solid State Communications. He organized and conducted, in collaboration with Professor Elisha Cohen of the Technion, the 1998 International Conference of Semiconductors, which was attended by about 1,100 people and held in Jerusalem. The main focus of Heiblum’s research is the quantum behavior of electrons in high-purity mesoscopic materials, and especially the quantum Hall effect (QHE) regime. Noteworthy highlights of the research done by him and his group are "novel electronic interferometers – demonstrating one-electron and two-electron interference; which-path detectors – allowing to turn 'on and off' electrons’ coherence; detection of fractional charges via sensitive shot noise measurements; and observation of quantized heat flow in the fractional abelian and non-abelian states in the QHE regime." He received in 1986 the IBM Outstanding Innovation Award and in 2013 the EMET Prize. He was elected a life fellow of the IEEE, a fellow of the American Physical Society (1990), and a member of Israel Academy of Sciences and Humanities (2008). In 2008, he received the Rothschild Prize in physics. 
In 2021, he received the Oliver E. Buckley Condensed Matter Prize. His doctoral students include Amir Yacoby. Moty Heiblum's wife Rachel has a PhD in biology from the Hebrew University of Jerusalem. She worked in the Faculty of Agriculture of the Hebrew University of Jerusalem's Rehovot campus. They have four children. He has a younger brother, Zohar Heiblum, who is a director, manager, turn-around specialist, and investor in the high-tech industry. Selected publications References External links (talk by Moty Heiblum at Tel Aviv University) (talk by Moty Heiblum at the Tel Aviv-Tsinghua Xin Center 2nd International Winter School held at Tel Aviv University) (2009 talk by Moty Heiblum) 1947 births Living people People from Holon Israeli Jews Israeli electrical engineers Condensed matter physicists Israeli physicists Jewish physicists Technion – Israel Institute of Technology alumni Carnegie Mellon University alumni University of California, Berkeley alumni IBM employees Academic staff of Weizmann Institute of Science Members of the Israel Academy of Sciences and Humanities Fellows of the American Physical Society EMET Prize recipients in the Exact Sciences Oliver E. Buckley Condensed Matter Prize winners
Moty Heiblum
Physics,Materials_science
1,005
63,842,993
https://en.wikipedia.org/wiki/Economics%20of%20vaccines
Vaccine development and production is economically complex and prone to market failure. Development is unprofitable in rich and poor countries, and is done with public funding. Production is concentrated in the hands of a small number of powerful companies which acquire key legal monopolies and make very large profits. Many of the diseases most demanding a vaccine, including HIV, malaria and tuberculosis, exist principally in poor countries. Pharmaceutical firms and biotechnology companies have little incentive to develop vaccines for these diseases because there is little revenue potential. Even in more affluent countries, financial returns are usually minimal and the financial and other risks are great. Most vaccine development to date has therefore relied on "push" funding by government, universities and non-profit organizations. In almost all cases, pharmaceuticals including vaccines are developed with public funding, but profits and control of price and availability are legally accorded to private companies. Proposed solutions include requiring results from publicly-funded research to be public-domain. Past efforts along these lines have failed by regulatory capture. In contrast to research and development, the vaccine production market, even for out-of-patent vaccines, is highly concentrated. 80% of global production is in the hand of five large companies, which hold key patents. This reduces competition and allows high, uncompetitive prices, often more than 100 times the cost of production. Many vaccines have been highly cost-effective and beneficial for public health. Vaccine effort that is beneficial to society is vastly in excess of that which is beneficial to vaccine producers. The number of vaccines actually administered has risen dramatically in recent decades. Market concentration While vaccine research and development is done by many small companies, large-scale vaccine manufacturing is done by an oligopoly of big manufacturers. A March 2020 New York Times article described the political effects of this market structure: "government and international health organizations know that any vaccine developed in a lab will ultimately be manufactured by large pharmaceutical firms. At this critical juncture with coronavirus, no health expert would publicly criticize drug companies, but privately they complain that pharma is a major speed bump in developing lifesaving vaccines." Concentration and monopolization of the manufacture of specific drugs has also led to supply shortages, and significant healthcare costs for employing people to track down hard-to-get drugs. This oligopoly power allows vaccine manufacturers to engage in price discrimination, and vaccine prices are often two orders of magnitude (~100x) higher than the manufacturer's stated manufacturing costs, . Sales agreements often require that the buyer keeps the price secret and agrees to other non-competitive restrictions; the exact nature and extent of this problem is hard to characterize, due to agreements being secret. Price secrecy also disadvantages vaccine purchasers in price negotiations. It also makes market analysis difficult and hinders efforts to improve affordability. The first decade of the 2000s saw a large number of mergers and acquisitions, and , 80% of the global vaccine market was in the hands of five multinationals: GlaxoSmithKline, Sanofi Pasteur, Pfizer, Merck, and Novartis. Of these, Novartis does not focus on vaccine development. Patents on key manufacturing processes help maintain this oligopoly. 
National vaccine-manufacturing facilities Some countries have set up local manufacturing facilities, especially during the COVID-19 pandemic. Sometimes the government simply gives a private company money to set up a privately-owned vaccination facility locally; sometimes the facility is partly controlled or owned by the government. Facilities that produce less than 100 million doses per year face diseconomies of scale, increasing the costs of vaccines. Sequential stages in the production of a vaccine dose may also be done in different facilities and shipped across borders. In 2017, the UK had draft plans to build a national facility, later called the UK Vaccine Manufacturing Innovation Centre (VMIC). Plans came to involve industry partners including Merck and Johnson and Johnson. The facility was delayed by negotiations with industry funders, which did not end until the country was well into the pandemic. It was originally slated to cost the government £66m. The facility was expanded and built in a rush during the pandemic, and eventually cost the government £200 million; by December 2021, the government was trying to sell off its share (it was still trying to sell it nearly a year later). The decision was widely criticized. It was suggested that the government not sell, or at least retain the ability to commandeer production. Ghana built a US$122 vaccine manufacturing facility using funding from the International Finance Corporation of the World Bank Group, working with a consortium of three Ghanaian pharmaceutical companies. It was planned to start shipping vaccines in 2024. Italy planned a public-private vaccine production facility. Canada built a publicly-owned production facility, which at 24 million doses per year is not expected to be cost-competitive with larger commercial facilities. Epidemic response In the past, the market power of pharmaceutical companies has delayed responses to epidemics. Manufacturers have successfully negotiated favourable terms, including market guarantees and indemnification, from governments as a condition of manufacturing vaccines. This has delayed responses to some epidemics by months, and prevented responses to other pandemics entirely. Some intellectual property issues also hinder vaccine development for epidemic preparedness, as in the case of rVSV-ZEBOV. Market incentives There is also no business incentive for pharmaceutical companies to test vaccines that are only of use to poor people. Vaccines developed for rich countries may also have short expiry dates, requirements that they be refrigerated until they are injected, and multiple-shot regimens, all of which may be very difficult in remote areas. In some cases, it has simply never been tested whether the vaccine will still be effective if the requirements are not followed (say, whether it retains potency for several days unrefrigerated). In almost all cases, pharmaceuticals including vaccines are developed with public funding, but profits and control of price and availability are legally accorded to private companies. The profits of large pharmaceutical companies are mostly used on dividends and share buybacks, which inflate executive pay, and on lobbying and advertising. Innovation is generally bought along with the small companies that developed it, rather than produced in-house; low percentage R&D spending is sometimes touted as an attraction to investors. The financialization focus of the pharmaceutical industry, especially in the US, has been cited as an obstacle to innovation.
There have been ethical issues raised with accepting donations of generally unaffordable vaccines. Demand While the vaccine market makes up only 2-3% of the pharmaceutical market worldwide, it is growing at 10-15% per year, much faster than other pharmaceuticals. Vaccine demand is increasing with new target populations in emerging markets (partly due to international vaccine funders; in 2012, UNICEF bought half of the world's vaccine doses). Vaccines are becoming the financial driver of the pharmaceutical industry, and new business models may be emerging. Vaccines are newly being marketed like other pharmaceuticals. Vaccines offer new opportunities for funding from public-private partnerships (such as CEPI and GAVI), governments, and philanthropic donors and foundations (such as GAVI and CEPI's donors). Pharmaceutical companies have representation on the boards of public-private global health funding bodies including GAVI and CEPI. Private donors often find it easier to exert influence through public-private partnerships like GAVI than through the traditional public sector and multilateral government institutions like the WHO; PPPs also appeal to public donors. Philanthropic funding means that vaccines are now rolled out to large developing markets less than 10 or 20 years after they are developed, while they are still under patent. Newer vaccines are much more expensive than older ones. Lower-income countries are increasingly a profitable vaccine market. Public domain Baker (2016) observed that the vast majority of the cost of most diagnostic, preventive and treatment procedures is patent royalties: the unit costs are almost universally a tiny fraction of the price to the consumer. Moreover, in the US "the government spends more than $30 billion a year on biomedical research through the National Institutes of Health", and researchers (individuals and organizations) routinely obtain patents on products whose development was paid for by taxpayers, per the Bayh–Dole Act of 1980. Baker claims that the US population would have better health care at lower cost if the results of that research were all placed in the public domain. Moreover, the cost of those diagnostic, preventive and treatment procedures would be lower the world over if the results of publicly-funded research were in the public domain. This would likely lead to better control of infectious diseases worldwide. That, in turn, would likely reduce the disease load in the US. References Vaccination Health economics Public health Market failure
Economics of vaccines
Biology
1,793
54,512,835
https://en.wikipedia.org/wiki/NGC%205011
NGC 5011 is an elliptical galaxy in the constellation of Centaurus. It was discovered on 3 June 1834 by John Herschel. It was described as "pretty bright, considerably small, round, among 4 stars" by John Louis Emil Dreyer, the compiler of the New General Catalogue. Optical companions Several galaxies are not physically associated with NGC 5011, but appear close to NGC 5011 in the night sky. PGC 45847 is a spiral galaxy that is also known as NGC 5011A. PGC 45918 is a lenticular galaxy some 156 million light-years away from the Earth, in the Centaurus Cluster, and is designated NGC 5011B. PGC 45917 is a dwarf galaxy, also designated NGC 5011C. Although NGC 5011B and 5011C appear close together, there are no signs of interaction between them. NGC 5011C is actually much closer and is in the Centaurus A/M83 Group, 13 million light-years away. References Notes External links Elliptical galaxies 5011 045898 Centaurus 18340603
NGC 5011
Astronomy
223
2,941,579
https://en.wikipedia.org/wiki/Sergei%20Pankejeff
Sergei Konstantinovitch Pankejeff (; 24 December 1886 – 7 May 1979) was a Russian aristocrat from Odesa, Russian Empire. Pankejeff is best known for being a patient of Sigmund Freud, who gave him the pseudonym of Wolf Man (German: der Wolfsmann) to protect his identity, after a dream Pankejeff had of a tree full of white wolves. Biography Early life and education Pankejeff was born on the 24 December 1886 at his family's estate near Kakhovka on the river Dnieper. The Pankejeff family (Freud's German transliteration from the Russian; in English it would be transliterated as Pankeyev) was a wealthy family in St. Petersburg. His father was Konstantin Matviyovich Pankeyev and his mother was Anna Semenivna, née Shapovalova. Pankejeff's parents were married young and had a happy marriage, but his mother became sickly and was therefore somewhat absent from the lives of her two children. Pankejeff would later describe her as cold and lacking tenderness, though she would show special affection to him when he was sickly. His father Konstantin, while being a cultured man and a keen hunter, was also an alcoholic who suffered from depressive episodes. He had been treated by Moshe Wulff (a disciple of Freud). He would later be diagnosed by Kraepelin with manic depressive disorder. His mother (Pankejeff's grandmother) had fallen into a depressive state after the death of a daughter and was thought to have died of suicide, while a paternal uncle of Pankejeff's was diagnosed with paranoia by the neuropsychiatrist Korsakov and admitted to an asylum. Sergei and his sister Anna were brought up by two servants; Nanja and Grusha and an English governess named Miss Oven. Sergei's education would later be taken over by male tutors. Sergei attended a grammar school in Russia, but after the 1905 Russian Revolution he spent considerable time abroad studying. Psychological problems During his review of Freud's letters and other files, Jeffrey Moussaieff Masson uncovered notes for an unpublished paper by Freud's associate Ruth Mack Brunswick. Freud had asked her to review the Pankejeff case, and she discovered evidence that Pankejeff had been sexually abused by a family member during his childhood. In 1906, his older sister Anna committed suicide through the use of quicksilver while visiting the site of Mikhail Lermontov's fatal duel. She would die after two weeks of agony. By 1907, Sergei began to show signs of serious depression. Sergei's father Konstantin also suffered from depression, often connected to specific political happenings of the day, and committed suicide in 1907 by consuming an excess of sleeping medication, a few months after Sergei had left for Munich to seek treatment for his own ailment. While in Munich, Pankejeff saw many doctors and stayed voluntarily at a number of elite psychiatric hospitals. In the summers, he always visited Russia. During a stay in Kraepelin's sanatorium near Neuwittelsbach, he met a nurse who worked there, Theresa-Maria Keller, whom he fell in love with and wanted to marry. Pankejeff's family upon learning about the relationship was against it, as not only was Keller from a lower class, but also she was older than Pankejeff and a divorced woman with a daughter. The couple would marry in 1914. Der Wolfsmann (The Wolf Man) In January 1910, Pankejeff's physician Leonid Drosnes brought him to Vienna to have treatment with Freud. 
Pankejeff and Freud met with each other many times between February 1910 and July 1914, and a few times thereafter, including a brief psychoanalysis in 1919. Pankejeff's "nervous problems" included his inability to have bowel movements without the assistance of an enema, as well as debilitating depression. Initially, according to Freud, Pankejeff resisted opening up to full analysis, until Freud gave him a year deadline for analysis, prompting Pankejeff to give up his resistances. Freud's first publication on the "Wolf Man" was "From the History of an Infantile Neurosis" (Aus der Geschichte einer infantilen Neurose), written at the end of 1914, but not published until 1918. Freud's treatment of Pankejeff centered on a dream the latter had as a very young child which he described to Freud: I dreamt that it was night and that I was lying in bed. (My bed stood with its foot towards the window; in front of the window there was a row of old walnut trees. I know it was winter when I had the dream, and night-time.) Suddenly the window opened of its own accord, and I was terrified to see that some white wolves were sitting on the big walnut tree in front of the window. There were six or seven of them. The wolves were quite white, and looked more like foxes or sheep-dogs, for they had big tails like foxes and they had their ears pricked like dogs when they pay attention to something. In great terror, evidently of being eaten up by the wolves, I screamed and woke up. My nurse hurried to my bed, to see what had happened to me. It took quite a long while before I was convinced that it had only been a dream; I had had such a clear and life-like picture of the window opening and the wolves sitting on the tree. At last I grew quieter, felt as though I had escaped from some danger, and went to sleep again.(Freud 1918) Freud's eventual analysis (along with Pankejeff's input) of the dream was that it was the result of Pankejeff having witnessed a "primal scene" — his parents having sex a tergo or more ferarum ("from behind" or "doggy style") — at a very young age. Later in the paper, Freud posited the possibility that Pankejeff instead had witnessed copulation between animals, which was displaced to his parents. Pankejeff's dream played a major role in Freud's theory of psychosexual development, and along with Irma's injection (Freud's own dream, which launched dream analysis), it was one of the most important dreams for the developments of Freud's theories. Additionally, Pankejeff became one of the main cases used by Freud to prove the validity of psychoanalysis. It was the third detailed case study, after "Notes Upon a Case of Obsessional Neurosis" in 1908 (also known by its animal nickname "Rat Man"), that did not involve Freud analyzing himself, and which brought together the main aspects of catharsis, the unconscious, sexuality, and dream analysis put forward by Freud in his Studies on Hysteria (1895), The Interpretation of Dreams (1899), and his Three Essays on the Theory of Sexuality (1905). Later life Pankejeff later published his own memoir under Freud's given pseudonym and remained in contact with Freudian disciples until his own death (undergoing analysis for six decades despite Freud's pronouncement of his being "cured"), making him one of the longest-running famous patients in the history of psychoanalysis. A few years after finishing psychoanalysis with Freud, Pankejeff developed a psychotic delirium. 
He was observed in the street staring at his reflection in a mirror, convinced that a dermatologist he had consulted to correct a minor injury on his nose had left him with what he perceived to be a hole in it. This obsession with the perceived flaw led to a compulsion to look at himself "in every shop window; he carried a pocket mirror … his fate depended on what it revealed or was about to reveal." Ruth Mack Brunswick, a Freudian, explained the delusion as displaced castration anxiety. Having lost most of his family's wealth after the Russian Revolution, Pankejeff supported himself and his wife on his salary as an insurance clerk. The psychoanalytical movement also provided Pankejeff with financial support in Vienna; psychoanalysts like Kurt Eissler (a former student of Freud's) dissuaded Pankejeff from talking to any media. The reason was that Pankejeff was one of Freud's most famous "cured" patients, and revealing that he was still suffering from mental illness would have hurt the reputation of Freud and psychoanalysis. Pankejeff was essentially bribed to keep quiet. In 1938, Pankejeff's wife committed suicide by inhaling gas. She had been depressed since the death of her daughter. As this coincided with the Anschluss and the wave of suicides among Jews trapped in Austria, research has also suggested that she was actually Jewish and that her suicide was prompted by her fear of the Nazis. Facing a major crisis and unable to get help from Mack Brunswick, who had fled to Paris, Pankejeff approached Muriel Gardiner, who managed to get him a visa to travel there. He would later follow her to London before returning to Vienna in 1938. Throughout the following decades, Pankejeff went through several emotional crises which would ultimately lead to him becoming depressive, one of them being the death of his mother in 1953. Pankejeff received intermittent treatment for these episodes from various psychoanalysts, most frequently from the head of the Vienna Psychoanalytical Society, Alfred von Winterstein, and then from his successor, Wilhelm Solms-Rödelheim. Gardiner would also supply him with "wonder pills" (Dexamyl) to help alleviate his emotional turmoil. In July 1977, Pankejeff suffered a heart attack and then contracted pneumonia. He was admitted to the Steinhof psychiatric hospital in Vienna. Pankejeff broke his silence and agreed to talk to Karin Obholzer. Their conversations, which took place between January 1974 and September 1976, would later be recounted in the book "Conversations with the Wolf-Man Sixty Years Later" in 1980, after Pankejeff's death and per his own wishes. In Pankejeff's own words, his treatment by Freud had been "catastrophic." Death Pankejeff died on the 12th of May 1979 at the age of 92. Criticism of Freud's interpretation Critics, beginning with Otto Rank in 1926, have questioned the accuracy and efficacy of Freud's psychoanalytic treatment of Pankejeff. Similarly, in the mid-20th century, psychiatrist Hervey Cleckley dismissed Freud's diagnosis as far-fetched and entirely speculative. Dorpat has cited Freud's behavior in the Pankejeff case as an example of gaslighting (attempting to undermine someone's perceptions of reality). 
Daniel Goleman wrote in 1990 in the New York Times: Mária Török and Nicolas Abraham have reinterpreted the Wolf Man's case (in The wolf man's magic word, a cryptonymy), presenting their notion of "the crypt" and what they call “cryptonyms." They provide a different analysis of the case than Freud, whose conclusions they criticise. According to the authors, Pankejeff's statements hide other statements, while the actual content of his words can be illuminated by looking into his multi-lingual background. According to the authors, Pankejeff hid secrets concerning his older sister, and as the Wolf Man both wanted to forget and preserve these issues, he encrypted his older sister, as an idealised "other" in the heart of himself, and spoke these secrets out loud in a cryptic manner, through words hiding behind words, rebuses, wordplays etc. For example, in the Wolf Man's dream, where six or seven wolves were sitting in a tree outside his bedroom window, the expression "pack of six", a "sixter" = shiestorka: siestorka = sister, which gives the conclusion that his sister is placed in the centre of the trauma. The case forms a central part of the second plateau of Gilles Deleuze and Félix Guattari's A Thousand Plateaus, titled "One or Several Wolves?" In it, they repeat the accusation made in Anti-Oedipus that Freudian analysis is unduly reductive and that the unconscious is actually a "machinic assemblage". They argue that wolves are a case of the pack or multiplicity and that the dream was part of a schizoid experience. See also Notes References Whitney Davis, Drawing the Dream of the Wolves: Homosexuality, Interpretation and Freud's 'Wolf Man''' (Indianapolis: Indiana University Press, 1995), . Sigmund Freud, "From the History of an Infantile Neurosis" (1918), reprinted in Peter Gay, The Freud Reader (London: Vintage, 1995). Muriel Gardiner, The Wolf-Man and Sigmund Freud, London, Routledge, 1971 Karin Obholzer, The Wolf-Man Sixty Years Later, tr. M. Shaw, London, Routledge & P. Kegan, 1982, p. 36. Patrick J. Mahony, Cries of the WolfMan, New York : International Universities Press, 1984 "The Wolf-Man" [Sergei Pankejeff], The Wolf-Man (Pankejeff's memoirs, along with essays by Freud and Ruth Mack Brunswick), (New York: Basic Books, 1971). James L. Rice, Freud's Russia: National Identity in the Evolution of Psychoanalysis'' (New Brunswick, NJ: Transaction Publishers, 1993), 94–98. Torok Maria, Abraham Nicolas, The wolf man's magic word, a cryptonymy, 1986 External links Freud exhibit which contains images of Pankejeff 1886 births 1979 deaths Analysands of Ruth Mack Brunswick Analysands of Sigmund Freud Case studies by Sigmund Freud Dream People from Odesa Nobility from the Russian Empire Vasylivka, Odesa Raion Emigrants from the Russian Empire to Austria-Hungary
Sergei Pankejeff
Biology
2,974
49,550,577
https://en.wikipedia.org/wiki/Frontiers%20of%20Architectural%20Research
Frontiers of Architectural Research is a quarterly peer-reviewed open access academic journal covering the field of architecture, including architectural design and theory, architectural science and technology, urban planning, landscape architecture, existing building renovation and architectural heritage conservation. It is published by Elsevier on behalf of Higher Education Press. The journal was established in 2012 and the editor-in-chief is Jianguo Wang (Southeast University (China)). The journal is abstracted and indexed in the Arts & Humanities Citation Index, Scopus, Directory of Open Access Journals, and the Chinese Science Citation Database. External links Architecture journals English-language journals Quarterly journals Academic journals established in 2012 Building research Academic journals of China
Frontiers of Architectural Research
Engineering
137
4,721,938
https://en.wikipedia.org/wiki/Variable%20air%20volume
Variable air volume (VAV) is a type of heating, ventilating, and/or air-conditioning (HVAC) system. Unlike constant air volume (CAV) systems, which supply a constant airflow at a variable temperature, VAV systems vary the airflow at a constant or varying temperature. The advantages of VAV systems over constant-volume systems include more precise temperature control, reduced compressor wear, lower energy consumption by system fans, less fan noise, and additional passive dehumidification. Box technology The simplest form of VAV box is the single-duct terminal configuration, which is connected to a single supply air duct that delivers treated air from an air-handling unit (AHU) to the space the box is serving. This configuration can deliver air at variable temperatures or air volumes to meet the heating and cooling loads as well as the ventilation rates required by the space. Most commonly, VAV boxes are pressure independent, meaning the VAV box uses controls to deliver a set flow rate regardless of variations in the system pressure experienced at the VAV inlet. This is accomplished by an airflow sensor placed at the VAV inlet, whose reading is used to open or close the damper within the VAV box to adjust the airflow. The difference between a CAV and a VAV box is that a VAV box can be programmed to modulate between different flow-rate setpoints depending on the conditions of the space. The VAV box is programmed to operate between a minimum and a maximum airflow setpoint and can modulate the flow of air depending on occupancy, temperature, or other control parameters. A CAV box can only operate at a constant maximum value or in an "off" state. This difference means the VAV box can provide tighter space temperature control while using much less energy. Another reason VAV boxes save more energy is that they are coupled with variable-speed drives on fans, so the fans can ramp down when the VAV boxes are experiencing part-load conditions. It is common for VAV boxes to include a form of reheat, either electric or hydronic heating coils. Electric coils convert electrical energy to heat via electric resistance, whereas hydronic coils use hot water to transfer heat from the coil to the air. The addition of reheat coils allows the box to adjust the supply air temperature to meet the heating loads in the space while delivering the required ventilation rates. In some applications the space may require such high air-change rates that there is a risk of over-cooling. In this scenario, the reheat coils can increase the air temperature to maintain the temperature setpoint in the space. This tends to happen during cooling seasons in buildings which have perimeter and interior zones. The perimeter zones, with more sun exposure, require a lower supply air temperature from the air-handling unit than the interior zones, which have less sun exposure and tend to stay cooler than the perimeter zones when left unconditioned. With the same supply air temperature being delivered to both zones, the reheat coils must heat the air for the interior zone to avoid over-cooling. Multiple-zone systems The air blower's flow rate is variable. For a single VAV air handler that serves multiple thermal zones, the flow rate to each zone must be varied as well. A VAV terminal unit, often called a VAV box, is the zone-level flow control device. It is basically a calibrated air damper with an automatic actuator. 
The VAV terminal unit is connected to either a local or a central control system. Historically, pneumatic control was commonplace, but electronic direct digital control systems are popular especially for mid- to large-size applications. Hybrid control, for example having pneumatic actuators with digital data collection, is popular as well. A common commercial application is shown in the diagram. This VAV system consists of a VAV box, ductwork, and four air terminals. Fan control for a pressure-independent system Control of the system's fan capacity is critical in VAV systems. Without proper and rapid flow rate control, the system's ductwork, or its sealing, can easily be damaged by overpressurization. In the cooling mode of operation, as the temperature in the space is satisfied, a VAV box closes to limit the flow of cool air into the space. As the temperature increases in the space, the box opens to bring the temperature back down. The fan maintains a constant static pressure in the discharge duct regardless of the position of the VAV box. Therefore, as the box closes, the fan slows down or restricts the amount of air going into the supply duct. As the box opens, the fan speeds up and allows more air flow into the duct, maintaining a constant static pressure. One of the challenges for VAV systems is providing adequate temperature control for multiple zones with different environmental conditions, such as an office on the glass perimeter of a building vs. an interior office down the hall. Dual duct systems provide cool air in one duct and warm air in a second duct to provide an appropriate temperature of mixed supply air for any zone. An extra duct, however, is cumbersome and expensive. Reheating the air from a single duct, using electric or hot water heating, is often a more cost-effective solution. Reheat applications - Controls and energy issues Traditional VAV reheat systems use minimum airflow rates of 30% to 50% the design airflow. These airflow minimums are selected to avoid the risk of under-ventilation and thermal comfort issues. However, published research supporting the efficacy of this approach is scarce. Systems operating at lower minimum airflow ranges (10% to 20% of design airflow) stand to use less fan and reheat coil energy relative to a traditional system, and recent research has shown that thermal comfort and adequate ventilation can still be attained at these lower minimums. VAV reheat systems using the higher minimum airflow typically employ a conventional "single maximum" control sequence. Under this control sequence, a single cooling maximum airflow setpoint is selected for design cooling conditions. The cooling airflow is gradually lowered to the minimum airflow setpoint, where it remains as the space temperature lowers beyond the cooling temperature setpoint. When the heating setpoint is reached, the electric or hydronic heating coil is activated and gradually provides more heat until the maximum heating capacity is reached at the design heating temperature. Research has shown that using a different, "dual maximum" control sequence can save substantial amounts of energy relative to the conventional "single maximum" control sequence. This is accomplished due to the "dual maximum" sequence’s use of lower minimum airflow rates. Under this control sequence, the same cooling maximum airflow is selected and is similarly lowered as the space temperature decreases. 
By the time the space temperature drops to the cooling temperature setpoint, the airflow reaches a lower minimum value than that used in the "single maximum" sequence (10% - 20% vs. 30% - 50% of maximum cooling airflow). When the space temperature reaches the heating temperature setpoint, the heating coil is activated and increases its electrical power (for electric coils) or hot water valve position (for hydronic coils) while the airflow remains at the minimum setpoint. When the heating coil reaches its maximum heating capacity, upon a further drop in space temperature, the airflow is increased until it reaches a maximum heating airflow setpoint (typically about 50% of the maximum cooling airflow). References Mechanical engineering Heating, ventilation, and air conditioning
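To make the single-maximum and dual-maximum sequences concrete, the following is a minimal illustrative sketch in Python, not taken from any cited source: the function name, temperature setpoints, throttling range, and airflow fractions are assumed example values, and a real terminal controller would use PID loops and a published sequence such as ASHRAE Guideline 36 rather than simple linear interpolation.

def dual_max_vav(space_temp_c, cool_sp=24.0, heat_sp=21.0,
                 throttling_range=2.0, v_min=0.15, v_heat_max=0.5):
    """Return (airflow_fraction, reheat_fraction) for a 'dual maximum' VAV
    reheat sequence.  Airflow is a fraction of the design (cooling-maximum)
    airflow; reheat is a fraction of full coil capacity.  All parameter
    values are illustrative assumptions, not manufacturer data."""
    if space_temp_c >= cool_sp:
        # Cooling: airflow rises linearly from the low minimum (10-20%)
        # toward the cooling maximum as the space warms past the setpoint.
        frac = min((space_temp_c - cool_sp) / throttling_range, 1.0)
        return v_min + frac * (1.0 - v_min), 0.0
    if space_temp_c >= heat_sp:
        # Deadband: hold the low minimum airflow with no reheat.
        return v_min, 0.0
    deficit = heat_sp - space_temp_c
    half = throttling_range / 2.0
    if deficit <= half:
        # First heating stage: ramp the reheat coil while the airflow
        # stays at the minimum setpoint.
        return v_min, deficit / half
    # Second heating stage: coil at full capacity; increase airflow
    # toward the heating maximum (about 50% of the cooling maximum).
    frac = min((deficit - half) / half, 1.0)
    return v_min + frac * (v_heat_max - v_min), 1.0

if __name__ == "__main__":
    for t in (26.0, 24.0, 22.5, 20.5, 19.5, 18.0):
        airflow, reheat = dual_max_vav(t)
        print(f"{t:5.1f} C -> airflow {airflow:4.2f}, reheat {reheat:4.2f}")

Running the demo sweeps the space temperature through the three regimes described above: cooling modulation, a low-minimum deadband, and the two-stage heating response in which reheat ramps first and airflow increases only afterwards.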
Variable air volume
Physics,Engineering
1,569
25,665,872
https://en.wikipedia.org/wiki/C20H26O3
The molecular formula C20H26O3 may refer to: Crotogoudin Cyclotriol Estradiol acetate (EA) Estradiol 17β-acetate Gestadienol Kahweol Molecular formulas
C20H26O3
Physics,Chemistry
65
36,950,793
https://en.wikipedia.org/wiki/Construction%20of%20a%20complex%20null%20tetrad
Calculations in the Newman–Penrose (NP) formalism of general relativity normally begin with the construction of a complex null tetrad , where is a pair of real null vectors and is a pair of complex null vectors. These tetrad vectors respect the following normalization and metric conditions assuming the spacetime signature Only after the tetrad gets constructed can one move forward to compute the directional derivatives, spin coefficients, commutators, Weyl-NP scalars , Ricci-NP scalars and Maxwell-NP scalars and other quantities in NP formalism. There are three most commonly used methods to construct a complex null tetrad: All four tetrad vectors are nonholonomic combinations of orthonormal tetrads; (or ) are aligned with the outgoing (or ingoing) tangent vector field of null radial geodesics, while and are constructed via the nonholonomic method; A tetrad which is adapted to the spacetime structure from a 3+1 perspective, with its general form being assumed and tetrad functions therein to be solved. In the context below, it will be shown how these three methods work. Note: In addition to the convention employed in this article, the other one in use is . Nonholonomic tetrad The primary method to construct a complex null tetrad is via combinations of orthonormal bases. For a spacetime with an orthonormal tetrad , the covectors of the nonholonomic complex null tetrad can be constructed by and the tetrad vectors can be obtained by raising the indices of via the inverse metric . Remark: The nonholonomic construction is actually in accordance with the local light cone structure. Example: A nonholonomic tetrad Given a spacetime metric of the form (in signature(-,+,+,+)) the nonholonomic orthonormal covectors are therefore and the nonholonomic null covectors are therefore la (na) aligned with null radial geodesics In Minkowski spacetime, the nonholonomically constructed null vectors respectively match the outgoing and ingoing null radial rays. As an extension of this idea in generic curved spacetimes, can still be aligned with the tangent vector field of null radial congruence. However, this type of adaption only works for , or coordinates where the radial behaviors can be well described, with and denote the outgoing (retarded) and ingoing (advanced) null coordinate, respectively. Example: Null tetrad for Schwarzschild metric in Eddington-Finkelstein coordinates reads so the Lagrangian for null radial geodesics of the Schwarzschild spacetime is which has an ingoing solution and an outgoing solution . Now, one can construct a complex null tetrad which is adapted to the ingoing null radial geodesics: and the dual basis covectors are therefore Here we utilized the cross-normalization condition as well as the requirement that should span the induced metric for cross-sections of {v=constant, r=constant}, where and are not mutually orthogonal. Also, the remaining two tetrad (co)vectors are constructed nonholonomically. 
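The display equations for the conditions above did not survive extraction. As a reference sketch only, assuming the (-,+,+,+) signature used in the examples (sign conventions differ between authors), the standard normalization, orthogonality, and completeness relations, together with one common nonholonomic combination of an orthonormal cotetrad $\{\omega^0,\omega^1,\omega^2,\omega^3\}$ with $\omega^0$ timelike, read:

$$l_a l^a = n_a n^a = m_a m^a = \bar{m}_a \bar{m}^a = 0, \qquad l_a m^a = l_a \bar{m}^a = n_a m^a = n_a \bar{m}^a = 0,$$
$$l_a n^a = -1, \qquad m_a \bar{m}^a = 1, \qquad g_{ab} = -\,l_a n_b - n_a l_b + m_a \bar{m}_b + \bar{m}_a m_b,$$
$$l_a = \tfrac{1}{\sqrt{2}}\left(\omega^0_a + \omega^1_a\right), \qquad n_a = \tfrac{1}{\sqrt{2}}\left(\omega^0_a - \omega^1_a\right), \qquad m_a = \tfrac{1}{\sqrt{2}}\left(\omega^2_a + i\,\omega^3_a\right).$$

These satisfy the stated cross-normalization (one may check, for example, that $l_a n^a = \tfrac{1}{2}(\omega^0\!\cdot\!\omega^0 - \omega^1\!\cdot\!\omega^1) = -1$), and the tetrad vectors follow by raising indices with the inverse metric, as described in the text.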
With the tetrad defined, one is now able to respectively find out the spin coefficients, Weyl-Np scalars and Ricci-NP scalars that Example: Null tetrad for extremal Reissner–Nordström metric in Eddington-Finkelstein coordinates reads so the Lagrangian is For null radial geodesics with , there are two solutions (ingoing) and (outgoing), and therefore the tetrad for an ingoing observer can be set up as With the tetrad defined, we are now able to work out the spin coefficients, Weyl-NP scalars and Ricci-NP scalars that Tetrads adapted to the spacetime structure At some typical boundary regions such as null infinity, timelike infinity, spacelike infinity, black hole horizons and cosmological horizons, null tetrads adapted to spacetime structures are usually employed to achieve the most succinct Newman–Penrose descriptions. Newman-Unti tetrad for null infinity For null infinity, the classic Newman-Unti (NU) tetrad is employed to study asymptotic behaviors at null infinity, where are tetrad functions to be solved. For the NU tetrad, the foliation leaves are parameterized by the outgoing (advanced) null coordinate with , and is the normalized affine coordinate along ; the ingoing null vector acts as the null generator at null infinity with . The coordinates comprise two real affine coordinates and two complex stereographic coordinates , where are the usual spherical coordinates on the cross-section (as shown in ref., complex stereographic rather than real isothermal coordinates are used just for the convenience of completely solving NP equations). Also, for the NU tetrad, the basic gauge conditions are Adapted tetrad for exteriors and near-horizon vicinity of isolated horizons For a more comprehensive view of black holes in quasilocal definitions, adapted tetrads which can be smoothly transited from the exterior to the near-horizon vicinity and to the horizons are required. For example, for isolated horizons describing black holes in equilibrium with their exteriors, such a tetrad and the related coordinates can be constructed this way. Choose the first real null covector as the gradient of foliation leaves where is the ingoing (retarded) Eddington–Finkelstein-type null coordinate, which labels the foliation cross-sections and acts as an affine parameter with regard to the outgoing null vector field , i.e. Introduce the second coordinate as an affine parameter along the ingoing null vector field , which obeys the normalization Now, the first real null tetrad vector is fixed. To determine the remaining tetrad vectors and their covectors, besides the basic cross-normalization conditions, it is also required that: (i) the outgoing null normal field acts as the null generators; (ii) the null frame (covectors) are parallelly propagated along ; (iii) spans the {t=constant, r=constant} cross-sections which are labeled by real isothermal coordinates . Tetrads satisfying the above restrictions can be expressed in the general form that The gauge conditions in this tetrad are Remark: Unlike Schwarzschild-type coordinates, here r=0 represents the horizon, while r>0 (r<0) corresponds to the exterior (interior) of an isolated horizon. People often Taylor expand a scalar function with respect to the horizon r=0, where refers to its on-horizon value. The very coordinates used in the adapted tetrad above are actually the Gaussian null coordinates employed in studying near-horizon geometry and mechanics of black holes. 
See also Newman–Penrose formalism References General relativity Mathematical methods in general relativity
Construction of a complex null tetrad
Physics
1,458
8,844,844
https://en.wikipedia.org/wiki/Intuitor
Intuitor is a website promoting creative learning as both a method of enlightenment and a cultural theme in its own right. Created in 1996, two of its earliest features were instructions for the founder's own four-handed chess variant Forchess and an essay entitled Why Now Is the Most Exciting Time in History to Be Alive. Today, its eclectic format includes educational treatments of physics, statistics, and chess, as well as calls for paradigm shifts such as the adoption of hexadecimal for representing numbers in everyday use. Insultingly Stupid Movie Physics Intuitor's most well-known feature is Insultingly Stupid Movie Physics (ISMP), which produces original scientific critiques of contemporary cinema and television. Its main gimmick is a physics rating system parodying the explicit content ratings of the Motion Picture Association of America. Its movie reviews seek to promote a greater understanding of and appreciation for science by lampooning scientific portrayals in pop-culture. It has been cited on popular websites such as Fark and Slashdot, on radio programs throughout the U.S. and Canada, and in major print media. The ISMP was also Something Awful's awful link of the day on June 14, 2006. In calling for "Decency in Movie Physics", ISMP has named the science-fiction film The Core as the "Worst Physics Movie Ever". External links Intuitor American educational websites
Intuitor
Technology
288
16,853,729
https://en.wikipedia.org/wiki/Drydock%20Number%20One%2C%20Norfolk%20Naval%20Shipyard
Drydock Number One is the oldest operational drydock facility in the United States. Located in Norfolk Naval Shipyard in Portsmouth, Virginia, it was put into service in 1834, and has been in service since then. Its history includes the refitting of , which was modified to be the Confederate Navy ironclad . It was declared a National Historic Landmark in 1971. Description and history Drydock Number One is located on the west side of the central branch of the Elizabeth River. It measures in length, and is built of Massachusetts granite, stepped to allow access to and bracing of ships under repair. Stairs at the land end provide access to the various levels. The drydock can accommodate a maximum vessel length of with a beam. Depth is . the dock can be dewatered in 40 minutes and flooded in 90 minutes. The drydock was built between 1827 and 1834, and cost $974,365.65, a very high price at that time. It may have been designed by Loammi Baldwin Jr., then the Navy's superintendent of drydocks, and its construction was overseen by William P. S. Sanger, a civil engineer. The drydock was first used in June 1833, when was drydocked for recommissioning, the first time a large vessel was drydocked in the United States. During the opening phase of the American Civil War in April 1861, Union forces were dispatched from Washington on the USS Pawnee to assist in destroying military assets as the shipyard was being abandoned; however, efforts to blow-up the dry dock were unsuccessful. The shipyard was then taken over by the Confederate Navy, which was a severe blow to the Union, and it was here that was modified to become the ironclad . Today, Drydock Number One is still in operation, used primarily to service U.S. Navy vessels. See also List of U.S. National Historic Landmark ships, shipwrecks, and shipyards List of National Historic Landmarks in Virginia National Register of Historic Places listings in Portsmouth, Virginia References National Historic Landmarks in Virginia Buildings and structures in Portsmouth, Virginia 1827 establishments in Virginia Industrial buildings and structures on the National Register of Historic Places in Virginia Water transportation buildings and structures on the National Register of Historic Places Transportation buildings and structures on the National Register of Historic Places in Virginia National Register of Historic Places in Portsmouth, Virginia Drydocks United States Navy shipyards Historic Civil Engineering Landmarks
Drydock Number One, Norfolk Naval Shipyard
Engineering
491
757,269
https://en.wikipedia.org/wiki/Water%20filter
A water filter removes impurities by lowering contamination of water using a fine physical barrier, a chemical process, or a biological process. Filters cleanse water to different extents, for purposes such as providing agricultural irrigation, accessible drinking water, water for public and private aquariums, and the safe use of ponds and swimming pools. Methods of filtration Filters use sieving, adsorption, ion exchange, biofilms and other processes to remove unwanted substances from water. Unlike a sieve or screen, a filter can potentially remove particles much smaller than the holes through which its water passes, such as nitrates or germs like Cryptosporidium. Among the methods of filtration, notable examples are sedimentation, used to separate hard and suspended solids from water, and activated charcoal treatment, where, typically, boiled water is poured through a piece of cloth to trap undesired residuals. Additionally, machinery is used for the desalination and purification of water by passing it through multiple filtration tanks; this technique is aimed at filtering water on larger scales, such as serving entire cities. These three methods are particularly relevant, as they trace back centuries and are the basis for many of the modern methods of filtration used today. Types Water treatment plant filters Types of water filters for municipal and other large treatment systems include media filters, screen filters, disk filters, slow sand filter beds, rapid sand filters, cloth filters, and biological filters such as algae scrubbers. Point-of-use filters Point-of-use filters for home use include granular-activated carbon filters used for carbon filtering, depth filters, metallic alloy filters, microporous ceramic filters, carbon block resin, and microfiltration and ultrafiltration membranes. Some filters use more than one filtration method; an example of this is a multi-barrier system. Jug filters can be used for small quantities of drinking water. Some kettles have built-in filters, primarily to reduce limescale build-up. Portable water filters Water filters are used by hikers, by aid organizations during humanitarian emergencies, and by the military. These filters are usually small, portable and lightweight. They usually filter water by working a mechanical hand pump, although some use a siphon drip system to force water through, while others are built into water bottles. Dirty water is pumped via a screen-filtered flexible silicone tube through a specialized filter, ending up in a container. These filters work to remove bacteria, protozoa and microbial cysts that can cause disease. Filters may have fine meshes that must be replaced or cleaned, and ceramic water filters must have their outside abraded when they become clogged with impurities. These water filters should not be confused with devices or tablets that disinfect water, which remove or kill viruses such as hepatitis A and rotavirus. Digital Water Filter A new and advanced water purification system using reverse osmosis (RO) technology ensures 99.9% sterilization, effectively removing bacteria, viruses, dissolved salts, and heavy metals. The RO process provides high-precision filtration, typically down to 0.0001 microns, ensuring the removal of harmful contaminants while preserving the purity of water for safe consumption. Ceramic water filters Ceramic filters represent low-cost solutions to water filtration and remain widely used despite being one of the oldest methods of filtration. 
These filters are found not only inside the homes of families but are also used in industrial engineering (as high-temperature filters) for several processes. The conventional ceramic filters used for day-to-day water consumption, known as candle-type filters, work with gravity and a central candle, which makes the filtration process quite slow. Water polishing The term water polishing can refer to any process that removes small (usually microscopic) particulate material, or removes very low concentrations of dissolved material from water. The process and its meaning vary from setting to setting: a manufacturer of aquarium filters may claim that its filters perform water polishing by capturing "micro particles" within nylon or polyester pads, just as a chemical engineer can use the term to refer to the removal of magnetic resins from a solution by passing the solution over a bed of magnetic particulate. In this sense, water polishing is simply another term for whole-house water filtration systems. Polishing is also done on a large scale in water reclamation plants. History 4000 years ago, in India, Hindus devised the first drinking water standards. Hindus heated dirty water by boiling it and exposing it to sunlight, or by dipping hot copper into it seven times, before filtering it through earthen vessels and cooling it. This was an enlightened procedure to obtain sterilized drinking water as well as to keep it aesthetically pleasing. This method was directed at individuals and households rather than at community water sources. In China, boiling water was found to reduce the spread of disease. To this day, hot water just below boiling point is typically served in Chinese restaurants. 2,000 years ago, Mayan drinking water filtration systems used crystalline quartz and zeolite. Both minerals are used in modern water filtration. "The filters would have removed harmful microbes, nitrogen-rich compounds, heavy metals such as mercury and other toxins from the water". The Egyptians reportedly used alum to clarify water as early as 1500 BC. Persian engineer Al-Karaji wrote a book, The Extraction of Hidden Waters, which gave an early description of a water filtration process. Until the invention of the microscope, the existence of microscopic life was unknown, and more than 200 years passed after the microscope was invented before the relationship between microorganisms and disease became clear. In the mid-19th century, cholera was proven to be transmitted by contaminated water. In the late 19th century, Louis Pasteur's germ theory of disease finally established a causal relationship between microorganisms and disease. Filtration as a method of water purification was established in the 18th century, and the first municipal water treatment plant was built in Scotland in 1832. However, the aesthetic value of water was important at the time, and effective water quality standards did not exist until the late 19th century. During the 19th and 20th centuries, water filters for domestic water production were generally divided into slow sand filters and rapid sand filters (also called mechanical filters and American filters). While there were many small-scale water filtration systems prior to 1800, Paisley, Scotland is generally acknowledged as the first city to receive filtered water for an entire town. The Paisley filter began operation in 1804 and was an early type of slow sand filter. 
Throughout the 1800s, hundreds of slow sand filters were constructed in the UK and on the European continent. An intermittent slow sand filter was constructed and operated at Lawrence, Massachusetts in 1893 due to continuing typhoid fever epidemics caused by sewage contamination of the water supply. The first continuously operating slow sand filter was designed by Allen Hazen for the city of Albany, New York in 1897. The most comprehensive history of water filtration was published by Moses N. Baker in 1948 and reprinted in 1981. In the 1800s, mechanical filtration was an industrial process that depended on the addition of aluminium sulfate prior to the filtration process. The filtration rate for mechanical filtration was typically more than 60 times faster than slow sand filters, thus requiring significantly less land area. The first modern mechanical filtration plant in the U.S. was built at Little Falls, New Jersey, for the East Jersey Water Company. George W. Fuller designed and supervised the construction of the plant which went into operation in 1902. In 1924, John R. Baylis developed a fixed grid backwash assist system, which consisted of pipes with nozzles that injected jets of water into the filter material during expansion. See also Backwashing (water treatment) Carbon filtering Distillation Kinetic degradation fluxion media Point of use water filter Point of use water treatment Reverse osmosis Reverse osmosis plant Sand separator Settling basin Swimming pool sanitation Water softening References External links Water filters Irrigation Hiking equipment Water conservation
Water filter
Chemistry
1,681
3,264,380
https://en.wikipedia.org/wiki/Bacterial%20translation
Bacterial translation is the process by which messenger RNA is translated into proteins in bacteria. Initiation Initiation of translation in bacteria involves the assembly of the components of the translation system, which are: the two ribosomal subunits (50S and 30S subunits); the mature mRNA to be translated; the tRNA charged with N-formylmethionine (the first amino acid in the nascent peptide); guanosine triphosphate (GTP) as a source of energy; and the three prokaryotic initiation factors IF1, IF2, and IF3, which help the assembly of the initiation complex. Variations in the mechanism can be anticipated. The ribosome has three active sites: the A site, the P site, and the E site. The A site is the point of entry for the aminoacyl-tRNA (except for the first aminoacyl-tRNA, which enters at the P site). The P site is where the peptidyl-tRNA is formed in the ribosome. The E site is the exit site of the now-uncharged tRNA after it gives its amino acid to the growing peptide chain. Canonical initiation: Shine-Dalgarno sequence The majority of mRNAs in E. coli are prefaced with a Shine-Dalgarno (SD) sequence. The SD sequence is recognized by a complementary "anti-SD" region on the 16S rRNA component of the 30S subunit. In the canonical model, the 30S ribosome is first joined up with the three initiation factors, forming an unstable "pre-initiation complex". The mRNA then pairs up with this anti-SD region, forming a double-stranded RNA structure that roughly positions the start codon at the P site. An initiating tRNAfMet arrives and is positioned with the help of IF2, starting the translation. There are many uncertainties even in the canonical model. The initiation site has been shown to be not strictly limited to AUG. Well-known coding regions that do not have AUG initiation codons are those of lacI (GUG) and lacA (UUG) in the E. coli lac operon. Two studies have independently shown that 17 or more non-AUG start codons may initiate translation in E. coli. Nevertheless, AUG appears to be the strongest initiation codon among all possibilities. The SD sequence also does not appear strictly necessary, as a wide range of mRNAs lack one and are still translated, with an entire phylum of bacteria (Bacteroidetes) using no such sequence. An SD sequence followed by AUG is also not, by itself, sufficient to initiate translation. It does, at least, function as a very important initiating signal in E. coli. 70S scanning model When translating a polycistronic mRNA, a 70S ribosome ends translation at a stop codon. It has been shown that, instead of immediately splitting into its two subunits, the ribosome can "scan" forward until it hits another Shine–Dalgarno sequence and the downstream initiation codon, initiating another round of translation with the help of IF2 and IF3. This mode is thought to be important for the translation of genes that are clustered in polycistronic operons, where the canonical binding mode can be disruptive due to small distances between neighboring genes on the same mRNA molecule. Leaderless initiation A number of bacterial mRNAs have no 5'UTR whatsoever, or a very short one. The complete 70S ribosome, with the help of IF2 (recruiting fMet-tRNA), can simply start translating such a "leaderless" mRNA. A number of factors modify the efficiency of leaderless initiation. A 5' phosphate group attached to the start codon seems near-essential. AUG is strongly preferred in E. coli, but not necessarily in other species. IF3 inhibits leaderless initiation. 
A longer 5'UTR or one with significant secondary structure also inhibits leaderless initiation. Elongation Elongation of the polypeptide chain involves addition of amino acids to the carboxyl end of the growing chain. The growing protein exits the ribosome through the polypeptide exit tunnel in the large subunit. Elongation starts when the fMet-tRNA enters the P site, causing a conformational change which opens the A site for the new aminoacyl-tRNA to bind. This binding is facilitated by elongation factor-Tu (EF-Tu), a small GTPase. For fast and accurate recognition of the appropriate tRNA, the ribosome utilizes large conformational changes (conformational proofreading). Now the P site contains the beginning of the peptide chain of the protein to be encoded and the A site has the next amino acid to be added to the peptide chain. The growing polypeptide connected to the tRNA in the P site is detached from the tRNA in the P site and a peptide bond is formed between the last amino acids of the polypeptide and the amino acid still attached to the tRNA in the A site. This process, known as peptide bond formation, is catalyzed by a ribozyme (the 23S ribosomal RNA in the 50S ribosomal subunit). Now, the A site has the newly formed peptide, while the P site has an uncharged tRNA (tRNA with no amino acids). The newly formed peptide in the A site tRNA is known as dipeptide and the whole assembly is called dipeptidyl-tRNA. The tRNA in the P site minus the amino acid is known to be deacylated. In the final stage of elongation, called translocation, the deacylated tRNA (in the P site) and the dipeptidyl-tRNA (in the A site) along with its corresponding codons move to the E and P sites, respectively, and a new codon moves into the A site. This process is catalyzed by elongation factor G (EF-G). The deacylated tRNA at the E site is released from the ribosome during the next A-site occupation by an aminoacyl-tRNA again facilitated by EF-Tu. The ribosome continues to translate the remaining codons on the mRNA as more aminoacyl-tRNA bind to the A site, until the ribosome reaches a stop codon on mRNA(UAA, UGA, or UAG). The translation machinery works relatively slowly compared to the enzyme systems that catalyze DNA replication. Proteins in bacteria are synthesized at a rate of only 18 amino acid residues per second, whereas bacterial replisomes synthesize DNA at a rate of 1000 nucleotides per second. This difference in rate reflects, in part, the difference between polymerizing four types of nucleotides to make nucleic acids and polymerizing 20 types of amino acids to make proteins. Testing and rejecting incorrect aminoacyl-tRNA molecules takes time and slows protein synthesis. In bacteria, translation initiation occurs as soon as the 5' end of an mRNA is synthesized, and translation and transcription are coupled. This is not possible in eukaryotes because transcription and translation are carried out in separate compartments of the cell (the nucleus and cytoplasm). Termination Termination occurs when one of the three termination codons moves into the A site. These codons are not recognized by any tRNAs. Instead, they are recognized by proteins called release factors, namely RF1 (recognizing the UAA and UAG stop codons) or RF2 (recognizing the UAA and UGA stop codons). These factors trigger the hydrolysis of the ester bond in peptidyl-tRNA and the release of the newly synthesized protein from the ribosome. 
A third release factor RF-3 catalyzes the release of RF-1 and RF-2 at the end of the termination process. Recycling The post-termination complex formed by the end of the termination step consists of mRNA with the termination codon at the A-site, an uncharged tRNA in the P site, and the intact 70S ribosome. Ribosome recycling step is responsible for the disassembly of the post-termination ribosomal complex. Once the nascent protein is released in termination, Ribosome Recycling Factor and Elongation Factor G (EF-G) function to release mRNA and tRNAs from ribosomes and dissociate the 70S ribosome into the 30S and 50S subunits. IF3 then replaces the deacylated tRNA releasing the mRNA. All translational components are now free for additional rounds of translation. Depending on the tRNA, IF1–IF3 may also perform recycling. Polysomes Translation is carried out by more than one ribosome simultaneously. Because of the relatively large size of ribosomes, they can only attach to sites on mRNA 35 nucleotides apart. The complex of one mRNA and a number of ribosomes is called a polysome or polyribosome. Regulation of translation When bacterial cells run out of nutrients, they enter stationary phase and downregulate protein synthesis. Several processes mediate this transition. For instance, in E. coli, 70S ribosomes form 90S dimers upon binding with a small 6.5 kDa protein, ribosome modulation factor RMF. These intermediate ribosome dimers can subsequently bind a hibernation promotion factor (the 10.8 kDa protein, HPF) molecule to form a mature 100S ribosomal particle, in which the dimerization interface is made by the two 30S subunits of the two participating ribosomes. The ribosome dimers represent a hibernation state and are translationally inactive. A third protein that can bind to ribosomes when E. coli cells enter the stationary phase is YfiA (previously known as RaiA). HPF and YfiA are structurally similar, and both proteins can bind to the catalytic A- and P-sites of the ribosome. RMF blocks ribosome binding to mRNA by preventing interaction of the messenger with 16S rRNA. When bound to the ribosomes the C-terminal tail of E. coli YfiA interferes with the binding of RMF, thus preventing dimerization and resulting in the formation of translationally inactive monomeric 70S ribosomes. In addition to ribosome dimerization, the joining of the two ribosomal subunits can be blocked by RsfS (formerly called RsfA or YbeB). RsfS binds to L14, a protein of the large ribosomal subunit, and thereby blocks joining of the small subunit to form a functional 70S ribosome, slowing down or blocking translation entirely. RsfS proteins are found in almost all eubacteria (but not archaea) and homologs are present in mitochondria and chloroplasts (where they are called C7orf30 and iojap, respectively). However, it is not known yet how the expression or activity of RsfS is regulated. Another ribosome-dissociation factor in Escherichia coli is HflX, previously a GTPase of unknown function. Zhang et al. (2015) showed that HflX is a heat shock–induced ribosome-splitting factor capable of dissociating vacant as well as mRNA-associated ribosomes. The N-terminal effector domain of HflX binds to the peptidyl transferase center in a strikingly similar manner as that of the class I release factors and induces dramatic conformational changes in central intersubunit bridges, thus promoting subunit dissociation. 
Accordingly, loss of HflX results in an increase in stalled ribosomes upon heat shock and possibly other stress conditions. Effect of antibiotics Several antibiotics exert their action by targeting the translation process in bacteria. They exploit the differences between prokaryotic and eukaryotic translation mechanisms to selectively inhibit protein synthesis in bacteria without affecting the host. See also Prokaryotic initiation factors Prokaryotic elongation factors References Molecular biology Protein biosynthesis Gene expression
Bacterial translation
Chemistry,Biology
2,530
496,670
https://en.wikipedia.org/wiki/Gibberellin
Gibberellins (GAs) are plant hormones that regulate various developmental processes, including stem elongation, germination, dormancy, flowering, flower development, and leaf and fruit senescence. They are one of the longest-known classes of plant hormone. It is thought that the selective breeding (albeit unconscious) of crop strains that were deficient in GA synthesis was one of the key drivers of the "green revolution" in the 1960s, a revolution that is credited with saving over a billion lives worldwide. Chemistry All known gibberellins are diterpenoid acids synthesized by the terpenoid pathway in plastids and then modified in the endoplasmic reticulum and cytosol until they reach their biologically active form. All are derived from the ent-gibberellane skeleton, but are synthesised via ent-kaurene. The gibberellins are named GA1 through GAn in order of discovery. Gibberellic acid, which was the first gibberellin to be structurally characterized, is GA3. To date, 136 GAs have been identified from plants, fungi, and bacteria. Gibberellins are tetracyclic diterpene acids. There are two classes, with either 19 or 20 carbons. The 19-carbon gibberellins are generally the biologically active forms. They have lost carbon 20 and, in its place, possess a five-membered lactone bridge that links carbons 4 and 10. Hydroxylation also has a great effect on biological activity. In general, the most biologically active compounds are dihydroxylated gibberellins, with hydroxyl groups on both carbons 3 and 13. Gibberellic acid is a 19-carbon dihydroxylated gibberellin. Bioactive GAs The bioactive gibberellins are GA1, GA3, GA4, and GA7. There are three common structural traits among these GAs: 1) a hydroxyl group on C-3β, 2) a carboxyl group on carbon 6, and 3) a lactone between carbons 4 and 10. The 3β-hydroxyl group can be exchanged for other functional groups at the C-2 and/or C-3 positions. GA5 and GA6 are examples of bioactive GAs without a hydroxyl group on C-3β. The presence of GA1 in various plant species suggests that it is a common bioactive GA. Biological function Gibberellins are involved in the natural process of breaking dormancy and other aspects of germination. Before the photosynthetic apparatus develops sufficiently in the early stages of germination, the seed's starch reserves nourish the seedling. Usually in germination, the breakdown of starch to glucose in the endosperm begins shortly after the seed is exposed to water. Gibberellins in the seed embryo are believed to signal starch hydrolysis by inducing the synthesis of the enzyme α-amylase in the aleurone cells. In the model for gibberellin-induced production of α-amylase, gibberellins from the scutellum diffuse to the aleurone cells, where they stimulate the secretion of α-amylase. α-Amylase then hydrolyses starch (abundant in many seeds) into glucose that can be used to produce energy for the seed embryo. Studies of this process have indicated that gibberellins cause higher levels of transcription of the gene coding for the α-amylase enzyme, thereby stimulating the synthesis of α-amylase. Exposure to cold temperatures increases the production of gibberellins. Gibberellins stimulate cell elongation, the breaking of dormancy, budding, and the development of seedless fruits. They also promote seed germination by breaking the seed's dormancy and acting as a chemical messenger: the hormone binds to a receptor, calcium activates the protein calmodulin, the complex binds to DNA, and an enzyme is produced that stimulates growth in the embryo. 
Metabolism

Biosynthesis
Gibberellins are usually synthesized via the methylerythritol phosphate (MEP) pathway in higher plants. In this pathway, bioactive GA is produced from trans-geranylgeranyl diphosphate (GGDP), with the participation of three classes of enzymes: terpene synthases (TPSs), cytochrome P450 monooxygenases (P450s), and 2-oxoglutarate-dependent dioxygenases (2ODDs). The MEP pathway follows eight steps:
GGDP is converted to ent-copalyl diphosphate (ent-CDP) by ent-copalyl diphosphate synthase (CPS)
ent-CDP is converted to ent-kaurene by ent-kaurene synthase (KS)
ent-kaurene is converted to ent-kaurenol by ent-kaurene oxidase (KO)
ent-kaurenol is converted to ent-kaurenal by KO
ent-kaurenal is converted to ent-kaurenoic acid by KO
ent-kaurenoic acid is converted to ent-7a-hydroxykaurenoic acid by ent-kaurenoic acid oxidase (KAO)
ent-7a-hydroxykaurenoic acid is converted to GA12-aldehyde by KAO
GA12-aldehyde is converted to GA12 by KAO.

GA12 is processed to the bioactive GA4 by oxidations on C-20 and C-3, accomplished by two soluble ODDs: GA 20-oxidase and GA 3-oxidase.

One or two genes encode the enzymes responsible for the first steps of GA biosynthesis in Arabidopsis and rice. Null alleles of the genes encoding CPS, KS, and KO result in GA-deficient Arabidopsis dwarfs. Multigene families encode the 2ODDs that catalyze the conversion of GA12 to bioactive GA4. AtGA3ox1 and AtGA3ox2, two of the four genes that encode GA3ox in Arabidopsis, affect vegetative development. Environmental stimuli regulate AtGA3ox1 and AtGA3ox2 activity during seed germination. In Arabidopsis, GA20ox overexpression leads to an increase in GA concentration.

Sites of biosynthesis
Most bioactive gibberellins are located in actively growing organs of plants. Both GA20ox and GA3ox genes (genes coding for GA 20-oxidase and GA 3-oxidase) and the SLENDER1 gene (a GA signal transduction gene) are found in growing organs of rice, which suggests that bioactive GA synthesis occurs at the site of action in growing organs in plants. During flower development, the tapetum of anthers is believed to be a primary site of GA biosynthesis.

Differences between biosynthesis in fungi and lower plants
The plant Arabidopsis and the fungus Gibberella fujikuroi possess different GA pathways and enzymes. P450s in fungi perform functions analogous to those of KAOs in plants. The function of CPS and KS in plants is performed by a single enzyme (CPS/KS) in fungi. In plants the gibberellin biosynthesis genes are found scattered over multiple chromosomes, but in fungi they are found on a single chromosome. Plants produce low amounts of gibberellic acid, so for industrial purposes it is produced by microorganisms. Industrially, GA3 can be produced by submerged fermentation, but this process gives low yields at high production cost, and hence a high sale price. An alternative process that reduces production costs is solid-state fermentation (SSF), which allows the use of agro-industrial residues.

Catabolism
Several mechanisms for inactivating gibberellins have been identified. 2β-hydroxylation deactivates them, and is catalyzed by GA2-oxidases (GA2oxs). Some GA2oxs use 19-carbon gibberellins as substrates, while others use 20-carbon GAs. Cytochrome P450 mono-oxygenase, encoded by elongated uppermost internode (eui), converts gibberellins into 16α,17-epoxides.
Rice eui mutants amass bioactive gibberellins at high levels, which suggests that cytochrome P450 mono-oxygenase is a main enzyme responsible for deactivating GA in rice. The Gamt1 and gamt2 genes encode enzymes that methylate the C-6 carboxyl group of GAs. In gamt1 and gamt2 mutants, concentrations of GA in developing seeds are increased.

Homeostasis
Feedback and feedforward regulation maintains the levels of bioactive gibberellins in plants. Levels of AtGA20ox1 and AtGA3ox1 expression are increased in a gibberellin-deficient environment and decreased after the addition of bioactive GAs. Conversely, expression of the gibberellin deactivation genes AtGA2ox1 and AtGA2ox2 is increased with the addition of gibberellins.

Regulation

Regulation by other hormones
The auxin indole-3-acetic acid (IAA) regulates the concentration of GA1 in elongating internodes in peas. Removal of IAA by removal of the apical bud, the auxin source, reduces the concentration of GA1, and reintroduction of IAA reverses these effects to increase the concentration of GA1. This has also been observed in tobacco plants. Auxin increases GA 3-oxidation and decreases GA 2-oxidation in barley. Auxin also regulates GA biosynthesis during fruit development in peas. These discoveries in different plant species suggest that the auxin regulation of GA metabolism may be a universal mechanism. Ethylene decreases the concentration of bioactive GAs.

Regulation by environmental factors
Recent evidence suggests that fluctuations in GA concentration influence light-regulated seed germination, photomorphogenesis during de-etiolation, and photoperiod regulation of stem elongation and flowering. Microarray analysis showed that about one fourth of cold-responsive genes are related to GA-regulated genes, which suggests that GA influences the response to cold temperatures. Plants reduce their growth rate when exposed to stress. A relationship between GA levels and the amount of stress experienced has been suggested in barley.

Role in seed development
Bioactive GA and abscisic acid (ABA) levels have an inverse relationship and regulate seed development and germination. Levels of FUS3, an Arabidopsis transcription factor, are upregulated by ABA and downregulated by gibberellins, which suggests that a regulation loop establishes the balance of gibberellins and abscisic acid. In practice, this means that farmers can alter this balance to make fruits mature a little later or at the same time, or to keep the fruit on the trees until harvest day (because ABA participates in fruit maturation, and many crops otherwise mature and drop a few fruits a day for several weeks, which is undesirable for markets).

Signalling mechanism

Receptor
In the early 1990s, several lines of evidence suggested the existence of a GA receptor in oat seeds located at the plasma membrane. However, despite intensive research, no membrane-bound GA receptor has been isolated to date. This, along with the discovery of a soluble receptor, GA insensitive dwarf 1 (GID1), has led many to doubt that a membrane-bound receptor exists. GID1 was first identified in rice, and in Arabidopsis there are three orthologs of GID1: AtGID1a, b, and c. GID1s have a high affinity for bioactive GAs. GA binds to a specific binding pocket on GID1; the C3-hydroxyl on GA makes contact with tyrosine-31 in the GID1 binding pocket. GA binding to GID1 causes changes in GID1 structure, causing a "lid" on GID1 to cover the GA binding pocket.
The movement of this lid exposes a surface which enables the binding of GID1 to DELLA proteins.

DELLA proteins: repression of a repressor
DELLA proteins (such as SLR1 in rice or GAI and RGA in Arabidopsis) are repressors of plant development, characterized by the presence of a DELLA motif (aspartate-glutamate-leucine-leucine-alanine, or D-E-L-L-A in the single-letter amino acid code). DELLAs inhibit seed germination, seed growth, and flowering; GA reverses these effects. When gibberellins bind to the GID1 receptor, the interaction between GID1 and DELLA proteins is enhanced, forming a GA-GID1-DELLA complex. In that complex the structure of the DELLA proteins is thought to change, enabling their binding to F-box proteins for degradation. F-box proteins (SLY1 in Arabidopsis or GID2 in rice) catalyse the addition of ubiquitin to their targets. Adding ubiquitin to DELLA proteins promotes their degradation via the 26S proteasome. This releases cells from the repressive effects of DELLAs.

Targets of DELLA proteins

Transcription factors
The first targets of DELLA proteins identified were Phytochrome Interacting Factors (PIFs). PIFs are transcription factors that negatively regulate light signalling and are strong promoters of elongation growth. In the presence of GA, DELLAs are degraded, which then allows PIFs to promote elongation. It was later found that DELLAs repress a large number of other transcription factors, among which are positive regulators of auxin, brassinosteroid and ethylene signalling. DELLAs can repress transcription factors either by stopping their binding to DNA or by promoting their degradation.

Prefoldins and microtubule assembly
In addition to repressing transcription factors, DELLAs also bind to prefoldins (PFDs). PFDs are molecular chaperones (they assist in the folding of other proteins) that work in the cytosol, but when DELLAs bind to them, they are restricted to the nucleus. An important function of PFDs is to assist in the folding of β-tubulin, a vital component of the cytoskeleton in the form of microtubules. As such, in the absence of gibberellins (when DELLA levels are high), PFD activity is reduced, leading to a lower cellular pool of β-tubulin. When GA is present, the DELLAs are degraded and PFDs can move to the cytosol and assist in the folding of β-tubulin. GA thus allows for re-organisation of the cytoskeleton and the elongation of cells. Microtubules are also required for the trafficking of membrane vesicles, which is needed for the correct positioning of several hormone transporters. Among the best-characterized hormone transporters are the PIN proteins, which are responsible for the movement of the hormone auxin between cells. In the absence of gibberellins, DELLA proteins reduce the levels of microtubules and thereby inhibit membrane vesicle trafficking. This reduces the level of PIN proteins at the cell membrane, and the level of auxin in the cell. GA reverses this process and allows PIN protein trafficking to the cell membrane, enhancing the level of auxin in the cell.
Gibberellin
Biology
3,286
40,351,046
https://en.wikipedia.org/wiki/Clinical%20physiology
Clinical physiology is an academic discipline within the medical sciences and a clinical medical specialty for physicians in the health care systems of Sweden, Denmark, Portugal and Finland. Clinical physiology is characterized as a branch of physiology that uses a functional approach to understand the pathophysiology of a disease.

Overview
As a specialty for medical doctors, clinical physiology is a diagnostic specialty in which patients are subjected to specialized tests of the functions of the heart, blood vessels, lungs, kidneys, gastrointestinal tract, and other organs. Testing methods include evaluation of electrical activity (e.g. electrocardiogram of the heart), blood pressure (e.g. ankle brachial pressure index), and air flow (e.g. pulmonary function testing using spirometry). In addition, clinical physiologists measure movements, velocities, and metabolic processes through imaging techniques such as ultrasound, echocardiography, magnetic resonance imaging (MRI), x-ray computed tomography (CT), and nuclear medicine scanners (e.g. single photon emission computed tomography (SPECT) and positron emission tomography (PET), with and without CT or MRI).

History
The field of clinical physiology was originally founded by Professor Torgny Sjöstrand in Sweden, and it continues to make its way around the world in other hospitals and academic environments. Sjöstrand was the first to establish departments for clinical physiology separate from those of physiology, during his work at the Karolinska Hospital in Stockholm. Along with Sjöstrand, another influential name in clinical physiology was P. K. Anokhin. Anokhin contributed heavily to this branch of physiology, working diligently to use his theory of functional systems to solve medical mysteries among his patients.

In Sweden, clinical physiology was originally a discipline of its own; however, between 2008 and 2015, it was categorized as a sub-discipline of radiology. For this reason, those pursuing a career in clinical physiology had to first become registered and certified radiologists before becoming clinical physiologists. Since 2015, clinical physiology has again been a separate discipline, independent of radiology.

Role
Human physiology is the study of bodily functions. Clinical physiology examinations typically involve assessments of such functions, as opposed to assessments of structures and anatomy. The specialty encompasses the development of new physiological tests for medical diagnostics. Using equipment to measure, monitor and record patients' bodily functions is very helpful in many hospitals; moreover, it helps doctors diagnose patients correctly. Some clinical physiology departments perform tests from related medical specialties, including nuclear medicine, clinical neurophysiology, and radiology. In the health care systems of countries that lack this specialty, the tests performed in clinical physiology are often performed by the various organ-specific specialties in internal medicine, such as cardiology, pulmonology, nephrology, and others.

In Australia, the United Kingdom, and many other Commonwealth and European countries, clinical physiology is not a medical specialty for physicians. Instead, it is a non-medical allied health profession (scientist, physiologist or technologist), whose members may practice as a cardiac scientist, vascular scientist, respiratory scientist, sleep scientist, or, in Ophthalmic and Vision Science, as an Ophthalmic Science Practitioner (UK).
These professionals also aid in the diagnosis of disease and manage patients, with an emphasis on understanding physiological and pathophysiological pathways. Practitioners within the clinical physiology field include audiologists, cardiac physiologists, gastro-intestinal physiologists, neurophysiologists, respiratory physiologists, and sleep physiologists.

External links
Scandinavian Society of Clinical Physiology and Nuclear Medicine (SSCPNM) http://www.sscpnm.com/
The official journal of the SSCPNM: Clinical Physiology and Functional Imaging http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1475-097X
Clinical physiology
Biology
838
492,171
https://en.wikipedia.org/wiki/Video4Linux
Video4Linux (V4L for short) is a collection of device drivers and an API for supporting realtime video capture on Linux systems. It supports many USB webcams, TV tuners, and related devices, standardizing their output so programmers can easily add video support to their applications. Video4Linux is responsible for creating V4L2 device nodes, i.e. device files (/dev/videoX, /dev/vbiX and /dev/radioX), and for tracking data from these nodes. Device node creation is handled by V4L device drivers using the video_device struct (v4l2-dev.h), which can either be allocated dynamically or embedded in another larger struct.

Video4Linux was named after Video for Windows (sometimes abbreviated "V4W"), but is not technically related to it.

While Video4Linux is only available on Linux, there is a compatibility layer for FreeBSD called Video4BSD. It provides a way for many programs that depend on V4L to also compile and run on the FreeBSD operating system.

History
V4L was introduced late in the 2.1.x development cycle of the Linux kernel. Retroactively renamed V4L1, it was dropped in kernel 2.6.38.

V4L2 is the second version of V4L. It fixed some design bugs and started appearing in the 2.5.x kernels. Video4Linux2 drivers include a compatibility mode for Video4Linux1 applications, though the support can be incomplete, and it is recommended to use Video4Linux1 devices in V4L2 mode. The DVB-Wiki project is now hosted on the LinuxTV web site.

Some programs support V4L2 through the media resource locator v4l2://.

Software support
aMSN, Cheese (software), Cinelerra, CloudApp, Ekiga, FFmpeg, FreeJ, GStreamer, Guvcview, kdetv, Kopete, Libav, Linphone, LiVES, motion, MPlayer, mpv, MythTV, Open Broadcaster Software, OpenCV, Peek, PyGame, SDL3, Skype, Tvheadend, VLC media player, xawtv, Xine, ZoneMinder

Criticism
Video4Linux has a complex negotiation process, which has meant that not all applications support all cameras.

See also
Direct Rendering Manager – defines a kernel-to-user-space interface for access to graphics rendering and video acceleration
Mesa 3D – implements video acceleration APIs

External links
media_tree development git
v4l-utils development git
Linux Media Infrastructure API (V4L2, DVB and Remote Controllers)
Video4Linux-DVB wiki
Video4Linux resources
Video4BSD, a Video4Linux emulation layer
Video For Linux (V4L) sample applications
Video For Linux 2 (V4L2) sample application
Access Video4Linux devices from Java
kernel.org
OpenWrt Wiki
Linux UVC driver and tools, USB video device class (UVC)
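Because V4L2 exposes capture devices as ordinary /dev/videoX nodes, they can be enumerated with plain file operations. The following Python sketch lists the nodes together with the driver-reported device name from sysfs; the /sys/class/video4linux path is a standard Linux location, but the function name and output format here are illustrative inventions, not part of the V4L2 API.

from pathlib import Path

# List V4L2 device nodes and the driver-reported device names.
def list_v4l2_devices():
    devices = []
    for node in sorted(Path("/dev").glob("video*")):
        # Each node has a matching sysfs entry carrying its name.
        name_file = Path("/sys/class/video4linux") / node.name / "name"
        name = name_file.read_text().strip() if name_file.exists() else "unknown"
        devices.append((str(node), name))
    return devices

for node, name in list_v4l2_devices():
    print(f"{node}: {name}")

On a machine with a webcam this would typically print something like "/dev/video0: <camera name>"; actually capturing frames requires the V4L2 ioctl interface or a library such as OpenCV.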
Video4Linux
Technology
662
56,956,160
https://en.wikipedia.org/wiki/HCTU
HCTU is an amidinium coupling reagent used in peptide synthesis. It is analogous to HBTU. The HOBt moiety has a chlorine in the 6 position, which improves reaction rates and the synthesis of difficult couplings. HCTU and related reagents containing the 6-chloro-1-hydroxybenzotriazole moiety can be prepared by reaction with TCFH under basic conditions. HCTU can exist in an N-form (guanidinium) or an O-form (uronium), but the N-form is generally considered to be more stable for this class of reagent.

In vivo dermal sensitization studies according to OECD 429 confirmed that HCTU is a strong skin sensitizer, showing a response at 0.50 wt% in the Local Lymph Node Assay (LLNA), placing it in Globally Harmonized System of Classification and Labelling of Chemicals (GHS) Dermal Sensitization Category 1A.
HCTU
Chemistry,Biology
223
70,607,660
https://en.wikipedia.org/wiki/Phalaenopsis%20%C3%97%20leucorrhoda
Phalaenopsis × leucorrhoda is a species of orchid native to the Philippines. It is a natural hybrid of Phalaenopsis aphrodite and Phalaenopsis schilleriana.

Etymology
The specific epithet leucorrhoda, composed of leuco meaning white and rhodo meaning rose-coloured, is derived from the floral colouration.

Taxonomy
It has been confused with Phalaenopsis philippinensis, from which it differs in regard to the morphology of the callus of the labellum.
Phalaenopsis × leucorrhoda
Biology
141
2,336,205
https://en.wikipedia.org/wiki/Hilbert%27s%20fifteenth%20problem
Hilbert's fifteenth problem is one of the 23 Hilbert problems set out in a list compiled in 1900 by David Hilbert. The problem is to put Schubert's enumerative calculus on a rigorous foundation.

Introduction
Schubert calculus is the intersection theory of the 19th century, together with applications to enumerative geometry. Justifying this calculus was the content of Hilbert's 15th problem, and was also a major topic of 20th century algebraic geometry. In the course of securing the foundations of intersection theory, Van der Waerden and André Weil related the problem to the determination of the cohomology ring H*(G/P) of a flag manifold G/P, where G is a Lie group and P a parabolic subgroup of G.

The additive structure of the ring H*(G/P) is given by the basis theorem of Schubert calculus, due to Ehresmann, Chevalley, and Bernstein-Gel'fand-Gel'fand, stating that the classical Schubert classes on G/P form a free basis of the cohomology ring H*(G/P). The remaining problem of expanding products of Schubert classes as linear combinations of basis elements was called the characteristic problem by Schubert, who regarded it as "the main theoretic problem of enumerative geometry".

While enumerative geometry made no connection with physics during the first century of its development, it has since emerged as a central element of string theory.

Problem statement
The entirety of the original problem statement is as follows:

The problem consists in this: To establish rigorously and with an exact determination of the limits of their validity those geometrical numbers which Schubert especially has determined on the basis of the so-called principle of special position, or conservation of number, by means of the enumerative calculus developed by him. Although the algebra of today guarantees, in principle, the possibility of carrying out the processes of elimination, yet for the proof of the theorems of enumerative geometry decidedly more is requisite, namely, the actual carrying out of the process of elimination in the case of equations of special form in such a way that the degree of the final equations and the multiplicity of their solutions may be foreseen.

Schubert calculus
Schubert calculus is a branch of algebraic geometry introduced in the nineteenth century by Hermann Schubert to solve various counting problems of projective geometry (part of enumerative geometry). It was a precursor of several more modern theories, for example characteristic classes, and its algorithmic aspects in particular are still of interest. The objects introduced by Schubert are the Schubert cells, which are locally closed sets in a Grassmannian defined by conditions of incidence of a linear subspace in projective space with a given flag. For details see Schubert variety.

According to Van der Waerden and André Weil, Hilbert's fifteenth problem has been solved. In particular, (a) Schubert's characteristic problem has been solved by Haibao Duan and Xuezhi Zhao; (b) special presentations of the Chow rings of flag manifolds have been worked out by Borel, Marlin, Billey-Haiman and Duan-Zhao, et al.; and (c) major enumerative examples of Schubert have been verified by Aluffi, Harris, Kleiman, Xambó, et al.
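The basis theorem and the characteristic problem described above can be stated compactly. The notation below (Schubert classes σ_w indexed by the set W^P of minimal coset representatives) follows common usage in the literature rather than anything fixed in this article, so read it as an illustrative restatement:

H^*(G/P;\mathbb{Z}) \;=\; \bigoplus_{w \in W^P} \mathbb{Z}\,\sigma_w,
\qquad
\sigma_u \cdot \sigma_v \;=\; \sum_{w \in W^P} c_{u,v}^{\,w}\,\sigma_w .

Schubert's characteristic problem is then the determination of the integer structure constants c_{u,v}^w appearing in the second formula.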
Hilbert's fifteenth problem
Mathematics
683
22,488,181
https://en.wikipedia.org/wiki/Gautieria%20graveolens
Gautieria graveolens is a species of hypogeal fungus in the family Gomphaceae.
Gautieria graveolens
Biology
32
4,725,226
https://en.wikipedia.org/wiki/Frege%27s%20theorem
In metalogic and metamathematics, Frege's theorem is a metatheorem that states that the Peano axioms of arithmetic can be derived in second-order logic from Hume's principle. It was first proven, informally, by Gottlob Frege in his 1884 Die Grundlagen der Arithmetik (The Foundations of Arithmetic) and proven more formally in his 1893 Grundgesetze der Arithmetik I (Basic Laws of Arithmetic I). The theorem was re-discovered by Crispin Wright in the early 1980s and has since been the focus of significant work. It is at the core of the philosophy of mathematics known as neo-logicism (at least of the Scottish School variety).

Overview
In The Foundations of Arithmetic (1884), and later, in Basic Laws of Arithmetic (vol. 1, 1893; vol. 2, 1903), Frege attempted to derive all of the laws of arithmetic from axioms he asserted as logical (see logicism). Most of these axioms were carried over from his Begriffsschrift; the one truly new principle was one he called Basic Law V (now known as the axiom schema of unrestricted comprehension): the "value-range" of the function f(x) is the same as the "value-range" of the function g(x) if and only if ∀x[f(x) = g(x)].

However, not only did Basic Law V fail to be a logical proposition, but the resulting system proved to be inconsistent, because it was subject to Russell's paradox.

The inconsistency in Frege's Grundgesetze overshadowed Frege's achievement: according to Edward Zalta, the Grundgesetze "contains all the essential steps of a valid proof (in second-order logic) of the fundamental propositions of arithmetic from a single consistent principle." This achievement has become known as Frege's theorem.

Frege's theorem in propositional logic
In propositional logic, Frege's theorem refers to this tautology:

(P → (Q → R)) → ((P → Q) → (P → R))

The theorem already holds in one of the weakest logics imaginable, the constructive implicational calculus. The proof under the Brouwer–Heyting–Kolmogorov interpretation reads f ↦ g ↦ p ↦ (f(p))(g(p)). In words: "Let f denote a reason that P implies that Q implies R, and let g denote a reason that P implies Q. Then, given such an f and g, and given a reason p for P, we know that Q holds by g and that Q implies R holds by f. So R holds."

A truth table gives a semantic proof: for each of the eight possible assignments of false or true to P, Q, and R, every subformula is evaluated according to the rules for the material conditional, and the whole formula evaluates to true in every case, i.e. it is a tautology. In fact, its antecedent and its consequent are even equivalent.
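As a quick mechanical check of the semantic proof just described, the tautology can be verified by brute force over all eight valuations. This Python sketch is illustrative only (the function names are inventions for this example); implies encodes the material conditional, and the inner assertion checks the stronger claim that antecedent and consequent agree in every row.

from itertools import product

# Material conditional: a -> b is false only when a is true and b is false.
def implies(a, b):
    return (not a) or b

def frege(p, q, r):
    antecedent = implies(p, implies(q, r))
    consequent = implies(implies(p, q), implies(p, r))
    # Antecedent and consequent agree row by row: they are equivalent.
    assert antecedent == consequent
    return implies(antecedent, consequent)

# True under all eight valuations, hence a tautology.
assert all(frege(p, q, r) for p, q, r in product([False, True], repeat=3))
print("tautology confirmed")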
Frege's theorem
Mathematics
710
41,927,524
https://en.wikipedia.org/wiki/Ascosphaera%20callicarpa
Ascosphaera callicarpa is a fungus common on the larval feces of the solitary bee Chelostoma florisomne, which nests in the Phragmites reeds of thatched roofs in Europe. Pathogenic Ascosphaera species afflict only the larval stage of bees. Typically, diseased larvae die in the larval stage; in rare occurrences, however, larvae have been observed to enter pupation before being overcome by the fungus.

Description
The mating system is heterothallic. Infected larvae appear shrunken, pale buff, and covered by a weft of hyphae, with or without the production of ascomata. The ascomata are greenish (immature) to black (mature) spore cysts produced on aerial hyphae above the larval cuticle, measuring 40–119 μm in diameter. The spore wall is pale greenish to yellowish-brown, nearly smooth, with minute punctae at high magnification. Spore balls are hyaline to pale yellowish, without granules, 7–20 μm in diameter, and mostly persistent. The ascospores are ellipsoid to somewhat sausage-shaped, and measure 2.1–3.9 by 1.1–1.7 μm. Cultures grown on Sabouraud dextrose agar show rapid growth after 2–6 days; they are white, with abundant production of spore cysts when both mating strains are present.

Ecology and distribution
Ascosphaera aggregata is an obligate pathogen with a preference for bees belonging to the family Megachilidae. This species has a broad distribution, with reports from both North America and Europe. This fast-growing saprotroph is associated primarily with solitary bees and is typically found growing on pollen provisions. Less common substrates from which A. agra has been isolated include the surface of a diseased M. rotundata larva with chalkbrood caused by A. aggregata, pollen within the gut of an otherwise healthy M. rotundata larva, and the honey of A. mellifera. Ascosphaera agra is the only species of the genus that has been found growing on plant material (grass silage) outside of the bee habitat. Pathogenicity studies demonstrated that A. agra is not a pathogen of solitary bees; however, it has been concluded that it is a weak pathogen of honeybees.
Ascosphaera callicarpa
Biology
513
14,367,845
https://en.wikipedia.org/wiki/Xenobiotic%20metabolism
Xenobiotic metabolism (from the Greek xenos "stranger" and biotic "related to living beings") is the set of metabolic pathways that modify the chemical structure of xenobiotics, which are compounds foreign to an organism's normal biochemistry, such as drugs and poisons. These pathways are a form of biotransformation present in all major groups of organisms, and are considered to be of ancient origin. These reactions often act to detoxify poisonous compounds; however, in cases such as the metabolism of alcohol, the intermediates in xenobiotic metabolism can themselves be the cause of toxic effects.

Xenobiotic metabolism is divided into three phases. In phase I, enzymes such as cytochrome P450 oxidases introduce reactive or polar groups into xenobiotics. These modified compounds are then conjugated to polar compounds in phase II reactions, catalysed by transferase enzymes such as glutathione S-transferases. Finally, in phase III, the conjugated xenobiotics may be further processed before being recognised by efflux transporters and pumped out of cells.

The reactions in these pathways are of particular interest in medicine as part of drug metabolism and as a factor contributing to multidrug resistance in infectious diseases and cancer chemotherapy. The actions of some drugs as substrates or inhibitors of enzymes involved in xenobiotic metabolism are a common reason for hazardous drug interactions. These pathways are also important in environmental science, with the xenobiotic metabolism of microorganisms determining whether a pollutant will be broken down during bioremediation or persist in the environment. The enzymes of xenobiotic metabolism, particularly the glutathione S-transferases, are also important in agriculture, since they may produce resistance to pesticides and herbicides.

Permeability barriers and detoxification
A major characteristic of xenobiotic toxic stress is that the exact compounds an organism is exposed to are largely unpredictable and may differ widely over time. The major challenge faced by xenobiotic detoxification systems is that they must be able to remove the almost-limitless number of xenobiotic compounds from the complex mixture of chemicals involved in normal metabolism. The solution that has evolved to address this problem is an elegant combination of physical barriers and low-specificity enzymatic systems.

All organisms use cell membranes as hydrophobic permeability barriers to control access to their internal environment. Polar compounds cannot diffuse across these cell membranes, and the uptake of useful molecules is mediated through transport proteins that specifically select substrates from the extracellular mixture. This selective uptake means that most hydrophilic molecules cannot enter cells, since they are not recognised by any specific transporters. In contrast, the diffusion of hydrophobic compounds across these barriers cannot be controlled, and organisms therefore cannot exclude lipid-soluble xenobiotics using membrane barriers.

However, the existence of a permeability barrier means that organisms were able to evolve detoxification systems that exploit the hydrophobicity common to membrane-permeable xenobiotics. These systems therefore solve the specificity problem by possessing such broad substrate specificities that they metabolise almost any non-polar compound. Useful metabolites are excluded, since they are polar and in general contain one or more charged groups.
The detoxification of the reactive by-products of normal metabolism cannot be achieved by the systems outlined above, because these species are derived from normal cellular constituents and usually share their polar characteristics. However, since these compounds are few in number, specific enzymes can recognize and remove them. Examples of these specific detoxification systems are the glyoxalase system, which removes the reactive aldehyde methylglyoxal, and the various antioxidant systems that eliminate reactive oxygen species.

Phases of detoxification
The metabolism of xenobiotics is often divided into three phases: modification, conjugation, and excretion. These reactions act in concert to detoxify xenobiotics and remove them from cells.

Phase I - modification
In phase I, a variety of enzymes act to introduce reactive and polar groups into their substrates. One of the most common modifications is hydroxylation, catalysed by the cytochrome P-450-dependent mixed-function oxidase system. These enzyme complexes act to incorporate an atom of oxygen into nonactivated hydrocarbons, which can result in either the introduction of hydroxyl groups or N-, O- and S-dealkylation of substrates. The reaction mechanism of the P-450 oxidases proceeds through the reduction of cytochrome-bound oxygen and the generation of a highly reactive oxyferryl species; the overall stoichiometry of the hydroxylation is:

RH + O2 + NADPH + H+ → ROH + H2O + NADP+

Phase II - conjugation
In subsequent phase II reactions, these activated xenobiotic metabolites are conjugated with charged species such as glutathione (GSH), sulfate, glycine, or glucuronic acid. These reactions are catalysed by a large group of broad-specificity transferases, which in combination can metabolise almost any hydrophobic compound that contains nucleophilic or electrophilic groups. One of the most important of these groups are the glutathione S-transferases (GSTs). The addition of large anionic groups (such as GSH) detoxifies reactive electrophiles and produces more polar metabolites that cannot diffuse across membranes, and may therefore be actively transported.

Phase III - further modification and excretion
After phase II reactions, the xenobiotic conjugates may be further metabolised. A common example is the processing of glutathione conjugates to acetylcysteine (mercapturic acid) conjugates. Here, the γ-glutamate and glycine residues in the glutathione molecule are removed by gamma-glutamyl transpeptidase and dipeptidases. In the final step, the cysteine residue in the conjugate is acetylated.

Conjugates and their metabolites can be excreted from cells in phase III of their metabolism, with the anionic groups acting as affinity tags for a variety of membrane transporters of the multidrug resistance protein (MRP) family. These proteins are members of the family of ATP-binding cassette transporters and can catalyse the ATP-dependent transport of a huge variety of hydrophobic anions, and thus act to remove phase II products to the extracellular medium, where they may be further metabolised or excreted.

Endogenous toxins
The detoxification of endogenous reactive metabolites, such as peroxides and reactive aldehydes, often cannot be achieved by the system described above. This is because these species are derived from normal cellular constituents and usually share their polar characteristics. However, since these compounds are few in number, it is possible for enzymatic systems to utilize specific molecular recognition to recognize and remove them.
The similarity of these molecules to useful metabolites therefore means that different detoxification enzymes are usually required for the metabolism of each group of endogenous toxins. Examples of these specific detoxification systems are the glyoxalase system, which acts to dispose of the reactive aldehyde methylglyoxal, and the various antioxidant systems that remove reactive oxygen species.

History
Studies of how people transform the substances that they ingest began in the mid-nineteenth century, with chemists discovering that organic chemicals such as benzaldehyde could be oxidized and conjugated to amino acids in the human body. During the remainder of the nineteenth century, several other basic detoxification reactions were discovered, such as methylation, acetylation, and sulfonation.

In the early twentieth century, work moved on to the investigation of the enzymes and pathways that were responsible for the production of these metabolites. This field became defined as a separate area of study with the publication by Richard Williams of the book Detoxication Mechanisms in 1947. This modern biochemical research resulted in the identification of glutathione S-transferases in 1961, followed by the discovery of cytochrome P450s in 1962 and the realization of their central role in xenobiotic metabolism in 1963.

See also
Drug design
Drug metabolism
Microbial biodegradation
Biodegradation
Bioremediation
Antioxidant
SPORCalc, an example process for exploring xenobiotic and drug metabolism databases

External links
Databases: Drug metabolism database; Directory of P450-containing Systems; University of Minnesota Biocatalysis/Biodegradation Database
Drug metabolism: Small Molecule Drug Metabolism; Drug metabolism portal
Microbial biodegradation: Microbial Biodegradation, Bioremediation and Biotransformation
History: History of Xenobiotic Metabolism
Xenobiotic metabolism
Chemistry,Biology
1,848
24,722,548
https://en.wikipedia.org/wiki/Mi%27kma%27ki
Mi'kma'ki or Mi'gma'gi is composed of the traditional and current territories, or country, of the Mi'kmaq people, in what is now Nova Scotia, New Brunswick, Prince Edward Island, and eastern Quebec, Canada. It is shared by an inter-Nation forum among Mi'kmaq First Nations and is divided into seven geographical and traditional districts, with Taqamkuk separately represented as an eighth district, formerly joined with Unama'ki (Cape Breton). Mi'kma'ki and the Mi'kmaw Nation are one of the confederated entities within the Wabanaki Confederacy.

History
Each district was autonomous, headed by a Sagamaw. He would meet with wampum readers and knowledge keepers called turkey keepers, a women's council, and the Kji Sagamaw, or Grand Chief, to form the Sante'Mawio'mi (or Mi'kmawey Mawio'mi), the Grand Council. The seat of the Sante'Mawio'mi is at Mniku, Unama'kik; it still functions as the capital today, in the Potlotek reserve.

Following European contact, Mi'kma'ki was colonized by the French and British, who made competing claims for the land in what is now Nova Scotia. Siding with the French, the Mi'kmaq fought alongside other Wabanaki warriors during the repeated wars between France and Britain in North America in the 17th and 18th centuries, between 1688 and 1763. These European powers divided Mi'kma'ki in the treaties of Utrecht (1713) and Paris (1763). After the latter, when France ceded its territories east of the Mississippi River to Britain, the British claimed Mi'kma'ki as their possession by conquest. The defeated Mi'kmaq signed the Peace and Friendship Treaties to end hostilities and encourage cooperation between the Wabanaki nations and the British. They wanted to ensure the survival of the Mi'kmaq people, whose numbers had dwindled to a few thousand from disease and starvation.

The power held within Mi'kma'ki faded further after the Confederation of Canada in 1867 united the colonies, establishing four provinces. The Dominion of Canada passed the Indian Act in 1876, which resulted in the loss of autonomous governance among the First Nations. The Mi'kmaq have said that they never conceded sovereignty over their traditional lands, although some analysts have advanced legal arguments that the Peace and Friendship treaties legitimized the takeover of the land by Britain.

For more than 100 years, until 2020, the Sante'Mawio'mi (or Grand Council) was limited to functioning solely as a spiritual and dialogue forum, and the Mi'kmaq and other First Nations were required to elect representatives for their governments. In 2020, however, by agreement with the Government of Canada, the Grand Council was authorized to consult on behalf of the Mi'kmaq First Nations and all First Nations in the province.

Governance
Traditionally, each Mi'kmaq district had its own independent government, composed of a chief and a council. The council included the band chiefs, elders, and other important leaders. The role of the councils was similar to that of any independent government and included the ability to make laws, establish a justice system, divide the common territory among the people for hunting and fishing, make war, and seek peace.

The overarching Grand Council, the Santeꞌ Mawioꞌmi, was composed of the keptinaq (captains), or the district chiefs. The Grand Council also included elders, putus (historians reading the wampum belts), and a council of women. The Grand Council was headed by a grand chief, who was one of the district chiefs, generally the Unama'kik chief. Succession was hereditary.
The seat of the Grand Council was generally on Unamaꞌkik (Cape Breton Island).

Districts
The eight districts are the following (names are spelled in the Francis-Smith orthography, followed by the Listuguj orthography in parentheses):
Epekwitk aq Piktuk (Epegwitg aq Pigtug)
Eskikewa'kik (Esge'gewa'gi)
Kespek (Gespe'gewa'gi)
Kespukwitk (Gespugwitg)
Siknikt (Signigtewa'gi)
Sipekni'katik (Sugapune'gati)
Unama'kik (Unama'gi)
Ktaqamkuk (Gtaqamg)

See also
Mi'kmaq
Grand Council (Mi'kmaq)
Mi'kmaq hieroglyphic writing
Mi'kma'ki
Environmental_science
982
76,393,797
https://en.wikipedia.org/wiki/Holmium%20iodate
Holmium iodate is an inorganic compound with the chemical formula Ho(IO3)3. It can be obtained by reacting holmium periodate and periodic acid in water at 170 °C. Its solubility in water is (1.162 ± 0.001) × 10−3 mol·dm−3 at 25 °C; adding ethanol or methanol to the water reduces the solubility.
Holmium iodate
Chemistry
89
1,020,602
https://en.wikipedia.org/wiki/Lucida
Lucida is an extended family of related typefaces designed by Charles Bigelow and Kris Holmes and released from 1984 onwards. The family is intended to be extremely legible when printed at small size or displayed on a low-resolution display – hence the name, from 'lucid' (clear or easy to understand).

There are many variants of Lucida, including serif (Fax, Bright), sans-serif (Sans, Sans Unicode, Grande, Sans Typewriter) and scripts (Blackletter, Calligraphy, Handwriting). Many are released with other software, most notably Microsoft Office. Bigelow and Holmes, together with the (now defunct) TeX vendor Y&Y, extended the Lucida family with a full set of TeX mathematical symbols, making it one of the few typefaces that provide full-featured text and mathematical typesetting within TeX. Lucida is still licensed commercially through the TUG store as well as through their own web store. The fonts are occasionally updated.

Key features
The Lucida fonts have a large x-height (tall lower-case letters), open apertures and quite widely spaced letters, classic features of fonts designed for legibility in body text. Capital letters were designed to be somewhat narrow and short in order to make all-caps acronyms blend in. Bigelow has said in an interview that the characters were designed based on hand-drawn bitmaps, to see what parts of letters needed to be clear in bitmap form, before creating outlines that would render as clear bitmaps. The fonts include ligatures, but these are not needed for text, allowing use on simplistic typesetting systems. x-heights are consistent between the fonts. Hinting was used to allow onscreen display.

Lucida Arrows
A family of fonts containing arrows.

Lucida Blackletter
A family of cursive blackletter fonts released in 1992.

Lucida Bright
Based on Lucida Serif, it features more contrasted strokes and serifs. The font was first used as the text face for Scientific American magazine, and its letter-spacing was tightened to give it a slightly closer fit for use in two- and three-column formats.

Lucida Calligraphy
A script font developed from Chancery cursive, released in 1991.

Lucida Casual
A casual font, released in 1994. Similar to Lucida Handwriting, but without connecting strokes. In 2014, Bigelow & Holmes released additional weights in normal and narrow widths.

Lucida Console
A monospaced font that is a variant of Lucida Sans Typewriter, with smaller line spacing and the addition of the WGL4 character set. In 2014, Bigelow & Holmes released bold weights and italics in normal and narrow widths. Lucida Console was the default font in Microsoft Notepad from Windows 2000 through Windows 7, its replacement being Consolas. This was also the font for the blue screen of death from Windows XP to Windows 7.

Lucida Fax
A slab serif font family released in 1992. Derived from Lucida, and specifically designed for telefaxing.

Lucida Handwriting
A font, released in 1992, designed to resemble informal cursive handwriting with modern plastic-tipped or felt-tipped pens or markers. In 2014, Bigelow & Holmes added additional weights and widths to the family.

Lucida Icons
A family of fonts for ornament and decoration uses. It contains ampersands, interrobangs, asterisms, circled Lucida Sans numerals, etc.

Lucida Math
A family of fonts for mathematical expressions. Lucida Math Extension contains only mathematical symbols. Lucida Math Italic contains Latin characters from Lucida Serif Italic, but with smaller line spacing, and added Greek letters.
Lucida Math contains mathematical symbols, plus blackletter letters (from Lucida Blackletter) and script letters (from Lucida Calligraphy Italic) in the Letterlike Symbols region.

Lucida OpenType
First released in March 2012, this collection includes OpenType math fonts in regular and bold weights, and Lucida Bright, Lucida Sans Typewriter, and Lucida Sans text fonts in the usual four variants (regular, italic, bold, bold italic). The regular math font includes an entirely new math script alphabet in Roundhand style, among other new characters. The Lucida Bright text fonts include Unicode Latin character blocks, including Basic Latin, Latin-1, and Latin Extended-A characters for American, Western European, Central European, Turkish, and other Latin-based orthographies.

Lucida Sans
A family of humanist sans-serif fonts complementing Lucida Serif. The italic is a "true italic" rather than a "sloped roman", inspired by the chancery cursive handwriting of the Italian renaissance, which Bigelow and Holmes studied while at Reed College in the 1960s.

Lucida Grande
A version of Lucida Sans with expanded character sets, released around 2000. It supports the Latin, Greek, Cyrillic, Arabic, Hebrew and Thai scripts. It is most notable for having been used as the system font of macOS until version 10.10.

Lucida Sans Typewriter
Also called Lucida Typewriter Sans, this is a sans-serif monospaced font family, designed for typewriters. Its styling is reminiscent of Letter Gothic and Andalé Mono; a variant, Lucida Console, replaced those two fonts on Microsoft Windows systems.

Lucida Sans Unicode
Based on Lucida Sans Regular, this version added characters in the Arrows, Block Elements, Box Drawing, Combining Diacritical Marks, Control Pictures, Currency Symbols, Cyrillic, General Punctuation, Geometric Shapes, Greek and Coptic, Hebrew, IPA Extensions, Latin Extended-A, Latin Extended-B, Letterlike Symbols, Mathematical Operators, Miscellaneous Symbols, Miscellaneous Technical, Spacing Modifier Letters, and Superscripts and Subscripts regions.

Lucida Serif
The original Lucida font, designed in 1985, featuring a thickened serif. It was simply called Lucida when it was first released.

Lucida Typewriter Serif
Also called Lucida Typewriter, this font is a slab serif monospaced version of Lucida Fax, but with wider serifs. The letters are wider than in Lucida Sans Typewriter.

Usages
Lucida Console is used in various parts of Microsoft Windows. From Windows 2000 until Windows 7, Lucida Console was the default typeface of Notepad. In Windows 2000 until Windows 7, and in Windows CE, Lucida Console was the typeface of the Blue Screen of Death.

Lucida Grande, as well as Lucida Sans Demibold (identical outlines to Lucida Grande Bold but with tighter spacing of numerals), were used as the primary user interface font in Apple Inc.'s Mac OS X operating system until OS X Yosemite, as well as in many programs, including Front Row.

Lucida is also used in the logo of Air Canada. A collection of Lucida variants is included in the Oracle JRE 9. Lucida Calligraphy was used in the logo of Gladden Entertainment.

In April 2012, Lucida Sans was selected by GfK Blue Moon as the font for a package design as part of a proposed law in Australia banning logos on cigarette packaging. The proposed law requires cigarettes to be sold in dark olive-brown packages that depict graphic images of the effects of smoking, with the cigarette's brand printed in Lucida Sans.
According to Tom Delaney, a senior designer with New York design consultant Muts & Joy, "Lucida Sans is one of the least graceful sans-serif typefaces designed. It's clumsy in its line construction." On August 15, 2012, the Australian government approved the ban on cigarette logos, effectively replacing them with the deliberately unattractive plain packaging.

See also
MathTime
Wingdings

External links
Lucida and TeX (TeX Users Group)
Lucida Font Family Group - by Kris Holmes, Charles Bigelow (Linotype corporation)
Notes on Lucida, by Charles Bigelow
Lucida Family Overview by Charles Bigelow and Kris Holmes
Lucida Calligraphy Text Samples - Thin, Lite, Normal, Bold, UltraBlack
Lucida Handwriting Text Samples - Thin, Lite, Normal, Bold, UltraBlack
Lucida Casual Text Samples - Thin, Lite, Normal, Bold, UltraBlack
Lucida Grande Text Samples - Light, Normal, Bold, Black
Lucida OpenType font set
Lucida Bright Math OT
Ulrik Vieth and Mojca Miklavec, Another incarnation of Lucida: Towards Lucida OpenType, TUGboat, Volume 32 (2011), No. 2
All Lucida fonts by Charles Bigelow and Kris Holmes
Interview with Charles Bigelow (Yue Wang)
Lucida
Mathematics
1,875
75,704,957
https://en.wikipedia.org/wiki/Concepci%C3%B3n%20Campa%20Huergo
Concepción Campa Huergo (born 1951) is a Cuban medical researcher. She was the lead scientist in the development of VA-MENGOC-BC, the first vaccine against meningitis B, in 1989. For this work she received the World Intellectual Property Organization gold medal. She was subsequently elected to the 5th Politburo of the Communist Party of Cuba in 1997.
Concepción Campa Huergo
Biology
90
521,787
https://en.wikipedia.org/wiki/Von%20Karman%20Institute%20for%20Fluid%20Dynamics
The von Karman Institute for Fluid Dynamics (VKI) is a non-profit educational and scientific organization which specializes in three specific fields: aeronautics and aerospace, environment and applied fluid dynamics, and turbomachinery and propulsion. Founded in 1956, it is located in Sint-Genesius-Rode, Belgium.

About
The von Karman Institute for Fluid Dynamics is a non-profit international educational and scientific organization working in three specific fields: aeronautics and aerospace, environment and applied fluid dynamics, and turbomachinery and propulsion. The VKI provides education in these areas for students from all over the world. A hundred students come to the Institute each year to study fluid dynamics, whether for a PhD programme, a research master in fluid dynamics, a final-year project, or to gather further knowledge while doing a work placement in a specific area.

Each year, lecture series and events are organized inside and outside of the organization. These events emphasize topics of great importance, such as aerodynamics, fluid mechanics, and heat transfer, with application to aeronautics, space, turbomachinery, the environment and industrial fluid dynamics. The Institute has built international renown in these domains, and students of these fields, researchers, industrialists and engineers follow these lecture series. The information presented is accurate and reliable.

History
In the course of 1955, Professor Theodore von Kármán proposed with his assistants the establishment of an institution devoted to training and research in aerodynamics which would be open to young engineers and scientists of the NATO nations. It was strongly felt that this form of international undertaking would fulfil the important objective of fostering fruitful exchanges and understanding between the participating nations in a well-defined technical field.

The von Karman Institute was established in October 1956 in the buildings which then formed the aeronautical laboratory of the Civil Aviation Authority of the Belgian Ministry of Communications. The history of the laboratory goes back to 1922 when, on farmland purchased by the Belgian Government, the first building was erected to house the STAé (Service Technique de l'Aéronautique), i.e. the technical services of the Civil Aviation Authority, then under the Ministry of Defence. The building was designed to accommodate a large low-speed wind tunnel of the Eiffel type, with an open return circuit and an open-jet test section of 2 m diameter, as well as offices and shops. It still exists and has been refurbished internally, after removal of the low-speed tunnel, to make room for modern turbomachinery and high-speed facilities. A second building was added in 1935 to house offices and laboratories; it is now the Institute's administrative building. The last addition was made after the war, in 1949, with the construction of a large building specially designed to house a supersonic tunnel and a multi-configuration low-speed facility.

The AGARD Study Group of 1955 stated, in terms of training and research: "The Institute should aim toward a training which, apart from its direct and obvious ties with aeronautical industries, would be of value in wider areas such as industrial or scientific research where the application of experimental techniques of aerodynamics would be profitable".
Structure
Three departments are hosted at the von Karman Institute for Fluid Dynamics, from the oldest to the youngest:

Aeronautics and Aerospace
A wide spectrum of facilities and computational tools covers the flow range from the low-speed regime of commercial aircraft to the supersonic and hypersonic regime of atmospheric space entry. The department focuses in particular on the modeling, simulation and experimental validation of atmospheric entry flows and thermal protection systems (TPS), including transition to turbulence and stability. The experimental studies are carried out in its top-level Mach 14, Mach 6 and induction-coupled plasma wind tunnels, for which dedicated measuring techniques have been developed, e.g. involving spectroscopic laser techniques. On the computational simulation side, the department has developed an extendable software platform, Coolfluid, for high-performance computational flow simulation, which incorporates its research on numerical algorithms, advanced physico-chemical and plasma models, as well as fluid-structure interaction and conjugate heat transfer.

Turbomachinery and Propulsion
The Turbomachinery and Propulsion department specializes in the aero-thermal aspects of turbomachinery components for aero-engines and industrial gas turbines, space propulsion units, steam turbines, and process-industry compressors and pumps. It has accumulated wide skills in high-speed wind tunnel testing and in the related development and application of measurement techniques. The department has acquired world-recognised expertise in the steady/unsteady aerodynamic and aero-thermal aspects of high-pressure (including cooling) and low-pressure turbomachinery components, through the design, development and use of a number of unique wind tunnels. On the computational side, the department has over 20 years of experience in the analysis of flow in turbomachines, and in design techniques and multi-disciplinary optimization methods for their components.

Environmental and Applied Fluid Dynamics
The Environmental and Applied Fluid Dynamics (EA) department covers all kinds of activities, complementary to those of the other two departments, related to fluid dynamics in the academic and industrial world. It has large expertise in the study of aeroacoustics, multiphase flows, vehicle aerodynamics, biological flows and environmental flows (including the study of the interaction between atmospheric winds and human activities). The department is also involved in the modeling of turbulence and in the development of advanced measurement techniques for fluid dynamics. It has acquired a unique expertise in the study of fluid dynamics in industrial processes, with the development and construction of experimental facilities dedicated to the study of industrial processes, and also in the simulation of industrial flows using CFD (computational fluid dynamics) codes.

Lecture series
The VKI organises up to twelve different one-week lecture series, with about 50 to 60 participants each, on specialized topics every year in various fields: industrial applications, turbomachinery, aerospace, aerodynamics, propulsion, aero engines, aeroacoustics, biological flows, and large eddy simulation. Each year, the VKI also organises thematic conferences in collaboration with Belgian and foreign universities and research institutes. These courses have gained worldwide recognition; subjects are chosen carefully, and the lecturers are well known for their professionalism and excellence in their specific fields.
See also Belgian Federal Science Policy Office (BELSPO) European Space Agency (ESA) European Union (Europa) NATO Research and Technology Organisation (NATO-RTO) External links von Karman Institute home page Research institutes in Belgium Engineering research institutes Sint-Genesius-Rode
Von Karman Institute for Fluid Dynamics
Engineering
1,302
43,357,824
https://en.wikipedia.org/wiki/Bacoside
Bacosides are a class of chemical compounds isolated from Bacopa monnieri. Chemically, they are dammarane-type triterpenoid saponins. There are at least twelve known members of the class. See also Bacoside A References Saponins Triterpene glycosides
Bacoside
Chemistry
67
6,495,416
https://en.wikipedia.org/wiki/Incompressible%20string
An incompressible string is a string with Kolmogorov complexity equal to its length, so that it has no shorter encodings. The pigeonhole principle can be used to prove that for any lossless compression algorithm, there must exist many incompressible strings. Example Suppose we have the string 12349999123499991234, and we are using a compression method that works by putting a special character into the string (say @) followed by a value that points to an entry in a lookup table (or dictionary) of repeating values. Let us imagine we have an algorithm that examines the string in 4 character chunks. Looking at our string, our algorithm might pick out the values 1234 and 9999 to place into its dictionary. Let us say that 1234 is entry 0 and 9999 is entry 1. Now the string can become: @0@1@0@1@0 This string is much shorter, although storing the dictionary itself will cost some space. However, the more repeats there are in the string, the better the compression will be. Our algorithm can do better though, if it can view the string in chunks larger than 4 characters. Then it can put 12349999 and 1234 into the dictionary, giving us: @0@0@1 This string is even shorter. Now consider another string: 1234999988884321 This string is incompressible by our algorithm. The only repeats that occur are 88 and 99. If we were to store 88 and 99 in our dictionary, we would produce: 1234@1@1@0@04321 This is just as long as the original string, because our placeholders for items in the dictionary are 2 characters long, and the items they replace are the same length. Hence, this string is incompressible by our algorithm. References Lossless compression algorithms String (computer science)
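The toy scheme in the example can be made concrete. The following Python snippet is a minimal sketch of such a dictionary compressor; the function name, the fixed chunk size of 4, and the @-index placeholder format are illustrative assumptions, not taken from the article. It also reflects the pigeonhole argument mentioned above: there are 2^n binary strings of length n but only 2^n − 1 strings shorter than n, so any lossless scheme must leave at least one string of every length without a shorter encoding.

def compress(s, chunk=4):
    # Toy dictionary compressor: repeated fixed-size chunks are replaced
    # by '@' followed by their index in a lookup table (illustrative only).
    pieces = [s[i:i + chunk] for i in range(0, len(s), chunk)]
    dictionary = []
    out = []
    for p in pieces:
        if len(p) == chunk and pieces.count(p) > 1:  # only replace repeated chunks
            if p not in dictionary:
                dictionary.append(p)
            out.append("@" + str(dictionary.index(p)))
        else:
            out.append(p)
    return "".join(out), dictionary

# The repetitive string from the example compresses well ...
print(compress("12349999123499991234"))  # ('@0@1@0@1@0', ['1234', '9999'])
# ... while the second string contains no repeated 4-character chunk,
# so this particular compressor cannot shorten it at all.
print(compress("1234999988884321"))      # ('1234999988884321', [])

Note that "incompressible" in the Kolmogorov sense is a statement about all possible descriptions, not just about one fixed scheme like this one.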
Incompressible string
Mathematics,Technology
400
10,109,173
https://en.wikipedia.org/wiki/Secondary%20crater
Secondary craters are impact craters formed by the ejecta that was thrown out of a larger crater. They sometimes form radial crater chains. In addition, secondary craters are often seen as clusters or rays surrounding primary craters. The study of secondary craters expanded rapidly around the mid-twentieth century, when researchers studying surface craters to estimate the age of planetary bodies realized that secondary craters contaminated the crater statistics of a body's crater count. Formation When an extraterrestrial object impacts a relatively stationary body at high velocity, an impact crater forms. The initial crater(s) formed by the collision are known as primary craters or impact craters. Material expelled from primary craters may form secondary craters (secondaries) under a few conditions: Primary craters must already be present. The gravitational acceleration of the extraterrestrial body must be great enough to drive the ejected material back toward the surface. The velocity with which the ejected material returns toward the body's surface must be large enough to form a crater. If ejected material is within an atmosphere, such as on Earth, Venus, or Titan, then it is more difficult to retain high enough velocity to create secondary impacts. Likewise, bodies with higher resurfacing rates, such as Io, also do not record surface cratering. Self-secondary crater Self-secondary craters are those that form from material ejected by a primary crater at such an angle that it falls back within the primary crater itself. Self-secondary craters have caused much controversy among scientists who excavate cratered surfaces with the intent of identifying their age based on the composition and melt material. An observed feature on Tycho has been interpreted to be a self-secondary crater morphology known as palimpsests. Appearance Secondary craters are formed around primary craters. When a primary crater forms following a surface impact, the shock waves from the impact will cause the surface area around the impact circle to stress, forming a circular outer ridge around the impact circle. Ejecta from this initial impact is thrust upward out of the impact circle at an angle toward the surrounding area of the impact ridge. This ejecta blanket, or broad area of impacts from the ejected material, surrounds the crater. Chains and clusters Secondary craters may appear as small single craters similar to a primary crater with a smaller radius, or as chains and clusters. A secondary crater chain is simply a row or chain of secondary craters lined adjacent to one another. Likewise, a cluster is a population of secondaries near to one another. Distinguishing factors of primary and secondary craters Impact energy Primary craters form from high-velocity impacts whose foundational shock waves must exceed the speed of sound in the target material. Secondary craters occur at lower impact velocities. However, they must still occur at high enough speeds to deliver stress to the target body and produce strains that exceed the limits of elasticity, that is, secondary projectiles must break the surface. It can be increasingly difficult to distinguish primary craters from secondary craters when the projectile fractures and breaks apart prior to impact. This depends on conditions in the atmosphere, coupled with projectile velocity and composition.
For instance, a projectile that strikes the moon will probably hit intact; whereas if it strikes the earth, it will be slowed and heated by atmospheric entry, possibly breaking up. In that case, the smaller chunks, now separated from the large impacting body, may impact the surface of the planet in the region outside the primary crater, which is where many secondary craters appear following primary surface impact. Impact angle For primary impacts, based on geometry, the most probable impact angle between two objects is 45°, and the distribution falls off rapidly outside the range 30°–60° (a standard geometric argument, sketched below). It is observed that impact angle has little effect on the shape of primary craters, except in the case of low angle impacts, where the resulting crater shape becomes less circular and more elliptical. The primary impact angle is much more influential on the morphology (shape) of secondary impacts. Experiments based on lunar craters suggest that the ejection angle is highest for the early-stage ejecta (that which is ejected from the primary impact at its earliest moments) and that the ejection angle decreases with time for the late-stage ejecta. For example, a primary impact perpendicular to the body's surface may produce early-stage ejection angles of 60°–70° and late-stage ejection angles that decrease to nearly 30°. Target type Mechanical properties of a target's regolith (existing loose rocks) will influence the angle and velocity of ejecta from primary impacts. Simulation studies suggest that a target body's regolith decreases the velocity of ejecta. Secondary crater sizes and morphology are also affected by the distribution of rock sizes in the regolith of the target body. Projectile type The depth of a secondary crater can be estimated from the target body's density. Studies of the Nördlinger Ries in Germany and of ejecta blocks surrounding lunar and martian crater rims suggest that ejecta fragments having a similar density would likely show the same depth of penetration, as opposed to ejecta of differing densities creating impacts of varying depths, such as primary impactors, i.e. comets and asteroids. Size and morphology Secondary crater size is dictated by the size of its parent primary crater. Primary craters can vary from microscopic to thousands of kilometers wide. The morphology of primary craters ranges from bowl-shaped to large, wide basins, where multi-ringed structures are observed. Two factors dominate the morphologies of these craters: material strength and gravity. The bowl-shaped morphology suggests that the topography is supported by the strength of the material, while the topography of the basin-shaped craters is overcome by gravitational forces and collapses toward flatness. The morphology and size of secondary craters are limited. Secondary craters exhibit a maximum diameter of less than 5% of that of their parent primary crater. The size of a secondary crater is also dependent on its distance from its primary. The morphology of secondaries is simple but distinctive. Secondaries that form closer to their primaries appear more elliptical with shallower depths. These may form rays or crater chains. The more distant secondaries appear similar in circularity to their parent primaries, but these are often seen in an array of clusters.
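The 45° figure quoted in the impact-angle discussion above follows from a standard geometric argument that is not spelled out in the article: for an isotropic flux of impactors striking a planar surface, the probability of an impact at angle θ from the horizontal is, in LaTeX notation,

P(\theta)\, d\theta = \sin(2\theta)\, d\theta , \qquad 0 \le \theta \le 90^{\circ},

which peaks at θ = 45°. Integrating this density gives sin²(60°) − sin²(30°) = 0.5, i.e. roughly half of all primary impacts are expected between 30° and 60°, consistent with the rapid fall-off mentioned above.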
Age constraints due to secondary craters Scientists have long collected data on impact craters, which are found throughout the Solar System. Most notably, impact craters are studied for the purposes of estimating ages, both relative and absolute, of planetary surfaces. Dating planetary terrains according to the density of craters has developed into a well-established technique, but it rests on three key assumptions: craters occur as independent, random events; the size-frequency distribution (SFD) of primary craters is known; and the cratering rate as a function of time is known. Photographs taken from notable lunar and martian missions have given scientists the ability to count and log the number of observed craters on each body. These crater count databases are further sorted according to each crater's size, depth, morphology, and location. The observations and characteristics of both primaries and secondaries are used in distinguishing impact craters within small crater clusters, which are characterized as clusters of craters with diameters ≤1 km. Unfortunately, age research stemming from these crater databases is limited by contamination from secondary craters. Scientists are finding it difficult to sort out all the secondary craters from the count, as they present a false assurance of statistical rigor. Contamination by secondaries particularly distorts age constraints when small craters are erroneously used to date small surface areas. Occurrence Secondary craters are common on rocky bodies in the Solar System with no or thin atmospheres, such as the Moon and Mars, but rare on objects with thick atmospheres such as Earth or Venus. However, in a study published in the Geological Society of America Bulletin the authors describe a field of secondary impact craters they believe was formed by the material ejected from a larger, primary meteor impact around 280 million years ago. The location of the primary crater is believed to be somewhere between Goshen and Laramie counties in Wyoming and Banner, Cheyenne, and Kimball counties in Nebraska. References Planetary geology Impact craters
Secondary crater
Astronomy
1,674
768,795
https://en.wikipedia.org/wiki/Escalade
Escalade is the act of scaling defensive walls or ramparts with the aid of ladders. Escalade was a prominent feature of sieges in ancient and medieval warfare. Although no longer common in modern warfare, escalade technologies are still developed and used in certain tactical applications. Overview Escalade consists of attacking soldiers advancing to the base of a wall, setting ladders, and climbing to engage the defending forces. Though very simple and direct, it was also one of the most dangerous options available; escalade would generally be conducted in the face of arrow fire from the battlements, and the defenders would naturally attempt to push ladders away from the wall. Heated or incendiary substances such as boiling water, heated sand, and pitch-coated missiles were sometimes poured on attacking soldiers. This made it difficult for attackers to reach the top of the wall, and those that did would often quickly be overwhelmed by the defenders on the walls, only being able to push into the defenses after suffering heavy attrition. Fortifications were often constructed in such a way as to impede escalade, or at least to make it a less attractive option. Countermeasures to escalade included moats (which prevented ladder-bearing soldiers from reaching the base of a wall), machicolations (which facilitated attacks on enemy soldiers while they climbed), and talus walls (which could weaken ladders or were too tall for ladders to reach the top of). Because of the difficulties involved, escalade was typically very costly for the attackers. Two critical factors in determining the success or failure of escalade were the number of ladders and the speed with which they could be arranged. A slow attack gave the defenders too much time to pick off the attackers with arrows, while having too few ladders meant that the number of troops would be insufficient to capture the battlements. A third important factor was the estimation of the height of the wall. If the ladders were made too long, they could be pushed over by the defenders, and if they were too short, the attackers would not be able to reach the top of the wall. Tactics employed included getting as many men as possible on the ladder at the same time (the more men that were on the ladder at the same time, the heavier it became, making pushing it over difficult), attacking by night, or scaling a remote section of the wall. Escalade was, in essence, an attempt to overwhelm defenders in a direct assault rather than sit through a protracted siege. Attackers would generally attempt escalade if they had reasons for wanting a swift conclusion, or if they had an overwhelming superiority in numbers. Otherwise, less costly siege tactics were often preferred. Modern warfare Escalade is no longer common in modern warfare, as new technologies and tactics have essentially made escalade obsolete; for example, most fortified walls that would have required attackers to use escalade may now simply be destroyed by explosives or nullified by military aircraft. However, escalade still exists as a viable (albeit niche) combat tactic, and is occasionally used by police tactical, counterterrorist, and special forces units to raid a structure through its upper levels, either to avoid a barricaded entrance or line of sight, or to breach the structure from multiple points. 
Mechanical assault ladders, typically installed on the roof of vehicles and featuring ramps that can extend or angle themselves to reach an entry point such as a window sill or balcony, are often used in this capacity. References See also Siege tower L'Escalade, the commemoration of the failed attack on Geneva by Savoy in 1602, conducted by escalade Escalade Assault tactics Ladders Siege equipment Siege engines Medieval siege engines
Escalade
Engineering
767
1,790,788
https://en.wikipedia.org/wiki/History%20of%20special%20relativity
The history of special relativity consists of many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. It culminated in the theory of special relativity proposed by Albert Einstein and subsequent work of Max Planck, Hermann Minkowski and others. Introduction Although Isaac Newton based his physics on absolute time and space, he also adhered to the principle of relativity of Galileo Galilei restating it precisely for mechanical systems. This can be stated as: as far as the laws of mechanics are concerned, all observers in inertial motion are equally privileged, and no preferred state of motion can be attributed to any particular inertial observer. However, as to electromagnetic theory and electrodynamics, during the 19th century the wave theory of light as a disturbance of a "light medium" or luminiferous aether was widely accepted, the theory reaching its most developed form in the work of James Clerk Maxwell. According to Maxwell's theory, all optical and electrical phenomena propagate through that medium, which suggested that it should be possible to experimentally determine motion relative to the aether. The failure of any experiment to detect motion through the aether led Hendrik Lorentz, starting in 1892, to develop a theory of electrodynamics based on an immobile luminiferous aether (about whose material constitution Lorentz did not speculate), physical length contraction, and a "local time" in which Maxwell's equations retain their form in all inertial frames of reference. Working with Lorentz's aether theory, Henri Poincaré, having earlier proposed the "relativity principle" as a general law of nature (including electrodynamics and gravitation), used this principle in 1905 to correct Lorentz's preliminary transformation formulas, resulting in an exact set of equations that are now called the Lorentz transformations. A little later in the same year Albert Einstein published his original paper on special relativity in which, again based on the relativity principle, he independently derived and radically reinterpreted the Lorentz transformations by changing the fundamental definitions of space and time intervals, while abandoning the absolute simultaneity of Galilean kinematics, thus avoiding the need for any reference to a luminiferous aether in classical electrodynamics. Subsequent work of Hermann Minkowski, in which he introduced a 4-dimensional geometric "spacetime" model for Einstein's version of special relativity, paved the way for Einstein's later development of his general theory of relativity and laid the foundations of relativistic field theories. Aether and electrodynamics of moving bodies Aether models and Maxwell's equations Following the work of Thomas Young (1804) and Augustin-Jean Fresnel (1816), it was believed that light propagates as a transverse wave within an elastic medium called luminiferous aether. However, a distinction was made between optical and electrodynamical phenomena so it was necessary to create specific aether models for all phenomena. Attempts to unify those models or to create a complete mechanical description of them did not succeed, but after considerable work by many scientists, including Michael Faraday and Lord Kelvin, James Clerk Maxwell (1864) developed an accurate theory of electromagnetism by deriving a set of equations in electricity, magnetism and inductance, named Maxwell's equations. 
He first proposed that light was in fact undulations (electromagnetic radiation) in the same aetherial medium that is the cause of electric and magnetic phenomena. However, Maxwell's theory was unsatisfactory regarding the optics of moving bodies, and while he was able to present a complete mathematical model, he was not able to provide a coherent mechanical description of the aether. After Heinrich Hertz in 1887 demonstrated the existence of electromagnetic waves, Maxwell's theory was widely accepted. In addition, Oliver Heaviside and Hertz further developed the theory and introduced modernized versions of Maxwell's equations. The "Maxwell–Hertz" or "Heaviside–Hertz" equations subsequently formed an important basis for the further development of electrodynamics, and Heaviside's notation is still used today. Other important contributions to Maxwell's theory were made by George FitzGerald, Joseph John Thomson, John Henry Poynting, Hendrik Lorentz, and Joseph Larmor. Search for the aether Regarding the relative motion and the mutual influence of matter and aether, there were two theories, neither entirely satisfactory. One was developed by Fresnel (and subsequently Lorentz). This model (stationary aether theory) supposed that light propagates as a transverse wave and aether is partially dragged with a certain coefficient by matter. Based on this assumption, Fresnel was able to explain the aberration of light and many optical phenomena.The other hypothesis was proposed by George Gabriel Stokes, who stated in 1845 that the aether was fully dragged by matter (later this view was also shared by Hertz). In this model the aether might be (by analogy with pine pitch) rigid for fast objects and fluid for slower objects. Thus the Earth could move through it fairly freely, but it would be rigid enough to transport light. Fresnel's theory was preferred because his dragging coefficient was confirmed by the Fizeau experiment in 1851, which measured the speed of light in moving liquids. Albert A. Michelson (1881) tried to measure the relative motion of the Earth and aether (Aether-Wind), as it was expected in Fresnel's theory, by using an interferometer. He could not determine any relative motion, so he interpreted the result as a confirmation of the thesis of Stokes. However, Lorentz (1886) showed Michelson's calculations were wrong and that he had overestimated the accuracy of the measurement. This, together with the large margin of error, made the result of Michelson's experiment inconclusive. In addition, Lorentz showed that Stokes' completely dragged aether led to contradictory consequences, and therefore he supported an aether theory similar to Fresnel's. To check Fresnel's theory again, Michelson and Edward W. Morley (1886) performed a repetition of the Fizeau experiment. Fresnel's dragging coefficient was confirmed very exactly on that occasion, and Michelson was now of the opinion that Fresnel's stationary aether theory was correct. To clarify the situation, Michelson and Morley (1887) repeated Michelson's 1881 experiment, and they substantially increased the accuracy of the measurement. However, this now famous Michelson–Morley experiment again yielded a negative result, i.e., no motion of the apparatus through the aether was detected (although the Earth's velocity is 60 km/s different in the northern winter than summer). 
So the physicists were confronted with two seemingly contradictory experiments: the 1886 experiment as an apparent confirmation of Fresnel's stationary aether, and the 1887 experiment as an apparent confirmation of Stokes' completely dragged aether. A possible solution to the problem was shown by Woldemar Voigt (1887), who investigated the Doppler effect for waves propagating in an incompressible elastic medium and deduced transformation relations that left the wave equation in free space unchanged, and explained the negative result of the Michelson–Morley experiment. The Voigt transformations include the Lorentz factor for the y- and z-coordinates, and a new time variable which later was called "local time". However, Voigt's work was completely ignored by his contemporaries. FitzGerald (1889) offered another explanation of the negative result of the Michelson–Morley experiment. Contrary to Voigt, he speculated that the intermolecular forces are possibly of electrical origin so that material bodies would contract in the line of motion (length contraction). This was in connection with the work of Heaviside (1887), who determined that the electrostatic fields in motion were deformed (Heaviside Ellipsoid), which leads to physically undetermined conditions at the speed of light. However, FitzGerald's idea remained widely unknown and was not discussed before Oliver Lodge published a summary of the idea in 1892. Also Lorentz (1892b) proposed length contraction independently from FitzGerald in order to explain the Michelson–Morley experiment. For plausibility reasons, Lorentz referred to the analogy of the contraction of electrostatic fields. However, even Lorentz admitted that that was not a necessary reason and length contraction consequently remained an ad hoc hypothesis. Lorentz's theory of electrons Lorentz (1892a) set the foundations of Lorentz aether theory, by assuming the existence of electrons which he separated from the aether, and by replacing the "Maxwell–Hertz" equations by the "Maxwell–Lorentz" equations. In his model, the aether is completely motionless and, contrary to Fresnel's theory, also is not partially dragged by matter. An important consequence of this notion was that the velocity of light is totally independent of the velocity of the source. Lorentz gave no statements about the mechanical nature of the aether and the electromagnetic processes, but, rather, tried to explain the mechanical processes by electromagnetic ones and therefore created an abstract electromagnetic æther. In the framework of his theory, Lorentz calculated, like Heaviside, the contraction of the electrostatic fields. Lorentz (1895) also introduced what he called the "Theorem of Corresponding States" for terms of first order in v/c. This theorem states that a moving observer (relative to the aether) in his "fictitious" field makes the same observations as a resting observer in his "real" field. An important part of it was the local time t′ = t − vx/c², which paved the way to the Lorentz transformation and which he introduced independently of Voigt. With the help of this concept, Lorentz could explain the aberration of light, the Doppler effect and the Fizeau experiment as well. However, Lorentz's local time was only an auxiliary mathematical tool to simplify the transformation from one system into another – it was Poincaré in 1900 who recognized that "local time" is actually indicated by moving clocks.
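For reference, in modern notation (not Lorentz's 1895 formulation), the exact transformation that this local time anticipates is the Lorentz boost along x,

x' = \gamma (x - vt), \qquad y' = y, \qquad z' = z, \qquad t' = \gamma \left( t - \frac{vx}{c^{2}} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.

To first order in v/c one has γ ≈ 1, so t' ≈ t − vx/c², which is exactly the auxiliary local time Lorentz used in his theorem of corresponding states.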
Lorentz also recognized that his theory violated the principle of action and reaction, since the aether acts on matter, but matter cannot act on the immobile aether. A very similar model was created by Joseph Larmor (1897, 1900). Larmor was the first to put Lorentz's 1895 transformation into a form algebraically equivalent to the modern Lorentz transformations; however, he stated that his transformations preserved the form of Maxwell's equations only to second order of v/c. Lorentz later noted that these transformations did in fact preserve the form of Maxwell's equations to all orders of v/c. Larmor noticed on that occasion that length contraction was derivable from the model; furthermore, he calculated some manner of time dilation for electron orbits. Larmor specified his considerations in 1900 and 1904. Independently of Larmor, Lorentz (1899) extended his transformation for second-order terms and noted a (mathematical) time dilation effect as well. Other physicists besides Lorentz and Larmor also tried to develop a consistent model of electrodynamics. For example, Emil Cohn (1900, 1901) created an alternative electrodynamics in which he, as one of the first, discarded the existence of the aether (at least in the previous form) and would use, like Ernst Mach, the fixed stars as a reference frame instead. Due to inconsistencies within his theory, like different light speeds in different directions, it was superseded by Lorentz's and Einstein's. Electromagnetic mass During his development of Maxwell's Theory, J. J. Thomson (1881) recognized that charged bodies are harder to set in motion than uncharged bodies. Electrostatic fields behave as if they add an "electromagnetic mass" to the mechanical mass of the bodies. I.e., according to Thomson, electromagnetic energy corresponds to a certain mass. This was interpreted as some form of self-inductance of the electromagnetic field. He also noticed that the mass of a body in motion is increased by a constant quantity. Thomson's work was continued and perfected by FitzGerald, Heaviside (1888), and George Frederick Charles Searle (1896, 1897). For the electromagnetic mass they gave — in modern notation — the formula m = (4/3)E/c², where m is the electromagnetic mass and E is the electromagnetic energy. Heaviside and Searle also recognized that the increase of the mass of a body is not constant and varies with its velocity. Consequently, Searle noted the impossibility of superluminal velocities, because infinite energy would be needed to exceed the speed of light. Also for Lorentz (1899), the speed-dependence of mass recognized by Thomson was especially important. He noticed that the mass not only varied due to speed, but is also dependent on the direction, and he introduced what Abraham later called "longitudinal" and "transverse" mass. (The transverse mass corresponds to what later was called relativistic mass.) Wilhelm Wien (1900) assumed (following the works of Thomson, Heaviside, and Searle) that the entire mass is of electromagnetic origin, which was formulated in the context that all forces of nature are electromagnetic ones (the "Electromagnetic World View"). Wien stated that, if it is assumed that gravitation is an electromagnetic effect too, then there has to be a proportionality between electromagnetic energy, inertial mass and gravitational mass. In the same paper Henri Poincaré (1900b) found another way of combining the concepts of mass and energy.
He recognized that electromagnetic energy behaves like a fictitious fluid with a mass density of m = E/c² (or E = mc²) and defined a fictitious electromagnetic momentum as well. However, he arrived at a radiation paradox which was fully explained by Einstein in 1905. Walter Kaufmann (1901–1903) was the first to confirm the velocity dependence of electromagnetic mass by analyzing the ratio e/m (where e is the charge and m the mass) of cathode rays. He found that the value of e/m decreased with the speed, showing that, assuming the charge constant, the mass of the electron increased with the speed. He also believed that those experiments confirmed the assumption of Wien, that there is no "real" mechanical mass, but only the "apparent" electromagnetic mass, or in other words, the mass of all bodies is of electromagnetic origin. Max Abraham (1902–1904), who was a supporter of the electromagnetic world view, quickly offered an explanation for Kaufmann's experiments by deriving expressions for the electromagnetic mass. Together with this concept, Abraham introduced (like Poincaré in 1900) the notion of "electromagnetic momentum" which is proportional to E/c². But unlike the fictitious quantities introduced by Poincaré, he considered it as a real physical entity. Abraham also noted (like Lorentz in 1899) that this mass also depends on the direction and coined the names "longitudinal" and "transverse" mass. In contrast to Lorentz, he did not incorporate the contraction hypothesis into his theory, and therefore his mass terms differed from those of Lorentz. Based on the preceding work on electromagnetic mass, Friedrich Hasenöhrl suggested that part of the mass of a body (which he called apparent mass) can be thought of as radiation bouncing around a cavity. The "apparent mass" of radiation depends on the temperature (because every heated body emits radiation) and is proportional to its energy. Hasenöhrl stated that this energy-apparent-mass relation only holds as long as the body radiates, i.e., if the temperature of a body is greater than 0 K. At first he gave the expression m = (8/3)E/c² for the apparent mass; however, Abraham and Hasenöhrl himself in 1905 changed the result to m = (4/3)E/c², the same value as for the electromagnetic mass of a body at rest. Absolute space and time Some scientists and philosophers of science were critical of Newton's definitions of absolute space and time. Ernst Mach (1883) argued that absolute time and space are essentially metaphysical concepts and thus scientifically meaningless, and suggested that only relative motion between material bodies is a useful concept in physics. Mach argued that even effects that according to Newton depend on accelerated motion with respect to absolute space, such as rotation, could be described purely with reference to material bodies, and that the inertial effects cited by Newton in support of absolute space might instead be related purely to acceleration with respect to the fixed stars. Carl Neumann (1870) introduced a "Body alpha", which represents some sort of rigid and fixed body for defining inertial motion. Based on the definition of Neumann, Heinrich Streintz (1883) argued that in a coordinate system where gyroscopes do not measure any signs of rotation, inertial motion is related to a "Fundamental body" and a "Fundamental Coordinate System".
Eventually, Ludwig Lange (1885) was the first to coin the expression inertial frame of reference and "inertial time scale" as operational replacements for absolute space and time; he defined "inertial frame" as "a reference frame in which a mass point thrown from the same point in three different (non-co-planar) directions follows rectilinear paths each time it is thrown". In 1902, Henri Poincaré published a collection of essays titled Science and Hypothesis, which included: detailed philosophical discussions on the relativity of space, time, and on the conventionality of distant simultaneity; the conjecture that a violation of the relativity principle can never be detected; the possible non-existence of the aether, together with some arguments supporting the aether; and many remarks on non-Euclidean vs. Euclidean geometry. There were also some attempts to use time as a fourth dimension. This was done as early as 1754 by Jean le Rond d'Alembert in the Encyclopédie, and by some authors in the 19th century like H. G. Wells in his novel The Time Machine (1895). In 1901 a philosophical model was developed by Menyhért Palágyi, in which space and time were only two sides of some sort of "spacetime". He used time as an imaginary fourth dimension, which he gave the form it (where i = √−1, the imaginary unit). However, Palagyi's time coordinate is not connected to the speed of light. He also rejected any connection with the existing constructions of n-dimensional spaces and non-Euclidean geometry, so his philosophical model bears little resemblance to spacetime physics, as it was later developed by Minkowski. Light constancy and the principle of relative motion In the second half of the 19th century, there were many attempts to develop a worldwide clock network synchronized by electrical signals. For that endeavor, the finite propagation speed of light had to be considered, because synchronization signals could travel no faster than the speed of light. In his paper The Measure of Time (1898), Henri Poincaré described some important consequences of this process and explained that astronomers, in determining the speed of light, simply assumed that light has a constant speed and that this speed is the same in all directions. Without this postulate, it would be impossible to infer the speed of light from astronomical observations, as Ole Rømer did based on observations of the moons of Jupiter. Poincaré also noted that the propagation speed of light can be (and in practice often is) used to define simultaneity between spatially separate events. In some other papers (1895, 1900b), Poincaré argued that experiments like that of Michelson and Morley show the impossibility of detecting the absolute motion of matter, i.e., the relative motion of matter in relation to the aether. He called this the "principle of relative motion". In the same year, he interpreted Lorentz's local time as the result of a synchronization procedure based on light signals. He assumed that two observers who are moving in the aether synchronize their clocks by optical signals. Since they believe themselves to be at rest, they consider only the transmission time of the signals and then cross-reference their observations to examine whether their clocks are synchronous. From the point of view of an observer at rest in the aether, the clocks are not synchronous and indicate the local time t′ = t − vx/c², but the moving observers fail to recognize this because they are unaware of their movement.
So, contrary to Lorentz, the local time defined by Poincaré can be measured and indicated by clocks. Therefore, in his recommendation of Lorentz for the Nobel Prize in 1902, Poincaré argued that Lorentz had convincingly explained the negative outcome of the aether drift experiments by inventing the "diminished" or "local" time, i.e. a time coordinate in which two events at different places could appear as simultaneous, although they are not simultaneous in reality. Like Poincaré, Alfred Bucherer (1903) believed in the validity of the relativity principle within the domain of electrodynamics, but contrary to Poincaré, Bucherer even assumed that this implies the nonexistence of the aether. However, the theory that he created later in 1906 was incorrect and not self-consistent, and the Lorentz transformation was absent within his theory as well. Lorentz's 1904 model In his paper Electromagnetic phenomena in a system moving with any velocity smaller than that of light, Lorentz (1904) was following the suggestion of Poincaré and attempted to create a formulation of electrodynamics which explains the failure of all known aether drift experiments, i.e. the validity of the relativity principle. He tried to prove the applicability of the Lorentz transformation for all orders, although he did not succeed completely. Like Wien and Abraham, he argued that there exists only electromagnetic mass, not mechanical mass, and derived the correct expressions for longitudinal and transverse mass, which were in agreement with Kaufmann's experiments (even though those experiments were not precise enough to distinguish between the theories of Lorentz and Abraham). And using the electromagnetic momentum, he could explain the negative result of the Trouton–Noble experiment, in which a charged parallel-plate capacitor moving through the aether should orient itself perpendicular to the motion. Also the experiments of Rayleigh and Brace could be explained. Another important step was the postulate that the Lorentz transformation has to be valid for non-electrical forces as well. At the same time as Lorentz worked out his theory, Wien (1903) recognized an important consequence of the velocity dependence of mass. He argued that superluminal velocities were impossible, because that would require an infinite amount of energy; the same was already noted by Thomson (1893) and Searle (1897). And in June 1904, after he had read Lorentz's 1904 paper, he noticed the same in relation to length contraction, because at superluminal velocities the factor √(1 − v²/c²) becomes imaginary. Lorentz's theory was criticized by Abraham, who demonstrated that on one side the theory obeys the relativity principle, and on the other side the electromagnetic origin of all forces is assumed. Abraham showed that both assumptions were incompatible, because in Lorentz's theory of the contracted electrons, non-electric forces were needed in order to guarantee the stability of matter. However, in Abraham's theory of the rigid electron, no such forces were needed. Thus the question arose whether the electromagnetic conception of the world (compatible with Abraham's theory) or the relativity principle (compatible with Lorentz's theory) was correct. In a September 1904 lecture in St.
Louis named The Principles of Mathematical Physics, Poincaré drew some consequences from Lorentz's theory and defined (in modification of Galileo's Relativity Principle and Lorentz's Theorem of Corresponding States) the following principle: "The Principle of Relativity, according to which the laws of physical phenomena must be the same for a stationary observer as for one carried along in a uniform motion of translation, so that we have no means, and can have none, of determining whether or not we are being carried along in such a motion." He also specified his clock synchronization method and explained the possibility of a "new method" or "new mechanics", in which no velocity can surpass that of light for all observers. However, he critically noted that the relativity principle, Newton's action and reaction, the conservation of mass, and the conservation of energy are not fully established and are even threatened by some experiments. Also Emil Cohn (1904) continued to develop his alternative model (as described above), and while comparing his theory with that of Lorentz, he discovered some important physical interpretations of the Lorentz transformations. He illustrated (like Joseph Larmor in the same year) this transformation by using rods and clocks: If they are at rest in the aether, they indicate the true length and time, and if they are moving, they indicate contracted and dilated values. Like Poincaré, Cohn defined local time as the time that is based on the assumption of isotropic propagation of light. Contrary to Lorentz and Poincaré it was noticed by Cohn, that within Lorentz's theory the separation of "real" and "apparent" coordinates is artificial, because no experiment can distinguish between them. Yet according to Cohn's own theory, the Lorentz transformed quantities would only be valid for optical phenomena, while mechanical clocks would indicate the "real" time. Poincaré's dynamics of the electron On June 5, 1905, Henri Poincaré submitted the summary of a work which closed the existing gaps of Lorentz's work. (This short paper contained the results of a more complete work which would be published later, in January 1906.) He showed that Lorentz's equations of electrodynamics were not fully Lorentz-covariant. So he pointed out the group characteristics of the transformation, and he corrected Lorentz's formulas for the transformations of charge density and current density (which implicitly contained the relativistic velocity-addition formula, which he elaborated in May in a letter to Lorentz). Poincaré used for the first time the term "Lorentz transformation", and he gave the transformations their symmetrical form used to this day. He introduced a non-electrical binding force (the so-called "Poincaré stresses") to ensure the stability of the electrons and to explain length contraction. He also sketched a Lorentz-invariant model of gravitation (including gravitational waves) by extending the validity of Lorentz-invariance to non-electrical forces. Eventually Poincaré (independently of Einstein) finished a substantially extended work of his June paper (the so-called "Palermo paper", received July 23, printed December 14, published January 1906 ). He spoke literally of "the postulate of relativity". He showed that the transformations are a consequence of the principle of least action and developed the properties of the Poincaré stresses. 
He demonstrated in more detail the group characteristics of the transformation, which he called the Lorentz group, and he showed that the combination x² + y² + z² − c²t² is invariant. While elaborating his gravitational theory, he said the Lorentz transformation is merely a rotation in four-dimensional space about the origin, by introducing ct√−1 as a fourth, imaginary coordinate (contrary to Palagyi, he included the speed of light), and he already used four-vectors. He wrote that the discovery of magneto-cathode rays by Paul Ulrich Villard (1904) seemed to threaten the entire theory of Lorentz, but this problem was quickly solved. However, although in his philosophical writings Poincaré rejected the ideas of absolute space and time, in his physical papers he continued to refer to an (undetectable) aether. He also continued (1900b, 1904, 1906, 1908b) to describe coordinates and phenomena as local/apparent (for moving observers) and true/real (for observers at rest in the aether). So, with a few exceptions, most historians of science argue that Poincaré did not invent what is now called special relativity, although it is admitted that Poincaré anticipated much of Einstein's methods and terminology. Special relativity Einstein 1905 Electrodynamics of moving bodies On September 26, 1905 (received June 30), Albert Einstein published his annus mirabilis paper on what is now called special relativity. Einstein's paper includes a fundamental description of the kinematics of the rigid body, and it does not require an absolutely stationary space, such as the aether. Einstein identified two fundamental principles, the principle of relativity and the principle of the constancy of light (light principle), which served as the axiomatic basis of his theory. To better understand Einstein's step, a summary of the situation before 1905, as it was described above, shall be given (it must be remarked that Einstein was familiar with the 1895 theory of Lorentz, and Science and Hypothesis by Poincaré, but possibly not their papers of 1904–1905): a) Maxwell's electrodynamics, as presented by Lorentz in 1895, was the most successful theory at this time. Here, the speed of light is constant in all directions in the stationary aether and completely independent of the velocity of the source; b) The inability to find an absolute state of motion, i.e. the validity of the relativity principle as the consequence of the negative results of all aether drift experiments and effects like the moving magnet and conductor problem which only depend on relative motion; c) The Fizeau experiment; d) The aberration of light; with the following consequences for the speed of light and the theories known at that time: The speed of light is not composed of the speed of light in vacuum and the velocity of a preferred frame of reference, by b. This contradicts the theory of the (nearly) stationary aether. The speed of light is not composed of the speed of light in vacuum and the velocity of the light source, by a and c. This contradicts the emission theory. The speed of light is not composed of the speed of light in vacuum and the velocity of an aether that would be dragged within or in the vicinity of matter, by a, c, and d. This contradicts the hypothesis of the complete aether drag. The speed of light in moving media is not composed of the speed of light when the medium is at rest and the velocity of the medium, but is determined by Fresnel's dragging coefficient, by c.
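The last point can be made quantitative with the relativistic velocity-addition law; this is a standard modern derivation added here for illustration, not part of the 1905 summary. For light travelling through a medium of refractive index n that itself moves with velocity v,

u = \frac{\tfrac{c}{n} + v}{1 + \tfrac{v}{nc}} \approx \frac{c}{n} + v \left( 1 - \frac{1}{n^{2}} \right) \quad \text{to first order in } v/c,

so Fresnel's dragging coefficient 1 − 1/n² emerges without any partially dragged aether, which is how the Fizeau result is understood in special relativity.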
In order to make the principle of relativity as required by Poincaré an exact law of nature in the immobile aether theory of Lorentz, the introduction of a variety ad hoc hypotheses was required, such as the contraction hypothesis, local time, the Poincaré stresses, etc.. This method was criticized by many scholars, since the assumption of a conspiracy of effects which completely prevent the discovery of the aether drift is considered to be very improbable, and it would violate Occam's razor as well. Einstein is considered the first who completely dispensed with such auxiliary hypotheses and drew the direct conclusions from the facts stated above: that the relativity principle is correct and the directly observed speed of light is the same in all inertial reference frames. Based on his axiomatic approach, Einstein was able to derive all results obtained by his predecessors – and in addition the formulas for the relativistic Doppler effect and relativistic aberration – in a few pages, while prior to 1905 his competitors had devoted years of long, complicated work to arrive at the same mathematical formalism. Before 1905 Lorentz and Poincaré had adopted these same principles, as necessary to achieve their final results, but did not recognize that they were also sufficient in the sense that there was no immediate logical need to assume the existence of a stationary aether in order to arrive at the Lorentz transformations. As Lorentz later said, "Einstein simply postulates what we have deduced". Another reason for Einstein's early rejection of the aether in any form (which he later partially retracted) may have been related to his work on quantum physics. Einstein discovered that light can also be described (at least heuristically) as a kind of particle, so the aether as the medium for electromagnetic "waves" (which was highly important for Lorentz and Poincaré) no longer fitted into his conceptual scheme. It's notable that Einstein's paper contains no direct references to other papers. However, many historians of science like Holton, Miller, Stachel, have tried to find out possible influences on Einstein. He stated that his thinking was influenced by the empiricist philosophers David Hume and Ernst Mach. Regarding the Relativity Principle, the moving magnet and conductor problem (possibly after reading a book of August Föppl) and the various negative aether drift experiments were important for him to accept that principle — but he denied any significant influence of the most important experiment: the Michelson–Morley experiment. Other likely influences include Poincaré's Science and Hypothesis, where Poincaré presented the Principle of Relativity (which, as has been reported by Einstein's friend Maurice Solovine, was closely studied and discussed by Einstein and his friends over a period of years before the publication of Einstein's 1905 paper), and the writings of Max Abraham, from whom he borrowed the terms "Maxwell–Hertz equations" and "longitudinal and transverse mass". Regarding his views on Electrodynamics and the Principle of the Constancy of Light, Einstein stated that Lorentz's theory of 1895 (or the Maxwell–Lorentz electrodynamics) and also the Fizeau experiment had considerable influence on his thinking. 
He said in 1909 and 1912 that he borrowed that principle from Lorentz's stationary aether (which implies validity of Maxwell's equations and the constancy of light in the aether frame), but he recognized that this principle together with the principle of relativity makes any reference to an aether unnecessary (at least as to the description of electrodynamics in inertial frames). As he wrote in 1907 and in later papers, the apparent contradiction between those principles can be resolved if it is admitted that Lorentz's local time is not an auxiliary quantity, but can simply be defined as time and is connected with signal velocity. Before Einstein, Poincaré also developed a similar physical interpretation of local time and noticed the connection with signal velocity, but contrary to Einstein he continued to argue that clocks at rest in the stationary aether show the true time, while clocks in inertial motion relative to the aether show only the apparent time. Eventually, near the end of his life in 1953, Einstein described the advantages of his theory over that of Lorentz (although Poincaré had already stated in 1905 that Lorentz invariance is an exact condition for any physical theory). Mass–energy equivalence Already in §10 of his paper on electrodynamics, Einstein used the formula W = mc²(1/√(1 − v²/c²) − 1) for the kinetic energy of an electron. In elaboration of this he published a paper (received September 27, published November 1905), in which Einstein showed that when a material body lost energy (either radiation or heat) of amount E, its mass decreased by the amount E/c². This led to the famous mass–energy equivalence formula: E = mc². Einstein considered the equivalence equation to be of paramount importance because it showed that a massive particle possesses an energy, the "rest energy", distinct from its classical kinetic and potential energies. As it was shown above, many authors before Einstein arrived at similar formulas (including a 4/3-factor) for the relation of mass to energy. However, their work was focused on electromagnetic energy which (as we know today) only represents a small part of the entire energy within matter. So it was Einstein who was the first to: (a) ascribe this relation to all forms of energy, and (b) understand the connection of mass–energy equivalence with the relativity principle. Early reception First assessments Walter Kaufmann (1905, 1906) was probably the first to refer to Einstein's work. He compared the theories of Lorentz and Einstein and, although he said Einstein's method is to be preferred, he argued that both theories are observationally equivalent. Therefore, he spoke of the relativity principle as the "Lorentz–Einsteinian" basic assumption. Shortly afterwards, Max Planck (1906a) was the first to publicly defend the theory, and he interested his students, Max von Laue and Kurd von Mosengeil, in this formulation. He described Einstein's theory as a "generalization" of Lorentz's theory and, to this "Lorentz–Einstein Theory", he gave the name "relative theory"; while Alfred Bucherer changed Planck's nomenclature into the now common "theory of relativity" ("Einsteinsche Relativitätstheorie"). On the other hand, Einstein himself and many others continued to refer simply to the new method as the "relativity principle". And in an important overview article on the relativity principle (1908a), Einstein described SR as a "union of Lorentz's theory and the relativity principle", including the fundamental assumption that Lorentz's local time can be described as real time.
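To illustrate the kinetic-energy formula and the mass–energy relation discussed above in modern notation (a standard expansion, not quoted from Einstein's papers):

E_{kin} = mc^{2} \left( \frac{1}{\sqrt{1 - v^{2}/c^{2}}} - 1 \right) = \frac{1}{2} m v^{2} + \frac{3}{8} \frac{m v^{4}}{c^{2}} + \cdots ,

so the Newtonian value ½mv² is recovered at low speeds, while a body at rest retains the rest energy E₀ = mc², which is why a loss of energy E corresponds to a loss of mass E/c².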
(Yet, Poincaré's contributions were rarely mentioned in the first years after 1905.) All of those expressions (Lorentz–Einstein theory, relativity principle, relativity theory) were used by different physicists alternately in the next years. Following Planck, other German physicists quickly became interested in relativity, including Arnold Sommerfeld, Wilhelm Wien, Max Born, Paul Ehrenfest, and Alfred Bucherer. Von Laue, who learned about the theory from Planck, published the first definitive monograph on relativity in 1911. By 1911, Sommerfeld altered his plan to speak about relativity at the Solvay Congress because the theory was already considered well established. Kaufmann–Bucherer–Neumann experiments Kaufmann (1903) presented results of his experiments on the charge-to-mass ratio of beta rays from a radium source, showing the dependence of the mass on the velocity. He announced that these results confirmed Abraham's theory. However, Lorentz (1904a) reanalyzed results from Kaufmann (1903) against his theory and, based on the data in the tables, concluded (p. 828) that the agreement with his theory "is seen to come out no less satisfactory than" with Abraham's theory. A recent reanalysis of the data from Kaufmann (1903) confirms that Lorentz's theory (1904a) does agree substantially better than Abraham's theory when applied to data from Kaufmann (1903). Kaufmann (1905, 1906) presented further results, this time with electrons from cathode rays. They represented, in his opinion, a clear refutation of the relativity principle and the Lorentz–Einstein theory, and a confirmation of Abraham's theory. For some years Kaufmann's experiments represented a weighty objection against the relativity principle, although the experiments were criticized by Planck and Adolf Bestelmeyer (1906). Other physicists working with beta rays from radium, like Alfred Bucherer (1908) and Günther Neumann (1914), the latter following and improving on Bucherer's methods, also examined the velocity-dependence of mass, and this time it was thought that the "Lorentz–Einstein theory" and the relativity principle were confirmed, and Abraham's theory disproved. A distinction needs to be made between work with beta ray electrons and cathode ray electrons, since beta rays from radium have substantially larger velocities than cathode-ray electrons, and so relativistic effects are much easier to detect with beta rays. Kaufmann's experiments with electrons from cathode rays only showed a qualitative mass increase of moving electrons, but they were not precise enough to distinguish between the models of Lorentz–Einstein and Abraham. It was not until 1940 that experiments with electrons from cathode rays were repeated with sufficient accuracy to confirm the Lorentz–Einstein formula. However, this problem occurred only with this kind of experiment. Investigations of the fine structure of the hydrogen lines already in 1917 provided a clear confirmation of the Lorentz–Einstein formula and the refutation of Abraham's theory. Relativistic momentum and mass Planck (1906a) defined the relativistic momentum and gave the correct values for the longitudinal and transverse mass by correcting a slight mistake in the expression given by Einstein in 1905. Planck's expressions were in principle equivalent to those used by Lorentz in 1899. Based on the work of Planck, the concept of relativistic mass was developed by Gilbert Newton Lewis and Richard C.
Tolman (1908, 1909) by defining mass as the ratio of momentum to velocity. So the older definition of longitudinal and transverse mass, in which mass was defined as the ratio of force to acceleration, became superfluous. Finally, Tolman (1912) interpreted relativistic mass simply as the mass of the body. However, many modern textbooks on relativity do not use the concept of relativistic mass anymore, and mass in special relativity is considered an invariant quantity. Mass and energy Einstein (1906) showed that the inertia of energy (mass–energy equivalence) is a necessary and sufficient condition for the center-of-mass theorem to hold. On that occasion, he noted that the formal mathematical content of Poincaré's paper on the center of mass (1900b) and his own paper were mainly the same, although the physical interpretation was different in light of relativity. Kurd von Mosengeil (1906), by extending Hasenöhrl's calculation of black-body radiation in a cavity, derived the same expression for the additional mass of a body due to electromagnetic radiation as Hasenöhrl. Hasenöhrl's idea was that the mass of bodies included a contribution from the electromagnetic field; he imagined a body as a cavity containing light. His relationship between mass and energy, like all other pre-Einstein ones, contained incorrect numerical prefactors (see Electromagnetic mass). Eventually Planck (1907) derived the mass–energy equivalence in general within the framework of special relativity, including the binding forces within matter. He acknowledged the priority of Einstein's 1905 work on E = mc², but Planck judged his own approach as more general than Einstein's. Experiments by Fizeau and Sagnac As was explained above, already in 1895 Lorentz succeeded in deriving Fresnel's dragging coefficient (to first order of v/c) and explaining the Fizeau experiment by using the electromagnetic theory and the concept of local time. After first attempts by Jakob Laub (1907) to create a relativistic "optics of moving bodies", it was Max von Laue (1907) who derived the coefficient for terms of all orders by using the collinear case of the relativistic velocity addition law. In addition, Laue's calculation was much simpler than the complicated methods used by Lorentz. In 1911 Laue also discussed a situation where on a platform a beam of light is split and the two beams are made to follow a trajectory in opposite directions. On return to the point of entry the light is allowed to exit the platform in such a way that an interference pattern is obtained. Laue calculated a displacement of the interference pattern if the platform is in rotation: because the speed of light is independent of the velocity of the source, one beam covers less distance than the other beam. An experiment of this kind was performed by Georges Sagnac in 1913, who actually measured a displacement of the interference pattern (Sagnac effect). While Sagnac himself concluded that his experiment confirmed the theory of an aether at rest, Laue's earlier calculation showed that it is compatible with special relativity as well, because in both theories the speed of light is independent of the velocity of the source. This effect can be understood as the electromagnetic counterpart of the mechanics of rotation, for example in analogy to a Foucault pendulum. Already in 1909–11, Franz Harress (1912) performed an experiment which can be considered as a synthesis of the experiments of Fizeau and Sagnac. He tried to measure the dragging coefficient within glass.
Contrary to Fizeau he used a rotating device so he found the same effect as Sagnac. While Harress himself misunderstood the meaning of the result, it was shown by Laue that the theoretical explanation of Harress' experiment is in accordance with the Sagnac effect. Eventually, the Michelson–Gale–Pearson experiment (1925, a variation of the Sagnac experiment) indicated the angular velocity of the Earth itself in accordance with special relativity and a resting aether. Relativity of simultaneity The first derivations of relativity of simultaneity by synchronization with light signals were also simplified. Daniel Frost Comstock (1910) placed an observer in the middle between two clocks A and B. From this observer a signal is sent to both clocks, and in the frame in which A and B are at rest, they synchronously start to run. But from the perspective of a system in which A and B are moving, clock B is first set in motion, and then comes clock A – so the clocks are not synchronized. Also Einstein (1917) created a model with an observer in the middle between A and B. However, in his description two signals are sent from A and B to an observer aboard a moving train. From the perspective of the frame in which A and B are at rest, the signals are sent at the same time and the observer "is hastening towards the beam of light coming from B, whilst he is riding on ahead of the beam of light coming from A. Hence the observer will see the beam of light emitted from B earlier than he will see that emitted from A. Observers who take the railway train as their reference-body must therefore come to the conclusion that the lightning flash B took place earlier than the lightning flash A." Spacetime physics Minkowski's spacetime Poincaré's attempt of a four-dimensional reformulation of the new mechanics was not continued by himself, so it was Hermann Minkowski (1907), who worked out the consequences of that notion (other contributions were made by Roberto Marcolongo (1906) and Richard Hargreaves (1908)). This was based on the work of many mathematicians of the 19th century like Arthur Cayley, Felix Klein, or William Kingdon Clifford, who contributed to group theory, invariant theory and projective geometry, formulating concepts such as the Cayley–Klein metric or the hyperboloid model in which the interval and its invariance was defined in terms of hyperbolic geometry. Using similar methods, Minkowski succeeded in formulating a geometrical interpretation of the Lorentz transformation. He completed, for example, the concept of four vectors; he created the Minkowski diagram for the depiction of spacetime; he was the first to use expressions like world line, proper time, Lorentz invariance/covariance, etc.; and most notably he presented a four-dimensional formulation of electrodynamics. Similar to Poincaré he tried to formulate a Lorentz-invariant law of gravity, but that work was subsequently superseded by Einstein's elaborations on gravitation. In 1907 Minkowski named four predecessors who contributed to the formulation of the relativity principle: Lorentz, Einstein, Poincaré and Planck. And in his famous lecture Space and Time (1908) he mentioned Voigt, Lorentz and Einstein. Minkowski himself considered Einstein's theory as a generalization of Lorentz's and credited Einstein for completely stating the relativity of time, but he criticized his predecessors for not fully developing the relativity of space. 
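As a compact illustration of Minkowski's formulation in modern notation (not his original presentation, which used the imaginary coordinate ict): the interval

s^2 = c^2t^2 - x^2 - y^2 - z^2

is left invariant by Lorentz transformations, and a boost with velocity v along the x-axis can be written as a hyperbolic rotation,

ct' = ct\cosh\varphi - x\sinh\varphi, \qquad x' = x\cosh\varphi - ct\sinh\varphi, \qquad \tanh\varphi = v/c,

with \varphi the rapidity.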
However, modern historians of science argue that Minkowski's claim for priority was unjustified, because Minkowski (like Wien or Abraham) adhered to the electromagnetic world picture and apparently did not fully understand the difference between Lorentz's electron theory and Einstein's kinematics. In 1908, Einstein and Laub rejected the four-dimensional electrodynamics of Minkowski as overly complicated "learned superfluousness" and published a "more elementary", non-four-dimensional derivation of the basic equations for moving bodies. But it was Minkowski's geometric model that (a) showed that special relativity is a complete and internally self-consistent theory, (b) added the Lorentz invariant proper time interval (which accounts for the actual readings shown by moving clocks), and (c) served as a basis for further development of relativity. Eventually, Einstein (1912) recognized the importance of Minkowski's geometric spacetime model and used it as the basis for his work on the foundations of general relativity. Today special relativity is seen as an application of linear algebra, but at the time special relativity was being developed the field of linear algebra was still in its infancy. There were no textbooks treating linear algebra as modern vector-space and transformation theory, and the matrix notation of Arthur Cayley (which unifies the subject) had not yet come into widespread use. Cayley's matrix calculus notation was used by Minkowski (1908) in formulating relativistic electrodynamics, even though it was later replaced by Sommerfeld using vector notation. According to a recent source, the Lorentz transformations are equivalent to hyperbolic rotations; Varićak (1910), however, had shown that the standard Lorentz transformation can also be regarded as a translation in hyperbolic space. Vector notation and closed systems Minkowski's spacetime formalism was quickly accepted and further developed. For example, Arnold Sommerfeld (1910) replaced Minkowski's matrix notation by an elegant vector notation and coined the terms "four vector" and "six vector". He also introduced a trigonometric formulation of the relativistic velocity addition rule, which, according to Sommerfeld, removes much of the strangeness of that concept. Other important contributions were made by Laue (1911, 1913), who used the spacetime formalism to create a relativistic theory of deformable bodies and an elementary particle theory. He extended Minkowski's expressions for electromagnetic processes to all possible forces and thereby clarified the concept of mass–energy equivalence. Laue also showed that non-electrical forces are needed to ensure the proper Lorentz transformation properties, and for the stability of matter – he could show that the "Poincaré stresses" (as mentioned above) are a natural consequence of relativity theory, so that the electron can be a closed system. Lorentz transformation without second postulate There were some attempts to derive the Lorentz transformation without the postulate of the constancy of the speed of light. Vladimir Ignatowski (1910), for example, used for this purpose (a) the principle of relativity, (b) homogeneity and isotropy of space, and (c) the requirement of reciprocity. Philipp Frank and Hermann Rothe (1911) argued that this derivation is incomplete and needs additional assumptions. 
Their own calculation was based on the assumptions that (a) the Lorentz transformations form a homogeneous linear group, (b) when changing frames, only the sign of the relative speed changes, and (c) length contraction depends solely on the relative speed. However, according to Pauli and Miller such models were insufficient to identify the invariant speed in their transformation with the speed of light; for example, Ignatowski was forced to have recourse to electrodynamics to include the speed of light. So Pauli and others argued that both postulates are needed to derive the Lorentz transformation. Nevertheless, attempts to derive special relativity without the light postulate have continued to this day. Non-Euclidean formulations without imaginary time coordinate Minkowski in his earlier works in 1907 and 1908 followed Poincaré in representing space and time together in complex form (x, y, z, ict), emphasizing the formal similarity with Euclidean space. He noted that spacetime is in a certain sense a four-dimensional non-Euclidean manifold. Sommerfeld (1910) used Minkowski's complex representation to combine non-collinear velocities by spherical geometry and so derive Einstein's addition formula. Subsequent writers, principally Varićak, dispensed with the imaginary time coordinate and wrote in explicitly non-Euclidean (i.e. Lobachevskian) form, reformulating relativity using the concept of rapidity previously introduced by Alfred Robb (1911); Edwin Bidwell Wilson and Gilbert N. Lewis (1912) introduced a vector notation for spacetime; Émile Borel (1913) showed how parallel transport in non-Euclidean space provides the kinematic basis of Thomas precession twelve years before its discovery by Thomas; Felix Klein (1910) and Ludwik Silberstein (1914) employed such methods as well. One historian argues that the non-Euclidean style had little to show "in the way of creative power of discovery", but it offered notational advantages in some cases, particularly in the law of velocity addition. In the years before World War I, the acceptance of the non-Euclidean style was approximately equal to that of the initial spacetime formalism, and it continued to be employed in relativity textbooks of the 20th century. Time dilation and twin paradox Einstein (1907a) proposed a method for detecting the transverse Doppler effect as a direct consequence of time dilation. And in fact, that effect was measured in 1938 by Herbert E. Ives and G. R. Stilwell (Ives–Stilwell experiment). Lewis and Tolman (1909) described the reciprocity of time dilation using two light clocks, A and B, traveling with a certain relative velocity to each other. The clocks consist of two plane mirrors parallel to one another and to the line of motion. A light signal bounces between the mirrors, and for the observer at rest in the same reference frame as A, the period of clock A is the distance between the mirrors divided by the speed of light. But if the observer looks at clock B, he sees that within that clock the signal traces out a longer, angled path, so clock B is slower than A. For the observer moving alongside B the situation is exactly reversed: clock B is faster and A is slower. Lorentz (1910–1912) discussed the reciprocity of time dilation and analyzed a clock "paradox", which apparently occurs as a consequence of the reciprocity of time dilation. 
Lorentz showed that there is no paradox if one considers that in one system only one clock is used, while in the other system two clocks are necessary, and the relativity of simultaneity is fully taken into account. A similar situation was created by Paul Langevin in 1911 with what was later called the "twin paradox", where he replaced the clocks by persons (Langevin never used the word "twins" but his description contained all other features of the paradox). Langevin solved the paradox by alluding to the fact that one twin accelerates and changes direction, so he could show that the symmetry is broken and the accelerated twin is younger. However, Langevin himself interpreted this as a hint of the existence of an aether. Although Langevin's explanation is still accepted by some, his conclusions regarding the aether were not generally accepted. Laue (1913) pointed out that any acceleration can be made arbitrarily small in relation to the inertial motion of the twin, and that the real explanation is that one twin is at rest in two different inertial frames during his journey, while the other twin is at rest in a single inertial frame. Laue was also the first to analyze the situation based on Minkowski's spacetime model for special relativity – showing how the world lines of inertially moving bodies maximize the proper time elapsed between two events. Acceleration Einstein (1908) also tried – as a preliminary step within the framework of special relativity – to include accelerated frames in the relativity principle. In the course of this attempt he recognized that for any single moment of acceleration of a body one can define an inertial reference frame in which the accelerated body is temporarily at rest. It follows that in accelerated frames defined in this way, the application of the constancy of the speed of light to define simultaneity is restricted to small localities. However, the equivalence principle that was used by Einstein in the course of that investigation, which expresses the equality of inertial and gravitational mass and the equivalence of accelerated frames and homogeneous gravitational fields, transcended the limits of special relativity and resulted in the formulation of general relativity. Nearly simultaneously with Einstein, Minkowski (1908) considered the special case of uniform accelerations within the framework of his spacetime formalism. He recognized that the worldline of such an accelerated body corresponds to a hyperbola. This notion was further developed by Born (1909) and Sommerfeld (1910), with Born introducing the expression "hyperbolic motion". He noted that uniform acceleration can be used as an approximation for any form of acceleration within special relativity. In addition, Harry Bateman and Ebenezer Cunningham (1910) showed that Maxwell's equations are invariant under a much wider group of transformations than the Lorentz group, namely the spherical wave transformations, a form of conformal transformations. Under those transformations the equations preserve their form for some types of accelerated motions. A general covariant formulation of electrodynamics in Minkowski space was eventually given by Friedrich Kottler (1912); his formulation is also valid for general relativity. 
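Two of the statements above can be made concrete in modern notation (standard textbook formulations, not the original authors' own). The proper time elapsed along a worldline,

\tau = \int \sqrt{1 - v(t)^2/c^2}\,\mathrm{d}t,

is maximized by the inertially moving (unaccelerated) twin, which is the content of Laue's resolution of the twin paradox; and the worldline of a body with constant proper acceleration a, the "hyperbolic motion" of Minkowski and Born, satisfies

x^2 - c^2t^2 = \left(\frac{c^2}{a}\right)^2.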
Concerning the further development of the description of accelerated motion in special relativity, the works by Langevin and others for rotating frames (Born coordinates), and by Wolfgang Rindler and others for uniformly accelerated frames (Rindler coordinates), must be mentioned. Rigid bodies and Ehrenfest paradox Einstein (1907b) discussed the question of whether, in rigid bodies, as well as in all other cases, the velocity of information can exceed the speed of light, and explained that under these circumstances information could be transmitted into the past, so causality would be violated. Since this radically contravenes all experience, superluminal velocities are considered impossible. He added that a dynamics of the rigid body must be created in the framework of SR. Eventually, Max Born (1909), in the course of his above-mentioned work on accelerated motion, tried to include the concept of rigid bodies in SR. However, Paul Ehrenfest (1909) showed that Born's concept leads to the so-called Ehrenfest paradox, in which, due to length contraction, the circumference of a rotating disk is shortened while the radius stays the same. This question was also considered by Gustav Herglotz (1910), Fritz Noether (1910), and von Laue (1911). It was recognized by Laue that the classical concept of rigidity is not applicable in SR, since a body in SR possesses infinitely many degrees of freedom. Yet, while Born's definition was not applicable to rigid bodies, it was very useful in describing rigid motions of bodies. In connection with the Ehrenfest paradox, it was also discussed (by Vladimir Varićak and others) whether length contraction is "real" or "apparent", and whether there is a difference between the dynamic contraction of Lorentz and the kinematic contraction of Einstein. However, it was rather a dispute over words because, as Einstein said, the kinematic length contraction is "apparent" for a co-moving observer, but for an observer at rest it is "real" and the consequences are measurable. Acceptance of special relativity Planck, in 1909, compared the implications of the modern relativity principle (he particularly referred to the relativity of time) with the revolution brought about by the Copernican system. Poincaré made a similar analogy in 1905. An important factor in the adoption of special relativity by physicists was its development by Poincaré and Minkowski into a spacetime theory. Consequently, by about 1911, most theoretical physicists accepted special relativity. In 1912 Wilhelm Wien recommended both Lorentz (for the mathematical framework) and Einstein (for reducing it to a simple principle) for the Nobel Prize in Physics – although it was decided by the Nobel committee not to award the prize for special relativity. Only a minority of theoretical physicists such as Abraham, Lorentz, Poincaré, or Langevin still believed in the existence of an aether. Einstein later (1918–1920) qualified his position by arguing that one can speak about a relativistic aether, but the "idea of motion" cannot be applied to it. Lorentz and Poincaré had always argued that motion through the aether was undetectable. Einstein used the expression "special theory of relativity" in 1915 to distinguish it from general relativity. Relativistic theories Gravitation The first attempt to formulate a relativistic theory of gravitation was undertaken by Poincaré (1905). He tried to modify Newton's law of gravitation so that it assumes a Lorentz-covariant form. 
He noted that there were many possibilities for a relativistic law, and he discussed two of them. It was shown by Poincaré that the argument of Pierre-Simon Laplace, who argued that the speed of gravity is many times faster than the speed of light, is not valid within a relativistic theory. That is, in a relativistic theory of gravitation, planetary orbits are stable even when the speed of gravity is equal to that of light. Similar models to that of Poincaré were discussed by Minkowski (1907b) and Sommerfeld (1910). However, it was shown by Abraham (1912) that those models belong to the class of "vector theories" of gravitation. The fundamental defect of those theories is that they implicitly contain a negative value for the gravitational energy in the vicinity of matter, which would violate the energy principle. As an alternative, Abraham (1912) and Gustav Mie (1913) proposed different "scalar theories" of gravitation. While Mie never formulated his theory in a consistent way, Abraham completely gave up the concept of Lorentz-covariance (even locally), and therefore it was irreconcilable with relativity. In addition, all of those models violated the equivalence principle, and Einstein argued that it is impossible to formulate a theory which is both Lorentz-covariant and satisfies the equivalence principle. However, Gunnar Nordström (1912, 1913) was able to create a model which fulfilled both conditions. This was achieved by making both the gravitational and the inertial mass dependent on the gravitational potential. Nordström's theory of gravitation was remarkable because it was shown by Einstein and Adriaan Fokker (1914), that in this model gravitation can be completely described in terms of spacetime curvature. Although Nordström's theory is without contradiction, from Einstein's point of view a fundamental problem persisted: It does not fulfill the important condition of general covariance, as in this theory preferred frames of reference can still be formulated. So contrary to those "scalar theories", Einstein (1911–1915) developed a "tensor theory" (i.e. general relativity), which fulfills both the equivalence principle and general covariance. As a consequence, the notion of a complete "special relativistic" theory of gravitation had to be given up, as in general relativity the constancy of light speed (and Lorentz covariance) is only locally valid. The decision between those models was brought about by Einstein, when he was able to exactly derive the perihelion precession of Mercury, while the other theories gave erroneous results. In addition, only Einstein's theory gave the correct value for the deflection of light near the Sun. Quantum field theory The need to put together relativity and quantum mechanics was one of the major motivations in the development of quantum field theory. Pascual Jordan and Wolfgang Pauli showed in 1928 that quantum fields could be made to be relativistic, and Paul Dirac produced the Dirac equation for electrons, and in so doing predicted the existence of antimatter. Many other domains have since been reformulated with relativistic treatments: relativistic thermodynamics, relativistic statistical mechanics, relativistic hydrodynamics, relativistic quantum chemistry, relativistic heat conduction, etc. 
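For reference, the Dirac equation mentioned above can be written in modern notation (natural units, not Dirac's original 1928 presentation) as

(i\gamma^\mu \partial_\mu - m)\psi = 0,

whose negative-energy solutions led to the prediction of antimatter.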
Experimental evidence Important early experiments confirming special relativity, as mentioned above, were the Fizeau experiment, the Michelson–Morley experiment, the Kaufmann–Bucherer–Neumann experiments, the Trouton–Noble experiment, the experiments of Rayleigh and Brace, and the Trouton–Rankine experiment. In the 1920s, a series of Michelson–Morley type experiments were conducted, confirming relativity to even higher precision than the original experiment. Another type of interferometer experiment was the Kennedy–Thorndike experiment in 1932, by which the independence of the speed of light from the velocity of the apparatus was confirmed. Time dilation was directly measured in the Ives–Stilwell experiment in 1938 and by measuring the decay rates of moving particles in 1940. All of those experiments have been repeated several times with increased precision. In addition, many tests of relativistic energy and momentum confirmed that the speed of light is unreachable for massive bodies. Knowledge of those relativistic effects is therefore required in the construction of particle accelerators. In 1962 J. G. Fox pointed out that all previous experimental tests of the constancy of the speed of light were conducted using light which had passed through stationary material: glass, air, or the incomplete vacuum of deep space. As a result, all were subject to the effects of the extinction theorem. This implied that the light being measured would have had a velocity different from that of the original source. He concluded that there was likely as yet no acceptable proof of the second postulate of special relativity. This surprising gap in the experimental record was quickly closed in the ensuing years by experiments by Fox and by Alvager et al., which used gamma rays sourced from high-energy mesons. The high energy levels of the measured photons, along with very careful accounting for extinction effects, eliminated any significant doubt from their results. Many other tests of special relativity have been conducted, testing possible violations of Lorentz invariance in certain variations of quantum gravity. However, no sign of anisotropy of the speed of light has been found even at the 10^−17 level, and some experiments even ruled out Lorentz violations at the 10^−40 level; see Modern searches for Lorentz violation. Priority Some claim that Poincaré and Lorentz, not Einstein, are the true discoverers of special relativity. For more see the article on relativity priority dispute. Criticisms Some have criticized special relativity for various reasons, such as a perceived lack of empirical evidence, alleged internal inconsistencies, rejection of mathematical physics per se, or philosophical reasons. Although there still are critics of relativity outside the scientific mainstream, the overwhelming majority of scientists agree that special relativity has been verified in many different ways and there are no inconsistencies within the theory. See also Timeline of special relativity and the speed of light Einstein's thought experiments History of Lorentz transformations Tests of special relativity References Primary sources English translations of several of the primary papers exist (one translated by J. B. Sykes, 1973; another translated into English in 1920 by Meghnad Saha). Various English translations on Wikisource: Space and Time. Preface partly reprinted in "Science and Hypothesis", Ch. 12. Reprinted in Poincaré, Oeuvres, tome IX, pp. 395–413. Reprinted in "Science and Hypothesis", Ch. 9–10. See also the English translation. Reprinted in "Science and Hypothesis", Ch. 6–7. Reprinted in Poincaré 1913, Ch. 6; see English translation. Notes and secondary sources 4th edition of Laue (1911); in English: translated. The proof consists in showing that the Lorentz transformation takes Galilean form when written in Lobachevski coordinates. Non mainstream External links Mathpages: Corresponding States, The End of My Latin, Who Invented Relativity?, Poincaré Contemplates Copernicus. Berger, Andy, "All in Einstein's Head", June 2016, Discover magazine, explanations of Einstein's thought experiments. Special relativity Aether theories Hendrik Lorentz
https://en.wikipedia.org/wiki/Aluminium%20phenolate
Aluminium phenolate is the metalloorganic compound with the formula [Al(OC6H5)3]n. It is a white solid. 27Al NMR studies suggest that aluminium phenolate exists in benzene solution as a mixture of dimers and trimers. The compound can be prepared by the reaction of elemental aluminium with phenol: Al + 3 HOC6H5 → Al(OC6H5)3 + 1.5 H2 The compound is used as a catalyst for the alkylation of phenols with various alkenes. For example, the ethylphenols are generated commercially by treating phenol with ethylene in the presence of a catalytic amount of aluminium phenolate. Related compounds Aluminium isopropoxide References Phenolates Aluminium compounds
https://en.wikipedia.org/wiki/Heirs%20of%20Alexandria%20series
Heirs of Alexandria is an alternate history/historical fantasy series introduced in 2002 and set primarily in the Republic of Venice in the 1530s. The books are written by three authors, Mercedes Lackey, Eric Flint and Dave Freer. The books combine elements from the styles of all three authors, such as Lackey's approach to tolerance and magic and Flint's sense of history alteration. Plot summary In our own universe, Hypatia of Alexandria was killed for her non-Christian views, shortly before the destruction of the Library of Alexandria by an angry mob. In the universe of the novels, Hypatia was converted to Christianity by John Chrysostom, and stopped the mob from destroying the Library. She continued her correspondence with John and Augustine of Hippo, which eventually led to the modern (1530s) divisions of the Church. The Shadow of the Lion (2002) deals with Chernobog's attempt to destroy Venice and the awakening of the city's ancient powers. Marco is the main protagonist, while Chernobog acts through several intermediaries. This Rough Magic (2003) is set in Corfu and features several new antagonists. It is largely centered on Maria and Benito's awakening, Marco having fit comfortably in his new role in Venice. Elizabeth Bartholdy has replaced Chernobog as the major behind-the-scenes villain in the book. A Mankind Witch (2005) is a solo effort by Freer, and takes place between Shadow of the Lion and This Rough Magic. While Manfred and Eric are major characters, the focus is shifted to a thrall, Cair Aidin, and the Princess of Telemark, Signy. Trolls are the major antagonists of the story. Much Fall of Blood (2010) follows Manfred and Erik after their journey to Jerusalem. They are attempting to broker an agreement between the Ilkhan and their nomadic cousins, the Golden Horde, which is complicated by disguised agents of Chernobog who wish to ensure no agreement occurs. In parallel, and eventually intersecting, Elizabeth Bartholdy's latest plot seeks to exploit and destroy an ancient supernatural pact between the family line of Prince Vlad of Wallachia and the supernatural powers that live in his domain, and both her nephew Prince Emeric of Hungary and the dark magician Count Mindaug work their own plots subverting hers. Burdens of the Dead (2013) centers on Benito Valdosta's attempt to stop Chernobog's plots once and for all, following the revelations of Much Fall of Blood, through a naval war with Byzantium in an attempt to block a Black Sea fleet under construction for Chernobog from penetrating into the Mediterranean. The crossroads city of Constantinople is the focal point of their war, and the spirit of Hekate, goddess of crossroads and long worshipped in the Bosporus, quickly becomes involved in the war, and kidnapping and sorcery put Benito's family at risk in an attempt to distract him and weaken the naval offensive. The original working title was Great Doom's Shadow. All the Plagues of Hell (2018), by Eric Flint & Dave Freer, focuses on the city of Milan. The condottiere Carlo Sforza lethally foils its Duke's attempt to assassinate him and takes control of the city. The illegitimate daughter of the dead duke awakens a spirit of plague in an attempt to take control for herself, and magicians across Europe seek the source of their premonitions that a plague is awakening. 
This is complicated by the arrival in Milan of a notorious black magician, Count Mindaug, who most of the Christian magicians believe is the architect of the plague, by the involvement of Sforza's illegitimate son, Benito Valdosta of Venice, and the antagonism Venice has had for Sforza, and by Sforza's belief that magic is faked and lacks any spiritual or supernatural power. Characters The following characters appear in two or more novels in the series: Aidoneus: God of the dead. Aldanto, Ceasare: Milanese sell-sword and spy. Bartholdy, Elizabeth: Hungarian countess and "aunt" to King Emeric. Hundreds of years old but appears to be in her early twenties. Engages in gruesome blood rituals to keep her youth. Bespi, Fortunato: Former Milanese spy, he is reprogrammed by the Strega to act as Marco's bodyguard. De Chevreuse, Francesca: Most powerful Courtesan in Venice, formerly of Orleans. Dell'Este, Enrico: The Duke of Ferrara; an excellent swordsmith, he is known as the Old Fox, perhaps the craftiest military mind Italy has seen in decades. Dorma, Petro: Head of the influential House Dorma, leader of the Lords of the Nightwatch, and a frontrunner for the position of Doge. Garavalli, Maria: A sharp-tongued canaler, one of the most feared women in the canals. Hakkonsen, Eric: An Icelander, bodyguard and mentor to Manfred. Hohenstauffen, Charles Fredrik: Holy Roman Emperor. Evangelina: A member of the Hypatian order in Venice's St. Hypatia di Hagia Sophia. Jagiellon: Grand Duke of Lithuania, possessed by the demon Chernobog. Lopez, Eneko: A Basque cleric and ecclesiastical magician. He is perhaps the greatest sacred magician since Hypatia herself. Manfred, Prince of Brittany, Earl of Carnac, Marquis of Rennes, Baron of Ravensburg: Nephew of the Holy Roman Emperor, second in line to the throne, and Knight of the Cross. Mindaug, Kazimierz: Lithuanian count, advisor to various powers including Jagiellon, Countess Bartholdy, King Emeric, and Carlo Sforza. Montescue, Katerina (Kat): Heiress to the bankrupt House Montescue. She worked as a smuggler. Montescue, Ludovico: Current leader of House Montescue, having wasted most of his money in a pathetic effort to destroy the Valdostas. Sforza, Carlo: A notorious and skilled condottiere known as The Wolf of the North. He is the father of Benito Valdosta and has a long-standing grudge against Duke Enrico Dell'Este over the fate of Benito's mother. Carlo Sforza is substantially based on Francesco Sforza, a historical condottiere who became Duke of Milan in 1450. Valdosta, Benito: Grandson of the Duke of Ferrara, a pickpocket while in hiding. Valdosta, Marco: Grandson of the Duke of Ferrara; a skilled doctor (when trained) and powerful mage; heir to House Valdosta and the Lion Crown. Winged Lion of Venice: The city's ancient guardian, which answers only to the wearer of the Winged Mantle. The Church in Europe The Petrines Led by the Grand Metropolitan in Rome, the Petrine branch of the Church (named for St. Peter and built on the teachings of Hypatia and Chrysostom) is the creed of choice in Italy and Spain, with a relatively large following in Aquitaine. The Petrines are noted for taking a mediative role in politics and a more tolerant attitude to other faiths. The Paulines Most of central and northern Europe follow the Pauline creed (named for St. Paul and based on the writings of St. Augustine). The Paulines are recognized for a general intolerance to all non-Christians, though some members of the Church are more politic about it than others. 
There is no official head of the Pauline church, though the Holy Roman Emperor is the "Bulwark of the Faith". The Paulines very closely (with a few exceptions) resemble historical medieval Catholicism in faith, practice and politics. Magic The Church Most priests and Sisters of the Petrine branch of the Church are trained as magicians in the Vatican or Alexandria. They are typically trained in scrying, healing, and protection, though a number of them have taken up combative magic. The Order of Hypatia is a dedicated group of Petrine priests and Sisters who use magic to heal and protect. In the Pauline branch, only the Servants of the Holy Trinity are allowed to use magic (a fact which does not stop the Emperor from seeking a second opinion), and all forms of magic not sanctioned by them are heretical. Strega The Strega are magic-users and traditional witches who typically serve a higher purpose. In Venice, the Strega are welcomed, and about a third of the students at the Accademia are Strega or have Strega leanings. The Strega are led by a Grand Master, who is usually a Grimas (one who has mastered all three branches of Stregheria). Others The darker sides of magic are usually the antagonists of the series. The demon Chernobog, for instance, is the main villain, and his magical minions are the source of Venice's troubles. In This Rough Magic, King Emeric of Hungary is a witch, and a sect of sorceresses are the most powerful antagonists (their leader is the infamous Elizabeth Bartholdy). In A Mankind Witch, female Trolls and Alfar are shown to have powerful magic. Nations League of Armagh: A coalition of Celtic and Norse states. Most of their territory lies in the British Isles, but there are extensive settlements in Iceland and Vinland (North America). Manfred of Brittany is the heir to a part of the League as well as the Empire. Aquitaine: A realm that encompasses most of our universe's France and England. Francesca de Chevreuse hails from the southern capital, Orleans. Holy Roman Empire: Ruling over all of central Europe, including Austria, Germany, and Denmark, the Empire is the most powerful nation in Europe, and adheres to the Pauline creed. Manfred of Brittany is an heir to the Empire, currently ruled by Charles Fredrik Hohenstauffen. Grand Duchy of Lithuania and Poland: Dominating most of eastern Europe, the Duchy is ruled by the iron fist of Grand Duke Jagiellon, who is possessed by the demon Chernobog. Kingdom of Hungary: A brutal kingdom which has control of most of the Balkans. The current king, Emeric, is a warmonger who is not above using witchcraft to achieve his bloodthirsty ends. Ilkhan: A vast empire implied to be the result of a merger between the Mongols and the Islamic Caliphate. They are known in Europe as the current rulers of Egypt and the Holy Land, enforcing the peace in Jerusalem by aggressively upholding a policy of religious tolerance. The full extent of their empire is not clear, but includes most of the Middle East and extends deep into Asia. Genoa: The only rivals of the Veneze on the open seas, in terms of both trade and navy. Milan: The Milanese and the Visconti house are the leaders of the Montagnards, staunch Paulines who are bent on the Empire annexing northern Italy. They are oblivious to the fact that this is the last thing the Empire wants. Caesare Aldanto hails from Milan. Verona: Venice's land-based rival. 
Ferrara: Like Venice, the Ferrarese are politically non-aligned, although they have served as agents for both the Empire and the Grand Metropolitan. Duke Enrico Dell'Este, grandfather of Marco and Benito, is known as the Old Fox. Barbary Coast: as in history, a modest land area controlled by Muslim pirate king-admirals. Unlike our history, independent, without Ottomans ruling over them. Ruled by the Redbeards, the Aidin brothers, from their capital Carthage. Venice The most trade-oriented and tolerant city in Europe. Venice is in possession of a large empire in the Mediterranean; in addition to its own home territories in Italy, the city also rules Istria on the Adriatic coast, Crete, the Greek island of Corfu, and unnamed territories in Sicily, Sardinia, and North Africa. The city is also known for its policy of tolerance—it is the only city in Europe where all manner of creeds can live together. Jews and Strega are among the persecuted minorities who find safe haven in the city. The Republic's government is ruled by numerous bodies and individuals: The Doge is elected for a life term from all available candidates in the Senate. The Council of Ten are the Doge's cabinet. Membership is a state secret. The Lords of the Nightwatch serve as the heads of all "extra-military" matters of the republic, including but not limited to police work, detective work, security and espionage. The Senate consists of three hundred dignitaries, merchants, and heads of House. References Novels set in the 1530s Book series introduced in 2002 Alternate history book series Books by Eric Flint Fantasy novel series Collaborative book series Cultural depictions of Elizabeth Báthory Novels set in Venice Cultural depictions of Hypatia Cultural depictions of Augustine of Hippo Greek and Roman deities in fiction Hecate Library of Alexandria
https://en.wikipedia.org/wiki/Intrinsically%20disordered%20proteins
In molecular biology, an intrinsically disordered protein (IDP) is a protein that lacks a fixed or ordered three-dimensional structure, typically in the absence of its macromolecular interaction partners, such as other proteins or RNA. IDPs range from fully unstructured to partially structured and include random coil, molten globule-like aggregates, or flexible linkers in large multi-domain proteins. They are sometimes considered as a separate class of proteins along with globular, fibrous and membrane proteins. IDPs are a very large and functionally important class of proteins and their discovery has disproved the idea that three-dimensional structures of proteins must be fixed to accomplish their biological functions. For example, IDPs have been identified to participate in weak multivalent interactions that are highly cooperative and dynamic, lending them importance in DNA regulation and in cell signaling. Many IDPs can also adopt a fixed three-dimensional structure after binding to other macromolecules. Overall, IDPs are different from structured proteins in many ways and tend to have distinctive function, structure, sequence, interactions, evolution and regulation. History In the 1930s-1950s, the first protein structures were solved by protein crystallography. These early structures suggested that a fixed three-dimensional structure might be generally required to mediate biological functions of proteins. These publications solidified the central dogma of molecular biology in that the amino acid sequence of a protein determines its structure which, in turn, determines its function. In 1950, Karush wrote about 'Configurational Adaptability' contradicting this assumption. He was convinced that proteins have more than one configuration at the same energy level and can choose one when binding to other substrates. In the 1960s, Levinthal's paradox suggested that the systematic conformational search of a long polypeptide is unlikely to yield a single folded protein structure on biologically relevant timescales (i.e. microseconds to minutes). Curiously, for many (small) proteins or protein domains, relatively rapid and efficient refolding can be observed in vitro. As stated in Anfinsen's Dogma from 1973, the fixed 3D structure of these proteins is uniquely encoded in its primary structure (the amino acid sequence), is kinetically accessible and stable under a range of (near) physiological conditions, and can therefore be considered as the native state of such "ordered" proteins. During the subsequent decades, however, many large protein regions could not be assigned in x-ray datasets, indicating that they occupy multiple positions, which average out in electron density maps. The lack of fixed, unique positions relative to the crystal lattice suggested that these regions were "disordered". Nuclear magnetic resonance spectroscopy of proteins also demonstrated the presence of large flexible linkers and termini in many solved structural ensembles. In 2001, Dunker questioned whether the newly found information was ignored for 50 years with more quantitative analyses becoming available in the 2000s. In the 2010s it became clear that IDPs are common among disease-related proteins, such as alpha-synuclein and tau. Abundance It is now generally accepted that proteins exist as an ensemble of similar structures with some regions more constrained than others. 
IDPs occupy the extreme end of this spectrum of flexibility and include proteins with considerable local structural tendencies as well as flexible multidomain assemblies. Intrinsic disorder is particularly elevated among proteins that regulate chromatin and transcription, and bioinformatic predictions indicate that it is more common in genomes and proteomes than in known structures in the protein database. Based on DISOPRED2 prediction, long (>30 residue) disordered segments occur in 2.0% of archaean, 4.2% of eubacterial and 33.0% of eukaryotic proteins, including certain disease-related proteins. Biological roles Highly dynamic disordered regions of proteins have been linked to functionally important phenomena such as allosteric regulation and enzyme catalysis. Many disordered proteins have the binding affinity with their receptors regulated by post-translational modification, thus it has been proposed that the flexibility of disordered proteins facilitates the different conformational requirements for binding the modifying enzymes as well as their receptors. Intrinsic disorder is particularly enriched in proteins implicated in cell signaling and transcription, as well as chromatin remodeling functions. Genes that have recently been born de novo tend to have higher disorder. In animals, genes with high disorder are lost at higher rates during evolution. Flexible linkers Disordered regions are often found as flexible linkers or loops connecting domains. Linker sequences vary greatly in length but are typically rich in polar uncharged amino acids. Flexible linkers allow the connecting domains to freely twist and rotate to recruit their binding partners via protein domain dynamics. They also allow their binding partners to induce larger scale conformational changes by long-range allostery. The flexible linker which connects the two domains of FKBP25 is important for the binding of FKBP25 to DNA. Linear motifs Linear motifs are short disordered segments of proteins that mediate functional interactions with other proteins or other biomolecules (RNA, DNA, sugars etc.). Many roles of linear motifs are associated with cell regulation, for instance in control of cell shape, subcellular localisation of individual proteins and regulated protein turnover. Often, post-translational modifications such as phosphorylation tune the affinity (not infrequently by several orders of magnitude) of individual linear motifs for specific interactions. Relatively rapid evolution and a relatively small number of structural restraints for establishing novel (low-affinity) interfaces make it particularly challenging to detect linear motifs, but their widespread biological roles and the fact that many viruses mimic/hijack linear motifs to efficiently recode infected cells underline the urgency of research on this very challenging and exciting topic. Pre-structured motifs Unlike globular proteins, IDPs do not have spatially-disposed active pockets. Fascinatingly, 80% of target-unbound IDPs (~four dozen) subjected to detailed structural characterization by NMR possess linear motifs termed PresMos (pre-structured motifs) that are transient secondary structural elements primed for target recognition. In several cases it has been demonstrated that these transient structures become full and stable secondary structures, e.g., helices, upon target binding. Hence, PresMos are the putative active sites in IDPs. 
Coupled folding and binding Many unstructured proteins undergo transitions to more ordered states upon binding to their targets (e.g. Molecular Recognition Features (MoRFs)). The coupled folding and binding may be local, involving only a few interacting residues, or it might involve an entire protein domain. It was recently shown that the coupled folding and binding allows the burial of a large surface area that would be possible only for fully structured proteins if they were much larger. Moreover, certain disordered regions might serve as "molecular switches" in regulating certain biological functions by switching to an ordered conformation upon molecular recognition, such as small-molecule binding, DNA/RNA binding, ion interactions, etc. The ability of disordered proteins to bind, and thus to exert a function, shows that stability is not a required condition. Many short functional sites, for example Short Linear Motifs, are over-represented in disordered proteins. Disordered proteins and short linear motifs are particularly abundant in many viruses such as Hendra virus, HCV, HIV-1 and human papillomaviruses. This enables such viruses to overcome their informationally limited genomes by facilitating binding to, and manipulation of, a large number of host cell proteins. Disorder in the bound state (fuzzy complexes) Intrinsically disordered proteins can retain their conformational freedom even when they bind specifically to other proteins. The structural disorder in the bound state can be static or dynamic. In fuzzy complexes, structural multiplicity is required for function, and manipulation of the bound disordered region changes activity. The conformational ensemble of the complex is modulated via post-translational modifications or protein interactions. Specificity of DNA binding proteins often depends on the length of fuzzy regions, which is varied by alternative splicing. Some fuzzy complexes may exhibit high binding affinity, although other studies showed different affinity values for the same system in a different concentration regime. Structural aspects Intrinsically disordered proteins adopt many different structures in vivo according to the cell's conditions, creating a structural or conformational ensemble. Therefore, their structures are strongly function-related. However, only a few proteins are fully disordered in their native state. Disorder is mostly found in intrinsically disordered regions (IDRs) within an otherwise well-structured protein. The term intrinsically disordered protein (IDP) therefore includes proteins that contain IDRs as well as fully disordered proteins. The existence and kind of protein disorder is encoded in its amino acid sequence. In general, IDPs are characterized by a low content of bulky hydrophobic amino acids and a high proportion of polar and charged amino acids, usually referred to as low hydrophobicity. This property leads to good interactions with water. Furthermore, high net charges promote disorder because of electrostatic repulsion resulting from equally charged residues. Thus disordered sequences cannot sufficiently bury a hydrophobic core to fold into stable globular proteins. In some cases, hydrophobic clusters in disordered sequences provide the clues for identifying the regions that undergo coupled folding and binding (refer to biological roles). Many disordered proteins reveal regions without any regular secondary structure. These regions can be termed flexible, in contrast to structured loops. 
While the latter are rigid and contain only one set of Ramachandran angles, IDPs involve multiple sets of angles. The term flexibility is also used for well-structured proteins, but describes a different phenomenon in the context of disordered proteins. Flexibility in structured proteins is bound to an equilibrium state, while it is not so in IDPs. Many disordered proteins also reveal low complexity sequences, i.e. sequences with over-representation of a few residues. While low complexity sequences are a strong indication of disorder, the reverse is not necessarily true, that is, not all disordered proteins have low complexity sequences. Disordered proteins have a low content of predicted secondary structure. Due to the disordered nature of these proteins, topological approaches have been developed to search for conformational patterns in their dynamics. For instance, circuit topology has been applied to track the dynamics of disordered protein domains. By employing a topological approach, one can categorize motifs according to their topological buildup and the timescale of their formation. Experimental validation IDPs can be validated in several contexts. Most approaches for experimental validation of IDPs are restricted to extracted or purified proteins, while some new experimental strategies aim to explore in vivo conformations and structural variations of IDPs inside intact living cells and systematic comparisons between their dynamics in vivo and in vitro. In vivo approaches The first direct evidence for in vivo persistence of intrinsic disorder has been achieved by in-cell NMR upon electroporation of a purified IDP and recovery of cells to an intact state. Larger-scale in vivo validation of IDR predictions is now possible using biotin 'painting'. In vitro approaches Intrinsically unfolded proteins, once purified, can be identified by various experimental methods. The primary method to obtain information on disordered regions of a protein is NMR spectroscopy. The lack of electron density in X-ray crystallographic studies may also be a sign of disorder. Folded proteins have a high density (partial specific volume of 0.72–0.74 mL/g) and commensurately small radius of gyration. Hence, unfolded proteins can be detected by methods that are sensitive to molecular size, density or hydrodynamic drag, such as size exclusion chromatography, analytical ultracentrifugation, small angle X-ray scattering (SAXS), and measurements of the diffusion constant. Unfolded proteins are also characterized by their lack of secondary structure, as assessed by far-UV (170–250 nm) circular dichroism (esp. a pronounced minimum at ~200 nm) or infrared spectroscopy. Unfolded proteins also have backbone peptide groups exposed to solvent, so that they are readily cleaved by proteases, undergo rapid hydrogen-deuterium exchange and exhibit a small dispersion (<1 ppm) in their 1H amide chemical shifts as measured by NMR. (Folded proteins typically show dispersions as large as 5 ppm for the amide protons.) Recently, new methods including Fast parallel proteolysis (FASTpp) have been introduced, which allow the folded/disordered fraction to be determined without the need for purification. Even subtle differences in the stability of missense mutations, protein partner binding and (self)polymerisation-induced folding of (e.g.) coiled-coils can be detected using FASTpp, as recently demonstrated with the tropomyosin–troponin protein interaction. 
Fully unstructured protein regions can be experimentally validated by their hypersusceptibility to proteolysis using short digestion times and low protease concentrations. Bulk methods to study IDP structure and dynamics include SAXS for ensemble shape information, NMR for atomistic ensemble refinement, fluorescence for visualising molecular interactions and conformational transitions, X-ray crystallography to highlight more mobile regions in otherwise rigid protein crystals, cryo-EM to reveal less fixed parts of proteins, light scattering to monitor size distributions of IDPs or their aggregation kinetics, and NMR chemical shifts and circular dichroism to monitor the secondary structure of IDPs. Single-molecule methods to study IDPs include spFRET to study the conformational flexibility of IDPs and the kinetics of structural transitions, optical tweezers for high-resolution insights into the ensembles of IDPs and their oligomers or aggregates, nanopores to reveal global shape distributions of IDPs, magnetic tweezers to study structural transitions for long times at low forces, and high-speed AFM to visualise the spatio-temporal flexibility of IDPs directly. Disorder annotation Intrinsic disorder can be either annotated from experimental information or predicted with specialized software. Disorder prediction algorithms can predict intrinsic disorder (ID) propensity with high accuracy (approaching around 80%) based on primary sequence composition, similarity to unassigned segments in protein X-ray datasets, flexible regions in NMR studies and physico-chemical properties of amino acids. Disorder databases Databases have been established to annotate protein sequences with intrinsic disorder information. The DisProt database contains a collection of manually curated protein segments which have been experimentally determined to be disordered. MobiDB is a database combining experimentally curated disorder annotations (e.g. from DisProt) with data derived from missing residues in X-ray crystallographic structures and flexible regions in NMR structures. Predicting IDPs by sequence Separating disordered from ordered proteins is essential for disorder prediction. One of the first steps to find a factor that distinguishes IDPs from non-IDPs is to specify biases within the amino acid composition. The following amino acids, which tend to be hydrophilic or charged – A, R, G, Q, S, P, E and K – have been characterized as disorder-promoting amino acids, while the order-promoting amino acids W, C, F, I, Y, V, L, and N are hydrophobic and uncharged. The remaining amino acids H, M, T and D are ambiguous, found in both ordered and unstructured regions. A more recent analysis ranked amino acids by their propensity to form disordered regions as follows (order promoting to disorder promoting): W, F, Y, I, M, L, V, N, C, T, A, G, R, D, H, Q, K, S, E, P. As can be seen from the list, small, charged, hydrophilic residues often promote disorder, while large and hydrophobic residues promote order. This information is the basis of most sequence-based predictors (a minimal illustrative sketch is given below). Regions with little to no secondary structure, also known as NORS (NO Regular Secondary structure) regions, and low-complexity regions can easily be detected. However, not all disordered proteins contain such low complexity sequences. Prediction methods Determining disordered regions by biochemical methods is very costly and time-consuming. 
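The composition bias described above can be turned into a toy per-residue score. This is only an illustrative sketch, not any published predictor (such as the IUPRED or Disopred tools mentioned below); the residue classes are taken from the list above, and the function names, window size, and threshold are arbitrary assumptions made here for illustration.

# Toy illustration of composition-based disorder propensity (not a real predictor).
DISORDER_PROMOTING = set("ARGQSPEK")  # disorder-promoting residues from the list above

def disorder_fraction(window):
    # Fraction of disorder-promoting residues in a sequence window.
    return sum(aa in DISORDER_PROMOTING for aa in window) / len(window)

def flag_disordered(sequence, window=21, threshold=0.5):
    # Sliding-window flags; window size and threshold are arbitrary illustrative choices.
    flags = []
    for i in range(len(sequence)):
        start = max(0, i - window // 2)
        end = min(len(sequence), i + window // 2 + 1)
        flags.append(disorder_fraction(sequence[start:end]) >= threshold)
    return flags

Real predictors weight individual residues (for example according to the propensity ranking quoted above), smooth the resulting profile, and are trained and benchmarked against curated data such as DisProt, but the underlying sliding-window idea is the same.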
Due to the variable nature of IDPs, only certain aspects of their structure can be detected, so that a full characterization requires a large number of different methods and experiments. This further increases the expense of IDP determination. In order to overcome this obstacle, computer-based methods have been created for predicting protein structure and function. It is one of the main goals of bioinformatics to derive knowledge by prediction. Predictors for IDP function are also being developed, but mainly use structural information such as linear motif sites. There are different approaches for predicting IDP structure, such as neural networks or matrix calculations, based on different structural and/or biophysical properties. Many computational methods exploit sequence information to predict whether a protein is disordered. Notable examples of such software include IUPRED and Disopred. Different methods may use different definitions of disorder. Meta-predictors represent a newer concept, combining different primary predictors to create a more competent and exact predictor. Due to the different approaches of predicting disordered proteins, estimating their relative accuracy is fairly difficult. For example, neural networks are often trained on different datasets. The disorder prediction category is a part of the biennial CASP experiment, which is designed to test methods according to their accuracy in finding regions with missing 3D structure (marked in PDB files as REMARK 465, i.e. missing electron densities in X-ray structures). Disorder and disease Intrinsically unstructured proteins have been implicated in a number of diseases. Aggregation of misfolded proteins is the cause of many synucleinopathies and of toxicity, as those proteins start binding to each other randomly and can lead to cancer or cardiovascular diseases. Misfolding can happen spontaneously because millions of copies of proteins are made during the lifetime of an organism. The aggregation of the intrinsically unstructured protein α-synuclein is thought to be responsible for Parkinson's disease and other synucleinopathies. The structural flexibility of this protein together with its susceptibility to modification in the cell leads to misfolding and aggregation. Genetics, oxidative and nitrative stress as well as mitochondrial impairment impact the structural flexibility of the unstructured α-synuclein protein and associated disease mechanisms. Many key tumour suppressors have large intrinsically unstructured regions, for example p53 and BRCA1. These regions of the proteins are responsible for mediating many of their interactions. Taking the cell's native defense mechanisms as a model, drugs can be developed that try to block the binding sites of noxious substrates and inhibit them, thus counteracting the disease. Computer simulations Owing to high structural heterogeneity, NMR/SAXS experimental parameters obtained will be an average over a large number of highly diverse and disordered states (an ensemble of disordered states). Hence, to understand the structural implications of these experimental parameters, there is a necessity for accurate representation of these ensembles by computer simulations. All-atom molecular dynamics simulations can be used for this purpose, but their use is limited by the accuracy of current force-fields in representing disordered proteins. Nevertheless, some force-fields have been explicitly developed for studying disordered proteins by optimising force-field parameters using available NMR data for disordered proteins (examples are CHARMM 22*, CHARMM 32, Amber ff03*, etc.). 
MD simulations restrained by experimental parameters (restrained-MD) have also been used to characterise disordered proteins. In principle, one can sample the whole conformational space provided an MD simulation (with an accurate force-field) is run for long enough. Because of the very high structural heterogeneity, the time scales that need to be simulated for this purpose are very long and are limited by computational power. However, other computational techniques such as accelerated-MD simulations, replica exchange simulations, metadynamics, multicanonical MD simulations, or methods using coarse-grained representations with implicit and explicit solvents have been used to sample broader conformational space on shorter time scales. Moreover, various protocols and methods of analyzing IDPs, such as studies based on quantitative analysis of GC content in genes and their respective chromosomal bands, have been used to understand functional IDP segments. See also IDPbyNMR DisProt database MobiDB database Molten globule Prion Random coil Dark proteome References External links Intrinsically disordered protein at Proteopedia MobiDB: a comprehensive database of intrinsic protein disorder annotations IDEAL - Intrinsically Disordered proteins with Extensive Annotations and Literature D2P2 Database of Disordered Protein Predictions Gallery of images of intrinsically disordered proteins First IDP journal covering all topics of IDP research IDP Journal Database of experimentally validated IDPs IDP ensemble database Proteins by structure Protein structure
Intrinsically disordered proteins
Chemistry
4,326
6,703,481
https://en.wikipedia.org/wiki/Minkowski%20Prize
The Minkowski Prize is given by the European Association for the Study of Diabetes (EASD) in recognition of research which has been carried out by a person normally residing in Europe, as manifested by publications which contribute to the advancement of knowledge concerning diabetes mellitus. The Prize honors the name of Oskar Minkowski (1858–1931), a physician and physiologist who was the discoverer of the role of the pancreas in the control of glucose metabolism. It has been awarded annually since 1966, and the winner is invited to deliver a Minkowski Lecture during the EASD Annual Conference. It is traditionally seen as the most prestigious European prize in the field of diabetes research. Since 1966, the award has been sponsored by the pharmaceutical company Sanofi-Aventis. The prize consists of a certificate and 20,000 euros plus travel expenses. The candidate must be less than 45 years of age on 1 January of the year of award. Self-nomination is possible. Winners With the city where the prize was awarded (Annual Conference), name and country. 1966 Aarhus – Philip Randle (United Kingdom) 1967 Stockholm – E. R. Froesch (Switzerland) 1968 Louvain – Lars Carlson (Sweden) 1969 Montpellier – Bo Hellman (Sweden) 1970 Warsaw – Bernard Jeanrenaud (Switzerland) 1971 Southampton – Charles Nicholas Hales (United Kingdom) 1972 Madrid – Willy J. Malaisse (Belgium) 1973 Brussels – Lelio Orci (Switzerland) 1974 Jerusalem – Erol Cerasi (Sweden) 1975 Munich – Pierre Freychet (France) 1976 Helsinki – Karl Dietrich Hepp (Germany) 1977 Geneva – John Wahren (Sweden) 1978 Zagreb – Jorn Nerup (Denmark) 1979 Vienna – Stephen John Haslem Ashcroft (United Kingdom) 1980 Athens – Inge-Bert Täljedal (Sweden) 1981 Amsterdam – Pierre De Meyts (Belgium) 1982 Budapest – Gian Franco Bottazzo (United Kingdom) 1983 Oslo – Simon Howell (United Kingdom) 1984 London – Ake Lernmark (Denmark) 1985 Madrid – Emmanuel Van Obberghen (France) 1986 Rome – Daniel Pipeleers (Belgium) 1987 Leipzig – Jean-Louis Carpentier (Switzerland) 1988 Paris – John Charles Hutton (United Kingdom) 1989 Lisbon – Hans-Ulrich Häring (Germany) 1990 Copenhagen – Philippe Halban (Switzerland) 1991 Dublin – Christian Boitard (France) 1992 Prague – Emile Van Schaftingen (Belgium) 1993 Istanbul – Hannele Yki-Järvinen (Finland) 1994 Düsseldorf – Thomas Mandrup Poulsen (Denmark) 1995 Stockholm – John Todd (United Kingdom) 1996 Vienna – Patrik Rorsman (Denmark) 1997 Helsinki – Philippe Froguel (France) 1998 Barcelona – Johan H. Auwerx (France) 1999 Brussels – Raphael Scharfmann (France) 2000 Jerusalem – Helena Edlund (Sweden) 2001 Glasgow – Juleen R. Zierath (Sweden) 2002 Budapest – Bart O. Roep (The Netherlands) 2003 Paris – Michael Stumvoll (Germany) 2004 Munich – Guy A. Rutter (United Kingdom) 2005 Athens – Peter Rossing (Denmark) 2006 Copenhagen – Michael Roden (Austria) 2007 Amsterdam – Markus Stoffel (Switzerland) 2008 Rome – Jens Claus Brüning (Germany) 2009 Vienna – Gianluca Perseghin (Italy) 2010 Stockholm – Fiona Gribble (United Kingdom) 2011 Lisbon – Naveed Sattar (United Kingdom) 2012 Berlin – Tim Frayling (United Kingdom) 2013 Barcelona – Miriam Cnop (Belgium) 2014 Vienna – Anna Gloyn (United Kingdom) 2015 Stockholm – Matthias Blüher (Germany) 2016 Munich – Patrick Schrauwen (Netherlands) 2017 Lisbon – Ewan Pearson (United Kingdom) 2018 Berlin – Fredrik Bäckhed (Sweden) 2019 Barcelona – Filip K. Knop (Denmark) 2020 Virtual Meeting – Gian Paolo Fadini (Italy) 2021 Virtual Meeting – Amélie Bonnefond (France) 2022 Stockholm – Martin Heni (Germany) References Science and technology awards Diabetes research Awards established in 1966 1966 establishments in Europe
Minkowski Prize
Technology
846
25,714,707
https://en.wikipedia.org/wiki/Gustav%20Tornier
Gustav Tornier (Dombrowken (today Dąbrowa Chełmińska, Poland), 9 May 1858 – Berlin, 25 April 1938) was a German zoologist and herpetologist. Life and career Tornier was born in the Kingdom of Prussia as the eldest child of Gottlob Adolf Tornier, a member of the Prussian landed gentry in Dombrowken, a village near Bromberg (now Bydgoszcz) in West Prussia. His father and mother had both died by 1877, leaving the nineteen-year-old Gustav as the master of a house and estate. The attached commitments kept him from commencing his university studies until the relatively advanced age of twenty-four. Enrolling at the University of Heidelberg in 1882, Tornier took his time, and he did not receive his doctorate for another ten years. In the meantime he wrote a monograph on evolution in support of Wilhelm Roux, Der Kampf mit der Nahrung ("The battle with/for Food", 1884). In the book, he took an uncompromisingly Darwinist stance, and applied the principles of natural selection and adaptation to the structures and functions of individual organisms. In 1891 he had already accepted a post as an assistant in the zoological museum of the Friedrich Wilhelm University (now Humboldt University) in Berlin. Initially he occupied himself with preparing anatomical specimens, but from 1893 he also worked in the herpetological department. When its curator, Paul Matschie, took over the mammal collection in 1895, Tornier succeeded him. In 1902, he became professor of zoology at the university, whilst later also accepting the post of head librarian at the museum (1903), assistant director of the museum (1921), and finally director ad interim of the museum (1922–1923). In addition, he served as a board member of the Berlin Society of Friends of Natural Science (Gesellschaft Naturforschender Freunde zu Berlin) from 1907 to 1924, and as such was closely involved with organizing the Tendaguru Expedition (1910–1912), still the largest dinosaur excavation expedition in history. Tornier retired in October 1923, and died in 1938 in Berlin. He was interred in the Luisenfriedhof III in Berlin-Charlottenburg. Research Tornier's research interests focused on amphibians and reptiles, developmental anatomy, and systematics. He became the leading authority on the reptilian and amphibian fauna of German East Africa. Diplodocus Perhaps unfairly, Tornier's legacy has mainly been determined by his position in the controversy surrounding the posture of the sauropod dinosaur Diplodocus carnegii. Following the 1899 discovery of the animal in Wyoming, it had traditionally been depicted and mounted in an elephant-like stance. However, in 1909 Oliver P. Hay pictured two Diplodocus, reptiles after all, with splayed, lizard-like limbs on the banks of a river, and argued that the animal had a sprawling, lizard-like gait with widely splayed legs. Tornier had arrived at the same conclusion and forcefully supported Hay's argument, arguing that the tail could not physically have made the curve down to the ground in an upright pose. To solve this issue, he lowered the entire animal. The hypothesis, at least as far as the position of the legs was concerned, was contested by W. J. Holland, who maintained that a sprawling Diplodocus would have needed a trench to pull its belly through.
Tornier's acerbic and sometimes sarcastic reply to Holland led to a minor spat, with German authorities (including Kaiser William II himself) coming down on the former's side and even considering re-mounting the Berlin copy of Diplodocus, placed there only a few years before by Holland, in a more "reptilian" fashion. In the end, however, finds of sauropod footprints in the 1930s put Hay and Tornier's theory to rest. Taxonomy Tornier's frog, Litoria tornieri, which is an Australian endemic, was named after him, as was a large sauropod dinosaur found around 1910 in the Tendaguru formations of German East Africa, which was renamed Tornieria africanus (Fraas) after the original name Gigantosaurus had been found to be occupied. Also, Tornier is commemorated in the scientific names of two species of African reptiles: a snake, Crotaphopeltis tornieri; and a tortoise, Malacochersus tornieri. Selected publications (1884). Der Kampf mit der Nahrung: Ein Beitrag zum Darwinismus. Berlin: W. Ißleib. (1896). Die Reptilien und Amphibien Ost-Afrikas. Berlin: Reimer. (1899). "Neues über Chamaeleons". Zoologischer Anzeiger 22: 408–414. (1899). "Drei Reptilien aus Afrika". Zoologischer Anzeiger 22: 258–261. (1900). "Beschreibung eines neuen Chamaeleons". Zoologischer Anzeiger 23: 21–23. (1900). "Neue Liste der Crocodilen, Schildkröten und Eidechsen Deutsch-Ost-Afrikas". Zoologisches Jahrbuch für Systematik 13: 579–681. (1901). "Die Reptilien und Amphibien der Deutschen Tiefseeexpedition 1898/99". Zoologischer Anzeiger 24: 61–66. (1904). "Bau und Betätigung der Kopflappen und Halsluftsäcke bei Chamäleonen". Zoologisches Jahrbuch für Anatomie 21: 1–40. (1908). "Über Eidechseneier, die von Pflanzen durchwachsen sind / Gibt es bei Wiederkäuern und Pferden ein Zehenatavismus? / Über eine albinotische Ringelnatter und ihr Entstehn." Sitzungsberichte der Gesellschaft Naturforschender Freunde zu Berlin 1908, no. 8: 191–201. (1909). "Wie war der Diplodocus carnegii wirklich gebaut?" Sitzungsberichte der Gesellschaft Naturforschender Freunde zu Berlin 1909 (4): 193–209. (1909). "Ernstes und lustiges aus Kritiken über meine Diplodocusarbeit / War der Diplodocus Elefantenfüssig?" Sitzungsberichte der Gesellschaft Naturforschender Freunde zu Berlin 1909 (9): 505–556. (1909). "War der Diplodocus Elefantenfüssig." Sitzungsberichte der Gesellschaft Naturforschender Freunde zu Berlin 1909, no. 9: 536–56. (1909). "III. Reptilia - Amphibia." In Die Süßwasserfauna Deutschlands. Vol. 1, edited by August Brauer, 65–107. Jena: Gustav Fischer. (1910). "Bemerkungen zu dem vorhergehenden Artikel 'Diplodocus und seine Stellung usw.' von Fr. Drevermann." Sitzungsberichte der Gesellschaft Naturforschender Freunde zu Berlin: 402–6. (1910). "Über und gegen neue Diplodocus-Arbeiten. Teil I: Gegen O. Abels Rekonstruktion des Diplodocus." Zeitschrift der Deutschen Geologischen Gesellschaft 62: 536–76. (1911). "Ueber die Art, wie aussere Einflüsse den Aufbau des Tieres abändern." Verhandlungen der deutschen zoologischen Gesellschaft 20/21: 21–91. (1912). Biologie und Phylogenie der Rieseneidechsen und ihrer Verwandten (mit Demonstrationen). Berlin: Self-published. (1913). "Reptilia: Paläontologie." In Handwörterbuch der Naturwissenschaften. 8. Band, Quartärformation - Sekretion, edited by E. Korschelt, 337–76. Jena: Gustav Fischer. External links Online version of Tornier's Der Kampf mit der Nahrung: Ein Beitrag zum Darwinismus (Berlin: W.
Ißleib) (Archive.org) Short biography in German (Chameleons Online) Species named by Tornier in The Reptile Database References 1858 births 1938 deaths People from Bydgoszcz County People from the Province of Prussia German paleontologists German herpetologists 19th-century German zoologists Evolutionary biologists Lamarckism Scientists active at the Museum für Naturkunde, Berlin Heidelberg University alumni Academic staff of the Humboldt University of Berlin Members of the German National Academy of Sciences Leopoldina
Gustav Tornier
Biology
1,941
71,224,862
https://en.wikipedia.org/wiki/Biodiversity%20and%20Conservation
Biodiversity and Conservation is a peer-reviewed scientific journal covering all aspects of biological diversity, its conservation, and sustainable use. It was established in 1992 and is published by Springer Science+Business Media. Abstracting and indexing The journal is abstracted and indexed in: AGRICOLA BIOSIS Previews Biological Abstracts CAB Abstracts According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.296. References External links English-language journals Publications with year of establishment missing Springer Science+Business Media academic journals Conservation biology Ecology journals
Biodiversity and Conservation
Biology,Environmental_science
109
32,940,598
https://en.wikipedia.org/wiki/Decimal%20data%20type
Some programming languages (or compilers for them) provide a built-in (primitive) or library decimal data type to represent non-repeating decimal fractions like 0.3 and −1.17 without rounding, and to do arithmetic on them. Examples are the decimal.Decimal or num7.Num type of Python, and analogous types provided by other languages. Rationale Fractional numbers are supported in most programming languages as floating-point numbers or fixed-point numbers. However, such representations typically restrict the denominator to a power of two. Most decimal fractions (or most fractions in general) cannot be represented exactly as a fraction with a denominator that is a power of two. For example, the simple decimal fraction 0.3 (3/10) might be represented as 0.299999999999999988897769…. This inexactness causes many problems that are familiar to experienced programmers. For example, the expression 0.1 * 7 == 0.7 might counterintuitively evaluate to false in some systems, due to the inexactness of the representation of decimals (a short Python sketch contrasting the two behaviours is given at the end of this entry). Although all decimal fractions are fractions, and thus it is possible to use a rational data type to represent them exactly, it may be more convenient in many situations to consider only non-repeating decimal fractions (fractions whose denominator is a power of ten). For example, fractional units of currency worldwide are mostly based on a denominator that is a power of ten. Also, most fractional measurements in science are reported as decimal fractions, as opposed to fractions with any other system of denominators. A decimal data type could be implemented as either a floating-point number or as a fixed-point number. In the fixed-point case, the denominator would be set to a fixed power of ten. In the floating-point case, a variable exponent would represent the power of ten to which the mantissa of the number is multiplied. Languages that support a rational data type usually allow the construction of such a value from two integers, instead of a base-2 floating-point number, due to the loss of exactness the latter would cause. Usually the basic arithmetic operations (+, −, ×, ÷, integer powers) and comparisons (=, ≠, <, >, ≤, ≥) would be extended to act on them, either natively or through operator overloading facilities provided by the language. These operations may be translated by the compiler into a sequence of integer machine instructions, or into library calls. Support may also extend to other operations, such as formatting, rounding to an integer or floating point value, etc. Standard formats IEEE 754 specifies three standard floating-point decimal data types of different precision: Decimal32 floating-point format Decimal64 floating-point format Decimal128 floating-point format Language support C# has a built-in decimal data type of 128 bits, giving 28–29 significant digits. It has an approximate range of ±1.0 × 10⁻²⁸ to ±7.9228 × 10²⁸. Starting with Python 2.4, Python's standard library includes a Decimal class in the decimal module. Ruby's standard library includes a BigDecimal class in the bigdecimal module. Java's standard library includes a java.math.BigDecimal class. In Objective-C, the Cocoa and GNUstep APIs provide an NSDecimalNumber class and an NSDecimal C data type for representing decimals whose mantissa is up to 38 digits long, and exponent is from −128 to 127. Some IBM systems and SQL systems support DECFLOAT format with at least the two larger formats. ABAP's new DECFLOAT data type includes decimal64 (as DECFLOAT16) and decimal128 (as DECFLOAT34) formats.
PL/I natively supports both fixed-point and floating-point decimal data. GNU Compiler Collection (gcc) provides support for decimal floats as an extension to C and C++. See also Arbitrary-precision arithmetic Floating-point arithmetic Floating-point error mitigation References Data types Computer arithmetic
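Returning to the rounding issue described under Rationale above, the following is a minimal Python sketch (Python being one of the languages listed under Language support) contrasting binary floating point with the standard-library decimal module, which implements a decimal floating-point type. The printed values are indicative of typical IEEE 754 systems.

from decimal import Decimal, getcontext

# Binary floating point: 0.1 and 0.7 are not representable exactly,
# so accumulated error makes the comparison fail on typical IEEE 754 systems.
print(0.1 * 7 == 0.7)          # usually False
print(f"{0.1 * 7:.20f}")       # e.g. 0.70000000000000006661

# Decimal floating point: the same values are held in base 10 exactly,
# so the arithmetic matches pencil-and-paper expectations.
print(Decimal("0.1") * 7 == Decimal("0.7"))   # True
print(Decimal("0.1") * 7)                     # 0.7

# The working precision (number of significant digits) is configurable.
getcontext().prec = 50
print(Decimal(1) / Decimal(7))  # 0.142857... to 50 significant digits

Note that Decimal values are constructed from strings above: constructing them from binary floats (e.g. Decimal(0.1)) would carry the float's rounding error into the decimal value.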
Decimal data type
Mathematics
859
4,597,562
https://en.wikipedia.org/wiki/Universal%20extra%20dimensions
In particle physics, models with universal extra dimensions include one or more spatial dimensions beyond the three spatial and one temporal dimensions that are observed. Overview Models with universal extra dimensions, first studied in 2001, assume that all fields propagate universally in the extra dimensions; in contrast, the ADD model requires that the fields of the Standard Model be confined to a four-dimensional membrane, while only gravity propagates in the extra dimensions. The universal extra dimensions are assumed to be compactified with radii much larger than the traditional Planck length, although smaller than in the ADD model, ~10⁻¹⁸ m. Generically, the (so far unobserved) Kaluza–Klein resonances of the Standard Model fields in such a theory would appear at an energy scale that is directly related to the inverse size ("compactification scale") of the extra dimension. The experimental bounds (based on Large Hadron Collider data) on the compactification scale of one or two universal extra dimensions are about 1 TeV. Other bounds come from electroweak precision measurements at the Z pole, the muon's magnetic moment, and limits on flavor-changing neutral currents, and reach several hundred GeV. Using universal extra dimensions to explain dark matter yields an upper limit on the compactification scale of several TeV. See also Large extra dimensions Kaluza–Klein theory Randall–Sundrum model Notes References Particle physics Physics beyond the Standard Model Dimension
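As a back-of-the-envelope illustration of the compactification-scale relation discussed in the Overview above (assuming the standard Kaluza–Klein mass formula for one flat extra dimension of radius R; the numbers are order-of-magnitude estimates, not values quoted from this entry):

m_n ≈ n ħc / R,  n = 1, 2, 3, …

so a bound m_1 ≳ 1 TeV implies R ≲ ħc / (1 TeV) ≈ (197 MeV·fm) / (10⁶ MeV) ≈ 2 × 10⁻¹⁹ m.

This is consistent with the statement above that the assumed radii are smaller than roughly 10⁻¹⁸ m while still being far larger than the Planck length (~10⁻³⁵ m).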
Universal extra dimensions
Physics
293
1,997,617
https://en.wikipedia.org/wiki/Information%20security%20audit
An information security audit is an audit of the level of information security in an organization. It is an independent review and examination of system records, activities, and related documents. These audits are intended to improve the level of information security, avoid improper information security designs, and optimize the efficiency of the security safeguards and security processes. Within the broad scope of auditing information security there are multiple types of audits, multiple objectives for different audits, etc. Most commonly the controls being audited can be categorized as technical, physical and administrative. Auditing information security covers topics from auditing the physical security of data centers to auditing the logical security of databases, and highlights key components to look for and different methods for auditing these areas. When centered on the Information technology (IT) aspects of information security, it can be seen as a part of an information technology audit. It is often then referred to as an information technology security audit or a computer security audit. However, information security encompasses much more than IT. The audit process Step 1: Preliminary audit assessment The auditor is responsible for assessing the current technological maturity level of a company during the first stage of the audit. This stage is used to assess the current status of the company and helps identify the required time, cost and scope of an audit. First, you need to identify the minimum security requirements: Security policy and standards Organizational and Personal security Communication, Operation and Asset management Physical and environmental security Access control and Compliance IT systems development and maintenance IT security incident management Disaster recovery and business continuity management Risk management Step 2: Planning & preparation The auditor should plan a company's audit based on the information found in the previous step. Planning an audit helps the auditor obtain sufficient and appropriate evidence for each company's specific circumstances. It helps predict audit costs at a reasonable level, assign the proper manpower and time line and avoid misunderstandings with clients. An auditor should be adequately educated about the company and its critical business activities before conducting a data center review. The objective of the data center is to align data center activities with the goals of the business while maintaining the security and integrity of critical information and processes. To adequately determine whether the client's goal is being achieved, the auditor should perform the following before conducting the review: Meet with IT management to determine possible areas of concern Review the current IT organization chart Review job descriptions of data center employees Research all operating systems, software applications, and data center equipment operating within the data center Review the company's IT policies and procedures Evaluate the company's IT budget and systems planning documentation Review the data center's disaster recovery plan Step 3: Establishing audit objectives In the next step, the auditor outlines the objectives of the audit after that conducting a review of a corporate data center takes place. Auditors consider multiple factors that relate to data center procedures and activities that potentially identify audit risks in the operating environment and assess the controls in place that mitigate those risks. 
After thorough testing and analysis, the auditor is able to adequately determine if the data center maintains proper controls and is operating efficiently and effectively. Following is a list of objectives the auditor should review: Personnel procedures and responsibilities, including systems and cross-functional training Change management processes are in place and followed by IT and management personnel Appropriate backup procedures are in place to minimize downtime and prevent the loss of important data The data center has adequate physical security controls to prevent unauthorized access to the data center Adequate environmental controls are in place to ensure equipment is protected from fire and flooding Step 4: Performing the review The next step is collecting evidence to satisfy data center audit objectives. This involves traveling to the data center location and observing processes within the data center. The following review procedures should be conducted to satisfy the pre-determined audit objectives: Data center personnel – All data center personnel should be authorized to access the data center (key cards, login IDs, secure passwords, etc.). Data center employees are adequately educated about data center equipment and properly perform their jobs. Vendor service personnel are supervised when doing work on data center equipment. The auditor should observe and interview data center employees to satisfy their objectives. Equipment – The auditor should verify that all data center equipment is working properly and effectively. Equipment utilization reports, equipment inspection for damage and functionality, system downtime records and equipment performance measurements all help the auditor determine the state of data center equipment. Additionally, the auditor should interview employees to determine if preventative maintenance policies are in place and performed. Policies and Procedures – All data center policies and procedures should be documented and located at the data center. Important documented procedures include data center personnel job responsibilities, backup policies, security policies, employee termination policies, system operating procedures and an overview of operating systems. Physical security / environmental controls – The auditor should assess the security of the client's data center. Physical security includes security guards, locked cages, man traps, single entrances, bolted-down equipment, and computer monitoring systems. Additionally, environmental controls should be in place to ensure the security of data center equipment. These include air conditioning units, raised floors, humidifiers and an uninterruptible power supply. Backup procedures – The auditor should verify that the client has backup procedures in place in the case of system failure. Clients may maintain a backup data center at a separate location that allows them to instantaneously continue operations in the instance of system failure. Step 5: Preparing the Audit Report After the audit examination is completed, the audit findings and suggestions for corrective actions can be communicated to responsible stakeholders in a formal meeting. This ensures better understanding and support of the audit recommendations. It also gives the audited organization an opportunity to express its views on the issues raised. Writing a report after such a meeting and describing where agreements have been reached on all audit issues can greatly enhance audit effectiveness.
Exit conferences also help finalize recommendations that are practical and feasible. Step 6: Issuing the review report The data center review report should summarize the auditor's findings and be similar in format to a standard review report. The review report should be dated as of the completion of the auditor's inquiry and procedures. It should state what the review entailed and explain that a review provides only "limited assurance" to third parties. Typically, a data center review report consolidates the entirety of the audit. It also offers recommendations surrounding proper implementation of physical safeguards and advises the client on appropriate roles and responsibilities of its personnel. Its contents may include: The auditors' procedures and findings The auditors' recommendations Objective, scope, and methodologies Overview/conclusions The report may optionally include rankings of the security vulnerabilities identified throughout the performance of the audit and the urgency of the tasks necessary to address them. Rankings like "high", "low", and "medium" can be used to describe the priority of the tasks. Who performs audits Generally, computer security audits are performed by: Federal or State Regulators Information security audits would primarily be prepared by the partners of these regulators. Examples include: Certified accountants, Cybersecurity and Infrastructure Security Agency (CISA), Federal Office of Thrift Supervision (OTS), Office of the Comptroller of the Currency (OCC), U.S. Department of Justice (DOJ), etc. Corporate Internal Auditors If the information security audit is an internal audit, it may be performed by internal auditors employed by the organization. Examples include: Certified accountants, Cybersecurity and Infrastructure Security Agency (CISA), and Certified Internet Audit Professional (CIAP) External Auditors Typically, third-party experts employed by an independent organization and specializing in the field of data security are hired when state or federal auditors are not accessible. Consultants Outsourcing the technology auditing where the organization lacks the specialized skill set. Jobs and certifications in information security Information Security Officer (ISO) Information Security Officer (ISO) is a relatively new position, which has emerged in organizations to deal with the aftermath of chaotic growth in information technology and network communication. The role of the ISO has been very nebulous since the problem that they were created to address was not defined clearly. The role of an ISO has become one of following the dynamics of the security environment and keeping the risk posture balanced for the organization. Certifications Information systems audits combine the efforts and skill sets from the accounting and technology fields. Professionals from both fields rely on one another to ensure the security of the information and data. With this collaboration, the security of the information system has proven to increase over time. In relation to the information systems audit, the role of the auditor is to examine the company's controls of the security program. Furthermore, the auditor discloses the operating effectiveness of these controls in an audit report. The Information Systems Audit and Control Association (ISACA), an Information Technology professional organization, promotes gaining expertise through various certifications. The benefits of these certifications are applicable to external and internal personnel of the system.
Examples of certifications that are relevant to information security audits include: Certified Information Security Manager (CISM) Certified in Risk and Information Systems Control (CRISC) Certified in the Governance of Enterprise IT (CGEIT) Certified Information Systems Auditor (CISA) CSX (Cybersecurity Nexus Fundamentals) CSXP (Cybersecurity Nexus Practitioner) The audited systems Network vulnerabilities Interception: Data that is being transmitted over the network is vulnerable to being intercepted by an unintended third party who could put the data to harmful use. Availability: Networks have become wide-spanning, crossing hundreds or thousands of miles, and many rely on them to access company information; lost connectivity could cause business interruption. Access/entry point: Networks are vulnerable to unwanted access. A weak point in the network can make that information available to intruders. It can also provide an entry point for viruses and Trojan horses. Controls Interception controls: Interception can be partially deterred by physical access controls at data centers and offices, including where communication links terminate and where the network wiring and distributions are located. Encryption also helps to secure wireless networks. Availability controls: The best control for this is to have excellent network architecture and monitoring. The network should have redundant paths between every resource and an access point and automatic routing to switch the traffic to the available path without loss of data or time. Access/entry point controls: Most network controls are put at the point where the network connects with an external network. These controls limit the traffic that passes through the network. These can include firewalls, intrusion detection systems, and antivirus software. The auditor should ask certain questions to better understand the network and its vulnerabilities. The auditor should first assess how extensive the network is and how it is structured. A network diagram can assist the auditor in this process. The next question an auditor should ask is what critical information this network must protect. Things such as enterprise systems, mail servers, web servers, and host applications accessed by customers are typically areas of focus. It is also important to know who has access and to what parts. Do customers and vendors have access to systems on the network? Can employees access information from home? Lastly, the auditor should assess how the network is connected to external networks and how it is protected. Most networks are at least connected to the internet, which could be a point of vulnerability. These are critical questions in protecting networks. Segregation of duties When a function deals with money, either incoming or outgoing, it is very important to make sure that duties are segregated to minimize and hopefully prevent fraud. One of the key ways to ensure proper segregation of duties (SoD) from a systems perspective is to review individuals' access authorizations. Certain systems such as SAP claim to come with the capability to perform SoD tests, but the functionality provided is elementary, requiring very time-consuming queries to be built, and is limited to the transaction level only with little or no use of the object or field values assigned to the user through the transaction, which often produces misleading results.
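To illustrate the kind of cross-check such tools automate, here is a deliberately simplified Python sketch; the duty names, conflict pairs and user assignments are invented for the example, and a real tool works at the much finer granularity of transactions, objects and field values described above.

# Toy sketch of a segregation-of-duties (SoD) cross-check.
# The duty names, conflict pairs, and user assignments below are invented
# purely for illustration; real SoD tools evaluate far finer-grained
# authorizations (transactions, objects, field values).

# Pairs of duties that one person should not hold at the same time.
CONFLICTING_DUTIES = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"develop_code", "deploy_to_production"}),
}

# Duties currently assigned to each user (e.g. extracted from access reports).
USER_DUTIES = {
    "alice": {"create_vendor", "approve_payment"},
    "bob": {"develop_code"},
    "carol": {"develop_code", "deploy_to_production"},
}

def find_sod_violations(user_duties, conflicts):
    """Return (user, conflicting_pair) tuples where one user holds both duties of a pair."""
    violations = []
    for user, duties in user_duties.items():
        for pair in conflicts:
            if pair <= duties:  # the user holds every duty in the conflicting pair
                violations.append((user, tuple(sorted(pair))))
    return violations

for user, pair in find_sod_violations(USER_DUTIES, CONFLICTING_DUTIES):
    print(f"SoD violation: {user} holds both {pair[0]} and {pair[1]}")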
For complex systems such as SAP, it is often preferred to use tools developed specifically to assess and analyze SoD conflicts and other types of system activity. For other systems, or for multiple system formats, one should monitor which users may have superuser access to the system, giving them unlimited access to all aspects of the system. Also, developing a matrix for all functions highlighting the points where proper segregation of duties has been breached will help identify potential material weaknesses by cross-checking each employee's available accesses. This is as important, if not more so, in the development function as it is in production. Ensuring that the people who develop the programs are not the ones who are authorized to pull them into production is key to preventing unauthorized programs from entering the production environment, where they can be used to perpetrate fraud. Types of audits Encryption and IT audit In assessing the need for a client to implement encryption policies for their organization, the auditor should conduct an analysis of the client's risk and data value. Companies with multiple external users, e-commerce applications, and sensitive customer/employee information should maintain rigid encryption policies aimed at encrypting the correct data at the appropriate stage in the data collection process. Auditors should continually evaluate their client's encryption policies and procedures. Companies that are heavily reliant on e-commerce systems and wireless networks are extremely vulnerable to theft and loss of critical information in transmission. Policies and procedures should be documented and carried out to ensure that all transmitted data is protected. The auditor should verify that management has controls in place over the data encryption management process. Access to keys should require dual control, keys should be composed of two separate components and should be maintained on a computer that is not accessible to programmers or outside users. Furthermore, management should attest that encryption policies ensure data protection at the desired level and verify that the cost of encrypting the data does not exceed the value of the information itself. All data that is required to be maintained for an extensive amount of time should be encrypted and transported to a remote location. Procedures should be in place to guarantee that all encrypted sensitive information arrives at its location and is stored properly. Finally, the auditor should attain verification from management that the encryption system is strong, not attackable, and compliant with all local and international laws and regulations. Logical security audit Just as it sounds, a logical security audit follows a format in an organized procedure. The first step in an audit of any system is to seek to understand its components and its structure. When auditing logical security, the auditor should investigate what security controls are in place, and how they work. In particular, the following areas are key points in auditing logical security: Passwords: Every company should have written policies regarding passwords, and employees' use of them. Passwords should not be shared and employees should have mandatory scheduled changes. Employees should have user rights that are in line with their job functions. They should also be aware of proper log-on/log-off procedures. Also helpful are security tokens, small devices that authorized users of computer programs or networks carry to assist in identity confirmation.
They can also store cryptographic keys and biometric data. The most popular type of security token (RSA's SecurID) displays a number that changes every minute. Users are authenticated by entering a personal identification number and the number on the token. Termination Procedures: Proper termination procedures should be in place so that former employees can no longer access the network. This can be done by changing passwords and codes. Also, all ID cards and badges that are in circulation should be documented and accounted for. Special User Accounts: Special user accounts and other privileged accounts should be monitored and have proper controls in place. Remote Access: Remote access is often a point where intruders can enter a system. The logical security tools used for remote access should be very strict. Remote access should be logged. Specific tools used in network security Network security is achieved by various tools including firewalls and proxy servers, encryption, logical security and access controls, anti-virus software, and auditing systems such as log management. Firewalls are a very basic part of network security. They are often placed between the private local network and the internet. Firewalls provide a flow-through for traffic in which it can be authenticated, monitored, logged, and reported. Some different types of firewalls include network layer firewalls, screened subnet firewalls, packet filter firewalls, dynamic packet filtering firewalls, hybrid firewalls, transparent firewalls, and application-level firewalls. The process of encryption involves converting plain text into a series of unreadable characters known as the ciphertext. If the encrypted text is stolen or attained while in transit, the content is unreadable to the viewer. This guarantees secure transmission and is extremely useful to companies sending/receiving critical information. Once encrypted information arrives at its intended recipient, the decryption process is deployed to restore the ciphertext back to plaintext. Proxy servers hide the true address of the client workstation and can also act as a firewall. Proxy server firewalls have special software to enforce authentication. Proxy server firewalls act as a middleman for user requests. Antivirus software programs such as those from McAfee and Symantec locate and dispose of malicious content. These virus protection programs run live updates to ensure they have the latest information about known computer viruses. Logical security includes software safeguards for an organization's systems, including user ID and password access, authentication, access rights and authority levels. These measures are to ensure that only authorized users are able to perform actions or access information in a network or a workstation. Behavioral audit Vulnerabilities in an organization's IT systems are often not attributed to technical weaknesses, but rather related to individual behavior of employees within the organization. A simple example of this is users leaving their computers unlocked or being vulnerable to phishing attacks. As a result, a thorough InfoSec audit will frequently include a penetration test in which auditors attempt to gain access to as much of the system as possible, from both the perspective of a typical employee as well as an outsider. A behavioral audit ensures preventative measures are in place such as a phishing webinar, where employees are made aware of what phishing is and how to detect it.
System and process assurance audits combine elements from IT infrastructure and application/information security audits and use diverse controls in categories such as Completeness (C), Accuracy (A), Validity (V) and Restricted access (R), abbreviated CAVR. Auditing application security Application security Application security centers on three main functions: Programming Processing Access When it comes to programming, it is important to ensure proper physical and password protection exists around servers and mainframes for the development and update of key systems. Having physical access security at one's data center or office, such as electronic badges and badge readers, security guards, choke points, and security cameras, is vitally important to ensuring the security of applications and data. Then one needs to have security around changes to the system. Those usually have to do with proper security access to make the changes and having proper authorization procedures in place for pulling programming changes from development through test and finally into production. With processing, it is important that procedures and monitoring are in place for a few different aspects, such as the input of falsified or erroneous data, incomplete processing, duplicate transactions and untimely processing. Making sure that input is randomly reviewed or that all processing has proper approval is a way to ensure this. It is important to be able to identify incomplete processing and ensure that proper procedures are in place for either completing it or deleting it from the system if it was in error. There should also be procedures to identify and correct duplicate entries. Finally, when it comes to processing that is not being done on a timely basis one should back-track the associated data to see where the delay is coming from and identify whether or not this delay creates any control concerns. Finally, with access, it is important to realize that maintaining network security against unauthorized access is one of the major focuses for companies, as threats can come from a few sources. First, there is internal unauthorized access. It is very important to have system access passwords that must be changed regularly and that there is a way to track access and changes so one is able to identify who made what changes. All activity should be logged. The second arena to be concerned with is remote access, people accessing one's system from the outside through the internet. Setting up firewalls and password protection for online data changes is key to protecting against unauthorized remote access. One way to identify weaknesses in access controls is to bring in a hacker to try and crack one's system by either gaining entry to the building and using an internal terminal or hacking in from the outside through remote access. Summary An information security audit can be defined by examining the different aspects of information security. External and internal professionals within an institution have the responsibility of maintaining and inspecting the adequacy and effectiveness of information security. As in any institution, there are various controls to be implemented and maintained. To secure the information, an institution is expected to apply security measures to circumvent outside intervention. By and large, the two concepts of application security and segregation of duties are both in many ways connected and they both have the same goal, to protect the integrity of the companies' data and to prevent fraud.
Application security has to do with preventing unauthorized access to hardware and software by having proper security measures, both physical and electronic, in place. Segregation of duties primarily involves a physical review of individuals' access to the systems and processing, and ensuring that there are no overlaps that could lead to fraud. The type of audit the individual performs determines the specific procedures and tests to be executed throughout the audit process. See also Computer security Defensive computing Directive 95/46/EC on the protection of personal data (European Union) Ethical hack Information security Penetration test Security breach Computing References Bibliography External links Examining Data Centers Network Auditing The OpenXDAS project Information Systems Audit and Control Association (ISACA) The Institute of Internal Auditors Information technology audit Audit Types of auditing
Information security audit
Engineering
4,519
4,039,688
https://en.wikipedia.org/wiki/Application%20discovery%20and%20understanding
Application discovery and understanding (ADU) is the process of automatically analyzing artifacts of a software application and determining metadata structures associated with the application in the form of lists of data elements and business rules. The relationships discovered between this application and a central metadata registry are then stored in the metadata registry itself. Business benefits of ADU On average, developers spend only 5% of their time writing new code, 20% modifying legacy code and up to 60% understanding existing code. Thus, ADU saves a great deal of time and expense for organizations that are involved in the change control and impact analysis of complex computer systems. Impact analysis allows managers to know what the impact of changes to enterprise-wide systems might be if specific structures are changed or removed altogether. This process has been largely used in the preparation of Y2K changes and validations in software. Application discovery and understanding is part of the process enabling development teams to learn and improve themselves by providing information on the context and current state of the application. The process of gaining application understanding is greatly accelerated when the extracted metadata is displayed using interactive diagrams. When a developer can browse the metadata, and drill down into relevant details on demand, then application understanding is achieved in a way that is natural to the developer. Significant reductions in the effort and time required to perform full impact analysis have been reported when ADU tools are implemented. ADU tools are especially beneficial to newly hired developers. A newly hired developer will be productive much sooner and will require less assistance from the existing staff when ADU tools are in place. ADU process ADU software is usually written to scan the following application structures: Data structures of all kinds Application source code User interfaces (searching for labels of forms) Reports The output of the ADU process frequently includes: Lists of previously registered data elements discovered within an application Lists of unregistered data elements discovered Note that a registered data element is any data element that already exists within a metadata registry (a minimal sketch of this registry comparison is given below). See also metadata metadata registry data element Related Configuration Management References Metadata
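As a minimal sketch of the registry comparison described under "ADU process" above, the Python snippet below scans source text for candidate data-element names and splits them into registered and unregistered sets against a metadata registry. The regular expression, the registry contents and the sample source are simplifying assumptions made for this illustration; real ADU tools parse many artifact types (data structures, user interfaces, reports) rather than applying a single regex.

import re

# Hypothetical central metadata registry: names of already-registered data elements.
REGISTRY = {"customer_id", "order_total", "shipment_date"}

# Very crude "discovery": treat snake_case identifiers in source text as
# candidate data elements. Real ADU tools use language-aware parsers instead.
IDENTIFIER = re.compile(r"\b[a-z][a-z0-9]*(?:_[a-z0-9]+)+\b")

def discover_data_elements(source_text: str) -> set[str]:
    return set(IDENTIFIER.findall(source_text))

def classify(discovered: set[str], registry: set[str]) -> tuple[set[str], set[str]]:
    """Split discovered elements into (registered, unregistered) against the registry."""
    return discovered & registry, discovered - registry

if __name__ == "__main__":
    sample_source = """
        total = order_total + shipping_fee
        print(customer_id, shipment_date, loyalty_points)
    """
    registered, unregistered = classify(discover_data_elements(sample_source), REGISTRY)
    print("Registered:  ", sorted(registered))
    print("Unregistered:", sorted(unregistered))

The unregistered list is exactly the output an ADU run would flag for follow-up registration in the metadata registry.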
Application discovery and understanding
Technology
412
2,254,029
https://en.wikipedia.org/wiki/Astrophysical%20plasma
Astrophysical plasma is plasma outside of the Solar System. It is studied as part of astrophysics and is commonly observed in space. The accepted view of scientists is that much of the baryonic matter in the universe exists in this state. When matter becomes sufficiently hot and energetic, it becomes ionized and forms a plasma. This process breaks matter into its constituent particles, which include negatively charged electrons and positively charged ions. These electrically charged particles are susceptible to influences by local electromagnetic fields. This includes strong fields generated by stars, and weak fields which exist in star forming regions, in interstellar space, and in intergalactic space. Similarly, electric fields are observed in some stellar astrophysical phenomena, but they are inconsequential in very low-density gaseous media. Astrophysical plasma is often differentiated from space plasma, which typically refers to the plasma of the Sun, the solar wind, and the ionospheres and magnetospheres of the Earth and other planets. Observing and studying astrophysical plasma Plasmas in stars can both generate and interact with magnetic fields, resulting in a variety of dynamic astrophysical phenomena. These phenomena are sometimes observed in spectra due to the Zeeman effect. Other forms of astrophysical plasmas can be influenced by preexisting weak magnetic fields, whose interactions may only be determined by polarimetry or other indirect methods. In particular, the intergalactic medium, the interstellar medium, the interplanetary medium and solar winds consist of diffuse plasmas. Possible related phenomena Scientists are interested in active galactic nuclei because such astrophysical plasmas could be directly related to the plasmas studied in laboratories. Many of these phenomena seemingly exhibit an array of complex magnetohydrodynamic behaviors, such as turbulence and instabilities. In Big Bang cosmology, the entire universe was in a plasma state prior to recombination. Early history Norwegian explorer and physicist Kristian Birkeland predicted that space is filled with plasma. Writing in 1913, Birkeland assumed that most of the mass in the universe should be found in "empty" space. References External links "US / Russia Collaboration in Plasma Astrophysics" Space plasmas Space physics Solar phenomena Stellar phenomena
Astrophysical plasma
Physics,Astronomy
447
1,243,042
https://en.wikipedia.org/wiki/Nasal%20spray
Nasal sprays are used to deliver medications locally in the nasal cavities or systemically. They are used locally for conditions such as nasal congestion and allergic rhinitis. In some situations, the nasal delivery route is preferred for systemic therapy because it provides an agreeable alternative to injection or pills. Substances can be assimilated extremely quickly and directly through the nose. Many pharmaceutical drugs exist as nasal sprays for systemic administration (e.g. sedative-analgesics, treatments for migraine, osteoporosis and nausea). Other applications include hormone replacement therapy, treatment of Alzheimer's disease and Parkinson's disease. Nasal sprays are seen as a more efficient way of transporting drugs with potential use in crossing the blood–brain barrier. Antihistamines Antihistamines work by competing for receptor sites to block the function of histamine, thereby reducing the inflammatory effect. Antihistamine nasal sprays include: Azelastine hydrochloride Levocabastine hydrochloride Olopatadine hydrochloride Corticosteroids Corticosteroid nasal sprays can be used to relieve the symptoms of sinusitis, hay fever, allergic rhinitis and non-allergic (perennial) rhinitis. They can reduce inflammation and histamine production in the nasal passages, and have been shown to relieve nasal congestion, runny nose, itchy nose and sneezing. Side effects may include headaches, nausea and nose bleeds. Corticosteroid nasal sprays include: Beclomethasone dipropionate Budesonide Ciclesonide Flunisolide Fluticasone furoate Fluticasone propionate Mometasone Triamcinolone acetonide Saline Saline sprays are typically non medicated. A mist of saline solution containing sodium chloride is delivered to help moisturize dry or irritated nostrils. This is a form of nasal irrigation. They can also relieve nasal congestion and remove airborne irritants such as pollen and dust thereby providing sinus allergy relief. Three types of nasal sprays preparations of sodium chloride are available including hypertonic (3% sodium chloride or sea water), isotonic (0.9% sodium chloride) and hypotonic (0.65% sodium chloride). Isotonic solutions have the same salt concentration as the human body, whereas hypertonic solutions have a higher salt content and hypotonic solutions have a lower salt content. Isotonic saline nasal sprays are commonly used in infants and children to wash out the thick mucus from the nose in case of allergic rhinitis. Hypertonic solutions may be more useful at drawing moisture from the mucous membrane and relieving nasal congestion. Natural nasal sprays that include chemical complexes derived from plant sources such as ginger, capsaicin and tea-tree oil are also available. There is however no trial-verified evidence that they have a measurable effect on symptoms. Topical decongestants Decongestant nasal sprays are available over-the-counter in many countries. They work to very quickly open up nasal passages by constricting blood vessels in the lining of the nose. Prolonged use of these types of sprays can damage the delicate mucous membranes in the nose. This causes increased inflammation, an effect known as rhinitis medicamentosa or the rebound effect. Decongestant nasal sprays are advised for short-term use only, preferably 5 to 7 days at maximum. Some doctors advise to use them 3 days at maximum. A recent clinical trial has shown that a corticosteroid nasal spray may be useful in reversing this condition. 
Topical nasal decongestants include: Oxymetazoline Phenylephrine Xylometazoline Allergy combinations Combination use of two medications are available as nasal sprays. List of some combination nasal sprays: Azelastine together with fluticasone propionate (brand names including Dymista) Xylometazoline together with cromoglicic acid In some countries, Dymista is marketed by Viatris after Upjohn merged with Mylan to create Viatris. In 2022, the combination azelastine/fluticasone was the 299th most commonly prescribed medication in the United States, with more than 300,000 prescriptions. References External links Dosage forms Drug delivery devices Medical treatments Nose
Nasal spray
Chemistry
914
39,151,935
https://en.wikipedia.org/wiki/Kepler-69
Kepler-69 (KOI-172, 2MASS J19330262+4452080, KIC 8692861) is a G-type main-sequence star similar to the Sun in the constellation Cygnus, located about from Earth. On April 18, 2013, it was announced that the star has two planets. Although initial estimates indicated that the terrestrial planet Kepler-69c might be within the star's habitable zone, further analysis showed that the planet very likely is interior to the habitable zone and is far more analogous to Venus than to Earth and thus completely inhospitable. Nomenclature and history Prior to Kepler observation, Kepler-69 had the 2MASS catalogue number 2MASS J19330262+4452080. In the Kepler Input Catalog it has the designation of KIC 8692861, and when it was found to have transiting planet candidates it was given the Kepler object of interest number of KOI-172. The star's planets were discovered by NASA's Kepler Mission, a mission tasked with discovering planets in transit around their stars. The transit method that Kepler uses involves detecting dips in brightness in stars. These dips in brightness can be interpreted as planets whose orbits move in front of their stars from the perspective of Earth. The name Kepler-69 derives directly from the fact that the star is the 69th catalogued star discovered by Kepler to have confirmed planets. The designations b, c derive from the order of discovery. The designation of b is given to the first planet orbiting a given star, followed by the other lowercase letters of the alphabet. In the case of Kepler-69, all of the known planets in the system were discovered at one time, so b is applied to the closest planet to the star and c to the farthest. Stellar characteristics Kepler-69 is a G4 star that is approximately 81% of the mass and 93% of the radius of the Sun. It has a surface temperature of 5638 ± 168 K and is 9.8 billion years old. In comparison, the Sun has a surface temperature of 5778 K and is 4.6 billion years old. The star's apparent magnitude, or how bright it appears from Earth's perspective, is 13.7. Therefore, Kepler-69 is too dim to be seen with the naked eye. Planetary system Kepler-69 has two known planets orbiting around it. Kepler-69b is a hot super-Earth-sized exoplanet. Kepler-69c is a super-Earth-sized exoplanet, about 2.2 times the size of Earth. It receives a similar amount of flux from its star as Venus does from the Sun, and is thus a likely candidate for a super-Venus. Newer measurements suggest higher planetary radii for Kepler-69 b and c than the earlier estimates from Barclay et al. (2013). See also List of potentially habitable exoplanets Kepler-70 Notes References External links Kepler Mission – NASA. Kepler – Discoveries – Summary Table – NASA. Kepler – Discovery of New Planetary Systems (2013). Kepler – Tally of Planets/interactive (2013) – NYT. Video (02:27) - NASA Finds Three New Planets in "Habitable Zone" (04/18/2013). G-type main-sequence stars Planetary transit variables Cygnus (constellation) 172 Planetary systems with two confirmed planets
Kepler-69
Astronomy
713
7,105,280
https://en.wikipedia.org/wiki/Handbra
A handbra (also hand bra or hand-bra) is the practice of covering female nipples and areolae with hands or arms. It is often done in compliance with censors' guidelines, public authorities' requirements and community standards when female breasts are required to be covered in film or other media. If the arms are used instead of the hands, the expression is arm bra. The use of long hair for this purpose is called a hair bra. A handbra may also be used by women to maintain their modesty when they find themselves with their breasts uncovered in front of others. Social conventions requiring females to cover all or part of their breasts in public have been widespread throughout history and across cultures. Contemporary Western cultures usually regard the exposure of the nipples and areolae as immodest, and sometimes prosecute it as indecent exposure. Covering them, as with pasties, is often sufficient to avoid legal sanction. In art Employment of the handbra technique and its variations has a long history in art. Judean pillar figures show a nude goddess, supporting or cupping her prominent breasts with her hands. In print media Similar community standards apply in other media, with female models being required to at least cover their breasts in some way. The handbra technique became a less common and largely unnecessary pose in early 20th-century European and American pinup postcard media as toplessness and nudity became more common. In America, after bare breasts became repressed in mainstream media circa 1930, the handbra became an increasingly durable pose, especially as more widespread American pinup literature emerged in the 1950s. Once bare breasts became common in pinup literature, after the early 1960s, the handbra pose became less necessary. As with pinup magazines of the 1950s, the handbra pose was a mainstay of late 20th-century mainstream media, especially lad mags, such as FHM, Maxim, and Zoo Weekly, that prominently featured photos of scantily clad actresses and models who wished to avoid topless and nude glamour photography. Examples include Brigitte Bardot (1955, 1971), Elizabeth Taylor in a Playboy magazine pictorial from the set of Cleopatra, Peggy Moffitt modeling Rudi Gernreich's topless maillot and how Life magazine handled the story (1964), and the emergence of handbras in publications such as the Sports Illustrated Swimsuit Issue by model Elle MacPherson (1989). Toward the end of the 20th century, the handbra appeared on numerous celebrity magazine covers. The August 1991 cover of Vanity Fair magazine, known as the More Demi Moore cover, contained a controversial handbra nude photograph of the then seven-months-pregnant Demi Moore taken by Annie Leibovitz. Two years later, Janet Jackson appeared on the September 1993 cover of Rolling Stone with her nipples covered by a pair of male hands. The magazine later named it their "Most Popular Cover Ever". In July 1994, Ronald Reagan's daughter Patti Davis appeared on the cover of Playboy with another model covering her breasts. Photographer Raphael Mazzucco created an eight-woman handbra on the cover of the 2006 Sports Illustrated Swimsuit Issue and a photo of Marisa Miller covering her breasts with her arms and her vulva with an iPod in the 2007 Swimsuit Issue. The handbra was the subject of a pointed parody advertisement for Holding Your Own Boobs magazine performed by Sarah Michelle Gellar and Will Ferrell on the May 15, 1999 episode of Saturday Night Live.
In cinema At the start of the 20th century, the use of the handbra was not very common in European or American cinema, where toplessness and discreet full nudity of the female form was accepted. In the 1930s, the Hays Code brought an end to nudity in all its forms, including toplessness, in Hollywood films. To remain within the censors' guidelines or community standards of decency and modesty, breasts of actresses in an otherwise topless scene were required to be covered, especially the nipples and areolae, with their hands (using a handbra gesture), arms, towel, pasties, some other object, or the angle of the body in relation to the camera. Social upheaval in the 1960s resulted first in toplessness then full nudity in film being accepted (albeit subject to movie ratings in many countries), after which the use of the handbra decreased. It has, however, not disappeared, remaining a concession to modesty in "PG" pictures. On the Internet In 2014 Playboy Enterprises made its Playboy.com website content "safe for work" by covering nipples with handbras and armbras. Other uses A brassiere called the "handbra" has a pair of hands parodying the technique. Lady Gaga wore one in the music video for her 2013 single "Applause". See also Glamour photography Toplessness References External links Human positions 1990s neologisms Breast
Handbra
Biology
996
62,770,507
https://en.wikipedia.org/wiki/List%20of%20chemistry%20awards
This list of chemistry awards is an index to articles about notable awards for chemistry. It includes awards by the Royal Society of Chemistry, the American Chemical Society, the Society of Chemical Industry, and by other organizations. Awards of the Royal Society of Chemistry The Royal Society of Chemistry of the United Kingdom offers a number of awards for chemistry. Awards of the American Chemical Society The American Chemical Society of the United States offers a number of awards related to chemistry. Awards of the Society of Chemical Industry The Society of Chemical Industry was established in 1881 by scientists, inventors and entrepreneurs. It offers a number of awards related to chemistry. Other awards See also Lists of awards Lists of science and technology awards List of biochemistry awards References History of the chemical industry Chemistry
List of chemistry awards
Chemistry,Technology
147
38,334,384
https://en.wikipedia.org/wiki/Hydrogenase%20%28NAD%2B%2C%20ferredoxin%29
Hydrogenase (NAD+, ferredoxin) (, bifurcating [FeFe] hydrogenase) is an enzyme with systematic name hydrogen:NAD+, ferredoxin oxidoreductase. This enzyme catalyses the following chemical reaction: 2 H2 + NAD+ + 2 oxidized ferredoxin ⇌ 5 H+ + NADH + 2 reduced ferredoxin The enzyme from Thermotoga maritima contains an [FeFe] cluster (H-cluster) and iron-sulfur clusters. References External links EC 1.12.1
Hydrogenase (NAD+, ferredoxin)
Chemistry
122
61,170,148
https://en.wikipedia.org/wiki/C1377H2208N382O442S17
{{DISPLAYTITLE:C1377H2208N382O442S17}} The molecular formula C1377H2208N382O442S17 (molar mass: 31731.9 g/mol) may refer to: Asparaginase Pegaspargase Molecular formulas
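The quoted molar mass can be checked by summing standard atomic weights over the formula. The snippet below is a simple sanity check using conventional rounded atomic weights; it says nothing about the structure of either protein listed above.

# Sanity-check the molar mass of C1377 H2208 N382 O442 S17 from rounded standard atomic weights (g/mol).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}
formula = {"C": 1377, "H": 2208, "N": 382, "O": 442, "S": 17}
molar_mass = sum(n * ATOMIC_WEIGHTS[el] for el, n in formula.items())
print(f"{molar_mass:.1f} g/mol")  # about 31732 g/mol, consistent with the quoted 31731.9 g/mol

Small differences from the quoted value reflect only the rounding of the atomic weights used.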
C1377H2208N382O442S17
Physics,Chemistry
72
10,939,959
https://en.wikipedia.org/wiki/Tyler%20poison%20gas%20plot
The Tyler poison gas plot was an American domestic terrorism plan in Tyler, Texas, thwarted in April 2003 with the arrest of three individuals and the seizure of a cyanide gas bomb along with a large arsenal. Authorities had been investigating the white supremacist conspirators for several years, and the case received little media coverage and limited public attention from the government. Conspirators The three individuals were linked to white supremacist and anti-government groups. They were: William Joseph Krar, born 1940, originally from Connecticut and later Goffstown, New Hampshire Judith L. Bruey, Krar's common-law wife Edward S. Feltus of Old Bridge, New Jersey, an employee of Monmouth County Department of Human Services Feltus was a member of the New Jersey Militia. Krar was alleged to have made his living travelling across the country selling bomb components and other weapons to violent underground anti-government groups. After leaving community college, he moved to New Hampshire and first opened a restaurant and then in 1984 began selling weapons without a license under the name International Development Corporation (IDC). His father was a gunsmith. He was convicted and fined for impersonating a police officer in 1985. He worked for a building supply company and often traveled to Central America, though not on company business, until the company closed in 1988, when he stopped filing tax returns for IDC. He met Bruey in 1989. Investigation Federal authorities had been observing Krar since at least 1995, when ATF agents investigated a possible plot to bomb government buildings, but Krar was not charged. In June 2001, police investigating a fire at Krar's Goffstown storage facility found guns and ammunition, but were persuaded this was legitimate as part of his business. Since the September 11 attacks, their attention had been focused on Middle Eastern terrorist activities. They were only alerted to Krar's recent activities by accident when he mailed Feltus a package of counterfeit birth certificates from North Dakota, Vermont, and West Virginia, and United Nations Multinational Force and Observers and Defense Intelligence Agency IDs in January 2002. The package was mistakenly delivered to a Staten Island man who alerted police. In August 2002, FBI investigators spoke to Feltus, who admitted to being in a militia and to storing weapons. In January 2003, a Nashville state trooper stopped Krar in a routine traffic stop and found drugs, chemicals, false IDs and weapons in the car. Krar was arrested and the FBI was alerted. Krar was released on bail, and one month later an FBI lab reported that white powder found in the car was sodium cyanide; an arrest warrant was issued for Krar. In April 2003, investigators found weapons, pure sodium cyanide and white supremacist material in a storage facility in Noonday, Texas, rented by Krar and Bruey. More weapons were found at their Tyler, Texas home. The weapons included at least 100 other conventional bombs (including briefcase bombs and pipe bombs), machine guns, an assault rifle, an unregistered silencer, and 500,000 rounds of ammunition. The chemical stockpile seized included sodium cyanide, hydrochloric acid, nitric acid and acetic acid. The cyanide was in a device with acid that would trigger its release as a gas bomb. Sentencing On May 4, 2004, Krar was sentenced to 135 months in prison after he pleaded guilty to building and possessing chemical weapons. Bruey was sentenced to 57 months after pleading guilty to "conspiracy to possess illegal weapons."
According to a lookup of the Bureau of Prisons prisoner database on September 18, 2012, Krar (09751-078) was listed as deceased on May 7, 2009, Bruey (10601-078) was released on May 30, 2008, and no information is available for Edward Feltus. Reactions Paul Krugman, writing in The New York Times, noted that John Ashcroft and the US Justice Department gave no comment or press release about the case, in contrast to other foiled plots of international terrorism. Krugman's piece was noted in Congress by John Conyers. The Christian Science Monitor noted in December 2003 that "there have been two government press releases and a handful of local stories, but no press conference and no coverage in the national newspapers." References External links April 2003 crimes in the United States Terrorist incidents in the United States in 2003 Tyler, Texas Failed terrorist attempts in the United States 2003 in Texas Chemical weapons attacks Terrorist incidents in Texas Deaths by cyanide poisoning White nationalist terrorism
Tyler poison gas plot
Chemistry
934
23,013,515
https://en.wikipedia.org/wiki/Advances%20in%20Electrical%20and%20Computer%20Engineering
Advances in Electrical and Computer Engineering is a peer-reviewed scientific journal published by the Ștefan cel Mare University of Suceava since 2001. The editor-in-chief is Adrian Graur. The journal covers research on all aspects of electrical and computer engineering. Extended versions of selected papers presented at the Development and Application Systems Conference are published in this journal. Abstracting and indexing The journal is abstracted and indexed in: Science Citation Index Expanded Scopus Inspec ProQuest databases EBSCO databases According to the Journal Citation Reports, the journal has a 2018 impact factor of 0.650 and a five-year impact factor of 0.639. References External links Electrical and electronic engineering journals English-language journals Open access journals Academic journals established in 2001
Advances in Electrical and Computer Engineering
Engineering
155
4,937,086
https://en.wikipedia.org/wiki/Global%20commons
Global commons is a term typically used to describe international, supranational, and global resource domains in which common-pool resources are found. Global commons include the earth's shared natural resources, such as the high oceans, the atmosphere, outer space and the Antarctic in particular. Cyberspace may also meet the definition of a global commons. Definition and usage "Global commons" is a term typically used to describe international, supranational, and global resource domains in which common-pool resources are found. In economics, common goods are rivalrous and non-excludable, constituting one of the four main types of goods. A common-pool resource, also called a common property resource, is a special case of a common good (or public good) whose size or characteristics make it costly, but not impossible, to exclude potential users. Examples include both natural and human-made resource domains (e.g., a "fishing hole" or an irrigation system). Unlike global public goods, global common-pool resources face problems of congestion, overuse, or degradation because they are subtractable (which makes them rivalrous). The term "commons" originates from the term common land in the British Isles. "Commoners' rights" referred to traditional rights held by commoners, such as mowing meadows for hay or grazing livestock on common land held in the open field system of old English common law. Enclosure was the process that ended those traditional rights, converting open fields to private property. Today, many commons still exist in England, Wales, Scotland, and the United States, although their extent is much reduced from the millions of acres that existed until the 17th century. There are still over 7,000 registered commons in England alone. The term "global commons" is typically used to indicate the earth's shared natural resources, such as the deep oceans, the atmosphere, outer space and the Northern and Southern polar regions, the Antarctic in particular. According to the World Conservation Strategy, a report on conservation published by the International Union for Conservation of Nature and Natural Resources (IUCN) in collaboration with UNESCO and with the support of the United Nations Environment Programme (UNEP) and the World Wildlife Fund (WWF): Today, the Internet, World Wide Web and resulting cyberspace are often referred to as global commons. Other usages sometimes include references to open access information of all kinds, including arts and culture, language and science, though these are more formally referred to as the common heritage of mankind. Management of the global commons The key challenge of the global commons is the design of governance structures and management systems capable of addressing the complexity of multiple public and private interests, subject to often unpredictable changes, ranging from the local to the global level. As with global public goods, management of the global commons requires pluralistic legal entities, usually international and supranational, public and private, structured to match the diversity of interests and the type of resource to be managed, and stringent enough with adequate incentives to ensure compliance. Such management systems are necessary to avoid, at the global level, the classic tragedy of the commons, in which common resources become overexploited. There are several key differences between the management of resources in the global commons and that of commons in general.
There are obvious differences in scale of both the resources and the number of users at the local versus the global level. Also, there are differences in the shared culture and expectations of resource users; more localized commons users tend to be more homogeneous and global users more heterogeneous. This contributes to differences in the possibility and time it takes for new learning about resource usage to occur at the different levels. Moreover, global resource pools are less likely to be relatively stable and the dynamics are less easily understood. Many of the global commons are non-renewable on human time scales. Thus, resource degradation is more likely to be the result of unintended consequences that are unforeseen, not immediately observable, or not easily understood. For example, the carbon dioxide emissions that drive climate change continue to do so for at least a millennium after they enter the atmosphere and species extinctions last forever. Importantly, because there are significant differences in the benefits, costs, and interests at the global level, there are significant differences in externalities between more local resource uses and uses of global-level resources. Several environmental protocols have been established (see List of international environmental agreements) as a type of international law, "an intergovernmental document intended as legally binding with a primary stated purpose of preventing or managing human impacts on natural resources." International environmental protocols came to feature in environmental governance after trans-boundary environmental problems became widely perceived in the 1960s. Following the Stockholm Intergovernmental Conference in 1972, creation of international environmental agreements proliferated. Due to the barriers already discussed, environmental protocols are not a panacea for global commons issues. Often, they are slow to produce the desired effects, tend to the lowest common denominator, and lack monitoring and enforcement. They also take an incremental approach to solutions where sustainable development principles suggest that environmental concerns should be mainstream political issues. The global ocean The global or world ocean, as the interconnected system of the Earth's oceanic (or marine) waters that comprise the bulk of the hydrosphere, is a classic global commons. It is divided into a number of principal oceanic areas that are delimited by the continents and various oceanographic features. In turn, oceanic waters are interspersed by many smaller seas, gulfs, and bays. Further, most freshwater bodies ultimately empty into the ocean and are derived through the Earth's water cycle from ocean waters. The Law of the Sea is a body of public international law governing relationships between nations in respect to navigational rights, mineral rights, and jurisdiction over coastal waters. Maritime law, also called Admiralty law, is a body of both domestic law governing maritime activities and private international law governing the relationships between private entities which operate vessels on the oceans. It deals with matters including marine commerce, marine navigation, shipping, sailors, and the transportation of passengers and goods by sea. However, these bodies of law do little to nothing to protect deep oceans from human threats. 
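The tragedy-of-the-commons dynamic invoked above can be made concrete with a toy model: a renewable stock regrows logistically each period while a number of users each remove a fixed harvest. The sketch below is purely illustrative; the growth rate, carrying capacity, number of users and per-user harvest are arbitrary assumed parameters, not estimates for any real resource.

# Toy model of a shared renewable resource (all parameters are arbitrary assumptions).
def simulate(stock, growth_rate, capacity, users, harvest_per_user, years):
    # Logistic regrowth each year, then each user removes a fixed harvest.
    history = [stock]
    for _ in range(years):
        regrowth = growth_rate * stock * (1 - stock / capacity)
        stock = max(stock + regrowth - users * harvest_per_user, 0.0)
        history.append(stock)
    return history

ok = simulate(stock=500, growth_rate=0.4, capacity=1000, users=4, harvest_per_user=10, years=30)
bad = simulate(stock=500, growth_rate=0.4, capacity=1000, users=12, harvest_per_user=10, years=30)
print(f"after 30 years: light use ~ {ok[-1]:.0f}, heavy use ~ {bad[-1]:.0f}")

With total harvest below the stock's maximum regrowth the resource settles near a sustainable level, while tripling the number of users drives the stock to collapse, the kind of overexploitation that the governance structures discussed here are meant to prevent.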
In addition to providing a significant means of transportation, the ocean holds a large proportion of all life on Earth and contains about 300 times the habitable volume of terrestrial habitats. Specific marine habitats include coral reefs, kelp forests, seagrass meadows, tidepools, muddy, sandy and rocky bottoms, and the open ocean (pelagic) zone, where solid objects are rare and the surface of the water is the only visible boundary. The organisms studied range from microscopic phytoplankton and zooplankton to huge cetaceans (whales) 30 meters (98 feet) in length. At a fundamental level, marine life helps determine the very nature of our planet. Marine life resources provide food (especially food fish), medicines, and raw materials. It is also becoming understood that the well-being of marine organisms and that of other organisms are linked in very fundamental ways. The human body of knowledge regarding the relationship between life in the sea and important cycles is rapidly growing, with new discoveries being made nearly every day. These cycles include those of matter (such as the carbon cycle) and of air (such as Earth's respiration, and movement of energy through ecosystems including the ocean). Marine organisms contribute significantly to the oxygen cycle, and are involved in the regulation of the Earth's climate. Shorelines are in part shaped and protected by marine life, and some marine organisms even help create new land. The United Nations Environment Programme (UNEP) has identified several areas of need in managing the global ocean: strengthen national capacities for action, especially in developing countries; improve fisheries management; reinforce cooperation in semi-enclosed and regional seas; strengthen controls over ocean disposal of hazardous and nuclear wastes; and advance the Law of the Sea. Specific problems identified as in need of attention include rising sea levels; contamination by hazardous chemicals (including oil spills); microbiological contamination; ocean acidification; harmful algal blooms; and over-fishing and other overexploitation. Further, the Pew Charitable Trusts Environmental Initiative program has identified a need for a worldwide system of very large, highly protected marine reserves where fishing and other extractive activities are prohibited.
It is also known as extended producer responsibility (EPR). EPR seeks to shift the responsibility for dealing with waste from governments (and thus, taxpayers and society at large) to the entities producing it. In effect, it attempts to internalise the cost of waste disposal into the cost of the product, theoretically resulting in producers improving the waste profile of their products, decreasing waste and increasing possibilities for reuse and recycling. The 1979 Convention on Long-Range Transboundary Air Pollution, or CLRTAP, is an early international effort to protect and gradually reduce and prevent air pollution. It is implemented by the European Monitoring and Evaluation Programme (EMEP), directed by the United Nations Economic Commission for Europe (UNECE). The Montreal Protocol on Substances that Deplete the Ozone Layer, or Montreal Protocol (a protocol to the Vienna Convention for the Protection of the Ozone Layer), is an international treaty designed to protect the ozone layer by phasing out the production of numerous substances believed to be responsible for ozone depletion. The treaty was opened for signature on 16 September 1987, and entered into force on 1 January 1989. After more than three decades of work, the Vienna Convention and Montreal Protocol were widely regarded as highly successful, both in achieving ozone reductions and as a pioneering model for management of the global commons. Global dimming is the gradual reduction in the amount of global direct irradiance at the Earth's surface, which has been observed for several decades after the start of systematic measurements in the 1950s. Global dimming is thought to have been caused by an increase in particulates such as sulfate aerosols in the atmosphere due to human action. It has interfered with the hydrological cycle by reducing evaporation and may have reduced rainfall in some areas. Global dimming also creates a cooling effect that may have partially masked the effect of greenhouse gases on global warming. Global warming and climate change in general are a major concern of global commons management. The Intergovernmental Panel on Climate Change (IPCC), established in 1988 to develop a scientific consensus, concluded in a series of reports that reducing emissions of greenhouse gases was necessary to prevent catastrophic harm. Meanwhile, the 1992 United Nations Framework Convention on Climate Change (FCCC) pledged to work toward "stabilisation of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic [i.e., human-induced] interference with the climate system" (as of 2019 there were 197 parties to the convention, although not all had ratified it). The 1997 Kyoto Protocol to the FCCC set forth binding obligations on industrialised countries to reduce emissions. These were accepted by many countries but not all, and many failed to meet their obligations. The Protocol expired in 2012 and was followed by the 2015 Paris Agreement in which nations made individual promises of reductions. However, the IPCC concluded in a 2018 report that dangerous climate change was inevitable unless much greater reductions were promised and carried out.
The council operates on a consensus basis, mostly dealing with environmental treaties and not addressing boundary or resource disputes. Currently, the Antarctic Treaty and related agreements, collectively called the Antarctic Treaty System or ATS, regulate international relations with respect to Antarctica, Earth's only continent without a native human population. The treaty, which entered into force in 1961 and currently has 50 signatory nations, sets aside Antarctica as a scientific preserve, establishes freedom of scientific investigation and bans military activity on that continent. Climate change in the Arctic region is leading to widespread ecosystem restructuring. The distribution of species is changing along with the structure of food webs. Changes in ocean circulation appear responsible for the first exchanges of zooplankton between the North Pacific and North Atlantic regions in perhaps 800,000 years. These changes can allow the transmission of diseases from subarctic animals to Arctic ones, and vice versa, posing an additional threat to species already stressed by habitat loss and other impacts. Where these changes lead is not yet clear, but they are likely to have far-reaching impacts on Arctic marine ecosystems. Climate models tend to indicate that temperature trends due to global warming will be much smaller in Antarctica than in the Arctic, but ongoing research may show otherwise. Outer space Management of the outer space global commons has been contentious since the successful launch of the Sputnik satellite by the former Soviet Union on 4 October 1957. There is no clear boundary between Earth's atmosphere and space, although there are several standard boundary designations: one that deals with orbital velocity (the Kármán line), one that depends on the velocity of charged particles in space, and some that are determined by human factors such as the height at which human blood begins to boil without a pressurized environment (the Armstrong line). Space policy regarding a country's civilian space program, as well as its policy on both military use and commercial use of outer space, intersects with science policy, since national space programs often perform or fund research in space science, and also with defense policy, for applications such as spy satellites and anti-satellite weapons. It also encompasses government regulation of third-party activities such as commercial communications satellites and private spaceflight as well as the creation and application of space law and space advocacy organizations that exist to support the cause of space exploration. Scientists have outlined a rationale for governance that regulates the current free externalization of true costs and risks, treating orbital space around the Earth as part of the global commons – as an "additional ecosystem" or "part of the human environment" – which should be subject to the same concerns and regulations as, for example, the oceans on Earth. The study concluded in 2022 that it needs "new policies, rules and regulations at national and international level".
The treaty was passed by the United Nations General Assembly in 1963 and signed in 1967 by the USSR, the United States of America and the United Kingdom. As of mid-2013, the treaty had been ratified by 102 states and signed by an additional 27 states. Beginning in 1958, outer space has been the subject of multiple resolutions by the United Nations General Assembly. Of these, more than 50 have concerned international co-operation in the peaceful uses of outer space and the prevention of an arms race in space. Four additional space law treaties have been negotiated and drafted by the UN's Committee on the Peaceful Uses of Outer Space. Still, there remain no legal prohibitions against deploying conventional weapons in space, and anti-satellite weapons have been successfully tested by the US, USSR and China. The 1979 Moon Treaty turned the jurisdiction of all heavenly bodies (including the orbits around such bodies) over to the international community. However, this treaty has not been ratified by any nation that currently practices crewed spaceflight. In 1976, eight equatorial states (Ecuador, Colombia, Brazil, Congo, Zaire, Uganda, Kenya, and Indonesia) met in Bogotá, Colombia, to make the "Declaration of the First Meeting of Equatorial Countries," also known as "the Bogotá Declaration", a claim to control the segment of the geosynchronous orbital path corresponding to each country. These claims are not internationally accepted. The International Space Station The International Space Station programme is a joint project among five participating space agencies: NASA, the Russian Federal Space Agency (RSA), Japan Aerospace Exploration Agency (JAXA), European Space Agency (ESA), and Canadian Space Agency (CSA). National budget constraints led to the merger of three space station projects into the International Space Station. In 1993, the partially built components for the Soviet/Russian space station Mir-2, the proposed American Freedom, and the proposed European Columbus merged into this multinational programme. The ownership and use of the space station are established by intergovernmental treaties and agreements. The ISS is arguably the most expensive single item ever constructed, and may be one of the most significant instances of international cooperation in modern history. According to the original Memorandum of Understanding between NASA and the RSA, the International Space Station was intended to be a laboratory, observatory and factory in space. It was also planned to provide transportation and maintenance, and to act as a staging base for possible future missions to the Moon, Mars and asteroids. In the 2010 United States National Space Policy, it was given additional roles of serving commercial, diplomatic and educational purposes. Internet As a global system of computers interconnected by telecommunication technologies consisting of millions of private, public, academic, business, and government resources, it is difficult to argue that the Internet is a global commons. These computing resources are largely privately owned and subject to private property law, although many are government owned and subject to public law. The World Wide Web, as a system of interlinked hypertext documents, either public domain (like Wikipedia itself) or subject to copyright law, is, at best, a mixed good. The resultant virtual space or cyberspace, however, is often viewed as an electronic global commons that allows for as much freedom of expression as any public space, or more.
Access to those digital commons and the actual freedom of expression allowed vary widely by geographical area. Management of the electronic global commons presents as many issues as does the management of other commons. In addition to issues related to inequity in access, issues such as net neutrality, Internet censorship, Internet privacy, and electronic surveillance arise. However, the term global commons generally denotes a stateless maneuver space in which no nation or entity can claim preeminence; because 100 percent of cyberspace is owned by either a public or private entity, cyberspace, although it is often perceived as such, may not be said to be a true global commons. See also Environmental economics Environmental law Free and open-source software Goods Global public goods Digital public goods Human ecology Tragedy of the commons Wikipedia References External links The Global Environmental Facility Share the World's Resources Sustainable Economics to End Global Poverty – the Global Commons in Economic Practice. Further reading World Goods (economics) Globalization Human ecology
Global commons
Physics,Environmental_science
4,042
14,025,168
https://en.wikipedia.org/wiki/Protein-glutamate%20O-methyltransferase
In enzymology, a protein-glutamate O-methyltransferase () is an enzyme that catalyzes the chemical reaction S-adenosyl-L-methionine + protein L-glutamate S-adenosyl-L-homocysteine + protein L-glutamate methyl ester Thus, the two substrates of this enzyme are S-adenosyl methionine and protein L-glutamic acid, whereas its two products are S-adenosylhomocysteine and protein L-glutamate methyl ester. This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:protein-L-glutamate O-methyltransferase. Other names in common use include methyl-accepting chemotaxis protein O-methyltransferase, S-adenosylmethionine-glutamyl methyltransferase, methyl-accepting chemotaxis protein methyltransferase II, S-adenosylmethionine:protein-carboxyl O-methyltransferase, protein methylase II, MCP methyltransferase I, MCP methyltransferase II, protein O-methyltransferase, protein(aspartate)methyltransferase, protein(carboxyl)methyltransferase, protein carboxyl-methylase, protein carboxyl-O-methyltransferase, protein carboxylmethyltransferase II, protein carboxymethylase, protein carboxymethyltransferase, and protein methyltransferase II. This enzyme participates in bacterial chemotaxis - general and bacterial chemotaxis - organism-specific. CheR proteins are part of the chemotaxis signaling mechanism which methylates the chemotaxis receptor at specific glutamate residues. Methyl transfer from the ubiquitous S-adenosyl-L-methionine (AdoMet/SAM) to either nitrogen, oxygen or carbon atoms is frequently employed in diverse organisms ranging from bacteria to plants and mammals. The reaction is catalysed by methyltransferases (Mtases) and modifies DNA, RNA, proteins and small molecules, such as catechol for regulatory purposes. The various aspects of the role of DNA methylation in prokaryotic restriction-modification systems and in a number of cellular processes in eukaryotes including gene regulation and differentiation is well documented. Flagellated bacteria swim towards favourable chemicals and away from deleterious ones. Sensing of chemoeffector gradients involves chemotaxis receptors, transmembrane (TM) proteins that detect stimuli through their periplasmic domains and transduce the signals via their cytoplasmic domains . Signalling outputs from these receptors are influenced both by the binding of the chemoeffector ligand to their periplasmic domains and by methylation of specific glutamate residues on their cytoplasmic domains. Methylation is catalysed by CheR, an S-adenosylmethionine-dependent methyltransferase, which reversibly methylates specific glutamate residues within a coiled coil region, to form gamma-glutamyl methyl ester residues. The structure of the Salmonella typhimurium chemotaxis receptor methyltransferase CheR, bound to S-adenosylhomocysteine, has been determined to a resolution of 2.0 Angstrom. The structure reveals CheR to be a two-domain protein, with a smaller N-terminal helical domain linked via a single polypeptide connection to a larger C-terminal alpha/beta domain. The C-terminal domain has the characteristics of a nucleotide-binding fold, with an insertion of a small anti-parallel beta-sheet subdomain. The S-adenosylhomocysteine-binding site is formed mainly by the large domain, with contributions from residues within the N-terminal domain and the linker region. 
Structural studies As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes and . References Further reading Protein domains EC 2.1.1 Enzymes of known structure
Protein-glutamate O-methyltransferase
Biology
932
2,532,134
https://en.wikipedia.org/wiki/Methanandamide
Methanandamide (AM-356) is a synthetically created stable chiral analog of anandamide. Its effects have been observed to act on the cannabinoid receptors (specifically on CB1 receptors, which are part of the central nervous system) found in different organisms such as mammals, fish, and certain invertebrates (e.g. Hydra). References Fatty acid amides Primary alcohols AM cannabinoids
Methanandamide
Chemistry,Biology
89