https://en.wikipedia.org/wiki/Memory%20inhibition
In psychology, memory inhibition is the ability not to remember irrelevant information. The scientific concept of memory inhibition should not be confused with everyday uses of the word "inhibition". Scientifically speaking, memory inhibition is a type of cognitive inhibition, which is the stopping or overriding of a mental process, in whole or in part, with or without intention. Memory inhibition is a critical component of an effective memory system. While some memories are retained for a lifetime, most memories are forgotten. According to evolutionary psychologists, forgetting is adaptive because it facilitates selectivity of rapid, efficient recollection. For example, a person trying to remember where they parked their car would not want to remember every place they have ever parked. In order to remember something, therefore, it is essential not only to activate the relevant information, but also to inhibit irrelevant information. There are many memory phenomena that seem to involve inhibition, although there is often debate about the distinction between interference and inhibition. History In the early days of psychology, the concept of inhibition was prevalent and influential (e.g., Breese, 1899; Pillsbury, 1908; Wundt, 1902). These psychologists applied the concept of inhibition (and interference) to early theories of learning and forgetting. Starting in 1894, the German scientists Müller and Schumann conducted empirical studies that demonstrated how learning a second list of items interfered with memory of the first list. Based on these experiments, Müller argued that the process of attention was based on facilitation. Arguing for a different explanation, Wundt (1902) claimed that selective attention was accomplished by the active inhibition of unattended information, and that to attend to one of several simultaneous stimuli, the others had to be inhibited. The American psychologist Walter Pillsbury combined Müller and Wundt's arguments, claiming that attention bo
https://en.wikipedia.org/wiki/List%20of%20EC%20numbers%20%28EC%206%29
This list contains a list of EC numbers for the sixth group, EC 6, ligases, placed in numerical order as determined by the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology. All official information is tabulated at the website of the committee. The database is developed and maintained by Andrew McDonald. EC 6.1: Forming Carbon-Oxygen Bonds EC 6.1.1: Ligases Forming Aminoacyl-tRNA and Related Compounds (Aminoacyl tRNA synthetase) : tyrosine—tRNA ligase : tryptophan—tRNA ligase : threonine—tRNA ligase : leucine—tRNA ligase : isoleucine—tRNA ligase : lysine—tRNA ligase : alanine—tRNA ligase : Deleted : valine—tRNA ligase : methionine—tRNA ligase : serine—tRNA ligase : aspartate—tRNA ligase : D-alanine—poly(phosphoribitol) ligase : glycine—tRNA ligase : proline—tRNA ligase : cysteine—tRNA ligase : glutamate—tRNA ligase : glutamine—tRNA ligase : arginine—tRNA ligase : phenylalanine—tRNA ligase : histidine—tRNA ligase : asparagine—tRNA ligase : aspartate—tRNAAsn ligase : glutamate—tRNAGln ligase EC 6.1.1.25: The tRNAPyl is now known only to be charged with pyrrolysine (cf. ) : pyrrolysine—tRNAPyl ligase : O-phospho-L-serine—tRNA ligase EC 6.1.1.28: proline/cysteine—tRNA ligase. Later published work having demonstrated that this was not a genuine enzyme, EC 6.1.1.28 was withdrawn at the public-review stage before being made official EC 6.1.2: Acid—alcohol ligases (ester synthases) : D-alanine—(R)-lactate ligase : nebramycin 5′ synthase * * No Wikipedia article EC 6.1.3: Cyclo-ligases : olefin β-lactone synthetase * * No Wikipedia article EC 6.2: Forming Carbon-Sulfur Bonds EC 6.2.1: Acid-Thiol Ligases : acetate—CoA ligase : medium-chain acyl-CoA ligase : long-chain-fatty-acid—CoA ligase : succinate—CoA ligase (GDP-forming) : succinate—CoA ligase (ADP-forming) : glutarate—CoA ligase : cholate—CoA ligase : oxalate—CoA ligase : malate—CoA ligase : carboxylic acid—CoA ligase (GDP-forming)
https://en.wikipedia.org/wiki/List%20of%20EC%20numbers%20%28EC%205%29
This list contains a list of EC numbers for the fifth group, EC 5, isomerases, placed in numerical order as determined by the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology. All official information is tabulated at the website of the committee. The database is developed and maintained by Andrew McDonald. EC 5.1: Epimerases and racemases EC 5.1.1: Acting on Amino acids and Derivatives : alanine racemase : methionine racemase : glutamate racemase : proline racemase : lysine racemase : threonine racemase : diaminopimelate epimerase : 4-hydroxyproline epimerase : arginine racemase : amino-acid racemase : phenylalanine racemase (ATP-hydrolysing) : ornithine racemase : aspartate racemase : nocardicin-A epimerase : 2-aminohexano-6-lactam racemase : protein-serine epimerase : isopenicillin-N epimerase : serine racemase : L-Ala-D/L-Glu epimerase * : isoleucine 2-epimerase * : 4-hydroxyproline betaine 2-epimerase * : UDP-N-acetyl-α-D-muramoyl-L-alanyl-L-glutamate epimerase * : histidine racemase * *No Wikipedia article EC 5.1.2: Acting on Hydroxy acids and Derivatives : lactate racemase : mandelate racemase : 3-hydroxybutyryl-CoA epimerase : acetoin racemase : tartrate epimerase : isocitrate epimerase : tagaturonate epimerase * *No Wikipedia article EC 5.1.3: Acting on Carbohydrates and Derivatives : ribulose-phosphate 3-epimerase : UDP-glucose 4-epimerase : aldose 1-epimerase : L-ribulose-5-phosphate 4-epimerase : UDP-arabinose 4-epimerase : UDP-glucuronate 4-epimerase : UDP-N-acetylglucosamine 4-epimerase : N-acylglucosamine 2-epimerase : N-acylglucosamine-6-phosphate 2-epimerase : CDP-paratose 2-epimerase : cellobiose epimerase : The enzyme has never been purified and the activity was later shown not to exist : dTDP-4-dehydrorhamnose 3,5-epimerase : UDP-N-acetylglucosamine 2-epimerase (non-hydrolysing) : glucose-6-phosphate 1-epimerase : UDP-glucosamine 4-epimerase : hep
https://en.wikipedia.org/wiki/List%20of%20EC%20numbers%20%28EC%204%29
This list contains a list of EC numbers for the fourth group, EC 4, lyases, placed in numerical order as determined by the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology. All official information is tabulated at the website of the committee. The database is developed and maintained by Andrew McDonald. EC 4.1: Carbon-Carbon Lyases EC 4.1.1: Carboxy-lyases : pyruvate decarboxylase : oxalate decarboxylase EC 4.1.1.3: Now recognized to be two enzymes [oxaloacetate decarboxylase (Na+ extruding)] and (oxaloacetate decarboxylase). : acetoacetate decarboxylase : acetolactate decarboxylase : cis-aconitate decarboxylase : benzoylformate decarboxylase : oxalyl-CoA decarboxylase : malonyl-CoA decarboxylase EC 4.1.1.10: Now included with , aspartate 4-decarboxylase : aspartate 1-decarboxylase : aspartate 4-decarboxylase EC 4.1.1.13: deleted : valine decarboxylase : glutamate decarboxylase : hydroxyglutamate decarboxylase : ornithine decarboxylase : lysine decarboxylase : arginine decarboxylase : diaminopimelate decarboxylase : phosphoribosylaminoimidazole carboxylase : histidine decarboxylase : orotidine-5′-phosphate decarboxylase : aminobenzoate decarboxylase : tyrosine decarboxylase EC 4.1.1.26: Now included with aromatic-L-amino-acid decarboxylase EC 4.1.1.27: Now included with aromatic-L-amino-acid decarboxylase : aromatic-L-amino-acid decarboxylase : sulfoalanine decarboxylase : pantothenoylcysteine decarboxylase : phosphoenolpyruvate carboxylase : phosphoenolpyruvate carboxykinase (GTP) : diphosphomevalonate decarboxylase : dehydro-L-gulonate decarboxylase : UDP-glucuronate decarboxylase : phosphopantothenoylcysteine decarboxylase : uroporphyrinogen decarboxylase : phosphoenolpyruvate carboxykinase (diphosphate) : ribulose-bisphosphate carboxylase : hydroxypyruvate decarboxylase EC 4.1.1.41: Now , (S)-methylmalonyl-CoA decarboxylase : carnitine decarboxylase : phenylpyruvate dec
https://en.wikipedia.org/wiki/List%20of%20EC%20numbers%20%28EC%203%29
This list contains a list of EC numbers for the third group, EC 3, hydrolases, placed in numerical order as determined by the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology. All official information is tabulated at the website of the committee. The database is developed and maintained by Andrew McDonald. EC 3.1: Acting on Ester Bonds EC 3.1.1: Carboxylic Ester Hydrolases : carboxylesterase : arylesterase : triacylglycerol lipase : phospholipase A2 : lysophospholipase : acetylesterase : acetylcholinesterase : cholinesterase EC 3.1.1.9: deleted, a side reaction of cholinesterase : tropinesterase : pectinesterase EC 3.1.1.12: deleted, identical with carboxylesterase : sterol esterase : chlorophyllase : L-arabinonolactonase EC 3.1.1.16: deleted, mixture of (muconolactone Δ-isomerase) and (3-oxoadipate enol-lactonase) : gluconolactonase EC 3.1.1.18: deleted, Now included with gluconolactonase : uronolactonase : tannase EC 3.1.1.21: deleted, now known to be catalysed by , carboxylesterase and , triacylglycerol lipase. : hydroxybutyrate-dimer hydrolase : acylglycerol lipase : 3-oxoadipate enol-lactonase : 1,4-lactonase : galactolipase : 4-pyridoxolactonase : acylcarnitine hydrolase : aminoacyl-tRNA hydrolase : D-arabinonolactonase : 6-phosphogluconolactonase : phospholipase A1 : 6-acetylglucose deacetylase : lipoprotein lipase : dihydrocoumarin hydrolase : limonin-D-ring-lactonase : steroid-lactonase : triacetate-lactonase : actinomycin lactonase : orsellinate-depside hydrolase : cephalosporin-C deacetylase : chlorogenate hydrolase : α-amino-acid esterase : 4-methyloxaloacetate esterase : carboxymethylenebutenolidase : deoxylimonate A-ring-lactonase : 1-alkyl-2-acetylglycerophosphocholine esterase : fusarinine-C ornithinesterase : sinapine esterase : wax-ester hydrolase : phorbol-diester hydrolase : phosphatidylinositol deacylase : sialate O-acetylesterase : acetoxybutynyl
https://en.wikipedia.org/wiki/List%20of%20EC%20numbers%20%28EC%202%29
This list contains a list of EC numbers for the second group, EC 2, transferases, placed in numerical order as determined by the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology. All official information is tabulated at the website of the committee. The database is developed and maintained by Andrew McDonald. EC 2.1: Transferring One-Carbon Groups EC 2.1.1: Methyltransferases : nicotinamide N-methyltransferase : guanidinoacetate N-methyltransferase : thetin—homocysteine S-methyltransferase : acetylserotonin O-methyltransferase : betaine—homocysteine S-methyltransferase : catechol O-methyltransferase : nicotinate N-methyltransferase : histamine N-methyltransferase : thiol S-methyltransferase : homocysteine S-methyltransferase : magnesium protoporphyrin IX methyltransferase : methionine S-methyltransferase : methionine synthase : 5-methyltetrahydropteroyltriglutamate—homocysteine S-methyltransferase : fatty-acid O-methyltransferase : methylene-fatty-acyl-phospholipid synthase : phosphatidylethanolamine N-methyltransferase : polysaccharide O-methyltransferase : trimethylsulfonium—tetrahydrofolate N-methyltransferase : glycine N-methyltransferase : methylamine—glutamate N-methyltransferase : carnosine N-methyltransferase : now covered by , and : now covered by , and : phenol O-methyltransferase : iodophenol O-methyltransferase : tyramine N-methyltransferase : phenylethanolamine N-methyltransferase : Now covered by , and : tRNA (purine-2- or -6-)-methyltransferase: Reactions previously described are due to : Now covered by and : Now covered by , , and : tRNA (guanine46-N7)-methyltransferase : tRNA (guanosine18-2′-O)-methyltransferase : tRNA (uracil54-C5)-methyltransferase : Now covered by , , , : DNA (cytosine-5-)-methyltransferase : O-demethylpuromycin O-methyltransferase : inositol 3-methyltransferase : inositol 1-methyltransferase : sterol 24-C-methyltransferase : flavone 3
https://en.wikipedia.org/wiki/Peopleware%3A%20Productive%20Projects%20and%20Teams
Peopleware: Productive Projects and Teams is a 1987 book on the social side of software development, specifically managing project teams. It was written by software consultants Tom DeMarco and Tim Lister, from their experience in the world of software development. This book was revised in 1999 and 2016. Overview Peopleware is a popular book about software organization management. The first chapter of the book claims, "The major problems of our work are not so much technological as sociological in nature". The book approaches sociological or 'political' problems such as group chemistry and team jelling, "flow time" and quiet in the work environment, and the high cost of turnover. Other topics include the conflicts between individual work perspective and corporate ideology, corporate entropy, "teamicide" and workspace theory. The authors presented most subjects as principles backed up by some concrete story or other information. As an example, the chapter "Spaghetti Dinner" presents a fictional example of a manager inviting a new team over for dinner, then having them buy and prepare the meal as a group, in order to produce a first team success. Other chapters use real-life stories or cite various studies to illustrate the principles being presented. Editions 1st Edition: 1987 2nd Edition: 1999 The second edition kept the original content with only a few changes or corrections. The bulk of the new content was eight chapters in a new section at the end. The new section's chapters revisited some of the concepts of the original chapters with changes and added new ones. The eBook PDF version published by Dorset House was not searchable, as each page appeared to be an image. The Kindle version is searchable. 3rd Edition: 2016 The new content of the third edition is spread out through the book. There are six new chapters, but the original content has also been updated. See also Peopleware as a more general concept. The Mythical Man-Month References Information t
https://en.wikipedia.org/wiki/Expansive%20homeomorphism
In mathematics, the notion of expansivity formalizes the notion of points moving away from one another under the action of an iterated function. The idea of expansivity is fairly rigid, as the definition of positive expansivity, below, as well as the Schwarz–Ahlfors–Pick theorem demonstrate. Definition If (X, d) is a metric space, a homeomorphism f : X → X is said to be expansive if there is a constant ε₀ > 0, called the expansivity constant, such that for every pair of distinct points x, y in X there is an integer n such that d(fⁿ(x), fⁿ(y)) ≥ ε₀. Note that in this definition, n can be positive or negative, and so f may be expansive in the forward or backward directions. The space X is often assumed to be compact, since under that assumption expansivity is a topological property; i.e. if d′ is any other metric generating the same topology as d, and if f is expansive in (X, d), then f is expansive in (X, d′) (possibly with a different expansivity constant). If f : X → X is a continuous map, we say that f is positively expansive (or forward expansive) if there is an ε₀ > 0 such that, for any distinct x, y in X, there is an n ≥ 0 such that d(fⁿ(x), fⁿ(y)) ≥ ε₀. Theorem of uniform expansivity Given f an expansive homeomorphism of a compact metric space (X, d), the theorem of uniform expansivity states that for every ε > 0 and δ > 0 there is an N > 0 such that for each pair x, y of points of X such that d(x, y) > ε, there is an n with |n| ≤ N such that d(fⁿ(x), fⁿ(y)) > c − δ, where c is the expansivity constant of f (proof). Discussion Positive expansivity is much stronger than expansivity. In fact, one can prove that if X is compact and f is a positively expansive homeomorphism, then X is finite (proof). External links Expansive dynamical systems on scholarpedia Dynamical systems
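A standard concrete instance of positive expansivity (not taken from this article) is the angle-doubling map x ↦ 2x mod 1 on the circle ℝ/ℤ with the arc-length metric: the separation of nearby orbits can be made visible in a few lines of Python. The threshold c = 1/4, the iteration cap, and the helper names are choices made here for illustration only.

```python
# Numerical sketch: the doubling map on the circle is positively
# expansive, so any two distinct points are eventually pushed at
# least some fixed distance c apart under forward iteration.

def circle_dist(x, y):
    """Arc-length metric on R/Z."""
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def doubling(x):
    """The angle-doubling map f(x) = 2x mod 1."""
    return (2.0 * x) % 1.0

def separates(x, y, c=0.25, max_iter=60):
    """First n >= 0 with circle_dist(f^n(x), f^n(y)) >= c, else None."""
    for n in range(max_iter):
        if circle_dist(x, y) >= c:
            return n
        x, y = doubling(x), doubling(y)
    return None
```

Starting from two points only 10⁻⁹ apart, the distance roughly doubles each step, so `separates` finds a separation time of about 28 iterations; identical points, of course, never separate, which is why expansivity is stated for distinct points.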
https://en.wikipedia.org/wiki/Ishango%20bone
The Ishango bone, discovered at the "Fisherman Settlement" of Ishango in the Democratic Republic of Congo, is a bone tool and possible mathematical device that dates to the Upper Paleolithic era. The curved bone is dark brown in color, about 10 centimeters in length, and features a sharp piece of quartz affixed to one end, perhaps for engraving. Because the bone has been narrowed, scraped, polished, and engraved to a certain extent, it is no longer possible to determine what animal the bone belonged to, although it is assumed to belong to a mammal. The ordered engravings have led many to speculate about the meaning behind these marks, including interpretations like mathematical significance or astrological relevance. It is thought by some to be a tally stick, as it features a series of what has been interpreted as tally marks carved in three columns running the length of the tool, though it has also been suggested that the scratches might have been to create a better grip on the handle or for some other non-mathematical reason. Others argue that the marks on the object are non-random and that it was likely a kind of counting tool used to perform simple mathematical procedures. Other speculations include the engravings on the bone serving as a lunar calendar. Dating to 20,000 years before present, it is regarded as the oldest mathematical tool known to humankind, with the possible exception of the approximately 40,000-year-old Lebombo bone from southern Africa. History Archaeological discovery The Ishango bone was found in 1950 by the Belgian Jean de Heinzelin de Braucourt while exploring what was then the Belgian Congo. It was discovered in the area of Ishango near the Semliki River. Lake Edward empties into the Semliki, which forms part of the headwaters of the Nile River (now on the border between modern-day Uganda and D.R. Congo). Some archaeologists believe the prior inhabitants of Ishango were a "pre-sapiens species". However, the most recent inhabitants, who gave the a
https://en.wikipedia.org/wiki/Delegated%20administration
In computing, delegated administration or delegation of control describes the decentralization of role-based-access-control systems. Many enterprises use a centralized model of access control. For large organizations, this model scales poorly and IT teams become burdened with menial role-change requests. These requests — often used when hire, fire, and role-change events occur in an organization — can incur high latency times or suffer from weak security practices. Such delegation involves assigning a person or group specific administrative permissions for an Organizational Unit. In information management, this is used to create teams that can perform specific (limited) tasks for changing information within a user directory or database. The goal of delegation is to create groups with minimum permissions that grant the ability to carry out authorized tasks. Granting extraneous/superfluous permissions would create abilities beyond the authorized scope of work. One best practice for enterprise role management entails the use of LDAP groups. Delegated administration refers to a decentralized model of role or group management. In this model, the application or process owner creates, manages and delegates the management of roles. A centralized IT team simply operates the service of directory, metadirectory, web interface for administration, and related components. Allowing the application or business process owner to create, manage and delegate groups supports a much more scalable approach to the administration of access rights. In a metadirectory environment, these roles or groups could also be "pushed" or synchronized with other platforms. For example, groups can be synchronized with native operating systems such as Microsoft Windows for use on an access control list that protects a folder or file. With the metadirectory distributing groups, the central directory is the central repository of groups. Some enterprise applications (e.g., PeopleSoft) support
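The delegation model described above can be sketched as a toy directory service: a central store holds the groups, but membership changes are authorized against per-group delegated admins rather than a central IT team. This is a minimal sketch under assumptions of this example; the class, group, and user names are hypothetical, and a real deployment would sit on an LDAP directory rather than in-memory dictionaries.

```python
# Minimal sketch of delegated administration: each group carries its
# own set of delegated admins, and only they may change its membership
# (minimum permissions, scoped to the owner's own group).

class GroupDirectory:
    def __init__(self):
        self._members = {}   # group name -> set of member users
        self._admins = {}    # group name -> set of delegated admins

    def create_group(self, group, delegated_admins):
        self._members[group] = set()
        self._admins[group] = set(delegated_admins)

    def add_member(self, actor, group, user):
        # Authorize against this group's delegated admins, not a
        # global administrator role.
        if actor not in self._admins.get(group, ()):
            raise PermissionError(f"{actor} cannot administer {group}")
        self._members[group].add(user)

    def is_member(self, group, user):
        return user in self._members.get(group, set())
```

The point of the design is that the application owner (here, whoever is listed in `delegated_admins`) can service role-change requests directly, while a request from anyone else, including central IT staff acting outside their scope, is rejected.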
https://en.wikipedia.org/wiki/EigenTrust
The EigenTrust algorithm is a reputation management algorithm for peer-to-peer networks, developed by Sep Kamvar, Mario Schlosser, and Hector Garcia-Molina. The algorithm provides each peer in the network with a unique global trust value based on the peer's history of uploads and thus aims to reduce the number of inauthentic files in a P2P network. It has been cited by approximately 3853 other articles according to Google Scholar. Overview Peer-to-peer systems available today (like Gnutella) are open, often anonymous and lack accountability. Hence a user with malicious intent can introduce into the peer-to-peer network resources that may be inauthentic, corrupted or malicious (Malware). This reflects poorly on the credibility of current peer-to-peer systems. A research team from Stanford provides a reputation management system, where each peer in the system has a unique global trust value based on the peer's history of uploads. Any peer requesting resources will be able to access the trust value of a peer and avoid downloading files from untrusted peers. Algorithm The EigenTrust algorithm is based on the notion of transitive trust: if a peer i trusts any peer j, it would also trust the peers trusted by j. Each peer i calculates the local trust value s_ij for all peers that have provided it with authentic or fake downloads, based on the satisfactory or unsatisfactory transactions that it has had: s_ij = sat(i, j) − unsat(i, j), where sat(i, j) refers to the number of satisfactory responses that peer i has received from peer j, and unsat(i, j) refers to the number of unsatisfactory responses that peer i has received from peer j. The local value is normalized, to prevent malicious peers from assigning arbitrarily high local trust values to colluding malicious peers and arbitrarily low local trust values to good peers. The normalized local trust value c_ij is then c_ij = max(s_ij, 0) / Σ_j max(s_ij, 0). The local trust values are aggregated at a central location or in a distributed manner to create a trust vector for the whole network. Based o
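The normalization and aggregation steps can be sketched in pure Python: clip and row-normalize the local values s_ij, then power-iterate t ← Cᵀt until the global trust vector stabilizes. The uniform fallback for peers with no positive opinions, and all function names, are assumptions of this sketch, not taken from the excerpt above.

```python
# Sketch of EigenTrust aggregation: c_ij = max(s_ij, 0) / sum_j max(s_ij, 0),
# then the global trust vector t is the fixed point of t <- C^T t.

def eigentrust(S, tol=1e-12, max_iter=10_000):
    """S[i][j] = s_ij = sat(i, j) - unsat(i, j).  Returns global trust t."""
    n = len(S)
    C = []
    for row in S:
        clipped = [max(v, 0.0) for v in row]
        total = sum(clipped)
        # A peer with no positive opinions spreads its trust uniformly
        # (a convention of this sketch).
        C.append([v / total for v in clipped] if total > 0
                 else [1.0 / n] * n)
    t = [1.0 / n] * n                       # start from uniform trust
    for _ in range(max_iter):
        t_next = [sum(C[i][j] * t[i] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(t_next, t)) < tol:
            break
        t = t_next
    return t
```

Because each row of C sums to 1, the iteration preserves the total trust mass, so the result is a probability-like vector: entries are non-negative and sum to 1.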
https://en.wikipedia.org/wiki/Species%20distribution
Species distribution, or species dispersion, is the manner in which a biological taxon is spatially arranged. The geographic limits of a particular taxon's distribution are its range, often represented as shaded areas on a map. Patterns of distribution change depending on the scale at which they are viewed, from the arrangement of individuals within a small family unit, to patterns within a population, or the distribution of the entire species as a whole (range). Species distribution is not to be confused with dispersal, which is the movement of individuals away from their region of origin or from a population center of high density. Range In biology, the range of a species is the geographical area within which that species can be found. Within that range, distribution is the general structure of the species population, while dispersion is the variation in its population density. Range is often described with the following qualities: Sometimes a distinction is made between a species' natural, endemic, indigenous, or native range, where it has historically originated and lived, and the range where a species has more recently established itself. Many terms are used to describe the new range, such as non-native, naturalized, introduced, transplanted, invasive, or colonized range. Introduced typically means that a species has been transported by humans (intentionally or accidentally) across a major geographical barrier. For species found in different regions at different times of year, especially seasons, terms such as summer range and winter range are often employed. For species for which only part of their range is used for breeding activity, the terms breeding range and non-breeding range are used. For mobile animals, the term natural range is often used, as opposed to areas where it occurs as a vagrant. Geographic or temporal qualifiers are often added, such as in British range or pre-1950 range. The typical geographic ranges could be the latitudinal range an
https://en.wikipedia.org/wiki/SoapUI
SoapUI is an open-source web service testing application for Simple Object Access Protocol (SOAP) and representational state transfers (REST). Its functionality covers web service inspection, invoking, development, simulation and mocking, functional testing, load and compliance testing. A commercial version, ReadyAPI (formerly SoapUI Pro), which mainly focuses on features designed to enhance productivity, was also developed by Eviware Software AB. In 2011, SmartBear Software acquired Eviware. SoapUI was initially released to SourceForge in September 2005. It is free software, licensed under the terms of the European Union Public License. Since the initial release, SoapUI has been downloaded more than 2,000,000 times. It is built entirely on the Java platform, and uses Swing for the user interface. This means that SoapUI is cross-platform. Today, SoapUI also supports IDEA, Eclipse, and NetBeans. SoapUI can test SOAP and REST web services, JMS, AMF, as well as make any HTTP(S) and JDBC calls. Features SoapUI Core features include web services: inspection invoking development simulation and mocking functional, compliance and security testing ReadyAPI ReadyAPI is the commercial enterprise version. ReadyAPI adds a number of productivity enhancements to the SoapUI core, which are designed to ease many recurring tasks when working with SoapUI. Awards SoapUI has been given a number of awards. These include: Jolt Awards 2014: The Best Testing Tools ATI Automation Honors, 2009 InfoWorld Best of Open Source Software Award, 2008 SOAWorld Readers' Choice Award, 2007 See also Apache JMeter Automated testing itko List of unit testing frameworks LoadUI Software testing System testing Test case Test-driven development TestComplete xUnit – a family of unit testing frameworks References External links API Testing Dojo Free computer programming tools Cross-platform software Web service development tools Software testing tools 2005 software Software us
https://en.wikipedia.org/wiki/Transrepression
In the field of molecular biology, transrepression is a process whereby one protein represses (i.e., inhibits) the activity of a second protein through a protein-protein interaction. Since this repression occurs between two different protein molecules (intermolecular), it is referred to as a trans-acting process. The protein that is repressed is usually a transcription factor whose function is to up-regulate (i.e., increase) the rate of gene transcription. Hence the net result of transrepression is down regulation of gene transcription. An example of transrepression is the ability of the glucocorticoid receptor to inhibit the transcriptional promoting activity of the AP-1 and NF-κB transcription factors. In addition to transactivation, transrepression is an important pathway for the anti-inflammatory effects of glucocorticoids. Other nuclear receptors such as LXR and PPAR have been demonstrated to also have the ability to transrepress the activity of other proteins. See also Selective glucocorticoid receptor agonist References Molecular biology
https://en.wikipedia.org/wiki/Toobin%27
Toobin' is an Atari Games and Midway Games arcade video game released in 1988. It is based on the recreational activity of tubing. Toobin' was ported to the Amiga, Commodore 64, Atari ST, Amstrad CPC, Nintendo Entertainment System, MS-DOS, Game Boy Color, ZX Spectrum, and MSX. Players assume control of tubers Bif or Jet, guiding them along vertically scrolling rivers on an inner tube. Gameplay The player competes in a river race against the computer or another player. The player's score increases by swishing the gates, hitting other characters with cans, collecting hidden letters to spell TOOBIN, and collecting treasures. Players try to avoid obstacles while pushing each other into them. Power-ups allow players to carry multiple cans, and combinations of gates increase a score multiplier. The game has three different classes, each with five rivers, for a total of 15. Legacy The game is included as part of Midway Arcade Treasures and Arcade Party Pak, where it was given a remixed soundtrack. It was also included in the 2012 compilation Midway Arcade Origins. The game is one of the 23 arcade games that are included with the Midway Arcade Level Pack for Lego Dimensions, unlocked by using the hidden Arcade Dock in the level "Follow The Lego Brick Road". It is also an included title on the Arcade1Up Midway Legacy Edition cabinet. See also Swimmer References External links "Toobin'" at the Arcade History database 1988 video games Arcade video games Atari arcade games Amiga games Amstrad CPC games Atari ST games Commodore 64 games Domark games DOS games Game Boy Color games Midway video games MSX games Nintendo Entertainment System games Tengen (company) games Teque London games Unauthorized video games Video games developed in the United States Video games scored by Allister Brimble Video games scored by Brad Fuller Video games scored by David Whittaker Video games scored by Matt Furniss Video games set in Canada Video games set in Colorado Video games set in Egypt
https://en.wikipedia.org/wiki/Control%20register
A control register is a processor register that changes or controls the general behavior of a CPU or other digital device. Common tasks performed by control registers include interrupt control, switching the addressing mode, paging control, and coprocessor control. History When IBM developed a paging version of the System/360, they added 16 control registers to the design for what became the 360/67. IBM did not provide control registers on other S/360 models, but made them a standard part of System/370, although with different register and bit assignments. As IBM added new features to the architecture, e.g., DAS, S/370-XA, S/370-ESA, ESA/390, they added additional fields to the control registers. With z/Architecture, IBM doubled the control register size to 64 bits. Control registers in IBM 360/67 On the 360/67, CR0 and CR2 are used by address translation, CR 4-6 contain miscellaneous flags including interrupt masks and Extended Control Mode, and CR 8-14 contain the switch settings on the 2167 Configuration Unit. M67 CR0 Control Register 0 contains the address of the segment table for dynamic address translation. M67 CR2 Control register 2 is the Relocation exception address register. M67 CR4 CR4 is the extended mask register for channels 0-31. Each bit is the 1/0 channel mask for the corresponding channel. M67 CR5 CR5 is reserved for the extended mask register for channels 32–63. Each bit is the 1/0 channel mask for the corresponding channel. M67 CR6 CR6 contains two mode flags plus extensions to the PSW mask bits. M67 CR8 Control Register 8 contains the assignments of Processor Storage units 1–4 to central processing units (CPUs) and channel controllers (CCs). M67 CR9 Control Register 9 contains the assignments of Processor Storage units 5–8 to central processing units (CPUs) and channel controllers (CCs). M67 CR10 Control Register 10 contains the Processor storage address assignment codes. M67 CR11 Control Register 11 contains channel
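The per-channel mask registers described for the 360/67 (CR4 and CR5, one 1/0 interrupt-mask bit per channel) can be modeled with a toy bitfield class. This is an illustrative sketch only: the class name is invented, and it numbers bits LSB-first for convenience, whereas S/360 documentation numbers bits from the most significant end.

```python
# Toy model of a 32-bit channel-mask control register: bit i is the
# 1/0 interrupt mask for channel i (numbering illustrative, LSB-first).

class ChannelMaskRegister:
    WIDTH = 32

    def __init__(self, value=0):
        self.value = value & ((1 << self.WIDTH) - 1)

    def enable(self, channel):
        self._check(channel)
        self.value |= 1 << channel          # set the channel's mask bit

    def disable(self, channel):
        self._check(channel)
        self.value &= ~(1 << channel)       # clear the channel's mask bit

    def is_enabled(self, channel):
        self._check(channel)
        return bool(self.value >> channel & 1)

    def _check(self, channel):
        if not 0 <= channel < self.WIDTH:
            raise ValueError(f"channel {channel} out of range")
```

On real hardware the value would of course live in the CPU's control register and be loaded and stored with privileged instructions (e.g. Load Control / Store Control on S/360-family machines), not in a Python object.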
https://en.wikipedia.org/wiki/Moore%20plane
In mathematics, the Moore plane, also sometimes called Niemytzki plane (or Nemytskii plane, Nemytskii's tangent disk topology), is a topological space. It is a completely regular Hausdorff space (also called Tychonoff space) that is not normal. It is named after Robert Lee Moore and Viktor Vladimirovich Nemytskii. Definition If Γ is the (closed) upper half-plane Γ = {(x, y) ∈ ℝ² | y ≥ 0}, then a topology may be defined on Γ by taking a local basis as follows: Elements of the local basis at points (x, y) with y > 0 are the open discs in the plane which are small enough to lie within Γ. Elements of the local basis at points p = (x, 0) are sets {p} ∪ A where A is an open disc in the upper half-plane which is tangent to the x axis at p. That is, the local basis at p = (x, 0) is given by the sets {(x, 0)} ∪ B((x, ε), ε) for ε > 0, where B((x, ε), ε) denotes the open disc of radius ε centred at (x, ε). Thus the subspace topology inherited by the open upper half-plane Γ ∖ {(x, 0) | x ∈ ℝ} is the same as the subspace topology inherited from the standard topology of the Euclidean plane. Properties The Moore plane is separable, that is, it has a countable dense subset. The Moore plane is a completely regular Hausdorff space (i.e. Tychonoff space), which is not normal. The subspace {(x, 0) | x ∈ ℝ} of Γ has, as its subspace topology, the discrete topology. Thus, the Moore plane shows that a subspace of a separable space need not be separable. The Moore plane is first countable, but not second countable or Lindelöf. The Moore plane is not locally compact. The Moore plane is countably metacompact but not metacompact. Proof that the Moore plane is not normal The fact that this space is not normal can be established by the following counting argument (which is very similar to the argument that the Sorgenfrey plane is not normal): On the one hand, the countable set S = {(p, q) ∈ ℚ × ℚ : q > 0} of points with rational coordinates is dense in Γ; hence every continuous function f : Γ → ℝ is determined by its restriction to S, so there can be at most 𝔠 many continuous real-valued functions on Γ. On the other hand, the real line L = {(x, 0) | x ∈ ℝ} is a closed discrete subspace of Γ with 𝔠 many points. So there are 2^𝔠 > 𝔠 many continuous functions from L to ℝ. Not all these functions
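The two kinds of basic neighbourhood (an open disc at an interior point, a tangent disc plus the point itself at a boundary point) can be written as a small membership test; the function name and argument conventions are invented for this sketch. It also makes concrete why the x-axis is discrete in the subspace topology: a tangent disc meets the axis only at its point of tangency.

```python
# Membership test for a basic open neighbourhood of the Moore plane.
# p is the base point, r > 0 the radius, q the point being tested.

def in_basic_nbhd(p, r, q):
    px, py = p
    qx, qy = q
    if py > 0:
        # Interior point: ordinary open disc of radius r around p
        # (take r <= py so the disc stays inside the half-plane).
        return (qx - px) ** 2 + (qy - py) ** 2 < r ** 2
    # Boundary point p = (px, 0): {p} union the open disc of radius r
    # centred at (px, r), which is tangent to the x-axis at p.
    return q == p or (qx - px) ** 2 + (qy - r) ** 2 < r ** 2
```

For example, a basic neighbourhood of (0, 0) contains (0, 0) itself and nearby points strictly above the axis, but no other point of the axis, so {(x, 0)} is open in the subspace topology of the x-axis.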
https://en.wikipedia.org/wiki/Glaze3D
Glaze3D was a family of graphics cards announced by BitBoys Oy on August 2, 1999, that would have produced substantially better performance than other consumer products available at the time. The family, which would have come in the Glaze3D 1200, Glaze3D 2400 and Glaze3D 4800 models, was supposed to offer full support for DirectX 7, OpenGL 1.2, AGP 4×, 4× anisotropic filtering, full-screen anti-aliasing and a host of other technologies not commonly seen at the time. The 1.5 million gate GPU would have been fabricated by Infineon on a 0.2 μm eDRAM process, later to be reduced to 0.17 μm with a minimum of 9 MB of embedded DRAM and 128 to 512 MB of external SDRAM. The maximum supported video resolution was 2048×1536 pixels. Development history The Glaze3D family of cards were developed in several generations, beginning with the original Glaze3D "400" with multi-channel RDRAM instead of internal eDRAM. This was offered only as IP but with no takers. Bitboys revised the design and decided to have it manufactured themselves, in cooperation with Infineon Technologies, the chip fabrication arm of Siemens. They came up with a new Glaze3D pitched for release in Q1, 2000. The card promised extremely high performance compared to contemporary consumer GPUs. As bug-hunting, validation and manufacturing problems delayed the launch, new features became necessary and a DX7 variant with built-in hardware Transform & Lighting was announced, but never appeared. The GPU was later redesigned under a new codename, Axe, to take advantage of DirectX 8 and compete with a developing competition. The new version sported such features as an additional 3 MB of eDRAM, proprietary Matrix Antialiasing and a vastly improved fillrate, as well as offering a programmable vertex shader and widened internal memory bus. The new card was to have been released as Avalanche3D by the end of 2001. The third development, codenamed Hammer, started development as Axe lost viability toward the end of 2001. Thi
https://en.wikipedia.org/wiki/Extension%20topology
In topology, a branch of mathematics, an extension topology is a topology placed on the disjoint union of a topological space and another set. There are various types of extension topology, described in the sections below.

Extension topology

Let X be a topological space and P a set disjoint from X. Consider in X ∪ P the topology whose open sets are of the form A ∪ Q, where A is an open set of X and Q is a subset of P. The closed sets of X ∪ P are of the form B ∪ Q, where B is a closed set of X and Q is a subset of P. For these reasons this topology is called the extension topology of X plus P, with which one extends to X ∪ P the open and the closed sets of X. As subsets of X ∪ P, the subspace topology of X is the original topology of X, while the subspace topology of P is the discrete topology. As a topological space, X ∪ P is homeomorphic to the topological sum of X and P, and X is a clopen subset of X ∪ P. If Y is a topological space and R is a subset of Y, one might ask whether the extension topology of Y – R plus R is the same as the original topology of Y, and the answer is in general no. Note the similarity of this extension topology construction and the Alexandroff one-point compactification, in which case, having a topological space X which one wishes to compactify by adding a point ∞ in infinity, one considers the closed sets of X ∪ {∞} to be the sets of the form K, where K is a closed compact set of X, or B ∪ {∞}, where B is a closed set of X.

Open extension topology

Let $(X, \mathcal{T})$ be a topological space and $P$ a set disjoint from $X$. The open extension topology of $\mathcal{T}$ plus $P$ is $\mathcal{T}^* = \mathcal{T} \cup \{X \cup A \mid A \subseteq P\}$. Let $X^* = X \cup P$. Then $\mathcal{T}^*$ is a topology in $X^*$. The subspace topology of $X$ is the original topology of $X$, i.e. $\mathcal{T}^*|_X = \mathcal{T}$, while the subspace topology of $P$ is the discrete topology, i.e. $\mathcal{T}^*|_P = \mathcal{P}(P)$. The closed sets in $X^*$ are $\{B \cup P \mid B \text{ is closed in } X\} \cup \{Q \mid Q \subseteq P\}$. Note that $P$ is closed in $X^*$ and $X$ is open and dense in $X^*$. If Y is a topological space and R is a subset of Y, one might ask whether the open extension topology of Y – R plus R is the same as the original topology of Y, and the answer is in general no.
https://en.wikipedia.org/wiki/Michael%20Bartosh
Michael Bartosh (September 18, 1977 – June 11, 2006) was president and CTO of 4am Media, Inc, an Apple Certified Trainer, certified member of the Apple Consultants Network, published author and former systems engineer for Apple Computer. Previous to joining Apple full-time he had worked as an Apple campus rep (at Texas A&M) and had the opportunity to meet Steve Jobs after his 1999 MacWorld keynote. His main focus and expertise was directory services and integration, and was considered by members of the Macintosh support and development community to be one of the foremost experts on the subject, having literally "written the book." His most recent work includes Mac OS X Tiger Server Administration (published posthumously), Essential Mac OS X Panther Server Administration, articles published on O'Reilly network (Open Directory and Active Directory parts 1-4 and Panther and Active Directory ), as well as presentations and classes at many training centers/events, trade shows and conferences. He was also a regular contributor on several technical mailing lists related to Mac OS X and Mac OS X Server. Death He died as a result of injuries caused by a fall from a balcony at a friend's home in Tokyo in June 2006. Police ruled the death an accident. The Michael Bartosh Memorial Scholarship was created in his honor. Bibliography Mac OS X Tiger Server Administration, O'Reilly Media, September 2006, Essential Mac OS X Panther Server Administration, O'Reilly Media, May 2005, References External links 4AM Media was Michael's training and consulting business. Bio and list of articles at O'Reilly. 1977 births 2006 deaths Accidental deaths from falls Apple Inc. employees Computer systems engineers Technical writers Accidental deaths in Japan
https://en.wikipedia.org/wiki/W.%20E.%20P.%20Duncan
Wilfred Eben Pinkerton Duncan (1897 – 28 January 1977) was an important figure in the early period of the Toronto Transit Commission's history. He was born in Glasgow, Scotland, and graduated with a B.Sc. degree in engineering from Glasgow University. He emigrated to Canada and worked from 1910 to 1914 in the construction department of the Canadian Pacific Railway. Between 1915 and 1919 he served overseas in the Great War with the Canadian Expeditionary Force and the Royal Engineers, attaining the rank of Major. After the war he worked as a construction engineer in Toronto. He joined the Toronto Transportation Commission in 1921, and served in various engineering roles. By 1945 he was the TTC's Chief Engineer, and he became General Manager, the senior staff position, in 1952. In 1959, when the senior position was split in two, he became General Manager – Subway Construction, while John G. Inglis assumed the role of General Manager - Operations. Duncan retired in 1961 but remained active as a General Consultant to the TTC until the opening of the University Subway in 1963. He was instrumental in the growth of the system and was in charge of the TTC during the building of the Yonge Subway. The Duncan Shops, a heavy bus maintenance facility at the TTC's Hillcrest Complex, is named in his honour. References TTC Coupler, September 1952 Vol 27 No 9 TTC Coupler, March 1961 Vol 36 No 3 TTC Coupler, March 1977 Vol 52 No 3 Specific 1897 births 1977 deaths Engineers from Glasgow Alumni of the University of Glasgow Canadian civil engineers Toronto Transit Commission general managers Scottish emigrants to Canada Royal Engineers
https://en.wikipedia.org/wiki/Test%20register
A test register, in the Intel 80386 and Intel 80486 processor, was a register used by the processor, usually to do a self-test. Most of these registers were undocumented, and used by specialized software. The test registers were named TR3 to TR7. Regular programs don't usually require these registers to work. With the Pentium, the test registers were replaced by a variety of model-specific registers (MSRs). In the 80386, two test registers, TR6 and TR7, were provided for the purpose of TLB testing. TR6 was the test command register, and TR7 was the test data register. The 80486 provided three additional registers, TR3, TR4 and TR5, for testing of the L1 cache. TR3 was a data register, TR4 was an address register and TR5 was a command register. These registers were accessed by variants of the MOV instruction. A test register may either be the source operand or the destination operand. The MOV instructions are defined in both real-address mode and protected mode. The test registers are privileged resources. In protected mode, the MOV instructions that access them can only be executed at privilege level 0. An attempt to read or write the test registers when executing at any other privilege level causes a general protection exception. Also, those instructions generate invalid opcode exception on most CPUs newer than 80486. The instruction is encoded in two ways, depending on the flow of data. Moving data from a general purpose register into a test register is encoded as 0F 26 /r (with r/m being the GPR, and reg being the test register). Moving data the other way (i.e. from the test register into a general purpose register) is encoded as 0F 24 /r (with r/m being the GPR, and reg being the test register). Only register-register moves are allowed; memory forms of the ModR/M byte are undefined. In other words, the mod field (the two MSBs) must be set to 1. The test registers and/or associated opcodes were supported in the following x86 processors: See also Control reg
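The two encodings described above (0F 26 /r for moves into a test register, 0F 24 /r for moves out of one, with the mod field forced to 11b) can be sketched in Python as follows. The helper names and the example register choices are mine, not part of any real assembler API; the opcode bytes and ModR/M layout are as given in the text.

```python
# Illustrative encoder for the 80386/80486 MOV test-register forms.
# Register-register only: the mod field (two MSBs of ModR/M) must be 11b.

def modrm(reg: int, rm: int) -> int:
    """Build a ModR/M byte with mod=11b, reg and rm in the low fields."""
    return 0b11000000 | ((reg & 7) << 3) | (rm & 7)

def mov_to_tr(tr: int, gpr: int) -> bytes:
    """MOV TRn, r32  ->  0F 26 /r (reg field holds the test register)."""
    return bytes([0x0F, 0x26, modrm(tr, gpr)])

def mov_from_tr(gpr: int, tr: int) -> bytes:
    """MOV r32, TRn  ->  0F 24 /r (reg field holds the test register)."""
    return bytes([0x0F, 0x24, modrm(tr, gpr)])

EAX, TR6 = 0, 6  # standard x86 register numbers
print(mov_to_tr(TR6, EAX).hex())    # 0f26f0
print(mov_from_tr(EAX, TR6).hex())  # 0f24f0
```

As the text notes, attempting these moves outside privilege level 0, or on most CPUs newer than the 80486, raises an exception rather than executing.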
https://en.wikipedia.org/wiki/X86%20debug%20register
On the x86 architecture, a debug register is a register used by a processor for program debugging. There are six debug registers, named DR0...DR7, with DR4 and DR5 as obsolete synonyms for DR6 and DR7. The debug registers allow programmers to selectively enable various debug conditions associated with a set of four debug addresses. Two of these registers are used to control debug features. These registers are accessed by variants of the MOV instruction. A debug register may be either the source operand or destination operand. The debug registers are privileged resources; the MOV instructions that access them can only be executed at privilege level zero. An attempt to read or write the debug registers when executing at any other privilege level causes a general protection fault. DR0 to DR3 Each of these registers contains the linear address associated with one of four breakpoint conditions. Each breakpoint condition is further defined by bits in DR7. The debug address registers are effective whether or not paging is enabled. The addresses in these registers are linear addresses. If paging is enabled, the linear addresses are translated into physical addresses by the processor's paging mechanism. If paging is not enabled, these linear addresses are the same as physical addresses. Note that when paging is enabled, different tasks may have different linear-to-physical address mappings. When this is the case, an address in a debug address register may be relevant to one task but not to another. For this reason the x86 has both global and local enable bits in DR7. These bits indicate whether a given debug address has a global (all tasks) or local (current task only) relevance. DR6 - Debug status The debug status register permits the debugger to determine which debug conditions have occurred. When the processor detects an enabled debug exception, it will set the corresponding bits of this register before entering the debug exception handler. DR7 - Debug control The de
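The local/global enable scheme described above can be sketched as follows. The bit positions assumed here (L0–L3 at even bits 0–6 and G0–G3 at odd bits 1–7 of DR7; status bits B0–B3 at bits 0–3 of DR6) come from Intel's architecture documentation rather than from the text, and the helper names are mine.

```python
# Sketch of the DR7 enable bits and DR6 status bits for the four
# hardware breakpoints DR0-DR3.

def dr7_enable(breakpoint: int, global_scope: bool) -> int:
    """Return the DR7 mask enabling breakpoint n (0-3), either locally
    (current task only) or globally (all tasks)."""
    assert 0 <= breakpoint <= 3
    bit = 2 * breakpoint + (1 if global_scope else 0)
    return 1 << bit

def dr6_hit(dr6: int) -> list:
    """Decode DR6 status bits B0..B3 into the list of breakpoint
    numbers that triggered the debug exception."""
    return [n for n in range(4) if dr6 & (1 << n)]

print(hex(dr7_enable(1, global_scope=False)))  # 0x4 (L1)
print(dr6_hit(0b1001))                         # [0, 3]
```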
https://en.wikipedia.org/wiki/Multimedia%20over%20Coax%20Alliance
The Multimedia over Coax Alliance (MoCA) is an international standards consortium that publishes specifications for networking over coaxial cable. The technology was originally developed to distribute IP television in homes using existing cabling, but is now used as a general-purpose Ethernet link where it is inconvenient or undesirable to replace existing coaxial cable with optical fiber or twisted pair cabling. MoCA 1.0 was approved in 2006, MoCA 1.1 in April 2010, MoCA 2.0 in June 2010, and MoCA 2.5 in April 2016. The most recently released version of the standard, MoCA 3.0, supports speeds of up to . Membership The Alliance currently has 45 members including pay TV operators, OEMs, CE manufacturers and IC vendors. MoCA's board of directors consists of Arris, Comcast, Cox Communications, DirecTV, Echostar, Intel, InCoax, MaxLinear and Verizon. Technology Within the scope of the Internet protocol suite, MoCA is a protocol that provides the link layer. In the 7-layer OSI model, it provides definitions within the data link layer (layer 2) and the physical layer (layer 1). DLNA approved of MoCA as a layer 2 protocol. A MoCA network can contain up to 16 nodes for MoCA 1.1 and higher, with a maximum of 8 for MoCA 1.0. The network provides a shared-medium, half-duplex link between all nodes using time-division multiplexing; within each timeslot, any pair of nodes communicates directly with each other using the highest mutually-supported version of the standard. Versions MoCA 1.0 The first version of the standard, MoCA 1.0, was ratified in 2006 and supports transmission speeds of up to 135 Mb/s. MoCA 1.1 MoCA 1.1 provides 175 Mbit/s net throughputs (275 Mbit/s PHY rate) and operates in the 500 to 1500 MHz frequency range. MoCA 2.0 MoCA 2.0 offers actual throughputs (MAC rate) up to 1 Gbit/s. Operating frequency range is 500 to 1650 MHz. Packet error rate is 1 packet error in 100 million. MoCA 2.0 also offers lower power modes of sleep and standby and is backw
https://en.wikipedia.org/wiki/National%20Academy%20of%20Engineering
The National Academy of Engineering (NAE) is an American nonprofit, non-governmental organization. The National Academy of Engineering is part of the National Academies of Sciences, Engineering, and Medicine, along with the National Academy of Sciences (NAS), the National Academy of Medicine, and the National Research Council (now the program units of NASEM). The NAE operates engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers. New members are annually elected by current members, based on their distinguished and continuing achievements in original research. The NAE is autonomous in its administration and in the selection of its members, sharing with the rest of the National Academies the role of advising the federal government. History The National Academy of Sciences was created by an Act of Incorporation dated March 3, 1863, which was signed by then President of the United States Abraham Lincoln with the purpose to "...investigate, examine, experiment, and report upon any subject of science or art..." No reference to engineering was in the original act, the first recognition of any engineering role was with the setup of the Academy's standing committees in 1899. At that time, there were six standing committees: (mathematics and astronomy; physics and engineering; chemistry; geology and paleontology; biology; and anthropology. In 1911, this committee structure was again reorganized into eight committees: biology was separated into botany; zoology and animal morphology; and physiology and pathology; anthropology was renamed anthropology and psychology with the remaining committees including physics and engineering, unchanged. In 1913, George Ellery Hale presented a paper on the occasion of the Academy's 50th anniversary, outlining an expansive future agenda for the Academy. Hale proposed a vision of an Academy that interacted with the "whole range of science", one that act
https://en.wikipedia.org/wiki/Institute%20of%20Transportation%20Engineers
The Institute of Transportation Engineers (ITE) is an international educational and scientific association of transportation professionals who are responsible for meeting mobility and safety needs. ITE facilitates the application of technology and scientific principles to research, planning, functional design, implementation, operation, policy development, and management for any mode of ground transportation. History The organization formed in 1930 amid growing public demand for experts to alleviate traffic congestion and the frequency of crashes that came from the rapid development of automotive transportation. It formed as the Institute of Traffic Engineers and its first president was Ernest P. Goodrich. The organization consists of 10 districts, 62 sections, and 30 chapters from various parts of the world. Standards development ITE is also a standards development organization designated by the United States Department of Transportation (USDOT). One of the current standardization efforts is the advanced transportation controller. ITE is also known for publishing articles about trip generation, parking generation, parking demand, and various transportation-related material through ITE Journal, a monthly publication. Criticism Urbanists such as Jeff Speck have criticized ITE standards for encouraging towns to build more, wider streets making pedestrians less safe and cities less walkable. Donald Shoup in his book The High Cost of Free Parking argues that ITE estimates give towns the false confidence to regulate minimum parking requirements which reinforce sprawl. See also National Transportation Communications for Intelligent Transportation System Protocol (NTCIP) Canadian Institute of Transportation Engineers References External links Transportation engineering Organizations based in Washington, D.C. Road transport organizations Organizations established in 1930 Engineering organizations Transportation organizations based in the United States
https://en.wikipedia.org/wiki/Progenitor%20cell
A progenitor cell is a biological cell that can differentiate into a specific cell type. Stem cells and progenitor cells have this ability in common. However, stem cells are less specified than progenitor cells. Progenitor cells can only differentiate into their "target" cell type. The most important difference between stem cells and progenitor cells is that stem cells can replicate indefinitely, whereas progenitor cells can divide only a limited number of times. Controversy about the exact definition remains and the concept is still evolving. The terms "progenitor cell" and "stem cell" are sometimes equated. Properties Most progenitors are identified as oligopotent. In this point of view, they can compare to adult stem cells, but progenitors are said to be in a further stage of cell differentiation. They are "midway" between stem cells and fully differentiated cells. The kind of potency they have depends on the type of their "parent" stem cell and also on their niche. Some research found that progenitor cells were mobile and that these progenitor cells could move through the body and migrate towards the tissue where they are needed. Many properties are shared by adult stem cells and progenitor cells. Research Progenitor cells have become a hub for research on a few different fronts. Current research on progenitor cells focuses on two different applications: regenerative medicine and cancer biology. Research on regenerative medicine has focused on progenitor cells, and stem cells, because their cellular senescence contributes largely to the process of aging. Research on cancer biology focuses on the impact of progenitor cells on cancer responses, and the way that these cells tie into the immune response. The natural aging of cells, called their cellular senescence, is one of the main contributors to aging on an organismal level. There are a few different ideas to the cause behind why aging happens on a cellular level. Telomere length has been shown to positive
https://en.wikipedia.org/wiki/Kenji%20Urada
Kenji Urada (c. 1944 – July 4, 1981) was a Japanese factory worker who was killed by a robot. Urada is often incorrectly reported to be the first person killed by a robot, but Robert Williams, a worker at the Ford Motor Company's Michigan Casting Center, had been killed by a robot over two years earlier, on January 25, 1979. Urada was a maintenance worker at the Kawasaki Heavy Industries plant in Akashi. He died while checking a malfunctioning robot; after jumping over a safety barrier, which was designed to shut down power to the machine when open, he apparently started the robot inadvertently. The robot, built by Kawasaki under a license from Unimation, pinned him against an adjacent machine and either crushed him or stabbed him in the back. Other workers in the factory were unable to stop the machine as they were unfamiliar with its operation. International newswire service UPI reported Urada was the first human killed by a robot on December 8, 1981. The circumstances of his death were not made public until after an investigation by the Hyōgo labor standards bureau was completed. The investigation concluded that workers were not sufficiently familiar with the machines and the machines were not sufficiently regulated. The robot that killed Urada was removed from the Akashi plant, and man-high fences were erected around the other two robots in the plant in the wake of the accident. See also List of unusual deaths References External links 1940s births 1981 deaths Deaths caused by industrial robots Accidental deaths in Japan
https://en.wikipedia.org/wiki/Zeek
Zeek is a free and open-source software network analysis framework. Vern Paxson began development work on Zeek in 1995 at Lawrence Berkeley National Lab. Zeek is a network security monitor (NSM) but can also be used as a network intrusion detection system (NIDS). The Zeek project releases the software under the BSD license.

Output

Zeek's purpose is to inspect network traffic and generate a variety of logs describing the activity it sees. A complete list of log files is available at the project documentation site.

Log example

The following is an example of one entry in JSON format from the conn.log:

Threat hunting

One of Zeek's primary use cases involves cyber threat hunting.

Name

Its principal author, Paxson, originally named the software "Bro" as a warning regarding George Orwell's Big Brother from the novel Nineteen Eighty-Four. In 2018 the project leadership team decided to rename the software. At LBNL in the 1990s, the developers ran their sensors as a pseudo-user named "zeek", thereby inspiring the name change in 2018.

Zeek deployment

Security teams identify locations on their network where they desire visibility. They deploy one or more network taps or enable switch SPAN ports for port mirroring to gain access to traffic. They deploy Zeek on servers with access to those visibility points. The Zeek software on the server deciphers network traffic as logs, writing them to local disk or remote storage.

Zeek application architecture and analyzers

Zeek's event engine analyzes live or recorded network traffic to generate neutral event logs. Zeek uses common ports and dynamic protocol detection (involving signatures as well as behavioral analysis) to identify network protocols. Developers write Zeek policy scripts in the Turing complete Zeek scripting language. By default Zeek logs information about events to files, but analysts can also configure Zeek to take other actions, such as sending an email, raising an alert, executing a system command, updating an
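The conn.log entry referenced above appears to have been lost from this copy. The entry below is illustrative only: the field names are the standard Zeek conn.log columns, but the values are invented. The snippet also shows consuming such an entry with Python's json module, as an analyst's script might.

```python
import json

# Illustrative Zeek conn.log entry in JSON format (values invented;
# field names follow the standard conn.log schema).
entry = '''{"ts": 1591367999.305988, "uid": "CMdzit1AMNsmfAIiQc",
 "id.orig_h": "192.168.4.76", "id.orig_p": 36844,
 "id.resp_h": "192.168.4.1", "id.resp_p": 53,
 "proto": "udp", "service": "dns", "duration": 0.06685,
 "orig_bytes": 62, "resp_bytes": 141, "conn_state": "SF"}'''

record = json.loads(entry)
print(record["id.orig_h"], "->", record["id.resp_h"], record["service"])
# 192.168.4.76 -> 192.168.4.1 dns
```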
https://en.wikipedia.org/wiki/Stress%E2%80%93energy%E2%80%93momentum%20pseudotensor
In the theory of general relativity, a stress–energy–momentum pseudotensor, such as the Landau–Lifshitz pseudotensor, is an extension of the non-gravitational stress–energy tensor that incorporates the energy–momentum of gravity. It allows the energy–momentum of a system of gravitating matter to be defined. In particular it allows the total of matter plus the gravitating energy–momentum to form a conserved current within the framework of general relativity, so that the total energy–momentum crossing the hypersurface (3-dimensional boundary) of any compact space–time hypervolume (4-dimensional submanifold) vanishes. Some people (such as Erwin Schrödinger) have objected to this derivation on the grounds that pseudotensors are inappropriate objects in general relativity, but the conservation law only requires the use of the 4-divergence of a pseudotensor which is, in this case, a tensor (which also vanishes). Also, most pseudotensors are sections of jet bundles, which are now recognized as perfectly valid objects in general relativity.

Landau–Lifshitz pseudotensor

The Landau–Lifshitz pseudotensor, a stress–energy–momentum pseudotensor for gravity, when combined with terms for matter (including photons and neutrinos), allows the energy–momentum conservation laws to be extended into general relativity.

Requirements

Landau and Lifshitz were led by four requirements in their search for a gravitational energy momentum pseudotensor, $t_{LL}^{\mu\nu}$:
- that it be constructed entirely from the metric tensor, so as to be purely geometrical or gravitational in origin.
- that it be index symmetric, i.e. $t_{LL}^{\mu\nu} = t_{LL}^{\nu\mu}$ (to conserve angular momentum),
- that, when added to the stress–energy tensor of matter, $T^{\mu\nu}$, its total 4-divergence vanishes, $\partial_\nu \left[ (-g) \left( T^{\mu\nu} + t_{LL}^{\mu\nu} \right) \right] = 0$ (this is required of any conserved current), so that we have a conserved expression for the total stress–energy–momentum.
- that it vanish locally in an inertial frame of reference (which requires that it only contains first order and not second or higher order derivatives of the metric).
https://en.wikipedia.org/wiki/Fundamental%20thermodynamic%20relation
In thermodynamics, the fundamental thermodynamic relation is expressed by four fundamental equations which demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities like G (Gibbs free energy) or H (enthalpy). The relation is generally expressed as a microscopic change in internal energy in terms of microscopic changes in entropy and volume for a closed system in thermal equilibrium in the following way:
$$\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V$$
Here, U is internal energy, T is absolute temperature, S is entropy, P is pressure, and V is volume. This is only one expression of the fundamental thermodynamic relation. It may be expressed in other ways, using different variables (e.g. using thermodynamic potentials). For example, the fundamental relation may be expressed in terms of the enthalpy H as
$$\mathrm{d}H = T\,\mathrm{d}S + V\,\mathrm{d}P,$$
in terms of the Helmholtz free energy F as
$$\mathrm{d}F = -S\,\mathrm{d}T - P\,\mathrm{d}V,$$
and in terms of the Gibbs free energy G as
$$\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}P.$$

The first and second laws of thermodynamics

The first law of thermodynamics states that:
$$\mathrm{d}U = \delta Q - \delta W$$
where $\delta Q$ and $\delta W$ are infinitesimal amounts of heat supplied to the system by its surroundings and work done by the system on its surroundings, respectively. According to the second law of thermodynamics we have for a reversible process:
$$\delta Q = T\,\mathrm{d}S$$
Hence:
$$\mathrm{d}U = T\,\mathrm{d}S - \delta W$$
By substituting this into the first law, we have the relation above. Letting $\delta W$ be reversible pressure-volume work done by the system on its surroundings,
$$\delta W = P\,\mathrm{d}V,$$
we have:
$$\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V$$
This equation has been derived in the case of reversible changes. However, since U, S, and V are thermodynamic state functions that depend only on the initial and final states of a thermodynamic process, the above relation holds also for non-reversible changes. If the composition, i.e. the amounts of the chemical components, in a system of uniform temperature and pressure can also change, e.g. due to a chemical reaction, the fundamental thermodynamic relation generalizes to:
$$\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i$$
where $\mu_i$ is the chemical potential of the i-th component and $N_i$ its amount.
https://en.wikipedia.org/wiki/Least-concern%20species
A least-concern species is a species that has been categorized by the International Union for Conservation of Nature (IUCN) as evaluated as not being a focus of species conservation because the specific species is still plentiful in the wild. They do not qualify as threatened, near threatened, or (before 2001) conservation dependent. Species cannot be assigned the "Least Concern" category unless they have had their population status evaluated. That is, adequate information is needed to make a direct, or indirect, assessment of its risk of extinction based on its distribution or population status. Evaluation Since 2001 the category has had the abbreviation "LC", following the IUCN 2001 Categories & Criteria (version 3.1). Before 2001 "least concern" was a subcategory of the "Lower Risk" category and assigned the code "LR/lc" or lc. Around 20% of least concern taxa (3261 of 15,636) in the IUCN database still use the code "LR/lc", which indicates they have not been re-evaluated since 2000. Number of species While "least concern" is not considered a red listed category by the IUCN, the 2006 IUCN Red List still assigns the category to 15,636 taxa. The number of animal species listed in this category totals 14,033 (which includes several undescribed species such as a frog from the genus Philautus). There are also 101 animal subspecies listed and 1500 plant taxa (1410 species, 55 subspecies, and 35 varieties). No fungi or protista have the classification, though only four species in those kingdoms have been evaluated by the IUCN. Humans were formally assessed as a species of least concern in 2008. List of LC species See also Conservation status References External links List of Least Concern species as identified by the IUCN Red List of Threatened Species Biota by conservation status IUCN Red List
https://en.wikipedia.org/wiki/Thermal%20transmittance
Thermal transmittance is the rate of transfer of heat through matter. The thermal transmittance of a material (such as insulation or concrete) or an assembly (such as a wall or window) is expressed as a U-value. The thermal insulance of a structure is the reciprocal of its thermal transmittance. U-value Although the concept of U-value (or U-factor) is universal, U-values can be expressed in different units. In most countries, U-value is expressed in SI units, as watts per square metre-kelvin: W/(m2⋅K) In the United States, U-value is expressed as British thermal units (Btu) per hour-square feet-degrees Fahrenheit: Btu/(h⋅ft2⋅°F) Within this article, U-values are expressed in SI unless otherwise noted. To convert from SI to US customary values, divide by 5.678. Well-insulated parts of a building have a low thermal transmittance whereas poorly insulated parts of a building have a high thermal transmittance. Losses due to thermal radiation, thermal convection and thermal conduction are taken into account in the U-value. Although it has the same units as heat transfer coefficient, thermal transmittance is different in that the heat transfer coefficient is used to solely describe heat transfer in fluids while thermal transmittance is used to simplify an equation that has several different forms of thermal resistances. It is described by the equation: Φ = A × U × (T1 - T2) where Φ is the heat transfer in watts, U is the thermal transmittance, T1 is the temperature on one side of the structure, T2 is the temperature on the other side of the structure and A is the area in square metres. Thermal transmittances of most walls and roofs can be calculated using ISO 6946, unless there is metal bridging the insulation in which case it can be calculated using ISO 10211. For most ground floors it can be calculated using ISO 13370. For most windows the thermal transmittance can be calculated using ISO 10077 or ISO 15099. ISO 9869 describes how to measure the thermal transmi
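A minimal numeric sketch of the equation above, Φ = A × U × (T1 − T2), together with the SI-to-US conversion quoted in the text (divide by 5.678). The function names and the example wall are mine.

```python
# Heat flow through a building element, per the equation in the text.

def heat_flow_watts(area_m2: float, u_si: float, t1: float, t2: float) -> float:
    """Heat transfer in watts: area (m2) times U-value (W/(m2*K))
    times the temperature difference T1 - T2 across the element."""
    return area_m2 * u_si * (t1 - t2)

def u_si_to_us(u_si: float) -> float:
    """Convert a U-value from W/(m2*K) to Btu/(h*ft2*F)."""
    return u_si / 5.678

# 10 m2 of well-insulated wall with U = 0.3 W/(m2*K),
# 20 C inside and 0 C outside:
print(heat_flow_watts(10, 0.3, 20, 0))  # 60.0 (watts lost)
print(round(u_si_to_us(0.3), 4))        # 0.0528
```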
https://en.wikipedia.org/wiki/Siemens%20Communications
Siemens Communications was the communications and information business arm of German industrial conglomerate Siemens AG, until 2006. It was the largest division of Siemens, and had two business units – Mobile Networks and Fixed Networks; and Enterprise. Siemens Communications division was founded in 1998 through the amalgamation of a number of early groups / divisions of Siemens AG, the oldest of which traces back to the company 'Siemens & Halske Telegraph Construction Company' founded in 1847, and the most prominent predecessor being the 1978-founded 'Siemens Communication Systems'. On October 1, 2006, Siemens AG decided to divide Siemens Communications into two companies: 'Siemens Networks GmbH & Co. KG' and 'Siemens Enterprise Communications GmbH & Co. KG'. The company remains extant, through a series of mergers and divisions, as Siemens Enterprise Communications – a 2008 joint venture with the Gores Group where Siemens AG hold 49% with the balance of 51% held by the American partner. History Origins (1847–1978) Siemens Communications traces its origins to the company Siemens & Halske Telegraph Construction Company (German legal name: Telegraphen-Bauanstalt von Siemens & Halske) founded by Werner von Siemens on 12 October 1847. Based on the telegraph, his invention used a needle to point to the sequence of letters, instead of using Morse code. In 1848, the company built the first long-distance telegraph line in Europe – 500 km from Berlin to Frankfurt am Main – and by the early 1850s the company was involved in building long distance telegraph networks in Russia. In 1867, Siemens completed the monumental Indo-European telegraph line stretching over 11,000 km from London to Calcutta. In 1897, Siemens & Halske went public. During the first half of the 20th century, there were a series of mergers and divisions, which led to formation of three separate companies. First was the original company, Siemens & Halske, which focused on communications engineering; the
https://en.wikipedia.org/wiki/Flavodoxin
Flavodoxins (Fld) are small, soluble electron-transfer proteins. Flavodoxins contain flavin mononucleotide as a prosthetic group. The structure of flavodoxin is characterized by a five-stranded parallel beta sheet surrounded by five alpha helices. They have been isolated from prokaryotes, cyanobacteria, and some eukaryotic algae. Background Flavodoxins were discovered over 50 years ago, originally in cyanobacteria and clostridia. These proteins evolved in anaerobic environments under selective pressure. Ferredoxin, another redox protein, had until then been the only protein used in this role. However, when oxygen became present in the environment, iron became limited. Ferredoxin is iron-dependent as well as oxidant-sensitive, so under these iron-limited conditions ferredoxin was no longer favoured. Flavodoxin, by contrast, is oxidant-resistant and iron-free, making it an isofunctional replacement. Therefore, for some time flavodoxin was the primary redox protein. Today, when ferredoxin and flavodoxin are both present in the same genome, ferredoxin is still used, but under low-iron conditions flavodoxin is induced. Structure Three forms of flavodoxin exist: oxidized (OX), semiquinone (SQ), and hydroquinone (HQ). While relatively small (Mw = 15–22 kDa), flavodoxins fall into "long" and "short" chain classifications. Short-chain flavodoxins contain between 140 and 180 amino acid residues, while long-chain flavodoxins include a 20-amino-acid insertion in the last beta-strand. These residues form a loop which may increase the binding affinity of flavin mononucleotide as well as assist in the formation of folded intermediates; however, the loop's true function is still not certain. In addition, the flavin mononucleotide is non-covalently bound to the flavodoxin protein and works to shuttle electrons. Medical applications Helicobacter pylori (Hp), the most prevalent human gastric pathogen, requires fla
https://en.wikipedia.org/wiki/Interactive%20design
Interactive design is a user-oriented field of study that focuses on meaningful communication using media to create products through cyclical and collaborative processes between people and technology. Successful interactive designs have simple, clearly defined goals, a strong purpose and intuitive screen interface. Interactive design compared to interaction design In some cases interactive design is equated to interaction design; however, in the specialized study of interactive design there are defined differences. To assist in this distinction, interaction design can be thought of as: Making devices usable, useful, and fun, focusing on the efficiency and intuitive hardware A fusion of product design, computer science, and communication design A process of solving specific problems under a specific set of contextual circumstances The creation of form for the behavior of products, services, environments, and systems Making dialogue between technology and user invisible, i.e. reducing the limitations of communication through and with technology. About connecting people through various products and services, Whereas interactive design can be thought of as: Giving purpose to interaction design through meaningful experiences Consisting of six main components including User control, Responsiveness, Real-Time Interactions, Connectedness, Personalization, and Playfulness Focuses on the use and experience of the software Retrieving and processing information through on-demand responsiveness Acting upon information to transform it The constant changing of information and media, regardless of changes in the device Providing interactivity through a focus on the capabilities and constraints of human cognitive processing While both definitions indicate a strong focus on the user, the difference arises from the purposes of interactive design and interaction design. In essence interactive design involves the creation of interactive products and services, whi
https://en.wikipedia.org/wiki/AEGIS%20SecureConnect
AEGIS SecureConnect (or simply AEGIS) is the former name of a network authentication system used in IEEE 802.1X networks. It was developed by Meetinghouse Data Communications, Inc.; the system was renamed "Cisco Secure Services Client" when Meetinghouse was acquired by Cisco Systems. The AEGIS protocol is an 802.1X supplicant (i.e., it handles authentication for wired and wireless networks, such as those that use WPA-PSK, WPA-RADIUS, or certificate-based authentication), and is commonly installed along with a network interface card's (NIC) drivers or VPN drivers. References External links Cisco Secure Services Client Q&A (Cisco Systems, Inc.) Computer network security IEEE 802.11
https://en.wikipedia.org/wiki/E-selectin
E-selectin, also known as CD62 antigen-like family member E (CD62E), endothelial-leukocyte adhesion molecule 1 (ELAM-1), or leukocyte-endothelial cell adhesion molecule 2 (LECAM2), is a selectin cell adhesion molecule expressed only on endothelial cells activated by cytokines. Like other selectins, it plays an important part in inflammation. In humans, E-selectin is encoded by the SELE gene. Structure E-selectin has a cassette structure: an N-terminal, C-type lectin domain, an EGF (epidermal-growth-factor)-like domain, 6 Sushi domain (SCR repeat) units, a transmembrane domain (TM) and an intracellular cytoplasmic tail (cyto). The three-dimensional structure of the ligand-binding region of human E-selectin was determined at 2.0 Å resolution in 1994. The structure reveals limited contact between the two domains and a coordination of Ca2+ not predicted from other C-type lectins. Structure/function analysis indicates a defined region and specific amino-acid side chains that may be involved in ligand binding. The structure of E-selectin bound to the sialyl-LewisX (SLeX; NeuNAcα2,3Galβ1,4[Fucα1,3]GlcNAc) tetrasaccharide was solved in 2000. Gene and regulation In humans, E-selectin is encoded by the SELE gene. Its C-type lectin domain, EGF-like, SCR repeats, and transmembrane domains are each encoded by separate exons, whereas the E-selectin cytosolic domain derives from two exons. The E-selectin locus flanks the L-selectin locus on chromosome 1. Unlike P-selectin, which is stored in vesicles called Weibel-Palade bodies, E-selectin is not stored in the cell and has to be transcribed, translated, and transported to the cell surface. The production of E-selectin is stimulated by the expression of P-selectin, which in turn is stimulated by tumor necrosis factor α (TNFα), interleukin-1 (IL-1) and lipopolysaccharide (LPS). It takes about two hours, after cytokine recognition, for E-selectin to be expressed on the endothelial cell's surface. Maximal expression of E-selectin
https://en.wikipedia.org/wiki/FX8010
The FX8010 is a DSP architecture for real-time audio effects, designed by E-mu around their E-mu 10K1 chip. One key feature of the architecture is that it provides no branching instructions; instead, the whole program runs in a sample-locked constant loop, i.e. a constant number of instructions is executed per sample. Instructions are given conditional execution flags akin to those of some RISC processors (notably the ARM), thus providing a constant runtime. External links kxProject documentation page - Some documentation available here Digital signal processors
https://en.wikipedia.org/wiki/Matching%20pursuit
Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary D. The basic idea is to approximately represent a signal f from Hilbert space H as a weighted sum of finitely many functions g_γn (called atoms) taken from D. An approximation with N atoms has the form f(t) ≈ Σ_{n=1..N} a_n g_γn(t), where g_γn is the γn-th column of the matrix D and a_n is the scalar weighting factor (amplitude) for the atom g_γn. Normally, not every atom in D will be used in this sum. Instead, matching pursuit chooses the atoms one at a time in order to maximally (greedily) reduce the approximation error. This is achieved by finding the atom that has the highest inner product with the signal (assuming the atoms are normalized), subtracting from the signal an approximation that uses only that one atom, and repeating the process until the signal is satisfactorily decomposed, i.e., the norm of the residual is small, where the residual after calculating γ_N and a_N is denoted by R_{N+1}. If R_n converges quickly to zero, then only a few atoms are needed to get a good approximation to f. Such sparse representations are desirable for signal coding and compression. More precisely, the sparsity problem that matching pursuit is intended to approximately solve is min_x ‖f − Dx‖₂² subject to ‖x‖₀ ≤ N, where ‖x‖₀ is the L0 pseudo-norm (i.e. the number of nonzero elements of x). In the previous notation, the nonzero entries of x are the amplitudes a_n. Solving the sparsity problem exactly is NP-hard, which is why approximation methods like MP are used. For comparison, consider the Fourier transform representation of a signal - this can be described using the terms given above, where the dictionary is built from sinusoidal basis functions (the smallest possible complete dictionary). The main disadvantage of Fourier analysis in signal processing is that it extracts only the global features of the signals and does not adapt to the analysed signals. By taking an extremely redundant dictionary, we can look
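The greedy loop described above is short enough to sketch directly. This is an illustrative, unoptimized implementation in which the dictionary is simply a Python list of atom vectors assumed to be unit-norm (the function name and data layout are my own, not from the article):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, max_atoms=10, tol=1e-9):
    """Greedy MP: repeatedly pick the atom with the largest inner product
    with the current residual, record its amplitude a_n, and subtract it.
    `atoms` is a list of unit-norm vectors (the columns of the dictionary D)."""
    residual = list(signal)
    picks = []  # (atom index, amplitude a_n) pairs
    for _ in range(max_atoms):
        # atom most correlated with what is left of the signal
        k = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        a = dot(residual, atoms[k])
        if abs(a) < tol:  # residual (nearly) orthogonal to every atom: stop
            break
        picks.append((k, a))
        residual = [r - a * g for r, g in zip(residual, atoms[k])]
    return picks, residual
```

With the dictionary {e1, e2, (1,1)/√2} and the signal (3, 4), the diagonal atom is chosen first, since its inner product (7/√2 ≈ 4.95) beats those of the axis atoms; later iterations mop up the remaining residual.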
https://en.wikipedia.org/wiki/Space-time%20adaptive%20processing
Space-time adaptive processing (STAP) is a signal processing technique most commonly used in radar systems. It involves adaptive array processing algorithms to aid in target detection. Radar signal processing benefits from STAP in areas where interference is a problem (i.e. ground clutter, jamming, etc.). Through careful application of STAP, it is possible to achieve order-of-magnitude sensitivity improvements in target detection. STAP involves a two-dimensional filtering technique using a phased-array antenna with multiple spatial channels. Coupling multiple spatial channels with pulse-Doppler waveforms lends the technique the name "space-time." An adaptive STAP weight vector is formed from the statistics of the interference environment and applied to the coherent samples received by the radar. History The theory of STAP was first published by Lawrence E. Brennan and Irving S. Reed in the early 1970s. At the time of publication, both Brennan and Reed were at Technology Service Corporation (TSC). While it was formally introduced in 1973, it has theoretical roots dating back to 1959. Motivation and applications For ground-based radar, clutter returns tend to be at DC, making them easily discriminated by Moving Target Indication (MTI). Thus, a notch filter at the zero-Doppler bin can be used. Airborne platforms with ownship motion experience relative ground clutter motion dependent on the angle, resulting in angle-Doppler coupling at the input. In this case, 1D filtering is not sufficient, since clutter can overlap the desired target's Doppler from multiple directions. The resulting interference is typically called a "clutter ridge," since it forms a line in the angle-Doppler domain. Narrowband jamming signals are also a source of interference, and exhibit significant spatial correlation. Thus receiver noise and interference must be considered, and detection processors must attempt to maximize the signal-to-interference and noise ratio (SINR).
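As a sketch of that weight-forming step: the classic SINR-optimal (MVDR-style) weight vector is w = R⁻¹s / (sᴴR⁻¹s), where R is the space-time interference-plus-noise covariance matrix and s is the target's space-time steering vector. The toy solver below is purely illustrative (real systems estimate R from training data and use numerically robust factorizations; the function names are my own):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (complex-valued, toy-sized)."""
    n = len(A)
    M = [list(row) + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    x = [0j] * n
    for r in range(n - 1, -1, -1):  # back-substitution
        s = sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def stap_weights(R, s):
    """Adaptive weights w = R^-1 s / (s^H R^-1 s): suppress interference
    while keeping unit gain in the target's angle-Doppler direction."""
    Ri_s = solve(R, s)
    denom = sum(si.conjugate() * x for si, x in zip(s, Ri_s))
    return [x / denom for x in Ri_s]
```

For Hermitian R the normalization makes the response to the steering vector itself exactly one (wᴴs = 1), which is the distortionless constraint.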
https://en.wikipedia.org/wiki/Markov%20strategy
In game theory, a Markov strategy is one that depends only on state variables that summarize the history of the game in one way or another. For instance, a state variable can be the current play in a repeated game, or it can be any interpretation of a recent sequence of play. A profile of Markov strategies is a Markov perfect equilibrium if it is a Nash equilibrium in every state of the game. The Markov strategy is named after Andrey Markov, whose work on stochastic processes underlies the concept. References Game theory
https://en.wikipedia.org/wiki/LED%20circuit
In electronics, an LED circuit or LED driver is an electrical circuit used to power a light-emitting diode (LED). The circuit must provide sufficient current to light the LED at the required brightness, but must limit the current to prevent damaging the LED. The voltage drop across an LED is approximately constant over a wide range of operating current; therefore, a small increase in applied voltage greatly increases the current. Very simple circuits are used for low-power indicator LEDs. More complex, current source circuits are required when driving high-power LEDs for illumination to achieve correct current regulation. Basic circuit The simplest circuit to drive an LED is through a series resistor. It is commonly used for indicators and digital displays in many consumer appliances. However, this circuit is not energy-efficient, because energy is dissipated in the resistor as heat. An LED has a voltage drop specified at the intended operating current. Ohm's law and Kirchhoff's circuit laws are used to calculate the appropriate resistor value, by subtracting the LED voltage drop from the supply voltage and dividing by the desired operating current. With a sufficiently high supply voltage, multiple LEDs in series can be powered with one resistor. If the supply voltage is close or equal to the LED forward voltage, then no reasonable value for the resistor can be calculated, so some other method of current limiting is used. Power source considerations The voltage versus current characteristics of an LED is similar to any diode. Current is approximately an exponential function of voltage according to the Shockley diode equation, and a small voltage change may result in a large change in current. If the voltage is below or equal to the threshold no current flows and the result is an unlit LED. If the voltage is too high, the current will exceed the maximum rating, overheating and potentially destroying the LED. LED drivers are designed to handle fluctuation load,
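The resistor calculation described above is a one-liner; the sketch below (function name and example part values are my own, not from the article) also guards against the degenerate case the article mentions, where the supply voltage does not exceed the LED's forward drop:

```python
def led_series_resistor(v_supply, v_forward, i_led):
    """R = (Vs - Vf) / I, per Ohm's and Kirchhoff's laws.
    v_supply and v_forward in volts, i_led in amperes; returns ohms."""
    if v_supply <= v_forward:
        # no reasonable resistor value exists; another
        # current-limiting method must be used instead
        raise ValueError("supply voltage must exceed the LED forward voltage")
    return (v_supply - v_forward) / i_led

# e.g. a 5 V supply, a red LED with a 2 V drop, 20 mA target current:
# led_series_resistor(5.0, 2.0, 0.020) -> 150 ohms
```

For several LEDs in series, v_forward is the sum of the individual drops, which must still be below the supply voltage.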
https://en.wikipedia.org/wiki/Digital%20channel%20election
A digital channel election was the process by which television stations in the United States chose which physical radio-frequency TV channel they would permanently use after the analog shutdown in 2009. The process was managed and mandated by the Federal Communications Commission for all full-power TV stations. Low-powered television (LPTV) stations are going through a somewhat different process, and are also allowed to flash-cut to digital. Process Stations could choose to keep their initial digital TV channel allocation, do a flash-cut to their former analog TV channel, or attempt to select another channel, often an analog channel or pre-transition digital channel from another station that had been orphaned. Stations on channels 52 to 69 did not have the first option, as the FCC and then the U.S. Congress revoked them from the bandplan. Many stations have chosen to keep their new channels permanently, after being forced to buy all new transmitters and television antennas. In some cases where the station's current analog tower could not handle the stress of the new digital antenna's weight and wind load, station owners had to construct entirely new broadcast towers in order to comply with the FCC's DTV mandate. Most broadcasters were bitter at having to purchase digital equipment and broadcast a digital signal when very few homeowners had digital television sets. The FCC allowed broadcasters the opportunity to petition the Federal Communications Commission (FCC) for special temporary authority (STA) to operate their digital facilities at low power, thereby allowing broadcasters additional time in which to purchase their full-power digital facilities. However, the FCC gave a stern July 2006 deadline for all full-power television stations to at least replicate 80% of their current analog coverage area, or run the risk of losing protection from encroachment by other stations. Most stations made an election in the first round, and most of those received
https://en.wikipedia.org/wiki/Ethnolichenology
Ethnolichenology is the study of the relationship between lichens and people. Lichens have been, and are being, used for many different purposes by human cultures across the world. The most common human use of lichens is for dye, but they have also been used for medicine, food and other purposes. Lichens for dye Lichens are a common source of natural dyes. The lichen dye is usually extracted by either boiling in water or ammonia fermentation. Although usually called ammonia fermentation, this method is not actually a fermentation and involves letting the lichen steep in ammonia (traditionally urine) for at least two to three weeks. In North America the most significant lichen dye is Letharia vulpina. Indigenous people throughout most of this lichen's range in North America traditionally make a yellow dye from this lichen by boiling it in water. Many of the traditional dyes of the Scottish Highlands were made from lichens, including red dyes from the cudbear lichen, Lecanora tartarea, the common orange lichen, Xanthoria parietina, and several species of leafy Parmelia lichens. Brown or yellow lichen dyes (called crottle or crotal), made from Parmelia saxatilis scraped off rocks, and red lichen dyes (called corkir) were used extensively to produce tartans. Purple dyes from lichens were historically very important throughout Europe from the 15th to 17th centuries. They were generally extracted from Roccella spp. lichens imported from the Canary Islands, Cape Verde Islands, Madagascar, or India. These lichens, and the dye extracted from them, are called orchil (variants archil, orchilla). The same dye was also produced from Ochrolechia spp. lichens in Britain and was called cudbear. Both Roccella spp. and Ochrolechia spp. contain the lichen substance orcin, which converts into the purple dye orcein in the ammonia fermentation process. Litmus, a water-soluble pH indicator dye mixture, is extracted from Roccella species. Lichens for medicine Many lichens have been used medicinall
https://en.wikipedia.org/wiki/Miroslav%20Fiedler
Miroslav Fiedler (7 April 1926 – 20 November 2015) was a Czech mathematician known for his contributions to linear algebra, graph theory and algebraic graph theory. His article, "Algebraic Connectivity of Graphs", published in the Czechoslovak Math Journal in 1973, established the use of the eigenvalues of the Laplacian matrix of a graph to create tools for measuring algebraic connectivity in algebraic graph theory. Fiedler is honored by the Fiedler eigenvalue (the second smallest eigenvalue of the graph Laplacian), with its associated Fiedler eigenvector, as the names for the quantities that characterize algebraic connectivity. Since Fiedler's original contribution, this structure has become essential to large areas of research in network theory, flocking, distributed control, clustering, multi-robot applications and image segmentation. References External links Home page at the Academy of Sciences of the Czech Republic. 1926 births 2015 deaths Mathematicians from Prague Czech mathematicians Graph theorists Recipients of Medal of Merit (Czech Republic) Combinatorialists Charles University alumni
https://en.wikipedia.org/wiki/Excluded%20point%20topology
In mathematics, the excluded point topology is a topology where exclusion of a particular point defines openness. Formally, let X be any non-empty set and p ∈ X. The collection T = {S ⊆ X : p ∉ S} ∪ {X} of subsets of X is then the excluded point topology on X. There are a variety of cases which are individually named: If X has two points, it is called the Sierpiński space. This case is somewhat special and is handled separately. If X is finite (with at least 3 points), the topology on X is called the finite excluded point topology. If X is countably infinite, the topology on X is called the countable excluded point topology. If X is uncountable, the topology on X is called the uncountable excluded point topology. A generalization is the open extension topology; if X ∖ {p} has the discrete topology, then the open extension topology on X is the excluded point topology. This topology is used to provide interesting examples and counterexamples. Properties Let X be a space with the excluded point topology with special point p. The space is compact, as the only neighborhood of p is the whole space. The topology is an Alexandrov topology. The smallest neighborhood of p is the whole space X; the smallest neighborhood of a point x ≠ p is the singleton {x}. These smallest neighborhoods are compact. Their closures are respectively X and {x, p}, which are also compact. So the space is locally relatively compact (each point admits a local base of relatively compact neighborhoods) and locally compact in the sense that each point has a local base of compact neighborhoods. But points do not admit a local base of closed compact neighborhoods. The space is ultraconnected, as any nonempty closed set contains the point p. Therefore the space is also connected and path-connected. See also Finite topological space Fort space List of topologies Particular point topology References Topological spaces
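For a finite X the definition can be verified mechanically. The sketch below builds the excluded point topology and checks the topology axioms; for a finite collection, closure under pairwise unions and intersections suffices. Function names are my own:

```python
from itertools import chain, combinations

def excluded_point_topology(X, p):
    """Open sets: every subset of X not containing p, together with X itself."""
    X = frozenset(X)
    rest = X - {p}
    small = chain.from_iterable(
        combinations(rest, r) for r in range(len(rest) + 1))
    return {frozenset(s) for s in small} | {X}

def is_topology(X, T):
    """Axiom check for a finite candidate topology T on X."""
    X = frozenset(X)
    if frozenset() not in T or X not in T:
        return False
    # finite case: pairwise closure under union and intersection is enough
    return all(a | b in T and a & b in T for a in T for b in T)
```

On X = {0, 1, 2} with p = 2 this yields exactly five open sets: ∅, {0}, {1}, {0, 1}, and X itself.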
https://en.wikipedia.org/wiki/Borel%20hierarchy
In mathematical logic, the Borel hierarchy is a stratification of the Borel algebra generated by the open subsets of a Polish space; elements of this algebra are called Borel sets. Each Borel set is assigned a unique countable ordinal number called the rank of the Borel set. The Borel hierarchy is of particular interest in descriptive set theory. One common use of the Borel hierarchy is to prove facts about the Borel sets using transfinite induction on rank. Properties of sets of small finite ranks are important in measure theory and analysis. Borel sets The Borel algebra in an arbitrary topological space is the smallest collection of subsets of the space that contains the open sets and is closed under countable unions and complementation. It can be shown that the Borel algebra is closed under countable intersections as well. A short proof that the Borel algebra is well-defined proceeds by showing that the entire powerset of the space is closed under complements and countable unions, and thus the Borel algebra is the intersection of all families of subsets of the space that have these closure properties. This proof does not give a simple procedure for determining whether a set is Borel. A motivation for the Borel hierarchy is to provide a more explicit characterization of the Borel sets. Boldface Borel hierarchy The Borel hierarchy or boldface Borel hierarchy on a space X consists of classes Σ^0_α, Π^0_α, and Δ^0_α for every countable ordinal α greater than zero. Each of these classes consists of subsets of X. The classes are defined inductively from the following rules: A set is in Σ^0_1 if and only if it is open. A set is in Π^0_α if and only if its complement is in Σ^0_α. A set A is in Σ^0_α for α > 1 if and only if there is a sequence of sets A_1, A_2, … such that each A_i is in Π^0_{α_i} for some α_i < α and A = ∪_i A_i. A set is in Δ^0_α if and only if it is both in Σ^0_α and in Π^0_α. The motivation for the hierarchy is to follow the way in which a Borel set could be constructed from open sets using complementation and countable un
https://en.wikipedia.org/wiki/Java%204K%20Game%20Programming%20Contest
The Java 4K Game Programming Contest, also known as Java 4K and J4K, is an informal contest that was started by the Java Game Programming community to challenge their software development abilities. Concept The goal of the contest is to develop the best game possible within four kibibytes (4096 bytes) of data. While the rules originally allowed for nearly any distribution method, recent years have required that the games be packaged as either an executable JAR file, a Java Webstart application, or a Java Applet, and now only an applet. Because the Java class file format incurs quite a bit of overhead, creating a complete game in 4K can be quite a challenge. As a result, contestants must choose how much of their byte budget they wish to spend on graphics, sound, and gameplay. Finding the best mix of these factors can be extremely difficult. Many new entrants believe that impressive graphics alone are enough to carry a game. However, entries with more modest graphics and focus on gameplay have regularly scored higher than such technology demonstrations. Prizes When first conceived, the "prize" for winning the contest was a bundle of "Duke Dollars", a virtual currency used on Sun Microsystems' Java forums. This currency could theoretically be redeemed for physical prizes such as watches and pens. The artificial currency was being downplayed by the introduction of the 4K contest, thus leaving no real prize at all. While there has been some discussion of providing prizes for the contest, it has continued to thrive without them. Spin-offs Following the creation of the Java4K contest, spin-offs targeting 8K, 16K, or a specific API like LWJGL have been launched, usually without success. While there has been a great deal of debate on why the Java 4K contest is so successful, the consensus from the contestants seems to be that it provides a very appealing challenge: not only do the entrants get the chance to show off how much they know about Java programming, but th
https://en.wikipedia.org/wiki/Australian%20Bird%20Count
The Australian Bird Count (ABC) was a project of the Royal Australasian Ornithologists Union (RAOU). Following the first and successful Atlas of Australian Birds project, which led to the publication of a book on the distribution of Australian birds in 1984, it was suggested by Ken Rogers that the RAOU should next look at bird migration and other movements in Australia. Methodology for a suitable project involving volunteers was worked out through experimental fieldwork and a workshop on ‘Monitoring the Populations and Movements of Australian Birds’. A project manager, Stephen Ambrose, was appointed and project fieldwork ran from January 1989 to August 1995. Some 950 volunteer observers carried out 79,000 surveys, for fixed 20-minute periods in 1700 three-hectare locations across Australia. Project management started at the Australian Museum in Sydney and was later moved to the RAOU National Office in Melbourne. Financial support came at first from the Australian Nature Conservation Agency and subsequently from BP Australia, which pledged A$260,000 to the project over five years. While much of the data has yet to be analysed, significant seasonal movements of several species of birds (demonstrated through geographical shifts in seasonal abundance) have been quantified. A report on some of the findings of the project was published as a supplement to the RAOU's magazine Wingspan in 1999. See also Aussie Backyard Bird Count BioBlitz ("24-hour inventory") Breeding Bird Survey Christmas Bird Count (CBC) (in the Western Hemisphere) Systematic Census of Australian Plants Tucson Bird Count (TBC) (in Arizona in the US) References Bird censuses Ornithological equipment and methods Ornithology in Australia
https://en.wikipedia.org/wiki/Microwave%20transmission
Microwave transmission is the transmission of information by electromagnetic waves with wavelengths in the microwave frequency range of 300 MHz to 300 GHz (1 m - 1 mm wavelength) of the electromagnetic spectrum. Microwave signals are normally limited to the line of sight, so long-distance transmission using these signals requires a series of repeaters forming a microwave relay network. It is possible to use microwave signals in over-the-horizon communications using tropospheric scatter, but such systems are expensive and generally used only in specialist roles. Although an experimental microwave telecommunication link across the English Channel was demonstrated in 1931, the development of radar in World War II provided the technology for practical exploitation of microwave communication. During the war, the British Army introduced the Wireless Set No. 10, which used microwave relays to multiplex eight telephone channels over long distances. A link across the English Channel allowed General Bernard Montgomery to remain in continual contact with his group headquarters in London. In the post-war era, the development of microwave technology was rapid, which led to the construction of several transcontinental microwave relay systems in North America and Europe. In addition to carrying thousands of telephone calls at a time, these networks were also used to send television signals for cross-country broadcast, and later, computer data. Communication satellites took over the television broadcast market during the 1970s and 80s, and the introduction of long-distance fibre optic systems in the 1980s and especially 90s led to the rapid rundown of the relay networks, most of which are abandoned. In recent years, there has been an explosive increase in use of the microwave spectrum by new telecommunication technologies such as wireless networks, and direct-broadcast satellites which broadcast television and radio directly into consumers' homes. Larger line-of-sight links are
https://en.wikipedia.org/wiki/Gated%20reverb
Gated reverb or gated ambience is an audio processing technique that combines strong reverb and a noise gate that cuts the tail of the reverb. The effect is typically applied to recordings of drums (or live sound reinforcement of drums in a PA system) to make the hits sound powerful and "punchy" while keeping the overall mix clean and transparent sounding. As one of the more prominent effects in many British pop and rock songs of the 1980s, it was brought to mainstream attention in 1979 by producer Steve Lillywhite and engineer Hugh Padgham while working on Peter Gabriel's self-titled third solo album, after Phil Collins played drums without using cymbals at London's Townhouse Studios. The effect is most quintessentially demonstrated in Collins' hit song "In the Air Tonight". Unlike many reverberation or delay effects, the gated reverb effect does not try to emulate any kind of reverb that occurs in nature. In addition to drums, the effect has occasionally been applied to vocals. History Producer Steve Lillywhite claimed he first experimented with the "ambience thing" on drums during the recording of Siouxsie and the Banshees' album The Scream (1978), when drummer Kenny Morris played without using cymbals on several songs. Lillywhite explained to journalist John Robb: "When you listen, you can hear elements of this gated room sound, big compressed room sound that I did on the Banshees." He also cited his production work on the Psychedelic Furs' single "Sister Europe"; this was "all done before the Peter Gabriel album". Lillywhite recognized that the gated reverb drum sound first truly emerged in that form during the recording of the 1980 Peter Gabriel album with engineer Hugh Padgham. Lillywhite's and Padgham's work on Peter Gabriel 3 was bookended by their work on XTC's Drums and Wires (1979) and Black Sea (1980). In this period they perfected their technique on Terry Chambers' drums, which can be heard most distinctively on Black Sea (particularly song
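The processing chain described above — a strong reverb followed by a gate that chops off the tail — can be sketched in a few lines. Here the "reverb" is simulated with an exponentially decaying noise impulse response, and the gate is a hard cut driven by a short-term RMS envelope; all parameter names and values are illustrative, not taken from any particular unit:

```python
import math
import random

def gated_reverb(dry, sr=44100, reverb_time=0.3, threshold=0.1,
                 window=16, seed=0):
    """Convolve `dry` with a decaying-noise impulse response (the reverb),
    then zero every sample whose short-term RMS envelope falls below
    `threshold` times the envelope peak (the gate slamming shut)."""
    rng = random.Random(seed)
    n_ir = max(1, int(sr * reverb_time))
    # decaying noise stands in for a dense room impulse response
    ir = [rng.uniform(-1.0, 1.0) * math.exp(-5.0 * i / n_ir)
          for i in range(n_ir)]
    n_out = len(dry) + n_ir - 1
    wet = [sum(dry[j] * ir[i - j]
               for j in range(max(0, i - n_ir + 1), min(i + 1, len(dry))))
           for i in range(n_out)]
    # the gate's detector: trailing-window RMS of the wet signal
    env = [math.sqrt(sum(x * x for x in wet[max(0, i - window + 1):i + 1])
                     / window)
           for i in range(n_out)]
    peak = max(env)
    return [x if e >= threshold * peak else 0.0 for x, e in zip(wet, env)]
```

Feeding in a single impulse makes the behaviour audible in the numbers: the output rings like the raw reverb at first, then cuts dead once the tail decays below the gate threshold, instead of fading out naturally.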
https://en.wikipedia.org/wiki/POTS%20codec
A POTS codec is a type of audio coder-decoder (codec) that uses digital signal processing to transmit audio digitally over standard telephone lines (plain old telephone service, POTS) at a higher level of audio quality than the telephone line would normally provide in its analog mode. The POTS codec is one of a family of broadcast codecs differentiated by the type of telecommunications circuit used for transmission. The ISDN codec, which instead uses ISDN lines, and the IP codec, which uses private or public IP networks, are also common. Primarily used in broadcast engineering to link remote broadcast locations to the host studio, a hardware codec, implemented with digital signal processing, is used to compress the audio data enough to travel through a pair of 33.6k modems. POTS codecs have the disadvantages of being restricted to relatively low bit rates and being susceptible to variable line quality. ISDN and IP codecs have the advantage of being natively digital, and operate at much higher bitrates, which results in fewer compression artifacts. Special lines must be run to a location, however, and must be ordered well in advance of the event so that there is ample time for installation of equipment. Since POTS lines are almost universally available, the POTS codec can be set up nearly anywhere with little or no notice. Uses The main use of a broadcast codec is for remote broadcasting by radio stations. Functions Codecs usually come in two types of units: rackmount for the studio and portable for the remote. Audio can be sent in either direction, and most can also pass low-speed non-audio data, allowing the remote DJ to control broadcast automation or other studio equipment via RS-232. Many have an automatic redial if the line should become disconnected. The remote unit usually has some basic mixer functions, while the studio unit usually has some kind of digital output. Some codecs can be configured to use ISDN, POTS or IP rather than requiring a differe
https://en.wikipedia.org/wiki/Bil%20Herd
Bil Herd is a computer engineer who created several designs for 8-bit home computers while working for Commodore Business Machines in the early to mid-1980s. Biography He attended the Indiana school system. Herd did not have a college degree, and did not graduate high school, though he was working as an engineer by the age of 20. Military service Military service: 1977–1980: 238th Cavalry - 38th Division Indiana Army National Guard 1980–1982: 103rd Medical Battalion - 28th Division Pennsylvania Army National Guard 1981: Army Commendation Medal for meritorious service. Working for Commodore After first acting as the principal engineer on the Commodore Plus/4, C16/116, C264, and C364 machines, Herd designed the significantly more successful Commodore 128, a dual-CPU, triple-OS, compatible successor to the Commodore 64. Prior to the C128, Herd had done the initial architecture of the Commodore LCD computer, which was not released. After Commodore After leaving Commodore, Herd continued to design faster and more powerful computers with emphasis on machine vision and is a co-author on a patent involving n-dimensional pattern matching. He also designed an ultrasonic backup sensor for vehicles while working for Indian Valley Mfg. in 1986, a feature found on many modern vehicles today. Voluntary health care work: 1989–1996: Fellowship First Aid Squad / Mount Laurel EMS Inc. Highest rank: Captain (also served as president) 1991–1995: Cooper Trauma Center - Camden, NJ: Trauma Technician Herd has undertaken an entrepreneurial role and is owner of several small companies. As for recent low-level computer hacking, he did a "cameo appearance" by contributing a snippet of sprite logic code to the C64 DTV product designed by Jeri Ellsworth. Herd appeared in and narrated the documentary "Growing the 8 Bit Generation" (a.k.a. "The Commodore Wars") about the early days of Commodore and the home computers explosion. Subsequently, he narrated the documentary "Easy to l
https://en.wikipedia.org/wiki/Neutral%20mutation
Neutral mutations are changes in DNA sequence that are neither beneficial nor detrimental to the ability of an organism to survive and reproduce. In population genetics, mutations in which natural selection does not affect the spread of the mutation in a species are termed neutral mutations. Neutral mutations that are inheritable and not linked to any genes under selection will be lost or will replace all other alleles of the gene. That loss or fixation of the gene proceeds based on random sampling known as genetic drift. A neutral mutation that is in linkage disequilibrium with other alleles that are under selection may proceed to loss or fixation via genetic hitchhiking and/or background selection. While many mutations in a genome may decrease an organism’s ability to survive and reproduce, also known as fitness, those mutations are selected against and are not passed on to future generations. The most commonly-observed mutations that are detectable as variation in the genetic makeup of organisms and populations appear to have no visible effect on the fitness of individuals and are therefore neutral. The identification and study of neutral mutations has led to the development of the neutral theory of molecular evolution, which is an important and often-controversial theory that proposes that most molecular variation within and among species is essentially neutral and not acted on by selection. Neutral mutations are also the basis for using molecular clocks to identify such evolutionary events as speciation and adaptive or evolutionary radiations. History Charles Darwin commented on the idea of neutral mutation in his work, hypothesizing that mutations that do not give an advantage or disadvantage may fluctuate or become fixed apart from natural selection. "Variations neither useful nor injurious would not be affected by natural selection, and would be left either a fluctuating element, as perhaps we see in certain polymorphic species, or would ultimately become
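The "random sampling" that drives a neutral allele to loss or fixation can be illustrated with a minimal Wright–Fisher-style simulation. This is a toy sketch: the function name, population size, and parameters are illustrative assumptions, not taken from the article.

```python
import random

def wright_fisher(pop_size, freq, generations, seed=0):
    """Track the frequency of a neutral allele under pure genetic drift.

    Each generation, the next population is a binomial resampling of the
    current one; with no selection acting, the allele eventually drifts
    to loss (frequency 0.0) or fixation (frequency 1.0).
    """
    rng = random.Random(seed)
    count = int(freq * pop_size)
    for _ in range(generations):
        if count in (0, pop_size):  # allele lost or fixed: drift is over
            break
        # resample pop_size gene copies from the current frequency
        count = sum(rng.random() < count / pop_size for _ in range(pop_size))
    return count / pop_size

final = wright_fisher(pop_size=100, freq=0.5, generations=10_000)
print(final)  # 0.0 (loss) or 1.0 (fixation)
```

Starting from a frequency of 0.5, loss and fixation are equally likely, which is the "random sampling" the article describes: no outcome is favored, yet one always eventually occurs.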
https://en.wikipedia.org/wiki/Ragone%20plot
A Ragone plot is a plot used for comparing the energy density of various energy-storing devices. On such a chart the values of specific energy (in W·h/kg) are plotted versus specific power (in W/kg). Both axes are logarithmic, which allows comparing the performance of very different devices. Ragone plots can reveal information about gravimetric energy density, but do not convey details about volumetric energy density. The Ragone plot was first used to compare the performance of batteries. However, it is suitable for comparing any energy-storage devices, as well as energy devices such as engines, gas turbines, and fuel cells. The plot is named after David V. Ragone. Conceptually, the vertical axis describes how much energy is available per unit mass, while the horizontal axis shows how quickly that energy can be delivered, otherwise known as power per unit mass. A point in a Ragone plot represents a particular energy device or technology. The amount of time (in hours) during which a device can be operated at its rated power is given as the ratio between the specific energy (Y-axis) and the specific power (X-axis). This is true regardless of the overall scale of the device, since a larger device would have proportional increases in both power and energy. Consequently, the iso curves (curves of constant operating time) in a Ragone plot are straight lines. For electrical systems, the following equations are relevant: specific power = V·I/m and specific energy = V·I·t/m, where V is voltage (V), I is electric current (A), t is time (s) and m is mass (kg). References Capacitors Battery (electricity) Charts
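The operating-time relation described above (time at rated power = specific energy / specific power, independent of device mass) can be sketched as follows. The device names and numbers are illustrative assumptions, not measured data:

```python
# Hypothetical Ragone-plot coordinates:
# (specific energy in W·h/kg, specific power in W/kg)
devices = {
    "battery-like": (150.0, 300.0),
    "capacitor-like": (5.0, 5000.0),
}

for name, (specific_energy, specific_power) in devices.items():
    # ratio of the two axes = operating time at rated power, in hours
    hours = specific_energy / specific_power
    print(f"{name}: {hours:.4f} h ({hours * 3600:.1f} s) at rated power")
```

Because the ratio is fixed along any line of slope 1 on the log-log axes, all devices on such a line share the same operating time, which is why the iso-time curves are straight.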
https://en.wikipedia.org/wiki/Near-threatened%20species
A near-threatened species is a species which has been categorized as "Near Threatened" (NT) by the International Union for Conservation of Nature (IUCN) as one that may be vulnerable to endangerment in the near future, but which does not currently qualify for threatened status. The IUCN notes the importance of re-evaluating near-threatened taxa at appropriate intervals. The rationale used for near-threatened taxa usually includes the criteria for vulnerable status which are plausible or nearly met, such as reduction in numbers or range. Near-threatened species evaluated from 2001 onwards may also be ones which are dependent on conservation efforts to prevent their becoming threatened, whereas before this, conservation-dependent species were given a separate category ("Conservation Dependent"). Additionally, the 402 conservation-dependent taxa may also be considered near-threatened. IUCN Categories and Criteria version 2.3 Before 2001, the IUCN used the version 2.3 Categories and Criteria to assign conservation status, which included a separate category for conservation-dependent species ("Conservation Dependent", LR/cd). With this category system, Near Threatened and Conservation Dependent were both subcategories of the category "Lower Risk". Taxa which were last evaluated before 2001 may retain their LR/cd or LR/nt status, although had the category been assigned with the same information today, the species would be designated simply "Near Threatened (NT)" in either case. Gallery See also IUCN Red List near threatened species, ordered by taxonomic rank. IUCN Red List near threatened species, ordered alphabetically. List of near threatened amphibians List of near threatened arthropods List of near threatened birds List of near threatened fishes List of near threatened insects List of near threatened invertebrates List of near threatened mammals List of near threatened molluscs List of near threatened reptiles References External links List of Near Threatened
https://en.wikipedia.org/wiki/Ingress%20router
An ingress router is a label switch router that is a starting point (source) for a given label-switched path (LSP). An ingress router may be an egress router or an intermediate router for any other LSP(s); hence the roles of ingress and egress routers are LSP-specific. Usually, the MPLS label is attached to an IP packet at the ingress router and removed at the egress router, whereas label swapping is performed at the intermediate routers. However, in special cases (such as LSP hierarchy in RFC 4206, LSP stitching and MPLS local protection) the ingress router could push a label onto the label stack of an already existing MPLS packet (instead of an IP packet). Note that, although the ingress router is the starting point of an LSP, it may or may not be the source of the underlying IP packets. MPLS networking
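The push/swap/pop roles along an LSP can be sketched as below. Modelling a packet as a (label_stack, payload) pair, the function names, and the label values are our illustrative assumptions, not a real forwarding plane:

```python
# Minimal sketch of MPLS label-stack handling along one LSP.

def ingress_push(packet, label):
    """Ingress LSR: attach a label to an (unlabelled) IP packet."""
    stack, payload = packet
    return ([label] + stack, payload)

def transit_swap(packet, new_label):
    """Intermediate LSR: swap the top label for a new one."""
    stack, payload = packet
    return ([new_label] + stack[1:], payload)

def egress_pop(packet):
    """Egress LSR: remove the top label, exposing the payload
    (or an inner label, as in LSP hierarchy)."""
    stack, payload = packet
    return (stack[1:], payload)

pkt = ([], "ip-payload")      # plain IP packet: empty label stack
pkt = ingress_push(pkt, 100)  # ingress router pushes label 100
pkt = transit_swap(pkt, 200)  # intermediate router swaps 100 -> 200
pkt = egress_pop(pkt)         # egress router pops the label
print(pkt)                    # -> ([], 'ip-payload')
```

The special cases in the text (LSP hierarchy, stitching) correspond to calling `ingress_push` on a packet whose stack is already non-empty, which is why the ingress role is defined per LSP rather than per packet.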
https://en.wikipedia.org/wiki/OSSIM
OSSIM (Open Source Security Information Management) is an open source security information and event management system, integrating a selection of tools designed to aid network administrators in computer security, intrusion detection and prevention. The project began in 2003 as a collaboration between Dominique Karg, Julio Casal and later Alberto Román. In 2008 it became the basis for their company AlienVault. Following the acquisition of the Eureka project label and completion of R&D, AlienVault began selling a commercial derivative of OSSIM ('AlienVault Unified Security Management'). AlienVault was acquired by AT&T Communications and renamed AT&T Cybersecurity in 2019. OSSIM has had four major-version releases since its creation and is on a 5.x.x version numbering. An information visualization of the contributions to the source code was published to mark eight years of OSSIM. The project has approximately 7.4 million lines of code. The current version of OSSIM is 5.7.5, released on September 16, 2019. As a SIEM system, OSSIM is intended to give security analysts and administrators a more complete view of all the security-related aspects of their system by combining log management (which can be extended with plugins) and asset management and discovery with information from dedicated information security controls and detection systems. This information is then correlated together to create context that is not visible from any one piece alone. Alarm and availability views, along with reporting capabilities, are provided to enhance the capabilities of the tool and its utility to security and systems engineers. OSSIM performs these functions using other well-known open-source software security components, unifying them under a single browser-based user interface. The interface provides graphical analysis tools for information collected from the underlying open source software compo
https://en.wikipedia.org/wiki/Recombinase
Recombinases are genetic recombination enzymes. Site specific recombinases DNA recombinases are widely used in multicellular organisms to manipulate the structure of genomes, and to control gene expression. These enzymes, derived from bacteria (bacteriophages) and fungi, catalyze directionally sensitive DNA exchange reactions between short (30–40 nucleotides) target site sequences that are specific to each recombinase. These reactions enable four basic functional modules: excision/insertion, inversion, translocation and cassette exchange, which have been used individually or combined in a wide range of configurations to control gene expression. Types include: Cre recombinase Hin recombinase Tre recombinase FLP recombinase Homologous recombination Recombinases have a central role in homologous recombination in a wide range of organisms. Such recombinases have been described in archaea, bacteria, eukaryotes and viruses. Archaea The archaeon Sulfolobus solfataricus RadA recombinase catalyzes DNA pairing and strand exchange, central steps in recombinational repair. The RadA recombinase has greater similarity to the eukaryotic Rad51 recombinase than to the bacterial RecA recombinase. Bacteria RecA recombinase appears to be universally present in bacteria. RecA has multiple functions, all related to DNA repair. RecA has a central role in the repair of replication forks stalled by DNA damage and in the bacterial sexual process of natural genetic transformation. Eukaryotes Eukaryotic Rad51 and its related family members are homologous to the archaeal RadA and bacterial RecA recombinases. Rad51 is highly conserved from yeast to humans. It has a key function in the recombinational repair of DNA damages, particularly double-strand damages such as double-strand breaks. In humans, over- or under-expression of Rad51 occurs in a wide variety of cancers. During meiosis Rad51 interacts with another recombinase, Dmc1, to form a presynaptic filament that is
https://en.wikipedia.org/wiki/ASMO%20449
ASMO 449 is a now-obsolete 7-bit coded character set for encoding the Arabic language. History This character set was devised by the now-defunct Arab Standardization and Metrology Organization in 1982 as the 7-bit standard to be used in Arabic-speaking countries. The design of this character set is derived from the 7-bit ISO 646 (version of 1973) but with modifications suited for the Arabic language. In code points ranging from 0x41 to 0x72 (hexadecimal), Latin letters were replaced with Arabic letters. Punctuation marks which were identical in the Latin and Arabic scripts remained the same, but where they differed (comma, semicolon, question mark), the Latin ones were replaced by Arabic ones. Only nominal letters are encoded, with no preshaped forms of the letters, so shaping processing is required for display. This character set is not bidirectional and was intended to be used in right-to-left writing. Therefore, symmetrical punctuation marks ("(", ")", "<", ">", "[", "]", "{" and "}") appear reversed (")", "(", ">", "<", "]", "[", "}" and "{"). ASMO 449 was registered in the International Register of Coded Character Sets as IR 089 in 1985 and approved as an ISO standard as ISO 9036:1987 Information processing - Arabic 7-bit coded character set for information interchange. Character set There is a variant, sometimes named ASMO 449+, which adds the characters NBSP in 0x75, "ﹳ" in 0x76, "لآ" in 0x77, "لأ" in 0x78, "لإ" in 0x79 and "لا" in 0x7A. Relationship with other character sets ASMO 449 is a 7-bit character set. Although some encodings allocate this 7-bit character set in the upper part of an 8-bit character set, it should not be confused with ASMO 708. In the character sets that allocate ASMO 449 (or some variant of it) in the upper part of the 8-bit character set, the existence of apparently repeated characters is due to the fact that the characters in the lower part are for left-to-right script while the characters in the upper part are
https://en.wikipedia.org/wiki/Prony%27s%20method
Prony analysis (Prony's method) was developed by Gaspard Riche de Prony in 1795. However, practical use of the method awaited the digital computer. Similar to the Fourier transform, Prony's method extracts valuable information from a uniformly sampled signal and builds a series of damped complex exponentials or damped sinusoids. This allows the estimation of frequency, amplitude, phase and damping components of a signal. The method Let f(t) be a signal consisting of N evenly spaced samples. Prony's method fits a function f̂(t) = Σ_{i=1}^{M} A_i e^{σ_i t} cos(ω_i t + φ_i) to the observed f(t). After some manipulation utilizing Euler's formula, the following result is obtained, which allows more direct computation of terms: f̂(t) = Σ_{i=1}^{2M} B_i e^{λ_i t}, where λ_i = σ_i ± jω_i are the eigenvalues of the system, σ_i are the damping components, ω_i are the angular-frequency components, φ_i are the phase components, A_i are the amplitude components of the series, j is the imaginary unit (j² = −1), and B_i = (A_i/2) e^{±jφ_i}. Representations Prony's method is essentially a decomposition of a signal with complex exponentials via the following process: Regularly sample f̂(t) so that the n-th of N samples may be written as f̂(Δt·n) = Σ_{m=1}^{2M} B_m e^{λ_m Δt·n}, for n = 0, 1, ..., N−1. If f̂(t) happens to consist of damped sinusoids, then there will be pairs of complex exponentials such that B_a = (A_i/2) e^{+jφ_i}, B_b = (A_i/2) e^{−jφ_i}, λ_a = σ_i + jω_i, λ_b = σ_i − jω_i, i.e. each damped sinusoid contributes a complex-conjugate pair. Because the summation of complex exponentials is the homogeneous solution to a linear difference equation, the following difference equation will exist: f̂(Δt·n) = −Σ_{m=1}^{2M} P_m f̂(Δt·(n−m)). The key to Prony's method is that the coefficients P_m in the difference equation are related to the following polynomial: z^{2M} + P_1 z^{2M−1} + ... + P_{2M} = Π_{m=1}^{2M} (z − e^{λ_m Δt}). These facts lead to the following three steps within Prony's method: 1) Construct and solve the matrix equation obtained by writing the difference equation for n = 2M, ..., N−1, for the values P_m. Note that if the resulting system is overdetermined, a generalized matrix inverse (least squares) may be needed to find the values P_m. 2) After finding the P_m values, find the roots (numerically if necessary) of the polynomial z^{2M} + P_1 z^{2M−1} + ... + P_{2M}. The m-th root of this polynomial will be equal to e^{λ_m Δt}. 3) With the e^{λ_m Δt} values, the f̂(Δt·n) values form a (Vandermonde) system of linear equations that may be used to solve for the B_m values: f̂(Δt·n) = Σ_{m=1}^{2M} B_m (e^{λ_m Δt})^n, where 2M unique values of f̂(Δt·n) are used. It is possible to
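The three steps can be sketched in NumPy for the discrete form x[n] ≈ Σ_m B_m μ_m^n, where μ_m plays the role of e^{λ_m Δt}. The variable names, least-squares formulation, and test signal are our own choices, not part of the original method description:

```python
import numpy as np

def prony(x, M):
    """Least-squares Prony fit: x[n] ~ sum_m B[m] * mu[m]**n.

    Steps: (1) solve the linear-prediction (difference) equations for
    the coefficients P, (2) take roots of the associated polynomial to
    get mu, (3) solve a Vandermonde system for the amplitudes B.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # Step 1: x[n] = -(P_1 x[n-1] + ... + P_M x[n-M]) for n = M..N-1
    A = np.column_stack([x[M - k:N - k] for k in range(1, M + 1)])
    P, *_ = np.linalg.lstsq(A, -x[M:], rcond=None)
    # Step 2: roots of z^M + P_1 z^(M-1) + ... + P_M give the mu[m]
    mu = np.roots(np.concatenate(([1.0 + 0j], P)))
    # Step 3: Vandermonde least squares, V[n, m] = mu[m]**n
    V = np.vander(mu, N, increasing=True).T
    B, *_ = np.linalg.lstsq(V, x, rcond=None)
    return mu, B

# Test signal: two real decaying exponentials, x[n] = 2*0.9^n + 0.5*(-0.5)^n
n = np.arange(20)
x = 2.0 * 0.9**n + 0.5 * (-0.5)**n
mu, B = prony(x, M=2)
print(np.sort(mu.real))  # ≈ [-0.5  0.9]
```

Real damped sinusoids would appear here as complex-conjugate pairs in `mu`, matching the pairing noted in the text.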
https://en.wikipedia.org/wiki/Coloured%20Book%20protocols
The Coloured Book protocols were a set of communication protocols for computer networks developed in the United Kingdom in the 1970s. These protocols were designed to enable communication and data exchange between different computer systems and networks. The name originated with each protocol being identified by the colour of the cover of its specification document. The protocols were in use until the 1990s when the Internet protocol suite came into widespread use. History In the mid-1970s, the British Post Office Telecommunications division (BPO-T) worked with the academic community in the United Kingdom and the computer industry to develop a set of standards to enable interoperability among different computer systems based on the X.25 protocol suite for packet-switched wide area network (WAN) communication. First defined in 1975, the standards evolved through experience developing protocols for the NPL network in the late 1960s and the Experimental Packet Switched Service in the early 1970s. The Coloured Book protocols were used on SERCnet from 1980, and SWUCN from 1982, both of which became part of the JANET academic network from 1984. The protocols were influential in the development of computer networks, particularly in the UK, gained some acceptance internationally as the first complete X.25 standard, and gave the UK "several years lead over other countries". From late 1991, Internet protocols were adopted on the Janet network instead; they were operated simultaneously for a while, until X.25 support was phased out entirely in August 1997. Protocols The standards were defined in several documents, each addressing different aspects of computer network communication. They were identified by the colour of the cover: Pink Book The Pink Book defined protocols for transport over Ethernet. The protocol was basically X.25 level 3 running over LLC2. Orange Book The Orange Book defined protocols for transport over local networks using the Cambridge Ring (computer
https://en.wikipedia.org/wiki/Vacuum%20deposition
Vacuum deposition is a group of processes used to deposit layers of material atom-by-atom or molecule-by-molecule on a solid surface. These processes operate at pressures well below atmospheric pressure (i.e., vacuum). The deposited layers can range from a thickness of one atom up to millimeters, forming freestanding structures. Multiple layers of different materials can be used, for example to form optical coatings. The process can be qualified based on the vapor source; physical vapor deposition uses a liquid or solid source and chemical vapor deposition uses a chemical vapor. Description The vacuum environment may serve one or more purposes: reducing the particle density so that the mean free path for collision is long; reducing the particle density of undesirable atoms and molecules (contaminants); providing a low-pressure plasma environment; providing a means for controlling gas and vapor composition; and providing a means for mass flow control into the processing chamber. Condensing particles can be generated in various ways: thermal evaporation, sputtering, cathodic arc vaporization, laser ablation, and decomposition of a chemical vapor precursor (chemical vapor deposition). In reactive deposition, the depositing material reacts either with a component of the gaseous environment (Ti + N → TiN) or with a co-depositing species (Ti + C → TiC). A plasma environment aids in activating gaseous species (N2 → 2N) and in the decomposition of chemical vapor precursors (SiH4 → Si + 4H). The plasma may also be used to provide ions for vaporization by sputtering, for bombardment of the substrate for sputter cleaning, and for bombardment of the depositing material to densify the structure and tailor properties (ion plating). Types When the vapor source is a liquid or solid the process is called physical vapor deposition (PVD). When the source is a chemical vapor precursor, the process is called chemical vapor deposition (CVD). The latter has several variants: low-pressure chemi
https://en.wikipedia.org/wiki/Coding%20best%20practices
Coding best practices or programming best practices are a set of informal rules (best practices) that many software developers in computer programming follow to improve software quality. Many computer programs remain in use for long periods of time, so any rules need to facilitate both initial development and subsequent maintenance and enhancement of source code by people other than the original authors. In the ninety-ninety rule, Tom Cargill is credited with an explanation as to why programming projects often run late: "The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time." Any guidance which can redress this lack of foresight is worth considering. The size of a project or program has a significant effect on error rates, programmer productivity, and the amount of management needed. Software quality As listed below, there are many attributes associated with good software. Some of these can be mutually contradictory (e.g. being very fast versus performing extensive error checking), and different customers and participants may have different priorities. Weinberg provides an example of how different goals can have a dramatic effect on both effort required and efficiency. Furthermore, he notes that programmers will generally aim to achieve any explicit goals which may be set, probably at the expense of any other quality attributes. Sommerville has identified four generalized attributes which are not concerned with what a program does, but how well the program does it: Maintainability Dependability Efficiency Usability Weinberg has identified four targets which a good program should meet: Does a program meet its specification ("correct output for each possible input")? Is the program produced on schedule (and within budget)? How adaptable is the program to cope with changing requirements? Is the program efficient enough for the environment in which i
https://en.wikipedia.org/wiki/Specialty%20engineering
In the domain of systems engineering, specialty engineering is defined as, and includes, the engineering disciplines that are not typical of the main engineering effort. More common engineering efforts in systems engineering such as hardware, software, and human factors engineering may be used as major elements in a majority of systems engineering efforts and therefore are not viewed as "special". Examples of specialty engineering include electromagnetic interference, safety, and physical security. Less common engineering domains such as electromagnetic interference, electrical grounding, safety, security, electrical power filtering/uninterruptible supply, manufacturability, and environmental engineering may be included in systems engineering efforts where they have been identified to address special system implementations. These less common but just as important engineering efforts are then viewed as "specialty engineering". However, if the specific system has a standard implementation of environmental or security engineering, for example, the situation is reversed and the human factors engineering or hardware/software engineering may be the "specialty engineering" domain. The key takeaway is that the context of the systems engineering project and the unique needs of the project are fundamental when considering which efforts count as specialty engineering. The benefit of citing "specialty engineering" in planning is the notice to all team levels that special management and science factors may need to be accounted for and may influence the project. Specialty engineering may be cited by commercial entities and others to specify their unique abilities. References Eisner, Howard. (2002). "Essentials of Project and Systems Engineering Management". Wiley. p. 217. Systems engineering Engineering disciplines
https://en.wikipedia.org/wiki/Figure%20of%20merit
A figure of merit (FOM) is a performance metric that characterizes the performance of a device, system, or method, relative to its alternatives. Examples Accuracy of a rifle Audio amplifier figures of merit such as gain or efficiency Battery life of a laptop computer Calories per serving Clock rate of a CPU is often given as a figure of merit, but is of limited use in comparing between different architectures. FLOPS may be a better figure, though these too are not completely representative of the performance of a CPU. Contrast ratio of an LCD Frequency response of a speaker Fill factor of a solar cell Resolution of the image sensor in a digital camera Measure of the detection performance of a sonar system, defined as the propagation loss for which a 50% detection probability is achieved Noise figure of a radio receiver The thermoelectric figure of merit, zT, a material constant proportional to the efficiency of a thermoelectric couple made with the material The figure of merit of a digital-to-analog converter, calculated as (power dissipation)/(2^ENOB × effective bandwidth) [J/Hz] Luminous efficacy of lighting Profit of a company Residual noise remaining after compensation in an aeromagnetic survey Heat absorption and transfer quality for a solar cooker Computational benchmarks are synthetic figures of merit that summarize the speed of algorithms or computers in performing various typical tasks. References Engineering ratios
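The digital-to-analog converter figure of merit above (energy per effective conversion step, reading the exponent as 2^ENOB) is a one-line computation. The function name and the numbers below are illustrative assumptions, not taken from any datasheet:

```python
def dac_figure_of_merit(power_w, enob, bandwidth_hz):
    """DAC FOM in J/Hz: P / (2**ENOB * BW).

    enob is the effective number of bits, so 2**enob counts the
    effective quantization levels; lower values are better.
    """
    return power_w / (2 ** enob * bandwidth_hz)

# Illustrative: 10 mW dissipation, 10 effective bits, 100 MHz bandwidth
print(dac_figure_of_merit(0.010, 10, 100e6))  # ≈ 9.77e-14 J/Hz
```

Because the FOM is an energy per conversion step, halving power, adding an effective bit, or doubling bandwidth each improve it by the same factor of two.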
https://en.wikipedia.org/wiki/Unlicensed%20Personal%20Communications%20Services
Unlicensed Personal Communications Services or UPCS band is the 1920–1930 MHz frequency band allocated by the United States Federal Communications Commission (FCC) for short range Personal Communications Services (PCS) applications in the United States, such as the Digital Enhanced Cordless Telecommunications (DECT) wireless protocol. History Prior to an FCC rules change in April 2005, the band also included the frequencies 1910-1920 MHz and 2390–2400 MHz. These were used for a variety of short range communications, including point-to-point microwave links. Allocation These allocation rules are described in Title 47, Part 15 of the Code of Federal Regulations. Licensed PCS, although not necessarily distinguished as such from UPCS, is used for digital mobile phone services. DECT devices designed to operate in this band in the US use the marketing term DECT 6.0. See also Amateur radio (Licence Required) Citizens band radio Family Radio Service General Mobile Radio Service Multi-Use Radio Service Bandplans Telephone services Consumer electronics Radio technology Radio regulations
https://en.wikipedia.org/wiki/Carlos%20J.%20Finlay%20Prize%20for%20Microbiology
The Carlos J. Finlay Prize is a biennial scientific prize sponsored by the Government of Cuba and awarded since 1980 by the United Nations Educational, Scientific and Cultural Organization (UNESCO) to people or organizations for their outstanding contributions to microbiology (including immunology, molecular biology, genetics, etc.) and its applications. Winners receive a grant of $5,000 USD donated by the Government of Cuba and an Albert Einstein Silver Medal from UNESCO. The Prize is awarded in odd years (to coincide with UNESCO's General Conference) and is named after Carlos Juan Finlay (1833 – 1915), a Cuban physician and microbiologist widely known for his pioneering discoveries in the field of yellow fever. Winners Source: UNESCO 1980 - Roger Y. Stanier (Canada) 1983 - César Milstein, FRS (Argentina, United Kingdom) 1985 - Victor Nussenzweig and Ruth Nussenzweig (Brazil) 1987 - Hélio Gelli Pereira (Brazil) and Peter Reichard (Sweden) 1989 - Georges Cohen (France) and Walter Fiers (Belgium) 1991 - Margarita Salas and Eladio Viñuela (Spain) and Jean-Marie Ghuysen (Belgium) 1993 - International Society of Soil Science, James Michael Lynch (UK), James Tiedje (USA), Johannes Antonie Van Veen (Netherlands) 1995 - Jan Balzarini (Belgium) and Pascale Cossart (France) 1996 - Etienne Pays (Belgium) and Sheikh Riazzudin (Pakistan) 1999 - Ádám Kondorosi (Hungary) 2001 - Susana López Charreton and Carlos Arias Ortiz (Mexico) 2003 - Antonio Peña Díaz (Mexico) 2005 - Khatijah Binti Mohamad Yusoff (Malaysia) 2015 - Yoshihiro Kawaoka (Japan) 2017 - Samir Kumar Saha (Bangladesh) and Shahida Hasnain (Pakistan) 2020 - Kenya Honda (Japan) See also List of biology awards References Biology awards UNESCO awards Awards established in 1980
https://en.wikipedia.org/wiki/Pixel%20aspect%20ratio
A pixel aspect ratio (often abbreviated PAR) is a mathematical ratio that describes how the width of a pixel in a digital image compares to the height of that pixel. Most digital imaging systems display an image as a grid of tiny, square pixels. However, some imaging systems, especially those that must be compatible with standard-definition television motion pictures, display an image as a grid of rectangular pixels, in which the pixel width and height are different. Pixel aspect ratio describes this difference. Use of pixel aspect ratio mostly involves pictures pertaining to standard-definition television and some other exceptional cases. Most other imaging systems, including those that comply with SMPTE standards and practices, use square pixels. PAR is also known as sample aspect ratio and abbreviated SAR, though it can be confused with storage aspect ratio. Introduction The ratio of the width to the height of an image is known as the aspect ratio, or more precisely the display aspect ratio (DAR) – the aspect ratio of the image as displayed; for TV, DAR was traditionally 4:3 (a.k.a. fullscreen), with 16:9 (a.k.a. widescreen) now the standard for HDTV. In digital images, there is a distinction with the storage aspect ratio (SAR), which is the ratio of pixel dimensions. If an image is displayed with square pixels, then these ratios agree; if not, then non-square, "rectangular" pixels are used, and these ratios disagree. The aspect ratio of the pixels themselves is known as the pixel aspect ratio (PAR) – for square pixels this is 1:1 – and these are related by the identity: SAR × PAR = DAR. Rearranging (solving for PAR) yields: PAR = DAR / SAR. For example: A 640 × 480 VGA image has a SAR of 640/480 = 4:3, and if displayed on a 4:3 display (DAR = 4:3) has square pixels, hence a PAR of 1:1. By contrast, a 720 × 576 D-1 PAL image has a SAR of 720/576 = 5:4, but if displayed on a 4:3 display (DAR = 4:3)
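The identity SAR × PAR = DAR makes PAR a one-line computation, and exact rational arithmetic keeps the ratios in their familiar x:y form (the function name is ours):

```python
from fractions import Fraction

def pixel_aspect_ratio(width, height, dar):
    """PAR = DAR / SAR, where SAR = width/height (storage aspect ratio)."""
    sar = Fraction(width, height)
    return dar / sar

# 640x480 VGA on a 4:3 display: square pixels
print(pixel_aspect_ratio(640, 480, Fraction(4, 3)))  # 1
# 720x576 D-1 PAL shown at 4:3: non-square pixels
print(pixel_aspect_ratio(720, 576, Fraction(4, 3)))  # 16/15
```

The PAL case shows why the distinction matters: the stored 5:4 grid must be stretched slightly (each pixel 16/15 as wide as it is tall) to fill a 4:3 screen.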
https://en.wikipedia.org/wiki/Bluebird%20of%20happiness
The symbol of a bluebird as the harbinger of happiness is found in many cultures and may date back thousands of years. Origins of idiom Chinese mythology One of the oldest examples of a blue bird in myth (found on oracle bone inscriptions of the Shang dynasty, 1766–1122 BC) is from pre-modern China, where a blue or green bird (qingniao) was the messenger bird of Xi Wangmu (the 'Queen Mother of the West'), who began life as a fearsome goddess and immortal. By the Tang dynasty (618–906 AD), she had evolved into a Daoist fairy queen and the protector/patron of "singing girls, dead women, novices, nuns, adepts and priestesses...women [who] stood outside the roles prescribed for women in the traditional Chinese family". Depictions of Xi Wangmu often include a bird—the birds in the earliest depictions are difficult to identify, and by the Tang dynasty, most of the birds appear in a circle, often with three legs, as a symbol of the sun. Native American folklore Among some Native Americans, the bluebird has mythological or literary significance. According to the Cochiti tribe, the firstborn son of Sun was named Bluebird. In the tale "The Sun's Children", from Tales of the Cochiti Indians (1932) by Ruth Benedict, the male child of the sun is named Bluebird (Culutiwa). The Navajo identify the mountain bluebird as a spirit in animal form, associated with the rising sun. The "Bluebird Song" is sung to remind tribe members to wake at dawn and rise to greet the sun: The "Bluebird Song" is still performed in social settings, including the nine-day Ye'iibicheii winter Nightway ceremony, where it is the final song, performed just before sunrise of the ceremony's last day. Most O'odham lore associated with the "bluebird" likely refers not to the bluebirds (Sialia) but to the blue grosbeak. European folklore In Russian fairy tales, the blue bird is a symbol of hope. More recently, Anton Denikin has characterized the Ice March of the defeated Volunteer Army in the Russian Civi
https://en.wikipedia.org/wiki/Bradytroph
A bradytroph is a strain of an organism that exhibits slow growth in the absence of an external source of a particular metabolite. This is usually due to a defect in an enzyme required in the metabolic pathway producing this chemical. Such defects are the result of mutations in the genes encoding these enzymes. As the organism can still produce small amounts of the chemical, the mutation is not lethal. In these bradytroph strains, rapid growth occurs when the chemical is present in the cell's growth media and the missing metabolite can be transported into the cell from the external environment. A bradytroph may also be referred to as a "leaky auxotroph". The first usage of "bradytroph" was to describe Escherichia coli mutants partially defective in arginine biosynthesis. Among many other examples of bradytrophic strains of microorganisms are Bacillus subtilis strains with mutations affecting thiamine production and Saccharomyces cerevisiae strains with mutations that impair arginine biosynthesis. See also Autotroph Auxotrophy References Cell biology
https://en.wikipedia.org/wiki/Leaning%20toothpick%20syndrome
In computer programming, leaning toothpick syndrome (LTS) is the situation in which a quoted expression becomes unreadable because it contains a large number of escape characters, usually backslashes ("\"), to avoid delimiter collision. The official Perl documentation introduced the term to wider usage; there, the phrase is used to describe regular expressions that match Unix-style paths, in which the elements are separated by slashes /. The slash is also used as the default regular expression delimiter, so to be used literally in the expression, it must be escaped with a backslash \, leading to frequent escaped slashes represented as \/. If doubled, as in URLs, this yields \/\/ for an escaped //. A similar phenomenon occurs for DOS/Windows paths, where the backslash is used as a path separator, requiring a doubled backslash \\ – this can then be re-escaped for a regular expression inside an escaped string, requiring \\\\ to match a single backslash. In extreme cases, such as a regular expression in an escaped string, matching a Uniform Naming Convention path (which begins \\) requires 8 backslashes \\\\\\\\ due to 2 backslashes each being double-escaped. LTS appears in many programming languages and in many situations, including in patterns that match Uniform Resource Identifiers (URIs) and in programs that output quoted text. Many quines fall into the latter category. Pattern example Consider the following Perl regular expression intended to match URIs that identify files under the pub directory of an FTP site: m/ftp:\/\/[^\/]*\/pub\// Perl, like sed before it, solves this problem by allowing many other characters to be delimiters for a regular expression. For example, the following three examples are equivalent to the expression given above: m{ftp://[^/]*/pub/} m#ftp://[^/]*/pub/# m!ftp://[^/]*/pub/! Or this common translation to convert backslashes to forward slashes: tr/\\/\// may be easier to understand when written like this: tr{\\}{/} Quoted-text
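The same explosion of backslashes is easy to reproduce outside Perl. The Python sketch below (the UNC path is an invented example) shows how a raw string literal, much like Perl's alternative delimiters, halves the number of backslashes needed to match the two literal backslashes at the start of a UNC path:

```python
import re

unc_path = r"\\server\share\pub"  # begins with two literal backslashes

# A regex matching two literal backslashes is \\\\ (each backslash escaped).
pattern_raw = re.compile(r"\\\\")        # raw string: 4 backslashes typed
pattern_plain = re.compile("\\\\\\\\")   # plain string: 8 backslashes typed

# Both literals denote the same regular expression and match the UNC prefix.
assert pattern_raw.pattern == pattern_plain.pattern
assert pattern_raw.match(unc_path) is not None
```

The raw-string form avoids one level of string-literal escaping, which is exactly the level responsible for the 8-backslash worst case described above.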
https://en.wikipedia.org/wiki/Digital%20Himalaya
The Digital Himalaya project was established in December 2000 by Mark Turin, Alan Macfarlane, Sara Shneiderman, and Sarah Harrison. The project's principal goal is to collect and preserve historical multimedia materials relating to the Himalaya, such as photographs, recordings, and journals, and make those resources available over the internet and offline, on external storage media. The project team have digitized older ethnographic collections and data sets that were deteriorating in their analogue formats, in order to protect them and make them available and accessible to originating communities in the Himalayan region and a global community of scholars. The project was founded at the Department of Anthropology of the University of Cambridge, moved to Cornell University in 2002 (when a collaboration with the University of Virginia was initiated), and then back to the University of Cambridge in 2005. From 2011 to 2014, the project was jointly hosted between the University of Cambridge and Yale University. In 2014, the project moved to the University of British Columbia, where it is presently located, and maintains a distant collaboration with Sichuan University. Project Team Digital Himalaya has a team of 9 individuals who work together to develop user-friendly and accessible online resources: Sarah Harrison Daniel Ho Hikmat Khadka Wachiraporn Klungthanaboon Alan Macfarlane Pragyajan Rai (Yalamber) Sara Shneiderman Komintal Thami Mark Turin The project is supported by an active international Advisory Board, including the following individuals: General Sir Sam Cowan Richard Feldman Martin Gaenszle Ann Gammie David Germano Mark Goodridge David Holmberg Michael Hutt Kathryn March Christina Monson Since its establishment, the Digital Himalaya project has benefited from skilled student interns and research assistants in Canada, Nepal, the United Kingdom, and the United States. Funding For the first five years of active developme
https://en.wikipedia.org/wiki/Zellweger%20off-peak
Zellweger is the brand name of an electric switching device, also known as a Ripple Control Receiver, used to control off-peak electrical loads such as water heaters by switching these loads OFF over peak energy use times of the day and switching them ON after peak energy use times of the day, hence the term 'off peak' control. It is an example of carrier current signaling. The Ripple Control Signal is generated at substations owned by Electricity Supply Authorities (as distinct from Electricity Generating Authorities) connected to the High Voltage transmission grid and injected into the Medium Voltage transmission grid at 11 kV, 22 kV, 33 kV and 66 kV, through a Coupling Cell consisting of a tuned L-C circuit (Tuning Coil - Capacitor). The Coupling Cell enables the Ripple Control Frequency to be superimposed on the 50 Hertz (Hz) mains frequency, which propagates into the 415 V 3-phase power distribution lines providing energy to industrial and domestic customers of the Electricity Supply Authority. To avoid problems with other equipment connected to the distribution system (i.e. industrial machinery and domestic appliances), the ripple frequency is selected to be offset from the third harmonic and its multiples, typically starting at 167 Hz and including 217, 317, 425, 750, 1050 and 1650 Hz. The choice of frequency depends upon the density of the load into which the ripple frequency is to be injected and the length of the distribution lines. Power stations transmit a ripple signal on the main transmission lines when off-peak rates start (often around 10 pm). This ripple signal is picked up by the Zellweger receiver, which after a random delay turns the hot water heater on. The signal is often picked up by other equipment, especially audio amplifiers and stereos, and the resulting noise can cause problems with other electrical devices. It is especially audible from ceiling fans running at low speed. Even some telephone lines can pick up the noise. The noise can be particularly obtrusive from some fluorescent ligh
https://en.wikipedia.org/wiki/Knowledge%20Interchange%20Format
Knowledge Interchange Format (KIF) is a computer language designed to enable systems to share and re-use information from knowledge-based systems. KIF is similar to frame languages such as KL-One and LOOM, but unlike such languages its primary role is not as a framework for the expression or use of knowledge but rather for the interchange of knowledge between systems. The designers of KIF likened it to PostScript. PostScript was not designed primarily as a language to store and manipulate documents but rather as an interchange format for systems and devices to share documents. In the same way KIF is meant to facilitate sharing of knowledge across different systems that use different languages, formalisms, platforms, etc. KIF has a declarative semantics. It is meant to describe facts about the world rather than processes or procedures. Knowledge can be described as objects, functions, relations, and rules. It is a formal language, i.e., it can express arbitrary statements in first order logic and can support reasoners that can prove the consistency of a set of KIF statements. KIF also supports non-monotonic reasoning. KIF was created by Michael Genesereth, Richard Fikes and others participating in the DARPA Knowledge Sharing Effort. Although the original KIF group intended to submit the language to a formal standards body, that did not occur. A later version called Common Logic has since been developed for submission to ISO and has been approved and published. A variant called SUO-KIF is the language in which the Suggested Upper Merged Ontology is written. A practical application of the Knowledge Interchange Format is as an agent communication language in a multi-agent system. See also Knowledge Query and Manipulation Language References External links Knowledge Interchange Format page at the Stanford AI Lab Common Logic Knowledge representation languages Ontology (information science) Logic in computer science
https://en.wikipedia.org/wiki/VOMS
VOMS is an acronym used for Virtual Organization Membership Service in grid computing. It is structured as a simple account database with fixed formats for the information exchange and features single login, expiration time, backward compatibility, and multiple virtual organizations. The database is manipulated by authorization data that defines specific capabilities and roles for users. Administrative tools can be used by administrators to assign roles and capability information in the database. A command-line tool allows users to generate a local proxy credential based on the contents of the VOMS database. This credential includes the basic authentication information that standard Grid proxy credentials contain, but it also includes role and capability information from the VOMS server. VOMS-aware applications can use the VOMS data to make authentication decisions regarding user requests. VOMS was originally developed by the European DataGrid and Enabling Grids for E-sciencE projects and is now maintained by the Italian National Institute for Nuclear Physics (INFN). VOMS is also an acronym for VOucher Management System used for providing recharge management services for Prepaid Systems of Telecom Service Providers. Typically external Voucher Management Systems are used with Intelligent Network based prepaid systems. See also Shibboleth References External links VOMS The VOMS website The VOMS Attribute Certificate Format standard from Open Grid Forum. INFN The Italian National Institute for Nuclear Physics Grid computing Computer access control
https://en.wikipedia.org/wiki/Polarization%20of%20an%20algebraic%20form
In mathematics, in particular in algebra, polarization is a technique for expressing a homogeneous polynomial in a simpler fashion by adjoining more variables. Specifically, given a homogeneous polynomial, polarization produces a unique symmetric multilinear form from which the original polynomial can be recovered by evaluating along a certain diagonal. Although the technique is deceptively simple, it has applications in many areas of abstract mathematics: in particular to algebraic geometry, invariant theory, and representation theory. Polarization and related techniques form the foundations for Weyl's invariant theory. The technique The fundamental ideas are as follows. Let f(u) be a polynomial in n variables u = (u_1, ..., u_n). Suppose that f is homogeneous of degree d, which means that f(t u) = t^d f(u) for every scalar t. Let u^(1), ..., u^(d) be a collection of indeterminates with u^(i) = (u^(i)_1, ..., u^(i)_n), so that there are dn variables altogether. The polar form of f is a polynomial F(u^(1), ..., u^(d)) which is linear separately in each u^(i) (that is, F is multilinear), symmetric in the u^(i), and such that F(u, ..., u) = f(u). The polar form of f is given by the following construction: F(u^(1), ..., u^(d)) is (1/d!) times the coefficient of λ_1 λ_2 ... λ_d in the expansion of f(λ_1 u^(1) + ... + λ_d u^(d)). In other words, F is a constant multiple of the coefficient of λ_1 λ_2 ... λ_d in that expansion. Examples A quadratic example. Suppose that u = (x, y) and f is the quadratic form f(x, y) = x^2 + 3xy + 2y^2. Then the polarization of f is a function in u = (u_1, u_2) and v = (v_1, v_2) given by F(u, v) = u_1 v_1 + (3/2)(u_1 v_2 + u_2 v_1) + 2 u_2 v_2. More generally, if f is any quadratic form then the polarization of f agrees with the conclusion of the polarization identity. A cubic example. Let f(x, y) = x^3 + 2xy^2. Then the polarization of f is given by F(u, v, w) = u_1 v_1 w_1 + (2/3)(u_1 v_2 w_2 + u_2 v_1 w_2 + u_2 v_2 w_1). Mathematical details and consequences The polarization of a homogeneous polynomial of degree d is valid over any commutative ring in which d! is a unit. In particular, it holds over any field of characteristic zero or whose characteristic is strictly greater than d. The polarization isomorphism (by degree) For simplicity, let k be a field of characteristic zero and let A = k[x_1, ..., x_n] be the polynomial ring in n variables over k. Then A is graded by degree. The polarization of algebraic forms then induces an isomorphism of vector spaces in
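The recovery of a form from its polar form can be checked numerically. The sketch below uses a sample quadratic form chosen only for illustration, together with the degree-2 polarization identity B(u, v) = (f(u + v) - f(u) - f(v)) / 2:

```python
def f(x, y):
    # sample quadratic form (chosen only for illustration)
    return x**2 + 3*x*y + 2*y**2

def polar(u, v):
    # degree-2 polarization identity:
    # B(u, v) = (f(u + v) - f(u) - f(v)) / 2
    return (f(u[0] + v[0], u[1] + v[1]) - f(*u) - f(*v)) / 2

# B is symmetric, and evaluating on the diagonal recovers f
assert polar((1, 2), (1, 2)) == f(1, 2)
assert polar((1, 0), (0, 1)) == polar((0, 1), (1, 0))
```

For higher degrees the identity above no longer applies directly; the general construction extracts the coefficient of the product of auxiliary scalars, but the diagonal-recovery property checked here is the same.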
https://en.wikipedia.org/wiki/BLOSUM
In bioinformatics, the BLOSUM (BLOcks SUbstitution Matrix) matrix is a substitution matrix used for sequence alignment of proteins. BLOSUM matrices are used to score alignments between evolutionarily divergent protein sequences. They are based on local alignments. BLOSUM matrices were first introduced in a paper by Steven Henikoff and Jorja Henikoff. They scanned the BLOCKS database for very conserved regions of protein families (that do not have gaps in the sequence alignment) and then counted the relative frequencies of amino acids and their substitution probabilities. Then, they calculated a log-odds score for each of the 210 possible substitution pairs of the 20 standard amino acids. All BLOSUM matrices are based on observed alignments; they are not extrapolated from comparisons of closely related proteins like the PAM Matrices. Biological background The genetic instructions of every replicating cell in a living organism are contained within its DNA. Throughout the cell's lifetime, this information is transcribed and replicated by cellular mechanisms to produce proteins or to provide instructions for daughter cells during cell division, and the possibility exists that the DNA may be altered during these processes. This is known as a mutation. At the molecular level, there are regulatory systems that correct most — but not all — of these changes to the DNA before it is replicated. The functionality of a protein is highly dependent on its structure. Changing a single amino acid in a protein may reduce its ability to carry out this function, or the mutation may even change the function that the protein carries out. Changes like these may severely impact a crucial function in a cell, potentially causing the cell — and in extreme cases, the organism — to die. Conversely, the change may allow the cell to continue functioning albeit differently, and the mutation can be passed on to the organism's offspring. If this change does not result in any significant physical
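The log-odds step described above can be sketched in a few lines of Python. The frequencies below are invented placeholders, not values from the BLOCKS database; the point is only the shape of the computation (a scaled, rounded base-2 log of observed over expected frequency, where a scale of 2 gives the half-bit units commonly used for BLOSUM matrices):

```python
import math

def log_odds_score(p_observed, p_expected, scale=2.0):
    # score = round(scale * log2(observed / expected));
    # scale=2 yields half-bit units, as in published BLOSUM matrices
    return round(scale * math.log2(p_observed / p_expected))

# a pair observed twice as often as chance predicts scores positive
assert log_odds_score(0.02, 0.01) == 2
# a pair observed four times less often than expected scores negative
assert log_odds_score(0.0025, 0.01) == -4
```

Positive entries thus mark substitutions seen more often than chance in the conserved blocks, negative entries substitutions seen less often.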
https://en.wikipedia.org/wiki/Substitution%20tiling
In geometry, a tile substitution is a method for constructing highly ordered tilings. Most importantly, some tile substitutions generate aperiodic tilings, which are tilings whose prototiles do not admit any tiling with translational symmetry. The most famous of these are the Penrose tilings. Substitution tilings are special cases of finite subdivision rules, which do not require the tiles to be geometrically rigid. Introduction A tile substitution is described by a set of prototiles (tile shapes) , an expanding map and a dissection rule showing how to dissect the expanded prototiles to form copies of some prototiles . Intuitively, higher and higher iterations of tile substitution produce a tiling of the plane called a substitution tiling. Some substitution tilings are periodic, defined as having translational symmetry. Every substitution tiling (up to mild conditions) can be "enforced by matching rules"—that is, there exist a set of marked tiles that can only form exactly the substitution tilings generated by the system. The tilings by these marked tiles are necessarily aperiodic. A simple example that produces a periodic tiling has only one prototile, namely a square: By iterating this tile substitution, larger and larger regions of the plane are covered with a square grid. A more sophisticated example with two prototiles is shown below, with the two steps of blowing up and dissecting merged into one step. One may intuitively get an idea how this procedure yields a substitution tiling of the entire plane. A mathematically rigorous definition is given below. Substitution tilings are notably useful as ways of defining aperiodic tilings, which are objects of interest in many fields of mathematics, including automata theory, combinatorics, discrete geometry, dynamical systems, group theory, harmonic analysis and number theory, as well as crystallography and chemistry. In particular, the celebrated Penrose tiling is an example of an aperiodic substitution
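A one-dimensional analogue makes the iteration concrete. The sketch below applies the Fibonacci substitution a → ab, b → a, a standard example of an aperiodic substitution rule (not one of the planar examples discussed above); iterating it produces ever-longer prefixes of the aperiodic Fibonacci word, which can be read as a tiling of the half-line by two interval types:

```python
def substitute(word, rules):
    # replace every letter by its substitution image
    return "".join(rules[c] for c in word)

rules = {"a": "ab", "b": "a"}  # the Fibonacci substitution
word = "a"
lengths = []
for _ in range(6):
    word = substitute(word, rules)
    lengths.append(len(word))

# lengths grow as Fibonacci numbers, and each iterate is a prefix
# of the next, so the iteration defines a unique infinite word
assert lengths == [2, 3, 5, 8, 13, 21]
assert substitute(word, rules).startswith(word)
```

The planar case works the same way, with the expanding map and dissection rule replacing simple string rewriting.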
https://en.wikipedia.org/wiki/Between%20Silk%20and%20Cyanide
Between Silk and Cyanide: A Codemaker's War 1941–1945 is a memoir by former Special Operations Executive (SOE) cryptographer Leo Marks, describing his work and memorable events, actions and omissions of his colleagues during the Second World War. It was first published in 1998. Date The book was written in the early 1980s. It was published, with UK Government approval, in 1998. Title The title is derived from an incident related in the book, when Marks was asked why agents in occupied Europe should have their cryptographic material printed on silk (which was in very short supply). He summed his reply up by saying that it was "between silk and cyanide", meaning that it was a choice between the agent's surviving by making reliable coded radio transmissions with the help of the printed silk, and having to take a suicide pill to avoid being tortured into revealing the code and other secret information. Unlike paper, which is given away by rustling, silk is not detected by a casual or typical body search if concealed in the lining of clothing. SOE A major theme is Marks's inability to convince his superiors in the Special Operations Executive (SOE) that apparent mistakes in radio transmissions from agents working with, or in roles similar to, the Dutch resistance were their prearranged duress codes, which it later transpired they were, just as he had alleged; the fact haunted him. SOE management, unwilling to face the possibility that their Dutch network was compromised, insisted that the errors were attributable to poor operation by the recently trained Morse code operators and continued to parachute in new agents to sites prearranged with the compromised network, leading to their immediate capture and later execution on the orders of the Nazi German command. Marks's interest in cryptography arose from reading Edgar Allan Poe's The Gold-Bug as a child. His father Benjamin was a partner in the bookshop Marks & Co at 84 Charing Cross Road. As a boy,
https://en.wikipedia.org/wiki/Spectral%20splatter
In radio electronics or acoustics, spectral splatter (also called switch noise) refers to spurious emissions that result from an abrupt change in the transmitted signal, usually when transmission is started or stopped. For example, a device transmitting a sine wave produces a single peak in the frequency spectrum; however, if the device abruptly starts or stops transmitting this sine wave, it will emit noise at frequencies other than the frequency of the sine wave. This noise is known as spectral splatter. When the signal is represented in the time domain, an abrupt change may not be visually apparent; in the frequency domain, however, the abrupt change causes the appearance of spikes at various frequencies. A sharper change in the time domain usually results in more spikes or stronger spikes in the frequency domain. Spectral splatter can thus be reduced by making the change more smooth. Controlling the power ramp shape (i.e. the way in which the signal increases ("power-on ramp") or falls off ("power-down ramp")) can help reduce the splatter. In some cases one can use a filter to remove unwanted emissions. Note that a completely abrupt change (in the mathematical sense) is not possible in physical reality; the change is always somewhat smoothed naturally, for example due to the capacitance (in electronics) or inertia (in acoustics) of the components involved. In radio electronics, the need to minimize spectral splatter arises because signals are usually required by government regulations to be contained in a particular frequency band, defined by a spectral mask. Spectral splatter can cause emissions that violate this mask. See also Gibbs phenomenon Radio electronics Acoustics
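The effect is easy to demonstrate numerically. In the NumPy sketch below (sample rate, tone frequency and band edges are arbitrary choices), a 1 kHz tone burst is gated once with an abrupt rectangular envelope and once with a smooth raised-cosine (Hann) ramp; the abrupt version leaves noticeably more energy outside the tone's frequency band:

```python
import numpy as np

fs = 8000                      # sample rate, Hz
n = fs                         # one second of samples
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 1000 * t)

# abrupt envelope: transmission switched on and off instantly
abrupt = np.zeros(n)
abrupt[n // 4 : 3 * n // 4] = 1.0

# smooth envelope: Hann window ramps the amplitude up and down
smooth = np.zeros(n)
smooth[n // 4 : 3 * n // 4] = np.hanning(n // 2)

def out_of_band_fraction(signal, lo=900.0, hi=1100.0):
    # fraction of spectral magnitude outside the [lo, hi] Hz band
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    in_band = (freqs >= lo) & (freqs <= hi)
    return spectrum[~in_band].sum() / spectrum.sum()

# the abrupt gate "splatters" more energy across the spectrum
assert out_of_band_fraction(tone * abrupt) > out_of_band_fraction(tone * smooth)
```

This mirrors the power-ramp shaping described above: the gentler the envelope's transitions in the time domain, the faster its sidelobes fall off in the frequency domain.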
https://en.wikipedia.org/wiki/234%20%28number%29
234 (two hundred [and] thirty-four) is the integer following 233 and preceding 235. Additionally: 234 is a practical number. There are 234 ways of grouping six children into rings of at least two children with one child at the center of each ring. References Integers
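The claim that 234 is a practical number can be verified directly from the definition: every positive integer below it must be expressible as a sum of distinct divisors. A brute-force check in Python (a sketch, not an efficient algorithm):

```python
def is_practical(n):
    # n is practical if every m in 1..n-1 is a sum of distinct divisors of n
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    reachable = {0}                       # subset sums of the divisors so far
    for d in divisors:
        reachable |= {r + d for r in reachable}
    return all(m in reachable for m in range(1, n))

assert is_practical(234)
assert not is_practical(10)   # 4 is not a sum of distinct divisors of 10
```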
https://en.wikipedia.org/wiki/Retrofitting
Retrofitting is the addition of new technology or features to older systems. Retrofits happen for a number of reasons: for example, with big capital expenditures like naval vessels, military equipment or manufacturing plants, businesses or governments may retrofit in order to avoid replacing a system entirely. Other retrofits may be driven by changing codes or requirements, such as seismic retrofits, which are designed to strengthen older buildings in order to make them earthquake resistant. Retrofitting is also an important part of climate change mitigation and climate change adaptation, because society invested in built infrastructure, housing and other systems before the magnitude of the changes anticipated from climate change was understood. Retrofits to increase building efficiency, for example, both help reduce the overall negative impacts of climate change by reducing building emissions and environmental impacts and allow the building to remain healthier during extreme weather events. Retrofitting is also part of a circular economy, reducing the amount of newly manufactured goods and thus reducing lifecycle emissions and environmental impacts. In different contexts Building efficiency and greening Manufacturing Principally, retrofitting describes the measures taken in the manufacturing industry to allow new or updated parts to be fitted to old or outdated assemblies (like blades to wind turbines). Retrofit parts are necessary for manufacture when the design of a large assembly is changed or revised. If, after the changes have been implemented, a customer (with an old version of the product) wishes to purchase a replacement part, then retrofit parts and assembly techniques will have to be used so that the revised parts will fit suitably onto the older assembly. Retrofitting is an important process used for valves and actuators to ensure optimal operation of an industrial plant. One example is retrofitting a 3-way valve into a 2-way valve, which results i
https://en.wikipedia.org/wiki/Flood%20bypass
A flood bypass is a region of land or a large man-made structure that is designed to convey excess flood waters from a river or stream in order to reduce the risk of flooding on the natural river or stream near a key point of interest, such as a city. Flood bypasses, sometimes called floodways, often have man-made diversion works, such as diversion weirs and spillways, at their head or point of origin. The main body of a flood bypass is often a natural flood plain. Many flood bypasses are designed to carry enough water such that combined flows down the original river or stream and flood bypass will not exceed the expected maximum flood flow of the river or stream. Flood bypasses are typically used only during major floods and act in a similar nature to a detention basin. Since the area of a flood bypass is significantly larger than the cross-sectional area of the original river or stream channel from which water is diverted, the velocity of water in a flood bypass will be significantly lower than the velocity of the flood water in the original system. These low velocities often cause increased sediment deposition in the flood bypass, thus it is important to incorporate a maintenance program for the entire flood bypass system when it is not being actively used during a flood operation. When not being used to convey water, flood bypasses are sometimes used for agricultural or environmental purposes. The land is often owned by a public authority and then rented to farmers or ranchers, who in turn plant crops or herd livestock that feed off the flood plain. Since the flood bypass is subjected to sedimentation during flood events, the land is often very productive and even a loss of crops due to flooding can sometimes be recovered due to the high yield of the land during the non-flood periods. Examples Bonnet Carré Spillway Eastside Bypass Fargo-Moorhead Area Diversion Project Yolo Bypass Hydraulic engineering Hydrology Flood control
https://en.wikipedia.org/wiki/LM13700
The LM13700 is an integrated circuit consisting of two current-controlled operational transconductance amplifiers (OTAs), each having differential inputs and a push-pull output. Like a standard op-amp, each OTA has a pair of differential inputs and a single output, but an OTA takes a voltage in and delivers a current out, rather than voltage in and voltage out, and its gain is programmable via the IABC pin. Linearizing diodes at the input reduce distortion and allow increased input levels. The Darlington output buffers provided are specifically designed to complement the wide dynamic range of the OTA. This chip is very useful in audio electronics, especially in analog synthesizer circuits such as voltage-controlled oscillators, voltage-controlled filters, and voltage-controlled amplifiers. The Darlington output buffers on the LM13700 differ from those on the LM13600 in that their bias currents (and hence their output DC levels) are independent of the IABC pin. This may result in performance superior to that of the LM13600 in audio applications. See also Transconductance Operational amplifier Transconductance amplifier current mirror References External links A Short Discussion of the Operational Transconductance Amplifier The LM13600/LM13700 Story Linear integrated circuits
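The behaviour of one OTA channel can be sketched with the textbook large-signal model of a bipolar differential pair (an idealization, not the datasheet's characterization, and ignoring the linearizing diodes): the output current follows a tanh of the differential input voltage and saturates at the programmed bias current, which is what makes voltage-controlled gain possible.

```python
import math

VT = 0.026  # thermal voltage, roughly 26 mV at room temperature

def ota_output_current(v_diff, i_abc):
    # idealized OTA: Iout = Iabc * tanh(Vdiff / (2*VT)),
    # so the gain is set electronically by the bias current Iabc
    return i_abc * math.tanh(v_diff / (2 * VT))

i_abc = 1e-3  # 1 mA control current (illustrative value)

# small-signal transconductance is approximately Iabc / (2*VT)
gm = (ota_output_current(1e-6, i_abc) - ota_output_current(-1e-6, i_abc)) / 2e-6
assert abs(gm - i_abc / (2 * VT)) / gm < 1e-3

# for large inputs the output current clips near +/- Iabc
assert abs(ota_output_current(0.5, i_abc)) < i_abc
```

In a voltage-controlled amplifier application, a control voltage sets i_abc and thereby scales the transconductance, and hence the signal gain, over a wide range.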
https://en.wikipedia.org/wiki/Real%20analytic%20Eisenstein%20series
In mathematics, the simplest real analytic Eisenstein series is a special function of two variables. It is used in the representation theory of SL(2,R) and in analytic number theory. It is closely related to the Epstein zeta function. There are many generalizations associated to more complicated groups. Definition The Eisenstein series E(z, s) for z = x + iy in the upper half-plane is defined by E(z, s) = (1/2) Σ y^s / |mz + n|^(2s) for Re(s) > 1, and by analytic continuation for other values of the complex number s. The sum is over all pairs (m, n) of coprime integers. Warning: there are several other slightly different definitions. Some authors omit the factor of ½, and some sum over all pairs of integers that are not both zero; which changes the function by a factor of ζ(2s). Properties As a function on z Viewed as a function of z, E(z, s) is a real-analytic eigenfunction of the Laplace operator on H with the eigenvalue s(s−1). In other words, it satisfies the elliptic partial differential equation ΔE(z, s) = s(s−1)E(z, s), where Δ = y^2 (∂²/∂x² + ∂²/∂y²) is the Laplace operator on H. The function E(z, s) is invariant under the action of SL(2,Z) on z in the upper half plane by fractional linear transformations. Together with the previous property, this means that the Eisenstein series is a Maass form, a real-analytic analogue of a classical elliptic modular function. Warning: E(z, s) is not a square-integrable function of z with respect to the invariant Riemannian metric on H. As a function on s The Eisenstein series converges for Re(s) > 1, but can be analytically continued to a meromorphic function of s on the entire complex plane, with, in the half-plane Re(s) ≥ 1/2, a unique pole of residue 3/π at s = 1 (for all z in H) and infinitely many poles in the strip 0 < Re(s) < 1/2 at s = ρ/2, where ρ corresponds to a non-trivial zero of the Riemann zeta-function. The constant term of the pole at s = 1 is described by the Kronecker limit formula. The modified function E*(z, s) = π^(−s) Γ(s) ζ(2s) E(z, s) satisfies the functional equation E*(z, s) = E*(z, 1 − s), analogous to the functional equation for the Riemann zeta function ζ(s). Scalar produ
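The normalization warning can be checked numerically: summing y^s / |mz + n|^(2s) over all non-zero integer pairs should exceed the coprime-pair sum by exactly the factor ζ(2s). A truncated computation in Python (the truncation bound N and test point z = i, s = 2 are ad hoc choices):

```python
import math

def partial_sums(x, y, s, N):
    # truncated sums of y^s / |m*z + n|^(2s) for z = x + i*y,
    # over coprime pairs and over all non-zero integer pairs
    coprime = 0.0
    allpairs = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if (m, n) == (0, 0):
                continue
            term = y**s / ((m * x + n) ** 2 + (m * y) ** 2) ** s
            allpairs += term
            if math.gcd(m, n) == 1:
                coprime += term
    return coprime, allpairs

coprime, allpairs = partial_sums(0.0, 1.0, 2, 120)  # z = i, s = 2
zeta4 = math.pi ** 4 / 90
# all-pairs sum is approximately zeta(2s) times the coprime sum (2s = 4 here)
assert abs(allpairs / coprime - zeta4) < 1e-3
```

Each non-zero pair factors uniquely as d times a coprime pair, which is where the ζ(2s) ratio comes from; the truncation error decays quickly here because the terms fall off like the fourth power of the lattice distance.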
https://en.wikipedia.org/wiki/Miscellaneous%20Technical
Miscellaneous Technical is a Unicode block ranging from U+2300 to U+23FF, which contains various common symbols related to and used in technical, programming, and academic professions. For example: Symbol ⌂ (HTML hexadecimal code &#x2302;) represents a house or a home. Symbol ⌘ (&#x2318;) is a "place of interest" sign. It may be used to represent the Command key on a Mac keyboard. Symbol ⌚ (&#x231A;) is a watch (or clock). Symbol ⏏ (&#x23CF;) is the "Eject" button symbol found on electronic equipment. Symbol ⏚ (&#x23DA;) is the "Earth Ground" symbol found in electrical and electronic manuals and on tags and equipment. The block also includes most of the uncommon symbols used by the APL programming language. Miscellaneous Technical (2300–23FF) in Unicode In Unicode, Miscellaneous Technical symbols are placed in the hexadecimal range 0x2300–0x23FF (decimal 8960–9215), as described below. (2300–233F) Note: Unicode code points U+2329 and U+232A are deprecated. (2340–237F) (2380–23BF) (23C0–23FF) Block Emoji The Miscellaneous Technical block contains eighteen emoji: U+231A–U+231B, U+2328, U+23CF, U+23E9–U+23F3 and U+23F8–U+23FA. All of these characters have standardized variants defined, to specify emoji-style (U+FE0F VS16) or text presentation (U+FE0E VS15) for each character, for a total of 36 variants. History The following Unicode-related documents record the purpose and process of defining specific characters in the Miscellaneous Technical block: See also Unicode mathematical operators and symbols Unicode symbols Media control symbols References Symbols Unicode blocks
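The code-point facts above can be verified from any language's Unicode support. A Python sketch:

```python
# characters from the Miscellaneous Technical block (U+2300..U+23FF)
for char, expected in [("⌂", 0x2302), ("⌘", 0x2318), ("⌚", 0x231A), ("⏏", 0x23CF)]:
    assert ord(char) == expected
    assert 0x2300 <= ord(char) <= 0x23FF  # inside the block's range

# the block spans 256 code points (decimal 8960..9215)
assert 0x23FF - 0x2300 + 1 == 256
assert (0x2300, 0x23FF) == (8960, 9215)

# emoji-style presentation: base character followed by U+FE0F (VS16)
watch_emoji = "\u231A\uFE0F"
assert len(watch_emoji) == 2 and watch_emoji[0] == "⌚"
```

The variation-selector pair at the end shows why each of the eighteen emoji has two standardized variants: the same base code point followed by VS16 (emoji style) or VS15 (text style).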
https://en.wikipedia.org/wiki/Eclipse%20process%20framework
The Eclipse process framework (EPF) is an open source project that is managed by the Eclipse Foundation. It lies under the top-level Eclipse Technology Project, and has two goals: To provide an extensible framework and exemplary tools for software process engineering - method and process authoring, library management, configuring and publishing a process. To provide exemplary and extensible process content for a range of software development and management processes supporting iterative, agile, and incremental development, and applicable to a broad set of development platforms and applications. For instance, EPF provides the OpenUP, an agile software development process optimized for small projects. By using EPF Composer, engineers can create their own software development process by structuring it using a predefined schema. This schema is an evolution of the SPEM 1.1 OMG specification referred to as the unified method architecture (UMA). Major parts of UMA went into the adopted revision of SPEM, SPEM 2.0. EPF is aiming to fully support SPEM 2.0 in the near future. The UMA and SPEM schemata support the organization of large amounts of descriptions for development methods and processes. Such method content and processes do not have to be limited to software engineering, but can also cover other design and engineering disciplines, such as mechanical engineering, business transformation, and sales cycles. IBM supplies a commercial version, IBM Rational Method Composer. Limitations The "content variability" capability severely limits users to one-to-one mappings. Processes trying to integrate various aspects may require block-copy-paste style clones to work around this limitation. This may be a limitation of the SPEM model, and might be based on the presumption that agile methods are being described, as these methods tend not to have deep dependencies. See also Meta-process modeling References External links Eclipse Process Framework site Open content Eclipse (soft
https://en.wikipedia.org/wiki/Honda%20P%20series
The P series is a series of prototype humanoid robots developed by Honda between 1993 and 2000. They were preceded by the Honda E series (whose development was not revealed to the public at the time) and followed by the ASIMO series, then the world's most advanced humanoid robots. Honda Motor's then President and CEO, Hiroyuki Yoshino, described Honda's humanoid robotics program as consistent with its direction to enhance human mobility. History Work to develop an advanced humanoid robot began in 1986, when Honda established a research center focused on fundamental technologies, including humanoid robotics. Honda engineers had to research how humans walk, using the human skeleton for reference to create a replica and have it function like a human being. In 1986, the first two-legged robot was made to walk; Honda engineers used it to establish stable walking technology, including walking on steps and sloped surfaces. In 1993, Honda began developing "Prototype" models (the "P" series), attaching the legs to a torso with arms that could perform basic tasks. P2, the second prototype model, debuted in December 1996; its use of wireless technology made it the first self-regulating, two-legged walking robot. P2 weighed 463 pounds and stood six feet tall. In September 1997, P3 was introduced as the first completely independent bipedal humanoid walking robot, standing five feet, four inches tall and weighing 287 pounds. Features Honda engineers determined a robot should be easy to operate and small in size, enabling it to help people—for instance, to look eye to eye with someone sitting in a chair. ASIMO can be controlled by a portable controller, whereas P3 was controlled from a workstation. P1 developed in 1993 P2 unveiled in 1996 P3 unveiled in 1997 P4 developed in 2000 Notes: 1. – The P1 was developed in 1993 but was not unveiled and Honda kept its existence a secret until the announcement of the P2 in 1996. 2. – The P4 was developed in 2000 and originally unveiled
https://en.wikipedia.org/wiki/Hypervariable%20region
A hypervariable region (HVR) is a location within nuclear DNA or the D-loop of mitochondrial DNA in which base pairs of nucleotides repeat (in the case of nuclear DNA) or have substitutions (in the case of mitochondrial DNA). Changes or repeats in the hypervariable region are highly polymorphic. Mitochondrial There are two mitochondrial hypervariable regions used in human mitochondrial genealogical DNA testing. HVR1 is considered a "low resolution" region and HVR2 is considered a "high resolution" region. Getting HVR1 and HVR2 DNA tests can help determine one's haplogroup. In the revised Cambridge Reference Sequence of the human mitogenome, the most variable sites of HVR1 are numbered 16024-16383 (this subsequence is called HVR-I), and the most variable sites of HVR2 are numbered 57-372 (i.e., HVR-II) and 438-574 (i.e., HVR-III). In some bony fishes, for example certain Protacanthopterygii and Gadidae, the mitochondrial control region evolves remarkably slowly. Even functional mitochondrial genes accumulate mutations faster and more freely. It is not known whether such hypovariable control regions are more widespread. In the Ayu (Plecoglossus altivelis), an East Asian protacanthopterygian, control region mutation rate is not markedly lowered, but sequence differences between subspecies are far lower in the control region than elsewhere. This phenomenon completely defies explanation at present. Antibodies In antibodies, hypervariable regions form the antigen-binding site and are found on both light and heavy chains. They also contribute to the specificity of each antibody. In a variable domain, the 3 HV segments of each heavy or light chain fold together at the N-terminus to form an antigen binding pocket. See also Cambridge Reference Sequence Genealogical DNA test Human mitochondrial DNA haplogroup mtDNA control region References External links DNA: Forensic and Legal Applications, Explanation of Hypervariable Regions Genetic genealogy Genetic engine
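The rCRS coordinate ranges cited above can be applied directly to slice the hypervariable regions out of an rCRS-aligned mitochondrial sequence. A minimal Python sketch, assuming 1-based inclusive coordinates and using a dummy placeholder sequence rather than real mtDNA data:

```python
# Extract the hypervariable regions from an rCRS-aligned mtDNA sequence.
# The coordinates (1-based, inclusive) are the rCRS positions cited above;
# the sequence below is a placeholder, not real data.

HVR_COORDS = {
    "HVR-I": (16024, 16383),
    "HVR-II": (57, 372),
    "HVR-III": (438, 574),
}

def extract_hvrs(mtdna: str) -> dict:
    """Slice out each hypervariable region using 1-based rCRS coordinates."""
    return {name: mtdna[start - 1:end] for name, (start, end) in HVR_COORDS.items()}

# A dummy 16,569-base sequence (the rCRS length), for illustration only.
mtdna = "A" * 16569
regions = extract_hvrs(mtdna)
print({name: len(seq) for name, seq in regions.items()})
# region lengths: HVR-I = 360, HVR-II = 316, HVR-III = 137
```

The region lengths (360, 316 and 137 bases) follow directly from the inclusive coordinate ranges given in the text.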
https://en.wikipedia.org/wiki/Ecoprovince
An ecoprovince is a biogeographic unit smaller than an ecozone that contains one or more ecoregions. According to Demarchi (1996), an ecoprovince encompasses areas of uniform climate, geological history and physiography (i.e. mountain ranges, large valleys, plateaus). Their size and broad internal uniformity make them ideal units for the implementation of natural resource policies. See also Bioregion Ecological land classification References Biogeography Ecology terminology Ecoregions
https://en.wikipedia.org/wiki/Microsoft%20Exchange%20Hosted%20Services
Microsoft Exchange Hosted Services, also known as FrontBridge, is an email filtering system owned by Microsoft. It was acquired in 2005 from FrontBridge Inc. FrontBridge Technologies began in 2000 as Bigfish Communications in Marina del Rey, California. The service is sold directly and through a partnership with Sprint Nextel. On 30 March 2006, Microsoft announced new branding, a new licensing model and the road map for Microsoft Exchange Hosted Services (EHS), formerly known as FrontBridge Technologies Inc. With Microsoft Exchange Hosted Services (EHS), four new products were introduced. EHS Filtering Filtering was designed to actively help protect inbound and outbound e-mail from spam, viruses, phishing scams and e-mail policy violations. EHS Archive A message archiving system for e-mail and instant messages. EHS Continuity A security-enhanced Web interface that allowed ongoing access to e-mail during and after unplanned outages of an on-premises e-mail environment. EHS Encryption Preserves e-mail confidentiality by allowing users to send and receive encrypted e-mail. See also Microsoft Forefront Online Protection for Exchange Hosted desktop References Email Spam filtering Anti-spam Communication software
https://en.wikipedia.org/wiki/First-order%20hold
First-order hold (FOH) is a mathematical model of the practical reconstruction of sampled signals that could be done by a conventional digital-to-analog converter (DAC) and an analog circuit called an integrator. For FOH, the signal is reconstructed as a piecewise linear approximation to the original signal that was sampled. A mathematical model such as FOH (or, more commonly, the zero-order hold) is necessary because, in the sampling and reconstruction theorem, a sequence of Dirac impulses, x_s(t), representing the discrete samples, x(nT), is low-pass filtered to recover the original signal that was sampled, x(t). However, outputting a sequence of Dirac impulses is impractical. Devices can be implemented, using a conventional DAC and some linear analog circuitry, to reconstruct the piecewise linear output for either predictive or delayed FOH. Even though this is not what is physically done, an identical output can be generated by applying the hypothetical sequence of Dirac impulses, x_s(t), to a linear time-invariant system, otherwise known as a linear filter, with such characteristics (which, for an LTI system, are fully described by the impulse response) that each input impulse results in the correct piecewise linear function in the output. Basic first-order hold First-order hold is the hypothetical filter or LTI system that converts the ideally sampled signal x_s(t) = T \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT) to the piecewise linear signal x_{\mathrm{FOH}}(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{tri}\!\left(\frac{t - nT}{T}\right), resulting in an effective impulse response of h_{\mathrm{FOH}}(t) = \frac{1}{T}\,\mathrm{tri}\!\left(\frac{t}{T}\right), where \mathrm{tri}(u) is the triangular function. The effective frequency response is the continuous Fourier transform of the impulse response: H_{\mathrm{FOH}}(f) = \mathcal{F}\{h_{\mathrm{FOH}}(t)\} = \mathrm{sinc}^2(fT), where \mathrm{sinc} is the normalized sinc function. The Laplace transform transfer function of FOH is found by substituting s = i 2 \pi f: H_{\mathrm{FOH}}(s) = \mathcal{L}\{h_{\mathrm{FOH}}(t)\} = e^{sT} \left(\frac{1 - e^{-sT}}{sT}\right)^2. This is an acausal system in that the linear interpolation function moves toward the value of the next sample before such sample is applied to the hypothetical FOH filter. Delayed first-order ho
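The piecewise linear reconstruction described above can be sketched numerically: summing one triangular pulse per sample reproduces the linear interpolant of the samples. A minimal Python sketch of the basic (acausal, zero-delay) form; the sample values are illustrative:

```python
# A minimal sketch of basic (acausal) FOH reconstruction: each sample x(nT)
# contributes a triangular pulse tri((t - nT)/T), and the pulses sum to the
# linear interpolant of the samples. Sample values here are illustrative.

def tri(u):
    """Normalized triangular function: 1 - |u| for |u| <= 1, else 0."""
    return max(0.0, 1.0 - abs(u))

def foh_reconstruct(samples, T, t):
    """Evaluate the FOH output sum_n x(nT) * tri((t - nT)/T) at time t."""
    return sum(x_n * tri((t - n * T) / T) for n, x_n in enumerate(samples))

samples = [0.0, 1.0, 0.5, 2.0]   # x(0), x(T), x(2T), x(3T)
T = 1.0

# At sample instants the FOH output equals the samples exactly...
assert foh_reconstruct(samples, T, 2.0) == 0.5
# ...and halfway between samples it is the average of the two neighbours.
assert foh_reconstruct(samples, T, 1.5) == (1.0 + 0.5) / 2
```

Because only the two pulses straddling t are nonzero there, the sum reduces to ordinary linear interpolation between adjacent samples, which is exactly the acausal behaviour noted above.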
https://en.wikipedia.org/wiki/Isomorph
An isomorph is an organism that does not change in shape during growth. The implication is that its volume is proportional to its cubed length, and its surface area to its squared length. This holds for any shape it might have; the actual shape determines the proportionality constants. The reason why the concept is important in the context of the Dynamic Energy Budget (DEB) theory is that food (substrate) uptake is proportional to surface area, and maintenance to volume. Since volume grows faster than surface area, this controls the ultimate size of the organism. Alfred Russel Wallace wrote this in a letter to E. B. Poulton in 1865. The surface area that is of importance is the part that is involved in substrate uptake (e.g. the gut surface), which is typically a fixed fraction of the total surface area in an isomorph. The DEB theory explains why isomorphs grow according to the von Bertalanffy curve if food availability is constant. Organisms can also change in shape during growth, which affects the growth curve and the ultimate size, see for instance V0-morphs and V1-morphs. Isomorphs can also be called V2/3-morphs. Most animals approximate isomorphy, but plants in a vegetation typically start as V1-morphs, then convert to isomorphs, and end up as V0-morphs (if neighbouring plants affect their uptake). See also Dynamic energy budget V0-morph V1-morph shape correction function References Developmental biology
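The growth behaviour mentioned above can be illustrated with the closed-form von Bertalanffy curve, in which length approaches an ultimate value because maintenance (scaling with volume) catches up with uptake (scaling with surface area). A minimal Python sketch; the parameter values are illustrative and not taken from the DEB literature:

```python
# A minimal sketch of von Bertalanffy growth for an isomorph at constant
# food: length follows dL/dt = r_B * (L_inf - L), whose solution is
#   L(t) = L_inf - (L_inf - L0) * exp(-r_B * t).
# The parameter values below are illustrative only.

import math

def von_bertalanffy(t, L0, L_inf, r_B):
    """Length at time t under von Bertalanffy growth."""
    return L_inf - (L_inf - L0) * math.exp(-r_B * t)

L0, L_inf, r_B = 1.0, 10.0, 0.2

# Growth starts at L0 and decelerates toward the ultimate length L_inf.
assert von_bertalanffy(0.0, L0, L_inf, r_B) == L0
assert abs(von_bertalanffy(50.0, L0, L_inf, r_B) - L_inf) < 1e-3
```

The ultimate length L_inf is the size at which volume-proportional maintenance balances surface-proportional uptake, which is the size control described in the text.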
https://en.wikipedia.org/wiki/Definable%20set
In mathematical logic, a definable set is an n-ary relation on the domain of a structure whose elements satisfy some formula in the first-order language of that structure. A set can be defined with or without parameters, which are elements of the domain that can be referenced in the formula defining the relation. Definition Let \mathcal{L} be a first-order language, \mathcal{M} an \mathcal{L}-structure with domain M, X a fixed subset of M, and m a natural number. Then: A set A \subseteq M^m is definable in \mathcal{M} with parameters from X if and only if there exists a formula \varphi[x_1, \ldots, x_m, y_1, \ldots, y_n] and elements b_1, \ldots, b_n \in X such that for all a_1, \ldots, a_m \in M, (a_1, \ldots, a_m) \in A if and only if \mathcal{M} \models \varphi[a_1, \ldots, a_m, b_1, \ldots, b_n]. The bracket notation here indicates the semantic evaluation of the free variables in the formula. A set A is definable in \mathcal{M} without parameters if it is definable in \mathcal{M} with parameters from the empty set (that is, with no parameters in the defining formula). A function is definable in \mathcal{M} (with parameters) if its graph is definable (with those parameters) in \mathcal{M}. An element a is definable in \mathcal{M} (with parameters) if the singleton set \{a\} is definable in \mathcal{M} (with those parameters). Examples The natural numbers with only the order relation Let \mathcal{N} = (\mathbb{N}, <) be the structure consisting of the natural numbers with the usual ordering. Then every natural number is definable in \mathcal{N} without parameters. The number 0 is defined by the formula \varphi_0(x) stating that there exist no elements less than x: \varphi_0 = \neg \exists y\, (y < x), and a natural number n > 0 is defined by the formula \varphi_n(x) stating that there exist exactly n elements less than x: \varphi_n = \exists y_1 \cdots \exists y_n\, \big(y_1 < y_2 \wedge \cdots \wedge y_{n-1} < y_n \wedge y_n < x \wedge \forall z\, (z < x \rightarrow (z = y_1 \vee \cdots \vee z = y_n))\big). In contrast, one cannot define any specific integer without parameters in the structure (\mathbb{Z}, <) consisting of the integers with the usual ordering (see the section on automorphisms below). The natural numbers with their arithmetical operations Let \mathcal{N} = (\mathbb{N}, +, \cdot, <) be the first-order structure consisting of the natural numbers and their usual arithmetic operations and order relation. The sets definable in this structure are known as the arithmetical sets, and are classified in the arithmetical hierarchy. If the structure is considered in second-order logic instead o
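The first example can be checked mechanically over a finite initial segment of the natural numbers. A small Python sketch; the finite domain is a stand-in for the full set of natural numbers, adequate here because the formulas only quantify over elements below x:

```python
# Check the example: in the structure (N, <), the number 0 is defined
# without parameters by phi(x) = "there is no y with y < x", and the
# number n by phi_n(x) = "exactly n elements are less than x".
# A finite initial segment of N stands in for the whole domain.

domain = range(10)  # finite stand-in for the natural numbers

def phi(x):
    """phi(x): not exists y (y < x)."""
    return not any(y < x for y in domain)

def phi_n(x, n):
    """phi_n(x): exactly n elements of the domain are less than x."""
    return sum(1 for y in domain if y < x) == n

# The set defined by phi is the singleton {0}, so 0 is definable.
assert {x for x in domain if phi(x)} == {0}
# Likewise phi_3 defines the singleton {3}.
assert {x for x in domain if phi_n(x, 3)} == {3}
```

Each formula carves out a singleton set, which is exactly what it means for the corresponding element to be definable without parameters.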
https://en.wikipedia.org/wiki/Logic%20Control
Logic Control is a control surface originally designed by Emagic in cooperation with Mackie. History Logic Control was designed by Emagic as a dedicated control surface for their Logic digital audio workstation software. It was manufactured by Mackie, but distributed by Emagic. About 6 months later, Mackie introduced a physically identical product called "Mackie Control" which included support for most major DAW applications, but not Logic. The Emagic Logic Control was still available and would only work with Logic. Later, Mackie Control's firmware was revised to include compatibility with Logic, combining together Mackie Control, Logic Control and Human User Interface (HUI) into a single protocol. As a result, the name was changed to "Mackie Control Universal" (MCU). Out of the box, MCU included Lexan overlays with different button legends to support control of other DAWs such as Pro Tools and Cubase. Description Logic Control (and now MCU) allows control of almost all Logic parameters with hardware faders, buttons and "V-Pots" (rotary knobs). Its touch-sensitive, motorized faders react to track automation. All transport functions and wheel scrubbing are also available. The unit also controls plug-in parameters. Visual feedback including current parameters being edited, parameter values, project location (SMPTE time code or bars/beats/divisions/ticks) are conveyed by a two-line LCD and red 7-segment LED displays. See also Logic Pro Mackie References Computer peripherals Electronic musical instruments Music hardware
https://en.wikipedia.org/wiki/V1-morph
A V1-morph is an organism that changes in shape during growth such that its surface area is proportional to its volume. In most cases, both volume and surface area are proportional to length. The reason the concept is important in the context of the Dynamic Energy Budget theory is that food (substrate) uptake is proportional to surface area, and maintenance to volume. The surface area that is of importance is that part that is involved in substrate uptake. Since uptake is proportional to maintenance for V1-morphs, there is no size control, and an organism grows exponentially at constant food (substrate) availability. Filaments, such as fungi that form hyphae growing in length, but not in diameter, are examples of V1-morphs. Sheets that extend, but do not change in thickness, like some colonial bacteria and algae, are another example. An important property of V1-morphs is that the distinction between the individual and the population level disappears; a single long filament grows as fast as many small ones of the same diameter and the same total length. See also Dynamic Energy Budget V0-morph isomorph shape correction function References Developmental biology Metabolism
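The exponential growth and the individual/population equivalence noted above can be sketched in a few lines: with uptake and maintenance both proportional to volume, growth depends only on total volume, not on how it is divided among filaments. The specific growth rate and volumes below are illustrative:

```python
# A minimal sketch of V1-morph growth: with both uptake and maintenance
# proportional to volume, net growth is dV/dt = r * V, so volume grows
# exponentially at constant food. The growth rate r is illustrative.

import math

def v1_volume(t, V0, r):
    """Volume of a V1-morph at time t under constant food availability."""
    return V0 * math.exp(r * t)

V0, r = 2.0, 0.1

# One filament of volume 2.0 grows exactly as fast as two fragments of
# volumes 0.5 and 1.5: the individual/population distinction disappears.
whole = v1_volume(5.0, V0, r)
parts = v1_volume(5.0, 0.5, r) + v1_volume(5.0, 1.5, r)
assert abs(whole - parts) < 1e-12
```

Because the exponential factor is the same for every fragment, total volume evolves identically however the biomass is partitioned, which is the property stated in the text.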
https://en.wikipedia.org/wiki/V0-morph
A V0-morph is an organism whose surface area remains constant as the organism grows. The reason why the concept is important in the context of the Dynamic Energy Budget theory is that food (substrate) uptake is proportional to surface area, and maintenance to volume. The surface area that is of importance is that part that is involved in substrate uptake. Biofilms on a flat solid substrate are examples of V0-morphs; they grow in thickness, but not in the surface area that is involved in nutrient exchange. Other examples are dinophyta and diatoms that have a cell wall that does not change during the cell cycle. During cell growth, when the amounts of protein and carbohydrates increase, the vacuole shrinks. The outer membrane that is involved in nutrient uptake remains constant. At cell division, the daughter cells rapidly take up water, complete a new cell wall and the cycle repeats. Rods (bacteria that have the shape of a rod and grow in length, but not in diameter) are a static mixture between a V0- and a V1-morph, where the caps act as V0-morphs and the cylinder between the caps as a V1-morph. The mixture is called static because the weight coefficients of the contributions of the V0- and V1-morph terms in the shape correction function are constant during growth. Crusts, such as lichens that grow on a solid substrate, are a dynamic mixture between a V0- and a V1-morph, where the inner part acts as a V0-morph, and the outer annulus as a V1-morph. The mixture is called dynamic because the weight coefficients of the contributions of the V0- and V1-morph terms in the shape correction function change during growth. The Dynamic Energy Budget theory explains why the diameter of crusts grows linearly in time at constant substrate availability. References See also Dynamic energy budget isomorph V1-morph shape correction function Developmental biology Metabolism
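The rod example can be illustrated with elementary geometry: for a rod of fixed diameter, the cap surface is constant (the V0-like contribution) while the cylindrical surface grows in proportion to the cylinder's volume (the V1-like contribution). A small Python sketch with illustrative dimensions; the hemispherical cap shape is an assumption made for the illustration:

```python
# A geometric sketch of why a rod is a static V0/V1 mixture: for a rod of
# fixed diameter d (two hemispherical caps plus a cylinder of length L),
# the cap area pi*d^2 is constant (V0-like), while the lateral cylinder
# area pi*d*L grows with the cylinder volume (V1-like). Values illustrative.

import math

def rod_surface(d, L):
    """Total surface area: a sphere's worth of caps + lateral cylinder area."""
    return math.pi * d**2 + math.pi * d * L

def rod_cylinder_volume(d, L):
    """Volume of the cylindrical part, which grows with length only."""
    return math.pi * d**2 / 4 * L

d = 1.0
cap_area = math.pi * d**2

# Doubling the length doubles the cylinder volume and the V1-like area
# term, but leaves the V0-like cap term unchanged.
a1, a2 = rod_surface(d, 2.0), rod_surface(d, 4.0)
assert abs((a2 - cap_area) - 2 * (a1 - cap_area)) < 1e-12
assert abs(rod_cylinder_volume(d, 4.0) - 2 * rod_cylinder_volume(d, 2.0)) < 1e-12
```

The two area terms grow with constant relative weights as the rod lengthens, which is what makes the mixture "static" in the sense described above.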
https://en.wikipedia.org/wiki/GRE%20Biochemistry%2C%20Cell%20and%20Molecular%20Biology%20Test
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam; there were no computer-based versions of it. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the exam score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test. Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile), respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g.