Patent US20050273688 - Data communication system with multi-dimensional error-correction product codes

The present invention relates generally to error correction in communication channels, and more particularly but not by limitation to error correction in data storage devices. With the increasing demand for high data rate communications systems, there is a need for improved error detection and correction. In this area of technology, iterative error-correction codes (ECC) such as convolutional turbo codes (CTC), low-density parity-check (LDPC) codes, and turbo product codes (TPC) are being considered for various communication applications. The main advantage of these types of codes is that they enable very low bit-error rates (BER) for storage devices or communication links at low signal-to-noise ratios (SNR). However, as density and speed increase, higher error rates are encountered that are difficult to lower in real time using these techniques. Embodiments of the present invention provide solutions to these and other problems, and offer other advantages over the prior art.

Disclosed are a method and data communication system that reduce errors. The data communication system comprises a combiner circuit that combines a set of information symbols with error correction codes. The combiner circuit generates a set of product codes that are at least three dimensional. The data communication system comprises a communication channel that receives the set of product codes. The communication channel provides the set of product codes with errors after a channel delay. A channel detector receives the set of product codes with the errors and generates a channel detector output. The data communication system comprises an error correction circuit that receives the channel detector output. The error correction circuit iteratively removes the errors to provide a set of reproduced information symbols with a reduced number of the errors. Other features and benefits that characterize embodiments of the present invention will be apparent upon reading the following detailed description and review of the associated drawings.

FIG. 1 illustrates an oblique view of a disc drive. FIG. 2A illustrates a three dimensional set of product codes. FIG. 2B illustrates a pattern of errors in a set of product codes that is not correctable by three dimensional error correction. FIG. 3 illustrates a block diagram of a first embodiment of a data communication system. FIG. 4 illustrates a block diagram of a second embodiment of a data communication system. FIG. 5 illustrates a method of passing a set of information symbols through a communication system. FIG. 6 illustrates a process of error correction using turbo product codes. FIG. 7 illustrates a graph of bit error rates as a function of signal-to-noise ratio for two dimensional error correction. FIG. 8 illustrates a graph of bit error rates as a function of signal-to-noise ratio for three dimensional error correction.

In the embodiments described below, a data communication system (such as a disc drive) includes a combiner circuit that combines a set of information symbols (user data) with error correction codes and that generates a set of product codes that are at least three dimensional. The use of three dimensional (or higher) product codes greatly enhances the ability to correct larger numbers of errors in a set of information symbols.
A communication channel (such as a disc read/write system) receives the set of product codes and provides the set of product codes with errors after a channel delay. The errors are generated by imperfect read or write operations. During long variable delays between writing and reading information, the original information is deleted from the host system and cannot be retransmitted to the communication channel. A channel detector receives the set of product codes with the errors and generates a channel detector output. An error correction circuit receives the channel detector output and iteratively removes the errors to provide a set of reproduced information symbols with a reduced number of the errors. The embodiments described correct errors in spite of the long channel delay and noisiness of the channel.

FIG. 1 is an oblique view of a disc drive 100 in which embodiments of the present invention are useful. Disc drive 100 includes a housing with a base 102 and a top cover (not shown). Disc drive 100 further includes a disc pack 106, which is mounted on a spindle motor (not shown) by a disc clamp 108. Disc pack 106 includes a plurality of individual discs, which are mounted for co-rotation about central axis 109 in a direction indicated by arrow 107. Each disc surface has an associated disc head slider 110 which is mounted to disc drive 100 for communication with the disc surface. In the example shown in FIG. 1, sliders 110 are supported by suspensions 112 which are in turn attached to track accessing arms 114 of an actuator 116. The actuator shown in FIG. 1 is of the type known as a rotary moving coil actuator and includes a voice coil motor (VCM), shown generally at 118. Voice coil motor 118 rotates actuator 116 with its attached heads 110 about a pivot shaft 120 to position heads 110 over a desired data track along an arcuate path 122 between a disc inner diameter 124 and a disc outer diameter 126. Voice coil motor 118 is driven by servo electronics 130 based on signals generated by heads 110 and a host computer (not shown). The disc drive 100 is an example of a communication channel that receives sets of symbols (data blocks to be written) and then reproduces the sets of symbols (reads the data block) after a time delay T. The communication channel is noisy, and the reproduced sets of symbols can have errors. Further, the time delay T between writing and reading a particular set of symbols is typically so long that the host computer no longer retains the original set of symbols. The original set of symbols is thus not available to retransmit through the communication channel. To overcome this problem, a set of product codes with three or more dimensions is provided, and the product codes are used to correct errors introduced by the noisy channel. Examples of error corrections that are applicable to disc drives and other noisy communication channels with delays are described below in connection with FIGS. 2-8.

FIG. 2A illustrates an oblique view of a three dimensional product code set (set of product codes) 200. The product code set 200 is illustrated as a rectangular prism that represents a three (3) dimensional array of bits (not individually illustrated) that make up the product code set 200. The rectangular prism representing the product code set 200 has dimensions in bits along three mutually perpendicular axes designated as i, j and k. The dimensions of the product code set are i=n[1], j=n[2], k=n[3], as illustrated.
In a preferred arrangement, n[1]=n[2]=n[3], and the product code set 200 is represented by a cube. The product code set 200 represents a block of data of a standard size that can be conveniently handled as a data object by the communication channel. In the case of a disc drive, for example, the block of data (data object) typically comprises a data sector such as 4,096 bits (512 eight-bit bytes) with n[1]=n[2]=n[3]=16 bits. Other data sector sizes can also be used. The product code set 200 includes a symbol set (set of symbols) 202 that is indicated by a stippled rectangular prism within the product code set 200. This set of symbols 202 is typically data provided by a host system to a data communication system. The remaining portions of the product code set 200 (those not included in the symbol set 202) comprise bits used for error correction. The error correction codes in all three dimensions are bit-wise error correction codes. As shown in FIG. 2A, the set of symbols 202 occupies a rectangular space with dimensions p[1], p[2], p[3]. The product code set 200 occupies a rectangular space with dimensions n[1], n[2], n[3]. While the graphical representation in FIG. 2A represents a three (3) dimensional product code set 200, it will be understood that product code sets with more than three dimensions can also be used.

FIG. 2B illustrates a pattern of eight errors 220 in a set of product codes 200 that is not correctable by three dimensional error correction using product codes that are constructed with single parity check codes. Each of the eight errors 220 is represented in FIG. 2B by a dot at one corner of a rectangular prism. Each of the errors 220 is three dimensionally aligned with three other errors, and thus three dimensional error correction methods with single parity check component codes are generally not able to correct such an arrangement of eight aligned errors, even with the use of iterative techniques. Three dimensional error corrections with single parity check component codes are, however, capable of correcting larger numbers of errors that are not aligned as shown in FIG. 2B. If any one of the eight errors shown in FIG. 2B is not present, then the three dimensional error correcting technique is capable of iteratively correcting seven errors even when the errors are dimensionally aligned with one another in a pattern similar to that shown in FIG. 2B. The three dimensional error correction technique is thus capable of correcting all but a few error patterns that occur.
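To make the structure of FIG. 2A concrete, the sketch below forms a three dimensional product code from a cube of information bits using single parity check component codes. It is illustrative only: the (16,15) component code size, the bit ordering and the NumPy representation are assumptions chosen for the example, not an encoding mandated by the patent.

```python
import numpy as np

def encode_3d_spc(info_bits):
    """Form a 3-D single-parity-check (SPC) product code from a cube of
    information bits.  Each line along every axis is extended with one even
    parity bit, so a (p1, p2, p3) information block becomes a
    (p1+1, p2+1, p3+1) codeword.  Illustrative construction only."""
    p1, p2, p3 = info_bits.shape
    code = np.zeros((p1 + 1, p2 + 1, p3 + 1), dtype=np.uint8)
    code[:p1, :p2, :p3] = info_bits
    # Append even parity along each dimension in turn; because SPC parity is
    # just a sum mod 2, the order of the three steps does not matter and the
    # "parity of parity" bits come out consistent.
    code[p1, :, :] = code[:p1, :, :].sum(axis=0) % 2
    code[:, p2, :] = code[:, :p2, :].sum(axis=1) % 2
    code[:, :, p3] = code[:, :, :p3].sum(axis=2) % 2
    return code

# Example: a 15x15x15 block of user bits becomes a 16x16x16 = 4,096-bit
# codeword, i.e., one data sector of the size given in the FIG. 2A example.
rng = np.random.default_rng(0)
user_bits = rng.integers(0, 2, size=(15, 15, 15), dtype=np.uint8)
codeword = encode_3d_spc(user_bits)
# Every row, column and pillar of the codeword now has even parity.
assert (codeword.sum(axis=0) % 2 == 0).all()
assert (codeword.sum(axis=1) % 2 == 0).all()
assert (codeword.sum(axis=2) % 2 == 0).all()
```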
FIG. 3 illustrates a block diagram of a first embodiment of a data communication system 300. The data communication system 300 comprises a combiner circuit 302. The combiner circuit 302 combines a set of information symbols 304 with error correction codes 306 to generate a set of product codes 308 that are at least three dimensional as explained above in an example illustrated in FIG. 2A. The data communication system 300 also comprises a communication channel 310. The communication channel 310 receives the set of product codes 308 as a signal U(t) and provides a set 312 of product codes with errors as a signal U(t-T) after a channel delay T. The communication channel 310 is a noisy communication channel that introduces one or more errors into the product code set with errors 312. The errors are in bits represented by dots 314, 316, 318. There is a delay T associated with the communication channel 310 producing the product code set with errors 312. By the time that the communication channel 310 generates the product code set with errors 312, the original information symbol set 304 is typically no longer present in the host system. The data communication system 300 comprises a channel detector 320 that receives the set of product codes with the errors 312 and that generates a channel detector output 322. An error correction circuit 324 receives the channel detector output 322. The error correction circuit 324 iteratively removes the errors to provide a set of reproduced information symbols 326 with a reduced number of the errors.

In a preferred arrangement, the set of product codes 308 comprises turbo product codes. The set of information symbols 304 is preferably un-encoded. The set of product codes 308 provides distance (such as Hamming distance) between individual information symbols (such as bits or bytes) in the set of information symbols 304. In a preferred arrangement, the error correction circuit 324 corrects errors using a psi function as described in more detail below in connection with an example illustrated in FIG. 6. The psi (Ψ) function is preferably of the form:

Ψ(x) = log[(e^x + 1)/(e^x − 1)]          (Equation 1)

The term "e" is the base of natural logarithms (2.718 . . . ) and the term "x" is an independent variable. The error correction codes can comprise single parity check codes, checksum codes or other well-known error checking codes. The channel delay T can be variable without interfering with the error correcting performed in the error correction circuit 324. The channel delay T can be longer than a transmission time of the set of product codes 308 without interfering with the error correcting performed in the error correction circuit 324. In another preferred arrangement, the channel detector 320 comprises a Viterbi detector that couples the set of product codes with errors 312 to the error correction circuit 324.

FIG. 4 illustrates a block diagram of a second embodiment of a data communication system 400. The data communication system 400 comprises a combiner 402. The combiner 402 combines a set of information symbols 404 with N-dimensional error codes 406 to produce an N-dimensional product code 408, where N is at least three (3). The product code at 408 is provided to an interleaver (π) 410. The interleaver 410 reorders bits of the received product code 408 to produce a reordered interleaver output 412. Successive bits in the product code 408 are reordered to separate successive bits and provide distance between the successive bits at the interleaver output 412. This interleaver arrangement randomizes bursts of noise and enhances the ability to perform error correction. The interleaver output 412 comprises an interleaved set of product codes for coupling to a communication channel 416. In a preferred arrangement, the interleaver output is passed through a pre-coder 414 before being passed on to the communication channel 416. The communication channel 416 is noisy and also has a long, variable delay as described above in connection with communication channel 310 in FIG. 3. The communication channel 416 provides a communication channel output (with errors) on line 418 that is applied to a soft output Viterbi algorithm (SOVA) detector 420 that is part of an iterative turbo decoder 421. The SOVA detector provides a SOVA detector output 422 to a de-interleaver 424. The de-interleaver 424 initially provides a de-interleaved set of product codes with errors on line 426 to an error correction circuit 428.
The error correction circuit 428 feeds corrected data back through interleaver 430 to the SOVA channel detector 420. The operation of the error correction circuit 428 is iterative and loops through turbo iterations 432 until an optimum number of correctable errors are corrected. This looping process is described in more detail below in an example illustrated in FIG. 6. The error correction circuit 428 reproduces a set of information symbols 434 with a reduced number of errors.

FIG. 5 illustrates a method of passing a set of information symbols through a data communication channel. The method begins at START 502 and continues along line 504 to action block 506. At action block 506, a set of information symbols is combined with error correction codes to generate a set of product codes that are at least three dimensional. After completion of action block 506, the method continues along line 508 to action block 510. At action block 510, the set of product codes is received in the communication channel, and then the communication channel provides a set of product codes with errors after a channel delay. After completion of action block 510, the method continues along line 512 to action block 514. At action block 514, the set of product codes with the errors is received at a channel detector, and the channel detector generates a channel detector output. After completion of action block 514, the method continues along line 516 to action block 518. At action block 518, an error correction circuit receives the channel detector output, and the error correction circuit iteratively removes the errors to provide a set of reproduced information symbols with a reduced number of the errors. After completion of action block 518, the method continues along line 520 to END 522. At end 522, the method is ready to return to start 502 to apply the method to pass a subsequent set of information symbols through the communication channel. The set of product codes preferably provides distance between information symbols in the set of information symbols. The channel delay can vary to a time longer than a transmission time of the set of product codes, without adversely affecting the error correction method. In a preferred arrangement, a psi function is used in the error correction circuit to correct the errors as described in more detail below in an example shown in FIG. 6.

FIG. 6 illustrates a process of error correction using a three dimensional turbo product code constructed via single parity check component codes. Referring back to FIG. 4, a de-interleaver 424 provides a de-interleaved set of product codes. The reliability information for this de-interleaved set of product codes is L[0](i,j,k) where i, j, k are the indices of a three dimensional block of product codes with errors such as the one illustrated in FIG. 2B. The method in FIG. 6 begins at START 602 and continues along line 604 to action block 606 that provides initialization. At action block 606, the reliability information L[1], L[2], L[3] are all set to L[0]. After completion of action block 606, the method continues along line 608 to action block 610 which begins a main loop part of the method. At action block 610, reliability information L[1] is updated in an iteration. After completion of action block 610, the method continues along line 612 to action block 614. At action block 614, reliability information L[2] is updated. After completion of action block 614, the method continues along line 616 to action block 618. At action block 618, reliability information L[3] is updated. After completion of action block 618, the method continues along line 620 to action block 622. At action block 622, bit decisions are made based on the sign of P(i,j,k), and extrinsic information E(i,j,k) is passed back to the channel detector (such as channel detector 420 in FIG. 4) by way of an interleaver (such as interleaver 430 in FIG. 4).
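As an illustration of the reliability updates of action blocks 610 through 622, the following sketch applies the standard psi-based single parity check update along one dimension of the block. The psi function is Equation 1 of the patent; the message scheduling, the array sizes and the way the dimensions are combined here are simplifying assumptions for the example rather than the exact procedure of FIG. 6.

```python
import numpy as np

def psi(x):
    """Psi function of Equation 1: psi(x) = log((e^x + 1) / (e^x - 1)).
    It is its own inverse on x > 0, which is why it appears on both sides
    of the single-parity-check reliability update."""
    x = np.clip(x, 1e-12, 30.0)   # avoid log(0) and exp overflow
    return np.log((np.exp(x) + 1.0) / (np.exp(x) - 1.0))

def spc_extrinsic(llr):
    """Extrinsic reliabilities for one single-parity-check row.

    llr : 1-D array of input log-likelihood ratios.  For each bit, the
    extrinsic value is computed from all *other* bits of the row:
    E_b = prod(sign(L_k), k != b) * psi( sum(psi(|L_k|), k != b) )."""
    mag = psi(np.abs(llr))
    sign = np.sign(llr)
    sign[sign == 0] = 1.0
    total_mag = mag.sum()
    total_sign = np.prod(sign)
    # Leave-one-out products/sums give the per-bit extrinsic information.
    return (total_sign * sign) * psi(total_mag - mag)

# One pass over the first dimension of a 16x16x16 block L0(i,j,k), i.e., a
# simplified version of action block 610 updating L1.
L0 = np.random.default_rng(1).normal(loc=2.0, scale=1.0, size=(16, 16, 16))
L1 = np.zeros_like(L0)
for j in range(16):
    for k in range(16):
        L1[:, j, k] = spc_extrinsic(L0[:, j, k])
# Bit decisions (action block 622) would then follow the sign of the
# combined reliability P, e.g., P = L0 + L1 + L2 + L3 after all updates.
```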
FIG. 7 illustrates a graph of bit error rates as a function of signal-to-noise ratio for two and three dimensional error correction. FIG. 7 has a logarithmic vertical axis 702 that represents bit error rate, and a horizontal axis 704 that represents signal-to-noise ratio. A key 706 identifies four different simulation conditions. In FIG. 7, the code rate R is about 0.83 and the codeword size "n" is about 5000 bits. In FIG. 7, traces 710, 712 show bit error rates for two-dimensional turbo product codes with single parity check, and traces 714, 716 show bit error rates for three dimensional turbo product codes with single parity check. With the three dimensional turbo product codes of traces 714, 716, there is an advantage of lower bit error rates when compared to the two dimensional turbo product codes of traces 710, 712. At BER=10^−6 (line 720), about 0.5 dB gain in SNR is observed when comparing the 3-D TPC performance with the 2-D TPC performance.

FIG. 8 illustrates a graph of bit error rates as a function of signal-to-noise ratio for two and three dimensional error correction. FIG. 8 has a logarithmic vertical axis 802 that represents bit error rate, and a horizontal axis 804 that represents signal-to-noise ratio. A key 806 identifies four different simulation conditions. In FIG. 8, the code rate R is about 0.91 and the codeword size is about 36,000 bits. In FIG. 8, traces 810, 812 show bit error rates for two-dimensional turbo product codes with single parity check, and traces 814, 816 show bit error rates for three dimensional turbo product codes with single parity check. With the three dimensional turbo product codes of traces 814, 816, there is an advantage of lower bit error rates when compared to the two dimensional turbo product codes of traces 810, 812. In FIG. 8, the 3-D TPC/SPC performed about 0.65 dB better when compared to 2-D TPC/SPCs at BER=10^−6 (line 820). The number of turbo iterations affected the performance significantly for the 3-D TPC/SPC when increased from two to three. For both FIGS. 7-8, the number of channel iterations was set to three. The number of turbo iterations was set to three for the 2-D case, whereas for the 3-D cases both two and three turbo iterations were simulated. In the keys 706, 806, (m,m−1)^2 2-D TPC/SPC denotes a two-dimensional TPC formed with (m,m−1) SPC component codes; i.e., (m−1) user bits of a row/column are used to calculate the even parity bit for each row/column. To achieve a larger block size, the 2-D product codewords are arranged in sub-blocks to form one large codeword. Similar to the 2-D case, (m,m−1)^3 3-D TPC/SPC denotes a three-dimensional TPC where all three dimensions are encoded using an (m,m−1) SPC code. No precoding was used for the 3-D case, whereas a 1/(1⊕D^2) precoder was needed for the 2-D case. For the two dimensional case, iterative decoding within the error correction circuit is performed by applying a loop consisting of row decoding, followed by column decoding, followed by row decoding, etc.
A row decoding followed by a column decoding (or vice versa) is called a turbo iteration, whereas the information exchange between the error correction circuit and the channel detector is called a channel iteration. For the three dimensional case, the information bits are arranged in a three-dimensional array and encoding is performed on all three dimensions. With a 3-D structure, each information bit is protected by three codes instead of only two, as is the case with a 2-D structure. Only a few permanent error patterns can remain with this 3-D structure when the component codes are SPC codes as illustrated in FIG. 2B. In this case, the remaining permanent error patterns are formed by a minimum of eight errors located at the corners of a box-shaped grid as shown in FIG. 2B. Regarding the iterative decoding process, the message passing algorithm (MPA) is extended to the 3-D case as shown in FIG. 6. In FIG. 6, the use of a SOVA (Soft-Output Viterbi Algorithm) detector is an example. The channel detector can be implemented using any algorithm providing soft reliability information, such as a MAP (maximum a posteriori) or BCJR (Bahl, Cocke, Jelinek, and Raviv) algorithm. As can be observed from FIG. 6, decoding is performed on all three dimensions. At the final iteration, bit decisions are made based on the sign of P(i,j,k), and extrinsic information E(i,j,k) is passed back to the channel detector. The algorithm described above can be easily extended to dimensions greater than three. The magnetic recording channel can be modeled as a partial response (PR) channel. Interleavers and de-interleavers are denoted by π and π^−1, respectively. Turbo iterations are decoding loops inside the TPC decoder, whereas a channel iteration is defined as information exchange between the TPC decoder and the channel detector. In the example shown in FIG. 4, the channel detector implements the SOVA algorithm.

It is to be understood that even though numerous characteristics and advantages of various embodiments of the invention have been set forth in the foregoing description, together with details of the structure and function of various embodiments of the invention, this disclosure is illustrative only, and changes may be made in detail, especially in matters of structure and arrangement of parts within the principles of the present invention to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed. For example, the particular elements may vary depending on the particular application for the data communication system while maintaining substantially the same functionality without departing from the scope of the present invention. The arrangements shown can be applied to electronic, optical and magnetic communication channels. In addition, although the preferred embodiment described herein is directed to a data communication system for a disc drive, it will be appreciated by those skilled in the art that the teachings of the present invention can be applied to MRAM and other data communication systems with long delay times, without departing from the scope of the present invention.
{"url":"http://www.google.com/patents/US20050273688?dq=%22Meaning-based+information+organization+and+retrieval%22","timestamp":"2014-04-16T11:43:33Z","content_type":null,"content_length":"105547","record_id":"<urn:uuid:78029946-47ed-4930-80a6-d5e6dd757dfc>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
This applet is similar to PointSingle in that estimates are generated in a sequence (denoted in red). Click on the "1 Trial", "5 Trials" or "10 Trials" buttons to add trials to the sequence. When the maximum number of trials is reached, a message is displayed at the top of the graph. Then you must click the "New Sequence" button to begin a new estimation sequence. For comparison, prior sequences are saved and displayed in different colors. When there are many sequences, the older sequences are displayed in gray. The menu in the lower-right corner may be used to change the true population probability. Click on the "New Sequence" button or simply click on the graph to generate a new estimation sequence. Changing the probability with the menu in the lower right or using the reset button will clear the current and prior sequences from the display.

Longer Sequences of Trials
{"url":"http://www.cengage.com/statistics/book_content/0495110817_wackerly/applets/seeingstats/Chpt9/pointByPoint.html","timestamp":"2014-04-17T12:40:34Z","content_type":null,"content_length":"4293","record_id":"<urn:uuid:3be594c9-429b-4db1-b6df-a33385e8e0aa>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Sensors 2012, 12(12), 17208-17233; doi:10.3390/s121217208; ISSN 1424-8220; Molecular Diversity Preservation International (MDPI)

A Hybrid Smartphone Indoor Positioning Solution for Mobile LBS

Jingbin Liu*, Ruizhi Chen, Ling Pei, Robert Guinness and Heidi Kuusniemi

Department of Navigation and Positioning, Finnish Geodetic Institute, Geodeetinrinne 2, Masala 02431, Finland; E-Mails: ruizhi.chen@fgi.fi (R.C.); ling.pei@fgi.fi (L.P.); robert.guinness@fgi.fi (R.G.); heidi.kuusniemi@fgi.fi (H.K.)

* Author to whom correspondence should be addressed; E-Mail: jingbin.liu@fgi.fi; Tel.: +358-9-2955-5313; Fax: +358-9-2955-5200.

Received: 29 October 2012 / Accepted: 6 December 2012 / Published: 7 December 2012

© 2012 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Abstract: Smartphone positioning is an enabling technology used to create new business in the navigation and mobile location-based services (LBS) industries. This paper presents a smartphone indoor positioning engine named HIPE that can be easily integrated with mobile LBS. HIPE is a hybrid solution that fuses measurements of smartphone sensors with wireless signals. The smartphone sensors are used to measure the user's motion dynamics information (MDI), which represent the spatial correlation of various locations. Two algorithms based on hidden Markov model (HMM) problems, the grid-based filter and the Viterbi algorithm, are used in this paper as the central processor for data fusion to resolve the position estimates, and these algorithms are applicable for different applications, e.g., real-time navigation and location tracking, respectively. HIPE is more widely applicable for various motion scenarios than solutions proposed in previous studies because it uses no deterministic motion models, which have been commonly used in previous works. The experimental results showed that HIPE can provide adequate positioning accuracy and robustness for different scenarios of MDI combinations. HIPE is a cost-efficient solution, and it can work flexibly with different smartphone platforms, which may have different types of sensors available for the measurement of MDI data. The reliability of the positioning solution was found to increase with increasing precision of the MDI data.

Keywords: smartphone positioning; mobile LBS; probabilistic algorithms; sensor fusion; ubiquitous computing

Smartphone indoor positioning technology is a boost to the rapidly growing mobile location-based services (LBS) industry.
As the latest initiative, the In-Location Alliance, formed by 22 member companies, including Nokia, Qualcomm, Broadcom, etc.[1], was recently launched to drive innovation and market adoption of high-accuracy indoor positioning and related services. The continued development of accurate and reliable LBS will not only improve the experience of smartphone users, but will also create new marketing opportunities. Emerging indoor LBS include social networking, people finders, marketing campaigns, asset tracking, etc. Because most indoor LBS are used by pedestrians, in this work we focus the development of our proposed indoor positioning solution on a pedestrian scenario. Multiple sensors and signals of opportunity have been used for indoor positioning and navigation [2,3]. Examples of such sensors include accelerometers, gyroscopes, compasses, cameras, proximity sensors, and electromyography sensors [4]. In this work, signals of opportunity are defined as signals that were not originally intended for positioning and navigation, and they include radio frequency (RF) signals, e.g., cellular networks, wireless local area networks (WLAN) and Bluetooth [5], and naturally occurring signals, such as Earth’s magnetic field and polarized light from the sun [6]. Each method has its own respective drawback. For example, cellular positioning systems offer limited accuracy. Inertial sensors only provide a relative location estimate with accuracy degrading over time, and they must be used together with other technologies, e.g., Global Positioning System (GPS), to estimate absolute location [7,8]. Due to the cost effectiveness and extensive availability of the existing network infrastructure, WLAN signals have been widely used for indoor positioning [9–11]. Traditional solutions usually have utilized a special-purpose hardware unit to observe the WLAN received signal strength indication (RSSI) signals for indoor positioning. RSSI observables are location dependent, and they are commonly used to estimate indoor locations through a fingerprinting approach. This study develops a smartphone indoor positioning solution using the built-in hardware and computational resources of smartphones. Significant advantages of re-using an existing smartphone platform for positioning include cost efficiency of the positioning solution and the effective combination of measurements from multiple sensors and signals for enhanced positioning performance. Further, the smartphone-based positioning solution is more convenient for integration with related applications and services because smartphones have become a common platform for mobile LBS. A major challenge in the fingerprinting approach is the large variance of RSSI observables caused by the significantly non-stationary nature of WLAN signals. Most of the previous WLAN positioning solutions pursued the position estimation problem as single-point positioning in which positions were considered as a series of isolated points [11–13]. In the single-point positioning approach, the results are vulnerable to RSSI variance, and the positioning accuracy and reliability are degraded significantly. To mitigate the impact of RSSI variances, the position estimate can be augmented by motion information because the dynamics of indoor users are usually restricted, and their locations are highly correlated over time. Location changes over time are represented in this paper as motion dynamics information (MDI) such as the distance moved and movement direction and/or direction change. 
In our approach, MDI is physically measured using the smartphone sensors, and MDI is further integrated with RSSI observables through the methodology of hidden Markov models (HMM). RSSI measurements and the corresponding media access control (MAC) addresses can be obtained without an authenticated link. Thus, WLAN positioning can be performed autonomously, avoiding the privacy concerns that typically arise in other positioning techniques. Further, the positioning functionality can be operated in conjunction with communication services, which facilitates the deployment of related applications and services. In contrast to previous studies, which commonly utilized simplified motion models, e.g., a linear model, to represent a user’s motion [13–17], our approach uses smartphone sensors to measure the real motion of a user. Because the motion of an indoor user is usually quite complicated and he/she can change motion states at any time, e.g., stationary, walking, walking speed change, direction change, and even sudden turnaround, existing models are not capable of describing user motion accurately. By taking advantage of multiple sensors in a smartphone, our proposed solution measures MDI more accurately, and our solution is more effective for situations in which different motion states occur. The utilization of the HMM methodology incorporates motion dynamics information into RSSI positioning, and it allows for the use of current RSSI measurements in the position estimate as well as historical information regarding the position estimate. For two reasons, the HMM is preferred for the integration of different types of measurements in this study. First, Markov processes do not impose any deterministic form of models to restrict the user’s movement, and they are hence widely suitable for the representation of complicated motion processes of indoor users. Second, the methodology of HMM has been well-developed in mathematics, and the solutions of HMM problems can be used effectively in a smartphone platform to resolve position estimates. The proposed solution is a hybrid data fusion solution: the WLAN RSSI observables are fused with the measured MDI. The hybrid fusion approach, named HIPE (hybrid indoor positioning engine), was developed with a Nokia N8 smartphone and is currently being implemented with other smartphone platforms. Because different smartphone platforms may have different combinations of sensors available for the measurement of MDI, this paper presents methods for dealing with different scenarios in which different types of MDI are available. In summary, this paper proposes a smartphone indoor positioning engine that can be easily integrated with mobile LBS. Our positioning solution is a data fusion scheme, and it uses the smartphone built-in sensors to physically measure the motion dynamics information of indoor users. In comparison with previous works, our solution is more widely applicable for various motion scenarios and different smartphone platforms. Two algorithms of HMM problems, i.e., the grid-based filter and the Viterbi algorithm, are applied in this paper to resolve position estimates. The rest of this paper is organized as follows: Section 2 provides an overview of the related research in WLAN positioning and pedestrian motion estimation using smartphone sensors; Section 3 presents the methods of measuring pedestrian MDI with a smartphone. The proposed positioning solution is presented in Section 4. Section 5 evaluates the proposed solution with experimental results. 
Finally, Section 6 concludes the paper and provides directions for future work. Two basic approaches are used for the estimation of locations with WLAN RSSI measurements. The trilateration-based approach first translates RSSI measurements into the distances between a mobile user and multiple access points (APs) based on a radio propagation model and then calculates the user’s location using the obtained distances and AP coordinates [18]. The major challenges in this approach include the large errors associated with estimated distances and difficulties in system deployment, e.g., the trouble associated with obtaining the AP coordinates indoors. In contrast, the fingerprinting approach determines a user’s position by matching RSSI measurements with a fingerprint database in a deterministic or stochastic way. The k-nearest neighbors (KNN) method employs a deterministic approach to estimate a location [19,20], which is the centroid of the k closest neighbors, in terms of the Euclidean distance between the online RSSI measurements and the RSSI measurements in the database. The stochastic methods impose a probabilistic model on the online RSSI measurements and calculate the posterior probability distribution [11–13]. Different probabilistic models have been used in previous studies, ranging from a simple Gaussian model to more complex kernel functions [21–23]. Many of the previous methods used memory-less or single-point positioning approaches, which only utilize current RSSI measurements discretely for determining a position estimate [10–12]. The accuracy and reliability of single-point positioning solutions are degraded by the non-stationary nature of RSSI, due to the multipath and non-line-of-sight propagation of WLAN signals. The studies presented in [13,24–28] show that positioning accuracy can be improved by the incorporation of current RSSI measurements in conjunction with knowledge of motion dynamics and historical measurements. Motion dynamics describe the correlation of the spatial coordinates of user positions over time. In previous studies, two approaches have been proposed to use motion dynamics information for improving positioning accuracy. One approach represents a user’s motion with a set of predefined motion models, which describes the time evolution of the user’s positions [13–17]. The other approach uses a map to restrict the potential direction of motion and the space of the user [14,15]. Based on both approaches, a form of Bayesian filters has been used to perform position estimation. For example, Kushki et al.[13] used a linear motion model to describe the motion of an indoor user and utilized a nonparametric information (NI) filter to resolve position estimates. An alpha-beta (αβ) filter was also used for positioning based on a constant speed motion model [29,30]. Au et al.[14] assumed a linear motion model and performed a map-adaptive Kalman filter (KF) to estimate positions, while the position accuracy was improved by resetting the KF when the user is located at an intersection on a map. Particle filters can further improve positioning accuracy by applying more sophisticated non-linear and non-Gaussian models, as well as map information [15,31–36]. The applicability of these solutions is restricted by the fidelity of the motion models. When considering general users, for example, in an office or in a shopping center, the commonly used models are insufficient for the representation of indoor user motion, which may involve abrupt turns or stops. 
The motion dynamics of a pedestrian user are especially complex: user motion is governed by decision models, purpose of the movement, choice of destination, and interactions with other people or objects in the environment. An incorrect model results in inaccurate estimates [14]. Map data can only provide static information, e.g., potential movement directions and intersections, and they are incapable of presenting real-time motion status. For example, a pedestrian may turn around suddenly in a corridor. Furthermore, in the specific cases given in previous studies, the utilization of map data was based on a unique layout of the indoor environment. As a result, these solutions must process the map data of different indoor environments on a per-case basis, and they cannot be applied universally until a unified method for obtaining indoor map information exists. To make our solution widely applicable, this paper does not include the utilization of map data in the proposed solution, although map data also can be used to improve the positioning accuracy further. In our smartphone positioning solution, the built-in smartphone sensors offer the ability to physically measure the motion dynamics information, including the distance moved and the heading. Three types of methods have been developed in past works to measure pedestrian distance. One is direct foot-to-foot step length estimation using a six-degree-of-freedom inertial measurement unit (IMU), installed on the feet [8,37]. The second method uses radio sensors, such as an ultrasound sensor, usually installed on the feet as well, to directly measure foot-to-foot ranges [38]. The third method monitors the occurrence of step events and estimates step lengths based on the periodic acceleration pattern of a pedestrian user. Pedestrian acceleration can be measured using accelerometers, and the features of the acceleration pattern, such as the magnitude of the total acceleration and its periodic pattern, are closely correlated with the pedestrian dynamics, e.g., the motion states and the walking speed. The third method is preferable for smartphone positioning, while the others are usually used for special applications in dedicated positioning systems. Heading can be determined using two approaches. An absolute direction can be directly measured or estimated by sensors, such as a digital compass or GPS, while a relative change in heading can be measured by gyroscope sensors. A relative heading change can be further used to calculate an absolute heading based on a previously determined heading. The first approach is attractive because it directly produces an absolute heading. However, GPS depends on the visibility of signals in space and is usually not available indoors. A digital compass is self-contained, and it can output measurements ubiquitously. Digital compasses, however, are susceptible to errors, including effects from electric devices and steel structures, and calibration and filtering processing are needed to improve compass accuracy. In contrast, a gyroscope can be used to measure a relative direction change with no impacts from the environment. As the gyroscope measurements are integrated over time, however, the error increases over time; hence, an external reference is needed for periodic calibration. To further improve heading results, motion recognition methods can be used to detect motions that may cause heading changes. For example, a U-turn may indicate a rotation of 180° [39,40].
This paper employs a 3-axis accelerometer and a digital compass, which are available in the smartphone platform used (Nokia N8), to measure MDI data at run time. We limit the scope of this work to MDI estimation using these two sensors to demonstrate the effectiveness of the proposed positioning solution. Other sensors and techniques of MDI estimation, e.g., vision-based techniques, will be integrated with HIPE in the future. Motion dynamics are defined in this paper as position changes over time, which are represented by the distance moved and the movement heading. Smartphone sensors can be used in different capacities to measure the motion dynamics of a user. Traditionally, one method to estimate changes in location and direction is by using an IMU, which typically consists of accelerometers, gyroscopes and/or compasses. In a platform where the attitude is known, acceleration measurements are integrated once to determine speed and twice to determine travelled distance. The movement heading of the platform is observed by a gyroscope and a compass, which provide the measurements of heading change and absolute heading, respectively. However, this approach is not applicable to the scenario of smartphone users. Built-in smartphone sensors are commonly low-cost and have worse performance than traditional IMUs. Furthermore, the integration operation is not suitable for smartphone pedestrian scenarios due to the lack of knowledge about the platform's attitude. It is not practical for a pedestrian user to maintain the device in a fixed attitude, and it is also complicated to estimate a changing smartphone attitude using the built-in sensors. Various sensors included in modern smartphones can provide multiple approaches to measure the MDI of a pedestrian user. This section presents the methods used in this study to estimate the distance moved and heading using an accelerometer and a digital compass, respectively. Presentation of how the estimated MDI is used in the proposed positioning solution follows in the next section. Figure 1 shows the three axes and six directions of the device body frame of the smartphone platform. The body frame uses the right-hand Cartesian coordinate system [41]. In this study, the pedestrian motion distance is estimated using two procedures: step detection and step length estimation, both of which are widely used in the pedestrian dead reckoning (PDR) approach. The fundamental idea behind the PDR approach is derived from pedestrian acceleration characteristics [42]. Figure 2 shows typical acceleration patterns in stationary and walking states. A walking step event can be explicitly divided into two phases. In the first phase, one foot of a pedestrian is in contact with the ground, and in the shorter second phase, both feet are in contact with the ground. Step detection is used to identify these two-phase step events. Once the step events have been detected, the step length and the step heading of particular steps are determined. The pedestrian acceleration characteristics are measured with a smartphone three-axis accelerometer, which outputs a three-dimensional (3D) composite acceleration vector due to Earth's gravity and pedestrian acceleration. It is difficult to separate pedestrian acceleration from that of gravity because the sensor attitude is unknown. In the PDR approach, the norm of 3D acceleration is used to detect the step events and estimate the step length.
When the device is stationary, the magnitude of the gravity is learned by taking the average of the acceleration norm for a certain period, e.g., 1 s. Then, gravity is separated from the acceleration norm to obtain the pedestrian acceleration as follows:

||a[p]||[t] = ||a||[t] − ĝ          (1)

where ĝ is the measured value of the Earth's gravity, and ||a[p]||[t] is the pedestrian acceleration. Using the acceleration measurements, step detection and step length estimation can be accomplished through different methods [43,44]. In this study, the Nokia N8 smartphone outputs accelerometer measurements at a rate of 35 Hz. The values of pedestrian acceleration are first calculated by Equation (1). Then, they are processed with a sliding window for smoothing, and the window length is nine measurements (equal to roughly 0.25 s). The smoothed results are used for motion state recognition and step detection through peak detection and zero-crossing algorithms, which can be found in [43,44]. When step events are detected, the length of each step is estimated using a constant model [45]. The constant model uses an empirically-derived constant value of the step length (70 cm per step in this paper), based on generic pedestrian motion. The heading is measured directly in this study with the digital compass of the smartphone. According to the phone's software development kit (SDK) documents [46], the built-in compass outputs the azimuth of the device as degrees from magnetic north in a clockwise direction with respect to the Y-axis shown in Figure 1. Compasses are susceptible to magnetic interferences and must be calibrated after being placed near anything that bears a magnetic force. The accuracy of a compass may be affected by any nearby ferromagnetic materials. In the SDK [46], the status of calibration is indicated by a number from 0 to 1. A value of 1 is the highest level that the device can support, and 0 is the worst. If the device is not calibrated, the azimuth may be inaccurate. The device is calibrated by rotating it through all of its axes, e.g., rotating the device in a number eight pattern [47]. This paper evaluates compass measurements of the smartphone in real indoor dynamic environments, and the obtained values of accuracy are used as tolerance thresholds in the proposed positioning solution. Using the distance moved and the heading, user locations are correlated over time. Current positions are recursively propagated in a locally horizontal frame during a successive process from a previously determined position as follows:

E[t] = E[t−1] + d[t]·sin(α[t]),  N[t] = N[t−1] + d[t]·cos(α[t])          (2)

where t is the epoch time, and E, N are the east and north coordinate components in the locally horizontal (East-North-Up, ENU) frame, respectively. d[t] and α[t] are the distance moved and the heading during the current epoch.
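The processing chain just described (gravity removal per Equation (1), sliding-window smoothing, peak-based step detection, the constant 70 cm step length, and the position propagation of Equation (2)) can be sketched as follows. The threshold, the fixed gravity value and the simplified peak test are assumptions made for the illustration; the detectors actually used follow the peak-detection and zero-crossing algorithms of [43,44].

```python
import numpy as np

G_EST = 9.81          # assumed gravity estimate learned while stationary (m/s^2)
STEP_LENGTH = 0.70    # constant step-length model used in the paper (m)
WINDOW = 9            # smoothing window of roughly 0.25 s at 35 Hz

def pedestrian_acceleration(acc_xyz):
    """Equation (1): remove gravity from the norm of the 3-D accelerometer
    output, since the device attitude is unknown."""
    return np.linalg.norm(acc_xyz, axis=1) - G_EST

def detect_steps(acc_norm, threshold=1.0):
    """Minimal peak-detection sketch: a sample counts as a step when it is a
    local maximum of the smoothed signal and exceeds a threshold (m/s^2)."""
    kernel = np.ones(WINDOW) / WINDOW
    smoothed = np.convolve(acc_norm, kernel, mode="same")
    peaks = (smoothed[1:-1] > smoothed[:-2]) & (smoothed[1:-1] > smoothed[2:]) \
            & (smoothed[1:-1] > threshold)
    return np.flatnonzero(peaks) + 1

def propagate(position, heading_deg, n_steps):
    """Equation (2): advance the east/north position by the travelled distance
    along the compass heading (degrees clockwise from north)."""
    d = n_steps * STEP_LENGTH
    a = np.radians(heading_deg)
    east, north = position
    return (east + d * np.sin(a), north + d * np.cos(a))

# Usage sketch: raw 35 Hz accelerometer samples -> step events -> new position.
raw = np.random.default_rng(2).normal(0.0, 0.2, size=(350, 3)) + [0.0, 0.0, 9.81]
steps = detect_steps(pedestrian_acceleration(raw))
position = propagate((0.0, 0.0), heading_deg=90.0, n_steps=len(steps))
```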
This section presents the proposed smartphone positioning solution, named hybrid indoor positioning engine (HIPE). The proposed HIPE solution was implemented with a Nokia N8 smartphone. The device runs on the Symbian^3 operating system (OS), and it has a CPU (central processing unit) clock rate of 680 MHz (ARM™ 11) and an internal memory of 135 MB [48]. The built-in WLAN, accelerometer and compass are used in this study. The Qt SDK and Qt Creator IDE (integrated development environment) are used for software development [41,46,47]. Figure 3 shows the graphical interface of the HIPE. The methodology of HMM is adopted in this study to fuse the MDI data, the current RSSI observables and the historical information of position estimates. Figure 4 shows the general architecture of the proposed HIPE solution. This section also presents the flexibility of the HIPE, which can work with different combinations of MDI. This section first introduces the fundamentals of hidden Markov models and the related solutions of HMM problems, and it then presents the methods of position estimation based on HMM with an emphasis on the utilization of MDI to augment WLAN positioning. The concept of hidden Markov models arises from the well-known Markov model in which each state corresponds to a physically observable symbol. Observable Markov models are too restrictive for application to many problems of interest because they require each state to be directly observed. Subsequently, the concept of Markov models is extended to include the case of hidden Markov models, in which states are not directly observable (hidden), and an observation is a probabilistic function of the hidden states. In the HMM, the underlying stochastic process (state evolution) is not directly observable, but it can be observed in the Bayesian sense through another set of stochastic processes, which produce the sequence of observables. Hidden Markov models are significantly more applicable in the real world than observable Markov models when physical states of interest are largely unobservable. The basic theory and selected applications of HMM have been presented with details in [45,49]. For the sake of completeness, this section introduces the related fundamentals briefly. A general hidden Markov model characterizes a physical system with a state-space model, as shown in Figure 5. Formally, an HMM includes five elements, given as follows [49]:

- S, the state space that consists of N hidden states S = {S[1], S[2], …, S[N]}.
- O, a set of observables at epoch t, O(t) = {o^1, o^2, …, o^M}, where M is the number of observable symbols.
- A, the matrix of state transition probabilities A = {a[ij]}. A state transition probability a[ij] defines the probability that the state transits from a value S[i] at the immediately prior epoch to another value S[j] at the current epoch, i.e., a[ij] = P(X[t+1] = S[j] | X[t] = S[i]), 1 ≤ i, j ≤ N.
- B, the matrix of emission probabilities, B = {b[j](t)}, where b[j](t) = P(O(t) | X(t) = S[j]), 1 ≤ j ≤ N.
- π, an initial state probability distribution π = {π[i]}, where π[i] defines the probability that the state has a value S[i] at the first epoch, i.e., π[i] = P(X[1] = S[i]), 1 ≤ i ≤ N.

The principle of HMM has been used in numerous applications, and the evaluation problems associated with HMM can be categorized into three groups: the estimation of the probability (or likelihood) of an observable sequence given a specific HMM; the determination of a best sequence of model states, given an HMM and an observation sequence; and the learning of model parameters to best account for the observed signals [45,49]. In the problem of position estimation, a hidden Markov model represents the temporal correlation of a user's positions without the restriction of any particular forms of motion models. The solution of the position estimate acts as the central processor for data fusion to combine MDI data and RSSI observables. In this paper, two algorithms of HMM problems, i.e., the grid-based filter and the Viterbi algorithm, are proposed to resolve position estimates for different types of applications.
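The five elements above map naturally onto a small container; a minimal sketch is given below, in which the concrete states, observables and probability values are invented for illustration (in HIPE the states are reference points and the emission probabilities come from the Weibull RSSI model described below).

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class HMM:
    """Container for the five HMM elements listed above.  The sizes and the
    plain-matrix form of B are illustrative choices, not the paper's exact
    data structures."""
    states: list            # S: the N hidden states (e.g., reference points)
    observables: list       # O: the observable symbols (e.g., RSSI patterns)
    A: np.ndarray           # N x N state transition probabilities a_ij
    B: np.ndarray           # N x M emission probabilities b_j(o)
    pi: np.ndarray          # initial state probability distribution

model = HMM(
    states=["RP1", "RP2", "RP3"],
    observables=["strong", "medium", "weak"],
    A=np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]),
    B=np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.2, 0.7]]),
    pi=np.array([1 / 3, 1 / 3, 1 / 3]),
)
```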
The details of both solutions can be found at pages 173-175 in [45]. The grid-based filter solution gives the state estimate that has the maximum posterior probability, while the Viterbi algorithm produces the most likely state sequence that has produced the observable sequence [50]. The two algorithms have distinct interpretations from each other, although they both produce the position estimate. Given that the hidden state space has a finite number of states, e.g., reference points in the positioning problem, the grid-based filter algorithm produces an optimal estimate for each current epoch using historical information and current observations [51], whereas it does not necessarily produce the most likely state sequence for all epochs. In other words, the difference in the position estimates obtained by the two algorithms is described as follows: the Viterbi algorithm recalculates the entire sequence when every new observation (evidence) is obtained, while the grid-based filter algorithm directly appends a current optimal state estimate to the previously generated sequence. For location-based applications, one of the two algorithms is selected according to the situation of a specific application. For example, real-time navigation requires the grid-based filter to estimate an optimal state for up-to-date time instants, while an application of location tracking may prefer the Viterbi algorithm to produce the most likely position trajectory over the whole time period.
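A rough sketch of the two estimators over a discrete set of reference points follows. The emission probabilities are passed in as already evaluated numbers (in HIPE they come from the Weibull RSSI likelihood of the fingerprint database), and the toy transition matrix and values are assumptions for the example, not parameters of the paper.

```python
import numpy as np

def grid_filter_step(prior, A, emission):
    """One update of the grid-based filter: propagate the previous posterior
    through the transition matrix A, weight it by the current RSSI emission
    probabilities, and renormalise.  The position estimate is the reference
    point with the maximum posterior probability."""
    posterior = emission * (A.T @ prior)
    return posterior / posterior.sum()

def viterbi(pi, A, emissions):
    """Most likely reference-point sequence for a whole observation window.
    `emissions` is a (T, N) array of P(RSSI_t | state = RP_j)."""
    T, N = emissions.shape
    delta = np.log(pi) + np.log(emissions[0])
    back = np.zeros((T, N), dtype=int)
    logA = np.log(A)
    for t in range(1, T):
        scores = delta[:, None] + logA            # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(emissions[t])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack the best sequence
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage: 4 reference points, 3 epochs of pre-evaluated emission values.
N = 4
A = np.full((N, N), 0.1) + 0.6 * np.eye(N)
A /= A.sum(axis=1, keepdims=True)
pi = np.full(N, 1.0 / N)
emissions = np.array([[0.7, 0.1, 0.1, 0.1],
                      [0.2, 0.6, 0.1, 0.1],
                      [0.1, 0.2, 0.6, 0.1]])
posterior = grid_filter_step(pi, A, emissions[0])   # real-time navigation use
track = viterbi(pi, A, emissions)                    # location-tracking use
```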
As a result, a higher state transition probability (a^h) is assigned to the state pairs whose distance and direction are consistent with the measured values, and the other state pairs are assigned the lower state transition probability (a^l). In the proposed solution, the high and low transition probabilities are calculated as follows:

    a^h = K · a^l,    (5)
    Σ_{j=1}^{N} a[ij] = I · a^h + (N − I) · a^l = [N + (K − 1) · I] · a^l = 1,

where K is the ratio between the high and low values of the transition probabilities, and I is the number of state pairs with the higher transition probability. In the HIPE solution, the value of K is adapted to the reliability of the MDI and of the positioning solution of the previous epoch. The K value is adjusted every epoch; it is set to a larger value when the previous positioning solution and the current MDI are more reliable, and vice versa.

To make the HIPE positioning engine usable on different smartphone platforms, it is necessary to address different situations of MDI availability, because different platforms may have different sensors available. This subsection presents the flexibility of the HIPE solution in coping with situations when only partial MDI, or no measured MDI at all, is available. Table 1 gives four scenarios, each with a different level of MDI availability. When an accelerometer can be used, the distance moved is estimated from the accumulated step lengths. Otherwise, exploiting the limited dynamics of indoor motion, the range of the distance moved can be estimated with an empirical maximum-speed model, e.g., 1 m/s in this work, for the scenarios "Measured heading & assumed speed" and "Assumed speed". When a compass can be used, the heading is measured directly; otherwise the heading remains unknown, e.g., in the scenarios "Measured distance" and "Assumed speed", and all directions are considered possible headings because the user may change heading at any time. The four scenarios in Table 1 cover all possible situations regardless of the sensor types used.

The calculation of the state transition probabilities for the different MDI availabilities is illustrated in Figure 6. The grid points in Figure 6 denote all possible state candidates for the current epoch, and point i (the triangle) is a state candidate of the previous epoch. When only the distance moved is measured (the scenario "Measured distance" in Table 1), the subset of state candidates (reference points) C[Dist] defined by Equation (7) receives the higher transition probability, while the others receive the lower one:

    C[Dist] = { j | d − ε < ‖P[j] − P[i]‖ < d + ε,  j ∈ {1, …, N} },    (7)

where P[j] and P[i] are the coordinates of state candidates j and i, d and ε are the measured movement distance and its uncertainty, and ‖·‖ denotes the distance between two RPs. In this case the subset C[Dist] of state candidates lies within the ring zone around point i, as shown in Figure 6(a). The radius and the width of the ring are determined by the measured distance d and its uncertainty ε, respectively.
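Solving the normalization above for a^l makes the assignment concrete. A minimal sketch of that arithmetic (ours, with illustrative numbers):

def transition_levels(N, I, K):
    """Solve [N + (K - 1) * I] * a_l = 1 for the low probability a_l and return
    (a_h, a_l), where a_h = K * a_l is assigned to the I candidates consistent with the MDI."""
    a_l = 1.0 / (N + (K - 1) * I)
    return K * a_l, a_l

# e.g. 100 reference points, 12 candidates consistent with the MDI, K = 200:
a_h, a_l = transition_levels(N=100, I=12, K=200)
# check: 12 * a_h + 88 * a_l == 1 (up to floating point)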
When only the heading is measured, and an empirical constant-speed model is used to bound the maximum walking range within a time interval (the scenario "Measured heading & assumed speed" in Table 1), the subset of state candidates C[Heading] defined by Equation (8) receives the higher transition probability, while the others receive the lower one:

    C[Heading] = { j | |∠ij − α| < γ and ‖P[j] − P[i]‖ < ρ,  j ∈ {1, …, N} },    (8)

where α and γ are the measured heading and its uncertainty, ∠ij is the true direction from the i-th to the j-th RP, and ρ is the maximum walking range calculated for the epoch time interval. In this case the subset C[Heading] of state candidates lies within the sector zone radiating from point i, as shown in Figure 6(b). The angle of the sector zone is determined by the uncertainty γ.

When both the distance moved and the heading are measured (the scenario "Measured distance & heading" in Table 1), the intersection of C[Dist] and C[Heading] is the subset of state candidates receiving the higher transition probability, as defined by Equation (9):

    C[Dist&Heading] = { j | j ∈ (C[Dist] ∩ C[Heading]) }.    (9)

In this case the subset C[Dist&Heading] of state candidates lies within the intersection of the ring and the sector zones, as shown in Figure 6(c).

Finally, when neither the distance moved nor the heading is measured, the proposed solution is still usable. In this case the maximum-speed model is used to calculate a maximum walking range within the epoch time interval (the scenario "Assumed maximum speed" in Table 1). The subset of state candidates C[Range] defined by Equation (10) receives the higher transition probability, while the others receive the lower one:

    C[Range] = { j | ‖P[j] − P[i]‖ < ρ,  j ∈ {1, …, N} }.    (10)

In this case the subset C[Range] lies within the whole circular area shown in Figure 6(d); the radius of the circle is given by the range ρ.

In summary, this subsection shows that the availability and the accuracy of the measured MDI data have a direct impact on the calculation of the transition probabilities in the proposed solution. A robust positioning system should be able to recover a correct positioning result even if incorrect MDI data have been provided or erroneous positioning results have been produced in previous epochs. The proposed solution uses two practices to achieve robustness. First, it constrains all transition probabilities to values greater than zero; in other words, Equation (3) is slightly tightened to

    a[ij] > 0.

Second, the ratio K in Equation (5) is re-evaluated continuously, based on the reliability of the MDI and of the positioning solution of the previous epoch. In contrast to previous studies, which impose deterministic forms of motion models, the proposed HIPE solution measures the changing MDI using the smartphone sensors. Flexibility is also important, so that HIPE can work with different MDI combinations and is therefore usable on different smartphone platforms, which may have different types of sensors available for MDI measurement.

The proposed HIPE solution was evaluated through a field experiment conducted on the third floor of an office building occupied by the Finnish Geodetic Institute (FGI). The building has three floors and is a typical office environment, including corridors, office rooms, an elevator, staircases, and electronic devices such as computers and printers. Figure 7 shows the layout of the building.
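The four cases of Equations (7)–(10) can be folded into a single candidate-selection step. The sketch below is our own (building on the look-up tables from the earlier sketch, not the paper's code); it returns the indices that should receive the higher transition probability for whichever MDI happens to be available.

import numpy as np

def candidate_subset(i, dist, bearing, d=None, eps=None, alpha=None, gamma=None, rho=None):
    """Indices of candidate states given the higher transition probability, mirroring Eqs. (7)-(10).
    dist, bearing : pairwise look-up tables; d, eps : measured distance and uncertainty;
    alpha, gamma  : measured heading and uncertainty; rho : maximum walking range."""
    N = dist.shape[0]
    keep = np.ones(N, dtype=bool)
    if d is not None:                                    # Eq. (7): ring zone around point i
        keep &= (dist[i] > d - eps) & (dist[i] < d + eps)
    if alpha is not None:                                # Eq. (8): sector zone, bounded by rho
        diff = np.abs((bearing[i] - alpha + 180.0) % 360.0 - 180.0)  # wrap-around angle difference
        keep &= (diff < gamma) & (dist[i] < rho)
    if d is None and alpha is None:                      # Eq. (10): whole circle of radius rho
        keep &= dist[i] < rho
    return np.flatnonzero(keep)                          # Eq. (9) is the ring-and-sector case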
The two corridors are each approximately 40 m long. This section first evaluates the accuracy of the measured MDI data, i.e., the distance moved and the heading, in an actual indoor environment. The experimental results give the reader a perspective on the reliability of the smartphone sensors, and the resulting accuracy values are then used as thresholds in the HMM positioning approach. To evaluate the sensor measurements and the positioning results, three error statistics are used: the root mean square error (RMSE), the error mean (EM), and the maximum error (ME). These statistics characterize, from different aspects, the consistency between a measurement and its true value. For the compass and distance-moved measurements, the error ε is calculated as

    ε[t] = H[t] − H̄[t],

where H[t] and H̄[t] are the measurement and its reference value at epoch t, respectively. For the positioning results, the error ε is calculated as

    ε[t] = ‖z(t) − z̄(t)‖,    (13)

where z(t) and z̄(t) are the positioning result and the corresponding reference at epoch t, respectively. The RMSE, EM, and ME are defined as

    RMSE = sqrt( (1/T) Σ_{t=1}^{T} ε[t]² ),
    EM = (1/T) Σ_{t=1}^{T} ε[t],
    ME = max_{t=1,…,T} ε[t],

where T is the number of epochs. For the distance moved, a relative error rate is calculated for each test case to evaluate the case-by-case accuracy of the step length model:

    ξ[l] = ε[l] / H[l],  l = 1, 2, …, L,

where L is the number of test cases, and ε[l] and H[l] are the distance error and the true distance of test case l. A mean error rate over all cases is calculated to evaluate whether the step length model is biased:

    ξ̄ = (1/L) Σ_{l=1}^{L} ξ[l].

The smartphone compass was evaluated in a realistic indoor navigation scenario: a tester held the device in hand and moved naturally in a manner of his or her choosing, i.e., the tester could walk freely around the testing area and could start or stop walking at any time. Before the experiments the compass was calibrated by rotating the device in a figure-eight pattern for roughly one minute, until it reported the highest calibration level [47]. The true-north directions of the walking routes were adjusted by the magnetic declination to obtain magnetic-north directions. The magnetic declination is the angle between magnetic north and true north, and its value can be obtained from [54] for a given geographical location and date; for the experimental area (Helsinki, Finland, August 2012) it is 7°43′. The magnetic-north directions obtained in this way were used as the references against which the compass measurements were compared. The differences were treated as measurement errors, which can be caused by multiple factors, e.g., sensor noise, environmental disturbance, and body sway of the tester. This study does not attempt to identify these factors or to reduce the errors; instead, the experiment evaluates the uncertainty level of the smartphone compass in a real office environment.

Two testers each performed an experiment that included two motion states, walking and stationary. Table 2 shows the error statistics, in terms of the RMSE, EM, and ME, for both tests; about 4,000 measurements were collected in each test. The results show that the smartphone compass measurements have an RMSE of approximately 10° in the stationary state and 30° in the walking state, which is consistent with previous studies [38]. Figure 8 further shows the epoch-by-epoch measurements of tester 1 and the corresponding reference, used to investigate the error distribution over time.
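These statistics are straightforward to compute. A small sketch (ours) matching the definitions above; note that the sign convention for ME is left implicit in the text, and the largest absolute error is used here as an assumption.

import numpy as np

def error_statistics(measured, reference):
    """RMSE, error mean (EM) and maximum error (ME) of a length-T measurement series."""
    err = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean(err ** 2))
    em = np.mean(err)
    me = np.max(np.abs(err))   # assumption: ME taken as the largest absolute error
    return rmse, em, me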
It can be concluded that the measurements are relatively smooth when the tester is stationary, and that large errors and variations arise when the tester comes close to the elevator.

The step-detection experiment was likewise conducted in a realistic pedestrian navigation scenario: the pedestrian held the device in hand and moved naturally in a manner of his or her choosing. Within the same experimental environment, two testers each performed a test. From the smartphone accelerometer measurements, HIPE recognizes the current motion state, either stationary or walking. When walking is recognized, HIPE counts the number of steps and calculates the walking distance by multiplying the step count by an empirical step length of 0.7 m per step [43,44]. Figure 9 shows the results of motion state recognition and step detection for the first tester (Test 1). The test consists of three static segments and four walking segments. Figure 9(a) shows the whole process with the recognized motion states, and Figure 9(b) shows a magnified view of the first walking segment with the detected steps. Raw accelerometer measurements are output at a rate of 35 Hz; the blue line in Figure 9 is the smoothed pedestrian acceleration used to detect steps, and the detected steps are shown as green circles.

Table 3 shows the errors of the derived walking distances for the two testers. Although only two testers are illustrated, the results clearly indicate that different pedestrians may have significantly different step lengths. The generic step length model fits the step lengths of different pedestrians reasonably well and has a small mean error rate of 1.86%, but it may produce an error of approximately ±8% in the estimated walking distance of a specific pedestrian. The distance error is caused by step misdetection and by the difference between the individual's actual step length and the generic model. It can also be observed that recognition of the static state is highly reliable, because no walking state is detected during the static segments. This means that when HIPE recognizes the current motion state as static, the result can be used with high confidence; for example, the ratio K in Equation (5) can be given a larger value when the current state is recognized as static, as shown in Table 4. To enhance the robustness of the positioning solution, HIPE tolerates a larger error in the distance estimate than the above experimental results suggest: as shown in Table 4, it assumes a relative error range of ±10% for a distance estimate, with a minimum absolute error of 1.5 m, which is half the separation distance of most RPs in the experimental area.

Using the accuracy figures for the MDI data evaluated in Sections 5.1 and 5.2, the proposed smartphone positioning solution was tested in the office environment described above. The tester held the smartphone and moved around the test area in a manner of his choosing, starting and stopping anywhere and at any time. The experiment spanned more than 1,500 s, with approximately 160 RSSI observation epochs. This positioning test lasted longer than the tests in Sections 5.1 and 5.2, and it did not use a predefined route, so as to obtain a realistic performance evaluation. Figure 7 shows the experimental environment, and Table 4 gives the parameter values used in the HMM solution.
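The step-to-distance conversion and the tolerance attached to it are simple enough to state in code. A minimal sketch (ours), using the 0.7 m step length and the Table 4 tolerances:

def walking_distance(step_count, step_length=0.7):
    """Walking distance from the detected step count, using the generic 0.7 m step length."""
    return step_count * step_length

def distance_tolerance(d, rel=0.10, floor=1.5):
    """Uncertainty eps attached to a distance estimate d: +/-10 % of d, but never below 1.5 m."""
    return max(rel * d, floor)

# e.g. 60 detected steps -> 42 m, with eps = max(4.2, 1.5) = 4.2 m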
For the error calculations, the time instant at which each reference point was passed was recorded, and the actual position at each RSSI observation epoch was then computed by interpolation using the pedestrian dynamics information. These actual positions were used as the references in Equation (13) to calculate the positioning errors epoch by epoch.

As stated earlier, HIPE can work flexibly with different smartphone platforms, which may have different types of sensors available for the calculation of motion dynamics information. This subsection compares the positioning results produced by the grid-based filter algorithm using different combinations of MDI and the same RSSI measurements; the combinations are those defined in Table 1. A common speed of 1 m/s was used to calculate the maximum walking range in the scenarios "Measured heading & assumed speed" and "Assumed maximum speed". The proposed HMM solutions were compared with maximum likelihood estimation (MLE), a classic fingerprinting algorithm used in many previous studies [21,45,53,55], which resolves location estimates by maximizing the likelihood. Figure 10 shows the epoch-by-epoch positioning errors of the MLE method and of the HMM solutions using the different combinations of MDI data. As shown in Table 5, the HMM solution outperforms the MLE fingerprinting algorithm in all cases. Figure 10 and Table 5 indicate that the MDI data improve the positioning accuracy, and that the accuracy improves as more MDI data are used. For the combinations of MDI data given in Table 1, the grid-based filter achieved RMSE improvements over MLE of 1.34 m (30.3%), 1.26 m (28.4%), 0.95 m (21.4%), and 0.56 m (12.6%), and EM improvements of 1 m (32.6%), 0.93 m (30.3%), 0.77 m (25.1%), and 0.51 m (16.6%), respectively. To give insight into the distribution of the positioning errors, Figure 11 presents the empirical cumulative probabilities of the positioning errors for the different cases. The comparison further shows that the MDI data effectively reduce the positioning errors: when more MDI is used, large positioning errors are mitigated significantly and the positioning reliability is improved.

As described previously, both solutions of the HMM problems, i.e., the Viterbi algorithm and the grid-based filter, can be used to estimate positions, and each suits different location-based applications; HIPE implements both. Figure 12 compares the positioning accuracy of the two algorithms using the same RSSI measurements and the different combinations of MDI defined in Table 1. The figure shows only slight differences in the positioning error statistics in all cases, i.e., the Viterbi algorithm and the grid-based filter have comparable positioning performance when the same set of MDI data is applied.

This paper has presented a smartphone indoor positioning engine named HIPE. Because HIPE uses only the built-in hardware and computational resources of a smartphone, the positioning solution presented here is more cost-efficient, and more convenient to integrate with related applications and services, than previously presented alternatives. The proposed HIPE solution is a hybrid solution that fuses multiple smartphone sensors with WLAN signals.
The smartphone sensors are used to measure the motion dynamics information of the mobile user, and the MDI data augment the WLAN positioning by mitigating the impact of RSSI variance. In this paper, two algorithms for HMM problems, i.e., the grid-based filter and the Viterbi algorithm, were used for data fusion to resolve the position estimates; both demonstrated comparable positioning accuracy and are suitable for different types of applications. In comparison with previous studies, which have commonly used deterministic motion models, the proposed HIPE solution is more widely applicable to various motion scenarios because it measures the actual motion dynamics with smartphone sensors. The results of the indoor positioning experiment showed that HIPE provides adequate positioning accuracy and reliability, and that the accuracy of the positioning solution increases with increasing use of MDI data. HIPE was implemented in this paper on the Nokia N8 smartphone, and it can be transferred to other smartphone platforms, even if those platforms use different combinations of sensors for MDI measurement; this paper has presented the methods used to address the different scenarios in which the various types of MDI are available. In the future, other smartphone sensors, such as cameras and gyroscopes, will be integrated with HIPE to measure MDI, and three novel LBS smartphone applications, such as iParking [56], will be developed on top of HIPE for demonstrations related to route guidance and indoor navigation in city ecosystems.

Acknowledgments: This work was supported in part by the project iSPACE (indoor/outdoor Seamless Positioning and Applications for City Ecosystem) funded by TEKES (the Finnish Funding Agency for Technology and Innovation) together with the Finnish Geodetic Institute, Nokia Inc., Fastrax Ltd., Space Systems Finland Ltd., Bluegiga Ltd., and Indagon Ltd. The authors thank Tuomas Keränen and Ahsan Feroz, two students from Aalto University, Finland, for their help in conducting the experiments. This work was also supported in part by the National Natural Science Foundation of China (Grant Nos. 41174029 and 41204028).
References

[1] Accurate Mobile Indoor Positioning Industry Alliance. Available online: http://press.nokia.com/2012/08/23/accurate-mobile-indoor-positioning-industry-alliance-called-in-location-to-promote-deployment-of-location-based-indoor-services-and-solutions/ (accessed on 31 August 2012).
[2] Chen, R.; Chen, Y.; Pei, L.; Chen, W.; Kuusniemi, H.; Liu, J.; Leppäkoski, H.; Takala, J. A DSP-Based Multi-Sensor Multi-Network Positioning Platform. Proceedings of the 2009 ION GNSS, Savannah, GA, USA, 22–25 September 2009; 615–621.
[3] Kuusniemi, H.; Liu, J.; Pei, L.; Chen, Y.; Chen, L.; Chen, R. Reliability considerations of multi-sensor multi-network pedestrian navigation. 2012, 6, 157–164, doi:10.1049/iet-rsn.2011.0247.
[4] Chen, R.; Chen, W.; Chen, X.; Zhang, X.; Chen, Y. Sensing strides using EMG signal for pedestrian navigation. 2011, 15, 161–170, doi:10.1007/s10291-010-0180-x.
[5] Pei, L.; Chen, R.; Chen, Y.; Leppäkoski, H.; Perttula, A. Indoor/Outdoor Seamless Positioning Technologies Integrated on Smart Phone. Proceedings of the International Conference on Advances in Satellite and Space Communications, Colmar, France, 20–25 July 2009; 141–145.
[6] Miller, M.M.; Raquet, J.F.; de Haag, M.U. Navigating in Difficult Environments: Alternatives to GPS-2. Available online: http://www.rta.nato.int/ (accessed on 28 August 2012).
[7] Yang, J.; Wang, Z.; Wang, G.; Liu, J.; Meng, Y. On clock jumps of GPS receiver. 2007, 27, 123–127.
[8] Godha, S.; Lachapelle, G.; Cannon, M.E. Integrated GPS/INS System for Pedestrian Navigation in a Signal Degraded Environment. Proceedings of the 2006 ION GNSS, Austin, TX, USA, 26–29 September 2006.
[9] Feng, K.-T.; Chen, C.-L.; Chen, C.-H. Gale: An enhanced geometry assisted location estimation algorithm for NLOS environments. 2008, 7, 199–213, doi:10.1109/TMC.2007.70721.
[10] Kjargaard, M.B. A taxonomy for radio location fingerprinting. 2007, 4718, 139–156.
[11] Kushki, A.; Plataniotis, K.; Venetsanopoulos, A. Kernel-based positioning in wireless local area networks. 2007, 6, 689–705, doi:10.1109/TMC.2007.1017.
[12] Jie, Y.; Qiang, Y.; Lionel, N. Learning adaptive temporal radio maps for signal-strength-based location estimation. 2008, 7, 869–886, doi:10.1109/TMC.2007.70764.
[13] Kushki, A.; Plataniotis, K.N.; Venetsanopoulos, A.N. Intelligent dynamic radio tracking in indoor wireless local area networks. 2010, 9, 405–419, doi:10.1109/TMC.2009.141.
[14] Au, A.; Feng, C.; Valaee, S.; Reyes, S.; Sorour, S.; Markowitz, S.N.; Gold, D.; Gordon, K.; Eizenman, M. Indoor tracking and navigation using received signal strength and compressive sensing on a mobile device. 2012, in press.
[15] Evennou, F.; Marx, F.; Novakov, E. Map-Aided Indoor Mobile Positioning System Using Particle Filter. Proceedings of the 2005 IEEE Wireless Communications and Networking Conference, New Orleans, LA, USA, 13–17 March 2005; 2490–2494.
[16] Wang, H.; Szabo, A.; Bamberger, J.; Brunn, D.; Hanebeck, U. Performance Comparison of Nonlinear Filters for Indoor WLAN Positioning. Proceedings of the International Conference on Information Fusion, Cologne, Germany, 30 June–3 July 2008.
[17] Paul, A.; Wan, E. Wi-Fi Based Indoor Localization and Tracking Using Sigma-point Kalman Filtering Methods. Proceedings of the IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 5–8 May 2008; 646–659.
[18] Singh, R.; Macchi, L.; Regazzoni, C.; Plataniotis, K. A Statistical Modelling Based Location Determination Method Using Fusion in WLAN. Proceedings of the International Workshop on Wireless Ad-Hoc Networks, London, UK, 23–26 May 2005.
[19] Bahl, P.; Padmanabhan, V. RADAR: An In-building RF-based User Location and Tracking System. Proceedings of the IEEE Infocom, Tel-Aviv, Israel, 26–27 March 2000; 775–784.
[20] Youssef, M.; Agrawala, A. The Horus WLAN Location Determination System. Proceedings of the 3rd International Conference on Mobile Systems, Applications, and Services, New York, NY, USA, 5 June 2005; 205–218.
[21] Liu, J.; Chen, R.; Pei, L.; Chen, W.; Tenhunen, T.; Kuusniemi, H.; Kroger, T.; Chen, Y. Accelerometer Assisted Robust Wireless Signal Positioning Based on a Hidden Markov Model. Proceedings of the IEEE/ION Position Location and Navigation Symposium, Palm Springs, CA, USA, 4–6 May 2010.
[22] Roos, T.; Myllymaki, P.; Tirri, H.; Misikangas, P.; Sievanen, J. A probabilistic approach to WLAN user location estimation. 2002, 9, 155–164, doi:10.1023/A:1016003126882.
[23] Kushki, A. Ph.D. Dissertation, University of Toronto, Toronto, ON, Canada, 2008.
[24] Guvenc, I.; Abdallah, C.T.; Jordan, R.; Dedeoglu, O. Enhancements to RSS Based Indoor Tracking Systems Using Kalman Filters. Proceedings of the Global Signal Processing Expo and International Signal Processing Conference, Dallas, TX, USA, 31 March–3 April 2003.
[25] Besada, J.A.; Bernardos, A.M.; Tarrio, P.; Casar, J.R. Analysis of Tracking Methods for Wireless Indoor Localization. Proceedings of the 2007 Wireless Pervasive Computing, San Juan, Puerto Rico, 5–7 February 2007; 493–497.
[26] Evennou, F.; Marx, F. Improving Positioning Capabilities for Indoor Environments with WiFi. Available online: http://www.eurasip.org/Proceedings/Ext/IST05/papers/259.pdf (accessed on 11 December 2012).
[27] King, T.; Kopf, S.; Haenselmann, T.; Lubberger, C.; Effelsberg, W. COMPASS: A Probabilistic Indoor Positioning System Based on 802.11 and Digital Compasses. Proceedings of the International Workshop on Wireless Network Testbeds, Experimental Evaluation and Characterization, Los Angeles, CA, USA, 29 September 2006; 34–40.
[28] Evennou, F.; Marx, F. Advanced integration of WIFI and inertial navigation systems for indoor mobile positioning. 2006, doi:10.1155/ASP/2006/86706.
[29] Object Tracking under High Correlation for Kalman & αβ Filter. Available online: http://cdn.intechweb.org/pdfs/8570.pdf (accessed on 11 December 2012).
[30] Wang, C.; Chiou, Y.; Dai, Y. An Adaptive Location Estimator Based on α-β Filtering for Wireless Sensor Networks. Proceedings of the 2007 Wireless Communications and Networking Conference, Hong Kong, 11–15 March 2007; 3285–3290.
[31] Cattoni, A.F.; Dore, A.; Regazzoni, C.S. Video-radio Fusion Approach for Target Tracking in Smart Spaces. Proceedings of the International Conference on Information Fusion, Quebec, Canada, 9–12 July 2007.
[32] Dore, A.; Cattoni, A.F.; Regazzoni, C.S. A Particle Filter Based Fusion Framework for Video-Radio Tracking in Smart Spaces. Proceedings of the 2007 IEEE Conference on Advanced Video and Signal Based Surveillance, London, UK, 5–7 September 2007; 99–104.
[33] Gentile, C.; Klein-Berndt, L. Robust location using system dynamics and motion constraints. 2004, 3, 1360–1364.
[34] Liu, J.; Chen, R.; Wang, Z.; Zhang, H. Spherical cap harmonic model for mapping and predicting regional TEC. 2011, 15, 109–119, doi:10.1007/s10291-010-0174-8.
[35] Widyawan, K.M.; Beauregard, S. A Novel Backtracking Particle Filter for Pattern Matching Indoor Localization. Proceedings of the 1st ACM International Workshop on Mobile Entity Localization and Tracking in GPS-Less Environments, San Francisco, CA, USA, 14–19 September 2008; 79–84.
[36] Link, J.A.B.; Smith, P.; Viol, N.; Wehrle, K. FootPath: Accurate Map-Based Indoor Navigation Using Smartphones. Proceedings of the 2011 International Conference on Indoor Positioning and Indoor Navigation, Guimarães, Portugal, 21–23 September 2011.
[37] Ojeda, L.; Borenstein, J. Non-GPS Navigation for Security Personnel and First Responders. 2007, 60, 391–403, doi:10.1017/S0373463307004286.
[38] Saarinen, J. Ph.D. Dissertation, Helsinki University of Technology, Helsinki, Finland, 2009.
[39] Pei, L.; Chen, R.; Liu, J.; Chen, W.; Kuusniemi, H.; Tenhunen, T.; Kröger, T.; Chen, Y.; Leppäkoski, H.; Takala, J. Motion Recognition Assisted Indoor Wireless Navigation on a Mobile Phone. Proceedings of the 2010 ION GNSS, Portland, OR, USA, 21–24 September 2010; 3366–3375.
[40] Pei, L.; Liu, J.; Guinness, R.; Chen, Y.; Kuusniemi, H.; Chen, R. Using LS-SVM based motion recognition for smartphone indoor wireless positioning. 2012, 12, 6155–6175, doi:10.3390/s120506155; PMID 22778636.
[41] Qt Mobility Project Reference Documentation: Sensors. Available online: http://doc.qt.nokia.com/qtmobility/sensors-api.html#accessing-sensor-data-in-a-generic-fashion (accessed on 11 December 2012).
[42] Chen, R.; Pei, L.; Chen, Y. A Smart Phone Based PDR Solution for Indoor Navigation. Proceedings of the 24th International Technical Meeting of the Satellite Division of the Institute of Navigation, Portland, OR, USA, 20–23 September 2011; 1404–1408.
[43] Chen, W.; Chen, R.; Chen, Y.; Kuusniemi, H.; Wang, J.; Fu, Z. An Effective Pedestrian Dead Reckoning Algorithm Using a Unified Heading Error Model. Proceedings of the IEEE/ION PLANS 2010, Monterey, CA, USA, 5–8 May 2010; 340–347.
[44] Moafipoor, S. Ph.D. Dissertation, The Ohio State University, Columbus, OH, USA, 2009.
[45] Liu, J. Hybrid positioning with smartphones. In Chen, R. (Ed.); IGI Global: Hershey, PA, USA, 2012; 159–194.
[46] Qt Mobility Project Reference Documentation: QCompassReading Class Reference. Available online: http://doc.qt.nokia.com/qtmobility/qcompassreading.html (accessed on 23 July 2012).
[47] Calibrating the Magnetometer Sensor. Available online: http://www.developer.nokia.com/Community/Wiki/CS001671_Calibrating_the_magnetometer_sensor (accessed on 25 August 2012).
[48] Nokia N8-00 Specifications. Available online: http://www.nokia.com/gb-en/products/phone/n8-00/specifications/ (accessed on 25 August 2012).
[49] Rabiner, L.R. A tutorial on hidden Markov models and selected applications in speech recognition. 1989, 77, 257–286, doi:10.1109/5.18626.
[50] Bengio, Y. The Viterbi Algorithm. Available online: http://www.iro.umontreal.ca/~bengioy/ift6265/hmms/node9.html#SECTION00023000000000000000 (accessed on 25 August 2012).
[51] Ristic, B.; Arulampalam, S.; Gordon, N. Artech House Publishers: Boston, MA, USA, 2004.
[52] Chen, R.; Pei, L.; Liu, J.; Leppäkoski, H. WLAN and Bluetooth positioning in smart phones. In Chen, R. (Ed.); IGI Global: Hershey, PA, USA, 2012; 44–68.
[53] Pei, L.; Chen, R.; Liu, J.; Kuusniemi, H.; Tenhunen, T.; Chen, Y. Using inquiry-based Bluetooth RSSI probability distributions for indoor positioning. 2010, 9, 122–130.
[54] Estimated Value of Magnetic Declination. Available online: http://www.ngdc.noaa.gov/geomagmodels/Declination.jsp (accessed on 15 August 2012).
[55] Leppäkoski, H.; Tikkinen, S.; Perttula, A.; Takala, J. Comparison of Indoor Positioning Algorithms Using WLAN Fingerprints. Proceedings of the European Navigation Conference - Global Navigation Satellite Systems, Naples, Italy, 3–6 May 2009.
[56] Liu, J.; Chen, R.; Chen, Y.; Pei, L.; Chen, L. iParking: An intelligent indoor location-based smartphone parking service. 2012, 12, 14612–14629, doi:10.3390/s121114612; PMID 23202179.

Figure 1. The smartphone body frame defined for the Nokia N8 consists of three axes and six directions, and it uses the right-hand Cartesian coordinate system. The various sensors all use the common body frame.
Figure 2. The acceleration patterns of a pedestrian in stationary and walking states.
Figure 3. The interface of HIPE allows developers to select the sensor options. The graphical interface is not required when the engine is embedded into a specific application.
Figure 4. The general high-level architecture of the HMM solution that fuses the measurements of the sensors and WLAN to estimate the absolute positions.
Figure 5. The representation of a physical system by a hidden Markov model.
Figure 6. The grid points in the black areas indicate candidate states that have higher transition probabilities for the different combinations of MDI available; the other grid points indicate candidate states with the lower transition probability. The triangle point i is the assumed state of the previous epoch. The sub-plots (a–d) illustrate the four scenarios given in Table 1: (a) measured distance; (b) measured heading and assumed maximum speed; (c) measured distance and heading; (d) assumed maximum speed.
Figure 7. The layout and indoor environment of the experimental area.
Figure 8. The heading measurements of the smartphone compass and the corresponding reference in an indoor environment.
Figure 9. The results of motion state recognition and step detection based on the periodic acceleration pattern of a pedestrian. The blue line is the smoothed pedestrian acceleration, and the green circles indicate the detected steps.
Figure 10. The epoch-by-epoch positioning errors of the HMM solutions and the MLE solution. The HMM solutions use different combinations of MDI, as defined in Table 1, and utilize the grid-based filter algorithm to produce position estimates.
Figure 11. The cumulative probability distributions of the positioning errors for the HMM solutions and the MLE solution. The HMM solutions use different combinations of MDI, as defined in Table 1, and utilize the grid-based filter algorithm to produce the position estimates.
Figure 12. A comparison of the positioning accuracy of the Viterbi algorithm and the grid-based filter in terms of (a) the RMS errors, (b) the error mean, and (c) the maximum errors. Both algorithms use different combinations of MDI, as defined in Table 1 and as specified at the bottom of the figure.

Table 1. Different scenarios using various combinations of MDI.

Combinations of MDI                       | Sensor (distance) | Sensor (heading) | Method (distance)               | Method (heading)
Measured distance & heading               | accelerometers    | compass          | accumulated step lengths        | directly measured
Measured distance                         | accelerometers    | ---              | accumulated step lengths        | unknown
Measured heading & assumed maximum speed  | ---               | compass          | a constant speed model of 1 m/s | directly measured
Assumed maximum speed                     | ---               | ---              | a constant speed model of 1 m/s | unknown

Table 2. The smartphone digital compass error statistics for the stationary and walking states.

State      | Statistic              | Test 1 | Test 2
Stationary | RMSE (°)               | 9.50   | 12.24
           | EM (°)                 | −0.33  | −6.02
           | ME (°)                 | 21.18  | 35.82
           | Number of measurements | 2984   | 2686
Walking    | RMSE (°)               | 27.25  | 26.59
           | EM (°)                 | −5.72  | −5.06
           | ME (°)                 | 174.30 | 165.71
           | Number of measurements | 1420   | 1265

Table 3. The evaluation results for step detection and distance-moved estimation in the indoor office environment (four test cases for Tester 1 followed by four for Tester 2).

                        | Tester 1                  | Tester 2
Duration (s)            | 40.2 | 39.7 | 38.8 | 38.9 | 35.7 | 41   | 35.4 | 35.9
True step number        | 61   | 62   | 60   | 60   | 55   | 57   | 57   | 56
Estimated step number   | 60   | 59   | 60   | 58   | 54   | 55   | 53   | 55
True distance (m)       | 39   | 39   | 39   | 39   | 39   | 39   | 39   | 39
Estimated distance (m)  | 42   | 41.3 | 42   | 40.6 | 37.8 | 38.5 | 37.1 | 38.5
Error rate              | 7.7% | 5.9% | 7.7% | 4.1% | −3.1% | −1.3% | −4.9% | −1.3%
Mean error rate: 1.86%

Table 4. The parameter settings used in the HMM solutions.

Parameter                                                   | Value
Tolerated motion distance error                             | ±10% of a distance estimate, at least 1.5 m
Tolerated heading error                                     | 55°
K (the ratio between high and low transition probabilities) | [200, 200,000]

Table 5. The positioning error statistics of the grid-based filter algorithm using different combinations of MDI (unit: m).
Applied motion dynamics          | RMS error | Error mean | Maximum error
Measured distance & heading      | 3.09      | 2.07       | 6
Measured distance                | 3.17      | 2.14       | 9
Measured heading & assumed speed | 3.48      | 2.30       | 15
Assumed speed                    | 3.87      | 2.56       | 15
MLE                              | 4.43      | 3.07       | 15
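The RMSE improvements quoted in the comparison (1.34 m, 1.26 m, 0.95 m, and 0.56 m over MLE) follow directly from these numbers; a short check in Python (our own, not part of the paper):

mle = 4.43
for case, rmse in [("Measured distance & heading", 3.09), ("Measured distance", 3.17),
                   ("Measured heading & assumed speed", 3.48), ("Assumed speed", 3.87)]:
    gain = mle - rmse
    print(case, round(gain, 2), "m,", round(100 * gain / mle, 1), "%")
# -> 1.34 m (30.2 %), 1.26 m (28.4 %), 0.95 m (21.4 %), 0.56 m (12.6 %)

The 30.2% printed here versus 30.3% in the text is presumably a rounding difference from the unrounded RMSE values.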
Find a Rome, GA Geometry Tutor

...I currently work in the math lab at school, which provides tutoring services to the Berry College community, but I would like to expand my tutoring to the rest of Rome and surrounding communities. As a rising senior, I have taken my fair share of math classes, so almost all subjects are open for...
23 Subjects: including geometry, reading, calculus, statistics

...AP means it is a college level course. Additionally, I teach a 9th grade Honors Physics class. This is aimed at a somewhat lower level than regular physics taught to 12th graders.
19 Subjects: including geometry, physics, GED, SAT math

...I have taught special needs students in areas of ADD/ADHD in all grade levels for over 8 years. I am fully certified in this area in the state of Georgia. I have had much success in developing an individualized teaching method for ADD/ADHD students and helping them achieve their goals.
47 Subjects: including geometry, reading, English, biology

...This will never change, and unless the Skype call is a lesson, there will never be a charge. My standard rate is $35 per hour. However, payment options are negotiable to an extent (please message me for more details). Lessons are booked in two hour blocks, with a 24 hour cancellation policy.
14 Subjects: including geometry, reading, biology, algebra 2

...I have a keen sense of real-world professional graphic design, layout and workflow and I work extensively with responsive design and mobile. I have an extensive portfolio of clients and websites across a wide variety of industries. I hold a BS in Physics from UT Austin, and I took my first of several differential equations classes as an undergraduate.
126 Subjects: including geometry, chemistry, English, calculus
Pascal's Trianglw

December 11th 2006, 02:30 PM
Pascal's Trianglw
Oops, sorry for the typo in the title. :|

Need help with these please:

In the expansion of (1-ax)^n, the first three terms are 1 - 12x - 63x^2. Find the values of a and n.

In the expansion of (x - (1/x^2))^8 find:
a) The term that has x^-1
b) The coefficient of the term that has x^2.

Just to make the second one a little clearer, here's something I whipped up with my MSPaint skills:

December 11th 2006, 04:20 PM
Is there a typo in #1? Should it be +63 and not −63?

December 11th 2006, 06:46 PM

December 11th 2006, 08:40 PM
(1-ax)^n = 1 + n(-ax) + n(n-1)(-ax)^2/2 + ...
Expanding this last equation:

December 11th 2006, 08:52 PM
In the expansion of (x - (1/x^2))^8 find:
a) The term that has x^-1
b) The coefficient of the term that has x^2.

Rearrange (x - (1/x^2))^8 to:
(1/x^16)(1 - x^3)^8 = (1/x^16)[1 - 8x^3 + 28x^6 - 56x^9 + 70x^12 - 56x^15 + 28x^18 - 8x^21 + x^24]
a) The coefficient of x^(-1) in this is the coefficient of x^15 inside the brackets, which is −56, so the term is −56x^(-1).
b) The coefficient of x^2 is the coefficient of x^18 inside the brackets, which is 28.
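For completeness, the first problem can be finished from the expansion quoted in the 08:40 PM reply, taking the sign question raised at 04:20 PM to be resolved as +63x^2:

$(1-ax)^n = 1 - nax + \frac{n(n-1)}{2}a^2x^2 - \cdots$

Matching coefficients with $1 - 12x + 63x^2$ gives $na = 12$ and $\frac{n(n-1)}{2}a^2 = 63$. Substituting $a = 12/n$ into the second equation:

$\frac{n(n-1)}{2}\cdot\frac{144}{n^2} = 63 \;\Rightarrow\; \frac{72(n-1)}{n} = 63 \;\Rightarrow\; 72n - 72 = 63n \;\Rightarrow\; n = 8,\quad a = \frac{3}{2}.$

Check: $(1 - \tfrac{3}{2}x)^8 = 1 - 12x + 63x^2 - \cdots$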
Linear Systems

Provides the material for two graduate-level courses: one in linear systems and optimal control and the other in robust control
Includes H[∞] and sliding mode methods together for the first time in book form
Develops mathematical analyses, including the derivation of H[∞], to avoid learning additional mathematical tools
Highlights the use of MATLAB® software to solve practical problems via the computer
Contains end-of-chapter exercises and offers a solutions manual with qualifying course adoptions

Balancing rigorous theory with practical applications, Linear Systems: Optimal and Robust Control explains the concepts behind linear systems, optimal control, and robust control and illustrates these concepts with concrete examples and problems. Developed as a two-course book, this self-contained text first discusses linear systems, including controllability, observability, and matrix fraction description. Within this framework, the author develops the ideas of state feedback control and observers. He then examines optimal control, stochastic optimal control, and the lack of robustness of linear quadratic Gaussian (LQG) control. The book subsequently presents robust control techniques and derives H[∞] control theory from first principles, followed by a discussion of the sliding mode control of a linear system. In addition, it shows how a blend of sliding mode control and H[∞] methods can enhance the robustness of a linear system. By learning the theories and algorithms as well as exploring the examples in Linear Systems: Optimal and Robust Control, students will be able to better understand and ultimately better manage engineering processes and systems.

Table of Contents

Contents of the Book
State Space Description of a Linear System
Transfer Function of a Single Input/Single Output (SISO) System
State Space Realizations of a SISO System
SISO Transfer Function from a State Space Realization
Solution of State Space Equations
Observability and Controllability of a SISO System
Some Important Similarity Transformations
Simultaneous Controllability and Observability
Multiinput/Multioutput (MIMO) Systems
State Space Realizations of a Transfer Function Matrix
Controllability and Observability of a MIMO System
Matrix-Fraction Description (MFD)
MFD of a Transfer Function Matrix for the Minimal Order of a State Space Realization
Controller Form Realization from a Right MFD
Poles and Zeros of a MIMO Transfer Function Matrix
Stability Analysis
State Feedback Control and Optimization
State Variable Feedback for a Single Input System
Computation of State Feedback Gain Matrix for a Multiinput System
State Feedback Gain Matrix for a Multiinput System for Desired Eigenvalues and Eigenvectors
Fundamentals of Optimal Control Theory
Linear Quadratic Regulator (LQR) Problem
Solution of LQR Problem via Root Locus Plot: SISO Case
Linear Quadratic Trajectory Control
Frequency-Shaped LQ Control
Minimum-Time Control of a Linear Time-Invariant System
Control with Estimated States
Open-Loop Observer
Closed-Loop Observer
Combined Observer–Controller
Reduced-Order Observer
Response of a Linear Continuous-Time System to White Noise
Kalman Filter: Optimal State Estimation
Stochastic Optimal Regulator in Steady State
Linear Quadratic Gaussian (LQG) Control
Impact of Modeling Errors on Observer-Based Control
Robust Control: Fundamental Concepts and H[2], H[∞], and μ Techniques
Important Aspects of Singular Value Analysis
Robustness: Sensitivity and Complementary Sensitivity
Robustness of LQR and Kalman Filter (KF) Feedback Loops
LQG/LTR Control
H[2] and H[∞] Norms
H[2] Control
Well-Posedness, Internal Stability, and Small Gain Theorem
Formulation of Some Robust Control Problems with Unstructured Uncertainties
Formulation of Robust Control Problems with Structured Uncertainties
H[∞] Control
Loop Shaping Controller Based on μ Analysis
Robust Control: Sliding Mode Methods
Basic Concepts of Sliding Modes
Sliding Mode Control of a Linear System with Full State Feedback
Sliding Mode Control of an Uncertain Linear System with Full State Feedback: Blending H[∞] and Sliding Mode Methods
Sliding Mode Control of a Linear System with Estimated States
Optimal Sliding Mode Gaussian (OSG) Control
Appendix A: Linear Algebraic Equations, Eigenvalues/Eigenvectors, and Matrix Inversion Lemma
System of Linear Algebraic Equations
Eigenvalues and Eigenvectors
Matrix Inversion Lemma
Appendix B: Quadratic Functions, Important Derivatives, Fourier Integrals, and Parseval's Relation
Quadratic Functions
Derivative of a Quadratic Function
Derivative of a Linear Function
Fourier Integrals and Parseval's Theorem
Appendix C: Norms, Singular Values, Supremum, and Infimum
Vector Norms
Matrix Norms
Singular Values of a Matrix
Singular Value Decomposition (SVD)
Properties of Singular Values
Supremum and Infimum
Appendix D: Stochastic Processes
Stationary Stochastic Process
Power Spectrum or Power Spectral Density (PSD)
White Noise: A Special Stationary Stochastic Process
Response of a SISO Linear and Time-Invariant System Subjected to a Stationary Stochastic Process
Vector Stationary Stochastic Processes
Appendix E: Optimization of a Scalar Function with Constraints in the Form of a Symmetric Real Matrix Equal to Zero
Appendix F: A Flexible Tetrahedral Truss Structure
Appendix G: Space Shuttle Dynamics during Reentry
Exercises appear at the end of each chapter.
[Haskell-cafe] Bifold: a simultaneous foldr and foldl
Noah Easterly noah.easterly at gmail.com
Tue Nov 30 04:41:50 CET 2010

Somebody suggested I post this here if I wanted feedback.

So I was thinking about the ReverseState monad I saw mentioned on r/haskell a couple days ago, and was playing around with the concept of information flowing in two directions, when I came up with this function:

bifold :: (l -> a -> r -> (r,l)) -> (l,r) -> [a] -> (r,l)
bifold _ (l,r) []     = (r,l)
bifold f (l,r) (a:as) = (ra,las)
  where (ras,las) = bifold f (la,r) as
        (ra,la)   = f l a ras

(I'm sure someone else has come up with this before, so I'll just say I discovered it, not invented it.)

Basically, it's a simultaneous left and right fold, passing one value from the start of the list toward the end, and one from the end toward the start.

It lets you do some interesting stuff, like filter based on position or other left-dependent information:

evenIndexed :: [a] -> [a]
evenIndexed = fst . bifold alternate (0,[])
  where alternate 0 x xs = (x:xs, 1)
        alternate 1 _ xs = (xs, 0)

maximums :: (Ord a) => [a] -> [a]
maximums []     = []
maximums (a:as) = a : (fst $ bifold (\m a l -> if a > m then (a:l,a) else (l,m)) (a,[]) as)

As long as you don't examine the left-to-right value, it can still work on infinite lists:

ghci> take 20 $ evenIndexed [0..]
[0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38]

Also, it can be used for corecursive data (or, at least, doubly-linked lists):

data DList a = Start { first :: DList a }
             | Entry { value :: a, next :: DList a, prev :: DList a }
             | End   { last :: DList a }
  deriving (Eq)

ofList :: [a] -> (DList a, DList a)
ofList as = (start,end)
  where start        = Start first
        end          = End last
        (first,last) = bifold mkEntry (start,end) as
        mkEntry p v n = let e = Entry v n p in (e,e)

It's just been running around my head all night, so I thought I'd share.
Kaput Center for Research and Innovation in STEM Education

Guershon Harel
Wednesday 19th November 2008
Watch the talk

The goal of this talk is to contribute to the debate on a pair of questions that are on the minds of many mathematics educators (teachers, teacher leaders, curriculum developers, and researchers who study the processes of learning and teaching), namely: (1) What is the mathematics that we should teach in school? (2) How should we teach it? Clearly, one presentation is not sufficient to address these colossal questions, which are inextricably linked to other difficult questions about student learning, teacher knowledge, school culture, societal need, and educational policy, to mention a few. The goal of the talk, therefore, is merely to articulate a pedagogical stance on these two questions. The stance is not limited to a particular mathematical area or grade level; rather, it encompasses the learning and teaching of mathematics in general. This stance is oriented within a theoretical framework called DNR-based instruction in mathematics (DNR, for short). The initials D, N, and R stand for three foundational instructional principles in the framework: duality, necessity, and repeated reasoning. DNR can be thought of as a system consisting of three categories of constructs: premises (explicit assumptions underlying the DNR concepts and claims); concepts (constructs defined and oriented within these premises); and instructional principles (claims about the potential effect of teaching actions on student learning).

Biographical Sketch: Dr Guershon Harel is Professor of Mathematics at the University of California, San Diego, where he has taught for the past 8 years. Prior to UCSD, he taught at Purdue University and Northern Illinois University. He is a Principal Investigator for the Rational Number Project, a project that has received funding from the National Science Foundation to create a companion module to the current RNP Fraction Lessons for the Middle Grades. Other research projects include the Algebraic Thinking Institute (ATI) at UCSD, Proof Understanding, Production, and Appreciation (PUPA), and Development of Mathematics Teachers' Knowledge Base Through DNR-Based Instruction: Focus on Proofs in Algebra. He holds a BS, MS, and PhD in Mathematics from Ben-Gurion University of the Negev. His areas of interest include: cognition and epistemology of mathematics and their implications for mathematics curricula and teacher education; advanced mathematical thinking, particularly the concept of proof; the learning and teaching of linear algebra; and the development of the multiplicative conceptual field.
Irreducible representations of the unitriangular group

Hi, I wonder how much is known about the irreducible representations of the n×n unitriangular group over a finite field with q elements. I know that all character degrees are a power of q and that all degrees which occur are known. But what is known about the irreducible representations, or at least the complete character table, for small values of n? For example, is the character table for n=3 known? Thanks for helping.

characters gr.group-theory rt.representation-theory

5 Answers

Not much: the theory of individual irreducible representations is a 'wild' problem, in some technical sense (that I don't know). My understanding, which comes entirely from informal conversations with Nat Thiem, is that the state of the art is to lump together representations until you get more nicely behaved objects called supercharacters. As far as I know, the original definitions are due to André (who calls them 'basic characters') and Yan, and there is an explicit supercharacter table.

Let q = p^k be a prime power, n a positive integer, and U a Sylow p-subgroup of GL(n, q). For n = 1, U = 1 is the trivial group and its character table is known (just the identity/principal character). For n = 2, U has order q and is elementary abelian, and its character table is known (just q distinct linear characters). For n = 3, U is similar to an extra-special q-group, and the same calculation works to find its ordinary character table. A sketch: U/[U, U] is elementary abelian of order q^2, giving q^2 (known) linear characters. Since |U| − [U : [U, U]] = q^3 − q^2 = q^2(q−1), and the degrees of the irreducible characters are powers of q whose squares sum to |U|, the remaining characters must have degree q, and there are q−1 of them. Since U is monomial, each of these characters is induced from a (non-principal) linear character on a subgroup of index q (which must be abelian). Hence each such character vanishes off of Z(U) and acts in the expected way on the center. That is, take any non-identity irreducible character θ of Z(U) (there are q−1 of these), and define χ(g) = q⋅θ(g) if g ∈ Z(U) and χ(g) = 0 otherwise. Personally I just execute the definition of induced character and then check that the norm of χ is (χ,χ) = 1, but I think there are cleverer ways. I think prime powers q for higher n work similarly, but I'm not very familiar with even the prime case for n ≥ 4.

An earlier answer tells you what happens when $n=3$. This is a special case of the Heisenberg group (at least in odd characteristic; not sure otherwise), any exposition of which might be illuminating if you're looking for more context. More generally, Boyarchenko has recently shown that all representations of "nilpotent algebra" groups, of which unitriangular groups are a special case, are induced from one-dimensional representations of "nilpotent algebra" subgroups. While this fact is not a magical tool for computing the full character table (a wild problem, as LS says), it's pretty interesting, and might allow you to work out the $n=4$ case if you were interested in such an exercise.

thanks, I know that result. It would be interesting to know what has been done for n=4 before I try ;) I know for example that all character degrees with their multiplicities are known. Can someone, by the way, say what exactly "wild" means?
– trew Jun 19 '11 at 19:24

Since this question is about finite algebra groups, it should be pointed out that the result that says that any irrep of an algebra group over a finite field is induced from a 1-dim'l rep of an algebra subgroup is due to Halasi. Boyarchenko has given a new proof valid also for algebra groups over local fields. – A Stasinski Jun 19 '11 at 22:14

A somewhat relevant question I'm curious about: if we work over $\mathbb{R}$ instead of a finite field, does the problem of determining the irreps of the group remain just as "wild"? – Mark Jun 19 '11 at 22:26

@AStansinski: You're right, I should have mentioned Halasi, but have local fields on the brain. – Jeff Adler Jun 20 '11 at 3:01

@trew, I believe that the sense in which the classification is wild is that discussed in mathoverflow.net/questions/10481/…. – L Spice Jun 20 '11 at 3:58

I think you might enjoy Kirillov's survey article, which describes the orbit method approach in this particular case. Also, if I recall correctly, this article by Ery Arias-Castro, Persi Diaconis and Richard Stanley gives a very readable introduction to the state of the art on the conjugacy classes and characters.

Here is what I found out about the characters when n=4. I don't know if that's interesting, or how to get the actual irreducible characters from it. Maybe someone has an idea:

There are $q^{3}$ linear characters, and from http://fourier.math.uoc.gr/~marial/uni1.published.pdf there are $q^{3}-q$ characters of degree $q$ and $q(q-1)$ characters of degree $q^{2}$.

Now let us look at the characters of $G/Z(G)=1+J/J^{3}$, where $Z(G)$ is the center of the group and $J$ is the algebra of lower triangular matrices with zeros on the diagonal. This is again an algebra group, so we have
$$q^{5}=q^{3}+aq^{2}+bq^{4},$$
where $a$ is the number of degree-$q$ characters of $G/Z(G)$ and $b$ is the number of degree-$q^{2}$ characters.

Assume $\phi$ is a degree-$q^{2}$ character of $G/Z(G)$; then $[\phi_{Z(G/Z(G))},\psi]\neq 0$ for some linear character $\psi$ of $Z(G/Z(G))$. But then, since $\psi$ is $G/Z(G)$-invariant as a character of the center, Clifford theory gives $\phi_{Z(G/Z(G))}=q^{2}\psi$, and then $\psi^{G/Z(G)}=q^{2}\phi+\dots$, which is not possible because $\psi^{G/Z(G)}(1)=q^{3}<q^{4}=q^{2}\phi(1)$.

So we have $b=0$, and $a=q^{3}-q$ from $q^{5}=q^{3}+aq^{2}$. So all the degree-$q$ characters of $G$ are also the degree-$q$ characters of $G/Z(G)$.

Let now $\chi$ be a character of degree $q$ of $G/Z(G)$; then we can choose a linear character $\psi$ of $Z(G/Z(G))$ with $[\chi_{Z(G/Z(G))},\psi]\neq 0$, and again as above $\psi^{G/Z(G)}=q\,\chi+\dots$. Since $(1_{Z(G/Z(G))})^{G/Z(G)}$ has all linear characters as constituents, $\psi^{G/Z(G)}$ can only have degree-$q$ irreducible constituents. So for all the nontrivial linear characters $\psi_k$ of $Z(G/Z(G))$ (there are $q^{2}-1$ of them), we have
$$(\psi_k)^{G/Z(G)}=q\sum\limits_{i=1}^{q}\chi_i.$$

Similarly one can show that for all the nontrivial linear characters $\vartheta_k$ of $Z(G)$ one has
$$(\vartheta_k)^{G}=q^{2}\sum\limits_{i=1}^{q}\phi_i,$$
where the $\phi_i$ are degree-$q^{2}$ characters of $G$.
Forex Fibonacci Indicator Explained

As most of you know, the forex market basically moves in waves: there will be times when the market extends and times when the market retraces. One of the best tools you can use to time these retracements and extensions is the set of forex Fibonacci levels.

So What Exactly Is Fibonacci?

It is a number sequence named after Leonardo of Pisa, and it goes like this: 0, 1, 1, 2, 3, 5, 8, 13 and so on (add the previous two numbers to get the next). However, in trading we are not interested in the sequence itself; we are interested in the Fibonacci ratios that the sequence creates. These are the ratios we use as forex traders.

Below are the retracement ratios:
- 0.236
- 0.382
- 0.500
- 0.618
- 0.764

Below are the extension ratios:
- 1.272
- 1.382
- 1.500
- 1.618

So How Can You Use These Ratios In Trading?

The Fibonacci ratios are in fact used as levels of support and resistance. These are areas where you will SELL or BUY depending on what you see and where you are. Although there are quite a number of ratios given above, the important ones are 0.382, 0.500 and 0.618, as they are usually areas of strong support when the price retraces down and areas of strong resistance when the price retraces up.

Here Is How You Should Plot Your Fibonacci Levels In An Uptrend

Step 1: Using the tool provided by your platform, pick a high point and a low point.
Step 2: Select the levels that you want to display. (We will select 0.382, 0.500, 0.618, 1.272, 1.382, 1.500 and 1.618.)

If you are in a downtrend, all you have to do is switch the points in Step 1.

In the case where you are in an uptrend, you will find that the retracement of the price will usually land on the 0.382, 0.500 or 0.618 level, as these are areas of strong support, and the price will then extend up to continue its uptrend movement. If you ever find the price moving below the 0.382 level, there is a high chance that the trend is reversing.

In the case where you are in a downtrend, the market will retrace upward and will also find its resistance at the 0.382, 0.500 and 0.618 levels. Similar to the uptrend, if the market retraces above the 0.382 level, there is a high chance that the market is reversing.

In my next blog post, I will show you how to trade using a Fibonacci strategy and how you can make use of the 1.272, 1.382 and 1.618 levels. In the meantime, you should try to plot the Fibonacci levels on your chart to see the power of it. (A small worked example of how these levels are computed is given after the comments below.)

5 Responses to "Forex Fibonacci Indicator Explained"

1. I learned a lot from your blogs and videos. Thanks a ton. Cheers, Sid

2. Hi Kelvin, please how do I know where to place these numbers on the chart? Is there a particular line for each of these numbers 0.382, 0.500, etc.?

   Hi Sharon, are you referring to the 0.382, 0.500 and 0.618 numbers? It should be done automatically with the trading tools on your platform.

3. Hi, this is my biggest problem: "Here Is How You Should Plot Your Fibonacci Levels In An Uptrend — Step 1: Using the tool provided by your platform, pick a high point and a low point." So for Step 1, is it yesterday's high point, or the week's high? There are too many high points, so how can I pick a high point, and which time frame? Thanks so much.

   Hi Boyxx, the high point you should use is usually the most recent high. As for the time frame, it depends on your trading style: if you are trading off the 15-minute chart, the high will be from that time frame, and if you are trading off the hourly chart, the high will be from that time frame.
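For readers who want to see the arithmetic behind the levels, here is a minimal illustrative sketch (plain Python, not part of any trading platform) that computes retracement and extension prices from a chosen swing low and swing high. The function name and the example prices are my own assumptions, and note that platforms differ slightly in how they anchor extension levels.

```python
# Minimal sketch: Fibonacci retracement/extension price levels for an uptrend,
# given an assumed swing low and swing high.

RETRACEMENTS = [0.236, 0.382, 0.500, 0.618, 0.764]
EXTENSIONS = [1.272, 1.382, 1.500, 1.618]

def fib_levels(low, high):
    """Return (retracement_levels, extension_levels) for an uptrend move low -> high."""
    move = high - low
    # Retracement levels are measured back down from the high.
    retr = {r: high - r * move for r in RETRACEMENTS}
    # Extension levels project the move beyond the high (conventions vary by platform).
    ext = {e: low + e * move for e in EXTENSIONS}
    return retr, ext

if __name__ == "__main__":
    # Hypothetical EUR/USD swing: low 1.1000, high 1.1500.
    retr, ext = fib_levels(1.1000, 1.1500)
    for ratio, price in retr.items():
        print(f"retracement {ratio:.3f}: {price:.4f}")
    for ratio, price in ext.items():
        print(f"extension   {ratio:.3f}: {price:.4f}")
```

For a downtrend the roles of the low and the high are simply swapped, which mirrors the "switch the Step 1 points" advice above.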
Trinomials: factoring when leading coeff. not 1: 4x^2 + 6x + 9

I am having a problem figuring out trinomials when there is a number added to the first term. I understand the second sign determines both signs below, and the sign on the left determines what they will be. I also believe I figured out the first term. The problem is there are five different options with this problem. Is there a fast way to determine if you are on the right track without having to FOIL all five options?

4x^2 + 6x + 9
(x + 1)(4x + 9)
(x + 9)(4x + 1)
(x + 3)(4x + 3)
(2x + 1)(2x + 9)
(2x + 3)(2x + 3)

Tiger wrote: I am having a problem figuring out trinomials when there is a number added to the first term.

To learn a much easier method, try here. Once you've learned that method, note that (4)(+9) = +36. Since the 36 is positive, the factors you're looking for both have the same sign. Since the middle term's coefficient is +6, the factors have to be positive and add to 6. The factor pairs for 36 are 1 and 36, 2 and 18, 3 and 12, 4 and 9, and 6 and 6. No pair adds to 6! This tells you that the quadratic is not factorable; it is "prime".

Re: Trinomials: factoring when leading coeff. not 1: 4x^2 + 6x + 9

Thank you for your help. The box method is much faster.
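The tutor's quick "ac test" above is easy to automate. Here is a small illustrative sketch (my own, not from the forum) that searches for an integer pair multiplying to a·c and summing to b, and falls back to the discriminant to report why no factorisation over the integers exists.

```python
def ac_test(a, b, c):
    """Return an integer pair (p, q) with p*q == a*c and p + q == b, or None."""
    ac = a * c
    for p in range(-abs(ac), abs(ac) + 1):
        if p == 0:
            continue
        if ac % p == 0 and p + ac // p == b:
            return p, ac // p
    return None

def describe(a, b, c):
    pair = ac_test(a, b, c)
    if pair:
        print(f"{a}x^2 + {b}x + {c}: split the middle term using {pair}")
    else:
        disc = b * b - 4 * a * c
        print(f"{a}x^2 + {b}x + {c}: no integer pair works "
              f"(discriminant = {disc}), so it does not factor over the integers")

describe(4, 6, 9)   # the quadratic from the question: prime
describe(4, 8, 3)   # a factorable example: (2x + 1)(2x + 3)
```

For 4x^2 + 6x + 9 the negative discriminant (-108) also shows the quadratic has no real roots at all, which is a second quick way to see that it is prime over the integers.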
NAG Library
NAG Library Routine Document

1 Purpose
E02AHF determines the coefficients in the Chebyshev series representation of the derivative of a polynomial given in Chebyshev series form.

2 Specification
SUBROUTINE E02AHF (NP1, XMIN, XMAX, A, IA1, LA, PATM1, ADIF, IADIF1, LADIF, IFAIL)
INTEGER NP1, IA1, LA, IADIF1, LADIF, IFAIL
REAL (KIND=nag_wp) XMIN, XMAX, A(LA), PATM1, ADIF(LADIF)

3 Description
E02AHF forms the polynomial which is the derivative of a given polynomial. Both the original polynomial and its derivative are represented in Chebyshev series form. Given the coefficients $a_i$, for $i=0,1,\dots,n$, of a polynomial $p(\bar{x})$ of degree $n$, where
$p(\bar{x}) = \tfrac{1}{2}a_0 + a_1 T_1(\bar{x}) + \dots + a_n T_n(\bar{x}),$
the routine returns the coefficients $\bar{a}_i$, for $i=0,1,\dots,n-1$, of the polynomial $q(\bar{x})$ of degree $n-1$, where
$q(\bar{x}) = \frac{dp(\bar{x})}{dx} = \tfrac{1}{2}\bar{a}_0 + \bar{a}_1 T_1(\bar{x}) + \dots + \bar{a}_{n-1} T_{n-1}(\bar{x}).$
Here $T_j(\bar{x})$ denotes the Chebyshev polynomial of the first kind of degree $j$ with argument $\bar{x}$.

It is assumed that the normalized variable $\bar{x}$ in the interval $[-1,+1]$ was obtained from your original variable $x$ in the interval $[x_{\min}, x_{\max}]$ by the linear transformation
$\bar{x} = \frac{2x - (x_{\max} + x_{\min})}{x_{\max} - x_{\min}}$
and that you require the derivative to be with respect to the variable $x$. If the derivative with respect to $\bar{x}$ is required, set $x_{\max}=1$ and $x_{\min}=-1$.

Values of the derivative can subsequently be computed, from the coefficients obtained, by using a Chebyshev series evaluation routine.

The method employed is that of Chebyshev series (see Chapter 8 of Modern Computing Methods (1961)), modified to obtain the derivative with respect to $x$. Initially setting $\bar{a}_{n+1} = \bar{a}_n = 0$, the routine forms successively
$\bar{a}_{i-1} = \bar{a}_{i+1} + \frac{2}{x_{\max}-x_{\min}}\,2i\,a_i, \quad i=n,n-1,\dots,1.$

4 References
Modern Computing Methods (1961) Chebyshev-series NPL Notes on Applied Science 16 (2nd Edition) HMSO

5 Parameters

1: NP1 – INTEGER Input
On entry: $n+1$, where $n$ is the degree of the given polynomial $p(\bar{x})$. Thus NP1 is the number of coefficients in this polynomial.
Constraint: ${\mathbf{NP1}}\ge 1$.

2: XMIN – REAL (KIND=nag_wp) Input
3: XMAX – REAL (KIND=nag_wp) Input
On entry: the lower and upper end points respectively of the interval $[x_{\min}, x_{\max}]$. The Chebyshev series representation is in terms of the normalized variable $\bar{x}$, where
$\bar{x} = \frac{2x - (x_{\max} + x_{\min})}{x_{\max} - x_{\min}}.$
Constraint: ${\mathbf{XMAX}}>{\mathbf{XMIN}}$.

4: A(LA) – REAL (KIND=nag_wp) array Input
On entry: the Chebyshev coefficients of the polynomial $p(\bar{x})$. Specifically, element $1 + i\times{\mathbf{IA1}}$ of A must contain the coefficient $a_i$, for $i=0,1,\dots,n$. Only these $n+1$ elements will be accessed.
Unchanged on exit, but see ADIF, below.

5: IA1 – INTEGER Input
On entry: the index increment of A. Most frequently the Chebyshev coefficients are stored in adjacent elements of A, and IA1 must be set to 1. However, if, for example, they are stored in ${\mathbf{A}}\left(1\right),{\mathbf{A}}\left(4\right),{\mathbf{A}}\left(7\right),\dots \text{}$, then the value of IA1 must be 3. See also Section 8.
Constraint: ${\mathbf{IA1}}\ge 1$.

6: LA – INTEGER Input
On entry: the dimension of the array A as declared in the (sub)program from which E02AHF is called.
Constraint: ${\mathbf{LA}}\ge 1+\left({\mathbf{NP1}}-1\right)×{\mathbf{IA1}}$.

7: PATM1 – REAL (KIND=nag_wp) Output
On exit: the value of the polynomial $p$ at $\bar{x}=-1$, i.e., at $x=x_{\min}$. If this value is passed to the integration routine with the coefficients of ADIF, then the original polynomial $p(\bar{x})$ is recovered, including its constant coefficient.

8: ADIF(LADIF) – REAL (KIND=nag_wp) array Output
On exit: the Chebyshev coefficients of the derived polynomial $q(\bar{x})$. (The differentiation is with respect to the variable $x$.) Specifically, element $1 + i\times{\mathbf{IADIF1}}$ of ADIF contains the coefficient $\bar{a}_i$, for $i=0,1,\dots,n-1$. Additionally, element $1 + n\times{\mathbf{IADIF1}}$ is set to zero.
A call of the routine may have the array name ADIF the same as A, provided that note is taken of the order in which elements are overwritten when choosing the starting elements and increments IA1 and IADIF1; i.e., the coefficients $a_0,a_1,\dots,a_{i-1}$ must be intact after coefficient $\bar{a}_i$ is stored. In particular, it is possible to overwrite the $a_i$ completely by having ${\mathbf{IA1}}={\mathbf{IADIF1}}$ and the actual arrays for A and ADIF identical.

9: IADIF1 – INTEGER Input
On entry: the index increment of ADIF. Most frequently the Chebyshev coefficients are required in adjacent elements of ADIF, and IADIF1 must be set to 1. However, if, for example, they are to be stored in ${\mathbf{ADIF}}\left(1\right),{\mathbf{ADIF}}\left(4\right),{\mathbf{ADIF}}\left(7\right),\dots \text{}$, then the value of IADIF1 must be 3. See Section 8.
Constraint: ${\mathbf{IADIF1}}\ge 1$.

10: LADIF – INTEGER Input
On entry: the dimension of the array ADIF as declared in the (sub)program from which E02AHF is called.
Constraint: ${\mathbf{LADIF}}\ge 1+\left({\mathbf{NP1}}-1\right)×{\mathbf{IADIF1}}$.

11: IFAIL – INTEGER Input/Output
On entry: IFAIL must be set to 0, $-1\text{ or }1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value 1 is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is 0. When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}=0$ unless the routine detects an error or a warning has been flagged (see Section 6).

6 Error Indicators and Warnings
If on entry ${\mathbf{IFAIL}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Errors or warnings detected by the routine:
${\mathbf{IFAIL}}=1$
On entry, ${\mathbf{NP1}}<1$, or ${\mathbf{XMAX}}\le {\mathbf{XMIN}}$, or ${\mathbf{IA1}}<1$, or ${\mathbf{LA}}\le \left({\mathbf{NP1}}-1\right)×{\mathbf{IA1}}$, or ${\mathbf{IADIF1}}<1$, or ${\mathbf{LADIF}}\le \left({\mathbf{NP1}}-1\right)×{\mathbf{IADIF1}}$.

7 Accuracy
There is always a loss of precision in numerical differentiation, in this case associated with the multiplication by $2i$ in the formula quoted in Section 3.

8 Further Comments
The time taken is approximately proportional to $n+1$.
The increments IA1 and IADIF1 are included as parameters to give a degree of flexibility which, for example, allows a polynomial in two variables to be differentiated with respect to either variable without rearranging the coefficients.

9 Example
Suppose a polynomial has been computed in Chebyshev series form to fit data over an interval $[x_{\min}, x_{\max}]$. The following program evaluates the first and second derivatives of this polynomial at a set of equally spaced points over the interval. (For the purposes of this example, the end points and the Chebyshev coefficients are simply supplied in DATA statements. Normally a program would first read in or generate data and compute the fitted polynomial.)
9.1 Program Text
9.2 Program Data
9.3 Program Results
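To make the recurrence in Section 3 concrete, here is a small illustrative sketch (plain Python, not NAG code; the function name is my own) that applies the same backward recurrence to an array of Chebyshev coefficients. Unit increments are assumed, so it does not reproduce the IA1/IADIF1 striding or the PATM1 output of the real routine.

```python
def cheb_derivative(a, xmin, xmax):
    """Chebyshev coefficients of dp/dx, where
    p(xbar) = a[0]/2 + a[1]*T1(xbar) + ... + a[n]*Tn(xbar)
    and xbar maps x in [xmin, xmax] linearly onto [-1, 1]."""
    n = len(a) - 1                       # degree of p
    adif = [0.0] * (n + 2)               # indices 0..n+1; adif[n] = adif[n+1] = 0
    scale = 2.0 / (xmax - xmin)          # d(xbar)/dx from the linear transformation
    for i in range(n, 0, -1):            # i = n, n-1, ..., 1
        adif[i - 1] = adif[i + 1] + scale * 2 * i * a[i]
    return adif[:n + 1]                  # abar_0 ... abar_{n-1}, plus a trailing zero

# Quick check: p(xbar) = T2(xbar) on [-1, 1] has derivative 4*T1(xbar).
print(cheb_derivative([0.0, 0.0, 1.0], -1.0, 1.0))   # -> [0.0, 4.0, 0.0]
```

The real routine additionally returns the value of p at xbar = -1 in PATM1 so that a later integration can recover the constant term; that bookkeeping is omitted here.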
Hypothesis Testing and Power Calculations for Taxonomic-Based Human Microbiome Data

This paper presents new biostatistical methods for the analysis of microbiome data based on a fully parametric approach using all the data. The Dirichlet-multinomial distribution allows the analyst to calculate power and sample sizes for experimental design, perform tests of hypotheses (e.g., compare microbiomes across groups), and estimate parameters describing microbiome properties. The use of a fully parametric model for these data has the benefit over alternative non-parametric approaches, such as bootstrapping and permutation testing, that this model is able to retain more of the information contained in the data. This paper details the statistical approaches for several tests of hypothesis and power/sample size calculations, and applies them for illustration to taxonomic abundance distribution and rank abundance distribution data using HMP Jumpstart data on 24 subjects for saliva, subgingival, and supragingival samples. Software for running these analyses is available.

Citation: La Rosa PS, Brooks JP, Deych E, Boone EL, Edwards DJ, et al. (2012) Hypothesis Testing and Power Calculations for Taxonomic-Based Human Microbiome Data. PLoS ONE 7(12): e52078. doi:10.1371/journal.pone.0052078

Editor: Ethan P. White, Utah State University, United States of America

Received: April 2, 2012; Accepted: November 13, 2012; Published: December 20, 2012

Copyright: © 2012 La Rosa et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was supported by National Institutes of Health (NIH) Grant U54 HG004968 "Human Microbiome Project Consortium Sequencing of Healthy People", NIH Grant 1UH2AI083265 "The Neonatal Microbiome and Necrotizing Enterocolitis", and St. Louis Children's Hospital and Children Discovery Institute Grant "The St. Louis Neonatal Gut Microbiome Initiative". The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

The NIH Human Microbiome Project (HMP) [1] aims at characterizing, using next generation sequencing technology, the genetic diversity of microbial populations living in and on humans, and at investigating their roles in the functioning of the human body, such as their effects on nutrition and susceptibility to disease [2]. In just a few years, much work has been done to optimize the processes for collecting microbiome samples, processing the DNA, running the sequencing technology, and generating taxonomies/phylogenies from these sequences [3]. These developments will facilitate access to microbiome technology for laboratories of all sizes, enabling application in varied fields of biology, from agriculture to human disease research. However, the biostatistical analysis of metagenomic data is still being developed. Several methods to analyze metagenomic data have been proposed based on exploratory cluster analysis, bootstrap or resampling methods, and application of univariate and non-parametric statistics to subsets of the data [4]–[12].
However, these methods require a significant reduction of information, such as Unifrac [7] which reduces sequence data to pairwise distances, or ignoring correlations and the multivariate structure inherent in microbiome data, such as Metastats [12] which does univariate ‘one-taxa-at-a-time’ analyses. Given the multivariate nature of the metagenomic data, having multivariate analysis tools is becoming important in the microbiome research community. Microbiome researchers are interested in testing multivariate hypotheses concerning the effects of treatments or experimental factors on whole assemblages of bacterial taxa, and in estimating sample sizes for such experiments. These types of analyses are useful for studies aiming at assessing the impact of microbiota on human health and on characterizing the microbial diversity in general. Statistical methods to design and analyze such studies will contribute to the translation of microbiome research from technical (bench) development to clinical (bedside) application. The focus of this work is to develop multivariate methods to test for differences in bacterial taxa composition between groups of metagenomic samples. Multivariate non-parametric methods based on permutation test such as Mantel test [13], [14], Analysis of Similarity (ANOSIM) [15], and NP-Manova [16] are widely used among community ecologists for this purpose. However, although these three methods are attractive when a parametric distribution of the data is unknown, we believe they are not always appropriate for analyzing microbiome data. First, although a hypothesis of group difference can be tested, the results of these tests are difficult to interpret since they cannot quantify the size of the difference between the groups in terms of bacterial taxa composition. Second, permutation tests work under the assumption that the dispersion (variability) of samples within groups is the same in all groups [16], a strong assumption which when violated can lead to inflation of type I error. Third, non-parametric methods are usually less powerful than parametric methods, so when a parametric alternative is available it should be the preferred method to model metagenomic data. In this paper, we present biostatistical methods for the analysis of microbiome data based on a fully multivariate parametric approach. In particular, the parametric model used in this paper is the Dirichlet-Multinomial distribution which has been shown recently to model metagenomic data well. In [17] the authors apply the Dirichlet-multinomial mixture for the probabilistic modeling of microbial metagenomics data, which was used to successfully cluster communities into groups with a similar composition. However, a multivariate hypothesis testing framework to compare populations using this model was not derived. In this work, we apply a different parameterization of Dirichlet-multinomial model to the one presented in [17], which is suitable to perform hypothesis testing across groups based on difference between location (mean comparison) as well as scales (variance comparison/dispersion). Using this model, we develop methods to perform parameter estimation, multivariate hypothesis testing power and sample size calculation. An open source R statistical software package (‘HMP: Hypothesis Testing and Power Calculations for Comparing Metagenomic Samples from HMP’) for fitting these models and tests is available [18]. 
In addition, the methods developed here are not constrained by computational resources and work for any size microbiome dataset (e.g., number of sequence reads and samples). These methods and are also likely applicable to phylogenetic analysis which is currently being investigated. Materials and Methods Ethics Statement Subjects involved in the study provided written informed consent for screening, enrollment and specimen collection. The protocol was reviewed and approved by the Institutional Review Board at Washington University in St. Louis. The data were analyzed without personal identifiers. Research was conducted according to the principles expressed in the Declaration of Helsinki. Human Microbiome Data Human microbiome data analyzed in this paper are from the subgingival, supragingival, and saliva oral sites of 24 subjects (male and female), 18–40 years old, from two geographic regions of the US: Houston, TX and St. Louis, MO [19]. The analyses presented here illustrate how the Dirichlet-multinomial biostatistical analysis is used with real data. Approximately 1×10^5 sequences were obtained from the V1–V3 and V3–V5 variable regions of the 16S ribosomal RNA gene, and collapsed into a single sample. The sequencing was performed at one of four genome sequencing centers (J. Craig Venter Institute, Broad Institute, Human Genome Sequencing Center at Baylor, and Genome Sequencing Center at Washington University in St. Louis). Sequence reads were assigned to bacterial taxa using the Ribosomal Database Project (RDP) classifier [20], which provides a confidence score for each taxonomic classification. Only taxa labels with a confidence score > = 80% were retained in this analysis, and taxa labels below this threshold were relabeled as unknown. Although the choice of an 80% threshold on the confidence score is arbitrary, in [21] it was shown that threshold ranging between 50% to 90% provided an average classification performance of between 77% at the genus level up to 97% at the phylum level. Statistical Model for HMP Data Dirichlet-multinomial model. Consider a set of microbiome samples measured on subjects with distinct taxa at an arbitrary level (e.g., phylum, class, etc.) identified across all samples. Not all taxa need to be found in all samples. Let be the number of reads in subject for taxon k, and let be the taxa count vector obtained from sample . Note that is 0 when taxon k is not in sample . Let be the total number of sequence reads in sample , be the total number of sequence reads for taxon across all samples, and be the total number of sequences over all samples and taxa. Table 1 shows the format of an RDP-mapped microbiome data set. Table 1. Format of a microbiome data set for subjects and distinct taxa at an arbitrary level (e.g., Phylum, Class, etc.). Count data such as this is routinely analyzed using a multinomial distribution which is appropriate when the true frequency of each category (e.g., each taxon in microbiome data) is the same across all samples. This implies that as the number of sample points increases (i.e., number of reads) within each sample, taxa frequencies in all samples converge to the same value (e.g., all samples converge onto 40% taxa A, 25% taxa B,…) with no variability between samples. When the data exhibit overdispersion this convergence result does not occur (i.e., taxa frequencies in all samples do not converge to the same values), and the multinomial model is incorrect [22]. 
Hypothesis testing based on the multinomial model in the presence of overdispersion can result in an increased Type I Error (i.e., saying the microbiome samples are different when they are not) [23]. The Dirichlet-multinomial distribution prevents Type I Error inflation by taking into account the overdispersion in count data in the form displayed in Table 1. It can be characterized by the following two set of parameters [24]: which is a vector of the expected taxa frequencies, and which is a number indicating the amount of overdispersion. Using this parameterization, the Dirichlet-multinomial distribution is defined as [24]:(1) The above parameterization of the Dirichlet-multinomial distribution is suitable to perform hypothesis testing across groups based on difference between locations (comparisons of vectors) as well as scales (comparison of values). Other parameterizations of the Dirichlet-multinomial distribution can be found in [23], [25]. Note that the Dirichlet-multinomial distribution is a generalization of the multinomial model, which results when . When the data variability is larger than what is expected from the multinomial distribution, and the Dirichlet-multinomial distribution provides a better fit to the data. On a side note, if the elements of the taxa count vector, obtained from a sample are ranked (i.e., ), then the Dirichlet-multinomial can be used to model the rank abundance distributions (RAD) vector across samples. This is useful if the analyst is interested in comparing community structure and complexity across microbiome samples and body sites, but not interested in the names of the community members [26]–[28]. If the elements of the taxa count vector, obtained from a sample are not ranked (i.e., has the same taxa label across all samples), then we are modeling the abundance of species keeping their labels. This type of analysis is useful to compare community composition across microbiome samples and body sites, and it is usually referred to as analysis of species composition data [29]. Since we are interested in analyzing different taxonomic levels, we will refer to this as analysis of taxa composition data. The interested reader is referred to [26]–[29] and references therein for more details on the importance and applications of taxa composition data and RAD data analyses to study biodiversity. Estimating and . Referring to the data structure in Table 1 on a set of samples with counts on taxa, we compute the frequency of taxon in sample as the percentage of reads within that sample that belong to that taxa (i.e., ). The elements of the parameter are then computed as the weighted average of the taxa frequency from each sample (i.e., ) with weights given by proportion of the number of reads in sample with respect to the total number of sequence reads (i.e., ). To understand the overdispersion parameter a graphical example is shown. In Figure 1 we have four plots showing the taxa frequencies for each of the five hypothetical samples (dashed lines) with 12 taxa in each sample, and the vector of taxa frequencies (solid line). The plots on the left correspond to taxa frequencies of five samples drawn from a multinomial distribution and the plots on the right correspond to taxa frequencies of five samples drawn from a Dirichlet-multinomial . The top row of plots is for samples with a smaller number of sequence reads, while the bottom row of plots is for samples with a larger number of sequence reads. 
As the number of sequence reads increases the multinomial samples get closer and closer to the , while the Dirichlet-multinomial samples continue to show variability and no convergence onto . This pattern will hold true in the Dirichlet-multinomial distribution no matter how large the number of sequence reads becomes. Figure 1. Description of Dirichlet-multinomial parameters. Intuitive description of the meaning of the overdispersion parameter . The four plots show the taxa frequencies for each of the five hypothetical samples (dashed lines) with 12 taxa in each sample, and the corresponding weighted average across the five samples given by the vector of taxa frequencies (solid line). The plots on the left show the taxa frequencies of samples drawn from a Multinomial distribution and the plots on the right show taxa frequencies of five samples drawn from a Dirichlet Multinomial. The top row of plots is for samples with a smaller number of sequence reads, while the bottom row of plots is for samples with a larger number of sequence reads. As the number of reads increases for the multinomial distribution increases each samples taxa frequencies converge onto the mean, while for the Dirichlet-multinomial an increased number of reads is still associated with the same variability between the individual samples. Given taxa counts vectors for subjects, denoted in vector form as (see Table 1), the set of parameters and can be estimated using either the method of moments [24], [25], [30] or maximum likelihood estimation (MLE) [24] computational procedures. The method of moments estimators of are [25](2) and of is [24], [30](3) where , and , and with . Alternatively, the MLEs and are given by (4) where is the Dirichlet-multinomial likelihood function. The method of moments and MLE estimation procedures perform equally well in terms of statistical properties (e.g., bias, variance) for the number of subjects and reads we routinely encounter in our microbiome studies. These results are available from the authors as a Technical Report. Multinomial versus Dirichlet-multinomial test. Since the presence of overdispersion increases the Type 1 Error if not controlled for, it is good to test if overdispersion is present in a set of microbiome samples. This can be done by formally testing the null hypothesis (implying no overdispersion) versus the alternative hypothesis (implying overdispersion is present). An optimal test-statistic calculated from the raw metagenomic data (see Table 1) for this hypothesis is the following [31]:(5) which approaches a Chi-square distribution with degrees of freedom when the number of sequence reads is large and the same in all samples. In the case that the number of reads varies across samples (such as in microbiomes samples) the test statistics converges to a weighted Chi-square with a modified degree of freedom (see [31] for more details). This is a more complicated formulation and is not presented here, but an approximate solution presented in [31] has been included in the R HMP Package. Note that this hypothesis test establishes that the data are better represented by a Dirichlet-multinomial than a multinomial. However, it does not affirm than Dirichlet-multinomial fits the data best. A goodness-of-fit test statistic for doing this is currently being derived. Hypothesis Testing Comparing to a previously specified microbiome population. 
Consider the problem of comparing microbiome samples to a vector of taxa frequencies gathered in an earlier study or hypothesized by the investigator. This might be done to test if new samples come from e the same or different population from earlier samples, such as comparing a population to the HMP healthy controls. This test is analogous to a one sample t-test in classical statistics, which, in our case, corresponds to assessing whether the vector of taxa frequencies for the new samples, estimated using method of moments or MLE, are equal to the taxa frequencies vector from the previously studied population. The following statistic formally tests the hypothesis versus the alternative that : [32](6) which is a generalized Wald test statistic where is an unbiased estimator of , is the Moore-Penrose generalized inverse, and with a diagonal matrix with diagonal elements given by and , and where is the total number of reads in the samples. The asymptotic null distribution of is a Chi-square with degrees of freedom equal to the rank of the matrix , from which the statistical significance (P value) is calculated for the test. Comparing from two sample sets. Consider the problem of comparing microbiome samples between two groups of subjects (e.g., healthy versus diseased), or two body sites (e.g., oral versus skin). This can be done to test if two sets of microbiome samples are the same or different, such as is in a case-control study. This test is analogous to a two sample t-test in classical statistics, which, in our case, corresponds to evaluate whether the taxa frequencies observed in both groups of metagenomic samples, denoted by and , are equal. The following statistic formally tests the hypothesis versus the alternative that[32], [33](7) which is a generalized Wald-type test statistics where and are the method of moments estimates, required for Wald-type statistics, of and , and is a diagonal matrix given by(8) where is the total number of reads in group m, is the method of moments estimates of the overdispersion parameter of group m, is a diagonal matrix with diagonal elements given by , a weighted average of estimated group means where and is the number of subjects in group m. The asymptotic null distribution of is Chi-square with degrees of freedom equal to , where is the number of taxa, from which the statistical significance (P value) is calculated for the test. Comparing from more than two groups. Consider the problem of comparing microbiome populations between more than two groups of subjects (e.g., healthy, moderately sick, severely sick), or several body sites (e.g., saliva, subgingival and supragingival). This can be done to test if multiple sets of metagenomic samples are the same or different. This test is analogous to an analysis-of-variance test in classical statistics, which in our case corresponds to inquiry whether the taxa frequencies observed in multiple groups of microbiome samples, denoted by , are equal. The following statistic formally tests the hypothesis versus the alternative that for at least one pair of groups [32], [33](9) which is a generalized Wald-type test statistics given by the weighted difference between each estimated group mean, , a weighted average of the estimated group means, with weights , and a diagonal matrix given by The asymptotic null distribution of is Chi-square with degrees of freedom equal to , where J is the number of groups and K is the number of taxa, from which the statistical significance (P value) is calculated for the test. 
Note that there does not yet exist a multiple comparisons test analogous to Tukey’s Least Significance Difference or Duncan’s Range Test [34] routinely used in ANOVA to determine which groups are different when the omnibus rejects the null hypothesis, and is a focus of ongoing work in our lab. Power and Sample Size When designing an experiment the goal is to simultaneously reduce the probability of deciding that the groups are different when they are not (Type I Error), and reduce the probability of deciding the groups are not different when in fact they are (Type II Error). From convention we often set the Type I Error = 0.05 (significance or P value) and the Type II Error = 0.2 resulting in power = 0.8, or 80% (power = 1– Type II error). The sample size needed to achieve these error rates depend on the probability model parameters, the hypothesis being tested, and the effect size indicating how different the groups are. Power can be calculated in the R package for each of the four hypothesis tests discussed above, but for clarity we will only discuss comparison of across two groups. Assume that the model parameters and are known for each group, and we are interested in formally testing the hypothesis versus the alternative that. Intuitively, the effect size is defined by how far apart the vector of taxa frequencies and are from each other. There are several ways to quantify this. For example, a modified Cramer’s criterion can be used which ranges from 0, denoting the taxa frequencies are the same in both groups, to 1, denoting the taxa frequencies are maximally different (see Appendix S1 for more details). In Figure 2 we show examples of hypothetical data where the effect size is small ( = 0.07) and large ( = 0.65) across two groups. It would be expected that more samples will be needed to test the 2 group comparison hypotheses for the small effect size than it would be for the large effect size parameters. Figure 2. Definition of effect size. Illustration of a small and a large effect size when comparing two groups. Power and sample size calculations are part of the R HMP package for the hypotheses presented in this paper [18]. The technical details of the mathematics for doing this are beyond the scope of this paper. We therefore have included for interested readers the mathematics for power and sample estimation in the Technical Report available from the authors. Performance Properties of these Tests Statistical methods need to be tested for their performance to ensure the Type I and II error, P values, power and sample size calculations, and other results from their application are correct. This can be done analytically and proven mathematically, as well as through comprehensive Monte Carlo simulation studies. We chose the latter approach to confirm that these statistics behave as expected and present the results in the Technical Report available from the authors. We elected not to include these results in detail in this paper since it would detract from the primary goal of presenting statistical methods for applied analysis of metagenomic data. However, we briefly discuss those results which showed uniformly that these methods and software are valid. We simulated Dirichlet-multinomial data for a variety of sample sizes, number of taxa, overdispersion, and effect size, and ran hypothesis tests for one sample, two sample and multiple sample comparisons. These simulations showed the Type I and II Error rates were as expected. 
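As an aside for readers who want to reproduce this kind of Monte Carlo check: a Dirichlet-multinomial sample with mean vector π and overdispersion θ can be generated by drawing a subject-specific Dirichlet probability vector with concentration parameters π_k(1−θ)/θ and then drawing multinomial counts from it. The sketch below is my own illustration under that standard identity, not the authors' simulation code, and the specific π, θ and read counts are arbitrary assumptions.

```python
import numpy as np

def simulate_dm(pi, theta, n_subjects, reads_per_subject, seed=None):
    """Draw an (n_subjects x K) matrix of Dirichlet-multinomial taxa counts.

    pi: expected taxa frequencies (length K, summing to 1)
    theta: overdispersion in (0, 1); theta -> 0 recovers the plain multinomial
    """
    rng = np.random.default_rng(seed)
    pi = np.asarray(pi, dtype=float)
    alpha = pi * (1.0 - theta) / theta            # Dirichlet concentration parameters
    counts = np.empty((n_subjects, pi.size), dtype=int)
    for i in range(n_subjects):
        p_i = rng.dirichlet(alpha)                # subject-specific taxa probabilities
        counts[i] = rng.multinomial(reads_per_subject, p_i)
    return counts

# Toy example: 24 subjects, 4 taxa, 10,000 reads each, moderate overdispersion.
pi = [0.5, 0.3, 0.15, 0.05]
x = simulate_dm(pi, theta=0.05, n_subjects=24, reads_per_subject=10_000, seed=0)
freqs = x / x.sum(axis=1, keepdims=True)
print("between-subject SD of taxa frequencies:", freqs.std(axis=0).round(3))
```

Feeding such simulated tables through a chosen test at a fixed significance level and counting rejections is the basic recipe behind the Type I error and power checks described above.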
We performed simulated power and sample size calculations and obtained the correct results and show, as expected, the effect size, overdispersion, and sample size influence power. As the effect size increases, overdispersion decreases, or sample size increases, the power goes up. Of particular interest is that in some examples the number of reads also impacts power, with power increasing as the number of reads increases, holding effect size, overdispersion, and sample size constant. This appears to be related to the value of the overdispersion parameter, where for smaller overdispersion the number of reads has the greatest impact on power. Recall that as overdispersion goes to 0, the data converge to a multinomial distribution where the number of reads is known to have significant impact on power. The Technical Report also presents several other tests of hypothesis that we did not include here since they seem less likely relevant to researchers. This includes comparing the overdispersion parameter across groups, and comparing distributions defined simultaneously by both and . Results of Taxa Composition Data Analysis In this section, we present results of analyses of metagenomic data from the 24 samples described above for saliva, subgingival and supragingival plaques analyzing the data at the class level. In our experience with metagenomic data analysis two types of analyses are routinely done. When the investigator is interested in community composition (what bacteria are there) the analysis proceeds with taxa labels preserved. In ecology this is usually known as analysis of species composition data [29], and here we will refer to this as taxa-composition data analysis. Alternatively, when the investigator is interested in community structure (what are the high level descriptions of the samples such as richness and diversity) the analysis proceeds without the taxa labels. In ecology this is called as analysis of rank abundance distribution (RAD) data [26]–[28]. The methods presented in this paper can be applied to both of these situations as illustrated below. In this section the samples are analyzed using a taxa-composition data analysis approach, and in the following section the same analyses are applied using a RAD data analysis approach. It should be noted that for these examples, when the taxa labels are ignored there is a loss of information in the data and the subsequent test of hypotheses show a decrease in power. One technical issue for the applied data analysis involves the presence of rare taxa. The test statistics proposed are based on the Chi-square distribution and the calculation of the P value is more precise when there are not many rare taxa. This is related to the technical issue of the convergence rate of the test statistic onto its Chi-square distribution. To improve the convergence rates of these test statistics all taxa frequencies whose weighted average across all groups is smaller than 1% are combined into a single taxon labeled as ‘Pooled taxa’. An illustration of the taxa composition data to be analyzed is shown in Figure 3 a) where we see that taxa from Mollicutes to Deinococci have low prevalence and found that their weighted average across both groups was less than 1%. In Figure 3 b) the same data are shown where these rare taxa are pooled, which are the data analyzed in the rest of this section. An alternative approach would be to drop the rare taxa. Figure 3. Comparison of two metagenomic groups using a taxa composition data analysis approach. 
Taxa frequency means at Class level obtained from subgingival plaque samples (blue curve) and from supragingival plaques samples (red curve): a) The mean of all taxa frequencies found in each group, b) The mean of taxa frequencies whose weighted average across both groups is larger than 1%. The remaining taxa are pooled into an additional taxon labeled as ‘Pooled taxa’. Multinomial versus Dirichlet-multinomial Test Since overdispersion increases the Type 1 Error it is important to test if overdispersion is present in a set of microbiome samples. To do this we use Equation 5 to formally test the null hypothesis (implying no overdispersion) versus the alternative hypothesis (implying overdispersion is present). In both subgingival and supragingival plaque samples, the null hypothesis that the data come from a multinomial distribution was rejected in favor of the Dirichlet-multinomial alternative. The overdispersion parameters, using method of moments (see Equation 2), are estimated to be greater than 0 and equal 0.047 for subgingival (T = 18,968; df = 11; P<0.00001), and 0.054 for supragingival (T = 18,953; df = 11; P<0.00001). Comparing from Two Sample Sets Consider the problem of comparing microbiome samples between the subgingival and supragingival samples to test if two sets of microbiome samples are different, such as is done in a case-control study. The application of Equation 7 hypothesis test to compare taxa frequencies (see Figure 3 b) versus corresponding to subgingiva and supragingiva is significant ( = 25.64; df = 11; P = 0.007). From this it is concluded that the null hypothesis that both taxa frequencies are the same is rejected in favor of the alternative that they are different. Power and Sample Size Calculation Table 2 shows a power analysis to compare the taxa frequencies of the subgingival plaque versus the supragingival plaque populations from Figure 3b (effect size ) using 1% and 5% significance levels. To calculate power requires the Dirichlet-multinomial parameters, significance level, and specified number of subjects and reads to be defined. In this example the Dirichlet-multinomial parameters are obtained from the subgingival and supragingival 24 sample dataset, the significance levels based on conventional P-values, and a range of subject numbers and reads that could reasonably be obtained in the typical experimental setting. Table 2. Power calculation as a function of number of sequence reads and sample size for the comparison of from the subgingiva and supragingiva populations, using as a reference the taxa frequencies obtained from the 24 samples, and 1% and 5% significant levels. Table 2 entries are the power achieved for the specified significance level, number of subjects, and number of reads. For example, for significance level = 1%, number of subjects = 15, and number of reads per subject = 10,000, the study has 56% power to detect the effect size observed in the data. Note that the power is not impacted by increasing the number of reads. In this paper we show the results out to 1,000,000 expected reads per sample, but have conducted experiments running the number of reads out to 10,000,000 and reached the same conclusion. The likely cause of this is that increasing the number of reads does not impact the standard error around , while increasing the number of subjects does. However, in experiments based on unlabeled taxa (i.e., rank abundance distributions) the number of reads does impact power. 
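Before moving on, the two preprocessing conventions used throughout these results — the weighted-average taxa frequencies and the pooling of taxa whose weighted average falls below 1% — are simple to state in code. The sketch below is an illustrative reimplementation under my own naming and is not the HMP R package source; for a between-group comparison the pooling rule would be applied to the combined table so that both groups keep the same columns.

```python
import numpy as np

def weighted_taxa_freqs(counts):
    """Weighted average frequency of each taxon.

    counts: (subjects x taxa) array of reads. Weighting each subject's
    frequencies by its share of the total reads reduces to column totals
    divided by the grand total."""
    counts = np.asarray(counts, dtype=float)
    return counts.sum(axis=0) / counts.sum()

def pool_rare_taxa(counts, names, threshold=0.01):
    """Merge taxa whose weighted average frequency is below `threshold`
    into a single 'Pooled taxa' column (all zeros if nothing is rare)."""
    counts = np.asarray(counts, dtype=float)
    keep = weighted_taxa_freqs(counts) >= threshold
    pooled = counts[:, ~keep].sum(axis=1, keepdims=True)
    new_counts = np.hstack([counts[:, keep], pooled])
    new_names = [n for n, k in zip(names, keep) if k] + ["Pooled taxa"]
    return new_counts, new_names
```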
Comparing from Three Sample Sets It may be of interest to an investigator to compare three or more groups. Here, for purpose of illustration, we compare the saliva, subgingival and supragingival plaque populations from our 24 subjects. Figure 4 a) shows the taxa frequency to be analyzed where we see that taxa including Deinococci up to Planctomycetacia have very low prevalence. Following the same rationale as for the two sample comparison above, rare taxa were pooled, and the data analyzed is presented in Figure 4 b). It can be seen that the taxa here are the same as used in the comparison of subgingival versus supragingival plaque samples alone. To test if the saliva samples also are better fit to a Dirichlet-multinomial versus multinomial distribution we tested the hypothesis versus and conclude that in fact the Dirichlet-multinomial is the better distribution (P<0.00001). Figure 4. Comparison of three metagenomic groups using a taxa composition data analysis approach. Taxa frequencies at class level obtained from saliva (black line), subgingival plaque (blue line), and from supragingival plaques samples (red line): a) The mean of all taxa frequencies found in each group, b) the mean of taxa frequencies whose weighted average across both groups is larger than 1%. The remaining taxa are pooled into an additional taxon labeled as ‘Pooled taxa’. The application of Equation 9 hypothesis test to compare taxa frequencies (see Figure 4) versus versus corresponding to subgingiva, supragingiva, and saliva is significant ( = 258.158; df = 22; P <0.00001). From this it is concluded that the null hypothesis that taxa frequencies across the three groups are the same is rejected in favor of the alternative that they are different. The next step in this approach to hypothesis testing is to determine which of the groups are different. In the analysis-of-variance literature this is known as multiple comparisons. A simple approach calculates all pairwise P values and adjusts for the number of tests using a Bonferroni adjustment. In Table 3, we show the p-values (unadjusted and adjusted using Bonferroni) for all pairwise comparisons between saliva, supragingiva and subgingiva samples. This suggests that all three sample sets are statistically different. Table 3. Unadjusted and Bonferroni adjusted p-values for all pairwise comparisons between saliva, supragingiva and subgingiva samples. Result of Rank Abundance Distributions Data Analysis Here we present the same analyses as in the previous example except using rank abundance distributions (RAD) which is of interest when the focus is on community structure (e.g., richness and diversity). Many analysts reduce each sample to a single measure of richness or diversity and then compare these values across groups. However, this results in a significant loss of information which should be avoided when analyzing data. The analyses presented here preserve most of the information (except taxa labels) which should prove to be more valuable for many situations. To illustrate, the RAD data to be analyzed in the following is shown in Figure 5 a) where we see that ranked taxa from 11^th to 19^th have low prevalence. In Figure 5 b) the same data is shown where these rare ranked taxa are pooled, which are the data analyzed in the rest of this section. Figure 5. Comparison of two metagenomic groups using rank abundance distribution data. 
Ranked taxa frequencies mean at class level obtained from subgingival plaque samples (blue curve) and from supragingival plaques samples (red curve): a) The means of all ranked taxa frequencies found in each group; b) The mean of ranked taxa frequencies whose weighted average across both groups is larger than 1%. The remaining taxa are pooled into an additional taxon labeled as ‘Pooled taxa’. Multinomial versus Dirichlet-multinomial Test In both subgingival and supragingival plaque samples, the null hypothesis that the data come from a multinomial distribution was rejected in favor of the Dirichlet-multinomial alternative. The overdispersion parameters, using method of moments (Equation 2), are estimated to be greater than 0 and equal 0.008 for subgingival (T normalized = 69945; df = 215; P<0.00001), and 0.02 for supragingival (T normalized = 141301; df = 216; P<0.00001). Note that this hypothesis test establishes that the data are better represented by a Dirichlet-multinomial than a multinomial. Comparing from Two Sample Sets The application of the hypothesis test to compare ranked taxa frequencies (see Figure 5 b) versus corresponding to subgingiva and supragingiva is not significant ( = 11.08; df = 10; P = 0.29). From this it is concluded that there is not enough evidence to reject the null hypothesis that ranked taxa frequencies are the same. Power and Sample Size Calculation Table 4 shows a power analysis to compare the taxa frequencies of the subgingival plaque versus the supragingival plaque populations from Figure 5 b) (effect size ) using 1% and 5% significant levels, respectively. To calculate power requires the DM parameters, significance level, and specified number of subjects and reads be defined. In this example the Dirichlet-multinomial parameters are obtained from the subgingival and supragingival 24 sample dataset, the significance levels set based on conventional P-values, and a range of subject number and reads that could reasonably be obtained in the typical experimental setting. The table entries are the power achieved for the specified significance level, number of subjects, and number of reads. For example, for significance level = 5%, number of subjects = 15, and number of reads = 10,000, the study has 40% power to detect the effect size observed in the data. Note that compared to the power calculations for the taxa composition data analysis (Table 2) the power is lower for the RAD comparison due to the smaller effect size observed in the data with this analysis. Table 4. Power calculation as a function of number of sequence reads and sample size for the comparison of ranked from the subgingiva and supragingiva populations, using as a reference the taxa frequencies obtained from the 24 samples, and 1% and 5% significant levels. Comparing from Three Sample Sets Figure 6 a) shows the ranked taxa frequency to be analyzed where we see that ranked taxa between the 11^th to the 22^nd most abundant taxa have very low prevalence. Following the same rationale as for the two sample comparison above, ranked rare taxa were pooled, and the data analyzed is presented in Figure 6 b). It can be seen that the taxa here are the same as used in the comparison of subgingival vs supragingival plaque samples alone. To test if the saliva samples also are better fit to a Dirichlet-multinomial versus multinomial distribution we tested the hypothesis versus and conclude that in fact the Dirichlet-multinomial is the better distribution (P<0.00001). Figure 6. 
Comparison of three metagenomic groups using rank abundance distribution data. Ranked taxa frequencies mean at class level obtained from subgingival plaque samples (blue curve) and from supragingival plaques samples (red curve): a) The means of all ranked taxa frequencies found in each group; b) The mean of ranked taxa frequencies whose weighted average across both groups is larger than 1%. The remaining taxa are pooled into an additional taxon labeled as ‘Pooled taxa’. The application of Equation 9 hypothesis test to compare taxa frequencies (see Figure 6 b)) versus versus corresponding to subgingiva, supragingiva, and saliva is not significant (. From this we concluded that there is not enough evidence to reject the null hypothesis that ranked taxa frequencies across the three groups are the same. Since the test of the three groups does not reject the null hypothesis the multiple comparison tests is not applicable. The major contribution of this work is to begin formulating a biostatistical foundation for the analysis of metagenomic data. The Dirichlet-multinomial model is designed for count data and accounts for over dispersion, which if not adjusted for will result in increased Type I Error. The model gives rise to a broad class of statistical methods, including one sample and multi-sample tests of hypothesis, as well as calculating sample size and power estimates for experimental design. It also provides a set of parameters that can be interpreted analogous to the mean and variance of the bacterial diversity in a population. Computationally this model can accommodate large datasets consisting of multiple samples and essentially unlimited number of reads. For illustration of these methods we presented results of analyses and sample size/power calculations for three body sites for normal healthy individuals collected through the Human Microbiome Project. Several issues that were referred to in the paper are discussed here. First, the performance of statistical tests depends on their behaving as predicted by statistical theory. For example, a test statistic under the null hypothesis should result in 5% of the tests being significant at the P< = 0.05 level. This and other measures of statistical performance have been confirmed through extensive simulation studies and are in a Technical Report available from the authors. Second, the Dirichlet-multinomial model can be applied to taxa labeled and unlabeled data corresponding to Taxa composition and Rank Abundance Distribution (RAD) data analyses. In ecology this represents two alternative strategies focused on comparing individual species or diversity (RAD) across communities. The tools proposed here have general use in ecology, but we focused only on metagenomics in this paper. We leave it for others with in- depth experience in ecology to explain how these analyses can best be used in that field [26]–[29]. Third, in statistics a parametric model is usually preferred over a non-parametric models (e.g., permutation, bootstrapping) when available. In almost all cases parametric models are more efficient and require less data to achieve a given level of power. They also retain more information contained in the data (see the Introduction Section for a detailed discussion). Also, unlike non-parametric methods, our test statistics are appropriate when comparing groups that do not have the same within group variability, a common occurrence in microbiome data. 
One of the potential limitations of our method is the incorporation of rare taxa in the analysis. The performance of the proposed test statistics depends on their convergence to the Chi-square distribution, which requires that each taxon have a minimum frequency across subjects. Though the proposed approach of "pooling rare taxa" can be seen as a loss of information, it currently stands as a practical approach which avoids giving importance to artificial rare taxa arising from noise in the data. The analysis of rare taxa in metagenomic data is an ongoing topic of discussion and study; it is difficult to distinguish rare taxa from noise due to sequencing and classification errors, which is not the focus of these methods.

Several methods will be developed extending the Dirichlet-multinomial model for more complex metagenomic research designs and datasets. First, when parameters are shown to be different across groups, it is important to determine which taxa or ranked taxa are causing this difference. To avoid multiple testing problems from doing all univariate comparisons, methods analogous to linear contrasts from analysis-of-variance are being investigated. Second, application of the Dirichlet-multinomial to repeated measures, or mixed models analysis, can be used to monitor changes in the microbiome over time. Third, regression analysis adjusting for covariates can model changes in the microbiome, such as how diet, age, or gender affects the stool microbiome. These three topics are current areas of research by the authors.

Supporting Information

Appendix S1. Measure of effect size. Introduction of a modified Cramer's φ criterion such that it does not depend on the sample size when the test statistic takes into account the overdispersion.

Author Contributions

Conceived and designed the experiments: GW ES. Performed the experiments: GW ES. Analyzed the data: PSL ED WDS. Wrote the paper: PSL WDS. Design Statistical Methods: PSL JPB ELB DJE QW WDS. Design Software: PSL ED WDS.
Here's the question you clicked on: what would the graph look like for this soe?
2x + 3y = –3
x – y = –4
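The question above was left unanswered on the page; as a quick illustration (my own working, not from the original thread), the graph is just two straight lines, and a few lines of code find where they cross.

```python
import numpy as np

# The system 2x + 3y = -3, x - y = -4 in matrix form A @ [x, y] = b.
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
b = np.array([-3.0, -4.0])

x, y = np.linalg.solve(A, b)
print(f"lines intersect at ({x:g}, {y:g})")   # (-3, 1)

# Slope-intercept forms, handy for sketching the graph by hand:
#   2x + 3y = -3  ->  y = -(2/3)x - 1
#   x  -  y = -4  ->  y = x + 4
```

So the graph is two lines, one falling with slope -2/3 and one rising with slope 1, crossing at the single point (-3, 1).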
Software Rendering

At the end of May 2007 I began programming a software rasterizer, which I ended up using in my GP2X demo entered in the GBAX 2007 Coding Competition. You can check out the finished demo at pouet.net. Check out the screenshot below:

My first implementation was based on Nicolas Capens' article about advanced rasterization. In his article he describes a block-based approach that uses edge equations to rasterize triangles. I implemented this approach using blocks of 8x8 pixels. I do the perspective correction at the corners of the blocks, and within each block I use linear interpolation for the varying parameters. It works quite well, and the block-based approach also allows a hierarchical depth test, which could speed up scenes with a high depth complexity. The rasterizer is coded entirely with fixed-point math (as the GP2X does not have floating-point hardware) and can interpolate an arbitrary number of integer attributes across the triangle. I also programmed a vertex transformation and clipping pipeline. In the end I am now able to use vertex and pixel shaders coded in C++ with my rasterizer.

In my GP2X demo some scenes were quite slow. The block-based approach seemed to have significant performance problems, especially with small triangles, since at least a whole 8x8 block of pixels has to be processed for every triangle, no matter how small.

I reimplemented the triangle rasterizer to use a scanline-based approach, just the way Chris Hecker describes it in his articles on Perspective Texture Mapping. First I implemented a version which only did affine (perspective-incorrect) interpolation, and it turned out to be 50-100% faster than the block-based approach. But this version also had a lot of rendering errors since the perspective correction was missing. I then implemented the subdividing affine method as described in Chris Hecker's articles (so that I didn't need to do the perspective division for each pixel), which worked quite well and gave acceptable results. It naturally was slower than the affine version but still 30-60% faster than the block-based approach. Now in the final version the programmer can enable/disable perspective correction depending on what kind of geometry he wants to render (e.g. billboards don't need perspective correction as they face the camera). This gives the best performance.

Below you can find the source code. If you have a hard time finding out how to use it, just drop me a line. It's released under the BSD license, so you are free to use this in a commercial product. In this case, although you are not required to, I would like you to give me a donation.

Trenkis Software Renderer Resources:
- Source code (Subdividing affine, scanline based)
- Source code of older versions (block based etc.)
- F341 Demo (Precompiled for gp2x and win32; includes source code)
- Usage example pack for the renderer

You might also want to check out my OpenGL ES-CL 1.0 implementation Fusion2X based on this software renderer.
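For readers unfamiliar with the "edge equation" idea behind the block-based rasterizer, here is a small conceptual sketch in Python (the renderer described above is fixed-point C++; this is only my own illustration of the coverage test, without the 8x8 block traversal, hierarchical depth test or attribute interpolation).

```python
def edge(ax, ay, bx, by, px, py):
    """Edge function: positive when (px, py) lies to the left of the directed
    edge a->b in a y-up coordinate system (sign flips if y points down)."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(v0, v1, v2, plot):
    """Fill a counter-clockwise triangle by testing each pixel centre against all three edges."""
    xs, ys = zip(v0, v1, v2)
    min_x, max_x = int(min(xs)), int(max(xs)) + 1
    min_y, max_y = int(min(ys)), int(max(ys)) + 1
    for y in range(min_y, max_y):
        for x in range(min_x, max_x):
            px, py = x + 0.5, y + 0.5            # sample at the pixel centre
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside (or on) all three edges
                plot(x, y)

# Tiny demo: collect the covered pixels of a small triangle.
covered = []
rasterize((1.0, 1.0), (8.0, 2.0), (3.0, 7.0), lambda x, y: covered.append((x, y)))
print(len(covered), "pixels covered")
```

Capens' block-based scheme evaluates these same edge functions only at the corners of 8x8 blocks to trivially accept or reject whole blocks, which is where both its speed and the small-triangle overhead mentioned above come from.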
{"url":"http://www.trenki.net/content/view/18/38/","timestamp":"2014-04-20T00:38:19Z","content_type":null,"content_length":"16863","record_id":"<urn:uuid:e082fa39-b41d-4c76-a766-a3c13de9e262>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Civil-Comp Press - Publications - ISBN 0-948749-77-6 - Contents Page Civil-Comp Press Computational, Engineering & Technology Conferences and Publications PROCEEDINGS OF THE EIGHTH INTERNATIONAL CONFERENCE ON CIVIL AND STRUCTURAL ENGINEERING COMPUTING Edited by: B.H.V. Topping click on a paper title to read the abstract or obtain the full-text paper from I INTERNET APPLICATIONS IN CIVIL AND STRUCTURAL ENGINEERING 1 Exchanging Geotechnical Data through the World Wide Web D.G. Toll and A.C. Cubitt 2 E-Commerce in Construction: Barriers and Enablers K. Ruikar, C.J. Anumba, P.M. Carrillo and G. Stevenson II INFORMATION TECHNOLOGY IN CIVIL AND STRUCTURAL ENGINEERING 3 The IT Concerns of Small and Medium Sized Construction Businesses in the Information Age J.H.M. Tah, V. Carr and S. Hoile 4 A Quality Management Tool for a Public-Private Partnership Highway J. Rankin, A.J. Christian and B. Lundrigan 5 Information Management in a Decision Support System for Pavement A.P. Chassiakos, D.D. Theodorakopoulos and I.D. Manariotis 6 A Platform for the Integration of Civil Engineering Services and Tools Z. Turk and R.J. Scherer III CONSTRUCTION MANAGEMENT AND CONSTRUCTION ENGINEERING 7 Managing Geotechnics in a Mega-Project: The Egnatia Motorway Case in S. Lambropoulos and E. Sakoubenta 8 The Implementation of a Multi-Agent System for Construction Claims Z. Ren, C.J. Anumba and O.O. Ugwu 9 Component State Model and its Application in Constructablity Analysis of Construction Schedules D.K.H. Chua and Y. Song IV COMPUTER AIDED DESIGN 10 Basic Study for Creating 3D Model Spaces from 2D Digital Images with Photogrammetric Technology S. Tanaka, H. Furuta, E. Kitagawa, H. Noda and H. Muraki 11 Data Extraction for Design and Construction Integration: An Application in Petrochemical Industry K. Lueprasert and L. Meepradit 12 The Ideal Method of Using Digital Data in Highway Construction Works: From Design to Administration M. Yamasaki, T. Hongou and Y. Chiba V SOFTWARE DEVELOPMENT 13 Efficient Object-Oriented Implementation of Boundary Element Software I.A. Jones, P. Wang, A.A. Becker, D. Chen and T.H. Hyde VI DATA ACQUISITION, MONITORING AND CONTROL 14 Practical Application of an Advanced Real Time Structural Monitoring A. Goodier and S.L. Matthews VII COMPUTERS IN STRUCTURAL ANALYSIS 15 Efficient Graph Theoretical Methods for Examining the Rigidity of Planar Trusses A. Kaveh and F.N. Ehsani VIII COMPUTERS IN STRUCTURAL ENGINEERING DESIGN 16 Three-Dimensional Structural Modelling of Multi-Storey Buildings for Obtaining Moment Envelopes T.M. Nahhas and M.H. Imam 17 Behaviour of Pre-Damaged T-Shaped Reinforced Concrete Beams M.B. Emara and A.G. Sherif 18 Behaviour of Steel-Concrete Composite Beam with Flexible Shear Stud H.G. Kwak and Y.J. Seo 19 The Effects of Infill Walls on the Behaviour of Frames under Horizontal A. Karaduman, Z. Polat and M.Y. Kaltakci 20 The Influence of Repaired Slabs in Coupled Shear Walls A. Nadjai and D. Johnson 21 Segmentation of Structures into Planar Elements: An Error-tolerant Computation Method S.P. Manikandan and B. Emmanuel 22 A Study of the Effect of Crack Propagation and Fracturing on Rock Slope Stability Analysis by Discontinuous Deformation Analysis R. Naderi IX ANALYSIS AND DESIGN OF TENSION STRUCTURES 23 Development of an Advanced System for Analysis and Design of Tensile T.H. Zhang and S.L. McCabe 24 A Cable and Membrane Pseudo Stiffness Implementation J. Muylle and B.H.V. 
Topping X STRUCTURAL ANALYSIS: BUCKLING &AMP; STABILITY COMPUTATIONS 25 The Influence of Column Base Connectivity on the Carrying Capacity of H.H. Lau, M.H.R. Godley and R.G. Beale 26 A Lateral Torsion Buckling Analysis of Elastic Beam under Axial Force and Bending Moment K.M. Hsiao and W.Y. Lin 27 A New Beam Finite Element for Tapered Members N. Boissonnade and J.P. Muzeau 28 Dynamic Buckling of Columns Considering Shear Deformation and Rotary M. Ghorashi 29 Buckling Behaviour of FRP Thin-Walled Lipped Channel Members N. Silvestre and D. Camotim 30 Elastic Flexural-Torsional Buckling and Postbuckling of Arches subjected to a Central Concentrated Load Y.L. Pi and M.A. Bradford 31 Lateral Buckling Analysis of Thin-walled Composite I-section Beams J. Lee and S. Lee 32 Optimal Design of Stiffened Plates for Buckling under in-plane Forces and Bending Moments M. Ghorashi, A. Askarian and M. Gashtasby XI STRUCTURAL ANALYSIS: DYNAMIC COMPUTATIONS 33 Impact Envelope Formula of Simple Beams due to High Speed Trains J.D. Yau and Y.B. Yang 34 Dynamical Analysis of Composite Steel Decks Floors Subjected to Rhythmic Load Actions J.G.S. da Silva, F.J. da C.P. Soeiro, P.C.G. da S. Vellasco, S.A.L. de Andrade and R. Werneck 35 A New Method for Dynamic Modelling of a Suspension Bridge for Aerodynamic Instability C.P. Pagwiwoko, M.A.M. Said and C.K. Keong XII EARTHQUAKE AND SEISMIC COMPUTATIONS 36 Seismic Hazard Assessment in The State of Kuwait A.W. Sadek 37 Study of the Dynamic and Equivalent Static Analysis Methods for Seismic Design of Bridges: Ranges of Applicability, Effect of Modelling Assumptions, and Support Conditions M.M. Bakhoum and S. Athanasious 38 Design Optimization of Seismic-Resistant Steel Frames H. Moharrami and S.A. Alavinasab 39 A Review of Procedures used for the Correction of Seismic data N.A. Alexander, A.A. Chanerley and N. Goorvadoo 40 Distress and Restoration of an Old Building damaged by the 07.09.99 Athens Earthquake I.D. Lefas and V.N. Georgiannou 41 Modelling of Continuous Slab-Girder Bridges for Seismic Analysis S. Maleki 42 Non-linear Finite Element Analysis of Slab Effects in Reinforced Concrete Structures Subjected to Earthquake Loads M.B. Emara and H.M. Hosny XIII ANALYSIS AND DESIGN OF STEEL STRUCTURES 43 Non-linear Analysis of Steel I-Girders Curved in-plan under a Uniformly Distributed Load M.A. Bradford, B. Uy and Y.L. Pi 44 Practical Non-linear Analysis for 3D Semi-rigid Frames S.E. Kim 45 Collapse Load of Optimally Designed Unbraced Flexibly Connected Steel E.S. Kameshki 46 Optimum Design of Pitched Roof Steel Frames with Haunched Rafters by Genetic Algorithm M.P. Saka 47 Modelling of the Structural Fire Response of Steel Framed Buildings A.Y. Elghazouli and B.A. Izzuddin XIV ANALYSIS AND DESIGN OF CONNECTIONS AND FASTENERS 48 Design of Bolted Joints in Pressure Vessels by Dynamic Modelling M. Ghorashi 49 Finite Element Modelling of Threaded Fastener Loosening due to Dynamic M. Holland and D. Tran XV ANALYSIS AND DESIGN OF REINFORCED CONCRETE STRUCTURES 50 An Experimental Study on the Behaviour of Normal and Lightweight Reinforced Concrete Corbels and Analysis with Truss/Strut-and-Tie Model M.Y. Kaltakci and G. Yavuz 51 Cracking Analysis of Reinforced Concrete Tension Members using Polynomial Strain Distribution Function H.G. Kwak and J.Y. Song 52 Thermal Load Produced Part-Through Cracks in Cement Mortar Layer on Foamed Concrete System Floors J.H.J. Kim 53 Efficient Procedure for Stress Integration in Concrete Sections using a Gauss-Legendre Quadrature J.L. 
Bonet, P.F. Miguel, M.A. Fernandez and M.L. Romero 54 Effects of Torsion on the Flexural Stiffness of the Rectangular Reinforced Concrete Sections M.J. Fadaee and M. Banihashemi 55 Derivation and Parametric Study of a Damaged Reinforced Concrete Y. Liu, C.K. Soh and Y.X. Dong 56 Analytical Solutions for Uniaxial Bending Design of Reinforced Concrete T Cross Sections according to The Eurocode 2 Standard M. Skrinar 57 Optimum Design of Reinforced Concrete Continuous Beams by Genetic M.N.S. Hadi XVI COMPUTATIONAL MODELLING OF COMPOSITE MATERIALS AND STRUCTURES 58 Free Vibration of Sandwich Beams using the Dynamic Stiffness Method J.R. Banerjee 59 Theoretical Study of Anisotropic Laminated Shells with Shear I.N. Kwun, J.Y. Kim and T.J. Kwun 60 Homogenization Method in Stochastic Finite Element Analysis of some 1D Composite Structures M. Kaminski 61 Mechanical and Thermal Fatigue of Curved Composite Beams L. Figiel and M. Kaminski 62 Three-Dimensional Progressive Damage Analysis of Composite Joints P. Perugini, A. Riccio and F. Scaramuzzino XVII FIRE RESISTANCE OF STRUCTURES 63 Fire Performance of Single Leaf Masonry Walls A. Nadjai, M. O'Gara and F. Ali 64 Finite Element Analysis of the Fire Resistance of Reinforced Concrete X.X. Zha, L.Y. Li and J.A. Purkiss 65 Fire Resistance of Slim Floors Protected using Intumescent Coatings W. Sha 66 Non-linear Fire Resistance Analysis of Reinforced Concrete Frames S. Bratina, G. Turk, M. Saje and I. Planinc 67 Fire Resistance of Protected Asymmetric Slim Floor Beams W. Sha XVIII FINITE ELEMENT METHODS IN CIVIL AND STRUCTURAL ENGINEERING 68 Finite Element Analyses of Steel Beam to Concrete-Filled Circular Steel Tube Column Connections C.C. Chen and H.L. Li 69 Finite Element Simulation of Post-Elastic Strain Energy Release Rate for Ductile Thin Wall Structure D. Tran 70 The Effects of Temperature Variation on the Creep Behaviour of Pressure Vessels using Theta Projection Data M. Law, W. Payten and K. Snowden XIX WAVE PROPAGATION PROBLEMS 71 Wave Problems in Infinite Domains M. Premrov and I. Spacapan 72 Special Finite Elements for High Frequency Elastodynamic Problems: First Numerical Experiments O. Laghrouche, P. Bettess and D. Le Houédec 73 Wave Motion In Infinite Inhomogeneous Waveguides I. Spacapan and M. Premrov XX NON-LINEAR ANALYSIS 74 A Study on the Effect of Static and Cyclic Loading and Linear and Non-Linear Material Properties in the Analysis of Flexible Pavements by Finite Element Modelling M.N.S. Hadi and B.C. Bodhinayake 75 Updated Lagrangian Formulation using ESA Approach in Large Rotation Problems of Thin-Walled Beam-Type Structures G. Turkalj, J. Brnic and J. Prpic-Orsic 76 Non-linear Analysis of Composite Floor Slabs with Geometric Orthotropy B.A. Izzuddin, X.Y. Tao and A.Y. Elghazouli 77 Non-linear Behaviour, Failure Loads and Inelastic Buckling of Multispan Cable-Stayed Bridges M.M. Bakhoum, G. Helmy, W.A. Attia and M. Mourad 78 Geometric Non-linear Analysis of General Shell Structures Using a Flat Triangular Shell Element M.H. Jang, J.Y. Kim and T.J. Kwun 79 Insitu Considerations for Non-linear Buckling Analysis S.H. Lee XXI COMPUTATIONAL METHODS 80 Grid Generation Using Finite Fourier Series T. Ohkami and S. Goto 81 Constitutive Error Estimator for the Control of Contact Problems involving Friction J.Ph. Combe, F. Louf and J.P. Pelle 82 The Dynamic Behaviour of a Cracked Beam Subjected to a White Noise P. Cacciola, N. Impollonia and G. Muscolino 83 About Sensitivity Analysis for Elastoplastic Systems at Large Strains T. Rojc and B. 
Stok 84 Determination of Constitutive Material Parameters for Sheet Metal M. Kompis and T.G. Faurholdt 85 Local Error Estimator for Stresses in 3D Structural Analysis E. Florentin, L. Gallimard, P. Ladevèze and J.P. Pelle 86 The Three-Dimensional Beam Theory: Finite Element Formulation based on D. Zupan and M. Saje 87 Element-Free Crack Propagation by Partition of Unity Weighted A. Carpinteri, G. Ferro and G. Ventura 88 Sensitivity of Inverse Boundary Element Techniques to Errors in Photoelastic Measurements P. Wang, A.A. Becker, I.A. Jones and T.H. Hyde XXII PARALLEL AND DISTRIBUTED COMPUTATIONS 89 Influence of Domain Decomposition on Solution of Equation Systems J. Kruis and Z. Bittnar 90 An Explicit Parallel Procedure for Non-linear Structural Mechanics with Distributed Computing M.L. Romero, J.I. Aliaga, J.L. Bonet, M.A. Fernandez and P.F. Miguel 91 Generation of All-Quadrilateral Meshes Using a Triangular Mesh D. Rypl and Z. Bittnar 92 Convergence of the Iterative Group-Implicit Algorithm for Parallel Transient Finite Element Analysis Y. Dere and E.D. Sotelino XXIII OPTIMIZATION 93 Quantitative Stiffness-based Optimal Design of Tall Buildings using a Condensed Lateral Stiffness Matrix H.J. Lee, D.H. Lee, H.W. Lee and H.S. Kim 94 Multiobjective Optimal Design of Structures under Stochastic Loads H. Jensen 95 Extended Study on Limit Analysis of Masonry Wall with Openings A. Miyamura, A. DeStefano, Y. Kohama and T. Takada 96 A Review of the Self-Designing Structures Approach on the Optimisation of Engineering Structures J.W. Bull and Z. Pitouras 97 Topological Optimization of an Aircraft Engine Mount via Bit-masking Oriented Genetic Algorithms L. Iuspa, F. Scaramuzzino and P. Petrenga 98 Shape Optimization Problem for Incompressible Viscous Flow based on Optimal Control Theory T. Ochiai and M. Kawahara 99 A Computational Methodology to Select the Best Material Combinations and Optimally Design Composite Sandwich Panels for Minimum Cost M. Walker and R. Smith 100 Optimum Design of Cable-Stayed Bridges with Imprecise Data L.M.C. Simões and J.H. Negrão 101 Structural Optimisation of an Orthotropic Plate D. Tran 102 Application of Simulated Annealing to Optimal Barreling of Externally Pressurised Shells J. Blachut XXIV GEOTECHNICAL ENGINEERING: INFORMATION TECHNOLOGY 103 Development of a Database Oriented Software for Construction Material Selection in Contaminated Soils A.J. Puppala, V. Mohan, E.C. Crosby and S. Valluru 104 Geotechnical Parameter Prediction from Large Data Sets I. Davey-Wilson 105 Enhancing Geotechnical Education using Interactive Multimedia M. Budhu XXV GEOTECHNICAL ENGINEERING: ANALYSIS AND DESIGN 106 Numerical Modeling of Nailed Soil Walls in Vertical Excavation Y.S. Hong, R.H. Chen, C.S. Wu 107 A Limit Analysis Method for Nailed Earth Slopes Y.S. Hong 108 Lateral Pile Response due to Interface Yielding W.D. Guo 109 Finite Element Predictions of Centrifuge Tests on Liquefiable Reinforced Soils O.O.R. Famiyesin, A.A. Rodger and A. Matheson 110 A Microstructural Computation Simulation Model of Loess Soils S.C. Dibben, I.F. Jefferson and I.J. Smalley 111 Finite Element Analysis of an Offshore Pipeline Buried in a Porous Seabed: Effects of Cover Layer D.S. Jeng and P.F. Postma 112 Subgrade Modulus for Laterally Loaded Piles W.D. Guo 113 Modelling of the Effect of an Impulse on a Ground Anchorage System R.D. Neilson, A. Ivanovic, A. Starkey and A.A. Rodger 114 Numerical Modelling of Ground Anchorages Employed in the Field A. Ivanovic, A. Starkey, R.D. Neilson and A.A. 
Rodger 115 Propagation of Vibrations from a Railway Track Lying on a Semi-Infinite Soft Ground B. Picoux, G. Lefeuve-Mesgouez and D. Le Houédec 116 Combined Structural and Coastal Loads on an Offshore Pile: A Numerical Study J.A. Eicher, H. Guan and D.S. Jeng 117 A Software with Integrated Graphics Platform for Limit Analyses of Geotechnical Problems L. Santos da Silva, M.M. Farias and C.L. Sahlit 118 Finite Element Predictions of the Dynamic Effects on an adjacent Structure A. Rouaiguia and I. Jefferson 119 Computer Monitoring of Load Test on Piles G. Lipnik, B. Kovacic and P. Sparl 120 Computer Simulation and Video for Consolidation Testing U.F.A. Karim and J. de Goeijen 121 Consolidation of Soft Clays with Large Strains C.J. Leo and K.H. Xie 122 Behaviour of Landfill Liners under Earthquake Loading S.P.G. Madabhushi and S. Singh XXVI SOIL-STRUCTURE INTERACTION: ANALYSIS AND MODELLING 123 Implicit Integration of Elastoplastic Constitutive Equations of Interface Element F. Cai and K. Ugai 124 Dynamic Analysis of a Steam Turbine Support Structure V. Karthigeyan, G.K.V. Prakhya and K. Vekaria 125 Effect of Base Level to Internal Forces of a Structure in case of Earthquake J. Györgyi and S. Ádány XXVII WATER ENGINEERING: ANALYSIS AND DESIGN 126 Regional Flood Frequency Analysis using L-Moments G. Onusluel, S.D. Ozkul and N.B. Harmancioglu 127 Assessment of Information related to Floods N.B. Harmancioglu, C.P. Cetinkaya and S.D. Ozkul
{"url":"http://www.civil-comp.com/pubs/catalog.htm?t=contents&f=77_6","timestamp":"2014-04-19T06:51:38Z","content_type":null,"content_length":"36268","record_id":"<urn:uuid:bfa6f509-d6f9-4275-a9bc-24a470bba7ba>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
ATV: a very special delivery - Lesson notes

"In spite of the opinions of certain narrow-minded people, who would shut up the human race upon this globe, as within some magic circle which it must never out step, we shall one day travel to the moon, the planets, and the stars, with the same facility, rapidity, and certainty as we now make the voyage from Liverpool to New York." (Jules Verne, "From the Earth to the Moon", 1865)

1 - Controlling a collision

Imagine driving a car between two lorries on a motorway, with only a couple of centimetres' clearance on either side, whilst both you and the lorries travel at 80 km per hour. This is what the Automated Transfer Vehicle (ATV) does in order to dock with the International Space Station (ISS). The only difference is that both vehicles are travelling at 8 km per second and you want to manoeuvre with a precision equivalent to the size of a one Euro coin.

Essentially, a docking manoeuvre is a controlled, inelastic collision. You allow two objects to collide and stick together at a relatively low velocity. So, even though both objects are moving at about 8 km s^-1, they approach each other at a few centimetres per second. There are a number of reasons for this. Since there is relatively little friction compared to ground level on Earth, any stabilising actions of the thrusters must be performed wisely, otherwise they could throw the ATV wildly off-course. If small adjustments are the only adjustments that can be made, you really want to be approaching the docking point at a very small relative velocity. Think about a driver reversing a car into a very tight space, with a one centimetre clearance on either side. He wouldn't want to do this quickly, would he? It is the same principle. However, for the ATV, this process is fully automated using information from its own rendezvous sensors, whilst being monitored from the ATV Control Centre in Toulouse, France.

Furthermore, if you have ever witnessed the inelastic collision of two masses on an air track in a laboratory, you will appreciate that momentum is conserved before and after the collision (principle of conservation of momentum). The last thing scientists want is for a 20 tonne mass to bump heavily into the ISS. Apart from the damage, it would affect the velocity, and therefore the trajectory, of the ISS. If the ISS has a mass of approximately 250 tonnes and the ATV is approximately 20 tonnes, we can show that the increase in velocity of the docked ATV and ISS should be about a tenth of the relative approach velocity. We will simplify matters and treat this as a linear collision. From the principle of conservation of momentum:

m_ATV v_ATV + m_ISS v_ISS = (m_ATV + m_ISS) v_final

This can be simplified, in the ISS observer's frame (where v_ISS = 0 and v_ATV is the approach velocity), and rearranged to give:

Δv = [m_ATV / (m_ATV + m_ISS)] v_approach ≈ [20 / (20 + 250)] v_approach ≈ 0.07 v_approach

This represents an increase in orbital speed, Δv, for the ISS after docking, relative to the approach speed of the ATV. If we consider an approach speed of a few centimetres per second, the resulting Δv of a few millimetres per second is fortunately tiny!

2 - The ATV: "Where am I?"

In the early stages of navigation, the ATV uses a combination of star tracker and Global Positioning System (GPS) data to get closer to the ISS. In a similar way to how humans have used the stars to navigate on Earth for many centuries, the star tracker – a modern-day equivalent of the sextant – is able to recognise different constellations in the sky to calculate its own orientation in space. In addition, GPS measures the distances to the orbiting GPS satellites in order to give positional information. This is the same GPS technology that we use here on Earth.
The GPS receiver uses the network of at least 24 operational GPS satellites. Each satellite orbits with a period of twelve hours such that at any point in time on Earth, a GPS receiver can receive an electromagnetic time-code signal from at least four of the satellites as well as positional data of the individual satellites. The same is true for the ATV in space. The time-codes transmitted by the satellites are all synchronised according to atomic clocks. The GPS receiver also requires a synchronised clock onboard so that it can calculate the differences in time codes (which are travelling at the speed of light) and hence the distance to each satellite. The GPS receiver will calculate the distance to each of the satellites and find the singular point where the surfaces of the spheres, of radii corresponding to the distances from each of the satellites, intersect in space, in order to calculate its position. This is known as three-dimensional trilateration.

If we know, for example, that the time-code signal from satellite A corresponds to t_A = 0.0796963586 s and from satellite B t_B = 0.0832568645 s, we can work out the distance to each satellite (taking c ≈ 3 × 10^5 km s^-1):

Distance to satellite A, d_A = t_A c = 23908.90758 km
Distance to satellite B, d_B = t_B c = 24977.05935 km

Atomic clocks are extremely expensive and would make the cost of owning a GPS receiver prohibitive, which is why, on Earth, each GPS receiver has an internal quartz clock instead. Unfortunately, this is not as accurate as an atomic clock. If a GPS receiver on Earth is slightly out of synch, it will give a tiny mistake of a few thousandths of a second for each satellite. Referring to the previous satellites A and B, the times could be, for example, t_A = 0.0797963586 s and t_B = 0.0835568645 s. This would correspond to:

Satellite A: d_A = t_A c = 23938.90758 km and a resulting circle error of 30 km
Satellite B: d_B = t_B c = 25067.05935 km and a resulting circle error of 90 km

This intersection point in the map would give a completely wrong position! Fortunately a third satellite signal is received and the receiver can make an adjustment based on each of the time-code signals it receives. Indeed, each of the distances will be proportionally incorrect and won't intersect at a single point. So, the receiver adjusts its internal clock such that the measured distances intersect at a unique point. The receiver is constantly doing this, so you could argue that you have access to a very accurate clock if you have your own car satellite navigation system. For three-dimensional positioning, a fourth satellite is required; this allows the receiver to fix its position unambiguously and to provide altitude information.

Soon, Galileo will be Europe's own global navigation satellite system, providing a highly accurate, guaranteed, global positioning service under civilian control.

This exercise is partially based on the `Lift-Off' exercises published by ESA BR-223.

3 - Rendezvous in space

The ATV is launched into a 300-km orbit, from which an elliptical transfer orbit is used to carry the ATV into a rendezvous trajectory towards the ISS at 350 km altitude. Consider the ISS orbit as fixed in space, inclined at 51.6° over the Equator, with the Earth rotating underneath. The best moment to launch (the "launch window") the ATV is when Kourou is almost underneath the position of the ISS orbit.
This is known as the launch window and means that, when launched, the ATV will be put into an orbit which is in the same plane as the ISS, resulting in fewer correction manoeuvres during the transfer orbit.

From Kepler's Third Law, T^2 = k a^3, we can work out the orbital period of the elliptical transfer orbit. Firstly, we have to find out the value of the constant k in Kepler's Third Law. For this purpose, we consider a circular orbit to be a special case of an elliptical orbit: equating the gravitational and centripetal forces for a circular orbit of radius r gives T^2 = (4π^2 / G M_E) r^3, so k = 4π^2 / G M_E. For the transfer ellipse, the semi-major axis a is the mean of the radii of the 300 km and 350 km orbits, from which the transfer period follows. (This exercise is based on the `Lift Off' exercises published by ESA BR-223.)

4 - How far to docking, Sir?

During the final stages of approach, a range-finding technique using pulsed electromagnetic radiation is used to align the ATV during docking with the ISS. At a few hundred metres, a laser pulse is sent to the retro-reflectors on the ISS and the beam transit time is measured to calculate the distance between the ATV and the ISS. A retro-reflector will reflect light back in the same direction as the source, so the distance measured will be a direct line-of-sight distance. For example, if the pulses are returning with a 1.5 µs delay, then we can calculate the distance as follows:

d = c t / 2 = (3 × 10^8 m s^-1 × 1.5 × 10^-6 s) / 2 = 225 m

Remembering, of course, that the time has to be divided by two (the time measured is the time for the pulse to travel to and from the retro-reflector). The telegoniometer provides 10000 pulses per second at a different wavelength, ensuring that the final approach is carefully monitored in conjunction with the videometer sensors.

5 - Re-boosting – why such a low orbit?

Each day, the ISS loses about 100 m altitude due to residual air resistance. Being at a relatively low orbit of 400 km, there is still a small amount of atmosphere present, with a mean density of the order of 3.8 × 10^-12 kg m^-3. (The actual density varies according to solar activity and whether it is day or night. This approximate value, for mean levels of solar activity over a year, is taken from the MSISe-90 atmosphere model.)

We can calculate the drag force F_D at a velocity v using

F_D = (1/2) C_D ρ A v^2

where C_D is the (dimensionless) drag coefficient of the object, ρ is the density of the fluid (the residual atmosphere) and A is the cross-sectional area of the moving object. The drag coefficient of the ISS is about 2.07 and for the ATV 2.4. The cross-sectional area of the ISS can vary between about 700 m^2 and 2300 m^2, depending on the ISS configuration. If we consider A = 1000 m^2, this results in a typical drag force of 0.25 N. The decrease in velocity corresponding to this force is Δv = (F_D / m_ISS) t. If we consider a typical day, then t = 86400 s and Δv ≈ 0.09 m s^-1, giving a decrease of orbital height of about 100 m per day (see Section 6).

Despite the low atmospheric density at 400 km, there are still around a hundred million particles in every cubic centimetre, and this results in drag on the ISS, causing it to lose height over time. You would think that the issue of having to re-boost the ISS regularly would be a good argument for putting it into a higher orbit. This way, there would be no need to transport large amounts of fuel on board the ATV to carry out the re-boosting operations. If only it were that simple! Let's see how much energy is required to get to such an orbital height. To find out how much energy is required (the work done, W_g) to move a mass from the surface of the Earth (R_E) to an altitude of 400 km (R_E + 400 km), we need to calculate the difference in gravitational potential energy (GPE) at these two points.
Work done = GPE at 400 km – GPE at Earth's surface:

W_g = G M_E m [1/R_E – 1/(R_E + 400 km)] ≈ 3.7 × 10^6 J for each kilogram of mass

So, each kilogramme of mass requires 3.7 × 10^6 J of energy to move to an altitude of 400 km above the Earth's surface. This is the work done against gravity alone. When you start to include the mass of the ATV body, its fuel and drag due to the atmosphere during launch, you can see quite rapidly that this requires huge amounts of energy. In fact, the work done against gravity alone is of the order of 7.0 × 10^10 J (this is comparable to the energy consumed by an average automobile in one year). Hence the need for a powerful launch vehicle such as the Ariane 5 rocket.

Obviously, the higher the orbit, the more energy is required to reach it. This is the principal reason that the ISS is in a Low Earth Orbit (LEO) at 400 km. Any higher and more fuel would be needed to get there, resulting in much higher launch costs.

6 - Maintaining orbital velocity

Re-boosting the ISS is done by increasing its orbital speed by a small amount, Δv, which causes it to move to a higher orbit. Since we know the mass of the Earth and the mass of the ISS, we can find its orbital speed. The actual linear velocity is constantly changing, so the value we get for this calculation will be a scalar quantity, hence speed. We can do this by equating the gravitational force between two masses (namely the Earth and the ISS) with the centripetal force required to keep the ISS in orbit. In other words, gravity provides the centripetal force. The gravitational force between two masses M and m, separated by a distance r, is given by

F = G M m / r^2

Centripetal force is given by F = m a. If we assume circular motion and substitute the centripetal acceleration a = v^2 / r, then we have F = m v^2 / r. So, we have

G M m / r^2 = m v^2 / r

Rearranging to find v:

v = √(G M / r)

giving just under 8 km s^-1 for r = R_E + 400 km. At this velocity, you could fly from Paris to Berlin (877 km) in less than two minutes! The ATV is fitted with 4 re-boosting thrusters, each of which can generate 490 N of thrust. When the ATV is docked with the ISS, two of these thrusters re-boost the orbit and produce the required Δv. Depending on the height of the ISS, a typical re-boost burn may last about half an hour to increase the orbital height by about 30-50 km.

7 - Air resistance upon re-entry

Once the ATV's re-supply mission is complete (approximately 6 months), it will leave the ISS and be bound for re-entry into the Earth's atmosphere. Unlike the manned shallow-angle Space Shuttle re-entries, the ATV will move into a steep and destructive trajectory, causing it to start the re-entry at a velocity approaching 8 km s^-1. As it descends, the atmospheric density will increase, causing drag. Whereas the Ariane 5 rocket has an aerodynamic design, to minimise the effects of atmospheric drag during its launch, this is not the case for the ATV. Coupled with the fact that it will be travelling much faster, it will experience very high levels of heating. If it enters the Earth's atmosphere at just under 8 km s^-1, the resulting compression of the air in front of the ATV will create a gas shock layer that will reach a temperature high enough to dissociate and ionise atoms in the air, creating a plasma. Clearly, this is a destructive entry. Between altitudes of 70 and 50 km the ATV will start to break apart, repeatedly fragmenting into smaller and smaller pieces due to the high temperatures experienced. The resulting fragments and waste will completely burn up in the atmosphere, but there is always a small risk that a fragment of debris will fall to ground level, which is why such destructive re-entries are made over the Pacific Ocean.
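To tie together the worked numbers from sections 1 to 6, here is a short companion script in Python (my own addition, not part of the original ESA notes). The constants are standard values; small differences from the figures quoted above come from rounding.

import math

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_E   = 5.972e24    # mass of the Earth, kg
R_E   = 6.371e6     # mean radius of the Earth, m
m_ISS = 250e3       # ISS mass used in the notes, kg
m_ATV = 20e3        # ATV mass used in the notes, kg

# Section 1: velocity change of the ISS when the ATV docks (momentum conservation).
v_approach = 0.05                                  # a few cm/s approach speed, m/s
dv_dock = m_ATV / (m_ATV + m_ISS) * v_approach     # about 0.07 * v_approach
print(f"docking delta-v      : {dv_dock*1000:.1f} mm/s")

# Section 3: period of the elliptical transfer orbit between 300 km and 350 km altitude.
a = (2 * R_E + 300e3 + 350e3) / 2                  # semi-major axis of the transfer ellipse
T = 2 * math.pi * math.sqrt(a**3 / (G * M_E))      # Kepler's Third Law
print(f"transfer orbit period: {T/60:.1f} min")

# Section 4: laser range from a 1.5 microsecond round-trip delay.
print(f"laser range          : {3e8 * 1.5e-6 / 2:.0f} m")

# Sections 5 and 6: orbital speed at 400 km, drag force and the daily loss of speed.
rho, C_D, A = 3.8e-12, 2.07, 1000.0                # density, drag coefficient, area
v_orb = math.sqrt(G * M_E / (R_E + 400e3))         # about 7.7 km/s
F_D = 0.5 * C_D * rho * A * v_orb**2               # a fraction of a newton (the notes quote 0.25 N)
dv_day = F_D / m_ISS * 86400                       # about 0.08-0.09 m/s per day
print(f"orbital speed        : {v_orb/1000:.2f} km/s")
print(f"drag force           : {F_D:.2f} N")
print(f"daily delta-v (drag) : {dv_day:.2f} m/s")

# Section 5: work done against gravity per kilogram raised to 400 km altitude.
W_per_kg = G * M_E * (1/R_E - 1/(R_E + 400e3))     # about 3.7e6 J per kilogram
print(f"work against gravity : {W_per_kg:.2e} J/kg")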
Last update: 18 April 2012
{"url":"http://www.esa.int/Education/Space_In_Bytes/ATV_a_very_special_delivery_-_Lesson_notes","timestamp":"2014-04-19T23:27:38Z","content_type":null,"content_length":"135613","record_id":"<urn:uuid:c90ae786-3dde-4b1c-939a-5d67f7dc2067>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
The Monty Hall Problem

The Monty Hall problem is a probability puzzle based on the 1960s game show Let's Make a Deal. When the Monty Hall problem was published in Parade Magazine in 1990, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine claiming the published solution was wrong. It remains one of the most disputed mathematical puzzles of all time.

Basic Description

The Problem

The show's host, Monty Hall, asks a contestant to pick one of three doors. One door leads to a brand new car, but the other two lead to goats. Once the contestant has picked a door, Monty opens one of the two remaining doors. He is careful never to open the door hiding the car. After Monty has opened one of these other two doors, he offers the contestant the chance to switch doors. Is it to his advantage to stay with his original choice, switch to the other unopened door, or does it not matter?

The Solution

If you answered that the contestant's decision doesn't matter, then you are among about 90% of respondents who were quickly able to determine that the two remaining doors must be equally likely to hide the car. You are also wrong. The answer to the Monty Hall Problem is viewed by most people—including mathematicians—as extremely counter-intuitive. It is actually to the contestant's advantage to switch: the probability of winning if the contestant doesn't switch is 1/3, but if the contestant switches, the probability becomes 2/3. To see why this is true, we examine each possible scenario below.

We can first imagine the case where the car is behind door 1, and see what prize the contestant will win depending on whether he stays with his initial pick or switches after Monty opens a door:

Initial pick door 1 – staying wins the car; switching wins a goat.
Initial pick door 2 – staying wins a goat; switching wins the car.
Initial pick door 3 – staying wins a goat; switching wins the car.

If the contestant uses the strategy of always staying, he will only win if he originally picked door 1. If the contestant always switches doors when Monty shows him a goat, then he will win if he originally picked door 2 or door 3. A player who stays with the initial choice wins in only one out of three of these equally likely possibilities, while a player who switches wins in two out of three. Since we know that the car is equally likely to be behind each of the three doors, we can generalize our strategy for the case where the car is behind door 1 to any placement of the car. The probability of winning by staying with the initial choice is 1/3, while the probability of winning by switching is 2/3. The contestant's best strategy is to always switch doors so he can drive home happy and goat-free.

Aids to Comprehension

The Monty Hall problem has the distinction of being one of the rare math problems that has gained recognition on the front page of the Sunday New York Times. On July 21, 1991, the Times published a story that explained a heated argument between a Parade columnist, Marilyn vos Savant, and numerous angry readers. Many of these readers held distinguished degrees in mathematics, and the problem seemed far too elementary to warrant such difficulty in solving. Further explanation of the readers' debate with vos Savant can be found in the Why It's Interesting section. However, if you aren't completely convinced that switching doors is the best strategy, be aware that the Monty Hall problem has been called "math's most contentious brain teaser." The following explanations are alternative approaches to the problem that may help clarify that the best strategy is, in fact, switching doors.
Why the Probability is not 1/2

The most common misconception is that the odds of winning are 50-50 no matter which door a contestant chooses. Most people assume that each door is equally likely to contain the car since the probability was originally distributed evenly between the three doors. They believe that they have no reason to prefer one door, so it does not matter whether they switch or stick with their original choice.

This reasoning seems logical until we realize that the two doors cannot be equally likely to hide the car. The critical fact is that Monty's choice of which door to open is not random, so when he opens a door, it gives the contestant new information. Marilyn defended her answer in a subsequent column addressing this point specifically. Suppose we pause after Monty has revealed a goat and a UFO settles down onto the stage and a little green woman emerges. The host asks her to point to one of the two unopened doors. Then the chances that she'll randomly choose the one with the prize are 1/2. But that's because she lacks the advantage the original contestant had—the help of the host. "When you first choose door #1 from three, there's a 1/3 chance that the prize is behind that one and a 2/3 chance that it's behind one of the others. But then the host steps in and gives you a clue. If the prize is behind #2, the host shows you #3, and if the prize is behind #3, the host shows you #2. So when you switch, you win if the prize is behind #2 or #3. You win either way! But if you don't switch, you win only if the prize is behind door #1," Marilyn explained.

This is true because when Monty opens a door, he is reducing the probability that it contains a car to 0. When the contestant makes an initial pick, there is a 1/3 chance that he picked the car and a 2/3 chance that one of the other two doors has the car. When Monty shows him a goat behind one of those two doors, the 2/3 chance is only for the one unopened door because the probability must be 0 for the one that the host opened.

An Extreme Case of the Problem

Imagine that you are on Let's Make a Deal and there are now 1 million doors. You choose your door, then Monty opens all but one of the remaining doors, showing you that they hide goats. It's clear that your first choice is unlikely to have been the right choice out of 1 million doors. Since you know that the car must be hidden behind one of the unopened doors and it is very unlikely to be behind your door, you know that it must be behind the other door. In fact, on average in 999,999 out of 1,000,000 games the other door will contain the prize, because 999,999 out of 1,000,000 times the player first picked a door with a goat. Switching to the other door is the best strategy.

Using a simulation is another useful way to show that the probability of winning by switching is 2/3. A simulation using playing cards allows us to perform multiple rounds of the game easily. One simulation proposed by vos Savant herself requires only two participants, a player and a host. Three cards are held by the host, one ace that represents the prize and two lower cards that represent the mules. The host holds up the three cards so only he can see their values. The contestant picks a card, and it is placed aside so that he still cannot see the value. Monty then reveals one of the remaining low cards which represents a mule. He must choose between the two lower cards if they both remain in his hand.
If the card remaining in the host's hand is an ace, then this is recorded as a round where the player would have won by switching. Contrastingly, if the host is holding a low card, the round is recorded as one where staying would have won. Performing this simulation repeatedly will reveal that a player who switches will win the prize approximately 2/3 of the time.

A More Mathematical Explanation

The following explanation uses Bayes' Theorem to show how Monty revealing a goat changes the game. Let the door picked by the contestant be called door a and the other two doors be called b and c. Also, let $V_a$, $V_b$, and $V_c$ be the events that the car is actually behind door a, b, and c respectively. We begin by looking at a scenario that leads to Monty opening door b, so let $O_b$ be the event that Monty Hall opens door b. Then, the problem can be restated as follows: Is $P(V_a|O_b) = P(V_c|O_b)$?

$P(V_a|O_b)$ is the probability that door a hides the car given that Monty opens door b. Similarly, $P(V_c|O_b)$ is the probability that door c hides the car given that Monty opens door b. So, when $P(V_a|O_b) = P(V_c|O_b)$, the probability that the car is behind one unopened door is the same as the probability that the car is behind the other unopened door. If this is the case, it won't matter if the contestant stays or switches.

Using Bayes' Theorem, we know that

$P(V_x|O_b) = \frac{P(V_x)\,P(O_b|V_x)}{P(O_b)}$ for each of the events $V_a$, $V_b$, $V_c$.

Also, we can assume that the prize is randomly placed behind the doors, so

$P(V_a) = P(V_b) = P(V_c) = \frac{1}{3}$

Then we can calculate the conditional probabilities for the event $O_b$, which we can then use to calculate the probability of event $O_b$ itself. First, we calculate the conditional probability that Monty opens door b given where the car is hidden.

$P(O_b|V_a) = 1/2$ because if the prize is behind a, Monty can open either b or c.
$P(O_b|V_b) = 0$ because if the prize is behind door b, Monty can't open door b.
$P(O_b|V_c) = 1$ because if the prize is behind door c, Monty can only open door b.

Each of these probabilities is conditional on the prize being hidden behind a specific door, and the events $V_a$, $V_b$, and $V_c$ are mutually exclusive since the car can only be hidden behind one door. As a result, we know that $P(O_b)$ is equal to

$P(O_b) = P(O_b \cap V_a) + P(O_b \cap V_b) + P(O_b \cap V_c)$

Using the equation for the probability of non-independent events, we can say

$P(O_b)= P(V_a)P(O_b|V_a) + P(V_b)P(O_b|V_b) +P(V_c)P(O_b|V_c) = \frac{1}{3} * \frac{1}{2} + \frac{1}{3} * 0 + \frac{1}{3} * 1 = \frac{1}{2}$

Then, we can use $P(O_b)$, $P(O_b|V_a)$, and $P(V_a)$ to calculate $P(V_a|O_b)$:

$P(V_a|O_b) = \frac {P(V_a)*P(O_b|V_a)}{P(O_b)} = \frac {\frac{1}{3} * \frac{1}{2}} {\frac{1}{2}} = \frac {1}{3}$

$P(V_c|O_b) = \frac {P(V_c)*P(O_b|V_c)}{P(O_b)} = \frac {\frac{1}{3} * 1} {\frac{1}{2}} = \frac {2}{3}$

The probability of $V_c$ (the event that the car is hidden behind door c, the door that Monty hasn't opened and the contestant hasn't selected) is therefore not equal to the probability of $V_a$ (the event that the car is behind the contestant's original choice). The contestant is offered an opportunity to switch to door c. We have calculated that the probability of winning when door c is selected is 2/3 and the probability of winning with the contestant's original choice, door a, is 1/3. By symmetry, the same argument applies whichever door Monty opens.
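As a quick numerical check of these probabilities (an illustrative addition, not part of the original Math Images page), a short Monte Carlo simulation of the game in Python gives win rates close to 1/3 for staying and 2/3 for switching:

import random

def play_round(switch):
    """Play one round of the game; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the contestant's pick nor the car.
    monty = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

def win_rate(switch, rounds=100000):
    return sum(play_round(switch) for _ in range(rounds)) / rounds

if __name__ == "__main__":
    print("stay  :", win_rate(switch=False))   # close to 1/3
    print("switch:", win_rate(switch=True))    # close to 2/3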
The probability that the car is hidden behind the contestant's original choice is 1/3, but the probability that the car is hidden behind the unopened and unselected door is 2/3. If the contestant switches, he doubles his chance of winning.

Why It's Interesting

Variations of the problem have been popular brain teasers since the 19th century, but the "Let's Make a Deal" version is most widely known.

History of the Problem

The earliest of several probability puzzles related to the Monty Hall problem is Bertrand's box paradox, posed by Joseph Bertrand in 1889. In Bertrand's puzzle there are three boxes: a box containing two gold coins, a box with two silver coins, and a box with one of each. The player chooses one random box and draws a coin without looking. The coin happens to be gold. What is the probability that the other coin is gold as well? As in the Monty Hall problem the intuitive answer is 1/2, but the actual probability is 2/3.

Ask Marilyn: A Story of Misguided Hatemail

The question was originally proposed by a reader of "Ask Marilyn", a column in Parade Magazine, in 1990. Marilyn's correct solution, that switching doors was the best strategy, caused an uproar among mathematicians. While most people responded that switching should not matter, the contestant's chances of winning in fact double if he switches doors. Part of the controversy, however, was caused by the lack of agreement on the statement of the problem itself. Most statements of the problem, including the one in Marilyn's column, do not match the rules of the actual game show. This was a source of great confusion when the problem was first presented. The main ambiguities in the problem arise from the fact that it does not fully specify the host's behavior. For example, imagine a host who wasn't required to always reveal a goat. The host's strategy could be to open a door only when the contestant has selected the correct door initially. This way, the host could try to tempt the contestant to switch and lose.

When first presented with the Monty Hall problem, an overwhelming majority of people assume that switching does not change the probability of winning the car, even when the problem is stated to remove all sources of ambiguity. An article by Burns and Wieth cited various studies that document difficulty with the Monty Hall problem specifically: 13 studies using standard versions of the Monty Hall dilemma reported that most people do not switch doors. Switch rates ranged from 9% to 23% with a mean of 14.5%, even when the problem was stated explicitly. This consistency is especially remarkable given that these studies include a range of different wordings, methods of presentation, languages, and cultures. Marilyn quotes cognitive psychologist Massimo Piattelli-Palmarini in her own book saying "... no other statistical puzzle comes so close to fooling all the people all the time" and "that even Nobel physicists systematically give the wrong answer, and that they insist on it, and they are ready to berate in print those who propose the right answer" (Bostonia July/August 1991). When the Monty Hall problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine claiming the published solution was wrong. One letter written to vos Savant by Dr. E.
Ray Bobo of Georgetown University was especially critical of Marilyn's solution: "You are utterly incorrect about the game show question, and I hope this controversy will call some public attention to the serious national crisis in mathematical education. If you can admit your error, you will have contributed constructively toward the solution to a deplorable situation. How many irate mathematicians are needed to get you to change your mind?"

Monty and Monkeys

A recent article published in The New York Times uncovered an interesting relationship between the Monty Hall problem and a study on cognitive dissonance using monkeys. If the calculations of Yale economist M. Keith Chen are correct, then some of the most famous experiments in psychology might be flawed. Chen believes the researchers drew conclusions based on a natural inclination to evaluate probability incorrectly. The most famous experiment in question is the 1956 study "Postdecision changes in the desirability of alternatives" on rationalizing choices. The researchers studied which M&M colors were most preferred by monkeys. After identifying a few colors of M&Ms that were approximately equally favored by a monkey (say, red, blue, and yellow), the researchers gave the monkey a choice between two of the colors. In one case, imagine that a monkey chose yellow over blue. Then, the monkey would be offered the choice between blue and red M&Ms. Researchers noted that about two-thirds of the time the monkey would choose red. The 1956 study claimed that these results reinforced the theory of rationalization: once we reject something, we become convinced that we never liked it anyway. Dr. Chen reexamined the experimental procedure and says that the monkeys' rejection of blue might be attributable to statistics alone. Chen says that although the three colors of M&M's are approximately equally favored, there must be some slight difference in preference among red, blue, and yellow. If this is the case, then the monkey's choice of yellow over blue wasn't arbitrary. Like Monty Hall's decision to open a door that hid a goat, the monkey's choice between yellow and blue discloses additional information. In fact, when a monkey favors yellow over blue, there's a two-thirds chance that it also started off with a preference for red over blue, which would explain why the monkeys chose red 2/3 of the time in the Yale experiment. To see why this is true, consider Chen's conjecture that monkeys must have some slight preference among the three colors they are being offered. The six possible ways a monkey could rank its M&Ms (from most to least preferred) are: red > yellow > blue; yellow > red > blue; yellow > blue > red; red > blue > yellow; blue > red > yellow; blue > yellow > red. Only the first three rankings are consistent with the monkey choosing yellow over blue, and in two of those three the monkey also prefers red over blue. So, given the observed choice of yellow over blue, the monkey prefers red over blue in 2/3 of the compatible rankings. Although Chen agrees that the study may still have discovered useful information about preferences, he doesn't believe it has been measured correctly yet. "The whole literature suffers from this basic problem of acting as if Monty's choice means nothing" (Tierney 2008). The Monty Hall problem, the monkey study, and other problems involving unequal distributions of probability are notoriously difficult for people to solve correctly. Even academic studies may be littered with mistakes caused by difficulty interpreting statistics.
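This counting argument is easy to verify programmatically. The short R sketch below (my own illustration, not from the original article) enumerates the six strict orderings, keeps those consistent with a yellow-over-blue choice, and reports how often red beats blue among them.

```r
# Enumerate all strict preference orderings of red (R), yellow (Y), blue (B).
orderings <- list(c("R","Y","B"), c("R","B","Y"), c("Y","R","B"),
                  c("Y","B","R"), c("B","R","Y"), c("B","Y","R"))

prefers <- function(o, a, b) which(o == a) < which(o == b)

# Keep only orderings consistent with the observed choice: yellow over blue.
consistent <- Filter(function(o) prefers(o, "Y", "B"), orderings)

length(consistent)                                    # 3 compatible orderings
mean(sapply(consistent, prefers, a = "R", b = "B"))   # 2/3 also put red above blue
```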
The 2008 movie 21 increased public awareness of the Monty Hall problem. 21 opens with an M.I.T. math professor using the Monty Hall problem to explain theories to his students. The Monty Hall problem is included in the movie to show the intelligence of the main character, who is immediately able to solve such a notoriously difficult problem.

References

Burger, Edward B., and Michael P. Starbird. The Heart of Mathematics.
Rosenhouse, Jason. The Monty Hall Problem: The Remarkable Story of Math's Most Contentious Brain Teaser.
Brehm, J. W. (1956). Postdecision changes in the desirability of alternatives. Journal of Abnormal and Social Psychology, 52, 384-389.
{"url":"http://mathforum.org/mathimages/index.php?title=The_Monty_Hall_Problem&oldid=14563","timestamp":"2014-04-19T23:25:46Z","content_type":null,"content_length":"49661","record_id":"<urn:uuid:1e2a8603-e89a-479d-867f-d2847eef06bb>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
If a line has no y-intercept, what can you say about the line? What if a line has no x-intercept? Think of a real-life situation where a graph would have no x- or y-intercept. Will what you say about the line always be true in that situation?

A line that has no y-intercept is a vertical line: it is parallel to the y-axis and never crosses it. A line that has no x-intercept is a horizontal line, parallel to the x-axis. A real-life example of a graph with neither intercept is the gravitational force between two planets, where x is the distance between their centers and y is the force. There is no y-intercept because x = 0 would mean the two planets are at the same point in space, and there is no x-intercept because the planets would have to be infinitely far apart for the force to reach zero. Note that this graph is a curve rather than a line, so the vertical/horizontal characterization of lines does not carry over to it.
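To make the gravity example concrete, here is a small R sketch (the masses and distance range are illustrative values chosen for the example, not part of the original answer). The inverse-square force is undefined at r = 0 and stays strictly positive, so its graph touches neither axis.

```r
G  <- 6.674e-11                        # gravitational constant, N m^2 kg^-2
m1 <- 5.97e24                          # roughly an Earth-mass body, kg
m2 <- 6.42e23                          # roughly a Mars-mass body, kg
r  <- seq(5e7, 5e8, length.out = 200)  # center-to-center distances in meters (> 0)
F  <- G * m1 * m2 / r^2                # Newton's law of gravitation

range(F)                               # strictly positive: never touches the x-axis
plot(r, F, type = "l", xlab = "distance r (m)", ylab = "force F (N)")
```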
{"url":"http://www.weegy.com/?ConversationId=2193C956","timestamp":"2014-04-19T20:29:46Z","content_type":null,"content_length":"31293","record_id":"<urn:uuid:75461fdc-2969-4d2c-804a-dbe9199ecb43>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
A student stands at the edge of a cliff and throws a stone horizontally over the edge with a speed of 17.0 m/s. The cliff is h = 41.0 m above a flat, horizontal beach as shown in the figure.
(a) What are the coordinates of the initial position of the stone?
(b) What are the components of the initial velocity?
(c) Write the equations for the x- and y-components of the velocity of the stone with time. (Use the following as necessary: t. Let the variable t be measured in seconds. Do not state units in your answer.)
(d) Write the equations for the position of the stone with time, using the coordinates in the figure. (Use the following as necessary: t. Let the variable t be measured in seconds. Do not state units in your answer.)
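Here is a worked sketch of parts (a) through (d) in R. The original figure is not reproduced here, so the coordinate convention (origin at the base of the cliff directly below the launch point, x horizontal in the direction of the throw, y vertical with up positive) is an assumption.

```r
v0 <- 17.0   # initial horizontal speed, m/s
h  <- 41.0   # cliff height, m
g  <- 9.80   # magnitude of gravitational acceleration, m/s^2

x0 <- 0;  y0 <- h       # (a) initial position: (0, 41.0)
vx0 <- v0; vy0 <- 0     # (b) initial velocity components: (17.0, 0)

vx <- function(t) rep(vx0, length(t))  # (c) vx(t) = 17.0 (constant)
vy <- function(t) -g * t               #     vy(t) = -9.80 t
x  <- function(t) x0 + vx0 * t         # (d) x(t) = 17.0 t
y  <- function(t) y0 - 0.5 * g * t^2   #     y(t) = 41.0 - 4.90 t^2

t_land <- sqrt(2 * h / g)              # time when y(t) = 0
c(time_to_land = t_land, landing_x = x(t_land))  # about 2.89 s and 49.2 m
```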
{"url":"http://www.chegg.com/homework-help/questions-and-answers/student-stands-edge-cliff-throws-stone-horizontally-edge-speed-170-m-s-cliff-h-410-m-flat--q1958143","timestamp":"2014-04-21T14:22:16Z","content_type":null,"content_length":"22145","record_id":"<urn:uuid:23bf363f-f0f8-4c6e-b95f-feae55ad7ab7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
RTextTools is a machine learning package for automatic text classification that makes it simple for novice users to get started with machine learning, while allowing experienced users to easily experiment with different settings and algorithm combinations. The package includes nine algorithms for ensemble classification (svm, slda, boosting, bagging, random forests, glmnet, decision trees, neural networks, maximum entropy), comprehensive analytics, and thorough documentation.

maxent is an R package with tools for low-memory multinomial logistic regression, also known as maximum entropy. The focus of this maximum entropy classifier is to minimize memory consumption on very large datasets, particularly sparse document-term matrices represented by the tm package. The classifier is based on an efficient C++ implementation written by Dr. Yoshimasa Tsuruoka.

Computationally efficient procedures for regularized estimation with the semiparametric additive hazards regression model.

Regression modeling using rules with added instance-based corrections.

A fast L1-regularized regression (lasso) solver using the cyclic coordinate descent algorithm, also known as lasso shooting. This implementation can choose which coefficients to penalize: it supports coefficient-specific penalties, and it can take X'X and X'y instead of X and y.

This package facilitates the use of data mining algorithms in classification and regression tasks by presenting a short and coherent set of functions. While several DM algorithms can be used, it is particularly suited for Neural Networks (NN) and Support Vector Machines (SVM). Versions: 1.3.1, minor corrections; 1.3, new classification and regression metrics (improved mmetric function); 1.2, new input importance methods (improved Importance function); 1.1, minor error corrections; 1.0, first version.

The Stuttgart Neural Network Simulator (SNNS) is a library containing many standard implementations of neural networks. This package wraps the SNNS functionality to make it available from within R. Using the RSNNS low-level interface, all of the algorithmic functionality and flexibility of SNNS can be accessed. Furthermore, the package contains a convenient high-level interface, so that the most common neural network topologies and learning algorithms integrate seamlessly into R.

Two classification ensemble methods based on logic regression models. Logforest uses a bagging approach to construct an ensemble of logic regression models. LBoost uses a combination of boosting and cross-validation to construct an ensemble of logic regression models. Both methods are used for classification of binary responses based on binary predictors and for identification of important variables and variable interactions predictive of a binary outcome.

RGP is a simple modular Genetic Programming (GP) system built in pure R. In addition to general GP tasks, the system supports Symbolic Regression by GP through the familiar R model formula interface. GP individuals are represented as R expressions, and an (optional) type system enables domain-specific function sets containing functions of diverse domain and range types. A basic set of genetic operators for variation (mutation and crossover) and selection is provided.

Functions to perform dimensionality reduction for classification if the covariance matrices of the classes are unequal.
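The lasso solver described above is not named in this listing, so as an illustration only, the following minimal R sketch implements the cyclic coordinate descent ("shooting") update from scratch. The simulated data, the penalty value, and the iteration count are arbitrary choices for the example, not defaults of any particular package.

```r
# Soft-thresholding operator used by the coordinate-wise lasso update.
soft_threshold <- function(z, lambda) sign(z) * pmax(abs(z) - lambda, 0)

# Cyclic coordinate descent ("lasso shooting") for (1/2)||y - Xb||^2 + lambda*sum(|b|).
lasso_shooting <- function(X, y, lambda, n_iter = 100) {
  p <- ncol(X)
  beta <- rep(0, p)
  xtx <- colSums(X^2)                                  # x_j' x_j for each column
  for (it in seq_len(n_iter)) {
    for (j in seq_len(p)) {
      r_j <- y - X[, -j, drop = FALSE] %*% beta[-j]    # partial residual
      beta[j] <- soft_threshold(sum(X[, j] * r_j), lambda) / xtx[j]
    }
  }
  beta
}

# Toy data: only the first three of twenty predictors matter.
set.seed(1)
X <- scale(matrix(rnorm(100 * 20), 100, 20))
y <- X[, 1:3] %*% c(2, -1.5, 1) + rnorm(100)
round(lasso_shooting(X, y, lambda = 20), 2)  # noise coefficients shrink to zero
```

Coefficient-specific penalties, as mentioned above, would amount to replacing the single lambda with a vector indexed by j inside the soft-threshold call.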
{"url":"http://www.inside-r.org/category/packagetags/machinelearning","timestamp":"2014-04-17T14:01:58Z","content_type":null,"content_length":"18456","record_id":"<urn:uuid:7bc4926e-c1a0-49a8-8cbf-8306e4d6f848>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Power of Model Selection Strategies for Genome-Wide Association Studies

PLoS Genet. Jul 2009; 5(7): e1000582. Bruce Walsh, Editor

Abstract

Genome-wide association studies (GWAS) aim to identify genetic variants related to diseases by examining the associations between phenotypes and hundreds of thousands of genotyped markers. Because many genes are potentially involved in common diseases and a large number of markers are analyzed, it is crucial to devise an effective strategy to identify truly associated variants that have individual and/or interactive effects, while controlling false positives at the desired level. Although a number of model selection methods have been proposed in the literature, including marginal search, exhaustive search, and forward search, their relative performance has only been evaluated through limited simulations due to the lack of an analytical approach to calculating the power of these methods. This article develops a novel statistical approach for power calculation, derives accurate formulas for the power of different model selection strategies, and then uses the formulas to evaluate and compare these strategies in genetic model spaces. In contrast to previous studies, our theoretical framework allows for random genotypes, correlations among test statistics, and a false-positive control based on GWAS practice. After the accuracy of our analytical results is validated through simulations, they are utilized to systematically evaluate and compare the performance of these strategies in a wide class of genetic models. For a specific genetic model, our results clearly reveal how different factors, such as effect size, allele frequency, and interaction, jointly affect the statistical power of each strategy. An example is provided for the application of our approach to empirical research. The statistical approach used in our derivations is general and can be employed to address the model selection problems in other random predictor settings. We have developed an R package markerSearchPower to implement our formulas, which can be downloaded from the Comprehensive R Archive Network (CRAN) or http://bioinformatics.med.yale.edu/group/.

Author Summary

Almost all published genome-wide association studies are based on single-marker analysis. Intuitively, joint consideration of multiple markers should be more informative when multiple genes and their interactions are involved in disease etiology. For example, an exhaustive search among models involving multiple markers and their interactions can identify certain gene–gene interactions that will be missed by single-marker analysis. However, an exhaustive search is difficult, or even impossible, to perform because of the computational requirements. Moreover, searching more models does not necessarily increase statistical power, because there may be an increased chance of finding false positive results when more models are explored. For power comparisons of different model selection methods, the published studies have relied on limited simulations due to the highly computationally intensive nature of such simulation studies. To enable researchers to compare different model search strategies without resorting to extensive simulations, we develop a novel analytical approach to evaluating the statistical power of these methods.
Our results offer insights into how different parameters in a genetic model affect the statistical power of a given model selection strategy. We developed an R package to implement our results. This package can be used by researchers to compare and select an effective approach to detecting SNPs. In genome-wide association studies (GWAS), hundreds of thousands of markers are genotyped to identify genetic variations associated with complex phenotypes of interest. The detection of truly associated markers can be framed as a model selection problem: a group of statistical models are considered to assess how well each model predicts the phenotype, and the selected models are expected to include all or some of the truly associated genetic markers and few, if any, markers not associated with the phenotype. In the literature, three model-selecting procedures have been advocated: marginal search, exhaustive search, and forward search. Marginal search analyzes markers individually and is the simplest and computationally least expensive among these three search methods. Under certain assumptions, such as no interactions among covariates (or markers in the GWAS context), Fan and Lv [1] proved that the truly associated covariates will be among those having the highest marginal correlations. However, Fan and Lv acknowledged that marginal search may suffer when an important covariate is jointly associated as a group but marginally unassociated as individuals with the response (phenotype). In GWAS, the phenotypes are likely associated with multiple genes, their gene-gene interactions (i.e. epistases), and gene-environment interactions. Therefore, marginal search may not be optimal for the analysis of GWAS data. In contrast to marginal search, exhaustive search and forward search simultaneously consider multiple markers in the model. Exhaustive search examines all possible models within a given model dimension, and forward search identifies markers in a stepwise fashion. As they consider interactions, they may gain statistical power compared to marginal search [2]–[5]. In practice, exhaustive search bears a much larger computational burden because the number of models that need to be explored is an exponential function of the number of markers jointly considered. For example, if 500,000 markers are genotyped, an exhaustive search of all marker pairs would study around 10^11 candidate models. This requires significant computational resources, especially when permutations are needed to establish overall significance levels, e.g. for the purpose of appropriately accounting for dependencies among markers. Because of this computational burden, it is difficult or even impossible to assess the power of exhaustive search through simulation studies. Based on limited simulations and real data analysis, conflicting results exist in the literature on the relative merit of exhaustive search and forward search. Because exhaustive search considers many more models, it may increase the probability that the truly associated markers do not rise to the top as more models involving unrelated markers may outperform the true models simply due to chance. Forward search explores a smaller model space, allowing a less stringent threshold for significance. However, forward search may miss the markers that have a strong interaction effect but weak marginal effect. 
Through limited simulation studies, Marchini and colleagues [4],[5] concluded that exhaustive search is more powerful in finding truly associated markers in the presence of epistasis. On the contrary, based on the analysis of a real data set for yeast, Storey and colleagues [2],[3] recommended sequential forward search. They reported that exhaustive search suffers from lower power because of a substantial increase in the number of models. By analytically demonstrating the conditions under which exhaustive search is better than forward search, and the reverse, our research systematically explains these contradictory results. It is clear that the optimal model selection strategy depends on the underlying genetic model, which is unknown to researchers. In the most extreme case, if the underlying genetic model has no marginal association, an exhaustive search is the only way to find influential genes. On the other hand, for a model with purely additive genetic effects, marginal or forward search will be the most effective. For the cases between these two extremes, the optimal model selection strategy should achieve a delicate balance between computational efficiency, statistical power, and a low false positive rate. Without knowledge of the underlying model, it is necessary to evaluate the different methods by thoroughly comparing them across a large genetic model space, a space that is difficult to explore fully with either computationally intensive simulations or limited real data analysis. In this article, we derive the analytical results for the statistical power of marginal search, exhaustive search, and forward search. These formulas can significantly reduce the computational burden in power estimation. To implement the formulas, we developed an R package, markerSearchPower. We demonstrate through simulations that our results are accurate. Through our results, we can systematically assess different SNP search methods across a large model space and efficiently identify the optimal one. Our derivation approaches are general and can be applied to model selection procedures in other random predictor settings. The rest of this article is organized as follows: in the Results section, we present the model set-up, the validation of our analytical results through simulations, and the comparisons among the three model selection strategies; in the Discussion section, we summarize the power comparison results and discuss our methodological contributions; and in the Methods section, we outline the derivations of asymptotic distributions and power calculations. Text S1, available online, gives statistical details of proofs and derivations, extended power comparisons, and relevant formulas for the estimates of distribution parameters.

Results

Model Setup

A genetic model relates phenotype to genotypes, and this relationship can be rather complex. In general, statistical power depends on the effects of risk alleles, allele frequencies in the population, epistasis, as well as environmental risk factors and their interactions with genetic factors. We focus on a model commonly used in the literature, which offers valuable insights into the relative performance of model selection methods. Assume that genotype data are available from p independent single nucleotide polymorphisms (SNPs). Our results can be generalized to other types of markers. We use X[i1], …, X[ip], i = 1, …, n, to denote the genotypes of the ith sampled individual at SNPs 1, …, p, respectively.
Let the alleles at the jth SNP be M[j] and m[j] with frequencies p[j] and q[j] = 1 − p[j], respectively. Under the assumption of Hardy-Weinberg equilibrium and additive allelic effects, we use the following coding for this SNP:

X[ij] = 1 if the genotype is M[j]M[j] (probability p[j]^2), X[ij] = 0 if the genotype is M[j]m[j] (probability 2p[j]q[j]), and X[ij] = −1 if the genotype is m[j]m[j] (probability q[j]^2).   (1)

We focus on the scenario that two of these SNPs, indexed by 1 and 2, are truly associated with a quantitative outcome Y through the following genetic model

Y[i] = b[1]X[i1] + b[2]X[i2] + b[3]X[i1]X[i2] + ε[i],   (2)

where ε[i] ~ N(0, σ^2) is independent of the genotypes. The interaction term represents the epistatic effect, and its coefficient b[3] measures the direction and magnitude of this effect. Based on the observed data, we fit the following models using Ordinary Least Squares (OLS) involving one or two SNPs:

Y[i] = a[0] + a[j]X[ij] + e[i],   (3)

Y[i] = a[0] + a[j]X[ij] + a[k]X[ik] + a[jk]X[ij]X[ik] + e[i].   (4)

The subscripts in the above models index the SNP(s) included in these models. Based on models (3) and (4), the three model selection methods seek candidate markers according to the corresponding test statistics. In marginal search, we fit the simple linear model (3) and compare the T-statistics [6] T[j] for j = 1, …, p. A model, and thus its involved SNP, is selected if the corresponding T-statistic is among the largest from all tests. In two-dimensional exhaustive search, we fit regression model (4) for all SNP pairs and compare the F-statistics [6] F[jk] for all pairs j < k, where j, k ∈ {1, …, p}. The models with the highest values of the F-statistics are selected. In forward search, we first conduct a marginal model selection through model (3) and select the jth SNP if |T[j]| is the largest. With X[j] retained, we then add each other SNP X[k] (k ≠ j) in turn and choose the models of form (4) that generate the highest F-statistics. Two criteria are adopted to decide if the chosen models are correct. On one hand, we could be rather stringent and call a model correct only if it matches the true underlying genetic model. This is consistent with the concept of "joint significance" in Storey et al. [2]. On the other hand, we could be more generous and call a model correct if it contains at least one of the truly associated markers. This is consistent with the null hypothesis used in some published simulation studies [4],[5]. Accordingly, we consider two definitions of power for a model selection procedure: (A) the probability of identifying exactly the true model (in marginal search, it is the probability of detecting both true SNPs); (B) the probability of detecting at least one of the true SNPs. Under power definition (A), the null model is any model other than the true genetic model; under power definition (B), the null model is any model containing neither true SNP.

Comparison between Analytical and Simulation Results

We evaluated the accuracy of the asymptotic results derived in the Methods section by comparing the analytical results with those from simulations. To estimate power through simulation studies, we generated 1,000 data sets with n subjects and p candidate SNPs assuming Hardy-Weinberg equilibrium, as indicated in (1). The quantitative trait values were generated through the true model (2) involving two true SNPs. We then used marginal search, exhaustive search, and forward search to identify SNPs associated with the trait. Under power definition (A), the target model(s) were the true model (or models with one true SNP in marginal search), and the other models were considered null models. Under power definition (B), the target models were those containing at least one true SNP, and the rest were considered null models.
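As an illustration of this set-up, the R sketch below generates genotypes under the 1/0/−1 coding in (1) and a trait from model (2), then computes the marginal T-statistics of model (3) and the overall F-statistic of model (4) for a pair of SNPs. All numeric values (sample size, number of SNPs, allele frequencies, and effect sizes) are arbitrary choices for the example, not the settings used in the paper.

```r
set.seed(2009)
n <- 1000; p <- 50                      # illustrative sample size and SNP count
q <- rep(0.3, p)                        # illustrative minor allele frequencies

# Genotype coding under Hardy-Weinberg equilibrium: 1 (MM), 0 (Mm), -1 (mm).
geno <- function(n, qj) {
  sample(c(1, 0, -1), n, replace = TRUE,
         prob = c((1 - qj)^2, 2 * qj * (1 - qj), qj^2))
}
X <- sapply(q, function(qj) geno(n, qj))

# Trait from the two-SNP model with an interaction (illustrative effect sizes).
b1 <- 0.3; b2 <- 0.3; b3 <- 0.5; sigma <- 1
y <- b1 * X[, 1] + b2 * X[, 2] + b3 * X[, 1] * X[, 2] + rnorm(n, sd = sigma)

# Marginal search: T-statistic of each SNP from the simple regression (3).
t_stats <- apply(X, 2, function(xj) summary(lm(y ~ xj))$coefficients[2, "t value"])

# Exhaustive search: overall F-statistic of the two-SNP model (4) for a pair.
f_pair <- function(j, k) unname(summary(lm(y ~ X[, j] * X[, k]))$fstatistic["value"])

c(T1 = t_stats[1], T2 = t_stats[2], F12 = f_pair(1, 2), F34 = f_pair(3, 4))
```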
The empirical power estimated from these simulations was the proportion of the datasets in which we were able to successfully find the target model(s) through the model selection procedures, under the control of a pre-specified number (R) of falsely discovered null models. Such control offers a fair comparison of power among the three model selection methods and is numerically equal to the detection probability (DP) control [7], which is the probability of including a "correct model" when selecting R (or R+1 in marginal search under power definition (A)) of the most significant models. In the first set-up for model (2), we considered fixed values of n, p, the genetic effects b[1], b[2], and b[3], the allele frequencies q[j], j = 1, …, p, and the variance σ^2. Table 1 summarizes the calculated power and the simulated power under definitions (A) and (B). The second set-up is the same as the first except for the value of b[3]; Table 2 shows the results under definitions (A) and (B). The two values of b[3] represent large and small interaction terms with which the simulation generated a broad spectrum of power values. In both set-ups, the analytical power is very close to the empirical power based on simulations.

Table 1: The probability of detecting the exact true model (or both true SNPs in marginal search) under power definition A, and the probability of detecting at least one of the true SNPs under power definition B, with the false discovery number R varying.

Table 2: The probability of detecting the exact true model (or both true SNPs in marginal search) under power definition A, and the probability of detecting at least one of the true SNPs under power definition B, with the false discovery number R varying.

We chose these two set-ups, in which the power was reasonably large, to approximate most practical settings. The chosen value of p is much smaller than that in GWAS (in the 100,000's) for the feasibility of simulation. As discussed in the Methods section, the asymptotic results are derived by assuming a large p. Therefore, we expect better approximations if p has a value similar to those in a real GWAS.

Power Comparisons of Model Selection Methods

The simulation results shown in Table 1 and Table 2 demonstrate that our analytical results provide good approximations to the true power, which is the basis for comparing the performance of these model search methods in a practical GWAS. We now consider a more realistic setting with a sample size of 1000 individuals (n) and a total of 300,000 SNPs (p). We assumed a genetic model of form (2) with a fixed σ^2, and with the values of b[1] and b[2], as well as that of b[3], varied from −1 to 1 by a step size of 0.1. To simplify the discussion, we assumed all SNPs had the same allele frequency q[j], j = 1, …, p. Note that this setting can be changed without affecting the qualitative nature of the comparison results. Figure 1 gives the 3D plots of statistical power over the genetic model space for different model selection methods (in columns) under two power definitions (A) and (B) (in rows), when controlling the number of false discoveries to be R. As shown in the Methods section, the marginally non-detectable region for SNP 1, where b[1]+b[3](p[2]−q[2]) is close to zero, is jointly determined by the main effect b[1], the epistatic effect b[3], and the allele frequency p[2] of SNP 2. The non-detectable region for SNP 2 is analogous by symmetry. In exhaustive search, such a region does not exist, as indicated by formula (12). So, exhaustive search can better identify the signals when they are counterbalanced.

Figure 1: 3D plots of statistical power over genetic model space.
The left, middle, and right columns of Figure 2 and Figure 3 present the power difference between marginal search and exhaustive search, between marginal search and forward search, and between forward search and exhaustive search, respectively. For a specific comparison, the red areas represent negative values, indicating the former method has lower power, and the green areas represent positive values, indicating the former method has higher power. The dashed contours in these plots represent the heritability of the genetic model, i.e., the proportion of the total variation due to genetic effects, which is defined as Under our model set-up, In each plot, there are two areas in which the difference of power is close to 0. First, in the central area where the signal is weak (small H^2), all model selection procedures have low power and tend to fail to pick up the true SNPs. Second, in the edge areas where the signals are strong, all model selection procedures have similarly good power. The light colored areas represent these two special situations in which there is little difference in power among model selection methods. Comparisons among model selection power for detecting the true model or both true SNPs in marginal search over genetic model space. Comparisons among model selection power for detecting either true SNP over genetic model space. To compare marginal search and exhaustive search, the left columns of Figure 2 and Figure 3 exhibit the power difference under power definitions (A) and (B), respectively. Exhaustive search has significant advantage in the red areas where the interaction effect b[3] is large or b[1]+b[3](p[2]−q[2]) is small. Such advantage is more pronounced under power definition (A) than under power definition (B). Marginal search performs better in the green areas where b[3] is small and b[1] and b[2] are both moderate. There are two reasons for the better performance of marginal search. First, with a small interaction term b[3] in these green areas, marginal search well detects the signals when the two-marker genetic effects are projected onto a marginal space through the simple regression of form (3). At the same time, with moderate b[1] and b[2], the power for these two methods is not close to 0 or 1, so that they are distinguishable. Second, marginal search considers fewer models so that the desired models are more likely to be found from the models with the best fit. Under different power definitions, the performance of forward search relative to that of marginal search can change. Capable of including interaction terms, forward search has an advantage over marginal search in finding the full correct model under power definition (A), as shown by the red areas in the middle column of Figure 2. Based on the analytical formulas in the Methods section, there is a positive correlation between the test statistics in the first and second steps of forward search. Therefore, if one of the associated SNPs can be picked up in the first step, the contribution of the epistatic term makes forward search more powerful to identify the second correct SNP. Under power definition (B), the middle column of Figure 3 shows that marginal search always has similar or slightly better power than forward search, because forward search is less likely than marginal search to pick up a true SNP if an incorrect SNP is chosen first. The power of forward search will not improve greatly even if the number of false discoveries R increases. 
As shown in the right column of Figure 2, exhaustive search under power definition (A) always has a similar or higher power to detect the true model when compared to forward search. Although forward search can also detect the interaction terms through joint analysis, its ability to capture the interaction terms is restricted, especially when the marginal effect is small in the deep red areas where b[1]+b[3](p[2]−q[2])≈0. Under power definition (B), forward search is more powerful than exhaustive search when R, the number of controlled false discoveries, is small, but is less powerful when R is large. With a small R, this benefit of forward search can be seen in Figure 3. The benefit is reduced for larger R and will eventually be dominated by the advantage of exhaustive search. Since the first step of forward search is essentially a marginal search, the advantage of exhaustive search over marginal search also applies to forward search. This is reflected in the right columns of Figure 2 and Figure 3, where the red areas are similar to those in the left columns. As reflected by the change of red/green areas between the first and the second rows in both Figure 2 and Figure 3, if we raise the number of allowed false discoveries R, the power of marginal search will increase the most, followed by the power of exhaustive search, and then the power of forward search. With the same increase in R, marginal search includes a much higher proportion of the models with true SNPs than exhaustive search. For forward search, the increase of power is smaller because it is more difficult to identify a correct SNP in the second step when an incorrect SNP is more likely to be selected in the first step. We also explored additional model set-ups in Text S1 Section 3, with other choices of n, p, R, the allele frequencies q[j] (j = 1, …, p), and σ^2, and with b[1], b[2], and b[3] varied from −2 to 2 by a step size of 0.2. The patterns in these additional set-ups are broadly consistent with those shown in Figure 2 and Figure 3.

An Example of Power Comparisons Motivated from Real GWAS

In the following we provide an example to show how to apply our approach to calculating and comparing the power of model selection methods in empirical analysis. Because there are no consistently replicated interaction effects from real studies, we constructed hypothetical interaction models based on real data so that the marginal associations between traits and markers were matched, while allowing the interaction term to vary. Specifically, we calculated power based on a set of genetic models derived from a genome-wide association study of adult height by Weedon et al. [8]. Based on the reported 20 loci that putatively influence adult height, we set up a two-marker genetic model composed of SNPs rs11107116 and rs10906982, each of which showed a moderate marginal effect. According to Supplementary Table 4 in the original publication, the estimated marginal effects of rs11107116 and rs10906982 are respectively 0.045 s.d. and 0.046 s.d., with a sample standard deviation (s.d.) of height of 6.82 cm. Assuming different levels of interaction between the two SNPs (quantified by b[3]), we estimated the parameters b[1], b[2], and σ^2 using model (2) so that the marginal effects matched the observed values. The Methods section gives the details of how these parameters were estimated. We used the set-up of Weedon's study for the sample size n, the number of SNPs p, and the allele frequencies p[1] and p[2]. Figure 4 shows the comparisons among the power of the three model selection methods over different values of b[3].
For the detection of both SNPs, graph A of Figure 4 shows that when b[3] is large, exhaustive search (red dashed curve) has a significant advantage over forward search (green dotted curve), which in turn is better than marginal search (black solid curve). If b[3] is small, marginal search has higher power than the other two. For the detection of at least one of the two SNPs, graph B shows how the relative performance depends on b[3]. The relative performance of exhaustive search strongly depends on the magnitude of epistasis. Comparing the graphs under power definition (B) at the two levels of the false discovery number also shows how the relative performance changes when a larger R is tolerated.

Figure 4: Plots of model selection power with given observed marginal effects.

For the two values of R considered, exhaustive search gains its advantage only when b[3]>0.3 or 0.6, respectively. We studied the statistical significance of the interaction terms with the simulated data (1,000 runs) when b[3] equals these two cutoffs. This example demonstrates that the value of the interaction term and the number of false discoveries affect the relative performance of model selection methods, which can be one of the reasons for the conflicting results about the power of model selection methods in the existing literature [2],[4]. Therefore, the suspected values of parameters such as epistatic effects can affect the researchers' choice of model selection methods.

Discussion

In this article, we have derived rigorous analytical results for the statistical power of three common model selection methods, and applied these results to compare the methods' performance for GWAS data. These results not only make computationally expensive simulations unnecessary, but also systematically reveal how different genetic model parameters affect the power. The comparison results among the three model selection methods illustrate the trade-off between searching the full model space and a reduced space. In one extreme, exhaustive search explores the full 2-dimensional space covering all possible epistatic effects, but it may reduce the probability that the true model(s) rank among the top models because many more models are considered. In the other extreme, marginal search casts the true 2-dimensional model onto a 1-dimensional space without considering epistasis at all. However, we have a better chance to find more true positives when the marginal association is retained in the 1-dimensional space, because fewer models are examined and the false positive control appears comparatively liberal. Between these two extremes, forward search first considers marginal projection, and then partially searches the 2-dimensional space via residual projection given the chosen predictor in the first step. Thus, forward search has the partial benefit of joint analysis, which considers epistatic effects conditionally. The stringency of its false positive control lies between those of exhaustive search and of marginal search. The relative performance of these model selection methods also depends on the definition of power. Based on definition (A), exhaustive search performs the best in finding the true underlying genetic model in most of the model space considered. Under power definition (B), marginal search is a good choice: it is not much worse than exhaustive search for a large proportion of the model space, and it is always better than the classic forward search, through which only one SNP is picked up in the first step. For most geneticists, finding at least one of the truly associated SNPs under power definition (B) is a primary concern, especially in the first stage of GWAS.
Because we do not have prior information about the true genetic model in the beginning, marginal search, which is easy to compute, is a good start in the first stage of GWAS to find one or some of the main genetic effects. In the later stage(s), if the promising SNP candidates are limited, exhaustive search can be applied with less demanding computation, especially when epistasis among loci is of interest. Our conclusions based on the analytical studies justify this multi-stage strategy in GWAS. Difference between Our Methods and Traditional Power Calculation and Simulations Our power calculation for model selection strategies is different from a traditional power calculation for multiple regression models [9]. The traditional approach is to calculate the probability of accepting a specific multiple-regression model and rejecting the null hypothesis that the response and the covariates have no association, when controlling the type I error rate. This power calculation focuses on models instead of model selection methods, as it does not address any procedure of model selection. In contrast, our analytical approach is to calculate the probability that a model selection method can pick up the models that contain the true covariates (true SNPs in GWAS). Our analytical approach leads to new insights into model selection methods than simulations and limited real data analysis. Furthermore, our approach addresses a critical limitation of prior studies [4],[5] that do not distinguish the models with all correct predictors from those with only a subset of the correct predictors. In those studies, the null distribution assumes the test statistic is from a model without any of the true predictors, and the alternative distribution assumes the statistic is from any model containing at least one true predictor (or, when considering the power for finding both true loci, the models with either true locus are ignored from the null distribution). This is a common problem of traditional multiple testing for model selection method, as pointed out by Storey et al. [2], who stated that “there is no statistically rigorous method to test for joint linkage, which exists only if both loci have nonzero terms in the full model.” To address this issue, all involved models (including true, partially true, and wrong models) are considered and ranked by how well they fit the observed data. Our power calculation distinguishes the case that model selection procedures find the true model based on power definition (A) from the case that the procedures find a partially true model based on definition (B). We have derived the null and alternative distributions for each case, and thus provide the basis for model performance comparisons. To compare the power of model selection methods, our approach explicitly considers the correlation structures among the test statistics for the null and alternative hypotheses, which achieves more accurate assessment of model selection methods than Bonferroni-corrected type I error control that is commonly used in the literature [4],[5]. Bonferroni-based control is usually a conservative control when the test statistics are dependent on each other. As illustrated by both simulations (results not shown) and the theoretical derivations in the Methods section, the considered models and their test statistics usually exhibit complex correlation structures. 
Therefore Bonferroni-based control is not optimal as it only considers the number of models evaluated (that is, the number of hypothesis tests) and ignores correlation structures generated by different search strategies. The adequacy of our approach has been demonstrated through a good agreement between the analytical and the simulation results shown in Table 1 and Table 2. Furthermore, our study of correlation structures improves the understanding of the mechanism of different search strategies discovering genetic signals. For example, in forward search, the failure of the first stage is likely to cause the failure of the second stage even if there is a large epistatic effect, because the test statistics for the true predictors are positively correlated between the two stages. Control Related to Type I Error Rate and False Discovery Proportion To obtain the significance threshold, we control the number of false discoveries at R depending on how the power is defined. This control is practically meaningful and equals to the detection probability (DP) control [7] as discussed in the Results section. Furthermore, controlling the number of false discoveries is related to controlling the type I error rate. Since the type I error rate is defined as the probability of rejecting a hypothesis given it is a true null, with the definition of null models corresponding to the power definition (A) or (B), the estimation of component-wise type I error rate could be considered as The model selection problem is also a large-scale simultaneous hypothesis testing problem. A widely applied significance control criterion in this scenario is the false discovery rate (FDR) [10]. The false discovery number control in our study is also related to the control of the false discovery proportion (FDP), which is an estimate of FDR. Under power definition (A) where power(R) denotes the power calculated based on the number of selected null models R, and i indicates the number of correct models: ii On the Derivation of Asymptotic Distributions Through the simulations in the Results section, our derivation of asymptotic distributions is shown to be accurate for moderately small genetic effects when the sample size nF distributions for the test statistics are based on fixed predictors [7],[11],[12]. As functions of predictor variables, these non-central parameters are not statistically consistent when genotypes are random. Although one may integrate the power over all possible configurations of markers [13], it is very cumbersome unless n is small. Our method, based on asymptotic theorems, provides a satisfactory solution for models with random predictors. Our novel approach presented here can be applied to derive the distributions of such models' test statistics. Second, the derived asymptotic multivariate normal distributions for theoretical null and alternative hypotheses allow us to incorporate complex correlations among the test statistics into power calculation based on population parameters. For a given GWAS data set, the correlations presented in the data may also be addressed by empirical estimation of the null hypothesis [14],[15]. Third, the ideas behind the asymptotic derivation can be applied to study the distributions for hypothesis testing and power calculation in general as long as the statistics have certain functions of random variables. On Simplifying Assumptions We have assumed that the markers are independent in this paper. There may be linkage disequilibrium (LD) among SNPs. 
However, LD in general is weak among tagging SNPs [16]–[18]. Furthermore, simulations based on real GWAS data (results not shown) indicate that even in the presence of LD, our analytical results are quite accurate when more false positives are acceptable, i.e. a large R value. In addition, the analytical power approximations are more accurate for power definition (B) than for definition (A). In general, when the dependency among the true SNPs and the ensemble of unrelated SNPs is weak or moderate, our power calculation provides acceptable approximations. In reality, the underlying true model could be more complicated than model (2), with more related SNPs and interactions. Our analytical results for power calculation can be extended through approaches similar to the one we developed here. Although the genetic models studied are simple, our results provide insights into the relative performance of different model selection procedures.

Methods

Asymptotic Distribution Results

To calculate the power of the model selection procedures shown in the Results section, we first derive general results on the asymptotic distributions. Let Z[i] = (Z[i1], …, Z[is])′, i = 1, …, n, be n independent and identically distributed (iid) random vectors of dimension s. Assume the mean vector is θ = E(Z[i]) = (θ[1], …, θ[s])′ with θ[j] = E(Z[ij]), and the variance-covariance matrix is Σ = Cov(Z[i]) with (Σ)[jk] = Cov(Z[ij], Z[ik]), j, k = 1, …, s. By the multivariate central limit theorem [19], the sample mean of the Z[i] is asymptotically normal with mean θ and variance-covariance matrix Σ/n. We extend this result in two ways to suit our needs of deriving the distributions of the test statistics that are examined in the model selection procedures (the proofs are given in Text S1 Section 1.1). First, we consider two real-valued functions of the sample mean and derive their joint asymptotic distribution. Secondly, for a statistic that is asymptotically a quadratic form with matrix A, we approximate its distribution by a scaled chi-square variable c·χ²[d], where 1. c = 1/2 and d = rank(A), if A is idempotent; 2. c ≈ trace(A²)/(2 trace(A)) and d ≈ trace(A)²/trace(A²), if A is not idempotent.

Power Calculations

With the results above, we derive the relevant distributions of T- and F-statistics associated with three types of regression models, which will be used for calculating the power of the model selection methods. Specifically, F[12] is the F-statistic for the correct model in which both SNPs are true. T[i] and F[ij], for i = 1, 2 and 3 ≤ j ≤ p, are test statistics for "half" correct models in which only one SNP is truly associated. T[j] and F[lk], 3 ≤ l < k ≤ p, are the statistics for incorrect models in which neither SNP in the models is associated with the phenotype. Complex correlations exist among the models even with the assumption of independence among SNPs. The correlations come from two sources. First, since the quantitative trait is associated with both SNPs 1 and 2, the fitted regression models containing either of these SNPs have correlated test statistics. Second, models sharing a common SNP (whether it is true or wrong) also have correlated test statistics. To allow correlations, we therefore explore the marginal and the joint distributions of various test statistics for different models, and then derive how likely a "half" correct model would stand out from incorrect models, as well as how likely a correct model would outperform "half" correct models or incorrect models.

Marginal Search

Statistics and asymptotic distributions

To calculate the power of marginal search, we need to obtain the distributions of the involved test statistics. We first derive the T-statistic for the two true SNPs in the marginal model. In the simple regression model involving the first true SNP (SNP 1), i.e. model (3) with j = 1, the
T-statistic has the following asymptotic distribution (see Text S1 Section 2.1 for proof): and the formula of n) is given in Text S1 Section 4.1. For the marginal model of the second SNP (SNP 2), the asymptotic distribution of T[2] is gotten by symmetry between indices 1 and 2. Based on the asymptotic mean of T[1] derived above, we can quantify the influence of genetic parameters of SNP 2 and epistasis on the power of marginal search to pick up SNP 1. As for some genetically interesting observations, when there is no epistatic effect (i.e. b[3]X[1] and thus the power of marginal search to find X[1] are decreasing functions of the main effect of X[2], the minor allele frequency (MAF) of X[2], and the random error variance σ^2, with the decreasing rate specifically given by b[3]≠0) but b[1]h[1](θ) reflects the marginally projected signal of epistasis, which is still a decreasing function of the MAF of X[2]. The influence of b[2] depends on the allele frequencies p[1] and q[1]. On the other hand, if b[1]≠0, it is possible that b[1]+b[3](p[2]−q[2])b [1] and interaction effects b[3] have opposite directions (assuming q[2] is the MAF). With such epistatic pattern, marginal detection surely fails to detect the true genetic variants no matter how strong the true genetic effects are. Now we derive the joint distribution of T[1] and T[2]. Since Y is a function of both X[1] and X[2] in the underlying true model (2), T[1] and T[2] are correlated even when X[1] and X[2] are independent and do not interact, i.e. b[3]T[1] and T[2] can be substantial in certain genetic models. The asymptotic joint distribution of (T[1], T[2])′ is where i[1,2]Cov(T[1], T[2]). The covariance τ[1,2] is gotten based on the result in (6), and its formula (as a constant of n) is given in Text S1 Section 4.1. Let T[j], jp, be the T-statistic from model (3) for a wrong SNP j, according to the asymptotic result in (5), which holds regardless of the allele frequencies and the underlying true genetic model. The proof for T[3] as an example is provided in Text S1 Section 2.2. It can be shown that T[j] is also independent of T[1] and T[2] according to the result in (6). Under the assumption of fixed design matrix, T distribution with n−2 degrees of freedom based on a traditional linear model analysis [6], [12]. This null distribution is still asymptotically valid for random predictors since the T distribution converges to the standard normal as n→∞. Power of marginal search procedure Based on the above results for the distributions of T-statistics, we first calculate the power of marginal search under power definition (A). If the marginal search is allowed to contain R wrong SNPs, i.e. the number of false discoveries is controlled by R, the power of identifying both true SNPs is just the probability that both |T[1]| and |T[2]| are greater than the Rth largest value in the set {T[j], j≥3}: where rp−2−R+1, |T|[(r)] is the rth smallest (or the Rth largest) order statistics of |T[j]|, jp, and g(t[1], t[2]) is the joint probability density function (PDF) of (T[1], T[2])′ given in (9). Let Φ(·) be the cumulative distribution function (CDF) of N(0, 1), then To get the power of marginal search under definition (B) that either SNP 1 or SNP 2 is selected, we calculate the probability that either |T[1]| or |T[2]| is larger than the random cutoff point: P(|T [1]|T[2]|≥|T|[(r)]), where |T[1]|T[2]|T[1]|, |T[2]|}. 
Exhaustive Search Statistics and asymptotic distributions The distributions of the relevant test statistics are derived first for calculating the power of exhaustive search. We first get the joint distribution of the test statistics involving true SNPs 1 and 2: T[1], T[2], and F[12]. Define Text S1 Section 2.3 for details of derivation), we have The formula of h[1](θ) is given in (8), and The formulas of [12,i]Cov(T[12], T[i]), in and are given in Text S1 Section 4.1. We then derive the F-statistics for the incorrect models in form (4) to fit Y with X[j] and X[k], 3≤j<k≤p. Following the result in (7), F[jk] has a common marginal asymptotic distribution: With F[34] as an example, the detailed proof is given in Text S1 Section 2.4. Based on the traditional power calculation for regression models, the null model is the incorrect model with neither SNP associated with the phenotype. When the design matrix is fixed, the null distribution of F[jk] is an F distribution with degrees of freedom (3, n−4) [12]. Result (13) indicates the F distribution for null is also valid when the genotypes are treated as random variables, because F(3, n−4) converges to n is large. In order to calculate the power of model selection methods, we need to address the correlation structures among involved statistics. The statistics are correlated when two epistatic models in form (4) share a common SNP. Also, F-statistics involving X[1] and those involving X[2] are correlated because the true underlying model includes both SNPs. Consequently, the elements in the set {F[12], F [ij], ijp} are all correlated with each other. To capture the important dependency, we decompose F-statistics as follows: when ih[1](θ) is given by equation (8). The detailed proof for decomposing F[13] as an example is shown in Text S1 Section 1.2. Through this decomposition, the correlation between F[ij] and F[ik], can be explained by F[i] while we treat F[j][|i] and F[k][|i] to be independent. Furthermore, with the result (14) we can use the joint distribution (11) to capture the correlation between F[12] and Based on the asymptotic distribution in (7), we have where ijp, cv/2e, and de^2/v, with E(F[j][|i])→e and Var(F[j][|i])→v. Text S1 Section 2.5 shows the detailed proof for F[3|1]. The formulas of e and v are given in Text S1 Section 4.2. Based on our numerical studies (results not shown), c is close to 1/2 and d is close to 2 in a large proportion of the parameter space of {q[i], q[j], b, σ^2} (e.g. when allele frequencies q[i] and q[j] do not converge to 0 or 1, genetic effect bb[1], b[2], b[3])′ and random error variance σ^2 are not too large). When cdF distribution with degrees of freedom 2 and n−4. F(2, n−4) is the distribution of F[j] [|i] when X is fixed [6],[12]. Our results demonstrate that for the random design matrix, the weighted chi-square distribution (15) is more appropriate. Power of exhaustive search procedure With the distribution of test statistics derived above, we first calculate the probability of exhaustive search to identify the exact true model. Under power definition (A), the test statistic F[12] for the exact true model corresponds to the “alternative” distribution, whereas the F-statistics for all other models such as totally incorrect models and “half” correct models are combined together to generate a mixed “null” distribution. Let S[1]F[ij], ijp}, S[2]F[jk], 3≤j<k≤p}, and F[S][,[R]] denote the Rth largest variable in a set S. 
When controlling the false discovery number by R, the probability of detecting the exact true model (2) is where g(t[12],t[1],t[2]) is the PDF of (11), in which S[2], G[1i](•) is the CDF of distribution (15) for iG[2](•) is the CDF of distribution (13). The test statistics within the sets S^*F[j][|1], F[j][|2], jp} and S[2]F[jk], 3≤j<k≤p} are treated as asymptotically independent as p→∞ (see Text S1 Section 1.3 for details). According to the power definition (B), the probability of exhaustive search to detect at least one of the associated SNPs is g[2(N−R+1)](•) is the PDF of the (N−R+1)th order statistics distribution with the following density function: G[2](•) and g[2](•) are the CDF and PDF of the distribution of (13) respectively. If R is neither too small nor too large, i.e. R/N→c, 0<c<1, as N→∞, we can use quantiles to replace the order statistics in order to simplify the calculation [20], i.e., t[12],t[1],t[2]), we can approximately replace the integrand where I[A](x) denotes the indicator function of set A. Simulations (results not shown) illustrate that the approximation of integrand is reasonably accurate for the integration. Forward Search Statistics and asymptotic distributions For forward search, first we derive the distributions of test statistics, which will be used to calculate the corresponding statistical power. Here we need to handle the comparison between two models: the model with SNPs 1 and j, jp, taking form (4), and the model with SNP j taking form (3). Let F[1|j] be the F statistic measuring the significance of the extra terms in the bigger model over the smaller model [6]. Define b[1]+b[3](p[2]−q[2])≠0, following the asymptotic result in (5), we can derive and the formula for jText S1 Section 4.3. Both p[j]. When b[1]+b[3](p[2]−q[2])j in form (4) with the model having SNP j in form (3). The covariance Cov(T[1|j],T[2|j]) can also be calculated. As an example the formula of Cov(T[1|3],T[2|3]) is given in Text S1 Section 4.3. Moreover, the statistics (T[1],T[2],T[1|j],T[2|j])′ involving true SNPs have a multivariate normal distribution: When jText S1 Sections 2.6 and 4.6. Through result (6), we have proved that T[j] and F[1|j] are asymptotically independent (refer to Text S1 Section 2.6 for details), i.e. When comparing the model having two incorrect SNPs j and k (3≤j<k≤p) in form (4) with the model having SNP j in form (3), the corresponding F-statistic F[k][|j] has the asymptotic distribution Based on the result in (7), Text S1 Section 2.7 shows the proof for (19) with jkF(2, n−4) which can be derived with the fixed design matrix and is routinely used for F[k][|j] in the traditional model comparison [6],[12]. Power of forward search procedure In the forward search procedure, we first apply marginal search to find the most significant SNP among models (3). Based on the selected SNP, we then fit models (4) in the second step to find the SNPs that have strong joint effects, while controlling for R false discoveries. Under power definition (A) for finding the exact true model, we need to calculate the probability of forward search to choose SNP 1 or 2 in the first step, and then pick up the true model in the second step. Define i^*[i][]{|T[i]|}, p→∞, we can write the power as where g(t[12],t[1],t[2]) is the PDF of (T[12],T[1],T[2])′ given in (11), F-statistic decomposition (14), and where i^*rp−2−R+1, and i^* is fixed for an observed value (t[1],t[2])′ of random vector (T[1],T[2])′, so it is easy to implement the power calculation with Monte Carlo integration. 
Note that j≥3, so with p→∞, j^*≠k^*, When R and p are large, we can simplify the formula of Rth largest variable in set Under power definition (B), the power of forward model selection method is the sum of P[A]: the probability to detect SNP 1 or 2 in the 1st step, and P[B]: the probability that step 1 fails but step 2 picks up at least one correct SNP, while controlling for R incorrect models as false positives. Specifically, where g(t[1],t[2]) is the PDF of joint distribution of (T[1],T[2])′ given in (9). Defining For each k≥3, F[i][|k] and T[k] are independent, so j^*. Hence, F[i][|j], jp. We then have where g(t[1],t[2],t[1|j],t[2|j]) is the PDF of (T[1],T[2],T[1|j],T[2|j])′ given in (17), in which rp−3−R+1, and G(•) is the CDF of F[k][|j], 3≤j<k≤p, given in (19). We can approximate Calculating Post-Hoc Power with a Given Marginal Model To demonstrate how to evaluate the power of model selection methods in the empirical analysis, we have applied our approach in a real study example. In this example, the simple regression model on X based on the full model (2). So the estimator of main effect is X[2]. To estimate the variance of random error, note that With an assumed value of b[3] and the corresponding estimators Supporting Information Text S1 Supplementary Note for proofs and arguments, distributions of test statistics, extended comparisons of power for model selection methods, and formulas for distribution parameters of test statistics. (0.91 MB PDF) We are grateful to Yale University Biomedical High Performance Computing Center for computation support. We thank Dr. Joshua Sampson and Dr. Yedan Zhang for their comments on the paper. The authors have declared that no competing interests exist. This work was supported by NIH grant RR19895, NIH grant GM 59507, and NSF grant DMS 0714817. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Fan J, Lv J. Sure independence screening for ultra-high dimensional feature space. J R Statist Soc B. 2008;70:849–911. [PMC free article] [PubMed] Brem RB, Storey JD, Whittle J, Kruglyak L. Genetic interactions between polymorphisms that affect gene expression in yeast. Nature. 2005;436(7051):701–703. [PMC free article] [PubMed] Marchini J, Donnelly P, Cardon LR. Genome-wide strategies for detecting multiple loci that influence complex diseases. Nat Genet. 2005;37(4):413–417. [PubMed] 6. Kutner MH, Nachtsheim CJ, Li W, Neter J. Applied linear statistical models. 5th ed. New York: McGraw-Hill Irwin; 2005. p. 1396. Gail MH, Pfeiffer RM, Wheeler W, Pee D. Probability of detecting disease-associated single nucleotide polymorphisms in case-control genome-wide association studies. Biostatistics. 2008;9(2):201. [ Weedon MN, et al. Genome-wide association analysis identifies 20 loci that influence adult height. Nat Genet. 2008;40:575–583. [PMC free article] [PubMed] 9. Cohen J. Statistical power analysis for the behavioral sciences. Hillsdale, , NJ: Lawrence Erlbaum Association; 1988. p. 572. 10. Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J R Statist Soc B. 1995;57:289–300. 11. Scheffé H. The analysis of variance. New York: John Wiley & Sons Inc; 1959. p. 477. 12. Searle SR. Linear models. New York: John Wiley & Sons Inc; 1971. p. 532. Ambrosius WT, Lange EM, Langefeld CD. Power for genetic association studies with random allele frequencies and genotype distributions. Am J Hum Genet. 2004;74(4):683–693. 
14. Efron B. Large-scale simultaneous hypothesis testing: The choice of a null hypothesis. J Am Stat Assoc. 2004;99(465):96–104. 15. Efron B. Correlation and large-scale simultaneous significance testing. J Am Stat Assoc. 2007;102(477):93–103. 16. Gibbs RA, Belmont JW, Hardenbol P, Willis TD, Yu F, et al. The international HapMap project. Nature. 2003;426(6968):789–796. 17. Ke X, Cardon LR. Efficient selective screening of haplotype tag SNPs. Bioinformatics. 2003;19(2):287–288. 18. Weale ME, Depondt C, Macdonald SJ, Smith A, Lai PS, et al. Selection and evaluation of tagging SNPs in the neuronal-sodium-channel gene SCN1A: Implications for linkage-disequilibrium gene mapping. Am J Hum Genet. 2003;73(3):551–565. 19. Lehmann EL, Casella G. Theory of point estimation, Second Edition. New York: Springer Verlag; 1998. p. 589. 20. David HA, Nagaraja HN. Order statistics, Third Edition. New York: J. Wiley; 2003. p. 488.
Showing a function to be analytic... March 21st 2010, 09:32 PM #1 Super Member Feb 2008 Showing a function to be analytic... Let f and g be analytic on a domain D. Show that: f + g is analytic on D and (f + g)' = f' + g'. I also have to do this for fg and f/g. Can someone show how to do the above, so I can attempt fg and f/g on my own? Thanks in advance for any help... Well, I figured out how to prove (f+g)' = f' + g', but what do I need to do to show this is analytic? March 22nd 2010, 01:47 PM #2 Super Member Feb 2008 March 22nd 2010, 01:59 PM #3
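A sketch of the standard argument for the part still in question (my own write-up, not from the thread): both the analyticity of f + g and the sum rule fall out of the limit definition at once. For any $z_0$ in the domain $D$, since $f'(z_0)$ and $g'(z_0)$ exist,

$$\lim_{h\to 0}\frac{(f+g)(z_0+h)-(f+g)(z_0)}{h}=\lim_{h\to 0}\frac{f(z_0+h)-f(z_0)}{h}+\lim_{h\to 0}\frac{g(z_0+h)-g(z_0)}{h}=f'(z_0)+g'(z_0).$$

The limit therefore exists at every point of $D$, which is exactly what "analytic on $D$" asks for, and its value is $f'(z_0)+g'(z_0)$, which is the sum rule. The product and quotient cases run the same way, with the usual add-and-subtract trick for $fg$ and the restriction $g(z_0)\neq 0$ for $f/g$.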
Celebrating 25 years of celebrating computation My third example is another pursuit of shy, elusive mathematical objects. It concerns the simple equation a + b = c , where a , b and c are positive integers that have no divisors in common (other than 1); for example, the equation 4+5=9 qualifies under this condition. Now for some number theory. Multiply the three numbers a, b and c , then find all the prime factors of the product. From the list of factors, cast out any duplicates, so that each prime appears just once. The product of the unique primes is called the radical of abc , or rad ( abc ). For the triple {4, 5, 9}, the product is 4×5×9=180, and the factor list is 2, 2, 3, 3, 5. Removing the duplicated 2s and 3s leaves the unique factor list 2, 3, 5, so that rad (180)=30. In this example, c is less than rad ( abc ). Can it ever happen than c is greater than rad ( abc )? Yes: The triple {5, 27, 32} has the product 5×27×32=4,320, for which the unique primes are again 2, 3 and 5. Thus c =32 is greater than rad (4,320)=30. Triples where c exceeds rad ( abc ) are called abc -hits. As with MSTD sets, there are infinitely many of them, and yet they are rare. Among all abc triples with c ≤10,000, there are just 120 abc -hits. If c can be greater than rad ( abc ), how much greater? It's been shown that c can exceed rad ( abc ) plus any constant or rad ( abc ) multiplied by any constant. How about rad ( abc ) raised to some power greater than 1? A conjecture formulated by Joseph Oesterlé of the University of Paris and David W. Masser of the University of Basel claims there are only finitely many exceptional cases where c > rad ( abc ) ^ 1+ε , for any ε no matter how small. The conjecture has made the search for abc -hits more than an idle recreation. If the conjecture could be proved, there would consequences in number theory, such as a much simpler proof of Fermat's Last Theorem. In a program to search for abc -hits the one sticky point is factoring the product abc . Factoring integers is a notorious unclassified problem in computer science, with no efficient algorithm known but also no proof that the task is hard. If you want to get serious about the search, you need to give some thought to factoring algorithms—or else latch on to code written by someone else who has done that thinking. On the other hand, for merely getting a sense of what abc -hits look like and where they're found, the simplest factoring method—trial division—works quite well. Searchers for abc -hits can also join ABC@home ( www.abcathome.com ), a distributed computing project.
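To make the recipe concrete, here is a small brute-force sketch in Python (my own illustration, not code from the article or from ABC@home); it uses plain trial division, which, as noted above, is good enough for a casual search.

import math

def radical(n):
    # product of the distinct prime factors of n, by trial division
    rad, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            rad *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:            # whatever is left over is itself prime
        rad *= n
    return rad

limit = 1000             # raise to 10000 to check the article's count of 120 (takes a while in pure Python)
hits = []
for c in range(3, limit + 1):
    for a in range(1, c // 2 + 1):
        b = c - a
        if math.gcd(a, b) > 1:
            continue
        # a, b, c are pairwise coprime here, so rad(abc) = rad(a) * rad(b) * rad(c)
        if c > radical(a) * radical(b) * radical(c):
            hits.append((a, b, c))

print(len(hits))
print(hits[:5])          # the triple {5, 27, 32} from the text shows up near the front

The inner test is exactly the definition given above: a + b = c with a and b sharing no divisor, and c strictly greater than rad(abc).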
Several problems in OpenCL, help! [Archive] - Public discussions about the Khronos Dynamic Media APIs 06-10-2010, 02:00 AM I want to program an algebra calculation which must be divided into two parts; the latter's input is the former's output. For example: the former part uses a vector (vectorA with n elements) as its input and adds another vector (vectorB with n elements); the output is vectorC. The latter part uses vectorC as the input parameter of some algebra function F(X) (X is a vector type). Because F(X) is variable, the whole calculation must be split into two stages. My problems are: For the first part, if the kernel is: __kernel void adder(__global const float* a, __global const float* b, __global float* result) { int idx = get_global_id(0); result[idx] = a[idx] + b[idx]; } 1. Does each work item execute only once? 2. For vectorA (with n elements) plus vectorB (with n elements), must there be n work items to complete the addition? 3. Are the n work items in the same work group? 4. Under what conditions are the work items in different work groups? 5. I want to continue with the latter part of this calculation: feed vectorC, which is in global memory, into the kernel that implements F(X). What can I do: use an event, a command queue, or something else? How? That's all, please help me!
Calculating synchronous motor output power I have the output torque ,speed and torque angle I am looking for the output power. I assumed i could just multiply the torque by the radial speed and get the output power, but I am getting values higher than the input power. Any advice is appreciated.
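A quick sanity check that may help here (standard formulas, nothing specific to this thread): the mechanical output power is torque times the angular speed in rad/s, so a shaft speed quoted in rpm has to be converted first,

$$P_{\text{out}}=\tau\,\omega,\qquad \omega=\frac{2\pi n}{60}\ \text{rad/s for a speed of }n\ \text{rpm}.$$

Plugging the rpm figure straight in place of $\omega$ inflates the result by a factor of $60/2\pi\approx 9.5$, which is one common way to end up with an apparent output power larger than the input.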
Floating pins? - Arduino Forum I'm having an issue that I'm sure is just a problem with my understanding of Arduinos. I have a Due and I need to use almost all of the pins. I'm setting the pins as output and then calling digitalwrite(pin, high) on them like this: //NUMBUTTONS = 49 //BUTTONS = an array of uint8_t for(uint8_t index = 0; index < NUMBUTTONS; index++) pinMode(BUTTONS[index], INPUT); //this sets the pin to input mode digitalWrite(BUTTONS[index], HIGH); //this turns on the resistor What I'm finding is that pins 13, 12, 8, 9, and others work great as expected, but pins 5, 2, 17... are floating. Am I missing something?
How scientific is climate science? [Keenan, 2011] For years, some researchers have argued that the evidence for global warming is not nearly as strong as has been officially claimed. The details of the arguments are often technical. As a result, policy makers and other people outside the debate have relied on the pronouncements of a group of climate scientists. I think that is unnecessary. I believe that what is arguably the most important reason to doubt global warming can be explained in terms that most people can understand. Figure 1. Global temperatures. Consider the graph of global temperatures in Figure 1, which uses data from NASA. At first, it might seem obvious that the graph shows an increase in temperatures. In fact, the story is more involved, as we will now see. Imagine tossing a coin ten times. If the coin came up Heads each time, we would have very significant evidence that the coin was not a fair coin. Suppose instead that the coin was tossed only three times. If the coin came up Heads each time, we would not have significant evidence that the coin was unfair: getting Heads three times can reasonably occur just by chance. Figure 2. Coin tosses: H, T, H (left); T, H, T (mid); H, T, T (right). In Figure 2, each graph has three segments, one segment for each toss of a coin. If the coin came up Heads, then the segment slopes upward; if it came up Tails, then the segment slopes downward. In Figure 2, the graph on the left illustrates tossing Heads, Tails, Heads; the middle graph illustrates Tails, Heads, Tails; and the last graph illustrates Heads, Tails, Tails. Figure 3. Coin tosses: H, H, H. Now consider Figure 3. At first, it might seem obvious that the graph shows an increase. This graph, however, illustrates Heads, Heads, Heads. Three Heads is not significant evidence for anything other than random chance occurring. A statistician would say that although Figure 3 shows an increase, the increase is “not significant”. Suppose that instead of tossing coins, we roll ordinary six-sided dice. If a die comes up 1, a line segment is drawn sloping downward; if it comes up 6, a segment is drawn sloping upward; and if it comes up 2, 3, 4, or 5, a segment is drawn straight across. We will roll each die three times. Some examples are given in Figure 4. Figure 4. Dice rolls: 3, 6, 3 (left); 1, 5, 2 (mid); 4, 6, 1 (right). Next consider Figure 5, which corresponds to rolling 6 three times. This outcome will occur by chance just once out of 216 times, and so it gives significant evidence that the die is not rolling randomly. That is, the increase shown in Figure 5 is significant. Figure 5. Dice rolls: 6, 6, 6. Note that Figure 3 and Figure 5 look identical. In Figure 3, the increase is not significant; yet in Figure 5, the increase is significant. These examples illustrate that we cannot determine whether a line shows a significant increase just by looking at it. Rather, we must know something about the process that generated the line. In practice, the process might be very complicated, which can make the determination difficult. Consider again the graph of global temperatures in Figure 1. We cannot tell if global temperatures are significantly increasing just by looking at the graph. Moreover, the process that generates global temperatures—Earth's climate system—is extremely complicated. Hence determining whether there is a significant increase is likely to be difficult. Time series This brings us to the statistical concept of a time series. 
A time series is any series of measurements taken at regular time intervals. Examples include the following: prices on the New York Stock Exchange at the close of each business day; the maximum temperature in London each day; the total wheat harvest in Canada each year. Another example is the average global temperature each year. In the analysis of time series, a basic question is how to determine whether a given series is significantly increasing (or decreasing). The mathematics of time-series analysis gives us some methods to answer that question. The first thing to do is to state what we know about the time series. For example, we might state that the series goes up one step whenever a certain coin comes up Heads, and that the series comprises three upward steps, as in Figure 3. The next things to do are some computations based on what we have stated. For example, we compute that the probability of a coin coming up Heads three times in a row is ½ × ½ × ½ = ⅛, i.e. a 12.5% probability of occurring randomly. From that, we conclude that the three upward steps in the coin-toss time series can be reasonably attributed to chance—and thus that the increase shown in Figure 3 is not significant. Similarly, in order to determine if the global temperature series is significantly increasing, we must first state what we know about the temperature series. What do we know about the series? Not enough to do viable time-series analysis, unfortunately. What we must do, then, is make some assumptions about the series, and then do our analysis based on those assumptions. This is the way that is advocated by time-series analysts. As long as the assumptions are reasonable, we can be confident that the conclusions drawn from our analysis are reasonable. The IPCC assumption The primary body advising governments on global warming is the U.N.'s Intergovernmental Panel on Climate Change (IPCC). The IPCC's most-recent report on the scientific basis for global warming was published in 2007. Chapter 3 considers the global temperature series illustrated in Figure 1. The chapter's principle conclusion is that the increase in global temperatures is extremely significant. To draw that conclusion, the IPCC had to make an assumption about the global temperature series. The assumption that it made is known as the “AR1” assumption (this is from the statistical concept of “first-order autoregression”). The assumption implies, among other things, that only the current value in a time series has a direct effect on the next value. For the global temperature series, it means that this year's temperature affects next year's, but temperatures in previous years do not. For example, if the last several years were extremely cold, that on its own would not affect the chance that next year will be colder than average. Hence, the assumption made by the IPCC seems intuitively implausible. There are standard checks to (partially) test whether a time series conforms to a given statistical assumption. If a series does not conform, then any conclusions based on that assumption must be considered unfounded. For example, if the significance of the increase in Figure 5 were computed assuming that the probability of a line segment sloping upward were one in two, instead of one in six, then that would lead to an incorrect conclusion. The IPCC chapter, however, does not report doing such checks. In other words, the assumption used by the IPCC is simply made by proclamation. Science is supposed to be based on evidence and logic. 
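To see concretely how much that assumption matters, here is a small illustrative simulation (my own sketch in Python/NumPy with made-up numbers, not the actual temperature record). It generates series that have no trend at all but strong year-to-year persistence, then tests each fitted trend as if the yearly values were independent:

import numpy as np

rng = np.random.default_rng(0)
n, phi, trials = 130, 0.9, 2000          # series length, persistence, number of simulated series
t = np.arange(n)
tc = t - t.mean()
false_positives = 0
for _ in range(trials):
    e = rng.standard_normal(n)
    x = np.zeros(n)
    for i in range(1, n):                 # AR(1)-style: this year's value leans on last year's
        x[i] = phi * x[i - 1] + e[i]
    slope = tc @ (x - x.mean()) / (tc @ tc)
    resid = (x - x.mean()) - slope * tc
    se = np.sqrt(resid @ resid / (n - 2) / (tc @ tc))   # naive standard error, independent-errors assumption
    false_positives += abs(slope / se) > 1.96
print(false_positives / trials)           # far above the nominal 5%

None of the simulated series has a real trend, yet a large fraction of them pass the naive significance test; assume too much persistence instead, and genuine trends get dismissed. Either way, the conclusion follows from the assumption, which is why the assumption has to be checked rather than proclaimed.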
The failure of the IPCC to present any evidence or logic to support its assumption is a serious violation of basic scientific principles. Moreover, standard checks show that the global temperature series does not conform to the assumption made by the IPCC; one such check is discussed in a separate section below. Thus, the claim that the increase in global temperatures is significant—the principal conclusion of a major chapter of the IPCC report—was based on an assumption that is insupportable. More generally, the IPCC has failed to demonstrate that global temperatures are significantly increasing. These problems are not unique to the IPCC, either. The U.S. Climate Change Science Program (CCSP), which advises Congress, published its report on temperature changes in 2006. That report relies on the same insupportable assumption as the IPCC chapter. None of this is opinion. This is factual and indisputable. And it applies to any warming—whether attributable to humans or to nature. Until research to choose an appropriate assumption is done, no conclusion about the significance of temperature changes can be drawn. Mr. Keenan previously did mathematical research and financial trading on Wall Street and in the City of London; since 1995, he has been studying independently. He supports environmentalism and energy An insupportable assumption Figure 6. Sunlight intensity (inverted) and global ice volume. Over many millennia, the most important fluctuations in Earth's climate have been those related to the ice ages. The ice ages are caused by natural variations in Earth's orbit around the sun. Those variations in the orbit alter the intensity of summertime sunlight. Some relevant data is presented in Figure 6: the black line represents the amount of ice globally and the green line represents the intensity of summertime sunlight in the Northern Hemisphere (where the effects are greatest). Notice, though, that the similarity between the two lines is very weak. Figure 7. Sunlight intensity (inverted) and changes in global ice volume. Why is the similarity so weak? To understand what is happening, we have to consider the changes in the amount of ice globally. For example, if the amount of ice at different times were 17, 15, 14, 19, …, then subtracting adjacent amounts gives the changes: 2, 1, −5, …. The black line in Figure 7 shows the changes in the amount of ice, while the green line, as before, shows the intensity of summertime sunlight. Now the similarity between the two lines is strong. This is excellent evidence that the ice ages are indeed caused by orbital variations. (There is other evidence as well.) A connection between ice ages and orbital variations was first proposed in 1920, by the Serbian astrophysicist Milutin Milankovitch. To check the proposed connection, data on the amount of ice in past millennia is obviously needed; such data became available in 1976. Yet it was not until 2006 that scientists considered the changes in the amount of ice. In other words, it took 30 years for scientists to think to do the subtraction needed to draw the black line in Figure 7. During those three decades, scientists analyzing Milankovitch's proposal based their studies on graphs like Figure 6, and they considered a variety of ideas to try to explain the weak similarity between the two lines. Alternative assumptions The foregoing raises a question: for global temperatures, what happens if we analyze the changes, instead of the temperatures themselves? 
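A toy version of that subtraction step (again my own sketch, with synthetic numbers rather than the ice-core data): when a series accumulates a forcing, the raw levels can correlate weakly with the forcing while the year-to-year changes correlate strongly.

import numpy as np

rng = np.random.default_rng(1)
n = 500
forcing = np.sin(np.linspace(0.0, 40.0, n)) + 0.3 * rng.standard_normal(n)
ice = np.cumsum(-forcing)                                   # "amount of ice" responds to the accumulated forcing
print(np.corrcoef(ice, forcing)[0, 1])                      # analogue of Figure 6: weak
print(np.corrcoef(np.diff(ice), forcing[1:])[0, 1])         # analogue of Figure 7: strong (negative)

In this idealized toy the changes track the forcing almost perfectly; the real series in Figure 7 are noisier, but the moral is the same: difference the integrated quantity before comparing it with its supposed cause.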
It turns out that then there is an obvious alternative to the assumption used by the IPCC. How good is the alternative assumption compared to the IPCC assumption? One common method of comparing assumptions is to use what statisticians call “AICc” (Akaike Information Criterion with correction). This method shows that the alternative is so much better than the IPCC assumption, that we conclude the IPCC assumption is insupportable. That is, the IPCC made the same mistake as the scientists who worked for 30 years to verify Milankovitch's proposal: failing to consider the changes in a series. Under the alternative assumption, the increase in global temperatures is not significant. We do not know, however, whether the alternative assumption itself is reasonable—other assumptions might be even better. Determining how viable the alternative assumption is would require study. There have been studies that consider other assumptions and thereby reach different conclusions about the temperature data. The IPCC report nods toward such studies, but without acknowledging that the soundness of its conclusions rests upon its choice of assumption—or that making a good choice, one that well corresponds with physical reality, requires further, difficult research. Technical details for this essay are at www.informath.org/media/a41/b8.pdf.
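For reference, since the essay leans on the criterion without writing it out, the usual definition (standard formula, not taken from the essay) for a model with $k$ fitted parameters, $n$ observations and maximized likelihood $\hat L$ is

$$\mathrm{AICc}=\underbrace{2k-2\ln\hat L}_{\mathrm{AIC}}+\frac{2k(k+1)}{n-k-1},$$

and the candidate assumption with the smaller AICc is preferred; a gap of ten or more between two candidates is conventionally read as decisive.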
I need help with Present Value(Adv. Algebra) April 21st 2013, 04:53 PM I need help with Present Value(Adv. Algebra) Question - Find the present value of the amount A = $15,000 invested at rate of r = 6% for t = 5 years, compounded n = 4 times per year. Please and thank you. April 21st 2013, 05:02 PM Re: I need help with Present Value(Adv. Algebra) I'm pretty sure you are provided with such a formula: A (the amount after t years) = P(1 + r/n)^(nt), and you apparently have all the data, so just solve it for the present value P.
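Spelling that out with the numbers in the question (my arithmetic, worth re-checking):

$$P=\frac{A}{\left(1+\frac{r}{n}\right)^{nt}}=\frac{15{,}000}{\left(1+\frac{0.06}{4}\right)^{4\cdot 5}}=\frac{15{,}000}{1.015^{20}}\approx\frac{15{,}000}{1.3469}\approx \$11{,}137.$$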
Differential Geometry/General Relativity Computer Algebra up vote 4 down vote favorite could anybody recommend a CAS suited to DG/GR applications such as computation of connection coefficients or generating (and possibly solving) PDEs for, for example, an unknown metric of given curvature. Oh, and compatible Linux (I'm using Maple through wine but am having myriad problems. Also tried Maxima but I don't think it has a PDE solving tool.) dg.differential-geometry computer-algebra add comment 3 Answers active oldest votes Mathematica has had GR stuff for decades (here is a random link: up vote 3 down but google search for Mathematica "general relativity" returns lots. vote accepted I don't understand your comment about Maple -- it certainly has a linux version. I used this code from Hartle's book a lot. The only issue is that it's SLOW (although this is Mathematica's fault). I did first order perturbation of metrics and even then it took its time to figure out the various tensors. Also, Mathematica is pretty good at solving really ugly PDE's if you know how to give it the right kick. – Alex R. Jan 27 '11 at 6:14 @Igor: I'm using my old windows version of Maple through Linux' `Wine' platform but it dies a lot. – kangdon Jan 28 '11 at 0:27 add comment Although I don't have much experience using them (I keep telling myself I should learn one of them well) I know several such systems: 1. MathTensor for Mathematica up vote 3 down vote 2. GRTensor for Maple (and a limited subset for Mathematica) 3. Cadabra, not tied to any particular CAS system; although it uses LiE I think I'm in the same boat as you José. I use Maple and MatLab just unoften enough that I need to relearn everything every time. Thanks for the links. – kangdon Jan 28 '11 at add comment I recommend for Mathematica : http://www.xact.es/ it seems to be the most advanced package for General Relativity. up vote 1 down vote add comment Not the answer you're looking for? Browse other questions tagged dg.differential-geometry computer-algebra or ask your own question.
ADVrider - View Single Post - Hotrodding the GS Originally Posted by So, how many in the next batch and what is the new ETA for the next batch. Does the next batch fill all outstanding orders? If not, what is the ETA for all orders filled/shipped? I believe this batch was the 2nd 30 so there is a third 30 to go and the final batch will be 16. PS will correct me if I got this wrong. I'm hoping to be in the third 30 but suspect I may be in the last batch of 16. 1st 30 (shipped) 2nd 30 (shipped) 3rd 30 Last 16
Having difficulties with the algebra of this limit, and a quick question about UD lim lim x -> 16 I know you have to do the conjugate (multiply both top and bottom by 4+√x) but then I get (16-x)/(x-16)(4+√x) and i'm not exactly sure how to get the x-16 terms to cancel out, although I know the answer will be negative 1/8. Sorry, i'm not too savvy with algebra. Furthermore I have a question about limits that don't exist. At what point do you know they don't exist. For example (2-x)/(x-1)² or (x-1)/x²(x+2) both of which have limits that are undefined/DNE. At what point when attempting to solve them do you realize they don't exist, as opposed to thinking maybe you just haven't done an algebraic trick yet. I know this is pretty ambiguous but if you have any personal tips i'd appreciate it. And finally, thank you alll sooooooooooooooooo much!!! you're so helpful with all of this and i'm envious of you guys lol. Take care!!!
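From the work shown, the limit in question appears to be $\lim_{x\to 16}\frac{4-\sqrt{x}}{x-16}$ (the expression itself was lost above, so treat that as an inference). The missing step is only that $16-x=-(x-16)$:

$$\frac{4-\sqrt{x}}{x-16}\cdot\frac{4+\sqrt{x}}{4+\sqrt{x}}=\frac{16-x}{(x-16)(4+\sqrt{x})}=\frac{-(x-16)}{(x-16)(4+\sqrt{x})}=\frac{-1}{4+\sqrt{x}}\to -\frac{1}{8}\quad\text{as }x\to 16.$$

On the second question: once the denominator tends to $0$ while the numerator tends to a nonzero number, no algebraic trick can save the limit; in $(2-x)/(x-1)^2$, for instance, the numerator goes to $1$ while $(x-1)^2\to 0$. The conjugate/cancellation tricks only apply to genuine $0/0$ forms, where numerator and denominator share a vanishing factor.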
Tempe Precalculus Tutor Find a Tempe Precalculus Tutor ...First in high school, working with Professors at Arizona State University, moving on to studying for an Undergraduate degree. While I moved on from studying astrophysics, I am still an avid amateur astronomer. I have had 3 semesters of formal training in Java, along with tutoring students in Java. 15 Subjects: including precalculus, calculus, geometry, algebra 1 ...I took algebra 2 in high school and college. I tutored this subject a lot during my year of tutoring at CGCC. I have a lot of experience working with and explaining algebra 2 problems. 13 Subjects: including precalculus, chemistry, physics, calculus ...Previously, I was a paid tutor for the AVID (Advancement Via Individual Determination) Program, tutoring younger students with the Socratic method (Desert Ridge High School, Mesa). This program and my own methods emphasize the importance of deeply understanding material (rather than simply memori... 32 Subjects: including precalculus, Spanish, reading, English ...Playing the piano has been one of my most cherished hobbies for the greater part of my life. I've been playing since I was 7 and have always exceeded the expectations of my instructors in the rate at which I would learn new material. As a younger student I received private lessons and as an adult I have completed several classes. 26 Subjects: including precalculus, chemistry, physics, reading ...Everyone learns differently and with my BA in Psychology, I have researched learning styles and study approaches which means I can usually find an angle or strategy you may have missed. I am great with general sciences, maths, and standardized test prep like the GED, ACT, and SAT. I am a warm, tentative, and focused tutor who makes the most out of lessons. 23 Subjects: including precalculus, chemistry, statistics, geometry
GRE Practice Tests Now you can Read Books Online Free at ReadCentral.com Business Phone Etiquette Quiz Rage Against The Machines Quiz Harry Potter Personality Quizzes GRE Practice Test - Math - Quadratic Equations Test Questions Welcome to the QuizMoz GRE Practice Test - Math - Quadratic Equations Test Questions. QuizMoz offers one of the Internet's largest collection of quizzes for you to tease your brain and pit your wits against the experienced QuizMoz quiz masters. Go ahead and find out what you know about yourself and the world around you. Best of luck!! Coverage : The GRE Practice Test - Math - Quadratic Equations has been designed to test the important concepts related to quadratic equations. The test covers important topics like finding roots for an equation, solving equations for the numbers, solving equations for finding the smallest and the greatest numbers etc. 1. If the sum of the squares of two consecutive odd natural numbers is 202, find out those numbers. a. 5,6 b. 7,9 c. 7,8 d. 9,11 e. 19,16 2. The sum between two natural numbers is 20 and their difference is 12, find out those numbers. a. 18,5 b. 14,6 c. 16,4 d. 15,5 e. 12,8 3. The sum between two natural numbers is 32 and their difference is 14, find out those numbers. a. 23,9 b. 24,8 c. 16,16 d. 18,14 e. 12,20 4. A natural numbers is greater than 4 times the other by 8 and the product is 32, find out those numbers. a. 16,6 b. 1,19 c. 7,18 d. 14,2 e. 16,2 5. The sum of a natural number and its reciprocal is 37/6, find the number. a. 6 b. 2 c. 4 d. 1 e. 9 6. The sum of a natural number and twice its reciprocal is 27/5, find the number. a. 7 b. 13 c. 10 d. 5 e. 1 7. The sum of a natural number and thrice its reciprocal is 19/4, find the number. a. 4 b. 2 c. 1 d. 3 e. 9 8. If the sum of five consecutive numbers is 625, find the smallest number. a. 123 b. 112 c. 357 d. 123 e. 125 9. If the sum of five consecutive numbers is 1045, find the smallest number. a. 210 b. 207 c. 117 d. 114 e. 115 10. If the sum of five consecutive numbers is 8035, find the smallest number. a. 623 b. 512 c. 357 d. 1605 e. 1025 11. If 10 were taken from the one-fourth of a number, the result is 22, find the number. a. 312 b. 128 c. 131 d. 123 e. 615 12. If 12 were taken from the one-sixth of a number, the result is 24, find the number. a. 106 b. 352 c. 216 d. 140 e. 150 Think you know more about this quiz! Please enter your Name and what you would like to tell everyone about GRE Practice Test - Math - Quadratic Equations Test Questions Think you know more about GRE Practice Test - Math - Quadratic Equations Test Questions and would like others to know too? Whether its a great fact, a joke, a personal experience or an interesting anecdote, please share it with all the human beings on planet earth. Your contribution will help keep QuizMoz a free site for all. (average submission size - 5 to 10 lines) Know the Latest News about GRE Practice Test - Math - Quadratic Equations Test Questions! What others think about GRE Practice Test - Math - Quadratic Equations Test Questions By: Reema on 4/20/2014 Great Quiz! The quizmaster laid out this quiz so that even beginners could learn more By: Roger on 4/19/2014 I love Quiz Games. QuizMoz is an excellent Quiz site By: quiz girl on 4/18/2014 hey folks! i like the amazing quizzes on quizmoz. it increases your general knowledge By: Erin on 4/17/2014 it was a fun and interseting quiz! By: Teresa on 4/16/2014 I took the quiz. It let me know that I failed. But I wasn't able to see what the correct answers. 
It would be great to see what the answers are so I can learn. By: Roger on 4/15/2014 I love Quiz Games. QuizMoz is an excellent Quiz site By: Tracy on 4/14/2014 Great test. A nice way to gauge one's knowledge By: Hot girl on 4/13/2014 I love answering Quiz Questions By: Aumkar on 4/12/2014 By: Laura on 4/11/2014 I appreciate the time and effort that the quiz maker put into the quiz By: Shannon on 4/10/2014 I have never seen such an excellent quiz website before this. By: Hannah on 4/9/2014 Enjoyed it, and learned a lot about general knowledge By: Teena on 4/8/2014 I love this quiz Website. This is the best free quiz site. By: Bobby Kalsi on 4/7/2014 Try to make it easier to search for a QUIZ Category.. (It should be easier to seach for a quiz category...) By: Tallitha on 4/6/2014 By: Samantha on 4/5/2014 This is so cool. Even though I really did not know some of the questions, it was still fun! By: Quiz Game Player on 4/4/2014 One day I will crack all the Impossible quizzes in the world By: QuizMoz Fan on 4/3/2014 I love all the QuizMoz quizzes. The general knowledge quiz is my favorite By: Nancy on 4/2/2014 I would like to see a complete page of horror movie quizzes for the horror genre fans! By: Alice on 4/1/2014 Its a very good Quiz. It rocks my socks!!!!! Quizzes for this month are sponsored by www.ExpertRating.com Harry Potter and the Deathly Hallows
[SciPy-dev] Suppressing of numpy __mul__, __div__ etc James Bergstra james.bergstra@gmail.... Thu Dec 17 15:27:27 CST 2009 I develop another symbolic-over-numpy package called theano, and somehow we avoid this problem. In [1]: import theano In [2]: import numpy In [3]: numpy.ones(4) * theano.tensor.dmatrix() Out[3]: Elemwise{mul,no_inplace}.0 In [4]: theano.tensor.dmatrix() * theano.tensor.dmatrix() Out[4]: Elemwise{mul,no_inplace}.0 In [5]: theano.tensor.dmatrix() * numpy.ones(4) Out[5]: Elemwise{mul,no_inplace}.0 The dmatrix() function returns an instance of the TensorVariable class defined in this file: I think the only thing we added for numpy was __array_priority__ = 1000, which has already been suggested here. I'm confused by why this thread goes on. 2009/12/17 Dmitrey <tmp50@ukr.net>: > От кого: Sebastian Walter <sebastian.walter@gmail.com> > let me rephrase then. I don't understand why p * ones(2) should give > Polynomial([ 0., 1., 1.], [-1., 1.]). > A polynomial over the reals is a data type with a ring structure and > should therefore behave "similarly" to floats IMHO. > Since I'm not a numpy developer, I cannot give you irrefutable answer, but I > guess it is much more useful for numpy users that are mostly engineering > programmers, not researchers of a data type with a ring structure. > Also, this is not only up to polynomials - as it has been mentioned, this > issue is important for stacking with SAGE data types, oofuns etc, where > users certainly want to get same type instead of an ndarray. > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev@scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev More information about the SciPy-Dev mailing list
MathGroup Archive: November 2012 [00060] [Date Index] [Thread Index] [Author Index] Re: Help with map /@ • To: mathgroup at smc.vnet.net • Subject: [mg128590] Re: Help with map /@ • From: Bill Rowe <readnews at sbcglobal.net> • Date: Thu, 8 Nov 2012 02:08:06 -0500 (EST) • Delivered-to: l-mathgroup@mail-archive0.wolfram.com • Delivered-to: l-mathgroup@wolfram.com • Delivered-to: mathgroup-newout@smc.vnet.net • Delivered-to: mathgroup-newsend@smc.vnet.net On 11/7/12 at 12:56 AM, hussain.alqahtani at gmail.com (KFUPM) wrote: >I have a this expression: >Ex= T1[x]+T2[y]+T3[z]; >I want to integrate the first term with respect to x, the second >w.r.t y and the third with respect to z and then sum them all. I >want to use the map function (/@) or similar to do that for me >automatically. Your help is really appreciated. Here are a couple of ways to do this with an *indefinite* integral which might meet your needs: Total[MapThread[Integrate[#1, #2] &, {List @@ Ex, {x, y, z}}]] Total[Integrate @@@ Transpose@{List @@ Ex, {x, y, z}}] If you wanted to have the same integration limits the syntax for the first example would become: Total[MapThread[Integrate[#1, {#2, a, b}] &, {List @@ Ex, {x, y, z}}]] But do note, these work when T1, T2 and T3 are undefined functions. If they were defined functions then Mathematica would have evaluated Ex to something unlikely to be as cleanly separated into discrete functions by simply using List@@ For example: ex = f[x] + g[y]; Total[MapThread[Integrate[#1, {#2, a, b}] &, {List @@ ex, {x, y}}]] works. But keeping ex defined as above and defining f, g as: f[x_] := 2 x + 4 g[y_] := 1/y + 3; Total[MapThread[Integrate[#1, {#2, a, b}] &, {List @@ ex, {x, y}}]] to fail with an error message since In[18]:= Length[List @@ ex] == Length[{x, y}] Out[18]= False
I really need help understanding this one.... October 8th 2010, 05:19 AM I really need help understanding this one.... 5√2 + √2 - (√2)2 I know the answer is 6√2 - 2 but i would like to know the steps on how to come to that conclusion please. Thank you so much. October 8th 2010, 05:28 AM Hi redlinethecar, You know you can combine radicals as long as the index and radicand are the same. And when you square a square root, you simply remove the radical. The final result is: $\boxed{6\sqrt{2}-2}$ October 8th 2010, 06:13 AM Oh wow it was the invisible one. Thanks so much.
A Hollow Cylinder, Of Radius R And Mass M, Rolls ... | Chegg.com A hollow cylinder, of radius R and mass M, rolls without slipping down a loop-the-loop track of radius r. The cylinder starts from rest at a height h above the horizontal section of the track. What is the minimum value of r so that the cylinder remains on the track all the way around the loop? I start with mgh = 1/2mv^2 + 1/2Iw^2 which becomes mgh = 1/2mv^2 + 1/2(mR^2)(v^2/r^2) but beyond that I don't know what to do.
MathGroup Archive: April 2010 [00344] [Date Index] [Thread Index] [Author Index] Re: Through[(a+b+b)[x]] • To: mathgroup at smc.vnet.net • Subject: [mg109176] Re: Through[(a+b+b)[x]] • From: Andrzej Kozlowski <akoz at mimuw.edu.pl> • Date: Fri, 16 Apr 2010 05:49:17 -0400 (EDT) On 15 Apr 2010, at 12:13, Derek Yates wrote: > Through[(a+b)[x]] yields a[x]+b[x] as expected, but Through[(a+b+b) > [x]] yields a[x]+(2b)[x]. Through[(2b)[x]] yields 2[x]b[x]. Now, I can > obviously get around this in this specific case, but generically is > there a way to solve this so that Through[(a+b+b)[x]] yields a[x] > +2b[x]? The case where I envisage this happening is when a sum of > functions is supplied (say, for a given value of y, Through[(f[y]+g[y] > +h[y]+j[y])[x]] and for some values of y, g == h. Then one will end up > with the problem above. Other than some post processing using pattern > matching, which feels a bit clunky, I can't think of a way around this. The only way I can see of doing this without some post-processing is by using Unevaluated: Through[Unevaluated[(a + b + b)[x]]] a(x)+2 b(x) Andrzej Kozlowski
"'Category' was defined in order to define 'functor', which was defined in order to define 'natural transformation'" up vote 8 down vote favorite I am looking for the source (and original version) of the above oft-repeated quotation. Mac Lane mentions it in Categories for the Working Mathematician, attributing it to Eilenberg-Mac Lane; however, I didn't see it while briefly skimming their paper General Theory of Natural Equivalences. reference-request ho.history-overview add comment 1 Answer active oldest votes CW since some of the recent posts on MO have required little more than googling. Prior to the book you mentioned, MacLane attributed this saying to Peter Freyd in: MacLane, S. (1965). Categorical algebra. Bulletin of the American Mathematical Society, 71(1), 40-106. up vote 16 down vote accepted Relevant excerpt: (p. 48) With regard to the original language, Eric Wofsey points out that Freyd's Abelian Categories (1964) begins with this description: 3 Freyd makes this statement on this first page of his book Abelian categories: "It is not too misleading, at least historically, to say that categories are what one must define in order to define functors, and that functors are what one must define in order to define natural transformations." – Eric Wofsey Oct 2 '13 at 4:47 Great; I've added in an image from page 1 of Freyd's book. – Benjamin Dickman Oct 2 '13 at 4:56 3 Is it no true that the "serious disservice" to which Peter Freyd refers is committed in a number of text books? – Ronnie Brown Oct 2 '13 at 14:45 add comment Not the answer you're looking for? Browse other questions tagged reference-request ho.history-overview or ask your own question.
Homework Help Posted by Sydney on Tuesday, September 7, 2010 at 10:30pm. 1) Your are planning to spend no less than $8000.00 and no more than $12,000 on your landscape project a) Write an inequality that demonstrates how much money you will be will to spend on the project B) Suppose you want to cover the backyard with the decorative rock and plant some tress as the first phase of the project. You need 20 tons of rock to cover the area. If each ton cost %50 and each tree cost $60, what is the maximum numbert of tress you can buy with a budget for rock and tress of $2000? Write an inequality that illustrates the problem and solve. Express your answer as an inequality and explain how you arrive at your answer 2 You are going to help a neighbor build a ramp so he can easily go into his house in a wheelchair. In order to meet federal guideline, the ramp must not rise more than 1 foot over a horizontal distance of 12 vfeet. a)What is the maxium slope of the ramp into house? B)If the horizontal distance into the house is 49 feet, what is the maxium allowable rise of the ramp? 3. You are going to plant trees in your hilly backyard. Tree A is located at coordinates(1,2) and Tree B is located at (3,12). Wgat us tge slope of the hill between the two trees? Show how you arrived at your answer. • Alegbra 116 - Sydney, Tuesday, September 7, 2010 at 11:34pm I guess no one wants to help me and my friend sydney.... • Alegbra 116 Help Please - Sydney, Wednesday, September 8, 2010 at 5:44am 1) Your are planning to spend no less than $8000.00 and no more than $12,000 on your landscape project a) Write an inequality that demonstrates how much money you will be will to spend on the project B) Suppose you want to cover the backyard with the decorative rock and plant some tress as the first phase of the project. You need 20 tons of rock to cover the area. If each ton cost %50 and each tree cost $60, what is the maximum numbert of tress you can buy with a budget for rock and tress of $2000? Write an inequality that illustrates the problem and solve. Express your answer as an inequality and explain how you arrive at your answer 2 You are going to help a neighbor build a ramp so he can easily go into his house in a wheelchair. In order to meet federal guideline, the ramp must not rise more than 1 foot over a horizontal distance of 12 vfeet. a)What is the maxium slope of the ramp into house? B)If the horizontal distance into the house is 49 feet, what is the maxium allowable rise of the ramp? 3. You are going to plant trees in your hilly backyard. Tree A is located at coordinates(1,2) and Tree B is located at (3,12). Wgat us tge slope of the hill between the two trees? Show how you arrived at your answer • Alegbra 116 - Sydney, Wednesday, September 8, 2010 at 1:03pm 1) Your are planning to spend no less than $8000.00 and no more than $12,000 on your landscape project a) Write an inequality that demonstrates how much money you will be will to spend on the project 8000< + x < 12,000 B) Suppose you want to cover the backyard with the decorative rock and plant some tress as the first phase of the project. You need 20 tons of rock to cover the area. If each ton cost %50 and each tree cost $60, what is the maximum numbert of tress you can buy with a budget for rock and tress of $2000? 
Write an inequality that illustrates the problem and solve.yx20+50x=1000 Express your answer as an inequality and explain how you arrive at your answer 2 You are going to help a neighbor build a ramp so he can easily go into his house in a wheelchair. In order to meet federal guideline, the ramp must not rise more than 1 foot over a horizontal distance of 12 vfeet. a)What is the maxium slope of the ramp into house? B)If the horizontal distance into the house is 49 feet, what is the maxium allowable rise of the ramp? 3. You are going to plant trees in your hilly backyard. Tree A is located at coordinates(1,2) and Tree B is located at (3,12). Wgat us tge slope of the hill between the two trees? Show how you arrived at your answer • Alegbra 116 - lucy, Sunday, December 19, 2010 at 9:17pm would 9 tress be a solution to the inequality in Related Questions math 115 - You are planning to spend no less than $8000. and no more than 12,000... MATH 116 - You are planning to spend no less than $6,000 and no more than $10,... algebra - You are planning to spend no less than $6,000 and no more than $10,000... algebra - You are planning to spend no less than $6,000 and no more than $10,000... algebra - You are planning to spend no less than $6,000 and no more than $10,000... Math Problem - You are planning to spend no less than $6,000 and no more than $... Alegbra 116 Help Please - 1) Your are planning to spend no less than $8000.00 ... algebra - Your company is planning to spend no less than $80,000 and no more ... math - You are planning to spend no less than $6,000 and no more than $10,000 on... algebra - 1. You are planning to spend no more than $10,000 and no less than $8,...
{"url":"http://www.jiskha.com/display.cgi?id=1283913014","timestamp":"2014-04-18T04:13:47Z","content_type":null,"content_length":"12572","record_id":"<urn:uuid:a6fcc8b3-31b3-484a-b575-fd866570f121>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Understand Witten's "QFT and Jones Polynomials" - how does he get to the twisted Dirac operator L_{-}? up vote 7 down vote favorite Hi, this is my first post here, so I hope I am asking the question the right way. I am trying to understand to following piece of algebra: In his paper, Witten claims that $\int_M Tr(B \wedge DB) + \int_M Tr(\phi \wedge \ast D \ast B) = \langle B , \ast DB \rangle + \langle \phi, D \ast B \rangle$ (where B is a Lie-algebra valued 1-form, $\phi$ is a Lie-algebra valued 3-form, $\ast$ is the Hodge star, D is the covariant derivative with respect to some flat connection, and $M$ is a compact closed Riemannian 3-manifold) can be regarded as a product of the form $\langle H, L_- H \rangle$, where $H = B+\phi \in \Omega^1(M,\mathfrak{g}) \oplus \Omega^3(M,\mathfrak{g})$ and $L_- = \ast D + D \ast$ is what he calls the twisted Dirac Operator acting on 1 and 3 forms. The scalar product just comes from extending the inner products of 1- and 3-forms orthogonally onto the direct sum, i.e. 1-forms and 3-forms are orthogonal w.r.t. this inner product. Witten does not bother to go into any detail explaining that, so I looked it up in another book, "Differential Topology and Quantum Field Theory" by Charles Nash. Now he claims the following (essentially equation 12.104): $\langle H, L_- H\rangle = \langle B + \phi, (\ast D + D \ast) (B+\phi) \rangle = \langle B+\phi, \ast D B + D \ast B + D \ast \phi\rangle $ (the other term with $\phi$ drops out because $\phi$ is a 3-form, so $D\phi=0$) $= \langle B ,\ast D B\rangle + \langle B, D\ast \phi\rangle + \langle \phi, D \ast B \rangle$. So far so good, it's the linearity of the inner product and the fact that 1- and 3-forms are orthogonal to each other. Now he continues \begin{eqnarray} \langle H, L_- H \rangle = \int_M Tr (B \wedge DB) + \int_M Tr(B \wedge \ast D \ast \phi) + \int_M Tr(\phi \wedge \ast D \ast B) \end{eqnarray} \begin{eqnarray} = \int_M Tr(B\wedge DB) + 2 \int_M Tr(\phi \wedge D^\dagger B) \end{eqnarray} where $D^\dagger$ is the codifferential of $D$, i.e. $\langle \alpha, D\beta \rangle = \langle D^\dagger \alpha, \beta \rangle$ for differential forms $\alpha, \beta$ with the right degree. Now I do not see at all how he gets to the last expression. I don't mind the factor of 2, but I don't see how he manages to get the codifferential in this way. I have tried using Stokes as well as the definition of the codifferential and my calculations say that the last two terms in the first line should cancel. However I have to admit that I did not bother about the Lie-algebra part of the forms, i.e. I basically did it for the abelian case. But I was assured that it shouldn't matter. But apparently, it does... I am pretty desperate to understand this part, so I would be happy about any kind of help you guys can offer me! dg.differential-geometry tqft chern-simons-theory add comment 4 Answers active oldest votes The $L^2$ inner product of $su(2)$ valued $p$-forms on a closed manifold $M$ is defined by $\langle a, b\rangle = -\int_M Tr(a\wedge *b).$ Together with the fact that $*^2=\pm 1$ and taking care with signs, this immediately explains the formula of Witten (just insert a $*^2$ before $DB$ in the first integral). up vote To see why the definition is correct, note that $-Tr$ is a positive definite inner product on the lie algebra. $Tr$ is acting on the coefficients and $*$ is acting on the forms. 
The usual $L 1 down ^2$ inner product on (real or complex valued) $p$-forms is $\langle a, b\rangle = \int_M a\wedge *b.$ Also, the adjoint of $D$ is $*D*$ (up so some sign). For details, look at the chapter on Hodge theory in Warner's book for ordinary forms; passing to vector-bundle valued forms just requires an inner product on the bundle. First of all, I think the minus sign in front of the trace is just an overall sign in front of all integrals that doesn't affect the problem at hand. Furthermore, it is just a matter of definition of the Lie-algebra inner product, so -Tr or Tr is merely a symbol. Maybe I should clearify the problem by an explicit calculation. Let's look at the second term: $\int_M Tr(B\ wedge *D*\phi) = (-)\langle B, D*\phi\rangle =(-)\langle D*\phi,B\rangle = \int_M Tr(D*\phi \wedge *B)$ Now both $*\phi$ and $*B$ are even forms, so using product rule and Stokes, one would get $-\int Tr(*\phi \wedge D*B)$ – moep Aug 8 '12 at 12:54 To continue: This then is $-(-)\langle *\phi,*D*B \rangle = -(-)\langle *D*B,*\phi\rangle = -\int_M Tr(*D*B \wedge \phi) = -\int_M Tr(\phi \wedge *D*B)$, as $*D*B$ is a 0-from and therefore commutes under the wedge product. The minus sign in bracket just incorporates the possibility of defining the inner product with an minus sign in front of the integral without affecting the result. I don't know where I went wrong, but clearly this minus sign at the end shouldn't be! So please, please point out any thing that has the potential to be wrong! – moep Aug 8 '12 at 12:59 The correct signs and this calculation can be found in many books, including Warner's. They are a bit of a pain to work out, but can all be derived from $a \wedge *b= \langle a, b\rangle d~vol$ and the product rule. – Paul Aug 9 '12 at 14:40 add comment I think that there is no problem. Consider your two lines formula. Let me call 2A the second term of the second line. The third term of the first line equals A : express the codifferential in term of D and of the Hodge star using Stokes. The second term of the first line equals A : use the definition of the codifferential. up vote 0 For the two calculus, use the fact that < *a , *b> = < a , b > et * * = identity if one acts on zero or three forms. down vote If this indications are not sufficient, I will edit to provide more details. Thanks for the answer! Unfortunately, I have already tried this route. I found (on Wikipedia and also in several different books on differential gemeometry) that the codifferential has a non-trivial sign depending on whether it acts on even or odd forms, i.e. D^dagger B = (-1)^(degree(B)) D B on a 3-manifold (where **=1). Since in this case B is a 1-form, *D*B is (according to this formula) -D^dagger B. You would also get this minus sign if you used Stokes theorem. There it comes from the product rule applied to D(*\phi \wedge *B), where now *\phi is even. – moep Aug 7 '12 at 18:49 add comment Well well well, it seems like Witten has played a pretty nasty joke on all of us... And all the authors who copied from him apparently fell for it as well! But I found a possible resolution to the problem above in "Computer Calculation of Witten's 3-Manifold Invariant" by Freed and Gompf (Commun. Math. Phys. 141, 79-117 (1991)): The formula (1.27) defines a self-adjoint operator $ (-1)^p (* D + D * )$ acting on 2p+1-forms. They go on talking about the $ \eta $-invariant of this operator, which is precisely what also Witten does in his paper later on. 
up vote 0 down vote If we take the operator $L_-$ to be this one, then everything works out perfectly. So I hope this is the answer. If any expert on this matter could confirm this I would be glad. Otherwise I think this question should be answered, but still I welcome any further add comment Hi moep, $\left<H,L_- H\right> = \left<B,*DB\right> + \left<B,D*\phi\right>+\left<\phi,D*B\right>$ $= \left<B,*DB\right> + \left<D^* B,*\phi\right>+\left<\phi,D*B\right>$ Now, in Euclidean space, $** =(-1)^{n(D-n)}$, where $D$ is the space dimension and $n$ is the degree of the form. For $D=n=3$, it yields $** = 1$. Thus, the last term changes, $\left<H,L_- H\right>= \left<B,*DB\right> + \left<D ^*B,*\phi\right>+\left<\phi,**D*B\right>,$ However, in Euclidean space, $D^* = (-1)^{D.n +D+1}*D*$, therefore, $D=3$ and $n=1$ yields, $D^* B= -*D*B$, i.e., up vote 0 down vote $\left<H,L_- H\right>= \left<B,*DB\right> + \left<D^*B,*\phi\right>-\left<\phi,*D^*B\right>,$ which gives the result you where pointing out!!! If I tried with Lorentzian signature the result holds... Can someone point out what are we doing wrong? P.D.: Sign conventions from Nakahara's book (section 7.9). add comment Not the answer you're looking for? Browse other questions tagged dg.differential-geometry tqft chern-simons-theory or ask your own question.
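For readers who want to check the sign bookkeeping in this thread, here is a minimal worked sketch (it is not part of the original discussion). It assumes the pairing $\langle a, b\rangle = -\int_M Tr(a\wedge *b)$, a closed oriented Riemannian 3-manifold (so $**=1$ on all degrees), and the adjoint convention $D^\dagger = (-1)^{n(p+1)+1}*D*$ on $p$-forms; with other conventions the intermediate signs move around.

$n=3:\quad D^\dagger = (-1)^p\,*D* \text{ on } p\text{-forms},\qquad **=1,$

$\langle B, D*\phi\rangle = \langle D^\dagger B, *\phi\rangle = \langle -*D*B, *\phi\rangle = -\langle D*B, \phi\rangle = -\langle \phi, D*B\rangle,\qquad B\in\Omega^1(M,\mathfrak{g}),\ \phi\in\Omega^3(M,\mathfrak{g}).$

So with $L_- = *D + D*$ the two cross terms cancel, which is exactly the cancellation reported in the question; with the Freed–Gompf operator $L_- = (-1)^p(*D + D*)$ on $(2p+1)$-forms the 3-form part acquires an extra minus sign, the cross terms add instead, and one recovers the structure $\langle B, *DB\rangle + 2\langle \phi, D*B\rangle$ quoted from Nash (the remaining identification with $\int_M Tr(\phi\wedge D^\dagger B)$ depends on the sign built into the trace pairing).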
{"url":"http://mathoverflow.net/questions/104215/understand-wittens-qft-and-jones-polynomials-how-does-he-get-to-the-twisted","timestamp":"2014-04-20T06:08:30Z","content_type":null,"content_length":"71808","record_id":"<urn:uuid:51134433-54b3-4da3-a772-4428bc61c3cb>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
I Understand Almost All The Way To The End How ... | Chegg.com I understand almost all the way to the end how they got the equivalent resistor from the circuit, but at the end the 3 ohm and 1 ohm resistors are in series and would give 4 ohms. Then the 4 ohm is in parallel with the 2 ohm and would give a Req = 1.33333, because Req = (1/R1 + 1/R2)^-1, but they never did the inverse. I know from Ohm's law that V = IR, but then how did they get the 2 A? Electrical Engineering
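The series/parallel arithmetic in this question can be checked directly. The short sketch below is not from the original post; the helper names are only illustrative, and since the excerpt does not quote the source voltage, the 2 A figure itself cannot be reproduced here.

def series(*rs):
    # Resistors in series simply add.
    return sum(rs)

def parallel(*rs):
    # Parallel combination: reciprocal of the sum of reciprocals.
    return 1.0 / sum(1.0 / r for r in rs)

r_series = series(3.0, 1.0)      # 3 ohm + 1 ohm in series -> 4 ohm
r_eq = parallel(r_series, 2.0)   # 4 ohm in parallel with 2 ohm
print(r_series)                  # 4.0
print(round(r_eq, 4))            # 1.3333 -- the inverse is taken inside parallel()

So the 1.333-ohm value already includes the inverse; the 2 A in the worked solution would then follow from I = V/R for whichever source voltage or branch the original figure specifies, which is not recoverable from this excerpt.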
{"url":"http://www.chegg.com/homework-help/questions-and-answers/understand-almost-way-end-got-equivalent-resistor-circuit-end-3ohm-1-ohm-resistor-series-w-q676125","timestamp":"2014-04-21T09:28:30Z","content_type":null,"content_length":"20907","record_id":"<urn:uuid:cb817390-e05c-4eed-930e-2ff3c3364328>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Pythagorean Theorem for Solving Right Triangles 1.10: Pythagorean Theorem for Solving Right Triangles Difficulty Level: At Grade At Grade Created by: CK-12 Practice Pythagorean Theorem for Solving Right Triangles You are out on the playground with friends playing a game of tetherball. In this game, a ball is attached by a rope to the top of a pole. Each person is trying to hit the ball in a different direction until it wraps the rope completely around the pole. The first person to get the ball wrapped around the pole in their direction is the winner. You can see an example of a tetherball game on the right hand side of the picture shown here: You notice that the rope attached to the tetherball is 1 meter long, and that the angle between the rope and the pole is $35^\circ$ At the end of this Concept, you'll know how to solve this problem. Watch This James Sousa Example: Determine Trig Function Value Given a Right Triangle You can use your knowledge of the Pythagorean Theorem and the six trigonometric functions to solve a right triangle. Because a right triangle is a triangle with a 90 degree angle, solving a right triangle requires that you find the measures of one or both of the other angles. How you solve for these other angles, as well as the lengths of the triangle's sides, will depend on how much information is given. Example A Solve the triangle shown below. We need to find the lengths of all sides and the measures of all angles. In this triangle, two of the three sides are given. We can find the length of the third side using the Pythagorean Theorem: $8^2 + b^2 & = 10^2\\64 + b^2 & = 100\\b^2 & = 36\\b & = \pm 6 \Rightarrow b = 6$ (You may have also recognized that this is a “Pythagorean Triple,” 6, 8, 10, instead of using the Pythagorean Theorem.) You can also find the third side using a trigonometric ratio. Notice that the missing side, $b$$\angle{A}$$b$ $\cos 53.13^\circ & = \frac{\text{adjacent side}}{\text{hypotenuse}} = \frac{b}{10}\\0.6 & = \frac{b}{10}\\b & = 0.6(10) = 6$ Example B Solve the triangle shown below. In this triangle, we need to find the lengths of two sides. We can find the length of one side using a trig ratio. Then we can find the length of the third side by using a trig function with the information given originally and a different trig function. Because the side we found is an approximation, using the Pythagorean Theorem would not yield the most accurate answer for the other missing side. Therefore, we should use a trig function with the original information to find the length of the third side instead. Only use the given information when solving right triangles. We are given the measure of $\angle A$$\angle A$$c$ $\cos 40^\circ & = \frac{adjacent}{hypotenuse} = \frac{6}{c}\\\cos 40^\circ & = \frac{6}{c}\\c \cos 40^\circ & = 6\\c & = \frac{6}{\cos 40^\circ} \approx 7.83$ If we want to find the length of the other leg of the triangle, we can use the tangent ratio. This will give us the most accurate answer because we are not using approximations. $\tan 40^\circ & = \frac{opposite}{adjacent} = \frac{a}{6}\\a & = 6 \tan 40^\circ \approx 5.03$ Example C Solve the triangle shown below. In this triangle, we have the length of one side and one angle. Therefore, we need to find the length of the other two sides. 
We can start with a trig function: $\tan 30^\circ & = \frac{opposite}{adjacent} = \frac{b}{7}\\\tan 30^\circ & = \frac{b}{7}\\7 \tan 30^\circ & = b\\b & = 7 \tan 30^\circ \approx 4.04$ We can then use another trig relationship to find the length of the hypotenuse: $\sin 30^\circ & = \frac{opposite}{hypotenuse} = \frac{4.04}{c}\\\sin 30^\circ & = \frac{4.04}{c}\\c \sin 30^\circ & = 4.04\\c & = \frac{4.04}{\sin 30^\circ} \approx 8.08$ Sine: The sine of an angle in a right triangle is a relationship found by dividing the length of the side opposite the given angle by the length of the hypotenuse. Cosine: The cosine of an angle in a right triangle is a relationship found by dividing the length of the side adjacent the given angle by the length of the hypotenuse. Tangent: The tangent of an angle in a right triangle is a relationship found by dividing the length of the side opposite the given angle by the length of the side adjacent to the given angle. Guided Practice 1. Solve the triangle shown below: 2. Solve the triangle shown below: 3. Solve the triangle shown below: 1. Since the angle given is $40^\circ$ $\tan 40^\circ = \frac{9}{a}\\a = \frac{9}{\tan 40^\circ}\\a = \frac{9}{.839}\\a = 10.73\\$ We can then use another trig function to find the length of the hypotenuse: $\sin 40^\circ = \frac{9}{c}\\c = \frac{9}{\sin 40^\circ}\\c = \frac{9}{.643}\\c = 13.997\\$ Finally, the other angle in the triangle can be found either by a trigonometric relationship, or by recognizing that the sum of the internal angles of the triangle have to equal $180^\circ$ $90^\circ + 40^\circ + \theta = 180^\circ\\\theta = 180^\circ - 90^\circ - 40^\circ\\\theta = 50^\circ\\$ 2. Since this triangle has two sides given, we can start with the Pythagorean Theorem to find the length of the third side: $a^2 + b^2 = c^2\\8^2 + b^2 = 17^2\\b^2 = 17^2 - 8^2\\b^2 = 289- 64 = 225\\b = 15\\$ With this knowledge, we can work to find the other two angles: $\tan \angle{B} = \frac{15}{8}\\\tan \angle{B} = 1.875\\\angle{B} = \tan^{-1} 1.875 \approx 61.93^\circ\\$ And the final angle is: $180^\circ - 90^\circ - 61.93^\circ = 28.07^\circ$ 3. There are a number of things known about this triangle. Since we know all of the internal angles, there are a few different ways to solve for the unknown sides. Here let's use the $60^\circ$ $\tan 60^\circ = \frac{a}{4}\\a = 4 \tan 60^\circ\\a = (4)(1.73) = 6.92\\$ $\cos 60^\circ = \frac{4}{h}\\h = \frac{4}{\cos 60^\circ}\\h = \frac{4}{.5}\\h = 8\\$ So we have found that the lengths of the sides are 4, 8, and 6.92. Concept Problem Solution From our knowledge of how to solve right triangles, we can set up a triangle with the rope and the pole, like this: From this, it is straightforward to set up a trig relationship for sine that can help: $\sin 35^\circ = \frac{opposite}{1}\\(1)\sin 35^\circ = opposite\\opposite \approx .5736$ Use the picture below for questions 1-3. 1. Find $m\angle A$ 2. Find $m\angle B$ 3. Find the length of AC. Use the picture below for questions 4-6. 4. Find $m\angle A$ 5. Find $m\angle C$ 6. Find the length of AC. Use the picture below for questions 7-9. 7. Find $m\angle A$ 8. Find $m\angle B$ 9. Find the length of BC. Use the picture below for questions 10-12. 10. Find $m\angle A$ 11. Find $m\angle B$ 12. Find the length of AB. Use the picture below for questions 13-15. 13. Find $m\angle A$ 14. Find $m\angle C$ 15. Find the length of BC. 16. Explain when to use a trigonometric ratio to find missing information about a triangle and when to use the Pythagorean Theorem. 17. 
Is it possible to have a triangle that you must use cosecant, secant, or cotangent to solve? 18. What is the minimum information you need about a triangle in order to solve it?
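The worked examples above all follow the same recipe: pick the trig ratio that links the known angle and side to the unknown side, then use the angle sum or the Pythagorean Theorem for whatever remains. A small Python sketch of that recipe follows; it is not part of the CK-12 lesson, and the function name and arguments are only illustrative.

import math

def solve_right_triangle(angle_deg, opposite=None, adjacent=None, hypotenuse=None):
    # Solve a right triangle from one acute angle and one side.
    # Returns (opposite, adjacent, hypotenuse, other acute angle in degrees).
    a = math.radians(angle_deg)
    if opposite is not None:
        hypotenuse = opposite / math.sin(a)
        adjacent = opposite / math.tan(a)
    elif adjacent is not None:
        opposite = adjacent * math.tan(a)
        hypotenuse = adjacent / math.cos(a)
    elif hypotenuse is not None:
        opposite = hypotenuse * math.sin(a)
        adjacent = hypotenuse * math.cos(a)
    else:
        raise ValueError("one side is required")
    return opposite, adjacent, hypotenuse, 90.0 - angle_deg

print(math.hypot(6, 8))                        # 10.0, the 6-8-10 triple from Example A
print(solve_right_triangle(30, adjacent=7))    # Example C: opposite ~4.04, hypotenuse ~8.08
print(math.sin(math.radians(35)) * 1.0)        # concept problem: ball sits ~0.5736 m from the pole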
{"url":"http://www.ck12.org/book/CK-12-Trigonometry-Concepts/r1/section/1.10/","timestamp":"2014-04-19T05:02:00Z","content_type":null,"content_length":"144857","record_id":"<urn:uuid:3f1461f5-0421-485c-b795-fdb06c5a22df>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Mill Valley Math Tutor Hey All! I am a Masters student at Golden Gate Baptist Theological Student with a great amount of experience teaching English, Reading, Writing, and Math to both children and adults. I have been a reading tutor for elementary school students and I have spent time away from school teaching Math and English in both Nepal and Uganda. 30 Subjects: including algebra 1, ACT Math, grammar, SAT math ...I took a number of courses in the subject. I've used the concepts during my years as a programmer and have tutored many students in the subject. I have a strong background in linear algebra and differential equations. 49 Subjects: including statistics, finance, actuarial science, calculus ...I then took an advanced linear algebra class at UC Santa Cruz and received a C (horribly difficult class for math/computer science majors about to finish their bachelor's degrees). During my time at UCSC I also tutored intro linear algebra several times. At DVC I took Discrete Math (logic based ... 15 Subjects: including calculus, trigonometry, statistics, probability ...I have been able to apply this experience in my own teaching as I understand the importance of individualized instruction and teaching through meaningful activities. I want to provide the best experience possible for my students. I am very flexible with setting schedules and meeting locations for tutoring sessions. 16 Subjects: including prealgebra, TOEFL, ESL/ESOL, algebra 1 ...Many students entering algebra need work on arithmetic with fractions, how to calculate least common denominators, etc. I very often start by showing my student how to write out the prime factorization of the integers from 1 to 50, and then having the student continue, as a homework exercise, fr... 17 Subjects: including algebra 1, algebra 2, calculus, geometry
{"url":"http://www.purplemath.com/mill_valley_ca_math_tutors.php","timestamp":"2014-04-18T11:02:18Z","content_type":null,"content_length":"23815","record_id":"<urn:uuid:6cb78cbf-4a3c-4f1b-8338-98b0c6696ce9>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Pages: 1 2 3 Post reply Re: Simulations Solve the equation for x. You get 2 solutions: I used the second one. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Simulations Hope you don't mind replying to old ones, I'll post code whichever I'll be able to come up with! "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Simulations Hi gAr; You can post it and thanks for doing so. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Simulations Okay, it's very lightweight when compared to other functional programming languages, so I'm fascinated! "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Simulations It is light in m too. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Simulations I meant the size of the whole package.. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Simulations Oh sorry, I did not understand. I thought you meant program size. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Simulations No problem, i'll explore more into the functional stuff.. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Simulations I prefer it too but I am afraid that it will always be like a second language to me. When you begin with procedural you are poisoned for life. See you later, got to get off. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Simulations Anyway, i'll try.. See you later. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Real Member Re: Simulations Is that an array construct? 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' 
-Alokananda Re: Simulations No, that is a PDF. The random variate command constructs an array of random numbers that obey that PDF. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Real Member Re: Simulations How about writing a CFD simulation or a chemistry one? 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Simulations How does that relate to Computer math? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Post reply Pages: 1 2 3
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=283280","timestamp":"2014-04-19T12:53:56Z","content_type":null,"content_length":"26686","record_id":"<urn:uuid:87ca0737-7e71-4fde-900a-07866b4280b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Consciousness Studies/The Philosophical Problem/Appendixs From Wikibooks, open books for an open world Action, Lagrangian and Hamiltonian Mechanics[edit] More on the origins of physics[edit] The view of physics taught at school is quite different from modern physics. Elementary School Physics concentrates on lumps of matter undergoing accelerations, collisions, extensions and motions in set directions. In real physics the world is understood as a collection of events occurring in a 'manifold' that dictates the freedom for motion. Each event or phenomenon is either directed and has the properties of a vector or not directed and has the properties of a scalar. Directed events have a magnitude that is a property of the event or phenomenon itself. The interaction of events with the world depends upon the angle between the direction of the event and the thing with which it interacts. So, in physics phenomena have an intrinsic magnitude and this can cause effects on other things according to the way they interact in space and time. In real physics all interactions depend on both magnitude and the spatio-temporal relations between things. The next big departure between physics and School Physics is the conservation laws. In physics it is understood that space and time are symmetrical and that the freedom for things to move evenly in all spatial directions results in the conservation of linear momentum, the evenness of time results in the conservation of energy etc. The discovery of the role of symmetry in the conservation laws arose out of the Lagrangian form of analysis and will be discussed below. The application of the Lagrangian method in quantum mechanics demonstrates why large objects take particular paths in the real world and can be used to derive and explain Newtonian mechanics. It is the deep attachment of biologists and AI researchers to School Physics that is probably the chief obstacle to making progress in areas such as consciousness studies. The fact that the world can, to some extent, be described by processes should not blind us to the fact that the classical world of our observation is actually arrangements of things governed by metrical geometry and quantum Most of those people who have been taught physics at school learnt Newton's original approach. This seventeenth century approach to physics has been superseded. It will come as a shock to many students to know that it was first superseded in the eighteenth century. What you were taught in school physics is an approach that is two hundred years out of date. Fortunately the approach discovered over two centuries ago is still widely applicable, even in modern physics, so it is easy to catch up. The eighteenth century approach is known as Lagrangian mechanics (devised by Joseph Lagrange between 1772 and 1788). Lagrangian mechanics concentrates on the energy exchanges during motion rather than on the forces involved. Lagrangian mechanics gave rise to Hamiltonian mechanics (devised by William Hamilton 1833). Consider a toy train running along friction free tracks. We want to work out the path it takes to get from the start to the finish of the track using our intuitions about energy and motion. If the train is run freely along the track it is found to take a particular time to get to the finish. If, on a second run, the train is reversed then set back in motion in the original direction to finish at the same time any amount of energy could be used to get the train from the start to the finish. 
It is evident that if the train is to get from the start to the finish in a given time then the least amount of energy used over the period occurs when there are no interventions. We could measure all the forms of energy used to slow down or speed up the train to see if an intervention has occurred but it turns out that only the kinetic energy of the train needs to be measured. If the train is slowed down, subtracting kinetic energy, then for the train to get to the end of the track at the proper time even more kinetic energy must be added when pushing it forward again for it to arrive at the finish on time. This means that we can account for the energy expenditure that affects the motion of the train by simply measuring the kinetic energy at intervals. The minimum amount of kinetic energy over the whole period of the trip corresponds to no interventions. No interventions occur when the sum of all the kinetic energy measurements are zero. The sum of the kinetic energy measurements in the toy train system is known as the action and has the symbol S. The action can be more complicated than a simple sum of kinetic energies, for instance when a ball is thrown into the air the kinetic energy can be converted into potential energy and vice versa. If a ball is thrown into the air and hits the ground after a definite time then the minimum interventions occur when the sum of the measurements of the difference between the kinetic and potential energy over the interval is a minimum. In this case the 'action' is the sum of the measurements of the difference between the kinetic and potential energies. Pierre Louis Moreau de Maupertuis discovered the idea of least action in 1746. He defined the action as the product of the time over which a movement occurs and twice the kinetic energy of the moving object. He found that this product tends to a minimum and this idea became called the Principle of Least Action. The work of Euler, Lagrange and Hamilton has led to the concepts in the principle of least action being applied to the whole of physics. This wider and modified principle of least action is now called the Principle of Stationary Action. In mathematical terms the action, S, is given by: $S = \int_{t_1}^{t_2}\ (T - U) \,dt.$ where T is the kinetic energy and U is the potential energy. The quantity (T - U) is known as the Langrangian function so if: $L = T - U$ The Lagrangian depends upon the position and the derivative of the position with respect to time $(x,\dot{x})$. The action is: $S = \int_{t_1}^{t_2}\; L(x,\dot{x})\,dt.$ The problem is to determine how the Lagrangian, $(T-U)$, can vary with distance, $x$, so that the action, $S$ is minimised. In other words, given relationships between $T, U$ and $x$, what curve of $L$ against $t$ will contain the minimum area. (This process is known as finding the minimising extremal curve for the integral). The starting point for calculating the least action in this way is Euler's calculation of variations method (see Hanc 2005). This results in the Euler-Lagrange equation: ${\partial L\over\partial x_{a}} - {d\over dt }{\partial L\over\partial\dot{x}_{a}} = 0$ which is a complicated formula for finding the extremal curve. The Lagrangian[edit] The Langrangian $(T - U)$ finds immediate applications in simple mechanics. 
In simple mechanics the kinetic energy of a moving object is given by: $T = \frac{1}{2} mv^2$ which, as $v = \dot{x}$ (the time derivative of distance), equals: $T = \frac{1}{2} m\dot{x}^2$ and the potential energy is usually directly proportional to distance: $U = kx$ or $U = mgh$ etc. The Lagrangian is then: $L = \frac{1}{2} m\dot{x}^2 - U(x)$ Differentiating the Lagrangian with respect to $x$: ${\partial L\over\partial x} = -{dU\over dx}$ but Newtonian force is minus the change in potential energy with distance, so: ${\partial L\over\partial x} = force$ Differentiating the Lagrangian with respect to $\dot{x}$: ${\partial L\over\partial \dot{x}} = m \dot{x}$ and $m \dot{x}$ is Newtonian momentum. ${d\over dt }{\partial L\over\partial\dot{x}} = m \ddot{x}$ which is Newtonian force. Hence: ${\partial L\over\partial x} = {d\over dt }{\partial L\over\partial\dot{x}}$ which is the Lagrangian equivalent of $f = ma$. Hamiltonian mechanics[edit] Hamiltonian mechanics starts from the idea of expressing the total energy of a system: $H = T + U$ where T is the kinetic energy and U is the potential energy. The Hamiltonian can be expressed in terms of the momentum, $p$, and the Lagrangian: $H_{(p, \dot{x})} = p\dot{x} - L_{(x, \dot{x})}$ Differentiating the Hamiltonian with respect to momentum, the velocity is given by: ${\partial H\over\partial p} = \dot{x}$ Differentiating the Hamiltonian with respect to $x$ we can derive the Hamiltonian expression for force: ${\partial H\over\partial x} = -{\partial L\over\partial x}$ so that $-{\partial H\over\partial x} = \dot{p} = force$. Lagrangian analysis and conservation laws[edit] The Euler-Lagrange equation can be re-organised as: ${\partial L\over\partial x} = {d\over dt }{\partial L\over\partial\dot{x}}$ If one side of this equation is zero then the other side is also zero. This means that, for instance, if there is no change in kinetic-potential energy with distance then ${\partial L\over\partial\dot{x}}$ is constant or conserved. In the discussion of action above it was shown that changes in the kinetic and potential energy are due to perturbations in the course or progress of an object. In other words changes in the Lagrangian will occur in Euclidean space if an object is perturbed in its motion and ${\partial L\over\partial x}$ will be zero if the path is unperturbed. In the case of a freely moving particle: $L = \frac{1}{2} m\dot{x}^2$ ${\partial L\over\partial\dot{x}} = m\dot{x}$ ${\partial L\over\partial x} = 0$ so the momentum, $m\dot{x}$, is conserved. Emmy Noether systematically investigated the relationship between conservation laws, symmetries and invariant quantities. The following symmetries are shown with their corresponding conservation laws: Translation in space: conservation of momentum. Translation in time: conservation of energy. Spatial rotation: conservation of angular momentum. Hyperbolic rotation (Lorentz boost): conservation of the energy-momentum 4-vector. See http://www.eftaylor.com/pub/Symmetry0104.pdf This is a stub The role of quantum mechanics[edit] See Quantum physics explains Newton's laws of motion http://www.eftaylor.com/pub/OgbornTaylor.pdf Special relativity for beginners http://en.wikipedia.org/wiki/Special_relativity_for_beginners This is a stub What is it like to be physical?[edit] Space-time vector or QM field? This is a stub Hanc, J. (2005). The original Euler's calculus-of-variations method: Key to Lagrangian mechanics for beginners. Submitted to Eur. J. Phys. http://www.eftaylor.com/pub/HancEulerEJP.pdf Norbury, J.W.
Lagrangians and Hamiltonians for High School Students. http://arxiv.org/PS_cache/physics/pdf/0004/0004029.pdf Calvert, J.B. The beautiful theory. http://www.du.edu/~jcalvert/math/lagrange.htm see also http://www.du.edu/~jcalvert/
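The "least action" description above can also be checked numerically. The sketch below is not part of the Wikibooks page and the parameter values are arbitrary; it discretises the action S = ∫(T − U) dt for a ball thrown upward and shows that the true free-fall path between fixed endpoints gives a smaller action than a perturbed path with the same endpoints.

import numpy as np

g, m = 9.8, 1.0
t = np.linspace(0.0, 1.0, 1001)      # fixed start and end times
dt = t[1] - t[0]

def action(x):
    # Discretised S = sum of (kinetic - potential) * dt along the path x(t).
    v = np.gradient(x, dt)
    return np.sum(0.5 * m * v**2 - m * g * x) * dt

x_true = 0.5 * g * t * (1.0 - t)             # solution of x'' = -g with x(0) = x(1) = 0
x_pert = x_true + 0.3 * np.sin(np.pi * t)    # same endpoints, perturbed in between

print(action(x_true))   # smaller
print(action(x_pert))   # larger: endpoint-preserving perturbations only raise S here

The second variation of this particular Lagrangian is just the positive integral of half the mass times the squared velocity of the perturbation, which is why any such perturbation increases the discretised action.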
{"url":"http://en.wikibooks.org/wiki/Consciousness_Studies/The_Philosophical_Problem/Appendixs","timestamp":"2014-04-19T12:57:50Z","content_type":null,"content_length":"46163","record_id":"<urn:uuid:b9f6a8cf-2fc1-4510-8f21-71ff0b399d5d>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
composition functions November 24th 2006, 10:24 PM #1 composition functions 1. Find the Inverse of: My answer was: Is this Correct? 2. Find $f{\circ}g$ and $g{\circ}f$ when $f(x)=x^3-1$ and $g(x)=\sqrt[3]{x=1}$ I got: $\text{Thanks for the Help!!!}$ Yes, but this would look better if you did not use $f$ for both functions, so you would have: and its inverse as: Also as a personel preference I would use a different variable name in this second definition so I would write: is the inverse function of Ok, but I am definatley did get those as answers I got new answers but noone of yours 1. $f{\circ}g$= $\sqrt{x+1}$ 2. $g{\circ}f$= $\sqrt[6]{x^3}$ PURE CALCULATOR $\text{In your spare time could you please tell me what I did wrong}$ first I assume that you used the shift key where you better shouldn't use it. That means your problem reads: $f(x)=x^3-1$ and $g(x)=\sqrt[3]{x+1}$ $f{\circ}g$ means you have to calculate: $f(g(x))$. Therefore you have to plug the term of the function g in the place of the x in function f: $f{\circ}g=f(g(x))=(\sqrt[3]{x+1})^3-1=x+1-1=x$. That's the result CaptainBlack has already told you. Same procedure: $g{\circ}f=g(f(x))=\sqrt[3]{x^3-1+1}=\sqrt[3]{x^3}=x$. That's the result CaptainBlack has already told you. To be honest: I can't guess what you have done. Earboth has explained in more detail how we think this goes, if you want us to tell you where you went wrong, you will need to describe what you I must have slipped up on my Calculator, thanks for the advice... RE: Same Answers I keeps getting the same answers, so here is my work: $f(g(x))$= ${(\sqrt[3]{x+1})^3}-1$ = ${(x+1)^{1/18}}-1$ $g(f(x))$= $\sqrt[3]{(x^3-1)+1}$ = $(x^3)^1/6$ Texas Instruments Voyage 200 the bling bling of All Calc's The power rule is: $(x^a)^b = x^{ab}$ $\left ( \sqrt[3]{x+1} \right )^3 = \left ( (x + 1)^{1/3} \right )^3 = (x + 1)^{\frac{1}{3} \cdot 3} = x + 1$ For the same reason as above: $\sqrt[3]{(x^3-1)+1} = \sqrt[3]{x^3} = (x^3)^{1/3} = x^{3 \cdot \frac{1}{3}} = x$ from your result I believe that you used the sqrt( command. Then you have calculated: $\left( \left( (x+1)^\frac{1}{2}\right)^\frac{1}{3}\right)^\frac{ 1}{3}$ , which will indeed give your result. Type on your calculator: You'll get the correct result. Ok great i got $x$ for my answer, but if you were a teacher grading tests would you accept the answer given? Do you think he or she would? If you are speaking of the answers here: ${(x+1)^{1/18}}-1 = \sqrt[18]{x+1} - 1$ $(x^3)^{1/6} = \sqrt[6]{x^3} = \sqrt{x}$ which are not the same as the correct answers. November 24th 2006, 10:42 PM #2 Grand Panjandrum Nov 2005 November 24th 2006, 10:50 PM #3 Grand Panjandrum Nov 2005 November 25th 2006, 05:42 AM #4 November 25th 2006, 06:00 AM #5 November 25th 2006, 06:05 AM #6 Grand Panjandrum Nov 2005 November 25th 2006, 06:19 AM #7 November 25th 2006, 10:56 AM #8 November 25th 2006, 10:57 AM #9 November 25th 2006, 11:30 AM #10 November 25th 2006, 11:38 AM #11 November 26th 2006, 10:37 AM #12 November 26th 2006, 10:56 AM #13 November 26th 2006, 11:20 AM #14 November 26th 2006, 11:49 AM #15
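earboth's point above (that the calculator answers came from stacking square and cube roots rather than cubing) is easy to confirm symbolically. The sketch below is not from the thread; it just checks that with f(x) = x^3 - 1 and g(x) = (x + 1)^(1/3) both compositions reduce to x, and reproduces the erroneous exponent 1/18.

import sympy as sp

x = sp.symbols('x', positive=True)   # positivity keeps the cube-root simplification clean
f = lambda t: t**3 - 1
g = lambda t: sp.cbrt(t + 1)

print(sp.simplify(f(g(x))))          # x
print(sp.simplify(g(f(x))))          # x

# The mistaken keystrokes amount to nesting extra roots:
wrong = sp.root(sp.root(sp.sqrt(x + 1), 3), 3)
print(sp.simplify(wrong))            # (x + 1)**(1/18), matching the questioner's answer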
{"url":"http://mathhelpforum.com/pre-calculus/7971-composition-functions.html","timestamp":"2014-04-16T08:53:04Z","content_type":null,"content_length":"96234","record_id":"<urn:uuid:071c8e3f-05f9-4bcd-a9a5-db3a879ed9e3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: [R-sig-ME] lme and prediction intervals - Douglas Bates - org.r-project.r-sig-mixed-models - MarkMail From Sent On Attachments D Chaws Feb 18, 2010 9:25 am walmes zeviani Feb 18, 2010 1:02 pm D Chaws Feb 20, 2010 9:01 am Ben Bolker Feb 20, 2010 1:27 pm D Chaws Feb 25, 2010 9:31 pm D Chaws Apr 3, 2010 10:12 pm Douglas Bates Apr 4, 2010 5:38 am Ben Bolker Apr 4, 2010 6:53 pm Douglas Bates Apr 5, 2010 7:39 am D Chaws Apr 5, 2010 7:48 pm Jarrod Hadfield Apr 6, 2010 9:59 am David Hsu Apr 6, 2010 1:59 pm D Chaws Apr 6, 2010 9:17 pm John Maindonald Apr 6, 2010 10:47 pm Douglas Bates Apr 7, 2010 6:53 am Douglas Bates Apr 7, 2010 7:14 am Emmanuel Charpentier Apr 7, 2010 2:36 pm D Chaws Apr 8, 2010 12:10 pm Subject: Re: [R-sig-ME] lme and prediction intervals From: Douglas Bates (bat...@stat.wisc.edu) Date: Apr 5, 2010 7:39:39 am List: org.r-project.r-sig-mixed-models On Sun, Apr 4, 2010 at 8:54 PM, Ben Bolker <bol...@ufl.edu> wrote: Douglas Bates wrote: On Sat, Apr 3, 2010 at 11:12 PM, D Chaws <cat....@gmail.com> wrote: Ok, issue solved for the most straightforward random effects cases. Not sure about nested random effects or more complex cases. Assuming that you can make sense of lsmeans in such a case. You may notice that lsmeans are not provided in base and recommended R packages. That isn't an oversight. Try to explain what lsmeans are in terms of the probability model. Anyway, if you are happy with it, then go for it. I'll just give you a warning from a professional statistician that they are a nonsensical construction. lsmeans may not make sense in general (I don't really know, I have a somewhat weird background that mostly doesn't include SAS), but there's nothing wrong with wanting predictions and standard errors of predictions, which be definable (?) if one can specify (a) whether a given random effect is set to zero or included at its conditional mean/mode value (or, for a simulation, chosen from a normal distribution with the appropriate variance-covariance structure (b) whether random effects not included in the prediction (and the residual error) are included in the SE or not. I agree that specifying all this is not as easy as specifying "level", but can't one in principle do this by specifying which random effects are in/out of the prediction or the SE? Your first sentence is absolutely right - those whose backgrounds do not include an introduction to SASspeak have difficulty in understanding what lsmeans are, and that group includes me. Once again, I have phrased my objections poorly. What I should have said is that I do not understand what lsmeans are. I have tried to read the documentation on them in various SAS publications and also in some books and I still can't make sense of them. I have a strong suspicion that, for most users, the definition of lsmeans is "the numbers that I get from SAS when I use an lsmeans statement". My suggestion for obtaining such numbers is to buy a SAS license and use SAS to fit your models. Those who have read Bill Venables unpublished paper, "Exegeses on Linear Models" (just put the title into a search engine) will recognize this situation. Insightfull or whatever their name was at the time had important customers (read "pharmaceutical companies") who wanted them to change S-PLUS so that it created both Type I and Type III sums of squares. They consulted with statisticians who knew S-PLUS well who told them "don't do that, it doesn't make sense". 
Of course the marketing folks won out and the company proceeded to ignore this advice and implement (poorly) the Type X sums of squares where X is the number that means "give me something that I will regard as a marginal sum of squares for a factor in the presence of non-ignorable interactions". Apparently the fact that such a concept doesn't make sense is not an adequate reason to avoid emulating SAS in producing these numbers. I should have phrased my objection as a deficiency in my background. I don't know what lsmeans are and therefore cannot advise anyone on how to calculate them. If you or anyone else can explain to me - in terms of the random variables Y and B and the model parameters - what you wish to calculate then I can indicate how it could be calculated. I think that lsmeans are, in some sense, elements of the mean of Y but I don't know if they are conditional on a value of B or not. If they are means of Y then there must be a parameter vector beta specified. This is where I begin to lose it. Many people believe that they can specify an incomplete parameter vector and evaluate something that represents means. In other words, many people believe that when there are multiple factors in the fixed-effects formula they can evaluate the mean response for levels of factor A in some way that is marginal with respect to the levels of the other factors or numerical covariates. I can't understand how that can be done. So I need to know what the values of the fixed-effects parameters, beta, should be and whether you want to condition on a particular value, B = b, or evaluate the mean of the marginal distribution of Y, in the sense of integrating with respect to the distribution of B. If the latter, then you need to specify the parameters that determine the distribution of B. If the former, then I imagine that you wish to evaluate the mean of the distribution of Y conditional on B at the BLUPs. As you know I prefer to use the term "conditional means" or, for more general models like GLMMs, "conditional modes", instead of BLUPs. The values returned by ranef for a linear mixed model are the conditional means of B given the observed value Y = y evaluated at the parameter estimates. To say that you want the conditional mean of Y given B = the conditional mean of B given the observed y is a bit too intricate for me to understand. I really don't know how to interpret such a concept. My hope is that, after building code for a reasonable number of examples, the general principles will become sufficiently clear that a method with an appropriate interface can then be written (note use of the passive voice). The hardest part I discovered for doing this with existing lme and lme4 objects is recalculating the random-effects design matrix appropriately when a new set of data (with different random-effects factor structure) is specified ... You may find the sparse.model.matrix function in the Matrix package helpful.
{"url":"http://markmail.org/message/dqpk6ftztpbzgekm","timestamp":"2014-04-16T16:54:15Z","content_type":null,"content_length":"19297","record_id":"<urn:uuid:6d5d3091-66e1-43af-9682-18a66aad814c>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Hollis, NH Calculus Tutor Find a Hollis, NH Calculus Tutor ...From first grade in elementary school to graduate students in college, or adults in need of or interested in learning mathematics. After acquiring the essential math part of their interested topics together with their efforts on remembering (non math part, such as) formulas, algorithms, the stud... 13 Subjects: including calculus, geometry, trigonometry, algebra 1 ...As such it is used quite heavily in physics. During my undergraduate studies I took three calculus courses and then applied the knowledge gained therein in physics and chemistry courses. I have a minor in Chemistry. 6 Subjects: including calculus, chemistry, physics, algebra 1 ...I'm a good teacher, listen and explain things well, enjoy teens and am patient and understanding. I have excellent tutoring references. I am the father of 3 teens, and have been a soccer coach, youth group leader, and scouting leader. 15 Subjects: including calculus, physics, statistics, geometry ...Particularly important are operations with exponents and an understanding of the definition and properties of logarithms. Calculus is one of the three legs on which most mathematically-based disciplines rest. The other two are linear algebra and the stochastic systems (statistics), which come together in advanced courses. 7 Subjects: including calculus, physics, algebra 1, algebra 2 ...GRE: 170 Math, 170 Verbal, 6.0 Writing. GMAT: 780. MY EXPERIENCE: I have thousands of hours of professional teaching, tutoring, and mentoring experience - eight years in the Boston metro area 47 Subjects: including calculus, English, reading, chemistry Related Hollis, NH Tutors Hollis, NH Accounting Tutors Hollis, NH ACT Tutors Hollis, NH Algebra Tutors Hollis, NH Algebra 2 Tutors Hollis, NH Calculus Tutors Hollis, NH Geometry Tutors Hollis, NH Math Tutors Hollis, NH Prealgebra Tutors Hollis, NH Precalculus Tutors Hollis, NH SAT Tutors Hollis, NH SAT Math Tutors Hollis, NH Science Tutors Hollis, NH Statistics Tutors Hollis, NH Trigonometry Tutors Nearby Cities With calculus Tutor Amherst, NH calculus Tutors Ayer calculus Tutors Brookline, NH calculus Tutors Devens calculus Tutors Dunstable calculus Tutors Groton, MA calculus Tutors Litchfield, NH calculus Tutors Lunenburg, MA calculus Tutors Milford, NH calculus Tutors Pepperell calculus Tutors Shirley, MA calculus Tutors Townsend, MA calculus Tutors Tyngsboro calculus Tutors Westminster, MA calculus Tutors Wilton, NH calculus Tutors
{"url":"http://www.purplemath.com/hollis_nh_calculus_tutors.php","timestamp":"2014-04-20T20:00:08Z","content_type":null,"content_length":"23684","record_id":"<urn:uuid:5022302b-04ba-4e73-94ec-4bfa7a91adeb>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
SparkNotes: SAT Physics: The Doppler Effect 17.1 Periodic Motion 17.6 The Doppler Effect 17.2 Wave Motion 17.7 Key Formulas 17.3 Transverse Waves and Longitudinal Waves 17.8 Practice Questions 17.4 Superposition 17.9 Explanations 17.5 Standing Waves and Resonance The Doppler Effect So far we have only discussed cases where the source of waves is at rest. Often, waves are emitted by a source that moves with respect to the medium that carries the waves, like when a speeding cop car blares its siren to alert onlookers to stand aside. The speed of the waves, v, depends only on the properties of the medium, like air temperature in the case of sound waves, and not on the motion of the source: the waves will travel at the speed of sound (343 m/s) no matter how fast the cop drives. However, the frequency and wavelength of the waves will depend on the motion of the wave’s source. This change in frequency is called a Doppler shift.Think of the cop car’s siren, traveling at speed f and period T = 1/f. The wave crests travel outward from the car in perfect circles (spheres actually, but we’re only interested in the effects at ground level). At time T after the first wave crest is emitted, the next one leaves the siren. By this time, the first crest has advanced one wavelength, The shorter wavelength is called the Doppler-shifted wavelength, given by the formula Similarly, someone standing behind the speeding siren will hear a sound with a longer wavelength, You’ve probably noticed the Doppler effect with passing sirens. It’s even noticeable with normal cars: the swish of a passing car goes from a higher hissing sound to a lower hissing sound as it speeds by. The Doppler effect has also been put to valuable use in astronomy, measuring the speed with which different celestial objects are moving away from the Earth. A cop car drives at 30 m/s toward the scene of a crime, with its siren blaring at a frequency of 2000 Hz. At what frequency do people hear the siren as it approaches? At what frequency do they hear it as it passes? The speed of sound in the air is 343 m/s. As the car approaches, the sound waves will have shorter wavelengths and higher frequencies, and as it goes by, the sound waves will have longer wavelengths and lower frequencies. More precisely, the frequency as the cop car approaches is: The frequency as the cop car drives by is:
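The two frequencies asked for at the end of this excerpt appear to have been inline formula images that did not survive extraction. The sketch below is not from SparkNotes; it assumes the standard moving-source relations f_obs = f·v/(v − v_s) ahead of the source and f_obs = f·v/(v + v_s) behind it, with the numbers given in the example.

v_sound = 343.0   # m/s, speed of sound given in the problem
v_car   = 30.0    # m/s, speed of the cop car
f_siren = 2000.0  # Hz, siren frequency

f_approach = f_siren * v_sound / (v_sound - v_car)   # heard in front of the car
f_recede   = f_siren * v_sound / (v_sound + v_car)   # heard behind the car

print(round(f_approach, 1))   # about 2191.7 Hz as the car approaches
print(round(f_recede, 1))     # about 1839.1 Hz after it passes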
{"url":"http://www.sparknotes.com/testprep/books/sat2/physics/chapter17section6.rhtml","timestamp":"2014-04-21T04:32:26Z","content_type":null,"content_length":"50070","record_id":"<urn:uuid:ccfb5836-1b08-409b-8640-dfb39824bc27>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
Critical Pressure and Temperature of a van der Waals Gas 1. The problem statement, all variables and given/known data From the van der Waals equation of state, show that the critical temperature and pressure are given by [tex]T_{cr} = \frac{8a}{27bR}[/tex] [tex]P_{cr} = \frac{a}{27b^2}[/tex] Hint: Use the fact that the [itex]P[/itex] versus [itex]V[/itex] curve has an inflection point at the critical point so that the first and second derivatives are zero. 2. Relevant equations [tex]P = \frac{RT}{V/n - b} - \frac{a}{(V/n)^2}[/tex] 3. The attempt at a solution The first and second derivatives have powers of [itex]V[/itex] greater than 2. Unfortunately I don't have the skills to solve for [itex]dP/dV = 0[/itex] or [itex]d^2P/dV^2 = 0[/itex]. Perhaps there's a simpler way?
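One way around the messy algebra mentioned in the attempt is to let a computer algebra system impose both critical-point conditions at once. The sketch below is not from the thread; it works with the molar volume V_m = V/n, solves dP/dV_m = 0 and d^2P/dV_m^2 = 0 simultaneously for T and V_m (these are derivatives along an isotherm, not time derivatives), and then substitutes back to get the critical pressure.

import sympy as sp

a, b, R, T, Vm = sp.symbols('a b R T V_m', positive=True)
P = R*T/(Vm - b) - a/Vm**2            # van der Waals equation per mole

crit = sp.solve([sp.diff(P, Vm), sp.diff(P, Vm, 2)], [T, Vm], dict=True)[0]
P_cr = sp.simplify(P.subs({T: crit[T], Vm: crit[Vm]}))

print(crit[Vm])   # 3*b
print(crit[T])    # 8*a/(27*R*b)
print(P_cr)       # a/(27*b**2)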
{"url":"http://www.physicsforums.com/showthread.php?t=180305","timestamp":"2014-04-20T14:10:26Z","content_type":null,"content_length":"28974","record_id":"<urn:uuid:4cc0b139-5e90-4492-a41b-92988cffc4cd>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there a connection between the theory of motives and homotopy theory? up vote 1 down vote favorite I have read that motives were designed to be the common part of the many homology theories, a way of unifying them. But as I understand it: homotopy is closely related to homology, there is only 1 homotopy theory, and homotopy groups contain more information than homology groups. Is there a relationship between motives and homotopies? homotopy-theory cohomology homology motives 1 An MFO report: mfo.de/programme/schedule/2010/20/OWR_2010_23.pdf – Thomas Riepe Feb 14 '11 at 8:55 I certainly couldn't claim to fully understand this paper, but I can tell you that a "spectrum" is the homotopy-theoretic generalization of a topological space and that this paper talks about motivic spectra whole lot: arxiv.org/abs/0712.2817 – Aaron Mazel-Gee Feb 14 '11 at 9:34 3 You say there's only one homotopy theory... but there are tons!! – Fernando Muro Feb 14 '11 at 9:58 You said you've read that "motives were designed to be the common part of the many homology theories", but I think that's not quite correct: motives are designed to contain the (co)homological 2 information of algebraic objects (schemes, varieties). I guess there are a lot of "ill-behaved" topological spaces which don't have the homotopy type of a CW-complex and don't have anything to do with algebraic geometry, where motivic ideas don't apply. On the other hand, if anyone knows of a theory of motives for arbitrary topological spaces, I'd be interested :-) – Konrad Voelkel Feb 14 '11 at 10:16 2 @Aaron - I disagree that a spectrum is "the" homotopy theoretic generalisation of a topological space. It is only a stable homotopy theoretic generalisation of a topological space. Or rather, that a topological space gives rise to a rather special example of a spectrum. – David Roberts Feb 17 '11 at 22:40 show 3 more comments 1 Answer active oldest votes In algebraic topology, there is a close relationship between stable homotopy theory and the study of (generalized) cohomology theories. Basically, all the cohomology theories become representable on the stable category of spectra and so, from the point of view of stable homotopy theory, the study of cohomology theories can be viewed as the study of their representing More recently it has been discovered (through the work of Voevodsky and others) that there is an analogous situation in algebraic geometry. Keywords to look up: motivic homotopy theory, $\ mathbb{A}^1$-homotopy theory. Basically, we can construct a homotopy theory for algebraic varieties and a suitable homotopy category which plays a role analogous to the stable category of spectra in topology. One thing this gives us is that it enables us to define new cohomology theories for algebraic varieties by describing their representing spectra ("motivic spectra"). For example, motivic cohomology, algebraic K-theory, and algebraic cobordism can be constructed in this way. This whole circle of ideas is closely related to recent work on motives and motivic cohomology. For example, Voevodsky's construction of a "derived category of mixed motives" is closely related to this work. 
up vote 6 down The following is a very easy-going introduction to the idea of motivic homotopy theory and is understandable even by an undergraduate: • Motivic Homotopy Theory: Lectures at a Summer School in Nordfjordeid, Norway, August 2002 by Bjørn Dundas, Marc Levine, Paul Østvær, Oliver Röndigs and Vladimir Voevodsky It is also worth reading Voevodsky's 1998 ICM address: • Vladimir Voevodsky - A^1-homotopy theory (Proceedings of the 1998 ICM) There is a lot more that could be said about this very interesting area of mathematics. add comment Not the answer you're looking for? Browse other questions tagged homotopy-theory cohomology homology motives or ask your own question.
{"url":"http://mathoverflow.net/questions/55388/is-there-a-connection-between-the-theory-of-motives-and-homotopy-theory/55778","timestamp":"2014-04-18T03:20:55Z","content_type":null,"content_length":"59651","record_id":"<urn:uuid:bc7834ee-f2c8-4f42-88fc-74ac29c0470f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Hackensack, NJ ACT Tutor Find a Hackensack, NJ ACT Tutor ...CONTACTING ME Please send me a message if you any questions or would like to discuss your particular circumstances in more detail. I respond to all messages within 24 hours or less (usually less), and am also available to discuss over the phone.Many students struggle on the SAT Math section, no... 18 Subjects: including ACT Math, geometry, GRE, algebra 1 ...Thanks to my engineering degree in Computer Science from Princeton University, I "speak computer". I've also worked at an in-person technical support center at my college, which allowed me to work with both computers and people together. I love helping people get along with their computers bett... 37 Subjects: including ACT Math, chemistry, physics, calculus ...The passages are presented in a specific order: Prose Fiction, Social Science, Humanities, Natural Science. I can help students improve their reading and comprehension skills. I help students understand and anticipate the most frequently asked types of questions so that their reading becomes more efficient. 9 Subjects: including ACT Math, SAT math, SAT reading, GMAT ...I hold NJ certification, highly qualified in science Proctor SAT II in biology to review and stay current on question types and subject matter I am permanently certified in K-8 in NJ & NY. I received a BA in elementary education & an MA in Science Education. I taught 6-8 grade math and science for 18 years, and have tutored in all elementary school subjects for over 25 years. 16 Subjects: including ACT Math, reading, geometry, biology I obtained my BSc in Applied Mathematics and BA in Economics dual-degree from the University of Rochester (NY) in 2013. I am a part-time tutor in New York City and want to help those students who need exam preparation support or language training. I used to work at the Department of Mathematics on campus as Teaching Assistance for two years and I know how to help you improve your skills. 7 Subjects: including ACT Math, calculus, algebra 1, actuarial science Related Hackensack, NJ Tutors Hackensack, NJ Accounting Tutors Hackensack, NJ ACT Tutors Hackensack, NJ Algebra Tutors Hackensack, NJ Algebra 2 Tutors Hackensack, NJ Calculus Tutors Hackensack, NJ Geometry Tutors Hackensack, NJ Math Tutors Hackensack, NJ Prealgebra Tutors Hackensack, NJ Precalculus Tutors Hackensack, NJ SAT Tutors Hackensack, NJ SAT Math Tutors Hackensack, NJ Science Tutors Hackensack, NJ Statistics Tutors Hackensack, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Hackensack_NJ_ACT_tutors.php","timestamp":"2014-04-18T11:41:18Z","content_type":null,"content_length":"24101","record_id":"<urn:uuid:40f1e00b-9619-4faf-ba27-601aa015cff8>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Remote Definitions Parallel kernels do not have access to the values of variables defined in the master kernel, nor do they have access to locally defined functions. Mathematica contains a command DistributeDefinitions that makes it easy to transport local variables and definitions to all parallel kernels. The main advantage of this method is that the application package does not need to be installed on the remote kernels. All definitions are sent through the existing connection to the remote kernels. Distributing Definitions DistributeDefinitions[s[1],s[2],...] distribute all definitions for symbols to all remote kernels DistributeDefinitions["Context`"] distribute definitions for all symbols in the specified context DistributeDefinitions has the attribute HoldAll to prevent the evaluation of the symbols. DistributeDefinitions exports the following kinds of definitions: OwnValues, DownValues, , UpValues, , . DistributeDefinitions sets the attributes of the remote symbols equal to the locally defined attributes, except for attributes such as Protected and Locked. Any old definitions existing on the remote side are cleared before the new definitions are made. Here is a subtle point. The following remote evaluation seems to work, even though the symbols are not defined on the remote side. The reason is that the remote kernels return the unevaluated expression , because the function and variable are not defined on the remote kernel. The master kernel evaluates the returned results further, but it does so sequentially. You can easily produce an example where the difference between remote and local evaluation becomes apparent. On the local kernel, the symbol evaluates to , and the of is On the remote kernels, stays a symbol, and its head is Automatic Distribution of Definitions Higher-level parallel commands, such as Parallelize, ParallelTable, ParallelSum, ... will automatically distribute definitions of symbols occurring in their arguments. For this parallel table, the function and the iterator bound will evaluate on the subkernels, so their definitions need to be distributed to make it work. This automatic distribution happens for any functions and variables you define interactively, within the same notebook (technically, for all symbols in the default context). Definitions from other contexts, such as functions from packages, are not distributed automatically. As a result, the symbol is returned unevaluated from the remote kernels, and is evaluated only after the parallel computation is done, where is zero. Distributing Contexts DistributeDefinitions["Context`"] exports all definitions for all symbols in the given context. Thus, you can use the following to make all your interactively entered definitions known to the remote Exporting the context of a package you have loaded may not have the same effect on the remote kernels as loading the package on each remote kernel. The reason is that loading a package may perform certain initializations and it may also define auxiliary functions in other contexts (such as a private context). Also, a package may load additional auxiliary packages that establish their own DistributeDefinitions["Context`"] is useful for exporting contexts for definitions that you have explicitly set up to be used on remote kernels. There is a separate command ParallelNeeds for remote loading of packages. clears any definitions for symbols in the context on remote kernels. 
Loading Packages on Remote Kernels ParallelNeeds["Context`"] evaluate Needs["Context`"] on all available parallel kernels ParallelNeeds["Context`"] is essentially equivalent to ParallelEvaluate[Needs["Context`"]], but it is remembered, and any newly launched remote kernels will be initialized as well. Exporting the context of a package you have loaded may not have the same effect on the remote kernels as loading the package on each remote kernel with ParallelNeeds[]. The reason is that loading a package may perform certain initializations, and it may also define auxiliary functions in other contexts (such as a private context). Also, a package may load additional auxiliary packages that establish their own contexts. Note that Mathematica packages available to the master kernel may not be available on remote kernels from older versions of Mathematica. Example: Eigenvalues of Matrices The parameter gives the desired precision for the computation of the eigenvalues of a random × matrix. It is enough to distribute the definition of the main function . Any values it depends on will be distributed automatically. A Sample Run Here you measure the time it takes to find the eigenvalues of 5×5 to 25×25 matrices. Because the computations may happen on remote computers that differ in their processor speeds, the results do not necessarily form an increasing sequence. Alternatively, you can perform the same computation on each parallel processor to measure their relative speed. Here you find the speed of calculation of the eigenvalues of a 20×20 matrix on each of the parallel processors.
{"url":"http://reference.wolfram.com/mathematica/ParallelTools/tutorial/RemoteDefinitions.html","timestamp":"2014-04-20T16:31:41Z","content_type":null,"content_length":"50069","record_id":"<urn:uuid:513ba34e-385e-49d0-8778-0bbd79ceb57a>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
The uncertain reasoner's companion: A mathematical perspective
Results 1 - 10 of 61

- 1996. Cited by 28 (3 self). (A standard example of such a t-norm with its residuated implication is written out after this list.)
"... this paper we investigate some logics whose set of truth values is the real interval [0; 1] and we concentrate our attention to logics having a conjunction whose truth function t(x; y) is a t-norm, and having a corresponding residuated implication (or, as Pavelka [14] observes, the conjunction and the implication form an adjoint couple); i.e., if i(x; y) is the truth function of the implication ..."

- Synthese, 2000. Cited by 24 (3 self).
"This paper concerns the question of how to draw inferences common-sensically from uncertain knowledge. Since the early work of Shore and Johnson [10], Paris and Vencovská [6], and Csiszár [1], it has been known that the Maximum Entropy Inference Process is the only inference process which obeys certain common sense principles of uncertain reasoning. In this paper we consider the present status of this result and argue that within the rather narrow context in which we work this complete and consistent mode of uncertain reasoning is actually characterised by the observance of just a single common sense principle (or slogan)."

- IEEE Trans. Automatic Control, 1995. Cited by 18 (4 self).
"This paper proposes a high level language constituted of a small number of primitives and macros for describing recursive maximum likelihood (ML) estimation algorithms."

- Inconsistency Tolerance, Volume 3300 of Lecture Notes in Computer Science, 2005. Cited by 15 (8 self).
"Measures of quantity of information have been studied extensively for more than fifty years. The seminal work on information theory is by Shannon [67]. This work, based on probability theory, can be used in a logical setting when the worlds are the possible events. This work is also the basis of Lozinskii's work [48] for defining the quantity of information of a formula (or knowledgebase) in propositional logic. But this definition is not suitable when the knowledgebase is inconsistent. In this case, it has no classical model, so we have no "event" to count. This is a shortcoming since in practical applications (e.g. databases) it often happens that the knowledgebase is not consistent. And it is definitely not true that all inconsistent knowledgebases contain the same (null) amount of information, as given by the "classical information theory". As explored for several years in the paraconsistent logic community, two inconsistent knowledgebases can lead to very different conclusions, showing that they do not convey the same information. There has been some ..."

- ARTIF. INTELL, 2004. Cited by 11 (3 self).
"This paper is on the combination of two powerful approaches to uncertain reasoning: logic programming in a probabilistic setting, on the one hand, and the information-theoretical principle of maximum entropy, on the other hand. More precisely, we present two approaches to probabilistic logic programming under maximum entropy. The first one is based on the usual notion of entailment under maximum entropy, and is defined for the very general case of probabilistic logic programs over Boolean events. The second one is based on a new notion of entailment under maximum entropy, where the principle of maximum entropy is coupled with the closed world assumption (CWA) from classical logic programming. It is only defined for the more restricted case of probabilistic logic programs over conjunctive events. We then analyze the nonmonotonic behavior of both approaches along benchmark examples and along general properties for default reasoning from conditional knowledge bases. It turns out that both approaches have very nice nonmonotonic features. In particular, they realize some inheritance of probabilistic knowledge along subclass relationships, without suffering from the problem of inheritance blocking and from the drowning problem. They both also satisfy the property of rational monotonicity and several irrelevance properties. We finally present algorithms for both approaches, which are based on generalizations of techniques from probabilistic ..."

- 1997. Cited by 11 (3 self).
"This paper is a sequel to an earlier result of the authors that in making inferences from certain probabilistic knowledge bases the Maximum Entropy Inference Process, ME, is the only inference process respecting 'common sense'. This result was criticised on the grounds that the probabilistic knowledge bases considered are unnatural and that ignorance of dependence should not be identified with statistical independence. We argue against these criticisms and also against the more general criticism that ME is representation dependent. In a final section we however provide a criticism of our own of ME, and of inference processes in general, namely that they fail to satisfy compactness. Introduction and Notation: In [1] we gave a justification of the Maximum Entropy Inference Process, ME, by characterising it as the unique probabilistic inference process satisfying a certain collection of common sense principles. In the years following that publication a number of criticisms of these principles ..."

- International Journal of Approximate Reasoning, 2003. Cited by 10 (0 self).
"Cox's Theorem provides a theoretical basis for using probability theory as a general logic of plausible inference. The theorem states that any system for plausible reasoning that satisfies certain qualitative requirements intended to ensure consistency with classical deductive logic and correspondence with commonsense reasoning is isomorphic to probability theory. However, the requirements used to obtain this result have been the subject of much debate. We review Cox's Theorem, discussing its requirements, the intuition and reasoning behind these, and the most important objections, and finish with an abbreviated proof of the theorem."

- 1996. Cited by 9 (1 self).
"A new criterion is introduced for judging the suitability of various 'fuzzy logics' for practical uncertain reasoning in a probabilistic world, and the relationship of this criterion to several established criteria, and its consequences for truth functional belief, are investigated. Introduction: It is a rather widespread assumption in uncertain reasoning, and one that we shall make for the purpose of this paper, that a piece of uncertain knowledge can be adequately captured by attaching a real number (signifying the degree of uncertainty) on some scale to some unequivocal statement or conditional, and that an intelligent agent's knowledge base consists of a large, but nevertheless finite, set K of such expressions. Whether or not this is the correct picture for animate intelligent agents such as ourselves is, perhaps, questionable, but it is certainly the case that many expert systems (which one might feel should be included under the vague title of 'intelligent agent') have, by design ..."

- Soft Computing, 1997. Cited by 7 (0 self).
"We present a semantics for certain Fuzzy Logics of vagueness by identifying the fuzzy truth value an agent gives to a proposition with the number of independent arguments that the agent can muster in favour of that proposition. Introduction: In the literature the expression 'Fuzzy Logic' is used in two separate ways (at least). One is where 'truth values' are intended to stand for measures of belief (or confidence, or certainty of some sort) and the expression 'Fuzzy Logic' is taken as a synonym for the assumption that belief values are truth functional. That is, if $w(\theta)$ denotes an agent's belief value (on the usual scale $[0,1]$) for $\theta \in SL$, where $SL$ is the set of sentences of a finite propositional language $L$ built up using the connectives $\neg, \wedge, \vee$ (we shall consider implication later), then $w$ satisfies
$$w(\neg\theta) = F_{\neg}(w(\theta)), \quad w(\theta \wedge \phi) = F_{\wedge}(w(\theta), w(\phi)), \quad w(\theta \vee \phi) = F_{\vee}(w(\theta), w(\phi)), \qquad (1)$$
for some fixed functions $F_{\neg} : [0,1] \to [0,1]$ and $F_{\wedge}, F_{\vee} : [0,1]^2 \to [0,1]$, where $\theta, \phi \in SL$. Two p..."

- Algorithms, 2009.
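To make the notion in the first entry above concrete, here is a standard textbook example of a t-norm together with its residuated implication (the "adjoint couple" Pavelka refers to). It is written out here for orientation only and is not taken from the cited paper itself.

    For a left-continuous t-norm $t$, the residuum is
    \[
      i(x,y) \;=\; \sup\{\, z \in [0,1] : t(x,z) \le y \,\}, \qquad \text{so that} \quad t(x,z) \le y \iff z \le i(x,y).
    \]
    Product t-norm:
    \[
      t(x,y) = x\,y, \qquad i(x,y) = \begin{cases} 1 & \text{if } x \le y, \\ y/x & \text{if } x > y. \end{cases}
    \]
    Łukasiewicz t-norm:
    \[
      t(x,y) = \max(0,\; x+y-1), \qquad i(x,y) = \min(1,\; 1-x+y).
    \]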
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=881674","timestamp":"2014-04-25T04:01:51Z","content_type":null,"content_length":"37149","record_id":"<urn:uuid:f5207a79-ca95-47a5-8a8b-f0d28c0cc3c5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimization and Flow-Invariance via High Order Tangent Cones

Abstract (Summary)

The goals of this dissertation are: 1) to present some results on the flow-invariance of a closed set $S$ of a Banach space with respect to a differential equation, and to discuss optimization problems on $S$ as well; 2) to point out their unifying effect in the theory of differential equations and optimization. For the following optimization problem, one establishes necessary conditions of extremum in terms of the high order tangential directions to the constraint set at the extremum point: $F(x_0) = $ local minimum of $F(x)$ subject to $x \in S$, where $X$ is a normed space, $F : X \to \mathbb{R}$ is a function of class $C^p$ in a neighborhood of $x_0 \in S \subset X$, $S \neq \emptyset$, $p \ge 1$. The case when $S$ is the kernel $D_G$ of a function $G : X \to \mathbb{R}^m$, $m \ge 1$, is analyzed in detail. To this aim, one describes the high order tangent cones to the set $D_G$ at $x \in D_G$, and then derives some sufficient conditions for the optimality of $F$ on $D_G$. The characterizations of the high order tangent cones are also used to obtain some necessary and sufficient conditions for the flow-invariance of a subset $D_G = \{x \in X : G(x) = 0\}$ of a Banach space $X$ with respect to the differential equation $u^{(n)}(t) = F(u(t))$, $t \ge 0$, where $G : U \to \mathbb{R}^m$, $m \ge 1$, is an $n$-times Fréchet differentiable mapping on an open subset $U$ of $X$, $n \ge 3$, and $F : U \to X$ is locally Lipschitz. The examples discussed illustrate some applications of the results presented.

Bibliographical Information:
School: Ohio University
School Location: USA - Ohio
Source Type: Master's Thesis
Keywords: necessary and sufficient conditions of optimality; Fréchet differentiability; constrained optimization problems; Bouligand tangent cone; high order tangent vectors; flow invariance problem
Date of Publication: 01/01/2005
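For orientation, the simplest (first-order) instance of the kind of necessary condition this abstract refers to can be stated as follows. This classical statement is added here for context only; it is not a result from the dissertation itself.

    If $x_0$ is a local minimum of $F$ on $S$ and $F$ is differentiable at $x_0$, then
    \[
      F'(x_0)\,v \;\ge\; 0 \qquad \text{for every } v \in T_S(x_0),
    \]
    where $T_S(x_0)$ is the (Bouligand) tangent cone of $S$ at $x_0$, i.e. the set of all limits
    $v = \lim_{k\to\infty} (x_k - x_0)/t_k$ with $x_k \in S$, $x_k \to x_0$, $t_k \downarrow 0$.

The dissertation's results concern higher-order analogues of this condition, phrased in terms of high order tangent cones.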
{"url":"http://www.openthesis.org/documents/Optimization-Flow-Invariance-via-High-539101.html","timestamp":"2014-04-21T07:08:57Z","content_type":null,"content_length":"9357","record_id":"<urn:uuid:ea467b04-f227-4f6d-9b5e-c9f7e93e2b22>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: April 2005

Re: Re: Re: Infinite sum of gaussians

On 17 Apr 2005, at 16:07, Maxim wrote:

> This is wrong on several points. In fact, Sum[E^(-(z - k)^2/2), {k, -Infinity, Infinity}] is analytic everywhere in the complex plane. Since we already know that this sum is equal to Sqrt[2*Pi]*EllipticTheta[3, Pi*z, E^(-2*Pi^2)], all its properties, including analyticity, follow from the properties of EllipticTheta.
> Actually, since the sum of E^(-(z - k)^2/2) is very well-behaved (the terms decay faster than, say, E^(-k^2/4)) it is trivial to prove the uniform convergence in z and therefore the validity of the termwise differentiation as well as analyticity directly. The fact that the series is doubly infinite is of no importance; we can always rewrite it as two series from 1 to +Infinity.

Yes, you are right, and in fact that is what I thought at first. But after Carl Woll's message I realized that I could easily prove the following: if f[z] is everywhere complex analytic and Abs[z f[z]] -> 0 as z -> Infinity, then Sum[f[z], {z, -Infinity, Infinity}] == 0. I thought that this shows that the above sum can't be convergent everywhere, but I have not had time to think about it for more than a few minutes at a time, so I am probably missing something even now.

> Also it's not correct that a real infinitely differentiable function can be defined by its value and the values of its derivatives at a point. If we take f[z] == E^-z^-2 for z != 0 and f[0] == 0, then all the (real) derivatives at 0 vanish.

Of course, but nobody ever said that real analytic is the same as C^Infinity. Real analytic means that the Taylor series converges everywhere and is equal to the value of the function. This is all that was needed in this case anyway.

Andrzej Kozlowski
Chiba, Japan
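The closed form quoted by Maxim is easy to check numerically. The following lines are a sketch of such a check, added here for illustration; the evaluation point z = 0.3 and the truncation at |k| <= 50 are arbitrary choices and are not part of the original exchange.

    z = 0.3;
    lhs = Sum[Exp[-(z - k)^2/2], {k, -50, 50}];               (* truncated Gaussian comb; omitted terms are negligible *)
    rhs = Sqrt[2 Pi] EllipticTheta[3, Pi z, Exp[-2 Pi^2]];    (* the closed form quoted in the thread *)
    {lhs, rhs, lhs - rhs}                                     (* the two values agree to machine precision *)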
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Apr/msg00546.html","timestamp":"2014-04-18T19:05:52Z","content_type":null,"content_length":"37621","record_id":"<urn:uuid:75c28809-8eb0-47ec-9788-6d757a256000>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the tangent cone of a totally convex subset again totally convex?

To avoid worrying about the broadest possible context, let $X$ be an Alexandrov space with lower curvature bound and $C$ be a totally convex subset, i.e. for any $x,y \in C$ and any geodesic $\gamma$ (that is, a locally shortest path) connecting $x$ and $y$ we have $\gamma \subseteq C$. For $p \in C$ the tangent cone $K_pC \subset K_p X$ is thus well defined. My question is: is $K_pC$ totally convex as well?

It is not hard to see that $K_pC$ is convex in the sense that any unique shortest connection between points in $K_pC$ also lies within $K_pC$, solving this problem for example in the Riemannian case. (In fact, let $v,w \in K_pC$ together with a unique shortest geodesic $\gamma$ connecting the two points. Using the scaling invariance of the problem together with $(K_pC,0) = \lim_{\lambda \to \infty} (\lambda C,p)$, one may approximate $\gamma$ by geodesics contained in $C$. But I think in general it might not be possible to approximate arbitrary geodesics like this.)

Tags: dg.differential-geometry, mg.metric-geometry, alexandrov-geometry

The totally convex subset $C$ usually appears as a sublevel set of a convex function (I do not know other sources of totally convex subsets). In this case the $K_pC$ is also a sublevel set of a convex function. – Anton Petrunin Nov 22 '12 at 18:05
@Anton Petrunin: Thanks a lot. Indeed I encountered this problem for sublevel sets of a convex function, say $f$. If a sublevel $C$ corresponds to a nonminimal value $a$, I see that $K_pC$ is a sublevel of the differential $df_p$. But this is wrong if $a$ is minimal. Any hint what function to consider here? P.S. Anyhow the general question might be of interest. – wspin Nov 23 '12 at 12:19

1 Answer

(Too long for a comment.) The question is interesting and it might be hard. From the comments: The totally convex subset $C$ usually appears as a sublevel set of a locally Lipschitz convex function (I do not know other sources of totally convex subsets). If $C$ is a sublevel set of a convex function for a not minimal value $a$ then so is $K_pC$, in particular $K_pC$ is totally convex.

Related stuff. Instead of the tangent cone you might consider the same question for a (noncollapsing) Gromov--Hausdorff convergence $A_n\to A_\infty$. (In particular you may think that $A=A_n=A_\infty$ for all $n$ and $C_n$ is a sequence of totally convex sets.) Here are some relevant statements which might be useful.

• Any minimizing geodesic in $A_\infty$ can be approximated by minimizing geodesics in $A_n$. (Any minimizing geodesic can be approximated by a unique minimizing geodesic, which is approximated by minimizing geodesics in $A_n$.)
• If the $A_n$ are Riemannian then any geodesic in $A_\infty$ can be approximated by geodesics in $A_n$. (You approximate a minimizing piece and then extend the approximation.) The general case would follow if geodesics in an Alexandrov space without boundary have infinite extension with probability 1 (this is not known now).
• You might consider a version of the definition of totally convex set with quasigeodesics instead of geodesics. In this case the answer is NO; take $A_n=A_\infty$ to be a 2-dimensional cone and the sets $C_n$ which lie at distance $\ge 1$ from the tip, but for its limit $C_\infty$ there is a quasigeodesic which passes through the tip.

One example of interest if one considers general noncollapsing GH-limits: let $A_n = (M,g)$ be a fixed Riemannian manifold such that there exists precisely one closed geodesic, say $c$. Then any sequence of points not contained in $c$ is a sequence of totally convex sets. If a limit point lies within $c$, the limit is not totally convex anymore. There should be an example like this among complete metrics on the 2-dimensional plane. – wspin Nov 24 '12 at 17:48
@wspin, a point is totally convex if it is not on the tip of a geodesic loop (not a closed geodesic). So even if you have just one closed geodesic $c$, the points off $c$ do not have to form a totally convex set. – Anton Petrunin Nov 24 '12 at 21:01
@Anton Petrunin, of course you are right, I was careless there. Let's say there exists precisely one geodesic loop, which is in fact a simple closed geodesic ;) – wspin Nov 25 '12
{"url":"http://mathoverflow.net/questions/114168/is-the-tangent-cone-of-a-totally-convex-subset-again-totally-convex?sort=oldest","timestamp":"2014-04-19T04:52:22Z","content_type":null,"content_length":"60379","record_id":"<urn:uuid:e825ac7e-0c9f-41df-aa95-8d45f2d94b9c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Small probabilities From: D. F. Siemens, Jr. <dfsiemensjr@juno.com> Date: Wed Nov 09 2005 - 15:49:48 EST While it is true that some matters that we can predict only in probabilistic terms are the result of our inability to measure all the parameters or formulate the interconnections, this does not necessarily apply to all matters. As to random number generators, I recall that, many years back, it was clearly stated that these produced pseudo-random sequences. But strictly determined sequences can pass all known tests for randomness. The decimal expansion of pi is an example. Are there truly random sequences in the universe? I don't know of a rigorous proof, but quantum physics seems likely to result in true randomness, which I'm guessing would be preserved in string and M theories. I may be demonstrating my ignorance here, but my understanding of complexity theory, which applies to deterministic chaos in the world, provides that however much one may know about initial conditions, prediction can only be in terms of probability. Additionally, the combination of a few linear equations can produce nonlinearity and chaos. We did not notice this earlier because of our tendency to substitute an approximation whenever things began to get complicated. The application of complexity theory is a recent development. Your reference to "truly ontologically random (even to omniscience)," raises a variety of questions and problems. Underlying it seems to be a confusion between knowing and causing, which produces a lot of the nonsense written against divine omniscience. God can know fully even when there is genuine freedom in creation. However, there is another assumption in your qualification, that randomness precludes prediction, and that this applies to the timeless deity. If God is in time and can only fully know things up to the present moment, then he will have problems with predicting the future without total simple determinism, which seems impossible given chaos theory. However, Paul notes that divine foreknowledge already involves glorification, though we haven't seen it yet. I believe that orthodox thought demands that God be outside of time and space in order to be the Creator of the time-space universe--however many dimensions may be involved. But a simple illustration derived from Abbott's /Flatland/ shows that this is not necessary for total knowledge of temporal events. Spacelander could see the entire "universe" of Linelanders, as well as that of Flatlanders. Both Linelanders and Flatlanders were restricted to seeing their own little piece of their "universe." Similarly, any entity with more than a single temporal dimension could see the entire sweep of our one-dimensional time. All these matters were hashed and rehashed a while back. On Tue, 08 Nov 2005 22:41:58 -0600 Mervin Bitikofer <mrb22667@kansas.net> Aren’t terms like ‘randomness’ and ‘probability’ ultimately more a statement of perspective than of reality? I may refer to a series of computer generated numbers as random because they appear that way to me, but when I become aware of the algorithms used to produce the numbers, then I no longer view the sequence as random but as determined. In the same way coin flips only appear ‘random’ to us because of the overwhelming calculations that would be involved analyzing initial velocity & spin vectors, air currents, micro-gravitational influences, etc about the event. 
But if we had a ‘God’s eye’ perspective where are computational capabilities weren’t limited, then each coin flip is pre-determined, right? This, of course, assumes that the quantum uncertainty principle is merely a measurement problem rather than an ontological one. I.e. even though we won’t ever be able to simultaneously measure velocity & location of a particle, it would still have these definite properties (in principle) to be known by omniscience. Apart from this humanly inescapable ignorance, what could the concept of ‘randomness’ possibly mean? If something (presumably many things – like every electron movement) was truly ontologically random (even to omniscience), wouldn’t this require each so called random event to be divorced from the causality that underpins science? This would be indistinguishable from what we call ‘miraculous’ or ‘supernatural’ – except in that it would be common place, indeed always happening, at the microscopic level. <!--[if !supportEmptyParas]--> Perhaps some of you can explain to me how it is that these quantum uncertainties are supposed to have killed LaPlace’s demon. To my thinking, declaring that we can’t know something is not the same as concluding that it can’t (in principle) be knowable. It only states that we won’t ever be able to play LaPlace’s demon ourselves. Just like the Schrödinger’s cat example – which always has seemed ridiculous to me, like some sort of philosophical solipsism disguised as science. Can anybody enlighten me as to how it is that modern mathematicians or scientists so neatly dismiss these century old quandaries? I’m either missing something, or else everybody else just got tired of talking about it & moved on to some new faddish mistress like string theory. Until these questions are answered, I don’t see how any such thing as ‘randomness’ could be said to even exist. I’m certainly not a Calvinist, and I do believe in freewill though I have no idea how that could ever be explainable. But this whole discussion does put Dave’s reference to Proverbs 16:33 in an interesting light. (that all lots cast are decisions from the Lord). That was from a HPS post – sorry I’m mixing subject headings, but some of this fits together <!--[if !supportEmptyParas]--> Iain Strachan wrote: While everyone has got interested in the point-picking-from-a-line example, I don't believe that anyone has really addressed Bill's question about low probability "eliminating chance". One can get lost in the philosophy of picking a point from an infinite number of points, without seeing the real point (which was to argue against Dembski's notion that low probability can eliminate chance). I'd like to re-address this point. This is not to say that low probability can detect "design", which is a separate issue. Low probability by itself cannot "eliminate chance", because if every event is low probability, then one of them has to happen. Bill states that the probability of picking any point is zero yet a point is picked. To make it less abstract and in the realm of the real world, consider 200 coin tosses. You can say that the probability of any sequence occurring is 6.6x10^(-61) ( = 2^(-200)), which is exceptionally unlikely. Yet you toss a coin 200 times and lo and behold you've just witnessed an event with probability 6.6e-61. Clearly the low probability cannot eliminate chance by itself. 
Something like this happens with a technique I work with, called "Hidden Markov Models", which are used commonly in speech recognition (though I'm using them in a medical application). When these models are used to recognise speech, the speech signal is segmented into a number of frames, say 10ms long, and each frame is signal processed to produce a vector of numbers (usually some frequency domain analysis). Then in order to recognise a word, one constructs a probabilistic model that evaluates a probability for the entire sequence of these vectors. Now, the probability for the whole lot is simply the product of the probabilities for each individual one, so if there are many hundreds of samples, then you get incredibly small probabilities. Now here lies a problem: you would like to have a number of different models for different words that you might want to recognise, eg "one" "two" "three" etc. But the length of time people take to say "one" might vary a lot, and clearly it takes longer to say "seven" than it does to say "one". So because there are many more samples in the sequence when you say "seven", it will of necessity have a much lower probability, just as a sequence of 200 coin tosses has a lower probability than a sequence of 100. The raw probability isn't sufficient to discriminate between the two. But what you can compute is an expected value of the probability churned out by the model. If you say "one" into a model that is designed to recognise "seven", the probability will be many orders of magnitude lower than if you said "seven" (because the probability assigned to each of the vectors in the 10ms time frames will be much lower) so you can do the discrimination, and the confidence you have in rejecting it could be given by the ratio of the two probabilities. Likewise, with a sequence of coins, Dembski uses the notion of compressibility. Any arbitrary sequence of 200 coin tosses will on average require 200 "bits" to describe it. But if you describe it as 50 reps of HTHH, then clearly you have a much shorter description. Say this can be fitted into 25 bits in some specification language. Now the number of 25 bit strings is 2^25 and the number of 200 coin toss sequences is 2^200, so it follows that the probability of getting a 200 sequence of coin tosses describable in 25 bits is 2^(-175) = 2.08x10^(-53). This low probability can be used to "eliminate" chance - you don't expect to get that kind of repetition in a sequence of coin All the above is not to say that this detects design as such. There may be a naturalistic explanation of why you got 50 reps of HTHH. But it does clearly detect non-randomness. Hope this answers some of your question. On 11/6/05, Bill Hamilton <williamehamiltonjr@yahoo.com> wrote: I read Dembski's response to Henry Morris and noted that it raised an old issue I've harped on before: that you can specify a probability below which chance is eliminated. There is a counterexample given (among other places) in Davenport and Root's book Signals and Noise" (McGraw Hill, probably sometime in the early 60's) that goes Draw a line 1 inch long. Randomly pick a single point on that line. The probability of picking any point on the line is identically zero. Yet a is picked. Am I missing something? I will probably unsubscribe this evening, because I don't really have during the week to read this list. However, I will watch the archive for responses and either resubscribe or resspond offline as appropriate. Bill Hamilton William E. Hamilton, Jr., Ph.D. 
586.986.1474 (work) 248.652.4148 (home) 248.303.8651 (mobile) "...If God is for us, who is against us?" Rom 8:31 Yahoo! Mail - PC Magazine Editors' Choice 2005 There are 3 types of people in the world. Those who can count and those who can't. Received on Wed Nov 9 15:54:38 2005
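Two of the quantitative points made in this thread are easy to check directly. The lines below are an added illustration and are not part of the original emails; the point-picking example is modeled with a uniform distribution on a unit interval.

    N[2^-200]          (* probability of any particular sequence of 200 fair coin tosses: about 6.2*10^-61 *)
    N[2^25/2^200]      (* probability that a 200-toss sequence admits a 25-bit description: 2^-175, about 2.1*10^-53 *)

    (* picking a point uniformly from a unit-length segment: any single point has probability zero,
       yet intervals of positive length have positive probability *)
    Probability[x == 1/2, x \[Distributed] UniformDistribution[{0, 1}]]           (* 0 *)
    Probability[1/4 <= x <= 1/2, x \[Distributed] UniformDistribution[{0, 1}]]    (* 1/4 *)

This is exactly the distinction behind Hamilton's puzzle: every individual point has probability zero, yet some point is picked, so "probability zero" for a single outcome does not by itself rule out chance.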
{"url":"http://www2.asa3.org/archive/asa/200511/0142.html","timestamp":"2014-04-16T16:33:08Z","content_type":null,"content_length":"17423","record_id":"<urn:uuid:108635e6-047a-411b-8674-c0ef2135ab97>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Best case

In computer science, the best, worst, and average cases of a given algorithm express what the resource usage is at least, at most, and on average, respectively. Usually the resource being considered is running time, but it could also be memory or another resource. In real-time computing, the worst-case execution time is often of particular concern since it is important to know how much time might be needed in the worst case to guarantee that the algorithm would always finish on time. Average performance and worst-case performance are the most used in algorithm analysis. Less widely found is best-case performance, but it does have uses: for example, knowing the best cases of individual tasks can be used to improve the accuracy of an overall worst-case analysis. Computer scientists use probabilistic analysis techniques, especially expected value, to determine expected running times.

Best-case performance

The term best-case performance is used in computer science to describe the way an algorithm behaves under optimal conditions. For example, a simple linear search on an array has a worst-case performance O(n) (for the case where the desired element is the last, so the algorithm has to check every element; see Big O notation), and average running time of O(n) (the average position of an element is the middle of the array, i.e. at position n/2, and O(n/2) = O(n)), but in the best case the desired element is the first element in the array and the run time is O(1). Development and choice of algorithms is rarely based on best-case performance: most academic and commercial enterprises are more interested in improving average performance and worst-case performance.

Worst case versus average case performance

Worst-case performance analysis and average case performance analysis have similarities, but usually require different tools and approaches in practice. Determining what average input means is difficult, and often that average input has properties which make it difficult to characterise mathematically (consider, for instance, algorithms that are designed to operate on strings of text). Similarly, even when a sensible description of a particular "average case" (which will probably only be applicable for some uses of the algorithm) is possible, they tend to result in more difficult to analyse equations. Worst-case analysis has similar problems: typically it is impossible to determine the exact worst-case scenario. Instead, a scenario is considered which is at least as bad as the worst case. For example, when analysing an algorithm, it may be possible to find the longest possible path through the algorithm (by considering the maximum number of loops, for instance) even if it is not possible to determine the exact input that could generate this. Indeed, such an input may not exist. This leads to a safe analysis (the worst case is never underestimated), but which is pessimistic, since no input might require this path. Alternatively, a scenario which is thought to be close to (but not necessarily worse than) the real worst case may be considered. This may lead to an optimistic result, meaning that the analysis may actually underestimate the true worst case. In some situations it may be necessary to use a pessimistic analysis in order to guarantee safety. Often however, a pessimistic analysis may be too pessimistic, so an analysis that gets closer to the real value but may be optimistic (perhaps with some known low probability of failure) can be a much more practical approach.
When analyzing algorithms which often take a small time to complete, but periodically require a much larger time, amortized analysis can be used to determine the worst-case running time over a (possibly infinite) series of operations. This amortized worst-case cost can be much closer to the average case cost, while still providing a guaranteed upper limit on the running time. Practical consequences Many problems with bad worst-case performance have good average-case performance. For problems we want to solve, this is a good thing: we can hope that the particular instances we care about are average. For cryptography, this is very bad: we want typical instances of a cryptographic problem to be hard. Here methods like random self-reducibility can be used for some specific problems to show that the worst case is no harder than the average case, or, equivalently, that the average case is no easier than the worst case. • In the worst case, linear search on an array must visit every element once. It does this if either the element being sought is the last element in the list, or if the element being sought is not in the list. However, on average, assuming the input is in the list, it visits only n/2 elements. • Applying insertion sort on n elements. On average, half the elements in an array A[1] ... A[j-1] are less than an element A[j], and half are greater. Therefore we check half the subarray so t[j] = j/2. Working out the resulting average case running time yields a quadratic function of the input size, just like the worst-case running time. • The popular sorting algorithm Quicksort has an average case performance of O(n log n), which contributes to making it a very fast algorithm in practice. But given a worst-case input, its performance can degrade to O(n^2). See also • Sorting algorithm - an area where there is a great deal of performance analysis of various algorithms.
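To make the linear-search example above concrete, here is a small sketch written purely for illustration; the function name linearSearch and the test data are arbitrary choices.

    linearSearch[list_, x_] := Module[{i = 1, n = Length[list]},
      While[i <= n && list[[i]] =!= x, i++];    (* examine elements one by one *)
      If[i <= n, {i, i}, {0, n}]                (* {position (0 if absent), number of elements examined} *)
    ]

    data = Range[1000];
    linearSearch[data, 1]       (* {1, 1}       best case: the target is the first element *)
    linearSearch[data, 1000]    (* {1000, 1000} worst case: the target is the last element *)
    linearSearch[data, -5]      (* {0, 1000}    worst case: the target is absent *)

For a target that is present at a uniformly random position, the expected number of elements examined is about n/2, matching the average-case discussion above.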
{"url":"http://www.reference.com/browse/best+case","timestamp":"2014-04-18T06:58:45Z","content_type":null,"content_length":"84212","record_id":"<urn:uuid:87d0608a-1ef3-426b-be67-4b8ce5ad1eac>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
The Net Advance of Physics RETRO: ÆTHER THEORIES in the Long Nineteenth Century. Most material in Net Advance Retro antedates 1920. It may be obsolete or incorrect. • General: HISTORY: • General: □ Examination of the Theory of a Resisting Medium, in which it is assumed that the Planets and Comets of our System are moved by Roswell Willson Haskins [American Journal of Science 33, 1 □ Notes on the Principles of Pure and Applied Calculation ; and Applications of Mathematical Principles to Theories of the Physical Forces by Rev. James Challis [Cambridge: Deighton and Bell, □ An Essay on the Mathematical Principles of Physics by Rev. James Challis [Cambridge: Deighton and Bell, 1873] □ Remarks on the Cambridge Mathematical Studies, and their Relation to Modern Physical Science by Rev. James Challis [Cambridge: Deighton and Bell, 1875] □ Physik des Aethers auf elektromagnetischer Grundlage by Paul Drude [Stuttgart: Enke, 1894] □ The Principle of Relativity by Ebenezer Cunningham [Cambridge, 1914] An advanced text of remarkable sophistication, especially for one of the first book-length accounts of special relativity in English. Cunningham is both a scientific conservative and a defender of Einstein. He points out that not every form of æther is ruled out by relativity but only the "unnecessarily restricted rigid" one of the later Victorians; he makes an early attempt at relativistic thermodynamics; he discusses the vexing question of defining probability in Minkowski's spacetime. □ Relativity and the Electron Theory by Ebenezer Cunningham [London: Longmans, Green, 1915] Following up on his Principle of Relativity, Cunningham continues his attempt to describe special relativity as a theory of æther. □ Sidelights on Relativity by Albert Einstein [London: Methuen, 1922] • Aspects: ELECTROMAGNETISM; SPECIAL RELATIVITY;
{"url":"http://web.mit.edu/redingtn/www/netadv/SPaether.html","timestamp":"2014-04-17T01:02:54Z","content_type":null,"content_length":"3563","record_id":"<urn:uuid:c878c7ed-4d9e-4ce3-b54a-ed718207bea3>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 34

- Journal of Computer and System Sciences, 1991. Cited by 71 (2 self).
"We explain and advance Levin's theory of average case completeness. In particular, we exhibit examples of problems complete in the average case and prove a limitation on the power of deterministic reductions."

- Complexity Theory Retrospective II, 1997. Cited by 31 (2 self).
"Being NP-complete has been widely interpreted as being computationally intractable. But NP-completeness is a worst-case concept. Some NP-complete problems are "easy on average", but some may not be. How is one to know whether an NP-complete problem is "difficult on average"? The theory of average-case computational complexity, initiated by Levin about ten years ago, is devoted to studying this problem. This paper is an attempt to provide an overview of the main ideas and results in this important new sub-area of complexity theory."

- In Proceedings of the 6th Annual Structure in Complexity Theory Conference, 1991. Cited by 27 (7 self).
"In this paper, we study connections among one-way functions, hard on the average problems, and statistical zero-knowledge proofs. In particular, we show how these three notions are related and how the third notion can be better characterized, assuming the first one."

- Des. Codes Cryptogr., 1998. Cited by 26 (0 self). (A toy sketch of hiding a clique in a random graph is given after this list.)
"We demonstrate how a well studied combinatorial optimization problem may be introduced as a new cryptographic function. The problem in question is that of finding a "large" clique in a random graph. While the largest clique in a random graph is very likely to be of size about $2\log_2 n$, it is widely conjectured that no polynomial-time algorithm exists which finds a clique of size $(1+\epsilon)\log_2 n$ with significant probability for any constant $\epsilon > 0$. We present a very simple method of exploiting this conjecture by "hiding" large cliques in random graphs. In particular, we show that if the conjecture is true, then when a large clique -- of size, say, $(1+2\epsilon)\log_2 n$ -- is randomly inserted ("hidden") in a random graph, finding a clique of size $(1+\epsilon)\log_2 n$ remains hard. Our result suggests several cryptographic applications, such as a simple one-way function. 1 Introduction: Many hard graph problems involve finding a subgraph of an input graph G = (V, E) with a certain ..."

- STOC 94, 1994. Cited by 23 (2 self).
"Von Neumann's Min-Max Theorem guarantees that each player of a zero-sum matrix game has an optimal mixed strategy. We show that each player has a near-optimal mixed strategy that chooses uniformly from a multiset of pure strategies of size logarithmic in the number of pure strategies available to the opponent. Thus, for exponentially large games, for which even representing an optimal mixed strategy can require exponential space, there are near-optimal, linear-size strategies. These strategies are easy to play and serve as small witnesses to the approximate value of the game. Because of the fundamental role of games, we expect this theorem to have many applications in complexity theory and cryptography. We use it to strengthen the connection established by Yao between randomized and distributional complexity and to obtain the following results: (1) Every language has anti-checkers, small hard multisets of inputs certifying that small circuits can't decide the language. (2) Circuits of a given size can generate random instances that are hard for all circuits of linearly smaller size. (3) Given an oracle M for any exponentially large game, the approximate value of the game and near-optimal strategies for it can be computed relative to M. (4) For any NP-complete language L, the problems of (a) computing a hard distribution of instances of L and (b) estimating the circuit complexity of L are both solvable at a low level of the polynomial hierarchy."

- Problems of Information Transmission, 2003. Cited by 20 (0 self).
"All the king's horses, and all the king's men, couldn't put Humpty together again. The existence of one-way functions (owf) is arguably the most important problem in computer theory. The article discusses and refines a number of concepts relevant to this problem. For instance, it gives the first combinatorial complete owf, i.e., a function which is one-way if any function is. There are surprisingly many subtleties in basic definitions. Some of these subtleties are discussed or hinted at in the literature and some are overlooked. Here, a unified approach is attempted."

- SIAM Journal on Computing, 1995. Cited by 20 (1 self).
"In the theory of worst case complexity, NP completeness is used to establish that, for all practical purposes, the given NP problem is not decidable in polynomial time. In the theory of average case complexity, average case completeness is supposed to play the role of NP completeness. However, the average case reduction theory is still at an early stage, and only a few average case complete problems are known. We present the first algebraic problem complete for the average case under a natural probability distribution. The problem is this: given a unimodular matrix X of integers, a set S of linear transformations of such unimodular matrices and a natural number n, decide if there is a product of n (not necessarily different) members of S that takes X to the identity matrix. 1 Introduction: The theory of NP completeness is very useful. It allows one to establish that certain NP problems are NP complete and therefore, for all practical purposes, not decidable in polynomial time (PTime). ..."

- Electronic Colloquium on Computational Complexity, 1997. Cited by 18 (2 self).
"In 1984, Leonid Levin initiated a theory of average-case complexity. We provide an exposition of the basic definitions suggested by Levin, and discuss some of the considerations underlying these definitions. Keywords: average-case complexity, reductions. This survey is rooted in the author's (exposition and exploration) work [4], which was partially reproduced in [1]. An early version of this survey appeared as TR97-058 of ECCC. Some of the perspective and conclusions were revised in light of a relatively recent work of Livne [21], but an attempt was made to preserve the spirit of the original survey. The author's current perspective is better reflected in [7, Sec. 10.2] and [8], which advocate somewhat different definitional choices (e.g., focusing on typical rather than average performance of algorithms)."

- In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2006. Cited by 16 (7 self).
"We show that 2-tag systems efficiently simulate Turing machines. As a corollary we find that the small universal Turing machines of Rogozhin, Minsky and others simulate Turing machines in polynomial time. This is an exponential improvement on the previously known simulation time overhead and improves a forty year old result in the area of small universal Turing machines."

- Proceedings of the 20th Annual Conference on Computational Complexity (CCC), 2005. Cited by 16 (5 self).
"We prove that if $\mathrm{NP} \not\subseteq \mathrm{BPP}$, i.e., if some NP-complete language is worst-case hard, then for every probabilistic algorithm trying to decide the language, there exists some polynomially samplable distribution that is hard for it. That is, the algorithm often errs on inputs from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean that there exists one fixed samplable distribution that is hard for all probabilistic polynomial time algorithms, which is a prerequisite assumption needed for OWF and cryptography (even if not a sufficient assumption). Nevertheless, we do show that there is a fixed distribution on instances of NP-complete languages, that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial time algorithms (unless NP is easy in the worst-case). Our results are based on the following lemma that may be of independent interest: given the description of an efficient (probabilistic) algorithm that fails to solve SAT in the worst-case, we can efficiently generate at most three Boolean formulas (of increasing ..."
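The clique-hiding construction described in the Des. Codes Cryptogr. entry above is easy to sketch with adjacency matrices. The following is a toy illustration written for this listing, not the cited authors' construction; the sizes n = 200 and k = 16 (a little above 2 log2 n) are arbitrary choices.

    n = 200; k = 16;                                     (* graph size and planted clique size *)

    upper = Table[If[i < j, RandomInteger[], 0], {i, n}, {j, n}];
    adj = upper + Transpose[upper];                      (* adjacency matrix of a random graph, edge probability 1/2 *)

    clique = RandomSample[Range[n], k];                  (* choose k vertices at random *)
    Do[If[i != j, adj[[i, j]] = 1], {i, clique}, {j, clique}];   (* force every edge among them: the hidden clique *)

    {Total[adj, 2]/2, n (n - 1)/4}                       (* edge count after planting vs. expected count before: nearly identical *)

The point of the construction is that the planted clique changes the global statistics of the graph only slightly, which is why recovering it is believed to be hard.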
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=666319","timestamp":"2014-04-16T22:57:39Z","content_type":null,"content_length":"38263","record_id":"<urn:uuid:faff676c-2d7c-4d77-9879-a9e5facb2ca3>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: A fundamental element of the study of 3-manifolds is Thurston's remarkable geometrization conjecture, which states that the interior of every compact 3-manifold has a canonical decomposition into pieces that have geometric structures. In most cases, these structures are complete metrics of constant negative curvature, that is to say, they are hyperbolic manifolds. The conjecture has been proved in some important cases, such as Haken manifolds and certain types of fibered manifolds. The influence of Thurston's hyperbolization theorem on the geometry and topology of 3-manifolds has been tremendous. This book presents a complete proof of the hyperbolization theorem for 3-manifolds that fiber over the circle, following the plan of Thurston's original (unpublished) proof, though the double limit theorem is dealt with in a different way. The book is suitable for graduate students with a background in modern techniques of low-dimensional topology and will also be of interest to researchers in geometry and topology. This is the English translation of a volume originally published in 1996 by the Société Mathématique de France. Titles in this series are co-published with Société Mathématique de France. SMF members are entitled to AMS member discounts. Graduate students and research mathematicians interested in low-dimensional topology and geometry. From a review of the French edition: "The book is very well written ... completely self-contained ..." -- Mathematical Reviews • Teichmüller spaces and Kleinian groups • Real trees and degenerations of hyperbolic structures • Geodesic laminations and real trees • Geodesic laminations and the Gromov topology • The double limit theorem • The hyperbolization theorem for fibered manifolds • Sullivan's theorem • Actions of surface groups on real trees • Two examples of hyperbolic manifolds that fiber over the circle • Geodesic laminations • Bibliography • Index
{"url":"http://ams.org/bookstore?fn=20&arg1=smfamsseries&ikey=SMFAMS-7","timestamp":"2014-04-19T17:44:52Z","content_type":null,"content_length":"16181","record_id":"<urn:uuid:3c43f275-11fb-42b6-b446-548efc796053>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 9

-4x+3y=3, 7x-9y=-6; solve.

Josh completed 2 math problems in 2 minutes. Julia says he did 3/5 of a problem each minute. Is she correct?

Ms. Nikkel wants to divide her class of 23 into 4 equal teams. Is this reasonable? Why or why not?

Math - Alg 2: Which logarithmic equation is equivalent to the exponential equation below? 3^x = 28

Math - Alg 2: The number of computers sold by BCC depends on the dollar amount, x, that they spend on advertising. How many computers will they sell by spending $80,000 on advertising? Round to the nearest whole number and do not include units in your answer. N(x) = 100 + 20 * ln(0.25x)

mystery number: What is the three digit mystery number? It is less than 300. The tens digit is 4 more than the hundreds digit. The sum of the digits is 9. The hundreds digit is 2 less than the ones digit.

physical science: Suppose one of your test tubes has a capacity of 23 mL. You need to use about 5 mL of a liquid. Describe how you could estimate 5 mL.

Can someone please explain this problem to me? I would appreciate it very much.
1. y = x^2 - 2mx + (2m + 3)
2. D = b^2 - 4ac
3. D = (-2m)^2 - 4(1)(2m + 3)
4. 4m^2 - 8m - 12
5. 4(m^2 - 2m - 3)
6. 4(m + 1)(m - 3)
m (> or = to) 3, m (< or = to) -1
Then you draw the parabola wit...
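The discriminant calculation in the last post above can be verified directly. The two lines below are an added check (reading the constant term as 2m + 3, as in the stated quadratic); they are not part of the original posts.

    Discriminant[x^2 - 2 m x + (2 m + 3), x]    (* 4 m^2 - 8 m - 12 *)
    Reduce[% >= 0, m]                           (* m <= -1 || m >= 3 *)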
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=CAMILA","timestamp":"2014-04-21T05:40:18Z","content_type":null,"content_length":"7554","record_id":"<urn:uuid:5980e0c6-6245-4eb8-8d89-92a54ec6e16b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Proper Group action on a metric space

Let $(X,d)$ be a metric space and $C\subset X$ be a compact subset. Let furthermore $G$ be a group that acts on $X$ properly and by isometries. Does there exist an $\epsilon > 0$ such that, with $U = \{x\in X \mid d(x,C) < \epsilon\}$, there are only finitely many $g\in G$ such that $g(U) \cap U \neq \emptyset$?

Cheers, Helge

Tags: mg.metric-geometry, geometric-group-theory, real-analysis, gt.geometric-topology

Does one need the assumption that $(X,d)$ is a metric space where $d$ is a length metric? – Helge Jun 13 '13 at 18:12
Being a length metric is irrelevant. On the other hand, you ought to assume the metric space is proper (closed balls are compact), particularly since you are assuming that the action is proper (the induced map $G \times X \to X \times X$ is a proper function). In that case what you ask for is true, and is a trivial consequence of the definitions. – Lee Mosher Jun 13 '13 at 21:14
Helge: What Lee says is correct; such actions are usually called "metrically properly discontinuous". In your setting, you probably meant "properly discontinuous" (rather than just proper). Then the assertion is correct and is a nice exercise in point-set topology, although not as easy as the one for "metrically proper" actions. – Misha Jun 14 '13 at 3:07
{"url":"http://mathoverflow.net/questions/133677/proper-group-action-on-a-metric-space","timestamp":"2014-04-19T20:23:06Z","content_type":null,"content_length":"49824","record_id":"<urn:uuid:e4d6bbb0-21cd-4552-99d8-3a09dbfa7b03>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help. Posted by anthony on Thursday, December 4, 2008 at 7:02pm. I need help with these problems please: which is the numeral set for [2x-]<5

• math - Cianán, Thursday, December 4, 2008 at 8:08pm
You can treat greater-than and less-than signs much like equals signs: you can multiply both sides by a number, or add the same thing to both sides, without changing the result. The only thing you have to be careful about is multiplying by -1, because then the direction of the > or < changes.
6 - 2x < 4
-2x < 4 - 6
-2x < -2
When I multiply both sides by -1 the less-than becomes a greater-than:
2x > 2
x > 1
If you want the reason why the direction changes, reply back to this.
Ok, part 2: 2x < 5, therefore x < 5/2, i.e. x < 2.5.
1: -4<x<1 sits within the interval above.
2: x<-4 or x>1 doesn't work, as x can't be larger than 2.5, so x>1 fails.
3: -1<x<4 doesn't work, as 4 > 2.5, so it fails.
4: x<-1 or x>4 doesn't work, as x>4 fails since x cannot be greater than 2.5.
Ok, hope that helps!
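For completeness (reading the square brackets as absolute value), the example worked above needs both branches:

$|6-2x|<4 \iff -4 < 6-2x < 4 \iff 2 < 2x < 10 \iff 1 < x < 5,$

so the single branch $x>1$ obtained above is only half of the answer.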
{"url":"http://www.jiskha.com/display.cgi?id=1228435335","timestamp":"2014-04-18T08:37:56Z","content_type":null,"content_length":"9283","record_id":"<urn:uuid:447ae5f5-cde9-4556-8702-a6b47a164b82>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
The State of my Thesis - Part 2 20 September 2010, 21:34 So, the world cup hit. And I spent a month away from Uni and people and everything, with my body mentally in South Africa but physically here in New Zealand. It was a good month… A while before I left, we realised that we were getting nowhere with Secret Sharing, and that wasn’t likely to change unless I got tenured. Which I’m nowhere near. So, a new topic was needed. Something that could potentially lead to a paper and some actual results. But I have a knack for picking the wrong topics… Anyway, the new topic we decided on was “Golden Mean Matroids”. Simply put, a matroid is golden mean (GM) if it is representable over GF(4) and GF(5) (the actual definition uses subdeterminants and partial fields, so I’ll spare you). In particular, we were trying to characterise the maximum sized GM matroids. Some preliminary work was done by Archer in his PhD thesis, using some buggy software called macek. He conjectured that the maximum-sized GM matroids came in three families: GI, GP, and T. He gave matrix representations for the GI and GP families, with the T family coming from Semple’s PhD thesis. We quickly found Dowling representations for both the T family and the GI family, but the GP family eluded us for awhile. Turns out it’s not golden mean. Which is nice, so now we only have 2 infinite families to play with (and some junk at rank 3). The basic strategy is to take a maximum-sized GM matroid at rank k+1, contract a point something. What can we say about the original matroid and the contraction? As it turns out, the answer to that question is “not much in the time we have available”. So, a new problem was needed. Again. Thankfully, there is a nice trick in Matroid Theory to make problems easier: excluded minors. So we excluded a minor, which I have called Γ. It’s the relaxation of the non-Fano, so I suppose it should be called (F[7]^–)^– or something like that. So that’s where I am now. I’m working on a subclass of the maximum-sized GM matroids that had better be just the T family. And once that is done, I’ll have to write up all the stuff I’ve done nicely, and that’s my thesis. After that, who knows? A PhD most probably, but where? Posted by Michael Welsh at 09:34. Commenting is closed for this article.
{"url":"http://yomcat.geek.nz/blog/5/the-state-of-my-thesis-part-2","timestamp":"2014-04-19T04:19:23Z","content_type":null,"content_length":"7220","record_id":"<urn:uuid:a66062a6-81cb-431c-99d3-bc2b59a3c469>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
How to find information about a group given its presentation
November 16th 2011, 01:35 PM #1 Oct 2011
I am given the group presentation <x,y | x^8=1, x^4=y^2, xy=yx^{-1}> and am told to prove that it defines a 2-group of order at most 16. I've been playing around with the relations but am not really sure how to go about this. I thought I might have to see how many different elements can be made and check that it's at most 16, but I don't know about the 2-group part. In general I'm not sure how to go about finding information about a group knowing its presentation. Any help? thank youuu

Re: How to find information about a group given its presentation
Let $G$ be your group and $g \in G$, so $g = g_1g_2 \cdots g_n$ for some $n$, where each $g_i$ is either $x^{\pm 1}$ or $y^{\pm 1}$. Now, in $g$ we can replace $xy$ with $yx^{-1}$ and $x^{-1}y$ with $yx$, and so we will eventually get $g = y^ix^j$ for some $i,j$. Since $x^8=1$ and $y^2=x^4$, we have $0 \leq i \leq 1$ and $0 \leq j \leq 7$, and thus $|G| \leq 16$. Finally, since $xy = yx^{-1}$, we have $x^my=yx^{-m}$ for all $m$. Use this to show that $g^8=1$ for all $g \in G$, and so $G$ is a $2$-group.
Last edited by NonCommAlg; November 17th 2011 at 09:42 AM.

Re: How to find information about a group given its presentation
Thanks very much for the help!

Re: How to find information about a group given its presentation
This is a good question, but with a somewhat rubbishy answer: no one does. Not really. I mean, there are some presentations where you cannot tell if a given element is equal to the identity or not! Look up "the word problem for groups". Also, there is a famous example from the 60s or so, where John Conway posed a problem in the Notices to prove that a certain group was cyclic of order 15, I believe. It took something like two years for the solution to be found! Look up Fibonacci groups for more details. I think the group was $\langle x_1, x_2, x_3, x_4, x_5 ; x_1{x_2}=x_3, x_2{x_3}=x_4, x_3{x_4}=x_5, x_4{x_5}=x_1, x_5{x_1}=x_2\rangle$.
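To fill in the last step that NonCommAlg leaves as an exercise, one way to finish (a sketch of the hint, not part of the original thread) is: every element has the form $g = x^j$ or $g = yx^j$. Clearly $(x^j)^8 = (x^8)^j = 1$. For the other case, using $x^jy = yx^{-j}$,

$(yx^j)^2 = y\,(x^jy)\,x^j = y\cdot yx^{-j}\cdot x^j = y^2 = x^4,$

so $(yx^j)^4 = x^8 = 1$. Hence every element has order dividing 8, a power of 2, so $G$ is a 2-group, and $|G| \leq 16$ as shown above.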
{"url":"http://mathhelpforum.com/advanced-algebra/192058-how-find-information-about-group-given-its-presentation.html","timestamp":"2014-04-17T10:41:43Z","content_type":null,"content_length":"46482","record_id":"<urn:uuid:82a543b4-bef7-47a4-807c-95d0364b7e07>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Exegetic Analytics extols the wonders of the foreach package for iterative operations that go beyond the standard "for" loop in R. For example, here's a neat (if not optimally efficient) construct using filters to calculate the primes less than 100: foreach(n = 1:100, .combine = c) %:% when (isPrime(n)) %do% n  The open-source team at Revolution Analytics created the foreach...

ECVP tutorial on classification images: The slides for my ECVP tutorial on classification images are available here. Try this alternative version if the equations look funny. (image from Mineault et al. 2009) The slides are in HTML and contain some interactive elements. They're the result of experimenting with R Markdown, D3 and pandoc. You write the slides in R Markdown,

Making regex examples work for you! One of the most frequently used string recognition algorithms out there is regex, and R implements regex. However, users can often be frustrated by how, despite taking examples verbatim from sources such as Stack Overflow, they do not seem to ...

Knitr/Markdown OpenCPU App: A new little OpenCPU app allows you to knit and markdown in the browser. It has a fancy-pants code editor which automatically updates the output after 3 seconds of inactivity. It uses the Ace web editor with mode-r.js (thanks to RStudio for making the latter available). Like all OpenCPU apps, the source package lives in the opencpu app...

Drafting the Best Starting Lineup in Fantasy Football by Taking into Account Uncertainty in the Projections: An Optimization Simulation. In a previous post, I showed how to determine the best starting lineup to draft using an optimizer tool. The optimizer identifies the players that maximize your projected points within your risk tolerance. The optimizer does not take i...

Plot Weekly or Monthly Totals in R: When plotting time series data, you might want to bin the values so that each data point corresponds to the sum for a given month or week. This post will show an easy way to use cut and ggplot2's stat_summary to plot month totals in R wi...

A simple amortization function: I was working on a project yesterday where I needed to amortize out a bunch of loans to calculate the total interest a borrower would pay if he or she paid the minimum monthly payment for the full term of the loan. I couldn't find any package in R that already contained the necessary math,

R and Linear Algebra, by Joseph Rickert: I was recently looking through upcoming Coursera offerings and came across the course Coding the Matrix: Linear Algebra through Computer Science Applications taught by Philip Klein from Brown University. This looks like a fine course; but why use Python to teach linear algebra?
I suppose this is a blind spot of mine: MATLAB I can see....
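The amortization teaser above describes a standard fixed-payment loan calculation. As a rough illustration of the same arithmetic (the function name total_interest and the numbers below are my own, not the author's R code; this is just the textbook annuity formula plus a month-by-month loop):

    # Total interest paid on a fixed-rate, fixed-payment loan.
    # principal: amount borrowed, annual_rate: nominal yearly rate, months: term length.
    def total_interest(principal, annual_rate, months):
        r = annual_rate / 12.0                      # monthly interest rate
        if r == 0:
            return 0.0                              # a zero-rate loan accrues no interest
        # standard annuity formula for the constant monthly payment
        payment = principal * r / (1 - (1 + r) ** -months)
        balance = principal
        interest = 0.0
        for _ in range(months):
            accrued = balance * r                   # interest accrued this month
            interest += accrued
            balance += accrued - payment            # add the interest, then subtract the payment
        return interest

    # example: 10,000 borrowed at 6% for 5 years
    print(round(total_interest(10000, 0.06, 60), 2))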
{"url":"http://www.r-bloggers.com/2013/08/page/2/","timestamp":"2014-04-19T07:26:24Z","content_type":null,"content_length":"37830","record_id":"<urn:uuid:f4a0e565-bec1-4efd-ba23-5206a30d95fa>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Uniqueness of values in recurrence relations up vote 6 down vote favorite Given an integer $k > 1$, define the sequences $X(k,n), Y(k,n)$ as follows: $a=4k-2,$ $y_0 = 1,$ $y_1 = a + 1,y_n = ay_{n-1} - y_{n-2}$ $b = 4k + 2,$ $ x_0 = 1,$ $x_1 = b - 1,$ $x_n = bx_{n-1} - x_{n-2}$ For example, with $k = 2$ we get $y_j = 7, 41, 239, 1393, \ldots$ $x_j = 9, 89, 881, 8721, \ldots$ A simple question arises, as to whether there exist $\{k, i, j\}$ such that $X(k,i) = Y(k,j)$? This might well be an open question, and perhaps inappropriate here, but I have trawled the web for many hours and have found no evidence that anybody has even considered it. Computational experiments suggest that in fact an even stronger result is possible, ie. that there are no $\{k_1, k_2, i>1, j>1\}$ with $X(k_1,i) = Y(k_2,j)$. In other words, with the exception of $x_1, y_1$ which can be any odd number > 7, all values generated by these sequences appear to be unique. Any suggestions as to a way to attack this question would be greatly appreciated! Update: There are explicit proofs that for $k = 2, 3$ there can be no $X(k,i) = Y(k,j)$, so we can restrict the question to $k > 3$. Sadly these proofs are not extendable to other k nt.number-theory recurrences add comment 5 Answers active oldest votes You may consider the paper: B. Ibrahimpasic, A parametric family of quartic Thue inequalities. Bull. Malays. Math. Sci. Soc. (2) 34 (2011), no. 2, 215–230, available at http://www.emis.de/journals/BMMSS/ up vote 2 down vote vol34_2_2.html It seems that Theorem 3.1, with c=2k, answers your question. 1 Brilliant! Thank you. How on earth did you spot that? – Jim White Jan 10 '13 at 17:38 I was PhD thesis supervisor of the author of that paper :) You can find some similar results in papers by Borka Jadrijevic marjan.fesb.hr/~borka/popis_znanstvenih_radova.htm – duje Jan 10 '13 at 17:45 Thanks again. May I ask, would your first name be Andrej? If so we have a connection. – Jim White Jan 10 '13 at 19:08 yes, my first name is Andrej – duje Jan 10 '13 at 19:13 I co-authored a paper with Keith Matthews on your conjecture wrt $x^2 - (k+1)y^2 = k^2$, which we submitted to you recently. I found the recursive structure of solutions described therein. – Jim White Jan 10 '13 at 19:15 show 3 more comments Ok, Aaron has generalised my sequences $X(k), Y(k)$ to $U(m), V(m)$ for arbitrary m > 2. It will be found that any pair $(u, v) = (U_j(m), V_j(m))$ corresponds to a solution to the generalised Pell equation $(m+2)v^2 - (m-2)u^2 = 4$ If $m = 4k$ then this reduces to $(2k+1)v^2 - 2ku^2 = 2$, and for $m = 4k-2$ we get $kv^2 - (k-1)u^2 = 1$. This explains why cases $m = 3, 4, 6$ produce convergents to $\sqrt{5}, \sqrt{3}, \sqrt{2}$ respectively, since they correspond to regular Pell equations: $m=3: 5v^2 - u^2 = 4$ $m=4: 3v^2 - u^2 = 2$ $m=6: 2v^2 - u^2 = 1$ up vote 1 down vote My original question is thus restated as "Does $U_j(4k-2) = V_i(4k+2)$ have any solutions?". Which itself can be restated as, are there any solutions to the simultaneous equations: $kx^2 - (k-1)y^2 = 1$ $(k+1)y^2 - kz^2 = 1$ with k > 1, noting again that cases k = 2, 3 have been resolved in the negative. And the motivating question is this: do there exist squares in arithmetic progression that can be written $(k-1)n +1, kn+1, (k+1)n+1$, with $n > 0, k > 1$? If so, they necessarily correspond to solutions $\{x,y,z\}$ of these equations, with $n = (x^2 -1)/(k-1) = (y^2 -1)/k = (z^2-1)/(k+1)$ I've wondered about similar things. 
I'm interested in finding triples of nearly equal integers integers with square product. That would require (or at least benefit from) hits among near optimal rational approximates to square roots. A pretty impressive example is $(10082,10086,10092)=(2a^2,6b^2,3c^2)$ where $a/b=71/41$ and $b/c=41/29$ are convergents to $\sqrt{3}$ and $\ sqrt{2}$. That is enough although that drags along $22/9,49/20$ which are convergents to $\sqrt{6}$ with $(22+49)/(9+20)=a/c.$ – Aaron Meyerowitz Jan 9 '13 at 12:36 Aaron, that sounds like fun, is there anything I can do to contribute? – Jim White Jan 11 '13 at 1:39 add comment I have extended your definitions to have four times as many sequences (sorry to add a third set of definitions). If I am not mistaken there are exactly $11$ interesting repeated entries up to $10^{12}$, none of which affect your restricted case: You might find a few ideas here. This is just meant to reinforce the idea that there is no deep reason that coincidences could not occur, and a few do. But the numbers are so sparse that it seems reasonable that only finitely few do, except some obvious small identities. Consider the two sequences $U_n(m)=1, m+1, m^2+m-1, m^3+m^2-2m-1, m^4+m^3-3m^2-2m+1,\cdots$ given by the recurrence $ U_{i}=mU_{i-1}-U_{i-2}$ (for $i \ge 2$) with initial conditions $U_0=1,U_1=m+1$ $V_n(m)=1, m-1, m^2-m-1, m^3-m^2-2m+1, m^4-m^3-3m^2+2m+1,\cdots$ given by the same recurrence $V_{i}=mV_{i-1}-V_{i-2}$ (for $i \ge 2$) but with initial conditions $V_0=1,V_1=m-1$ Then the $U_i,V_i$ can be expressed as linear combinations of the roots $r=\frac{m \pm \sqrt{m^2-4}}{2}$ of $r^2-r+1=0.$ One of the roots is very close to $\frac{1}{m}$ and the other close to $m-\frac{1}{m}.$ SO, after a bit of computation, $U_i(m)=\lfloor{\frac{m-2+\sqrt{m^2-4}}{2(m-2)} \left( \frac{m+\sqrt{m^2-4}}{m-2}\right)^n}\rceil$ and $V_i(m)=\lfloor{\frac{m+2+\sqrt{m^2-4}}{2(m+2)} \left( \frac{m+\sqrt{m^2-4}}{m+2}\right)^n} \rceil$ where $\lfloor z\rceil$ means round to the nearest integer, which in this case will be very close.(The distance from the nearest integer goes to $0$ like $\frac{1}{m^n}$). The approximation will be of the form $U_i(m)=v \approx \frac{v}{2}+\frac{p\sqrt{m^2-4}}{q}$ I don't know that it matters, but we see from this (after more computation) that $\frac{U_i(m)}{V_i(m)}\approx\sqrt{\frac{m+2}{m-2}}$ where the approximation is quite good. For $m=4,6$ we have $\sqrt{\frac{m+2}{m-2}}=\sqrt{3},\sqrt{2}.$ Observe in the tables below that $U(4),V(4)$ give the numerators and denominators of alternate terms of the sequence $1/1,2/1, 5/3, 7/4, 19/ 11, 26/15, 71/41, 97/56, 265/153, 362/209,\cdots$ of convergents to $\sqrt{3}.$ Similarly, $U(6),V(6)$ give the numerators and denominators of alternate terms of the sequence $1/1,3/2,7/5,17/ 12,41/29,99/70,239/169,577/408,1393/985,\cdots$ of convergents to $\sqrt{2}.$ Similar things can be observed and explained. I'll only mention that, while the relation to $\sqrt{5}$ at $m=3$ is less obvious (though there) a consequence is that half of the Fibonacci numbers constitute $V(3)$ and another quarter constitute $U(7).$ Here are the first few terms of $U(m)$ then $v(m)$ for $3 \le m \le 17.$ Values over $1000000$ are not shown. As just mentioned, numerators and denominators of convergents to $\sqrt{2}$ show up as $U(6),V(6)$ respectively with growth rate $(1+\sqrt{2})^2=3+2\sqrt{2}=5.828\cdots \approx 6-1/6 \approx 6$ This illustrates that the terms in $U(m)$ and in $V(m)$ grow very much like $m ^i$. 
More precisely, they grow like $(\frac{m+\sqrt{m^2-4}}{2})^n \approx (m-\frac1m)^n.$ $\begin{array}{cccccccccc} 4&11&29&76&199&521&1364&3571&9349& 24476\\\ 5&19&71&265&989&3691&13775&51409&191861& 716035\\\ 6&29&139&666&3191&15289&73254&350981&-&- \\\ 7&41&239&1393&8119&47321 &275807&-&-&- \\\ 8&55&377&2584&17711&121393&832040&-&-&- \\\ 9&71&559&4401&34649&272791&-&-&-&- \\\ 10&89&791&7030&62479&555281&-&-&-&- \\\ 11&109&1079&10681&105731&-&-&-&-&- \\\ 12&131&1429 &15588&170039&-&-&-&-&- \\\ 13&155&1847&22009&262261&-&-&-&-&- \\\ 14&181&2339&30226&390599&-&-&-&-&- \\\ 15&209&2911&40545&564719&-&-&-&-&- \\\ 16&239&3569&53296&795871&-&-&-&-&-\end{array}$ $\begin{array}{cccccccccc} 2&5&13&34&89&233&610&1597&4181& 10946\\\ 3&11&41&153&571&2131&7953&29681&110771& 413403\\\ 4&19&91&436&2089&10009&47956&229771&-&- \\\ 5&29&169&985&5741&33461& 195025&-&-&- \\\ 6&41&281&1926&13201&90481&620166&-&-&- \\\ 7&55&433&3409&26839&211303&-&-&-&- \\\ 8&71&631&5608&49841&442961&-&-&-&- \\\ 9&89&881&8721&86329&854569&-&-&-&- \\\ 10&109&1189& up vote 12970&141481&-&-&-&-&- \\\ 11&131&1561&18601&221651&-&-&-&-&- \\\ 12&155&2003&25884&334489&-&-&-&-&- \\\ 13&181&2521&35113&489061&-&-&-&-&- \\\ 14&209&3121&46606&695969&-&-&-&-&- \\\ 15&239& 1 down 3809&60705&967471&-&-&-&-&- \\\ 16&271&4591&77776&-&-&-&-&-&- \\\ 17&305&5473&98209&-&-&-&-&-&-\end{array}$ You are only using the rows $U(4k-2)$ and $V(4k+2)$ for $k \ge 2.$ Here are some observations on the coincidences if we uses all the rows (none of these coincidences show up for your The $U_1$ and $V_1$ are all the integers so should not count for coincidences. There are six sporadic cases of $v=U_3(m)=U_2(m').$ Equivalently, $U_3(m)=V_2(m'+1)$. These are for $(v,m,m')=(29,3,5),(71,4,8),(239,6,15),$$(60761,39,246),(2370059,133,1539) (6679639,188,2584).$ There might be more, but I doubt it. This is complete up to $v=25 \cdot 10^{18}.$ Here is an analysis: To solve $m^3+m^2-2m-1=(m')^2+m'-1$ we can use the quadratic formula to solve $m'=\frac{-1+\sqrt{4m^3+4m^2-8m+1}}{2}$ SO the cubic under the radical must be a perfect square. This is a matter of looking for integer points on an elliptic curve for which there is a well developed theory (which I did not use.) One expects finitely many. One could check if the integer points given lead any others using the group law. It might be that this kind of analysis (which I did not really do here anyway) could also be done for some $U_4,V_4,U_6,V_6.$ The other repeats up to $10^{12}$ are $41=V_3(4)=U_2(6),\ 89=V_3(5)=U_2(9),\ 1189=V_3(11)=U_2(34)$ along with $3191=U_5(5)=U_2(56)$ and $13201=V_5(7)=V_3(24).$ Note: to check up to $10^{12}$ we can generate the $U_3(m)$ and $V_3(m)$ up to $m=10^4$ along with any $U_i(m) \lt 10^{12}$ and $V_i(m) \lt 10^{12}$ for $i \gt 3.$ In all this is about $43000$ vaues. We could also generate $U_2(m)=m^2+m-1$ up to $m=10^6$ but $m^2+m-1=v$ for $v=\frac{-1+\sqrt{5+4v}}{2}$ so it is better to just check which of the other values make the expression under the radical a square. However this does make it harder to check for the smallest gaps. It could still be done but I did not. My feeling is that there are a handful of repeated terms for coincidental reasons and that it is reasonable on random grounds to expect that there are only finitely many. Quite possibly just the $10$ I mentioned. There does not seem to be any underlying meaning for the coincidences. 
For example $239=U_3(6) \approx \frac{239+169\sqrt{2}}{2}\approx 239.001046$ and also $239=U_2(15)\approx \frac{239}{2}+\frac{209\sqrt{221}}{26} \approx 239.0003219.$ I do not see anything deep here. However the fact that the rational and irrational parts are nearly equal is not a coincidence. Other thoughts: In a sense, $U(m)$ and $ V(m)$ are just scaled versions of the powers of $m$ so we kind of have the prime powers (twice). We now know for sure that the set of powers $m^i$ (starting at $2^4=16$) and the set of near powers $b^j\pm1$ ($i,j \ge 2$) are disjoint. There are many conjectures about the the growth rate of gaps.Your sets are sparser than these by a factor of two. Even with four times as many entries as you are using, so twice the density of the integer powers, there are few coincidences. One could consider other sequences given by the same recurrence but with other initial conditions. That would provide the "missing" convergents and Fibonacci numbers. I wondered why you chose exactly the ones you did. Is there an motivating problem? There are also other second order recurrences with only one root larger than $1$ in absolute value. Namely: $W_{i+1}=mW_i+cW_{i-1}$ where $-(m+1) \lt c \lt m-1$. <br><br>I will explain why I'm so interested in $U(4k−2)$' and $V(4k+2)$' in a separate answer below, and yes, there is a motivating problem! – Jim White Jan 9 '13 at 6:10 1 You've listed 9 coincidences, I found 11. The two others are $41=V_3(4)=U_2(6)$ and $1189=V_3(11)=U_2(34)$. – Jim White Jan 10 '13 at 12:52 I can confirm that these 11 coincidences remain the only ones found for $u, v < 2^80$, so you are probably correct in your conjecture. <br><br> If that is the case then we have identified all solutions to the simultaneous equations:<br> <blockquote>$(m+2)v^2 - (m-2)u^2 = 4$<br> $mv^2 - (m-4)u^2 = 4$ </blockquote> and perhaps a couple of other forms. <br><br> It is also fascinating that 10 of the 11 coincidences involve $U_2, V_2$. The case $13201 = V_5(7) = V_3(24)$ is unique in that respect. – Jim White Jan 10 '13 at 19:40 Sorry, still getting to grips with what you can and can't do in a comment! :) Like no html tags, and no editing: I meant to say $u,v < 2^{80}$ – Jim White Jan 10 '13 at 19:43 And the second equation should of course read $mz^2 - (m-4)u^2 = 4$. For example, from $29 = U_2(5) = V_2(6)$ we obtain $7v^2 - 3u^2 = 5z^2 - u^2 = 4$ with $z=13, v=19, u=29$. – Jim White Jan 10 '13 at 19:59 show 1 more comment Thanks, Aaron. Your comment has reminded me that I have been negligent in the computational searches conducted so far, in that I have failed to report any information on minimum distances encountered. I will attend to that. By the way, I have reversed the definitions of X and Y above as they were the opposite of what I have in all existing code and research notes. My apologies! In terms of k the first few polynomials are $Py_1 = 4k - 1$ $Px_1 = 4k + 1$ $Py_2 = 16k^2 - 12k + 1$ $Px_2 = 16k^2 + 12k + 1$ $Py_3 = 64k^3 - 80k^2 + 24k - 1$ $Px_3 = 64k^3 + 80k^2 + 24k + 1$ $Py_4 = 256k^4 - 448k^3 + 240k^2 - 40k + 1$ $Px_4 = 256k^4 + 448k^3 + 240k^2 + 40k + 1$ If we define the distance polynomial $D_{j,i} = Py_j - Px_i$ then $D_{2,1} = 16k^2 - 16k$ so the quadratic case is disposed of, as you say. We can also rule out the cubic case, and in fact all odd j. We have up vote 0 down vote $D_{3,1} = 64k^3 - 80k^2 + 20k - 2$ $D_{3,2} = 64k^3 - 96k^2 + 12k - 2$ For all odd j we get even coefficients and $c_0 = -2$, so no $D_{2e+1,i}$ can have an integer root $k > 1$. 
For even j we get polys like these:
$D_{4,1} = 256k^4 - 448k^3 + 240k^2 - 44k$
$D_{4,2} = 256k^4 - 448k^3 + 224k^2 - 52k$
$D_{4,3} = 256k^4 - 512k^3 + 160k^2 - 64k$
What I'm hoping to find is some magic property for even j that will tell us that all $D_{2e,i}$ are either irreducible or have a single integer root $k=1$. Since $Y(1,j) = 3,5,7 \ldots$, all of $X(1,i) = 5, 29, 169 \ldots$ are to be found in $Y(1,j)$, so the corresponding $D_{14,2}, D_{84,3}$ etc. will all have root $k=1$. I suspect that all other D are irreducible, but these isolated exceptions are a bit of a fly in the ointment! Oh yes, and I can tell you that a search on all pairs of sequences $Y(k,j), X(k,i)$ revealed no match for a rather staggering j up to 100,000. For a given depth limit j < J, such a search is finite, since beyond a certain k we find that all $Y(k,J) > X(k,J-1)$ and so we need look no further. It follows then that the proposition that all $D_{j,i}$ are either irreducible or have a single integer root $k=1$ is true for all j < 100,000.

Aaron prompted me to investigate the behaviour of gaps in the sequences $X(k), Y(k)$, or equivalently $U(m), V(m')$ with $m = 4k-2, m' = 4k+2$, with $k>3$. I found that, for any k, the distance $D_j$ of any $U_j$ to the nearest $V_i$ is nearly always increasing, with $\log_m(D_j) = j - \epsilon$. The only time the distance decreased was at a "sync point", i.e. a point j where $V_i < U_j < U_{j+1} < V_{i+1}$. The $D_j, D_{j+1}$ values tend to be very close together and sometimes $D_{j+1}$ is marginally less than $D_j$. Given this trend, I wonder whether the case for "no coincidences" is strengthened. If coincidences were possible, then wouldn't I expect to see $D_j$ fluctuate?

Dr. Memory, I believe you have enough "reputation" (points) now to be leaving comments under answers, rather than creating more "answers" just to make comments. – Todd Trimble♦ Jan 10 '13 at 14:04
My apologies! I wasn't trying to rack up points but was concerned about the apparent size limit on comments. Eg: my discussion of the polynomials in the answer immediately above would surely not fit? – Jim White Jan 10 '13 at 17:20
Another problem is that you don't seem to be able to edit comments – Jim White Jan 10 '13 at 19:49
No worries at all, and I wasn't implying you were doing this to rack up points; I just didn't know if you were aware. It's fine to fill up more than one comment box if you need to. And yes, it is impossible to edit comments, which is indeed annoying (but that will change once we make the move to MO 2.0); one is probably better off writing a comment in a text editor and then pasting it in, although I admit I never bother doing this myself. Finally, I should have said before: welcome to MO! :-) – Todd Trimble♦ Jan 10 '13 at 21:51
Thanks Todd! I'm very happy to be here :) – Jim White Jan 11 '13 at 1:36
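The coincidence checks described in this thread (the repeats up to 10^12, the j < 100,000 sweep) amount to generating both families below a bound and flagging repeated values. A rough sketch of that brute-force check, in my own Python with arbitrary bounds and helper names (terms, coincidences), not the scripts the posters actually ran:

    from collections import defaultdict

    def terms(m, first, limit):
        # x_0 = 1, x_1 = first, x_i = m*x_{i-1} - x_{i-2}; return x_1, x_2, ... below limit
        a, b = 1, first
        out = []
        while b < limit:
            out.append(b)
            a, b = b, m * b - a
        return out

    def coincidences(limit, max_m):
        seen = defaultdict(list)
        for m in range(3, max_m + 1):
            for i, v in enumerate(terms(m, m + 1, limit), start=1):   # U(m), starting at m+1
                if i >= 2:                   # skip U_1(m) = m+1, which hits every integer
                    seen[v].append(('U', m, i))
            for i, v in enumerate(terms(m, m - 1, limit), start=1):   # V(m), starting at m-1
                if i >= 3:                   # skip V_1 and V_2, since V_2(m+1) = U_2(m) identically
                    seen[v].append(('V', m, i))
        return {v: hits for v, hits in seen.items() if len(hits) > 1}

    # e.g. coincidences(10**6, 1000) should pick up the small repeats noted above (29, 41, 71, 89, 239, ...)
    print(sorted(coincidences(10**6, 1000)))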
{"url":"http://mathoverflow.net/questions/118247/uniqueness-of-values-in-recurrence-relations","timestamp":"2014-04-18T23:34:20Z","content_type":null,"content_length":"100158","record_id":"<urn:uuid:bf7dfe5a-47a6-482e-b30e-33396b22e6b6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Bivariate Pchip. Replies: 9, Last Post: Jan 19, 2013 6:14 AM

Re: Bivariate Pchip. Posted: Jan 18, 2013 8:53 AM
"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <kd8td4$2br$1@newscl01ah.mathworks.com>...
> "Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <kd8ef0$qnh$1@newscl01ah.mathworks.com>...
> > What about performing pchip successively along the first dimension, then along the second dimension?
> Just a note that I do not know whether successive 1D pchip gives the same result when swapping the dimensions. For splines with homogeneous conditions such as natural, not-a-knot, periodic, one can do it either way, and it provides the same interpolation results as 2D; just the implementation and work-flow is different.
> Bruno

pchip in 2-d as a tensor product form has been shown NOT to be adequate for the general desired behavior. (It sometimes will produce an acceptable result, but in general, it is not adequate.) My memory tells me that there is a way to correct the derivatives generated by pchip so that it WILL be monotone in all desired aspects, and that this was once implemented as the 2-d version of pchip. Sadly I no longer have that work in my possession. Also, as you point out, since pchip is not a linear procedure, it is potentially not going to produce the same result if you do swap the axes.
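The "successive 1D pchip" idea being discussed is easy to prototype outside MATLAB. Here is a rough SciPy sketch of interpolating gridded data first along one axis and then the other (pchip2d is my own name for this illustration, not code from the thread, and as the posters note this tensor-product approach does not guarantee 2-D monotonicity):

    import numpy as np
    from scipy.interpolate import PchipInterpolator

    def pchip2d(x, y, z, xi, yi):
        # z has shape (len(y), len(x)); returns values on the grid (yi, xi)
        rows = PchipInterpolator(x, z, axis=1)(xi)      # first pass: interpolate each row along x
        return PchipInterpolator(y, rows, axis=0)(yi)   # second pass: interpolate the result along y

    # small smoke test on a smooth surface
    x = np.linspace(0.0, 1.0, 6)
    y = np.linspace(0.0, 1.0, 5)
    z = np.sin(np.pi * y)[:, None] * np.cos(np.pi * x)[None, :]
    zi = pchip2d(x, y, z, np.linspace(0, 1, 11), np.linspace(0, 1, 9))
    print(zi.shape)   # (9, 11)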
{"url":"http://mathforum.org/kb/message.jspa?messageID=8103857","timestamp":"2014-04-20T12:13:47Z","content_type":null,"content_length":"27707","record_id":"<urn:uuid:e38cf7df-a3ee-4ad9-a39d-b9702fc02233>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
a basket contains 5 red balls, 3 blue balls, 1 green balls May 27th 2010, 06:50 AM #1 Junior Member May 2010 a basket contains 5 red balls, 3 blue balls, 1 green balls a basket contains 5 red balls, 3 blue balls, 1 green balls what is the probability of getting 2 red balls, 1 blue ball and 1 green ball if a sample of 4 balls are taken from the basket. How do i start solving this problem. Please help. Last edited by mr fantastic; May 27th 2010 at 08:00 PM. Reason: Re-titled. Hi "i like c", there are 2 ways to work it out, if not more... If a sample of 4 balls are taken, we can list the order of selection if they are chosen one-by-one. RRBG red 1st, red 2nd, blue 3rd, green 4th.... $P=\frac{5}{9}\ \frac{4}{8}\ \frac{3}{7}\ \frac{1}{6}$ If you calculate the remaining probabilities and sum them, you will have the probability of choosing 2 reds, 1 blue and 1 green. Alternatively there are $\binom{9}{4}$ ways to choose 4 from 9. There are $\binom{5}{2}$ ways to choose 2 reds There are $\binom{3}{1}$ ways to choose one blue and only 1 way to choose the green. Any 2 reds can go with any blue, hence we multiply those numbers of ways to find the total number of ways we can get 2 reds, 1 blue and 1 green Hence you divide that number by the total number of ways to choose 4 from 9. How can you do it using Hyper-geometric Distribution principles? Using the multivariate hypergeometric distribution makes the problem much easier in my opinion. First let's label everything: N= total number of balls = 9 n= total number of balls selected =4 R = total number of red balls = 5 B = total number of blue balls = 3 G = total number of green balls =1 r = total number of red balls selected = 2 b = total number of blue balls selected = 1 g = total number of green balls selected = 1 Now simply solve: $\frac{{R\choose r}{B\choose b}{G\choose g}}{{N\choose n}}$ As you can see, it's simply the combination of ways to pick 2 red balls out of 5 red balls multiplied by the combination of ways to pick 1 blue ball out of 3 blue ball multiplied by the combination of ways to select 1 green ball from 1 green ball all divide by the combination of ways to select 4 balls from 9 balls. May 27th 2010, 07:05 AM #2 MHF Contributor Dec 2009 May 27th 2010, 07:25 AM #3 Junior Member May 2010 May 28th 2010, 02:39 AM #4 Senior Member Oct 2009
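The multivariate hypergeometric formula in the last reply is easy to evaluate directly; plugging in the numbers from this thread (a quick check of my own, not part of the original posts):

    from math import comb
    from fractions import Fraction

    # P(2 red, 1 blue, 1 green) when drawing 4 balls from 5 red, 3 blue, 1 green
    p = Fraction(comb(5, 2) * comb(3, 1) * comb(1, 1), comb(9, 4))
    print(p, float(p))   # 30/126 = 5/21, roughly 0.238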
{"url":"http://mathhelpforum.com/advanced-statistics/146656-basket-contains-5-red-balls-3-blue-balls-1-green-balls.html","timestamp":"2014-04-21T05:56:53Z","content_type":null,"content_length":"40936","record_id":"<urn:uuid:0ff203d9-b884-4789-8a8f-1b4cce5e07f0>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
St Albans, NY Algebra 2 Tutor Find a St Albans, NY Algebra 2 Tutor STANDARDIZED TEST PREPARATION I help students achieve their best possible results by combining expert instruction with a comprehensive practice and feedback system for the SAT, ACT, ISEE, SSAT, GRE, and GMAT. I provide an assessment of their strengths and weaknesses to help students decide which t... 18 Subjects: including algebra 2, geometry, GRE, algebra 1 ...I have worked with 1 on 1 Academic Tutors who provide Mathematics and English courses from K-12th grade. So, it is a huge advantage for yourself, if you are unable to understand, have difficulty in finding solutions etc., I am ready to help you. I had an "A+" on Algebra 1, while I was in school... 7 Subjects: including algebra 2, English, reading, algebra 1 ...I took literature in high school, and received a fairly high grade. I am also taking literature classes in college. I read a lot of books and about books, and I love to write about books as well, using different ways of speaking and using various rhetoric devices and tropes. 30 Subjects: including algebra 2, reading, English, algebra 1 ...I have four kids of my own, and often do outreach as a guest science speaker in their schools, which is awesome. I have a solid foundation in math, biology, chemistry, physics, and physiology with an emphasis in neurobiology. I love the process of teaching and learning, and would look forward to tutoring you if you are the student, or your child if you are the parent. 11 Subjects: including algebra 2, chemistry, physics, biology ...I also taught Pre-calculus, which included analysis of the behavior of the six trig functions, for 5 years. In addition, I have taught Algebra 1 for 3 years, and I also wrote software packages that utilized the trig functions. I am certified in New York and familiar with the logistics and scoring of the SAT math exam. 10 Subjects: including algebra 2, calculus, algebra 1, SAT math
{"url":"http://www.purplemath.com/St_Albans_NY_Algebra_2_tutors.php","timestamp":"2014-04-19T23:31:44Z","content_type":null,"content_length":"24289","record_id":"<urn:uuid:fbdbfea8-f5b1-4435-b0de-e70d73680ae7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear program to maximize the minimum absolute value of linear functions?

I'd like to compute $\max_{x,t} t$ such that $\forall i$, $t < a_i + |x - b_i|$, where $a_1,\ldots, a_n$ and $b_1,\ldots,b_n$ are fixed and $x \in [0,1]$. Can this be solved with a linear program? I'm familiar with a technique to minimize the maximum of absolute values, by doubling the number of constraints, but I don't think it applies to maximizing the minimum. If a linear program won't work, is there another efficient way to get an exact solution? Thanks much.

linear-programming oc.optimization-control linear-optimization global-optimization

So you want the largest $t$ with any $x \in [0,1]$ using the Wikipedia form: Maximise $c^T X$ subject to $AX \leq B$ and $X \geq 0$. Isn't this just $X = [x,t]$, $c = [0,1]$, then some nasty matrix A and vector B which I don't want to write out? But basically doing what you said. – Lucas Aug 12 '11 at 3:51
Thanks for your comment. As I understand it, it's the absolute value operator that makes this something other than a textbook example of linear programming: the absolute value of a linear function is not linear, though it may be represented as the maximum of two linear functions. If it's represented in this way, then I'm trying to find the max (over t) of a min (over i) of a max (over $a_i + (x - b_i)$ and $a_i - (x - b_i)$). – Jeff Aug 12 '11 at 5:42
Unless I'm making a mistake, you should be able to do this with at most $n+1$ linear programs: namely, assume without loss of generality that $b_1 \leq b_2 \leq \dots \leq b_n$ and break up the interval $[0,1]$ into at most $n+1$ subintervals, depending on the $i$ such that $x \in [b_i, b_{i+1}]$. On each such subinterval, you have $n$ linear functions (no absolute values). So it's pretty easy to find the max of the mins. Then take the max of the objective functions of these linear programs. – Abhinav Kumar Aug 12 '11 at 14:01
@Abhinav: Why not post this as an answer? – Emil Jeřábek Aug 12 '11 at 15:14
@Abhinav: This is a good solution, and I appreciate it. I was trying to simplify my problem when I posted it, but actually, $x$ and $b_i$ are in 21-dimensional space, and I'm using the sup-norm rather than absolute value. With 21 dimensions, I'm not sure whether it would be computationally possible to consider each linear piece separately. I've reposted the expanded question, since it seems like this one has been answered. Thanks for your help. – Jeff Aug 12 '11 at 17:26

1 Answer

Unfortunately, this problem can't be represented by an LP, since your feasible region is in general nonconvex, and the feasible region of an LP (being the intersection of a bunch of half spaces) is always convex. To be more specific, consider the problem
$\max \; t$ subject to $t \leq | x- 1/2 |$, $t \leq | x- 3/4 |$, $0 \leq x \leq 1$.
A sketch of the feasible region shows that it's nonconvex. There's a local maximum at $x=5/8$, $t=1/8$, and a global maximum at $x=0$, $t=1/2$.
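Abhinav Kumar's suggestion, one small LP per subinterval on which the signs of $x-b_i$ are fixed, then take the best, is straightforward to script. A hedged SciPy sketch of that idea (my own illustration and function name, not code from the thread):

    import numpy as np
    from scipy.optimize import linprog

    def max_min_abs(a, b):
        # maximize t s.t. t <= a_i + |x - b_i| for all i, with x in [0, 1]
        a, b = np.asarray(a, float), np.asarray(b, float)
        cuts = sorted({0.0, 1.0, *b.clip(0.0, 1.0)})     # breakpoints of the |.| terms inside [0, 1]
        best = (-np.inf, None)
        for lo, hi in zip(cuts, cuts[1:]):
            mid = 0.5 * (lo + hi)
            s = np.where(mid >= b, 1.0, -1.0)            # fixed sign of (x - b_i) on this piece
            # constraints: t - s_i*x <= a_i - s_i*b_i; variables are (x, t), maximize t
            A_ub = np.column_stack([-s, np.ones_like(s)])
            b_ub = a - s * b
            res = linprog(c=[0.0, -1.0], A_ub=A_ub, b_ub=b_ub,
                          bounds=[(lo, hi), (None, None)])
            if res.success and -res.fun > best[0]:
                best = (-res.fun, res.x[0])
        return best                                       # (optimal t, an optimal x)

    # toy example matching the answer above: a_i = 0, b_i at 1/2 and 3/4
    print(max_min_abs([0.0, 0.0], [0.5, 0.75]))           # expect roughly (0.5, 0.0)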
{"url":"http://mathoverflow.net/questions/72735/linear-program-to-maximize-the-minimum-absolute-value-of-linear-functions/72768","timestamp":"2014-04-16T22:29:18Z","content_type":null,"content_length":"57662","record_id":"<urn:uuid:1a64da1a-59eb-41c8-aeb9-64aea7e83501>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiplying and Dividing Polynomials

Hi SlowlyFading, sorry to hear about your troubles. I've been away for a few days, so I hope this help is not coming too late for you. What I suggest is you copy the part answers I've shown here and try to fill in the gaps. Post back all the lines of your answers and I'll check those before we move on to the remainder. "?" shows something is missing.

Multiply both terms in the bracket by 3x. Now simplify each term.
Multiply both terms in the bracket by -6x^2. Now simplify each term. (You can simplify the terms in the bracket first, but, looking at the other questions, I think it best to stick to the general method here.)
Divide each term by 3x. Simplify each term.

Hope that helps.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=228898","timestamp":"2014-04-20T16:45:44Z","content_type":null,"content_length":"19882","record_id":"<urn:uuid:6e686710-b0c9-4d31-be0c-f62758a6f9c2>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: find the area of the figures correct to two decimals.

• Do you know how to find the area of an isosceles triangle?
• What's the answer?
• The area of a triangle is \[\frac 1 2 bh\] where b is the base length (given) and h can be found using the Pythagorean Theorem \[x^2+y^2 =z^2\] where x=1, y=h and z=5, so what would be the value of h?
• I think you have to use Heron's formula.
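Finishing the computation sketched in that reply, and taking the triangle those numbers imply (equal sides of 5 and a base of 2, so half the base is x = 1):

$h = \sqrt{z^2 - x^2} = \sqrt{25 - 1} = \sqrt{24}, \qquad \text{area} = \tfrac12 b h = \tfrac12\cdot 2\cdot\sqrt{24} = 2\sqrt{6} \approx 4.90$

to two decimals.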
{"url":"http://openstudy.com/updates/4fb000c9e4b059b524fac0bb","timestamp":"2014-04-20T18:37:06Z","content_type":null,"content_length":"45268","record_id":"<urn:uuid:08251ebe-6a5b-4ea5-8514-05cdaadceb12>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Lineweaver-Burk Plot (Molecular Biology)

Enzymes that conform to Michaelis-Menten kinetics yield plots of initial velocity v, as a function of substrate concentration A, that are rectangular hyperbolas of the form

$v = \frac{VA}{K_a + A}$   (1)

where Ka denotes the Km (Michaelis constant) for substrate A and V is the maximum velocity of the reaction. Hyperbolas of this type pass through the origin with an initial slope of V/Ka and have an asymptote, where v = V (Fig. 1). The determination of values for V and Ka from a hyperbolic curve is difficult, especially as the asymptote cannot be well defined and error will always be associated with initial velocity data. To overcome this difficulty, the equation was rearranged to linear forms to permit the graphical determination of values for the kinetic parameters. The first of the linearizations was reported by Lineweaver and Burk (1). Subsequently, two other linearizations were suggested; one was advanced by Eadie (2) and Hofstee (3) (see Eadie-Hofstee Plot) and the other by Hanes (4).

Figure 1. Variation of initial velocity v as a function of the concentration of substrate A as described by Equation 1.

The Lineweaver-Burk rearrangement of the initial velocity equation involves taking reciprocals of each side of the equation and rearranging:

$\frac{1}{v} = \frac{K_a}{V}\cdot\frac{1}{A} + \frac{1}{V}$   (2)

A plot of 1/v against 1/A yields a straight line, with a slope of Ka/V and an intercept with the vertical ordinate of 1/V (Fig. 2). The intersection of the line with the abscissa occurs at the point where 1/v = 0, and at this point -1/A = 1/Ka.

Figure 2. Double-reciprocal plot of the variation of the initial velocity v of an enzyme-catalyzed reaction as a function of substrate concentration A.

Determinations of values for the kinetic parameters for a two-substrate reaction are more complex. This can be illustrated by reference to the initial velocity equation for a sequential Bi-Bi reaction that conforms to Michaelis-Menten kinetics involving random binding of substrates A and B to the enzyme, to form a ternary EAB complex (Eq. 3):

$v = \frac{VAB}{K_{ia}K_b + K_b A + K_a B + AB}$   (3)

where Ka and Kb are the dissociation constants for the dissociation of each substrate from the ternary EAB complex and Kia is the dissociation constant of the EA complex. The reciprocal form of this equation is given by Equation 4:

$\frac{1}{v} = \frac{1}{V}\left(1 + \frac{K_a}{A} + \frac{K_b}{B} + \frac{K_{ia}K_b}{AB}\right)$   (4)

For a plot of 1/v against 1/A, Equation (4) is that of a straight line with both slope and intercept varying as a function of the concentration of substrate B. Data obtained for the variation of the initial velocity as a function of the concentration of A, at different fixed concentrations of B, would give a family of straight lines that intersect at a point to the left of the vertical ordinate (Fig. 3), where the 1/v and 1/A coordinates are $(1/V)(1 - K_a/K_{ia})$ and $-1/K_{ia}$, respectively. Thus, the crossover point may be above, on, or below the abscissa, depending on the relative values of Ka and Kia. The intersection of each straight line with the abscissa would give only an apparent value for Ka at a particular concentration of B. The slope of the lines as a function of B is described by Equation (5):

$\mathrm{slope} = \frac{K_a}{V} + \frac{K_{ia}K_b}{V}\cdot\frac{1}{B}$   (5)

so that a replot of the slopes of the lines of the primary plot against 1/B would yield a straight line that intersects the abscissa at a point equal to $-K_a/(K_{ia}K_b)$, corresponding to the combination $K_{ia}K_b/K_a$. For a rapid-equilibrium random mechanism, this value is equal to Kib, the dissociation constant for the interaction of B with free enzyme. For an ordered mechanism, there is no EB complex. The intersection points of the lines of the primary plot with the vertical ordinate give only apparent maximum velocities at different fixed concentrations of substrate B.
Variation of the intercepts with the concentration of B is described by Equation (6):

$\mathrm{intercept} = \frac{1}{V} + \frac{K_b}{V}\cdot\frac{1}{B}$   (6)

so that a replot of the intercepts of the primary plot against 1/B would be a straight line that intersects the vertical ordinate at the reciprocal of the true value for the maximum velocity and the abscissa at the reciprocal of the true value for Kb. Values for V, Kia, and Ka would be obtained in a similar manner by starting with a plot of 1/v against 1/B at different fixed concentrations of A.

Figure 3. Double-reciprocal plot of the variation of the initial velocity of an enzyme-catalyzed reaction involving the sequential addition of two substrates, A and B, as a function of the concentration of substrate A at different fixed concentrations of substrate B.

There has been considerable discussion in the past about the relative merits of the Lineweaver-Burk plot, the Eadie-Hofstee plot, and Hanes linearization procedures for obtaining the best estimates of values for kinetic parameters (3, 5). Those days have long since passed, and now computer programs are available for least-squares fitting of data, with appropriate weighting factors, to an assumed rate equation (6). Graphical methods are important for determining the form of the rate equation to which the data are to be fitted and for illustrating the results of kinetic investigations. The Lineweaver-Burk plot must be considered the most satisfactory of the three types of plot, as it shows the straightforward variation of one dependent variable as a function of the concentration of one or two independent variables.
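As the closing paragraph notes, the parameters are nowadays estimated by least-squares fitting rather than read off a graph. A minimal sketch of the double-reciprocal version of that fit, with made-up single-substrate data of my own (and note that an unweighted fit on reciprocals over-weights the low-velocity points, which is why weighting factors are mentioned above):

    import numpy as np

    # substrate concentrations and measured initial velocities (illustrative units)
    A = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    v = np.array([0.333, 0.500, 0.667, 0.800, 0.889])   # roughly v = A/(A+1), i.e. V ~ 1 and Ka ~ 1

    # Lineweaver-Burk: 1/v = (Ka/V)*(1/A) + 1/V, so fit a straight line to (1/A, 1/v)
    slope, intercept = np.polyfit(1.0 / A, 1.0 / v, 1)
    V = 1.0 / intercept          # maximum velocity
    Ka = slope * V               # Michaelis constant
    print(V, Ka)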
{"url":"http://what-when-how.com/molecular-biology/lineweaver-burk-plot-molecular-biology/","timestamp":"2014-04-19T12:05:44Z","content_type":null,"content_length":"19922","record_id":"<urn:uuid:6ba1b823-e1b8-4d11-94ce-182d42d4ce47>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
February 22, 2013 [ Today's exercise was contributed by Tom Rusting, who worked as a programmer until the mid 70’s. Being retired, he has now taken up programming again as a hobby. Suggestions for exercises are always welcome, or you may wish to contribute your own exercise; feel free to contact me if you are interested. Floup is an island-country in the South Pacific with a currency known as the floupia; coins are denominated in units of 1, 3, 7, 31 and 153 floupias. Merchants and customers engage in a curious transaction when it comes time to pay the bill; they exchange the smallest number of coins necessary to complete the payment. For instance, to pay a bill of 17 floupia, a customer could pay three 7-floupia coins and receive single 1-floupia and 3-floupia coins in exchange, a total of five coins, but it is more efficient for the customer to pay a single 31-floupia coin and receive two 7-floupia coins in exchange. Your task is to write a program that determines the most efficient set of coins needed to make a payment, generalized for any set of coins, not just the set 1, 3, 7, 31 and 153 described above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below. Pages: 1 2 8 Responses to “Floupia” 1. February 22, 2013 at 10:04 AM [...] today’s Programming Praxis exercise, our goal is to calculate the minimum total amount of coins involved in [...] 2. February 22, 2013 at 10:04 AM My Haskell solution (see http://bonsaicode.wordpress.com/2013/02/22/programming-praxis-floupia/ for a version with comments): import Data.List import Math.Combinat pay :: (Eq a, Num a) => a -> [a] -> ([a], [a]) pay total coins = head [(p,c) | n <- [1..], pc <- [1..n], p <- combine pc coins , c <- combine (n - pc) (coins \\ p), sum p - total == sum c] 4. February 22, 2013 at 8:08 PM This was a lot of fun. I like problems that make you take something you do everyday and think about it sideways. Here’s my take: Making Floupian Change I played a bit with having higher order functions so that there’s a function which makes the coin system and in turn returns the function that makes change. Here’s an example: > (define floupia (make-coinage '(1 3 7 31 153))) > (floupia 17) '((31) (7 7)) > (floupia 100) '((1 1 7 153) (31 31)) > (floupia 57) '((1 1 31 31) (7)) It turns out that the inner function is horribly inefficient. I know I check multiple identical permutations, but for the most part the solutions use only a few coins so it doesn’t matter anyways. So it goes. 5. February 23, 2013 at 12:09 AM Before we look at today’s exercise, let’s review some facts from high-school mathematics. The binomial coefficient $\binom {n} {m}$ is the number in the m‘th position of the n‘th row of Pascal’s Triangle, and is computed as (n * (n−1) * … * (n−k+1)) / (k * (k−1) … * 1). Thus $\binom {5} {3} = 10$. We compute the binomial coefficient with this function, which is the same as the choose function of a previous exercise: (define (binom n m) (let loop ((n n) (m m) (b 1)) (if (zero? m) b (loop (- n 1) (- m 1) (* b n (/ m)))))) In the study of probability and statistics, $\binom {n} {m}$ is the number of ways m items can be chosen from a set of n items, so there are 10 different ways to select 3 items from a set of 5 items; if the items are a, b, c, d and e, the ten ways are (a b c), (a b d), (a b e), (a c d), (a c e), (a d e), (b c d), (b c e), (b d e), and (c d e). 
The list can be generated with a recursive (define (combinations-without-replacement n xs) (if (= n 0) (list (list)) (if (null? xs) (list) (append (map (lambda (xss) (cons (car xs) xss)) (combinations-without-replacement (- n 1) (cdr xs))) (combinations-without-replacement n (cdr xs)))))) > (binom 5 3) > (combinations-without-replacement 3 '(a b c d e)) ((a b c) (a b d) (a b e) (a c d) (a c e) (a d e) (b c d) (b c e) (b d e) (c d e)) This definition of combinations doesn’t allow duplicates; the items are chosen without replacement, in the jargon of probability and statistics. But sometimes it is useful to allow duplicates, in which case the items are said to be chosen with replacement. The binomial coefficient $\binom {n+m-1} {m}$ defines the number of ways m items can be chosen from a set of n items with replacement, and the list can be generated with a recursive function similar to the previous one: (define (combinations-with-replacement n xs) (if (= n 0) (list (list)) (if (null? xs) (list) (append (map (lambda (xss) (cons (car xs) xss)) (combinations-with-replacement (- n 1) xs)) (combinations-with-replacement n (cdr xs)))))) > (binom (+ 5 3 -1) 3) > (combinations-with-replacement 3 '(a b c d e)) ((a a a) (a a b) (a a c) (a a d) (a a e) (a b b) (a b c) (a b d) (a b e) (a c c) (a c d) (a c e) (a d d) (a d e) (a e e) (b b b) (b b c) (b b d) (b b e) (b c c) (b c d) (b c e) (b d d) (b d e) (b e e) (c c c) (c c d) (c c e) (c d d) (c d e) (c e e) (d d d) (d d e) (d e e) (e e e)) With that done, we are ready to look at today’s exercise. We augment the list of coins with the negatives of all the coins, so that a positive coin is given to the merchant by the customer and a negative coin is the change given back to the customer by the merchant; a transaction like (10 10 -2) indicates that the customer paid two 10-floupia coins and received a single 2-floupia coin in change, for a net payment of 18 floupia. Our solution generates all possible combinations with replacement (since there may be more than one instance of a particular denomination of coin) of 1 coin, then 2 coins, then 3 coins, and so on until the desired payment is found: (define (floupia price coins) (if (positive? (modulo price (apply gcd coins))) (error 'floupia "infeasible") (let ((coins (append coins (map negate coins)))) (let loop ((n 1)) (let ((xs (filter (lambda (xs) (= (sum xs) price)) (combinations-with-replacement n coins)))) (if (null? xs) (loop (+ n 1)) xs)))))) Note the test for feasibility. A particular input has a feasible solution only if the greatest common divisor of the set of coins evenly divides the price; for example, there is no way to make a price of 11 floupia if only 3-floupia and 6-floupia coins are available. Here are some examples: > (floupia 13 '(2 5 10)) ((5 10 -2)) >((10 10 -2)) > (floupia 17 '(1 3 7 31 153)) ((3 7 7) (31 -7 -7)) > (floupia 11 '(3 6)) floupia: infeasible We used filter, sum and negate from the Standard Prelude. You can run the program at http://programmingpraxis.codepad.org/XQuQhu5C. 6. 
February 23, 2013 at 2:22 AM An attempt in Python (which seems to have a similarly inefficient inner algorithm): def combinations(coins, size): if size == 1: for coin in coins: yield [coin] for coin in coins: for comb in combinations(coins, size-1): yield [coin] + comb def mostEfficientPayment(coins, balance): coins.extend([-coin for coin in coins]) numberofcoins = 1 while True: for comb in combinations(coins, numberofcoins): if sum(comb) == balance: return comb numberofcoins += 1 coins = map(lambda x: int(x), raw_input("Coins: ").split()) balance = int(raw_input("Balance: ")) comb = mostEfficientPayment(coins, balance) print "You give: ", " ".join([str(coin) for coin in comb if coin > 0]) print "Merchant gives: ", " ".join([str(-coin) for coin in comb if coin < 0]) 7. February 24, 2013 at 6:18 PM My Python solution. In essence it is a breadth-first search of coin combinations. A queue keeps track of the search space. A dictionary is used to keep track of the minimum number of coins needed to produce a value. The search stops when the difference between ‘new_value’ and the target price is known (i.e., already in the dictionary) from collections import deque def floupian(price, coinage): coins = [sign*coin for sign in (1,-1) for coin in coinage] seen = {0:[]} queue = deque([0]) while True: value = queue.popleft() for coin in coins: new_value = value + coin if new_value not in seen: seen[new_value] = seen[value] + [coin] rest = price - new_value if rest in seen: return seen[new_value] + seen[rest] 8. February 25, 2013 at 6:20 PM An inefficient, but terse implementation in python. Catches infeasible inputs. from fractions import gcd def coin_fun(coins, n): if n % reduce(gcd, coins) > 0: return "infeasible" q = [] cur = [] coins.extend([c * -1 for c in coins]) while True: for coin in coins: newcur = list(cur) cur = q.pop(0) if sum(cur) == n: return cur
{"url":"http://programmingpraxis.com/2013/02/22/floupia/","timestamp":"2014-04-21T02:04:39Z","content_type":null,"content_length":"76143","record_id":"<urn:uuid:5fd4bdd5-fe12-4d1d-9496-e2851dacb615>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Math Forum » Discussions » Policy and News » geometry.announcements Topic: THE MASONIC CODES of the UNITED STATES OF AMERICA- Part 1 Replies: 0 THE MASONIC CODES of the UNITED STATES OF AMERICA- Part 1 Posted: Aug 12, 2011 11:20 AM Copyright 2011 by Ion Vulcanescu. to THE MASONIC CODES of the UNITED STATES OF AMERICA it is Encoded in THE ORDER of the Letters of...THE ENGLISH ALPHABET. Understanding the Mathematical Models of the 1st and 2nd MASTER , and the PYRAMIDAL frequencies it opens THE ACCESS GATE to see what ENCODED TIME MESSAGES left after them THE EARLIER FREMASONS of - UNITED GRAND LODGE OF ENGLAND, and - GRAND ORIENT DE FRANCE, and it shows not only on what Principles was "designed" as to be as a Country by the FREMASONS of 18 and 19 century, but also it shows a Darker Side where we shall see later how Political Intereses of 18 century French Antisemitism contributted that France to change the Civilization's PI value, as to eliminate the Spiritual Beauty of then Jewish Scholars, and then under all kind of known "excuseses" to cover up the change of the Civilization's PI value, and lock the Civilization in the presente false PI value 3.141592654 Unknown to this author if these SECRET CODES are still known, by the High Executives of the todays English, French, or USA Fremasons and kept hidden by the Civilization, or by now lost, this serie of articles attempts to ...OPEN THE GATE to the reading of Please, be advised that this article contains not only the Research, but also my Opinions of philosopher, both fully protected by the US Federal Laws, and by the International Copyrights Laws. If you intend to use partial or total this research, you can use it by the Grant I have indicatted in the Copyrights Notice of that can be found at the end of the article: "Condemnation by the Paris Livre of the French Academy of Scioences" at As here-and-there may be typing errors, or other errors, this article shall be continuu corrected and reeditted until it shal remain fidel to the intended form. Due to this Effect the researchers who shall find this article on Internet are asked that allways to return to Mathforum at and consider only the last corrected edition. SECTION A- The 1st Master Frequency of the English Alphabet Step 1 As already known for centuries this is THE ORDER of the 26 Letters of the English Alphabet: A/1, B/2, C/3, D/4, E/5, F/6, G/7, H/8, I/9, J/10, K/11, L/12, M/13, N/14, O/15, P/16, Q17/, R/18, S/19, T/20, U/21,V/22,W/23, X/24,Y/25, Z/26 Step 2 THE SUMof the letters of the English Alphabet is 351: +19+20+21+22+23+24+25+26 = 351 Step 3 THE SUM (of all Letters)/ 26 Letters = 351/26 = 13.5 Step 4 (a) -This is THE ORDER of the 26 letters: (b) -Take in Consideration only the BIGGER letter, and when there ar 2 letters consider just one , as follows: (c) - APPLY the following Mathematical Model: +9+2+2+2+3+4+5+6 = 115 (d) APPLY the following Mathematical Model: x9x2x2x2x3x4x5x6 = 3.792438559 (14) (e) APPLY the following Mathematical Model: ONE : (1:2:3:4:5:6:7:8: :9:2:3:4:5:6) = 3.29777266 (12) (f) - Now divide (d) by (e), and multiply by (c) as: 3.792438559 (14) : 3.29777266(12) x 115 = 441 (g) - APPLY the following Mathematical Model: 351 x 351 = 12301. 
(h) - Now write down these values as herein seen:
351 = The Sum of all letters of the English Alphabet
123201 = 351 x 351
441 = How it was obtained was explained in substep (f).
Now make a SQUARE taking the BIGGER NUMBER as a Square Side as: 123201 x 4 = 492804
"492804" IT IS = THE 1st MASTER FREQUENCY of THE ENGLISH ALPHABET
The fact that THE SQUARE at the Sumerians appears to have been their ... was explained in the majority of my articles posted on Mathforum between 2003 and 2011, and you can research it at:
SECTION B - The 2nd Master Frequency of the English Alphabet
Step 1 Consider Step 1 of Section A
Step 2
(a) Consider the Value "492804" explained in Section A, Step (h), and (not considering ZERO) APPLY steps (c) through (h) of Section A as follows:
4+9+2+8+4 = 27
4x9x2x8x4 = 2304
ONE: (4:9:2:8:4) = 144
2304 x 4 = 2916
"2916"... it is the 2nd MASTER FREQUENCY of THE ENGLISH ALPHABET
SECTION C - The Pyramidal Frequency of the English Alphabet
Step 1 See the Step 1 of Section A
Step 2 APPLY the following Mathematical Model: 1+2+3+4+5+6+7+8+9+(1+0)+(1+1)+(1+2)+(1+3)+(1+4)+(1+5)+(1+6)+(1+7)+(1+8)+(1+9)+(2+0)+(2+1)+(2+2)+(2+3)+(2+4)+(2+5)+(2+6) = 135
135 = THE PYRAMIDAL FREQUENCY of THE ENGLISH ALPHABET.
SECTION D - The word "ALPHABET"
Note: As the Mathematical Models of - the 1st Master Frequency, - the 2nd Master Frequency, and - the Pyramidal Frequency were explained before, although their calculation for the word "ALPHABET" is a little trickier (for it involves the number ONE), as this calculation should not be a problematic one, this author (for space saving reasons) eliminates such calculation, and below only the values are presented.
Step 1 A/1, L/12, P/16, H/8, A/1, B/2, E/5, T/20 = 65
Step 2 4225 x 4 = 16900
"16900" it is the 1st MASTER FREQUENCY of the word "ALPHABET"
Step 3 256 x 4 = 1024
"1024" it is the 2nd MASTER FREQUENCY of the word "ALPHABET"
SECTION E - The ORDER of Vowels, and of Consonants in the English Alphabet
Step 1 Consider Step 1 of Section A. Observe that the Vowels SEPARATE the English Alphabet into sections seen as: "1313151515"
A = 1, BCD = 3, E = 1, FGH = 3, I = 1, JKLMN = 5, O = 1, PQRST = 5, U = 1, VWXYZ = 5
As the Mathematical Models for calculation of the 1st Master Frequency were explained in prior sections, herein they are just marked:
Consonants 1st Master Frequency.
BCD ..........Master Frequency = 324
FGH ..........Master Frequency = 3024
JKLMN ......Master Frequency = 1440
PQRST .......Master Frequency = 32400
VWXYZ .....Master Frequency = 57600
324+3024+1440+32400+57600 = 107748
Total 1st Master Frequency = 107748
Step 2 The 2nd Master Frequency of - the Vowels Order Code "1313151515", and - the Consonants GROUPS "107748" is:
"1313151515" .......2nd Master Frequency = 3724
"107748" ...............2nd Master Frequency = 2752
SECTION F - The 1st Master Frequencies of:
- the word JEW
- the letters JUW, and
- the Sumerian Natural Geometric PI value "3.14626437"
- the vowels AEIOU of the English Alphabet
As the Mathematical Model on which the 1st Master Frequency was calculated is already explained in the sections before, here are the 1st Master Frequencies for:
- J/10, E/5, W/23 = 839808
- J/10, U/21, W/23 = 11664
- 3.14626437 = 5184
- A/1, E/5, I/9, O/15, U/21 = 10404
...and now let's read the Time Message IN NUMBERS:
"((839808 : 11664) x (3724 - 2752)) : (351 : 26) = 5184"
...and now let's read what the Time Message says:
"((The word JEW : The word JUW) : (The 1st Master Frequency of all 1st Master Frequencies of the Consonants - The 1st Master Frequency of the Separations indicated by the Vowels)) : (The sum of letters : the Total letters) = 1st Master Frequency of THE SUMERIAN NATURAL GEOMETRIC PI VALUE 3.14626437"
which fact indicates that the Designers of the English Alphabet had Mathematically coordinated the English Alphabet and THE SUMERIAN NATURAL GEOMETRIC PI VALUE 3.14626437, and that THE ENTITY that had THE POLITICAL INTEREST (of the 18th century) changed THE SUMERIAN NATURAL GEOMETRIC PI VALUE 3.14626437 to the present PI Value 3.141592654 FOR NO OTHER REASON than
The author of this article, ION VULCANESCU, has posted on FACEBOOK the 4 pages of his US Congress recorded, and now the Mathematical Exactitude of THE SUMERIAN NATURAL GEOMETRIC PI VALUE 3.14626437 can be researched by all international researchers. See the 4 Drawings of ION VULCANESCU on his FACEBOOK PAGE at the end of the Photo Album.
...and the historical evidence clearly indicates that before the 18th century in France there were many units of measures where THE SUMERIAN NATURAL GEOMETRIC PI VALUE 3.14626437 WAS INDEED ENCODED. See yourself THE PROOF in the article: "Condemnation by the Paris Livre of the French Academy of Sciences" at
...and THIS ENTITY that changed the Civilization's PI value, when looked at through the historical evidence of the 18th century, appears to have been ...THE FRENCH ACADEMY!
The publication by the author of "THE MASONIC CODES of THE UNITED STATES OF AMERICA" continues in the next articles!
Ion Vulcanescu - Philosopher
Independent Researcher in Geometry
Author, editor and Publisher of
August 12 2011
Sullivan County, State of New York
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2287702","timestamp":"2014-04-16T04:29:23Z","content_type":null,"content_length":"25857","record_id":"<urn:uuid:5e95a9a8-5782-49d2-b61a-d58c5489716a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 2006
Re: Context
• To: mathgroup at smc.vnet.net
• Subject: [mg70980] Re: Context
• From: jljelinek at comcast.net
• Date: Fri, 3 Nov 2006 01:39:32 -0500 (EST)
• References: <eicnon$g4g$1@smc.vnet.net>

When you type in a symbol, Mathematica first checks for its occurrence through all the contexts on its $ContextPath, proceeding left to right. Assuming that you started a fresh Mathematica session, you should see {Global`, System`} after you execute $ContextPath. Now try the following program, which will print the current context and context path.

t = "outside";
Print["before ", $Context, " ", $ContextPath];
Begin["aa`"]; (* this call and the matching End[] appear to have been dropped in the archived copy; the context name "aa`" is inferred from the value of taa below *)
Print["inside ", $Context, " ", $ContextPath];
taa = "inside-aa`";
Print[t, " taa= ", taa];
End[];
Print["after ", $Context, " ", $ContextPath];

You will see that while the current context changes as a result of having executed Begin, the context path does not. It means that if you introduce a new variable t in your program, Mathematica will first try to locate it in Global`, then in System`, and having failed to find it in either, it will create a new symbol t. The new symbol will not be in the current context, though, but in the leftmost context in $ContextPath, which happens to be Global` in your case. If you want to place a symbol in a new context, then you have to use BeginPackage. Execute the following program:

Print["before ", t, " ", taa];
Print["before ", $Context, " ", $ContextPath];
BeginPackage["bb`"]; (* again, this call and the matching EndPackage[] seem to be missing from the archived copy; the context name "bb`" is inferred from the discussion below *)
Print["in ", $Context, " ", $ContextPath];
tbb = "inside-bb";
Print[t, " taa= ", taa, " tbb= ", tbb];
EndPackage[];
Print["after ", $Context, " ", $ContextPath];
Print["after ", t, " ", taa, " ", tbb];

Note that once the execution thread enters the new context bb`, the Global` context (and all other contexts if there were more of them) disappears from the context path and is replaced by bb`. (The System` context is always present.) When the symbol tbb appears, Mathematica starts looking for it left to right among the contexts on its current context path. Since it is not in bb` and it is not a system symbol either, Mathematica introduces it as a new symbol in the leftmost context, i.e., bb`, on the current context path. The program exits the bb` context on executing EndPackage and, as you see, the new context bb` is now prepended to the context path. You can check where the three symbols t, taa and tbb are located by executing Context[t], Context[taa], Context[tbb]. Even though tbb exists only in the bb` context, you can access it simply by typing tbb, since it is exported by having its context placed on the context path. This mechanism is crucial for the Mathematica packages.

dh wrote:
> Hello,
> consider:
> fun[x_]:=( Begin[x];
> Print[t];
> End[];
> );
> according to the manual one would think that the variable t in context x
> is printed. However, this is wrong! What is printed is Global`t.
> Therefore, the context to which a symbol belongs is determined during
> parsing and NOT execution.
> Can anybody give more insight and strict rules for this quirk?
> Daniel
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Nov/msg00092.html","timestamp":"2014-04-16T22:28:29Z","content_type":null,"content_length":"37275","record_id":"<urn:uuid:3d4babb3-2b0e-480f-a5b9-c72dbfcc2aa5>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
The cohomology of a product of sheaves and a plea.

The question
Consider a topological space $X$ and a family of sheaves (of abelian groups, say) $\mathcal F_i \;(i\in I)$ on $X$. Is it true that $$H^*(X,\prod \limits_{i \in I} \mathcal F_i)=\prod \limits_{i \in I} H^*(X,\mathcal F_i) \;?$$ According to Godement's and to Bredon's monographs this is correct if the family of sheaves is locally finite (in particular if $I$ is finite). [Bredon also mentions in an exercise that equality holds for spaces in which every point has a smallest open neighbourhood.] What about the general case?

A variant
Same question for $\check{C}$ech cohomology: is it true that $$\check{H}^*(X,\prod \limits_{i \in I} \mathcal F_i)=\prod \limits_{i \in I} \check{H}^*(X,\mathcal F_i) \;?$$ (Of course, $\check{C}$ech cohomology often coincides with derived functor cohomology, but still the question should be considered independently.)

A prayer
Godement's book Topologie algébrique et théorie des faisceaux was published in 1960 and is still, with Bredon's, the most complete book on the subject. I certainly appreciate the privilege of working in a field where a book released half a century ago is still relevant: programmers and molecular biologists are not so lucky. Still I feel that a new treatise is due, in which naïve/foundational questions like the above would be addressed, and which would take the research and shifts in emphasis of half a century into account: one book on sheaf theory every 50 years does not seem an unreasonable frequency. So might I humbly suggest to one or several of the awesome specialists on MathOverflow to write one? I am sure I'm not the only participant here whose eternal gratitude they would earn.

Why don't you like Kashiwara and Schapira's "Categories and sheaves"? By the way, Godement's promised second volume that would allow the reader to compute cohomologies of the sphere never materialized, has it? – Victor Protsak Jun 16 '10 at 23:27
Dear Victor, I certainly don't dislike Kashiwara/Schapira: I'm just not familiar with that book. I'll try to check it in the future: thanks for the reminder. And no, the second tome of Godement's treatise never appeared, unfortunately. – Georges Elencwajg Jun 17 '10 at 8:03

1 Answer

The answer to the first question is almost always no, see Roos, Jan-Erik (S-STOC), Derived functors of inverse limits revisited. (English summary) J. London Math. Soc. (2) 73 (2006), no. 1, 65–83.
Addendum: The crucial point is that infinite products are not exact. The most precise counterexample statement is Cor 1.11 combined with Prop 1.6, which identifies the stalks of the higher derived functors of the product with what you are interested in. Formally, it doesn't give a counterexample for a single $X$, but Cor 1.11 shows that for any paracompact space with positive cohomological dimension there is some open subset for which your question has a negative answer. It seems clear that one could construct examples for specific $X$.

The answer to the last question ("Would you please write a book on the subject?") is almost always no, too! – Allen Knutson Jun 16 '10 at 16:49
The point is that infinite product of sheaves does not preserve exact sequences, right? – Tom Goodwillie Jun 16 '10 at 19:45
Dear Torsten, thank you very much for the reference.
The article "proves,corrects and extends" results of a note by the author published in 1961 (!), which tends to confirm the feelings I expressed at the end of my question. However because of the numerous cross-references and the general structure of the paper I could not locate a counter-example to the first equality, nor the assertion that the answer is almost always no. Needless to say, that didn't prevent me from upvoting you ! – Georges Elencwajg Jun 16 '10 at 19:51 Thanks for the addendum – Georges Elencwajg Jun 17 '10 at 22:28 add comment Not the answer you're looking for? Browse other questions tagged cohomology or ask your own question.
{"url":"http://mathoverflow.net/questions/28386/the-cohomology-of-a-product-of-sheaves-and-a-plea","timestamp":"2014-04-19T22:18:20Z","content_type":null,"content_length":"59620","record_id":"<urn:uuid:ecc4aafd-104c-4082-a4aa-41b52d88c823>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Causal Interpretation and Identification of Conditional Independence Structures
Seminar 3 on LEARNING CAUSAL MODELS
November 2 - 12, 1999
Organizing Committee: David Heckerman, Microsoft Research; Steffen Lauritzen, Aalborg University
Seminars 1 and 2 introduce connections between causal interpretations of graphs and their conditional independence properties. This seminar will discuss how these connections can be applied to the problem of learning about causal relations from data. We consider both Bayesian and asymptotic approaches, with an emphasis on the former. We relate causal interpretations to assumptions commonly used for the selection of graph structure, such as parameter independence, parameter modularity, and marginal likelihood equivalence. In addition, we address difficulties in scoring and searching over graphical models with latent variables, compare model selection to model averaging techniques, and discuss assumptions under which "counterfactual" information can be learned.
INVITED SPEAKERS
G. Cooper, University of Pittsburgh
J. Andersen, Aalborg University
B. Frey, University of Waterloo
J. Cheng, University of Alberta
T. Richardson, University of Warwick
G. Shafer, Rutgers University
P. Giudici, University of Pavia
J. Whittaker, Lancaster University
R. Shachter, Stanford University
{"url":"http://www.fields.utoronto.ca/programs/scientific/99-00/causal/seminar_3.html","timestamp":"2014-04-17T07:45:55Z","content_type":null,"content_length":"9736","record_id":"<urn:uuid:6e572e1a-6204-404d-b5bc-1a9adb68adcb>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Tutorials on Analysis of Data using MATLAB
The tutorials below are aimed at freshman engineering students using the Student Version of MATLAB, i.e. with only the symbolic toolbox. To be sure, MATLAB sells excellent toolboxes on statistics and curve fitting, but these are not included in the Student Version. Furthermore, while convenient, the programs in these toolboxes are generally too "user friendly." That is, they require little thought or understanding. The tutorials use bold italics to indicate icons or menu items to click on, and >> with blue text for MATLAB commands that must be executed.
The first five tutorials use data for the vapor pressure of carbon monoxide versus temperature as an example. These data are first fit with polynomials, next with the Clapeyron equation, and finally with the Antoine equation. The following tutorials are intended to be performed in the order indicated, with each building on the work done in the preceding.
3. Use of MATLAB to fit data to a polynomial
4. Using theory to correlate data
5. Using a semi-empirical equation to correlate data
The next tutorial deals with descriptive statistics for data on a single variable, including the mean, standard deviation, histograms, cumulative distribution plots, skewness, kurtosis and confidence limits for the mean. It ends with a MATLAB function that calculates all of these, except a histogram, for a given set of data.
6. Descriptive statistics for measurements of a single variable
The next tutorial deals with comparison of means of two populations using samples from those populations.
Last modified May 14, 2010
Contact Professor Wilcox with your comments, suggestions and questions. Are there other topics you'd like to see covered?
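The MATLAB function for descriptive statistics mentioned in tutorial 6 is not reproduced on this page. Purely as an illustration of the quantities it is said to compute (mean, standard deviation, skewness, kurtosis and confidence limits for the mean), here is a rough Python/NumPy-SciPy sketch; the function name, the interface and the t-based confidence interval are my own assumptions, not taken from the tutorials.

import numpy as np
from scipy import stats

def describe(x, confidence=0.95):
    # Descriptive statistics for a single sample, mirroring the list in tutorial 6.
    x = np.asarray(x, dtype=float)
    n = x.size
    mean = x.mean()
    std = x.std(ddof=1)                 # sample standard deviation
    sem = std / np.sqrt(n)              # standard error of the mean
    half = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * sem
    return {
        "n": n,
        "mean": mean,
        "std": std,
        "skewness": stats.skew(x, bias=False),
        "kurtosis": stats.kurtosis(x, bias=False),   # excess kurtosis
        "ci_mean": (mean - half, mean + half),
    }

print(describe([10.1, 9.8, 10.3, 10.0, 9.9, 10.2]))

A t-based interval is used for the confidence limits here, which is the usual choice for small samples; the histogram and cumulative distribution plots mentioned in the tutorial are left out.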
{"url":"http://people.clarkson.edu/~wwilcox/ES100/dataproc.htm","timestamp":"2014-04-21T03:42:40Z","content_type":null,"content_length":"32427","record_id":"<urn:uuid:2c1304b5-9d42-4cea-81aa-53e0596c835e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Polygonal billiards
Billiards are important models in mesoscopic physics. They are simpler than general systems, both classically and quantum mechanically, which makes them suitable objects to study how the quantum mechanical properties are influenced by the dynamics of the underlying classical system.
Classical mechanics
Polygonal billiards are interesting examples whose classical dynamics is neither integrable nor chaotic. The motion in a typical polygon is conjectured to be ergodic on the three-dimensional constant-energy surfaces, but this is not rigorously proven so far. There is numerical evidence that motion on these energy surfaces may exhibit even stronger ergodic properties, e.g. mixing. The motion in a rational polygon (all angles are rationally related to Pi) is restricted to two-dimensional invariant surfaces. That is similar to integrable systems, but the genus of these surfaces is larger than 1, so they do not have the topology of tori. Rational polygonal billiards are therefore also characterized as pseudointegrable. It is proven that the flow on such a surface is ergodic and not mixing. It is an open question whether this flow is typically weak mixing. Weak mixing as the maximal ergodic property implies interesting (classical) spectral properties. I have studied the spectra of the barrier billiard, see Fig. 1, in [1]. Recently, I have discovered an interesting relation to Andreev billiards [3].
Quantum mechanics
While the classical dynamics in rational polygons is close to integrability, it has been found that the energy eigenstates are similar to those in fully chaotic cavities. They look typically "irregular", as can be seen in Fig. 2. I have resolved the paradox by showing that appropriate superpositions of energy eigenstates share properties of eigenstates in integrable systems [2]. In collaboration with T. Gorin, G. Carlo and A. Bäcker, I study the structure of the energy eigenstates in more detail. Preliminary numerical results indicate that the eigenstates have multifractal properties in momentum space. The statistical properties of high-lying energy levels in rational polygons are conjectured to be close to a third universality class beside the Poisson statistics for integrable systems and the random-matrix statistics GOE for chaotic systems with time-reversal symmetry: the semi-Poisson statistics. The black curve in Fig. 3 is numerical data obtained from the barrier billiard [4]. An explanation for these "critical statistics" is still lacking. In order to describe polygonal billiards in the framework of semiclassical periodic-orbit theory, so-called diffractive orbits, which start and end at (critical) corners of the polygon, have to be taken into account.
Polygonal billiards have interesting applications in mesoscopic optics. Presently, I investigate the emission properties of coupled dielectric resonators of hexagonal shape.
[1] J. W. Singular continuous spectra in a pseudointegrable billiard. Phys. Rev. E, 62:R21-24, 2000.
[2] J. W. The quantum-classical correspondence in polygonal billiards. Phys. Rev. E, 64:026212, 2001.
[3] J. W. Pseudointegrable Andreev billiard. Phys. Rev. E, 65:036221, 2002.
[4] J. W. Spectral properties of quantized barrier billiards. Phys. Rev. E, 65:04627, 2002.
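As a concrete illustration of the classical billiard dynamics discussed above, the following Python sketch traces a trajectory in the simplest rational polygon, the unit square, using the standard unfolding trick (free flight in the plane, folded back into the table by reflections). This is only a toy example under my own assumptions; the barrier billiard of Fig. 1 would additionally require explicit collision handling at the barrier, which is omitted here.

import numpy as np

def fold(u):
    # Fold a free-flight coordinate back into [0, 1]; each fold is a specular reflection.
    w = np.mod(u, 2.0)
    return np.where(w > 1.0, 2.0 - w, w)

def square_billiard(x0, y0, vx, vy, t):
    # Position at times t of a point particle in the unit square with elastic walls.
    return fold(x0 + vx * t), fold(y0 + vy * t)

t = np.linspace(0.0, 50.0, 2001)
x, y = square_billiard(0.2, 0.3, 1.0, np.sqrt(2.0), t)  # irrational velocity ratio: the orbit fills the square

Plotting x against y (e.g. with matplotlib) shows the familiar dense, non-chaotic filling of the square; genuinely polygonal tables such as the barrier billiard need a segment-by-segment reflection routine instead.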
{"url":"http://www.mpipks-dresden.mpg.de/~jwiersig/interest/polygons/polygons.html","timestamp":"2014-04-20T14:04:28Z","content_type":null,"content_length":"6180","record_id":"<urn:uuid:82f534ab-5998-426b-a1bc-56c5ffb826ca>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Topologically enriched homotopy colimits commuting with homotopy pullbacks

I am looking for an enriched analogue of Proposition 4.4 in https://www.google.de/url?q=http://hopf.math.purdue.edu/Rezk-Schwede-Shipley/simplicial.pdf

Concretely, I would like to prove the following statement: Suppose $K$ is a topologically enriched category, i.e. the morphism sets carry a topology and composition is continuous. For a functor $X: K \to Top$, I can consider an enriched version of the homotopy colimit, namely $hocolim X$ is the realization of the simplicial space $srep X$, whose $n$-th level is given by $\coprod_{k_0,\ldots,k_n \in (ob K)^n} K(k_0, k_1) \times \ldots \times K(k_{n-1},k_n) \times X(k_0)$.

Then, suppose there is a natural transformation between (enriched) functors $X,Y: K \to Top$, such that the diagram $$\begin{array}{ccc} X(k) & \to & Y(k) \\ \downarrow & & \downarrow \\ X(l) & \to & Y(l) \end{array}$$ is a homotopy pullback for all $k,l \in ob\ K$ and all morphisms $\alpha: k \to l$, which induce the vertical arrows. Then the diagram $$\begin{array}{ccc} X(k) & \to & Y(k) \\ \downarrow & & \downarrow \\ hocolim X & \to & hocolim Y \end{array}$$ is a homotopy pullback for all $k \in ob K$.

Has anyone ever seen a statement like this or an idea on how to prove it? If it helps, one may assume that the natural transformation $X \to Y$ is levelwise a Serre fibration of topological spaces, since this is the only case in which I need the statement to be true.

Thanks in advance, Alex

2 Answers

Rainer Vogt worked on this sort of problem originally back in the 1970s, so check out his papers from that time. The theory involves homotopy coherence, so you may need to check that out in his early paper (R. Vogt, Homotopy limits and colimits, Math. Z., 134, (1973), 11–52.)

I haven't thought about this hard (no time) but here are quick observations. Your homotopy colimit is the bar construction $B(\ast,K,X)$, the geometric realization of the simplicial space with $n$-simplices $B_n(\ast,K,X)$, as you state. The map $X(k) \to B(\ast,K,X)$ you are interested in is the geometric realization of the map from the constant simplicial space at $X(k)$ to $B_*(\ast, K, X)$ that identifies $X(k)$ with the subspace of $B_n(\ast,K,X)$ that sees only identity maps of the object $k$. Homotopy pullbacks of diagrams one leg of which is a (Hurewicz) fibration are equivalent to actual pullbacks, so one approach might be to try to prove that $B(\ast,K,X) \to B(\ast,K,Y)$ is a fibration. It is standard that geometric realization of simplicial spaces preserves pullbacks (takes levelwise pullbacks to pullbacks). A variation on the theme of replacing maps by fibrations should convince you that geometric realization also preserves homotopy pullbacks (takes levelwise homotopy pullbacks to homotopy pullbacks). So you would like your map to be the realization of a levelwise homotopy pullback. However, your stated hypothesis feels wrong to me, since it does not take the topology on the category K into account. Your hypothesis presumably should say that the evident square with upper left corner $K(\ell,k)$ and lower right corner $Map(X(k),Y(\ell))$ is a homotopy pullback. Assuming that, you should be able to prove that your map of simplicial spaces is a levelwise homotopy pullback, and then you would be done. Hope that helps a bit.
Dear Mr May, thank you very much for your response. I am not yet sure if this all works out. At least, I am pretty convinced now that the induced map $B(\ast,K,X) \to B(\ast,K,Y)$ is usually not a fibration. I also get the feeling that the hypotheses might be wrong. At some places, I needed the condition that the square $$\begin{array}{ccc} K(k,l) \times X(k) & \to & K(k,l) \times Y(k) \\ \downarrow & & \downarrow \\ X(l) & \to & Y(l) \end{array}$$ is a homotopy pullback (the vertical maps are the action maps). I do not know if this condition is equivalent to your proposed condition. – Alexander Körschgen Sep 28 '12 at 17:31
However, the problem is that the level fibrations which also satisfy the homotopy pullback condition should become fibrations in a cofibrantly generated model structure, and I do not know how to characterize the maps by a lifting property, if I change the condition to the one proposed by you or the one I stumbled upon. – Alexander Körschgen Sep 28 '12 at 17:31
{"url":"http://mathoverflow.net/questions/107071/topologically-enriched-homotopy-colimits-commuting-with-homotopy-pullbacks","timestamp":"2014-04-16T19:41:59Z","content_type":null,"content_length":"58189","record_id":"<urn:uuid:678aff54-9b41-470e-8f13-416a71f80d60>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Problems with vlan + carp + alias
• To: Primeroz lists <primeroz.lists@googlemail.com>
• Subject: Re: Problems with vlan + carp + alias
• From: Giulio Ferro <auryn@zirakzigil.org>
• Date: Sun, 22 Jun 2008 10:43:08 +0200
• Cc: freebsd-net@freebsd.org, Han Hwei Woo <hhw@pce-net.com>

Primeroz lists wrote:
What is tcpdump showing for ping on 192.168.10.11? Can you see the echo reply exiting the vlan10 interface?
What if you try from your server to "ping -S 192.168.10.11 192.168.10.254"?

First of all I'm sorry for the late reply. Yesterday I could do some more in-depth tests to analyze this strange behavior of my firewall.
The 192.168.10.0/24 range I used in the previous example isn't the real one, I just used it for simplicity's sake. The true range, the one which has been assigned by the ISP to my customer, is x.y.z.128/27. (x.y.z corresponds to a true public IP address)
I've deactivated the firewall, so we have one less thing to worry about:
/etc/rc.d/pf stop
This is a pure network configuration issue. Here is the relevant part in /etc/rc.conf:
ifconfig_bce0="inet 192.168.26.1 netmask 255.255.255.0"
cloned_interfaces="vlan5 vlan25 vlan30 vlan40 vlan128 carp5 carp25 carp30 carp40 carp128"
ifconfig_vlan128="inet x.y.z.157 netmask 255.255.255.224 vlan 128 vlandev bce0"
ifconfig_carp128="vhid 128 pass qweq x.y.z.132 netmask 255.255.255.255"
ifconfig_carp128_alias0="x.y.z.133 netmask 255.255.255.255"
ifconfig_carp128_alias1="x.y.z.134 netmask 255.255.255.255"
ifconfig_carp128_alias2="x.y.z.135 netmask 255.255.255.255"
ifconfig_carp128_alias3="x.y.z.136 netmask 255.255.255.255"
ifconfig_carp128_alias4="x.y.z.137 netmask 255.255.255.255"
ifconfig_carp128_alias5="x.y.z.138 netmask 255.255.255.255"
ifconfig_carp128_alias6="x.y.z.139 netmask 255.255.255.255"
ifconfig_carp128_alias7="x.y.z.140 netmask 255.255.255.255"
ifconfig_carp128_alias8="x.y.z.141 netmask 255.255.255.255"
On my managed switch I've set 2 ports:
1) the one where the bce0 interface is plugged in: mode trunk with all the vlans above
2) the one where the ISP internet is plugged in: mode access with vlan 128
I've also set the IP interface of my switch to x.y.z.155, vlan 128.
Here is the relevant part of netstat -rn on my machine:
default x.y.z.129 UGS 0 13966 vlan12
x.y.z/27 link#11 UC 0 0 vlan12
x.y.z.132 x.y.z.132 UH 0 0 carp12
x.y.z.133 x.y.z.133 UH 0 0 carp12
x.y.z.134 x.y.z.134 UH 0 0 carp12
x.y.z.135 x.y.z.135 UH 0 0 carp12
x.y.z.136 x.y.z.136 UH 0 0 carp12
x.y.z.137 x.y.z.137 UH 0 0 carp12
x.y.z.138 x.y.z.138 UH 0 0 carp12
x.y.z.139 x.y.z.139 UH 0 0 carp12
x.y.z.140 x.y.z.140 UH 0 0 carp12
x.y.z.141 x.y.z.141 UH 0 0 carp12
x.y.z.155 00:1e:c9:90:4a:c0 UHLW 1 8 vlan12 1183
Here come the tests.
1) From the firewall: basic
I can ping both the default gateway (x.y.z.129) and the switch interface (x.y.z.155).
I can ping a generic internet address (a.b.c.d).
With tcpdump I can see the packets leaving as x.y.z.157 and coming back with the same.
2) From the switch: basic
I can ping the firewall's vlan address (x.y.z.157).
I can ping _ALL_ the carp interfaces, base and alias:
ping x.y.z.157 -> OK
ping x.y.z.132 -> OK
ping x.y.z.133 -> OK
ping x.y.z.141 -> OK
3) From the internet: basic
From an external internet address I can ping the vlan address:
ping x.y.z.157 -> OK
4) From the firewall: advanced
From the firewall I can ping the switch address from one of the carp base and aliased addresses:
ping -S x.y.z.132 x.y.z.155 -> OK
ping -S x.y.z.133 x.y.z.155 -> OK
I _cannot_ ping the default router from one of the carp addresses:
ping -S x.y.z.132 x.y.z.129 -> NOT OK
ping -S x.y.z.133 x.y.z.129 -> NOT OK
By using tcpdump on the vlan128 interface I can see the packets _BOTH_ leaving and coming from the carp addresses. It just seems that the carp interfaces can't process the packets properly.
5) From the internet: advanced
From an external internet address I _cannot_ ping the carp addresses (x.y.z.132 and up).
As above, I can see the incoming packets with tcpdump -i vlan128 -n icmp
Ok, that was long. I hope someone can help to shed light on this, to see whether this is a bug or not. I stress again that the _same_ configuration works as it should on a physical (non-vlan) interface.
{"url":"http://monkey.org/freebsd/archive/freebsd-net/200806/msg00249.html","timestamp":"2014-04-16T22:05:38Z","content_type":null,"content_length":"10448","record_id":"<urn:uuid:68f8827d-9057-406b-99da-1a33abb63965>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
What does the d-slice of a weighted polynomial algebra look like?

This question comes from the explicit construction of a smooth projective model of a hyperelliptic curve. Nevertheless it is fully elementary and, to me, more interesting than hyperelliptic curves.

Notations. In the following, $k$ will always denote a commutative ring with $1$. "Graded $k$-algebra" will always mean a $k$-algebra graded by $\mathbb N$. Whenever $S$ is a graded $k$-algebra and $d$ is a positive integer, we denote by $S^{(d)}$ the graded $k$-algebra whose $i$-th graded component is $S^{di}$ (this means the $di$-th graded component of $S$) for each $i\in\mathbb N$. The algebra structure on $S^{(d)}$ is inherited from $S$ (since $S^{(d)}$ is a subset of $S$ and easily seen to be a subalgebra). Whenever we speak of "standard polynomial algebras", we mean polynomial $k$-algebras with standard grading (i.e., any indeterminate has degree $1$).

First, here is the fact that helps us construct that projective model:

Theorem 1. Let $e$ be a positive integer. Let $k\left[X,Y\right]$ and $k\left[X_0,X_1,...,X_e\right]$ be standard polynomial algebras. Then, the $k$-algebra homomorphism $k\left[X_0,X_1,...,X_e\right]\to k\left[X,Y\right]^{(e)},$ $X_i\mapsto X^iY^{e-i}$ is graded and surjective. Its kernel is the ideal generated by terms of the form $X_iX_j-X_{i+1}X_{j-1}$ for $i$ and $j$ satisfying $0\leq i < j-1 < j \leq e$.

This is easily proven combinatorially, by constructing a basis of $k\left[X,Y\right]^{(e)}$ of monomials and lifting it to a basis of $k\left[X_0,X_1,...,X_e\right]$ of monomials. The natural questions are now:

Question 2. Theorem 1 gives $k\left[X,Y\right]^{(e)}$ as a graded quotient $k$-algebra of $k\left[X_0,X_1,...,X_e\right]$. Can we similarly represent $k\left[X,Y,Z\right]^{(e)}$ or $k\left[Y_1,Y_2,...,Y_n\right]^{(e)}$?

Question 3. Now assume that we grade a polynomial algebra $k\left[T_1,T_2,...,T_e\right]$ in a nonstandard way, i.e., we have $\deg T_i=\alpha_i$ for some positive integers $\alpha_i$. (The ground ring $k$ is still in the $0$-th component.) What is a necessary and sufficient condition on $d$ for the $k$-algebra $k\left[T_1,T_2,...,T_e\right]^{(d)}$ to be generated by its degree-$1$ component (as an algebra over $k$)? Clearly, a necessary condition is for $d$ to be divisible by all $\alpha_i$, but I can't see whether it is sufficient.

Elementary reformulation: If $\alpha_1$, $\alpha_2$, ..., $\alpha_e$ are positive integers, then what conditions do we have to impose on a positive integer $d$ in order for the following to hold: Whenever $\beta_1$, $\beta_2$, ..., $\beta_e$ are nonnegative integers satisfying $d\mid\alpha_1\beta_1+\alpha_2\beta_2+...+\alpha_e\beta_e$, there exist nonnegative integers $\gamma_1\leq \beta_1$, $\gamma_2\leq \beta_2$, ..., $\gamma_e\leq \beta_e$ such that $\alpha_1\gamma_1+\alpha_2\gamma_2+...+\alpha_e\gamma_e = d$.

On the one hand, this looks like elementary number theory; on the other it reminds me of combinatorial facts like the one claiming that a regular bipartite graph can be factored into perfect matchings. None of these helps me prove or disprove the natural conjecture (that the condition is that $d$ is divisible by all $\alpha_i$), though...

Question 2: On the geometric side, you are asking about the $e$th Veronese embedding of $n-1$ dimensional projective space. I am happy to elaborate in person. – Steven Sam Nov 16 '11 at 2:06
Thanks, Steven.
We'll see whether I can hijack tomorrow's combinatorics preseminar with this question. – darij grinberg Nov 16 '11 at 6:49
Hyperelliptic curves are actually pretty interesting. – JSE Nov 17 '11 at 2:37

1 Answer

Question 2: The following map defines a surjective $k$-algebra homomorphism onto $k[Y_1,\ldots,Y_n]^{(e)}$: $$\varphi: k[X_{i_1,...,i_n} \mid i_1 + ... + i_n = e] \to k[Y_1,\ldots,Y_n],\quad X_{i_1,...,i_n} \mapsto Y_1^{i_1} \cdots Y_n^{i_n}.$$ For, let non-negative rational integers $j_1,...,j_n$ be given whose sum is $de$, and let $I_p =(i_{p1},...,i_{pn})$ be non-negative rational integers such that $i_{p1} + ... + i_{pn} = e$. Because of $$Y_1^{j_1} \cdots Y_n^{j_n}\overset{!}{=}\varphi(\prod_{p=1}^d X_{I_p}) =\prod_{p=1}^d (Y_1^{i_{p1}} \cdots Y_n^{i_{pn}}) = (Y_1^{\sum_{p=1}^d i_{p1}}) \cdots (Y_n^{\sum_{p=1}^d i_{pn}})$$ we want to solve $$\sum_{p=1}^d i_{p,q} = j_q,\quad (q=1,...,n).$$ In case $d=1$ choose $i_{1q} = j_q$. Assume the equation is solvable for $d-1$. Choose $0 \le i_{d,q} \le j_q$ such that $i_{d1} + ... + i_{dn} = e$ (possible since $j_1 +...+j_n = de \ge e$). Then the linear system above is equivalent to $$\sum_{p=1}^{d-1} i_{p,q} = j_q - i_{d,q},\quad (q=1,...,n)$$ which is solvable by the induction hypothesis.

Thanks, but what interests me is the kernel of your $\varphi$. – darij grinberg Nov 16 '11 at 6:48
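The "elementary reformulation" in the question above is easy to probe experimentally. The following Python sketch is purely my own illustration (the function name and the cap on exponent vectors are arbitrary choices): it brute-forces the stated condition for small weights, so it can only hunt for counterexamples below the cap, and a True result proves nothing in general.

from itertools import product

def check_reformulation(alpha, d, cap=8):
    # Test the reformulated condition for weights alpha and slice d, over all
    # exponent vectors beta with entries <= cap: whenever the weighted degree of
    # beta is positive and divisible by d, some componentwise-smaller gamma must
    # have weighted degree exactly d.
    e = len(alpha)
    for beta in product(range(cap + 1), repeat=e):
        total = sum(a * b for a, b in zip(alpha, beta))
        if total == 0 or total % d != 0:
            continue  # beta = 0 corresponds to the constant 1 and is excluded
        hit = any(
            sum(a * g for a, g in zip(alpha, gamma)) == d
            for gamma in product(*(range(b + 1) for b in beta))
        )
        if not hit:
            return False, beta   # a violation of the condition below the cap
    return True, None

# Example: weights (2, 3) with d = 6, which is divisible by every weight.
print(check_reformulation((2, 3), 6))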
{"url":"http://mathoverflow.net/questions/81001/what-does-the-d-slice-of-a-weighted-polynomial-algebra-look-like","timestamp":"2014-04-19T12:16:27Z","content_type":null,"content_length":"60527","record_id":"<urn:uuid:4a972949-230a-4447-a0a0-12b3a4e1d578>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the best way to study Rational Homotopy Theory

I studied basic algebraic topology elements: fundamental group, higher homotopy groups, fibre bundles, homology groups, cohomology groups, obstruction theory, etc. I want to study Rational Homotopy Theory. Specifically, I want to study Sullivan's model. What is the short way and what is the complete way to study Sullivan's model?

6 Answers

Griffiths and Sullivan wrote a fine book on the subject. Apart from the obvious attractiveness of learning a theory from its creator, it is written in an amazingly user-friendly style. For example, Chapter XIII is devoted to examples and computations: it starts with the computation of a minimal model for the forms on a sphere and ends with Massey triple products on compact Kähler manifolds, a section inspired by the 1974 Inventiones article of Deligne, Griffiths, Morgan, Sullivan. The first hundred pages (Chapters I to VII) are an introduction to the necessary algebraic topology and you can probably essentially skip it, judging from your description of what you already know.
Reference: Griffiths, P.; Morgan, J. (1981), Rational homotopy theory and differential forms, Progress in Mathematics, 16, Birkhäuser
I am amazed at the similarity of SGP's and my recommendations, posted independently 6 seconds apart! – Georges Elencwajg Feb 27 '11 at 13:26

This depends very much on what you want to see (Griffiths-Morgan has been mentioned, and I recommend it as well):
1. A quick introduction: Morita, "Geometry of characteristic classes", chapter 1. He also treats the non-simply connected case.
2. A broad and comprehensive treatise, with tons of examples: Felix, Halperin and Thomas, "Rational homotopy theory". If you fear spectral sequences, this is the book to use for the "complete way".
3. An inspiring paper that you'll read 20 times: Sullivan, "Infinitesimal computations in topology".
4. A collection of geometric applications, starting from the historical origin (the de Rham models for Lie groups and homogeneous spaces): "Algebraic models in geometry", by Felix, Oprea, Tanré.
5. Model categories in action: Gelfand-Manin, Homological algebra (the last chapter) or Kathryn Hess: "Rational homotopy theory, a brief introduction".

Start with Griffiths-Morgan's green book on Rational homotopy. A quick introduction is also provided in the Springer GTM by Bott and Tu. Also useful is the case of compact Kahler manifolds treated in the paper by Deligne, Griffiths, Morgan and Sullivan in Inventiones Math 1976 (available free at Digizeitschriften)
Here is how to write ä. – Chandan Singh Dalawat Mar 7 '11 at 10:23

Here are some video lectures that John Morgan gave at Stony Brook: http://www.math.sunysb.edu/Videos/dfest/ This also has many other nice mathematical videos.

After reading Griffiths-Morgan, Bott-Tu (not just the chapter on Rational Homotopy Theory, I would say) and Felix-Halperin-Thomas, maybe it wouldn't be a bad idea to be acquainted with:
1. Halperin, Lectures on minimal models, Mémoires SMF 230 - aka "the bible": all technical details you won't find elsewhere.
2. Bousfield, Gugenheim, On PL De Rham theory and rational homotopy type, Memoirs AMS 179 - the model category point of view; Sullivan's results can be stated as an equivalence of categories: find which.
3.
Lehman, Théorie homotopique des formes différentielles, Astérisque 45 - if you know French, this is a very nice introduction to the subject.

I once mentioned to Dennis Sullivan I was thinking about studying RHT from Felix-Halperin-Thomas. He told me that's a nice book on modern topology, but it doesn't have anything to do with rational homotopy theory. I hope one day he'll explain to me what the heck he means by that.......... – Andrew L Jul 20 '11 at 20:58

While not comprehensive, the following book has a nice introduction to the subject in its first chapter or so: J. Oprea, A. Tralle, Symplectic manifolds with no Kähler structure, Lecture Notes in Math. 1661, Springer–Verlag, 1997.
{"url":"http://mathoverflow.net/questions/56809/what-is-the-best-way-to-study-rational-homotopy-theory?sort=votes","timestamp":"2014-04-17T01:39:19Z","content_type":null,"content_length":"70392","record_id":"<urn:uuid:efd066cc-a732-4852-935e-073f0d06d5dd>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
odd prime numbers Just read a puzzling fact in Le Monde science leaflet: someone at the “Great Internet Mersenne Prime Search” (GIMPS) just found a prime number with 17 million digits and…it is the 48th prime number of the form 2p-1. Which just means it is an odd number. How odd a remark!!! And how wrong. In fact, the short news item meant that this is a Mersenne number, of the form 2^p-1! Another victim of a hasty cut&paste, I presume… 2 Responses to “odd prime numbers” 1. Christian/Your reference to Mersenne primes reminds me the famous “Mersenne twister” by Matsumoto and Nishimura with a quite smaller p value (p=19937) but yet a very long period of (2^p)-1! This random number generator is implemented in R by default if I remember correctly: see also the package “randtoolbox” for additional generators. □ Yes, indeed! This is the default generator in R (check with ?RNG).
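Since the whole joke turns on the difference between 2p-1 and 2^p-1, here is a small self-contained Python illustration (my own addition, not from the post or its comments) using the classical Lucas-Lehmer test for Mersenne numbers; it shows that 2^p-1 is only occasionally prime even when p is prime.

def lucas_lehmer(p):
    # True if the Mersenne number 2**p - 1 is prime, for an odd prime exponent p.
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]:
    print(p, lucas_lehmer(p))   # 11, 23 and 29 give composite Mersenne numbers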
{"url":"http://xianblog.wordpress.com/2013/02/09/odd-prime-numbers/","timestamp":"2014-04-18T13:39:58Z","content_type":null,"content_length":"35840","record_id":"<urn:uuid:46f85779-5768-450a-9150-aea8846dba57>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
An Exploratory Study of Schema-Based Word-Problem-Solving Instruction for Middle School Students with Learning Disabilities: An Emphasis on Conceptual and Procedural Understanding By: Asha Jitendra and Caroline M. DiPipi (2003) A multiple-probe-across-participants design included baseline, treatment, generalization, and maintenance. During treatment, students received schema strategy training in problem schemata (conceptual understanding) and problem solution (procedural understanding). Results indicated that the schema-based strategy was effective in substantially increasing the number of correctly solved multiplication and division word problems for all 4 participants. Maintenance of strategy effects was evident for 10, 5 1/2, and 2 1/2 weeks following the termination of instruction for Sara, Tony, and Percy, respectively. In addition, the effects of instruction generalized to novel word problems for all 4 participants. The importance of mathematics literacy and problem solving has been emphasized by researchers (e.g., De Corte, Greer, & Verschaffel, 1996; Goldman, Hasselbring, & the Cognition and Technology Group at Vanderbilt, 1997; Patton, Cronin, Bassett, & Koppel, 1997) and in national reports (e.g., National Council of Teachers of Mathematics [NCTM], 2000; National Education Goals Panel, 1997). Although instruction that emphasizes reflective thinking and reasoning is considered by many to be critical to mathematics reform efforts (Baroody & Hume, 1991; Bottge, 1999; Hofmeister, 1993; Montague, 1997a), procedures that encourage memorization and the completion of lengthy worksheets requiring rote practice are common in many classrooms (Parmar & Cawley, 199 1). Mathematics instruction in special education, particularly, has been characterized to a large extent by its emphasis on rote memorization of facts and computational skills, rather than on developing important concepts and applying mathematics to real-world problem situations (Baroody & Hume, 1991; Bottge, 1999; Parmar, Cawley, & Miller, 1994; Woodward & Montague, 2000). Many researchers argue that highly procedural instruction (meaningless drill and practice of computation facts) may sustain the characterization of students with learning disabilities as passive learners and fail to fill the gaps in their conceptual understanding of the core concepts and principles underlying mathematical thinking (Baroody & Hume, 1991; Parmar et al., 1994; Torgesen, 1982; Woodward & Montague, 2000). It is not surprising, then, that many students with learning disabilities have difficulty with higher level mathematics skills, such as solving word problems (Xin & Jitendra, 1999). Parmar, Cawley, and Frazita (1996) compared the performance of students with and without disabilities on arithmetic word problems involving all four operations and problems that contained direct or indirect statements, extraneous information, and one-step or two-step problems. Results indicated that the students with disabilities performed at significantly lower levels than the students without disabilities on all problem types. The students with disabilities experienced considerable difficulty with problem representation or identifying relevant information, along with difficulties in reading, computation, and identifying operations. One plausible deficiency of traditional mathematics instruction is its failure to make explicit the key aspects of domain knowledge needed for problem solving. 
Emerging views in special education indicate the importance of explicit instruction and practice in domain-specific problem solving (Hofmeister, 1993; Mercer & Miller, 1992; Parmar et al., 1996), with an increased emphasis on the domain heuristic of graphically representing word problems (Jitendra, Hoff, & Beck, 1999; Xin & Jitendra, 1999). Domain-specific knowledge encompasses both conceptual and procedural knowledge. Conceptual knowledge refers to the hierarchical network of knowledge and its corresponding relationships (Hiebert & Lefevre, 1986); procedural knowledge results from the organization of conceptual knowledge into action units (Anderson, 1989). Knowledge organization and pattern recognition are key aspects of conceptual knowledge (Silver & Marshall, 1990). For example, mathematical problem solving requires the ability to organize problems (e.g., distance-rate-time problems, interest problems, discount problems) by structural similarity (e.g., generalized rate problems). During problem solving, all problem-relevant knowledge is accessible only when the knowledge is adequately organized in memory by a suitable cognitive structure (i.e., problem schemata). Problem schemata are elements of knowledge that are closely linked with each other within the knowledge base (Chi, Glaser, & Rees, 1982). Knowledge of the mathematical structure of problems, in turn, can facilitate activation of the relevant schemata or patterns that would guide problem representation, which is necessary for solving problems. Clearly, providing quality instruction that emphasizes both problem representation and problem solution is deemed important to successful problem solving (Fraivillig, Murphy, & Fuson, 1999; Fuson & Willis, 1989). Problem representation involves translating a problem from words into a meaningful representation. This could include a "combination of something written on paper, something existing in the form of physical objects and a carefully constructed arrangement of an idea in one's mind" (Janvier, 1987, p. 68). Problem solution refers to the selection and application of appropriate mathematical operations based on the representation. It involves both solution planning and execution of mathematical operations. Mathematical problem-solving instruction should not only emphasize conceptual knowledge of the operations but also facilitate "a highly integrated understanding of the operations and the many different but related meanings these operations take on in real contexts" (Van de Walle, 1998, p. 117). The big ideas for developing meanings for the operations should, for example, show that addition and subtraction are connected and that multiplication and division are related. In the context of solving story problems, for example, models or diagrams can be used to represent the information in a problem and to figure out what operation is needed to solve the problem (Van de Walle, 1998). Most models for understanding and assessing children's solution of problems are generally derived from cognitive psychology (Briars & Larkin, 1984; Carpenter & Moser, 1984; Fennema, Carpenter, & Peterson, 1989; Kintsch & Greeno, 1985; Riley, Greeno, & Heller, 1983). These models of problem solving emphasize the importance of the problem's semantic characteristics (Silver & Marshall, 1990). As students develop knowledge in a domain (e.g., mathematics), the knowledge structure eventually takes on the form of schema mapping of relationships. 
Schema as a knowledge structure serves the function of knowledge organization. According to Marshall (1995), schemata are the basis for understanding and the appropriate mechanism for the problem solver to use to "capture both the patterns of relationships as well as their linkages to operations" (p. 67). A distinctive feature of schemata is that when one piece of information is retrieved from memory during problem solving, other connected pieces of information will be activated. Problem schemata pertaining to a wide range of problems involving all four operations include " change," "group,"" compare," "vary," and "restate" (Marshall, Pribe, & Smith, 1987). These problem types dominate word problems typically found in the elementary and middle grades (Van de Walle, 1998). Recent reviews provide empirical support for schemabased word-problem-solving instruction that emphasizes conceptual understanding (Jitendra & Xin, 1997; Xin & Jitendra, 1999). The schema-based representational strategy, with its focus on schemata (i.e., problem pattern or structure) identification, is known to benefit both students with learning disabilities (elementary, middle, and high school) and students at risk for math failure (Hutchinson, 1993; Jitendra & Hoff, 1996; Jitendra et al., 1999; Jitendra et al., 1998; Zawaiza & Gerber, 1993). Strategies (e.g., schema- based instruction) that entail "looking systematically for patterns, are very close to content curriculum goals" (NCTM, 1998, p. 4). A primary characteristic of a schema-based strategy that distinguishes it from other approaches is the use of schemata diagrams to map important information and highlight semantic relations in the problem to facilitate problem translation and solution. Other strategy-training procedures (e.g., cognitive and metacognitive) also may include diagrams, but the emphasis is less on identifying the semantic relations in a problem and more on problem solving heuristic procedures that lead to its solution. Although the use of schema strategy in teaching students with learning disabilities is promising, most schema- based research studies reported in the literature have focused on teaching addition and subtraction word problems (i.e., change, group, compare; e.g., Jitendra et al., 1998; Jitendra & Hoff, 1996; Jitendra et al., 1999) or algebra word problems (Hutchinson, 1993; Maccini & Hughes, 2000). However, children typically experience significant problems with multiplicative (includes all problems involving multiplication and division structure) rather than additive (includes all problems involving addition and subtraction structure) situations (Van de Walle, 1998). Also, the shift in focus during middle school to the more complex relations found in "vary" or "equal groups" and "restate" (i.e., "multiplicative comparison") multiplication and division word problems makes it necessary to teach these problem types. Although Greer (1992) discusses two other types of multiplication and division problems (i.e., combinations, and products of measures), these types of problems do not receive much attention in school (Van de Walle, 1998). The study by Zawaiza and Gerber (1993) is the only one that has investigated the effectiveness of the schema strategy in solving multiplicative comparison-type word problems. However, that study did not address "vary" problems, which are also prevalent in elementary and middle school (Van de Walle, 1998). 
In summary, the schema strategy is seen as a viable approach for teaching students with learning disabilities to solve addition and subtraction word problems. However, research on teaching middle school students with learning disabilities to solve multiplication and division word problems using schema-based instruction is lacking. Therefore, the purpose of this exploratory study was to examine the effectiveness of the schema strategy in solving multiplication and division problems. The present study extends the existing body of research regarding the applicability of the schema strategy to promote word-problem-solving skills in middle school students with learning disabilities (e.g., Jitendra et al., 1999). Specifically, the following research questions were posed: (a) Is a schema- based instructional strategy effective in teaching one-step multiplication and division word-problem solving to middle school students with learning disabilities who are low-performing in mathematics? (b) Will the students maintain the acquired word-problem-solving skills? (c) Will the students generalize the word-problem-solving skills to novel problems, including multistep problems? To address these questions, we used a multiple-probe-across- participants design. However, this single-subject design may not be well suited to determining the specific aspects of the treatment to which to attribute effects. In light of this limitation, results from this study should be viewed as preliminary. Participants included four eighth-grade students with learning disabilities (e.g., one girl and three boys) attending a suburban middle school in the northeastern United States. Participant selection was based on several criteria. First, participants had been previously identified as learning disabled by meeting Pennsylvania State criteria for learning disabilities, which stipulate that a child must demonstrate (a) a chronic condition of presumed neurological origin that selectively interferes with the development, integration, or demonstration of language or nonverbal abilities; (b) a severe discrepancy between achievement and intellectual ability in one or more of several areas (i.e., oral expression, listening comprehension, written expression, basic reading skills, reading comprehension, mathematics calculation, and mathematics reasoning), which is not correctable without special education and related services; (c) specific deficits in receptive and expressive language and deficiencies in initiating or sustaining attention, impulsivity, and other specific conceptual and thinking difficulties; (d) normal or above-normal intelligence; and (e) learning problems that are not due primarily to other disabling conditions or environmental, cultural, or economic disadvantage. This determination was made by a full assessment and comprehensive report by a certified school psychologist. A summary of participating students' characteristics is presented in Table 1. It must be noted that although the sample in this study was identified as learning disabled, whether these students had learning disabilities in mathematics is questionable according to the conventional cutoff scores for a mathematics disability (a discrepancy of at least 1 standard deviation between their scores on a standardized test of mathematics and their full scale score on an intelligence test). 
*Only Percy and possibly Tony would fit this definition of having a learning disability in mathematics (i.e., a discrepancy existed between their IQs and each of the math composite and numerical operations scores on the WIAT, but their math reasoning scores were in the average range). Second, a teacher interview indicated that these students were experiencing significant cant difficulties with mathematical problem solving, an area that was specifically targeted for instruction on each student's Individualized Education Program goals. However, the teacher reported that all students had successfully passed a criterion test of mathematics computation skills involving all four operations. In addition, each student had to complete a sample of six one-step multiplication and division word problems similar to the criterion tests used in the study. An examination of word problems completed (see Table 1) indicated that all four students experienced significant difficulties solving them; they had not reached the mastery (less than 50%) needed to solve more complex word In this study, the special education teacher conducted all testing and instruction in the learning support classroom. She was certified to teach students with mental and physical disabilities and had 12 years' teaching experience. At the time of the study, the teacher was completing her master's degree in special education. All four students were included in general education classrooms but received special education services in a learning support classroom for mathematics and other subjects (e.g., reading, English, study skills). Each participant sat at a table across from the teacher when receiving instruction in the study. During this time, the other students in the classroom worked independently on different skills, while the classroom instructional aide supervised and provided assistance as needed. Three of the participants in the study were present in the classroom during the same period. The seating of these participants was arranged to place them away from one another, at opposite ends of the room, While one participant received instruction, the others received direct instruction from the classroom aide to prevent incidental learning. The fourth participant had received mathematics earlier in the day. During independent practice and the completion of tests, each student sat at his or her assigned desk and worked alone. TABLE 1. Student Demographics │ │ Students │ │ Variable ├──────────────────────────────────┬─────────────────────┬────────────────────────────┬───────────────────────────────────────────────────────┤ │ │ Sara │ Tony │ Percy │ Andy │ │ Gender │ Girl │ Boy │ Boy │ Girl │ │ Ethnicity │ Caucasian │ Caucasian │ African American │ Caucasian │ │ Age │ 13-10 │ 13-8 │ 13-5 │ 13-7 │ │ Grade │ 8 │ 8 │ 8 │ 8 │ │ Classification │ LD │ LD │ LD │ LD, ADHD │ │ SES (a) │ Medium │ Low │ High │ Medium │ │ Years in special ed. │ 4 │ 4 │ <1 │ <1 │ │ Learning support classroom placement │ Math, reading, English │ Math, English │ Math, study skills │ Math, reading, English, study skills │ │ % in general ed. class │ 62.5 │ 75 │ 85 │ 62.5 │ │ IQ(b) │ │ │ │ │ │ Full Scale │ 95 │ 103 │ 101 │ 89 │ │ Verbal │ 101 │ 107 │ 98 │ 93 │ │ Performance │ 90 │ 98 │ 106 │ 86 │ │ Achievement (c) │ │ │ │ │ │ Math composite │ 87 │ 83 │ 79 │ 76 │ │ Math reasoning │ 93 │ 93 │ 80 │ 84 │ │ Numerical operations │ 84 │ 77 │ 84 │ 76 │ │ Composite reading │ 81 │ 97 │ 85 │ 91 │ │ Composite writing │ 87 │ 94 │ 92 │ 78 │ │ Protest │ │ │ │ │ │ One-step word probs. 
│ Pretest: One-step word probs. │ 50% │ 33% │ 33% │ 16% │

Note. LD = learning disability; ADHD = attention-deficit/hyperactivity disorder. (a) Based on parents' profession. (b) Wechsler Intelligence Scale for Children-III (Wechsler, 1991). (c) Standard scores for subtests of the Wechsler Individual Achievement Test (Wechsler, 1992).

Dependent measures

Word-problem tests. Each of the series of tests constructed for the study consisted of six one-step multiplication and division word problems involving two different problem types (i.e., vary and multiplicative comparison) based on Marshall's (1995) and Van de Walle's (1998) word-problem classification system. However, we did not include multiplicative comparison problems in which the comparison amount (based on one set's being a particular multiple or part of the other set) is unknown, because these types of problems rarely occur in textbooks. Vary and multiplicative comparison problems in this investigation were selected from the SRA Spectrum Math (Richard, 1997) textbook used in the classroom (see Table 2 for examples of each problem type). After consultations with the classroom teacher, we modified word problems from the text to include names of participants and their peers in the classroom and familiar contexts to make them interesting to students. The order of problem types within each test was determined randomly.

In addition, to assess response generalization of the strategy, we developed a separate test that consisted of 12 one-step and multistep multiplication and division word problems. To assess near transfer, we included three each of one-step vary and multiplicative comparison word problems that were similar in structure to those used in the study but differed in context and the position of the unknown (see Table 2). To assess far transfer, we included six multistep problems (see Table 3) of a type in which students did not receive instruction. The order of one-step and multistep problems within the test was determined randomly.

For all tests, two problems to a page were typed on 8 1/2-inch by 11-inch unlined paper, and each problem included a workspace and a line for writing the entire answer. Tests were scored by counting the number of problems answered correctly. For each step of a problem, 1 point was assigned for a correct answer, whereas an incorrect answer was awarded a score of 0. As such, the total possible points for one-step problems ranged from 0 to 1, whereas scores for multistep problems ranged from 0 to 3, depending on the number of steps.

TABLE 2. Examples of Multiplication and Division One-Step Word Problem Types

Vary (instructional and testing problems):
• Size of groups unknown: In Mrs. Jones's class, there are 9 computers for 27 students to share. How many students will share each computer?
• Whole unknown: Nicole earned $24 for each day that she worked at the music store. She worked for 9 days. How much money did she earn?

Vary (generalization problems):
• Size of groups unknown: Sara and Mrs. Jones worked a total of 64 hours last week at the mall. They each worked the same amount of hours. How many hours did each work?
• Whole unknown: Nicole went to CVS to shop for hairspray. At the store, the bottles were lined up in 8 rows. Each row contained 11 bottles of hairspray. How many bottles of hair spray does Sara have to choose from?
• Number of groups unknown: Tony packs CDs of violin music at a store on Saturdays to earn extra money. If each box holds 9 CDs, how many boxes will he need to pack 45 CDs?

Multiplicative comparison (instructional and testing problems):
• Referent unknown (compared is part of referent): Frankie and Tony went fishing. Tony caught 20 fish. He caught 1/4 as many fish as Frankie. How many fish did Frankie catch?
• Compared unknown (compared is part of referent): Percy and Tony took a test in math class. Percy correctly answered 2/3 as many problems as Tony. Tony correctly answered 15 problems. How many problems did Percy correctly complete?
• Compared unknown (compared is multiple of referent): Sara has 20 coins in her coin collection. Tony has 5 times as many coins as Sara. How many coins does Tony have?

Multiplicative comparison (generalization problems):
• Referent unknown (compared is part of referent): At the Backstreet Boys' concert, there were 11 boys. There were 1/3 as many boys as girls. How many girls were at the concert?
• Tony bought 6 pairs of baggy pants. Nicole bought 2/3 as many pairs of pants as Tony. How many pairs of baggy pants did Nicole buy?
• Referent unknown (referent is multiple of compared): Tiffany sold 5 beaded bracelets at the flea market on Tuesday. On Saturday, she sold 6 times as many bracelets. How many bracelets did Tiffany sell on Saturday?

Strategy questionnaire. Students were administered a strategy questionnaire to complete at the end of the investigation. The questionnaire contained both Likert-type and open-ended questions that provided information on each student's perception of the strategy's effectiveness and his or her satisfaction with it. Ratings ranged from a high of 5 to a low of 1 with respect to the usefulness of the strategy and its specific components (e.g., using diagrams, mapping information onto diagrams). Additionally, students were asked whether they would continue to use the strategy and recommend it to other students. The two open-ended questions required students to report what they liked the most and least about solving word problems.

The classroom teacher also completed a questionnaire that contained both Likert-type and open-ended questions. Her questionnaire was designed to assess her overall level of satisfaction with the schema-based instruction in terms of its effectiveness for her students, ease of use, efficiency, flexibility, and generalizability. Additionally, the teacher was asked whether she would continue to use the strategy and recommend it to other teachers. The two open-ended questions asked the teacher to list aspects of the strategy that were most beneficial and note any changes she believed would enhance the strategy's effectiveness.

Strategy usage. All completed test worksheets were examined to determine the extent to which students effectively used the schema strategy. We determined whether students used the strategy to (a) identify the problem type (problem schemata) by drawing a picture and (b) develop a plan (action schemata) by setting up the mathematics sentence(s) for one-step and multistep problems prior to solving them (strategy knowledge).

Intervention materials

Materials included scripted lessons for teaching word problems, strategy diagram sheets, and numerous practice problems designed for this phase of the study. In addition, story situations that did not involve any unknown information were developed for use in teaching students to discern the two different problem types (vary and multiplicative comparison).
Worksheets with story situations included problem schemata diagrams (e.g., Marshall, 1995; Marshall, Barthuli, Brewer, & Rose, 1989). Additional materials included note sheets with key features of the two problem types.

TABLE 3. Examples of Multistep Word Problem Types

Vary/vary:
• Tony feeds his snake 3 mice 5 times a day. How many mice does he feed his snake in 3 days?
• Larry works a 3-hour shift 4 days a week. How many hours does Larry work in 2 weeks?
• Tony practices the song he is going to play in the concert 4 times in a row twice a day. How many times does Tony play his song in 5 days?

Vary/MC:
• The electric company charges $20 for every kilowatt hour of power used. For the month of August, Mrs. Jones was billed for 260 kilowatt hours of electricity. During the same month, her natural gas bill was 1/4 of her electric bill. What were her electric and gas bills for August?

Change/vary:
• Chris had 10 cakes at the bake sale for the Salisbury football team. Each cake has 5 slices. Mr. Cassidy bought 2 whole cakes. How many slices of cake does Chris have left?

Change/vary/MC:
• At Salisbury Middle School's chorus concert, Sam brought 5 tins of cupcakes to sell. There were 6 cupcakes in each tin. Sue bought 1/3 of Sam's cupcakes. How many cupcakes were left?

Note. MC = multiplicative comparison.

Teacher training

Before the study began, the first and second authors met with the teacher to discuss the training procedures for experimental conditions. The teacher was provided with instructional materials (i.e., scripts, worksheets, and tests) and participated in a 1-hr training session. In this session, the teacher was informed that during baseline, generalization, and maintenance conditions, she could read the problems to students or praise them for their efforts, but that she could not assist the students in any other way during tests. Because the teacher was experienced and knew how to implement explicit instructional strategies, intervention training consisted of going over the key elements of the teaching scripts. The teacher was provided with the opportunity to read the scripts and clarify questions prior to implementing the first intervention lesson. In addition, treatment integrity was measured using a checklist (see the Treatment Integrity section).

Experimental design

A multiple-probe-across-participants design was used to evaluate the effects of the schema strategy on the mathematical word-problem-solving performance of four middle school students who were low-performing in math. A functional relationship between the intervention and word-problem solving was demonstrated because each student's performance remained stable or displayed a contratherapeutic trend during the baseline condition and increased only after the intervention was applied. Given that the fourth student was identified late in the school year for special education services, the baseline data for this student are limited. The experimental phases included baseline, instruction, response generalization, and maintenance. During baseline, each test assessed word-problem-solving performance on both problem types. In addition, generalization to novel problems was assessed once during baseline. Next, participants were introduced to the strategy one at a time, beginning with training in problem schemata.
Once mastery (100% correct for two sessions) in identifying and representing problem schemata for both problem types was achieved, the schema-based strategy was introduced to teach word-problem solving. Again, a criterion of 100% correct for two sessions in solving each of the vary and multiplicative comparison problem types was required. Following instruction in each problem type, word-problem-solving performance was assessed on both problem types. The intervention with the second student was then implemented. The same sequence continued with the third and fourth students. Each student also completed a generalization probe after completion of the intervention. The design ended with a maintenance condition for all students.

Baseline testing. Baseline testing was conducted using the word-problem test for each participant. A different version of the test was administered for each session. Participants were given as much time as needed and were instructed to do their best, show all their work in the space provided, and write the complete answer on the line at the end of the problem. None of the participants required more than 20 minutes to complete each test. Participants were encouraged to call on the teacher if they had difficulty reading a word; on no occasion did any of the students require assistance with reading. During baseline testing, the special education classroom teacher provided praise only for completing the tests.

All instructional procedures were implemented using scripted lessons. Each instructional session lasted 35 to 40 min. In this study, schema-training procedures were criterion based and required students to obtain 100% correct on two sessions prior to progressing to the next problem type. Instructional components included explicit strategy modeling, interactive discussion, guided practice, monitoring and corrective feedback, and independent practice (Rosenshine, 1986; Rosenshine & Stevens, 1984). Schema-based strategy instruction to solve vary and multiplicative comparison problems was presented in two phases (problem schemata identification and problem solution). The easier problem type, vary, was introduced first, followed by multiplicative comparison. The latter problem type is deemed to be more difficult because it can involve both a prealgebra relation and an arithmetic relation (Marshall et al., 1989). On average, problem schemata identification and problem solution training lasted 6 and 12 sessions, respectively.

Problem schemata identification condition. Instruction began with the problem schemata identification training, a prerequisite to understanding and organizing information for later problem solution. In this phase, students were provided with worksheets that included story situations only, along with problem schemata diagrams (see Figure 1); the diagrams were used for instruction and student work. The teacher demonstrated the problem schemata analysis using several examples. Examples of story situations for the two problem types were presented to help students recognize and understand the key features and relations of the problem schemata (Marshall, 1995). For example, the problem analysis for the vary story situation "A car travels 25 miles on a gallon of gas; it can travel 75 miles on 3 gallons of gas" had students identify several key features.
They included (a) a constant per unit (e.g., 1 gallon of gas) or unit ratio value that was explicitly stated or implied by the story wording; (b) four quantities (i.e., 25, 1, 75, and 3), two of which were subject-units and two of which were object-units; (c) the association (e.g., goes) that paired each subject-object (gallon of gas and miles) unit; and (d) an if-then relationship ("If a car on 1 gallon of gas goes 25 miles, then a car on 3 gallons of gas goes 75 miles"). Constraints of the vary relation required that each of the subject-units' and object-units' identities be expressed using the same measures (gallon of gas and miles) and that the associations (goes) between the if-then statements be identical.

In the multiplicative comparison story situation ("Linda answered 5 problems correctly, and Cindy correctly answered 15 problems. Linda correctly answered 1/3 as many problems as Cindy"), instruction focused on the presence of compared (problems correctly answered by Linda) and referent (problems correctly answered by Cindy) sets and their relative sizes (5 and 15). In addition, instruction emphasized the part-whole relationship. That is, the comparison or relational statement (i.e., Linda correctly answered 1/3 as many problems as Cindy) described one set as the multiple or part (1/3) of the other set. Identifying the comparison statement also helped the student readily recognize the compared and referent sets.

In general, the problem schemata instruction employed teacher-led demonstration and modeling, along with frequent student exchanges, to identify critical elements of problem schemata and map them onto the relevant schemata diagrams. Before mapping the information, the student was taught to underline sentences in story situations that indicated the unit set and relational statement in vary and multiplicative comparison story situations, respectively. The underlining served as a memory aid to help students identify and retrieve the essential elements in the problem. At the end of the training session, students independently completed a worksheet containing six story situations by reading the story situation and mapping the information onto the diagrams. Initially, worksheets included story situations of a single problem type (vary or multiplicative comparison). When students learned to correctly identify and map the two problem types, worksheets included story situations of both types. Problem schemata identification instruction continued until the student was able to distinguish between the two different problem schemata.

Problem solution condition. This phase began with a review of each problem schema, but in the context of word problems rather than story situations. Teacher-led demonstrations and a facilitative questioning procedure allowed students to identify and map critical elements of the specific problem onto the schemata diagrams. Additionally, the strategy mapping instruction required flagging the missing element in the problem with a question mark. During this phase, any misconceptions about problem schema constraints were consistently addressed with explicit feedback, and additional modeling and instruction were provided when needed. Instruction then proceeded to representing the given information in the diagram as a mathematical sentence prior to solving it.
For example, using the completed vary schema diagram for the first problem in Table 2, the student sets up the math sentence as follows:

9 computers / 1 computer = 27 students / ? students

Next, the student was taught to use the equivalent fraction rule (i.e., multiply or divide the top and bottom numbers by the same nonzero number to get an equivalent fraction) to solve the problem. In some instances, instruction had to be broken down into more steps to apply the equivalent fraction rule. For example, following scaffolded instruction using a simple problem (6 x ? = 12), students were questioned as follows: 9 x ? = 27; therefore, 1 x 3 = ? Finally, instruction required the students to reason whether the answer made sense and to check their answers using cross multiplication (these computations are restated compactly at the end of this section).

In contrast, instruction for the multiplicative comparison problem focused on first identifying whether the unknown represented the compared or referent set. Students were taught to examine the relational statement in the problem to identify whether the compared or referent set was a multiple or part of the other set, and then multiply or divide accordingly. In this investigation, to solve for the unknown in the first multiplicative comparison problem presented in Table 2, we had students use the prealgebra relation. That is, they set up the math sentence as follows based on the information given in the problem: 1/4 x ? = 20. Finally, they solved for ? (i.e., ? = 20 x 4) to complete the problem. As with vary problems, the student checked the reasonableness of the answer.

To assist students in remembering the key features of each problem type, a note sheet with the essential elements by problem type was provided. The note sheet was used as a scaffold while the students completed problems during practice trials, until students could independently verbalize them. At the end of each session, students completed a worksheet containing word problems. Initially, students worked on only one type of problem; later, when they had completed instruction in the use of the strategy steps for both problem types, worksheets with word problems that included both problem types were presented for independent practice. Upon completion, the worksheet was checked and appropriate feedback provided. It must be noted that the diagrams were eventually faded, and the final independent review of both problem types required students to complete the word problems without the aid of diagrams. Upon completion of instruction in and mastery of each problem type, the student completed six-item word-problem tests similar to those used in baseline, using the same procedures.

FIGURE 1. Problem schemata diagrams for the two story situations (diagrams not reproduced here). Vary: A car travels 25 miles on a gallon of gas. It can travel 75 miles on 3 gallons of gas. Multiplicative comparison: Linda answered 5 problems correctly, and Cindy correctly answered 15 problems. Linda correctly answered 1/3 as many problems as Cindy. Note. From Schemas in Problem Solving (p. 135) by S. P. Marshall, 1995, New York: Cambridge University Press. Copyright 1995 by Cambridge University Press. Adapted with the permission of Cambridge University Press.

Generalization and maintenance. Students completed a generalization test of novel word problems before and after the intervention. To assess maintenance of the strategy effects, all students were administered tests at different points in time (e.g., at the end of Weeks 4, 8, 9, and 10 following instruction for Sara).
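As promised above, the two computations from the problem solution condition can be restated compactly. The restatement below uses only the numbers already given in the text for the first vary problem and the first multiplicative comparison problem in Table 2; it is a summary for the reader, not part of the instructional script used in the study.

\[
\frac{9\ \text{computers}}{1\ \text{computer}} = \frac{27\ \text{students}}{?\ \text{students}}, \qquad 9 \times 3 = 27,\ \text{so } ? = 1 \times 3 = 3\ \text{students per computer.}
\]
\[
\text{Check by cross multiplication: } 9 \times ? = 27 \times 1 \;\Rightarrow\; ? = 3.
\]
\[
\tfrac{1}{4} \times ? = 20 \;\Rightarrow\; ? = 20 \times 4 = 80\ \text{fish.}
\]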
Procedures for administering the generalization and maintenance tests were identical to those in the baseline and postinstructional test conditions.

Observation system and interobserver agreement

Word-problem tests. The classroom teacher and the second author conducted interobserver agreement checks on the word-problem tests. The classroom teacher rated each test using answer keys, while the second author independently scored all tests. Agreement was defined as both raters' recording that the same problem was answered correctly or incorrectly. Interscorer agreement was computed by dividing the number of agreements by the number of agreements plus disagreements and then multiplying by 100%. Interscorer agreement was 100% across students for all experimental phases.

Treatment integrity. During the instructional sessions in which each of the strategy steps (problem schemata and problem solution) was taught to a participant, the second author and a graduate student in school psychology independently collected treatment integrity data for approximately 30% of the lessons. The observers were given identical observer checklists on which were listed 10 critical parts of the lesson (e.g., providing clear instructions, having students read word problems aloud). The observers independently completed checklists by marking the parts of the lesson implemented. The mean interobserver agreement across the lessons observed was 100%.

Figure 2 presents the number of word problems correct during the baseline, intervention, postintervention, and maintenance conditions. In general, results indicate improved word-problem-solving performance for all four participants following schema instruction on one-step multiplication and division word problems. The participants also maintained their word-problem-solving performance following termination of the intervention. Figure 3 presents the percentage correct of word problems during the pretreatment and posttreatment generalization conditions. Again, high levels of performance (100%) on generalization word problems after instruction were evident for all four participants. The following sections describe the results of word-problem-solving performance for the four participants.

Baseline performance

During baseline, the mean percentage of correct word problems across participants was 41%. Overall, the average performance on the word-problem-solving tests for Sara, Tony, Percy, and Andy was 2.7, 3.0, 2.2, and 1.8 problems correct (out of 6), respectively. Sara's performance during baseline was relatively stable. Although her performance improved slightly from Test 1 (33% correct) to Test 2 (50% correct) and remained the same on Test 3, the low scores (mean correct = 44%) indicated a need for intervention. In contrast, Tony's performance was extremely variable, and a decreasing trend was evident during baseline. Given Tony's inconsistent and low performance (mean correct = 50%), continued difficulty in mathematics classes as reported by his teacher, and the need to prepare for school-wide standardized testing, a decision was made to begin the intervention. Similar to Sara's, Percy's performance was stable and low (mean correct = 37%) during baseline. His scores increased from Test 1 to Test 2 and then plateaued, plausibly due to practice effects. Baseline scores for Andy were relatively stable and low (mean correct = 29%). His highest and lowest scores during baseline were 50% and 0%, respectively.
Because his teacher reported that Andy's overall performance in mathematics was well below grade level, it was deemed important to begin instruction on multiplication and division word problems.

Problem schemata performance. Instruction in identifying and describing the features of problem schemata resulted in Sara's improving her independent performance on vary story situations from 67% to 100%. Her performance on multiplicative comparison story situations was 100%. Sara readily acquired the problem schemata for vary and multiplicative comparison story situations in a total of six sessions. Problem schemata intervention for Tony began when Sara began to show an increasing trend during the problem solution intervention phase. Tony's independent performance on each problem type was 100%, and he acquired the problem schemata for both problem types in seven sessions. The remaining two participants, Percy and Andy, also learned to correctly identify and describe the features of the two problem schemata with 100% accuracy. This phase lasted five sessions each for Percy and Andy.

Problem solution performance. Following schema strategy instruction to solve each problem type in isolation, the students completed word-problem tests that included both problem types. Overall, it took Sara, Tony, Percy, and Andy 13, 13, 11, and 11 sessions, respectively, to acquire word-problem-solving skills.

Level 1 of problem solution (vary). During the vary intervention phase, Sara scored 100% on all independent work following teacher-led instruction. After instruction in vary problems, Sara's average performance on tests that assessed both problem types was 58%. A decreasing trend in the data was evident, and an examination of each test indicated that Sara correctly completed all vary problems and that the errors involved multiplicative comparison problem types only. Tony also completed vary problems on independent worksheets with 100% accuracy. His performance on tests following instruction in vary problems was 58%. Again, all of the errors involved multiplicative comparison problems. Percy correctly completed 100% of the vary problems on independent worksheets following teacher-led instruction. Although he scored an average of 92% correct on word-problem-solving tests containing both problem types, his performance on vary problems was 100% correct. The one problem that Percy completed incorrectly was a multiplicative comparison problem. While Andy's mean performance on tests following instruction in vary problems was 75% correct, he scored 100% on all vary problems. Overall, instruction in solving vary problems was not sufficient for solving multiplicative comparison word problems for Sara and Tony, whereas the other two participants were able to generalize the use of schema diagrams and word-problem-solving skill to solve the untaught problem type.

Level 2 of problem solution (multiplicative comparison). In general, all participants demonstrated the ability to independently discriminate between vary and multiplicative comparison problem types and use the correct schema diagram with 100% accuracy on independent worksheets and tests following teacher-led instruction.

Maintenance tests given 4 and 8 weeks after the intervention indicated that Sara maintained her high level of performance (100%). A 9-week follow-up check showed that although Sara's performance dropped slightly (83%), it was much higher than her baseline performance (mean correct = 44%).
A 10-week follow-up check indicated an increase in performance to 100%. Tony completed maintenance tests at 1, 2, 3 1/2, 5, and 5 1/2 weeks following the intervention. Similar to Sara's, his performance dropped to 83% on the third probe. However, Tony scored 100% on follow-up checks administered 5 and 5 1/2 weeks later, indicating maintenance of word-problem-solving skill. On 1- and 2 1/2-week follow-up checks, Percy scored 100%, demonstrating maintenance of the learned information. Maintenance data were not available for Andy due to the ending of the school year.

FIGURE 2. The number of word problems correct during baseline, intervention, and maintenance conditions for the four participants. Note. T1 = baseline tests; PS = problem schemata training; T2 = tests following instruction on vary problems; T3 = tests following instruction on multiplicative comparison problems; I1 = instruction on vary problems; I2 = instruction on multiplicative comparison problems.

Pretreatment generalization scores across participants were low (mean correct = 37%). The mean scores for Sara, Tony, Percy, and Andy were 44%, 39%, 44%, and 28%, respectively. Following the intervention, all participants scored 100%, indicating that they were able to generalize the strategy to solve novel word problems. An examination of one-step vary problems on the generalization test indicated that, with the exception of Tony, the participants' pretreatment performance was high: Sara, Percy, and Andy scored 67%, 100%, and 100%, respectively. Posttreatment performance for each of the four participants was 100%. Tony's performance on vary problems increased from 33% during pretreatment to 100% during posttreatment, indicating generalization of the word-problem-solving skill.

FIGURE 3. The percentage correct of word problems during the pretreatment and posttreatment generalization conditions by the four participants.

Although pretreatment generalization scores on both multiplicative comparison and multistep problems were low (less than 40%) for all four participants, their performance on these problem types substantially improved (100%) during posttreatment. It must be noted that multistep problems were not directly taught in the study, yet students were able to complete them with 100% accuracy after the intervention, indicating that the strategy usage generalized not only to novel one-step problems but also to multistep problems.

Strategy Use

Table 4 presents the percentage of time students displayed overt use of the strategy steps (i.e., drawing diagrams and writing the number sentence) when completing the tests during each phase of the study. Strategy steps that entailed writing the operation and doing the computation were not examined, as students had to do this to complete each problem. Visual inspection of the data in Table 4 reveals that, other than Sara, none of the participants drew diagrams to represent the information in the word problems during baseline. However, the percentage of time that Sara drew diagrams during baseline was low (20%). In contrast, all participants wrote the number sentence during baseline. The mean percentage for writing the number sentence for Sara, Tony, Percy, and Andy was 29, 100, 58, and 33, respectively. On tests following instruction on vary and multiplicative comparison word problems, all students consistently increased their use of diagrams. Both Sara and Tony continued to draw diagrams during the maintenance phase (100%),
whereas Percy's use of diagrams decreased from 100% following the intervention to 75% during maintenance (but was still higher than during baseline [58%]). Maintenance data were not available for Andy. Pretreatment generalization data indicate that, with the exception of Percy, the participants did not use diagrams prior to instruction. After the intervention, Sara used diagrams for a majority of the problems (83%). The other participants correctly represented word problems by drawing diagrams 100% of the time on the posttreatment generalization test.

In general, students used diagrams more when solving vary than when solving multiplicative comparison problems. When students attempted to draw and map diagrams for multiplicative comparison problems following instruction on vary problems, it seemed that some did not generalize the use of diagramming learned in solving vary problems. Although Sara and Tony both developed diagrams for the untaught problems, their representations were not consistently correct. For example, they attempted to use the vary diagram to represent the multiplicative comparison problem, which seemed to interfere with correctly solving the problem. Once students were instructed on multiplicative comparison problems, the frequency of accurately drawing and mapping diagrams increased.

When students' worksheets were examined for the strategy step of writing the number sentence, it appeared that they were more likely to write the number sentence than to draw diagrams during baseline, a pattern that was further maintained during and following instruction. For Tony, the mean percentage of writing the number sentence was 100% for all phases (baseline, instruction, maintenance, and generalization) of the study. However, about half of his number sentences written during baseline (52%) and following instruction on vary problems (50%) were incorrect. Sara, Percy, and Andy consistently showed an increase in writing the number sentence from baseline to postinstruction on vary and multiplicative comparison problems. In addition, Sara and Percy maintained (100%) the strategy usage during the maintenance phase. (Maintenance data were unavailable for Andy.) Sara, Percy, and Andy also demonstrated increases of 100%, 100%, and 41%, respectively, in writing the number sentence from pretreatment to posttreatment during generalization.

Strategy Questionnaire Interviews

Results of the strategy questionnaire indicated that all students found the strategy in general, and drawing and mapping information onto diagrams in particular, to be most helpful in understanding and solving the word problems (M = 5.0). The overall mean ratings for strategy satisfaction (i.e., continue to use the strategy and recommend the strategy) were 5, 4.5, 4.2, and 5 for Sara, Tony, Percy, and Andy, respectively. Student comments about the strategy indicated that they liked solving the word problems. Their answers varied from "It made it easier for me to solve problems" to "It helps me in everyday life" and "Helped me learn something I could never learn before." Student responses about what they least liked included "Nothing," "Too many word problems drive me crazy," "It can get a bit confusing," and "Doing the math."

The teacher ratings for strategy effectiveness, efficiency, ease of use, flexibility, application, and generalizability were 5, 4, 5, 5, 5, and 5, respectively. Regarding efficiency, which received a score of 4, the teacher commented that "it was worth the investment."
The teacher responded that the strategy was helpful because it was visual and because explicit application of key components of word problems allowed for student self-instruction. She noted that the "systematic practice led to strong independence over time" for her students. When asked to recommend ways to facilitate word-problem solving for students with disabilities included in general education classrooms, she noted that the operational procedures for the vary problem solution should be based on students' proficiency level in mathematics. For example, some students were able to readily follow the algebraic process, which was used to check the final answer derived using the equivalent fraction rule, whereas for others (e.g., Sara and Tony) it was more difficult. These students needed extensive practice to be able to use the algebraic process, but once they learned it, they used it more frequently to solve the problem.

TABLE 4. Percentage of Time Students Displayed Overt Use of Strategy Steps

│ Condition │ Baseline │ Level 1 (a) │ Level 2 (b) │ Maintenance │ Generalization: Pretreatment │ Generalization: Posttreatment │
│ Sara: Draw diagrams │ 20 │ 92 │ 100 │ 100 │ 0 │ 83 │
│ Sara: Write number sentence │ 22 │ 58 │ 100 │ 96 │ 0 │ 100 │
│ Tony: Draw diagrams │ 0 │ 42 │ 100 │ 100 │ 0 │ 100 │
│ Tony: Write number sentence │ 100 │ 100 │ 100 │ 100 │ 100 │ 100 │
│ Percy: Draw diagrams │ 0 │ 100 │ 100 │ 75 │ 33 │ 100 │
│ Percy: Write number sentence │ 58 │ 100 │ 100 │ 100 │ 0 │ 100 │
│ Andy: Draw diagrams │ 0 │ 50 │ 100 │ na │ 0 │ 100 │
│ Andy: Write number sentence │ 33 │ 83 │ 100 │ na │ 17 │ 58 │

Note. Percentages were computed by dividing the number of times the strategy step was written by the total number of possible times; na = not available. (a) Tests completed following instruction in using the strategy with vary word problems. (b) Tests completed following instruction in using the strategy with multiplicative comparison word problems.

Instructors' notes and observations

All students stated that they enjoyed participating in the study and believed that learning the strategy was helpful. Sara commented that she would be able to transfer the learned information to real-world activities. Both Sara and Tony demonstrated generalization of the learned skill to complete unfamiliar word problems on a school-wide standardized state test administered during the study. The teacher reported that Tony repeatedly classified word problems as either vary or multiplicative comparison on several occasions during the test by stating, "Hey, this is a vary problem, I can do this." During the study, all four participants were highly cooperative and engaged in appropriate behavior. For example, Andy, a student diagnosed with attention-deficit/hyperactivity disorder, was physically active during instruction, and it was difficult for him to remain on task, but he managed to complete all work with minimal prompting. Students in the study were functioning at different levels in terms of computational fluency, and the varied prompts helped them to successfully solve word problems. For example, at the onset of the study, Tony used a multiplication chart as a scaffold to assist in computing multiplication problems. However, by the end of the study, he seemed to gain confidence and used the chart less and less. In general, students expressed surprise at their ability to accurately complete the word problems before the end of the study.
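The percentage figures reported above, both the strategy-use values in Table 4 and the interscorer agreement checks, come from the same count-over-total computation already described in the text. The short Python sketch below is only an illustration of that arithmetic, not a reproduction of the study's scoring materials; the function name and the example counts are assumptions, chosen so the outputs match two of the reported values (100% agreement and Sara's 20% baseline rate of drawing diagrams).

def percentage(count: int, total: int) -> float:
    """Express a count as a percentage of the total (count / total x 100)."""
    return 100.0 * count / total

# Interscorer agreement = agreements / (agreements + disagreements) x 100.
# The counts are hypothetical; the study reports only the resulting 100%.
agreements, disagreements = 24, 0
print(percentage(agreements, agreements + disagreements))  # 100.0

# Strategy use (Table 4) = times a step was shown / total possible times x 100.
# Hypothetical counts that would yield Sara's 20% baseline figure for diagrams.
diagrams_drawn, possible_opportunities = 3, 15
print(percentage(diagrams_drawn, possible_opportunities))  # 20.0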
Although results of this exploratory study are encouraging, the nature of the single-subject design used in this investigation indicates that caution is called for in interpreting the findings. Results of the study seem to indicate that middle school students with learning disabilities who are low-performing in mathematics can be taught to effectively apply schema-based instruction to correctly solve multiplication and division word problems. In this study, the four participants' performance substantially improved after they received instruction. Replication of the effects of the schema-based instruction occurred across participants, extending the findings of previous research on the effectiveness of schema-based instruction in teaching mathematical word-problem solving (Jitendra et al., 1998; Jitendra & Hoff, 1996; Jitendra et al., 1999; Marshall, 1995).

It should be noted that each of the participants experienced success by acquiring the word-problem-solving skill in a reasonable amount of time (an average of 12 sessions). They were able to discuss key features of each problem type, verbally explain what the word problem was asking, and draw a diagram of the relationship present in the problem. This result suggests that practice with the schema strategy helped students to develop a conceptual understanding of the core concepts, which is considered important to problem solving (Baroody & Hume, 1991; Cawley & Parmar, 1992; Woodward & Montague, 2000). In addition, the positive benefits may be attributed to the personalized contexts during acquisition learning. It may be the case that, as in the study by Davis-Dorsey, Ross, and Morrison (1991), personalization made the problems more motivating, made it easier to construct a meaningful conceptual representation to connect the problem information and solution strategies, and made successful encoding and retrieval more likely.

The schema-based instruction was also associated with maintenance of the high level of postinstructional performance during follow-up probes several weeks after the intervention was terminated. This result supports and extends the findings described by Jitendra and Hoff (1996) and Jitendra et al. (1999). It is encouraging to note that for one of the students (Sara), the effects of the schema strategy were maintained for 10 weeks, longer than has been reported in the literature. This is an interesting finding given that students with learning disabilities often experience difficulty with long-term retention of skills. Also encouraging is that generalization to novel (one-step vary and multiplicative comparison) and untrained word problems (i.e., multistep) occurred for all students following instruction in solving one-step multiplication and division problems. This is an exciting finding given the severity of the students' learning difficulties.

Furthermore, because of the unique design of this study, whereby students were taught to apply the strategy to one problem type at a time, it was possible to determine whether learning to use the strategy on one problem type (vary) generalized to the other problem type (multiplicative comparison). For two students (Percy and Andy), a generalized effect was seen on multiplicative comparison problems after learning to apply the strategy first to vary word problems. Similar results were found in the Jitendra et al. (1999) study for addition and subtraction problems and in Hutchinson's (1993) study of algebra word problems completed by students with learning disabilities.
One plausible explanation for the generalized effects on performance in solving untrained word problems (e.g., multistep problems) not targeted for instruction in this investigation is that schema-based instruction, with its emphasis on conceptual understanding, allowed students to successfully encode and apply the learned schemata to represent and solve multistep problems; this is consistent with Butterfield and Nelson's (1989) cognitive theory of elements and mechanisms of transfer.

The participants seemed to be more enthusiastic about solving word problems during and after the implementation of the instruction than during baseline. The students' and special education teacher's positive feelings toward the strategy and teaching procedures seemed to contribute to the students' improved performance and task behavior, as in several previous investigations (e.g., Case, Harris, & Graham, 1992; Jitendra et al., 1999). The teacher indicated that participating students were enthusiastic about the strategy and spontaneously applied it when completing word problems on the standardized state test. As noted by Wood, Frank, and Wacker (1998), "Student preference is an important factor, because students are not as likely to exhibit effort over time with strategies that they do not like or do not feel are helpful" (p. 336). One participant in the present study (Sara) commented that the strategy was easy to use and applicable to everyday life. Strategies that are connected with real-world situations are important in promoting skill acquisition and generalization (Bottge, 1999). Furthermore, the participating teacher believed that the strategy was helpful as an introduction to prealgebra, which is required in most college curricula and is often a difficult area for college students with learning disabilities (Maccini & Hughes, 2000). Finally, the social validity of the schema-based instructional approach was enhanced because no external investigators were present in the classroom during this investigation.

Several limitations of this study call for caution in interpreting the findings. First, the small number of participants limits the generalizability of results to other student populations (e.g., students with behavior disorders). Second, the range of problems addressed in this study was limited to vary and multiplicative comparison problem types; future research should examine how students would do on a varied problem set. A third limitation is that instruction occurred individually, which can be time consuming and personnel intensive. Future research should examine whether the effects found in the present study generalize to small instructional groups. However, recent research findings (Jitendra et al., 1998) provide preliminary evidence regarding the strategy's applicability to larger groups of students. Fourth, given the trend toward inclusionary practices, future research should address the transferability of this strategy to general education classrooms. Finally, the use of one teacher limits generalizability. It would be worth exploring how effectively other teachers would use this intervention.

In addition, one of the major concerns in this investigation relates to the appropriateness of the sample selection. The four participants in this study had been diagnosed as having a learning disability and were receiving services in a learning support classroom for mathematics.
However, Sara's and Tony's scores in mathematical reasoning were within 1 standard deviation of the mean, thereby raising the issue as to whether they were truly mathematics disabled. Also, employing the discrepancy criterion would mean that only Percy qualified as being learning disabled in mathematics. Perhaps the team's decision to identify the four participants as needing special education services in mathematics was a function of these students' performance in relation to others' in this high-functioning school district. For example, the team must consider whether, with appropriate modifications and support in the general education classroom, the student evidences academic problems. Both Percy and Andy received mathematics instruction in the learning support classroom immediately upon being diagnosed as learning disabled. In contrast, Sara and Tony were not receiving special education services in mathematics when they were first diagnosed as learning disabled. However, they were eventually placed in the learning support classroom for mathematics instruction because they were not able to keep up with their peers in the general education classroom. In sum, the variation in criteria used by schools and researchers presents problems in terms of accurately identifying the sample, a common struggle that researchers encounter when conducting applied research in the classroom.

It is also the case that the single-subject design employed in this investigation does not help clarify whether the study findings are attributable to the specific schema-based nature of the instruction or to the generally carefully designed instruction and increased focus on the two problem types. Therefore, further research is needed to determine whether schema-based instruction is necessary to promote these outcomes. This would entail using a group-design study to compare and evaluate the relative efficacy and cost efficiency of the schema diagram strategy, intervention procedures that employ manipulatives, and other empirically validated strategies (e.g., cognitive-metacognitive strategy) described in the literature (Jitendra & Xin, 1997).

It must be noted that the tests and worksheets employed in this investigation were designed to match students' interests, based on a list of preferred items provided by the teacher. One suggested extension of the present research would involve employing tasks that reflect the varied situations that students typically encounter in real life. For example, asking students to calculate percentages on sale items or figure out how much tip to give a waitress based on the cost of a meal would be extremely valuable in teaching students functional mathematical skills. Therefore, using schema-based instruction to teach functional academics is an area to further explore, because even though students with learning disabilities have the ability to complete community routines (e.g., paying rent, shopping for groceries and clothing), they often struggle with parts of those routines (e.g., money usage, calculation, budgeting; Patton et al., 1997).

Implications for practice

The findings from this study have several implications for practice. First, the schema-based intervention, with its emphasis on conceptual understanding, helped students with learning disabilities not only acquire word-problem-solving skills but also maintain the taught skills.
Therefore, results of the study highlight the effectiveness of strategy instruction for addressing mathematical difficulties evidenced by students with learning disabilities (Montague, 1995, 1997b). Second, the results of this study suggest that teaching students to identify the relationships present in each word problem promotes generalization to other, untaught skills (e.g., multistep problems). Students with learning disabilities should receive instruction that teaches them to understand the key features of problems prior to solving them. Third, the effectiveness of the strategy when implemented by the classroom teacher may indicate the importance of researchers' collaborating with practitioners to adapt instruction to meet students' individual needs. Involving the classroom teacher in the implementation of this study was important because the teacher is now more likely to invest effort in continuing to use a strategy that had beneficial effects for her students.

References

Anderson, J. R. (1989). A theory of the origins of human knowledge. Artificial Intelligence, 40, 313-351.
Baroody, A. J., & Hume, J. (1991). Meaningful mathematics instruction: The case of fractions. Remedial and Special Education, 12(3), 54-68.
Bottge, B. A. (1999). Effects of contextualized math instruction on problem solving of average and below-average achieving students. The Journal of Special Education, 33, 81-92.
Briars, D. J., & Larkin, J. H. (1984). An integrated model of skill in solving elementary problems. Cognition and Instruction, 1, 245-296.
Butterfield, E. C., & Nelson, G. D. (1989). Theory and practice of teaching for transfer. Educational Technology Research and Development, 37, 5-38.
Carpenter, T. P., & Moser, J. M. (1984). The acquisition of addition and subtraction concepts in grades one through three. Journal for Research in Mathematics Education, 15(3), 179-202.
Case, L. P., Harris, K. R., & Graham, S. (1992). Improving the mathematical problem-solving skills of students with learning disabilities: Self-regulated strategy development. The Journal of Special Education, 26, 1-19.
Cawley, J. F., & Parmar, R. S. (1992). Arithmetic programming for students with disabilities: An alternative. Remedial and Special Education, 12(2), 19-35.
Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7-75). Hillsdale, NJ: Erlbaum.
Davis-Dorsey, J., Ross, S. R., & Morrison, G. R. (1991). The role of rewording and context personalization in the solving of mathematical word problems. Journal of Educational Psychology, 83(1).
DeCorte, E., Greer, B., & Verschaffel, L. (1996). Mathematics teaching and learning. In D. Berliner & R. Calfee (Eds.), Handbook of educational psychology (pp. 491-549). New York: Macmillan.
Fennema, E., Carpenter, T. P., & Peterson, P. L. (1989). Learning mathematics with understanding: Cognitively guided instruction. In J. E. Brophy (Ed.), Advances in research on teaching (pp. 195-221). Greenwich, CT: JAI.
Fraivillig, J. L., Murphy, L. A., & Fuson, K. C. (1999). Advancing children's mathematical thinking in Everyday Mathematics classrooms. Journal of Research in Mathematics, 30(2), 148-170.
Fuson, K. C., & Willis, G. B. (1989). Second graders' use of schematic drawings in solving addition and subtraction word problems. Journal of Educational Psychology, 81, 514-520.
Goldman, S. R., Hasselbring, T. S., & the Cognition and Technology Group at Vanderbilt. (1997). Achieving meaningful mathematics literacy for students with learning disabilities. Journal of Learning Disabilities, 30, 198-208.
Greer, B. (1992). Multiplication and division models of situations. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 276-295). Old Tappan, NJ: Macmillan.
Hiebert, J., & Lefevre, P. (1986). Conceptual and procedural knowledge in mathematics: An introductory analysis. In J. Hiebert (Ed.), Conceptual and procedural knowledge: The case of mathematics. Hillsdale, NJ: Erlbaum.
Hofmeister, A. M. (1993). Elitism and reform in school mathematics. Remedial and Special Education, 14(6), 8-13.
Hutchinson, N. L. (1993). Effects of cognitive strategy instruction on algebra problem solving of adolescents with learning disabilities. Learning Disability Quarterly, 16, 34-63.
Janvier, C. (1987). Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.
Jitendra, A. K., Griffin, C. C., McGoey, K., Gardill, M. C., Bhat, P., & Riley, T. (1998). Effects of mathematical word problem solving by students at risk or with mild disabilities. The Journal of Educational Research, 91, 345-355.
Jitendra, A. K., & Hoff, K. (1996). The effects of schema-based instruction on the mathematical word-problem solving performance of students with learning disabilities. Journal of Learning Disabilities, 29, 422-432.
Jitendra, A. K., Hoff, K., & Beck, M. M. (1999). Teaching middle school students with learning disabilities to solve word problems using a schema-based approach. Remedial and Special Education, 20.
Jitendra, A. K., & Xin, Y. (1997). Mathematical word problem solving instruction for students with mild disabilities and students at risk for math failure: A research synthesis. The Journal of Special Education, 30, 412-438.
Kintsch, W., & Greeno, J. G. (1985). Understanding and solving word arithmetic problems. Psychological Review, 92, 109-129.
Maccini, P., & Hughes, C. A. (2000). Effects of a problem solving strategy on the introductory algebra performance of secondary students with learning disabilities. Learning Disabilities Research & Practice, 15(1), 10-21.
Marshall, S. P. (1995). Schemas in problem solving. New York: Cambridge University Press.
Marshall, S. P., Barthuli, K. E., Brewer, M. A., & Rose, F. E. (1989). Story problem solver: A schema-based system of instruction (CRMSE Tech. Rep. No. 89-01). San Diego, CA: Center for Research in Mathematics and Science Education.
Marshall, S., Pribe, C. A., & Smith, J. D. (1987). Schema knowledge structures for representing and understanding arithmetic story problems (Tech. Rep., Contract No. N00014-85-K-0061). Arlington, VA: Office of Naval Research.
Mercer, C. D., & Miller, S. P. (1992). Teaching students with learning problems in math to acquire, understand, and apply basic math facts. Remedial and Special Education, 13(3), 19-61.
Montague, M. (1995). Cognitive instruction and mathematics: Implications for students with learning disorders. Focus on Learning Problems in Mathematics, 17(2), 39-49.
Montague, M. (1997a). Student perception, mathematical problem solving, and learning disabilities. Remedial and Special Education, 18, 46-53.
Montague, M. (1997b). Cognitive strategy instruction in mathematics for students with learning disabilities. Journal of Learning Disabilities, 30, 164-177.
National Council of Teachers of Mathematics. (1998).
Principles and standards for school mathematics, electronic version 1.0: Discussion draft. http://standards-e.nctm.org/1.0/normal/index
National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics, electronic version. http://standards.nctm.org/
National Education Goals Panel. (1997). National education goals report summary, 1997. Washington, DC: Author.
Parmar, R. S., & Cawley, J. F. (1991). Challenging the routines and passivity that characterize arithmetic instruction for children with mild handicaps. Remedial and Special Education, 12, 23-32.
Parmar, R. S., Cawley, J. F., & Frazita, R. R. (1996). Word problem solving by students with and without mild disabilities. Exceptional Children, 62, 415-429.
Parmar, R. S., Cawley, J. F., & Miller, J. H. (1994). Differences in mathematics performance between students with learning disabilities and students with mild retardation. Exceptional Children, 60.
Patton, J. R., Cronin, M. E., Bassett, D. S., & Koppel, A. E. (1997). A life skills approach to mathematics instruction: Preparing students with learning disabilities for the real-life math demands of adulthood. Journal of Learning Disabilities, 30, 178-187.
Richard, T. J. (1997). SRA spectrum math (4th ed.). Columbus, OH: McGraw-Hill.
Riley, M. S., Greeno, J. G., & Heller, J. I. (1983). Development of children's problem-solving ability in arithmetic. In H. P. Ginsburg (Ed.), The development of mathematical thinking (pp. 153-196). New York: Academic Press.
Rosenshine, B. (1986). Synthesis of research on explicit teaching. Educational Leadership, 43, 60-69.
Rosenshine, B., & Stevens, R. (1984). Classroom instruction in reading. In P. D. Pearson (Ed.), Handbook of reading research (pp. 745-798). New York: Longman.
Silver, E. A., & Marshall, S. P. (1990). Mathematical and scientific problem solving: Findings, issues, and instructional implications. In B. R. Jones & L. Idol (Eds.), Dimensions of thinking and cognitive instruction. Hillsdale, NJ: Erlbaum.
Torgesen, J. K. (1982). The learning disabled child as an inactive learner. Topics in Learning and Language Disabilities, 2, 45-52.
Van de Walle, J. A. (1998). Elementary and middle school mathematics: Teaching developmentally (3rd ed.). New York: Longman.
Wechsler, D. (1991). Wechsler intelligence scale for children (3rd ed.). San Antonio, TX: Psychological Corp.
Wechsler, D. (1992). Wechsler individual achievement test. San Antonio, TX: Psychological Corp.
Wood, D. K., Frank, A. R., & Wacker, D. P. (1998). Teaching multiplication facts to students with learning disabilities. Journal of Applied Behavior Analysis, 31, 323-338.
Woodward, J., & Montague, M. (2000, April). Meeting the challenge of mathematics reform for students with learning disabilities. Paper presented at the annual meeting of the Council for Exceptional Children, Vancouver, Canada.
Xin, Y. P., & Jitendra, A. K. (1999). The effects of instruction in solving mathematical word problems for students with learning problems: A meta-analysis. The Journal of Special Education, 32.
Zawaiza, T. B. W., & Gerber, M. M. (1993). Effects of explicit instruction on community college students with learning disabilities. Learning Disability Quarterly, 16, 64-79.

Asha Jitendra and Caroline M. DiPipi, Lehigh University; Nora Perron-Jones, Salisbury School District.
The Journal of Special Education, Vol. 36, No. 1, 2002, pp. 23-38.
ASTM E1402 - 13

Significance and Use

4.1 This guide describes the principal types of sampling designs and provides formulas for estimating population means and standard errors of the estimates. Practice E105 provides principles for designing probability sampling plans in relation to the objectives of study, costs, and practical constraints. Practice E122 aids in specifying the required sample size. Practice E141 describes conditions to ensure validity of the results of sampling. Further description of the designs and formulas in this guide, and beyond it, can be found in textbooks (1-10).

4.2 Sampling, both discrete and bulk, is a clerical and physical operation. It generally involves training enumerators and technicians to use maps, directories and stop watches so as to locate designated sampling units. Once a sampling unit is located at its address, discrete sampling and area sampling enumeration proceeds to a measurement. For bulk sampling, material is extracted into a …

4.3 A sampling plan consists of instructions telling how to list addresses and how to select the addresses to be measured or extracted. A frame is a listing of addresses, each of which is indexed by a single integer or by an n-tuple (several integers). The sampled population consists of all addresses in the frame that can actually be selected and measured. It is sometimes different from a targeted population that the user would have preferred to be covered.

4.4 A selection scheme designates which indexes constitute the sample. If certified random numbers completely control the selection scheme, the sample is called a probability sample. Certified random numbers are those generated either from a table (for example, Ref (11)) that has been tested for equal digit frequencies and for serial independence, from a computer program that was checked to have a long cycle length, or from a random physical method such as the tossing of a coin or a casino-quality spinner.

4.5 The objective of sampling is often to estimate the mean of the population for some variable of interest by the corresponding sample mean. By adopting probability sampling, selection bias can be essentially eliminated, so the primary goal of sample design in discrete sampling becomes reducing sampling variance.

1. Scope

1.1 This guide defines terms and introduces basic methods for probability sampling of discrete populations, areas, and bulk materials. It provides an overview of common probability sampling methods employed by users of ASTM standards.

1.2 Sampling may be done for the purpose of estimation, of comparison between parts of a sampled population, or for acceptance of lots. Sampling is also used for the purpose of auditing information obtained from complete enumeration of the population.

1.3 No system of units is specified in this standard.

1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use.

2. Referenced Documents (purchase separately)

The documents listed below are referenced within the subject standard but are not provided as part of the standard.
ASTM Standards
D7430 Practice for Mechanical Sampling of Coal
E105 Practice for Probability Sampling of Materials
E122 Practice for Calculating Sample Size to Estimate, With Specified Precision, the Average for a Characteristic of a Lot or Process
E141 Practice for Acceptance of Evidence Based on the Results of Probability Sampling
E456 Terminology Relating to Quality and Statistics

ICS Number Code: 01.040.19 (Testing (Vocabularies)); 19.020 (Test conditions and procedures in general)
DOI: 10.1520/E1402
ASTM International is a member of CrossRef.
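The guide itself only points to the estimation formulas. Purely as an illustration of the kind of calculation meant in sections 4.1 and 4.5 (this sketch is mine, not part of the ASTM text), the following shows the textbook estimator of a population mean from a simple random sample drawn without replacement, together with its standard error including the finite population correction.

```python
import math
import random

def srs_estimate(population, n, seed=0):
    """Estimate the population mean from a simple random sample drawn
    without replacement, with its standard error (including the finite
    population correction)."""
    rng = random.Random(seed)
    N = len(population)
    sample = rng.sample(population, n)

    mean = sum(sample) / n
    # Sample variance with the n-1 divisor.
    s2 = sum((y - mean) ** 2 for y in sample) / (n - 1)
    # Standard error of the mean under SRS without replacement.
    se = math.sqrt((1 - n / N) * s2 / n)
    return mean, se

# Example: a small artificial "lot" of 500 measurements.
population = [10 + 0.01 * i for i in range(500)]
mean, se = srs_estimate(population, n=50)
print(f"estimated mean = {mean:.3f}, standard error = {se:.3f}")
```

Stratified, cluster and bulk sampling designs use different variance formulas; the sketch covers only the simplest case.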
Focus (geometry)

In geometry, the foci (singular: focus) are a pair of special points used in describing conic sections. The four types of conic sections are the circle, the ellipse, the parabola and the hyperbola. The focus has two equivalent defining properties, and the foci always fall on the major axis of symmetry of the conic. The simpler of the two properties depends on the type of conic:
• In an ellipse, the sum of the distances from any point on the ellipse to the two foci is a constant (which is always the length of the major axis of the ellipse).
• In a circle, there is only one focus, the center of the circle, and all the points of the circle are equidistant from it. (This can be viewed as a special case of the above, with a circle being an ellipse with two foci at the same point; the sum of the distances is the diameter.)
• In a hyperbola, the difference of the distances is always constant.
• A parabola also has only one focus (although it is sometimes useful to speak of a focus at infinity); but there is a line called the directrix such that the distance from any point of the parabola to the focus is equal to the (perpendicular) distance from the point to the directrix.
The rule for the parabola can be generalized to other conics, and this is the other defining property: a conic section can be defined as the set of points such that the ratio of the distance to its focus to the distance to the corresponding directrix is a constant, called the eccentricity. Even in the case of two foci, the set so described, applied to a single focus-directrix combination, is the whole conic section. The circle has eccentricity 0, and the directrix is a line at infinity. The focus-directrix property is thus true of the circle, but it is also true of every other point on the plane.

Conics in projective geometry

It is also possible to describe all the conic sections as loci of points that are equidistant from a single focus and a single, circular directrix. For the ellipse, both the focus and the center of the directrix circle have finite coordinates and the radius of the directrix circle is greater than the distance between the center of this circle and the focus; thus, the focus is inside the directrix circle. The ellipse thus generated has its second focus at the center of the directrix circle. For the parabola, the center of the directrix moves to the point at infinity (see projective geometry). The directrix 'circle' becomes a curve with zero curvature, indistinguishable from a straight line. The two arms of the parabola become increasingly parallel as they extend, and 'at infinity' become parallel; using the principles of projective geometry, the two parallels intersect at the point at infinity and the parabola becomes a closed curve (elliptical projection). To generate a hyperbola, the radius of the directrix circle is chosen to be less than the distance between the center of this circle and the focus; thus, the focus is outside the directrix circle. The arms of the hyperbola approach asymptotic lines, and the 'right-hand' arm of one branch of a hyperbola meets the 'left-hand' arm of the other branch at the point at infinity; this is based on the principle that, in projective geometry, a single line meets itself at a point at infinity. The two branches of a hyperbola are thus the two (twisted) halves of a curve closed over the point at infinity. In projective geometry, all conics are equivalent in the sense that every theorem that can be proved for one conic section applies to all the others.
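As a small numerical aside (not part of the encyclopedia text), the constant-sum property of the ellipse stated above is easy to verify. The sketch assumes an axis-aligned ellipse x^2/a^2 + y^2/b^2 = 1 with a > b, whose foci lie at (±c, 0) with c = sqrt(a^2 - b^2).

```python
import math

def ellipse_focal_sum(a, b, t):
    """Sum of the distances from a point of the ellipse x^2/a^2 + y^2/b^2 = 1
    (parameterized by angle t) to the two foci at (+c, 0) and (-c, 0)."""
    c = math.sqrt(a * a - b * b)          # focal distance from the center
    x, y = a * math.cos(t), b * math.sin(t)
    d1 = math.hypot(x - c, y)
    d2 = math.hypot(x + c, y)
    return d1 + d2

a, b = 5.0, 3.0
for t in (0.0, 0.7, 1.3, 2.9):
    print(f"t = {t:.1f}: focal distance sum = {ellipse_focal_sum(a, b, t):.6f}")
# Every printed value equals 2*a = 10.0, the length of the major axis.
```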
Astronomical significance

In the gravitational two-body problem, the orbits of the two bodies are described by two overlapping conic sections, each with one of its foci coincident with the common center of mass.
Can You Solve This Mind-Bending Conundrum? | EE Times

I've just been fighting with a mind-bending conundrum. This is all part of my ongoing Inamorata Prognostication Engine project. I'm currently working on the code to calculate how many days it is until the next full moon, a time that -- experience has shown -- can be somewhat troublesome with regards to one's Inamorata.

The problem was that my code was producing unexpected results. I was starting to pull my hair out. By yesterday, I was contemplating posting a blog asking for your help. But I was mulling things over in the wee hours of this morning when I thought, "Hang on just a minute. Could it be that...?" It could indeed! The bottom line is that I've tracked down the root of my problem and everything is once again running smoothly in "The House of Max," but for a moment there I thought I was losing my mind. What? You think you could have sorted this out faster than I? Well, let's see, shall we? The remainder of this blog reflects my musings from yesterday, February 20, 2014, before I'd worked out what I was doing wrong...

Remember that I'm working with an Arduino, so my float variables have only around 6 to 7 decimal digits of precision. The output from my real time clock shows me that today's date (at the time of this writing) is February 20, 2014. Since today is indeed February 20, I'm happy so far. I use this date to calculate today's Julian day number and obtain a value of 2456709. I bounce over to the USNO website to confirm that they are in agreement with me and that today does indeed correspond to a Julian day number of 2456709. I'm still smiling.

Next, I go to the MoonPhases.Info website and look up the full moon dates for 2014 as illustrated below. Purely for the sake of a quick test, I use the date of the most recent full moon -- February 14, 2014 -- as my reference point (any full moon in the past 1,000 years would do, so later I intend to replace this reference with something more appropriate, like a full moon that fell on April 1, or one that occurred on the day that something interesting or relevant happened -- do you have any suggestions?). But we digress... I enter February 14, 2014 as the date of the reference full moon into my code, which calculates that this corresponds to a Julian day number of 2456703. Since this was only six days ago, and since 2456709 – 2456703 = 6, I think we can safely say that we're still on track.

But this is where my smile turns upside down into a frown, because my Arduino now tells me that -- after performing some heroic calculations -- it has determined that there are 21 days until the next full moon and that the date of that full moon will be March 13, 2014. I would say "Close, but no cigar," except that this is not even particularly close. The actual date, as shown in the image above, should be March 16, 2014, which is 24 days in our future. "Oh dear," I said to myself (or words to that effect).

In a moment I'll show you the code I'm using to calculate how many days there are to the next full moon. From there, we can calculate the actual date of the next full moon. But first, let me walk you through the reasoning process I went through to determine what algorithm to use (remember that, as for most things in life, I'm making all of this up as I go along). Purely for the sake of a starting-point example, let's assume that our universe has only been in existence for a little over 100 days.
Let's also assume that a full moon occurs every 10 days on the dot -- that is, there was a full moon on days 10, 20, 30, 40, etc. Let's pick one of these full moons as our reference full moon -- say the one that occurred on day 100. Now, let's assume that we are currently on day number 134 in our hypothetical universe. So, the number of days (the "difference" or 'd') between today and our reference full moon is 134 - 100 = 34. We know that the period ('p') of our moon's orbit is 10 days. What we want to do is to divide the number of days 'd' by the period 'p' and keep the remainder. In computer terms, when performing integer math, we have two types of divide operation available to us:

// Standard divide operation; remainder is lost
d / p = 34 / 10 = 3

// Modulo divide operation; returns the remainder
d % p = 34 % 10 = 4

In our terms, this remainder of 4 equates to 4 days. Thus, since the period of our moon's orbit is 10 days, the number of days to the next full moon is p - 4 = 6. Furthermore, since we know that today's number is 134, we therefore know that the next full moon will occur on day 134 + 6 = 140. Tra la!

Of course, our test as described above was based on integer math operations. If we had been dealing with real numbers, then d / p = 34 / 10 = 3.4. If we now throw away the integer part of this result to leave a remainder of 0.4, we have to multiply this by the period to obtain the number of days; that is, 0.4 x p = 0.4 x 10 = 4 days.
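For what it's worth, here is a minimal sketch of where the remainder approach above leads when applied to real dates. These are my own assumptions for illustration, not Max's actual Arduino code: take the difference between today's Julian day number and the Julian day number of the reference full moon, reduce it modulo the mean synodic month of about 29.530588 days (a value not stated in the excerpt), and subtract the remainder from the period.

```python
SYNODIC_MONTH = 29.530588  # mean synodic month in days (assumed value)

def days_to_next_full_moon(jd_today, jd_reference_full_moon):
    """Days from jd_today to the next full moon, using the remainder
    ("modulo") approach described above with a past reference full moon."""
    elapsed = jd_today - jd_reference_full_moon
    remainder = elapsed % SYNODIC_MONTH        # days since the last full moon
    return (SYNODIC_MONTH - remainder) % SYNODIC_MONTH

# Numbers quoted in the article: today = JD 2456709 (Feb 20, 2014),
# reference full moon = JD 2456703 (Feb 14, 2014).
days = days_to_next_full_moon(2456709, 2456703)
print(round(days, 2))   # ~23.53 days, i.e. the full moon of March 16, 2014
```

With the numbers quoted above this gives roughly 23.5 days, i.e. the March 16, 2014 full moon the article expects; whether this matches the fix Max eventually made is not stated in the excerpt. Note also that 6 to 7 significant digits of float precision leave only fractions of a day of resolution when a whole Julian day number of about 2,456,709 is stored in a single float, which is worth keeping in mind on an Arduino.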
Jean-Pierre Gabardo
Extension of Positive-Definite Distributions and Maximum Entropy
Memoirs of the American Mathematical Society, Volume 102; 1993; 94 pp. Readership: research mathematicians.

In this work, the maximum entropy method is used to solve the extension problem associated with a positive-definite function, or distribution, defined on an interval of the real line. Gabardo computes explicitly the entropy maximizers corresponding to various logarithmic integrals depending on a complex parameter and investigates the relation to the problem of uniqueness of the extension. These results are based on a generalization, in both the discrete and continuous cases, of Burg's maximum entropy theorem.

Contents:
• Facts and definitions
• The discrete case
• Positive-definite distributions on an interval \((-A,A)\)
• The non-degenerate case
• A closure problem in \(L^2_\mu (\hat {\mathbb R})\)
• Entropy maximizing measures in \(\mathscr{M}_A(Q)\)
• Uniqueness of the extension

ISBN-10: 0-8218-2551-8. List Price: US$34; Members: US$27.20. Order Code: MEMO/102
NURBS and CAD: 30 Years Together
30 Dec 2011, Dmitry Ushakov

In the outgoing year engineering celebrates a remarkable anniversary: thirty years of industrial use of Non-Uniform Rational B-Splines (NURBS) for modeling 3D curves and surfaces. In August 1981 the American aircraft concern Boeing proposed to embody NURBS in the IGES industrial standard. Although the decision was officially approved only a couple of years later, the CAD industry reacted to this proposal at once: the same year SDRC and Computervision, two leading vendors of engineering software, announced their support for NURBS. Today, thirty years later, it is practically impossible to find any CAD system that does not support NURBS. What is the reason for such a phenomenon? Why did the invention of NURBS revolutionise the industry? Below I try to give answers to these questions, and remember all the researchers who contributed to the development and establishment of NURBS.

Sculptured Surfaces

It is well known that research studies of 3D geometric modeling started as part of CAM (computer-aided manufacturing) rather than CAD (computer-aided design). The invention of the NC (numerically controlled) machine tool at the beginning of the 1950s at MIT (Massachusetts Institute of Technology, US) generated demand for digital mock-ups of component parts, which are required to develop control programs for machine tools. Various research groups studied the principles of modeling 3D objects; their main customers were the largest companies in the aerospace and car-manufacturing industries.

Fig. 1. Citroën DS

Take a look at the picture of the Citroën DS (YOM 1955-1975), which became an all-time car icon. Accurate fabrication of such complex sculptured surfaces requires an advanced mathematical apparatus, and it is certainly not a coincidence that one of the first studies in this field was conducted by a French mathematician, Paul de Casteljau, who worked for Citroën. He suggested a method for the construction of smooth curves using a set of reference points that determine their geometric properties. The results of his research were published only in 1974, but the study was completed as far back as 1959, which affords grounds to consider Paul de Casteljau to be the author of the curves and surfaces that are now known under the name of another Frenchman, Pierre Bézier. But before talking more about him, let me remind you of the problem associated with "sculptured" engineering surfaces: how is it possible to constructively (by geometric construction rather than abstract algebraic equations) define a smooth surface of a required esthetic shape?

The simplest method is to specify four points in 3D space, which form a so-called bilinear patch:

Fig. 2. Bilinear patch

A bilinear patch is a type of ruled surface that fully consists of linear segments connecting corresponding points of two curves:

Fig. 3. Ruled surface

An MIT professor, Steven Coons, generalized this method of representation to doubly curved surfaces bounded by four arbitrary curves; such surfaces are named after him (Coons patch):

Fig. 4. Coons patch

In 1967 he published a treatise, "Surfaces for Computer Aided Design of Space Form" [Coons 1967], which became widely known as the "Little Red Book". His techniques for boundary curves and blending functions formed the basis for further research in the field. Coons was the first to propose using rational polynomials to model conic sections.
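To make the constructions above concrete, here is a small sketch of the bilinearly blended Coons patch (my own illustration, not code from the article or from any of the systems mentioned): the surface adds the two ruled surfaces spanned between opposite boundary curves and subtracts the bilinear patch through the four corner points, so that all four boundaries are interpolated exactly.

```python
import math

def coons_patch(c0, c1, d0, d1):
    """Bilinearly blended Coons patch from four boundary curves.
    c0(u), c1(u): bottom and top boundaries; d0(v), d1(v): left and right.
    The curves must meet at the corners: c0(0) == d0(0), c0(1) == d1(0), etc."""
    p00, p10 = c0(0.0), c0(1.0)   # corners shared with d0(0) and d1(0)
    p01, p11 = c1(0.0), c1(1.0)   # corners shared with d0(1) and d1(1)

    def surface(u, v):
        # Ruled surface between the two u-boundaries, plus the one between
        # the two v-boundaries, minus the bilinear patch through the corners.
        return tuple(
            (1 - v) * c0(u)[i] + v * c1(u)[i]
            + (1 - u) * d0(v)[i] + u * d1(v)[i]
            - ((1 - u) * (1 - v) * p00[i] + u * (1 - v) * p10[i]
               + (1 - u) * v * p01[i] + u * v * p11[i])
            for i in range(3)
        )
    return surface

# Example: a unit patch whose bottom boundary bulges upward in z.
c0 = lambda u: (u, 0.0, math.sin(math.pi * u))   # bottom (curved)
c1 = lambda u: (u, 1.0, 0.0)                      # top (straight)
d0 = lambda v: (0.0, v, 0.0)                      # left
d1 = lambda v: (1.0, v, 0.0)                      # right
s = coons_patch(c0, c1, d0, d1)
print(s(0.5, 0.0))   # lies on c0: (0.5, 0.0, 1.0)
print(s(0.5, 0.5))   # interior point blended from all four boundaries
```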
Coons' outstanding contribution to CAD development is emphasised even more by the fact that he was the academic supervisor of Ivan Sutherland, the creator of the famous Sketchpad, which became the prototype of modern CAD systems.

Bézier Curves

The Coons patch allowed controlling the surface shape on its boundaries, but not between the boundaries. Pierre Bézier, who at the beginning of the 1960s was developing the UNISURF system for designing surfaces of Renault cars, clearly understood the importance of controlling shape on the inside.

Fig. 5. Pierre Bézier

Pierre Bézier, a true representative of the French mathematical tradition, was well aware of the works of Charles Hermite (a French mathematician of the XIX century), in particular the cubic curves named after him. A Hermite curve is a geometric representation of a cubic curve using end-points and tangent vectors. The shape of a Hermite curve can be controlled by varying the directions and sizes of these vectors:

Fig. 6. A family of Hermite curves

Bézier was not happy that, when defining a Hermite curve, one could only specify its behavior at the end-points but could not explicitly influence the curve shape between these points (in particular, the curve can move arbitrarily far away from the segment connecting its end-points). Therefore, Bézier designed a constructively defined curve (which was later given his name) whose shape can be controlled by intermediate, so-called control points. A Bézier curve always starts at the first reference point, touching the first section of the polygonal curve that connects all reference points, and ends at the last reference point, touching the last section. Every point of the curve always remains inside the convex hull of the set of reference points:

Fig. 7. Bézier curve with four control points

Bézier published his work on curves in 1962, but when Citroën took the wraps off its studies 12 years later, it became clear that Paul de Casteljau had known about such curves at least three years before Bézier. De Casteljau described them constructively, and the algorithm was named after him. Later Forrest showed the connection between Bézier curves and Bernstein polynomials (of which mathematicians had known since the beginning of the XX century). He demonstrated that the function that defines a Bézier curve can be represented as a linear combination of basic Bernstein polynomials [Forrest 1972]. This enabled studying the properties of Bézier curves by relying on the properties of these polynomials.

There are two methods to move from curves to Bézier surfaces. The first method introduces so-called generating Bézier curves that have identical parameterisation. For each parameter value, a Bézier curve is constructed using the corresponding points on these curves; moving along the generating curves, one constructs a surface that is called a rectangular Bézier surface. The domain of the surface parameters is rectangular. The second method uses a natural generalization of Bernstein polynomials to the case of two variables. A surface defined by such a polynomial is called a triangular Bézier surface.

Fig. 8. Bézier surface

Being impeccable geometric constructs, Bézier curves and surfaces nevertheless have a couple of properties that considerably restrict their applicability. One of these properties is that Bézier curves do not allow accurate representation of conic sections (for instance, of a circular arc). Another one is that their algebraic degree increases together with the number of reference points, which greatly complicates numerical calculations.
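Before moving on, the constructive algorithm of de Casteljau mentioned above is worth spelling out; the following is a minimal sketch of the classical scheme (my own illustration, not code from UNISURF or any CAD system). A point of the curve at parameter t is obtained by repeatedly replacing the control polygon with the points that divide its segments in the ratio t : (1 - t), until a single point remains.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1) by repeated
    linear interpolation of the control points (de Casteljau's algorithm)."""
    points = [tuple(p) for p in control_points]
    while len(points) > 1:
        points = [
            tuple((1 - t) * a[i] + t * b[i] for i in range(len(a)))
            for a, b in zip(points, points[1:])
        ]
    return points[0]

# A cubic Bezier curve with four control points (cf. Fig. 7).
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(ctrl, 0.0))   # (0.0, 0.0): starts at the first point
print(de_casteljau(ctrl, 0.5))   # (2.0, 1.5): inside the convex hull
print(de_casteljau(ctrl, 1.0))   # (4.0, 0.0): ends at the last point
```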
Mathematicians have long known the way to keep the algebraic degree of complex curves under control: it suffices to construct a curve from smoothly joined segments, each of which has a bounded algebraic degree. Such curves are called splines; the first reference to them was made by Isaac Schoenberg, an American mathematician of Romanian origin [Schoenberg 1946]. Carl de Boor, another American mathematician, but of German origin, looked at Schoenberg's theoretical works from a practical perspective (in the CAD context). De Boor's treatise "On Calculating with B-Splines" [De Boor 1972], as well as a paper by Cox, "The Numerical Evaluation of B-Splines" [Cox 1972], published the same year, established connections between the geometric shape of a compound curve and an algebraic method of its definition.

B-splines are generalizations of Bézier curves and surfaces: they enable defining a curve shape in a similar manner using reference points, but the algebraic degree of a B-spline does not depend on the number of reference points. The B-spline equation is similar to that of Bézier curves, but rather than being Bernstein polynomials, the blending functions are defined recursively as functions of the parameter. The parameter range of a B-spline is divided by knots, which correspond to the points where the polynomial segments of the given degree join.

Inventing NURBS

The first work mentioning NURBS was the dissertation of Ken Versprille, a post-graduate student at Syracuse University, New York [Versprille 1975].

Fig. 9. Kenneth Versprille, the NURBS inventor

Versprille holds a bachelor of science degree in mathematics from the University of New Hampshire; he then studied towards Master's and Doctorate degrees at Syracuse University, where Steven Coons was his professor. Appreciating Coons' ideas, Versprille published the first description of NURBS, which was the focus of his dissertation. Soon after graduation, he was hired by Computervision as a senior programmer to develop 3D modeling functionality in CADDS 3. Although his assignments (spline implementation) matched his research interests, his boss was more concerned with meeting the project deadline and insisted on rejecting NURBS in favor of the (mathematically) simpler Bézier curves. A few years later, Versprille reached a top position at Computervision, and the company finally decided to support NURBS. A programmer who was put in charge of the project sought Ken's advice, which was not long in coming: "Just change a particular flag in a particular file from 0 to 1 and recompile the code!" Apparently, Versprille had been working on NURBS from the very beginning, just not including the relevant code in a release. After a couple of errors were corrected, the code worked! [Yares 2008].

In 2005, Kenneth Versprille received the Lifetime Achievement Award from the CAD Society, a non-commercial CAD association, for inventing NURBS, an invaluable contribution to CAD development. Dr. Versprille received the award at COFES (the Congress on the Future of Engineering Software), which took place the same year in Arizona.

Boeing Contribution

In 1979 the Boeing Company, an American aerospace corporation, started developing its own CAD/CAM system called TIGER [Solid Modeling 2011]. One of the tasks of its developers was to choose an appropriate representation for 11 types of curves, comprising everything from conic sections to Bézier curves and B-splines.
In the course of the project, one of the researchers, Eugene Lee, discovered that the main task (locating the intersection points of two arbitrary curves) can be reduced to the problem of locating the intersection points of Bézier curves, because any smooth curve can, within a certain neighbourhood, be approximated with a Bézier curve. This stimulated the researchers to look for a way of representing all curves using a single form. (It seems that they knew nothing of Versprille's dissertation.) The possibility of representing circles and other conic sections using rational Bézier curves became an important local discovery [Lee 1981]. Another step towards the discovery was introducing non-uniform B-splines, well known from the scientific literature, into industrial practice. Finally, the researchers integrated the two concepts into a single formula: NURBS. Afterwards, a great deal of effort was needed to persuade other TIGER developers to start employing the unified representation for all types of curves.

Soon Boeing proposed to include NURBS in the IGES format and prepared a technical document with an exhaustive description of the new universal type of geometric data. This proposal was received with great enthusiasm, first of all due to the position taken by SDRC.

SDRC Contribution

In 1967 former professors of the Machine-Building Department of the University of Cincinnati (the USA) created SDRC (Structural Dynamics Research Corporation). Initially it was intended that the company would provide consulting services in machine building, but with time SDRC transformed into one of the leading global CAD developers. Starting with CAE (computer-aided engineering), the company then also turned to CAD (design) and developed I-DEAS, which helped to deal with a wide range of tasks, from conceptual design through wireframing and solid modeling to drafting, finite-element analysis and NC programming. I-DEAS was based on GEOMOD, an original solid modeler. Initially GEOMOD represented solid bodies as polygonal meshes that approximate their boundaries. Having realised the importance of the Boeing proposal to standardize NURBS, SDRC developers zealously took on implementing NURBS in GEOMOD. The algorithm was mainly developed by Wayne Tiller, who later co-authored the famous monograph "The NURBS Book" [Piegl 1997].

Fig. 10. Wayne Tiller, the President of GeomWare, and the co-author of "The NURBS Book"

I-DEAS ceased to exist when EDS acquired SDRC in 2001, while Wayne Tiller used his experience to implement the NLib library (see below).

Contribution of GeomWare, IntegrityWare and Solid Modeling Solutions

The American company IntegrityWare has been developing libraries for geometric calculations since 1996. In 1998 it reached an agreement with Solid Modeling Solutions to develop SMLib, a solid modeling kernel, the first version of which was ready the same year. SMLib is a sort of "nested doll", where every nesting level is a separate library of functions or classes. The innermost "doll" is NLib, a library of functions (NURBS Library) developed by the partner company GeomWare. NLib is an exhaustive set of functions for designing and manipulating NURBS curves and surfaces. NLib's algorithms are based on the classic monograph [Piegl 1997], and one of its authors, Wayne Tiller, is the founder and President of GeomWare. NLib is used by more than 85 companies involved in engineering software development.
An object-oriented library, GSNlib (General Surface NURBS Library), is based on NLib; it is a set of methods for creating, editing, obtaining information about, and intersecting NURBS curves and surfaces. IntegrityWare distributed this library under the name of GSLib and licensed it to such companies as Robert McNeel & Associates (for developing Rhino 3D) and Ford Motor Company.

Subdivision Surfaces

Subdivision surfaces can be considered as polygonal models that are iteratively constructed from a base mesh, which with each iteration becomes closer to the shape of the modelled surface. The two components of a subdivision surface are the base mesh and the algorithm for smoothing it. Historically, the theory of subdivision surfaces started with the work of an American designer, Chaikin, who developed a method of iterative curve construction from reference points [Chaikin 1974]. Similarly to Bézier, Chaikin starts constructing a curve with a characteristic polygonal curve defined by a set of reference points. At the next stage, a new sequence of reference points is formed, built according to special rules from the first sequence. Geometrically it looks like corner cutting of the initial polygonal curve: each section is divided in the 1:2:1 ratio, and the corners between two sections are cut off as new sections are put between the shortened old ones. The process continues until the curve is sufficiently smooth.

Fig. 11. Chaikin subdivision curve

Soon it was proved that the curve generated by the Chaikin algorithm is nothing else but a quadratic uniform B-spline. Chaikin's method formed the basis for a family of algorithms developed by his followers. One of these algorithms was the method developed by Doo and Sabin for constructing quadratic uniform B-spline surfaces from a base quadrangular mesh (each facet in such a mesh is a convex quadrangle) [Doo 1978]. Soon the researchers were able to extend their method to any base mesh in which each facet can have an arbitrary number of apexes: 3, 4, 5, and so on. Locally the resulting surface (except at a finite number of points) is a quadratic uniform B-spline surface. The Doo-Sabin method works as follows: at each step, each facet is replaced with a smaller facet with the same number of apexes. Each apex of the smaller facet is the arithmetic average of the original apex, the centres of the two adjacent edges and the facet centre. The result is a disconnected set of facets; each new apex is then connected to the other new apexes obtained from the same old apex, forming new facets. The resulting connected polyhedron is the basis for the next step of the algorithm. It is easy to notice that this method, like Chaikin's method, involves corner cutting:

Fig. 12. Doo-Sabin subdivision surface

Post-graduate students at the University of Utah, Catmull and Clark, extended the corner-cutting approach to construct uniform cubic B-spline surfaces [Catmull 1978]. Their method, like the Doo-Sabin method, can work on base meshes of arbitrary topology (locally the resulting surface is similar to a cubic B-spline). The smoothing algorithm is based on iterative construction of a new mesh under somewhat different rules. The following figure illustrates how the method works:

Fig. 13. Catmull-Clark subdivision surface

Subdivision surfaces constitute a convenient way of representing smooth surfaces in a compact manner. This property is broadly used to represent various organic objects and is, therefore, also well suited to describe complex surfaces in surface modeling systems.
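The 1:2:1 corner-cutting rule is simple enough to show in a few lines. The sketch below (my own example, not code from any of the systems discussed) applies Chaikin's scheme to an open polyline: every segment contributes two new points at one quarter and three quarters of its length, and repeating the step converges to a quadratic uniform B-spline. Applying the same idea to mesh facets instead of polyline segments is what the Doo-Sabin and Catmull-Clark schemes described above do.

```python
def chaikin_step(points):
    """One round of Chaikin corner cutting on an open polyline.
    Each segment (p, q) is replaced by the points at 1/4 and 3/4 of it."""
    new_points = [points[0]]                       # keep the first end-point
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        new_points.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        new_points.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    new_points.append(points[-1])                  # keep the last end-point
    return new_points

polyline = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
for _ in range(3):                                 # three rounds of smoothing
    polyline = chaikin_step(polyline)
print(len(polyline), polyline[:3])                 # corners are rounded off
```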
(Special attributes are used to support non-smooth features such as sharp edges, which limit the scope of subdivision algorithms.) Currently several industrial standards for exchanging geometric data on the basis of subdivision surfaces are being developed.

Why NURBS are good?

Why have NURBS curves and surfaces played such an important role in CAD development? First of all, they offer a general mathematical representation for both analytical geometric objects and freeform curves and surfaces. Manipulating NURBS control points and weights enables flexible design of a great variety of geometric forms. Calculations with NURBS can be done rather quickly and are numerically stable. NURBS curves and surfaces have a clear geometric interpretation that is especially useful and valuable for designers who have a good knowledge of geometry. NURBS have a rich set of tools (knot insertion, removal and modification, degree elevation, splitting) that can be used to create and analyse such objects. NURBS representations are invariant under scaling, rotation, translation and shear, as well as parallel and perspective projection [Piegl 1991].

At the same time the representation of curves and surfaces as NURBS has some disadvantages. First of all, it requires more memory: for instance, representing a circle as a NURBS curve requires defining seven reference points and ten knots, which means saving 38 floating-point numbers instead of seven (centre, surface normal, radius). An incorrectly chosen weight function can result in extremely poor parameterisation, which will make further NURBS-based constructions impossible. Certain algorithms (for example, computing the intersection of two surfaces) work better in a traditional representation. Finally, some fundamental algorithms (such as inverse mapping) are numerically unstable for NURBS. In spite of these shortcomings NURBS is still widely used in CAD, because nothing better has been invented. Nevertheless...

T-splines are a type of freeform surface similar to NURBS. The key difference between T-splines and NURBS is that the reference points of a NURBS surface must form the topological equivalent of a rectangular grid, while T-splines can have so-called inner T-points (a reference point with three rather than four neighbors). T-splines serve as a bridge technology between NURBS and subdivision surfaces.

Fig. 14. T-spline

Modeling organic surfaces using T-splines reduces the number of reference points twofold in comparison with NURBS (with the same requirements for G2 surface smoothness). T-splines were invented by Thomas Sederberg [Sederberg 2003]. In 2004 T-Splines, Inc. (the US) was formed to facilitate their commercial application; the company develops end-user software products using T-spline technology. Time will show whether this patent-protected technology ousts NURBS. The recent acquisition of the technology assets of T-Splines, Inc. by Autodesk [Autodesk 2011] demonstrated the recognition of this technology by the CAD industry.

References

Autodesk, Inc., 2011, Autodesk Acquires T-Splines Modeling Technology Assets, http://news.autodesk.com/news/autodesk/20111222005259/en/ Catmull, E., and Clark, J., 1978, Recursively generated B-spline surfaces on arbitrary topological meshes, Computer-Aided Design 10(6):350-355. Chaikin, G., 1974, An algorithm for high speed curve generation, Computer Graphics and Image Processing, 3(4):346–349. Coons, S. A., 1967, Surfaces for Computer Aided Design of Space Form, MIT Project MAC, AUC-TR-41. http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-041.pdf Cox, M.
G., 1972, The Numerical Evaluation of B-Splines, J. Inst. Mathematics and Applications, Vol. 10, pp. 134-149. De Boor, C., 1972, On Calculation with B-Splines, J. Approximation Theory, Vol. 6, No. 1, pp. 50-62. Doo, D., 1978, A subdivision algorithm for smoothing down irregularly shaped polyhedrons, Proceedings on Interactive Techniques in Computer Aided Design, pp. 157-165. Forrest, A. R., 1972, Interactive Interpolation and Approximation by Bezier Polynomials, Comp J., Vol 15, pp 71-79. Lee, E. T. Y., 1981, A Treatment of Conics in Parametric Rational Bezier Form, Boeing document, Boeing, Seattle, Wash. Piegl, L., 1991, On NURBS: A Survey, IEEE CG&A, Vol. 11, No. 1, pp. 55-71. http://www.ece.uvic.ca/~bctill/papers/mocap/Piegl_1991.pdf Piegl, L. A., and Tiller, W., 1997, The NURBS Book, Springer. Schoenberg, I. J., 1946, Contributions to the problem of approximation of equidistant data by analytic functions, Part A: On the problem of smoothing or graduation, a first class of analytic approximation formulas, Quart. Appl. Math. 4, 45–99. Sederberg, T.-W., Zheng, J., Bakenov, A., and Nasri, A., 2003, T-Splines and T-NURCCs, ACM Transactions on. Graphics, 22(3), 477-484, http://cagd.cs.byu.edu/~tspline/ Solid Modeling Solutions, 2011, NURBS at Boeing. http://www.smlib.com/white%20papers/nurbsatboeing.htm Versprille, K. J., 1975, Computer-Aided Design Applications of the Rational B-Splines Approxamation Form, doctoral dissertation, Syracuse Univ., Syracuse, N.Y. Yares, E., 2008, A story about NURBS and bugs from Ken Versprille, http://www.evanyares.com/a-story-about-nurbs-and-bugs-from-ken-versprille/ See also: Permanent link :: http://isicad.net/articles.php?article_num=14940
Simpson's paradox visualized: The example of the Rosiglitazone meta-analysis

Simpson's paradox is sometimes referred to in the areas of epidemiology and clinical research. It can also be found in meta-analysis of randomized clinical trials. However, though readers are able to recalculate examples from hypothetical as well as real data, they may have problems figuring out where it emerges from.

First, two kinds of plots are proposed to illustrate the phenomenon graphically, a scatter plot and a line graph. Subsequently, these can be overlaid, resulting in an overlay plot. The plots are applied to the recent large meta-analysis of adverse effects of rosiglitazone on myocardial infarction and to an example from the literature. A large set of meta-analyses is screened for further examples.

As noted earlier by others, occurrence of Simpson's paradox in the meta-analytic setting, if present, is associated with imbalance of treatment arm size. This is well illustrated by the proposed plots. The rosiglitazone meta-analysis shows an effect reversion if all trials are pooled. In a sample of 157 meta-analyses, nine showed an effect reversion after pooling, though non-significant in all cases.

The plots give insight on how the imbalance of trial arm size works as a confounder, thus producing Simpson's paradox. Readers can see why meta-analytic methods must be used and what is wrong with simple pooling.

Background

Simpson's paradox, also known as the ecological effect, was first described by Yule in 1903 [1] and is named after Simpson's article in 1951 [2]. It refers to the phenomenon that sometimes an association between two dichotomous variables is similar within subgroups of a population, say females and males, but changes its sign if the individuals of the subgroups are pooled without stratification. This is reflected in the title of a paper by Baker and Kramer ('Good for women, good for men, bad for people', [3]). There are numerous examples, particularly from the areas of epidemiology and social sciences, of associations strongly affected by observed or unobserved dichotomous variables [4-8]. Even a tale based on Simpson's paradox has been told [9]. The reason for its occurrence is the existence of an influencing variable that is not accounted for, often unobserved. Thus, it may seem that the effect is characteristic of observational studies and can be avoided by randomization.

This is not true, as was pointed out by others [10-14]. As Altman and Deeks note, Simpson's paradox is not really a paradox, but a form of bias, resulting from heterogeneity in the data if not accounted for [10]. Often tables of hypothetical as well as real data examples are presented. However, though these examples are easily recalculated, there is a need for readers, especially clinicians and practitioners in other fields, to really understand the nature of the phenomenon. Baker and Kramer proposed a plot, later called the Baker-Kramer (BK) plot, which was independently invented by others much earlier, for graphically illustrating Simpson's paradox [3,13-15]. Their examples stem from hypothetical data. For this plot it is required that the influencing variable is dichotomous. In the setting of a meta-analysis, however, the main source of heterogeneity and thus the most important influential variable is well known and not dichotomous in general: it is the variable 'trial'. A perfect example of Simpson's paradox occurring in a meta-analysis of case-control studies is given by Hanley and Theriault [8].
In this meta-analysis all single trials show an increased risk for exposed individuals, while the pooled analysis reverses this effect. As a (less perfect) example for meta-analysis of RCTs, we use a recent systematic review of the effect of rosiglitazone on the risk of myocardial infarction and death from cardiovascular diseases [16]. It stated a significant increase of myocardial infarctions in the rosiglitazone group. The authors found a Peto odds ratio of 1.428 with 95 per cent confidence interval [1.031; 1.979] and p-value 0.0321 (fixed effect model) [17]. This meta-analysis immediately raised a discussion not only about the safety of the drug, but also on methodological issues referring to potential heterogeneity, different follow-up times, the large number of trials with no or very few events and the imbalanced group sizes within many trials [18-20]. A re-analysis of the data using several variants of the Mantel-Haenszel method found that the significance of the effect is questionable (odds ratio estimates between 1.26 and 1.36, most of them not significant) [18]. Though not consistently significant, the meta-analysis (all methods) exhibits an excess of events in the treatment group (rosiglitazone), compared to the control group (any other regimen). For example, taking the risk difference (fixed effect model, Mantel-Haenszel method) results in a combined estimate of 0.002 (95 per cent confidence interval [0.000; 0.004] with p-value 0.0549), corresponding to an estimated NNH (Number Needed to Harm) of about 489 patients.

One problem with these data is the large number of trials without any events. If the outcome is measured by the risk ratio or the odds ratio, these trials are often excluded from a meta-analysis because it is argued that they do not contribute any information about the magnitude of the treatment effect [21]. In order to use all available information, simple pooling of all single tables could be rather tempting. It is seemingly convenient here because of the considerable number of double-zero studies, despite the general consensus that this is discouraged [22]. If pooling is done, in spite of this objection, for the main endpoint myocardial infarction (MI), we in fact surprisingly observe that the pooled 2 × 2 table provides the contrary: the risk of MI for the treated individuals is 0.0055 and therefore less than for the control group (0.0059), see Table 1. The pooled odds ratio is 0.94 with 95 per cent confidence interval [0.69; 1.29] (p-value 0.7109). This (non-significant) effect reversion, produced by pooling, was observed by another author who, in the light of this, found the results of the meta-analysis 'intriguing' [23]. It can be seen as a milder form of Simpson's paradox.

Table 1. Pooled data of rosiglitazone meta-analysis (full data see ref. [16])

In the next section, we first develop two kinds of plots to reveal and illustrate the mechanism of Simpson's paradox and effect reversion, using the rosiglitazone example. The third plot emerges from overlaying both plots. In the results section, we apply the plots to the data given by Hanley and Theriault [8] and discuss both methods and results. The paper ends with conclusions.

Methods and Results

Simpson's paradox for continuous variables

The first idea for giving a pictorial representation of the data is very simple. It comes from a graphic that serves for demonstrating the continuous version of the effect.
For example, think of a correlation study where the data are grouped by a nominal variable Z, say study center. The conditional correlation (i.e. the correlation, given Z) of two continuous variables X, Y is assumed to be positive for all values of Z. Simpson's paradox occurs if, on the other hand, across the different levels of Z it holds that 'the higher X, the lower Y'. The appropriate plot best illustrating this is given by Figure 1. It is a grouped scatterplot that shows approximately parallel ascending regression lines within each level of Z, but a decreasing sequence of midpoints. Our goal is now to transfer this idea to the case of both X and Y being dichotomous.

Figure 1. Scatterplot of correlation between two continuous variables X and Y, grouped by a nominal variable Z. Different colors represent different levels of Z.

Simpson's paradox for dichotomous variables: a scatterplot

Let X, Y be dichotomous variables, where X is the treatment (1 = active, 0 = control) and Y is the outcome of interest (e.g., 1 = MI, 0 = no MI, where MI means myocardial infarction). The grouping variable is denoted as Z. In our meta-analytic example, Z ∈ {1,..., N} is the trial (N = 42 for the rosiglitazone meta-analysis). Simpson's paradox occurs, e.g., if within (most) studies the event Y is more frequent in the active treatment group (X = 1), but between studies those with larger treatment proportions (corresponding to higher X) tend to exhibit lower event probabilities (corresponding to lower Y). This is possible only if the proportions of patients treated with the active drug vary substantially over all trials. Exactly this – the noticeable imbalance of the groups in many of the studies – is a characteristic feature of the rosiglitazone meta-analysis, as is pointed out both in the original article [16] and several reactions thereon, e.g. [18].

Figure 2 (left panel) is a straightforward analogue to the continuous plot described above. Instead of the (dichotomous) variables X and Y themselves, their observed frequencies are used. A simple scatterplot is presented that shows the overall event frequencies P(Y = 1|Z = i) within the N trials i = 1,..., N versus the proportions P(X = 1|Z = i) of patients undergoing the active treatment. The large dispersion of the treatment proportions, unusual for randomized trials, is clearly seen. The negative correlation between treatment proportion and event probability (indicated by the fitted unweighted regression line) could lead to the deceptive impression that the frequency of adverse events decreases if more patients receive active treatment, thus potentially producing Simpson's paradox.
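Before turning to Figure 2, here is a small self-contained numerical sketch of the mechanism just described. The two 'trials' below are hypothetical numbers chosen purely for illustration (they are not the rosiglitazone data): in each trial the event is slightly more frequent under active treatment, yet because the trial with the low event rate allocates most patients to the active arm, collapsing the 2 × 2 tables reverses the sign of the risk difference.

```python
# Hypothetical trials: (treated_events, treated_n, control_events, control_n)
trials = [
    (10, 100, 81, 900),   # high-risk population, mostly control patients
    (18, 900,  1, 100),   # low-risk population, mostly treated patients
]

def risk_difference(te, tn, ce, cn):
    """Risk in the treated arm minus risk in the control arm."""
    return te / tn - ce / cn

# Within each trial the treatment looks slightly harmful (+0.010 each).
for i, (te, tn, ce, cn) in enumerate(trials, start=1):
    print(f"trial {i}: risk difference = {risk_difference(te, tn, ce, cn):+.3f}")

# Collapsing the tables without stratification reverses the sign (-0.054).
te = sum(t[0] for t in trials); tn = sum(t[1] for t in trials)
ce = sum(t[2] for t in trials); cn = sum(t[3] for t in trials)
print(f"pooled : risk difference = {risk_difference(te, tn, ce, cn):+.3f}")
```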
Points belonging to the same trial are joined by a thin line, so that different lines indicate different trials. The slope of each within-trial line corresponds to the risk difference of this trial. The lines show a tendency to increase for most trials, revealing more adverse events in the active treatment group, in agreement with the published result of the meta-analysis. In addition, three other lines are drawn. The green line joins the estimated mean event frequencies under control and under rosiglitazone, calculated within trials and averaged with equal weights for all trials. The blue line is similar, but the trials are now weighted with their precision, measured by the inverse sampling variance, calculated from a meta-analysis using the risk difference as outcome measure. Both lines increase slightly, reflecting what in average happens within trials. The red line, however, calculated by simple collapsing all 2 × 2-tables without stratification by trial, decreases. The reason is that there are many unbalanced trials with the treatment groups being larger than the control groups and simultaneously having the lowest event rates (see Figure 2, left panel). We can visualize this by adding further elements to this plot. The starting and ending points of the single trial lines are marked by diamonds with size proportional to the size of the control group and the treatment group of this trial, respectively. If this is done, the contribution of single trial arms to the red line becomes visible. In our example, we have a large trial with many events in the control group (left) and, on the other hand, many trials with a larger proportion of rosiglitazone patients having low event rates (right-hand side). Simpson's paradox for dichotomous variables: the overlay plot The right panel plot of Figure 2 shows a combination of the scatterplot and the line plot. The circles from the scatterplot and the trial-specific lines from the line plot are overlaid, while the regression line, the colored lines and the diamonds are skipped for sake of clarity. The interpretation of the x-axis and the lines now is slightly changed. Values x on the x-axis are interpreted as all possible proportions of active treatment in a trial. The y-values on the line belonging to a particular trial indicate the expected frequency of events in this trial, given X = x. If X = 0, this corresponds to the observed fraction of events in the control group (the intercept). If X = 1, the value provides the observed fraction of events in the treatment group of the trial. The i'th line is thus given by the linear equation y = P[Z = i](Y = 1|X = 0) + [P[Z = i](Y = 1|X = 1) - P[Z = i](Y = 1|X = 0)] x,(1) where the slope P[Z = i](Y = 1|X = 1) - P[Z = i](Y = 1|X = 0) is the risk difference observed in trial Z = i, as stated above. If we insert for x the proportion x[0 ]of patients actually treated in trial i, that is x[0 ]= P[Z = i](X = 1), we get y[0 ]= P[Z = i](Y = 1|X = 0) + [P[Z = i](Y = 1|X = 1) - P[Z = i](Y = 1|X = 0)] P[Z = i](X = 1), which results (after straightforward simplification) in y[0 ]= P[Z = i](Y = 1), the overall frequency of events in trial i. These values are marked as the circles on the lines in the right-hand panel, which are the same as those on the scatterplot (left panel). This equality corresponds to equation (1) in [4]. We apply the plots to the example of meta-analysis of case-control studies given by Hanley and Theriault (data in reference [8]). 
The cases are children with leukemia, the exposition of interest being the presence of a high voltage power line within 100 m of the residence. Figure 3 displays the plots for this example. The y-axes are logit-transformed, because effect is measured as odds ratio. The scatterplot (left-hand panel) shows that the proportion of exposed (children living near a power line, here expressed as log odds) was higher in studies with a lower case-control ratio. The line plot (middle panel) displays that within all studies the exposition is slightly associated with leukemia, likewise for the stratified meta-analysis (green and blue line), but in the pooled sample (red line) the direction of association is reversed. The diamonds disclose how the large case and control groups pull the red line in the opposite direction. A direct overlay of these plots would not make sense, because when using a nonlinear transformation of the y-axis, the circles of the scatterplot do not lie exactly on the lines of the line plot. Instead, by subjecting equation (1) to the logit transformation we get a curved counterpart to the overlay plot. This is shown in the right-hand panel of Figure 3. Figure 3. Three plots illustrating Simpson's paradox in a meta-analysis of case-control studies: (a) Scatterplot of frequency of exposition (on a log odds scale) against proportion of cases (left panel). (b) Line plot displaying log odds ratios within studies (middle panel). 0 = control group, 1 = case group. (c) Curved overlay plot (right panel). The example of the rosiglitazone meta-analysis illustrates that an ecological effect can occur even if all studies are randomized clinical trials. The scatterplot, applied to this example, shows that the myocardial infarction rate is the lower, the higher the proportion of patients in the active treatment groups is. This is no effect of the treatment, but an artefact of the studies included in this meta-analysis. The large majority of treated patients in some trials is explained by the fact that the authors pooled multiple groups of patients receiving rosiglitazone, where applicable [16]. On the other hand, many of these studies had only short-time follow-up, so that there were only few events observed. Casually, we note that this kind of heterogeneity in study design is present although there was no indication of statistical heterogeneity of the treatment effect on any scale, as measured in terms of τ^2, H or I^2[24]. These measures do not capture heterogeneity in other respect. This taken into account, the result of the meta-analysis that more adverse events are attributed to treatment than to control, as claimed to be significant in [16], was questioned by others In general, even a strong correlation contrary to the within-study association does not necessarily cause an effect reversion. This happens only if the disparities of the treatment arm sizes are large enough to outbalance the treatment effect in the single trials. This can be judged by inspection of the line plot. The line plot displays the treatment effect in each single study, as the slope of each line corresponds to the treatment effect measured in this study. The slope of the green line is the (uniformly weighted) mean treatment effect, that of the blue line the weighted mean treatment effect, the latter corresponding to the result of a meta-analysis. This kind of plot is not restricted to the risk difference, as the second example shows. 
Rather, it is easily generalized to a plot for the risk ratio or the odds ratio or other measures of treatment effect, such as the arcsine difference [25], by using the log scale, the logit scale, or the arcsine scale for the y -axis, respectively. If the y-axis is not transformed, the plots can be overlaid. At first glance, the overlay plot is evocative of the so-called BK-plot [3,13-15]. It was first demonstrated using a hypothetical situation with only two groups (males and females), with the female fraction of patients as x-axis and the two lines corresponding to the two treatments [3]. The BK-plot was applied, for example, to medical school admission data [26]. There is, however, a fundamental difference between our overlay plot and the BK-plot which is elucidated in Table 2. In the overlay plot, the x-axis represents the variable 'treatment', that is, proportions of patients treated with the active treatment, and the lines correspond to any number of strata (here trials). In the BK-plot, however, x represents a binary confounder (i.e., proportions of patients belonging to one of two subgroups), and the lines correspond to the treatments. In fact, the BK-plot was originally introduced to elucidate Simpson's paradox for the simplest case that both the treatment variable and the confounding variable are binary. The plot then contains only two lines and two circles. Insight comes from comparing the position of the two circles on these lines: If Simpson's paradox is working, the lines have the same direction, do not intersect, and the circle on the lower line lies higher than the circle on the upper line. This method of pairwise comparing circles does not work in the context of a large and maybe heterogeneous meta-analysis. The confounder, here the trial, is not binary. Moreover, Simpson's paradox in meta-analysis uses to occur in a generalized form: We do not presuppose that the effects within all studies have the same direction. An effect reversion is identified if the sign of the pooled effect differs from that of the within-study treatment effect, estimated using meta-analytic methods. Table 2. Overlay plot compared to Baker-Kramer plot [3] As mentioned before, looking at the scatterplot or the overlay plot alone does not suffice, because a strong association between the proportion of patients treated and the event frequency in the direction opposite to the treatment effect is not sufficient for an effect reversion. The essential information is given by the line plot or by using the whole triplet of plots. In addition, we screened a large set meta-analyses for finding further examples of this phenomenon. This data set, consisting of 157 meta-analyses with binary endpoints and two treatment groups was kindly provided by Peter Jüni who had collected the data at the Department of Social and Preventive Medicine, University of Berne, Switzerland. We had formerly used these data for a study on publication bias [27]. For each meta-analysis, a 'Simpson check' is carried out by comparing the sign of the result of the pooled analysis to the sign of the meta-analytic result, using the risk difference (without loss of generality). We found that in 9 out of all 157 meta-analyses (5.7%) the sign changed. However, in all these examples the treatment effect was far from being significant, and the confidence intervals of the meta-analytic and the pooled estimate overlapped largely. Hence the change of the sign was of no statistical importance. 
The rosiglitazone example illustrates that an ecological effect (Simpson's paradox) can occur even when all studies are randomized clinical trials. However, as our empirical study shows, this is not a common phenomenon. When it occurs, it is caused by a strong imbalance in the proportions allocated to the active and control treatments across the trials included in the meta-analysis. The usual measures of heterogeneity on the treatment effect scale are not sensitive to this kind of heterogeneity. In our opinion, the plots proposed here serve to clarify what is going on beyond the calculations. Taken together, they help readers understand what is behind Simpson's paradox when they encounter it in a meta-analysis. The R code producing the plots is available from the first author on request [28].

Authors' contributions
GR conceived the proposed plots and drafted the manuscript. MS contributed the curved overlay plot and added to the writing. Both authors read and approved the final manuscript.

Acknowledgements
GR is funded by Deutsche Forschungsgemeinschaft (FOR 534 Schw 821/2-2). The authors wish to thank the two referees for helpful comments on the manuscript.

References
1. Yule G: Notes on the theory of association of attributes of statistics. Biometrika 1903, 2:121-134.
2. Simpson E: The interpretation of interaction in contingency tables.
3. Baker SG, Kramer BS: Good for women, good for men, bad for people: Simpson's paradox and the importance of sex-specific analysis in observational studies. Journal of Women's Health and Gender-Based Medicine 2001, 10(9):867-872.
4. Greenland S, Morgenstern H: Ecological bias, confounding, and effect modification. Int J Epidemiol 1989, 18:269-274.
5. Julious SA, Mullee MA: Confounding and Simpson's paradox. BMJ 1994, 309:1480-1481.
6. Appleton D, French J, Vanderpump M: Ignoring a covariate: An example of Simpson's paradox. The American Statistician 1996, 50(4):340-341.
7. Reintjes R, de Boer A, van Pelt W, de Groot JM: Simpson's paradox: An example from hospital epidemiology. Epidemiology 2000, 11:81-83.
8. Hanley JA, Theriault G: Simpson's paradox in meta-analysis. Epidemiology 2000, 11(5):613.
9. Significance 2007, 2:47-48.
10. Altman DG, Deeks JJ: Meta-analysis, Simpson's paradox, and the number needed to treat. BMC Medical Research Methodology 2002, 2:3.
11. Cates CJ: Simpson's paradox and calculation of number needed to treat from meta-analysis. BMC Medical Research Methodology 2002, 2:1.
12. Lievre M, Cucherat M, Leizorovicz A: Pooling, meta-analysis, and the evaluation of drug safety. Current Controlled Trials in Cardiovascular Medicine 2002, 3:6.
13. Baker SG, Kramer BS: The transitive fallacy for randomized trials: if A bests B and B bests C in separate trials, is A better than C? BMC Medical Research Methodology 2002, 2:13.
14. Baker SG, Kramer BS: Randomized trials, generalizability, and meta-analysis: graphical insights for binary outcomes. BMC Medical Research Methodology 2003, 3:10.
15. Jeon J, Chun H, Bae J: Chances of Simpson's paradox.
16. Nissen SE, Wolski K: Effect of Rosiglitazone on the risk of myocardial infarction and death from cardiovascular diseases. NEJM 2007, 356(24):2457-2471.
17. Yusuf S, Peto R, Lewis J, Collins R, Sleight P: Beta blockade during and after myocardial infarction: An overview of the randomized trials. Progress in Cardiovascular Diseases 1985, 27:335-371.
18. Diamond GA, Bax L, Kaul S: Uncertain Effects of Rosiglitazone on the Risk for Myocardial Infarction and Cardiovascular Death. Annals of Internal Medicine 2007, 147(8):578-581.
19. Shuster J, Jones L, Salmon D: Fixed vs random effects meta-analysis in rare event studies: The Rosiglitazone link with myocardial infarction and cardiac death. Statistics in Medicine 2007, 26:4375-4385.
20. Carpenter JR, Rücker G, Schwarzer G: Letter to the Editor. Statistics in Medicine 2007. [DOI: 10.1002/sim.3173]
21. Bradburn MJ, Deeks JJ, Berlin JA, Localio AR: Much ado about nothing: a comparison of the performance of meta-analytical methods with rare events. Statistics in Medicine 2007, 26:53-77.
22. Bracken M: Rosiglitazone and cardiovascular risk. N Engl J Med 2007, 357(9):937-938. [Author reply 939-940]
23. Higgins JPT, Thompson SG: Quantifying heterogeneity in a meta-analysis. Statistics in Medicine 2002, 21:1539-1558.
24. Rücker G, Schwarzer G, Carpenter JR: Arcsine test for publication bias in meta-analyses with binary outcomes. Statistics in Medicine 2008, 27(5):746-763.
25. Wainer H, Brown LM: Two Statistical Paradoxes in the Interpretation of Group Differences: Illustrated with Medical School Admission and Licensing Data.
26. Carpenter JR, Schwarzer G, Rücker G, Künstler R: Empirical evaluation shows the Copas selection model provides a useful summary in 80% of meta-analyses. 2007, in press.
27. R Development Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria; 2006. ISBN 3-900051-07-0. http://www.R-project.org

Pre-publication history
The pre-publication history for this paper can be accessed here:
{"url":"http://www.biomedcentral.com/1471-2288/8/34","timestamp":"2014-04-20T09:15:13Z","content_type":null,"content_length":"106771","record_id":"<urn:uuid:a20b78c2-f801-4cec-945a-a644bbc29f36>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Find an El Segundo Calculus Tutor

...Being an Engineer, I was hired by a technology company to direct their International Marketing operations. I executed this function with great success, managing an internal group of 13 people as well as public relations agencies in Mexico, Brasil, Saudi Arabia, Australia, New York and Los Angele...
20 Subjects: including calculus, Spanish, linear algebra, algebra 1

...Calculus is a foundational subject for electrical engineering, and I enjoyed taking calculus courses, receiving A's in all my courses. I work in communications and radar, which involves a lot of analysis, using many principles from calculus. I really enjoyed geometry when I first took it in AP high school math.
33 Subjects: including calculus, chemistry, physics, geometry

...Earned Ph.D. degree in physical Chemistry. B.Sc. degree in chem. eng. Professorship in Chemistry in the United States.
10 Subjects: including calculus, chemistry, statistics, algebra 1

...Now I'm happy to help other people find their own ways to think in algebra terms. Calculus--the study of change and growth--was the class that convinced me to take a lot of advanced math in college. This is the basis for a lot of the work I do every day as a research scientist.
13 Subjects: including calculus, physics, geometry, algebra 1

My name is Andy, and I love math. I hold a BS and MS from Massachusetts Institute of Technology (MIT). In high school, I competed in various prestigious math and science competitions. I have gotten 800s in SAT math I and II.
11 Subjects: including calculus, geometry, algebra 1, algebra 2
{"url":"http://www.purplemath.com/El_Segundo_calculus_tutors.php","timestamp":"2014-04-19T20:21:45Z","content_type":null,"content_length":"23972","record_id":"<urn:uuid:45fa4bf6-780e-4f0c-8703-020da71fc3d6>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
Boundary Value Problem Using Series of Bessel Functions

This Demonstration solves a boundary value problem by a series of Bessel functions of the first kind. The equation is for a thin elastic circular membrane and is governed by the partial differential equation in polar coordinates:

∂²u/∂t² = c² (∂²u/∂r² + (1/r) ∂u/∂r + (1/r²) ∂²u/∂θ²).

Here u(r, θ, t), a function of the coordinates and time, is the vertical displacement, and c is a constant, independent of the coordinates and time, that is determined by the density of and tension in the membrane. The initial conditions prescribe the displacement u and the velocity ∂u/∂t at time t = 0. In this example we assume circular symmetry. Thus the ∂²u/∂θ² term can be removed from the equation, yielding

∂²u/∂t² = c² (∂²u/∂r² + (1/r) ∂u/∂r).

Using separation of variables with u(r, t) = R(r) T(t) and separation constant −λ² reduces the problem to two ordinary differential equations:

T''(t) + λ² c² T(t) = 0,
r² R''(r) + r R'(r) + λ² r² R(r) = 0,

the second of which is the traditional form of Bessel's equation of order zero. The solution of these ordinary differential equations is carried out using the techniques outlined in [1] for series solutions of ordinary differential equations. The general solution has the form

u(r, t) = [c₁ J₀(λr) + c₂ Y₀(λr)] [c₃ cos(λct) + c₄ sin(λct)].

The boundary conditions that determine the constants c₁, c₂, c₃, and c₄ are that u(a, t) = 0, meaning that the function vanishes on the perimeter r = a; in addition, boundedness at the center requires c₂ = 0, since Y₀ is singular at r = 0. The Bessel function of the first kind, J₀, can be expressed by the series

J₀(x) = Σ_{m=0}^∞ (−1)^m / (m!)² (x/2)^{2m}.

Then with λ_n = α_n / a, where α_n (n = 1, 2, …) are the positive zeros of J₀, the solution satisfying the boundary conditions is given by

u(r, t) = Σ_{n=1}^∞ J₀(λ_n r) [a_n cos(λ_n c t) + b_n sin(λ_n c t)],

with the coefficients a_n and b_n determined by the initial displacement and velocity.

This example comes from [1] and the discussion given in Chapter 8.7 on series solutions and Bessel's equation; also see Chapter 10.5.

[1] J. R. Brannan and W. E. Boyce, Differential Equations with Boundary Value Problems: An Introduction to Modern Methods and Applications, New York: John Wiley and Sons, 2010.

(United States Military Academy West Point, Department of Mathematics)
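The Demonstration itself runs in the Wolfram Language, which is not reproduced here. As a rough numerical cross-check of the series construction above, the following Python sketch computes the Fourier-Bessel coefficients for a hypothetical initial displacement with zero initial velocity; the radius, wave speed, initial shape, and number of terms are all assumptions, not values taken from the Demonstration.

```python
# Truncated Fourier-Bessel series for the circularly symmetric membrane.
# Assumptions: a = 1, c = 1, zero initial velocity (b_n = 0), and a hypothetical
# initial displacement f(r) = 1 - (r/a)^2.
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import trapezoid

a, c, N = 1.0, 1.0, 20
alpha = jn_zeros(0, N)               # first N positive zeros of J0, so lambda_n = alpha_n / a

def f(r):
    return 1.0 - (r / a) ** 2        # hypothetical initial displacement

# Fourier-Bessel coefficients: a_n = 2 / (a^2 J1(alpha_n)^2) * int_0^a f(r) J0(alpha_n r/a) r dr
r_grid = np.linspace(0.0, a, 2001)
coeff = np.array([
    2.0 / (a ** 2 * j1(al) ** 2) * trapezoid(f(r_grid) * j0(al * r_grid / a) * r_grid, r_grid)
    for al in alpha
])

def u(r, t):
    """Evaluate the truncated series at radii r (array) and time t (scalar)."""
    radial = j0(np.outer(alpha, r) / a)              # shape (N, len(r))
    temporal = np.cos(alpha * c * t / a)[:, None]    # shape (N, 1)
    return (coeff[:, None] * radial * temporal).sum(axis=0)

# Sanity check: at t = 0 the series should approximately reproduce f(r).
print(u(np.array([0.0, 0.5, 0.9]), t=0.0))           # expect roughly [1.0, 0.75, 0.19]
```

Because the initial velocity is taken to be zero here, only the cosine terms survive; an initial velocity would be handled the same way, with its own Fourier-Bessel expansion supplying the b_n.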
{"url":"http://demonstrations.wolfram.com/BoundaryValueProblemUsingSeriesOfBesselFunctions/","timestamp":"2014-04-18T11:11:09Z","content_type":null,"content_length":"48145","record_id":"<urn:uuid:9f47e260-4373-494f-b7ed-bc53b8f0bfca>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
Read the "JET Copies" Case Problem on pages 678-679 of the text. Using simulation, estimate the loss of revenue due to copier breakdown for one year, as follows:

1. In Excel, use a suitable method for generating the number of days needed to repair the copier, when it is out of service, according to the discrete distribution shown.
2. In Excel, use a suitable method for simulating the interval between successive breakdowns, according to the continuous distribution shown.
3. In Excel, use a suitable method for simulating the lost revenue for each day the copier is out of service.
4. Put all of this together to simulate the lost revenue due to copier breakdowns over 1 year to answer the question asked in the case study.
5. In a word processing program, write a brief description/explanation of how you implemented each component of the model. Write 1-2 paragraphs for each component of the model (days to repair, interval between breakdowns, lost revenue, putting it together).
6. Answer the question posed in the case study. How confident are you that this answer is a good one? What are the limits of the study? Write at least one paragraph.

There are two deliverables for this Case Problem: the Excel spreadsheet and the written description/explanation. Please submit both of them electronically via the dropbox.
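Although the assignment calls for an Excel model, the same logic can be sketched in a few lines of Python. The repair-time probabilities, the breakdown-interval distribution, and the lost-revenue range below are placeholders only; the actual distributions appear on pages 678-679 of the text and are not reproduced here, so they must be substituted before the results mean anything.

```python
# Monte Carlo sketch of the JET Copies model (Python instead of Excel).
# All distributions below are PLACEHOLDERS, not the case's actual figures.
import numpy as np

rng = np.random.default_rng(seed=1)

# Step 1. Days to repair: a discrete distribution (placeholder values/probabilities).
repair_days = np.array([1, 2, 3, 4])
repair_probs = np.array([0.20, 0.45, 0.25, 0.10])

# Step 2. Weeks between successive breakdowns (placeholder: uniform on 0-6 weeks).
def next_breakdown_weeks():
    return rng.uniform(0.0, 6.0)

# Step 3. Revenue lost per out-of-service day (placeholder: uniform on $200-$800).
def lost_revenue_per_day():
    return rng.uniform(200.0, 800.0)

def simulate_one_year():
    """Simulate 52 weeks of operation and return total lost revenue."""
    weeks, lost = 0.0, 0.0
    while True:
        weeks += next_breakdown_weeks()
        if weeks > 52.0:
            break
        days_down = rng.choice(repair_days, p=repair_probs)
        lost += sum(lost_revenue_per_day() for _ in range(days_down))
    return lost

# Step 4. Repeat many times to estimate expected annual lost revenue.
losses = np.array([simulate_one_year() for _ in range(10_000)])
print(f"estimated mean annual lost revenue: ${losses.mean():,.0f}")
print(f"approx. 95% interval: ${np.percentile(losses, 2.5):,.0f} to ${np.percentile(losses, 97.5):,.0f}")
```

The mean of `losses` corresponds to the quantity asked for in step 4, and the spread of the simulated distribution (for example the percentile interval printed above) gives one way to address the confidence and limitations questions in step 6.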
{"url":"http://expresshelpline.com/read-the-jet-copies-case-problem-on-pages-678-679-of-the-text-usi-11180512.html","timestamp":"2014-04-20T03:58:30Z","content_type":null,"content_length":"16098","record_id":"<urn:uuid:6489ccec-c343-497d-bd83-8813d39278ce>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00570-ip-10-147-4-33.ec2.internal.warc.gz"}