Investigation into IGBT dV/dt during turn-off and its temperature dependence

Bryant, Angus T., Yang, Shaoyong, Mawby, P. A. (Philip A.), Xiang, Dawei, Ran, Li, Tavner, Peter and Palmer, Patrick R. (2011) Investigation into IGBT dV/dt during turn-off and its temperature dependence. IEEE Transactions on Power Electronics, Volume 26 (Number 10), pp. 3019-3031. ISSN 0885-8993. Full text not available from this repository.

In many power converter applications, particularly those with highly variable loads such as traction and wind power, condition monitoring of the power semiconductor devices in the converter is considered desirable. Monitoring the device junction temperature in such converters is an essential part of this process. In this paper, a method for measuring the insulated gate bipolar transistor (IGBT) junction temperature using the collector voltage dV/dt at turn-off is outlined. A theoretical closed-form expression for the dV/dt at turn-off is derived, closely agreeing with experimental measurements. The role of dV/dt in dynamic avalanche in high-voltage IGBTs is also discussed. Finally, the implications of the temperature dependence of the dV/dt are discussed, including implementation of such a temperature measurement technique.

Item Type: Journal Article
Subjects: T Technology > TK Electrical engineering. Electronics. Nuclear engineering
Divisions: Faculty of Science > Engineering
Library of Congress Subject Headings: Insulated gate bipolar transistors -- Reliability; Power electronics; Electric current converters; Power semiconductors
Journal or Publication Title: IEEE Transactions on Power Electronics
Publisher: IEEE
ISSN: 0885-8993
Date: October 2011
Volume: 26
Number: 10
Page Range: pp. 3019-3031
DOI: 10.1109/TPEL.2011.2125803
Status: Peer Reviewed
Publication Status: Published
Access rights to published version: Restricted or Subscription Access
Funder: Engineering and Physical Sciences Research Council (EPSRC); University of Cambridge Schiff Foundation
Grant number: EP/E02744X/1 (EPSRC), EP/E026923/1 (EPSRC)
URI: http://wrap.warwick.ac.uk/id/eprint/40439
Data sourced from Thomson Reuters' Web of Knowledge
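The measurement idea described in the abstract lends itself to a simple post-processing step: estimate the collector-voltage slope during the turn-off transient, then map it to junction temperature through a calibration curve measured beforehand at known temperatures. The sketch below is purely illustrative and is not taken from the paper; the fitting window, function names, and the calibration table are hypothetical placeholders.

```python
import numpy as np

def estimate_dv_dt(t, v_ce, v_lo_frac=0.2, v_hi_frac=0.8):
    """Estimate dV/dt of the collector-emitter voltage during turn-off.

    Fits a straight line to the samples where v_ce rises between
    v_lo_frac and v_hi_frac of its final (DC-link) value.
    t is in seconds, v_ce in volts; both are 1-D arrays.
    """
    v_final = v_ce[-1]
    mask = (v_ce >= v_lo_frac * v_final) & (v_ce <= v_hi_frac * v_final)
    slope, _ = np.polyfit(t[mask], v_ce[mask], 1)
    return slope  # V/s

def temperature_from_dv_dt(dv_dt, calib_dv_dt, calib_temp):
    """Look up junction temperature from a hypothetical calibration curve.

    calib_dv_dt and calib_temp are arrays recorded at known junction
    temperatures; interpolation assumes a monotonic relationship.
    """
    order = np.argsort(calib_dv_dt)
    return np.interp(dv_dt, np.asarray(calib_dv_dt)[order],
                     np.asarray(calib_temp)[order])
```

In practice the usable fitting window and the sign and strength of the temperature dependence depend on the device and gate drive, which is exactly what the paper's closed-form expression and measurements address.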
{"url":"http://wrap.warwick.ac.uk/40439/","timestamp":"2014-04-16T04:27:27Z","content_type":null,"content_length":"58672","record_id":"<urn:uuid:110b87cb-61c0-4d8b-8b16-75114994366c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Implementing hierarchical design-for-test logic for modular circuit design Embodiments of the present invention provide methods and apparatuses for implementing hierarchical design-for-test (DFT) logic on a circuit. The hierarchical DFT logic implements DFT circuitry that can be dedicated to a module, and which can configure DFT circuitry for multiple modules to share a sequential input signal and/or to share a sequential output signal. During operation, the DFT circuitry for a first module can propagate a bit sequence from the sequential input signal to the DFT circuitry of a second module, such that the bit sequence can include a set of control signal values for controlling the DFT circuitry, and can include compressed test vectors for testing the modules. Furthermore, the DFT circuitry for the second module can generate a sequential response signal, which combines the compressed response vectors from the second module and a sequential response signal from the DFT circuitry of the first module. Inventors: Kapur; Rohit (Cupertino, CA), Chandra; Anshuman (Mountain View, CA), Kanzawa; Yasunari (Sunnyvale, CA), Saikia; Jyotirmoy (Bangalore, IN) Assignee: Synopsys, Inc. (Mountain View, CA) Appl. No.: 12/362,284 Filed: January 29, 2009
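As a rough illustration of the daisy-chain idea described in the abstract (one sequential input shifted through per-module DFT logic, with responses merged onto one sequential output), here is a toy behavioural model. All class and signal names are hypothetical; real scan-compression hardware is specified at the RTL or gate level, not in Python.

```python
class ModuleDFT:
    """Toy per-module DFT block: a scan chain plus a 1-bit XOR response compactor."""

    def __init__(self, chain_length):
        self.chain = [0] * chain_length  # scan flip-flops
        self.response = 0                # compressed response bit

    def shift(self, bit_in):
        """Shift one bit in; the bit falling off the end feeds the next module."""
        bit_out = self.chain[-1]
        self.chain = [bit_in] + self.chain[:-1]
        return bit_out

    def capture(self):
        """Pretend to capture a test response and fold it into the compactor."""
        self.response ^= sum(self.chain) & 1

def shift_pattern(modules, bits):
    """Propagate a bit sequence through serially chained module DFT blocks."""
    for b in bits:
        for m in modules:      # module i feeds module i+1
            b = m.shift(b)
    return [m.response for m in modules]

# Two modules sharing one sequential input; control and test bits travel in a
# single interleaved stream, loosely mirroring the abstract's description.
mods = [ModuleDFT(4), ModuleDFT(4)]
for m in mods:
    m.capture()
print(shift_pattern(mods, [1, 0, 1, 1, 0, 0, 1, 0]))
```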
{"url":"http://patents.com/us-8065651.html","timestamp":"2014-04-17T04:18:37Z","content_type":null,"content_length":"81413","record_id":"<urn:uuid:a7b9cd69-54b9-4767-addb-2a917fcc339b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
On the power assignment problem in radio networks Results 1 - 10 of 36 - in ICDCS , 2008 "... Topology control is an effective method to improve the energy efficiency of wireless sensor networks (WSNs). Traditional approaches are based on the assumption that a pair of nodes is either “connected ” or “disconnected”. These approaches are called connectivity-based topology control. In real envi ..." Cited by 91 (15 self) Add to MetaCart Topology control is an effective method to improve the energy efficiency of wireless sensor networks (WSNs). Traditional approaches are based on the assumption that a pair of nodes is either “connected ” or “disconnected”. These approaches are called connectivity-based topology control. In real environments however, there are many intermittently connected wireless links called lossy links. Taking a succeeded lossy link as an advantage, we are able to construct more energy-efficient topologies. Towards this end, we propose a novel opportunity-based topology control. We show that opportunity-based topology control is a problem of NPhard. To address this problem in a practical way, we design a fully distributed algorithm called CONREAP based on reliability theory. We prove that CONREAP has a guaranteed performance. The worst running time is O(|E|) where E is the link set of the original topology, and the space requirement for individual nodes is O(d) where d is the node degree. To evaluate the performance of CONREAP, we design and implement a prototype system consisting of 50 Berkeley Mica2 motes. We also conducted comprehensive simulations. Experimental results show that compared with the connectivity-based topology control algorithms, CONREAP can improve the energy efficiency of a network up to 6 times. 1 - ACM Wireless Networks , 2005 "... supported by NSF CCR-0311174. Abstract — Topology control has been well studied in wireless ad hoc networks. However, only a few topology control methods take into account the low interference as a goal of the methods. Some researchers tried to reduce the interference by lowering node energy consump ..." Cited by 56 (0 self) Add to MetaCart supported by NSF CCR-0311174. Abstract — Topology control has been well studied in wireless ad hoc networks. However, only a few topology control methods take into account the low interference as a goal of the methods. Some researchers tried to reduce the interference by lowering node energy consumption (i.e. by reducing the transmission power) or by devising low degree topology controls, but none of those protocols can guarantee low interference. Recently, Burkhart et al. [?] proposed several methods to construct topologies whose maximum link interference is minimized while the topology is connected or is a spanner for Euclidean length. In this paper we give algorithms to construct a network topology for wireless ad hoc network such that the maximum (or average) link (or node) interference of the topology is either minimized or approximately minimized. Index Terms — Topology control, interference, wireless ad hoc networks. - in ESA , 2003 "... Abstract. Used for topology control in ad-hoc wireless networks, Power Assignment is a family of problems, each defined by a certain connectivity constraint (such as strong connectivity) The input consists of a directed complete weighted graph G = (V; c). The power of a vertex u in a directed spanni ..." Cited by 42 (3 self) Add to MetaCart Abstract. 
Used for topology control in ad-hoc wireless networks, Power Assignment is a family of problems, each defined by a certain connectivity constraint (such as strong connectivity). The input consists of a directed complete weighted graph G = (V, c). The power of a vertex u in a directed spanning subgraph H is given by p_H(u) = max_{uv ∈ E(H)} c(uv). The power of H is given by p(H) = ∑_{u ∈ V} p_H(u). Power Assignment seeks to minimize p(H) while H satisfies the given connectivity constraint. - , 2002 "... One of the main benefits of power controlled ad-hoc wireless networks is their ability to vary the range in order to reduce the power consumption. Minimizing energy consumption is crucial in such networks since, typically, wireless devices are portable and benefit only from limited power resou ..." Cited by 30 (9 self) Add to MetaCart One of the main benefits of power controlled ad-hoc wireless networks is their ability to vary the range in order to reduce the power consumption. Minimizing energy consumption is crucial in such networks since, typically, wireless devices are portable and benefit only from limited power resources. On the other hand, the network must have a sufficient degree of connectivity in order to guarantee fast and efficient communication. These two aspects yield a class of fundamental optimization problems, denoted as range assignment problems, that have been the subject of several works in the area of wireless network theory. The primary aim of this paper is to describe the most important recent advances on this class of problems. Rather than completeness, the paper will try to provide results and techniques that seem to be the most promising to address the several important related problems which are still open. Discussing such related open problems is indeed our other main goal. - Wireless Networks , 2002 "... Energy conservation is a critical issue in ad hoc wireless networks for node and network life since the nodes are powered by batteries only. One major approach for... ..." Cited by 30 (3 self) Add to MetaCart Energy conservation is a critical issue in ad hoc wireless networks for node and network life since the nodes are powered by batteries only. One major approach for... - in Proc. IEEE Infocom , 2004 "... We consider the problem of positioning data collecting base stations in a sensor network. We show that in general, the choice of positions has a marked influence on the data rate, or equivalently, the power efficiency, of the network. In our model, which is partly motivated by an experimental enviro ..." Cited by 26 (0 self) Add to MetaCart We consider the problem of positioning data collecting base stations in a sensor network. We show that in general, the choice of positions has a marked influence on the data rate, or equivalently, the power efficiency, of the network. In our model, which is partly motivated by an experimental environmental monitoring system, the optimum data rate for a fixed layout of base stations can be found by a maximum flow algorithm. Finding the optimum layout of base stations, however, turns out to be an NP-complete problem, even in the special case of homogeneous networks. Our analysis of the optimum layout for the special case of the regular grid shows that all layouts that meet certain constraints are equally good.
We also consider two classes of random graphs, chosen to model networks that might be realistically encountered, and empirically evaluate the performance of several base station positioning algorithms on instances of these classes. In comparison to manually choosing positions along the periphery of the network or randomly choosing them within the network, the algorithms tested find positions which significantly improve the data rate and power efficiency of the - Wireless Communications and Mobile Computing , 2002 "... We present an overview of the recent progress of applying computational geometry techniques to solve some questions, such as topology construction and broadcasting, in wireless ad hoc networks. Treating each wireless device as a node in a two dimensional plane, we model the wireless networks by unit ..." Cited by 24 (2 self) Add to MetaCart We present an overview of the recent progress of applying computational geometry techniques to solve some questions, such as topology construction and broadcasting, in wireless ad hoc networks. Treating each wireless device as a node in a two dimensional plane, we model the wireless networks by unit disk graphs in which two nodes are connected if their Euclidean distance is no more than one. We rst summarize the current status of constructing sparse spanners for unit disk graphs with various combinations of the following properties: bounded stretch factor, bounded node degree, planar, and bounded total edges weight (compared with the minimum spanning tree). Instead of constructing subgraphs by removing links, we then review the algorithms for constructing a sparse backbone (connected dominating set), i.e., subgraph from the subset of nodes. We then review some ecient methods for broadcasting and multicasting with theoretic guaranteed performance. - Wireless Networks , 2006 "... In this paper we study the problem of assigning transmission ranges to the nodes of a static ad hoc wireless network so as to minimize the total power consumed under the constraint that enough power is provided to the nodes to ensure that the network is connected. We focus on the MIN-POWER SYMMETRIC ..." Cited by 19 (1 self) Add to MetaCart In this paper we study the problem of assigning transmission ranges to the nodes of a static ad hoc wireless network so as to minimize the total power consumed under the constraint that enough power is provided to the nodes to ensure that the network is connected. We focus on the MIN-POWER SYMMETRIC CONNECTIVITY problem, in which the bidirectional links established by the transmission ranges are required to form a connected graph. Implicit in previous work on transmission range assignment under asymmetric connectivity requirements is the proof that MIN-POWER SYMMETRIC CONNECTIVITY is NP-hard and that the MST algorithm has an approximation ratio of 2. In this paper we make the following contributions: (1) we show that the related MIN-POWER SYMMETRIC UNICAST problem can be solved efficiently by a shortest-path computation in an appropriately constructed graph. 
(2) we give an exact branch and cut algorithm based on a new integer linear program formulation solving instances with up to 35-40 nodes in 1 hour; (3) we establish the similarity between MIN-POWER SYMMETRIC CONNECTIVITY and the classic STEINER TREE problem in graphs, and use this similarity to give a polynomial-time approximation scheme with performance ratio approaching 5/3 as well as a more practical approximation algorithm with approximation factor 11/6; and (4) we give a comprehensive experimental study comparing new and previously proposed heuristics with the above exact and approximation algorithms. - In Proc. IEEE Wireless Communications and Networking Conference (WCNC) , 2003 "... Abstract—We study the problem of assigning transmission ranges to the nodes of ad hoc wireless networks so as to minimize power consumption while ensuring network connectivity. We give (1) an exact branch and cut algorithm based on a new integer linear program formulation solving instances with up ..." Cited by 18 (2 self) Add to MetaCart Abstract—We study the problem of assigning transmission ranges to the nodes of ad hoc wireless networks so as to minimize power consumption while ensuring network connectivity. We give (1) an exact branch and cut algorithm based on a new integer linear program formulation solving instances with up to 35-40 nodes in 1 hour; (2) a proof that MIN-POWER SYMMETRIC CONNECTIVITY WITH ASYMMETRIC POWER REQUIREMENTS is inapproximable within a (1 − ε) ln n factor for any ε > 0 unless NP ⊆ DTIME(n^{O(log log n)}); (3) an improved analysis for two approximation algorithms recently proposed by Călinescu et al. (TCS’02), decreasing the best known approximation factor to 5/3 + ε; and (4) a comprehensive experimental study comparing new and previously proposed heuristics with the above exact and approximation algorithms. - in Symposium on Computational Geometry , 2006 "... We consider a class of geometric facility location problems in which the goal is to determine a set X of disks given by their centers (t_j) and radii (r_j) that cover a given set of demand points Y ⊂ R² at the smallest possible cost. We consider cost functions of the form ∑_j f(r_j), where f(r) = ..." Cited by 15 (2 self) Add to MetaCart We consider a class of geometric facility location problems in which the goal is to determine a set X of disks given by their centers (t_j) and radii (r_j) that cover a given set of demand points Y ⊂ R² at the smallest possible cost. We consider cost functions of the form ∑_j f(r_j), where f(r) = r^α is the cost of transmission to radius r. Special cases arise for α = 1 (sum of radii) and α = 2 (total area); power consumption models in wireless network design often use an exponent α > 2. Different scenarios arise according to possible restrictions on the transmission centers t_j, which may be constrained to belong to a given discrete set or to lie on a line, etc.
We obtain several new results, including (a) exact and approximation algorithms for selecting transmission points t j on a given line in order to cover demand points Y ⊂ R 2; (b) approximation algorithms (and an algebraic intractability result) for selecting an optimal line on which to place transmission points to cover Y; (c) a proof of NP-hardness for a discrete set of transmission points in R 2 and any fixed α> 1; and (d) a polynomial-time approximation scheme for the problem of computing a minimum cost covering tour (MCCT), in which the total cost is a linear combination of the transmission cost for the set of disks and the length of a tour/path that connects the centers of the disks. ACM Classification: F.2.2 Nonnumerical Algorithms and Problems. AMS Classification: 68Q25, 68U05, 90C27.
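One of the abstracts above notes that the MST algorithm has an approximation ratio of 2 for MIN-POWER SYMMETRIC CONNECTIVITY: build a minimum spanning tree on the pairwise power costs and give every node just enough power to reach its farthest MST neighbour. Below is a small, purely illustrative sketch of that baseline, assuming the usual distance-power cost model cost(u, v) = d(u, v)^α with a hypothetical α = 2; it is not code from any of the cited papers.

```python
import math

def mst_power_assignment(points, alpha=2.0):
    """MST-based power assignment (the classical 2-approximation baseline).

    points: list of (x, y) node coordinates.
    Returns (powers, total_power), where powers[i] is the cost of the longest
    MST edge incident to node i, i.e. the transmit power assigned to it.
    """
    n = len(points)
    cost = lambda i, j: math.dist(points[i], points[j]) ** alpha

    # Prim's algorithm on the complete graph of power costs.
    in_tree = {0}
    mst_edges = []
    best = {j: (cost(0, j), 0) for j in range(1, n)}
    while len(in_tree) < n:
        j = min(best, key=lambda k: best[k][0])
        c, i = best.pop(j)
        in_tree.add(j)
        mst_edges.append((i, j, c))
        for k in best:
            if cost(j, k) < best[k][0]:
                best[k] = (cost(j, k), j)

    powers = [0.0] * n
    for i, j, c in mst_edges:
        powers[i] = max(powers[i], c)
        powers[j] = max(powers[j], c)
    return powers, sum(powers)

pts = [(0, 0), (1, 0), (1, 1), (3, 1)]
print(mst_power_assignment(pts))
```

The exact and improved approximation algorithms surveyed above all try to beat this simple baseline's total power while keeping the bidirectional links connected.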
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=16009","timestamp":"2014-04-21T07:24:38Z","content_type":null,"content_length":"41457","record_id":"<urn:uuid:bdc41776-0b0a-42a1-b1f8-312dbf87adef>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
Boson samplers offering promise for new kinds of computing devices

Image caption: The 8-cm-long silica-on-silicon photonic chip in the center of the picture served as the 4-photon QBSM. Arrays of single-mode fibers are glued to the left and right sides of the chip. For viewing purposes, a red laser is coupled into two of the single-mode fibers (right side of picture), which illuminate a portion of the on-chip interferometric network. For the boson sampling experiment, the red laser was replaced with single photon sources. There are five thermal phase-shifting elements on top of the chip, though they were not used in this experiment. This image relates to the paper by Dr. Spring and colleagues. Credit: Dr. James C. Gates

(Phys.org)—Separate teams working on boson samplers report progress in separate papers uploaded to the preprint server arXiv and in the journal Science. Each relates the progress being made in developing a quantum version of a Victorian-era Galton board, in which balls dropped across a peg board produce a binomial distribution at the bottom. While interest in creating a true quantum computer remains high, real-world results have thus far been less than promising. Because of that, some experts in the field have suggested that perhaps what's needed is a new way of looking at the problem. MIT's Scott Aaronson, for example, has suggested that rather than trying to build a quantum computer from the ground up, a better approach might be to build specialty devices that solve just one type of problem. He has suggested that a Galton board built on quantum mechanical principles should be possible. In response, several research teams around the globe have been trying to do just that. Quantum versions of the Galton board take the form of a board that uses photons instead of wooden balls, and they are named after the family of particles to which photons belong: bosons. The sampling devices work much the same as the Victorian models, except in one important way: when a photon in the sampler meets another photon, both must go left or right together, whereas on the real-world physical board each ball can go either way on its own. The result should be a device that can calculate far faster than any conventional computer. Using such a setup, one team, led by Justin Spring, has built a sampler capable of computing the permanent of a matrix. Another, led by Matthew Broome, has sent three photons through a 6-mode optical circuit. What's perhaps most compelling about the work being done by all of the teams in this area is the promise of scalability: not in making the boards or balls bigger, but in making the samplers more and more complex by adding more ways that the photons can be manipulated as they move through the device. Theoretically, doing so offers the promise of a universal computer capable of performing a limitless number of applications. Whether it will be possible to construct such complex devices in the real world, however, remains to be seen.

More information: Photonic Boson Sampling in a Tunable Circuit, Science, DOI: 10.1126/science.1231440
Quantum computers are unnecessary for exponentially efficient computation or simulation if the Extended Church-Turing thesis is correct. The thesis would be strongly contradicted by physical devices that efficiently perform tasks believed to be intractable for classical computers.
Such a task is boson sampling: sampling the output distributions of n bosons scattered by some linear-optical unitary process. Here, we test the central premise of boson sampling, experimentally verifying that 3-photon scattering amplitudes are given by the permanents of submatrices generated from a unitary describing a 6-mode integrated optical circuit. We find the protocol to be robust, working even with the unavoidable effects of photon loss, non-ideal sources, and imperfect detection. Scaling this to large numbers of photons will be a much simpler task than building a universal quantum computer.

Boson Sampling on a Photonic Chip, Science, DOI: 10.1126/science.1231692
While universal quantum computers ideally solve problems such as factoring integers exponentially more efficiently than classical machines, the formidable challenges in building such devices motivate the demonstration of simpler, problem-specific algorithms that still promise a quantum speedup. We construct a quantum boson sampling machine (QBSM) to sample the output distribution resulting from the nonclassical interference of photons in an integrated photonic circuit, a problem thought to be exponentially hard to solve classically. Unlike universal quantum computation, boson sampling merely requires indistinguishable photons, linear state evolution, and detectors. We benchmark our QBSM with three and four photons and analyze sources of sampling inaccuracy. Scaling up to larger devices could offer the first definitive quantum-enhanced computation.

Arxiv papers: arxiv.org/abs/1212.2783 and arxiv.org/abs/1212.2240

Comment (Dec 23, 2012): At the moment when the computational power of classical computers becomes limited by the uncertainty principle, quantum computers cannot bring any new progress. The problem with quantum computers is that they are fast but very approximate. Once you repeat the same calculation multiple times to achieve the same level of precision and reliability as a classical 64-bit computer, every advantage of the quantum computer simply evaporates. IMO it's just a waste of taxpayers' money, which could be used for the development of cold fusion, for example. And if you're impressed by the possibility of doing mathematical operations on multiple qubits at the same moment, then you should be warned that the stability of these entangled qubits decreases geometrically with the number of entangled states. Memo: the physical laws are binding for everyone.

Comment (Dec 24, 2012): Valeria, you still haven't answered my question. Now that Rossi's e-Cat has been shown to be nothing but a fraud, can you point to any other "working" LENR/cold fusion device that isn't also a fraud?
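Both Science abstracts above hinge on the fact that n-photon scattering amplitudes are given by permanents of n x n submatrices of the circuit's unitary, and computing permanents is believed to be classically hard. As a purely illustrative sketch (not taken from either paper), the snippet below evaluates a permanent with Ryser's inclusion-exclusion formula; its 2^n scaling is the reason even modest photon numbers become expensive to verify classically. The choice of modes in the demo is hypothetical.

```python
import itertools
import numpy as np

def permanent_ryser(A):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula, O(2^n * n^2)."""
    A = np.asarray(A)
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = A[:, cols].sum(axis=1)   # per-row sums over the chosen columns
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# Toy check: a 3-photon coincidence probability is proportional to |perm(U_S)|^2
# for the 3x3 submatrix U_S selected by the input and output modes.
rng = np.random.default_rng(0)
U = np.linalg.qr(rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6)))[0]  # random 6-mode unitary
U_S = U[np.ix_([0, 1, 2], [1, 3, 5])]  # hypothetical mode choice
print(abs(permanent_ryser(U_S)) ** 2)
```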
{"url":"http://phys.org/news/2012-12-boson-samplers-kinds-devices.html","timestamp":"2014-04-17T18:49:33Z","content_type":null,"content_length":"76795","record_id":"<urn:uuid:ec8c4228-da43-4e33-90bc-869ca5444e43>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 15 , 1990 "... Linear logic, introduced by Girard, is a refinement of classical logic with a natural, intrinsic accounting of resources. We show that unlike most other propositional (quantifier-free) logics, full propositional linear logic is undecidable. Further, we prove that without the modal storage operator, ..." Cited by 90 (17 self) Add to MetaCart Linear logic, introduced by Girard, is a refinement of classical logic with a natural, intrinsic accounting of resources. We show that unlike most other propositional (quantifier-free) logics, full propositional linear logic is undecidable. Further, we prove that without the modal storage operator, which indicates unboundedness of resources, the decision problem becomes pspace-complete. We also establish membership in np for the multiplicative fragment, np-completeness for the multiplicative fragment extended with unrestricted weakening, and undecidability for certain fragments of noncommutative propositional linear logic. 1 Introduction Linear logic, introduced by Girard [14, 18, 17], is a refinement of classical logic which may be derived from a Gentzen-style sequent calculus axiomatization of classical logic in three steps. The resulting sequent system Lincoln@CS.Stanford.EDU Department of Computer Science, Stanford University, Stanford, CA 94305, and the Computer Science Labo... , 1996 "... The multiplicative fragment of second order propositional linear logic is shown to be undecidable. Introduction Decision problems for propositional (quantifier-free) linear logic were first studied by Lincoln et al. [LMSS]. In referring to linear logic fragments, let M stand for multiplicatives, A ..." Cited by 14 (3 self) Add to MetaCart The multiplicative fragment of second order propositional linear logic is shown to be undecidable. Introduction Decision problems for propositional (quantifier-free) linear logic were first studied by Lincoln et al. [LMSS]. In referring to linear logic fragments, let M stand for multiplicatives, A for additives, E for exponentials (or modalities), 1 for first order quantifiers, 2 for second order propositional quantifiers, and I for "intuitionistic" version. In [LMSS] it was shown that full propositional linear logic is undecidable and that MALL is PSPACEcomplete. The main problems left open in [LMSS] were the NP-completeness of MLL, the decidability of MELL, and the decidability of various fragments of propositional linear logic without exponentials but extended with second order propositional quantifiers. The decision problem for MELL is still open, but almost all the other problems have been solved: ffl The NP-completeness of MLL has been obtained by Kanovich [K1]. Moreover, Linco... - Journal of Symbolic Logic , 1995 "... . Recently, Lincoln, Scedrov and Shankar showed that the multiplicative fragment of second order intuitionistic linear logic is undecidable, using an encoding of second order intuitionistic logic. Their argument applies to the multiplicative-additive fragment, but it does not work in the classical c ..." Cited by 12 (3 self) Add to MetaCart . Recently, Lincoln, Scedrov and Shankar showed that the multiplicative fragment of second order intuitionistic linear logic is undecidable, using an encoding of second order intuitionistic logic. Their argument applies to the multiplicative-additive fragment, but it does not work in the classical case, because second order classical logic is decidable. 
Here we show that the multiplicative-additive fragment of second order classical linear logic is also undecidable, using an encoding of two-counter machines originally due to Kanovich. The faithfulness of this encoding is proved by means of the phase semantics. In this paper, we write LL for the full propositional fragment of linear logic, MLL for the multiplicative fragment, MALL for the multiplicative-additive fragment, and MELL for the multiplicative-exponential fragment. Similarly, we write ILL, IMLL, etc. for the fragments of intuitionistic linear logic, LL2, MLL2, etc. for the second order fragments of linear logic, and ILL2, IML... - Proc. of the 20th International Conference on Automated Planning and Scheduling "... The utility of including loops in plans has been long recognized by the planning community. Loops in a plan help increase both its applicability and the compactness of representation. However, progress in finding such plans has been limited largely due to lack of methods for reasoning about the corr ..." Cited by 7 (7 self) Add to MetaCart The utility of including loops in plans has been long recognized by the planning community. Loops in a plan help increase both its applicability and the compactness of representation. However, progress in finding such plans has been limited largely due to lack of methods for reasoning about the correctness and safety properties of loops of actions. We present novel algorithms for determining the applicability and progress made by a general class of loops of actions. These methods can be used for directing the search for plans with loops towards greater applicability while guaranteeing termination, as well as in post-processing of computed plans to precisely characterize their applicability. Experimental results demonstrate the efficiency of these algorithms. 1. - Journal of Consciousness Studies , 1995 "... Moody is right that the doctrine of conscious inessentialism (CI) is false. Unfortunately, his zombie-based argument against (CI), once made sufficiently clear to evaluate, is revealed as nothing but legerdemain. The fact is, though Moody has---for reasons I explain---convinced himself otherwise, ce ..." Cited by 5 (4 self) Add to MetaCart Moody is right that the doctrine of conscious inessentialism (CI) is false. Unfortunately, his zombie-based argument against (CI), once made sufficiently clear to evaluate, is revealed as nothing but legerdemain. The fact is, though Moody has---for reasons I explain---convinced himself otherwise, certain zombies are impenetrable: that they are zombies, and not conscious beings like us, is something beyond the capacity of humans to divine. 1 Moody's Argument Moody's argument is imaginative, but not exactly rigorous: it's painfully difficult to identify his premises, and his inferences therefrom to the conclusion that conscious inessentialism (CI) is false. Charitable exegesis yields the following overarching reasoning: (1) If (CI) is true, then a group of zombies visiting us from a zombie-world would not bear a mark of zombiehood. (2) A group of zombies visiting us from a zombie-world would bear a mark of zombiehood. Therefore: (3) :(CI) I'm indebted to Larry Hauser for many , 2005 "... Locality Conditions (LCs) on (unbounded) dependencies have played a major role in the development of generative syntax ever since the seminal work by Ross [22]. Descriptively, they fall into two groups. On the one hand there are intervention-based LCs (ILCs) often formulated as “minimality constra ..." 
Cited by 5 (3 self) Add to MetaCart Locality Conditions (LCs) on (unbounded) dependencies have played a major role in the development of generative syntax ever since the seminal work by Ross [22]. Descriptively, they fall into two groups. On the one hand there are intervention-based LCs (ILCs) often formulated as “minimality constraints” (“minimal link condition,” “minimize chain links,”“shortest move,” “attract closest,” etc.). On the other hand there are containment-based LCs (CLCs) typically defined in terms of (generalized) grammatical functions (“adjunct island,” “subject island,” “specifier island,” etc.). Research on LCs has been dominated by two very general trends. First, attempts have been made at unifying ILCs and CLCs on the basis of notions such as “government ” and “barrier ” (e.g. [4]). Secondly, research has often been guided by the intuition that, beyond empirical coverage, LCs somehow contribute to restricting the formal capacity of grammars (cf. [3, p. 125], [6, p. 14f]). Both these issues, we are going to argue, can be fruitfully studied within the framework of minimalist , 1996 "... The multiplicative fragment of second order propositional linear logic is shown to be undecidable. Introduction Decision problems for propositional (quantifier-free) linear logic were first studied by Lincoln et al. [LMSS]. In referring to linear logic fragments, let M stand for multiplicatives, A ..." Add to MetaCart The multiplicative fragment of second order propositional linear logic is shown to be undecidable. Introduction Decision problems for propositional (quantifier-free) linear logic were first studied by Lincoln et al. [LMSS]. In referring to linear logic fragments, let M stand for multiplicatives, A for additives, E for exponentials (or modalities), 1 for first order quantifiers, 2 for second order propositional quantifiers, and I for "intuitionistic" version. In [LMSS] it was shown that full propositional linear logic is undecidable and that MALL is PSPACEcomplete. The main problems left open in [LMSS] were the NP-completeness of MLL, the decidability of MELL, and the decidability of various fragments of propositional linear logic without exponentials but extended with second order propositional quantifiers. The decision problem for MELL is still open, but almost all the other problems have been solved: ffl The NP-completeness of MLL has been obtained by Kanovich [K1]. Moreover, Linco... "... Minimalist grammars (Stabler 1997) capture some essential ideas about the basic operations of sentence construction in the Chomskyian syntactic tradition. Their affinity with the unformalized theories of working linguists makes it easier to implement and thereby to better understand the operations a ..." Add to MetaCart Minimalist grammars (Stabler 1997) capture some essential ideas about the basic operations of sentence construction in the Chomskyian syntactic tradition. Their affinity with the unformalized theories of working linguists makes it easier to implement and thereby to better understand the operations appealed to in neatly accounting for some of the regularities perceived in language. Here we characterize the expressive power of two, apparently quite different, variations on the basic minimalist grammar framework, gotten by: 1. adding a mechanism of ‘feature percolation ’ (Kobele, forthcoming), or 2. 
instead of adding a central constraint on movement (the ‘specifier island condition’, Stabler 1999), using it to replace another one (the ‘shortest move condition’, Stabler 1997, 1999) (Gärtner and Michaelis 2005). We demonstrate that both variants have equal, unbounded, computing power by showing how each can simulate straightforwardly a 2-counter automaton. , 2009 "... Abstract. The decidability of multiplicative exponential linear logic (MELL) is currently open. I show that two independently interesting refinements of MELL that alter only the syntax of proofs—leaving the underlying truth untouched— are undecidable. The first refinement uses new modal connectives ..." Add to MetaCart Abstract. The decidability of multiplicative exponential linear logic (MELL) is currently open. I show that two independently interesting refinements of MELL that alter only the syntax of proofs—leaving the underlying truth untouched— are undecidable. The first refinement uses new modal connectives between the linear and the unrestricted judgments, and the second is based on focusing with priority assignments that conforms to a staging discipline. Both refinements can adequately encode the transitions of a two-register Minsky machine. While neither refinement is weak enough to entail the undecidability of MELL, they show that no additive connectives are necessary for undecidability. 1
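Several of the entries above prove undecidability by encoding the transitions of a two-counter (Minsky, two-register) machine into the formalism at hand. For readers unfamiliar with the target model, here is a minimal, purely illustrative simulator of such a machine; the instruction encoding is an arbitrary choice made for this sketch and has nothing to do with the logical encodings used in the cited papers.

```python
# Program: list of instructions.
#   ("inc", reg, next)            increment reg, go to instruction `next`
#   ("dec", reg, next, on_zero)   if reg > 0: decrement and go to `next`; else go to `on_zero`
#   ("halt",)
def run_two_counter(program, r0=0, r1=0, max_steps=10_000):
    regs = [r0, r1]
    pc = 0
    for _ in range(max_steps):
        instr = program[pc]
        if instr[0] == "halt":
            return regs
        if instr[0] == "inc":
            _, reg, nxt = instr
            regs[reg] += 1
            pc = nxt
        else:  # "dec"
            _, reg, nxt, on_zero = instr
            if regs[reg] > 0:
                regs[reg] -= 1
                pc = nxt
            else:
                pc = on_zero
    raise RuntimeError("step budget exhausted (machine may not halt)")

# Example: move the contents of register 0 into register 1.
move = [("dec", 0, 1, 2), ("inc", 1, 0), ("halt",)]
print(run_two_counter(move, r0=3, r1=0))  # [0, 3]
```

Machines of this form can compute any partial recursive function, so a faithful encoding of their runs into a logic immediately makes provability in that logic undecidable.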
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1375396","timestamp":"2014-04-16T09:39:28Z","content_type":null,"content_length":"37580","record_id":"<urn:uuid:82c57016-bcd1-44f9-bb46-9cab5e5809a0>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
This Week in Compressive Sensing and Advanced Matrix Factorization With regards to insight of what is to come here is a list of talks worth attending or reading about: Peter Olcott^1, Ealgoo Kim^2, Garry Chinn^2 and Craig Levin^2 ^1 Bio-engineering, Stanford University, Stanford, CA ^2 Radiology, Stanford Medical School, Stanford, CA Objectives: Potential clinical silicon photomultiplier based PET systems will consist of tens of thousands of individual sensors. Compressed sensing electronics can be used to multiplex a large number of individual readout sensors to significantly reduce the number of readout channels. Methods: Using brute force optimization method, a two level sensing matrix based on a 2-weight constant weight code C1[128:32] followed by a 3 weight constant weight code C2[32:16] was designed. These codes consists of discrete resistor elements either connected or not connected to intermediate or output signals. A PET block detector PCB and electronics were fabricated that can multiplex 128 3.2 mm x 3.2 mm solid-state photomultiplier pixels arranged into a 16 x 8 array. Signals from the detector were acquired by a custom 16 channel simultaneously sampling 12-bit 65 Msps ADC acquisition system. Each of the signals was summed to form a trigger, and the peak value for each event on each channel was captured simultaneously. For calibration, we placed a single 4 x 4 array of 3.2 mm x 3.2 mm x 20 mm LYSO crystals onto one of the populated detectors and collected a uniform flood calibration dataset using a 125μCi Ge source. We used a KNN Density clustering method to calculate the centroids of the calibration flood irradiation that were mapped through the sensing matrix and captured by the 16 ADC channels. Results: All 16 crystals were clearly segmented from the 16 dimensional output data using the new KNN-density clustering method. After correcting for the gain non-uniformities of the SiPM sensor, we measured a preliminary 23.7 +/- 1.2% FWHM energy resolution at 511 keV. Conclusions: We have successfully fabricated, performed data acquisition, developed a new calibration method, and done preliminary calibration for a compressed sensing PET detector. Two other papers in the same conference are related to CS: Koon-Pong Wong^1 and Sung-Cheng Huang^1 ^1 Molecular & Medical Pharmacology, UCLA School of Medicine, Los Angeles, CA Objectives: Whole-body PET/CT imaging of patients or preclinical PET imaging of larger animals requires the image data be acquired at multiple bed positions, making it impossible to provide continuous kinetics of all body regions for standard kinetic analysis. Here, we investigated the use of compressed sensing to estimate kinetic parameters from sparse temporal data through computer simulation. Methods: Time-activity curves (TACs) of the brain, myocardium, and muscle were simulated with 4 framing protocols (40x90 s, 30x120 s, 20x180 s, and 12x300 s) using an input function (described by a 4-exponential function) and the FDG model (with a set of model parameters derived from a mouse FDG-PET study). Two bed positions (bed 1: blood pool and myocardium; bed 2: brain and muscle) were assumed and thus, every other frame of all the kinetic data were deleted. Realistic noise of variance proportional to the activity concentration and inversely proportional to the frame duration was introduced to simulate noisy blood pool and tissue TACs. 100 noise realizations were generated for each framing protocol. 
The sparsely sampled noisy blood and tissue TACs were fitted by the 4-exponential function and the FDG model simultaneously and the parameters were estimated. FDG uptake constant (Ki) in tissues was calculated and compared to the true values used to simulate the noise-free data. The procedure was repeated with the noise level doubled to evaluate the noise sensitivity. Results: Variability of the FDG model parameter estimates increased as the TACs became more sparsely sampled. Ki estimates in various tissues agreed well with the true values. Coefficient of variation (CV) of Ki estimates averaged over 4 protocols was 10±1% in brain, 8±3% in myocardium, and 9±1% in muscle. When the noise level was doubled, CV of Ki was doubled in the brain and increased by ~55% in myocardium and muscle. Conclusions: Reliable estimates of Ki can be obtained from sparsely sampled kinetics using compressed sensing, which has great potential for quantitative dynamic whole-body imaging in human and animal studies. Chia-Jui Hsieh^1, Huihua Kenny Chiang^2, Yung-Hsiang Chiu^3, Bo-Wen Xiao^3, Cheng-Wei Sun^3,Ming-Hua Yeh^3, Ming-Hua Yeh^3 and Jyh-cheng Chen^1 ^1 Department of Biomedical Imaging & Radiological Sciences, National Yang-Ming University, Taipei, Taiwan ^2 Institute of Biomedical Engineering, National Yang-Ming University, Taipei, Taiwan ^3 Display Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan Objectives: The objective of this study is to develop a new iterative algorithm for computed tomography (CT) reconstruction. This algorithm can be used in the circumstances of substantially reduced projection data, which implies to decrease X-ray exposure time and consequently reduce radiation dose, to accelerate the image reconstruction speed and maintain good image quality. Methods: In this study, we combine the compressed sensing (CS) technology with the simultaneous algebraic reconstruction technique (SART) to create a new CT reconstruction algorithm called CS-SART. The algorithm minimizes the total variation (TV) of the image that has been transformed into sparse domain to obtain the gradient direction of this image. Then, the gradient direction is used to improve the image. The reconstructed image will be obtained by following above procedures repeatedly until the stopping criteria are satisfied. Results: To validate and evaluate the performance of this CS-SART algorithm, we use Shepp-Logan phantom as the target for reconstruction with the corresponding simulated sparse projection data (angular sampling interval is 5 deg). From the results, the CS-SART algorithm can reconstruct images with relatively less artifacts compared with that obtained by traditional FBP (filtered back projection) and ART (algebraic reconstruction technique) with full-scan data (angular sampling interval is 1 deg). Compared with the reconstruction speed of existing reconstruction methods under the same image quality condition, CS-SART is also the fastest one. Conclusions: We have developed the CS-SART algorithm which can accelerate the computational speed while maintaining good image quality under the circumstances of substantially reduced projection While on arxiv we had the following preprints: (Submitted on 6 Jun 2012) A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piece-wise linear nature of the dual problem. 
The second part of the paper applies the previous result to acceleration of convex minimization problems, and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications including signal processing, sparse recovery and machine learning and classification. (Submitted on 4 Jun 2012) Signal recovery is one of the key techniques of Compressive sensing (CS). It reconstructs the original signal from the linear sub-Nyquist measurements. Classical methods exploit the sparsity in one domain to formulate the L0 norm optimization. Recent investigation shows that some signals are sparse in multiple domains. To further improve the signal reconstruction performance, we can exploit this multi-sparsity to generate a new convex programming model. The latter is formulated with multiple sparsity constraints in multiple domains and the linear measurement fitting constraint. It improves signal recovery performance by additional a priori information. Since some EMG signals exhibit sparsity both in time and frequency domains, we take them as example in numerical experiments. Results show that the newly proposed method achieves better performance for multi-sparse signals. (Submitted on 4 Jun 2012) This paper gives a precise characterization of the fundamental limits of adaptive sensing for diverse estimation and testing problems concerning sparse signals. We consider in particular the setting introduced in Haupt, Castro and Nowak (2011) and show necessary conditions on the minimum signal magnitude for both detection and estimation: if $x\in\R^n$ is a sparse vector with $s$ non-zero components then it can be reliably detected in noise provided the magnitude of the non-zero components exceeds $\sqrt{2/s}$. Furthermore, the signal support can be exactly identified provided the minimum magnitude exceeds $\sqrt{2\log s}$. Notably there is no dependence on $n$, the extrinsic signal dimension. These results show that the adaptive sensing methodologies proposed previously in the literature are essentially optimal, and cannot be substantially improved. In addition these results provide further insights on the limits of adaptive compressive sensing. (Submitted on 1 Jun 2012) Phase retrieval seeks to recover a complex signal x from the amplitude |Ax| of linear measurements. We cast the phase retrieval problem as a non-convex quadratic program over a complex phase vector and formulate a tractable relaxation similar to the classical MaxCut semidefinite program. Numerical results show the performance of this approach over three different phase retrieval problems, in comparison with greedy phase retrieval algorithms and matrix completion approaches. (Submitted on 30 May 2012) Sparse signal recovery has been dominated by the basis pursuit denoise (BPDN) problem formulation for over a decade. In this paper, we propose an algorithm that outperforms BPDN in finding sparse solutions to underdetermined linear systems of equations at no additional computational cost. Our algorithm, called WSPGL1, is a modification of the spectral projected gradient for $\ell_1$ minimization (SPGL1) algorithm in which the sequence of LASSO subproblems are replaced by a sequence of weighted LASSO subproblems with constant weights applied to a support estimate. The support estimate is derived from the data and is updated at every iteration. 
The algorithm also modifies the Pareto curve at every iteration to reflect the new weighted $\ell_1$ minimization problem that is being solved. We demonstrate through extensive simulations that the sparse recovery performance of our algorithm is superior to that of $\ell_1$ minimization and approaches the recovery performance of the iterative re-weighted $\ell_1$ (IRWL1) minimization of Candès, Wakin, and Boyd, although it does not match it in general. Moreover, our algorithm has the computational cost of a single BPDN problem. (Submitted on 30 May 2012) In this paper, we study the support recovery conditions of weighted $\ell_1$ minimization for signal reconstruction from compressed sensing measurements when multiple support estimate sets with different accuracy are available. We identify a class of signals for which the recovered vector from $\ell_1$ minimization provides an accurate support estimate. We then derive stability and robustness guarantees for the weighted $\ell_1$ minimization problem with more than one support estimate. We show that applying a smaller weight to support estimates that enjoy higher accuracy improves the recovery conditions compared with the case of a single support estimate and the case with standard, i.e., non-weighted, $\ell_1$ minimization. Our theoretical results are supported by numerical simulations on synthetic signals and real audio signals. (Submitted on 29 May 2012) Compressed sensing is a method that allows a significant reduction in the number of samples required for accurate measurements in many applications in experimental sciences and engineering. In this work, we show that compressed sensing can also be used to speed up numerical simulations. We apply compressed sensing to extract information from the real-time simulation of atomic and molecular systems, including electronic and nuclear dynamics. We find that for the calculation of vibrational and optical spectra the total propagation time, and hence the computational cost, can be reduced by approximately a factor of five. (Submitted on 28 May 2012) Sparse coding in learned dictionaries is a successful approach for signal denoising, source separation and solving inverse problems in general. A dictionary learning method adapts an initial dictionary to a particular signal class by iteratively computing an approximate factorization of a training data matrix into a dictionary and a sparse coding matrix. The learned dictionary is characterized by two properties: the coherence of the dictionary to observations of the signal class, and the self-coherence of the dictionary atoms. A high coherence to signal observations enables the sparse coding of signal observations with a small approximation error, while a low self-coherence of the atoms guarantees atom recovery and a more rapid residual error decay rate for the sparse coding algorithm. The two goals of high signal coherence and low self-coherence are typically in conflict; therefore one seeks a trade-off between them, depending on the application. We present a dictionary learning method which enables an effective control over the self-coherence of the trained dictionary, enabling a trade-off between maximizing the sparsity of codings and approximating an equi-angular tight frame. (Submitted on 6 Jun 2012) Distributed functional scalar quantization (DFSQ) theory provides optimality conditions and predicts performance of data acquisition systems in which a computation on acquired data is desired.
We address two limitations of previous works: prohibitively expensive decoder design and a restriction to sources with bounded distributions. We rigorously show that a much simpler decoder has asymptotic performance equivalent to the conditional expectation estimator previously explored, thus reducing decoder design complexity. The simpler decoder has the feature of decoupled communication and computation blocks. Moreover, we extend the DFSQ framework with the simpler decoder to acquire sources with infinite-support distributions such as Gaussian or exponential distributions. Finally, through simulation results we demonstrate that performance at moderate coding rates is well predicted by the asymptotic analysis, and we give new insight on the rate of convergence. (Submitted on 5 Jun 2012) Many models for sparse regression typically assume that the covariates are known completely, and without noise. Particularly in high-dimensional applications, this is often not the case. This paper develops efficient OMP-like algorithms to deal with precisely this setting. Our algorithms are as efficient as OMP, and improve on the best-known results for missing and noisy data in regression, both in the high-dimensional setting where we seek to recover a sparse vector from only a few measurements, and in the classical low-dimensional setting where we recover an unstructured regressor. In the high-dimensional setting, our support-recovery algorithm requires no knowledge of even the statistics of the noise. Along the way, we also obtain improved performance guarantees for OMP for the standard sparse regression problem with Gaussian noise. (A minimal OMP sketch is included at the end of this post for reference.) (Submitted on 4 Jun 2012) In this paper, we study the complex Wigner matrices $M_n=\frac{1}{\sqrt{n}}W_n$ whose eigenvalues are typically in the interval $[-2,2]$. Let $\lambda_1\leq \lambda_2\leq\cdots\leq\lambda_n$ be the ordered eigenvalues of $M_n$. Under the assumption of four matching moments with the Gaussian Unitary Ensemble (GUE), for a test function $f$ that is 4 times continuously differentiable on an open interval including $[-2,2]$, we establish central limit theorems for two types of partial linear statistics of the eigenvalues. The first type is defined with a threshold $u$ in the bulk of the Wigner semicircle law as $\mathcal{A}_n[f; u]=\sum_{l=1}^nf(\lambda_l)\mathbf{1}_{\{\lambda_l\leq u\}}$. The second one is $\mathcal{B}_n[f; k]=\sum_{l=1}^{k}f(\lambda_l)$ with positive integer $k= k_n$ such that $k/n\rightarrow y\in (0,1)$ as $n$ tends to infinity. Moreover, we derive a weak convergence result for a partial sum process constructed from $\mathcal{B}_n[f; \lfloor nt\rfloor]$. (Submitted on 2 Jun 2012) This paper is a follow-up to the author's previous paper on convex optimization. In that paper we began the process of adjusting greedy-type algorithms from nonlinear approximation for finding sparse solutions of convex optimization problems. There we modified the three greedy algorithms most popular in nonlinear approximation in Banach spaces -- the Weak Chebyshev Greedy Algorithm, the Weak Greedy Algorithm with Free Relaxation and the Weak Relaxed Greedy Algorithm -- for solving convex optimization problems. We continue to study sparse approximate solutions to convex optimization problems. It is known that in many engineering applications researchers are interested in an approximate solution of an optimization problem as a linear combination of elements from a given system of elements.
There is an increasing interest in building such sparse approximate solutions using different greedy-type algorithms. In this paper we concentrate on greedy algorithms that provide expansions, which means that the approximant at the $m$th iteration is equal to the sum of the approximant from the previous iteration (the $(m-1)$th iteration) and one element from the dictionary with an appropriate coefficient. The problem of greedy expansions of elements of a Banach space is well studied in nonlinear approximation theory. At first glance the setting of a problem of expansion of a given element and the setting of the problem of expansion in an optimization problem are very different. However, it turns out that the same technique can be used for solving both problems. We show how the technique developed in nonlinear approximation theory, in particular the greedy expansions technique, can be adjusted for finding a sparse solution of an optimization problem given by an expansion with respect to a given dictionary. (Submitted on 2 Jun 2012) Photon-limited imaging, which arises in applications such as spectral imaging, night vision, nuclear medicine, and astronomy, occurs when the number of photons collected by a sensor is small relative to the desired image resolution. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse representations for image patches. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its simplicity, PCA-flavored denoising appears to be highly competitive in very low light regimes. (Submitted on 1 Jun 2012) We consider the problem of designing optimal $M \times N$ ($M \leq N$) sensing matrices which minimize the maximum condition number of all the submatrices of $K$ columns. Such matrices minimize the worst-case estimation errors when only $K$ sensors out of $N$ sensors are available for sensing at a given time. For M=2 and matrices with unit-normed columns, this problem is equivalent to the problem of maximizing the minimum singular value among all the submatrices of $K$ columns. For M=2, we are able to give a closed-form formula for the condition number of the submatrices. When M=2 and K=3, for an arbitrary $N\geq3$, we derive the optimal matrices which minimize the maximum condition number of all the submatrices of $K$ columns. Surprisingly, a uniformly distributed design is often \emph{not} the optimal design minimizing the maximum condition number. (Submitted on 1 Jun 2012) Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, the use of similar sparsity constraints in the Bayesian framework for inverse problems, by encoding them in the prior distribution, has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity promoting inversion.
A practical obstacle for these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion: accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. The most commonly applied Markov chain Monte Carlo (MCMC) sampling algorithms for this purpose are Metropolis-Hastings (MH) schemes. However, we demonstrate in this article that for sparse priors relying on L1-norms, their efficiency dramatically decreases when the level of sparsity or the dimension of the unknowns is increased. Practically, Bayesian inversion for L1-type priors using these samplers is not feasible at all. We therefore develop a sampling algorithm that relies on single component Gibbs sampling. We show that the efficiency of our Gibbs sampler even increases when the level of sparsity or the dimension of the unknowns is increased. This property not only distinguishes it from the MH schemes but also challenges common beliefs about MCMC sampling. (Submitted on 24 May 2012) Transform Invariant Low-rank Textures (TILT) is a novel and powerful tool that can effectively rectify a rich class of low-rank textures in 3D scenes from 2D images despite significant deformation and corruption. The existing algorithm for solving TILT is based on the alternating direction method (ADM). It suffers from high computational cost and is not theoretically guaranteed to converge to a correct solution. In this paper, we propose a novel algorithm to speed up solving TILT, with guaranteed convergence. Our method is based on the recently proposed linearized alternating direction method with adaptive penalty (LADMAP). To further reduce computation, warm starts are also introduced to initialize the variables better and cut the cost on singular value decomposition. Extensive experimental results on both synthetic and real data demonstrate that this new algorithm works much more efficiently and robustly than the existing algorithm. It could be at least five times faster than the previous method. (Submitted on 10 May 2012, last revised 18 May 2012 (this version, v2)) Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. First, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e., the sample complexity of tomography decreases with the rank. Second, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. We give a new theoretical analysis of compressed tomography, based on the restricted isometry property (RIP) for low-rank matrices. Using these tools, we obtain near-optimal error bounds, for the realistic situation where the data contains noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements.
Using numerical simulations, we compare the performance of two compressed sensing estimators with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher-fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and we describe a method for compressed quantum process tomography that works for processes with small Kraus rank. (Submitted on 7 Jun 2012) We reintroduce an M-estimator that was implicitly discussed by Tyler in 1987, to robustly recover the underlying linear model from a data set contaminated by outliers. We prove that the objective function of this estimator is geodesically convex on the manifold of all positive definite matrices, and propose a fast algorithm that obtains its unique minimum. Moreover, we prove that when inliers (i.e., points that are not outliers) are sampled from a subspace and the percentage of outliers is bounded by some number, then under some very weak assumptions this algorithm can recover the underlying subspace exactly. We also show that our algorithm compares favorably with other convex algorithms of robust PCA empirically. (Submitted on 6 Jun 2012) This paper describes a new approach for computing nonnegative matrix factorizations (NMFs) with linear programming. The key idea is a data-driven model for the factorization, in which the most salient features in the data are used to express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C such that X is approximately equal to CX and C satisfies some linear constraints. The matrix C selects features, which are then used to compute a low-rank NMF of X. A theoretical analysis demonstrates that this approach has the same type of guarantees as the recent NMF algorithm of Arora et al. (2012). In contrast with this earlier work, the proposed method (1) has better noise tolerance, (2) extends to more general noise models, and (3) leads to efficient, scalable algorithms. Experiments with synthetic and real datasets provide evidence that the new approach is also superior in practice. An optimized C++ implementation of the new algorithm can factor a multi-gigabyte matrix in a matter of minutes. (Submitted on 6 Jun 2012) Chandrasekaran, Parrilo and Willsky (2010) proposed a convex optimization problem to characterize graphical model selection in the presence of unobserved variables. This convex optimization problem aims to estimate an inverse covariance matrix that can be decomposed into a sparse matrix minus a low-rank matrix from sample data. Solving this convex optimization problem is very challenging, especially for large problems. In this paper, we propose a novel alternating direction method of multipliers (ADMM) for solving this problem. The classical ADMM does not apply to this problem because the problem has three blocks, for which there is currently no convergence guarantee. Our method is a variant of the classical ADMM but only consists of two blocks, and one of the subproblems is solved inexactly. Our method exploits and takes advantage of the special structure of the problem and thus can solve large problems very efficiently. A global convergence result is established for our proposed method.
Numerical results on both synthetic data and gene expression data show that our method usually solves problems with one million variables in one to two minutes, and is usually five to thirty-five times faster than a state-of-the-art Newton-CG proximal point algorithm. (Submitted on 2 Jun 2012) We study the problem of estimating multiple predictive functions from a dictionary of basis functions in the nonparametric regression setting. Our estimation scheme assumes that each predictive function can be estimated in the form of a linear combination of the basis functions. By assuming that the coefficient matrix admits a sparse low-rank structure, we formulate the function estimation problem as a convex program regularized by the trace norm and the $\ell_1$-norm simultaneously. We propose to solve the convex program using the accelerated gradient (AG) method and the alternating direction method of multipliers (ADMM) respectively; we also develop efficient algorithms to solve the key components in both AG and ADMM. In addition, we conduct theoretical analysis on the proposed function estimation scheme: we derive a key property of the optimal solution to the convex program; based on an assumption on the basis functions, we establish a performance bound of the proposed function estimation scheme (via the composite regularization). Simulation studies demonstrate the effectiveness and efficiency of the proposed algorithms. In this paper, a novel multi-target sparse localization (SL) algorithm based on compressive sampling (CS) is proposed. Different from the existing literature for target counting and localization, where signal/received-signal-strength (RSS) readings at different access points (APs) are used separately, we propose to reformulate the SL problem so that we can make use of the cross-correlations of the signal readings at different APs. We analytically show that this new framework can provide a considerable amount of extra information compared to classical SL algorithms. We further highlight that in some cases this extra information converts the under-determined problem of SL into an over-determined problem for which we can use ordinary least-squares (LS) to efficiently recover the target vector even if it is not sparse. Our simulation results illustrate that compared to classical SL this extra information leads to a considerable improvement in terms of the number of localizable targets as well as localization accuracy. Compressive sensing is an emerging area which uses a relatively small number of non-traditional samples in the form of randomized projections to reconstruct sparse or compressible signals. This study considers the carrier frequency offset estimation problem for interleaved orthogonal frequency-division multiple-access (OFDMA) uplink systems. A new carrier frequency offset estimation method based on compressive sensing theory is proposed to estimate the carrier frequency offsets in interleaved OFDMA uplink systems. The presented method can effectively estimate all carrier frequency offsets of the active users by finding the sparsest coefficients. Simulation results are presented to verify the efficiency of the proposed approach.
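Several of the abstracts above lean on the same basic sparse-recovery machinery, so a concrete reference point may help. The following is a minimal, textbook-style Orthogonal Matching Pursuit (OMP) sketch in Python; it is not the algorithm of any particular preprint listed here, and the problem sizes, the random dictionary, and the function name are illustrative assumptions only.

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Orthogonal Matching Pursuit: greedily build a k-sparse estimate x with y ~ A x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit y on all selected columns by least squares.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coeffs
    return x

# Tiny demo: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(100)
x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]
y = A @ x_true
x_hat = omp(A, y, k=3)
print(np.nonzero(x_hat)[0], np.round(x_hat[np.nonzero(x_hat)], 3))
```

The defining design choice of OMP, visible in the loop, is that after each new atom is selected the coefficients are re-fit by least squares over the whole current support, which is what distinguishes it from plain matching pursuit.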
Secant: Introduction to the Secant Function (subsection Sec/05)

The best-known properties and formulas for the secant function

Values in points
Using the connection between the cosine and secant functions, $\sec(z)=1/\cos(z)$, gives the values of the secant function for angles between 0 and $2\pi$; for example, $\sec(0)=1$, $\sec(\pi/6)=2/\sqrt{3}$, $\sec(\pi/4)=\sqrt{2}$, $\sec(\pi/3)=2$, $\sec(2\pi/3)=-2$, $\sec(\pi)=-1$, and $\sec(2\pi)=1$, while $\sec(\pi/2)$ and $\sec(3\pi/2)$ are undefined (poles).

General characteristics
For real values of the argument $z$, the values of $\sec(z)$ are real. At the points $z=\pi p/q$ with rational $p/q$, the values of $\sec(z)$ are algebraic; in several cases they are the integers $-2$, $-1$, 1, or 2. The values of $\sec(\pi p/q)$ can be expressed using only square roots if $q$ is a product of a power of 2 and distinct Fermat primes {3, 5, 17, 257, ...}.

The function $\sec(z)$ is an analytical function of $z$ that is defined over the whole complex $z$-plane and does not have branch cuts or branch points. It has an infinite set of singular points: (a) the points $z=\pi/2+\pi k$, $k\in\mathbb{Z}$, are simple poles with residues $(-1)^{k+1}$; (b) $z=\infty$ is an essential singular point.

It is a periodic function with real period $2\pi$: $\sec(z+2\pi)=\sec(z)$.

The function $\sec(z)$ is an even function with mirror symmetry: $\sec(-z)=\sec(z)$ and $\sec(\bar z)=\overline{\sec(z)}$.

The first derivative of $\sec(z)$ has simple representations using either the $\tan$ function or the $\sin$ function: $\frac{d}{dz}\sec(z)=\tan(z)\sec(z)=\frac{\sin(z)}{\cos^2(z)}$.

The $n$th derivative of $\sec(z)$ has much more complicated representations than the symbolic derivatives of $\sin(z)$ and $\cos(z)$; the general formula involves the Kronecker delta symbol $\delta_{k,j}$, defined by $\delta_{k,j}=1$ for $k=j$ and $\delta_{k,j}=0$ for $k\neq j$.

Ordinary differential equation
The function $w(z)=\sec(z)$ satisfies the following first-order nonlinear differential equation: $w'(z)^2=w(z)^2\,\big(w(z)^2-1\big)$.

Series representation
The function $\sec(z)$ has the following series expansion at the origin, converging for all finite values of $z$ with $|z|<\pi/2$:
$\sec(z)=\sum_{k=0}^{\infty}\frac{(-1)^k E_{2k}}{(2k)!}\,z^{2k}=1+\frac{z^2}{2}+\frac{5z^4}{24}+\frac{61z^6}{720}+\cdots$,
where the $E_{2k}$ are the Euler numbers. The secant function can also be presented using other kinds of series, for example the partial-fraction (Mittag-Leffler) expansion
$\sec(z)=4\pi\sum_{k=0}^{\infty}\frac{(-1)^k (2k+1)}{(2k+1)^2\pi^2-4z^2}$.

Integral representation
The function $\sec(z)$ has a well-known integral representation through the following definite integral along the positive part of the real axis:
$\sec(z)=\frac{2}{\pi}\int_0^{\infty}\frac{t^{2z/\pi}}{t^2+1}\,dt$, valid for $|\mathrm{Re}(z)|<\pi/2$.

Product representation
The famous infinite product representation for the cosine can be easily rewritten as the following product representation for the secant function:
$\sec(z)=\prod_{k=1}^{\infty}\left(1-\frac{4z^2}{\pi^2(2k-1)^2}\right)^{-1}$.

Limit representation
A limit representation of the secant function follows, for example, by truncating the preceding infinite product: $\sec(z)=\lim_{n\to\infty}\prod_{k=1}^{n}\left(1-\frac{4z^2}{\pi^2(2k-1)^2}\right)^{-1}$.

Indefinite integration
Indefinite integrals of expressions involving the secant function can sometimes be expressed using elementary functions. However, special functions are frequently needed to express the results even when the integrands have a simple form (if they can be evaluated in closed form). Two elementary examples are
$\int \sec(z)\,dz=\log\!\big(\tan(\tfrac{z}{2}+\tfrac{\pi}{4})\big)+C=\log\big(\sec(z)+\tan(z)\big)+C$ and $\int \sec^2(z)\,dz=\tan(z)+C$.

Definite integration
Definite integrals that contain the secant function are sometimes simple and their values can be expressed through elementary functions; one example is $\int_0^{\pi/3}\sec(t)\,dt=\log(2+\sqrt{3})$. Some special functions are needed to evaluate more complicated definite integrals; for example, polygamma and gamma functions and the Catalan constant appear in such evaluations.

Finite summation
Certain finite sums that contain the secant function have simple closed-form values.

Infinite summation
Taking the limit of such a finite-sum formula yields the value of a corresponding infinite sum.

Finite products
Certain finite products of secants can be represented through the cosecant function.

Infinite products
The corresponding infinite products of secants can likewise be represented through the cosecant function.

Addition formulas
The secants of a sum and a difference can be represented by the following formulas, derived from the cosines of a sum and a difference:
$\sec(a+b)=\frac{\sec(a)\sec(b)}{1-\tan(a)\tan(b)}$, $\sec(a-b)=\frac{\sec(a)\sec(b)}{1+\tan(a)\tan(b)}$.

Multiple arguments
In the case of multiple arguments $2z$, $3z$, ..., the function $\sec(nz)$ can be represented as a rational function involving powers of a secant. Here are two examples:
$\sec(2z)=\frac{\sec^2(z)}{2-\sec^2(z)}$, $\sec(3z)=\frac{\sec^3(z)}{4-3\sec^2(z)}$.

Half-angle formulas
The secant of a half-angle can be represented by the following simple formula, valid in a vertical strip:
$\sec\!\big(\tfrac{z}{2}\big)=\sqrt{2}\,\sqrt{\frac{\sec(z)}{\sec(z)+1}}$.
To make this formula correct for all complex $z$, a complicated prefactor is needed; that prefactor contains the unit step, real part, imaginary part, and floor functions.

Sums of two direct functions
The sum and difference of two secant functions can be described by the following formulas:
$\sec(a)+\sec(b)=\frac{2\cos\frac{a+b}{2}\cos\frac{a-b}{2}}{\cos(a)\cos(b)}$, $\sec(a)-\sec(b)=\frac{2\sin\frac{a+b}{2}\sin\frac{a-b}{2}}{\cos(a)\cos(b)}$.

Products involving the direct function
The product of two secants and the product of a secant and a cosecant have the following representations:
$\sec(a)\sec(b)=\frac{2}{\cos(a-b)+\cos(a+b)}$, $\sec(a)\csc(b)=\frac{2}{\sin(a+b)-\sin(a-b)}$.

One of the best-known inequalities for the secant function is $|\sec(x)|\ge 1$ for all real $x$ in its domain.

Relations with its inverse function
There are simple relations between the function $\sec(z)$ and its inverse function $\mathrm{arcsec}(z)$:
$\sec(\mathrm{arcsec}(z))=z$ and $\mathrm{arcsec}(\sec(z))=z$; the second formula is valid at least in the vertical strip $0<\mathrm{Re}(z)<\pi$. Outside of this strip a much more complicated relation (containing the unit step, real part, and floor functions) holds.

Representations through other trigonometric functions
Secant and cosecant functions are connected by a very simple formula that contains a linear function in the argument: $\sec(z)=\csc\!\big(z+\tfrac{\pi}{2}\big)$. The secant function can also be represented using other trigonometric functions, for example
$\sec(z)=\frac{1}{\cos(z)}=\csc\!\big(\tfrac{\pi}{2}-z\big)=\frac{1+\tan^2(z/2)}{1-\tan^2(z/2)}$.

Representations through hyperbolic functions
The secant function has representations using the hyperbolic functions: $\sec(z)=\mathrm{sech}(iz)$ and $\sec(iz)=\mathrm{sech}(z)$.

The secant function is used throughout mathematics, the exact sciences, and engineering.
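As a quick numerical sanity check of a few of the identities listed above, here is a small Python sketch; the helper name `sec` and the sample point are arbitrary choices, and only a real argument inside the principal strip is exercised.

```python
import math

def sec(z):
    return 1.0 / math.cos(z)

z = 0.3

# Series check: sec(z) ~ 1 + z^2/2 + 5 z^4/24 + 61 z^6/720 for small |z| < pi/2.
series = 1 + z**2/2 + 5*z**4/24 + 61*z**6/720
print(sec(z), series)                              # both ~ 1.04675

# Multiple-argument check: sec(2z) = sec(z)^2 / (2 - sec(z)^2).
print(abs(sec(2*z) - sec(z)**2 / (2 - sec(z)**2)))  # ~ 0 up to rounding

# Half-angle check (real z in the principal strip): sec(z/2) = sqrt(2*sec(z)/(sec(z)+1)).
print(abs(sec(z/2) - math.sqrt(2*sec(z) / (sec(z) + 1))))  # ~ 0 up to rounding
```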
Teacher: Common Core Harms My Title I Students One of the unsettled questions about the Common Core standards is whether they will widen or narrow the achievement gaps between children of different races and different income levels. In their first trial in Kentucky, the gap grew larger, and scores fell across the board. Some see this effect as a temporary adjustment to higher standards. Some suspect that it is intended to induce panic among parents about public education. Some see it as an opportunity for entrepreneurs to sell more stuff to schools. This teacher read Stephen Krashen’s post last night about the Common Core and offered the following comments. “From a teacher who has spent this year implementing CC I can tell everyone it has been a nightmare of epic proportion. “We were already a standards based Title 1 school with great success over the past 4 years, and these past 5 months have left my students months behind. I am a great teacher, building the relationships necessary in a Title 1 school for students to learn. I have always posted 90% and higher pass rates on the state test (not that I give any heed to those numbers – even though my job now depends on them), but I will be shocked if I hit 70% this year following this CC crap. “The design and implementation has left my Title 1 students feeling like failures. There is no “leveling of the playing field.” If I am to salvage something from this year I will have to risk my job and fix what CC has done for my students, essentially nothing. “There was zero thought given to low income students, how they think or how they learn. You cannot build EVERYTHING on previous learning. Anyone who teaches Title 1 will tell you it does not work that way. The achievement gap widens, and will become irreparable in just a few years of CC. “I sit here over my Christmas Break trying to figure out how to implement CC for the next 5 months and still catch my kids up to level. CC is not about teaching. It is about the creation of two separate educational systems, one for the haves, and one for the have nots. Sadly for my students, and more than 50% of the children in the South, they have not and CC is not helping.” 26 Comments 1. What a complete and total disaster coming our way and there apparently is no way to stop this train wreck or is there? What can we do? □ Keep speaking up. Be fearless. Have the facts. Facts matter. In the end, knowledge and facts conquer false narratives. The big lie works until enough people see it for what it is. Keep educating the public. 2. Finally, the Big$ EdReformers are achieving their goal: the last hardworking kids and teachers are scoring so low that the TFAtypes will swoop in and save the day with charter schools and Poverty Pays$$$$$! What noble achievement!? Shame on you! 3. I am a teacher at a Title I school and I disagree with your claim that CC will widen the achievement gap. CC is good for ALL students, especially in math. It has the perfect balance of procedural fluency and conceptual understanding. I would love to know what has made you feel this way. □ It’s not good for my learning-disabled son, who missed a lot of content in the jump to CC and is completely lost. And he’s not the only one. The insistence that there be no remedial classes leaves a lot of students falling into an enormous chasm. I’m starting to worry that my son won’t be able to graduate from high school because of this. ☆ Has your district made any attempts to cover the gaps in concepts?
Our scope and sequence was written by teachers like me. We knew that this would be a difficult year, but we added those gaps into the scope and sequence for this year. ☆ The teachers have tried to cover the gaps. The District’s answer to the problem was “hand them a calculator.” My son can’t figure out how to set up the problem in the first place – a calculator does not help with that! 4. This teacher’s story is a cautionary tale for districts in how they implement and assess the Common Core. To drop students into a group of standards in, say, 10th grade, without the prior 10 years of preparation does not seem wise. Yet, we can ill afford to grow from kindergarten up. There needs to be support for teachers to implement the new standards. At the same time, we should all remember that these standards resulted from a hodge podge of state standards that were widely disparate in rigor. Standardized tests will only compound the stress for both teachers and students. Where states have implemented the CCSS-aligned tests, a 30 point drop was seen in average score results. This is more a failure of how we use tests than of the standards. The standards have authentic assessments built in, and allow for reteaching to help students get concepts. This is authentic learning. I urge you, Diane, to pick up the drum beat issued by Montgomery County Maryland’s superintendent for a three-year moratorium on testing. Give the standards, districts, states and teachers time to support our students. 5. We have been implementing the common core in my Title I school in NYC, and I agree, it is a disaster. It has been mind numbing and has taken all of the creativity and joy out of learning. I’m sure it will be a major boon to everyone who will be cashing in; forget about the children. 6. This post is misleading. The teacher has not given any specific reasons why CC doesn’t work with her students other than that the standards “build on previous learning”. This headline and vague indictment is yet another example of teachers whining that brown or poor kids can’t learn. They can and they are. □ If there is a whiner here perhaps that whiner is you. “Brown and poor kids can’t learn”??? I have never seen that written or expressed here. Does poverty often factor into the learning curve? Absolutely, but those students can and do learn despite the many daily obstacles they may face. I think that testing simply for growth, without the success percentile being mentioned, is the way to go for all students, especially those at the elementary level. Did Michael show growth from last year through this year? Great, he’s on his way, and he can feel good about what he has achieved rather than knowing he isn’t in the highest achievers group. 7. In my school, we are told to go slowly, build the skills, no rush, master the skills. When I talk to other teachers across the country teaching with the CCSS, I realize how far behind we are. No one is doing the same thing. We have no idea what the PARCC assessment will look like. Our Math series is a joke and we’re told we won’t have one next year. Great help in implementation. □ This will be the cluster%#%# of all cluster%##%s and guess who they will blame? The lowly teacher in the trenches, who else? 8. My two cents 9. As another post elsewhere stated: if the common core, PARCC and SMART, performance pay, and test-driven teacher evaluations (VAMS and SGP) were such good ideas, wouldn’t schools like Sidwell Friends and other schools for children of advantage ADOPT them?
Good enough for the rabble and “other people’s children” though. □ Bingo. For that matter, I think Obama and Duncan themselves (and all the rheephormers) should sit for whatever standardized tests they think other people’s kids should take, and their scores should be published. ☆ Fundraiser? I’d pay to see Arnie take some high school exit exams. 10. I think it’s important to clarify a difference between indicators of success/failure and causes of success/failure. CC may be both, but regarding the idea that assessments will reflect lower levels of achievement because of higher standards, that’s not creating an achievement gap – simply differentially assessing student achievement. The absolute level of learning is not higher or lower before or after CC in this dimension – it simply appears different. I can see problems with motivation and self-concept related to lower scores, but the reality is that many struggling kids will already have to deal with this issue with any assessment, thus making the issue not about CC, but about how to mitigate the negative effects of low achievement scores, even when those scores are accurate. 11. “The design and implementation has left my Title 1 students feeling like failures.” This is the real crime of “reform”. We must stand up for our students and shield them. This, to me, is an insidious form of violence. 12. This is my second year teaching with the CC for math. I am a 4th grade math/science teacher. My kids love it because we learn in an authentic learning environment and I love teaching it. You can’t teach from the textbook and expect to be successful with it. It is about using the standards to teach. Every day in math we look at the standard and how we use it in real-life math applications. My class is taught from a problem solving approach. It is also about combining skills within the problem solving. I have found that you really can’t just teach the standards in sequential order. It was definitely a transition, but a well needed one. I like the idea of a national curriculum where a kid who moves into my class from another state has been learning the same skills and concepts being taught in my classroom. As for the assessments, they will be problem solving based and you can see an example on their website: http://www.parcconline.org/samples/mathematics/grade-4-mathematics. We are teaching our kids that math is not about simple drill and skill or solving a problem from a textbook. It is about applying those skills within the context of something that matters. In the long run, our kids will become better math students. There are a ton of great common core resources on the web and great resources on Teachers Pay Teachers to help you teach the standards. 13. As usual these brilliant thinkers forgot that when you force the joining of two different systems you must plan for the integration so that it does not cause harm. Guess they forgot that rule here. Now, once again, blame the teacher for the administrators’ mistake, purposeful or whatever. The whole game of deflection now is to blame those not responsible, such as teachers, and not administrators. This was one more case of administrators forcing their problems of failure on those below, in deflection away from themselves, so that they can continue their destruction.
Why do you think Gates gave up on small schools after 10 or so years, and then it was teacher failure, not administrators? Administrators spend the money, determine curriculum and how that curriculum will be taught. If one year it is this and the next year it is that, it’s your problem, not mine; I just tell you what to do and you do it. This is spin to paint the victim as the aggressor. This is what Gates, Broad, Walton, HP and the rest are doing now in their takeover of public schools in the U.S., with the assistance of Obama and Duncan, for profit and control of young minds and the future. This is what is really happening. It is for all the cookies right now in your face. There is no other reason for what is now going on. 14. I am also a teacher in a Title I school and absolutely love the math principles for my 3rd graders. I am actually doing my action research on ways to promote investigative teaching strategies to help develop my students’ reasoning, because our math curriculum in years past has been awful. I am also a first-year teacher, so unlike some veteran teachers it’s all new to me. I think the fact that it promotes thinking is great. I have seen a lot of growth already this school year in their strategic thinking. 15. I agree with Kaye that we shouldn’t blame the curricula if what is at fault is the implementation and assessment (and follow-up). Separating these issues might help us all. Also, it sounds like you believe that the CC is good for the other 50% but not for low-income students? What then would you do for that other 50%? Do you believe that each group should have different curricula? 16. Also, I would not choose a curriculum because it widens or closes the gap. To judge the curriculum on that means we’d come up with something that was so easy, everyone makes 100 and already knows it all. Then we have no gap but no learning either. I don’t think that is a good criterion. 17. I don’t think the common core math standards are good for most kids, not just the Title I students. While they are certainly more focused than the previous NCTM-inspired state standards, which were a horrifying hodge-podge of material, they still basically put the intellectual cart before the horse. They pay lip service to actually practicing standard algorithms. Seriously, students don’t have to be fluent in addition and subtraction with the standard algorithms until 4th grade? I teach high school math. I took a break to work in the private sector from 2002 to 2009. Since my return, I have been stunned by my students’ lack of basic skills. How can I teach algebra 2 students about rational expressions when they can’t even deal with fractions with numbers? Please don’t tell me this is a result of the rote learning that goes on in grade- and middle-school math classes, because I’m pretty sure that’s not what is happening at all. If that were true, I would have a room full of students who could divide fractions. But for some reason, most of them can’t, and don’t even know where to start. I find it fascinating that students who have been looking at fractions from 3rd grade through 8th grade still can’t actually do anything with them. Yet I can ask adults over 35 how to add fractions and most can tell me. And do it. And I’m fairly certain they get the concept. There is something to be said for “traditional” methods and curriculum when looked at from this perspective. Grade schools have been using Everyday Math and other incarnations for a good 5 to 10 years now, even more in some parts of the country.
These are kids who have been taught the concept way before the algorithm, which is basically what the Common Core seems to promote. I have a 4th grade son who attends a school using Everyday Math. Luckily, he’s sharp enough to overcome the deficits inherent in the program. When asked to convert 568 inches to feet, he told me he needed to divide by 12, since he had to split the 568 into groups of 12. Yippee. He gets the concept. So I said to him, well, do it already! He explained that he couldn’t, since he only knew up to 12 times 12. But he did, after 7 agonizing minutes of developing his own iterated-subtraction-while-tallying system, tell me that 568 inches was 47 feet, 4 inches. Well, he got it right. But to be honest, I was mad; he could’ve done in a minute what ended up taking 7. And he already got the concept, since he knew he had to divide; he just needed to know how to actually do it. From my reading of the common core, that’s a great story. I can’t say I feel the same. If Everyday Math and similar programs are what is in store for implementing the common core standards for math, then I think we will continue to see an increase in remedial math instruction in high schools and colleges. Or at least an increase in the clientele of the private tutoring centers, which do teach basic math skills.
Kristin Shaw
I am a postdoctoral fellow at the University of Toronto and I work in applications of tropical geometry to real, complex and symplectic geometries.
Department of Mathematics, University of Toronto, Bahen Centre, 40 St. George St., Room 6290, Toronto, Ontario, CANADA M5S 2E4
shawkm(at)math(dot)toronto(dot)edu
Here is my cv
Publications
Tropicalizations of del Pezzo surfaces. with Q. Ren and B. Sturmfels (submitted).
A bit of tropical geometry. with E. Brugallé. American Mathematical Monthly (forthcoming). Translation of "Un peu de géométrie tropicale" with updated sections and references.
Obstructions to approximating tropical curves in surfaces via intersection theory. with E. Brugallé. Canadian Journal of Mathematics (forthcoming).
Tropical (1, 1)-homology for floor decomposed surfaces. "Algebraic and combinatorial aspects of tropical geometry", Contemporary Mathematics, Eds: E. Brugallé, M.A. Cueto, A. Dickenstein, E.M. Feichtner and I. Itenberg, 589:529-550, (2013).
A tropical intersection product on matroidal fans. SIAM J. Discrete Math. 27(1), 459-491 (2013).
Local obstructions to approximating tropical curves in surfaces. Oberwolfach Report 20/2011, European Mathematical Society Publishing House, 1144-1146, (2011).
Multiplicity free expansions of Schur P-functions. with Stephanie Van Willigenburg, Ann. Comb. 11:69--77 (2007).
Coincidences among skew Schur functions. with Victor Reiner and Stephanie Van Willigenburg, Adv. Math. 216:118--152 (2007).
Here is a copy of my thesis.
Teaching
Classical Geometries - MAT402 (Spring 2013, Fall 2013, Spring 2014)
Linear Algebra I - MAT223 (Fall 2012)
Introduction to Number Theory - MAT315HS (Spring 2012)
BFCA... is used to carry the density of the incoming fluid at the boundary of a BFC case into the subroutine GXBFC. In a two-phase case it represents the (phase 1 density) * (volume fraction) product at the inlet. The corresponding phase 2 value is carried by RSG28. If DEN1 (and DEN2 for two-phase cases) is STOREd, then inlet densities for individual patches can be specified by INIT statements for DEN1 (or DEN2).

---- Autoplot Help ----

BL[B]n [i] [j] Plots data elements i - j using blobs of type n. If i & j are omitted, all elements in memory but not on the screen will be plotted. 'n' is an integer in the range 1 - 5, as follows: BLB1 = circle; BLB2 = square; BLB3 = diamond; BLB4 = '+'; BLB5 = 'x'. If n is omitted, 1 is assumed. See also HELP on :

Both 'log-law' and 'power-law' velocity profiles can be set; and profiles are also provided for k, the kinetic energy of turbulence, and eps, the volumetric dissipation rate of k.

-------------------------------------- Photon Help ----

BL[ock]....defines blocked regions for the plot, in which no contours or vectors will be plotted. PHOTON prompts for the number of blocked regions, and for the extent of the region in the X, Y and Z directions. Up to 10 blocked regions may be specified, and subsequent use of the block command will clear all previously defined regions.

BLOK....to employ the arbitrarily-adjustable block-correction feature of the equation solver, set STORE(BLOK) in the Q1 file. Values ascribed to BLOK indicate which cells are to be associated with which block. Set FIINIT(BLOK)=1.0 and INIADD=F; blocks then defined by PATCH/INIT commands should be given consecutive integer values. The marker BLOK should be used to identify sections of the solution domain where the values of any solved-for variable (or the coefficients in the finite-volume equations) are expected to differ greatly from elsewhere in the domain. For example, in a conjugate heat-transfer problem, it would be appropriate to identify each solid component as a separate block. The block-corrections, made for a selected variable every ISOLBK iterations, cause large-scale influences on the values of the variable solved in this way to be more rapidly transmitted to all parts of the domain, by grouping the cells in each block together as if to form a coarser grid. In this way, the speed of convergence may be improved for certain types of problem, particularly those in which different areas of the domain may have widely differing material properties. Use of this feature requires IVARBK to be set equal to the index of the variable which is to be solved by the block-correction method, or to -1 when it is to be used for all variables. The nature of the block-correction feature is this:
• If the number of blocks created is NBLOK, it solves NBLOK simultaneous linear algebraic equations by a direct matrix-inversion method.
• The NBLOK unknowns in these equations are the values of the corrections which, if applied uniformly to all cells in the individual block of cells, would make the nett residuals for the blocks vanish.
• The coefficients in the NBLOK equations are derived from the equations for the individual cells by summation, in which process the cell-to-cell links within the block cancel each other out.
• Calls to the block-correction feature and to the standard cell-wise solver are interspersed within the main iteration loop.
• The corrections are applied as soon as they have been calculated, and the individual residuals of the cells within them are then re-calculated.
See also the entry on
The block-correction feature is illustrated in library cases 100 and 459 to 467.

--- Command; defaults F; group 1 ---

BOOLEAN....command to declare up to 50 PIL logical variables. For example: BOOLEAN(LOG1,LOG2,LOG3,LOG4) makes LOG1, LOG2, LOG3 and LOG4 recognised as local working logical variables. Any name of not more than 6 characters can be used, eg: BOOLEAN(LOGVAR). Variables are assigned by the statement: LOG1= <logical expression>
Permitted simple logical expressions are:
- T or F for TRUE or FALSE
- <logical variables>
- <numeric expression><operator><numeric expression>
where implicit FLOATING is performed on Integer values and all FORTRAN operators are valid. Simple logical expressions can be combined with the logical operators .AND. , .OR. , and .NOT. to create arbitrarily complex logical expressions. There are two limitations:
1. There is no precedence defined; and, in the absence of brackets, evaluation is carried out from left to right. It is therefore recommended that brackets are used to remove potential ambiguity from complex logical expressions.
2. A .NOT. operator must not immediately follow an .AND. or .OR. without an intervening bracket, eg CARTES.OR..NOT.NONORT is illegal and must be re-written as CARTES.OR.(.NOT.NONORT).
In interactive work, the current set of user-declared logical variables, and the values assigned to them, may be displayed by entering the command SEE L. The default provision of up to 50 variables can be enlarged by re-dimensioning in the MAIN program of the SATELLITE. See

----------- PIL real; group 13 -----------

BUOYA... is used by GXBUOY to carry the gravitational acceleration in the x-direction. In the GRND4 (or LINBC) option for GRAVitational source terms, BUOYC is used as a constant in a linear function of two variables for the VALue. See the help on

----------- PIL real; group 13 -----------

BUOYB... is used by GXBUOY to carry the gravitational acceleration in the y-direction. In the GRND4 (or LINBC) option for GRAVitational source terms, BUOYC is used as a constant in a linear function of two variables for the VALue. See the help on

----------- PIL real; group 13 -----------

BUOYC... is used by GXBUOY to carry the gravitational acceleration in the z-direction. In the GRND4 (or LINBC) option for GRAVitational source terms, BUOYC is used as a constant in a linear function of two variables for the VALue. See the help on

------ PIL real; default=0.0; group 19 ----

BZW1....is a parameter used in specification of the movement of the first part of an n-part grid. In the piston-in-cylinder example provided in subroutine GXPIST (called from GREX), BZW1 is the radius of the crank. If AZW1=GRND1, BZW1 is the constant piston velocity. See also
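The BLOK entry above describes the block-correction idea only in words. The sketch below is a hypothetical, stripped-down Python illustration of that idea on a generic linear system; it is not PHOENICS code, and the matrix, block layout, and function name are assumptions made for the example. One additive correction per block is computed so that the summed residual of each block vanishes, and the correction is then applied uniformly to every cell of the block.

```python
import numpy as np

def block_correction(A, b, x, block_id, n_blocks):
    """One block-correction step for the linear system A x = b.

    block_id[i] gives the block index of cell i.  A single additive
    correction per block is computed so that the summed (block) residual
    of each block becomes zero, then applied uniformly to its cells.
    """
    r = b - A @ x                                   # cell-wise residuals
    # Coarse (block) system: M c = s, obtained by summing the fine equations.
    P = np.zeros((len(x), n_blocks))
    P[np.arange(len(x)), block_id] = 1.0            # cell -> block indicator
    M = P.T @ A @ P                                 # block-to-block coefficients
    s = P.T @ r                                     # block-summed residuals
    c = np.linalg.solve(M, s)                       # one correction per block
    return x + P @ c                                # apply the correction to every cell

# Tiny demo: two regions with very different coefficients (a 1-D diffusion-like system).
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[4:, :] *= 1000.0                                  # second block has much larger coefficients
b = np.ones(n)
block_id = np.array([0, 0, 0, 0, 1, 1, 1, 1])
x = block_correction(A, b, np.zeros(n), block_id, n_blocks=2)
r = b - A @ x
print([round(float(r[block_id == p].sum()), 8) for p in range(2)])   # block sums of residual ~ 0
```

Summing the fine-grid equations block by block is what cancels the cell-to-cell links inside a block, leaving only the small NBLOK-by-NBLOK system that is solved directly, which mirrors the description given in the BLOK entry.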
The Basics of Torque Measurement | EE Times (Design How-To)

Torque is an important factor in much of the equipment on a factory floor. Measuring torque is often something that's misunderstood, which can lead to over- or under-designing of measurement systems. This article addresses the many techniques and tradeoffs of torque measurement.

Torque can be divided into two major categories, either static or dynamic. The methods used to measure torque can be further divided into two more categories, either reaction or in-line. Understanding the type of torque to be measured, as well as the different types of torque sensors that are available, will have a profound impact on the accuracy of the resulting data, as well as the cost of the measurement.

In a discussion of static vs. dynamic torque, it is often easiest to start with an understanding of the difference between a static and a dynamic force. To put it simply, a dynamic force involves acceleration, whereas a static force does not. The relationship between dynamic force and acceleration is described by Newton's second law: F = ma (force equals mass times acceleration). The force required to stop your car, with its substantial mass, would be a dynamic force, as the car must be decelerated. The force exerted by the brake caliper in order to stop that car would be a static force, because there is no acceleration of the brake pads.

Torque is just a rotational force, that is, a force applied at a distance from an axis of rotation. From the previous discussion, it is considered static if it has no angular acceleration. The torque exerted by a clock spring would be a static torque, since there is no rotation and hence no angular acceleration. The torque transmitted through a car's drive axle as it cruises down the highway (at a constant speed) would be an example of a rotating static torque, because even though there is rotation, at a constant speed there is no acceleration. The torque produced by the car's engine will be both static and dynamic, depending on where it is measured. If the torque is measured in the crankshaft, there will be large dynamic torque fluctuations as each cylinder fires and its piston rotates the crankshaft. If the torque is measured in the drive shaft, it will be nearly static because the rotational inertia of the flywheel and transmission will dampen the dynamic torque produced by the engine.

The torque required to crank up the windows in a car (remember those?) would be an example of a static torque, even though there is a rotational acceleration involved, because both the acceleration and the rotational inertia of the crank are very small and the resulting dynamic torque (torque = rotational inertia x rotational acceleration) will be negligible when compared to the frictional forces involved in the window movement. This last example illustrates the fact that for most measurement applications, both static and dynamic torques will be involved to some degree. If dynamic torque is a major component of the overall torque or is the torque of interest, special considerations must be made when determining how best to measure it.
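The window-crank example can be made concrete with a rough back-of-the-envelope calculation. The Python sketch below uses made-up but plausible numbers (they are not from the article) to compare the dynamic torque T = I*alpha needed to accelerate the crank handle against an assumed frictional torque in the window mechanism.

```python
import math

# Hypothetical window-crank numbers (illustrative only).
handle_mass = 0.1          # kg, effective mass of the crank handle
handle_radius = 0.05       # m, distance of that mass from the axis
inertia = handle_mass * handle_radius**2             # I = m r^2  ->  2.5e-4 kg m^2

# Spin the crank up to 2 revolutions per second in half a second.
omega = 2 * 2 * math.pi    # rad/s
alpha = omega / 0.5        # rad/s^2, angular acceleration

dynamic_torque = inertia * alpha                      # T = I * alpha
friction_torque = 2.0                                 # N*m, assumed friction in the mechanism

print(f"dynamic torque:  {dynamic_torque:.4f} N*m")   # ~0.006 N*m
print(f"friction torque: {friction_torque:.1f} N*m")  # dominates by roughly 300x
```

Even with a fairly brisk spin-up, the dynamic term is hundreds of times smaller than the assumed friction term, which is why the article treats this case as essentially static.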
Distributed Online Optimization of Wireless Optical Networks With Network Coding

Recently, hybrid wireless-optical broadband networks integrating optical backbone networks, passive optical networks (PON), and wireless access networks have been proposed to provide high-bandwidth, low-cost, and ubiquitous communication connections. In this paper, we consider the design of network coding-based multicast applications in such networks with the objective of maximizing the total network utility and minimizing the deployment cost, subject to QoS constraints. The problem is formulated as a mixed integer nonlinear programming problem, and the exact solution is prohibitively complex. In order to make the problem more tractable, we develop a two-step optimization procedure that iteratively selects the optical network units and gateways for the multicast sessions. During each iteration, two subproblems are solved, i.e., a network coding design problem for the optical network, and a user assignment and bandwidth allocation problem for the wireless network. The former is solved in a distributed way based on Lagrangian-dual decomposition; the latter is solved based on the generalized Benders decomposition. Simulation results are provided to illustrate the effectiveness of the proposed solutions. © 2012 IEEE

Jinxin Zhang, Weiqiang Xu, and Xiaodong Wang, "Distributed Online Optimization of Wireless Optical Networks With Network Coding," J. Lightwave Technol. 30, 2246-2255 (2012)
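To give a feel for the Lagrangian-dual decomposition mentioned in the abstract, here is a generic, hypothetical Python sketch on a toy resource-allocation problem; it is not the algorithm of this paper, and the utility functions, weights, capacity, and step size are all assumptions made for illustration. Each "session" solves its own subproblem given a price (Lagrange multiplier) on a shared capacity, and the price is updated by a projected subgradient step.

```python
import numpy as np

# Toy problem: maximize sum_i w_i*log(x_i) subject to sum_i x_i <= C, x_i > 0.
w = np.array([1.0, 2.0, 3.0])   # per-session utility weights (made-up numbers)
C = 6.0                          # shared capacity

lam = 0.5                        # price on the shared resource (dual variable)
step = 0.1
for _ in range(200):
    # Each session solves its own subproblem: max_x w_i*log(x_i) - lam*x_i  ->  x_i = w_i/lam.
    x = w / lam
    # Projected subgradient step on the dual: gradient is C - sum(x); keep lam strictly positive.
    lam = max(lam - step * (C - x.sum()), 1e-6)

print(np.round(x, 3), round(float(x.sum()), 3))   # allocations ~ [1, 2, 3], using the full capacity C = 6
```

The appeal of the decomposition, in the paper's setting as well as in this toy, is that once the shared constraint is priced the coupled problem separates into per-session subproblems, which is what permits a distributed implementation.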
O'Reilly, "Network coding in optical networks with O/E/O based wavelength conversion," Proc. Opt. Fiber Commun./National Fiber Opt. Eng. Conf. (2010) pp. 14. J. Du, M. Xiao, M. Skoglund, "Cooperative network coding strategies for wireless relay networks with backhaul," IEEE Trans. Commun. 59, 2502-2514 (2011). 15. R. Chandra, L. Qiu, K. Jain, M. Mahdian, "Optimizing the placement of integration points in multi-hop wireless networks," Proc. 12th IEEE Int. Conf. Netw. Protocols (2004) pp. 271-282. 16. B. Aoun, R. Boutaba, Y. Iraqi, G. Kenward, "Gateway placement optimization in wireless mesh networks with QoS constraints," IEEE J. Sel. Areas Commun. 24, 2127-2136 (2006). 17. Y. Drabu, H. Peyravi, "Gateway placement with QoS constraints in wireless mesh networks," Proc. 7th IEEE Int. Conf. Netw. (2008) pp. 46-51. 18. E. D. Manley, J. S. Deogun, L. Xu, D. R. Alexander, "All-optical network coding," J. Opt. Commun. Netw. 2, 175-191 (2010). 19. D. S. Lun, M. Médard, T. Ho, R. Koetter, "Network coding with a cost criterion," Proc. 2004 Int. Symp. Inf. Theory Appl. (2004) pp. 1232-1237. 20. D. S. Lun, N. Ratnakar, M. Médard, R. Koetter, E. Ahmed, H. Lee, "Achieving minimum-cost multicast: A decentralized approach based on network coding," Proc. Annu. Joint. Conf. IEEE Comput. Commun. Soc. (2005) pp. 1607-1617. 21. R. Gallager, "A minimum delay routing algorithm using distributed computation," IEEE Trans. Commun. 25, 73-85 (1977). 22. D. S. Lun, N. Ratnakar, M. Médard, R. Koetter, D. R. Karger, T. Ho, E. Ahmed, F. Zhao, "Minimum-cost multicast over coded packet networks," IEEE Trans. Inf. Theory 52, 2608-2623 (2006). 23. S.-Y. R. Li, R. W. Yeung, N. Cai, "Linear network coding," IEEE Trans. Inf. Theory 49, 371-381 (2003). 24. T. Ho, M. Médard, R. Koetter, D. Karger, M. Effros, J. Shi, B. Leong, "A random linear network coding approach to multicast," IEEE Trans. Inf. Theory 52, 4413-4430 (2004). 25. Y. Xi, E. M. Yeh, "Distributed algorithms for minimum cost multicast with network coding," IEEE/ACM Trans. Netw. 18, 379-392 (2010). 26. M. M. Carvalho, J. J. Garcia-Luna-Aceves, "Delay analysis of the IEEE802.11 in single-hop networks," Proc. 11th IEEE Int. Conf. Netw. Protocols (2003) pp. 146-155. 27. S. Sarkar, H.-H. Yen, S. Dixit, B. Mukherjee, "Hybrid wireless-optical broadband access network (WOBAN): Network planning using Lagrangian relaxation," IEEE/ACM Trans. Netw. 17, 1094-1105 (2009). 28. A. M. Geoffrion, "Generalized benders decomposition," J. Optimization Theory Appl. 10, 237-260 (1972). 29. J. Chen, L. Qian, Y. Zhang, "On optimization of joint base station association and power control via benders' decomposition," Proc. IEEE Global Telecommun. (2009) pp. 1-6. 30. M. P. Mcgarry, M. Reisslein, M. Maier, "Ethernet passive optical network architectures and dynamic bandwidth allocation algorithms," IEEE Commun. Surveys Tuts. 10, 46-60 (2008). 31. J. F. Benders, "Partitioning procedures for solving mixed-variables programming problems," Numerische Mathematik 4, 238-252 (1962). 32. J. Mo, J. Walrand, "Fair end-to-end window-based congestion control," IEEE/ACM Trans. Netw. 8, 556-567 (2000). 33. L. Massoulié, J. Roberts, "Bandwidth sharing: Objectives and algorithms," IEEE/ACM Trans. Netw. 10, 320-328 (2002). 34. D. P. Palomar, M. Chiang, "Alternative distributed algorithms for network utility maximization: Framework and applications," IEEE Trans. Autom. Control 52, 2254-2269 (2007). 35. J. Löfberg, "YALMIP: A toolbox for modeling and optimization in MATLAB," Proc. IEEE Int. Conf. Robot. Autom. 
(2004) pp. 284-289.
{"url":"http://www.opticsinfobase.org/jlt/abstract.cfm?URI=jlt-30-14-2246","timestamp":"2014-04-16T17:07:37Z","content_type":null,"content_length":"87794","record_id":"<urn:uuid:df6fa0dc-f730-424a-9c01-c50963dd9c6a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Re: Disaster? Matthew Frank mfrank at math.uchicago.edu Thu May 13 12:33:08 EDT 2004

In response to Tim Chow's post: I don't find it so hard to imagine the inconsistency of PA. We usually use the induction axioms

(phi(0) & (forall x)(phi(x) -> phi(x+1))) -> (forall x)phi(x)

only when phi is decidable or quantifier-free. So if more complex induction axioms turned out inconsistent, we would abandon them, probably with mutterings of impredicativity. In that case, we would use something like elementary function arithmetic. This is also known as I_Delta_0(exp), where the I stands for induction, Delta_0 refers to the complexity of the induction formulas, and exp indicates that the language of the theory goes beyond that of Peano arithmetic to include an exponential function.

If elementary function arithmetic turned out to be inconsistent, we would remove exp from the language, concluding that the infeasibility of computing exponentials was indicative of the impossibility of any total function with the properties of exponentiation. And so on down....

Discoveries of such inconsistencies might well ignite wide interest in non-classical logics, such as intuitionist logic. However, double-negations convert inconsistencies in classical arithmetics to inconsistencies in intuitionist arithmetics. So switching to an intuitionist logic wouldn't help in the above scenarios.

More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-May/008133.html","timestamp":"2014-04-16T10:16:55Z","content_type":null,"content_length":"3560","record_id":"<urn:uuid:d7de631f-d72d-41f6-87b4-dab793930b94>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
protein sequence generation plantscigroup at my-dejanews.com plantscigroup at my-dejanews.com Sat Mar 27 10:08:37 EST 1999

In article <36FA980A.143E4866 at fuerst.de>, "Frank Fürst" <frank at fuerst.de> wrote:
> Hi,
> Andy Phillips wrote:
> > Cornelius Krasel wrote:
> > > Be aware that for a protein with n residues there are approximately n! different sequences (somewhat less because of repetitions).
> > Err..shouldn't that be 20^n (20 to the nth power) different sequences??
> You're right. But on the other hand, if we come back to the original question (which I snipped out above...), even with a molecular weight tolerance of +- zero, one gets n! different sequences. Plus the sequences with multiple mutations that compensate each other, each of them again n! times. So n! gets somewhat important, because for every sequence a man would find calculating with his pencil, the computer would find n!

No, sorry, it isn't even close to n! - the fact that there are on average n/20 repetitions makes the number much smaller. Consider a protein of length 100 - there are 20^100 (~= 10^130) possible sequences, and 100! (~= 10^157) possible permutations. Consider a sequence of length Z, in which there are A amino acids of type 1, B of type 2 etc... The number of distinct sequences would be:

Z! / (A! B! C! ... T!)

Gary M.

-----------== Posted via Deja News, The Discussion Network ==---------- http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own

More information about the Bio-soft mailing list
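The multinomial count above is easy to check numerically. The following is a minimal C++ sketch, not part of the original thread; the example composition (100 residues, 5 of each of the 20 amino acids) is a made-up illustration, and logarithms are used to avoid overflow.

#include <cmath>
#include <cstdio>
#include <vector>

// Number of distinct sequences with a fixed amino-acid composition:
//   Z! / (c1! * c2! * ... * ck!), computed with lgamma to avoid overflow.
// Returns the natural logarithm of that count.
double log_distinct_sequences(const std::vector<int>& counts) {
    int z = 0;
    for (int c : counts) z += c;
    double log_count = std::lgamma(z + 1.0);                 // ln(Z!)
    for (int c : counts) log_count -= std::lgamma(c + 1.0);  // minus ln(c_i!)
    return log_count;
}

int main() {
    // Hypothetical composition: 100 residues, 5 copies of each of the 20 amino acids.
    std::vector<int> counts(20, 5);
    double ln_count = log_distinct_sequences(counts);
    std::printf("log10(#distinct sequences) ~ %.1f\n", ln_count / std::log(10.0));
    // Compare with log10(20^100) ~ 130 and log10(100!) ~ 158 mentioned above.
    return 0;
}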
{"url":"http://www.bio.net/bionet/mm/bio-soft/1999-March/020523.html","timestamp":"2014-04-17T02:09:08Z","content_type":null,"content_length":"4079","record_id":"<urn:uuid:4be4d747-5813-40d6-9c39-9dc8416d3178>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
the probability that this power supply will be inadequate on any given day February 21st 2009, 07:12 PM the probability that this power supply will be inadequate on any given day In a certain city, the daily consumption of electric power in millions of kilowatt-hours can be treated as a random variable having a gamma distribution with α=3 and β=b. If the power plant of this city has a daily capacity of 12 million kilowatt-hours, what is the probability that this power supply will be inadequate on any given day? February 22nd 2009, 01:04 AM In a certain city, the daily consumption of electric power in millions of kilowatt-hours can be treated as a random variable having a gamma distribution with α=3 and β=b. If the power plant of this city has a daily capacity of 12 million kilowatt-hours, what is the probability that this power supply will be inadequate on any given day? Let X be the rv representing the daily consumption (in millions of kw-h). So you are looking for $\mathbb{P}(X>12)=1-\mathbb{P}(X\leqslant 12)$ The pdf of a gamma distribution with parameters $(\alpha,\beta)$ is: $g(x,\alpha,\beta)=x^{\alpha-1} \cdot \frac{\beta^\alpha e^{-\beta x}}{\Gamma(\alpha)}$ So here, it's $g(x)=\frac{b^3}{\Gamma(3)} \cdot x^2 e^{-bx}$ $\mathbb{P}(X\leqslant 12)=\int_0^{12} g(x) ~dx=\frac{b^3}{\Gamma(3)} \int_0^{12}x^2 e^{-bx} ~dx$ make twice an integration by parts and you'll be done. February 22nd 2009, 09:35 AM sorry! i made a mistake on the question. the β is equal to 2 not b. β=2. and can you explain why i have to make twice an integration by parts? thanks!!! February 22nd 2009, 09:57 AM is the equation should be The pdf of a gamma distribution with parameters $(\alpha,\beta)$ is: $g(x,\alpha,\beta)=x^{\alpha-1} \cdot \frac{\beta^{-\alpha} e^{-x/\beta }}{\Gamma(\alpha)}$? February 22nd 2009, 10:28 AM I don't know... the notations are made this way in the wikipedia. The pdf I gave you is if we're working on $(\alpha,\beta)$. The pdf you gave is if we're working on $(k,\theta)$ You can see that $\alpha=k$ and $\beta=\frac 1 \theta$ So here, it would depend on how you have been taught ;) As for your mistake of b=2, it doesn't change anything to the result, since b was a constant, and that you can substitute it by any positive value you want :D And for the double integration by parts, it's because we have $x^2 e^{-2x}$ in the integrand. We can't calculate an antiderivative of it. So make an integration by parts : the power x^2 will be transformed into x and the exponential remains. But we still cannot find an antiderivative. So integrate again by parts : the power x will be transformed to a constant and the exponential remains. But then you can compute an antiderivative, because the integrand will be an exponential. Just do it and you'll see ;) (each time, take the u part as x^2 or x and the dv part as the exponential) February 22nd 2009, 02:08 PM I was about to warn you. IN SOME books $\beta$ is in the numerator and in other books it's in the denominator. I was going ask the person who submitted this thread to clarify. By just saying $\beta$ is such and such it's not clear.
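For what it is worth, here is the computation carried to the end under the scale parameterization quoted in the second pdf above, i.e. $g(x)=x^{\alpha-1}\,\beta^{-\alpha}e^{-x/\beta}/\Gamma(\alpha)$ with $\alpha=3$ and $\beta=2$; as the last post notes, the parameterization is an assumption, and under the rate convention the numbers would differ.

$\mathbb{P}(X>12)=\int_{12}^{\infty}\frac{x^{2}e^{-x/2}}{2^{3}\,\Gamma(3)}\,dx = e^{-6}\Bigl(1+6+\frac{6^{2}}{2}\Bigr) = 25\,e^{-6} \approx 0.062$

The middle step comes from integrating by parts twice, or equivalently from the Erlang tail formula $\mathbb{P}(X>c)=e^{-c/\beta}\sum_{j=0}^{\alpha-1}\frac{(c/\beta)^{j}}{j!}$ for integer $\alpha$.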
{"url":"http://mathhelpforum.com/advanced-statistics/74937-probability-power-supply-will-inadequate-any-given-day-print.html","timestamp":"2014-04-18T16:02:06Z","content_type":null,"content_length":"11711","record_id":"<urn:uuid:1b0f719c-53e6-4846-818d-5584af6d4d67>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
Pi Day: How 3.14 helps find other planets, and more Happy Pi Day! A favorite holiday among geeks, March 14 commemorates one of the most fundamental and strange numbers in mathematics. It's also Albert Einstein's birthday. This is a great excuse to bake pies, as many iReporters have done. But there are also lots of reasons to celebrate this number: Pi appears in the search for other planets, in the way that DNA folds, in science at the world's most powerful particle collider, and in many other fields of science. Here's a refresher: Pi is the ratio of circumference to diameter of a circle. No matter how big or small the circle is, if you calculate the distance around it, divided by the distance across it, you will get pi, which is approximately 3.14. That's why Pi Day is 3/14! But the digits of pi actually go on forever in a seemingly random fashion, making it a fun challenge for people who like to memorize and recite long strings of numbers. By the way, the world record for memorization stands at 67,890 digits, according to the Pi World Ranking List. To the uninitiated, such enthusiasm over a number may sound ridiculous. But when you think about how many different fields of science incorporate pi, it does seem kind of amazing. Be forewarned: We're going to have to use a bit of math to explain why. Yes, math formulas may seem scary, but trust us: It's worth the challenge. The search for new planets For Sara Seager, professor of planetary science at Massachusetts Institute of Technology, pi is part of everyday work in characterizing and searching for planets outside our solar system, called Here's her basic formula: The volume of a planet is about 4/3 pi times the radius^3. You need this formula to find the density of a planet, which is mass divided by volume. This number that tells Seager and colleagues whether a planet is mostly gaseous like Jupiter, rocky like Earth, or something in between. Pi is also involved in calculations regarding an exoplanet's atmosphere, since it can be described spherically, and spheres always involve pi. "Coincidentally, pi is useful to estimate the number of seconds in a year (on Earth): There are approximately pi times 10 million seconds in a year," Seager says. And a tiny space telescope that Seager works on called the ExoplanetSat, which is a collaboration between MIT and Draper Laboratory, also incorporates pi in optics equations related to the telescope's mirror. Pi helps describe the shape of the universe, says David Spergel, chairman of Princeton University's astrophysical sciences department. Spergel studies cosmic microwave background radiation, which is basically radiation that's still hanging around from the early universe - it's the afterglow of the Big Bang. Using a spacecraft called WMAP (Wilkinson Microwave Anisotropy Probe), Spergel and colleagues have been able to get an idea of what the early universe looked like - a "baby picture," as it was called when WMAP's 2003 results were released. See if you can wrap your head around this: 4pi is the ratio of the surface area of a sphere to the square of its radius, in geometrically flat space. "Using our measurements of the microwave background, we measure this ratio by determining the angular size of hot and cold spots in the microwave sky. Our measurements show that the large-scale geometry of the universe is accurately described by the Euclidean geometry that we all learned in high school," Spergel says. "This measurement implies that the total energy of the universe is very close to zero." Why? 
The positive energy from the universe's expansion (it's been expanding since the Big Bang) is balanced by the negative energy of matter being attracted to itself, via gravity. The Large Hadron Collider Pi comes up a lot in what physicists do at the Large Hadron Collider, the $10 billion machine at the European Organization for Nuclear Research (CERN) in Switzerland that smashes protons into protons at unprecedented energies. Scientists are looking for as-yet-undiscovered particles such as the Higgs boson, which popular culture refers to as "the God particle." Joe Incandela, spokesman for the collider's Compact Muon Solenoid experiment, explains one way that pi shows up at the LHC: In particle physics, if we can measure particle properties, like masses, very very precisely, we can sometimes find tell-tale evidence of undiscovered new particles. That's because particles can transform themselves into other particles, and then come back together to make the original particle again. This is called a loop. When you calculate the contribution of this process to the particle's mass, a factor of something like 1/(16pi^2) comes out, along with other factors that depend on the properties of the particles in the loops. Interestingly, prior to the LHC, some particles could only appear in these loops, and nowhere else, and should come in pairs in order for a special property that they have to be conserved. A very important example of a theory of particles with this kind of behavior is what scientists call supersymmetry, and it helps explain a lot of the holes in our current best understanding of the universe, known as the Standard Model. Scientists are hoping to see these kinds of particles directly which requires very high-energy particle accelerators like the LHC to make them. They will also continue to try to detect their effects on Standard-Model particles in these loops, which are extremely short-lived and this requires a lot of patience because measurements must be extremely precise. So, supersymmetry will probably still be out there for us to discover, even though there is no evidence of it in detailed measurements of particle parameters at the LEP and Tevatron accelerators in the past (or at the LHC either, so far, but there's still a lot of room to look for them). Gravity, energy and mass Pi appears in Einstein's equation for how energy and mass lead to the curvature of spacetime: R_ij -- (1/2)R g_ij = 8pi G T_ij. Wow, what is that? Sean Carroll at California Institute of Technology acknowledges that this is a weird-looking equation, but the important part is that G is Newton's constant of gravitation. "Long story short: in Newton's equation for gravity, the constant is just G; in Einstein's equation, it's 8pi*G," he says. Why? Carroll explains: "Let's say you know how much mass the Earth has, and you want to figure out what the strength of gravity is at some distance away. Newton's equation tells you what that force is - it's proportional to one divided by the distance squared (the famous "inverse square law"). But let's say you want to do the opposite - you know what the force is, but you want to figure out how much mass is causing You could draw a sphere that completely surrounds the object, and add up the gravitational force at each point on the sphere, to make sure you are correctly capturing what's going on inside. So the answer to one question is related to the answer to the other, by adding up things all over a sphere. And the area of a sphere of radius R is 4pi R^2. 
Voila - pi comes into the expression, because pi relates distances (straight lines) to spheres." How DNA folds Pi plays an important role in the way the genome is folded, says Leonid Mirny, associate professor at MIT. "If you take all DNA of the human genome contained in a single cell and stretch it, the DNA would be a 2-meter-long fiber," he says. How are these two meters of DNA packed inside a cell nucleus, which is only 5 micrometers (that's 5 millionths of a meter) in diameter? Think about thread around a spool. At the cellular level, there's a core made of special proteins called histones, and they're like the spool. DNA wraps twice around it and then continues to the next spool. Each one of these spools is called a nucleosome, and tens of millions of them pack our DNA, making it look like a string of beads. How much shorter is this string than the DNA itself? The answer is about 1.5pi (or about 5) times! Pi is essential for mathematicians whether they care about circles or not, says Jordan Ellenberg, professor of mathematics at the University of Wisconsin. Here's one place where it comes up for Choose two random numbers between 1 and 1,000. Then, he could compute whether they have any factors other than 1 in common. "It turns out that the probability of having no common factor is a a little over 60%," he says. "And you can change 1,000 to 10,000, and then to 100,000, etc etc, and amazingly the probability seems to be converging to a fixed value, about 60.79%. More amazingly still, this value is 6/pi^2!" Studying crickets Crickets use sound to locate mates, and their reaction to fellow crickets' calls are of interest to Gerald Pollack - a biologist at McGill University in Montreal, Quebec. In one of his experiments, crickets walk on a spherical treadmill while a loudspeaker broadcasts a cricket song. How accurately do they walk toward the sound? "We measure the discrepancy between the direction of the loudspeaker and the direction in which the cricket walks, both of which are measured as angles ranging between zero and 2pi radians," he says. James Clerk Maxwell Maxwell published famous equations of electromagnetism in the 1860s. They are fundamental to modern electronics and communications. These equations include an important physical quantity called "the permeability of free space," which has a value of 4pi x 10^-7 H/m that's units per Henry per meter, where a Henry is a unit used in "So we are all using pi every day when we think about magnetic or electric fields, or electromagnetic radiation (light, radio etc)," says Caroline Ross, associate head of the Department of Materials Science and Engineering at MIT. Pi is involved in calculating the surface area and volume of round three-dimensional objects. So, if you're planning to build something involving spheres or arches or some kind of circular geometry, you're going to need pi! Drug design Chandrajit Bajaj at the University of Texas, Austin, is researching molecular recognition models for drug design and discovery. She uses simulations of particles in which atoms are often represented as spheres. The formulas for molecular surface area and volume involve pi, and often appear in Bajaj's calculations. So, there are more than 3.14 reasons that pi is special. Now go eat some pie!
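The coprimality claim near the end (that two random integers share no common factor with probability about 60.79%, i.e. 6/pi^2) is easy to check empirically. The following short C++ Monte Carlo sketch is added for illustration only; the range and sample size are arbitrary choices, not from the article.

#include <cmath>
#include <cstdio>
#include <numeric>
#include <random>

int main() {
    std::mt19937_64 rng(12345);                       // fixed seed for repeatability
    std::uniform_int_distribution<long long> pick(1, 1000000);

    const int trials = 1000000;
    int coprime = 0;
    for (int i = 0; i < trials; ++i) {
        // Two integers have no common factor exactly when their gcd is 1.
        if (std::gcd(pick(rng), pick(rng)) == 1) ++coprime;
    }

    const double pi = std::acos(-1.0);
    std::printf("empirical P(coprime) = %.4f\n", double(coprime) / trials);
    std::printf("6 / pi^2             = %.4f\n", 6.0 / (pi * pi));   // ~0.6079
    return 0;
}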
{"url":"http://www.wptv.com/news/science-tech/pi-day-how-314-helps-find-other-planets-and-more-wcpo1331751226601","timestamp":"2014-04-20T14:19:19Z","content_type":null,"content_length":"173099","record_id":"<urn:uuid:3d845370-e74b-4c8d-b42d-b671ead159dd>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
The Basics of Torque Measurement

Torque is an important factor in much of the equipment on a factory floor. Measuring torque is often something that's misunderstood, which can lead to over- or under-designing of measurement systems. This article addresses the techniques and tradeoffs of torque measurement. Torque can be divided into two major categories, either static or dynamic. The methods used to measure torque can be further divided into two more categories, either reaction or in-line. Understanding the type of torque to be measured, as well as the different types of torque sensors that are available, will have a profound impact on the accuracy of the resulting data, as well as the cost of the measurement.

In a discussion of static vs. dynamic torque, it is often easiest to start with an understanding of the difference between a static and dynamic force. To put it simply, a dynamic force involves acceleration, whereas a static force does not. The relationship between dynamic force and acceleration is described by Newton's second law; F=ma (force equals mass times acceleration). The force required to stop your car with its substantial mass would be a dynamic force, as the car must be decelerated. The force exerted by the brake caliper in order to stop that car would be a static force because there is no acceleration of the brake pads.

Torque is just a rotational force, i.e., a force applied at a distance from an axis of rotation. From the previous discussion, it is considered static if it has no angular acceleration. The torque exerted by a clock spring would be a static torque, since there is no rotation and hence no angular acceleration. The torque transmitted through a car's drive axle as it cruises down the highway (at a constant speed) would be an example of a rotating static torque, because even though there is rotation, at a constant speed there is no acceleration. The torque produced by the car's engine will be both static and dynamic, depending on where it is measured. If the torque is measured in the crankshaft, there will be large dynamic torque fluctuations as each cylinder fires and its piston rotates the crankshaft. If the torque is measured in the drive shaft it will be nearly static because the rotational inertia of the flywheel and transmission will dampen the dynamic torque produced by the engine. The torque required to crank up the windows in a car (remember those?) would be an example of a static torque, even though there is a rotational acceleration involved, because both the acceleration and rotational inertia of the crank are very small and the resulting dynamic torque (Torque = rotational inertia x rotational acceleration) will be negligible when compared to the frictional forces involved in the window movement.

This last example illustrates the fact that for most measurement applications, both static and dynamic torques will be involved to some degree. If dynamic torque is a major component of the overall torque or is the torque of interest, special considerations must be made when determining how best to measure it.
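As a rough back-of-the-envelope check of the window-crank point, here is a tiny C++ sketch; the inertia, acceleration, and friction figures are invented for illustration and are not from the article.

#include <cstdio>

int main() {
    // Assumed, illustrative numbers for a hand-operated window crank.
    const double inertia      = 0.0005;  // rotational inertia of the crank, kg*m^2 (assumed)
    const double acceleration = 10.0;    // angular acceleration while spinning up, rad/s^2 (assumed)
    const double friction     = 2.0;     // friction torque of the window mechanism, N*m (assumed)

    // Dynamic torque = rotational inertia x rotational acceleration.
    const double dynamic_torque = inertia * acceleration;

    std::printf("dynamic torque  : %.4f N*m\n", dynamic_torque);   // 0.0050 N*m
    std::printf("friction torque : %.4f N*m\n", friction);         // 2.0000 N*m
    // The dynamic part is negligible next to friction, which is why the crank
    // torque can be treated as essentially static.
    return 0;
}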
{"url":"http://www.eetimes.com/document.asp?doc_id=1273988","timestamp":"2014-04-16T08:32:11Z","content_type":null,"content_length":"130237","record_id":"<urn:uuid:29d7d878-5e1a-498c-8aa5-055f2b0c5ce3>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
A Simple Publicly Verifiable Secret Sharing Scheme and Its Application to Electronic Voting A publicly verifiable secret sharing (PVSS) scheme is a verifiable secret sharing scheme with the property that the validity of the shares distributed by the dealer can be verified by any party; hence verification is not limited to the respective participants receiving the shares. We present a new construction for PVSS schemes, which compared to previous solutions by Stadler and later by Fujisaki and Okamoto, achieves improvements both in efficiency and in the type of intractability assumptions. The running time is O(nk), where k is a security parameter, and n is the number of participants, hence essentially optimal. The intractability assumptions are the standard Diffie-Hellman assumption and its decisional variant. We present several applications of our PVSS scheme, among which is a new type of universally verifiable election scheme based on PVSS. The election scheme becomes quite practical and combines several advantages of related electronic voting schemes, which makes it of interest in its own right. 1. J. Benaloh. Secret sharing homomorphisms: Keeping shares of a secret secret. In Advances in Cryptology—CRYPTO’ 86, volume 263 of Lecture Notes in Computer Science, pages 251–260, Berlin, 1987. Springer-Verlag. CrossRef 2. J. Benaloh. Verifiable Secret-Ballot Elections. PhD thesis, Yale University, Department of Computer Science Department, New Haven, CT, September 1987. 3. G.R. Blakley. Safeguarding cryptographic keys. In Proceedings of the National Computer Conference 1979, volume 48 of AFIPS Conference Proceedings, pages 313–317, 1979. 4. E. F. Brickell. Some ideal secret sharing schemes. Journal of Combinatorial Mathematics and Combinatorial Computing, 9:105–113, 1989. 5. J. Benaloh and M. Yung. Distributing the power of a government to enhance the privacy of voters. In Proc. 5th ACM Symposium on Principles of Distributed Computing (PODC’ 86), pages 52–62, New York, 1986. A.C.M. 6. R. Cramer, I. Damgård, and U. Maurer. General secure multi-party computation from any linear secret sharing scheme, 1999. Manuscript. 7. R. Cramer, I. Damgård, and B. Schoenmakers. Proofs of partial knowledge and simplified design of witness hiding protocols. In Advances in Cryptology—CRYPTO’ 94, volume 839 of Lecture Notes in Computer Science, pages 174–187, Berlin, 1994. Springer-Verlag. 8. J. Cohen and M. Fischer. A robust and verifiable cryptographically secure election scheme. In Proc. 26th IEEE Symposium on Foundations of Computer Science (FOCS’ 85), pages 372–382. IEEE Computer Society, 1985. 9. R. Cramer, M. Franklin, B. Schoenmakers, and M. Yung. Multi-authority secret ballot elections with linear work. In Advances in Cryptology — EUROCRYPT’ 96, volume 1070 of Lecture Notes in Computer Science, pages 72–83, Berlin, 1996. Springer-Verlag. 10. B. Chor, S. Goldwasser, S. Micali, and B. Awerbuch. Verifiable secret sharing and achieving simultaneity in the presence of faults. In Proc. 26th IEEE Symposium on Foundations of Computer Science (FOCS’ 85), pages 383–395. IEEE Computer Society, 1985. 11. R. Cramer, R. Gennaro, and B. Schoenmakers. A secure and optimally efficient multi-authority election scheme. In Advances in Cryptology — EUROCRYPT’ 97, volume 1233 of Lecture Notes in Computer Science, pages 103–118, Berlin, 1997. Springer-Verlag. 12. J. Camenisch, U. Maurer, and M. Stadler. Digital payment systems with passive anonymity-revoking trustees. 
In Computer Security-ESORICS 96, volume 1146 of Lecture Notes in Computer Science, pages 33–43, Berlin, 1996. Springer-Verlag. 13. D. Chaum and T. P. Pedersen. Transferred cash grows in size. In Advances in Cryptology—EUROCRYPT’ 92, volume 658 of Lecture Notes in Computer Science, pages 390–407, Berlin, 1993. Springer-Verlag. CrossRef 14. P. Feldman. A practical scheme for non-interactive verifiable secret sharing. In Proc. 28th IEEE Symposium on Foundations of Computer Science (FOCS’ 87), pages 427–437. IEEE Computer Society, 15. E. Fujisaki and T. Okamoto. A practical and provably secure scheme for publicly verifiable secret sharing and its applications. In Advances in Cryptology—EUROCRYPT’ 98, volume 1403 of Lecture Notes in Computer Science, pages 32–46, Berlin, 1998. Springer-Verlag. CrossRef 16. Y. Frankel, Y. Tsiounis, and M. Yung. “Indirect discourse proofs”: Achieving efficient fair off-line e-cash. In Advances in Cryptology ASIACRYPT’ 96, volume 1163 of Lecture Notes in Computer Science, pages 286–300, Berlin, 1996. Springer-Verlag. CrossRef 17. R. Gennaro, S. Jarecki, H. Krawczyk, and T. Rabin. Secure distributed key generation for discrete-log based cryptosystems. In Advances in Cryptology—EUROCRYPT’ 99, volume 1592 of Lecture Notes in Computer Science, pages 295–310, Berlin, 1999. Springer-Verlag. 18. M. Karchmer and A. Wigderson. On span programs. In Proceedings of the Eighth Annual Structure in Complexity Theory Conference, pages 102–111. IEEE Computer Society Press, 1993. 19. T. Pedersen. A threshold cryptosystem without a trusted party. In Advances in Cryptology—EUROCRYPT’ 91, volume 547 of Lecture Notes in Computer Science, pages 522–526, Berlin, 1991. 20. T. P. Pedersen. Distributed Provers and Verifiable Secret Sharing Based on the Discrete Logarithm Problem. PhD thesis, Aarhus University, Computer Science Department, Aarhus, Denmark, March 1992. 21. T. P. Pedersen. Non-interactive and information-theoretic secure verifiable secret sharing. In Advances in Cryptology—CRYPTO’ 91, volume 576 of Lecture Notes in Computer Science, pages 129–140, Berlin, 1992. Springer-Verlag. 22. B. Pfitzmann and M. Waidner. How to break fraud-detectable key recovery. Operating Systems Review, 32(1):23–28, 1998. CrossRef 23. A. Shamir. How to share a secret. Communications of the ACM, 22(11):612–613, 1979. CrossRef 24. M. Stadler. Publicly verifiable secret sharing. In Advances in Cryptology — EUROCRYPT’ 96, volume 1070 of Lecture Notes in Computer Science, pages 190–199, Berlin, 1996. Springer-Verlag. 25. E. Verheul and H. van Tilborg. Binding ElGamal: A fraud-detectable alternative to key-escrow proposals. In Advances in Cryptology—EUROCRYPT’ 97, volume 1233 of Lecture Notes in Computer Science, pages 119–133, Berlin, 1997. Springer-Verlag. 26. A. Young and M. Yung. Auto-recoverable auto-certifiable cryptosystems. In Advances in Cryptology—EUROCRYPT’ 98, volume 1403 of Lecture Notes in Computer Science, pages 17–31, Berlin, 1998. Springer-Verlag. CrossRef A Simple Publicly Verifiable Secret Sharing Scheme and Its Application to Electronic Voting Book Title Book Subtitle 19th Annual International Cryptology Conference Santa Barbara, California, USA, August 15–19, 1999 Proceedings pp 148-164 Print ISBN Online ISBN Series Title Series Volume Series ISSN Springer Berlin Heidelberg Copyright Holder Springer Berlin Heidelberg Additional Links Industry Sectors eBook Packages Editor Affiliations Author Affiliations □ 5. 
Department of Mathematics and Computing Science, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
{"url":"http://link.springer.com/chapter/10.1007%2F3-540-48405-1_10","timestamp":"2014-04-20T23:17:43Z","content_type":null,"content_length":"49670","record_id":"<urn:uuid:b23ebc29-714b-4128-adb6-98dc7f483c1f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
product of two continuous functions

Why on earth would he do so? Halls' hints are quite straightforward.

To solve the problem and because it is fun. I agree, I provided a different (though very slightly) view. lurflurf may have misread "continuous" as "differentiable".

Theorem: f is continuous if and only if lim_{h->0} (f(x+h) - f(x)) = 0, where (for picking nits) we accept as implicit that f(x) exists and lim_{h->0} f(x+h) exists.

The point (of continuity) is simply that if two numbers are respectively near two other numbers, then the products are also near each other. To see this, let the numbers be a+h and b+k and compare the product ab to (a+h)(b+k), when h and k are small.

A useful framework: we desire to show that with
eps1 = (eps/3)/|g(x)| if |g(x)| > 0,
eps2 = (eps/3)/max(eps1, |f(x)|) if |f(x)| > 0,
we easily see the result is true.
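To flesh out the sketch in the last post, here is one standard epsilon-delta argument for the continuity of a product at a point $x$ where $f$ and $g$ are both continuous; the particular constants are a hedged reconstruction in the same spirit as the poster's eps1 and eps2, not necessarily the ones they intended.

$f(x+h)g(x+h)-f(x)g(x)=\bigl(f(x+h)-f(x)\bigr)g(x+h)+f(x)\bigl(g(x+h)-g(x)\bigr).$

Given $\varepsilon>0$, continuity of $g$ gives a $\delta_1>0$ with $|g(x+h)-g(x)|<1$, hence $|g(x+h)|<|g(x)|+1$, whenever $|h|<\delta_1$. Shrink $\delta\le\delta_1$ further so that

$|f(x+h)-f(x)|<\frac{\varepsilon}{2(|g(x)|+1)} \quad\text{and}\quad |g(x+h)-g(x)|<\frac{\varepsilon}{2(|f(x)|+1)}$

whenever $|h|<\delta$. Then the triangle inequality gives

$|f(x+h)g(x+h)-f(x)g(x)| \le |f(x+h)-f(x)|\,|g(x+h)| + |f(x)|\,|g(x+h)-g(x)| < \frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon,$

so $\lim_{h\to 0}\bigl(f(x+h)g(x+h)-f(x)g(x)\bigr)=0$, i.e. $fg$ is continuous at $x$.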
{"url":"http://www.physicsforums.com/showthread.php?p=1615165","timestamp":"2014-04-16T10:30:32Z","content_type":null,"content_length":"45478","record_id":"<urn:uuid:b8363b90-4acd-4957-b9d3-571f33ca95be>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Clocks - Time

June 9th 2008, 11:55 AM #1 Jun 2008
Clocks A, B and C strike every hour. B slows down and takes 2 minutes longer than A per hour, while C becomes faster and takes a minute less than A per hour. If they strike together at 12 midnight, when will they strike together again...? ans: 11 am

June 10th 2008, 12:16 PM #2 Super Member May 2006 Lexington, MA (USA)
Hello, MathLearner! Clocks $A, B\text{ and }C$ strike every hour. $B$ slows down and takes 2 minutes longer than $A$ per hour while $C$ becomes faster and takes a minute less than $A$ per hour. If they strike together at 12 midnight, when will they strike together again? Answer: 11 am . . . . I don't agree! Clock A strikes every 60 minutes. Clock B strikes every 62 minutes. Clock C strikes every 59 minutes. The LCM of 60, 62, and 59 is: 109,740. $109,\!740\text{ minutes} \:=\:1829\text{ hours} \:=\:76\text{ days, }{\color{blue}5\text{ hours}}$ $\text{They will strike together at }{\color{red}5\text{ am}}\text{ (of the 77th day).}$

June 11th 2008, 12:55 AM #3 Jun 2008
I am not sure about this... In the paper I was solving, the right option given was 11 am.. there was no option for 5 am...
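The second reply's arithmetic is easy to double-check in code. This small C++ sketch assumes, as that reply does, that the three clocks strike every 60, 62, and 59 minutes respectively.

#include <cstdio>
#include <numeric>

int main() {
    // Strike intervals in minutes, per the model in the reply above (assumed).
    const long long a = 60, b = 62, c = 59;

    // Minutes until all three strike together again.
    const long long minutes = std::lcm(std::lcm(a, b), c);   // 109740

    const long long hours = minutes / 60;   // 1829
    const long long days  = hours / 24;     // 76
    const long long rem   = hours % 24;     // 5

    std::printf("%lld minutes = %lld hours = %lld days and %lld hours\n",
                minutes, hours, days, rem);  // i.e. 5 am on the 77th day
    return 0;
}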
{"url":"http://mathhelpforum.com/math-challenge-problems/41113-clocks-time.html","timestamp":"2014-04-19T17:53:45Z","content_type":null,"content_length":"36786","record_id":"<urn:uuid:4dc810a2-e871-4f30-9614-064809229bb7>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Balanced search trees: AVL, red-black, (2,4), and splay trees

Lecture notes for a data structures class. If you have some spare time to read this, please send feedback, especially on the C++ parts. Previous post: Binary trees and binary search trees. Next post: hash tables.

In the previous lecture we discussed binary search trees (BSTs) and several basic results such as the number of BSTs with n keys and the height of a random BST with n nodes. Perhaps the most important result is that in a BST the three basic operations — search, insert, and delete — take O(h) time where h is the height of the tree (at the time an operation is performed). Since the average height of a random BST is O(log n) (with a small variance), these operations are pretty fast on average. However, the run time in the worst case could be bad.

Exercise: come up with a sequence of n inserts such that the tree height is $\Omega(n)$.

In this lecture, we describe several schemes for maintaining BSTs such that the height of the tree is (about) logarithmic in the number of nodes. One of the main ideas is to ensure that for each node in the tree the heights of the left subtree and the right subtree are roughly equal. For a visual illustration of three of the trees described in this lecture, you can make use of the Java applet on this page.

1. AVL Trees

AVL trees are probably the earliest balanced BSTs, proposed by the two Soviet mathematicians G. M. Adelson-Velskii and E. M. Landis in 1962. The main idea is intuitive: we call a node balanced if the heights of its two subtrees differ by at most 1. A BST is an AVL tree if all nodes are balanced. This property is called the AVL property. There are two things we have to do to turn this idea into a theoretically sound data structure:

1. Prove that if a tree is an AVL tree, then its height is O(log n) where n is the number of keys in the tree.
2. Show how to do insert and delete in O(log n) time while maintaining the AVL property.

1.1. AVL property implies logarithmic height

Theorem: an AVL tree on n nodes has O(log n) height.

Proof. Let $h(n)$ be the maximum height of an AVL tree with $n$ nodes. We want to show that $h(n) \leq c\log n$ where $c$ is some constant. It turns out that proving the contrapositive is a little easier. Let $n(h)$ be the minimum number of nodes in an AVL tree with height $h$; we will derive a lower bound for $n(h)$ in terms of $h$, which will then lead to an upper bound on $h(n)$ in terms of $n$.

For simplicity, we will count all of the NULL pointers as leaves of the tree. Thus, a BST is a full binary tree: each non-NULL node has 2 children. It is not hard to see that $n(1) = 1$ and $n(2) = 2$. Now, consider an AVL tree with height $h>2$ with the minimum possible number of nodes $n(h)$. Then, it must be the case that one of the subtrees of the root has height $h-1$ with $n(h-1)$ nodes and the other subtree has height $h-2$ with $n(h-2)$ nodes. Thus, we have the following recurrence:

$n(h) = 1 + n(h-1) + n(h-2), \text{ for } h \geq 3$

Define $g(h) = n(h) + 1$; then we have $g(1) = 2$, $g(2) = 3$, and

$g(h) = g(h-1) + g(h-2), \text{ for } h \geq 3$

Because $g(1) = F_3$ and $g(2) = F_4$, the third and fourth Fibonacci numbers, and because $g(h)$ obeys the same recurrence as the Fibonacci recurrence, we conclude that $g(h) = F_{h+2}$. Consequently,

$n(h) = \frac{\varphi^{h+2} - (-1/\varphi)^{h+2}}{\sqrt 5} - 1 = \Omega(\varphi^h)$

where $\varphi = \frac{1+\sqrt 5}{2} \approx 1.618$ is the golden ratio. Taking logs on both sides of the inequality, we obtain $h = O(\log n(h))$.
Since $n(h)$ is the minimum possible number of nodes in any AVL tree with height $h$, we conclude that for any AVL tree on $n$ nodes with height $h$, we have $h = O(\log n)$ as desired. 1.2. Maintaining the AVL property After an insert (using the same algorithm as in a normal BST), some subtree might be 2 taller than its sibling subtree. We thus will have to readjust the tree to rebalance it. One important property to note is that the only nodes which might become unbalanced after a new node v is inserted are the nodes along the path from v back up to the root of the tree. Because, those are the nodes whose heights might be affected by the insertion. To keep track of which nodes are balanced or not, we will add an extra field to each node called the balance field, which is defined to be the height of the left subtree minus the height of the right subtree of that node. struct AVLNode { // can also use static const instead of enum enum { LEFT_HEAVY = 1, BALANCED = 0, RIGHT_HEAVY = -1}; int balance; Key key; Value value; AVLNode* left; AVLNode* right; AVLNode* parent; AVLNode(const Key& k, const Value& v) : key(k), value(v), parent(NULL), left(NULL), right(NULL), balance(BALANCED) {} std::string to_string() const { std::ostringstream oss; oss << key << "(" << balance << ")"; return oss.str(); We want a node’s balance to be 1, 0, or -1. After inserting a node v into the tree, let a be the first node on the path from v back to the root that is unbalanced. Note that a‘s balance factor has to be 2 or -2. Let b be the ancestor of v which is the child of a. Note that v‘s parent cannot be unbalanced after v‘s insert, and thus a has to be at the very least v‘s grandparent. Node b might be v‘s We look for a by moving up the tree from v, readjusting the nodes’ balance fields along the way. If we don’t find a then we are done, no rebalancing needed. If we do find a, then there are four cases to consider: the right-right case, the right-left case, the left-left case, and the left-right case. Below is the picture for the right-right case, which should be self-explanatory: We readjust the tree by doing a left rotation about node a as shown in the picture. For the right-left case, we perform a double rotation: imagine doing a right-rotate about node c first to bring b up, and then a left-rotate about node a to bring b one more step up. Note that the new node can be in either T2 or T3 in the following picture. The left-left and the left-right cases are symmetric to the above two cases. A very important point to notice is that after doing the rotation (double or single), we no longer have to fix the balance fields of nodes further up the tree! Exercise: argue why after a single or double rotation, all nodes higher up in the tree are balanced. Here is a simple interface for AVL tree: * ***************************************************************************** * AVLTree.h: a simple implementation of the AVL Tree * Author: Hung Q. 
Ngo * ***************************************************************************** #ifndef AVLTREE_H_ #define AVLTREE_H_ #include <iostream> #include <sstream> template <typename Key, typename Value> class AVLTree { AVLTree() : root(NULL) { } void inorder_print() { inorder_print(root); std::cout << std::endl; } void preorder_print() { preorder_print(root); std::cout << std::endl; } void postorder_print() { postorder_print(root); std::cout << std::endl; } bool insert(Key, Value); bool find(Key key) { return search(root, key) != NULL; } const Value& get(Key); const Value& minimum(); const Value& maximum(); bool remove(Key); void clear() { clear(root); } // The node is similar to a BSTNode; use parent pointer to simplify codes // A tree is simply a pointer to a BSTNode, we will assume that variables of // type Key are comparable using <, <=, ==, >=, and > // we do not allow default keys and values struct AVLNode { enum { LEFT_HEAVY = 1, BALANCED = 0, RIGHT_HEAVY = -1}; int balance; Key key; Value value; AVLNode* left; AVLNode* right; AVLNode* parent; AVLNode(const Key& k, const Value& v) : key(k), value(v), parent(NULL), left(NULL), right(NULL), balance(BALANCED) {} std::string to_string() const { std::ostringstream oss; oss << key << "(" << balance << ")"; return oss.str(); AVLNode* root; AVLNode* successor(AVLNode* node); AVLNode* predecessor(AVLNode* node); AVLNode* search(AVLNode*, Key); void rebalance(AVLNode*); void right_rotate(AVLNode*&); void left_rotate(AVLNode*&); void inorder_print(AVLNode*); void preorder_print(AVLNode*); void postorder_print(AVLNode*); void clear(AVLNode*&); #include "AVLTree.cpp" // only done for template classes And here is the implementation of the insertion strategy just described. * ----------------------------------------------------------------------------- * insert a new key, value pair, return * true if the insertion was successful * false if the key is found already * ----------------------------------------------------------------------------- template <typename Key, typename Value> bool AVLTree<Key, Value>::insert(Key key, Value value) { AVLNode* p = NULL; AVLNode* cur = root; while (cur != NULL) { p = cur; if (key < cur->key) cur = cur->left; else if (key > cur->key) cur = cur->right; return false; // key found, no insertion // insert new node at a leaf position AVLNode* node = new AVLNode(key, value); node->parent = p; if (p == NULL) // empty tree to start with root = node; else if (node->key < p->key) p->left = node; p->right = node; // readjust balance of all nodes up to the root if necessary return true; * ----------------------------------------------------------------------------- * left rotate around node c * c b * / \ / \ * C b --> c B * / \ / \ * A B C A * adjust parent pointers accordingly * ----------------------------------------------------------------------------- template <typename Key, typename Value> void AVLTree<Key, Value>::left_rotate(AVLNode*& node) { if (node == NULL || node->right == NULL) return; AVLNode* c = node; AVLNode* b = c->right; AVLNode* p = c->parent; // first, adjust all parent pointers b->parent = p; c->parent = b; if (b->left != NULL) b->left->parent = c; // make sure c's parent points to b now if (p != NULL) { if (p->right == c) p->right = b; else p->left = b; // finally, adjust downward pointers c->right = b->left; b->left = c; node = b; // new local root if (root == c) root = b; // new root if necessary * ----------------------------------------------------------------------------- * right rotate around node 
c * c b * / \ / \ * b C --> A c * / \ / \ * A B B C * adjust parent pointers accordingly * ----------------------------------------------------------------------------- template <typename Key, typename Value> void AVLTree<Key, Value>::right_rotate(AVLNode*& node) { if (node == NULL || node->left == NULL) return; AVLNode* c = node; AVLNode* b = c->left; AVLNode* p = c->parent; // first, adjust all parent pointers b->parent = p; c->parent = b; if (b->right != NULL) b->right->parent = c; // next, make sure c's parent points to b if (p != NULL) { // make sure c's parent points to b now if (p->right == c) p->right = b; else p->left = b; // finally, adjust the downward pointers c->left = b->right; b->right = c; node = b; // new local root if (root == c) root = b; // new root if necessary * ----------------------------------------------------------------------------- * node points to the root of a sub-tree which just had a height increase * we assume the invariance that node's parent is not unbalanced, which * certainly holds for the first node that got inserted * the 'balance' field of the parent may not be correct and we have to * adjust that too * ----------------------------------------------------------------------------- template <typename Key, typename Value> void AVLTree<Key, Value>::rebalance(AVLNode* node) { AVLNode* p = node->parent; if (p == NULL) return; // first, recompute 'balance' of the parent; node got a heigh increase if (p->left == node) // if there's no grandparent or if the parent is balanced then we're done AVLNode* gp = p->parent; // the grand parent if (gp == NULL || p->balance == AVLNode::BALANCED) return; // if we get here then the parent p just got a height increase // next, see if the grand parent is unbalanced if (node == p->left) { if (p == gp->left) { if (gp->balance == AVLNode::LEFT_HEAVY) { // this is the LL case // gp(+2) p (0) // / \ / \ // p(+1) B --> node gp (0) // / \ / \ // node A A B p->balance = gp->balance = AVLNode::BALANCED; } else { // p == gp->right if (gp->balance == AVLNode::RIGHT_HEAVY) { // the RL case // this is the RL case // gp(-2) node(0) // / \ / \ // A p(+1) --> gp(x) p(y) // / \ /\ / \ // node D A B C D // / \ // B C // computing the new balance is a little trickier, depending on // with of B & C is heavier switch (node->balance) { case AVLNode::LEFT_HEAVY: p->balance = AVLNode::RIGHT_HEAVY; gp->balance = AVLNode::BALANCED; case AVLNode::BALANCED: // only happens if B & C are NULL p->balance = AVLNode::BALANCED; gp->balance = AVLNode::BALANCED;; case AVLNode::RIGHT_HEAVY: p->balance = AVLNode::BALANCED;; gp->balance = AVLNode::LEFT_HEAVY; node->balance = AVLNode::BALANCED; } else { // node == p->right if (p == gp->right) { if (gp->balance == AVLNode::RIGHT_HEAVY) { // this is the RR case // gp(-2) p(0) // / \ / \ // A p(-1) --> gp(0) node // / \ / \ / \ // B node A B p->balance = gp->balance = AVLNode::BALANCED; } else { // p == gp->left if (gp->balance == AVLNode::LEFT_HEAVY) { // this is the LR case // gp(+2) node(0) // / \ / \ // p(-1) D --> p(x) gp(y) // / \ /\ / \ // A node A B C D // / \ // B C // computing the new balance is a little trickier, depending on // with of B & C is heavier switch (node->balance) { case AVLNode::LEFT_HEAVY: p->balance = AVLNode::BALANCED; gp->balance = AVLNode::RIGHT_HEAVY; case AVLNode::BALANCED: // only happens if B & C are NULL p->balance = AVLNode::BALANCED; gp->balance = AVLNode::BALANCED;; case AVLNode::RIGHT_HEAVY: p->balance = AVLNode::LEFT_HEAVY; gp->balance = AVLNode::BALANCED;; node->balance = 
AVLNode::BALANCED; Insertion as desribed above runs in time O(log n) because it involves one pass down and one pass up the tree’s height. Also, we use the node structure which has a parent pointer to simplify the code. We do not need parent pointers, but without them we will then probably have to use a stack to move up after an insertion. Removing a node from an AVL tree has two steps: • The first step is done in the same way as a normal BST. If the node has at most one child then we splice it. Let v be its child (or NULL). If the node has two children, we find its successor (which is the minimum node on the right subtree) and splice the successor, put the successor’s payload in the node’s payload. In this case, let v be the successor’s other child (which could be • In the second step, we fix the balances of all nodes from v up to the root and perform rotations accordingly. The strategy is exactly the same as that in the insertion case with one key difference: we might have to do more than one single/double rotations, all the way up to the root. In this case, the subtree rooted at v just lost 1 from its height. Consider, for instance, the case when v is the left child of its parent. We will have to rebalance at v‘s parent if it was already right-leaning. Let u be v‘s sibling. If u is left leaning then we do a double rotation. Otherwise, a single rotation at the parent node will suffice. Then, we repeat the strategy if there is an overal height reduction at the parent node. Exercise: complete the AVLTREE::remove() function. 2. Red-Black Trees Red Black trees were proposed by Rudolf Bayer in 1972, and refined by Leonidas J. Guibas and Robert Sedgewick in 1978. Guibas and Redgewick presented the coloring scheme and the name RB tree sticks. RB trees are found in many practical search structures. C++’s std::map and std::set are typically implemented with a red-black tree, so are symbol tables in many operating systems and other One way to describe the intuition behind a RB tree is as follows. In a perfectly balanced tree all the paths from a node to its leaves have the same length. However, this property is way too strong to be maintainable. Hence, we will design a balanced BST by keeping a “skeleton” of black nodes which have the aforementioned property of a perfectly balanced tree. At the same time, we cut the tree some slack by allowing some other nodes to be red, positioned at various placed in the tree so that the constraint is looser. However, we cannot give the red nodes too much freedom because they will destroy the balance maintained by the black skeleton. Thus, we will enforce the property that red nodes can not have red children. This way, red nodes are positioned scatteredly throughout the tree giving the “right” amount of slack while keeping the tree mostly balanced. Rigorously, a red-black tree is a BST with the following properties: • (Bicolor property) All nodes are colored red or black • (Black root and leaves) The root and the leaves (the NULL nodes) are colored black • (Black height property) For every internal node v, the number of black nodes we encounter on any path from v down to one of its leaves are the same. This number is called the black height of v. Note that the black height of a node does not count the node’s own color. • (Black parent property) A red node must have a black parent. Or, equivalently, every red node must have two black children. This is the property that ensures the sparsity of red nodes. Here is an example of what a RB tree looks like. 2.1. 
RB trees have logarithmic heights Consider any RB tree T. Suppose we lump together every red node with its parent (which must be black); then, we obtain a tree T' which is no longer a binary tree. The tree T' will be a (2,4)-tree (or 2-3-4 tree), where each internal node has 2, 3, or 4 children. If the RB tree shown above is T, then its 2-3-4 counterpart is Let h be the height of T and h' be the height of T'. Also, let n be the number of keys in T. First of all, we claim that the number of black leaves (squares in the pictures) is exactly n+1. This is because the RB tree is a full binary tree and from that we can prove this claim by induction. (Sketch: if the left subtree of the root has a keys and the right subtree has b keys, then there are totally a+b+1 keys in the tree and (a+1)+(b+1) leaves by the induction hypothesis.) Now, due to the 2 to 4 branching factors, the number of leaves varies between 2^h' and 4^h': $2^{h'} \leq n+1 \leq 4^{h'}$. By the black parent property, the height of T' is at least half the height of T. Hence, $h \leq 2h' \leq 2\log_2(n+1)$ 2.2. How to maintain the RB properties After an insert. We will always color the newly inserted node red, unless it is the root in which case we color it black. Call the newly inserted node z. If z or its parent is black, then there is nothing to do. All properties are still satisfied. Now, suppose z‘s parent is red. Then, we have the double red problem. We fix this problem by considering the following cases. • If z‘s uncle is red, then we recolor z‘s parent and uncle black, grandparent red (it had to be black before), and consider the grandparent the new z. We potentially have a new double red problem but it is now higher up in the tree. If z‘s grand parent is the root then we color it black and we are done. The following picture illustrates this case. After a delete. When we delete a node from a BST, we either splice it or splice its successor. Let z be the children of the node that got spliced. If we spliced a red node, then we’re done; there is nothing to fix. Suppose we spliced a black node. In this case, we will violate the black height property of all ancestors of z (except for the trivial case when z is the root or when z‘s parent is the root). Conceptually, if z was allowed to have “double blackness” then the black height property is not violated. Note that z‘s sibling cannot be a leaf node; thus, the sibling must have two children. We consider three cases as follows. 3. (2,4)-Trees A (2,4)-tree, also called a 2-3-4 tree was invented by Rodolf Bayer in 1972. It is a special case of B-trees and multiway search trees. So let’s talk about a multiway search tree first. In a multiway search tree, nodes no longer are restricted to having 2 children and holding one key. Nodes can hold multiple keys. A d-node is a node that holds d-1 keys and have d children. Let K[1] ≤ K[2] ≤ ... ≤ K[d-1] be the keys stored at this node. And, let v[1], v[2] … v[d] be the d children pointers. Then, in a multiway search tree, for every d-node, it must hold that the keys of the v[i] subtree is in between K[i-1] and K[i] as shown in the following picture. (We implicitly understand that K[0] is minus infinity and K[d] is plus infinity.) Here’s an example of a multiway search tree. Searching in a multiway search tree is straightforward: to search for key K, we find the index i such that K[i-1] < K < K[i] and follow link v[i]. Exercise: Similarly, you should think of the algorithms for finding the maximum, minimum, successor, and predecessor. 
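The remaining recoloring/rotation figures (the black-uncle insert case and the three delete cases) did not survive the conversion, so rather than guess at them, here is a small self-contained C++ sketch that simply checks the bicolor, black-parent, and black-height properties on a hand-built example. The node layout is an assumption made for this sketch only; it is not the author's code.

#include <cstdio>

// Minimal node for the red-black property checker below (assumed layout).
struct RBNode {
    int key;
    bool red;          // true = red, false = black
    RBNode* left;
    RBNode* right;
};

// Returns the black height of the subtree rooted at v (counting NULL leaves
// as black), or -1 if the black-parent or black-height property fails below v.
int black_height(const RBNode* v) {
    if (v == NULL) return 1;                                   // NULL leaves are black
    if (v->red && ((v->left && v->left->red) || (v->right && v->right->red)))
        return -1;                                             // red node with a red child
    int hl = black_height(v->left);
    int hr = black_height(v->right);
    if (hl == -1 || hr == -1 || hl != hr) return -1;           // mismatched black heights
    return hl + (v->red ? 0 : 1);
}

int main() {
    // Tiny example: a black 10 with red children 5 and 15.
    RBNode a = {5, true, NULL, NULL};
    RBNode b = {15, true, NULL, NULL};
    RBNode root = {10, false, &a, &b};
    std::printf("black height = %d\n", black_height(&root));   // prints 2
    return 0;
}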
Exercise: Does a pre/in/post-order traversal of a multiway search tree make sense? How about a level-order traversal? How do you write code for a level-order traversal of a multiway search tree?

A (2,4)-tree is a multiway search tree with two additional properties:

• Size property: all internal nodes in the tree are 2-, 3-, or 4-nodes. In particular, every internal node holds 1, 2, or 3 keys.
• Depth property: every leaf of a (2,4)-tree is at the same depth, called the depth of the tree.

Here is a sample (2,4)-tree.

3.1. A (2,4)-tree has logarithmic height

First, we show that the number of leaves of a (2,4)-tree is precisely n+1, where n is the number of keys stored in the tree. This holds for any "branching tree", that is, any tree in which every internal node has at least 2 children, so we prove the fact for branching trees. The proof is by induction on the height of the tree. If the tree has height 0 then the root is a leaf which stores no key. If the tree has height 1 then the claim certainly holds, because when the root is a d-node it has d children (which are leaves) and stores d-1 keys. If the tree has height at least 2, again suppose the root is a d-node. Let $n_1, \cdots, n_d$ be the numbers of keys stored in the d subtrees of the root. Since all subtrees of the root are also branching trees, by the induction hypothesis the total number of leaves is $(n_1+1)+(n_2+1)+\cdots+(n_d+1) = \sum_{i=1}^d n_i+d = n+1$, where $n$ is the total number of keys in the entire tree. Here, we used the fact that the root stores $d-1$ keys. Next, let $h$ be the height of a (2,4)-tree; by the same reasoning as in the RB tree case we have $2^h \leq n+1$, which implies $h \leq \log_2(n+1)$.

3.2. How to maintain the (2,4)-tree properties

We will assume that we do not store duplicate keys.

After an insert. We insert a new key into a (2,4)-tree in the following way. We search for it in the tree. If the new key is found then we do nothing, because duplicate keys are not allowed. If the new key is not found, then we end up at a leaf node. Let v be the parent of that leaf node. We insert the key into the correct relative position in that node v. Prior to the insertion of the new key, if v was a 2-node or a 3-node, then we are OK, because after the insertion of a new key v is at worst a 4-node. However, if v was already a 4-node then we have an overflow problem that we need to resolve. The solution is intuitive: we split the node into two and bring a middle key up one level, as shown in the following picture. In the worst case, the updates are propagated up to the root and we have an increase in tree height. The run time is proportional to the height of the tree, which is logarithmic, and both the size property and the depth property are still satisfied. A sketch of the split step appears below.
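Here is a rough sketch of that split step (an addition written against the illustrative MNode representation from the earlier sketch; it is not the notes' code).

    def split_overflowed(v, parent):
        """Split a node `v` that temporarily holds 4 keys (and 5 children) after an
        insertion; one middle key moves up into `parent`."""
        assert len(v.keys) == 4
        up = v.keys[2]                                # a middle key, promoted one level up
        left = MNode(v.keys[:2], v.children[:3])      # becomes a 3-node
        right = MNode(v.keys[3:], v.children[3:])     # becomes a 2-node
        i = parent.children.index(v)                  # v's position under its parent
        parent.keys.insert(i, up)                     # `up` now separates left and right
        parent.children[i:i + 1] = [left, right]
        # If the parent now holds 4 keys it has overflowed too, and the split is
        # repeated one level up; when the root itself splits, a new root is created
        # and the height of the tree grows by one.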
After a delete. To delete a key from a (2,4)-tree, we use a strategy similar to that of the BST case. If the key is at the lowest level, we simply remove it. If the key is at some node higher up, we find the successor key, which is the leftmost key in the right subtree (of the key to be removed), replace the key with its successor, and remove the successor. The problem occurs when we try to remove a key from a 2-node (which is at the lowest level, next to the leaves): this is called an underflow problem. To solve the underflow problem, we consider the following cases.

• If the underflowed node has an adjacent sibling which is a 3-node or a 4-node, we perform a "transfer" as shown in the following picture, which hopefully is self-explanatory.
• If the underflowed node has only one adjacent sibling which is a 2-node, or has 2 adjacent siblings both of which are 2-nodes, then we perform a fusion as shown in the following picture.

The fusion might cause an underflow at the parent node, in which case we repeat the process. The fusion may propagate all the way up to the root, in which case we have a height reduction of the tree.

4. Splay trees

Splay trees are self-adjusting binary search trees invented by Dan Sleator and Bob Tarjan in 1985. They do not require any additional field in each node, such as a color or a balance. All operations have amortized cost O(log n). In addition, frequently accessed nodes are closer to the root. In a splay tree, insertion, deletion, and search are done exactly in the same way as in a normal BST, with one additional splay(c) step, where c is a node to be defined later.

4.1. Splaying

To "splay" a node c, we perform O(height of tree) baby steps. Each of these baby steps is either a zig-zig, a zig-zag, or a zig, defined as follows.

• A zig-zig is performed when c has a grandparent and its side (left or right) with respect to its parent is the same as its parent's side with respect to its grandparent.
• A zig-zag is performed when c has a grandparent and it is on a different side of its parent than its parent is with respect to the grandparent.
• A zig is performed when c has no grandparent, i.e., its parent is the root; it is a single rotation that brings c above its parent.

To splay a node c, we repeatedly perform the baby steps until the node has floated all the way up to the root. The cost of splaying is proportional to the depth of the splayed node.

4.2. Which node to splay and why it works

After a search, we splay the node whose key is found; or, if the key is not found, then we splay the last non-NULL node seen in the search. After an insert, we splay the newly inserted node. After a delete, we splay the child of the spliced node.

The main question is: why does this work, even just on average? It is conceivable that the tree will become so unbalanced that its height is $\Omega(n)$, and thus a search might take linear time. How can we then say that on average an operation takes logarithmic time?

Let's actually construct a sequence of insertions such that a later search takes linear time. Suppose we insert the keys 1, 2, ..., n in that order. After each insertion, the new key ends up as the right child of the root, and a zig operation brings it up top (insert 1; insert 2, zig; insert 3, zig; and so on).

Now, after n insertions as described, when we search for key 1, we will have to take about n steps to get down to the leftmost branch, and then about n/2 splaying baby steps to bring 1 up all the way to the root. Overall, we spent cn units of time, where c is some constant. What is going for us is that the price we paid for each of the previous insertions was also a constant. And thus, when we distribute the total cost over the n+1 operations we will still have a small cost per operation. For example, if each of the inserts took d units of time, then the total price we pay per operation is $(nd + nc)/n = d+c$, which is a constant!
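As an aside, here is a rough Python sketch of the baby steps from Section 4.1 (an illustration with assumed node fields and helper names, not the original code); the pictures in the notes define the steps, and the sketch shows one standard way to realize them with single rotations.

    class SplayNode:
        def __init__(self, key):
            self.key, self.left, self.right, self.parent = key, None, None, None

    def rotate_up(x):
        """Single rotation lifting `x` above its parent."""
        p, g = x.parent, x.parent.parent
        if p.left is x:                       # right rotation at p
            p.left, x.right = x.right, p
            if p.left is not None:
                p.left.parent = p
        else:                                 # left rotation at p
            p.right, x.left = x.left, p
            if p.right is not None:
                p.right.parent = p
        p.parent, x.parent = x, g
        if g is not None:                     # reattach the rotated subtree
            if g.left is p:
                g.left = x
            else:
                g.right = x

    def splay(c):
        """Float `c` up to the root by repeated zig, zig-zig, and zig-zag steps.
        (A real implementation would also update the tree's root pointer.)"""
        while c.parent is not None:
            p, g = c.parent, c.parent.parent
            if g is None:                             # zig: parent is the root
                rotate_up(c)
            elif (g.left is p) == (p.left is c):      # zig-zig: same side twice
                rotate_up(p)                          # rotate the parent above the grandparent,
                rotate_up(c)                          # then c above its parent
            else:                                     # zig-zag: opposite sides
                rotate_up(c)                          # rotate c above its parent,
                rotate_up(c)                          # then above its old grandparent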
We will transfer this intuition into an (amortized) analysis by giving each operation a budget of $O(\log n)$ "dollars." Then, we show that the money left in the accounts of all the nodes plus the $O(\log n)$ new dollars is always sufficient to pay for any of the operations. If an operation costs less money, we deposit the residual amount into the accounts to pay for future and potentially more expensive operations.

4.3. Analysis

Define the "size" of a node v, denoted by n(v), in a splay tree to be the number of keys in its subtree. And call the "rank" of a node, denoted by r(v), the value $\log_2(2n(v)+1)$. Note that the ranks and the sizes will change over time.

Let P be an operation to be performed: search, insert, or delete. Let cost(P) denote the cost of performing P. Our strategy is to make a "deposit" D(P) (in "milliseconds", or "dollars") for operation P in such a way that the deposit plus whatever amount of "money" we still have in the tree is sufficient to pay for the cost of the operation. Our ultimate objective is to prove the following:

If each operation P has a deposit of D(P) = c·log(n) dollars, then the splay tree "bank" will always have sufficient funds to pay for all operations. In particular, the amortized cost of search, insert, and delete in a splay tree is O(log n).

The splay tree bank is organized as follows. The bank maintains a strict accounting policy.

The invariant: every node v in the tree has an account that holds $r(v) = \log_2(2n(v)+1)$ dollars.

We will make sure that each operation P on the tree deposits an amount D(P) sufficient for the invariant to hold. The invariant certainly holds when the tree is empty, i.e., when it has only one NULL node, whose rank is r(v) = 0. Let r(T) denote the total amount of money the bank T (i.e., a splay tree) possesses, called the banking reserve: $r(T) = \sum_{v \in T} r(v)$.

In order to maintain the invariant after an operation P is performed on the tree, we must make sure that the deposit D(P) is sufficiently large so that $r(T)+D(P)-\text{cost}(P) \geq r(T')$, where T' is the tree resulting from performing operation P on T. In other words, we want the variation in the banking reserve to be small relative to the newly deposited amount: $r(T') - r(T) \leq D(P)-\text{cost}(P)$.

Thus, in order to determine how large D(P) should be, we need to determine the variation $r(T') - r(T)$ after each pair of operations: P = insert + splay, P = delete + splay, P = search + splay. For example, an insert changes the tree from T to $T_{\text{in}}$, and then the splaying changes the tree from $T_{\text{in}}$ to T'. And, because $r(T') - r(T) = (r(T') - r(T_{\text{in}})) + (r(T_{\text{in}}) - r(T))$, we can assess the variations separately instead of assessing the variation for each pair insert + splay, delete + splay, and search + splay.

In what follows, let T' be the tree after an operation was performed (search, insert, delete, splay, or baby step), and T the tree before the operation.

After a delete: r(T') can only be smaller than r(T), because the ranks of all nodes from the deleted node up to the root are reduced. Hence, in this case r(T') - r(T) ≤ 0.

After a search: r(T') - r(T) = 0.

After an insert: suppose we just inserted node $v_0$ at depth $d$, and $v_0, v_1, \cdots, v_d$ is the path from $v_0$ up to the root of the tree. Then, the difference in banking reserve is $r(T') - r(T) = \sum_{i=0}^d r'(v_i) - \sum_{j=1}^d r(v_j)$, where $r'(v_i)$ is the rank of node $v_i$ after the insertion. Also, let $n'(v_i)$ be the size of node $v_i$ after the insertion. Then, $n'(v_0) \leq n(v_1)$, $n'(v_1) \leq n(v_2)$, and so on, up to $n'(v_{d-1}) \leq n(v_d)$. Thus, $r'(v_i) \leq r(v_{i+1})$ for every $i \in \{0, 1, \dots, d-1\}$.
Consequently, $r(T')-r(T) \leq r'(v_d) \leq \log_2(2n+1)$.

After a splay: since a splay operation consists of many baby steps, let us estimate the banking reserve difference after each baby step. Please refer to the pictures above in the following derivation. We will use the fact that, for any two real numbers $a, b>0$,

$\log_2 a + \log_2 b \leq 2\log_2(a+b)-2$ (*)

because this inequality is equivalent to $ab \leq (a+b)^2/4$, which is equivalent to $(a-b)^2\geq 0$.

• zig-zig: after zig-zigging node c up two levels, from inequality (*) we can see that $r(c)+r'(y) \leq 2r'(c)-2$. The banking reserve difference can then be bounded by $3(r'(c)-r(c))-2$; the last inequality in the derivation follows from the facts that $r'(x) \leq r'(c)$ and $r(x) \geq r(c)$.
• zig-zag: after zig-zagging node c, from inequality (*) we know $r'(x)+r'(y) \leq 2r'(c)-2$. Thus, the difference can again be bounded by $3(r'(c)-r(c))-2$.
• zig: in this case,
$\begin{array}{rcl}r(T')-r(T)&=&r'(c)+r'(y)-r(c)-r(y)\\&=&r'(y)-r(c)\\&\leq& r'(c)-r(c)\\&\leq&3(r'(c)-r(c)),\end{array}$
where the second line uses $r'(c) = r(y)$ (after the zig, c takes the place of its parent y at the top of the subtree).

In conclusion, after a zig-zig or a zig-zag the banking reserve difference is at most $3(r'(c)-r(c))-2$, and after a zig the difference is at most $3(r'(c)-r(c))$.

Now, when we splay node $c$ at depth $d$ all the way to the root, there will be $\lceil d/2 \rceil$ baby steps, all of which are zig-zig or zig-zag except for possibly the last step. Let $r_0(c)$ be the rank of $c$ before any baby step is done, and $r_i(c)$ be the rank of $c$ after the $i$th baby step, for $i\in \{1,2,\dots, \lceil d/2 \rceil\}$. Then, the net banking reserve difference is at most

$\begin{array}{rcl} r(T')-r(T) &\leq& \sum_{i=1}^{\lceil d/2 \rceil} [3(r_i(c)-r_{i-1}(c))-2] + 2\\&=& 3(r_{\lceil d/2 \rceil}(c)-r_0(c))-2\lceil d/2 \rceil + 2\\ &\leq& 3(r_{\lceil d/2 \rceil}(c)-r_0(c))+2-d. \end{array}$

Now, recall the inequality $r(T') - r(T) \leq D(P)-\text{cost}(P)$ that we want to maintain after each pair search + splay, insert + splay, and delete + splay. We need to determine the deposit D(P) to be made for operation P. Consider first the pair P = search + splay. Let d be the depth of the splayed node. Then, the cost of this operation is proportional to d; and, by a change of currency, we can assume that it costs exactly d dollars. We have shown that the net banking reserve difference after search + splay is bounded by

$\begin{array}{rcl} r(T')-r(T) &\leq& 3(r_{\lceil d/2 \rceil}(c)-r_0(c))+2-d\\&=&3(r_{\lceil d/2 \rceil}(c)-r_0(c))+2 - \text{cost}(P).\end{array}$

Hence, if we deposit an amount $D(P) = 3(r_{\lceil d/2 \rceil}(c)-r_0(c))+2 = O(\log n)$, we have enough to cover the cost plus the extra banking reserve needed. Note that $r_{\lceil d/2 \rceil}(c) = \log_2(2n+1)$.

The analysis for the pair delete + splay is similar. For insert + splay, the net difference is

$\begin{array}{rcl} r(T')-r(T) &\leq& \log_2(2n+1) + 3(r_{\lceil d/2 \rceil}(c)-r_0(c))+2-d\\&=&\log_2(2n+1)+3(r_{\lceil d/2 \rceil}(c)-r_0(c))+2 - \text{cost}(P),\end{array}$

and thus an $O(\log n)$ deposit is also sufficient.

5. Experimental performance analysis of balanced search trees

We have analyzed some of the most important balanced BSTs. They perform well theoretically, ensuring that each operation takes logarithmic time (worst-case or on average). With theoretically identical performance like this, we do not have a good basis for picking which data structure to use in a real-world problem. In such cases, the only choice is to implement them all and see which one fits the problem at hand best. Fortunately, Ben Pfaff has done a set of experiments like that. Here's the report.
There are many interesting conclusions drawn from the experiments; the following is probably the most relevant to this discussion:

We found that in selecting data structures, unbalanced BSTs are best when randomly ordered input can be relied upon; if random ordering is the norm but occasional runs of sorted order are expected, then red-black trees should be chosen. On the other hand, if insertions often occur in a sorted order, AVL trees excel when later accesses tend to be random, and splay trees perform best when later accesses are sequential or clustered. For node representation, we found that parent pointers are generally fastest, so they should be preferred as long as the cost of an additional pointer field per node is not important. If space is at a premium, threaded representations conserve memory and lag only slightly behind parent pointers in speed.
Summary: Trajectory Triangulation: 3D Reconstruction of Moving Points from a Monocular Image Sequence
Shai Avidan, Member, IEEE, and Amnon Shashua, Member, IEEE

Abstract: We consider the problem of reconstructing the 3D coordinates of a moving point seen from a monocular moving camera, i.e., to reconstruct moving objects from line-of-sight measurements only. The task is feasible only when some constraints are placed on the shape of the trajectory of the moving point. We coin the family of such tasks "trajectory triangulation." We investigate the solutions for points moving along a straight-line and along conic-section trajectories. We show that if the point is moving along a straight line, then the parameters of the line (and, hence, the 3D position of the point at each time instant) can be uniquely recovered, and by linear methods, from at least five views. For the case of conic-shaped trajectory, we show that generally nine views are sufficient for a unique reconstruction of the moving point and fewer views when the conic is of a known type (like a circle in 3D Euclidean space, for which seven views are sufficient). The paradigm of trajectory triangulation, in general, pushes the envelope of processing dynamic scenes forward. Thus static scenes become a particular case of a more general task of reconstructing scenes rich with moving objects (where an object could be a single point).

Index Terms: Structure from motion, multiple-view geometry, dynamic scenes.

We wish to remove the static scene assumption in 3D-from-2D reconstruction by introducing a new paradigm we call trajectory triangulation, which pushes the envelope of processing dynamic scenes forward.
Linear Supertypes Type Hierarchy Learn more about scaladoc diagrams 1. final def !=(arg0: Any): Boolean 2. final def ##(): Int 4. def ->[B](y: B): (RichLong, B) 5. Returns true if this is less than that Returns true if this is less than that Definition Classes 6. Returns true if this is less than or equal to that. Returns true if this is less than or equal to that. Definition Classes 7. final def ==(arg0: Any): Boolean 8. Returns true if this is greater than that. Returns true if this is greater than that. Definition Classes 9. Returns true if this is greater than or equal to that. Returns true if this is greater than or equal to that. Definition Classes 10. def abs: Long Returns the absolute value of this. 11. final def asInstanceOf[T0]: T0 12. def byteValue(): Byte 13. def compare(y: Long): Int Result of comparing this with operand that. Result of comparing this with operand that. Implement this method to determine how instances of A will be sorted. Returns x where: □ x < 0 when this < that □ x == 0 when this == that □ x > 0 when this > that Definition Classes OrderedProxy → Ordered 14. def compareTo(that: Long): Int Result of comparing this with operand that. Result of comparing this with operand that. Definition Classes Ordered → Comparable 15. def doubleValue(): Double 18. def ensuring(cond: Boolean, msg: ⇒ Any): RichLong 20. def floatValue(): Float 21. def formatted(fmtstr: String): String Returns string formatted according to given format string. Returns string formatted according to given format string. Format strings are as for String.format (@see java.lang.String.format). Implicit information This member is added by an implicit conversion from RichLong to Predef.StringFormat[RichLong] performed by method StringFormat in scala.Predef. Definition Classes 22. def getClass(): Class[_ <: AnyVal] Definition Classes AnyVal → Any 23. def intValue(): Int 24. final def isInstanceOf[T0]: Boolean 25. def isValidByte: Boolean Returns true iff this has a zero fractional part, and is within the range of scala.Byte MinValue and MaxValue; otherwise returns false. 26. def isValidChar: Boolean Returns true iff this has a zero fractional part, and is within the range of scala.Char MinValue and MaxValue; otherwise returns false. 27. def isValidInt: Boolean Returns true iff this has a zero fractional part, and is within the range of scala.Int MinValue and MaxValue; otherwise returns false. 28. def isValidLong: Boolean 29. def isValidShort: Boolean Returns true iff this has a zero fractional part, and is within the range of scala.Short MinValue and MaxValue; otherwise returns false. 30. def isWhole(): Boolean 31. def longValue(): Long 32. def max(that: Long): Long Returns this if this > that or that otherwise. 33. def min(that: Long): Long Returns this if this < that or that otherwise. 36. val self: Long 37. def shortValue(): Short 38. def signum: Int Returns the signum of this. 41. def toBinaryString: String 42. def toByte: Byte Returns the value of this as a scala.Byte. Returns the value of this as a scala.Byte. This may involve rounding or truncation. Definition Classes 43. def toChar: Char Returns the value of this as a scala.Char. Returns the value of this as a scala.Char. This may involve rounding or truncation. Definition Classes 44. def toDouble: Double Returns the value of this as a scala.Double. Returns the value of this as a scala.Double. This may involve rounding or truncation. Definition Classes 45. def toFloat: Float Returns the value of this as a scala.Float. 
Returns the value of this as a scala.Float. This may involve rounding or truncation. Definition Classes 46. def toHexString: String 47. def toInt: Int Returns the value of this as an scala.Int. Returns the value of this as an scala.Int. This may involve rounding or truncation. Definition Classes 48. def toLong: Long Returns the value of this as a scala.Long. Returns the value of this as a scala.Long. This may involve rounding or truncation. Definition Classes 49. def toOctalString: String 50. def toShort: Short Returns the value of this as a scala.Short. Returns the value of this as a scala.Short. This may involve rounding or truncation. Definition Classes 51. def toString(): String Definition Classes Proxy → Any 52. def underlying(): AnyRef 53. def unifiedPrimitiveEquals(x: Any): Boolean Should only be called after all known non-primitive types have been excluded. Should only be called after all known non-primitive types have been excluded. This method won't dispatch anywhere else after checking against the primitives to avoid infinite recursion between equals and this on unknown "Number" variants. Additionally, this should only be called if the numeric type is happy to be converted to Long, Float, and Double. If for instance a BigInt much larger than the Long range is sent here, it will claim equality with whatever Long is left in its lower 64 bits. Or a BigDecimal with more precision than Double can hold: same thing. There's no way given the interface available here to prevent this error. Definition Classes 54. def unifiedPrimitiveHashcode(): Int Definition Classes 57. def →[B](y: B): (RichLong, B) There is no reason to round a Long, but this method is provided to avoid accidental conversion to Int through Float. (Since version 2.11.0) This is an integer type; there is no reason to round it. Perhaps you meant to call this on a floating-point value?
Rade Zivaljevi\'c
Matematicki institut SANU, Beograd, Yugoslavia

Abstract: A pair $(Y,\tau)$, where $Y$ is an internal set, whereas $\tau$ is a topology (usually external) on $Y$, is called a $^*$-topological space if $\tau$ has an internal base. The main example is $(^*X,\overline\tau)$ where $(X,\tau)$ is a standard topological space and $\overline\tau$ the topology generated by $^*\tau$. This is the so called $Q$-topology on $^*X$ induced by $(X,\tau)$, a notion introduced by A. Robinson in [4]. This note contains negative answers to some questions of R. W. Button, [1], who asked whether the following implications
$$\align &(^*X,\overline\tau)\enskip \text{normal}\enskip\Rightarrow\enskip (X,\tau)\enskip\text{normal}\\ &(X,\tau)\enskip\text{scattered}\enskip\Rightarrow\enskip (^*X,\overline\tau)\enskip\text{scattered} \endalign$$
hold in some enlargement.

Classification (MSC2000): 03M05, 54J05
Full text of the article: Electronic fulltext finalized on: 3 Nov 2001.
A Guided Tour of the Math Forum as a Portal to Mathematics on the Internet The MATH FORUM as a Portal to Mathematics on the Internet A Guided Tour for TechEd01 March, 2001 ... Ontario, CA Shelly Berman shelly@mathforum.org About The Math Forum The goal for this presentation is to give a quick overview of the Math Forum features and services that are of special interest to teachers and teacher educators. The Math Forum Home Page - http://mathforum.org/ The Math Forum is an online community of teachers, students, researchers, parents, educators, and citizens at all levels who have an interest in mathematics and math education. The Math Forum has been consistently recognized as the leader in its field, and continues to provide high quality content and useful features, attracting about 4 million pageviews each month. The home page offers easy access to all of the Math Forum services, with specific entry points provided to aid navigation for the novice, such as the Student Center and Teachers' Place. There are also links to 'What's New' on the site, a Search for Math on the Internet, and more. Problems of the Week - http://mathforum.org/pow/ The Problems of the Week are designed to challenge students with non-routine problems, and to encourage them to explain their solutions. There are six Problems of the Week (PoWs): Elementary, Middle School, Algebra, Geometry, Trigonometry & Calculus, and Discrete Mathematics. While we will continue to provide Problems of the Week, beginning this fall, a fee will be required to access a "mentored" environment in which every student submission is responded to by a mentor, and students are encouraged to strengthen their solutions. Here are a few problems we can DO now: Problem #1 Problem #2 Problem #3 Problem #4 There is a searchable archive of over 700 problems, each with the administrator's comments and highlighted solutions. Current or archived problems can be integrated into teachers' courses in a variety of ways - as an introductory or summary activity, as enrichment, to encourage team work or written communications, to allow the teacher unique access to student thinking, to allow students to mentor other students, and more. See the pow-teach discussion for more ideas. The Problems of the Week have evolved to include additional useful features: □ There is now a Library of Problems of the Week that organizes the archive of each of the six services for browsing by mathematics topics appropriate to that PoW, rates problems for difficulty level, and provides for searching by selected keywords or full text word search. You may also choose to browse an alphabetical listing from the Library page, or browse past problems by date from the individual PoW page. Given that this service is new, you might benefit from reading the "About the PoW Library" page. 
□ by using the "Print This Problem" link just above the title, any current or past problem can be printed with a simple "Math Forum Problem of the Week" header; this allows problems to be used without indicating a course or grade level; □ teachers can request accounts that can be sorted by class and alphabet, and that track each student's last posting date, and the number of correct, bonus and total submissions; to apply, follow the "Teacher Account" link for a particular PoW from the Teacher Information page; here is a sample account page; □ teachers are invited to read through and contribute to a pow-teach discussion, which has been established to facilitate conversation about general issues concerning the instructional use of the Problems of the Week, as well as ideas related to specific problems; □ teachers are invited to participate in mentor projects, which allows faculty to have their students experience what it is like to mentor the work of other students at a a lower mathematical level. Internet Mathematics Library - http://mathforum.org/library/ The Math Forum continues to collect, organize, catalog and annotate thousands of math related web sites from diverse sources to create its Internet Mathematics Library. You can search or browse through over 7,000 items in the collection, organized under the headings of Mathematics Topics, Resource Types, Mathematics Education Topics or Educational Level. "Drilling down" from a heading takes you to a set of categories, then to a page showing subcategories, selected sites, and all sites in the category. At the bottom of the Internet Mathematics Library home page is a link to a Power Search, which allows very pointed searches within the Library. This category scheme was adapted from the MAA architecture of mathematics topics. Ask Dr. Math - http://mathforum.org/dr.math/ Ask Dr. Math is an ask-an-expert service in which anyone in the world can pose a math question at any level. A cadre of volunteer 'doctors' select and respond to problems of interest. In addition to an archive of over 5,000 questions and answers that is searchable by level and topic, there is: ☆ a set of nearly 50 Frequently Asked Questions on the FAQ page, including items about multiplying a negative by a negative, permutations and combinations, the Fibonacci sequence, Pascal's Triangle, and more; ☆ a Classic Problems page, including such favorites as: "two trains leave from different cities at the same time ...", or "how large must a group be so that the chance of at least two people having the same birthday is ...", etc.; ☆ a Formulas page, which shows formulas for area, perimeter, and volume of a variety of figures, the connections between coordinate systems, trigonometric relationships, and more. Teacher2Teacher - http://mathforum.org/t2t/ Teacher2Teacher, like a virtual teacher's lounge, is an environment in which questions are asked and opinions are shared about topics across the broad spectrum of interest to teachers, including classroom techniques, activities, resources, professional development, etc. The archive contains over 500 questions and their related discussion threads. Initial responses are provided by master teachers, and many questions stimulate a public discussion as issues are explored and opinions expressed. For example, searching the archive under the Education topic " Technology in Math Ed" finds about 50 matches. A frequently asked question, like "how can I use assess my students", can generate much discussion, as found on this FAQ page. 
You are encouraged to join T2T to receive the Teacher2Teacher Community Update, which contains community news and related items of interest from the Math Forum.

Math Forum Searches - http://mathforum.org/grepform.html
We have over 300,000 pages of content, so this is quite an extensive search field. Given that ours is a full-text searcher, you may want to focus a search in a specific area, or use the "that exact phrase" and "complete words only" options. Efficient searching is an art. You will find our Searching Tips and Tricks page helpful, and our Search Features page offers even more detail about such items as the "Starting Points" that are generated for many keywords and topics, and the automatic spell correction. These features are the result of the ongoing design efforts to make the search environment more user-friendly. We invite you to contact us to clarify any unresolved confusion or questions.

Web Units and Lessons - http://mathforum.org/web.units.html
The Math Forum is committed to building upon the activity of the teachers, students, and researchers who use it. The Forum provides a platform and the opportunity to share excellent resources and materials with colleagues worldwide. We are particularly pleased to highlight the exemplary work of Suzanne Alejandre, whose prolific efforts are targeted mostly at the middle school level.

Math Forum Internet Newsletter - http://mathforum.org/electronic.newsletter/
Our electronic newsletter is sent out via e-mail once a week to those who subscribe, and is archived on our site. It offers tips about what we have at the Math Forum and how to find it, notes about new items on the site or on the Internet, questions and answers from services like Ask Dr. Math or the Problems of the Week, suggestions for K-12 teachers and students, and pointers to key issues in mathematics and math education.

Discussion Groups & Projects - http://mathforum.org/discussions/
The Math Forum's discussion archives include many mathematics and math education-related newsgroups, mailing lists, and Web-based discussions. For example, people here at the TechEd01 Conference might find particular interest in the American Mathematical Society's calc-reform discussion, Rutgers University's Discrete Math and Theoretical Computer Science (DIMACS) discretemath discussion, and the post-calculus level mathedu discussion. We also host the American Mathematics Association of Two Year Colleges (AMATYC) mathedcc discussion. Some discussion sites are very active, like sci.math, which gets about 1,000 threads posted each academic month.

Join Us As a Contributor - http://mathforum.org/join.forum.html/
There are many ways to contribute to the Math Forum community. Beyond using the various services we provide, many people subscribe to the newsletter, participate in T2T and other discussions, and make suggestions, such as alerting us to other good materials and websites they have discovered. Others find satisfaction in sharing their content as web units or lessons, or showcasing their students' work. Many people volunteer their time and efforts to respond to T2T or Ask Dr. Math questions, while others act as mentors for one of the Problems of the Week. In whatever way works best for you, please know that you are always welcome and invited to interact with us in our online math ed community center.
The Basics of Torque Measurement | EE Times
Design How-To
The Basics of Torque Measurement

Torque is an important factor in much of the equipment on a factory floor. Measuring torque is often something that's misunderstood, which can lead to over- or under-designing of measurement systems. This article addresses the many techniques and tradeoffs of torque measurement.

Torque can be divided into two major categories, either static or dynamic. The methods used to measure torque can be further divided into two more categories, either reaction or in-line. Understanding the type of torque to be measured, as well as the different types of torque sensors that are available, will have a profound impact on the accuracy of the resulting data, as well as the cost of the measurement.

In a discussion of static vs. dynamic torque, it is often easiest to start with an understanding of the difference between a static and a dynamic force. To put it simply, a dynamic force involves acceleration, whereas a static force does not. The relationship between dynamic force and acceleration is described by Newton's second law: F = ma (force equals mass times acceleration). The force required to stop your car with its substantial mass would be a dynamic force, as the car must be decelerated. The force exerted by the brake caliper in order to stop that car would be a static force because there is no acceleration of the brake pads.

Torque is just a rotational force, or a force through a distance. From the previous discussion, it is considered static if it has no angular acceleration. The torque exerted by a clock spring would be a static torque, since there is no rotation and hence no angular acceleration. The torque transmitted through a car's drive axle as it cruises down the highway (at a constant speed) would be an example of a rotating static torque, because even though there is rotation, at a constant speed there is no acceleration.

The torque produced by the car's engine will be both static and dynamic, depending on where it is measured. If the torque is measured in the crankshaft, there will be large dynamic torque fluctuations as each cylinder fires and its piston rotates the crankshaft. If the torque is measured in the drive shaft, it will be nearly static because the rotational inertia of the flywheel and transmission will dampen the dynamic torque produced by the engine.

The torque required to crank up the windows in a car (remember those?) would be an example of a static torque, even though there is a rotational acceleration involved, because both the acceleration and the rotational inertia of the crank are very small and the resulting dynamic torque (torque = rotational inertia x rotational acceleration) will be negligible when compared to the frictional forces involved in the window movement. This last example illustrates the fact that for most measurement applications, both static and dynamic torques will be involved to some degree. If dynamic torque is a major component of the overall torque or is the torque of interest, special considerations must be made when determining how best to measure it.
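As a rough numerical illustration of that last point (all numbers below are assumptions chosen for the example, not data from the article), the dynamic component of the window-crank torque is tiny compared with the friction it must overcome:

    import math

    I_crank = 2e-4          # kg*m^2: assumed rotational inertia of a small hand crank
    alpha = 2 * math.pi     # rad/s^2: assumed angular acceleration while speeding the crank up
    T_friction = 2.0        # N*m: assumed torque needed just to overcome window friction

    T_dynamic = I_crank * alpha       # torque = rotational inertia x angular acceleration
    print(T_dynamic)                  # ~0.0013 N*m
    print(T_dynamic / T_friction)     # ~0.0006, i.e. well under 0.1% of the frictional torque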
Induction proof?
May 4th 2009, 12:12 AM #1
Induction proof?
If $\prod_{i = 1}^{n} x_{i}$ denotes $x_{1}\cdot x_{2}\cdot x_{3}\cdot ...\cdot x_{n}$, then prove, by induction, that:
$\left\{\prod_{i = 1}^{n} f_{i}(x)\right\}' = \sum_{i = 1}^{n} \{f_{1}(x)\cdot f_{2}(x)\cdot ...\cdot f_{i}'(x)\cdot ...\cdot f_{n}(x)\}$
where $'$ denotes the derivative w.r.t. $x$.

May 4th 2009, 08:26 AM #2
Have you tried? It's not difficult. You know that $(f_1f_2)'=f_1'f_2+f_1f_2'$ (initialisation). So, suppose this equality is true for $n$ functions, and show it's true for $n+1$ functions too:
$\left(\prod_{i=1}^{n+1}f_i\right)'=\left(f_{n+1}\prod_{i=1}^{n}f_i\right)'=f_{n+1}'\prod_{i=1}^{n}f_i+f_{n+1}{\color{red}\left(\prod_{i=1}^{n}f_i\right)'}$
Now substitute the term in red (hypothesis of induction):
$\left(\prod_{i=1}^{n+1}f_i\right)'=\left(f_{n+1}\prod_{i=1}^{n}f_i\right)'=f_{n+1}'\prod_{i=1}^{n}f_i+f_{n+1}\sum_{i=1}^{n}\left\{f_1...f_i'...f_n\right\}$
i.e.:
$\left(\prod_{i=1}^{n+1}f_i\right)'=f_{n+1}'\prod_{i=1}^{n}f_i+\sum_{i=1}^{n}\left\{f_1...f_i'...f_nf_{n+1}\right\}$
And finally:
$\color{blue}\boxed{\left(\prod_{i=1}^{n+1}f_i\right)'=\sum_{i=1}^{{\color{red}n+1}}\left\{f_1...f_i'...f_{n+1}\right\}}$
which completes the proof by induction.
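As a quick check (a worked instance added here, not part of the original thread), the case $n=3$ follows by applying the two-function rule twice:
$\left(f_1 f_2 f_3\right)' = \left(f_1 f_2\right)' f_3 + f_1 f_2 f_3' = f_1' f_2 f_3 + f_1 f_2' f_3 + f_1 f_2 f_3'$,
which is the claimed sum, with the derivative falling on a different factor in each term.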
Topic: precise numerical integration

Re: precise numerical integration
Posted: Jun 3, 2010 11:04 PM

"Marcio Barbalho" <marciobarbalho@live.com> wrote in message <hu9mfj$rhl$1@fred.mathworks.com>...
> No, that was not my point. I can't change the data, nor should I. What I am trying to find is an alternative integrator to 'trapz'.

To expound further upon the point Urs and I have been making about your data, Marcio, even though the data may have been determined very accurately from some hypothetical infinite continuum source of data, it can only be a discrete representation of that source. All the points between each of the discrete points are missing, and yet by definition what calculus defines as an integral depends, not just on those discrete points, but on all the points in between. For that reason, no integration routine that depends only on discrete data can ever give you a perfectly accurate answer. The error it is bound to make depends on the nature of that infinitude of points that were left out.

The use of discrete quadrature routines is based purely on the ability of the known discrete points to predict something of the probable values of those that are missing. If your source has rather a smooth nature where its derivatives are reasonably well-behaved, there is something to be gained in using higher order routines. If the source is very unruly so that such prediction is inaccurate, then you are better off using the simpler routines such as trapz which does only first order approximation.

You will notice that the above argument in no way casts aspersions on the data-gathering process itself, which may have been excellent. It states only that something is inherently being lost in the attempt to represent a continuum of data with a discrete representation.

Roger Stafford
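The tradeoff Roger describes can be seen numerically. The thread concerns MATLAB's trapz, but here is a small self-contained sketch in Python (an addition, not from the thread) comparing a first-order rule with a higher-order rule on the same discrete samples of an assumed smooth integrand:

    import numpy as np
    from scipy.integrate import simpson   # SciPy >= 1.6; older releases call it simps

    x = np.linspace(0, np.pi, 11)          # only 11 discrete samples of the source
    y_smooth = np.sin(x)                   # well-behaved source; the true integral is 2
    y_rough = y_smooth + 0.05 * np.random.default_rng(0).standard_normal(x.size)

    print(np.trapz(y_smooth, x))           # ~1.9835 : first-order (trapezoidal) rule
    print(simpson(y_smooth, x=x))          # ~2.0000 : higher-order rule wins on smooth data
    print(np.trapz(y_rough, x))            # on rough samples the higher-order rule
    print(simpson(y_rough, x=x))           #   no longer has a clear advantage

The point is only qualitative: with smooth samples the higher-order rule recovers more of the missing information between the points; with unruly samples that prediction breaks down, exactly as argued above.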
The Complexity and Distribution of Hard Problems Juedes, David W. and Lutz, Jack H. (1992) The Complexity and Distribution of Hard Problems. Technical Report TR92-23, Department of Computer Science, Iowa State University. Full text available as: Postscript Adobe PDF The Complexity and Distribution of Hard Problems David W. Juedes and Jack H. Lutz Measure-theoretic aspects of the polynomial-time many-one reducibility structure of the exponential time complexity classes E=DTIME(2^linear) and E2=DTIME(2^polynomial) are investigated. Particular attention is given to the complexity (measured by the size of complexity cores) and distribution (abundance in the sense of measure) of languages that are polynomial-time many-one hard for E and other complexity classes. Tight upper and lower bounds on the size of complexity cores of hard languages are derived. The upper bounds say that the polynomial-time many-one hard languages for E are unusually simple, in the sense that they have smaller complexity cores than most languages in E. It follows that the polynomial-time many-one complete languages for E form a measure 0 subset of E (and similarly in E2). This latter fact is seen to be a special case of a more general theorem, namely, that every polynomial-time many-one degree (e.g., the degree of all polynomial-time many-one complete languages for NP) has measure 0 in E and in E2. Subjects: All uncategorized technical reports ID code: 00000026 Deposited by: Staff Account on 13 August 1992 Contact site administrator at: ssg@cs.iastate.edu
New Almaden Trigonometry Tutor Find a New Almaden Trigonometry Tutor ...I have been tutoring for the past eleven years in Physics, Math, and Chemistry. I started tutoring when I was an undergrad in Electrical Engineering at UC Berkeley. At first, I started helping my friends with their classes in math, physics, and chemistry. 11 Subjects: including trigonometry, chemistry, calculus, physics ...I like to talk through examples and discuss the problems, to ensure there is a true understanding of the concepts. I'm currently in school to gain my credentialing in teaching in order to teach Mathematics for grades 6-12, and have been tutoring for over 10 years. I do have a passion for Math because I have my Bachelors of Science in Mathematics and Masters of Science in Actuarial 9 Subjects: including trigonometry, geometry, algebra 1, algebra 2 ...What makes me good at tutoring? Knowing math, knowing my students, being good at drawing people out, and being good at adjusting how I teach so that it suits the unique individual I am working with. To learn, students must feel comfortable, interested, and challenged. 22 Subjects: including trigonometry, English, reading, geometry I graduated from UCLA with a math degree and Pepperdine with an MBA degree. I have taught business psychology in a European university. I tutor middle school and high school math students. 11 Subjects: including trigonometry, calculus, statistics, geometry ...I have ten years of practical, hands-on computer programming experience through my work as a scientist. Python is my primary programming language. I have also programmed in Pascal and C. 17 Subjects: including trigonometry, chemistry, writing, geometry
Boyle Heights, CA Algebra 1 Tutor
Find a Boyle Heights, CA Algebra 1 Tutor
...I have taught students with special needs and students identified as "gifted," though I believe that all students are gifted in their own way, especially when they are given instruction that meets their needs. Contact me and we'll see if we're a good match! I have years of experience with writing...
29 Subjects: including algebra 1, reading, writing, English
...Algebra 2 builds on the topics explored in Algebra 1. These topics include: real and imaginary numbers, inequalities, exponents, polynomials, equations, graphs, linear equations, functions and more. For me, the skills gained in Algebra 2 were the very foundation of my study in Civil Engineering.
10 Subjects: including algebra 1, geometry, algebra 2, trigonometry
...I always try to begin teaching through exposure. The more a student sees and works through a problem or concept, the more comfortable they become. Repetition is key!
14 Subjects: including algebra 1, calculus, physics, algebra 2
...I taught HS chem for several years and have tutored several students over the years. Chemistry is a really fun subject. I am ready to pass on my chemistry knowledge to either you or your
24 Subjects: including algebra 1, chemistry, English, geometry
...I am also currently teaching Algebra 1 in a private middle school in Los Angeles, Ca. I graduated with a Bachelor and Masters degree in Mathematics in the Philippines. I have experience tutoring students in advance mathematics.
3 Subjects: including algebra 1, statistics, algebra 2
Maths Geometry – geometry info from our encyclopedia

Mathematics Geometry Session: you need to follow the mathematics geometry lesson but prepare, thinking ahead, working with each other at certain times through the geometry training; certain students are generally…

Mathematics, Geometry and Islamic Artwork: Hi all, I had a question recently about how to obtain an image of the tessellated mosaic panel from the Alhambra that appeared in a recent post of the blog (Thirty-Sixth Posting: Mosaics from the Islamic World). The problem…

Areas of Math – Geometry (factual part): geometry as a sector of the factual part of mathematics education is dealt with here. What is meant by the geometry sector and what makes it important? Geometry involves the ability to visualize…

Redmond v. Palo Alto more than Yale v. Prison: …special education. Toma said the department would support adding one more year of math — geometry — to Palo Alto's graduation requirement. Within the San Mateo Union High School District, that…

Let c = (c1, c2, c3) and x = (x1, x2, x3); then (1) c × x = (c2x3 − c3x2, c3x1 − c1x3, c1x2 − c2x1). Assume the matrix A is [aij], with i the row and j the column; then (2) A x = (a11x1 + a12x2 + a13x3, a21x1 + a22x2 + a23x3, …).

Semicircle problem: the curved edge measures 36 m, so 36 = πr and r = 36/π. The area of the semicircle is A = ½πr² = ½π(36/π)² ≈ 206.2648, assuming "36 metres of edging" means the arc only; if the edging also includes the diameter, then πr + 2r = 36.

How tough is analytic geometry at the engineering level? I found it much easier than calculus. If you are fairly productive in your math classes there is no reason to fear analytic geometry.

Help on a maths project on geometry: several problems here may inspire the geometry task: http://www.8foxes.com/Home/35, http://www.8foxes.com/Home/91, http://www.8foxes.com/Home/111, http://www.8foxes.com/Home/119

Do you need strong knowledge in other areas of maths to understand geometry well, and if so, which? Algebra and trig help a lot once you get into the more advanced material; for basic geometry, simple math is enough unless you get into more complex calculations that need trig and algebra.

Can anyone please help with this hard maths geometry question? This answer could be better explained with a diagram; if you post a link where you drew an appropriate diagram, I might be able to explain it with clarity.

GCSE Higher Maths Paper – Geometry (www.hometutoringonline.co.uk): video from Home Tutoring Online, presented by Bob Goodband, PhD. QUESTION: PQR and STUV are parallel straight lines. (i) Work out the value of a.

Jain: Sacred Geometry & Vedic Math (www.jainmathemagics.net): Vedic mathematics is an approach to mathematics consisting of a list of 16 simple sutras, or aphorisms, that allegedly cover all of mathematics; these were presented by a Hindu scholar.

Chinese Math (32): Coordinate Geometry Proof 1 – table of contents listing all math training videos available through Chycho TV at www.chycho.com.

Math of Fractal Geometry Pt 6/9; slide-together models with cards; Stella Octangula; compound of 10 tetrahedra. Mummy Math: An Adventure in Geometry – he, Bibi, and their dog Riley crawled through the tiny opening first. FWUMP! A secret door suddenly closed behind them… the Zills family. Hutchison's Basic Math Skills with Geometry (Hutchison Series in Mathematics): Basic Mathematical Skills with Geometry, 8/e, by Baratto/Bergman, is part of the latest offerings in the successful Hutchison Series in Math.
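The vector identities and the semicircle calculation quoted above are easy to sanity-check numerically; the following small NumPy sketch (array values are arbitrary illustrations) verifies the cross-product components and the semicircle area.

```python
import numpy as np

# Cross product: c x x = (c2*x3 - c3*x2, c3*x1 - c1*x3, c1*x2 - c2*x1)
c = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0])
manual = np.array([c[1]*x[2] - c[2]*x[1],
                   c[2]*x[0] - c[0]*x[2],
                   c[0]*x[1] - c[1]*x[0]])
assert np.allclose(manual, np.cross(c, x))

# Semicircle whose curved edge measures 36 m: arc length pi*r = 36
r = 36 / np.pi
area = 0.5 * np.pi * r**2
print(area)   # ~206.2648
```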
NN2012 Gandolfi Abstract

Quantum Monte Carlo study of inhomogeneous neutron matter and neutron stars, S. Gandolfi, Los Alamos National Laboratory, Los Alamos, New Mexico, USA − Recent advances in measurements of the symmetry energy of nuclear matter and in neutron star observations yield important new insights into the equation of state of neutron matter at nuclear densities. In this regime the EOS of neutron matter plays a critical role in determining the mass-radius relationship for neutron stars. We show how microscopic calculations of neutron matter make clear predictions for the relation between the isospin-asymmetry energy of nuclear matter and its density dependence, and the maximum mass and radius for a neutron star. On the other hand, the properties of inhomogeneous neutron matter at low to moderate densities are very important for describing the neutron star crust, and we show how microscopic calculations of confined neutrons can put severe constraints on density functionals used to describe heavy nuclei and the crust.
MathGroup Archive: November 2008
[00360] [Date Index] [Thread Index] [Author Index]

Re: Basic programming
• To: mathgroup at smc.vnet.net
• Subject: [mg93590] Re: Basic programming
• From: "sjoerd.c.devries at gmail.com" <sjoerd.c.devries at gmail.com>
• Date: Sun, 16 Nov 2008 07:04:49 -0500 (EST)
• References: <gfmaak$g6c$1@smc.vnet.net>

As always, there will be a multitude of possible solutions. A functional programming style solution would be:

StandardDeviation /@ Partition[yourData, 10, 1]

or, in a more procedural or imperative programming style:

Table[StandardDeviation[Take[yourData, {i, i + 9}]], {i, 1, Length[yourData] - 9}]

or even more procedural (and ugly):

result = {};
Do[
  b = StandardDeviation[yourData[[i ;; i + 9]]];
  result = Append[result, b],
  {i, 1, Length[yourData] - 9}]

yourData should be an array with your data set, e.g.

yourData = {2.2, 3.7, -1.3, 4.5, 9.0, 2.2, 2.4, 5.6, 9.8, 12.3, 3.8};

With respect to your second question: are you sure you want to calculate the standard deviation of a single data point? Doesn't make much sense to me. Anyway, to do this, the first example can be modified thus:

StandardDeviation /@ Partition[yourData, 10, 1, {-1, 1}, {}]

The others are easy to modify as well, but will become even more ugly in the process.

Cheers -- Sjoerd

On Nov 15, 1:03 pm, BionikBlue <frankf... at hotmail.com> wrote:
> Hey, I'm a beginner in Mathematica and I'm trying to code a little something but can't seem to be able to make it work... I know it's pretty basic and that's exactly my problem, since the Mathematica help is a bit overkill for what I need.
> I have daily stock prices for a stock on a 100 day period; I want to compute the standard deviation for a rolling 10-day period for the whole list of data.
> So basically, I would like to do something like this:
> stdev(1;;10)
> then stdev(2;;11)
> until stdev(91;;100)
> and get the results in a list.
> I'd also like to get it another way, by starting with only one observation and building my standard deviation calculation until I have my whole rolling period built up, for example:
> stdev(1)
> stdev(1,2)
> stdev(1,2,3)
> until stdev(1;;10)
> then, same as before, roll the period until the end of the dataset and produce a list of the results.
> Thanks for the help; if what I wrote is not clear enough let me know, I'll try to explain it in more detail!
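For readers who want the same rolling calculation outside Mathematica, here is a rough NumPy sketch; the array name and sample values are only illustrative, and ddof=1 is used so that the estimator matches Mathematica's sample StandardDeviation.

```python
import numpy as np

your_data = np.array([2.2, 3.7, -1.3, 4.5, 9.0, 2.2, 2.4, 5.6, 9.8, 12.3, 3.8])

window = 10
# Standard deviation over each rolling 10-point window.
rolling_sd = np.array([your_data[i:i + window].std(ddof=1)
                       for i in range(len(your_data) - window + 1)])
print(rolling_sd)
```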
FIG. 1. Contours of the electron bulk kinetic energy density for the cases (a) at and (b) at . In the figure, the magnetic field lines are also plotted for reference. The electron bulk kinetic energy density is normalized by .
FIG. 2. Contours of the electron thermal energy density for the cases (a) at and (b) at . In the figure, the magnetic field lines are also plotted for reference. The electron bulk kinetic energy density is normalized by .
FIG. 3. Time evolutions of the right-hand-side terms of Eq. (3) integrated in the selected region denoted with red rectangles (in the vicinity of the X line) in Figs. 1 and 2 for the cases (a) and (b), respectively. The green curve represents the electron bulk kinetic energy flux term, the red curve denotes the power density of the work done by the electric field, and the power density of the work done by the electron pressure gradient is described by the blue curve. The black curves are the sums of the three terms. All these terms are normalized by .
FIG. 4. Time evolutions of the right-hand-side terms of Eq. (4) integrated in the selected region denoted by red rectangles (in the vicinity of the X line) in Figs. 1 and 2 for the cases (a) and (b). The green, blue, and red curves represent the electron enthalpy flux term, the electron heat flux term, and the thermal energy source term, respectively. The black curves are the sums of the three terms. All these terms are normalized by .
FIG. 5. Time evolutions of the right-hand-side terms of Eq. (4) integrated in the selected region denoted by black rectangles (in the magnetic island) in Fig. 2 for the cases (a) and (b). The green, blue, and red curves represent the electron enthalpy flux term, the electron heat flux term, and the thermal energy source term, respectively. The black curves are the sums of the three terms. All these terms are normalized by .
Table I. The difference in the ion and electron kinetic energy in magnetic reconnection. denotes the ion kinetic energy (it includes the ion bulk kinetic energy and ion thermal energy), and denotes the electron kinetic energy (it includes the electron bulk kinetic energy and electron thermal energy). The electron bulk kinetic energy is denoted by , and the electron thermal energy is denoted by . "D" means the energy difference, which is calculated by subtracting the energy, when the reconnection attains its maximum rate, from its initial value. The energy is integrated over the entire simulation domain, and it is normalized by . An approximate conservation of the total energy is kept in our simulation models, and the percentage of energy non-conservation is within 0.4%.
Vladimir Markovic Geometric and algebraic structures on spaces of quasiconformal mappings In the first part of the talk I will speak about the recent theorem which states the classification of biholomorphic and isometric mappings between Teichmuller spaces of arbitrary Riemann surfaces. In particular, I will discuss the new proof of the corresponding theorem of Royden for closed surfaces (joint with C. Earle). In the second part I will talk about some interesting algebraic properties of the groups QS(S) and QC(D), that is the groups of quasisymmetric and quasiconformal mappings that act on the unit circle and the unit disc respectively. We show that there is no homomorphism from QS(S) to QC(D) which is splitting. Furthermore, we show that the group of normalized quasisymmetric mappings of the circle is not a simple group, while the group of normalized quasiconformal mappings of the two sphere is (joint with D. Epstein).
Title: Computational Strategies for Meshfree Nonrigid Registration Advisor: Lawrence Hamilton Staib December 2006 Biological shapes such as the brain are difficult to register due to their complicated geometry. To deal with this, registration methods often rely on a transformation model consisting of a dense regular grid such as a free form deformation or B-spline grid. However, very dense grids or meshes are usually needed to register images with convoluted shapes, and a regular mesh structure is not well suited for the irregular structure of the brain. What is therefore needed is a meshfree approach such as a radial basis function transformation model. Unfortunately, because radial basis functions are typically non-compact, using them with large numbers of points is fraught with numerical difficulties and, as a result, their use in image registration is not prevalent. The goal of this work is to overcome these computational difficulties so that radial basis function transformations can be used efficiently, even with large numbers of points. To achieve this, a new registration framework was developed based on automatic differentiation and the fast multipole method. Automatic differentiation is useful since an important component of registration is computing the gradient of the similarity metric which is to be optimized. Automatic differentiation allows one to efficiently calculate gradients without having to write any gradient code explicitly. Although the technique of automatic differentiation is well established, it does not appear to be used for image registration. The fast multipole method was developed to efficiently evaluate large sums such as radial basis functions but its use in image registration is still minimal. With the integration of these algorithms within a complete registration framework, it should be possible to obtain a truly meshfree registration. Download the complete dissertation here. BibTeX Entry author = "Eliezer Kahn", title = "Computational Strategies for Meshfree Nonrigid Registration", school = "Yale University", month = "December", year = "2006")
Let's just discuss this Math. Computer science.

I found this quote on Google Groups: "I think he asks this way because most of the programming curriculums that I have seen seem to try to point out that all a programmer ever does is math. In fact I have had professors tell me that unless I can handle Calculus I will forever suck as a programmer. As far as I can tell, this sounds like the general consensus of most CS professors at the larger schools for some reason."

I feel compelled to study math because the interesting stuff, the theoretical, is discussed in terms of mathematics. Not only that, I am really not very good at it, and I want to prove to myself I can do it. Is it really going to help my career? Yes. It will give me the "street cred" with people who think like certain professors, and with employers who believe people who are good at math are somehow superstars.

I enjoy math, too. It doesn't require study as much as practice, which fits my personality to some degree. My question is, then, of you who studied math, did it help you in your programming? If so, how?
Hydrodynamic turbulence as a problem in non-equilibrium statistical mechanics

Seminar Room 1, Newton Institute

The problem of hydrodynamic turbulence is reformulated as a heat flow problem along a chain of mechanical systems which describe units of fluid of smaller and smaller spatial extent. These units are macroscopic but have few degrees of freedom, and can be studied by the methods of (microscopic) non-equilibrium statistical mechanics. The fluctuations predicted by statistical mechanics correspond to the intermittency observed in turbulent flows. Specifically, we obtain the formula $$ \zeta_p={p\over3}-{1\over\ln\kappa}\ln\Gamma({p\over3}+1) $$ for the exponents of the structure functions ($\langle|\Delta_r v|^p\rangle\sim r^{\zeta_p}$). The meaning of the adjustable parameter $\kappa$ is that when an eddy of size $r$ has decayed to eddies of size $r/\kappa$ their energies have a thermal distribution. The above formula, with $(\ln\kappa)^{-1}=.32\pm.01$, is in good agreement with experimental data. This lends support to our physical picture of turbulence, a picture which can thus also be used in related problems.
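As a quick check of the quoted structure-function formula, the short script below evaluates ζ_p for a few orders using the fitted value (ln κ)⁻¹ = 0.32 given in the abstract; note that ζ₃ = 1 exactly, since Γ(2) = 1.

```python
import math

inv_ln_kappa = 0.32   # (ln kappa)^-1, the fitted value quoted in the abstract

def zeta(p):
    # zeta_p = p/3 - (1/ln kappa) * ln Gamma(p/3 + 1)
    return p / 3.0 - inv_ln_kappa * math.lgamma(p / 3.0 + 1.0)

for p in (2, 3, 4, 6, 8):
    print(p, round(zeta(p), 3))
# zeta(3) == 1.0, the exact Kolmogorov value; higher orders fall below p/3,
# which is the intermittency correction the abstract refers to.
```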
How do you find the determinant of a 3x3 matrix?

#1 (June 14th 2009, 02:11 PM): I was given this review problem but I don't remember learning this: find u • (v x w) (u dot v cross w) — in other words, find the determinant of a 3x3 matrix. How would you do this?

#2 (June 14th 2009, 02:14 PM): See here; you may also want to see here.

#3 (June 14th 2009, 11:42 PM): You may also be interested in the Rule of Sarrus: Rule of Sarrus - Wikipedia, the free encyclopedia. Once you've understood the picture on the right, you've understood the method. And it's quite straightforward :P

#4 (June 15th 2009, 02:16 PM): I wanted to post a link to that method as well, but I never remembered what it was called and couldn't be bothered searching for it.
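For reference, the scalar triple product asked about above is exactly the 3×3 determinant, expanded here along its first row (cofactor expansion); the Rule of Sarrus produces the same six terms.

```latex
\mathbf{u}\cdot(\mathbf{v}\times\mathbf{w})
  = \begin{vmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{vmatrix}
  = u_1(v_2 w_3 - v_3 w_2) - u_2(v_1 w_3 - v_3 w_1) + u_3(v_1 w_2 - v_2 w_1)
```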
6.2 Types of Symmetry Leading to Groups

• Symmetry is a basic notion in the visual arts.
• Rotation, reflection, and translation are the most common types of visual symmetry.

Symmetry is perhaps most familiar as an artistic or aesthetic concept. Designs are said to be symmetric if they exhibit specific kinds of balance, repetition, and/or harmony. In mathematics, symmetry is more akin to something like "constancy," or how something can be manipulated without changing its form. In other words, the mathematical notion of symmetry relates to "objects" that appear unchanged when certain transformations are applied. Think of the form of a butterfly; its right and left halves mirror each other. If you knew what the right half of a butterfly looked like, you could construct the left half by reflecting the right half over a line that bisects the butterfly. Butterflies exhibit a type of symmetry called "bilateral symmetry" or "mirror symmetry" (either half of the butterfly is the mirror image of the other), one that is very common among living things. Perhaps most familiar to us is our own bilateral symmetry, the symmetry of our left and right arms and hands, or our left and right legs and feet, or the approximate symmetry of our bodies if bisected vertically into left and right halves. In general, bilateral symmetry is present whenever an object or design can be broken down into two parts, one of which is the reflection of the other. Given any motif, one can generate a design with bilateral symmetry by choosing a line and reflecting the motif over it. Conversely, if a motif already possesses bilateral symmetry, it can be reflected over a line and we would notice no difference between the original and the reflected versions. This action, reflection, leaves the original design apparently unchanged, or invariant. Bilateral symmetry is quite common in nature, but it is by no means the only form of visual symmetry that we see in the world around us. Another common form is rotational symmetry, such as that seen in sea stars and daisies. Recall that to be symmetric an object must appear unchanged after some action has been taken on it. An object that exhibits rotational symmetry will appear unchanged if it is rotated through some angle. A circle can be rotated any amount and still look like a circle, but most objects can be rotated only by some specific amount, depending on the exact design. For example, an ideal sea star, having five arms, is not symmetric under all rotations, but only those equivalent to multiples of 72° (one-fifth of a full turn). A daisy, on the other hand, is rotationally symmetric under smaller rotational increments. Let's say it has 30 petals, all of which are the same in appearance—no such daisy exists in the real world, of course—this is an ideal mathematical daisy. The flower will be symmetric under a rotation of 12°. You might have observed that the sea star and the daisy are not limited to rotational symmetry. Depending on how you choose an axis of reflection, they can each display bilateral (reflection) symmetries as well. Notice, however, that only certain dividing lines can serve as axes of reflection. This brings us to an important point: an object may have more than one type of symmetry. The specific symmetries that an object exhibits help to characterize its shape.
Remember, the motions associated with symmetries always leave the object invariant. This means that combinations of these motions, which are how mathematicians tend to think of symmetries, will also leave the original object invariant. Let’s explore this idea a bit further by looking at the symmetries of an equilateral triangle. • The symmetries of the equilateral triangle can be thought of as the transformations (i.e., motions or actions) that leave the triangle invariant— looking the same as before the motion. • Combinations of symmetries are also symmetries. Notice that there are three lines over which the triangle can be reflected and maintain its original appearance. As we have seen, an equilateral triangle has three distinct reflections and three rotations under which it remains invariant. Furthermore, since all of these symmetries leave the triangle invariant, the combination of any two of them creates a third symmetry. For example, a rotation through 120°, followed by a reflection over a vertical line passing through its top vertex, leaves the triangle in the same position it was in at the start. Let’s look at all possible combinations of symmetries in an equilateral triangle a little more closely. To do this, it will be helpful to label each vertex so that we can keep track of what we have done. If we do nothing to the triangle, this is called the identity transformation, I. This symmetry is simply a rotation of 120° counterclockwise; let’s call it R[1]. A rotation of 240° degrees counterclockwise is another symmetry of the equilateral triangle; let’s call it R[2]. This diagram represents a reflection over the vertical axis (notice how vertices A and B have switched sides); let’s call it L. This symmetry is a reflection over the line extending from B through the midpoint of AC; let’s call this motion M. In this diagram the triangle has been reflected over the line extending from C through the midpoint of AB; let’s call this action N. Now that we have identified all the possible motions that leave the triangle invariant, we can organize their combinations in a chart. In these combined movements, the motions in the left column of the chart are done first, then the motions across the top row. For instance, the rotation R[1], followed by the Identity, I, yields the same result as simply performing R[1] by itself. Performing the reflection M, followed by N, gives us the same result as simply performing the rotation R[2]. Notice that as we complete this chart of all possible combinations of two motions, every result is one of our original symmetries. This is an indication that we have found some sort of underlying relational structure. Mathematicians call sets of objects that express this type of structure a group. • A group is a collection of objects that obey a strict set of rules when acted upon by an operation. A group is just a collection of objects (i.e., elements in a set) that obey a few rules when combined or composed by an operation that we often call"multiplication." This may seem like a vague, even unhelpful, description, but it is precisely this generality that gives the study of groups, or group theory, its power. It is also amazing and a bit mysterious that out of just a few simple rules we can create mathematical structures of great beauty and intricacy. The symmetries of an equilateral triangle form a group. Remember, these symmetries are all the rigid motions that leave the triangle invariant. 
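To make the composition chart concrete, here is a small Python sketch that represents the six symmetries as permutations of the labeled vertices A, B, C and builds the "followed by" table. The vertex-labeling conventions here are my own illustrative choice rather than the one fixed by the textbook's figures, so individual entries may be arranged differently from the text's chart, but the structural facts (closure, an identity, inverses) come out the same.

```python
# Each symmetry is a permutation of the vertex labels (A, B, C), written as a
# tuple giving the image of (A, B, C).
symmetries = {
    "I":  ("A", "B", "C"),   # identity
    "R1": ("B", "C", "A"),   # rotation by 120 degrees
    "R2": ("C", "A", "B"),   # rotation by 240 degrees
    "L":  ("A", "C", "B"),   # a reflection fixing A (illustrative labeling)
    "M":  ("C", "B", "A"),   # reflection fixing B
    "N":  ("B", "A", "C"),   # reflection fixing C
}

labels = ("A", "B", "C")

def compose(first, second):
    """Apply `first`, then `second` ('first followed by second')."""
    f, s = symmetries[first], symmetries[second]
    image = tuple(s[labels.index(f[i])] for i in range(3))
    return next(name for name, perm in symmetries.items() if perm == image)

# Build and print the 6x6 'followed by' table: every entry is again one of the
# six symmetries, which is the closure property that makes this set a group.
print("     " + "  ".join(f"{b:>2}" for b in symmetries))
for a in symmetries:
    row = "  ".join(f"{compose(a, b):>2}" for b in symmetries)
    print(f"{a:>3}  {row}")
```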
One of the powers of group theory is that it allows us to perform operations that are "sort of" arithmetic with things that are not numbers. Notice that the operation we used in the triangle example above was simply the notion of "followed by." This is going to be completely analogous to the idea of combining two integers by addition and getting another integer, or multiplying together two nonzero fractions and getting another fraction! Group theorists study objects that don't have to be numbers as well as operations that don't have to be the standard arithmetical operations. Now we can be a little more precise about what we mean by a group and how groups function. For example, we would like to be able to use the members of a group to do arithmetic and even to solve simple equations, such as 3x = 5. To solve this equation, we need the operation of multiplication, and we need the number 3 to have an inverse. An inverse is simply a group member that, when combined with another group member under the group operation, gives the Identity. In the case of 3x = 5, the inverse of 3 is 1/3, since 3 × (1/3) = 1, the multiplicative identity. This scenario has pointed out the first two rules of a group. First, the group must have an element that serves as the Identity. The characteristic feature of the Identity is that when it is combined with any other member under the group operation, it leaves that member unchanged. Second, each member or element of the group must have an inverse. When a member is combined with its inverse under the group operation, the result is the Identity. In addition to these two basic rules of group theory, there are two more concepts that characterize groups. The third property, or requirement, of a group is that it is closed under the group operation. This means that whenever two group members are combined under the group operation, the result is another member of the group. We saw this as we looked at all possible combinations of symmetries of the equilateral triangle above. No matter which symmetry was "followed by" which, the result was always another symmetry. For simplification as we go forward in our exploration of groups, we might as well use the term "multiplication" to express the operation of "followed by." The fourth and final requirement of a group is that it is associative. In other words, if we take a list of three or more group members and combine them two at a time, it doesn't matter which end of the list we start with. Arithmetic with numbers is governed by the associative property, so if we want to do arithmetic with members of a group, we need them to be associative as well. A group is a set of objects that conforms to the above four rules. It is worth noting that although groups obey the associative property, the commutative property generally does not apply; that is, the order in which we combine motions usually matters. For example, in the table above for the equilateral triangle symmetries, notice that the rotation R[1] followed by the reflection L gives the reflection M as a result, whereas L followed by R[1] gives the reflection N as a result. As a side note, specialized types of groups that do conform to the commutative property are called Abelian groups. For our purposes the current discussion will focus solely on more-general, non-commutative groups. In examining the equilateral triangle, we saw that its symmetries formed a group. Another example of a group would be the set of integers under the operation of addition.
If you add any two integers together, you get another integer, this demonstrates that this set is closed. There is an identity element, zero, that you can add to any integer without changing its value. Every integer also has an inverse. For instance, if you take positive 3 and add to it negative 3, you get the Identity, zero. (Zero, just in case you were wondering, serves as its own inverse, which is perfectly acceptable!) Finally, we know intuitively that adding more than 2 numbers gives the same result no matter how we choose to group them. For example: (3+2) + 6 = 3 + (2 +6) This demonstrates the associativity of the group of integers. Group theory is very useful in that it finds commonalities among disparate things through the power of abstraction. We will explore this idea in more depth soon, but first let’s return to the concept we introduced at the beginning of this section. With all of this focus on rules and axioms, it’s easy to forget that we are chiefly concerned with understanding and characterizing symmetry in a mathematical fashion. Now that we have introduced the basic requirements of groups, we can start to characterize a wide variety of designs using groups. In the next section, we will focus on one- and two-dimensional patterns and the groups that describe them. Next: 6.3 Frieze and Wallpaper Groups
Action of PGL(2) on Projective Space

Let $k$ be a field, let $G = PGL_2(k)$ be the projective general linear group of $k$, and let $X = k \cup \{ \infty \}$ be one-dimensional projective space over $k$. Then $G$ acts on $X$ (via fractional linear transformations). This action has the following properties:

1) The action of $G$ on $X$ is simply 3-transitive. That is, it acts simply transitively on the set of 3-tuples of distinct elements of $X$. (Edited as indicated in the comments.)

2) Suppose that $x,y \in X$ are distinct elements and that $g \in G$ satisfies $gx = y$, $gy = x$. Then $g$ has order $2$.

Is the converse true? (That is, if we are given an action of a group $G$ on a set $X$ satisfying 1) and 2), does it follow that $G = PGL_2(k)$ for some field $k$, with its natural action on $k \cup \{ \infty \}$? (This is true at least when $G$ and $X$ are finite: it can be deduced from the theorem of Frobenius on Frobenius groups.)

Comments:
- Daniel, your silliness can be generalized: $X$ could be empty; and if it is empty or a singleton then $G$ can be arbitrary. – Tom Goodwillie Jun 4 '11 at 3:19
- And the remedy is to redefine $3$-transitive as meaning that the action of $G$ on the set of ordered triples of distinct elements is transitive, i.e. has exactly one orbit. – Tom Goodwillie Jun 4 '11 at 3:19
- Yes, that is what I should have said. – Jacob Lurie Jun 4 '11 at 5:03
- The question reminds me of the theorem that abstract projective planes satisfying Desargues' "theorem" are of the form $DP^2$ for a division ring $D$, and satisfying Pappus' "theorem" are of the form $FP^2$ for a field $F$. The proofs do rather a lot, in that they need to use incidence geometry to build addition, subtraction, multiplication, and division on a set built from the plane. Presumably a couple of the same ideas could be put into play here. – Allen Knutson Jun 5 '11 at 2:44
- ...In particular, the first step in that theorem is to work with the group of collineations; Desargues' condition says that this is doubly transitive on the points. Crazy idea: can one build $P^2$ from $P^1$, say as $X^2/S_2$, and reduce your 1-dim question to this known 2-dim result? – Allen Knutson Jun 5 '11 at 4:47

Answer (accepted):

A KT-field $(F,+,\times,\sigma)$ consists of a neardomain $(F,+,\times)$ together with an involutionary automorphism $\sigma$ satisfying $$\sigma(1 + \sigma(x)) = 1 - \sigma(1 + x)$$ for all $x \in F \setminus \{0,1\}$. (My impression is that neardomains are quite weak entities, e.g. $F^{\times}$ is required to be a group but it may not be commutative, $(F,+)$ is not even necessarily a group. Industrious MO reader adds the definition of a neardomain to this answer if they wish.) Sharply $3$-transitive groups are determined up to isomorphism as permutation groups on $\mathbf{P}^1(F) = F \cup \{ \infty \}$ consisting of maps of the form: (i): $x \mapsto a + m x, \quad \infty \mapsto \infty$ (ii): $x \mapsto a + \sigma(b + m x), \quad \infty \mapsto a, \quad - m^{-1} b \mapsto \infty$, where $a,b \in F$ and $m \in F^{\times}$. Consider the set of elements $\gamma \in G$ such that $\gamma(0) = \infty$ and $\gamma(\infty) = 0$. They are given exactly by mappings of the form $$\gamma: x \mapsto \sigma(\lambda x)$$ for any $\lambda \in F^{\times}$. If all such $\gamma$ have order two, then $$\sigma(\lambda \sigma(\lambda x)) = x$$ for all $x, \lambda \in F^{\times}$.
Setting $x = \lambda^{-1}$, it follows that $\sigma(\lambda) = \lambda^{-1}$ for all $\lambda \in F^{\times}$. Since $\sigma$ is an automorphism, it follows that $F^{\times}$ is commutative. From a theorem of Kerby (see below), it follows that $(F,+,\times)$ is actually a commutative field, and $G = \mathrm{PGL}_2(F)$. All the results and definitions of this answer can be gleaned from the math review MR0997066 (91b:20004a) of a paper by William Kerby. The paper is only $3$ pages long, so I assume that it is relatively elementary - although I can't access it myself, and it may refer to previous results. (Full disclosure: all I did was type "sharply 3-transitive" into mathscinet; I don't actually know what a neardomain actually is.) In case your actual purpose is to generalize this result to $(\infty,\pi)$-whatzit categories with creamy rice pudding centres, you might want to take a glance at the actual paper.

Sounds like exactly what I'm looking for. Thanks. – Jacob Lurie Jun 5 '11 at 11:09
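Property 1) can be made concrete with a small computation: for any three distinct points there is exactly one fractional linear transformation carrying $(0, 1, \infty)$ to them, which is what sharp 3-transitivity asserts. The sketch below works over the rationals with Python's Fraction type; it is an illustration of the standard construction, not part of the question or answer above, and it restricts the target points to finite values to stay short.

```python
from fractions import Fraction

def mobius_through(p, q, r):
    """Return (a, b, c, d) with z -> (a z + b)/(c z + d) sending 0 -> p, 1 -> q, oo -> r.

    p, q, r are assumed to be three distinct *finite* rationals; handling the
    point at infinity as an image is omitted to keep the sketch short.
    """
    t = (q - r) / (p - q)             # solves (r + p t)/(1 + t) = q
    a, b, c, d = r, p * t, 1, t
    assert a * d - b * c != 0         # nondegenerate, i.e. an element of PGL_2
    return a, b, c, d

def apply(m, z):
    a, b, c, d = m
    return (a * z + b) / (c * z + d)

# Example: the unique map sending (0, 1, oo) to (5, -2, 7).
m = mobius_through(Fraction(5), Fraction(-2), Fraction(7))
print(apply(m, Fraction(0)), apply(m, Fraction(1)))   # 5, -2
# As z -> oo the value tends to a/c = 7, so the triple (5, -2, 7) is hit by
# exactly one map, illustrating sharp 3-transitivity of PGL_2 over the rationals.
```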
Ray Intersection Tests Finding which points lie inside which objects is not actually that hard. But its use is limited. The ray intersection test is much more powerful. For example, imagine a fast bullet and a collision detection routine with a relatively small target such as a can of soda. Because the can is relatively small and the bullet is incredibly fast, it could very well happen that on successive frames the bullet is on opposite sides of the soda can, but no single frame exists where the bullet is actually inside the can. Did the intersection happen? The only way to find out is to test the ray formed by the successive positions. If the ray intersected the can, the bullet hit its target. Testing for intersection between a ray and a plane is easy (see Figure 22.6). All we have to do is analyze the ray in its parametric form: X = org.x + dir.x*t Y = org.y + dir.y*t Z = org.z + dir.z*t Figure 22.6. Ray-plane intersection test. and the plane with the equation AX + BY + CZ + D = 0 Half a page of algebra can prove that the preceding equations blend into t = - (A*org.x + B*org.y + C*org.z + D) / (A*dir.x + B*dir.y + C*dir.z ) Or, if you prefer a more compact notation (using the fact that (A,B,C) is the normal vector to the plane), you can say t = - (n·org +D) / (n·dir) Obviously, a ray parallel to the vector will return (n·dir)=0, because the dot product of perpendicular vectors equals zero. Thus, we must compute the denominator first, and if it is different from zero, use the numerator to actually compute the t parameter. Remember that negative t values mean that the ray actually pierces the plane, but not in the direction expressed by the parametric equation of the ray. Actually, if we want the intersection to take place between two specific points in the ray, here is the usual routine: compute dir as the vector from one point to the other do not normalize dir use the regular test if the computed t value is in the range from zero to one the segment intersected the plane .end if Testing whether a ray intersects a triangle can be performed in a variety of ways. The most popular is not really a test on its own merit, but a composite of two other tests that have already been discussed. The routine would be as follows: Compute the intersection between the ray and the support plane for the triangle If there is an intersection point, compute if that point is actually inside the triangle Other solutions might be derived using linear algebra, but as far as cost is concerned, none of them offers a significant improvement over this one. Ray-AABB Test One of the best methods for detecting whether a ray intersects an AABB was introduced by Woo. It uses a three-step algorithm to progressively discard candidate planes, and thus performs costly computations on the minimum possible data set. The pseudocode of the algorithm is as follows: From the six planes, reject those with back-facing normals From the three planes remaining, compute the t distance on the ray-plane test Select the farthest of the three distances Test if that point is inside the box's boundaries Note that we are assuming we are outside of the object. If we were inside or normals were flipped for some unknown reason, step one would be negated. Thus, the overall test involves • Six dot products to check the first step • Three point-ray tests • A few comparisons for the last step Incidentally, the 4-step algorithm from the previous section can be used for object-oriented bounding boxes (OOBBs) with minor changes. 
The first three steps remain unchanged. The last one, however, becomes a bit more complex, as we cannot optimize some computations due to non-axial alignment of the box's support planes. Even so, Woo's algorithm would be a good solution for these cases.

Ray-Sphere Test

Let's now analyze the intersection between a ray and a sphere in 3D. Given the ray

X = Rorg.x + Rdir.x*t
Y = Rorg.y + Rdir.y*t
Z = Rorg.z + Rdir.z*t

and the sphere defined by

(X - CenterX)^2 + (Y - CenterY)^2 + (Z - CenterZ)^2 = Radius^2

the intersection test can fail (if the ray misses the sphere), return one single solution (if the ray touches the sphere tangentially), or return two points (for a general collision). Whichever the case, the preceding equations are easily combined to yield

A*t^2 + B*t + C = 0

A = Rdir.x^2 + Rdir.y^2 + Rdir.z^2
B = 2*(Rdir.x*(Rorg.x - CenterX) + Rdir.y*(Rorg.y - CenterY) + Rdir.z*(Rorg.z - CenterZ))
C = (Rorg.x - CenterX)^2 + (Rorg.y - CenterY)^2 + (Rorg.z - CenterZ)^2 - Radius^2

In the preceding equation, A usually equals one because the ray's direction vector is normalized, saving some calculations. Because we have a quadratic equation, all we have to do is solve it with the classic formula

t = (-B ± sqrt(B^2 - 4AC)) / 2A

The quantity B^2 - 4AC is referred to as the discriminant. If it is negative, the square root cannot be evaluated, and thus the ray missed the sphere. If it's zero, the ray touched the sphere at exactly one point. If it's greater than zero, the ray pierced the sphere, and we have two solutions, which are

t = (-B + sqrt(B^2 - 4AC)) / 2A
t = (-B - sqrt(B^2 - 4AC)) / 2A

Ray-Convex Hull

Computing the intersection test between a ray and a convex hull is easy. We loop through the different planes of the convex hull. If we reach the end of the list and all tests were negative (meaning both the first and second point in the ray lie outside the hull), we can stop our search. But what we are interested in are those cases in which a ray's origin has a different sign than the ray's destination. In these cases, we can be sure that a collision took place. Then all we have to do is test the segment and the plane, thus computing the effective intersection point.

Ray-General Object (3DDDA)

Computing the intersection point between a ray and a concave object is a complex issue. We can use the Jordan Curve Theorem, but because there can be several intersections, we need additional information in order to decide which one we should return. Thus, a good idea is to use a 3DDDA approach, starting from the cell closest to the origin of the ray and advancing in the direction of the ray. If one cell is OUTSIDE, we move to the next one with no test at all. When we reach the first cell where we get a DON'T KNOW value, we test for the ray with the triangles of the cell. This way the search space is small, and we can converge quickly to the point in space that marks the first collision between the ray and the general object. Again, the speed of the test has a downside of higher memory consumption.
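As a small illustration of the ray-plane and ray-sphere formulas described above (this is my own sketch, not the book's source code), here is a compact Python version using plain NumPy arrays for the vectors.

```python
import numpy as np

def ray_plane_t(org, direction, n, D):
    """Distance t along org + t*direction to the plane n.X + D = 0, or None if parallel."""
    denom = np.dot(n, direction)
    if abs(denom) < 1e-12:          # ray parallel to the plane
        return None
    return -(np.dot(n, org) + D) / denom

def ray_sphere_t(org, direction, center, radius):
    """Smallest non-negative t where the ray hits the sphere, or None on a miss."""
    oc = org - center
    A = np.dot(direction, direction)            # 1.0 if direction is normalized
    B = 2.0 * np.dot(direction, oc)
    C = np.dot(oc, oc) - radius * radius
    disc = B * B - 4.0 * A * C                  # the discriminant
    if disc < 0.0:
        return None                             # ray misses the sphere
    root = np.sqrt(disc)
    t0, t1 = (-B - root) / (2.0 * A), (-B + root) / (2.0 * A)
    return t0 if t0 >= 0.0 else (t1 if t1 >= 0.0 else None)

# Example: a ray from the origin along +X against a unit sphere at (5, 0, 0).
o, d = np.zeros(3), np.array([1.0, 0.0, 0.0])
print(ray_sphere_t(o, d, np.array([5.0, 0.0, 0.0]), 1.0))   # 4.0
print(ray_plane_t(o, d, np.array([1.0, 0.0, 0.0]), -2.0))   # 2.0 (the plane x = 2)
```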
Fermentation Kinetics

1. Fermentation Kinetics - Alfred Carlson
2. Fermentation Kinetics. There are 4 things to understand about cell growth: cell growth is exponential; the specific growth rate depends on environmental conditions; an empirical equation called the Monod equation is used to relate the specific growth rate to the substrate concentration in the media; production of products.
3. Scenario. A 1 L reactor containing 2 g/L of glucose in a minimal media was inoculated with 1 cc of an overnight shake flask culture. The OD of the shake culture was measured to be 7. For the next 16 hours, samples were taken from the reactor every 2-3 hours and the OD was measured in a spectrophotometer at 600 nm. The results are on the next slide.
4. Bioreactor Sampling for OD
5. Results. Time (hr) / OD reading: 0, ND (not determined); 2, 0.007; 4, 0.02; 7, 0.06; 10, 0.18; 12.5, 0.45; 15, 1.45.
6. What you do with this data. The "specific growth rate" is a parameter that characterizes the cell growth and is another way of measuring the "doubling time". If the cells are growing right, they will have a constant specific growth rate that falls in a "normal range".
7. Steps - Plot. Plot the data on a semilog plot of (log) OD vs t; this gives you a big-picture view of the data.
8. Steps - Regress. Fit the data using an exponential growth model: OD = ODo exp{mt}.
9. Fitting to Monod model. Use Monod, substrate-limited kinetics to model the growth of these cells. Predict the entire growth curve and predict the glucose depletion curve for the cells based on a yield factor of 0.5 g cells/gram glucose.
10. Monod model. The Monod model accounts for the fact that the cells' specific growth rate decreases (doubling time increases) when the concentration of food is low. This is the most common growth model: m = mmax*S/(Ks+S).
11. Using the Monod model. Either you have to solve 2 simultaneous differential equations, or you have to do periodic update material balances. Cell balance: dX/dt = mX. Substrate balance: dS/dt = -mX/Yx/s. Remember: m = mmax*S/(Ks+S).
12. Update method
13. Model graph
14. Glucose depletion
15. Lecture slides: doubling time vs specific growth rate; temperature effect on specific growth rate; ribosome-limited growth; product expression models.
16. Cell Doubling. Cells grow by doubling at regular intervals.
17. Culture Kinetics - Exponential Growth. Specific growth rate; N, dN/dt: 1, 1; 2, 2; 4, 4; 8, 8.
18. Exponential growth. As long as the doubling time remains constant, the specific growth rate (m) will remain constant.
19. N, Cell mass, OD. For constant-volume reactors: the number concentration of cells n (cells/L) is proportional to N (n = N/V); the cell mass is proportional to N, and OD is proportional to n. All measurements of cell growth follow the exponential law.
20. Cell Growth Data
21. Doubling Time and Specific Growth Rate. The specific growth rate is just another way of expressing the doubling time. During a doubling time N/No = 2, so ln(N/No) = ln(2) = 0.693 = m*td, and td = 0.693/m.
22. Culture Kinetics. Cell type / doubling time / growth rate (hr-1): bacteria, 20-90 min, ~0.3-2.1; yeast, 45-120 min, ~0.2-1.0; mold, 4-8 hours, ~0.1-0.2; mammalian cells, 20-48 hours, ~0.05. These are just rough numbers, not rules.
23. Example. A certain yeast cell is said to have a doubling time of 90 minutes. What is its specific growth rate in hr-1? A 1 cc inoculum of the same cells at an OD of 5 (600 nm) is used to inoculate 1 L of fresh media. How long will it take for the new culture to reach an OD of 1? Of 5?
24. Answers. m = 0.693/td; td = 1.5 hr, so m = 0.693/1.5 = 0.462 hr-1.
25. Answers. ln(OD/ODo) = mt; ODo = 5/1001 ~ 0.005; t1 = ln(1/0.005)/0.462 = 11.5 hours; t5 = ln(5/0.005)/0.462 = 14.95 hours.
26. Environmental Conditions. The specific growth rate is the fastest the cells can double on that particular media at that particular temperature.
27. Growth vs Temp
28. pH effect
29. pH effect. Optimum growth below 4: acidophiles. Optimum growth between 6 and 8: mesophiles. Optimum growth above 8: alkaliphiles.
30. Maximum growth rate determined by ribosomes. In order for cells to double regularly they need to duplicate their duplication system in the doubling time. Only 16 amino acids per second can be added to a growing protein chain (ribosome limit).
31. Limits to Cell Growth Rate. For cells to grow they must reproduce themselves. This requires (at minimum) the reproduction of the reproduction system (PSS).
32. Limits to Cell Growth Rate. Translation is rate limiting; the time required for ribosome self-reproduction is the minimum doubling time.
33. Limits to Cell Growth Rate. Reproduction time = (number of amino acids per ribosome) / (rate of amino acid addition to the protein chain).
34. Limits to Cell Growth Rate. 10,000 amino acids per ribosome; 15-20 amino acids added per second; ~600 seconds (10 minutes) to reproduce a ribosome. 10 minutes for ribosomes to reproduce ribosomes.
35. Limits to Cell Growth Rate. Production of other proteins slows down ribosome reproduction; the minimum need is 20,000 aa/ribosome = 20 minute doubling time (observed = 17 minutes or so). More complex cells = more aa addition = slower growth.
36. Limits to Cell Growth Rate. On poor media cells must make more proteins for getting and processing food -- slower growth. When cells produce product, more protein is needed for these processes -- slower growth.
37. Limits to Cell Growth Rate. Net effect of engineering on cells (ideal): ribosomes diverted from required functions; growth rate cut proportional to 2 x fraction of foreign protein produced -- at 50% protein, growth rate = 0.
38. Cell Growth Modeling. When there is no substrate, cells can't double and m goes to zero. No matter how much substrate, cells can only double so fast (m goes to mmax).
39. Monod Equation. Growth equation (Monod equation).
40. Stationary Phase. Growth equations with substrate exhaustion. Cells (rate of change of cell concentration): dX/dt = mX = mmax*S/(Ks + S) * X. Substrate (rate of change of substrate concentration): dS/dt = -(1/Yx/S) dX/dt = -(1/Yx/S) * mmax*S/(Ks + S) * X.
41. Substrate depletion in batch culture
42. Batch Reactor Kinetics - Real data
43. Cell growth kinetics
44. Batch Reactor Kinetics
45. Products. Metabolites (stoichiometric). Example: how much ethanol can a cell make from a gram of glucose? (Facts: hardly any cell mass, therefore follow the glycolytic pathway to ethanol; 2 moles per mole glucose.) 1 gram = 0.0055 moles glucose, giving 0.0110 moles ethanol x 46 g/mole = 0.5 grams. The rest is CO2. Rate of ethanol production = rate of glucose consumption x 2.
46. Products. Non-metabolites (2 broad types). Growth-associated products. Example: cell components; cells must be growing to produce product: dP/dt = b dX/dt = bmX.
47. Products. Non-metabolites (2 broad types). Non-growth-associated products. Example: penicillin: dP/dt = aX.
48. Overall Production Coefficient. Production coefficient: dP/dt = (bm + a) X, i.e. dP/dt = qpX.
49. Summary. Cell growth kinetics are described by either the doubling time or the specific growth rate. Monod kinetics take into account that cells stop growing when the substrate runs out. Ultimately, cells grow at a certain rate because of what the ribosomes are doing. Product expression kinetics are either related to the consumption rate of precursors or to empirical patterns of expression called growth-related or non-growth-related expression.
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Matheology § 190 Replies: 15 Last Post: Jan 14, 2013 3:35 PM Messages: [ Previous | Next ] Virgil Re: Matheology � 190 Posted: Jan 12, 2013 4:00 PM Posts: 6,970 Registered: 1/6/11 In article WM <mueckenh@rz.fh-augsburg.de> wrote: > Matheology § 190 > The Binary Tree can be constructed by aleph_0 finite paths. > 0 > 1, 2 > 3, 4, 5, 6 > 7, ... Finite trees can be built having finitely many finite paths. A Complete Infinite Binary Tree cannot be built with only finite paths, as none of its paths can be finite. > But wait! The Binary Tree has aleph_0 levels. At each level the number > of nodes doubles. We start with the (empty) finite path at level 0 and > get 2^(n+1) - 1 finite paths within the first n levels. The number of > all levels of the Binary Tree is called aleph_0. That results in > 2^(aleph_0 + 1) - 1 = 2^aleph_0 finite paths. Wrong! At any finite level one has a finite number of finite paths but in the Complete Infinite Binary Tree one has no finite paths at all but does have 2^aleph_0 INfinite paths. > The bijection of paths that end at the same node proves 2^aleph_0 = > aleph_0. No two paths end in the same node in any binary tree, and in the Complete Infinite Binary Tree no path "ends" at all. > This is the same procedure with the terminating binary representations > of the rational numbers of the unit interval. Each terminating binary > representation q = 0,abc...z is an element out of 2^(aleph_0 + 1) - 1 > = 2^aleph_0. But so are all the nonterminating ones. And the terminating ones only make up aleph_0 of that 2^aleph_0 total. > Or remember the proof of divergence of the harmonic series by Nicole > d'Oresme. He constructed aleph_0 sums (1/2) + (1/3 + 1/4) + (1/5 + ... > + 1/8) + ... requiring 2^(aleph_0 +1) - 1 = 2^aleph_0 natural numbers. While I can see a need for aleph_0 natural numbers in that proof, and in others needing all the natural numbers. Neither I nor anyone else outside of WMytheology can see any need for more naturals than exist in any proof. > If there were less than 2^aleph_0 natural numbers (or if 2^aleph_0 was > larger than aleph_0) the harmonic series could not diverge and > mathematics would deliver wrong results. > Beware of the set-theoretic interpretation which tries to contradict > these simple facts by erroneously asserting aleph_0 =/= 2^aleph_0. Beware of those so ignorant of mathematics, like WM, that they claim to be able to surject any set onto its power set. There is an easy bijection between the set of paths of a complete infinite binary tree and the set of all subsets of |N: For each path, the set of levels at which it branches left will be a subset of |N and each such subset will correspond to a unique path. So that if WM wishes to establish his claims, he must PROVE that aleph_0 = 2^aleph_0, or show a bijection between |N and 2^|N, which he often claimed but has never proved (shown a bijection for).
Kinetic energy flow in Nb(400 A MeV) + Nb: evidence for hydrodynamic compression of nuclear matter (1984)

A kinetic-energy-flow analysis of multiplicity-selected collisions of 93Nb(Elab = 400A MeV) + 93Nb is performed on the basis of the nuclear fluid dynamical model. The effects of finite particle numbers on the flow tensor are explicitly taken into account. Strong sidewards peaks are predicted in dN/dcosθF, the distribution of event-by-event flow angles. This is in qualitative agreement with recent data from the "Plastic Ball" electronic detection system. Cascade simulations fail to reproduce the data.

Gerd Buchwald, Gerhard Graebner, J. Theis, Joachim A. Maruhn, Walter Greiner, Horst Stöcker, K. A. Frankel, Miklos Gyulassy

Intranuclear cascade calculations and fluid dynamical predictions of the kinetic energy flow are compared for collisions of 40Ca + 40Ca and 238U + 238U. The aspect ratio, R13, as obtained from the global analysis, is independent of the bombarding energy for the intranuclear cascade model. Fluid dynamics, on the other hand, predicts a dramatic increase of R13 at medium energies Elab ≲ 200 MeV/nucleon. In fact, R13(Elab) directly reflects the incompressibility of the nuclear matter and can be used to extract the nuclear equation of state at high densities. Distortions of the flow tensor due to few-nucleon scattering are analyzed. Possible procedures to remove this background from experimental data are discussed.
Field Equations for Localized Individual Photons and Relativistic Field Equations for Localized Moving Massive Particles Authors: André Michaud Calculation of the energy of localized electromagnetic particles by integration of energy fields mathematically deemed spherically isotropic and whose density is radialy decreasing from a lower limit of λα/2π to an infinite upper limit (∞), allowing the definition of discrete local electromagnetic fields coherent with permanently localized moving particles. Comments: 21 pages Download: PDF Submission history [v1] 17 Jul 2009 Unique-IP document downloads: 165 times Add your own feedback and questions here: You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted as unhelpful. comments powered by
On Rate Allocation Policies with Minimum Cell Rate Guarantee - IEEE/ACM Trans. Networking

"We develop a new class of asynchronous distributed algorithms for the explicit rate control of elastic sessions in an integrated packet network. Sessions can request for minimum guaranteed rate allocations (e.g., MCRs in the ATM context), and, under this constraint, we seek to allocate the max-min fair rates to the sessions. We capture the integrated network context by permitting the link bandwidths available to elastic sessions to be stochastically time varying. The available capacity of each link is viewed as some statistic of this stochastic process (e.g., a fraction of the mean, or a large deviations Equivalent Service Capacity (ESC)). For fixed available capacity at each link, we show that the vector of max-min fair rates can be computed from the root of a certain vector equation. A distributed asynchronous stochastic approximation technique is then used to develop a provably convergent distributed algorithm for obtaining the root of the equation, even when the link flows and the ..." (Cited by 5)

In Proceedings of the IEEE GLOBECOM, 1997: "An important concept in the ABR service model is the minimum cell rate (MCR) guarantee as well as the peak cell rate (PCR) constraint for each connection. Because of the MCR and PCR constraints, the classical max-min policy no longer suffices to determine rate allocation since it does not support either the MCR or the PCR." (Cited by 4)

2006: "We consider a distributed stochastic approximation algorithm that computes max-min fair rate allocations to several elastic flows sharing a network (an elastic flow is one that can adapt its sending rate to the rate that the network can provide it). The flows are assumed to traverse a fixed sequence of links in the network. The available capacities at the network links are modeled as stochastic processes. Each session can request a minimum rate guarantee, hence we work with a notion of max-min fairness with minimum rates. A major part of this paper is the proof that the rate allocation computed by the stochastic approximation iterations converges to max-min rate." (Cited by 2)

1997: "An important concept in the available bit rate (ABR) service model as defined by the ATM Forum is the minimum cell rate (MCR) guarantee as well as the peak cell rate (PCR) constraint for each ABR virtual connection (VC). Because of the MCR and PCR requirements, the well-known max-min fairness policy no longer suffices to determine rate allocation in the ABR service model. We introduce a network bandwidth assignment policy, MCRadd, which supports both the MCR and PCR requirements for each ABR virtual connection. A centralized algorithm is presented to compute network-wide bandwidth allocation to achieve this policy. Furthermore, an explicit-rate (ER) based ABR switch algorithm is developed to achieve the MCRadd policy in the distributed ABR environment and its convergence proof is also given. The performance of our ABR algorithm is demonstrated by simulation results based on the benchmark network configurations suggested by the ATM Forum."
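All of these abstracts revolve around computing max-min fair rates subject to per-session minimum (MCR) and maximum (PCR) constraints. The sketch below illustrates the core idea on a single link using a common "water level"; it is a generic illustration with invented function and variable names and made-up example numbers, not the MCRadd policy or the distributed stochastic-approximation algorithms described in these papers.

```python
def maxmin_rates(capacity, mcr, pcr, iters=60):
    """Single-link max-min fair rates with per-session MCR/PCR bounds.

    Each session i gets clip(level, mcr[i], pcr[i]) for one common
    'water level' chosen so the link capacity is not exceeded.
    Assumes sum(mcr) <= capacity and mcr[i] <= pcr[i] for every i.
    """
    def used(level):
        return sum(min(p, max(m, level)) for m, p in zip(mcr, pcr))

    lo, hi = 0.0, max(pcr)
    if used(hi) <= capacity:              # capacity is not binding: everyone at PCR
        return [float(p) for p in pcr]
    for _ in range(iters):                # bisect on the water level
        mid = (lo + hi) / 2.0
        if used(mid) <= capacity:
            lo = mid
        else:
            hi = mid
    return [min(p, max(m, lo)) for m, p in zip(mcr, pcr)]


# Hypothetical example: a 10 Mb/s link shared by three sessions.
print(maxmin_rates(10.0, mcr=[1.0, 0.0, 4.0], pcr=[8.0, 3.0, 8.0]))
# -> approximately [3.0, 3.0, 4.0]
```

Bisection works here because total usage is nondecreasing in the water level; the distributed algorithms in the abstracts reach an analogous fixed point across many links without a central solver.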
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=968176","timestamp":"2014-04-18T21:06:00Z","content_type":null,"content_length":"23892","record_id":"<urn:uuid:156a52ff-296a-4723-a5ab-369cd4e6bb00>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
What are some proofs of Godel's Theorem which are *essentially different* from the original proof? up vote 21 down vote favorite I am looking for examples of proofs of Godel's (First) Incompleteness Theorem which are essentially different from (Rosser's improvement of) Godel's original proof. This is partly inspired by questions two previously asked questions: (Proofs of Gödel's theorem) (When are two proofs of the same theorem really different proofs) To give an example of what I mean: The Godel/Rosser proof (see http://www.jstor.org/pss/2269059 for an exposition) shows that any consistent sufficiently strong axiomatizable theory is incomplete. The proof uses a substantial amount of recursion theory: the representability of primitive recursive functions and the diagonal lemma (roughly the same as Kleene's Recursion Theorem) are essential ingredients. The second incompleteness theorem - that no consistent sufficiently strong axiomatizable theory can prove its own consistency - is essentially a corollary to this proof, and a rather natural one at that. On the other hand, in 2000 Hilary Putnam published (http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.ndjfl/1027953483) an alternate proof of Godel's first incompleteness theorem, due to Saul Kripke around 1984. This proof uses much less recursion theory, instead relying on some elementary model theory of nonstandard models of arithmetic. The theorem proven is slightly weaker, since Kripke's proof requires $\Sigma^0_2$-soundness, which is stronger than mere consistency (although still weaker than Godel's original assumption of $\omega$-consistency). Kripke's proof is clearly sufficiently different from the Godel/Rosser proof that it deserves to be considered a genuinely separate object. What makes the difference seem really impressive, at least to me, is that Kripke's proof yields a different corollary than that of Godel/Rosser. In a short paragraph, Putnam shows (and I do not know whether this part of his paper is due to Kripke) that Kripke's argument proves that there is no consistent finitely axiomatizable extension of $PA$. This is not a result which I know to follow from the Godel/Rosser proof; moreover, the Second Incompleteness Theorem, which is a corollary to Godel/Rosser's proof, does not seem easily derivable from Kripke's proof. Motivated by this, say that two proofs of (roughly) the same theorem are essentially different if they yield different natural corollaries. Clearly this is a totally subjective notion, but I think it has enough shared meaning to be worthwhile. My main question, then, is: 1) What other essentially different proofs of something resembling Godel's First Incompleteness Theorem are known? In other words, is there some other proof of something close to "every consistent axiomatizable extension of $PA$ is incomplete" which does not yield Godel's Second Incompleteness Theorem as a natural corollary? I am especially interested in proofs which don't yield the nonexistence of consistent finitely axiomatizable extensions of $PA$, either, and in proofs which do yield some natural corollary. I don't particularly care about the precise version of the First Incompleteness Theorem proved: if it applies to systems in the language of second-order arithmetic, if it assumes $\omega$-consistency, or if it only applies to systems stronger than $ATR_0$, say, that's all the same to me. 
However, I do require that the version of the incompleteness theorem proved apply to all sufficiently strong systems with whatever consistency property is needed; so, for example, I would not consider the work of Paris and Harrington to be a good example of this. The only other potential example of such an essentially different proof that I know of is the proof(s) by Jech and Woodin (see http://andrescaicedo.files.wordpress.com/2010/11/2ndincompleteness1.pdf ), but I don't understand that proof at a level such that I would be comfortable saying that it is in fact an essentially different proof. It seems to me to be rather similar to the original proof. Perhaps someone can enlighten me? Of course, entirely separate from my main question, my characterization of the difference between the specific two proofs of the incompleteness theorem mentioned above may be incorrect. So I'm also interested in the following question: 2) Is it in fact the case that Kripke's proof does not yield Second Incompleteness as an natural corollary, and that Godel/Rosser's proof does not easily yield the nonexistence of a consistent finitely axiomatizable extension of PA as a natural corollary? lo.logic model-theory alternative-proof soft-question Based on the nature of my question, I've made it community wiki and added the "big-list" tag. Please let me know if either is inappropriate. – Noah S Aug 4 '11 at 3:35 6 I think the "big-list" tag is overly optimistic, even if "big" means "at least two" . Gerhard "Not Always Optimistic About Big" Paseman, 2011.08.03 – Gerhard Paseman Aug 4 '11 at 3:57 Fair point, I've removed it. – Noah S Aug 4 '11 at 4:04 3 Godel's original proof does not use full omega consistency. He only needs a small fragment, namely that if the axiom system proves a program P halts, then P actually halts. This is sigma-0-1 soundness. – Ron Maimon Aug 4 '11 at 4:43 Kripke’s proof is interesting, however it only works for extensions of PA in the language of PA, which is a rather uninteresting class of theories. It does not apply to fragments of PA, and it 2 does not apply to theories whose language includes objects that are not integers (such as ZFC or ACA_0). That’s not what I would call an alternate proof of Gödel’s first incompleteness theorem, but rather of its very special case. – Emil Jeřábek Aug 5 '11 at 15:45 show 3 more comments 6 Answers active oldest votes The question hinges on the best answer to a previous question, when are two proofs the same? I believe that the only satisfactory answer to this earlier question is by considering the construction implicit in the proof. Two proofs are the same when they give the same construction. To isolate a construction, one must carefully distinguish between a proof of statements of the form "There exists..." and "For all...", and one must also distinguish between a statement and its double negation. For the purposes of practically distinguishing proofs, one does not need to be so pedantic most of the time. For example, the "topological proof" of the infinity of primes is essentially the same as Euclid's proof, because given a collection of primes, when unpacked to its construction, it builds the same number. One must note here that the problem of deciding when two programs produce the same answers is undecidable, even more so when the programs have access to oracles, which is necessary sometimes. Further complicating matters is the fact that certain programs are only superficially different to the eye, but are essentially the same. 
Nevertheless, I think this is a useful heuristic, which has a chance of having a precise counterpart. The construction in Godel's theorem is often obscured by the heavy coding involved. Since today, coding is standardized in computer science, I prefer to state the construction explicitly as a computer program, instead of as a coded statement of first-order logic. Two proofs are the same when they construct the same computer program. There are exactly three types of unpacked proofs of Godel's theorem and related results, as far as I know. To save typing, a program P "runs" iff it does not halt. TYPE I: self referential pi-0-1 statements (statements about the non-halting of a certain computer program) GODEL_1: To prove Godel's theorem Godel's way (as clarified by Turing and Kleene), given an axiomatic system S whose deduction system is computable, you construct the program GODEL which does the following: 1. It prints its own code into a variable R. (This is possible since you can write quines, and make quining into a subroutine) 2. It deduces all consequences of S, looking for a proof in S of the statement "R does not halt" ("R runs").(This is a statement of arithmetic of the form "forall n F^n(R) is non-halting", where F is a primitive recursive instruction set for any computer you care to code up). 3. If it finds this statement, it halts. The statement G--- "GODEL runs" is true precisely when S does not prove it. So G states "S does not prove G". The self reference is obvious in the first step of the program. This is equivalent to Godel's original construction. From the construction, you can read off the requirements on the axiom system. In order to be sure that GODEL halting leads a contradiction, the axiom system has to be able to prove every statement of the form "program P leads to memory state M after time t" for all integer times t, and for all programs P. Each of these statements is a finite computation, a sigma-0-1 statement, so the axiom system must be able to prove all true sigma-0-1 statements (or even just a subset of these rich enough to allow a computer to be embedded in the language). GODEL_2: Note that if S is inconsistent, it proves every statement, including "GODEL runs", so "S is inconsistent" implies "GODEL halts". "GODEL halts" also implies "S is consistent", if S can prove that it is sigma-0-1 complete. So the unprovability of "GODEL halts" is tantamount to the unprovability of consis(S). But S can (falsely) prove "GODEL halts", without any contradiction, so long as GODEL never actually halts. This is saying that it is possible for an axiom system to prove its own inconsistency without actually being inconsistent, just by telling lies about computer programs. The assumption that S is omega consistent (or even just sigma-0-1 sound), means that it does not prove "P halts" unless P actually halts. So the same construction of GODEL proves the second incompleteness theorem as stated by GODEL, an omega-consistent system (or a sigma-0-1 sound system) cannot prove its own consistency. The proofs of Godel's theorem which go through the halting problem all give this construction. ROSSER: The program ROSSER is just a slight modification of GODEL. ROSSER does this: 1. prints its code into a variable R 2. looks in S for a proof of 1. "R prints to the screen" or 2. "R does not print to the screen". 3. If it finds 1. It halts without printing, if 2, it halts after printing "hello". 
Now note that a consistent S cannot prove 1 nor 2, because either way, there is a halting computation that contradicts the statement. So a consistent S is incomplete. If we call the statement "ROSSER prints to the screen" by the name R, and its negation 2 by the name "notR", then "ROSSER does not print" is iff equivalent to "S proves R before notR", which is the standard gloss for ROSSER's construction. ROSSER's statement is different than GODEL statement, because the statement "ROSSER does not print" is not equivalent to the statement "S is consistent". But since the slightly different statement "ROSSER does not halt" actually is equivalent to "S is consistent", ROSSER's construction includes GODEL's construction in a simple way. Here are some simple modifications which also prove Godel's theorem: PROOF_LENGTH: Given a provable statement of length L bytes in an axiomatic system S, there is no computable function of L, f(L), which bounds the length of the proof of L (relatively short theorems can have enormously long proofs). construct PROOF_LENGTH to do the following: 1. Print its own code into a variable R 2. look through all deductions of S of length up to f(|R|) bytes for a proof of "R prints ok" 3. If you finds it, halt without printing 4. If not, print "ok" and halt. In this case, the construction is clarified with a gloss: suppose f(L) exists, then you can decide the halting problem by running through all proofs of length f(|"P halts"|) for a proof of "P halts". If you don't find it, then P doesn't halt. This is also a proof of Godel's theorem, since if S is complete, then it will decide all statements of the form "P halts", and then you can compute the function f which is the length of the proof of the statement. But the program constructed is essentially the same as GODEL (actually ROSSER, in the version I gave here). But the explicit construction does give you an important corollary: if you assume S is consistent, then just by the form of PROOF_LENGTH, you can see that PROOF_LENGTH has to print "ok" independent of the function f, since if it does not, this means it has found a proof that it didn't. So the assumption of consis(S) will collapse this f dependent enormously long proof to a short f-independent proof of the same statement. This construction is just a finitary version of the original GODEL program, and this theorem is called the GODEL speedup theorem. The assumption of consis(S) reduces the length of proofs of certain statements by an amount greater than any computable function of the length. LOB: Given an axiomatic system S, consider the program LOB which, given statement A, does the following: 1. Prints its code into R 2. Deduces consequences of S, looking for "R halts implies A" 3. if it finds this theorem, it halts. "LOB halts" only if S proves "LOB halts implies A", and then S also proves "LOB halts", so it proves A by modus ponens, so LOB halts iff A. But "LOB halts" is equivalent to "(S proves LOB halts) implies A", and therefore to "(S proves (S proves A) implies A)". Therefore, S proves "S proves ((S proves A) implies A) iff (S proves A)". This theorem can be repackaged into an infinite sequence of ever more obscure statements, by replacing "LOB halts" with its different equivalent forms (some of which contain itself), and eventually closing the recursion. The full set of Lob statements is generated by a simple recursive grammar. LOB's theorem does not prove Godel's theorem, but it extends it. The proof is of a similar kind. 
TWEEDLEDEE and TWEEDEDUM: consider two programs as follows: 1. prints TWEEDLEDEE's code into ME, and TWEEDLEDUM's code into HE 2. looks for 1. ME runs 2. HE runs 3. if it finds 1. it halts, if it finds 2. it prints "tweedle-dee-dee!" and goes into an infinite loop. 1. prints TWEEDLEDUM's code into ME, and TWEEDLEDEE's code into HE 2. looks for 1. ME runs 2. HE runs up vote 3. If it finds 1. it halts, if it finds 2. it prints "tweedle-dee-dum!" and goes into an infinite loop. 16 down vote These give a kind of splitting theorem for axiomatic systems which satisfy the hypotheses of GODEL's theorem. "DEE runs" and "DUM runs" are both unprovable in S, since proving either one leads to a contradiction. "consis(S)" implies "DEE runs & DUM runs", and conversely "DEE runs & DUM runs" implies consis(S). So if S is inconsistent, then one of TWEEDLEDEE or TWEEDLEDUM has to halt But which one? This is not decidable in S. That is, S+"DEE runs" is a theory which is strictly stronger than S, since it proves "DEE runs", but is weaker than S+"consis(S)" because it cannot prove "DUM runs". To prove this, note that S proves "DEE runs or DUM runs", i.e. "DEE halts implies DUM runs". So if S also proved "DEE runs implies DUM runs", it would prove plain old "DUM runs", which is The reason for the spurious print statement is just to make absolutely sure that the programs DEE and DUM, which are so similar, don't end up identical, which would wreck the proof (this subtlety is hard to see if you don't unpack the construction into an explicit program, but it is also easy to avoid by using different variable names, or extra spaces, or whatever). This construction is strictly stronger than GODEL's. It shows that for any sound system S, the implication "DEE runs implies DUM runs" is unprovable. The construction provides a proof of Godel's theorem, although it is similar to ROSSER (The statement DEE halts is provably NOT the negation of "DUM halts", that's the whole point) I wondered if this construction was in the literature for a long time. I recently ran across it in "The Realm of Ordinal Analysis" by Michael Rathjen (proposition 2.17 on page 14). He couldn't find it in the rest of the literature, but the methods are sufficiently well known (and sufficiently close to Rosser's) to make it folklore. But, as emphasized by Rathjen, the result is significantly stronger than the usual theorems. TWEEDLE_N: To push this further into uncharted territory, consider the infinite sequence of programs TWEEDLE_N (where N is an integer) 1. loops over M, printing the code of TWEEDLE_M into a variable R(M) 2. Deduces consequences of S, looking for a theorem of the form "TWEEDLE_M runs" for some M 3. If it finds this theorem, and M=N, it halts. If M != N, it goes into an infinite loop. It is easy to see that either all TWEEDLE-N's run, or exactly one of them halts, something which S can prove, because steps 1+2 (which must be run simultaneously in two threads) are the same for all the programs. But S cannot prove that any single one of them runs. To prove this, note that if there is an effective list of programs A_N (like the TWEEDLE's), you can make a program MERGE(A_N) which generates and runs all of the programs on parallel threads and halts exactly when any one of them does. Then S proves that either TWEEDLE_k runs or MERGE(A_r (r!=k)) runs. That is, TWEEDLE_k and MERGE(all the others) form a TWEEDLEDEE/ TWEEDLEDUM pair. This means that it cannot prove that one runs implies the other runs. 
The result is that for any computable partition of the TWEEDLE's into two disjoint subsets A and B, S cannot prove that the TWEEDLE_A's run implies the TWEEDLE_B's run, although consis(S) proves that all the TWEEDLE's run. The theories "S+all the TWEEDLE_A's run" are sound theories strictly between S and S+consis(S) in terms of Pi-0-1 content--- they prove new correct theorems about the non-halting of computer programs, but they are weaker than S+consis(S) (and weaker than each other in a way described by the partial order of set containment). I like this theorem, because it is a proof which is very dastardly to translate to more traditional logic language. I think that computational language is more natural for these results. I could go on making more complicated self-referential proofs (and I think this is an interesting thing to do, they all prove somewhat different things), but I will stop here to consider non self-referential proofs, which work at a higher level of the arithmetic hierarchy. TYPE II: these prove that there exist total functions which are not provably total. The statements in this case are pi-0-2, statements about the totality of some computable function. FASTER_GROWTH: Given axiomatic system S, consider all computable functions f from the integers to the integers that are proven to be total (that is, which halt for all arguments). Now construct the program FASTER_GROWTH(n) which does the following: 1. Lists the first n functions which are provably total, and computes their value at position n. 2. returns the biggest value at n, plus 1. If S is sound for pi-0-2 statements, then there are infinitely many provably total functions, and FASTER_GROWTH halts at every input. Further, FASTER_GROWTH is eventually bigger than any function provably total in S. So "FASTER_GROWTH is total" is an unprovable true theorem. The function FASTER_GROWTH is constructed entirely from other functions which are not equal to itself. The requirement on the theory is that when it proves a function is total, it is telling the truth, otherwise FASTER_GROWTH will get stuck in an infinite loop at some point. This is the pi-0-2 soundness. The pi-0-2 soundness proofs generally construct this type of thing, when The most common abbreviated form of this argument runs as follows: given an axiomatic system S, diagonalizing against all provably total recursive functions in the theory gives a total recursive function which the theory cannot prove is total. This argument is folklore. Type II Godel theorems provide a different way to strengthen the axiom system, by adding the statement of the totality of FASTER_GROWTH. This statement implies consistency of S, but is strictly stronger, since consistency is not enough to ensure FASTER_GROWTH is total (you need some soundness). TYPE III: nonconstructive theorem about a large class of statements, which do not provide an explicit unprovable statement, and so cannot be used to step up the heirarchy of systems. BOOLOS: There is no computer program which will output the true answer to statements of the form "Integer N can be named using k bytes or less worth of symbols of Peano Arithmetic" write program BOOLOS: 1. loops over all integers N, looking for the first N which requires more than M symbols to name, where M is the length of the symbols describing the output of BOOLOS, translated to arithmetic. 2. prints out N. The contradiction means that BOOLOS does not work. 
Boolos is not so great, because it isn't focused on a particular system, but it's the same basic idea as... CHAITIN: Which replaces the notion of definability with Kolmogorov complexity, which is definability by an algorithm. Write the program CHAITIN to do the following: 1. List all proofs of S, looking for "the Kolmogorov complexity of string Q is greater than N" (where N is the length of CHAITIN) 2. Print string Q Now if S ever proves that the Kolmogorov complexity of any string is greater than the length of CHAITIN, then CHAITIN will make S into a sigma-0-1 liar (inconsistent). This proves that there is a completely effective bound on the maximum provable Kolmogorov complexity of any string. (this was given previously as an answer). There is an infinite list of true sentences of the form "The Kolmogorov complexity of Q is N", since there are infinitely many strings and only finitely many programs of length less than N. But only a finite number of these theorems get decided by any given axiom system S. This is a less explicit proof, because you can't be sure which strings are unprovably complex, so there isn't a natural axiom to add on to strengthen the system. The statement "the Kolmogorov complexity of Q is N", translated to Arithmetic, is forall P, there exists N, ((F^N(P) is a halted state with output Q) implies |P|>n), so that it's Pi-0-2. Now to identify which proofs is what type: • Self-referential sentence proofs--- type I (Godel, Rosser, Kleene, Post, Church, Turing, Smullyan, popular works) • Epsilon-naught induction proofs--- these are type II, but specific to Peano Arithmetic. The general version is the one presented above (Kripke's proof, and Paris-Harrington, Goodstein, Hydra). The version they give is that the limit ordinal of all recursive provable ordinals is recursive but not provably recursive, but this is a type II argument. • Jech/Woodin Set theory model proof--- despite all its elegance and generality, the proof is type 1 when formulated computationally. I will elaborate below • Chaitin/Boolos--- type III. I don't know any other type IIIs. By the way, I agree with Sergei that finite axiomatizability (although emphasized by Putnam for some reason) is not so important. That property depends on exactly how you choose your axioms. The completeness theorem is strong enough to get a general computation from only finitely many axioms. The proof of impossibility of finite axiomatization (when it holds) is that the theory is self-reflecting, it can prove the consistency of any finite fragment (this is true in PA, because PA proves the consistency of induction restricted to level N in the Arithmetic Hierarchy), and the axiomatization is weak, in that finitely many axioms are stuck in some finite fragment no matter how many times you use them. Self-reflection is interesting, but not that relevant for the incompleteness theorem. To see that the Jech/Woodin proof is really a type I proof in disguise, it is important for the purpose of unpacking the construction to supplement the set theory with an effective procedure to give computational meaning to the models. This is just first order logic and the completeness theorem, as Andreas Caicedo states in the introduction. (To be continued --- I am too tired to avoid wrong statements--- sorry for the excessive length) Your criterion, that two proofs are the same if they give the same construction, is very restrictive. 
Consider, for example, the well-known proof that there are infinitely many primes, 2 the proof where you multiply the first $n$ primes, add 1, and find a prime factor of the result. Now modify it by changing "add 1" to "subtract 1". The modification results in finding a different prime. Yet most mathematicians would not consider it a really different proof. You probably intended something like "the same construction up to silly changes", but it's not easy to define silliness. – Andreas Blass Aug 5 '11 at 15:02 Of course you are right. The way I think of silly changes is by the complexity of the proof required to prove statement II given statement I and vice versa. For the example you gave, I would be happy thinking of them as (slightly)different proofs because to get from one to the other is not much simpler than proving either. There is a measure of closeness defined by how long/complex (axiom strength wise) the equivalence between the constructions is. – Ron Maimon Aug 5 '11 at 18:54 1 An awesome summary! – Alon Amit Aug 5 '11 at 22:29 I still am having some trouble with the full computational interpretation of Jech/Woodin. The simpler consequences are easy enough to interpret as standard type I arguments, but there is one theorem which is completely different: there is no descending infinite sequence of models of set theory. I had a similar proof for the well-foundedness of the collection of theories stronger than PA under the ordering A is stronger than B when A proves the consistency of B. But this theorem has a more involved proof than type I arguments. I'll try to finish Jech Woodin today. – Ron Maimon Aug 6 '11 at 22:02 2 The Jech/Woodin proof has an important ancestor, due to Kreisel, who came up with the first model-theoretic proof of the second incompleteness theorem in the 1960's (see, e.g., logika.umk.pl/llp/06/du.pdf). – Ali Enayat Aug 7 '11 at 16:42 add comment There are a couple well known proofs of incompleteness based on properties of PA degrees. PA degrees have been studied extensively in recursion theory. A PA degree is a Turing degree that can compute a complete extension of PA. Obviously, to prove the incompleteness theorem, it's enough to show that no PA degree can be recursive. (If we had a consistent r.e. theory T extending PA that was not incomplete, then its completion would be recursive -- to decide whether $T \models \varphi$ or $T \models \neg \varphi$, simply look for a proof of $\varphi$ or $\neg \varphi$ from $T$, and because $T$ is assumed to be complete, this process always terminates and is hence recursive). up vote 4 One way to see that there are no recursive PA degrees is to observe that any PA degree can compute a nonstandard model of PA via compactness and a Henkin construction. Now apply down vote Tennenbaum's theorem that there are no recursive nonstandard models of PA. Another way to see that there are no recursive PA degrees is with $\Pi^0_1$ classes. Every PA degree can compute a path through each $\Pi^0_1$ class (this is one direction of the Scott basis theorem). To finish, note that one can construct $\Pi^0_1$ classes that do not contain any recursive elements. For instance, there are $\Pi^0_1$ classes that contain only diagonally nonrecursive elements. 
I seem to recall that in reasonable systems of arithmetic (there's got to be an algorithm for deciding whether a statement is an axiom, and there's got to be a proof-checking algorithm, and a certain amount of arithmetic has to be provable) there are only a finite number of sequences that can be proved to be random in the Kolmogorov–Chaitin sense, although there must be infinitely many sequences that are random in that sense.

Besides the proofs already listed, one essentially different treatment which comes to mind is Gentzen's consistency proof of PA, which established that PA can prove the well-ordering of ordinal notations less than $\epsilon_0$ but could not prove the well-ordering of a notation for $\epsilon_0$, and that, in turn, the well-ordering of $\epsilon_0$ would suffice to prove the consistency of PA. Characterizing the proof-theoretic ordinal of a theory yields incompleteness results by an essentially different (and arguably far deeper / more general) route to that of Godel.

Does Gentzen's proof actually establish incompleteness, though? My understanding is that Gentzen proves (i) that $PA$ proves induction along (notations for) well-orderings of all ordertypes $<\epsilon_0$, and (ii) that $T+Ind(\epsilon_0)$ proves $Con(PA)$, where $T$ is a small subtheory of $PA$. From this, we can conclude that $PA$ does not prove $Ind(\epsilon_0)$, but to do so we need Goedel's second incompleteness theorem. That is, Gentzen proved a new instance of incompleteness, but relies on already knowing some other incompleteness. Is this correct? If so, this isn't what I was asking. – Noah S Oct 5 '13 at 19:57

I think it would depend on whether there was any non-Godelian route of establishing that successive powers of $\omega^{\omega^{...}}$ require increasing levels of quantification in PA. If so it's straightforward that you couldn't get to $\epsilon_0$ without infinitely long formulas. Unfortunately I do not know the answer to this - I'm still struggling to understand Gentzen on a truly intuitive level. (Vide my question here: mathoverflow.net/questions/138875/… ) But it 'feels' like such a proof ought to exist. – Eliezer Yudkowsky Oct 6 '13 at 20:14

I think that proof, if it exists, would be an answer to my question, but Gentzen's given argument by itself isn't. – Noah S Oct 6 '13 at 20:55

One of the Smullyan books. I think this one... Raymond M. Smullyan, The Lady or the Tiger? And Other Logic Puzzles Including a Mathematical Novel That Features Godel's Great Discovery.

After many years I can no longer tell you how like or different it is to Gödel's original proof.
{"url":"http://mathoverflow.net/questions/72062/what-are-some-proofs-of-godels-theorem-which-are-essentially-different-from-t/72108","timestamp":"2014-04-19T22:23:56Z","content_type":null,"content_length":"114156","record_id":"<urn:uuid:ddf58f91-0003-473a-b1f0-afb3eee75852>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Tobit interpretation and post estimation [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: Tobit interpretation and post estimation From "Austin Nichols" <austinnichols@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Tobit interpretation and post estimation Date Wed, 18 Apr 2007 12:29:59 -0400 This discussion is probably better suited to Econlist (does such a thing exist?) than Statalist, but I am compelled to argue again that you do not have a censoring problem per se. Censoring is, and -tobit- is designed to handle, the situation where you do not observe the true value of the dependent value because some other process censors it. In the case of household debt, there is a desired level of debt for each household (may be pos or neg) conditional on each price of debt (interest rate bid/ask) and any rationing or discontinuities (e.g. a home loan in the US over $417K suddenly gets more expensive). The prices and restrictions are determined endogenously in a two-sided market--and are themselves partly a product of choices made by household members, and therefore reflect preferences. And the "censoring" here is really just "facing a very high price of borrowing gobs of additional money," not "below the minimal detectable concentration on my debt-o-meter, so I have no idea what the desired level is, I just know it's less than X." You could model this naively with a -probit- or -logit- to determine whether folks are constrained (face rationing) and then -reg- or -poisson- or -glm- for those who are unconstrained (one thinks of the RAND model of health expenditures), or you could model the whole process with -poisson- or -glm- (which often fit better, in the sense of having higher pseudo-R2), but all of these estimates will not be identifying some causal relationship. You should go back to the drawing board and ask yourself: What are the parameters of interest here? Where is there any exogenous variation that might identify Well, at least in your next study you should. For now, you might just use the -poisson- command, or -zip- where the -inflate()- option contains the variables that predict rationing. See also the -vuong- option (the Vuong test of zip versus poisson--this test statistic has a standard normal distribution with large positive values favoring the zip model and large negative values favoring the poisson model). On 4/18/07, Elena Giarda <elena.giarda@prometeia.it> wrote: We chose a tobit model because we have the problem of debt rationing: some households have zero debt not because they choose so, but because were refused the loan by banks or financial institutions. Also some households might have a lower level of debt than desired because of partial rationing. We excluded non-rationed households (with zero debt) from our sample (is this reasonable?) because this is their desired level of debt. We are able to detect rationed and non-rationed households from a couple of questions in the Bank of Italy's household survey. We also found reference of this approach in the literature, but maybe is not the most appropriate. Anyway...we made a mistake in our first estimates, because we set the censoring level at zero, when instead it should be a positive number. In case we decide to go further with the tobit estimation how do we choose the level of censoring? 
About the "positive debt" problem pointed out by Austin: we are using the variable "debt" (we have now switched to the total amount of debt=consumer debt + mortgages) with the amount of debt declared by the household. We are not considering overall wealth of households, therefore debt is either zero or positive. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2007-04/msg00549.html","timestamp":"2014-04-18T05:41:05Z","content_type":null,"content_length":"9201","record_id":"<urn:uuid:52a52669-91c7-4633-8ade-178cecaa4cff>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Difficulty - Math Genius
Answer: Though this puzzle presents no great difficulty to any one possessing a knowledge of algebra, it has perhaps rather interesting features. Seeing, as one does in the illustration, just one corner of the proposed square, one is scarcely prepared for the fact that the field, in order to comply with the conditions, must contain exactly 501,760 acres, the fence requiring the same number of rails. Here is a little rule that will always apply where the length of the rail is half a pole. Multiply the number of rails in a hurdle by four, and the result is the exact number of miles in the side of a square field containing the same number of acres as there are rails in the complete fence. Thus, with a one-rail fence the field is four miles square; a two-rail fence gives eight miles square; a three-rail fence, twelve miles square; and so on, until we find that a seven-rail fence multiplied by four gives a field of twenty-eight miles square. In the case of our present problem, if the field be made smaller, then the number of rails will exceed the number of acres; while if the field be made larger, the number of rails will be less than the acres of the field.
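The rule is easy to check numerically. The snippet below is an illustrative verification, not part of the original answer; it assumes the traditional conversions (1 mile = 320 poles, 1 square mile = 640 acres) and that an n-rail fence carries n rails per panel, each half a pole long.

```python
POLES_PER_MILE = 320                      # 1 mile = 320 poles (rods)
RAILS_PER_MILE = 2 * POLES_PER_MILE       # each rail is half a pole long
ACRES_PER_SQ_MILE = 640

for n in range(1, 8):                     # an n-rail fence (n rails per panel)
    side = 4 * n                          # miles, by the stated rule
    acres = side * side * ACRES_PER_SQ_MILE
    rails = 4 * side * RAILS_PER_MILE * n # rail-lengths around the perimeter, n rails high
    print(n, side, acres, rails, acres == rails)

# n = 7 gives a square 28 miles on a side: 501,760 acres and 501,760 rails.
```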
{"url":"http://www.pedagonet.com/mathgenius/answer117.html","timestamp":"2014-04-18T21:14:57Z","content_type":null,"content_length":"4952","record_id":"<urn:uuid:4d1f3d2a-5b1c-47c4-a64c-229b2e88d251>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
scipy.linalg.qr_multiply(a, c, mode='right', pivoting=False, conjugate=False, overwrite_a=False, overwrite_c=False)

Calculate the QR decomposition and multiply Q with a matrix.

Calculate the decomposition A = Q R where Q is unitary/orthogonal and R is upper triangular, then multiply Q with a vector or matrix c. New in version 0.11.0.

Parameters:
a : ndarray, shape (M, N)
    Matrix to be decomposed.
c : ndarray, one- or two-dimensional
    The array whose product with Q is calculated, depending on the mode.
mode : {'left', 'right'}, optional
    dot(Q, c) is returned if mode is 'left'; dot(c, Q) is returned if mode is 'right'. The shape of c must be appropriate for the matrix multiplication: if mode is 'left', min(a.shape) == c.shape[0]; if mode is 'right', a.shape[0] == c.shape[1].
pivoting : bool, optional
    Whether or not factorization should include pivoting for rank-revealing QR decomposition; see the documentation of qr.
conjugate : bool, optional
    Whether Q should be complex-conjugated. This might be faster than explicit conjugation.
overwrite_a : bool, optional
    Whether data in a is overwritten (may improve performance).
overwrite_c : bool, optional
    Whether data in c is overwritten (may improve performance). If this is used, c must be big enough to keep the result, i.e. c.shape[0] = a.shape[0] if mode is 'left'.

Returns:
CQ : float or complex ndarray
    The product of Q and c, as defined by mode.
R : float or complex ndarray
    Of shape (K, N), K = min(M, N).
P : ndarray of ints
    Of shape (N,) for pivoting=True. Not returned if pivoting=False.

Raises:
    Raised if the decomposition fails.

Notes:
This is an interface to the LAPACK routines dgeqrf, zgeqrf, dormqr, zunmqr, dgeqp3, and zgeqp3.
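A minimal usage sketch is below, with made-up random data. It compares qr_multiply against an explicit economic-size factorization from scipy.linalg.qr; since both use the same underlying LAPACK factorization, the R factors and the products should agree.

```python
import numpy as np
from scipy.linalg import qr, qr_multiply

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
c = rng.standard_normal((2, 5))          # mode='right' needs c.shape[1] == A.shape[0]

cq, r = qr_multiply(A, c, mode='right')  # cq = c @ Q, without forming Q explicitly

# Compare with an explicit economic-size QR factorization.
q, r2 = qr(A, mode='economic')
print(np.allclose(cq, c @ q), np.allclose(r, r2))   # expected: True True
```

Avoiding the explicit Q is the point of the routine: the Householder reflectors are applied directly (ormqr/unmqr), which is typically cheaper than forming Q and then multiplying.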
{"url":"http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.qr_multiply.html","timestamp":"2014-04-20T02:09:49Z","content_type":null,"content_length":"9954","record_id":"<urn:uuid:39001eac-aa22-4806-99e3-3deccb837f43>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Prairie Grove, IL Algebra Tutor
Find a Prairie Grove, IL Algebra Tutor

...I was a French teacher, but I also did official district homebound instruction (for students that cannot attend regular high school for health related reasons) for 2 years as well. I also tutored officially after school for students on campus, again in various subjects. I have been tutoring for WyzAnt for over 4 years now and very much enjoy it!
16 Subjects: including algebra 1, algebra 2, chemistry, English

...I am very familiar with the logic and structure of SAS. I also have given training in basic SAS programming at my work. I have a masters in statistics and have taken classes in survival analysis and biostatistics.
25 Subjects: including algebra 2, algebra 1, chemistry, physics

My name is Michael. I grew up in Bloomingdale, graduated from Lake Park High School in Roselle, and returned to Itasca in 2001 to be closer to family. I married my high school sweetheart 16 years ago and now have two daughters (5th grade and 2nd grade) and our new Golden Retriever puppy, Peaches.
13 Subjects: including algebra 1, algebra 2, calculus, statistics

...I graduated from the University of California, San Diego with a degree in Biochemistry in 2012. Since graduation, I've worked as a vision therapist for children ages 6-18, combining my love of optometry with one-on-one tutoring. I believe that students are able to learn anything with the right instruction and I know my passion for science and math will contribute greatly to this
25 Subjects: including algebra 1, algebra 2, chemistry, biology

...I also read Study Power by William Luckie, which was similar to the above. I should be able to help students by eliminating cramming, preparing daily, creating a schedule, and reducing test anxiety. We can determine if they should do homework alone or with a buddy or group.
37 Subjects: including algebra 2, precalculus, GED, SAT math
{"url":"http://www.purplemath.com/Prairie_Grove_IL_Algebra_tutors.php","timestamp":"2014-04-20T07:10:16Z","content_type":null,"content_length":"24480","record_id":"<urn:uuid:54006e1e-cd84-4158-a70f-8b53ed49e2f6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Filling the tank

A tank has three taps. The first can fill the tank in 4 hours, the second can fill the tank in 2 hours and the third can empty the tank in 8 hours. How long will it take to fill the tank with all three taps operating at the same time? (You can assume the tank is empty to begin with.)

It will take 1 hour 36 mins to fill the tank. In one hour the fraction, or portion, of the tank filled will be: 1/4 + 1/2 – 1/8 = 5/8. If 5/8 takes 60 mins, then 1/8 takes 12 mins, and 8/8 = 96 minutes.

3 thoughts on "Filling the tank"
1. Forever if the third tap empties the tank.
2. nevermind
3. 1 hour 36 minutes is the correct answer, but let me fully explain it for anyone that is having trouble: TapA can fill in 4 hours (ie: TapA=Hour/4 to fill), TapB can fill in 2 hours (ie: TapB=Hour/2 to fill), TapC empties the tank in 8 hours (ie: TapC=Hour/8 to empty). So our formula is 1=TapA+TapB-TapC, which then becomes 1=h/4+h/2-h/8. Now we need a common denominator to make this easier. 4, 2 and 8 can all use the denominator 8, so our new formula becomes: 1=(2h)/8+(4h)/8-h/8, which simplifies to: 1=5h/8. Multiplying by the denominator makes: 8=5h. Dividing by 5 to get our variable alone makes: 8/5=h. 8/5 of an hour is 1 and 3/5 hours, or 1 hour and 36 minutes.
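A quick way to check the arithmetic is to add the rates as fractions of a tank per hour. The short script below is only a verification of the algebra above, with the tap rates hard-coded from the puzzle statement.

```python
from fractions import Fraction

fill_per_hour = Fraction(1, 4) + Fraction(1, 2) - Fraction(1, 8)  # net tank-fraction per hour
hours = 1 / fill_per_hour                                          # time to fill one whole tank

print(fill_per_hour)         # 5/8
print(hours)                 # 8/5 hours
print(float(hours) * 60)     # 96.0 minutes, i.e. 1 hour 36 minutes
```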
{"url":"http://www.pzzlr.com/filling-the-tank/","timestamp":"2014-04-16T22:02:19Z","content_type":null,"content_length":"23376","record_id":"<urn:uuid:966da2a9-377e-4713-abf6-a5c949e32bcd>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Derivative Help using Quotient & Chain Rule

March 5th 2011, 10:47 AM #1
Can someone please help me out... I've been struggling with this question for a while. I'm decent at doing just the chain rule or just the quotient rule but for some reason I can't do them together. The question is: Find the derivative of: 1/ square root of 2x-3. One part that really confuses me is when I do the quotient rule: f'(x) g(x) - g'(x) f(x) = 0 - 1x-3^-1/2 (1) / g(x)^2 = ??? I can't seem to figure out g(x) squared... g(x) is 2x-3^-1/2, and if we do that squared would it be just 2x-3?? (1/2 x 2 = 1)? If that's the case I have something like: -x-3^-1/2 / (2x-3) squared = square root (-x-3) / (2x-3) squared... as you can see it's a jumbled mess.. If someone can provide me an answer step by step that'll really help. I tried it but can't seem to get it. Thanks in advance!

I would recommend re-writing the question as: $\left(2x - 3\right)^{-\frac{1}{2}}$

Then, the derivative is found by applying the Power Rule together with the Chain Rule:

$\left(-\tfrac{1}{2}\right)\left(2x - 3\right)^{-\frac{1}{2} - 1} \times \dfrac{d}{dx}\left[2x-3\right] = \left(-\tfrac{1}{2}\right)\left(2x - 3\right)^{-\frac{3}{2}} \times 2 = -\left(2x - 3\right)^{-\frac{3}{2}}$

Of course, you can re-write the final answer as: $-\dfrac{1}{(2x-3)^{\frac{3}{2}}}$

Using the Quotient Rule, let $f(x) = 1$ and $g(x) = \sqrt{2x-3}$. Then, $f'(x) = 0$ and $g'(x) = \dfrac{1}{\sqrt{2x-3}}$. The derivative is given by:

$\dfrac{f'(x) \times g(x) - f(x) \times g'(x)}{[g(x)]^2}$

Substitute the values of f(x), f'(x), g(x), and g'(x) into the above equation.

$\dfrac{0 \times \sqrt{2x - 3} - 1 \times \dfrac{1}{\sqrt{2x-3}}}{(\sqrt{2x-3})^2} = \dfrac{-\dfrac{1}{\sqrt{2x-3}}}{2x - 3} = -\dfrac{1}{(2x-3)^{\frac{3}{2}}}$
You do the same with the one, it's just understood anything multiplied by 1 is itself. This concept had me for a while when I was learning chain rule because most first year calculus teachers assume their students operate at a much higher level of mathematical thinking than they actually do. This is good if the student wants to be mathematically strong, like myself, but bad for students that only need first year calculus - like business/econ majors. But I digress.... Here's what the work actually looks like. is the same as Which is the same as $5 * \frac{1}{\sqrt{2x-3}}$ Even though this is a relatively simple topic, it's crucial to understand in order to know when to use the chain rule to simplify vs. using the quotient rule. Whenever a constant is in the numerator it's often easier to use the chain rule than the quotient rule. Does this make sense? I will show more steps starting from here: $\dfrac{-\dfrac{1}{\sqrt{2x-3}}}{2x - 3}$ Division by a number is the same as multiplying by the reciprocal of that number. In this case, division by $2x - 3$ is the same as multiplying by the reciprocal of $2x - 3$, which is $\dfrac{1} $\dfrac{-\dfrac{1}{\sqrt{2x-3}}}{2x - 3} = -\dfrac{1}{\sqrt{2x-3}} \times \dfrac{1}{2x - 3}$ Multiply the numerators and multiply the denominators to combine the fractions. $-\dfrac{1}{\sqrt{2x-3}} \times \dfrac{1}{2x - 3} = -\dfrac{1 \times 1}{\sqrt{2x-3} \times (2x-3)} = -\dfrac{1}{(2x-3)\sqrt{2x-3}}$ Now, re-write $\sqrt{2x-3}$ as $(2x-3)^{\frac{1}{2}}$. $-\dfrac{1}{(2x-3)\sqrt{2x-3}} = -\dfrac{1}{(2x-3)(2x-3)^{\frac{1}{2}}}$ Finally, $(2x-3)$ and $(2x-3)^\frac{1}{2}$ have a common base, so add exponents to find their product. (The exponent on $(2x-3)$ is implied to be 1.) $-\dfrac{1}{(2x-3)(2x-3)^{\frac{1}{2}}} = -\dfrac{1}{(2x-3)^{1 + \frac{1}{2}}} = -\dfrac{1}{(2x-3)^{\frac{3}{2}}}$ Can someone please help me out... I've been struggling with this question for a while. I'm decent at doing just the chain rule or just the quotient rule but for some reason I can't do them together. The question is: Find the derivative of: 1/ square root of 2x-3 One part that really confuses me is when i do the quotient rule: f' (x) g (x) - g' (x) f (x) = 0 - 1x-3^-1/2 (1) / g(x)^2 = ??? I can't seem to figure out g(x) squared... G (x) is 2x-3^-1/2, and if we do that squared would it be just 2x-3 ?? (1/2 x 2 = 1) ? if that's the case I have something like: -x-3^-1/2 / (2x-3) squared = square root (-x-3) / (2x-3) squared... as you can see it's a jumbled mess.. If someone can provide me an answer step by step that'll really help. I tried it but can't seem to get it. Thanks in advance! $\displaystyle\ h(x)=\frac{f(x)}{g(x)}$ $\displaystyle\ h'(x)=\frac{g(x)f'(x)-f(x)g'(x)}{[g(x)]^2}$ $f(x)=1\Rightarrow\ f'(x)=0$ $\displaystyle\ g'(x)=0.5u^{-0.5}u'(x)=\frac{2}{2\sqrt{u}}=\frac{1}{\sqrt{2x-3}}$ Just apply the Quotient Rule and use the Chain Rule independently for the part that requires it. $\displaystyle\ h'(x)=\frac{0-\frac{1}{\sqrt{2x-3}}}{2x-3}=-\sqrt{2x-3}^{-3}$ Of course, you could simply use and bypass the Quotient Rule. March 5th 2011, 11:01 AM #2 Dec 2009 March 5th 2011, 11:17 AM #3 Oct 2009 March 5th 2011, 12:48 PM #4 Jan 2011 March 5th 2011, 03:38 PM #5 Dec 2009 March 5th 2011, 04:23 PM #6 MHF Contributor Dec 2009
{"url":"http://mathhelpforum.com/calculus/173529-derivative-help-using-quotient-chain-rule.html","timestamp":"2014-04-17T05:31:29Z","content_type":null,"content_length":"56041","record_id":"<urn:uuid:56a97b49-a5b0-4a9f-b186-723ce077fcc3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Step-by-Step Differential Equation Solutions in Wolfram|Alpha January 30, 2012 Posted by Wolfram|Alpha has become well-known for its ability to perform step-by-step math in a variety of areas. Today we’re pleased to introduce a new member to this family: step-by-step differential equations. Differential equations are fundamental to many fields, with applications such as describing spring-mass systems and circuits and modeling control systems. From basic separable equations to solving with Laplace transforms, Wolfram|Alpha is a great way to guide yourself through a tough differential equation problem. Let’s take a look at some examples. Wolfram|Alpha can show the steps to solve simple differential equations as well as slightly more complicated ones like this one: Wolfram|Alpha can help out in many different cases when it comes to differential equations. Get step-by-step directions on solving exact equations or get help on solving higher-order equations. Even differential equations that are solved with initial conditions are easy to compute. What about equations that can be solved by Laplace transforms? Not a problem for Wolfram|Alpha: This step-by-step program has the ability to solve many types of first-order equations such as separable, linear, Bernoulli, exact, and homogeneous. In addition, it solves higher-order equations with methods like undetermined coefficients, variation of parameters, the method of Laplace transforms, and many more. So the next time you find yourself stuck solving a differential equation or wanting to check your work, consult Wolfram|Alpha! 11 Comments Is this option also available on Mathematica or it’s just for the WolframAlpha? Posted by José Carlos Méndez January 31, 2012 at 4:15 am Reply @ Jose - This option is available in Mathematica. Once you create a Wolfram|Alpha input, put in your query in Mathematica the same way you would in Wolfram|Alpha to get the same results. Follow this link to learn more: http://www.wolfram.com/mathematica/new-in-8/combine-knowledge-and-computation/ Thank you! Posted by The Wolfram|Alpha Team January 31, 2012 at 9:48 am Reply Muy util. En aplicaciones como diseño de filtros en electronica y en diseño de sistemas de control. Muchas gracias. Posted by JUAN January 31, 2012 at 10:58 am Reply Bearing in mind that the target is to add everything to Wolfram Alpha with a view to getting as near as practicable…I suggest that the emphasis be put on Wolfram Alpha generating the Step by Step information from the actual steps it takes to arricve at its solution. As it is the work involved will prove a significant limiting factor. Posted by Brian Gilbert January 31, 2012 at 6:29 pm Reply I have a problem with the input of the ( ‘ ) to write the Differential equation in the android app. Any solution? Posted by Oriol February 1, 2012 at 3:57 pm Reply Well, one workaround is to use the generic Android keyboard, which also has the advantage of Swype support. You can change this under Menu -> More -> Preferences. Posted by Bhuvanesh February 3, 2012 at 8:56 pm Reply search the android market for “Hacker’s Keyboard” – it has just about every symbol you can think of. Posted by rymo February 7, 2012 at 3:40 pm Reply My teacher was very impressed with this when I showed it to him, little did I know it was such a new feature. I know that this request is propably way under your league, but what about showing steps when solving two equations with two unknowns, or three EQ with three unknowns etc.. ? Thanks for a great piece of tech.! 
Posted by Peter Helstrup Jensen February 5, 2012 at 8:23 am Reply
What's the procedure for a partial differential equation?
Posted by Owais May 13, 2012 at 7:45 am Reply
Is it possible to make Wolfram|Alpha solve a problem using a specific method, like the D-operator method? If yes, please explain how.
Posted by Cna SamPaD August 14, 2012 at 8:34 am Reply
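For readers who want to see what a "step-by-step" solution actually looks like on paper, here is a simple separable example worked by hand (the equation is our own illustration, not one taken from the post):

Solve y' = x y with y(0) = 2.
1. Separate the variables (for y ≠ 0): dy/y = x dx.
2. Integrate both sides: ln|y| = x^2/2 + C.
3. Exponentiate: y = A e^{x^2/2}, where A = ±e^C (and A = 0 recovers the constant solution y ≡ 0).
4. Apply the initial condition y(0) = 2: A = 2, so y(x) = 2 e^{x^2/2}.

This is the kind of derivation, one rule at a time, that the step-by-step feature described above walks through.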
{"url":"http://blog.wolframalpha.com/2012/01/30/step-by-step-differential-equation-solutions-in-wolframalpha/","timestamp":"2014-04-19T05:10:53Z","content_type":null,"content_length":"52491","record_id":"<urn:uuid:736761f9-1509-44dc-a63c-a340a0159557>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
January 19, 2007 is the date for the world scientific community to celebrate the 95th anniversary of the birth of Leonid Vital'evich Kantorovich. His creation of a mathematical theory of the best use of resources won a Nobel prize in economics and brought glory to the Russian science. Kantorovich invented linear programming in 1939. At the same time he made his major contribution to mathematics, the theory of Dedekind complete vector lattices also called K-spaces. Kantorovich discovered “a kind of generalized numbers” and suggested new models of the real axis which is the main tool of the mathematics of variable quantities. Practically none has survived of those who knew Kantorovich in the prime years of his creativity. The blurred photos of the late 1930s reveal a serene gaze of a maverick genius. The ideas and methods of linear programming gave rise to deep interdisciplinary research, trespassed the frontiers of economics, and won appreciation in the various spheres of human activities. is difficult to distinguish another scholar It is difficult to distinguish another scholar in the history of the twentieth century who contributed as much as him to the fusion of mathematics and economics, the sciences with the antipodal standards of scientific thought. Israel Gelfand pointed out that he can list only John von Neumann and Andrei Kolmogorov alongside Leonid Kantorovich among those few of his contemporaries who synthesized the mathematical and humanitarian cultures. The gift of Kantorovich falls beyond any doubt. However, it is far from enough to be gifted in science. The human being is primary; the scholar is secondary. The papers of Kantorovich hide his notes on the self-teaching technique of the art of dancing... The phenotype of Kantorovich as well as his natural character unveiled a few of the traits harmful to successful work in science and conspicuously incomparable with the craft of “implanting” new methods and skills into industry and technology. Kantorovich had nothing in common with any of his genuinely or ostensibly successful compatriots but remained a challenging “dark duckling” in the national scientific establishment. Alfred Marshall, the founder of the Cambridge school of neoclassicals and the author of a multi-volume treatise on political economy, vehemently collated the mathematical and economic trends of thought. He wrote that “there is no room in economics for long trains of deductive reasoning” and claimed that the aim of analysis and deduction in economics “is not to forge a few long chains of reasoning, but to forge rightly many short chains and single connecting links....” Marshall’s metaphor of a plentitude of short “combs” has nothing in common with the upside-down pyramid of the cumulative hierarchy of the von Neumann universe, the residence of the modern Zermelo–Fraenkel set theory. It is from the times of Ancient Greece that the beauty and power of mathematics rest on the axiomatic method which presumes the derivation of new facts by however lengthy chains of formal implications. The conspicuous discrepancy between economists and mathematicians in mentality has hindered their mutual understanding and cooperation. Many partitions, invisible but ubiquitous, were erected in ratiocination, isolating the economic community from its mathematical counterpart and vice versa. This status quo with deep roots in history was always a challenge to Kantorovich, contradicting his views of the inevitable integration of mathematics and economics. 
The contradistinction between the brilliant achievements and the instances of poor adaptation to the practical seamy side of life is listed among the dramatic enigmas by Kantorovich. His life became a fabulous and puzzling humanitarian phenomenon. Kantorovich’s introvertness, obvious in personal communications, was inexplicably accompanied by outright public extravertness. The absence of any orator’s abilities neighbored his deep logic and special mastery in polemics. His innate freedom and self-sufficiency coexisted with the purposeful and indefatigable endurance that reached the power of a “wolf grip” in the case of necessity. The freedom of Kantorovich can hardly bewilder anyone as stemming from his essence, the gift of mathematics. His kindness and mildness were inborn. The tenacity and tremendous force of penetration were the acquired traits that he selected and cultivated conscientiously for the sake of rationality. Kantorovich might seem a looser in regard to the acknowledgement of his great idea of the fusion of mathematics and economics. However, this opinion is definitely wrong. Despite the disdaining and neglecting of Kantorovich and his ideas, their triumph is incontestable. The unrefutable evidence transpires in the drastic reorganization of the entire system of education of economists as well as in the already indispensable mathematization and informatization of each instance of economy in its every functional or managerial aspect. The stance of mathematics and the dream of optimality incarnate in everyday routine gadgets of the working economist. Calculation will supersede prophecy. Economics as a boon companion of mathematics will avoid merging into any esoteric part of the humanities, or politics, or belles-lettres. The new generations of mathematicians will treat the puzzling problems of economics as an inexhaustible source of inspiration and an attractive arena for applying and refining their formal methods. The life of Kantorovich is a turnpike of service to his homeland irrespective of the prevalent ideological obstinacies. This lesson is of utmost import these days. Attempts at slandering and silencing the life and legacy of Kantorovich are doomed to vanish. Pygmies can never hide a giant... The genius of rationality in science, Kantorovich was ingeniously rational in choosing his world-line and path in science. He bequeathed us an exemplar of the best use of personal resources in the presence of restrictive internal and external constraints. November 29, 2006 ┃English Page │Russian Page ┃ © Kutateladze S. S. 2006
{"url":"http://www.math.nsc.ru/LBRT/g2/english/ssk/lvk_phenomenon_e.html","timestamp":"2014-04-21T00:54:53Z","content_type":null,"content_length":"9006","record_id":"<urn:uuid:c6465b75-2432-4fd4-a36b-7de9e95d1c36>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
New Almaden Trigonometry Tutor Find a New Almaden Trigonometry Tutor ...I have been tutoring for the past eleven years in Physics, Math, and Chemistry. I started tutoring when I was an undergrad in Electrical Engineering at UC Berkeley. At first, I started helping my friends with their classes in math, physics, and chemistry. 11 Subjects: including trigonometry, chemistry, calculus, physics ...I like to talk through examples and discuss the problems, to ensure there is a true understanding of the concepts. I'm currently in school to gain my credentialing in teaching in order to teach Mathematics for grades 6-12, and have been tutoring for over 10 years. I do have a passion for Math because I have my Bachelors of Science in Mathematics and Masters of Science in Actuarial 9 Subjects: including trigonometry, geometry, algebra 1, algebra 2 ...What makes me good at tutoring? Knowing math, knowing my students, being good at drawing people out, and being good at adjusting how I teach so that it suits the unique individual I am working with. To learn, students must feel comfortable, interested, and challenged. 22 Subjects: including trigonometry, English, reading, geometry I graduated from UCLA with a math degree and Pepperdine with an MBA degree. I have taught business psychology in a European university. I tutor middle school and high school math students. 11 Subjects: including trigonometry, calculus, statistics, geometry ...I have ten years of practical, hands-on computer programming experience through my work as a scientist. Python is my primary programming language. I have also programmed in Pascal and C. 17 Subjects: including trigonometry, chemistry, writing, geometry
{"url":"http://www.purplemath.com/New_Almaden_trigonometry_tutors.php","timestamp":"2014-04-19T07:25:35Z","content_type":null,"content_length":"24016","record_id":"<urn:uuid:9af9ef2c-8201-402a-9d1e-4cf07e32d432>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
metric spaces

October 28th 2010, 02:06 PM #1 (Junior Member, Sep 2009)
Let (M,d) be a metric space.
1. Choose x, an element of M. Show that a set U, a subset of M, is a neighbourhood of x if and only if x is an element of the interior of U.
2. Prove that the intersection U ∩ V of any two neighbourhoods U and V of a point x in M is also a neighbourhood of x.
Last edited by Godisgood; November 1st 2010 at 06:13 AM.

October 28th 2010, 09:47 PM #2 (quoting the question)
Thanks so much. I am really struggling with this since it is the first time I am taking a course of this nature, so please go in steps. Thanks for the help.

October 30th 2010, 10:08 AM #3
What is a neighborhood? Do you take the Bourbaki definition, "a set containing an open set containing the point", or just "an open set containing the point"?

October 30th 2010, 10:20 AM #4 (Junior Member, Sep 2009)
A set containing an open set containing the point.

October 30th 2010, 10:55 AM #5
O.K. then: if $O$ is an open set and $a\in O\subseteq V$, then by definition $a\in\mathscr{I}(V)$, the interior of $V$. On the other hand, if $b\in\mathscr{I}(V)$, then by the definition of the interior there exists an open set $Q$ such that $b\in Q\subseteq V$, so $V$ is a neighborhood of $b$.
If each of $V$ and $U$ is a neighborhood of $c$, there are open sets with $c \in O \subseteq V$ and $c \in Q \subseteq U$. But $O\cap Q$ is open and $c\in O\cap Q \subseteq V\cap U$, so $U\cap V$ is a neighborhood of $c$.
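A concrete example may help fix the definitions (this example is ours, for intuition only): take $M=\mathbb{R}$ with the usual metric and $U=[0,2)$, so $\mathscr{I}(U)=(0,2)$. The set $U$ is a neighbourhood of $x=1$, since the open interval $(\tfrac12,\tfrac32)$ satisfies $1\in(\tfrac12,\tfrac32)\subseteq U$, and indeed $1\in\mathscr{I}(U)$. But $U$ is not a neighbourhood of $0$: every open set containing $0$ contains some negative numbers, so no open set around $0$ fits inside $U$, and correspondingly $0\notin\mathscr{I}(U)$.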
{"url":"http://mathhelpforum.com/differential-geometry/161342-metric-spaces.html","timestamp":"2014-04-18T04:22:47Z","content_type":null,"content_length":"46603","record_id":"<urn:uuid:bfffd0e1-16f3-4890-90af-31285979b0ef>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
The Preparation for the GRE Mathematics Subject Test | The Classroom | Synonym The GRE Mathematics Subject Test is designed to test your understanding of mathematical concepts and ability to solve problems. The level of math of this GRE subject test combines both undergraduate and high school level math. The main difficulty arises in how the questions are presented. Familiarizing yourself with what you will see on testing day can help improve your score. GRE Mathematics Subject Test Overview On the GRE Mathematics Subject Test, you are primarily tested on calculus, elementary algebra, abstract algebra and number theory. You can also expect to see a few questions dealing with arithmetic, algebra, geometry and data analysis. There are 66 multiple-choice questions on the test and you have two hours and 50 minutes to complete it. No notes, books or calculators are permitted. Calculus Preparation Fifty percent of the GRE Mathematics Subject Test is calculus-related. When studying, start relearning your calculus in the same sequence that is was taught in your first calculus course. If you have access to your old calculus book, go back and review the material. Begin with differential and integral calculus with one or more variables and then expand into coordinate geometry, trigonometry and differential equations. When you are studying these topics, familiarize yourself with definitions and theorems. This can help you understand the techniques for solving most of these problems. Algebra Preparation Twenty-five percent of the GRE Mathematics Subject Test involves algebra. Most of the algebra tested is what you learned in high school. If you have taken math in college, it is most likely that you have continued to use this type of mathematics. These questions cover topics such as linear algebra, abstract algebra and number theory. If you have forgotten any of these definitions or theories, start by freshening up on their meaning and then practice answering questions. For any questions that you answer incorrectly, go back and rework the question. Understanding why and where you made an error can help you avoid it on future problems. Additional Topics The GRE Mathematics Subject Test is designed to measure the mathematical skills and knowledge you have gained over several years. The remaining 25 percent of the GRE Mathematics Subject Test is a mix of other mathematical topics you have been exposed to, such as set theory, probability and statistics, combinations, real analysis, topology and complex variables. These questions are not testing your ability to recall information, but to assess your understanding of fundamental concepts and the ability to apply those concepts in various situations. Reworking practice problems on these topics is a solid way to prepare for the test. Style Your World With Color Photo Credits • BananaStock/BananaStock/Getty Images
{"url":"http://classroom.synonym.com/preparation-gre-mathematics-subject-test-1073.html","timestamp":"2014-04-19T14:29:37Z","content_type":null,"content_length":"32895","record_id":"<urn:uuid:fe81fef1-aa8f-4a58-97fe-c668658fcaf1>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
The Sound of One Physicist Wailing One of the delights of teaching elementary physics is discovering some basic thing that you thought you understood, but actually didn’t. Usually, this occurs late at night, while preparing your lecture for the next morning. And you wonder whether you’ll be able to keep a straight face, the next morning, as you say words you’re no longer quite so sure are true. I’ve been teaching about waves in a non-technical course. One of the points I like to emphasize is that the energy density, or the intensity, of the wave is quadratic in the amplitude. There are lots of examples of that, with which you are doubtless familiar: electromagnetic waves, transverse waves on a stretched string, … But we’re studying sound, now. So I thought I would reassure myself that the same is true of sound waves … Where to start — not for the kids, mind you, but for my own satisfaction? Clearly, we should start with the Navier-Stokes equations. Neglecting viscosity, these equations have five locally-conserved quantities: mass, energy, and (three components of) momentum. The equations we are dealing with are … (1)$\rho\left(\frac{\partial \vec{v}}{\partial t}+ (\vec{v}\cdot\vec{abla})\vec{v}\right) = - \vec{abla} p$ Conservation of mass: (2)$0 =\frac{\partial\rho}{\partial t} + \vec{abla} \cdot(\rho\vec{v})$ and the equation of state: (3)$p = c \rho^\gamma$ It’s easy to write down the aforementioned conservation equations^1. Indeed, we already wrote one of them down, (2). The momentum density in the fluid is (4)$\vec{\mathcal{P}} = \rho \vec{v}$ and the stress tensor is (5)$\sigma_{i j} = p \delta_{i j} + \rho v_i v_j$ By virtue of (1),(2),(3), these satisfy $0 = \frac{\partial \vec{\mathcal{P}} }{\partial t} + \vec{abla}\cdot \overset{\leftrightarrow}{\sigma}$ The energy density in the fluid is (6)$\mathcal{E} = \frac{1}{2} \rho v^2 +\frac{1}{\gamma-1} p$ and the energy flux, or intensity (7)$\vec{\mathcal{I}} = \left(\frac{1}{2}\rho v^2 + \frac{\gamma}{\gamma-1} p \right) \vec{v}$ Again, by virtue of (1),(2),(3), these satisfy (8)$0 = \frac{\partial \mathcal{E}}{\partial t} + \vec{abla}\cdot \vec{\mathcal{I}}$ Now, to write down a sound wave. Let’s work in rest frame of the fluid, and expand $\begin{split} p(\vec{x},t) &= p_0 + p_1(\vec{x},t) + \dots \\ \rho(\vec{x},t) &= \rho_0 + \rho_1(\vec{x},t) + \dots \\ \vec{v}(\vec{x},t) &= \vec{v}_1(\vec{x},t) + \dots \end{split}$ The general solution of the linearized equations is a (superposition of) plane wave(s) (9)$\begin{split} p_1 &= f(\vec{x}-\vec{u} t)\\ \rho_1 &= \frac{1}{v_s^2}f(\vec{x}-\vec{u} t)\\ \vec{v}_1 &= \frac{\vec{u}}{\rho_0 v_s^2} f(\vec{x}-\vec{u} t) \end{split}$ where $v_s = \sqrt{\gamma p_o/\rho_0}$ is the sound speed, and $u^2= v_s^2$. So far, so good. But, here comes the puzzle: if we plug (9) into (6) and (7), we find that the energy density and the flux have terms linear in $f$! Surely, they should be quadratic^2. What the heck!?! Turns out that the resolution is remarkably simple. We can modify the energy conservation equation (8), by adding a multiple of the mass conservation equation (2). If we choose astutely, we can kill the unwanted linear terms. 
Define (10)$\begin{split} \mathcal{E}' &= \frac{1}{2} \rho v^2 +\frac{1}{\gamma-1} (p - v_s^2 \rho) + p_0 \\ \vec{\mathcal{I}}' &= \left(\frac{1}{2}\rho v^2 + \frac{1}{\gamma-1}(\gamma p- v_s^2\rho) \right) \vec{v} \end{split}$ These still satisfy the same conservation equation (8), as before, but now, when we plug in (9), we find^3 $\begin{split} \mathcal{E}' &= \frac{1}{\gamma p_0} f^2 \\ \vec{\mathcal{I}}' &= \frac{\vec {u}}{\gamma p_0} f^2 \end{split}$ Whew! That’s a relief. Not that we’re going to ever discuss it in some Freshman physics class, but we should have some word to say about the interpretation of what we’ve done. The first term in (6) is the kinetic energy density in the fluid; the second term has the interpretation of a potential energy density. What we’ve done, in (10), is redefine the zero of the potential energy density in some peculiar, position-dependent, way. (We also added a constant, to make homogeneous solution have zero potential.) Is there a more insightful explanation of the modification we’ve made? Also, you might try do something similar for the momentum density (4). But, I think, you will search in vain for a suitable modification. I think the momentum density really is linear in the ^1 Many fluid dynamicists like to write conservation laws, using the convective derivative, $\frac{D}{D t} = \frac{\partial}{\partial t} + \vec{v}\cdot \vec{abla}$ instead of $\frac{\partial}{\ partial t}$. In this context, that would be a wacky thing to do. If I’m an observer at some fixed location, $\vec{x}$, I want to know how much stuff is flowing past my location. Perhaps if I were interested in bulk fluid motion, I might be interested in the observations of observers co-moving with the fluid. But, for present purposes, fixed observers are more natural. ^2 There’s also a constant term in $\mathcal{E}$, but that’s harmless. We can just subtract it off, and do so in writing down (10). ^3 There’s actually a little bit of trickiness involved. Naïvely, it appears that we need to evaluate the $\frac{1}{\gamma-1}(p-v_s^2\rho)$ term in $\mathcal{E}'$ to second order in the fluctuations. It would be rather ugly if we had to go back and solve Navier-Stokes to second order. Fortunately, expanding (3) to second order, we find $\begin{split} p_1 &= v_s^2 \rho_1\\ p_2 &= v_s^2 \left( \ frac{\gamma-1}{2} \frac{\rho_1^2}{\rho_0} + \rho_2\right) \end{split}$ which is just what is needed to evaluate (10). Posted by distler at September 16, 2009 1:44 AM Re: The Sound of One Physicist Wailing Fun stuff. It might also be fun to convert these equations from vector calculus to differential forms and see if the conservation laws can be expressed in terms of adjoint nilpotent operators $d^2 = 0$ and $\delta^2 = 0$ in analogy with Maxwell’s equations. I used to the think we could start with a 0-form connection $A$ and compute the curvature $F = dA$, then follow the usual prescription from Maxwell, i.e. $\mathcal{L} = \frac{1}{2} \int_{\mathcal{M}} (F,F) vol$ leading to $dF = 0\quad\text{and}\quad \delta F = 0$ except $F$ is a 1-form. The constitutive equation (3) as well as the non-relativistic metric can be encoded in the Hodge star defining $\delta$. Then we dig up some of Urs’ old notes on Hamiltonian evolution and Killing vector fields, etc. There is probably some neat cohomology buried here too. 
For example, Equation (8) can be written as $\delta \alpha = 0$ for some 1-form $\alpha$ and in going to Equation (10), you’re writing $\alpha' = \alpha + \delta \beta$ for some 2-form $\beta$ so that obviously $\delta\alpha' = \delta\alpha.$ Another way to maybe look at it is that you’re performing a gauge transformation, but here (unlike Maxwell) the “amplitude” is actually the gauge field. Sorry for thinking out loud. It’s been ages since I’ve thought about this stuff and I wasn’t exactly an expert back then either :) Anyway, thanks for writing this. It was a welcome distraction. Posted by: Eric Forgy on September 16, 2009 2:48 AM | Permalink | Reply to this Re: The Sound of One Physicist Wailing It might also be fun to convert these equations from vector calculus to differential forms and see if the conservation laws can be expressed in terms of adjoint nilpotent operators $d^2=0$ and $\ delta^2=0$ in analogy with Maxwell’s equations. I don’t see any such simplification. Incompressible fluid flow, in $n$ spatial dimensions, is simplified by writing the $(n-1)$-form, $\mathbf{v}$ as $\mathbf{v}= d\phi$. That’s particularly useful for $n=2$, where the vorticity, $\phi$, is a scalar. But we’re not doing incompressible fluid flow (a limit in which the sound speed, $v_s\to\infty$) … and in going to Equation (10), you’re writing $\alpha&#8242;=\alpha+\delta\beta$ for some 2-form $\beta$. No, sorry, I don’t see that. If that were true, the conservation equation for the 1-form “$\delta\beta$” would be an identity. But the mass conservation equation, (2), is not an identity. Put a different way, $\mathcal{E}$ and $\mathcal{E}'$ are not gauge-equivalent. They have different physical meanings. It just turns out that $\mathcal{E}'$ is the relevant notion of energy density for the sound-wave. Posted by: Jacques Distler on September 16, 2009 8:37 AM | Permalink | PGP Sig | Reply to this Re: The Sound of One Physicist Wailing I don’t see any such simplification. Thanks Jacques. I played around with it a little bit last night and things definitely are not as simple as I was hoping they would be. Incidentally, I did find a paper that reformulates Navier-Stokes via vector-valued forms. If one were inclined to pursue it (which is beyond me and probably misguided anyway), I wonder if this might help with a “gauge theoretic” reinterpretation. Instead of a $u(1)$-valued 0-form “connection” (and corresponding 1-form “curvature”) you might have higher dimensional Lie algebra (?) No, sorry, I don’t see that. Put a different way, ℰ and ℰ′ are not gauge-equivalent. They have different physical meanings. It just turns out that ℰ′ is the relevant notion of energy density for the sound-wave. Oops! Sorry about that. It was wishful thinking. That makes your note even more interesting than I already thought it was! How was the lecture? Posted by: Eric Forgy on September 16, 2009 11:05 AM | Permalink | Reply to this Re: The Sound of One Physicist Wailing I don’t think we ought to force models into the mold of our favorite mathematical frameworks just because we can. We also have to consider the physical properties of the model. For the Navier-Stokes equation, this means coming up with a mathematical framework which makes the Galilean symmetry of the system explicit. 
Posted by: Grapes on October 18, 2009 9:41 PM | Permalink | Reply to this Re: The Sound of One Physicist Wailing The reason this works is that in non-rel hydrodynamics mass is conserved, so we can always add a term to the energy density which is proportional to the mass density. In terms of a more physical picture I would argue the following: In a density wave there is a non-zero “background” energy density, and part of the local energy density of the wave is a periodic change in this background energy. This change is linear in the amplitude, but it integrates to zero over a period of the wave, and we therefore do not include it in the total energy of the wave. The effect is nevertheless real – I could try to build a little powerplant that extracts energy from this term (or from the analogous potential energy term in a surface wave). Posted by: Thomas on September 16, 2009 11:25 AM | Permalink | Reply to this Re: The Sound of One Physicist Wailing This change is linear in the amplitude, but it integrates to zero over a period of the wave, and we therefore do not include it in the total energy of the wave. The effect is nevertheless real… It’s most definitely real. And it would be a mistake to focus only on time-averaged quantities (for which that term integrates to zero). We are, after all, interested in sending signals with our sound waves. So we really do care about the time-dependence. … or from the analogous potential energy term in a surface wave I should probably make sure I understand the surface wave case, too … Posted by: Jacques Distler on September 16, 2009 12:01 PM | Permalink | PGP Sig | Reply to this Surface waves I could try to build a little powerplant that extracts energy from this term (or from the analogous potential energy term in a surface wave). The analogous potential energy term for the surface wave is 1. everwhere positive 2. quadratic in the amplitude So it doesn’t pose the same puzzles that this sinusoidal (and linear in the amplitude) term, for the sound wave, poses. I’m not sure you can build a device to extract energy from the latter, though you certainly can extract energy from the former. Posted by: Jacques Distler on September 17, 2009 12:01 PM | Permalink | PGP Sig | Reply to this Re: Surface waves Are you sure? The motion of the fluid in a gravity wave can be described as individual fluid particles performing a circular motion with constant angular velocity (and an amplitude that decreases exponentially as you go away from the surface). This would seem to give a gravitational potential energy which is proportional to the density of the fluid, linear in the amplitude and alternating in Posted by: Thomas on September 17, 2009 1:08 PM | Permalink | Reply to this Re: Surface waves I should have been more precise: It would seem to give a linear and a quadratic term, where the linear term averages to zero, and the quadratic term is one half of the total energy of the wave (the other half residing in kinetic energy). Posted by: Thomas on September 17, 2009 1:12 PM | Permalink | Reply to this Re: Surface waves Yes, I’m sure. The potential energy density of the surface wave is $\begin{split} \mathcal{U}(x,y,t) &= \int_0^{h(x,y,t)} \rho g z d z\\ &= \tfrac{1}{2} \rho g {h(x,y,t)}^2 \end{split}$ This is 1. quadratic in the fluctuation 2. non-negative, regardless of the sign of $h$. Posted by: Jacques Distler on September 17, 2009 1:54 PM | Permalink | PGP Sig | Reply to this Re: Surface waves It’s a small amplitude wave, isn’t it? 
So h=h_0+a*Cos(k*x-w*t), where a is the amplitude of the wave. Posted by: Thomas on September 17, 2009 2:28 PM | Permalink | Reply to this Re: Surface waves Obviously, we want to normalize so that the potential energy density vanishes for vanishing displacement from equilibrium. That corresponds to taking $h_0=0$ in your notation. Posted by: Jacques Distler on September 17, 2009 2:49 PM | Permalink | PGP Sig | Reply to this Re: Surface waves You do more than that, don’t you? You change the sign of the gravitational force at h_0, so that a particle acquires positive gravitational potential energy no matter if it is raised or lowered. Posted by: Thomas on September 17, 2009 3:11 PM | Permalink | Reply to this Re: Surface waves Removing a particle from below the equilibrium height makes a positive contribution to $\mathcal{U}$, just as surely as adding a particle above the equilibrium height. Think about it … Posted by: Jacques Distler on September 17, 2009 3:16 PM | Permalink | PGP Sig | Reply to this Re: Surface waves My definition of the potential energy of a fluid column and yours differ by a constant and a term proportional to the mass in the column. Since total mass is conserved your definition is indeed as good as mine. I would argue that my expression is the one that follows from the textbook definition of the energy density of a fluid. In that sense, the situation with density waves and surface waves is indeed exactly the same: If the text book definition is adopted we find a term in the local energy density which is linear in the amplitude, but it disappears if one computes the total energy of the wave, or if one redefines the energy density by a suitable multiple of the mass density. The exercise does shed some light on the meaning of the extra term: The fluid motion involves local mass transport, and the extra term in the energy density accounts (to leading order) for the energy required to take a mass element and put it somewhere else. Posted by: Thomas on September 17, 2009 9:04 PM | Permalink | Reply to this Re: Surface waves If I wanted the gravitational potential energy corresponding to the total mass in the column (of equilibrium depth, $D$), I would compute $\begin{split} \mathcal{U}'(x,y,t) &= \int_{-D}^{h(x,y,t)} \ rho g z d z \\ &= \tfrac{1}{2} \rho g (h^2 - D^2) \end{split}$ This differs by a constant ($\tfrac{1}{2} \rho g D^2$) from the expression I wrote before. As always, I think I am justified in defining $h(x,y,t)$ as the deviation from the equilibrium height of the column. Again, up to the additive constant, the potential energy density is a quadratic function of that deviation if we choose the surface of the water (in equilibrium) as the zero of the gravitational potential (as above). This seems like quite a natural choice. Moreover, if you want to study surface waves, where the channel has a varying depth (waves breaking at the beach!), it’s the only choice that makes sense. Posted by: Jacques Distler on September 17, 2009 11:26 PM | Permalink | PGP Sig | Reply to this Re: The Sound of One Physicist Wailing “So far, so good. But, here comes the puzzle: if we plug (9) into (6) and (7), we find that the energy density and the flux have terms linear in f! Surely, they should be quadratic2.” Silly question from ignoramus here but why is it necessary to have quadratic terms instead of linear here, what problems exactly would this cause? 
Posted by: TinyGrasshopper on September 16, 2009 6:10 PM | Permalink | Reply to this Re: The Sound of One Physicist Wailing [W]hy is it necessary to have quadratic terms instead of linear here, what problems exactly would this cause? Among other weirdnesses, the latter is not positive-definite… Posted by: Jacques Distler on September 17, 2009 12:04 PM | Permalink | PGP Sig | Reply to this Re: The Sound of One Physicist Wailing Just out of curiosity, why is there no viscosity and why is the equation of state a power law? Not that it affects your main argument at all. Posted by: Grapes on October 18, 2009 7:29 PM | Permalink | Reply to this Re: The Sound of One Physicist Wailing Just out of curiosity, why is there no viscosity Putting viscosity into the linearized equations, I was aiming for, just dissipates the plane wave solutions that we found. And, in so doing, it dissipates the energy that — in the absence of viscosity — is a conserved quantity. Since the point was to discuss the latter, it makes sense to ignore viscosity for the purposes of this discussion. and why is the equation of state a power law? Again, the departures from a perfect fluid are irrelevant to the discussion. Posted by: Jacques Distler on October 18, 2009 11:24 PM | Permalink | PGP Sig | Reply to this Re: The Sound of One Physicist Wailing The true root of the problem is that zero point we are expanding about doesn’t have zero compressibility. We have exactly the same situation for electromagnetic waves propagating over a nonzero electromagnetic background field, or sound waves over a solid medium. What we ought to do instead is to average the energy density over several wavelengths. Posted by: Grapes on October 18, 2009 9:27 PM | Permalink | Reply to this Re: The Sound of One Physicist Wailing In fact, at least up to the quadratic level and for small enough perturbations, it’s possible to show that provided $\rho=\rho_0$ and $v=0$ at the boundaries, even though the energy density at any given point might be below the zero point energy density, the total energy has to be greater than or equal to the zero point energy. Posted by: Grapes on October 18, 2009 9:35 PM | Permalink | Reply to this
{"url":"http://golem.ph.utexas.edu/~distler/blog/archives/002058.html","timestamp":"2014-04-16T10:11:44Z","content_type":null,"content_length":"73758","record_id":"<urn:uuid:eafd42bd-56dc-4176-8209-a8f932068610>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Forest View, IL Calculus Tutor Find a Forest View, IL Calculus Tutor Hi! Thank you for considering my tutoring services. I have a diverse background that makes me well suited to help you with your middle school through college level math classes, as well as physics, mechanical engineering, intro computer science and Microsoft Office products. 17 Subjects: including calculus, physics, geometry, GRE ...I also can teach them things they don't know with just enough detail to allow them to apply the material on the exam, and I can re-teach them material they may have forgotten. I've been tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on th... 24 Subjects: including calculus, physics, geometry, GRE I earned High Honors in Molecular Biology and Biochemistry as well as an Ancient History (Classics) degree from Dartmouth College. I then went on to earn a Ph.D. in Biochemistry and Structural Biology from Cornell University's Medical College. As an undergraduate, I spent a semester studying Archeology and History in Greece. 41 Subjects: including calculus, chemistry, physics, English ...Collectively, these three fields are the critical foundation for analyzing human civilizations.Lecturer at the Oriental Institute, University of Chicago, Creator of a Life Long Learning Network in Chicago. Worked as a historian and archaeologist for 20 years. Gained a Phd in Historical Archaeology focused on the ancient world at the University of Chicago. 10 Subjects: including calculus, geometry, algebra 1, algebra 2 ...Geometry is unlike many other Math courses in that it is a spatial/visual class and deals minimally with variables and equations. Geometry students can expect topics such as: proofs using theorems and axioms, calculation of distance, area and volume, congruence and similarity of triangles, trans... 11 Subjects: including calculus, geometry, algebra 1, algebra 2 Related Forest View, IL Tutors Forest View, IL Accounting Tutors Forest View, IL ACT Tutors Forest View, IL Algebra Tutors Forest View, IL Algebra 2 Tutors Forest View, IL Calculus Tutors Forest View, IL Geometry Tutors Forest View, IL Math Tutors Forest View, IL Prealgebra Tutors Forest View, IL Precalculus Tutors Forest View, IL SAT Tutors Forest View, IL SAT Math Tutors Forest View, IL Science Tutors Forest View, IL Statistics Tutors Forest View, IL Trigonometry Tutors Nearby Cities With calculus Tutor Argo, IL calculus Tutors Bedford Park calculus Tutors Berwyn, IL calculus Tutors Broadview, IL calculus Tutors Brookfield, IL calculus Tutors Burbank, IL calculus Tutors Cicero, IL calculus Tutors Lyons, IL calculus Tutors Maywood, IL calculus Tutors Mc Cook, IL calculus Tutors Mccook, IL calculus Tutors Riverside, IL calculus Tutors Stickney, IL calculus Tutors Summit Argo calculus Tutors Summit, IL calculus Tutors
{"url":"http://www.purplemath.com/Forest_View_IL_Calculus_tutors.php","timestamp":"2014-04-17T11:24:22Z","content_type":null,"content_length":"24238","record_id":"<urn:uuid:50c4d7ad-1e10-41ba-a155-7d299d563ea0>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Talks and slides - samuelmonnier Selected talks: • Séminaire commum, LMTP - Université François Rablais, April 3rd 2014, Tours, Finite higher spin symmetries from exponentiation. • Séminaire commun LPTENS-LPTHE, April 2nd 2014, Paris, Global gravitational anomaly cancellation for five-branes. • 19th European Workshop on String Theory, September 2nd 2013, Bern, Global gravitational anomaly cancellation for five-branes. • String Math 2013, Simons Center for Geometry and Physics, June 18th 2013, Global gravitational anomaly cancellation for five-branes. • Laboratoire Camille Jordan, Université Lyon 1, October 5th 2012, The global gravitational anomaly of the self-dual field theory. • ETH Zürich, September 20th 2012, End of Summer Meeting in Mathematical Physics, The global gravitational anomaly of the self-dual field theory • TUT, Tallinn, July 11th 2012, Conference 3Quantum, The global gravitational anomaly of the self-dual field theory. Home • SCGP, Stony Brook, May 22nd 2012, Workshop on Algebraic Topology, Field Theory and Strings, The global gravitational anomaly of the self-dual field theory. Research • IST, Lisbon, April 16th 2012, Geometric quantization and the metric dependence of the self-dual field theory. • IHP, Paris, ``Rencontres théoriciennes'', April 12th 2012, The global gravitational anomaly of the self-dual field theory. • CERN, Geneva, January 24th 2012, The global gravitational anomaly of the self-dual field theory. • NHETC, Rutgers University, November 29th 2011, The global gravitational anomaly of the self-dual field theory. • University of Geneva, Section of Mathematics, November 8th 2011, Anomalies, quadratic refinements and index theory. • String-Math 2011, Philadelphia, June 8th 2011, The global gravitational anomaly of the self-dual field theory. • CERN, Geneva, February 2nd 2010, Defects in rational conformal field theories. • L.P.T.A., Université Montpellier II, May 26th 2008, Kondo flow invariants, twisted K-theory and Ramond-Ramond charges. • ETH Zürich, May 21st 2008, Kondo flow invariants, twisted K-theory and Ramond-Ramond charges. • LPT, Ecole normale supérieure, Paris, January 16th 2008, Quantization of Wilson loops in Wess-Zumino-Witten models. • Max Plack Institut für Gravitations-Physik, Golm, September 17th 2007, Quantization of Wilson loops in Wess-Zumino-Witten models. • Workshop on Poisson geometry and sigma models, Vienna, August 20th-24th 2007, Quantization of Wilson loops in Wess-Zumino-Witten models.
{"url":"https://sites.google.com/site/samuelmonnier/home/talks-and-slides","timestamp":"2014-04-16T07:31:46Z","content_type":null,"content_length":"19770","record_id":"<urn:uuid:0a698940-e084-4631-a97f-092f0b6795ce>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Lesson Aligned to the Common Core Standards Pythagorean Theorem Lesson Aligned to the Common Core Standards This Pythagorean theorem lesson aligned to the Common Core Standards comes from math teacher Megel Barker (you can see more of his work at https://www.youtube.com/user/whynvme1?feature=watch). It does a pretty good job of connecting math to objects in students’ lives, and should help increase engagement. This Pythagorean theorem lesson aligned to the Common Core Standards is aligned to CCSS.Math.Content.8.G.B.7 How Big Is My Screen? Pythagorean theorem made real by connecting the size of television screens to the hypotenuse of a RAT. 1) Objectives • To investigate patterns in numbers • To know how to find the size of a mobile phone screen • Find the sizes of different screens using the Pythagorean theorem. 2) Activity • Match the cards with the other that is exactly the same • In your pairs, decide what these numbers are called and suggest what keywords you could you think of • Look for a connection between them In this Pythagorean theorem lesson aligned to the Common Core Standards, learners are given cards with information such as "3 squared," and asked to seek out the matching answers. This is done with Tarsia domino software and grouped from 1–10 (higher for better groups). Learners will then seek connections between the square numbers, hence Pythagorean triples. This is better as a group activity. It introduce history of Pythagorean triples. 3) Pythagorean Triples • Are whole numbers • Eg. 3, 4 and 5 are a PT since 9 + 16 = 25 • Can you find any others? • For homework, see if you can find others. Following on from previous slides. Make sure to set homework to find others that are co-prime (could be a challenge to class). 4) How Big Is My Screen? In this Pythagorean theorem lesson aligned to the Common Core Standards, relate a fictitious story of TV shopping and finding it difficult to understand why TVs with same size have different shape. Find out what the measurement is of a 50 inch TV. Link this to a right angle and hence keyword hypotenuse = diagonal of rectangle etc. 5) Where is the Hypotenuse? This is a simple activity to locate a hypotenuse. Print this screen for the class to work on. This is a vital skill in doing Pythagorean theorem. 6) Pythagoras' Theorem Here you can share the formula and connect c to hypotenuse. 7) How Big Is My iPad? In this Pythagorean theorem lesson aligned to the Common Core Standards, use the theorem to find iPad screen size. Model working out and emphasizing the square root function. 8) How Big Are the Screens? Here are some practice questions. Print this screen for learners to use. 9) What Have We Learned Today? Recap keywords, hypotenuse, diagonal; reset HW on finding triples, explanations on how to do it, could even provide a wrong answer for them to correct.
{"url":"http://www.schoolimprovement.com/common-core-360/blog/pythagorean-theorem-lesson-aligned-to-the-Common-Core-Standards/?pr-sixth-grade-ela-common-core-literacy-lesson-plan","timestamp":"2014-04-16T13:21:55Z","content_type":null,"content_length":"25458","record_id":"<urn:uuid:312a22c8-6aa0-44d2-abd7-866cb76bf16b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Mike Tomlin makes aggressive, unconventional call, converts, no one cares [Note: I'm scheduled to appear on The Bobby Curran Show on ESPN 1420 at just after 12:30 today. If you're interested, you can listen here.] If you weren’t watching the Steelers-Raiders game, you probably didn’t hear about Mike Tomlin’s gutsy call late in the 4th quarter. That’s because it worked. On 3rd-and-1 with 4:34 remaining in a tie game, Pittsburgh had the ball at their own 29-yard line. The Steelers ran Isaac Redman over the left guard for no gain, leaving them in a precarious position. According to Brian Burke, immediately following Redman’s run, Pittsburgh had just a 34% chance of winning the game. This makes sense, because on average, punts from the 29-yard line end up with the other team gaining possession at their own 33-yard line (net of 38 yards). This conforms with Burke’s win probability model, which states that a team with 1st and 10 at their own 33 with 3:45 remaining has a 66% chance of winning. That’s just the average, though. What about the specific teams in this case? Pittsburgh has a rookie punter, so we probably shouldn’t assume anything better would happen if they punted. The biggest variable in the Raiders’ favor was the presence of Sebastian Janikowski, an uber kicker who appears capable of connecting from anywhere on the opponent’s side of the field. Since 2010, Janikowski is 12-of-18 from 50-yards or more, including a miss from 65; on average, those 18 kicks were 55-yard attempts. Essentially, if the Raiders got 30 yards after the punt, they would have had a very good chance of winning the game. Of course, the Steelers defense is generally one of the best in the league, even without Troy Polamalu and James Harrison. The Raiders had scored 3 touchdowns and a field goal on their prior 4 drives, although we shouldn’t let a small sample size persuade us too much. Additionaly, the Steelers would go three-and-out after converting the 4th down, and the Raiders ended up driving down the field and kicking the game-winning field goal, anyway. Again, it’s tempting to consider this when determining the Raiders’ odds of winning following a punt, but that’s the sort of logic I would rally against if the circumstances were different. My gut tells me the Raiders being at home, having a pretty decent offense, and a super kicker would outweight the fact that generally Pittsburgh has a very good defense. So at a minimum, I’d argue that a Steelers punt gives the Raiders a 66% chance of winning. If Pittsburgh converted, they’d probably have the ball somewhere between their own 30 and 35-yard lines; ironically, right where the Raiders ended up having the ball. That makes the calculus pretty easy: Pittsburgh would have a 66% chance of winning if they converted. You might argue that their odds would be greater, because of the presence of Ben Roethlisberger and a strong passing attack and considering Oakland’s pass defense is suspect. I’m sure Steelers fans were very confident that they would win the game after converting on 4th-and-1, but taking the conservative approach would say Pittsburgh had “only” a 66-percent chance of winning if they converted. Now what were the odds of converting? As always, you can trade a larger sample for a more precise one, and determining the appropriate cutoff is tricky. I looked at all plays in the second half or overtime of games where the team had 4th-and-1 on their own side of the field. 
I also limited this to games where the team was trailing by 3 or fewer, tied, or winning, to make sure that defenses were truly focused. That left 64 examples from ’00 to ’11. Teams converted 48 of the 64 attempts, or exactly 75% of the time. On average the teams gained 2.8 yards with a median gain of 2 yards. 55 of the 64 times the team ran the ball, with 44 of those being successful (80%). Only 4 of the 9 passes were successful, although the quarterbacks in the misses (Ryan Leaf, Byron Leftwich, Jason Campbell, Gus Frerotte and Alex Smith) leave something to be If we increase the sample to any 4th-and-1 attempt outside of the opponent’s 30 (so the first 70 yards of the field for the offense), teams converted 67% of the time. Let’s split the difference and give Pittsburgh a 70% chance of converting. Facing 4th-and-1, Pittsburgh has a 70% chance of getting a 66% chance of winning the game; that means they have a 46% chance of converting the 4th-and-1 and of then winning the game. This ignores the possibility of Pittsburgh missing the 4th-and-1 and still winning the game, which is clearly non-zero. And remember, if they punt, they have only a 34% chance of winning. Even if we force them to automatically lose if they don’t convert, they still are more likely to win the game by going for it. In fact, they only need to convert half of the time on 4th-and-1 to make it a break-even proposition, and that’s still ignoring the possibility of failing and still winning. What are the odds of that? With just under 4 minutes left, maybe not as bad as you think. If Oakland has the ball at the Steelers’ 29-yard line, they are extremely unlikely to be able to run out the clock. Pittsburgh called its first timeout before the 4th-down decision, meaning the Steelers still would have had 2 timeouts left if they could not gain one yard. Odds are the Raiders play it pretty conservatively and kick a field goal, and the Steelers have 2 minutes to go to kick a field goal to force overtime (or score a touchdown). That’s hardly a hopeless position in which to be. Based on past history, Oakland would have had an 82% chance — not 100% — of winning if they had the ball at the Pittsburgh 29-yard line with 3:45 left in the game. Oakland’s odds would be higher because of Janikowski, although that would be counterbalanced by Pittsburgh having one of the best quarterbacks in the league in the two minute drill. Add it all up, and it becomes a pretty obvious call… unless you’re risk averse. If Pittsburgh punts, they have just a 34% chance of winning, maybe even lower because of Janikowski. If Pittsburgh is successful, they are the team with the 66% chance of winning; if they miss, they still have an 18% chance of winning, based on having a small chance of winning in regulation and a decent chance of still going to overtime based on the amount of time remaining. Note that if there was one minute left, Pittsburgh’s odds of winning drop to just 9% if they don’t convert, but with nearly 4 minutes to go, they would not be out of the game if they failed. Considering a 70% success rate on 4th and 1, and they would have a 52% chance (66% x 70% + 18% x 30%) of winning they game if they went for it. In other words, punting it on 4th and 1 would drop Pittsburgh’s odds of winning from 52% to 34%, making this a significant and obvious decision for Tomlin. To make punting the better decision, you would really need to skew the odds. 
If you have the utmost faith in your defense, perhaps you think the Raiders having the ball at their own 33-yard line with 3:45 to go doesn’t make them the favorite to win. If you view that as a coin-flip game — a pretty difficult proposition to believe — Pittsburgh would *still* benefit by going for it, since their win probability was 52%. It also would have been wise to go for it if they were winning by 1 or 2 points… or even 3 points. A larger lead and it gets a little cloudy, but this is not much different than Bill Belichick’s decision against the Colts a few years ago. At the end of the game, especially in today’s high-octane NFL, you don’t want to be in a close game without the ball. And as you can see, converting the 4th down was one of the biggest swings in the game. Take a look at Brian Burke’s win probability graph: I said it was an obvious call unless you’re risk averse. As we all know, NFL coaches think conservatives are very liberal. On the surface this wasn’t a unique situation, but when you try to find comparables, you have to limit yourself. Since 2000, I looked at all situations where a team faced 4th-and-1 on their own side of the field, in a game where they were leading by 8 or less (or were tied), and with between 2 and 6 minutes remaining. There were only 20 situations like that, and 18 times the teams punted. The two other times? One came in week 17 for the Steelers in the game where Jamal Lewis crossed the 2000-yard mark and Pittsburgh was trying to close the curtain on a 6-10 season. A year after “4th and 2“, Bill Belichick went at it again against the Chargers. With exactly 2 minutes to go and the ball at the Patriots 49, New England ran it on 4th and 1. They missed, but went to win after Kris Brown could not connect on a 50-yard field goal. { 8 comments… read them below or add one } Leave a Comment { 1 trackback }
{"url":"http://www.footballperspective.com/mike-tomlin-makes-aggressive-unconventional-call-converts-no-one-cares/","timestamp":"2014-04-19T22:08:51Z","content_type":null,"content_length":"53628","record_id":"<urn:uuid:88564b81-3d8d-4826-bbd4-dcca41ecd957>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
Motion I Kinematics and Newton’s Laws Basic Quantities to Describe Motion Space (where are you) Time (when are you there) Motion is how we move through space as a function of the time. Newton’s Definitions: Space: Absolute space, in its own nature, without relation to anything external, remains always similar and immovable. Time: Absolute true and mathematical time, of itself, and from its own nature, flows equably, without relation to anything external, and by another name is called Newton’s definitions are so obvious that they were taken to be fundamental They are not really correct, but they were not questioned until 1905 when Einstein showed that space and time are intimately connected (Relativity) Speed, Velocity and Acceleration dist. _ traveled time _ for _ travel Note that this is another Rate Equation Suppose that we have a car that covers 20 miles in 30 minutes. What was its average speed? Speed = (20 mi)/(30 min) = 0.67 mi/min Speed = (20 mi)/(0.5 hr) = 40 mi/hr Note: Units of speed are distance divided by time. Any will do, but we need to know how to convert. Unit Conversion Essentially just multiply the quantity you want to convert by a judiciously selected expression for 1. 1 ft = 12 in (1 ft)/(1 ft) = 1=(12in)/(1ft) (12 in)/(12 in) = 1 = (1 ft)/(12 in) You cannot cancel the units here, they are important. Convert 27 in into feet. 1 ft 27 27 in 27 in ft 2.25 ft 12 in 12 You can do this for any type of unit If your unit to be converted is in the numerator, make sure it is in the denominator when you multiply by “one” If your unit to be converted is in the denominator, make sure it is in the numerator when you multiply by “one” I know that 1.609km = 1 mi. If I want to find out how many miles are 75 km I would multiply the 75 km by 50% 50% 1. (1mi)/(1.609km) 2. (1.609km)/(1mi) Given that we know 1609m = 1mi and 1hr=3600s, convert 65mi/hr into m/s. mi mi 1hr 1609m m hr hr 3600s 1mi s Find the speed of light in absolutely useless units c 3 10 8 m 1mi 8 furlong 3600s 24hr 14day 3 10 1day 1 fortnight s 1609m 1mi 1hr 1.8 10 12 Given that 1hr=3600s, 1609m=1mi and the speed of sound is 330 m/s, what is the speed of sound given in mi/hr? 25% 25% 25% 25% a) 12.3 mi/hr b) 147 mi/hr c) 738 mi/hr d) 31858200 mi/hr Back to Physics Given the speed, we can also calculate the distance traveled in a given time. distance = (speed) x (time) Example: If speed = 35m/s, how far do we travel in 1 hour. d=(35 m/s)(3600 s)=126,000m Velocity tells not only how fast we are going (speed) but also tells us the direction we are going. Velocity is a VECTOR, i.e. a quantity with both a magnitude and direction. Speed is a SCALAR, i.e. a quantity that only has a magnitude Displacement is a vector that tells us how far and in what direction Example: Plane fight to Chicago 100mi _ North V 200 mi North 0.5hr hr If we went in any other direction, we would still have a speed of 200 mi/hr, but we would end up in the wrong location. EXAMPLE: Daytona 500 Average speed is approximately 200 mi/hr, but what is average velocity? Since we start and stop at the same location, displacement is zero Velocity must also be zero. Car keeps changing direction so on average it doesn’t actually go anywhere, but it is still moving quickly Acceleration is the rate at which velocity Note that acceleration is a vector! change _ in _ velocity We may have acceleration (i.e. a change in velocity) by 1. Increasing speed 2. Decreasing speed 3. Changing directions Units of Acceleration V m / s V m a 2 t s t s How many “accelerators” (i.e. 
ways to change velocity) are there on a car? 1. One 2. Two 3. Three 4. Four Newton’s Laws 1. Every body continues it its state of rest OR uniform motion in a straight line, UNLESS it is compelled to change that state by forces impressed on it. Originally formulated by Galileo Qualitative statement about what a force is. A body moving at constant velocity has zero Net Force acting on it 2. The acceleration experienced by an object equals the net force acting on it divided by its Defines mass as a resistance to changes in motion. For a given force, a small mass experiences a big acceleration and a big mass experiences a small Standard unit of mass is the kilogram. Units of Force: F ma kg 2 ma ( N ) By definition, a Newton (N) is the force that will cause a 1kg mass to accelerate at a rate of Force due to Gravity Near the surface of the earth, all dropped objects will experiences an acceleration of g=9.8m/s2, regardless of their mass. Neglects air friction Weight is the gravitational force on a mass F=ma =mg =W Note the Weight of a 1kg mass on earth is 3. If and object (A) exerts a force on an object (B), then object B exerts an equal but oppositely directed force on A. When you are standing on the floor, you are pushing down on the floor (Weight) but the floor pushes you back up so you don’t If you jump out of an airplane, the earth exerts a force on you so you accelerate towards it. You put an equal (but opposite) force on the earth, but since its mass is so big its acceleration is very small When a bug hit the windshield of a car, which one experiences the larger force? 1. The bug 33% 33% 33% 2. The car 3. They experience equal but opposite When a bug hit the windshield of a car, which one experiences the larger 1. The bug 33% 33% 33% 2. The car 3. Since they have the same force, they have the same acceleration. Four Fundamental Forces 1. Gravity 2. Electromagnetic 3. Weak Nuclear 4. Strong Nuclear Examples of Non-fundamental forces: friction, air drag, tension Example Calculations Suppose you start from rest and undergo constant acceleration (a) for a time (t). How far do you go. Initial speed =0 Final speed = v=at Average speed vavg= (Final speed – Initial speed)/2 Vavg = ½ at Now we can calculate the distance traveled as d= vavg t = (½ at) t = ½ at2 Note: This is only true for constant acceleration. Free Fall Suppose you fall off a 100 m high cliff . How long does it take to hit the ground and how fast are you moving when you hit? d at 2d (2)(100m) t 20.4 s 2 4.52s a 9 . 8m / s 2 Now that we know the time to reach the bottom, we can solve for the speed at the v at v (9.8m / s )(4.52s) 44.3m / s We can also use these equations to find the height of a cliff by dropping something off and finding how log it takes to get to the ground (t) and then solving for the height (d). While traveling in Scotland I came across a deep gorge. To find out how deep it was I dropped rocks off of the bridge and found that it took them about 3 seconds to hit the bottom. What was the approximate depth of the gorge? 25% 25% 25% 25% 1. 15m 2. 30m 3. 45m 4. 90m
{"url":"http://www.docstoc.com/docs/127845914/Motion","timestamp":"2014-04-18T14:04:06Z","content_type":null,"content_length":"59213","record_id":"<urn:uuid:8d1af728-6fc7-49c3-aaa7-f4d9550572cf>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving 2 variables for optimal numbers

Post #1, October 10th 2010, 12:51 PM
Okay, say I have 5,000 white cubes, and I want to paint them either red or blue, but the paints cost differently: red paint costs $200, and blue paint costs $600. I want as many blue colored cubes as possible while still having enough money left over for red cubes, and all 5,000 cubes must be either red or blue by the time I have no money left over. I have $2,000,000. What is the optimal number of red/blue cubes if I want as many blue cubes as possible?
This is what I have so far:
(# red cubes * $200) + (# blue cubes * $600) = $2,000,000
# red cubes + # blue cubes = 5,000
How would I go about solving this?

Post #2, October 10th 2010, 12:55 PM
You must solve for either red or blue in one equation and substitute it into the other equation. Let R = # of red and B = # of blue, so $R = 5000 - B$. Put that in for R in the first equation and solve for B.

Post #3, October 10th 2010, 12:57 PM
200R + 600B = 2000000 simplifies to R + 3B = 10000. The second equation is R + B = 5000. Solve the system of equations by one of the methods you learned in class (I recommend either substitution or elimination).
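To see the substitution through to the end: putting R = 5000 - B into R + 3B = 10000 gives 2B = 5000, so B = 2500 and R = 2500, which uses the $2,000,000 exactly. A quick Python check (just an illustration, not anyone's posted code):

```python
# R + B = 5000 and 200R + 600B = 2000000, i.e. R + 3B = 10000.
# Substituting R = 5000 - B into R + 3B = 10000 gives 2B = 5000.
B = (10000 - 5000) // 2      # 2500 blue cubes
R = 5000 - B                 # 2500 red cubes

assert R + B == 5000
assert 200 * R + 600 * B == 2_000_000
print(R, B)                  # 2500 2500
```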
{"url":"http://mathhelpforum.com/algebra/159069-solving-2-variables-optimal-numbers.html","timestamp":"2014-04-17T23:08:51Z","content_type":null,"content_length":"36631","record_id":"<urn:uuid:797f66c4-bad5-474f-983c-eac2d43f3fd5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
[Microsoft] Median in BST [Tough one]
techInterview.org: answers to technical interview questions. Your host: Michael Pryor

The question is easy without the constraints; do it within the constraints. Here's the original version of the problem from MS:
Given a BST (binary search tree), how will you find the median?
- No extra memory.
- The function should be reentrant (no static or global variables allowed).
- The median for an even number of nodes will be the average of the 2 middle elements; for an odd number of nodes it is the middle element only.
- The algorithm should be efficient in terms of complexity.
Write solid, secure code for it.
No extra memory: you cannot use stacks to avoid recursion. No static/global: you cannot use recursion and keep track of the elements visited so far in inorder.
Wednesday, September 16, 2009

What's the data structure used? Does it have any parent pointer?
Thursday, September 17, 2009

Use the left pointer to point to a node's parent and the right pointer to point to its successor, and use the property of the BST to update those pointers. Use a modified inorder traversal where you update the above-mentioned pointers. The left pointer gets updated when you reach that node (from its parent). The right pointer gets updated when you reach its successor. For finding the successor you might have to go through left pointers (as the current node's ancestors will have their left pointers pointing to their parents), and also use the BST property to find the successor of a node (if the successor is not in its right child). If a node has a right child, its right pointer will get updated only after exploring its right child, and will be set to its successor. Execution time O(n).
Thursday, September 17, 2009

Why can't we apply the approach of finding the middle node of a linked list in this case?
Friday, September 18, 2009

You can, but you need to find a way to convert the BST to a linked list without using recursion or extra space. (It can be done IIRC.)
Friday, September 18, 2009

With the above idea we can use the algorithm to temporarily convert the BST into a DLL, find the median, and then revert back to a BST. So here goes another problem: write a program to convert a DLL to a BST (no extra space). So the original problem is now reduced to this problem. Easier or tougher??? Anyway, we can try.
Friday, September 18, 2009

Sorry, didn't think of it earlier... actually it's not at all easy (perhaps impossible) to get back the original BST with this approach, so this can be ruled out.
Neophyte (Mohammad Akhtar Ali)
Friday, September 18, 2009

We can traverse the tree in inorder and, for every node whose right pointer is NULL, make it point to its inorder successor. This operation would be O(n). After this we can use the hare-and-tortoise approach as follows: start with the minimum and let pointer p point to it. Do an inorder traversal again, this time moving p to the inorder successor of the node pointed to by p, and the fast pointer q to its successor's successor. Since we have an inorder successor for every node, we are never blocked from proceeding. Once we reach the last element node (the max), we can return the node->data pointed to by p. There's one issue again, which is to revert the tree back to its original structure!!
Friday, September 18, 2009

The actual solution to the problem is in using Morris inorder, a traversal algorithm which does the tree traversal without recursion or stacks. It does it through a temporary transformation of the tree so that we can traverse it in ascending order (in the case of a BST) by just following the right pointers to a node.
The transformed tree is such that for every node the left child has already been visited. So a simple algorithm for the median in a BST would be:
1) Use any algorithm to count the number of nodes in the BST. Let it be n.
2) Use Morris inorder (no recursion, no stacks; all constraints met) to traverse the tree, incrementing a counter for each node visited.
   a) If n is even, return the average of the (n/2)-th and (n/2 + 1)-th values (start collecting when counter == n/2).
   b) If n is odd, return when counter == (n+1)/2 (1-based indexing).
The Morris inorder algorithm takes care of restoring the tree to its original form while it is traversing it.
Summary of Morris inorder:
while not finished:
  if the node has no left descendant, visit it and go to the right;
  otherwise, make this node the right child of the rightmost node in its left descendant and go to this left descendant.
A beautiful implementation along with an explanation is given in Adam Drozdek's data structures and algorithms book.
Saturday, September 19, 2009
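A rough sketch of the two-pass Morris-inorder median described above, written in Python with made-up names (Node, morris_inorder, bst_median). It uses no recursion, no stack and no static or global state, and the threading step restores the tree as it goes:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def morris_inorder(root):
    """Yield the keys in sorted order with no recursion and no stack."""
    node = root
    while node is not None:
        if node.left is None:
            yield node.val
            node = node.right
        else:
            # Find the in-order predecessor: rightmost node of the left subtree.
            pred = node.left
            while pred.right is not None and pred.right is not node:
                pred = pred.right
            if pred.right is None:
                pred.right = node        # create a temporary thread back to node
                node = node.left
            else:
                pred.right = None        # remove the thread, restoring the tree
                yield node.val
                node = node.right

def bst_median(root):
    if root is None:
        return None
    n = sum(1 for _ in morris_inorder(root))    # pass 1: count the nodes
    wanted = {(n - 1) // 2, n // 2}             # one middle index, or two if n is even
    picked = [v for i, v in enumerate(morris_inorder(root)) if i in wanted]
    return sum(picked) / len(picked)

# Example: a BST holding 1..7 has median 4.
root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
print(bst_median(root))                         # 4.0
```

Averaging over the one or two picked values handles the odd and even cases uniformly.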
{"url":"http://discuss.joelonsoftware.com/default.asp?interview.11.780597.8","timestamp":"2014-04-18T00:29:45Z","content_type":null,"content_length":"30357","record_id":"<urn:uuid:e49fa175-8684-47b6-9627-557e01e032a5>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
8.174 MINLOC — Location of the minimum value within an array

Description:
Determines the location of the element in the array with the minimum value, or, if the DIM argument is supplied, determines the locations of the minimum element along each row of the array in the DIM direction. If MASK is present, only the elements for which MASK is .TRUE. are considered. If more than one element in the array has the minimum value, the location returned is that of the first such element in array element order. If the array has zero size, or all of the elements of MASK are .FALSE., then the result is an array of zeroes. Similarly, if DIM is supplied and all of the elements of MASK along a given row are zero, the result value for that row is zero.

Standard: Fortran 95 and later
Class: Transformational function

Syntax:
RESULT = MINLOC(ARRAY, DIM [, MASK])
RESULT = MINLOC(ARRAY [, MASK])

Arguments:
ARRAY: Shall be an array of type INTEGER or REAL.
DIM: (Optional) Shall be a scalar of type INTEGER, with a value between one and the rank of ARRAY, inclusive. It may not be an optional dummy argument.
MASK: Shall be an array of type LOGICAL, and conformable with ARRAY.

Return value:
If DIM is absent, the result is a rank-one array with a length equal to the rank of ARRAY. If DIM is present, the result is an array with a rank one less than the rank of ARRAY, and a size corresponding to the size of ARRAY with the DIM dimension removed. If DIM is present and ARRAY has a rank of one, the result is a scalar. In all cases, the result is of default INTEGER type.

See also: MIN, MINVAL
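For comparison only (this is not part of the GNU Fortran manual), roughly the same operations can be written in NumPy; note that NumPy indices are 0-based, whereas MINLOC returns 1-based positions, and the masking trick below is just one possible analogue:

```python
import numpy as np

a = np.array([[4.0, 9.0, 1.0],
              [7.0, 2.0, 6.0]])

# Location of the overall minimum, as a tuple of (0-based) indices.
loc = np.unravel_index(np.argmin(a), a.shape)     # (0, 2), i.e. the element 1.0

# Rough analogue of the DIM argument: reduce along one dimension only.
along_dim1 = np.argmin(a, axis=0)                 # array([0, 1, 0])

# Rough analogue of MASK: ignore elements where the mask is False.
mask = a > 2.0
masked = np.unravel_index(np.argmin(np.where(mask, a, np.inf)), a.shape)   # (0, 0)
```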
{"url":"http://www.lahey.com/docs/lfpro75help/gfortran/minloc.html","timestamp":"2014-04-21T02:14:15Z","content_type":null,"content_length":"8958","record_id":"<urn:uuid:aeac48a3-9281-44e9-ba58-926306c5388d>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Towards Topological Quantum Computation? — Knotting and Fusing Flux Tubes Meagan Thompson, MIT. Abstract: Models for topological quantum computation are based on braiding and fusing anyons (quasiparticles of fractional statistics) in 2D. The anyons that can exist in a physical theory are determined by the symmetry group of the Hamiltonian. From the mathematical perspective, any theory of anyons must have braiding and fusion rules that satisfy certain consistency conditions known as the Seiberg-Moore Polynomial Equations (also known as the pentagon and hexagon equations). Maclane's coherence theorem states these are in fact all that is required in order to achieve commutativity of all combinations of fusion and braiding (i.e. a consistent physical theory). Two applications of the Hexagon Equation yield the Yang-Baxter Equation familiar from statistical mechanics: σ[j] σ [j+1] σ[j] = σ[j+1] σ[j] σ[j+1] where the σ[i] are the abstract braid group generators. It is an unsolved mathematical problem to determine in general all the matrix solutions to the Yang-Baxter Equation. In the case that the Hamiltonian undergoes spontaneous symmetry breaking of the full symmetry group G to a finite residual gauge group H, however, solutions are given by representations of the quantum double D(H) of the subgroup. The quasi-triangular Hopf Algebra D(H) is obtained from Drinfeld's quantum double construction applied to the algebra F(H) of functions on the finite group H. As a vector space, D(H) = F(H) ⊗ ℂ[H] = C(H × H) where ℂ[H] is the group algebra over the complex numbers and C(H × H) is the space of ℂ-valued functions on H × H. A major new contribution of this work is a program written in MAGMA to compute the particles (and their properties—including spin) that can exist in a system with an arbitrary finite residual gauge group in addition to the braiding and fusion rules for those particles. We compute explicitly the fusion rules for two non-abelian groups thought to be sufficient for universal quantum computation under certain circumstances: S[3] and A[5], and determine that the anyons are all Majorana for these groups. (In the appendices, a few other non-abelian groups of interest—S[4], A[4], and D[4]—are addressed). In addition, experimental proposals for topological quantum computation with these groups are suggested, assessed, and compared to other quantum computing proposals currently on the
{"url":"http://web.mit.edu/physics/cmt/informalseminar_abstracts/meagan.html","timestamp":"2014-04-19T04:28:13Z","content_type":null,"content_length":"3371","record_id":"<urn:uuid:0d4b23d9-18b3-4c80-9643-7e7cbca5833a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: July 1998 [00362] [Date Index] [Thread Index] [Author Index] Re: tag Times protected?? • To: mathgroup at smc.vnet.net • Subject: [mg13454] Re: [mg13419] tag Times protected?? • From: David Withoff <withoff> • Date: Fri, 24 Jul 1998 01:45:39 -0400 • Sender: owner-wri-mathgroup at wolfram.com > I am writing with an annoying behavior in Mathematica, which I hope has > a logical explanation. I have found that sometimes when I write two > consecutive lines which display output, I get the message "tag Times > protected" > Say I write: > pe[j] = pate[[1]] > ph[j] = path[[1]] > peh[j] = pateh[[1]] > where pate, path, and pateh are lists. So this is merely an assignment > exercise, but Mathematica somehow thinks that I've written instead: > pe[j] = pate[[1]]ph[j] = path[[1]]peh[j] = pateh[[1]] > i.e. I'm trying to multiply the three. Of course, this is an error, so > Mathematica gives a warning message. But, if I write the same > expression as: > pe[j] = pate[[1]]; > ph[j] = path[[1]]; > peh[j] = pateh[[1]]; > the problem disappears, and Mathematica doesn't want to multiply the > three. > Does anyone know why this is happening, or more importantly, how to > prevent this behavior? I'd appreciate any help. > Jordan The only way to be certain that a collection of inputs will be separated as you want is to include the necessary parentheses, semicolons, etc., to make the input unambiguous, or to put each input in a separate cell. Otherwise, the computer must guess where one input ends and the next input begins, and those guesses won't necessarily be what you want. In most computer languages this type of input would be a syntax error. In most computer languages, the extra line-continuation marks and other notations that disambiguate the input are required. In Mathematica that you don't have to do that, but you might want to do it anyway. It is possible that the heuristics that are used in Mathematica for guessing where one input ends and the next input begins will be changed in future versions of Mathematica. This is a difficult task, especially in typeset input with automatic line-breaking, where line breaks can change depending on such things as the width of the notebook window. If the only thing that distinguishes one input from another is that the inputs are on separate lines, it is possible, for example, if the heuristics aren't quite right, for the meaning of the input to change when you change the width of the notebook window. That would be In any case, the answer to your question is that this happens because the computer isn't always able to guess where you want to separate the inputs, and the solution is to add disambiguating syntax, or to put the inputs in separate cells. Dave Withoff Wolfram Research
{"url":"http://forums.wolfram.com/mathgroup/archive/1998/Jul/msg00362.html","timestamp":"2014-04-17T04:02:44Z","content_type":null,"content_length":"36733","record_id":"<urn:uuid:127010d2-e8ef-4054-a62c-b50192e8c40c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
The effect of pruning and compression on graphical representation of the output of a speech recognizer,” Computer Speech and Language - in SphinxTrain,” CMU Sphinx Workshop for Users and Developers (CMU-SPUD , 2010 "... Discriminative training schemes, such as Maximum Mutual Information Estimation (MMIE), have been used to improve the accuracy of speech recognition systems trained using Maximum Likelihood Estimation (MLE). In this paper, we present the implementation details of MMIE training in SphinxTrain and base ..." Cited by 1 (1 self) Add to MetaCart Discriminative training schemes, such as Maximum Mutual Information Estimation (MMIE), have been used to improve the accuracy of speech recognition systems trained using Maximum Likelihood Estimation (MLE). In this paper, we present the implementation details of MMIE training in SphinxTrain and baseline results for MMIE training on the Wall Street Journal (WSJ) SI84 and SI284 data sets. This paper also introduces an efficient lattice pruning technique that both speeds up the process and increases the impact of MMIE training on recognition accuracy. The proposed pruning technique, based on posterior probability pruning, is shown to provide better performance than MMIE using standard pruning techniques. Index Terms — SphinxTrain, MMIE training, word lattice, lattice pruning 1. "... Complex graphs, ones containing thousands of nodes of high degree, are difficult to visualize. Displaying all of the nodes and edges of these graphs can create an incomprehensible cluttered output. This paper presents a simplification algorithm that may be applied to a complex graph in order to prod ..." Cited by 1 (0 self) Add to MetaCart Complex graphs, ones containing thousands of nodes of high degree, are difficult to visualize. Displaying all of the nodes and edges of these graphs can create an incomprehensible cluttered output. This paper presents a simplification algorithm that may be applied to a complex graph in order to produce a controlled thinning of the graph. Using importance metrics, the simplification process removes nodes from the graph, leaving the central structure for visualization and evaluation. The simplification algorithm consists of two steps, calculation of the importance metrics and pruning. Several metrics based on various topological graph properties are described. The metrics are then used in a pruning process to simplify the graph. Nodes, along with their corresponding edges, are removed from the graph, while maintaining the graph’s overall connectivity. This simplified graph provides a cleaner, more meaningful visual representation of the graph’s structure; thus aiding the analysis of the graph’s underlying data. 1 "... In discriminative training, such as Maximum Mutual Information Estimation (MMIE) training, a word lattice is usually used as a compact representation of many different sentence hypotheses and hence provides an efficient representation of the confusion data. However, in a large vocabulary continuous ..." Cited by 1 (1 self) Add to MetaCart In discriminative training, such as Maximum Mutual Information Estimation (MMIE) training, a word lattice is usually used as a compact representation of many different sentence hypotheses and hence provides an efficient representation of the confusion data. However, in a large vocabulary continuous speech recognition (LVCSR) system trained from hundreds or thousands hours training data, the extended Baum-Welch (EBW) computation on the word lattice is still very expensive. 
In this paper, we investigated the effect of lattice pruning on MMIE training, where we tested the MMIE performance trained with different lattice complexity. A beam pruning and a posterior probability pruning method were applied to generate different sizes of word lattices. The experimental results show that using the posterior probability lattice pruning algorithm, we can save about 40 % of the total computation and get the same or more improvement compared to the baseline MMIE result. Index Terms — MMIE training, word lattice, lattice pruning 1.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=8952175","timestamp":"2014-04-20T02:21:40Z","content_type":null,"content_length":"18905","record_id":"<urn:uuid:859f8b2b-08a0-4fe6-bbe6-0ce3992c41fb>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Bonsall Math Tutor Find a Bonsall Math Tutor ...I never teach "in a vacuum," meaning that wherever possible and appropriate I tie in the current knowledge with previous topics and with an eye to the future. Just this past year I tutored groups of students in SAT preparation, both in math and verbal, and a few years ago I tutored English for a... 29 Subjects: including precalculus, algebra 1, algebra 2, SAT math ...Recently, I worked in Pristina, Kosovo, which provided me the opportunity to learn about the complicated history of the Balkans. I ran track & Field throughout Middle School and High School. I was a distance runner; therefore, I ran the 4X8 relay, 800, 1600, and 3200. 27 Subjects: including calculus, precalculus, English, statistics ...My studies included a course in logic, and many of my lectures required logical proofs as assignments. I use logic every day, still have my logic textbook, and reference it while writing philosophy papers. I'm thoroughly prepared to tutor in logic, including Aristotelain Logic, Boolean Logic, Predicate and Propositional Logic. 38 Subjects: including algebra 2, physics, precalculus, study skills ...A presto! I enjoyed my freshman economics course so much that I decided to major in economics, graduating with honors, and went on to an MBA. I love helping others understand not just the concepts and formulas, but what they really mean, and how they can be applied to real-world problems. 16 Subjects: including algebra 1, writing, statistics, geometry ...I love teaching and explaining almost anything, and it is quite the thrill to engage a student who was confused and now completely understands because of me. The problem with school is that teachers and older tutors are scary and I combat that with relating to each student as a peer while growin... 29 Subjects: including geometry, ACT Math, ASVAB, SAT math
{"url":"http://www.purplemath.com/bonsall_ca_math_tutors.php","timestamp":"2014-04-16T04:45:20Z","content_type":null,"content_length":"23667","record_id":"<urn:uuid:17d87d9e-3ac7-43c4-8be6-4cf66738be90>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
SAS-L archives -- May 2005, week 2 (#264)LISTSERV at the University of Georgia Date: Wed, 11 May 2005 11:32:37 -0700 Reply-To: cassell.david@EPAMAIL.EPA.GOV Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU> From: "David L. Cassell" <cassell.david@EPAMAIL.EPA.GOV> Subject: Re: Dependent sample difference in mean test In-Reply-To: <1115826748.025038.172760@g44g2000cwa.googlegroups.com> Content-type: text/plain; charset=US-ASCII gblockhart@YAHOO.COM wrote: > I have two dependent samples with different numbers of observations. > need to know whether the means of the two samples are statistically > different from each other. > My sample_1 has approximately 800,000 observations. Sample_2 has > approximately 130,000 observations. > I have run a regression on sample_1 to generate coefficients. I then > "fit" the coefficients from sample_1 to the characteristics of > observations. This gives me a predicted value for sample_2 based on > sample_1 coefficients. I then calculate a residual by subtracting > sample_2 observation actual value from the predicted value (predicted > from the sample_1 coefficients applied to the sample_2 > characteristics). > Then I take the mean of the residuals from sample_2. > I repeat the process in the opposite, i.e., I run a regression on > sample_2, get coefficients, then fit the coeffificients from sample_2 > to the sample_1 characteristics. This generates a predicted value, > which I subtract from each sample_1 actual - this generates the > sample_1 residuals. I then take the mean sample_1 residual. > I expect the sample_1 and sample_2 residuals to be of opposite sign. > need to test the difference in the mean residuals. I have two > dependent samples (of residuals) and I have very different sample > (of residuals). > I can make the assumption that they are perfectly negatively > and proceed with a t-test. Then assume that they are perfectly > uncorrelated and proceed with a t-test. This will give me a range of > t-stats for my test. > But, I was hoping someone could help me with a stronger (or more > direct) test. I'm afraid the range won't give strong enough results. > So, this is a statistical theory question instead of a direct SAS > question. Hey, stat questions are allowed here too. But first... Why are you doing this? This doesn't make much sense to me, and your resulting data are NOT directly comparable. You cannot do either t-test. Period. You want to assume that you have something in between perfectly correlated and uncorrelated, so your t-statistic would be bracketed. It doesn't work that way. Even worse, both of the t-statistics you have in mind assume that the observations are independent. In a paired t-test, one assumes that the *differences* are independent. In a two-sample test, one assumes that all n1+n2 observations are independent of one another. You have created residuals which are (by construction) all You have no independent observations here, and you shouldn't be considering a basic t-test. So, step back. Write to SAS-L (not to me personally) and explain why you are doing this, and what you hope to achieve. The big picture would be helpful. Perhaps someone here can point you toward a more productive approach. BTW, with sample sizes like you have, your statistical tests will be really flaky, since the size of n will drive virutally anything to appear significant. Why do you have such large samples, and where do they come from, and what do they represent? David Cassell, CSC Senior computing specialist mathematical statistician
{"url":"http://www.listserv.uga.edu/cgi-bin/wa?A2=ind0505b&L=sas-l&D=1&O=D&F=&S=&P=29336","timestamp":"2014-04-17T12:53:47Z","content_type":null,"content_length":"12667","record_id":"<urn:uuid:87897ad3-3250-47dc-9879-07418de6cce1>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] matrix indexing question Charles R Harris charlesr.harris@gmail.... Fri Mar 30 00:16:26 CDT 2007 On 3/29/07, Timothy Hochberg <tim.hochberg@ieee.org> wrote: > On 3/29/07, Bill Baxter <wbaxter@gmail.com> wrote: > > > > On 3/30/07, Timothy Hochberg <tim.hochberg@ieee.org> wrote: > > > Note, however that you can't (for instance) multiply column vector > > with > > > a row vector: > > > > > > >>> (c)(r) > > > Traceback (most recent call last): > > > ... > > > TypeError: Cannot matrix multiply columns with anything > > > > > > > That should be allowed. (N,1)*(1,M) is just an (N,M) matrix with > > entries C[i,j] = A[i,0]*B[0,] > I thought about that a little, and while I agree that it could be allowed, > I'm not sure that it should be allowed. It's a trade off between a bit of > what I would guess is little used functionality with some enhanced error > checking (I would guess that usually row*column signals a mistake). However, > I don't care much one way or the other; it's not hard to allow. It's really a sort of tensor product, so use outer(.,.). In my mind, row and column vectors are *not* matrices, they only have a single dimension. On the other hand (r)(c) is really the application of the dual vector r (a functional) to the vector c, i.e., r is a map from vectors into the reals (complex). However, I think overloading the multiplication in this case is I kind of like the idea of using call for multiply, though. If it > > doesn't turn out to have any major down sides it could be a good way > > to give ndarray a concise syntax for "dot". Hmmm, have to try it a bit to see how it looks. Overall, I like this -------------- next part -------------- An HTML attachment was scrubbed... URL: http://projects.scipy.org/pipermail/numpy-discussion/attachments/20070329/a8fbacc7/attachment.html More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-March/026949.html","timestamp":"2014-04-17T21:47:02Z","content_type":null,"content_length":"5072","record_id":"<urn:uuid:010b980b-7ce2-4110-ac75-f2b04871745c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Babies Are Born With Some Math Skills Replies: 2 Last Post: Oct 30, 2013 5:45 PM Messages: [ Previous | Next ] Re: Babies Are Born With Some Math Skills Posted: Oct 30, 2013 5:45 PM In sci.physics Sam Wormley <swormley1@gmail.com> cut and pasted from some web sit: > Babies Are Born With Some Math Skills Perhaps but it has nothing to do with physics, ass hat. Jim Pennino Date Subject Author 10/30/13 Babies Are Born With Some Math Skills Sam Wormley 10/30/13 Re: Babies Are Born With Some Math Skills Nathan Vanderpool 10/30/13 Re: Babies Are Born With Some Math Skills jimp@specsol.spam.sux.com
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2604012&messageID=9315006","timestamp":"2014-04-17T19:02:33Z","content_type":null,"content_length":"18432","record_id":"<urn:uuid:799d26bf-96ea-4b0e-9c8b-eb6f7fdb7828>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
When is the K-theory presheaf a sheaf? up vote 6 down vote favorite Let $F$ be a Deligne-Mumford stack that is of finite type, smooth and proper over $\mathrm{Spec~}k$ for a perfect field $k$. Consider $K_m$, the presheaf of $m$-th $K$-groups on $F_{et}$, the etale site of $F$: $K_m : F_{et} \to Ab$ $(U \to F) \mapsto K_m(U)$ $(f : U \to V) \mapsto (f^{*} : K_m(V) \to K_m(U))$ My question is, what are some simple cases when this is already a sheaf? For example, is it a sheaf when $F = BG$ for a finite group $G$? My question is aimed at a computation of motives of DM-stacks. The sheaffification $\mathcal{K}_m = K_m^{++}$ is one way to define the Chow groups of $F$: $A^m(f) := H^m(F_{et}, \mathcal{K}_m \otimes {\bf Q})$ A twist on this definition leads to a well-behaved theory of motives for DM-stacks described by Toen Etale site Someone might be able to confirm that the cohomology can be computed using the etale site whose objects are etale morphisms from affine schemes, since Laumon and Moret-Bailly show it's equivalent (by the inclusion) to the larger site which contains all etale morphisms from algebraic spaces (Champs algebriques, p.102). This might simplify working with the $K$-groups. kt.k-theory-homology ag.algebraic-geometry stacks motives 2 In some sense, $K$-theory is a global invariant exactly because it's not a sheaf. Consider $K_0$ as a warm-up. If it were a sheaf, it would be zero much too often. – Minhyong Kim Dec 12 '10 at 1 Right, but there are some descent properties for presheaves of K-theory spectra aren't there ? – Zoran Skoda Dec 25 '10 at 20:34 1 Zoran: Yes, Zariski (or better, Nisnevich) but not etale in general. – Dustin Clausen Dec 25 '10 at 21:17 add comment 1 Answer active oldest votes In general, these presheaves are not sheaves, even on the etale sites of fields. As an easy example, $K_2(\mathbb{C})$ is non-torsion divisible, but $K_2(\mathbb{R})$ has a $2$-torsion element given in symbols by $(-1,-1)$ in Milnor K-theory. But, $K_2(\mathbb{R})$, if $K_2$ were a sheaf, would be the $\mathbb{Z}/2$-fixed points of $K_2(\mathbb{C})$. This cannot happen in this example. Using the fact that $K_{2i}$ of an algebraically closed field is a non-torsion uniquely divisible group, I imagine one can construct counter-examples for any even K-group. I would imagine that odd K-groups are also not sheaves. up vote 4 down vote However, for finite fields, the situation might be different, by Quillen's computation. There, it looks as if the K-groups might be sheaves. For details on $K_2$ and Milnor $K$-theory, look up Matsumoto's Theorem. For other K-groups of algebraically closed fields, see Suslin's paper On the K-theory of algebraically closed In general, the place to start thinking about the etale site and algebraic K-theory would be Thomason's paper Algebraic K-theory and etale cohomology. Thanks for resolving the question in the case of Milnor's K-theory. Unfortunately, my question was about the K-theory defined for schemes by the $Q$ construction. I think it agrees for $K_0$ and $K_1$, but not for the higher K-groups. – Jon Skowera Jan 25 '11 at 16:31 1 For a field, it agrees for K_0, K_1, and K_2. – Benjamin Antieau Jan 31 '11 at 5:25 add comment Not the answer you're looking for? Browse other questions tagged kt.k-theory-homology ag.algebraic-geometry stacks motives or ask your own question.
{"url":"http://mathoverflow.net/questions/39499/when-is-the-k-theory-presheaf-a-sheaf","timestamp":"2014-04-18T13:32:56Z","content_type":null,"content_length":"58663","record_id":"<urn:uuid:e4ce2880-dad1-42a5-a24c-440bbf134ec8>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Guest Editors' Introduction: The Top 10 Algorithms JANUARY/FEBRUARY 2000 (Vol. 2, No. 1) pp. 22-23 1521-9615/00/$31.00 © 2000 IEEE Published by the IEEE Computer Society Guest Editors' Introduction: The Top 10 Algorithms Article Contents In This Issue YOUR THOUGHTS? Download Citation Download Content DOWNLOAD PDF In putting together this issue of Computing in Science & Engineering, we knew three things: it would be difficult to list just 10 algorithms; it would be fun to assemble the authors and read their papers; and, whatever we came up with in the end, it would be controversial. We tried to assemble the 10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century. Following is our list (here, the list is in chronological order; however, the articles appear in no particular order): • Metropolis Algorithm for Monte Carlo • Simplex Method for Linear Programming • Krylov Subspace Iteration Methods • The Decompositional Approach to Matrix Computations • The Fortran Optimizing Compiler • QR Algorithm for Computing Eigenvalues • Quicksort Algorithm for Sorting • Fast Fourier Transform • Integer Relation Detection • Fast Multipole Method With each of these algorithms or approaches, there is a person or group receiving credit for inventing or discovering the method. Of course, the reality is that there is generally a culmination of ideas that leads to a method. In some cases, we chose authors who had a hand in developing the algorithm, and in other cases, the author is a leading authority. Monte Carlo methods are powerful tools for evaluating the properties of complex, many-body systems, as well as nondeterministic processes. Isabel Beichl and Francis Sullivan describe the Metropolis Algorithm. We are often confronted with problems that have an enormous number of dimensions or a process that involves a path with many possible branch points, each of which is governed by some fundamental probability occurrence. The solutions are not exact in a rigorous way, because we randomly sample the problem. However, it is possible to achieve nearly exact results using a relatively small number of samples compared to the problem's dimensions. Indeed, Monte Carlo methods are the only practical choice for evaluating problems of high dimensions. John Nash describes the Simplex method for solving linear programming problems. (The use of the word programming here really refers to scheduling or planning—and not in the way that we tell a computer what must be done.) The Simplex method relies on noticing that the objective function's maximum must occur on a corner of the space bounded by the constraints of the "feasible region." Large-scale problems in engineering and science often require solution of sparse linear algebra problems, such as systems of equations. The importance of iterative algorithms in linear algebra stems from the simple fact that a direct approach will require O( N ^3) work. The Krylov subspace iteration methods have led to a major change in how users deal with large, sparse, nonsymmetric matrix problems. In this article, Henk van der Vorst describes the state of the art in terms of methods for this problem. Introducing the decompositional approach to matrix computations revolutionized the field. G.W. Stewart describes the history leading up to the decompositional approach and presents a brief tour of the six central decompositions that have evolved and are in use today in many areas of scientific computation. 
David Padua argues that the Fortran I compiler, with its parsing, analysis, and code-optimization techniques, qualifies as one of the top 10 "algorithms." The article describes the language, compiler, and optimization techniques that the first compiler had. The QR Algorithm for computing eigenvalues of a matrix has transformed the approach to computing the spectrum of a matrix. Beresford Parlett takes us through the history of early eigenvalue computations and the discovery of the family of algorithms referred to as the QR Algorithm. Sorting is a central problem in many areas of computing so it is no surprise to see an approach to solving the problem as one of the top 10. Joseph JaJa describes Quicksort as one of the best practical sorting algorithm for general inputs. In addition, its complexity analysis and its structure have been a rich source of inspiration for developing general algorithm techniques for various Daniel Rockmore describes the FFT as an algorithm "the whole family can use." The FFT is perhaps the most ubiquitous algorithm in use today to analyze and manipulate digital or discrete data. The FFT takes the operation count for discrete Fourier transform from O( N ^2) to O( N log N). Some recently discovered integer relation detection algorithms have become a centerpiece of the emerging discipline of "experimental mathematics"—the use of modern computer technology as an exploratory tool in mathematical research. David Bailey describes the integer relation problem: given n real numbers x [1], ..., x [ n ], find the n integers a [1], ... , a [ n ] (if they exist) such that a [1] x [1] + ... + a [ n ] x [ n ] = 0. Originally, the algorithm was used to find the coefficients of the minimal integer polynomial an algebraic number satisfied. However, more recently, researchers have used them to discover unknown mathematical identities, as well as to identify some constants that arise in quantum field theory in terms of mathematical constants. The Fast Multipole Algorithm was developed originally to calculate gravitational and electrostatic potentials. The method utilizes techniques to quickly compute and combine the pair-wise approximation in O( N) operations. This has led to a significant reduction in the computational complexity from O( N ^2) to O( N log N) to O( N) in certain important cases. John Board and Klaus Schulten describe the approach and its importance in the field. We have had fun putting together this issue, and we assume that some of you will have strong feelings about our selection. Please let us know what you think. Jack Dongarra is a professor of computer science in the Computer Science Department at the University of Tennessee and a scientist in the mathematical science section of Oak Ridge National Lab. He received his BS in mathematics from Chicago State University, his MS in computer science from the Illinois Institute of Technology, and his PhD in applied mathematics from the University of New Mexico. Contact him at dongarra@cs.utk.edu; www.cs.utk.edu/~dongarra. Francis Sullivan is the associate editor-in-chief of CiSE and director of the Institute for Defense Analyses' Center for Computing Sciences. Contact him at the IDA/Center for Computing Sciences, Bowie, MD 20715;
{"url":"http://www.computer.org/portal/csdl/mags/cs/2000/01/c1022.html","timestamp":"2014-04-17T12:40:01Z","content_type":null,"content_length":"48023","record_id":"<urn:uuid:9e8146a6-912a-44d2-b5f4-6528d73ecf28>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra II Recipe: Linear Inequalities in Two VariablesAlgebraLAB: StudyAids A. Determining if an Ordered Pair is a Solution 1. Substitute the "x" and "y" value into the inequality. 2. Do all operations on each side until a true or false statement can be determined. 3. If the final statement is a true statement, the ordered pair is a solution. 4. If the final statement is a false statement, the ordered pair is NOT a solution. Given 2x + 3y ≥ 5. Is (0,1) a solution? Given 2x + 3y ≥ 5. Is (4,-1) a solution? Given 2x + 3y ≥ 5. Is (2,1) a solution? B. Graphing Linear Inequalities 1. Solve the inequality for "y" (to look like y=mx+b). 2. Determine the slope and y-intercept. 3. Graph the y-intercept. 4. Use the movement from slope to get additional points for the boundary. 5. Connect the points. 6. Shade Graph the following inequality. 9x - 3y ≤ 15 Graph the following inequality. 4x + 12y > 15
{"url":"http://algebralab.org/studyaids/studyaid.aspx?file=Algebra2_2-6.xml","timestamp":"2014-04-16T21:51:39Z","content_type":null,"content_length":"16558","record_id":"<urn:uuid:578edd4f-49ab-4a0d-87c0-d181f3a26537>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Free let us c solutions Chapter Lists for "Let us C Solution" │S.N│ Chapter │S.N│ Chapter │ │1. │ │ 5 │ Arrays │ │2. │ │6. │Puppetting On Strings │ │3. │ │7. │ Structures │ │4. │Function and Pointer │8. │ Input/Output in C │ CHAPTER: The Decision Control Structure solutions: if,if-else,Nested if-else [C] Attempt the following: (a) If cost price and selling price of an item is input through the keyboard, write a program to determine whether the seller has made profit or incurred loss. Also determine how much profit he made or loss he incurred. download solution (.c) download solution (.exe) (b) Any integer is input through the keyboard. Write a program to find out whether it is an odd number or even number. download solution (.c) download solution (.exe) View Solution Here... (c) Any year is input through the keyboard. Write a program to determine whether the year is a leap year or not. (Hint: Use the % (modulus) operator) download solution (.c) download solution (.exe) View Solution Here... (d) According to the Gregorian calendar, it was Monday on the date 01/01/1900. If any year is input through the keyboard write a program to find out what is the day on 1st January of this year. download solution (.c) download solution (.exe) View Solution Here... (e) A five-digit number is entered through the keyboard. Write a program to obtain the reversed number and to determine whether the original and reversed numbers are equal or not. download solution (.c) download solution (.exe) View Solution Here... (f) If the ages of Ram, Shyam and Ajay are input through the keyboard, write a program to determine the youngest of the three. download solution (.c) download solution (.exe) View Solution Here... (g) Write a program to check whether a triangle is valid or not, when the three angles of the triangle are entered through the keyboard. A triangle is valid if the sum of all the three angles is equal to 180 degrees. download solution (.c) download solution (.exe) View Solution Here... (h) Find the absolute value of a number entered through the keyboard. download solution (.c) download solution (.exe) View Solution Here... (i) Given the length and breadth of a rectangle, write a program to find whether the area of the rectangle is greater than its perimeter. For example, the area of the rectangle with length = 5 and breadth = 4 is greater than its perimeter. download solution (.c) download solution (.exe) View Solution Here... (j) Given three points (x1, y1), (x2, y2) and (x3, y3), write a program to check if all the three points fall on one straight line. download solution (.c) download solution (.exe) View Solution Here... (k) Given the coordinates (x, y) of a center of a circle and it’s radius, write a program which will determine whether a point lies inside the circle, on the circle or outside the circle. (Hint: Use sqrt( ) and pow( ) functions) download solution (.c) download solution (.exe) View Solution Here... (l) Given a point (x, y), write a program to find out if it lies on the x-axis, y-axis or at the origin, viz. (0, 0) View Solution Here... [F] ([G] in 4th edition ) Attempt the following: (a) Any year is entered through the keyboard, write a program to determine whether the year is leap or not. Use the logical operators && and ||. download solution (.c) download solution (.exe) View Solution Here... (b) Any character is entered through the keyboard, write a program to determine whether the character entered is a capital letter, a small case letter, a digit or a special symbol. 
The following table shows the range of ASCII values for various characters. │ Characters │ ASCII Values │ │A – Z │65 – 90 │ │a – z │97 – 122 │ │0 – 9 │48 – 57 │ │special symbols│0 - 47, 58 - 64, 91 - 96, 123 - 127 │ download solution (.c) download solution (.exe) View Solution Here... (c) An Insurance company follows following rules to calculate premium. (1) If a person’s health is excellent and the person is between 25 and 35 years of age and lives in a city and is a male then the premium is Rs. 4 per thousand and his policy amount cannot exceed Rs. 2 lakhs. (2) If a person satisfies all the above conditions except that the sex is female then the premium is Rs. 3 per thousand and her policy amount cannot exceed Rs. 1 lakh. (3) If a person’s health is poor and the person is between 25 and 35 years of age and lives in a village and is a male then the premium is Rs. 6 per thousand and his policy cannot exceed Rs. 10,000. (4) In all other cases the person is not insured. Write a program to output whether the person should be insured or not, his/her premium rate and maximum amount for which he/she can be insured. download solution (.c) download solution (.exe) View Solution Here... (d) A certain grade of steel is graded according to the following conditions: (i) Hardness must be greater than 50 (ii) Carbon content must be less than 0.7 (iii) Tensile strength must be greater than 5600 The grades are as follows: Grade is 10 if all three conditions are met Grade is 9 if conditions (i) and (ii) are met Grade is 8 if conditions (ii) and (iii) are met Grade is 7 if conditions (i) and (iii) are met Grade is 6 if only one condition is met Grade is 5 if none of the conditions are met Write a program, which will require the user to give values of hardness, carbon content and tensile strength of the steel under consideration and output the grade of the steel. download solution (.c) download solution (.exe) View Solution Here... (e)A library charges a fine for every book returned late. For first 5 days the fine is 50 paise, for 6-10 days fine is one rupee and above 10 days fine is 5 rupees. If you return the book after 30 days your membership will be cancelled. Write a program to accept the number of days the member is late to return the book and display the fine or the appropriate message. download solution (.c) download solution (.exe) View Solution Here... (f)If the three sides of a triangle are entered through the keyboard, write a program to check whether the triangle is valid or not. The triangle is valid if the sum of two sides is greater than the largest of the three sides. download solution (.c) download solution (.exe) View Solution Here... (g)If the three sides of a triangle are entered through the keyboard, write a program to check whether the triangle is isosceles, equilateral, scalene or right angled triangle. download solution (.c) download solution (.exe) View Solution Here... (h) In a company, worker efficiency is determined on the basis of the time required for a worker to complete a particular job. If the time taken by the worker is between 2 – 3 hours, then the worker is said to be highly efficient. If the time required by the worker is between 3 – 4 hours, then the worker is ordered to improve speed. If the time taken is between 4 – 5 hours, the worker is given training to improve his speed, and if the time taken by the worker is more than 5 hours, then the worker has to leave the company. 
If the time taken by the worker is input through the keyboard, find the efficiency of the worker. download solution (.c) download solution (.exe) View Solution Here... (i) A university has the following rules for a student to qualify for a degree with A as the main subject and B as the subsidiary subject: (a) He should get 55 percent or more in A and 45 percent or more in B. (b) If he gets less than 55 percent in A he should get 55 percent or more in B. However, he should get at least 45 percent in A. (c) If he gets less than 45 percent in B and 65 percent or more in A he is allowed to reappear in an examination in B to qualify. (d) In all other cases he is declared to have failed. Write a program to receive marks in A and B and Output whether the student has passed, failed or is allowed to reappear in B. download solution (.c) download solution (.exe) View Solution Here... (j) The policy followed by a company to process customer orders is given by the following rules: (a) If a customer order is less than or equal to that in stock and has credit is OK, supply has requirement. (b) If has credit is not OK do not supply. Send him intimation. (c) If has credit is Ok but the item in stock is less than has order, supply what is in stock. Intimate to him data the balance will be shipped. Write a C program to implement the company policy download solution (.c) download solution (.exe) View Solution Here... [J]( [k]in 4th edition) Attempt the following: (a)Using conditional operators determine: (1) Whether the character entered through the keyboard is a lower case alphabet or not. (2) Whether a character entered through the keyboard is a special symbol or not. download solution (.c) download solution (.exe) View Solution Here... (b) Write a program using conditional operators to determine whether a year entered through the keyboard is a leap year or not. download solution (.c) download solution (.exe) View Solution Here... (c) Write a program to find the greatest of the three numbers entered through the keyboard using conditional operators. download solution (.c) download solution (.exe) View Solution Here...
{"url":"http://peaceinf.com/cat/prog/letusc/let-us-c-solution.php","timestamp":"2014-04-16T16:38:54Z","content_type":null,"content_length":"33044","record_id":"<urn:uuid:2500ca1b-c4d2-4702-b506-fe2b95ecd58e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
February 24: Gil Ariel, CIMS Transition State Theory in a Solvable Model Transition state theory is a method for calculating transition rates between metastable states in ergodic dynamical systems.In this method one counts the number of times a typical trajectory between one metastable set to the other crosses a given hypersurface that separates the states in phase space. This count gives the average rate of crossing the surface. In this talk I will present an analysis of the predictions of Transition state theory and its underlying assumptions in the context of the Kac-Zwanzig model. This model is a simplified system in which a 1D particle is coupled to a bath of harmonic oscillators.
{"url":"http://www.cims.nyu.edu/seminars/gsps/past_talks/ArielFeb2406.html","timestamp":"2014-04-20T20:55:59Z","content_type":null,"content_length":"2463","record_id":"<urn:uuid:ce093f47-bfd7-44f0-927e-9f41b5dd8c79>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Real Analysis Continuous Functions October 6th 2008, 12:36 PM Real Analysis Continuous Functions Let f:[a, b] -> R be continuous at c of [a,b] and suppose that f(c) > 0. Prove that there exist a positive number m and an interval [u,v] is a subset of [a,b] such that c of [u,v] and f(x) >(or equal) for all x of [u,v]. I don't know where to begin with this problem!! October 6th 2008, 12:43 PM Use the basic definition of continuity at c. Use $\varepsilon = \frac{{f(c)}}{2}=m$ and find the $\delta$ that goes with it. Then let $u = c - \frac{\delta }{2}\,\& \,v = c + \frac{\delta }{2}$. October 7th 2008, 06:47 AM ok, so we know that f is continuous so, $|f(x) - f(c)|< \varepsilon$ and $|x-c|< \delta$. So to prove my thing do I go something like $|f(x) - f(c)|< \frac{{f(c)}}{2} = m$ with $c$$\in [u,v]$ with $u = c - \frac{\delta }{2}\,\& \,v = c + \frac{\delta }{2}$ You told me to find $\delta$ and I found it to be $\delta = v-u$ I know this may sound stupid but I have starred at that since last night and I still can't figure out how to get to $f(x) \geq m$$\forall x \in [u,v]$ October 7th 2008, 07:16 AM Well, that was not a full proof but rather an outline more or less. There are several careful adjustments to make to get u & v . In the definition of continuity at c let $\varepsilon = \frac{{f(c)}}{2} > 0$. Now corresponding to that epsilon $\left( {\exists \delta > 0} \right)\left[ {\left| {x - c} \right| < \delta \Rightarrow \left| {f(x) - f(c)} \right| < \frac{{f(c)}}<br /> {2}} \right]$. Remove the absolute value: $- \frac{{f(c)}}{2} < f(x) - f(c) < \frac{{f(c)}}{2} \Rightarrow \quad \frac{{f(c)}}{2} < f(x)$. That means that every x within a distance of $\delta$ of c has the property $\frac{{f(c)}}{2} < f(x)$. Let $u = \max \left\{ {a,c - \frac{\delta}{2} } \right\}\,\& \,v = \min \left\{ {b,c + \frac{\delta}{2} } \right\}$.
{"url":"http://mathhelpforum.com/calculus/52281-real-analysis-continuous-functions-print.html","timestamp":"2014-04-18T01:18:43Z","content_type":null,"content_length":"9394","record_id":"<urn:uuid:7979a6d5-7a24-4b84-b0ef-c6fb1a65373c>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Self-Learn Textbook Suggestions I don't know why no one has mentioned the books in the Schaum's Outline series. The books I have come across in that series are generally well illustrated with graphics and have many fully solved problems. There are also problems that have no solutions or answers, but each chapter has 20 or 30 worked problems, and then another 20 or 30 without answers. Also, the books are quite cheap. The level of the books, I have found, are usually slightly lower than what it would be in a university course. But this makes them good for self-study and as introductions to the subjects. I think that you will probably have to move on to more standard textbooks afterwards, but they are very good supplements. They have books on all the subjects you listed except relativity and magnetohydrodynamics. By the way, if you're worried about the price (Munkres's topology book is ridiculously expensive new) you may want to look at books by Dover publications. Their books are classics that would have otherwise gone out of print. This also means some of them may be a bit old-fashioned, but classics can never be obsolete. For example, Dover has many excellent books on general (or point-set) topology. I am familiar with Willard, which is very good but may be a bit advanced, but they also have Mendelson, Gamelin & Greene, Hocking & Young, and some others, books which have very good However, Dover books generally do not have solutions. Munkres doesn't, and most university books don't. But at least Dover books are cheap. And Munkres also covers algebraic topology, but I hardly think that justifies the price.
{"url":"http://www.physicsforums.com/showthread.php?p=3923845","timestamp":"2014-04-17T07:28:13Z","content_type":null,"content_length":"40621","record_id":"<urn:uuid:ed628a69-86ff-46ac-b857-8c2569d69f85>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
Project Euler 43: Pandigital numbers with an unusual sub-string divisibility in C# Project Euler 43: Find the sum of all pandigital numbers with an unusual sub-string divisibility property Written by Kristian on 17 May 2011 Topics: Project Euler When I first saw pandigital numbers I thought it was just a curious thing that we would visit once. I was wrong as Problem 42 of Project Euler is also about a special group of pandigital numbers. The problem reads The number, 1406357289, is a 0 to 9 pandigital number because it is made up of each of the digits 0 to 9 in some order, but it also has a rather interesting sub-string divisibility property. Let d[1] be the 1^st digit, d[2] be the 2^nd digit, and so on. In this way, we note the following: □ d[2]d[3]d[4]=406 is divisible by 2 □ d[3]d[4]d[5]=063 is divisible by 3 □ d[4]d[5]d[6]=635 is divisible by 5 □ d[5]d[6]d[7]=357 is divisible by 7 □ d[6]d[7]d[8]=572 is divisible by 11 □ d[7]d[8]d[9]=728 is divisible by 13 □ d[8]d[9]d[10]=289 is divisible by 17 Find the sum of all 0 to 9 pandigital numbers with this property. We will take two different approaches to this. First We will explore the brute force of generating all permutations and after that we will use the divisibility requirements to limit the number of permutations we have to explore. Brute Force In Problem 24 we were asked to find a certain permutation of the numbers 0-9, at that point we developed an algorithm for finding the next permutation. We can reuse that algorithm to generate permutation which we need to check. At each permutation we need to check if it has the wanted sub string divisibility property. Once you understand the code from problem 24 the rest is rather trivial, and the C# code looks like this long result = 0; long result = 0; int[] divisors = { 1, 2, 3, 5, 7, 11, 13, 17 }; int count = 1; int numPerm = 3265920; while (count < numPerm) { int N = perm.Length; int i = N - 1; while (perm[i - 1] >= perm[i]) { i = i - 1; int j = N; while (perm[j - 1] <= perm[i - 1]) { j = j - 1; // swap values at position i-1 and j-1 swap(i - 1, j - 1); j = N; while (i < j) { swap(i - 1, j - 1); bool divisible = true; for (int k = 1; k < divisors.Length; k++) { int num = 100 * perm[k] + 10 * perm[k + 1] + perm[k + 2]; if (num % divisors[k] != 0) { divisible = false; if (divisible) { long num = 0; for(int k = 0; k < perm.Length; k++){ num = 10*num + perm[k]; result += num; One thing to note is that the first digit can’t be zero and thus the number of possible entries are 9*9! = 3265920. The code gives the following result The sum of numbers is 16695334890 Solution took 108 ms Divisibility Analysis Lets look at the sub string divisibility property to see if we can figure out a solution from the sub-string divisibility properties. Observation 1: Since d[4]d[5]d[6] must be divisible by 5, d[6] must be either 0 or 5. Observation 2: d[6]d[7]d[8] must be divisible by 11 and from observation 1 we know that d[6] is either 0 or 5. if d[6] is 0 then the only results are the set {011, 022,…, 099} and those are not pandigital numbers, therefore d[6] must be 5. Observation 3: Since d[6] is 5 that limits d[6]d[7]d[8] are limited to the five-hundreds divisible by 11 with no repeated digits which gives the set {506, 517, 528, 539, 561, 572, 583, 594}. Observation 4: d[7]d[8]d[9] has to be divisible by 13 and from observation 3 we know have limited d[7]d[8] to 8 combinations. That means we have at maximum 8 sequences for d[7]d[8]d[9]. 
We can limit d[6]d[7]d[8]d[9] to the set of 4 sequences {5286, 5390, 5728, 5832} by eliminating the sequences with repeated digits Observation 5: Repeating the above for d[8]d[9]d[10] we get that d[6]d[7]d[8]d[9]d[10] must be in the set {52867, 53901, 57289} so now we have limited the set to 3 possible endings of the pandigital Observation 6: d[5]d[6]d[7] must be divisible by 7 and must end on 52, 01 or 89. That property limits our ending sequence to d[5]d[6]d[7]d[8]d[9]d[10] {952867, 357289} Observation 7: Since d[2]d[3]d[4] has to be divisible by 2 it means that d[4] must be even. This expands our set significantly to {0952867, 4952867, 0357289, 4357289, 6357289} since there can’t be repeat digits. Observation 8: We can continue this with analysing d[3]d[4]d[5] which must be divisible by 3. It must end on {09, 49, 03, 43, 63} and contain no repeat digits. Based on the digit sum we know that the sum d[3] + d[4] + d[5] be divisible by 3 in order for the number to be divisible by 3. That gives us {30952867, 60357289, 06357289} Observation 9: The three entries in the set of possible endings has the common thing that non of them contain 1 or 4. Since there are no rules for d1 and d2, we can have both permutations of the two numbers and still have a valid number. That gives us a total of 6 numbers we need to sum up for the result. Result: 1430952867 + 1460357289 + 1406357289 + 4130952867 + 4160357289 +4106357289 = 16695334890 So it was possible to find all numbers with this property by hand. Wrapping up I have shown you two ways to get the result, either by generating all permutations and checking them or by doing a pen and paper analysis. Personally I think it was more fun to do the pen & paper analysis than writing a piece of C# code for brute forcing the problem. It gave me a much better feel for the problem and the properties of the numbers. I have uploaded the C# source code for the brute force approach so you can check out the details if you like. Do you have another possible solution or way of attacking this problem? Let me hear from you. Comments, questions or pointing out mistakes is also very welcome. Update: Suprdewd sent me another brute force approach which is longer to write, but significantly faster to execute. I have included it in the source code if you would like to study it. I think it is pretty straightforward to read. 14 Comments For This Post I'd Love to Hear Yours! 1. That was pretty cool. I took yet another approach: Loop through multiples of 17. Make sure the multiple has distinct digits. Loop through multiples of 13. Make sure the multiple has distinct digits and starts with the end of the multiple of 17. Continue this with the other multiples. Concatenate the multiples together and make sure it contains distinct digits, and then add the missing digit in front of the number and add that number to the sum. That should runs in under 10ms. Anyways, thanks for the post. 2. Hi suprDewd I think your approach is a pretty smart way to bruteforce the problem. Instead of testing all possible candidates, you build a string with the correct property. If you don’t mind I would like to include your source code in the downloadable file for other people to get inspired by. 3. Of course I don’t mind. 4. I don’t know if I understood the question, but wy the number 1024356789 aren’t between this numbers? 024 / 2 = 12 243 / 3 = 81 435 / 5 = 87 356 / 2 = 178 567 / 3 = 189 678 / 2 = 339 789 / 3 = 263 I will show my solution now 5. Hi Luiz Thanks for the comment. 
The reason the number you propose is not a solution is that 789/17 = 46.41… Therefore the requirement that d[7]d[8]d[9] should be divisible by 17 as stated in the problem formulation is not fulfilled. 6. you are right! I didn’t see it! I’m sorry about it. The answers are the sequence of primes. thanks! 7. Hi Kristian, First off, your posts have been very helpful to me as I go through Project Euler. But I have another reason for commenting here. I think I may have made a simpler version of SuprDewd’s code (in Java, just change a few characters and it’s C#). It uses recursion instead of a set number of for loops, and because of that, it can handle any number of digits by only changing one or two variables. If you would like the code, I can send it to you, but it’s a little much to post here. Thanks again! 8. Hi Jon That sounds really interesting. I would love to see that. However, I would love that anyone can see the solution. So in case it is too big for the blog, I would suggest you to use pastebin.com to hold the code, and throw a link here. 9. I managed to shorten it, so here it is: public class Main { public static int[] ps = { 2, 3, 5, 7, 11, 13, 17 }; public static int[] perm = { 1, 0, 2, 3, 4, 5, 6, 7, 8, 9 }; public static long sum = 0; public static void main(String[] args) { run(ps.length-1, new int[]{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }); public static void run(int p, int[] previous) { if(p <= -1) { long n = (long)concat(previous[0], concat(previous[1] % 10, concat(previous[2] % 10, concat(previous[3] % 10, concat(previous[4] % 10, concat(previous[5] % 10, previous[6] % 10)))))); if(!distinct(n)) return; long pan = make_pan(n); if(n == pan) return; sum += pan; for(int i = ps[p]; i < 1000; i += ps[p]) { if(!distinct(i) || (p < ps.length-1 && previous[p+1]/10 != i%100)) continue; previous[p] = i; run(p-1, previous); public static boolean distinct(long n) { boolean[] digits = new boolean[10]; while(n > 0) { if(digits[(int)n%10]) return false; digits[(int)n%10] = true; n /= 10; return true; public static long make_pan(long n) { boolean[] digits = new boolean[10]; long origN = n, newN = 0L; while(n > 0) { digits[(int)n%10] = true; n /= 10; for(long i = 0; i < 10; i++) if(!digits[(int)i]) { if(newN != 0) return origN; newN = concat(i, origN); return newN != 0 ? newN : origN; public static long concat(long a, long b) { long c = b; while (c > 0) { a *= 10; c /= 10; return a + b; Hopefully it all makes sense…If not, I’ll see what I can try to explain better. 10. Thanks for that. I will have to study it in detail later on. 11. numPerm = 3265920 This is 9! but you have 10 digits not 9… 12. I think you are missing the comment One thing to note is that the first digit can’t be zero and thus the number of possible entries are 9*9! = 3265920. The code gives the following result 13. Indeed… my mistake. Thanks for the answer. 14. No problem, it took me a while to recall Leave a Comment Here's Your Chance to Be Heard! You can use these html tags: <a href="" title=""> <abbr title=""> <acronym title=""> <b> <blockquote cite=""> <cite> <code> <del datetime=""> <em> <i> <q cite=""> <strike> <strong> You can use short tags like: [code language="language"][/code] The code tag supports the following languages: as3, bash, coldfusion, c, cpp, csharp, css, delphi, diff,erlang, groovy, javascript, java, javafx, perl, php, plain, powershell, python, ruby, scala, sql, tex, vb, xml You can use [latex]your formula[/latex] if you want to format something using latex Notify me of comments via e-mail. 
You can also subscribe without commenting.
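Before moving on, here is a minimal Python sketch of the chained-multiples idea suprDewd describes in the first comment (build the number backwards from multiples of 17, 13, ..., 2 that overlap on two digits). It is not the C# code from the post; the structure and names are my own, and it only assumes the problem statement above.

# Build 0-9 pandigital numbers backwards from chained multiples of the primes.
# d[8]d[9]d[10] must be a multiple of 17, d[7]d[8]d[9] a multiple of 13, etc.,
# and consecutive triples share two digits, so each step only prepends one digit.

DIVISORS = [2, 3, 5, 7, 11, 13, 17]

def solve():
    # Start with all 3-digit (zero-padded) multiples of 17 that have distinct digits.
    tails = [f"{m:03d}" for m in range(0, 1000, 17) if len(set(f"{m:03d}")) == 3]
    # Extend to the left, one digit at a time, through 13, 11, 7, 5, 3, 2.
    for d in reversed(DIVISORS[:-1]):
        new_tails = []
        for t in tails:
            for digit in "0123456789":
                if digit in t:
                    continue                     # digits must stay distinct
                if int(digit + t[:2]) % d == 0:  # new leading triple divisible by d
                    new_tails.append(digit + t)
        tails = new_tails
    total = 0
    for t in tails:                              # t is d[2]..d[10]; prepend the missing digit
        first = (set("0123456789") - set(t)).pop()
        if first != "0":
            total += int(first + t)
    return total

print(solve())   # expected: 16695334890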
{"url":"http://www.mathblog.dk/project-euler-43-pandigital-numbers-sub-string-divisibility/","timestamp":"2014-04-21T12:10:34Z","content_type":null,"content_length":"61208","record_id":"<urn:uuid:0eb84065-c897-4282-9d91-f1a1517486bf>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Hits and Depth Question (JOGL/OpenGL) [Archive] - OpenGL Discussion and Help Forums Zed Gimbal 01-23-2009, 09:07 AM Hi! I'm not only a new forum member, but also a new OpenGL programmer. I'm writing a Java/JOGL application which has selecting and picking functionality, and I'm finding that, when processing hits, all of the hit objects are coming back with zero minDepth and maxDepth. When retrieving the minDepth and maxDepth from the select buffer, I'm getting depthInt = -2147483648, which converts to a float value of 0.0 when using this algorithm: depthFloat = 1f + (float)((long)depthInt)/0x7fffffff My understanding is that, if minDepth is 0.0, then the hit object is thought to be at the near clipping plane, and if minDepth is 1.0, then the object is thought to be at the far clipping plane. My first thought was that perhaps the camera's frustum spans too much Z space, resulting in lower-than-needed precision when OpenGL tries to determine their depths. This doesn't seem to be a problem, though, because: * the meshes that I'm displaying have been glScalefed to fit within a 4x4x4 cube * the meshes all reside within the camera's frustum * the camera's frustum has a near clipping plane distance of 1.5, and a far clipping plane distance of 20.0. The problem seems to occur regardless of the number of meshes that I display. Does anyone have any suggestions about what I might be doing incorrectly? Thanks in advance!
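Not part of the original thread, but for readers puzzling over the same conversion: the selection buffer stores the hit min/max depths as unsigned 32-bit integers scaled by 2^32 - 1, while Java's int is signed, so the raw value has to be masked before normalizing. A small Python sketch of that arithmetic (function and variable names are mine; it assumes the standard hit-record layout of name count, min depth, max depth, then names):

# Convert raw selection-buffer hit records into names plus [0, 1] depth values.
# Depths are stored as unsigned 32-bit ints scaled by 2**32 - 1; a language with
# signed 32-bit ints (e.g. Java) shows large values as negative unless masked.

UINT_MAX = 2**32 - 1

def to_unit_depth(raw):
    return (raw & 0xFFFFFFFF) / UINT_MAX   # reinterpret as unsigned, then normalize

def parse_hits(buffer, hit_count):
    """Yield (min_depth, max_depth, names) for each hit record."""
    pos = 0
    for _ in range(hit_count):
        name_count = buffer[pos]
        z_min = to_unit_depth(buffer[pos + 1])
        z_max = to_unit_depth(buffer[pos + 2])
        names = buffer[pos + 3 : pos + 3 + name_count]
        yield z_min, z_max, names
        pos += 3 + name_count

# The value quoted in the post: -2147483648 is 0x80000000 as an unsigned int,
# which normalizes to roughly 0.5 rather than 0.0.
print(to_unit_depth(-2147483648))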
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-166558.html","timestamp":"2014-04-21T10:13:55Z","content_type":null,"content_length":"6891","record_id":"<urn:uuid:91a4b506-ee4c-4bd7-8cf1-015464c6509f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
Forest View, IL Calculus Tutor Find a Forest View, IL Calculus Tutor Hi! Thank you for considering my tutoring services. I have a diverse background that makes me well suited to help you with your middle school through college level math classes, as well as physics, mechanical engineering, intro computer science and Microsoft Office products. 17 Subjects: including calculus, physics, geometry, GRE ...I also can teach them things they don't know with just enough detail to allow them to apply the material on the exam, and I can re-teach them material they may have forgotten. I've been tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on th... 24 Subjects: including calculus, physics, geometry, GRE I earned High Honors in Molecular Biology and Biochemistry as well as an Ancient History (Classics) degree from Dartmouth College. I then went on to earn a Ph.D. in Biochemistry and Structural Biology from Cornell University's Medical College. As an undergraduate, I spent a semester studying Archeology and History in Greece. 41 Subjects: including calculus, chemistry, physics, English ...Collectively, these three fields are the critical foundation for analyzing human civilizations.Lecturer at the Oriental Institute, University of Chicago, Creator of a Life Long Learning Network in Chicago. Worked as a historian and archaeologist for 20 years. Gained a Phd in Historical Archaeology focused on the ancient world at the University of Chicago. 10 Subjects: including calculus, geometry, algebra 1, algebra 2 ...Geometry is unlike many other Math courses in that it is a spatial/visual class and deals minimally with variables and equations. Geometry students can expect topics such as: proofs using theorems and axioms, calculation of distance, area and volume, congruence and similarity of triangles, trans... 11 Subjects: including calculus, geometry, algebra 1, algebra 2 Related Forest View, IL Tutors Forest View, IL Accounting Tutors Forest View, IL ACT Tutors Forest View, IL Algebra Tutors Forest View, IL Algebra 2 Tutors Forest View, IL Calculus Tutors Forest View, IL Geometry Tutors Forest View, IL Math Tutors Forest View, IL Prealgebra Tutors Forest View, IL Precalculus Tutors Forest View, IL SAT Tutors Forest View, IL SAT Math Tutors Forest View, IL Science Tutors Forest View, IL Statistics Tutors Forest View, IL Trigonometry Tutors Nearby Cities With calculus Tutor Argo, IL calculus Tutors Bedford Park calculus Tutors Berwyn, IL calculus Tutors Broadview, IL calculus Tutors Brookfield, IL calculus Tutors Burbank, IL calculus Tutors Cicero, IL calculus Tutors Lyons, IL calculus Tutors Maywood, IL calculus Tutors Mc Cook, IL calculus Tutors Mccook, IL calculus Tutors Riverside, IL calculus Tutors Stickney, IL calculus Tutors Summit Argo calculus Tutors Summit, IL calculus Tutors
{"url":"http://www.purplemath.com/Forest_View_IL_Calculus_tutors.php","timestamp":"2014-04-17T11:24:22Z","content_type":null,"content_length":"24238","record_id":"<urn:uuid:50c4d7ad-1e10-41ba-a155-7d299d563ea0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Downey, CA Algebra Tutor Find a Downey, CA Algebra Tutor ...Finally, as an SAT tutor, I gave students ivy league college level study skills as applied to SAT test prep. I am extremely qualified to teach anything to do with theatre. I did a BFA and MFA degree, both in theatre and acting. 47 Subjects: including algebra 1, English, reading, business ...Having outstanding grades all throughout high school and graduating with a 3.9 GPA, I believe I am very qualified to tutor in the subjects noted. As a young female, I believe I can connect to my students in ways that other people cannot. I can develop a friendship and maintain a professional position at the same time. 19 Subjects: including algebra 1, algebra 2, Spanish, elementary math ...I have had 10 years of computer networking experience in the United States and five years as a management supervisor. Currently, I have been a math tutor for 9 years for grades 4 to 12 in the East San Gabriel Valley. My success in helping students improve their grades is evident in that 95% of ... 9 Subjects: including algebra 1, algebra 2, calculus, geometry ...I have also taught Geometry, Algebra and SAT to advanced Math students; as well as to students participating in the "NO CHILD LEFT BEHIND" (NCLB) Program for various unified school districts in Orange County, California. In addition I have also helped elementary, middle school and high school st... 18 Subjects: including algebra 1, algebra 2, geometry, ASVAB ...Why various things work in certain ways? I graduated with a BS in Microbiology, the scientific study of germs and other very tiny things, from California state University Long Beach. I also have a minor in Chemistry. 19 Subjects: including algebra 1, English, reading, biology Related Downey, CA Tutors Downey, CA Accounting Tutors Downey, CA ACT Tutors Downey, CA Algebra Tutors Downey, CA Algebra 2 Tutors Downey, CA Calculus Tutors Downey, CA Geometry Tutors Downey, CA Math Tutors Downey, CA Prealgebra Tutors Downey, CA Precalculus Tutors Downey, CA SAT Tutors Downey, CA SAT Math Tutors Downey, CA Science Tutors Downey, CA Statistics Tutors Downey, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/downey_ca_algebra_tutors.php","timestamp":"2014-04-16T07:27:06Z","content_type":null,"content_length":"23841","record_id":"<urn:uuid:a4bd162b-c70c-4793-842d-ca48afcb5b0c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
du support et du contour du support d'une loi de probabilité. Annales de l'Institut Henri Poincaré. Section B. Calcul des Probabilités et Statistique. Nouvelle Série, 1999
Cited by 501 (32 self)
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a preliminary theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled d...
Cited by 1 (0 self)
This vignette presents the R package alphahull which implements the α-convex hull and the α-shape of a finite set of points in the plane. These geometric structures provide an informative overview of the shape and properties of the point set. Unlike the convex hull, the α-convex hull and the α-shape are able to reconstruct non-convex sets. This flexibility makes them especially useful in set estimation. Since the implementation is based on the intimate relation of these constructs with Delaunay triangulations, the R package alphahull also includes functions to compute Voronoi and Delaunay tessellations.
Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space.
The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data.
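The ν-parameterized one-class formulation described in these abstracts is available in common libraries. The following Python sketch (using scikit-learn's OneClassSVM; the data and parameter values are made-up examples, not taken from the papers) illustrates how the allowed fraction of points outside S maps onto the nu parameter:

# Minimal illustration of the one-class (novelty detection) SVM described above:
# nu upper-bounds the fraction of training points treated as lying outside S.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))    # samples from P
X_test = np.array([[0.1, -0.2], [4.0, 4.0]])               # one inlier, one outlier

clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05)         # allow ~5% outside S
clf.fit(X_train)

print(clf.predict(X_test))            # +1 = inside the estimated region S, -1 = outside
print(clf.decision_function(X_test))  # signed value of the learned function f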
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=10310030","timestamp":"2014-04-23T08:47:50Z","content_type":null,"content_length":"18693","record_id":"<urn:uuid:bd438364-50d0-4bd8-b7d0-3cd43d1dc630>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Next: Examples Up: Generalized Nonsymmetric Eigenvalue Problems Previous: Purpose | Contents | Index

A (input/output) REAL or COMPLEX square array, shape (:,:).
On entry, the matrix A. On exit, A is overwritten by its generalized Schur form.

B (input/output) REAL or COMPLEX square array, shape (:,:) with size(B,1) = size(A,1).
On entry, the matrix B. On exit, B is overwritten by its generalized Schur form.

ALPHA (output) REAL or COMPLEX array, shape (:) with size(ALPHA) = size(A,1).
The values of alpha. alpha(:) ::= ALPHAR(:), ALPHAI(:) | ALPHA(:), where ALPHAR(:) and ALPHAI(:) are of REAL type (for the real and imaginary parts) and ALPHA(:) is of COMPLEX type.

BETA (output) REAL or COMPLEX array, shape (:) with size(BETA) = size(A,1).
The values of beta.
Note: The generalized eigenvalues of the pair (A, B) are given by alpha(j)/beta(j).
Note: If A and B are real then complex eigenvalues occur in complex conjugate pairs. Each pair is stored consecutively, so a complex conjugate pair occupies two consecutive entries of ALPHA and BETA.

VSL Optional (output) REAL or COMPLEX square array, shape (:,:) with size(VSL,1) = size(A,1).
The left Schur vectors.

VSR Optional (output) REAL or COMPLEX square array, shape (:,:) with size(VSR,1) = size(A,1).
The right Schur vectors.

SELECT Optional (input) LOGICAL FUNCTION of the form
LOGICAL FUNCTION SELECT( alpha, BETA )
type(wp), INTENT(IN) :: alpha, BETA
where type ::= REAL, wp ::= KIND(1.0), alpha ::= ALPHAR.
Note: SELECT must be present if SDIM is desired.

SDIM Optional (output) INTEGER. The number of eigenvalues (after sorting) for which SELECT is .TRUE. (Complex conjugate pairs for which SELECT is .TRUE. for either eigenvalue count as 2.)

INFO Optional (output) INTEGER. If INFO is not present and an error occurs, then the program is terminated with an error message.

References: [1] and [17,9,20].

Susan Blackford 2001-08-19
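The interface above is Fortran; for a quick interactive check of the same quantities (the generalized Schur form and the ALPHA/BETA eigenvalue pair), SciPy wraps the corresponding LAPACK routines. A small Python sketch with an arbitrary 3x3 pair (the matrix values are just examples, and the 'lhp' sort criterion stands in for a user-supplied SELECT function):

# Generalized Schur (QZ) decomposition of a matrix pair (A, B) via SciPy.
import numpy as np
from scipy.linalg import ordqz

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [2.0, 0.0, 1.0]])
B = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0]])

# sort='lhp' moves eigenvalues in the left half-plane to the leading block,
# much like the SELECT/SDIM mechanism in the documentation above.
AA, BB, alpha, beta, Q, Z = ordqz(A, B, sort="lhp")

print(alpha / beta)                           # generalized eigenvalues, as ALPHA/BETA
print(np.allclose(Q @ AA @ Z.conj().T, A))    # Q * S * Z^H reconstructs A
print(np.allclose(Q @ BB @ Z.conj().T, B))    # Q * T * Z^H reconstructs B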
{"url":"http://www.netlib.org/lapack95/lug95/node303.html","timestamp":"2014-04-18T18:37:44Z","content_type":null,"content_length":"11757","record_id":"<urn:uuid:19bab51a-3b7a-4fa9-ac6b-d01396bf9277>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
System for supporting user's behavior - Patent # 7467120 - PatentGenius System for supporting user's behavior 7467120 System for supporting user's behavior (5 images) Inventor: Kashima Date Issued: December 16, 2008 Application: 11/550,846 Filed: October 19, 2006 Inventors: Kashima; Hisashi (Yamato, JP) Assignee: International Business Machines Corporation (Armonk, NY) Primary Vincent; David Assistant Kennedy; Adrian L Attorney Or Shimokaji & Associates, P.C. U.S. Class: 706/45; 706/46; 706/61 Field Of 706/45; 706/46; 706/61; 705/38; 705/36R; 705/35 International G06N 3/00; G06N 5/04 U.S Patent Other Geibel et al., Peter, "Perception and SVM Learning with Generalized Cost Models", Intelligent Data Analysis, Feb. 2004. cited by examiner. References: Elkan, "The Foundations of Cost-Sensitive Learning", Proc. Of 17th Inter. Joint Conference on Artificial Intelligence, pp. 1-6, 2001. cited by other. Abe et al., "An Iterative Method for Multi-Class Cost-Insensitive Learning", pp. 1-9. cited by other. Pedro Domingos, "MetaCost: A General Method for Making Classifiers Cost-Sensitive", pp. 155-164. cited by other. Geibel et al., "Perceptron and SVM Learning With Generalized Cost Models", Feb. 13, 2004, pp. 1-29. cited by other. Rockafellar et al., "Optimization of Conditional Value-At-Risk", Sep. 5, 1999, pp. 1-26. cited by other. Zadrozny et al., "Cost-Sensitive Learning by Cost-Proportionate Example Weighting", pp. 1-8. cited by other. Jun-Ya Goto, and Akiko Takeda, Linear decision model based on Conditional Geometric Score, Abstract collection of Spring Meeting for Reading Research Paper by the Operations Research Society of Japan, Mar. 2005. cited by other. Abstract: Provided is a system 10 which supports a user's behavior by generating a behavioral decision function indicating behavior to be adopted to a certain target. The system 10 includes: a data acquiring section 110 which acquires a cost caused as a result of adopting each of a plurality of behaviors to a target as training data for generating the behavioral decision function, the plurality of behaviors having already been adopted to the target; and a function generator 120 which generates, based on the training data, the behavioral decision function to minimize the expected shortfall of a cost to be obtained as a result of adopting the behavior to the target. Claim: What is claimed is: 1. 
A method of generating a behavioral decision function by an information processing apparatus that enters a target and a behavior to be adopted the target, and thatgenerates the behavioral decision function by quantifying a degree of proprietary to be adopted the target and computing a parameter to define the behavioral decision function to be outputted, comprising the steps of: data acquisition where theinformation processing apparatus acquires the target, behaviors, which have already been adopted with respect to the target, and a cost caused as a result of adopting the behaviors to the target as training data for generating the behavioral decisionfunction; and function generation where the information processing apparatus generates the behavioral decision function to minimize expected shortfall of the cost to be obtained as the result of adopting the behaviors to the target, based on thetraining data, wherein the step of function generation comprises the steps of: first calculation where the information processing apparatus calculates the behavioral decision function by calculating the parameter to minimize an index value, whichindicates an upper bound of an expected shortfall based upon a sum of costs exceeding a value-at-risk in the training data, and which is convex downward with respect to the parameter, and that stores a calculated behavioral decision function in a memory,in a case that a provided value is the value-at-risk of the cost; second calculation where the information processing apparatus reads the behavioral decision function calculated by a first calculator from the memory, and that calculates thevalue-at-risk of the cost caused as the result of adopting the behavior shown by the behavioral decision function, based on the training data, thus providing the result to the first calculator; and convergence judgment where the information processingapparatus judges whether or not expected shortfall based on the index value has been converged to a value within a predetermined range; and the parameter of the behavioral decision function calculated by the first calculator is outputted, on conditionthat the expected shortfall has been converged. Description: BACKGROUND OF THE INVENTION The present invention relates to a system for supporting a user's behavior, and particularly relates to a system which supports a user's behavior by generating a behavioral decision Classification learning is studied as a basic technique of data mining. An object of the classification learning is to output behavior on a certain target, which should be adopted in the future, based on information showing the result ofbehavior which was adopted to the target in the past (hereinafter, referred to as training data). If this technique is applied, according to the past events, it is possible to suggest the most statistically appropriate (e.g., the number of errors isminimized) behavior to a user to support a user's behavior. The classification learning can be applied to various technical fields as follows: (1) Diagnosis in the Medical Field Target: test result of patient Behavior: whether or not a certain treatment should be performed Training data in this example is information showing whether or not a certain treatment was successful when the treatment was performed in the past on a patient having a certain test result. According to the classification learning, it ispossible to predict the appropriateness of a treatment on a future patient based on such training data. 
(2) Credit Assessment in the Financial Field Target: credit history of applicant for loan Behavior: whether or not a loan is granted Training data in this example is information showing whether or not a bond was collectible when a loan was made in the past for an applicant having a certain credit history. According to the classification learning, it is possible to judgewhether or not to finance a certain applicant in the future based on such training data. (3) Topic Classification in a Search Engine Target: webpages of news Behavior: classification into economic, sport, and political fields Training data in this example is information showing whether or not the classification was appropriate when a certain webpage was classified into a certain field in the past. According to the classification learning, a webpage which will becreated in the future can be classified appropriately based on such training data. In general, an object of such classification learning is to accurately predict behavior to be adopted to a target. In other words, the classification learning aims to minimize the number and probability of errors in behaviors. However, minimizing the number of errors alone may not be sufficient in some problems. For example, in the case of the above example (1), there is a clear difference between a loss (hereinafter, referred to as a cost) caused as a result ofdiagnosing a healthy patient with a disease and then performing an unnecessary treatment and a cost caused as a result of leaving a sick patient alone and then leading to his/her death. Moreover, there may be a case where a cost is different accordingto the social status of a patient. Similarly, in the case of the above example (2), a cost caused as a result of refusing a loan to an excellent applicant is an interest alone. However, a cost caused as a result of granting a loan to a bad applicantmay be the entire amount of the loan. The cost is different also in this case according to the respective amount of his/her loan and degree of his/her badness. As an applicable technique in such a case where costs are different among each target and behavior and they are unknown when prediction is made, cost-sensitive learning has conventionally been proposed (please refer to: N. Abe and B. Zadrozny. An interactive method for multi-class cost-sensitive learning. In Proceedings of ACM SIGKDD Conference, 2004.; J. P. Bradford, C. Kunz, R. Kohavi, C. Brunk, and C. E. Brodley. Pruning decision trees with misclassification costs. In Proceedings of the9.sup.th European Conference on Machine Learning (ECML), 1998.; P. Domingos. MetaCost: A general method for making classifier cost sensitive. In Proceedings of the 5.sup.th International Conference on Knowledge Discovery and Data Mining, pages 155-164,1999.; C. Elkan. The foundations of cost-sensitive learning. In Proceedings of the 17.sup.th International Joint Conference on Artificial Intelligence (IJCAI), pages 973-978, 2001.; W. Fan, S. J. Stolfo, J. Zhang, and P. K. Chan. Ada-Cost:Misclassification cost sensitive boosting. In Proceedings of the 16.sup.th International Conference on Machine Learning (ICML), pages 97-105, 1999.; P. Geibel, U. Bredford, and F. Wysotzki. Perceptron and SVM learning with generalized cost models. Intelligent Data Analysis, 8(5):439-455, 2004.; B. Zadrozny and C. Elkan. Learning and making decisions when costs and probabilities are both unknown. In Proceedings of ACM SIGKDD Conference, 2001.; B. Zadrozny, J. Langford, and N. Abe. 
Cost-sensitivelearning by cost-proportionate example weighting. In Proceedings of the 3.sup.rd International Conference on Data Mining (ICDM), pages 435-442, 2003.; and Suzuki. More Advantageous Learning than Accurate Learning--Classification Learning ConsideringMisclassification Costs--(1) (2). Information Processing, 45 (4-5), 2004.). An object of the cost-sensitive learning is not to minimize the rate of behavioral errors, but to minimize the expected value of a cost. Therefore, it is possible to handleproblems in a wider range. Hereinafter, more detailed descriptions will be given of the cost-sensitive learning. Firstly, problems targeted in the cost-sensitive learning will be defined by the following (1) to (1) Cost Function A cost means an indicator which shows a loss caused as a result of behavior adopted to a certain target, for example. Assume that X is a set of targets (for example, X=R.sup.M) and that Y is a set of behaviors which can be adopted to thetargets. It should be noted that Y is assumed to be a discrete and finite set. A cost caused as a result of adopting behavior y.di-elect cons.Y on a target x.di-elect cons.X is assumed to be c(x, y).di-elect cons.R. For example, the badness of a result caused when a certain treatment y is performed on a patient having a test result x is c(x, y). If the treatment is appropriate, c(x, y) is small. If the treatment is inappropriate, c(x, y) is large. If thetreatment y is extremely inappropriate as a treatment for the patient and leads to his/her death, the cost becomes very large. Incidentally, in a problem setting (please refer to J. P. Bradford, C. Kunz, R. Kohavi, C. Brunk, and C. E. Brodley. Pruningdecision trees with misclassification costs. In Proceedings of the 9.sup.th European Conference on Machine Learning (ECML), 1998.) in the early study stage, handled is a simple case where: the cost does not depend on x directly; classes are set aslatent variables; the cost depends on the class and the behavior; and moreover, the scale of the cost is already known. Here, handled is a more common case where costs are different according to targets and a real cost function c (x, y) is unknown(please refer to B. Zadrozny and C. Elkan. Learning and making decisions when costs and probabilities are both unknown. In Proceedings of ACM SIGKDD Conference, 2001.). (2) Behavior Decision Model X is assumed to be a set of targets (for example, X=R.sup.M), and Y is assumed to be a (discrete and finite) set of behaviors which can be adopted for the targets. A function used for deciding the behavior y.di-elect cons.Y to the targetx.di-elect cons.X is assumed to be the following equation (1). (Equation 1) h(x,y;.theta.):X.times.Y.fwdarw.R Equation 1 Here, .theta. is a parameter of the model. In general, using this, behavior y' which should be adopted is alternatively decided by the following equation 2. h(x, y; .theta.) may have a probabilistic constraint as in the following equation 3. .times..times.'.di-elect cons..times..times..function..theta..times..times..times..times..di-elect cons..times..function..theta..times..times..function..theta..gtoreq..time- s..times. In other words, when the target x.di-elect cons.X is given, behavior decision on this may probabilistically be made by equation 3 instead of equation 2. 
Furthermore, it is also conceivable that the behavior decision is a resource distributiontype, that is, it is a case where the number of behaviors which can actually be adopted is not one but a diversified investment can be made in h (x, y; .theta.) in terms of the resource in accordance with the proportion thereof. However, behavior isalternatively decided by equation 2 in the embodiment of the present invention. In addition, c(x, h, (.theta.)) is assumed to be a cost caused when behavior on x is decided by using h (x, y; .theta.). In the case (1) of an alternative action, c(x, h(.theta.)) is described in the following equation 4. .times..times..function..function..theta..times..times..times..function..t- heta..di-elect cons..times..times. ##EQU00002## In a case of a diversified-investment typed action, the definition is not necessarily obvious. However, here, as a simpler case, c(x, h, (.theta.)) is assumed, as shown in equation 5, that a cost produced by each action is proportional to aninvestment amount. .times..times..function..function..theta..di-elect cons..times..function..theta..times..function..times..times. ##EQU00003## (3) Training Data A target and a cost are considered to be uniformly generated from a probability distribution D defined by X.times.R.sup.Y, and a set E of N pieces of data which have been sampled from D is assumed to be given. Here, the i-th training data of Eis assumed to be e.sup.(i)=(x.sup.(i), {c.sup.(i)(x.sup.(i), y)}y.di-elect cons.Y). x.sup.(i).di-elect cons.X is assumed to be the i-th target of the training data, and the cost c.sup.(i)(x.sup.(i), y) is assumed to be given to each action y.di-electcons.Y on the i-th target. With regard to the above problems, conventionally, used is a method whose object is to minimize the expected value of a cost in a classification problem which requires a consideration into a cost. Specifically, although .theta. is desired to bedecided in a manner of minimizing an expected cost (equation 6) with respect to the distribution D of the data, since the distribution D is actually unknown, the parameter .theta. is to be decided in a manner of minimizing an experienced expectationcost (equation 7) (please refer to N. Abe and B. Zadrozny. An interactive method for multi-class cost-sensitive learning. In Proceedings of ACM SIGKDD Conference, 2004., P. Geibel, U. Bredford, and F. Wysotzki. Perceptron and SVM learning withgeneralized cost models. Intelligent Data Analysis, 8(5):439-455, 2004., and B. Zadrozny and C. Elkan. Learning and making decisions when costs and probabilities are both unknown. In Proceedings of ACM SIGKDD Conference, 2001.) .times..times..function..theta..function..function..function..theta..times- ..times..times..times..function..theta..times..times..function..function..- theta..times..times. ##EQU00004 It should be noted that it is considered that the target and the cost are generated from the probability distribution D which is defined by X.times.R.sup.Y, independently of each other. The set E of N pieces of the data, which has been sampledfrom D, is assumed to be given as the training data. Here, the i-th training data of E is assumed to be training data e.sup.(i)=(x.sup.(i), {c.sup.(i)(x.sup.(i), y)}y.di-elect cons.Y). x.sup.(i).di-elect cons.X is assumed to be the i-th target of thetraining data and a cost c.sup.(i)(x.sup.(i), y) of when adopting each behavior y.di-elect cons.Y is assumed to be given. 
However, considering from a viewpoint of a risk management, an approach of simply minimizing an experienced expectation cost may not be sufficient. After the training, behavior is assumed to be adopted for M pieces of data. When M is large, asum of their costs comes close to MC.sup.D(.theta.). Hence, it seems that there is no problem in setting C.sup.E(.theta.) as an objective function of learning. However, since M is relatively small, the above approximation cannot hold true. Additionally, consideration is given to a case where the generation of a large amount of cost is critical. For example, in a case of a problem of deciding where to invest a fund, a fact that big mistakes occur consecutively some times is a seriousproblem which is directly connected to the risk of bankruptcy. When the probability of the occurrence is small but there is a possibility that a large amount of cost to an unacceptable degree occurs, a user should wish to avoid its risk as much aspossible. Furthermore, for example, assume that there are two decision functions h1 and h2 which can be expected to obtain the same cost expected values. Although a probability distribution of a cost brought by h1 has a high peak around the expectedvalue, a probability distribution of a cost brought by h2 has a form which has gentle slopes and whose bottom side is wide in a high cost area. In this case, even if the expectation cost is the same, it is presumed that preferred is h1 whose possibilityof the occurrence of a high cost is smaller. In such a case, it cannot be said that the object is correctly reflected by the minimization of an experienced expectation cost. Therefore, desired is a learning method in which a risk is avoided moreactively, taking the distribution of a cost into consideration. SUMMARY OF THE INVENTION Hence, an object of the present invention is to provide a system, a method, and a program, which can solve the above problems. The object is achieved by combining the features recited in independent claims in the scope of claims. Additionally,dependent claims stipulate more advantageous, specific examples of the present invention. In order to solve the above problems, in the embodiment of the present invention, provided are a system which supports the behavior of a user by generating a behavioral decision function showing behavior to be adopted to a certain target,including: a data acquiring section which acquires a cost caused as a result of adopting the behavior to the target as training data for generating a behavioral decision function, for each of a plurality of behaviors already adopted to the target; and afunction generator which generates, based on the training data, a behavioral decision function to minimize the expected shortfall of the cost obtained as a result of adopting the behavior to the target, a program causing an information processor tofunction as the system, and a method which supports the behavior of a user with the system. It should be noted that the above summary of the invention does not cite all the necessary features of the present invention, and that the subcombination of the groups of these features can be the invention. BRIEF DESCRIPTION OF THEDRAWINGS For a more complete understanding of the present invention and the advantage thereof, reference is now made to the following description taken in conjunction with the accompanying FIG. 1 shows a functional configuration of a behavior supporting system 10. FIG. 2 shows an example of a data structure of a training data DB 100. FIG. 
3 shows a functional configuration of a function generator 120. FIG. 4 shows a flowchart of processes that the behavior supporting system 10 supports a decision on a user's behavior. FIG. 5 shows the details of the processes of calculating a behavioral decision function in S410. FIG. 6 shows a result of performing a test by the behavior supporting system 10 according to an embodiment. FIG. 7 shows an example of hardware configuration of an information processor 700 which functions as the behavior supporting system 10. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT Hereinafter, descriptions will be given of the present invention through an embodiment of the invention. However, the embodiment below is not intended to limit the invention to the scope of claims, and all the combinations of features describedin the embodiment are not necessarily essential to the solving means of the invention. FIG. 1 shows a functional configuration of a behavior supporting system 10. The behavior supporting system 10 aims to support a decision on behavior to be adopted to a certain target by a user. As an example, the behavior supporting system 10aims to support a doctor to make a decision on a treatment policy for a patient having a certain test result. The behavior supporting system 10 includes a training data DB 100, a data acquiring section 110, a function generator 120, and a behavior decider 130. The training data DB 100 stores training data for causing the behavior supporting system 10 togenerate a behavioral decision function. The training data, for example, shows a cost caused as a result of adopting a certain behavior on its target for each of a plurality of behaviors already adopted to the target. The training data may be generatedbased on the history of past behaviors on a certain target, or may be generated based on the result of each type of simulations and tests. In an example of supporting a decision on a medical policy, the training data is data showing the scale of a losscaused on a relevant patient for a plurality of patients who have already received treatment. The data acquiring section 110 provides training data for the function generator 120 by acquiring the training data from the training data DB 100. The function generator 120 generates a behavioral decision function to minimize the expectedshortfall of a cost obtained as a result of adopting behavior to a certain target, based on the training data. In the example of supporting a decision on a medical policy, a behavioral decision function is a function which decides a treatment policy forpatients. In addition, the behavioral decision function is generated in a manner of minimizing the expected shortfall of a cost caused by a treatment. This function may uniquely output one treatment policy when a certain target is given, or may outputan index value showing a degree of the appropriateness of each of the plurality of treatment policies. When a target of a certain behavior is given, the behavior decider 130 decides behavior to be adopted to the target based on a behavioral decision function generated by the function generator 120, and notifies a user of the above. Accordingly,the user can know behavior to minimize an index value that conforms to an actual event, that is, expected shortfall, prior to a decision on the behavior. Consequently, the future risks can be reduced. 
In other words, specifically, a doctor can reducethe risk of leading an important patient to death due to a medical error, a financer can reduce the amount of a loan loss, and an investor can reduce the risk of bankruptcy. Moreover, the expected shortfall of a cost on a certain target is shown as a value-at-risk (hereinafter, referred to as VaR and please refer to T. C. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, Cambridge,Mass., 1990.) of a cost which can be obtained as a result of adopting behavior to its target and a function of a parameter .theta. of a behavioral decision function. Furthermore, in the embodiment, an index values which shows the upper bound ofexpected shortfall is adopted. This index value is shown as a function which is convex downward with regard to the VaR and the parameter .theta.. The function generator 120 of the embodiment firstly calculates the parameter .theta. to minimize theindex value by fixing the VaR. Secondly, the function generator 120 calculates the VaR to minimize the index value by fixing the parameter .theta.. The function generator 120 repeats these calculations until the expected shortfall converges. In thismanner, the behavior supporting system 10 of the embodiment can calculate an appropriate behavioral decision function rapidly by setting a risk indicator to be minimized as expected shortfall and by minimizing the index value whose upper bound is shownby the convex function. FIG. 2 shows an example of a data structure of the training data DB 100. The training data DB 100 stores a loss caused as a result of treating a patient, as training data. In the example of FIG. 2, a behavioral target corresponds to a testresult, behavior corresponds to a treatment policy, and a loss corresponds to a drawback caused by a treatment. According to the training data stored in the training data DB 100, a loss is 10 in a case where a treatment is performed on a certain patienthaving a test result A with a treatment policy 1, and a loss is 6 in a case where a treatment is performed on the patient with a treatment policy 2. On the other hand, a loss is 1 in a case where a treatment is performed on another patient having thesame test result A with the treatment policy 1, and a loss is 5 in a case where a treatment is performed on the patient with the treatment policy 2. Although the loss is shown by being normalized with 1 to 10, a phenomenon shown as a result of a treatment, in reality, is shown by being converted into numbers, for example. In other words, for example, 10 shows a state of death or inconformance with death, and 1 shows a state of the appearance of the adverse effects of medication. Instead of this, the loss may be something that a phenomenon appearing as a result of a treatment is replaced with a monetary value. Here, a mean loss of when a treatment is performed on the patient having the test result A with the treatment policy 1 is 5.5, and also a mean loss of when a treatment is performed with the treatment policy 2 is 5.5. In this manner, although themean losses in both cases of the treatment policies 1 and 2 are 5.5, the maximum loss of the treatment policy 1 is 10 and the maximum loss of the treatment policy 2 is 6. Since a loss equivalent to death should be avoided, it can be judged in thisexample that the treatment policy 2, which has the smaller maximum loss, should be adopted even if the mean losses are the same. It is often difficult to make such a judgment in more complicated cases. 
However, it is possible to make such a judgmentappropriately in the embodiment, by setting an indicator which should be minimized to be expected shortfall. Similarly, a loss caused as a result of treating a certain patient having a test result B with the treatment policy 1 is 8, and a loss caused as a result of treating the patient with the treatment policy 2 is 6. On the other hands, a loss causedas a result of treating another patient having the same test result B with the treatment policy 1 is 7, and a loss caused as a result of treating the patient with the treatment policy 2 is 7. As described above and shown in FIG. 2, the training data DB 100 stores a cost caused as a result of adopting a certain behavior to a certain target. The cost may be decided based on the history of the results of adopting actual behaviors in thepast to the target, or may be decided based on the result of each type of test or simulations. FIG. 3 shows a functional configuration of the function generator 120. The function generator 120 includes a first calculator 300, a third calculator 305, a second calculator 310, and a convergence judgment section 330. When a given value isset to be the VaR of a cost, the first calculator 300 calculates a behavioral decision function to minimize an index value based on the total amount of costs which exceeds the VaR, and stores the behavioral decision function in a memory. The indexvalue, for example, is a predetermined value showing the upper bound of expected shortfall, and also a behavioral decision function of when minimizing the index value is known in advance to minimize the expected shortfall. The third calculator 305 calculates a behavioral decision function to minimize the expected value of a cost based on training data. A technique to efficiently calculate the behavioral decision function to minimize the expected value of the costis called a cost-sensitive algorithm. A method of realizing this technique is, for example, described in N. Abe and B. Zadrozny. An interactive method for multi-class cost-sensitive learning. In Proceedings of ACM SIGKDD Conference, 2004., P. Geibel,U. Bredford, and F. Wysotzki. Perceptron and SVM learning with generalized cost models. Intelligent Data Analysis, 8(5):439-455, 2004. and B. Zadrozny, J. Langford, and N. Abe. Cost-sensitive learning by cost-proportionate example weighting. InProceedings of the 3.sup.rd International Conference on Data Mining (ICDM), pages 435-442, 2003., and is conventionally and publicly known. Therefore, the descriptions will be omitted. The first calculator 300 subtracts the given VaR from costs whichcorrespond respectively to behaviors included in the training data, and provides the result as new training data for the third calculator 305. Then, the first calculator 300 causes the third calculator 305 to calculate the behavioral decision functionto minimize the expected value of a cost in the new training data. The calculated behavioral decision function becomes a behavioral decision function to minimize expected shortfall in a case where the given value is set to be a VaR. The second calculator 310 reads the behavioral decision function calculated by the first calculator 300 from the memory, calculates the VaR of a cost caused as a result of adopting behavior indicated by the behavioral decision function based onthe training data, and provides the result for the first calculator 300. 
The convergence judgment section 330 judges whether or not expected shortfall calculated based on the index value minimized by the first calculator 300 and the VaR calculated bythe second calculator 310, have converged to a value within a predetermined range. The function generator 120 outputs the behavioral decision function calculated by the first calculator 300 on condition that the expected shortfall has converged. In this manner, the function generator 120 alternatively calculates behavioral decision functions and VaRs which make the expected shortfall smaller, in order to cause the expected shortfall to come close to a minimum value. Then, a behavioraldecision function at the point is outputted on condition that the expected shortfall has converged. At this point, since the upper bound of the expected shortfall is shown as a function which is convex downward with regard to the parameter .theta. anda VaR, a parameter .theta. and a VaR to minimize the expected shortfall, can be calculated with an algorithm which is greedy for .theta. and a VaR. Due to this, the expected shortfall approaches the minimum value everytime the behavioral decisionfunction is calculated. Thus, it is made possible to calculate a target behavioral decision function efficiently. Furthermore, with regard to the calculation of a behavioral decision function, it is possible to utilize an existing technique tocalculate the expected value of a cost in the third calculator 305, thus making it possible to make the process efficient by use of the accumulation of existing techniques. Hereinafter, by use of FIGS. 4 and 5, descriptions will be given of the flow of the processes of the embodiment. Prior to the descriptions of the flow of the processes, descriptions will firstly be given of the process of leading an algorithmwhich calculates a behavioral decision function to minimize expected shortfall. The algorithm generates a behavioral decision function by calculating the parameter .theta. of the behavioral decision function. In other words, the behavioral decisionfunction outputs the degree of appropriateness of the behavior by converting the degree into numbers while setting a target x, behavior y, and the parameter .theta. to be input, and the parameter to decide the nature of the behavioral decision functionis set at .theta.. By using the parameter .theta., expected shortfall is described as the following equation 8. It should be noted that, in equation 8, [x].sup.+ is assumed to be a function to return x when x is positive and 0 otherwise. .times..times..PHI..beta..function..theta..alpha..beta..function..theta..b- eta..times..function..function..function..theta..alpha..beta..function..th- eta..times..times. ##EQU00005## In this equation, the distribution D of a cost is unknown. Hence, a distribution E of the training data is used instead of the distribution D. Due to this, equation 8 can be rewritten as in equation 9. It should be noted that the target ofbehavior in the training data is denoted by x.sup.(1) to x.sup.(n) in equation 9. In addition, a behavioral decision function is denoted by a function h for deciding the behavior y to be adopted in accordance with the parameter .theta. while settingthe target x to be input. Furthermore, a cost in the training data is denoted by c(x.sup.(i), h(.theta.)). Moreover, a probability of the generation of a cost exceeding a VaR is set at a constant .beta.. 
.times..times..PHI..beta..function..theta..alpha..beta..function..theta..b- eta..times..times..times..function..function..theta..alpha..beta..function- ..theta..times..times. ## Here, .alpha..sup.E.sub..beta.(.theta.) is a VaR with respect to the distribution E of the training data, and is described by equation 10. It should be noted that a function I is assumed to be a function which adopts 1 when a condition shown byan argument is satisfied, and 0 when the condition is not satisfied. .times..times..alpha..beta..function..theta..times..alpha..di-elect cons..times..times..times..function..function..function..theta..gtoreq..a- lpha..ltoreq..beta..times..times. ## Incidentally, given that .alpha..sup.E.sub..beta.(.theta.) is an already-known constant .alpha.' in equation 9, it is sufficient if only the following equation 11 included in the second term of equation 9 is minimized. .times..times..alpha.'.function..theta..times..times..function..function..- theta..alpha.'.times..times. ##EQU00008## It should be noted that a function [x].sup.+ is a convex function which is not reduced with respect to x. Therefore, if c(x.sup.(i), h(.theta.)) is convex with respect to .theta., equation 11, too, becomes convex likewise. Descriptions willlater be given of an algorithm which minimizes equation 11. Next, with regard to the given parameter .theta., a VaR with respect to this .theta. is defined for the training data. Hence, the parameter .theta. can be described as in the following equations 12 and 13. .times..times..alpha..beta..function..theta..function..function..theta..ti- mes..times..times..times.'.times..times..times..times..function..function.- .function..theta..gtoreq..function..function..theta..ltoreq..beta..times..- times. ##EQU00009## This equals to the (1-.beta.)N-th (round down) largest one (c(x.sup.(k), h(.theta.)) among costs generated by .theta.. Therefore, it is possible to find this by the algorithm obtaining an order statistic in O (N) period of time (please refer toT. C. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, Cambridge, Mass., 1990., for example). Based on the above derivation, descriptions will hereinafter be given of the processes of calculating a behavioral decision function to minimize expected shortfall by use of FIG. 4. FIG. 4 shows a flowchart of the processes that the behavior supporting system 10 supports a decision on the behavior of a user. The data acquiring section 110 acquires training data from the training data DB 100 (S400). Based on the trainingdata, the function generator 120 performs the following processes. Firstly, when a given value is set to be the VaR of a cost, the first calculator 300 calculates, based on the training data, a behavioral decision function to minimize an index valuebased on the total amount of costs which exceed the VaR (S410). The given value is an initial value given in advance (for example, 0) in the first calculation, and a VaR given by the second calculator 310 in subsequent calculations. In details, thefirst calculator 300 calculates a parameter .theta.' of a behavioral decision function to minimize an index value C.sup.E.sub..alpha.(.theta.) which is calculated by the above equation 11 while setting the given VaR at .alpha.'. The value of theparameter .theta.' is set at a new value of .theta.. 
Next, the second calculator 310 calculates, based on the training data, a VaR of the cost caused as a result of adopting the behavior indicated by the behavioral decision function calculated by the first calculator 300 (S420). In detail, the second calculator 310 calculates α^E_β(θ), given by the above equations 12 and 13, as the VaR with respect to the calculated parameter θ. Then, the second calculator 310 provides this value as α' to the first calculator 300.

Next, the convergence judgment section 330 calculates an index value F^E_β(θ, α) showing the upper bound of the expected shortfall, based on the index value C^E_{α'} minimized by the first calculator 300 and the VaR calculated by the second calculator 310 (S430). This index value is, for example, obtained by dividing the index value C^E_{α'} minimized by the first calculator by the probability β that a cost exceeds the value-at-risk, and adding the result to the VaR. That is, the index value is given by the following equation 14:

    F^E_β(θ, α) = α + (1/β) C^E_α(θ) = α + (1/(βN)) Σ_{i=1..N} [ c(x^(i), h(θ)) - α ]^+   (14)

With regard to this equation 14, as shown in R. T. Rockafellar and S. Uryasev, "Optimization of conditional value-at-risk," Journal of Risk, 2(3):21-41, 2000, it is known that the following equation 15 is satisfied. In addition, equation 14 is convex with respect to α, and equation 7 is convex with respect to θ; therefore, equation 14 is convex with respect to θ and α. Furthermore, the following equation 16 holds true.

    min_θ Φ^E_β(θ) = min_{θ, α} F^E_β(θ, α)                                               (15)

    α^E_β(θ) ∈ argmin_α F^E_β(θ, α)                                                       (16)

As shown above, when the index value of equation 14 converges to its minimum value, the expected shortfall also converges to its minimum value. Moreover, the index value is a function of the VaR and θ, and is convex downward with respect to the VaR and θ. Therefore, the convergence judgment section 330 judges whether or not the expected shortfall has converged by judging whether or not the index value of equation 14 converges to a value within a predetermined range (S440). It should be noted that since the VaR and θ also converge to fixed values as the expected shortfall converges, the convergence judgment section 330 may instead judge the convergence of the expected shortfall by judging the convergence of the VaR, or by judging the convergence of θ.

If the expected shortfall has not converged (S440: NO), the function generator 120 returns the process to S410. In this case, the first calculator 300 recalculates the parameter θ by use of the VaR calculated in S420. On the other hand, on condition that the expected shortfall has converged (S440: YES), the function generator 120 outputs the parameter θ of the behavioral decision function calculated by the first calculator 300 (S450). The behavior decider 130 supports the user's behavior based on the behavioral decision function calculated by the above processes (S460).
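The loop S410 to S450 can be pictured with the following rough sketch. It is not part of the patent; it is a Python illustration that reuses the hypothetical helpers empirical_var and shifted_objective sketched above, and it assumes a routine minimize_shifted_cost(alpha) that returns a parameter θ minimizing equation 11 for the given α', for instance by handing the shifted costs of equation 17 (described below) to an existing example-dependent cost-sensitive learner.

    # Hypothetical outer loop corresponding to S410-S450.
    # `train_costs(theta)` is assumed to return the array of training costs
    # c(x^(i), h(theta)); `minimize_shifted_cost(alpha)` is assumed to return the
    # theta minimizing equation 11 for that alpha.
    def fit_risk_sensitive(train_costs, minimize_shifted_cost, beta,
                           tol=1e-6, max_iter=100):
        alpha = 0.0                      # initial VaR (S410, first pass)
        prev_f = float('inf')
        theta = None
        for _ in range(max_iter):
            theta = minimize_shifted_cost(alpha)                 # S410
            costs = train_costs(theta)
            alpha = empirical_var(costs, beta)                   # S420, eqs. 12-13
            f = alpha + shifted_objective(costs, alpha) / beta   # S430, eq. 14
            if abs(prev_f - f) < tol:                            # S440: converged?
                break
            prev_f = f
        return theta                                             # S450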
FIG. 5 shows the details of the process of calculating a behavioral decision function in S410. In the case where a single behavior is selected by the behavioral decision function, the cost is limited to the form [c^(i)(x^(i), y) - α']^+ + α'. Considering that the above equation 11 is the expected value of the cost in excess of α', equation 11 takes the form of the following equation 18 when the original cost is replaced by the modified cost defined in the following equation 17. Since equation 18 has the same form as equation 7, the expected shortfall can be minimized by providing training examples whose costs have been changed as in equation 17 to an existing example-dependent cost-sensitive algorithm.

    c'^(i)(x^(i), y) = [ c^(i)(x^(i), y) - α' ]^+                                          (17)

    C'^E_{α'}(θ) = (1/N) Σ_{i=1..N} c'^(i)(x^(i), h(θ))                                    (18)

Considering the above, the first calculator 300 performs the following processes. Firstly, the first calculator 300 calculates a cost c'^(i) by subtracting the given value-at-risk α' from the cost c^(i) with respect to the target x^(i) and the behavior y in the training data, clipping negative values to zero as in equation 17, and provides the result to the third calculator 305 (S500). Then, the third calculator 305 calculates the parameter θ of a behavioral decision function to minimize the cost C'^E_{α'}(θ) shown by equation 18 (S510).

The cost C'^E_{α'}(θ) shown by equation 18 is the expected value of a cost computed from the training data. Algorithms for minimizing the expected value of a cost have been studied, for example, in N. Abe and B. Zadrozny, "An iterative method for multi-class cost-sensitive learning," Proceedings of the ACM SIGKDD Conference, 2004; P. Geibel, U. Brefeld, and F. Wysotzki, "Perceptron and SVM learning with generalized cost models," Intelligent Data Analysis, 8(5):439-455, 2004; and B. Zadrozny, J. Langford, and N. Abe, "Cost-sensitive learning by cost-proportionate example weighting," Proceedings of the 3rd International Conference on Data Mining (ICDM), pages 435-442, 2003, and efficient methods for executing them are known. The first calculator 300 of the embodiment can reduce the problem of minimizing expected shortfall to the problem of minimizing the expected value of a cost by subtracting a given VaR from each cost included in the training data. Expected shortfall can thereby be minimized efficiently by using an efficient algorithm of the kind studied in these works.
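A small sketch of this reduction follows. It is hypothetical and not taken from the patent; cost_sensitive_fit stands for whatever example-dependent cost-sensitive learner is available, and is assumed to return a parameter θ minimizing the empirical expected cost of equation 18.

    # Equations 17-18 / steps S500-S510: shift the training costs by the given
    # VaR alpha', clip negative values to zero, and hand the modified examples
    # to an existing cost-sensitive learner.
    import numpy as np

    def make_shifted_cost_minimizer(examples, costs, cost_sensitive_fit):
        # `examples` holds the targets x^(i); `costs[i][y]` is c^(i)(x^(i), y).
        costs = np.asarray(costs, dtype=float)

        def minimize_shifted_cost(alpha_prime):
            shifted = np.maximum(costs - alpha_prime, 0.0)   # equation 17, step S500
            return cost_sensitive_fit(examples, shifted)     # equation 18, step S510
        return minimize_shifted_cost

The returned closure plays the role of the minimize_shifted_cost routine assumed in the loop sketched after S450 above.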
FIG. 6 shows the result of a test performed with the behavior supporting system 10 according to the embodiment. For the test, the German Credit Data set used in P. Geibel, U. Brefeld, and F. Wysotzki, "Perceptron and SVM learning with generalized cost models," Intelligent Data Analysis, 8(5):439-455, 2004, was used. The task targeted by the test is to predict the credit risk of a customer, that is, to classify the customer as a good or bad customer based on the customer's information. The customer information x consists of 20 attributes such as sex, job, the intended purpose of the funds, and past history. In this test, all the attributes attached to the data set were converted into 24 numeric attributes. There are two possible behaviors: whether or not to make a loan. If a loan that should be made is not made, a loss equivalent to the interest occurs. On the other hand, if a loan that should not be made is made, a loss equivalent to a large part of the loan amount occurs. Other conditions were in conformance with the same reference.

In the test, a model h(x, y) was used in the case of selecting a single action, and a weak hypothesis f_t(x, y) was used in the case of the diversified-investment type algorithm. In either case, a kernel-based cost-sensitive perceptron (see the Geibel et al. reference above) was used as the cost-sensitive learning machine, with a Gaussian kernel (σ = 50) as the kernel function. The reported values are the means obtained by 3-fold cross validation on the data (666 training examples and 334 test examples). The Cost-Sensitive column shows the results obtained by a conventional cost-sensitive perceptron whose objective function is the expected cost, and the Risk-Sensitive columns show the results obtained by the proposed method whose objective function is the expected shortfall (for the respective cases of β = 0.20, 0.10, 0.05, and 0.01).

Each row shows the mean, over the three folds, of the expected shortfall at each β on the test data, and the figures within parentheses show the value-at-risks in the respective cases. The Mean Cost row at the bottom of the Cost-Sensitive column shows the mean costs. As expected, it can be seen that the expected shortfall at the corresponding β was reduced in the case of risk-sensitive learning. The larger β becomes, the less dramatic the reduction in expected shortfall achieved by the risk-sensitive type becomes. This is presumably because the difference between the expected shortfall and the expected cost becomes smaller, since the cost distribution is heavily concentrated toward the left side (in the vicinity of 0). Additionally, for β = 0.20, a smaller value-at-risk was achieved by the cost-sensitive type than by the risk-sensitive type. This indicates that a small value-at-risk does not necessarily suppress the large costs themselves, which occur with small probability.

FIG. 7 shows an example of a hardware configuration of an information processor 700 functioning as the behavior supporting system 10. The information processor 700 includes: a CPU periphery having a CPU 1000, a RAM 1020, and a graphic controller 1075, which are mutually connected by a host controller 1082; an input/output section having a communication interface 1030, a hard disk drive 1040, and a CD-ROM drive 1060, which are connected to the host controller 1082 by an input/output controller 1084; and a legacy input/output section having a BIOS 1010, a flexible disk drive 1050, and an input/output chip 1070, which are connected to the input/output controller 1084.

The host controller 1082 connects the RAM 1020 to the CPU 1000 and the graphic controller 1075, which access the RAM 1020 at a high transfer rate. The CPU 1000 operates based on programs stored in the BIOS 1010 and the RAM 1020, and controls each section. The graphic controller 1075 acquires image data generated by the CPU 1000 and the like on a frame buffer provided in the RAM 1020, and displays the image data on a display device 1080.
Instead of this, the graphic controller 1075 may internally include the frame buffer which stores the image data generated by the CPU 1000 and the like.

The input/output controller 1084 connects the host controller 1082 to the communication interface 1030, the hard disk drive 1040, and the CD-ROM drive 1060, which are relatively high-speed input/output devices. The communication interface 1030 communicates with external devices via a network. The hard disk drive 1040 stores programs and data used by the information processor 700. The CD-ROM drive 1060 reads a program or data from a CD-ROM 1095 and provides it to the RAM 1020 or the hard disk drive 1040.

Furthermore, the input/output controller 1084 is connected to relatively low-speed input/output devices, namely the BIOS 1010, the flexible disk drive 1050, and the input/output chip 1070. The BIOS 1010 stores a boot program executed by the CPU 1000 upon boot-up of the information processor 700, programs dependent on the hardware of the information processor 700, and the like. The flexible disk drive 1050 reads a program or data from a flexible disk 1090 and provides it to the RAM 1020 or the hard disk drive 1040 via the input/output chip 1070. The input/output chip 1070 connects the flexible disk drive 1050 to the input/output controller 1084, and also connects various input/output devices via, for example, a parallel port, a serial port, a keyboard port, and a mouse port.

A program provided to the information processor 700 is stored in a recording medium such as the flexible disk 1090, the CD-ROM 1095, or an IC card, and is provided by a user. The program is read from the recording medium via the input/output chip 1070 and/or the input/output controller 1084, and is executed after being installed in the information processor 700. The operations performed by causing the program to work on the information processor 700 and the like are the same as those of the behavior supporting system 10 described with reference to FIGS. 1 to 6, so their descriptions are omitted. The program described above may be stored in an external recording medium. In addition to the flexible disk 1090 and the CD-ROM 1095, the recording medium can be an optical recording medium such as a DVD or a PD, a magneto-optical medium such as an MD, a tape medium, a semiconductor memory such as an IC card, or the like. Moreover, a storage device such as a hard disk or a RAM provided in a server system connected to a dedicated communications network or the Internet may be used as the recording medium, and the program may be provided to the information processor 700 via the network.

Although the present invention has been described using the embodiment, the technical scope of the present invention is not limited to the scope recited in the aforementioned embodiment. It is obvious to those skilled in the art that various modifications and improvements can be added to the aforementioned embodiment. It is clearly understood from the description of the scope of claims that embodiments obtained by adding any of such various modifications and improvements to the aforementioned embodiment are also included in the technical scope of the present invention. According to the present invention, it is possible to efficiently calculate a behavioral decision function which indicates behavior for reducing the scale of a loss.
Although the preferred embodiment of the present invention has been described in detail, it should be understood that various changes, substitutions, and alterations can be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
{"url":"http://www.patentgenius.com/patent/7467120.html","timestamp":"2014-04-17T13:33:56Z","content_type":null,"content_length":"69256","record_id":"<urn:uuid:f2f9fc88-327c-44ac-abb7-1a769327b2c9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Develop a p series that represents a series

Post #1 (April 5th 2007, 08:03 PM): Find a power series that represents 1/(1+x)^3 on the interval (-1, 1). Now, we know that 1/(1+x) = x^0 - x^1 + x^2 - x^3 + x^4 - x^5 + ..., and I originally thought that I could cube each term to get the series Sigma(n=0, infinity) [(-1)^n * x^(3n)], but I am not 100% sure of this. Any tips?

Post #2 (April 5th 2007, 08:32 PM): OK, so I felt guilty about not being able to help you, so I looked some stuff up, and I came up with this. Chances are there's a more efficient way to do it, since I'm still rusty on this stuff, but I'm pretty sure this is correct. Hope you can see the image OK. [attached image not preserved]

Post #3 (April 5th 2007, 11:13 PM, CaptainBlack, Grand Panjandrum): Of course you could just use the binomial expansion of (1+x)^{-3}.

Post #4 (April 6th 2007, 04:03 AM): How would one accomplish solving it that way, CaptainBlack?
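For what it's worth, the expansion CaptainBlack alludes to is easy to check. The following is a small sketch (not from the thread) assuming Python with SymPy is available; the closed-form coefficient (-1)^n (n+1)(n+2)/2 comes from the binomial expansion of (1+x)^{-3} (equivalently, from differentiating the geometric series twice and dividing by 2), and it differs from the "cube each term" guess in post #1.

    # Quick check of the series for 1/(1+x)^3 with SymPy (assumed available).
    import sympy as sp

    x = sp.symbols('x')
    f = 1 / (1 + x)**3

    # Taylor expansion of f around 0, up to x^7
    print(sp.series(f, x, 0, 8))
    # -> 1 - 3*x + 6*x**2 - 10*x**3 + 15*x**4 - 21*x**5 + 28*x**6 - 36*x**7 + O(x**8)

    # Compare with the closed-form coefficients (-1)^k (k+1)(k+2)/2
    coeffs = [(-1)**k * (k + 1) * (k + 2) // 2 for k in range(8)]
    print(coeffs)  # [1, -3, 6, -10, 15, -21, 28, -36]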
{"url":"http://mathhelpforum.com/calculus/13391-develop-p-series-represents-series.html","timestamp":"2014-04-17T11:35:23Z","content_type":null,"content_length":"41085","record_id":"<urn:uuid:451011a3-84ff-443a-8888-9117c68c7fc4>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
New Almaden Algebra Tutor

I have over 500 hours of experience tutoring math. Why? Because I enjoy doing it and because I am good at it. 22 Subjects: including algebra 2, algebra 1, English, reading

I have been teaching science and math for about 30 years. After retiring from the Pajaro Valley Unified Schools, where I taught secondary Chemistry, Physics, Algebra and Geometry in the Watsonville/Aptos area, I worked at UCSC to supervise the training of college graduates for secondary science teac... 13 Subjects: including algebra 1, algebra 2, physics, chemistry

I graduated from UCLA with a math degree and Pepperdine with an MBA degree. I have taught business psychology in a European university. I tutor middle school and high school math students. 11 Subjects: including algebra 2, algebra 1, calculus, statistics

...I have experience tutoring junior high, high school, and Freshman and Sophomore level college students. If you would like some more information about tutoring or to schedule a session, please email me. Let's get you back on track with your Math class! 5 Subjects: including algebra 2, algebra 1, geometry, prealgebra

...I recently moved to Sunnyvale after spending a year working for Partners In Health in rural Mexico and Guatemala, and 4 months before then working for Floating Doctors in Panama. I am bilingual in Spanish, and have six years of tutoring and teaching experience. I have done private tutoring for ... 27 Subjects: including algebra 2, algebra 1, reading, Spanish
{"url":"http://www.purplemath.com/New_Almaden_Algebra_tutors.php","timestamp":"2014-04-19T02:18:47Z","content_type":null,"content_length":"23589","record_id":"<urn:uuid:4d4a4b8f-a871-4294-a90a-da05f746ea39>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Guessing plot of two-variable function

Some things you can see right away. The function is always positive. It depends only on [tex] x^2 + y^2 [/tex] and is thus symmetric between x and y. The function is an increasing function of x and y. Basically this function is the distance from the origin to the point (x, y), and therefore its level curves are circles. In general, look for symmetry, positivity, maxima, zeros, behavior at infinity, level curves, and anything you can easily identify (I'm not saying it is always easy to just see the level curves, max/min, etc., but in this case it is). Does that help?
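As a quick visual check of the circular level curves (a minimal sketch, not part of the original post, assuming the function under discussion is the distance sqrt(x^2 + y^2) and that NumPy and Matplotlib are available):

    # Contour plot of f(x, y) = sqrt(x^2 + y^2); the level curves come out as
    # concentric circles centered at the origin.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-2, 2, 200)
    y = np.linspace(-2, 2, 200)
    X, Y = np.meshgrid(x, y)
    Z = np.sqrt(X**2 + Y**2)

    cs = plt.contour(X, Y, Z, levels=[0.5, 1.0, 1.5, 2.0])
    plt.clabel(cs, inline=True)
    plt.gca().set_aspect('equal')  # equal scaling so circles look like circles
    plt.title('Level curves of sqrt(x^2 + y^2)')
    plt.show()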
{"url":"http://www.physicsforums.com/showthread.php?t=93214","timestamp":"2014-04-19T22:56:58Z","content_type":null,"content_length":"23336","record_id":"<urn:uuid:fa162a43-db12-4f3c-abb8-308dd8793b2c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Change direction of Corona/LUA object

Hi, my object is moving right to left. How can I change this so that it moves from up to down?

    function movebadc1(self, event)
        if self.x < -50 then
            self.x = 300
            self.y = 300
            self.speed = math.random(2, 6)
            self.initY = self.y
            self.amp = math.random(20, 100)
            self.angle = math.random(1, 360)
        end
        self.x = self.x - self.speed
        self.angle = self.angle + .1
        self.y = self.amp * math.sin(self.angle) + self.initY
    end

Regards, Kevin

Tags: lua, corona, coronasdk, corona-storyboard

Comment: First of all, you're using degrees instead of radians. Try self.angle = math.random() * math.pi * 2 to get the angle in radians. Radians describe an angle just like degrees do, but instead of going from 0 to 360, they go from 0 to 2π (~6.28). The only difference is that you can use radians in your trigonometry methods. – Johnny Fuchs, Feb 11 '13 at 15:52

Answer 1 (accepted): To change the movement from right-to-left to up-to-down, supposing you also want the other behavior to remain the same, change the following three lines:

    self.x = self.x - self.speed
    self.angle = self.angle + .1
    self.y = self.amp * math.sin(self.angle) + self.initY

to

    self.y = self.y + self.speed
    self.angle = self.angle + .1
    self.x = self.amp * math.sin(self.angle) + self.initX

Answer 2: You can also use cos() instead of sin(). It will hopefully work.
{"url":"http://stackoverflow.com/questions/14815296/change-direction-of-corona-lua-object/14837141","timestamp":"2014-04-16T05:23:36Z","content_type":null,"content_length":"68219","record_id":"<urn:uuid:9406ed45-7a4e-4090-88dd-7af3cedc4570>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
ALEX Lesson Plans

Thinkfinity Lesson Plans

Subject: Mathematics
Title: Learning about Properties of Vectors and Vector Sums Using Dynamic Software
Description: In this two-lesson unit, from Illuminations, students manipulate a velocity vector to control the movement of an object in a gamelike setting. They develop an understanding that vectors are composed of both magnitude and direction, and extend their knowledge of number systems to the system of vectors.
Thinkfinity Partner: Illuminations
Grade Span: 9, 10, 11, 12

Subject: Mathematics
Title: Learning about Properties of Vectors and Vector Sums Using Dynamic Software: Sums of Vectors and Their Properties
Description: This is part two of a two-part e-example from Illuminations that illustrates how using a dynamic geometrical representation can help students develop an understanding of vectors and their properties. In this part, Sums of Vectors and Their Properties, students extend their knowledge of number systems to the system of vectors. e-Math Investigations are selected e-examples from the electronic version of the Principles and Standards for School Mathematics (PSSM). Given their interactive nature and focused discussion tied to the PSSM document, the e-examples are natural companions to the i-Math Investigations.
Thinkfinity Partner: Illuminations
Grade Span: 9, 10, 11, 12

Subject: Mathematics
Title: Components of a Vector
Description: In this lesson, one of a multi-part unit from Illuminations, students manipulate a velocity vector to control the movement of an object in a gamelike setting. In the process, they develop an understanding that vectors are composed of both magnitude and direction.
Thinkfinity Partner: Illuminations
Grade Span: 9, 10, 11, 12

Subject: Mathematics, Science
Title: Learning about Properties of Vectors and Vector Sums Using Dynamic Software: Components of a Vector
Description: This e-example from Illuminations illustrates how using a dynamic geometrical representation can help students develop an understanding of vectors and their properties. Students manipulate a velocity vector to control the movement of an object in a gamelike setting. In this part, Components of a Vector, students will develop an understanding that vectors are composed of both magnitude and direction. In the second part, Sums of Vectors and Their Properties, students extend their knowledge of number systems to the system of vectors. e-Math Investigations are selected e-examples from the electronic version of the Principles and Standards for School Mathematics (PSSM). The e-examples are part of the electronic version of the PSSM document. Given their interactive nature and focused discussion tied to the PSSM document, the e-examples are natural companions to the i-Math Investigations.
Thinkfinity Partner: Illuminations
Grade Span: 9, 10, 11, 12
{"url":"http://alex.state.al.us/plans2.php?std_id=54604","timestamp":"2014-04-20T05:52:13Z","content_type":null,"content_length":"25198","record_id":"<urn:uuid:41494466-79c0-41d6-81cc-939f7eb5186d>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
glBegin [Archive] - OpenGL Discussion and Help Forums

I'm making a little 3D engine and I got to the point where I want to make it faster. I found out that when I use OpenGL it seems to use the z-buffer even for polygons that are completely behind the camera (at my back), so it processes those polygons as well. I thought about testing the polygons by creating a plane whose normal equals the target (view) direction of the camera, passing through a point at the position of the camera. Using the equation Ax + By + Cz + D = 0 and calculating A, B, C, D, I can then apply this equation to the faces in the world: if all the points of a face are behind the camera, the equation gives a value less than 0. But it stayed slow and had some problems, so I decided to multiply the points by matrices myself before rendering and then test all the faces: if all the points of a face had a z component greater than 0, it meant that the face was behind the camera. Now the engine feels faster.

The question is: since I'm multiplying the points myself and not using the OpenGL transformation matrix, but I use glBegin(GL_TRIANGLES) to draw the faces of the world, does OpenGL still multiply the points that it draws by the transformation matrix? I use glLoadIdentity() before drawing; since I rotated all the points myself I don't need calls like glRotated(), etc. But even with the identity matrix loaded as the OpenGL transformation matrix, does OpenGL still multiply this transformation matrix by each point it draws? And if it does, is there a way of switching that off so that it only draws a point and ignores the transformation matrix? Thank you.
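For reference, the plane test described in the post can be written down directly. The following is a small illustrative sketch, not taken from the thread; it is in Python rather than the poster's engine code, the function names are made up, and it assumes a convention in which the camera's forward vector points along the view direction.

    # Camera-plane culling test: the plane passes through the camera position and
    # has the camera's forward (view) direction as its normal.  A face whose
    # vertices all give a negative signed distance lies entirely behind the
    # camera and can be skipped before it is sent to the renderer.
    def plane_from_camera(cam_pos, cam_forward):
        # Plane Ax + By + Cz + D = 0 with (A, B, C) = forward and the camera on the plane.
        a, b, c = cam_forward
        d = -(a * cam_pos[0] + b * cam_pos[1] + c * cam_pos[2])
        return a, b, c, d

    def face_behind_camera(face_vertices, plane):
        a, b, c, d = plane
        # Signed distance of each vertex; all negative means the whole face is behind.
        return all(a * x + b * y + c * z + d < 0 for (x, y, z) in face_vertices)

    # Example: camera at the origin looking down -z; a triangle at z = +5 is behind it.
    plane = plane_from_camera((0.0, 0.0, 0.0), (0.0, 0.0, -1.0))
    tri = [(1.0, 0.0, 5.0), (0.0, 1.0, 5.0), (-1.0, 0.0, 5.0)]
    print(face_behind_camera(tri, plane))  # True -> cull

Whether the fixed-function pipeline still multiplies each vertex by the (identity) modelview matrix, which is the poster's actual question, is a separate issue that this sketch does not address.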
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-124824.html","timestamp":"2014-04-17T03:55:27Z","content_type":null,"content_length":"5372","record_id":"<urn:uuid:49aaeb8d-04fb-4b48-8de4-c89a790eac5e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Only prime superpositions need be considered for the Knuth-Bendix procedure

Results 1 - 10 of 14

Journal of Automated Reasoning, 1997. Cited by 130 (3 self). In this article we show that the three equations known as commutativity, associativity, and the Robbins equation are a basis for the variety of Boolean algebras. The problem was posed by Herbert Robbins in the 1930s. The proof was found automatically by EQP, a theorem-proving program for equational logic. We present the proof and the search strategies that enabled the program to find the proof. Key words: Associative-commutative unification, Boolean algebra, EQP, equational logic, paramodulation, Robbins algebra, Robbins problem. 1. Introduction. This article contains the answer to the Robbins question of whether all Robbins algebras are Boolean. The answer is yes, all Robbins algebras are Boolean. The proof that answers the question was found by EQP, an automated theorem-proving program for equational logic. In 1933, E. V. Huntington presented the following three equations as a basis for Boolean algebra [6, 5]: x + y = y + x (commutativity), (x + y) + z = x + (y + z) (associativit...

Journal of the ACM, 1992. Cited by 30 (11 self). We describe the application of proof orderings --- a technique for reasoning about inference systems --- to various rewrite-based theorem-proving methods, including refinements of the standard Knuth-Bendix completion procedure based on critical pair criteria; Huet's procedure for rewriting modulo a congruence; ordered completion (a refutationally complete extension of standard completion); and a proof by consistency procedure for proving inductive theorems. This is a substantially revised version of the paper "Orderings for equational proofs," co-authored with J. Hsiang and presented at the Symp. on Logic in Computer Science (Boston, Massachusetts, June 1986). It includes material from the paper "Proof by consistency in equational theories," by the first author, presented at the Third Annual Symp. on Logic in Computer Science (Edinburgh, Scotland, July 1988). This research was supported in part by the National Science Foundation under grants CCR-89-01322, CCR-90-07195, and CCR-90-24271.

1996. Cited by 24 (5 self). Introduction. Many researchers who study the theoretical aspects of inference systems believe that if inference rule A is complete and more restrictive than inference rule B, then the use of A will lead more quickly to proofs than will the use of B. The literature contains statements of the sort "our rule is complete and it heavily prunes the search space; therefore it is efficient". These positions are highly questionable and indicate that the authors have little or no experience with the practical use of automated inference systems. Restrictive rules (1) can block short, easy-to-find proofs, (2) can block proofs involving simple clauses, the type of clause on which many practical searches focus, (3) can require weakening of redundancy control such as subsumption and demodulation, and (4) can require the use of complex checks in deciding whether such rules should be applied. The only way to determ...

In Proc. 7th RTA, LNCS 1103, 1996. Cited by 20 (0 self). We present a new approach for proving termination of rewrite systems by innermost termination. From the resulting abstract criterion we derive concrete conditions, based on critical peak properties, under which innermost termination implies termination (and confluence). Finally, we show how to apply the main results for providing new sufficient conditions for the modularity of termination.

Journal of Symbolic Computation, 1991. Cited by 16 (3 self). The inductionless induction (also called proof by consistency) approach for proving equations by induction from an equational theory requires a consistency check for equational theories. A new method using test sets for checking consistency of an equational theory is proposed. Using this method, a variation of the Knuth-Bendix completion procedure can be used for automatically proving equations by induction. The method does not suffer from the limitations imposed by the methods proposed by Musser as well as by Huet and Hullot, and is as powerful as Jouannaud and Kounalis' method based on ground-reducibility. A theoretical comparison of the test set method with Jouannaud and Kounalis' method is given, showing that the test set method is generally much better. Both methods have been implemented in RRL, Rewrite Rule Laboratory, a theorem proving environment based on rewriting techniques and completion. In practice also, the test set method is faster than Jouannaud and Kounalis' ...

1996. Cited by 14 (0 self). In this report we give an overview of the development of our new Waldmeister prover for equational theories. We elaborate a systematic stepwise design process, starting with the inference system for unfailing Knuth-Bendix completion and ending up with an implementation which avoids the main diseases today's provers suffer from: overindulgence in time and space. Our design process is based on a logical three-level system model consisting of basic operations for inference step execution, aggregated inference machine, and overall control strategy. Careful analysis of the inference system for unfailing completion has revealed the crucial points responsible for time and space consumption. For the low level of our model, we introduce specialized data structures and algorithms speeding up the running system and cutting it down in size --- both by one order of magnitude compared with standard techniques. Flexible control of the mid-level aggregation inside the resulting prover is made po...

In Proceedings of the 21st International Colloquium on Trees in Algebra and Programming (CAAP'96), 1996. Cited by 10 (3 self). We present a new criterion for confluence of (possibly) non-terminating left-linear term rewriting systems. The criterion is based on certain strong joinability properties of parallel critical pairs. We show how this criterion relates to other well-known results, consider some special cases and discuss some possible extensions. 1 Introduction and Overview. Computation formalisms which are based on rewriting systems heavily rely on the fundamental properties of termination and confluence. For terminating and confluent systems normal forms exist and are unique, irrespective of the computation (rewriting) strategy. For non-terminating but confluent systems, normal forms need not exist; however, if a normal form exists, it is still unique. More generally, any (possibly infinite) diverging computations can be joined again. In some cases, non-termination is inherently unavoidable; in other cases it may be very difficult to verify this property. Hence the problem of proving confluence (with o...

Proc. of PASCO-97, 1997. Cited by 6 (2 self). We introduce the distributed theorem prover Peers-mcd for networks of workstations. Peers-mcd is the parallelization of the Argonne prover EQP, according to our Clause-Diffusion methodology for distributed deduction. The new features of Peers-mcd include the AGO (Ancestor-Graph Oriented) heuristic criteria for subdividing the search space among parallel processes. We report the performance of Peers-mcd on several experiments, including problems which require days of sequential computation. In these experiments Peers-mcd achieves considerable, sometimes super-linear, speed-up over EQP. We analyze these results by examining several statistics produced by the provers. The analysis shows that the AGO criteria partition the search space effectively, enabling Peers-mcd to achieve super-linear speed-up by parallel search. 1 Introduction. Distributed deduction is concerned with the problem of proving difficult theorems by distributing the work among networked computers. The motivation is to st...

In Proceedings of the 7th International Conference on Rewriting Techniques and Applications, 1996. Cited by 5 (0 self). The inefficiency of AC-completion is mainly due to the doubly exponential number of AC-unifiers and thereby of critical pairs generated. We present AC-complete E-unification, a new technique whose goal is to reduce the number of AC-critical pairs inferred by performing unification in an extension E of AC (e.g. ACU, Abelian groups, Boolean rings, ...) in the process of normalized completion [21]. The idea is to represent complete sets of AC-unifiers by (smaller) sets of E-unifiers. Not only do the theories E used for unification have exponentially fewer most general unifiers than AC, but one can remove from a complete set of E-unifiers those solutions which have no E-instance which is an AC-unifier. First, we define AC-complete E-unification and describe its fundamental properties. We show how AC-complete E-unification can be done in the elementary case, and how the known combination techniques for unification algorithms can be reused for our purposes. Finally, we give some evidence of t...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2193257","timestamp":"2014-04-24T13:52:29Z","content_type":null,"content_length":"39976","record_id":"<urn:uuid:0f1389e0-795a-4683-ab96-c9a653766d3b>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00068-ip-10-147-4-33.ec2.internal.warc.gz"}