Queen's students secure place on US scholarship programme Twenty-six students from Queen's University Belfast are off to study in the USA this month. They are among 54 students from across Northern Ireland selected to take part in the British Council's prestigious Study USA programme and will spend the next academic year studying business or STEM (Science, Technology, Engineering and Mathematics) related subjects in American colleges across 34 states, with the aim of developing their career prospects when they return to Northern Ireland. Lucy Bill, a Law student at Queen's, will spend her year at Mary Baldwin University in Virginia. She says she is excited to be part of the programme: "For me, Study USA is a once-in-a-lifetime chance to study in a traditional US college, while pursuing my passion for sports and embracing a totally different culture. "I'm excited to delve into my business classes and I also hope to undertake an internship during my time away. When I return, my plan is to hibernate in the library during final year and then focus on my life-long goal of becoming a barrister, specialising in either commercial law or clinical negligence." Every year, over 800 Queen's students take the opportunity to go outside Northern Ireland to study or gain work-related experience. Queen's works with a range of organisations and universities across the world to provide global opportunities for its students. The British Council, which is the UK's international organisation for educational opportunities and cultural relations, manages Study USA on behalf of the Department for the Economy – and since its formation in 1994, the initiative has sent over 2,000 students from Northern Ireland to all four corners of the United States.
Also speaking about the programme was Jonathan Stewart, Director, British Council Northern Ireland, who said: "I am confident that the students departing for the US this year will be great ambassadors for Northern Ireland and will help to further important long-term links and connections between the two countries. "Through Study USA, students will have the opportunity to not only enhance their employability skills but also develop intercultural skills, which will help them to prepare to work in a global economy. "We wish them every success, and trust that new connections and friendships will be developed in the year ahead." For further information about global opportunities available at Queen's, visit https://www.qub.ac.uk/Study/Undergraduate/Global-opportunities/. For media inquiries, please contact the Communications Office: 028 9097 3091, comms.office@qub.ac.uk.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,163
Get cheap rental car deals in 06095 Windsor, CT for your next business trip from Hertz Rent a Car. Where is Hertz in 06095 Windsor? The Hertz Rent a Car Alstom office is located at 200 Great Pond Drive, Windsor, CT 06095 USA. The office is located in the city, making it easy for you to get to your nearby destination. The map below shows you where the Hertz office is, as well as all the other Hertz Rent a Car locations around the Windsor area. Rent a car with Hertz Rent a Car at 200 Great Pond Drive in Windsor! The Hertz office is conveniently located in the city. The major regional airport this office is nearest to is Bradley Intl. (BDL), which is also its primary airport, though the primary airport is not always the same as the regional one. Whether you're coming for a business meeting or convention in Windsor, or just arriving on a family holiday, we can help you get Hertz Rent a Car 06095 car rentals in a wide variety of car classes at just the right price. Get an even better deal on a car from Hertz by booking your flight, hotel and rental car together with Travelocity! If you are not returning your car rental to 200 Great Pond Drive, you can choose from other Hertz drop-off locations in the Windsor area. Finding the best Hertz Rent a Car office in Windsor to rent from doesn't have to be a shot in the dark. Here's what others have been saying in Yelp reviews about Hertz rental car offices in the Windsor area. Looking for more nearby Hertz locations in 06095 Hartford?
{ "redpajama_set_name": "RedPajamaC4" }
4,613
First Tiny Forest in Germany! Germany's first Tiny Forest was realized on a private property in the Uckermark 🌳🌲 The idea behind the "Forest of Diversity" project: This rapidly growing microhabitat is intended to make a contribution to climate protection that is also easy for others to imitate. The project is being implemented by Stefan Scharfe (Forestry System Transformation M.Sc.) and Lukas Steingässer (International Forest Ecosystem Management B.Sc.)... You can find reports on the project at FRITZ and [w] wie Wissen (ARD). More information about the project! Finizio at the German Bundestag Here you can find an interview with the managing director of Finizio, Florian Augustin. IFEM graduate Florian Augustin and his team from Finizio were guests at the kick-off meeting in the German Bundestag to give a lecture on "sustainable food production in closed material cycles". More precisely, it was about their "recycling demonstration plant" in Eberswalde. Within the scope of their project Finizio, Florian and his team design comfortable and future-oriented sanitary systems by refining human excrement into fertile soil - hygienic, odourless and efficient. Speakers from the Federal Ministry of Food & Agriculture (BMEL), the Federal Ministry for the Environment (BMU) as well as experts from science & research were present at the technical discussion. Further information about the project can be found HERE A current report on the project can also be seen on ZDF Image: Finizio German Forest Award for IFEM graduate Milan Hänsel (wald-wird-mobil.de gGmbH) For the first time this year, the special prize "Sustainability Forest" was awarded in the context of the German Forest Award (Deutscher Waldpreis). The award went to wald-wird-mobil.de gGmbH from Berlin. Digitalisation represents a particular challenge in forest management.
The wald-wird-mobil.de gGmbH, with employee Milan Hänsel (IFEM graduate), develops solutions for a forestry 4.0 that meets the requirements of family forestry enterprises and forest enterprise communities in particular. wald-wird-mobil.de thus provides an innovative contribution to the achievement of our common goal of an efficient, sustainable and multifunctional forestry. This commitment convinced the jury. Image: C. Mühlhausen via forstpraxis.de Visit to the University of Quindío During his tour visiting selected universities with a strong focus on environment and sustainable development, Christoph Nowicki was invited by the University of Quindío, Colombia (from August 7th to August 13th 2019). First discussions on cooperation options for the development of a new study programme on forest management and conservation started in 2018, during a visit of a Colombian delegation to Eberswalde, thanks to the initiative of Norbert Pudzich, German Honorary Consul in Armenia. Now, first concrete steps towards a living cooperation have been intensively discussed and defined with the heads of faculties, the heads of study programmes and the Rector, who strongly supports the cooperation. In 2020, the University of Quindío will offer internships (in cooperation with partner institutions) for IFEM students, but would also like to cooperate with the study programmes on Global Change Management and Forestry System Transformation. From left to right: C. Nowicki, Carolina Bolaños Rodríguez (Head of International Office at Ikiam), Jesús Ramos Martín (Principal at Ikiam) Invitation to "Universidad Regional Amazónica – Ikiam" On August 5th 2019, Christoph Nowicki (EUSD) was invited by the "Universidad Regional Amazónica – Ikiam", Ecuador, to discuss possibilities for cooperation in student exchange between our universities together with the heads of the university and of the most suitable study programmes.
Special emphasis was given to the cooperation with the undergraduate programme "International Forest Ecosystem Management" (B.Sc.). Ikiam has a strong focus on sustainable development in the Amazon region, implements projects in close collaboration with the local and indigenous population, and showed a great interest in EUSD and a close future cooperation. Images: C. Nowicki Cooperation talks at Western Colorado University On July 15th-17th 2019, the dean of the 'School of Environment and Sustainability', Dr. John Hausdoerffer (middle), from 'Western Colorado University' in Gunnison (Colorado), invited his faculty members and Christoph Nowicki (EUSD) to intensively discuss the future cooperation and student exchange between our two partner universities. Special emphasis was given to the cooperation with the undergraduate programme 'International Forest Ecosystem Management' (B.Sc.) and the graduate programme 'Global Change Management' (M.Sc.). Image: C. Nowicki EUSD to become member of the 'Resilience Studies Consortium' (RSC) Eberswalde University for Sustainable Development (EUSD) has been invited to become a member of the 'Resilience Studies Consortium' (RSC), a network of eleven North American universities interested in applied research on resilience and sustainability topics. For the first time, the RSC is reaching out for international partners and has invited EUSD to become the first European partner and the 'Environmental University' (UMA - Mexico) the first Latin American partner. Christoph Nowicki participated in the RSC annual retreat in Tacoma, Washington, at 'The Evergreen State College' (11.07.-13.07.2019) as a representative of EUSD and presented how resilience topics are being integrated into teaching and research at EUSD. Visit from the Netherlands Prof. Peter van der Meer (centre), from Van Hall Larenstein - University of Applied Sciences, Netherlands, surrounded by his colleagues from Eberswalde and 6th-semester IFEM students participating in the module on "Forest landscape restoration", coordinated by Prof. Peter Spathelf. Peter van der Meer is a regular guest lecturer in this module, sharing essential international experiences from his research at Van Hall Larenstein and from projects all over the world. Professional field presentation Last Wednesday, the annual professional field presentation took place at the Department of Forest and Environment, where alumni of the Forestry and International Forest Ecosystem Management study programmes gave an insight into their working lives. Becoming Partner Universities During the Sustainability Days, Dr. John Hausdoerffer (Dean of the School of Environment & Sustainability, Western Colorado University) will give a lecture on the topic »Deep Sustainability: Humans as a Keystone Species«. Prior to the lecture, Dr. John Hausdoerffer, for Western Colorado University, and Prof. Dr. Wilhelm-Günther Vahrson, for HNEE, will sign a Memorandum of Understanding to seal the new partnership between the US university and HNEE. Image: Madeleine Scharnweber From February 27th to March 1st, a delegation from the European Wilderness Society, partner of International Forest Ecosystem Management, and the Western Colorado University in Gunnison, USA, started to discuss a new cooperation between the two universities, with a special focus on International Forest Ecosystem Management (B.Sc.), Global Change Management (M.Sc.) and joint research projects related to the Centre of Econics and Ecosystem Management. From left to right: Prof. Pierre Ibisch, Vlado Vancura, Christoph Nowicki, Anni Henning, Prof. Jonathan Coop, Max Rossberg, Jeanette Blumroeder, Prof. John Hausdoerffer (photo: C. Nowicki) Guest lecture in Sweden On November 30th Prof. Spathelf gave a lecture on 'Continuous-Cover Forestry in Germany' at the Swedish University of Agricultural Sciences (SLU).
Close-to-nature forest management, or permanent forest management, is also a much-discussed forest management approach in Sweden. Dean's Day at the HNE Eberswalde On 13 November 2018, an exchange of information took place at the HNE Eberswalde between representatives of the forestry faculties and departments in Germany (the so-called Dean's Day). The discussions focused on questions of education, the development of student numbers, digitisation in research and teaching, as well as internationalisation. The aim is to hold the meeting every two years. New IFEM Partner: European Wilderness Society Prof. Dr. Wilhelm-Günther Vahrson, President of Eberswalde University for Sustainable Development, and Max A E Rossberg (left), Chairman of the European Wilderness Society, signed a comprehensive Memorandum of Understanding fostering joint activities for the IFEM study programme. 20 Years Anniversary Symposium of the Bachelor Study Programme International Forest Ecosystem Management With over 150 guests, the IFEM programme celebrated its 20th anniversary on 28 September 2018. In addition to the IFEM partners (Dr. Elke Mannigel (OroVerde), Simon Mader (Querdenker / FuturoVerde) and Dr. Michael Haubold-Rosar (Forschungsinstitut für Bergbaufolgelandschaften e.V. Finsterwalde)) and international guest speakers (Dr. Peter van der Meer (Van Hall Larenstein University of Applied Sciences, The Netherlands) as well as Max Rossberg (European Wilderness Society, Austria)), many alumni were guests. After the festive programme, with exciting lectures and humorous memories of the early days of the study programme, the alumni and IFEM students used the opportunity for exchange and cheerful togetherness, with live music and grilled game sausages or cheese until late in the evening.
Image: Jonas Sitte During a visit of a delegation from the "Universidad del Quindío" (Colombia) to the Department of Forests and Environment on 19th March 2018, the common aim of cooperation was confirmed by the signing of a cooperation agreement. In particular, a student exchange and the possibility of joint project work (practical semesters abroad) are to be initiated for the IFEM programme. From left to right: Norbert Pudzich, Luis Fernando Polanía Obando, Christoph Nowicki, Wilhelm-Günther Vahrson, José Fernando Echeverry Murillo, Pierre Ibisch, Peter Spathelf, Gustavo Botero Echeverry. Image: HNEE Brown-bag talk at ICIMOD in Kathmandu On 2 November 2017, Professor Spathelf held a brown-bag talk at ICIMOD in Kathmandu on the topic 'Climate change adaptation of forests in Brandenburg'. ICIMOD is a knowledge centre supported by eight countries of the Himalaya region for the integrated sustainable development of mountain regions. Image: ICIMOD Scientific Evening Lecture at the German Physical Society Berlin On 23 May 2017, Professor Spathelf gave a lecture on 'Forests, development and climate change' at the Magnus House of the German Physical Society (DPG) in Berlin. Forests are central natural resources and cover a considerable part of the earth's surface. Important current processes of global forest area development and the importance of forests for sustainable development were discussed, as were the role of forests in climate change and the importance of sustainable forest management. The discussion was chaired by Prof. Dr. Wolfgang Eberhardt, Scientific Director of the Magnus-Haus Berlin. Bonn Climate Change Conference 2017 Steffen Dehn, a student of "International Forest Ecosystem Management", participated as a delegate of the "International Forestry Students Association (IFSA)" in May 2017 at the "Bonn Climate Change Conference" of the United Nations. Here is a link to a short video on Facebook in which he looks back on an eventful week.
Professional field presentation 2017 On Wednesday, 25 April 2017, students of the International Forest Ecosystem Management (IFEM) study programme had the opportunity to attend a presentation on the "old problem" of the programme's not clearly defined occupational field. Former IFEM students shared their experiences in education and professional life with the current students and gave their view on the possibilities and potentials of the study programme - there is a life after IFEM! Visit from Canada Prof. Sandy Smith (pictured 2nd from right) from the University of Toronto visited the Department of Forest and Environment on September 12, 2016. Interesting cooperation possibilities in the field of student exchange, the development of joint courses and research were explored. A possible technical focus of the future cooperation is seen in the area of 'urban forestry'. Image: HNEE Dr. Peter van der Meer and Rosa Diemont (first and second in the upper right row of the picture) from Van Hall Larenstein (VHL) in the Netherlands made important contributions to the block 'Forest Landscape Restoration', with lectures on tropical forest rehabilitation and community forestry in Suriname. This block was the last joint event for most IFEM students in 2013. Diversification of pine reforestations in Spain Professor José A. Reque, Professor of Forestry at the University of Valladolid in Palencia (Spain), gave an impressive lecture on the diversification of pine reforestation in central Spain. He used state-of-the-art interactive learning techniques and 'flipped classroom' approaches to draw students in and encourage them to bring their own ideas for improving the management of these pine forests into the discussion.
Visit to Ukraine Professor Spathelf visited Professor Vasyl Lavnyy from the National Forestry University of Ukraine (UNFU) in Lviv (Lemberg) within the framework of the project 'Civil Society Engagement for Sustainable Forest Management - Supporting Democracy through the Promotion of Interdisciplinary Discourse and Transparency in Promoting Forms of Cooperation' (head: Prof. Ibisch). Joint publications on pine forestry in Ukraine and on near-natural silviculture are in preparation. Professor Vasyl Lavnyy visits the faculty In February 2016, Professor Vasyl Lavnyy (right) from the forestry university in Lviv was a guest at the department. In the context of the project "Civil society engagement for sustainable forest management - support of democracy through the promotion of interdisciplinary discourse and transparency in promoting forms of cooperation", he deals with sustainable near-natural forest management in Germany and how this could be transferred to management approaches in Ukraine, especially in pine forestry. Of particular interest are the possibilities for stakeholders to influence the development of management plans. Professor Dr. Spathelf in Larenstein From 5 to 8 January 2016, Prof. Spathelf visited the Larenstein University of Applied Sciences in the Netherlands as part of the ERASMUS exchange of lecturers and discussed possibilities of student exchange and scientific cooperation. Talks and excursions in Serbia From 30 November to 4 December 2015, Prof. Spathelf visited the Institute for Lowland Forestry and Environment (ILFE) of the University of Novi Sad in Serbia. The aim of the stay was to discuss an application for funding for a research network (MOEL-SOEL call of the BMBF). On an excursion to the Fruška Gora National Park, a variety of mixed deciduous forests (mainly oak, lime and hornbeam) could be admired. Images: HNEE As a guest in Eberswalde Image: Prof. Dr. Peter Spathelf Eberswalde, 25.09.2015.
Last week, Professor Ayan from Kastamonu University in the Turkish provincial capital of the same name (right) and Professor Reque from the University of Valladolid were guests at the Department of Forests and Environment and in the Forest Botanical Gardens as part of Erasmus mobility. During an excursion in the Eberswalde and Chorin area, current silvicultural issues were discussed and joint project ideas were concretised. Prof. Ted Wilson as guest at the faculty From 30 June to 10 July 2015, the forester Prof. Ted Wilson (University of Toronto) was a guest of Prof. Spathelf at the Department of Forests and Environment. In a very entertaining and interesting guest lecture, he discussed the possibilities and limits of Continuous Cover Forestry in Great Britain. In addition, possibilities for further cooperation in research and teaching were discussed.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,981
How regulation affects bank competition Riccardo De Bonis, Giuseppe Marinelli, Francesco Vercelli 16 April 2018 There is no consensus on how to measure competition in the banking system, though the 'Boone indicator' of profit elasticity with respect to marginal costs has recently provided reliable results. This column uses a dataset of 125 years of bank balance sheets to calculate this indicator for the Italian banking system. It shows that regulatory changes have driven bank competition, an insight that is supported by other indicators. Recently, VoxEU has published two related articles on bank competition. Marques-Ibanez and van Leuvensteijn (2017) found that, when competition increased, banks with higher levels of securitisation became riskier. van Leuvensteijn et al. (2016) showed that in an environment with binding interest rate ceilings and floors, like the Chinese banking sector, traditional measures of competition may be biased, while the 'Boone indicator', a new measure of competition, provided reliable estimates. In the same line of research, in a recent paper we study the impact of changing regulatory regimes on banking competition in Italy (De Bonis et al. 2018). We use a unique dataset of 125 years of bank balance sheets, from 1890 to 2014. We examine three regimes: The 'free-banking era': This characterised Italy from 1890 through the 1920s. A restrictive regulatory regime: This was introduced in the 1930s as a reaction to the Great Depression and remained mostly unchanged until the 1970s. Bank deregulation and liberalisation: This period started in the 1980s.
(On the evolution of the Italian banking system, see Guiso et al. 2006, Gigliobianco and Giordano 2012, and Toniolo and White 2015.) Measuring bank competition Competition plays a major role in economic theory, but there is no consensus on how to measure it. Like Marques-Ibanez et al. (2014) and Xu et al. (2013), we primarily use profit elasticity with respect to marginal costs, the indicator that Boone et al. (2013) proposed. The rationale behind the Boone indicator is that, when competition increases, less efficient firms (for which costs are higher) are more harshly punished – in terms of lost profits and market shares – than efficient ones. The profit elasticity can be estimated by regressing profits on marginal costs. This coefficient should be negative, and the higher the elasticity (in absolute value), the stronger the competition. Given the restrictions of the long-run horizon of our research, we approximate marginal costs by average costs. We also compute the elasticity of market shares with respect to average costs (a similar indicator of competition proposed by van Leuvensteijn et al. 2011), long-term estimates of the Herfindahl index, the price-to-cost-margin, and bank turnover rate. Figure 1 shows our estimates of bank competition using the Boone indicator from 1890 to 2014. The dashed line is the estimated elasticity, with respect to costs, of either net profits (Panel A) or market shares (Panel B). The dotted lines indicate the extremes of the confidence interval at the 95% level. The solid red line is a smoothed Boone indicator, to yield a more intuitive representation of trends in competition. As expected, the Boone indicator is negative in most years. The higher the elasticity in absolute value, the greater the competition. Figure 1 Competition in Italian banking, 1890-2014, using the Boone indicator Source: Bank of Italy statistics, De Bonis et al. 
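The regression behind the Boone indicator can be sketched in a few lines. Below is a minimal illustration with synthetic data; the function name and all numbers are hypothetical, and a real study such as the one described here would use bank-level panel data with marginal (or average) costs rather than a simple cross-section:

```python
import numpy as np

def boone_indicator(profits, costs):
    """Estimate the Boone indicator: the elasticity of profits with
    respect to (marginal or average) costs, via OLS of log(profit)
    on log(cost). A more negative slope signals stronger competition,
    since inefficient (high-cost) firms lose profits faster."""
    x = np.log(costs)
    y = np.log(profits)
    X = np.column_stack([np.ones_like(x), x])   # intercept + log cost
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                              # slope = Boone indicator

# Hypothetical synthetic cross-section: profits fall with costs,
# with true elasticity -1.5 plus multiplicative noise.
rng = np.random.default_rng(0)
costs = rng.uniform(0.5, 2.0, size=200)
profits = 10.0 * costs ** -1.5 * rng.lognormal(0.0, 0.1, size=200)

print(f"Boone indicator: {boone_indicator(profits, costs):.2f}")  # close to -1.5
```

In this sketch a stronger punishment of high-cost banks shows up directly as a steeper (more negative) slope, which is exactly the comparison the authors make across regulatory regimes.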
(2018) Note: Estimates use banking average total costs until 1962 and average operating costs thereafter. Data not available from 1974 to 1976. A competitive yo-yo Our estimates of the degree of competition show a yo-yo pattern, linked to regulatory changes: Competition was high in the free banking era: Until the 1920s, barriers to entry were negligible, as the lack of supervisory controls meant that banks were free to form and to open branches. As the numbers of banks and their branches rose, competition increased and remained extremely high until the mid-1920s. There were more than 4,000 banks, the greatest number in Italian history. Competition declined during the restrictive regulatory regime: The trend reversed at the end of the 1930s, when banks' market power increased because of the restrictive banking regulation introduced to ensure financial stability after the Great Depression. The Bank of Italy was endowed with greater discretionary power. It could decline to authorise bank creation and branch openings as well as M&A operations, and could revoke the license of any bank with insufficient capital. There were restrictions on the areas in which banks were allowed to operate. Banks were also required to join a bank cartel that set caps on deposit interest rates. Most of the banking sector became state owned. During this era of 'financial repression' (McKinnon 1973, Shaw 1973), there was a low degree of competition, as measured by the Boone indicator. This is confirmed by the bank turnover rate, which drops from approximately 6% to 3%. The new equilibrium, strongly influenced by the Bank of Italy's supervisory activity, was characterised by few entries, few exits, and few M&As. Competition increased with deregulation: With the take-off of the deregulation process in the 1980s, the banking system moved toward a more competitive framework.
Barriers to competition were gradually removed, the bank cartel was abandoned, clear rules for the authorisation of branch openings were established, geographical constraints on lending were cancelled, and the prohibition against commercial banks providing long-term loans was weakened. The liberalisation process experienced a boom in 1990. Branch openings were deregulated, state-owned banks were authorised to become public companies, and parliament passed the first Italian antitrust law. In 1993, a new banking law reorganised all the recent regulatory changes in a pro-competitive framework. In this framework, bank creation was free and markets were no longer segmented. The effectiveness of the deregulation process is confirmed by our estimates of the Boone indicator, which signal an increase in competition since the 1980s. The turnover rate also began to increase in the second half of the 1980s as bank creation was liberalised. This trend accelerated at the beginning of the 1990s, when turnover increased further owing to M&A activity. Competition declined during the 1990s: The yo-yo pattern continued with a return to decreased competition in the second half of the 1990s (see van Leuvensteijn et al. 2011 for a similar trend). This reduction was less pronounced using the indicator based on market shares. The decline in competition ended in the early 2000s, and the extent of competition remained stable during the long Italian recession of 2008–2014. In our paper we also provide other micro-econometric results showing that changes in competition were influenced by changes in regulation, confirming that the institutional framework of the financial repression period was the most hostile to competition during the past 125 years. Regulation drove competition Our evidence shows that the evolution of regulation itself has been an important driver of competition. This confirms that the financial sector does not change monotonically over time (Rajan and Zingales 2003).
Additional research would be needed to assess the effect of regulation on banking competition in other countries that, like Italy, have experienced swings in banking regulation. It would also be worthwhile to compare the variety of actions undertaken by banks during the Great Depression and the Great Recession. We leave these issues for future research. Authors' note: The views expressed in the article are those of the authors and do not necessarily reflect those of the Bank of Italy. Boone, J, J van Ours and H van der Wiel (2013), "When is the price cost margin a safe way to measure changes in competition?", De Economist, 161(1): 45–67. De Bonis, R, G Marinelli and F Vercelli (2018), "Playing yo-yo with bank competition: New evidence from 1890 to 2014", Explorations in Economic History 67: 134-151. Gigliobianco, A and C Giordano (2012), "Does economic theory matter in shaping banking regulation? A case-study of Italy (1861-1936)", Accounting, Economics, and Law: A Convivium, 2(1): 1-78. Guiso, L, P Sapienza and L Zingales (2006), "The cost of banking regulation", NBER working paper 12501. Marques-Ibanez, D and M van Leuvensteijn (2017), "Bank competition and financial stability: The role of financial innovation", VoxEU.org, 3 February. Marques-Ibanez, D, Y Altunbas and M van Leuvensteijn (2014), "Competition and bank risk: The effect of securitization and bank capital", European Central Bank working paper 1678. McKinnon, R I (1973), Money and Capital in Economic Development, Brookings Institution. Rajan, R G and L Zingales (2003), "The great reversals: The politics of financial development in the twentieth century", Journal of Financial Economics, 69(1): 5–50. Shaw, E (1973), Financial Deepening in Economic Development, Oxford University Press. Toniolo, G and N White (2015), "The evolution of the financial stability mandate: From its origins to the present day", NBER working paper 20844.
van Leuvensteijn, M, A van Rixtel and B Xu (2016), "Lessons from China on bank competition at the zero lower bound interest rate", VoxEU.org, 12 June. van Leuvensteijn, M, J Bikker, A van Rixtel and C K Sorensen (2011), "A new approach to measuring competition in the loan markets of the euro area", Applied Economics 43(23): 3155–3167. Xu, B, A van Rixtel and M van Leuvensteijn (2013), "Measuring bank competition in China: A comparison of new versus conventional approaches applied to loan markets", Bank for International Settlements working paper 422. Riccardo De Bonis Deputy Head of Statistical Analysis Directorate, Bank of Italy Giuseppe Marinelli Economist-Statistician, Directorate General for Economics, Statistics and Research, Bank of Italy Francesco Vercelli Economist, Directorate General for Economics, Statistics and Research, Bank of Italy
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,640
# Critical behavior for scalar nonlinear waves

Davide Masoero, Andrea Raimondo, and Pedro R. S. Antunes, "Critical behavior for scalar nonlinear waves", Physica D: Nonlinear Phenomena 292 (2013), 1–7. Published 13 December 2013.
Q: How to change the default WAS Liberty server port 9080 to another port number in RAD?

When using the WAS Liberty profile V8.5 beta, I cannot find anywhere in RAD to change the default service port 9080. I tried adding an httpEndpoint section to server.xml; the Liberty server reports that the configuration update was successful, but the web application fails when it runs. Does anybody know how to solve this? Thanks!

A: There was a bug in the beta which prevented some port changes from being preserved when the server was restarted. There is a forum thread [1] discussing the issue with the cached port value, and the answer is to start the server with the --clean option (from the command line); there is an equivalent checkbox available when you start the server via the tools.

[1] https://www.ibm.com/developerworks/forums/thread.jspa?messageID=14770100

I would start there.
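For context, the HTTP port on Liberty is normally changed with an httpEndpoint element in server.xml. A minimal sketch of such a configuration (the id `defaultHttpEndpoint` is the standard one; the port values and the feature name here are illustrative, not from the question):

```xml
<server description="sample Liberty server">
    <featureManager>
        <!-- illustrative feature; use whatever your application needs -->
        <feature>servlet-3.0</feature>
    </featureManager>

    <!-- Override the default HTTP/HTTPS ports (9080/9443). -->
    <httpEndpoint id="defaultHttpEndpoint"
                  host="localhost"
                  httpPort="9081"
                  httpsPort="9444" />
</server>
```

With the beta bug described above, even a correct entry like this could keep serving the old cached port until the server was restarted with --clean.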
# Perturbing singular solutions of the Gelfand problem

Juan Dávila, Louis Dupaigne, Ignacio Guerra, and Marcelo Montenegro (2007).

The equation $-\Delta u = \lambda e^u$ posed in the unit ball $B \subseteq \mathbb{R}^N$, with homogeneous Dirichlet condition $u|_{\partial B} = 0$, has the singular solution $U=\log\frac{1}{|x|^2}$ when $\lambda = 2(N-2)$. If $N\ge 4$ we show that under small deformations of the ball there is a singular solution $(u,\lambda)$ close to $(U,2(N-2))$. In dimension $N\ge 11$ it corresponds to the extremal solution -- the one associated with the largest $\lambda$ for which existence holds. In contrast, we…
# Maximum likelihood estimator (Gaussian errors, known SD)

Suppose that the random variables $Y_1, \ldots, Y_n$ satisfy $Y_i = \beta x_i + \epsilon_i$ for $i = 1,\ldots,n$, where $\beta$ is a constant, $x_1,\ldots,x_n$ are constants, and $\epsilon_1,\ldots,\epsilon_n$ are independent and identically distributed random variables with $\epsilon_i \sim N(0,\sigma^2)$, where $\sigma^2$ is a known constant.

(a) Determine the exact distribution of $Y_i$.

(b) Find the maximum likelihood estimator $\hat{\beta}$ of $\beta$ and show that it is an unbiased estimator of $\beta$.

(c) Determine the exact distribution of $\hat{\beta}$.

Answer: Since $Y_i$ is the sum of a constant and a normal random variable, it has a normal distribution; work out its mean and variance. Write down the likelihood function, set its partial derivative with respect to $\beta$ to zero, and solve for $\beta$. Once you have the formula for the estimator $\hat{\beta}$, you should be able to determine its distribution and its mean; if the mean turns out to be $\beta$, the estimator is unbiased. (This amounts to doing (c) before (b), but it can probably be done either way.)
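The computation the answer outlines can be carried through explicitly; a worked sketch (not part of the original answer):

$$Y_i \sim N(\beta x_i, \sigma^2), \qquad \ell(\beta) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(Y_i - \beta x_i)^2.$$

Setting the derivative to zero,

$$\frac{\partial \ell}{\partial \beta} = \frac{1}{\sigma^2}\sum_{i=1}^{n} x_i (Y_i - \beta x_i) = 0 \quad\Longrightarrow\quad \hat{\beta} = \frac{\sum_{i=1}^{n} x_i Y_i}{\sum_{i=1}^{n} x_i^2}.$$

As a linear combination of independent normals, $\hat{\beta}$ is itself normal, with $E[\hat{\beta}] = \frac{\sum_i x_i(\beta x_i)}{\sum_i x_i^2} = \beta$ (so it is unbiased) and $\mathrm{Var}(\hat{\beta}) = \frac{\sigma^2 \sum_i x_i^2}{\left(\sum_i x_i^2\right)^2} = \frac{\sigma^2}{\sum_i x_i^2}$, i.e.

$$\hat{\beta} \sim N\!\left(\beta,\; \frac{\sigma^2}{\sum_{i=1}^{n} x_i^2}\right).$$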
#include <TMongoDriver>
#include <TMongoCursor>
#include <TBson>
#include <TSystemGlobal>
#include <QDateTime>
extern "C" {
#include "mongoc.h"
}


TMongoDriver::TMongoDriver()
    : mongoClient(nullptr), dbName(), mongoCursor(new TMongoCursor()),
      lastStatus(nullptr), errorCode(0), errorString()
{
    mongoc_init();
}


TMongoDriver::~TMongoDriver()
{
    close();
    delete mongoCursor;
    if (lastStatus) {
        delete lastStatus;
    }
}


bool TMongoDriver::open(const QString &db, const QString &user, const QString &password,
                        const QString &host, quint16 port, const QString &options)
{
    if (isOpen()) {
        return true;
    }

    if (!port) {
        port = MONGOC_DEFAULT_PORT;
    }

    QString uri;
    if (!user.isEmpty()) {
        uri += user;
        if (!password.isEmpty()) {
            uri += ':';
            uri += password;
            uri += '@';
        }
    }
    uri += host;
    if (!options.isEmpty()) {
        uri += "/?";
        uri += options;
    }
    if (!uri.isEmpty()) {
        uri.prepend(QLatin1String("mongodb://"));
    }

    // connect
    mongoClient = mongoc_client_new(qPrintable(uri));
    if (mongoClient) {
        dbName = db;
    } else {
        tSystemError("MongoDB client create error");
    }
    return (bool)mongoClient;
}


void TMongoDriver::close()
{
    if (isOpen()) {
        mongoc_client_destroy(mongoClient);
        mongoClient = nullptr;
    }
}


bool TMongoDriver::isOpen() const
{
    return (bool)mongoClient;
}


bool TMongoDriver::find(const QString &collection, const QVariantMap &criteria,
                        const QVariantMap &orderBy, const QStringList &fields,
                        int limit, int skip, int)
{
    if (!isOpen()) {
        return false;
    }

    errorCode = 0;
    errorString.clear();

    mongoc_collection_t *col = mongoc_client_get_collection(mongoClient, qPrintable(dbName),
                                                            qPrintable(collection));
    mongoc_cursor_t *cursor = mongoc_collection_find(col, MONGOC_QUERY_NONE, skip, limit, 0,
                                                     (bson_t *)TBson::toBson(criteria, orderBy).data(),
                                                     (bson_t *)TBson::toBson(fields).data(),
                                                     nullptr);  /* Read Prefs, nullptr for default */
    setLastCommandStatus(mongoc_collection_get_last_error(col));
    mongoc_collection_destroy(col);
    mongoCursor->setCursor(cursor);

    if (cursor) {
        bson_error_t error;
        if (mongoc_cursor_error(cursor, &error)) {
            errorCode = error.code;
            errorString = QLatin1String(error.message);
        }
    } else {
        tSystemError("MongoDB Cursor Error");
    }
    return (bool)cursor;
}


QVariantMap TMongoDriver::findOne(const QString &collection, const QVariantMap &criteria,
                                  const QStringList &fields)
{
    QVariantMap ret;
    bool res = find(collection, criteria, QVariantMap(), fields, 1, 0, 0);
    if (res && mongoCursor->next()) {
        ret = mongoCursor->value();
    }
    return ret;
}


bool TMongoDriver::insert(const QString &collection, const QVariantMap &object)
{
    if (!isOpen()) {
        return false;
    }

    errorCode = 0;
    errorString.clear();

    bson_error_t error;
    mongoc_collection_t *col = mongoc_client_get_collection(mongoClient, qPrintable(dbName),
                                                            qPrintable(collection));
    bool res = mongoc_collection_insert(col, MONGOC_INSERT_NONE,
                                        (bson_t *)TBson::toBson(object).constData(),
                                        nullptr, &error);
    setLastCommandStatus(mongoc_collection_get_last_error(col));
    mongoc_collection_destroy(col);

    if (!res) {
        tSystemError("MongoDB Insert Error: %s", error.message);
        errorCode = error.code;
        errorString = QLatin1String(error.message);
    }
    return res;
}


bool TMongoDriver::remove(const QString &collection, const QVariantMap &object)
{
    if (!isOpen()) {
        return false;
    }

    errorCode = 0;
    errorString.clear();

    bson_error_t error;
    mongoc_collection_t *col = mongoc_client_get_collection(mongoClient, qPrintable(dbName),
                                                            qPrintable(collection));
    bool res = mongoc_collection_remove(col, MONGOC_REMOVE_SINGLE_REMOVE,
                                        (bson_t *)TBson::toBson(object).constData(),
                                        nullptr, &error);
    setLastCommandStatus(mongoc_collection_get_last_error(col));
    mongoc_collection_destroy(col);

    if (!res) {
        tSystemError("MongoDB Remove Error: %s", error.message);
        errorCode = error.code;
        errorString = QLatin1String(error.message);
    }
    return res;
}


bool TMongoDriver::update(const QString &collection, const QVariantMap &criteria,
                          const QVariantMap &object, bool upsert)
{
    if (!isOpen()) {
        return false;
    }

    errorCode = 0;
    errorString.clear();

    bson_error_t error;
    mongoc_update_flags_t flag = (upsert) ? MONGOC_UPDATE_UPSERT : MONGOC_UPDATE_NONE;
    mongoc_collection_t *col = mongoc_client_get_collection(mongoClient, qPrintable(dbName),
                                                            qPrintable(collection));
    bool res = mongoc_collection_update(col, flag,
                                        (bson_t *)TBson::toBson(criteria).data(),
                                        (bson_t *)TBson::toBson(object).data(),
                                        nullptr, &error);
    setLastCommandStatus(mongoc_collection_get_last_error(col));
    mongoc_collection_destroy(col);

    if (!res) {
        tSystemError("MongoDB Update Error: %s", error.message);
        errorCode = error.code;
        errorString = QLatin1String(error.message);
    }
    return res;
}


bool TMongoDriver::updateMulti(const QString &collection, const QVariantMap &criteria,
                               const QVariantMap &object)
{
    if (!isOpen()) {
        return false;
    }

    errorCode = 0;
    errorString.clear();

    bson_error_t error;
    mongoc_collection_t *col = mongoc_client_get_collection(mongoClient, qPrintable(dbName),
                                                            qPrintable(collection));
    bool res = mongoc_collection_update(col, MONGOC_UPDATE_MULTI_UPDATE,
                                        (bson_t *)TBson::toBson(criteria).data(),
                                        (bson_t *)TBson::toBson(object).data(),
                                        nullptr, &error);
    setLastCommandStatus(mongoc_collection_get_last_error(col));
    mongoc_collection_destroy(col);

    if (!res) {
        tSystemError("MongoDB UpdateMulti Error: %s", error.message);
        errorCode = error.code;
        errorString = QLatin1String(error.message);
    }
    return res;
}


int TMongoDriver::count(const QString &collection, const QVariantMap &criteria)
{
    if (!isOpen()) {
        return false;
    }

    errorCode = 0;
    errorString.clear();

    bson_error_t error;
    mongoc_collection_t *col = mongoc_client_get_collection(mongoClient, qPrintable(dbName),
                                                            qPrintable(collection));
    int count = mongoc_collection_count(col, MONGOC_QUERY_NONE,
                                        (bson_t *)TBson::toBson(criteria).data(),
                                        0, 0, nullptr, &error);
    setLastCommandStatus(mongoc_collection_get_last_error(col));
    mongoc_collection_destroy(col);

    if (count < 0) {
        tSystemError("MongoDB Count Error: %s", error.message);
        errorCode = error.code;
        errorString = QLatin1String(error.message);
    }
    return count;
}


// QString TMongoDriver::lastErrorString() const
// {
//     return lastStatus->value("writeErrors").toStringList().value(0);
// }


QVariantMap TMongoDriver::getLastCommandStatus() const
{
    return TBson::fromBson(*lastStatus);
}


void TMongoDriver::setLastCommandStatus(const void *bson)
{
    if (lastStatus) {
        delete lastStatus;
    }
    lastStatus = new TBson((const TBsonObject *)bson);
}
\begin{abstract}
\paragraph{Motivation.}
Modelling, parameter identification, and simulation play an important role in systems biology. Usually, the goal is to determine parameter values that minimise the difference between experimental measurement values and model predictions in a least-squares sense. Large-scale biological networks, however, often suffer from missing data for parameter identification. Thus, the least-squares problems are rank-deficient and solutions are not unique. Many common optimisation methods ignore this detail because they do not take into account the structure of the underlying inverse problem. These algorithms simply return a ``solution'' without additional information on identifiability or uniqueness. This can yield misleading results, especially if parameters are co-regulated and data are noisy.

\paragraph{Results.}
The Gauss-Newton method presented in this paper monitors the numerical rank of the Jacobian and converges locally, for the class of adequate problems, to a solution that is unique within the subspace of identifiable parameters. This method has been implemented in BioPARKIN, a software package that combines state-of-the-art numerical algorithms with compliance to systems biology standards, most importantly SBML, and an accessible interface.

\paragraph{Availability.}
The software package BioPARKIN is available for download at \\
\url{http://bioparkin.zib.de}.
\end{abstract}

\maketitle

\section{Introduction}

Following \citep{Schuppert10}, there are two main modelling approaches in systems biology. On one hand, there exist detailed models for isolated parts of a system. The states and model parameters of such systems are generally well-defined, but the system is far from being closed and there are great variations in the environmental conditions. On the other hand, large-scale networks are more closed, but suffer from missing data for parameter identification.
Biological data, however, often indicate that parameters are correlated, and that a system's behaviour can be characterised by a few control parameters. In contrast to parameter optimisation, {\em parameter identification} not only aims at the determination of parameter values from given measurement data, but also at the detection of dependencies between parameters. As stated in \citep{Schuppert10}, the identification of all control parameters that allow a proper characterisation of the states of a biological system is by no means trivial and, at least for most applications, an open problem.

Modelling, parameter estimation and simulation of biological systems have become part of modern systems biology toolboxes. Unfortunately, many of these programs are based on inefficient or mathematically outdated algorithms. To counteract this problem, we have developed the software package BioPARKIN$^{1}$\footnote[0]{$^{1}$ {\bf Bio}logy-related {\bf par}ameter identification in large {\bf kin}etic networks} \citep{Die11}. This software is a renewed version of the former codes LARKIN \citep{DdBa:81} and PARKIN \citep{Nowak85}, which have successfully been applied in the chemical industry for more than 20 years \citep{Deuflhard1986}. BioPARKIN combines a basis of long-standing mathematical principles with compliance to systems biology standards, most importantly SBML \citep{cornish2003systems}, and an accessible interface. The SBML format is one of the most important standards in systems biology to facilitate collaboration of researchers at all levels (physicians, biologists, mathematicians, etc.). The interface strives to wrap complicated structures and settings (especially with regard to the numerical back-end) into a user-friendly package that can be used correctly by non-mathematicians.

BioPARKIN is split into two parts -- the numerical library PARKINcpp and the graphical user interface (GUI) -- in order to achieve several advantages.
The crucial, yet computation-intensive numerical algorithms are embedded in an efficient C++ library, while the GUI is coded in Python, which enables rapid interface changes when adapting the user interface to new insights into user behaviour. Another important advantage is the independent availability of the PARKINcpp library for use in other related projects. Both parts are available under the LGPL, which is a flexible open-source license allowing for the use of the software in both open and closed (i.e.~commercial) projects.

The core of PARKINcpp and its unique feature is the solver NLSCON for {\bf n}onlinear {\bf l}east-{\bf s}quares with {\bf con}straints \citep{nlscon}. This Gauss-Newton type method is especially suited for rank-deficient problems \citep{Deu04}. NLSCON requires, however, some user-specified input such as threshold values for species and parameters, or a threshold value for the rank decision. In order to choose reasonable values and to obtain reliable results, it is indispensable to understand the foundations of the algorithm. This paper therefore aims at giving an overview of the functionality and implementation of NLSCON within BioPARKIN.

The article is organised as follows. We start with the problem definition in Section~\ref{sec:approach}. In Section~\ref{sec:methods} we explain our method to solve nonlinear least-squares problems. Finally, we present and discuss numerical results in Section~\ref{sec:results}.

\section{Approach}
\label{sec:approach}

\subsection{Large kinetic networks}

A major topic in systems biology is the study of the dynamical evolution of bio-chemical mechanisms within a well-defined, biology-related context. The bio-chemical mechanisms in such a compound under consideration are typically given as a, possibly huge, set of chemical reactions between numerous species forming a large kinetic network.
Assuming the general principle of mass action kinetics, this large network transforms readily to a system of $n$ ordinary differential equations (ODEs) leading to an autonomous initial value problem (IVP)
\begin{equation}
y' = f(y\,;\, p), \qquad y(t_{0}) = y_{0}, \qquad p \in \mathbb{R}^q
\label{eq:IVP}
\end{equation}
where the rate of change in the species vector, $y' \in \mathbb{R}^n$, is described by the term on the right-hand side, $f(y;p)$, depending on both the species, $y \in \mathbb{R}^n$, and the parameter vector, $p \in \mathbb{R}^{q}$. The initial condition vector, $y_{0}$, has the same dimension as the species vector $y$.

In BioPARKIN, the ODE systems are solved numerically with LIMEX, a linearly implicit Euler method with extrapolation that is especially suited for stiff differential equations \citep{limex,limex2,Ehr99}. LIMEX is a numerical integrator with adaptive stepsize control that allows for a computation of the solution $y$ at arbitrary time points with prescribed accuracy by using an appropriate interpolation scheme. This is often not possible with other ODE solvers. LIMEX can be applied to differential-algebraic equations as well, which allows for the processing of algebraic constraints in BioPARKIN.

It is assumed that some discrete experimental data (in the form of species concentrations versus time),
\begin{equation}
(\tau_{1},z_{1}),\,\ldots,\,(\tau_{M},z_{M}),
\end{equation}
are available. Note that frequently only a certain number of the $n$ species concentrations are measurable observables, if at all. The task at hand now reduces to quantifying the $q$ unknown components of the parameter vector, $p$, by comparison between computed model values and measured data. A complete data set, of course, must include prescribed statistical tolerances, $\delta z_{j} \, (j = 1, \ldots, M)$, for each measurement as well. The mathematically correct handling of these will be described in Section~\ref{sub:Parameter-identification}.
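As a small illustration of the mass-action transcription into the IVP (\ref{eq:IVP}): a single reaction $A + B \rightarrow C$ with rate constant $k$, species vector $y = (y_{A}, y_{B}, y_{C})$ and parameter vector $p = (k)$ yields
\begin{equation*}
y_{A}' = -k\, y_{A} y_{B}, \qquad y_{B}' = -k\, y_{A} y_{B}, \qquad y_{C}' = k\, y_{A} y_{B},
\end{equation*}
i.e.~the case $n = 3$, $q = 1$; larger networks are assembled reaction by reaction in the same way.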
\paragraph{Breakpoint handling.}
A sudden event (maybe from outside the biological system) is handled by introducing a breakpoint, $t_{b} > t_{0}$, and subsequently splitting the ODE system into a $y^{-}$-part for $t_{0} < t \leq t_{b}$, and a $y^{+}$-part for $t_{b} < t$,
\begin{eqnarray}
(y^{-})' & = & f(y^{-}; \, p),\qquad y^{-}(t_{0}) = y_{0} \\
(y^{+})' & = & f(y^{+}; \, p),\qquad y^{+}(t_{b}) = g(y^{-}(t_{b}) \, ; \, p)
\end{eqnarray}
where $g:\mathbb{R}^{n} \times \mathbb{R}^{q} \longrightarrow \mathbb{R}^{n}$ is a mapping of the initial conditions, possibly dependent on the parameter vector, $p$. Note that, in BioPARKIN, breakpoints have to be defined beforehand and hence, they must be independent of the time course of $y$. This approach of splitting the ODE system with respect to time particularly applies in case of multiple experiments. In SBML such breakpoints are defined via ``events'' with trigger expressions in the form
\begin{center}
$\mathtt{ eq(time,t_b) }.$
\end{center}
Many other present simulation tools cannot handle this kind of event because the numerical integrator simply does not stop at time $t_b$.

\paragraph{Multiple experiments.}
The design of experiments almost always includes different conditions such that the effects of these different conditions on the system under investigation can be observed and studied. In the simplest case, calibration measurements might be necessary, for example, or data related to different initial conditions, $y_{0,1}, y_{0,2}, \ldots,y_{0,\nu}, \ldots$, are given. Numerically, these situations can be handled by the concatenation of several IVPs,
\begin{equation}
y_{\nu}' = f_{\nu}(y_{\nu} \, ; \, p), \qquad y_{\nu}(t_{0,\nu}) = y_{0,\nu}, \quad \nu = 1,2,\ldots,
\end{equation}
very similar to the management of breakpoints/events.
If required, the solution $y_{\nu}$ corresponding to the (virtual) initial timepoint, $t_{0,\nu}$, can readily be shifted to the (original) initial time, $t_{0}$, for comparison or plotting purposes.

\subsection{Parameter identification}
\label{sub:Parameter-identification}

Following the fundamental idea of Gauss, parameter identification is, as implemented in BioPARKIN, equivalent to solving the \emph{weighted} least-squares problem,
\begin{equation}
\frac{1}{M} \sum_{j=1}^{M} \| D_{j}^{-1} \big(y(\tau_{j}\,;\, p) - z_{j}\big) \|_{2}^{2} = \min,
\label{eq:Fmin}
\end{equation}
with diagonal weighting $(n,n)$-matrices,
\begin{equation}
D_{j} := \mathrm{diag} \big( (\delta z_{j})_{1},\,\ldots,\,(\delta z_{j})_{n} \big), \qquad j = 1, \ldots, M.
\end{equation}
Note that, if not all components of a datum, $z_{j} \in \mathbb{R}^{n}$, are available at a specific measurement time point, $\tau_{j}$, then the missing data in the least-squares formulation are simply replaced by the computable model values, therefore effectively neglecting the corresponding contribution in the sum of Equation~(\ref{eq:Fmin}). The corresponding entry in $D_{j}$ is then set to one. If, on the other hand, a component of a given error tolerance, $\delta z_{j}$, or even the whole vector, is put to zero, this contribution to the sum in Equation~(\ref{eq:Fmin}) is also taken out, and considered as a (nonlinear) equality constraint to the least-squares formulation instead. In the (hopefully rare) case of missing error tolerances in the data file, the measurement tolerances are simply set to
\begin{equation}
\big( \delta z_{j} \big)_{\ell} = \max \big\{ \big| \big(z_{j}\big)_{\ell} \big|, \, \mathrm{thres}(y_{\ell}) \big\},\quad \ell = 1, \ldots, n,
\end{equation}
with some user-specified threshold mapping, $\mathrm{thres}(y_{\ell})$. If this threshold value is not defined, it is set to zero.
The least-squares problem (\ref{eq:Fmin}) may be written even more compactly as
\begin{equation}
\|F(p)\|_{2}^{2} \equiv F(p)^{T}F(p) = \min,
\end{equation}
where $F \, : \, \mathbb{R}^{q} \rightarrow \mathbb{R}^{L}$ is a nonlinear mapping and structured as a stacked vector of length $L = nM$,
\begin{equation}
F(p) = \left[ \begin{array}{c} D_{1}^{-1} \big( y(\tau_{1} \, ; \, p) - z_{1} \big) \\ \vdots \\ D_{M}^{-1} \big( y(\tau_{M}\, ; \, p) - z_{M} \big) \end{array}\right].
\end{equation}
If \emph{not all} components of a measurement, $z_{j}$, are given, the number $L$ will accordingly be smaller, $L < nM$.

\subsection{Parameter constraints}

In order to enforce constraints such as positivity or upper and lower bounds on the unknown parameters to be determined in the model, a (differentiable) transformation, $\varphi:\mathbb{R}^{q}\longrightarrow\mathbb{R}^{q}$, can be introduced resulting in a different parametrisation, $u$, of the model ODE system,
\begin{equation}
p = \varphi(u),\quad y' = f(y\,;\,\varphi(u)) = \tilde{f}(y\,;\, u)
\end{equation}
A global positivity constraint on the parameter vector, $p$, can be achieved, for example, by the (component-wise) exponential transformation
\begin{equation}
p_{i} = \exp(u_{i}),\qquad i=1,\ldots,q
\end{equation}
To impose an upper and a lower bound, $A$ and $B$, respectively, a sinusoidal transformation
\begin{equation}
p_{i} = A + \frac{B-A}{2}\,\left(1 + \sin u_{i}\right),\qquad i=1,\ldots,q
\end{equation}
can be used. For a single bound, $C$, as a last example in this section, a square-root transformation
\begin{equation}
p_{i} = C\pm\left(1 - \sqrt{1 + u_{i}^{2}}\right),\qquad i=1,\ldots,q
\end{equation}
(with the upper sign for an upper bound and the lower sign for a lower bound) is possible.
The last two transformation formulas are particularly suitable since, at least for small perturbations $\mathrm{d}p_{i} \approx \varphi' \, \mathrm{d}u_{i}$, the differentials are bounded and, most importantly, are essentially independent of the new parametrisation, $u$. Note that the application of any transformation of the parameters obviously changes the sensitivities of the parameters to the dynamical evolution of the ODE system. Therefore, it is strongly recommended that parameter constraints should only be applied in order to prevent the parameter vector components, $p_{i}$, from taking on physically meaningless values. The better choice in this case would be to change the model equations, since model and data seem to be incompatible.

\subsection{Parameter scaling}

In general, a scaling-invariant algorithm, i.e.~an algorithm that is invariant under the choice of units in a given problem, is (almost) indispensable to guarantee any reliable results. Therefore, the following scaling strategy within the course of the Gauss-Newton iteration has been implemented: in each iteration step $k$, an internal weighting vector, $pw\in\mathbb{R}^{q}$, is used to define local scaling matrices, $W_{k}$, by
\begin{equation}
W_{k} = \mathrm{diag}(pw_{1}, \ldots, pw_{q})
\end{equation}
with locally given
\begin{equation}
pw_{i} := \max \left\{ |p_{i}^{k}|,\:\mathrm{thresh}(p_{i}) \right\}, \qquad i = 1, \ldots, q
\end{equation}
where $p_i^k$ are the current parameter iterates, and ${\mathrm{thresh}(p_{i})>0}$ are suitable threshold values for scaling chosen by the user. Consequently, any relative precision of parameter values below these prescribed threshold values will be meaningless.
\section{Methods} \label{sec:methods} \subsection{Affine covariant Gauss-Newton algorithm} \label{sub:Gauss-Newton-Method} Starting with an initial guess, $p^{0} \in \mathbb{R}^{q}$, the (damped) Gauss-Newton method is given as \begin{equation} p^{k+1} = p^{k} + \lambda_{k}\Delta p^{k},\qquad k = 0, 1, \ldots \end{equation} Here, the step-length, $0< \lambda_{k} \le 1$, is recomputed in each iteration (see below). The update, $\Delta p^{k}$, is the minimum norm solution to the \emph{linear} least-squares problem, \begin{equation} \| F'(p^{k})\,\Delta p^{k} + F(p^{k}) \| \overset{!}{=} \min. \end{equation} The $(L\times q)$-Jacobian matrix, $F'(\cdot)$, can be approximated by stacking the rows of the sensitivity matrices, $S(\tau_{j})$, corresponding to the measurement points $(\tau_{j},\, z_{j})$, \begin{equation} J = \left[\begin{array}{c} D_{1}^{-1} S(\tau_{1}) \\ \vdots\\ D_{M}^{-1} S(\tau_{M}) \end{array} \right]. \end{equation} Herein the sensitivity matrices, $S(\tau_j)$, are samples of the solution trajectories of the inhomogeneous \emph{variational equation} \begin{equation} S' = f_{y}\Big(y(t\,;\,p^{k})\,;\,p^{k}\Big) S + f_{p}\Big(y(t\,;\,p^{k})\,;\,p^{k}\Big), \quad S(t_0) = 0, \end{equation} taken at the measurement time points, $\tau_{j}$. The terms $f_y$ and $f_p$ on the right hand side are computed analytically by symbolic differentiation. The variational equation is solved simultaneously with the IVP (\ref{eq:IVP}), yielding an ODE system of $n\times(q+1)$ equations in total. To avoid expensive factorisations of the iteration matrix within LIMEX, it is replaced by its block-diagonal part, as proposed in \citep{Schlegel04}. The linearly-implicit extrapolation algorithm admits such an approximation as long as the characteristics of the dynamic system are preserved, which is the case here.
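The damped Gauss-Newton iteration can be sketched on a toy problem. The sketch below fits an exponential-decay model with an analytic Jacobian; it is a hedged illustration only (the model, tolerances, and the crude step-halving fallback are ours, standing in for NLSCON's a priori/a posteriori steplength estimates):

```python
import numpy as np

t = np.linspace(0.0, 2.0, 20)
p_true = np.array([2.0, -1.3])
data = p_true[0] * np.exp(p_true[1] * t)         # synthetic "measurements"

def F(p):                                         # stacked residual vector
    return p[0] * np.exp(p[1] * t) - data

def J(p):                                         # analytic (L x q) Jacobian
    return np.column_stack([np.exp(p[1] * t),
                            p[0] * t * np.exp(p[1] * t)])

p = np.array([1.0, -0.5])                         # initial guess p^0
for k in range(50):
    # minimum norm solution of || J dp + F || = min
    dp = np.linalg.lstsq(J(p), -F(p), rcond=None)[0]
    lam = 1.0
    while lam > 1e-4:
        # simplified correction: old Jacobian J(p^k), new residual F(p^k + lam dp)
        dp_bar = np.linalg.lstsq(J(p), -F(p + lam * dp), rcond=None)[0]
        if np.linalg.norm(dp_bar) < np.linalg.norm(dp):
            break                                 # natural monotonicity test passed
        lam *= 0.5                                # crude fallback, not NLSCON's strategy
    p = p + lam * dp
    if np.linalg.norm(dp) < 1e-10:
        break
```

On this compatible (zero-residual) toy problem the iteration recovers `p_true` to high accuracy, in line with the quadratic convergence discussed later in the Convergence subsection.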
With this block-diagonal sparsing, the effort required for the sensitivity evaluation grows only linearly, rather than quadratically, with the number of parameters, $q$. Hence, reasonable computing times are achieved (compare also Table~\ref{Tab:01}). For comparison with other software tools, the Jacobian matrix can alternatively be approximated by the difference quotient, for $\ell = 1, \ldots, L$ and $i = 1, \ldots, q$, \begin{equation} J_{\ell,i} = \frac{1}{h} \Big(F_{\ell}(p+e_{i}h) - F_{\ell}(p)\Big), \qquad h = \mathcal{O}\Big( \left|p_{i}\right| \cdot \sqrt{\mathrm{eps}} \Big), \end{equation} where $\mathrm{eps}$ is the relative machine precision. In BioPARKIN, the user can optionally invoke a feedback strategy in which the finite difference disturbance is additionally adapted to the current values of $F_{\ell}$. All approaches to compute the Jacobian matrix ensure that, at each current parameter estimate, $p^{k}$, the approximation $J\approx F'(p^{k})$ is valid. Note, however, that the Jacobian computed by numerical differentiation is generally less accurate than the Jacobian obtained via the variational equation. For later use, the so-called \emph{simplified Gauss-Newton correction}, $\overline{\Delta p}^{k+1}$, is introduced as the minimum norm solution to \begin{equation} \|J(p^{k})\,\overline{\Delta p}^{k+1} + F(p^{k+1})\| \overset{!}{=} \min. \end{equation} \subsection{Threshold-related scaling} Often, model species and model parameters cover a broad range of physical units, and their values can vary over orders of magnitude.
To achieve comparability, the sensitivity values have to be normalised by the absolute values of species and parameters, yielding scaled sensitivity matrices, \begin{equation} S_{ij}(t) = \left( \frac{\partial y_{i}}{\partial p_{j}}\right)(t) \cdot \frac{ \max \{ |p_{j}| \, , \, \mathrm{thresh}(p_{j}) \} } { \max \{ \max\limits_{t\in I} |y_{i}(t)| \, , \, \mathrm{thresh}(y_{i}) \} }, \end{equation} where $\mathrm{thresh}(\cdot)$ are user-specified threshold values for parameters and species, respectively, and $I$ denotes the integration time interval of the ODE system. In BioPARKIN, the absolute values of these scaled sensitivities are displayed (see Figure~\ref{fig:02} as an example). \subsection{Subcondition monitor} \label{sub:Subcondition-Monitor} For the solution of the linear least-squares problem in each iteration step, BioPARKIN computes a QR-decomposition of the associated Jacobian $(L,q)$-matrix, $J = F'(p)$, \begin{equation} Q\, J\,\Pi = \left( \begin{array}{c} R\\ 0 \end{array} \right) \end{equation} by applying Householder reflections with additional column pivoting. Here, for simplicity, the full rank case is assumed, where $q \leq L$ and $R$ is an upper triangular $(q,q)$-matrix, $R = (r_{ij})$. The permutation, $\Pi$, is determined such that \begin{equation} |r_{11}| \geq |r_{22}| \geq \ldots \geq |r_{qq}|. \end{equation} For some required accuracy, $\delta > 0$, given by the user, the \emph{numerical rank}, $\ell := \mathrm{rnk}(J)$, indispensable to the successful solution of ill-posed problems, is then defined by the inequality \begin{equation} |r_{\ell+1,\ell+1}| < \delta \, |r_{11}| \leq |r_{\ell\ell}|. \end{equation} In general, the maximum of all given measurement tolerances, $\delta z_j$, is a suitable choice for this accuracy, $\delta := \max\limits_{i,j} \{ (\delta z_{j})_{i} \}$. In BioPARKIN, however, this choice is left to the user, who specifies a tolerance XTOL that is assigned to $\delta$.
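The numerical-rank criterion above can be sketched with SciPy's pivoted QR decomposition (an illustrative sketch under our own naming, not the BioPARKIN source):

```python
import numpy as np
from scipy.linalg import qr

def numerical_rank(Jmat, delta):
    """Rank = largest l with |r_ll| >= delta * |r_11|, using the
    non-increasing diagonal produced by column-pivoted QR."""
    R = qr(Jmat, mode='r', pivoting=True)[0]
    d = np.abs(np.diag(R))
    if d[0] == 0.0:
        return 0
    return int(np.sum(d >= delta * d[0]))

# A 4x3 "Jacobian" whose third column is the sum of the first two,
# i.e. exact rank 2:
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 0.0, 2.0]])
assert numerical_rank(A, delta=1e-8) == 2
```

The same routine with a loose `delta` mimics the behaviour described in the text: raising $\delta$ can only decrease the numerical rank.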
Note that this definition of the numerical rank is highly biased by both row and column scaling of the Jacobian. Introducing, nevertheless, the so-called subcondition number for the full-rank case, $\ell = q$, by \begin{equation} \mathrm{sc}(J) := \frac{|r_{11}|}{|r_{qq}|} \leq \mathrm{cond}_{2}(J), \end{equation} it follows that, if $\delta \cdot \mathrm{sc}(J) \geq 1$, the Jacobian will certainly be rank-deficient. In this case, a rank-deficient pseudo-inverse is realised in BioPARKIN, either by a QR-Cholesky variant or by a QR-Moore-Penrose variant \citep{DdSa:80}. Both cases of pseudo-inverses of the Jacobian, $J$, will be denoted by $\left( J^{\ell} \right)^{+}$. \subsection{Steplength strategy} In order to determine an optimal damping parameter, $0 < \lambda_{k} \leq 1$, in each Gauss-Newton step, a first estimate $\lambda_{k}^{(0)}$ is calculated in BioPARKIN from a theoretical prediction based on the previous iteration step, \begin{equation} \begin{array}{rcl} \lambda_{k}^{(0)} & = & \min \{ 1, \mu_{k} \} \\[2ex] \mu_{k} & := & \left[ \|\Delta p^{k-1}\| \, \|\overline{\Delta p}^{k}\| \, / \, ( \, \rho_{k} \, \|\Delta p^{k}\| \, ) \right] \cdot \lambda_{k-1} \\[2ex] \rho_{k} & := & \left\| \left[ I_q - J(p^{k})^{+} J(p^{k-1}) \right] \overline{\Delta p}^{k} \right\| \, .
\end{array} \end{equation} If this first \emph{a priori} estimate, $\lambda_{k}^{(0)}$, fails the \emph{natural monotonicity test}, \begin{equation} \left\| \overline{\Delta p}^{k+1} \right\| < \left\| \Delta p^{k} \right\| \, , \end{equation} then an additional \emph{correction strategy} is invoked to compute the \emph{a posteriori} estimates, \begin{equation} \lambda_{k}^{(\nu)} = \min \left\{ 1 \, , \, \frac{1}{2}\lambda_{k}^{(\nu-1)} \, , \, \frac{1}{2}\mu_{k}^{(\nu-1)} \right\} , \qquad \nu = 1, 2, \ldots \end{equation} where \begin{equation} \mu_{k}^{(\nu-1)} := \frac{\|\Delta p^{k}\|}{\|\overline{\Delta p}^{k+1,\nu-1} - (1-\lambda_{k}^{(\nu-1)})\,\Delta p^{k}\|} \cdot (\lambda_{k}^{(\nu-1)})^{2} \, . \end{equation} For details see \citep{Deu04} and \citep{Die11}. As experience shows, the \emph{a posteriori} loop is rarely activated. To avoid an infinite loop, however, it is ensured that both estimates, $\lambda_{k}^{(0)}$ and $\lambda_{k}^{(\nu)}$, $\nu=1,2,\ldots$, always satisfy the condition \begin{equation} \lambda_{k}^{(\nu)}\geq\lambda_{\mathrm{min}},\qquad\nu=0,1,2,\ldots \end{equation} with a minimal permitted damping factor, $\lambda_{\mathrm{min}}$, provided by the user. In case $\lambda_{k}^{(\nu)}<\lambda_{\mathrm{min}}$, deliberate rank reduction is invoked, which usually leads to larger damping factors. Otherwise, the Gauss-Newton iteration is stopped. \subsection{Deliberate rank reduction} A deliberate rank reduction may additionally help to avoid an iteration towards an attractive point, $\hat{p}$, at which the associated Jacobian matrix, $J(\hat{p})$, becomes singular. The general idea of this device is to reduce the maximum permitted rank in the $QR$ decomposition until natural monotonicity is fulfilled again or, of course, no further rank reduction is possible. The procedure as implemented in BioPARKIN is as follows. To start with, let $q$ denote the current rank.
The ordinary Newton correction, $\Delta p^{k}$, is then recomputed with a prescribed maximum allowed rank, $\ell = q - 1$. With the new (trial) correction, $\Delta p^{k,\ell}$, a new \emph{a priori} damping factor, a new trial iterate, and a new simplified correction, \begin{eqnarray} \Delta p^{k,\ell} & = & -J^{\ell}(p^{k})^{+} F(p^{k}),\\ \lambda_{k}^{(0,\ell)} & = & \min\left\{ 1,\,\mu_{k}^{(0,\ell)}\right\} ,\\ p^{(0,\ell)} & = & p^{k} + \lambda_{k}^{(0,\ell)}\Delta p^{k,\ell},\\ \overline{\Delta p}^{(0,\ell)} & = & -J^{\ell}(p^{k})^{+} F(p^{(0,\ell)}), \end{eqnarray} are computed, respectively. If the monotonicity test is now passed, the Gauss-Newton iteration proceeds as usual. Otherwise, the damping factors, $\lambda_{k}^{(\nu,\ell)}\;(\nu=1,2,\ldots)$, are calculated using the \emph{a posteriori} estimates given above. If, in turn, $\lambda_{k}^{(\nu,\ell)}<\lambda_{\mathrm{min}}$ occurs in the \emph{a posteriori} loop, the maximum allowed rank is lowered by one more and the rank reduction step is repeated. This procedure is carried out until natural monotonicity, $\|\overline{\Delta p}^{(\nu,\ell)}\| \leq \|\Delta p^{k,\ell}\|$, holds true or, alternatively, a final termination criterion, $\ell < \ell_{\mathrm{min}} \, (0 < \ell_{\mathrm{min}} < q)$, is reached. Note that an emergency rank reduction can occur in a step where the rank of the Jacobian, $J(p^{k})$, has already been reduced because of the subcondition criterion. \subsection{Convergence} \begin{table}[!t] \caption{Typical protocol of a parameter identification run with full data, here for the model EpoRcptr (cf.~Section~\ref{sub:EpoRcptr}).\label{Tab:02}} \begin{indented} \item[] \begin{tabular}{@{}crcrrr}\br G-N It. & Normf & & Normx & Damp.~Fctr.
& Rank \\ \mr 0 & 4.1941414e+01 & & 2.115e-02 & & 6 \\ 1 & 4.1936708e+01 & * & 2.094e-02 & 0.01000 & \\ 1 & 4.1936708e+01 & & 2.469e-02 & & 6 \\ 2 & 4.1751843e+01 & * & 1.669e-02 & 0.41932 & \\ 2 & 4.1751843e+01 & & 3.373e-02 & & 6 \\ 3 & 4.1655239e+01 & * & 2.266e-02 & 0.42693 & \\ 3 & 4.1655239e+01 & & 1.024e-01 & & 6 \\ 4 & 4.1639220e+01 & * & 7.410e-02 & 0.19117 & \\ 4 & 4.1639220e+01 & & 1.076e-01 & & 6 \\ 5 & 4.1631470e+01 & * & 4.854e-02 & 0.37178 & \\ 5 & 4.1631470e+01 & & 1.538e-02 & & 6 \\ 6 & 4.1547355e+01 & * & 1.816e-03 & 1.00000 & \\ 6 & \multicolumn{4}{r}{incompatibility factor: 0.14248} & \\ 6 & 4.1547355e+01 & & 6.366e-03 & & 6 \\ 7 & 4.1542667e+01 & * & 2.140e-04 & 1.00000 & \\ 7 & \multicolumn{4}{r}{incompatibility factor: 0.42707} & \\ 7 & 4.1542667e+01 & & 3.339e-05 & & 6 \\ 8 & 4.1542118e+01 & . & 1.783e-08 & 1.00000 & \\ 8 & \multicolumn{4}{r}{incompatibility factor: 0.00526} & \\ \br \end{tabular} \item[] The requested identification accuracy was $\mathrm{xtol} = 10^{-4}$. A star in the third column indicates values corresponding to simplified Gauss-Newton corrections. \end{indented} \end{table} As the solution $p^{\ast}$ is approached, the Gauss-Newton method converges linearly with an asymptotic convergence factor $\kappa(p^{\ast})$. This quantity $\kappa$, called the {\em incompatibility factor}, is monitored by NLSCON and must be smaller than 1 to obtain convergence. Problems that satisfy this condition are called {\em adequate} problems. If model and measurement values match exactly, i.e.~$F(p^{\ast}) = 0$, then $\kappa(p^{\ast}) = 0$ and the method converges quadratically, just like Newton's method. This so-called compatible case, however, does not occur in practice, since experimental measurements are never exact. For inadequate nonlinear least-squares problems, the adaptive damping strategy will typically yield values $\lambda_k \approx 1/\kappa < 1$, and too small damping factors lead to a failure of convergence.
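The convergence monitor just described can be sketched in a few lines: the ratio of the simplified to the ordinary Gauss-Newton correction norm is the contraction factor of the step and, near the solution, estimates the incompatibility factor $\kappa$ (a hedged sketch with assumed example values, not the NLSCON implementation):

```python
import numpy as np

def contraction(dp_bar, dp):
    """Contraction ratio ||simplified correction|| / ||ordinary correction||.
    Values < 1 mean the natural monotonicity test is passed; near p* the
    ratio approaches the incompatibility factor kappa."""
    return np.linalg.norm(dp_bar) / np.linalg.norm(dp)

dp     = np.array([1.0e-2, -3.0e-3])   # ordinary correction   (assumed values)
dp_bar = np.array([2.0e-3,  1.0e-3])   # simplified correction (assumed values)
theta = contraction(dp_bar, dp)
assert theta < 1.0                      # monotonicity test passed for this step
```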
Conversely, this effect can conveniently be taken as an indication of the inadequacy of the inverse problem under consideration \citep{Deu04}. In this case, the model equations or the initial parameter guess $p^0$ should be changed. A typical NLSCON output protocol in case of successful convergence is shown in Table~\ref{Tab:02}. In the convergent phase, the damping factors approach 1 and finally $\kappa<1$. \section{Results of numerical experiments} \label{sec:results} \begin{table}[!t] \caption{Comparison of computing times w.r.t.~different models.\label{Tab:01}} \begin{indented} \item[] \begin{tabular}{@{}rrrrr}\br & {GynCycle} & {BovCycle} & {BIOMD008} & {EpoRcptr} \\ \mr \multicolumn{1}{l}{Model Characteristics} & & & \\ \#Species & 33 & 15 & 5 & 7 \\ \#Parameters & 114 & 60 & 21 & 9 \\ \#Reactions & 54 & 28 & 13 & 9 \\ & & & & \\ \multicolumn{1}{l}{Simulation} & & & & \\ \ \ \ \ \ BioPARKIN${}_{0}$ (adpt. $h$) & 3.2s & 0.8s & 0.1s & 0.1s \\ COPASI${}_{1}$ ($h = 10^{-2}$) & 1.4s & 0.6s & 0.2s & 0.2s \\ COPASI${}_{2}$ ($h = 10^{-3}$) & 7.2s & 4.0s & 1.4s & 1.1s \\ & & & & \\ \multicolumn{1}{l}{Sensitivity} & & & & \\ BioPARKIN & & & & \\ $(*)$\ (var.~eq., overview) & 49s & 12.9s & 0.7s & 1.7s \\ (var.~eq., overview) & 117s & 29.2s & 0.9s & 2.0s \\ (num.~diff., overview) & 309s & 35.4s & 0.8s & 0.2s \\ & & & & \\ COPASI${}_{1}$ (grand total) & 94s & 18.1s & 1.0s & 0.3s \\ COPASI${}_{2}$ (grand total) & 328s & 115.6s & 8.3s & 1.5s \\ \br \end{tabular} \item[] Benchmark times are rounded to one decimal. Integration was done in [0,100] with time units [s] or [d], accordingly. For comparison, $\mathtt{rtol} = 10^{-6}$ and $\mathtt{atol} = 10^{-12}$ have been used in all rows (except $(*)$) as accuracy for the ODE solvers. COPASI run times have been measured by batch processing, {\em ex\-cluding} the time spent for file I/O. \item[] In COPASI, sensitivities were computed by numerical differentiation.
In BioPARKIN, sensitivities were computed either by solving the variational equation (var.~eq.) or by numerical differentiation (num.~diff.). In a sensitivity {\em overview}, sensitivities are plotted over the complete time interval (for an example, see Figure~\ref{fig:02}). \item[] $(*)$ Var.~Eq.~computing times: values have been achieved with slightly lower but still more than sufficient accuracy ($\mathtt{rtol}=10^{-5}$, $\mathtt{atol}=10^{-7}$). \end{indented} \end{table} \begin{figure}[!tpb] \centerline{\includegraphics[scale=0.42]{images/bovcycle_computing_times_w_interpolation_bw.eps}} \caption{BovCycle: Computing times w.r.t.~different integrator tolerances. The cases BioPARKIN${}_{1}$ and BioPARKIN${}_{2}$ interpolate at exactly as many sample points as requested for the COPASI${}_{j}$ $(j=1,2)$ cases, respectively, in addition to the adaptive time points. Note that these artificially high numbers of sample points are unusual and absolutely unnecessary for trajectory computations with BioPARKIN; they have been applied here for comparison reasons only. Additionally, BioPARKIN${}_{0}$ denotes the timings in case of no interpolation at all.}\label{fig:_1} \end{figure} \begin{figure}[!tpb] \centerline{\includegraphics[scale=0.42]{images/gyncycle_sens_computing_times_v2_bw.eps}} \caption{GynCycle: Computing times for the variational equation w.r.t.~different integrator tolerances.
Note that BioPARKIN integrates the variational equation system, while COPASI takes finite differences for the computation.}\label{fig:00} \end{figure} \begin{figure}[!tpb] \centerline{\includegraphics[scale=0.6]{images/bovcycle_forward_sim_selected_species.eps}} \caption{BovCycle: Trajectories of model simulation of selected species.}\label{fig:01} \end{figure} \begin{figure}[!tpb] \centerline{\includegraphics[scale=0.6]{images/biomd008_sens_overview_par_v3p.eps}} \caption{BIOMD008: Sensitivity trajectories of the variational equation w.r.t.~parameter V3p.}\label{fig:02} \end{figure} \begin{figure}[!tpb] \centerline{\includegraphics[scale=0.6]{images/biomd271_sens_overview_Y1_par_k4_k5_k6.eps}} \caption{EpoRcptr: Sensitivity trajectories of measurement variable $Y_1$.}\label{fig:04} \end{figure} \begin{figure}[!tpb] \centerline{\includegraphics[scale=0.6]{images/biomd271_sens_overview_Y2_par_k4_k5_k6.eps}} \caption{EpoRcptr: Sensitivity trajectories of measurement variable $Y_2$.}\label{fig:05} \end{figure} \begin{figure}[!tpb] \centerline{\includegraphics[scale=0.6]{images/biomd271_sens_overview_Y3_par_k4_k5_k6.eps}} \caption{EpoRcptr: Sensitivity trajectories of measurement variable $Y_3$.}\label{fig:06} \end{figure} This section illustrates the use of BioPARKIN and PARKINcpp with actual models. First, two models developed by the Computational Systems Biology group at Zuse Institute Berlin are presented. Next, a third model is taken from the BioModels database, a repository of curated SBML models \citep{le2006biomodels}. And, last but not least, a variant of an EPO receptor model from the same database is considered, as published in \cite{Hengl07}. All subsequent computations have been performed on an Intel Core 2 Duo CPU (T7200 @ 2.0GHz). In addition, for comparison, all forward simulations have been repeated using COPASI \citep{hoops2006copasi}. Note that COPASI uses the stiff ODE solver LSODAR \citep{lsoda0,lsoda1}, in contrast to LIMEX.
In fact, it seems that, for the computation of any model trajectories, the researcher is forced to supply an equidistant time grid in COPASI. Thereby, the accuracy of the ODE solution, as set by the user via $\mathtt{atol}$ and $\mathtt{rtol}$, can easily be foiled in the sense that essential details of model trajectories are simply neglected in COPASI if the chosen equidistant time grid happens to be too coarse. This does not contradict the fact that the computed ODE solution is, at the given sample points, within the requested accuracy, nor that, even more surprisingly, the ODE solver LSODAR \emph{internally} proceeds adaptively. LIMEX, in contrast, avoids all these problems: it integrates fully adaptively, and Hermite interpolation of appropriate order is applied where necessary, strictly respecting the requested accuracy. Moreover, the fully adaptive approach (at least its implementation in BioPARKIN) seems to be much more efficient, see Table~\ref{Tab:01} and Figures~\ref{fig:_1},~\ref{fig:00}. \subsection{GynCycle} \paragraph{Description of the model.} GynCycle is a differential equation model that describes the feedback mechanisms between a number of reproductive hormones and the development of follicles and corpus luteum during the female menstrual cycle \citep{pfizer}. The model correctly predicts hormonal changes following administration of single and multiple doses of two different drugs. \paragraph{BioPARKIN and the model.} The model GynCycle is fairly large. It contains 33 species, 2 assignment rules, 114 parameters, and 54 reactions. The related benchmark timings for a forward simulation run and sensitivity calculations can be found in Table~\ref{Tab:01}. Here, BioPARKIN served as a tool to explore the model and its parameter space.
Together with its predecessor POEM (an unreleased, in-house tool based on the same numerical principles), it made it possible to develop and fine-tune a highly descriptive and predictive model for a complex human pathway with direct relevance to real-world applications. \subsection{BovCycle} \paragraph{Description of the model.} The model BovCycle is a mechanistic mathematical model of the bovine estrous cycle that includes the processes of follicle and corpus luteum development and the key hormones that interact to control these processes \citep{Boer2011}. The model generates a periodic solution without external stimuli, see Figure~\ref{fig:01}. The bovine estrous cycle is the subject of extensive research in animal sciences. Of particular interest have been, for example, the examination of follicular wave patterns \citep{Boer2011b} and the study of synchronization protocols \citep{Claudia2011}. \paragraph{BioPARKIN and the model.} The BovCycle model consists of 15 species, 60 parameters, and 28 reactions. Again, the benchmark timings are given in Table~\ref{Tab:01}. In this application, BioPARKIN enabled the researchers to successively improve the model with each design iteration. Procedures such as parameter identification and sensitivity analysis proved to be absolutely essential in this context, as they guide design decisions by giving insight into hidden dependencies between parameters. \subsection{BIOMD008} \paragraph{Description of the model.} The model with ID 008 in the BioModels database describes cell cycle control using a reversibly binding inhibitor. \paragraph{BioPARKIN and the model.} The model BIOMD008 comprises only 5 species, 21 parameters, and 13 reactions. The relevant benchmark timings for this model can also be found in Table~\ref{Tab:01}.
Albeit small, the model is of the cell cycle type and, in principle, exhibits a stable limit cycle, which makes its sensitivity trajectories interesting in their own right, see e.g.~Figure~\ref{fig:02}. \paragraph{Parameter identification.} Key questions of practical relevance in parameter identification tasks are almost always how much data is sufficient and, even more importantly, how much data is necessary to successfully identify the unknown parameters. We proceed as follows. A specific parameter (V3p) is changed (from 0.3 to 1.0), and the goal is to reconstruct the original parameter value. In a sequence of identification runs, each of the five species is selected to be the only species for which data are available. As data, we take the values of the selected species from the simulation run with the original parameter set, at the time points chosen adaptively by LIMEX. For three of the five species (M, Y, and Z), the original value of V3p is reconstructed without any difficulties. The parameter identification, however, is not successful at all if one of the other two species (C and X) is selected as data source. \paragraph{Sensitivities.} We examined the sensitivity w.r.t.~parameter V3p. The sensitivity overview for BIOMD008 results in a plot of the sensitivity trajectories of all species over time (see Figure~\ref{fig:02}). Parameter V3p displays a cyclic sensitivity across all species. A change in V3p appears to influence the time courses of species Y and Z the least, while it has more influence on species C, M, and X. These observations, apparently, stand in distinct contrast to the findings of the parameter identification task just described. \subsection{EpoRcptr} \label{sub:EpoRcptr} \paragraph{Description of the model.} A dynamical model for the endocytosis of the erythropoietin receptor (EPO receptor) has been published in \cite{Hengl07}. In fact, it is apparently a variant of model BIOMD271 from the database mentioned above.
The model is relatively small, as it consists of 7 species, 9 parameters, and 9 reactions. However, there exist groups of functionally related parameters that were identified by a statistical method in \cite{Hengl07}. We use this example to demonstrate that BioPARKIN handles saddle points in the unknown parameter space correctly, as opposed to, e.g., the Levenberg-Marquardt procedure, which is well known to be unable to detect these stationary points adequately. \paragraph{BioPARKIN and the model.} The model EpoRcptr is even smaller than BIOMD008. The measurable values in this model, $Y_1, Y_2, Y_3$, are linear combinations of some species. In BioPARKIN, these are added to the ODE system as algebraic equations, thus forming a DAE system. The integration routine LIMEX is capable of handling DAE systems of index up to 1. Again, the corresponding benchmark timings can be found in Table~\ref{Tab:01}. \paragraph{Parameter identification.} The parameter set as given in \cite{Hengl07} served as ``true'' values of the model. With these values, each of the three measurement variables $Y_1, Y_2, Y_3$ has been sampled at 10 equidistant points within the time interval $[0,100]$. To be realistic, 5\% white, i.e.~normally distributed, noise has been added to this data set. For the identification run, we took a three times longer time interval, $0 \leq t \leq 300$, and the true parameter values as initial guess for the iterative Gauss-Newton algorithm. Since it is known that this point in parameter space lies on a lower dimensional manifold \citep{Hengl07}, the point has the character of a saddle point. Indeed, identification runs of BioPARKIN indicate just this: the higher $\mathrm{xtol}$ is chosen, the fewer iteration steps are taken, reporting a stop at stationary points (i.e.~no reduction of the residual value) with unreasonably high incompatibility factors.
In addition, the initial parameter values (the ``true'' values) are not recovered; instead, a different point on the parameter manifold is identified (Table~\ref{Tab:03}). This can clearly be concluded from the related correlation matrix, which in all cases contains a submatrix with entries near 1 or -1 only. In fact, the parameters $k_4$, $k_5$, and $k_6$ are thus identified as connected, in total agreement with the findings given in \cite{Hengl07}. \paragraph{Sensitivities.} The sensitivity trajectories of the measurement variables $Y_1$, $Y_2$, and $Y_3$ w.r.t.~parameters $k_4$, $k_5$, $k_6$ are depicted in Figures~\ref{fig:04},~\ref{fig:05}, and \ref{fig:06}, respectively. As can readily be seen, denser sampling of the measurement variables, especially of $Y_1$ and $Y_3$ at later times, should resolve the ambiguous parameter manifold. Indeed, a convenient numerical test nicely confirms this conjecture, see Table~\ref{Tab:04}. \begin{table} \caption{Parameter identification for model EpoRcptr.\label{Tab:03}} \begin{indented} \item[] \begin{tabular}{@{}ccrccr}\br Parameter & True Value & Reconstruction & \multicolumn{3}{c}{Std.~Dev.} \\\mr $k_1$ & 8.0e-03 & 8.114e-03 & $\pm$ 2.053e-03 & $\hat{=}$ & 25.30 \% \\ $k_2$ & 5.0e-05 & 5.045e-05 & $\pm$ 6.361e-06 & $\hat{=}$ & 12.61 \% \\ $k_3$ & 1.0e-01 & 1.012e-01 & $\pm$ 8.970e-03 & $\hat{=}$ & 8.87 \% \\ $k_4$ & 2.5e-01 & 4.297e-01 & $\pm$ 4.216e-03 & $\hat{=}$ & 0.98 \% \\ $k_5$ & 1.5e-01 & 1.096e-01 & $\pm$ 2.732e-02 & $\hat{=}$ & 24.93 \% \\ $k_6$ & 7.5e-02 & 5.343e-02 & $\pm$ 2.556e-02 & $\hat{=}$ & 47.83 \% \\\br \end{tabular} \item[] The requested identification accuracy was $\mathrm{xtol} = 10^{-4}$. The Gauss-Newton iteration converged after 9 steps, with incompatibility factor $\kappa=0.04845$.
\end{indented} \end{table} \begin{table} \caption{Parameter identification for model EpoRcptr using more data.\label{Tab:04}} \begin{indented} \item[] \begin{tabular}{@{}ccrccr}\br Parameter & True Value & Reconstruction & \multicolumn{3}{c}{Std.~Dev.} \\ \mr $k_1$ & 8.0e-03 & 8.136e-03 & $\pm$ 4.847e-04 & $\hat{=}$ & 5.96 \% \\ $k_2$ & 5.0e-05 & 4.956e-05 & $\pm$ 1.702e-06 & $\hat{=}$ & 3.44 \% \\ $k_3$ & 1.0e-01 & 1.016e-01 & $\pm$ 2.707e-03 & $\hat{=}$ & 2.67 \% \\ $k_4$ & 2.5e-01 & 2.546e-01 & $\pm$ 1.215e-03 & $\hat{=}$ & 0.48 \% \\ $k_5$ & 1.5e-01 & 1.465e-01 & $\pm$ 5.637e-03 & $\hat{=}$ & 3.85 \% \\ $k_6$ & 7.5e-02 & 7.201e-02 & $\pm$ 2.443e-05 & $\hat{=}$ & 0.03 \% \\ \br \end{tabular} \item[] The requested identification accuracy was $\mathrm{xtol} = 10^{-4}$. The Gauss-Newton iteration stopped at a stationary point after 11 steps, with incompatibility factor $\kappa=0.03227$. \end{indented} \end{table} \subsection{A noteworthy caveat} The key point here is that sensitivity analysis is not always suitable for anticipating which parameters are more likely to be identifiable than others. In fact, sensitivities depend strongly on the actual parameter set and are therefore only fully meaningful at the end of a successful identification run. Thus, it should always be kept in mind that sensitivity results are merely an exploratory a priori tool that may help the researcher to gain a better understanding of the model. \section{Conclusion} Systems biology as a scientific research field is attracting more attention and gaining more practitioners around the world every year. With the increasing size of the community, the importance of establishing standards becomes more pronounced. The software package BioPARKIN presented here tries to inject long-standing mathematical experience into this growing community. Ideally, this knowledge enables researchers to generate meaningful and reliable results even faster.
While its computing time is comparable with other available software tools, BioPARKIN offers several unique features that are especially useful for biological modelling, such as breakpoint handling or identifiability statements. In particular, the implemented affine covariant Gauss-Newton method provides information on the compatibility between model and data, as well as on the uniqueness of a solution in case of convergence. This is an important tool for model discrimination, when the ``best'' model is to be selected from several alternative models which all explain the given data equally well. Moreover, the Jacobian can be computed with prescribed accuracy by solving the variational equation instead of using inaccurate numerical differentiation, thus increasing the reliability of numerical results. \ack This article is written in sincere remembrance of U.~Nowak, who sadly passed away in June 2011. Without his sophisticated contributions this work would clearly have been impossible. \bibliographystyle{iopart-num}
Pogănești is a commune in Moldova, located in Hîncești District, with 1,617 inhabitants at the 2004 census. The commune is formed by the following localities (2004 population): Pogănești (1,462 inhabitants) and Marchet (155 inhabitants).
Q: How can I build an HTML layout like this? *A left panel with a fixed width, full page height, between the header and the footer. *A middle block with no fixed width or height, fluid. *A right block with a fixed height and width. *The header has position:fixed; *The footer should be pinned to the bottom of the screen, and the main block stretched to the full height. The problem is with the central block: if I don't give it a width, it comes out something like this: code: /*CSS Base =================================*/ html, body, div, span, h1, h2, h3, h4, h5, h6, p, a, img, ol, ul, li, form {margin:0; padding:0; border:0; font-size:100%; vertical-align:baseline;} body {color:#111; font-family: "Tahoma", sans-serif;} input, textarea {outline:none;} h1 {font-size:36px; font-weight: normal;} h2 {font-size:28px; font-weight: normal;} p {font-size:18px; line-height:20px; } a {outline:none; text-decoration:none;} a:hover {text-decoration: none;} img {border:none; outline:0;} .clearfix:before, .clearfix:after { content: ""; display: block; visibility: hidden; } .clearfix:after { clear: both; } .clearfix { zoom: 1; } /* HEADER */ header { width: 100%; height: 80px; background: #ECECEC; position: fixed; top: 0; right: 0; z-index: 10; } /* MAIN CONTENT */ #container { position: relative; width: 100%; height: 100%; margin-top: 80px; } #container .left-panel { width: 400px; height: 100%; background: #CACACA; float: left; } #container .content { width: 100%; position: relative; float: right; } #container .content .right-panel { float: right; width: 200px; height: 400px; background: #CACACA; } /* FOOTER */ footer { height: 100px; width: 100%; background: #ECECEC; } <header> <h1>HEADER</h1> </header> <section id="container" class="clearfix"> <div class="left-panel"> <h2>left-panel</h2> </div> <div class="content"> <div class="right-panel"> <h2>right-panel</h2> </div> <h2>main content</h2> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus!
    Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus! Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus! Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus! Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
  </div>
</section>
<footer>
  <h2>footer</h2>
  <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Inventore dolores enim pariatur amet velit magni numquam tempore ipsam, aut dolorum impedit laborum, corporis, explicabo placeat.</p>
</footer>

I also tried this:

/* MAIN CONTENT */
#container { width: 100%; height: 100%; margin-top: 80px; min-width: 980px; display: table; }
#container .left-panel { width: 400px; height: 100%; background: #CACACA; display: table-cell; }
#container .content { display: table-cell; }
#container .content .right-panel { float: right; width: 200px; height: 200px; background: #CACACA; }

The result is almost what I need, but now the main #container does not stretch to the full page height.

A: The first thing I noticed: your .left-panel has height: 100%. This property will not work unless its parent is given some definite height (that is a limitation of the property). Fine, so look at the parent... Your section#container also has height: 100% set. So what is wrong? section#container in turn looks at its own parent, and its parents are html and body, which do not have height: 100%, so the whole construction will not stretch to the full height.
Therefore set:

html, body { height: 100%; }

Second: you use

#container .content { float: right; }

On the main container, which determines the height of the whole document, it is not needed, so remove it and instead set margin-left equal to the width of the left block.

One more thing: drop the id selectors (#) from the CSS rules and use classes only.

/* CSS Base ================================= */
html, body, div, span, h1, h2, h3, h4, h5, h6, p, a, img, ol, ul, li, form {margin:0; padding:0; border:0; font-size:100%; vertical-align:baseline;}
html, body {height: 100%}
body {color:#111; font-family: "Tahoma", sans-serif;}
input, textarea {outline:none;}
h1 {font-size:36px; font-weight: normal;}
h2 {font-size:28px; font-weight: normal;}
p {font-size:18px; line-height:20px; }
a {outline:none; text-decoration:none;}
a:hover {text-decoration: none;}
img {border:none; outline:0;}
.clearfix:before, .clearfix:after { content: ""; display: block; visibility: hidden; }
.clearfix:after { clear: both; }
.clearfix { zoom: 1; }

/* HEADER */
header { width: 100%; height: 80px; background: #ECECEC; position: fixed; top: 0; right: 0; z-index: 10; }

/* MAIN CONTENT */
#container { position: relative; width: 100%; height: 100%; margin-top: 80px; }
#container .left-panel { width: 400px; height: 100%; background: #CACACA; float: left; }
#container .content { width: 100%; position: relative; margin-left: 400px; }
#container .content .right-panel { float: right; width: 200px; height: 400px; background: #CACACA; }

/* FOOTER */
footer { height: 100px; width: 100%; background: #ECECEC; }

<header>
  <h1>HEADER</h1>
</header>
<section id="container" class="clearfix">
  <div class="left-panel">
    <h2>left-panel</h2>
  </div>
  <div class="content">
    <div class="right-panel">
      <h2>right-panel</h2>
    </div>
    <h2>main content</h2>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus!
    Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus! Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus! Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus! Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
  </div>
</section>
<footer>
  <h2>footer</h2>
  <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Inventore dolores enim pariatur amet velit magni numquam tempore ipsam, aut dolorum impedit laborum, corporis, explicabo placeat.</p>
</footer>

A: One option:

html, body, header, footer, h1, h2, h3, h4, div, p { margin: 0; padding: 0; }
body { height: 100%; }
header { background: #004d80; text-align: center; color: white; height: 100px; width: 100%; top: 0; left: 0; position: fixed; }
footer { height: 120px; background: #004d80; margin-top: -120px; color: white; text-align: center; }
.container { min-height: 100%; }
.left-panel { width: 200px; background: #669999; color: white; text-align: center; display: table-cell; }
.main_content { display: table; height: 100%; }
.content { margin-top: 100px; margin-left: 200px; display: table-cell; padding-left: 15px; padding-bottom: 120px; }
.right-panel { background: #5c8a8a; text-align: center; color: white; float: right; width: 200px; height: 150px; }
.left-panel_block { margin-top: 100px; background: #5c8a8a; }

<div class="container">
  <header>
    <h1>HEADER</h1>
  </header>
  <div class="main_content">
    <div class="left-panel">
      <div class="left-panel_block">
        <h2>left-panel</h2>
      </div>
    </div>
    <div class="content">
      <div class="right-panel">
        <h2>right-panel</h2>
      </div>
      <h2>main content</h2>
      <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus! Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
      <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus! Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
      <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus! Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
      <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Obcaecati illum ullam incidunt ducimus! Iusto ullam maxime aperiam laboriosam cupiditate nesciunt magnam dolores quas maiores accusamus.</p>
    </div>
  </div>
</div>
<footer>
  <h2>FOOTER</h2>
  <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Inventore dolores enim pariatur amet velit magni numquam tempore ipsam, aut dolorum impedit laborum, corporis, explicabo placeat.</p>
</footer>
<?xml version="1.0" encoding="utf-8"?>
<PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <PreferenceCategory
        android:title="@string/pref_category_map_style"
        android:key="pref_category_map_style">

        <ListPreference
            android:key="pref_map_style"
            android:title="@string/pref_map_style"
            android:summary="%s"
            android:dialogTitle="@string/pref_map_style"
            android:entries="@array/pref_map_style_entries"
            android:entryValues="@array/pref_map_style_entries"
            android:defaultValue="@string/pref_map_style_default" />

        <ListPreference
            android:key="pref_lod"
            android:title="@string/pref_lod"
            android:summary="%s"
            android:dialogTitle="@string/pref_lod"
            android:entries="@array/pref_empty"
            android:entryValues="@array/pref_empty"
            android:defaultValue="@string/pref_lod_default" />

        <ListPreference
            android:key="pref_label_level"
            android:title="@string/pref_label_level"
            android:summary="%s"
            android:dialogTitle="@string/pref_label_level"
            android:entries="@array/pref_lod_label_entries"
            android:entryValues="@array/pref_lod_label_entries"
            android:defaultValue="@string/pref_label_level_default" />

        <ListPreference
            android:key="pref_color"
            android:title="@string/pref_color"
            android:summary="%s"
            android:dialogTitle="@string/pref_color"
            android:entries="@array/pref_empty"
            android:entryValues="@array/pref_empty"
            android:defaultValue="@string/pref_color_default" />
    </PreferenceCategory>
</PreferenceScreen>
\section{Introduction} \label{sec_intro} Understanding the formation and evolution of galaxies is one of the most important and challenging questions in cosmology. According to the standard cold dark matter (CDM) paradigm, galaxies initially form in the centers of small CDM halos via gas cooling and subsequent star formation, are gradually assembled over time through hierarchical processes, and then evolve into populations with various sizes, colors, and morphologies \citep{whi78}. In cosmic history, the era of $1 < z < 3$ is a crucial stage in terms of star formation, stellar mass content, and galaxy morphology. In this period, the star-forming activity in the Universe and the bulk of stellar mass assembly in galaxies are at their peak levels \citep{dic03,hop06,arn07}. Meanwhile, a variety of observations suggest that the cosmic star formation rate (SFR) density reaches its maximum value at z$\sim2.0$ \citep{dad05,ric06,arn07}. A number of deep and wide-field surveys have been employed in the past several years to assemble multiwavelength observations of high-redshift galaxies. As a result, the star formation history, age, and stellar mass of high-redshift galaxies are well studied by fitting the observed multiband spectral energy distributions (SEDs) with evolutionary population synthesis (EPS) models. Evolutionary population synthesis is one of the standard techniques for studying galaxy evolution at all eras \citep[][hereafter BC03]{tin78,fio97,vaz99,zha04,mar05,mar07,con09,bru03}. These EPS models play an important role in deriving star formation histories, stellar properties, and redshifts from photometry and spectra. Thus our understanding of stellar population properties, galaxy growth across cosmic time \citep{sha05,van10}, and the evolution of the stellar mass density \citep{mar09,gon10} is heavily dependent on EPS models. However, most EPS models neglect the effect of binary interactions on some stellar evolution stages.
Meanwhile, observations show that binary stars are very common in nearby star clusters and galaxies \citep[][]{abt83,kro11}. For example, \citet{rag10} presented the results of a comprehensive evaluation of the multiplicity of solar-type stars in the solar neighborhood, and their analysis showed that the binary fraction of the progenitor population was about ($50 \pm 4$)\%. \citet{sol07} analyzed the binary population of $13$ low-density Galactic globular clusters with the aim of studying their frequency and distribution. Their study revealed that these globular clusters host a fraction of binaries ranging from $10$ to $50$ per cent, depending on the cluster. \citet{sol10} used deep wide-field photometric observations to derive the fraction of binary systems in a sample of five high-latitude Galactic open clusters. They found that the estimated global fractions of binary systems ranged from $35$ to $70$ per cent, depending on the cluster. \citet{min13} analyzed the binary fractions and binary distributions of dwarf spheroidal galaxies. They found that the binary fractions of the Fornax, Sculptor, and Sextans dSphs were consistent with that of the Milky Way (about $50$\%) within the $63$\% confidence limits. When binary stars are included in EPS models, they can brighten the ultraviolet (UV) passbands by $2.0-3.0$\,mag for stellar populations at an age of about $1.0$\,Gyr \citep{zha04,zha05}. Moreover, \citet{han07} concluded that most of the UV sources in elliptical galaxies come from binary channels. Therefore, neglecting binaries in EPS models can lead to an underestimation of the SED in the UV passbands and thus affect the determination of parameters for stellar population systems \citep{zfh12,zhy12}. According to the CDM model cosmology calculator of \citet{wri06}, at redshift z$\sim2.0$ the age of the Universe is about 3.0\,Gyr, and passive galaxies at this redshift should contain young stellar populations ($< 3.0$\,Gyr).
In addition, there exist passive galaxies at redshift z$\sim2.0$ \citep{dad04,kri09,mar10,fang12} that form a red sequence at z$\sim2.3$ \citep{kri08}. The spectral shapes of passive galaxies at z$\sim2.0$ indicate that most of their stars formed over a short timescale in an intense starburst, which implies that these galaxies are in a post-starburst phase \citep{kri08}. The future evolution of passive galaxies at z$\sim2$ is unknown. Meanwhile, \citet{zha10} showed that binary interactions have a significant influence on the far-UV ($F\rm_{UV}$) band for stellar populations in the age range of $0.5-3.0$\,Gyr. Stellar populations in this age range are just starting to appear in passive galaxies at z$\sim2.0$, hence the effect of binary interactions is significant in the observed-frame optical passbands, where the rest-frame UV emission is shifted to optical wavelengths. The evolution of the age of the populations, the trend of the SFR, chemical enrichment, and morphology all produce changes in the spectra, as well as in galaxy luminosities and colors. Meanwhile, most galaxy properties are derived by means of EPS models, and the binary interactions in EPS models affect the modeled stellar populations. In this study we discuss the effect of binary interactions on the SED of passive galaxies. To quantify the influence of binary interactions on the predicted galaxy magnitudes and colors at different redshifts, we use galaxy template spectra based on the Yunnan EPS models, a reddening law, a filter set, and a set of cosmological parameters in a Monte Carlo (MC) simulation to produce passive model galaxies, and we study the evolution of the color-magnitude (C-M) and color-color (C-C) relations of passive galaxies with redshift. The structure of this paper is as follows. In Sect. \ref{sect:method}, we briefly describe the method used to generate the galaxy sample at different redshifts. In Sect.
\ref{sect:results}, we show the C-M and C-C relations predicted by the EPS models for different redshifts, followed by a discussion. The conclusion is given in Sect. \ref{sect:conclusions}. \section{Method} \label{sect:method} We used the relevant parameters together with the MC simulation to produce the model galaxies. The relevant parameters in this procedure are: (i) the set of galaxy template spectra, including SEDs for different SFRs and ages; (ii) the reddening law, which is implemented to account for the effect of interstellar dust on the shape of the SED; (iii) the filter set; and (iv) the standard dark matter model cosmological parameters $H_{0}$, $\Omega_{M}$, and $\Omega_{\Lambda}$. In this section, we describe these parameters. \subsection{The stellar population models} Evolutionary population synthesis is a technique for modeling the spectroscopic and photometric properties of stellar populations using the knowledge of stellar evolution. This technique was first introduced by \citet{tin68} and has developed rapidly ever since. Moreover, EPS models can be used to build galaxy template spectra. Recently, binary interactions have also been incorporated in EPS models by the Yunnan group \citep[Yunnan EPS models;][]{zha04,zha05,zha06}. To quantify the effect of binary interactions on the predicted galaxy magnitudes and colors at different redshifts, we build a theoretical galaxy template SED using the Yunnan models of single stellar populations \citep[Model A,][without binary interactions]{zha04} and the models of binary stellar populations \citep[Model B,][with binary interactions]{zha05}. These models present the SEDs of stellar populations with and without binary interactions at 90 ages, in the range from log($t_{i}\rm /yr$)$=5.000$ to $10.175$.
The Yunnan EPS models were built on the basis of the Cambridge stellar evolution tracks \citep{eggleton71,eggleton72,eggleton73}, the BaSeL-2.0 stellar atmosphere models \citep{lejeune97,lejeune98}, and various initial distributions of stars. The Cambridge stellar evolution tracks are obtained with the rapid single/binary evolution codes \citep{hur00,hur02}, which are based on the stellar evolution tracks of \citet{pol98}. In the binary evolution code, various processes are included, such as mass transfer, mass accretion, common-envelope evolution, collisions, supernova kicks, tidal evolution, and all angular momentum loss mechanisms. The main input parameters of the standard models are as follows:\\ (1) The IMF of the primaries gives the relative number of primaries in the mass range $M \rightarrow M +$ d$M$. The initial primary mass $M_1$ is given by \begin{equation} M_1 = \frac{0.19X}{(1-X)^{0.75} + 0.032 (1-X)^{0.25}}, \label{eq:imfms79-app} \end{equation} where $X$ is a random variable uniformly distributed in the range [0, 1]. The distribution is chosen from the approximation to the IMF of \citet{miller79} as given by \citet{eggleton89} \begin{equation} \phi(M)_{_{\rm MS79}} \propto \left\{ \begin{array}{ll} M^{-1.4}, & 0.10 \le M \le 1.00 \\ M^{-2.5}, & 1.00 \le M \le 10.0 \\ M^{-3.3}, & 10.0 \le M \le 100 \end{array} \right. \label{eq:imfms79} \end{equation} in which $M$ is the stellar mass in units of M$_{\rm \odot}$.\\ (2) The initial secondary-mass distribution, which is assumed to be correlated with the initial primary-mass distribution, satisfies a uniform distribution \begin{equation} n(q)=1.0,\,\,\, 0.0\le q \le 1.0, \label{eq:nq} \end{equation} where $q=M_{2}/M_{1}$.
\\ (3) The distribution of orbital separations (or periods) is taken to be constant in log$a$ (where $a$ is the separation) for wide binaries and falls off smoothly at close separations \begin{equation} a{\rm n}(a) = \left\{ \begin{array}{ll} a{\rm_{sep}} (a/a_{0})^{m}, \, a \leq a_{0}\\ a{\rm_{sep}}, \,\,\,\, a_{0} < a <a_{1} \end{array} \right. \label{eq:disa} \end{equation} in which $a_{\rm sep} \sim 0.070, a_0 = 10 {\rm R_\odot}, a_1 = 5.75 \times 10^6 {\rm R_\odot}$ $=0.13$\,pc, and $m \sim 1.2$ \citep[][]{han95}.\\ (4) The eccentricity distribution satisfies a uniform form $e\,=\,X$, $X\in $[0, 1].\\ Some of the relevant features of these models can be found in \citet{zha04} for Model A and \citet{zha05} for Model B. In Model B, 50 per cent of the stars in each stellar population are in binary systems with orbital periods of less than 100\,yr; this fraction is typical of the Galaxy. We note that both EPS models use the same star sample ($2.5 \times 10^{7}$ binary systems). We assume that all stars are born at the same time. \subsection{Theoretical galaxy template} \label{ssect:template} The galaxy template should not only comprise the SEDs at different ages, but should also include the star formation history (SFH) of a galaxy. The EPS models only provide the SEDs of stellar populations without any SFR at different ages. Therefore, at a given age we need to generate the SEDs of galaxies with different SFHs by means of the EPS models. Several studies have found that the observational properties of local field galaxies with different SFHs can be roughly matched by populations with different SFRs. For example, \citet{ken86} used an exponentially declining SFR to describe the local spirals, and their results could explain the observations well.
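The input distributions (1)-(4) above amount to a simple Monte Carlo recipe. The following Python sketch is our own illustration of that sampling, not the Yunnan/Hurley code; for brevity it samples only the flat-in-$\log a$ part of Eq. (4), and the function names are ours:

```python
import random

def primary_mass(x):
    """Initial primary mass (in solar masses) from a uniform deviate x in [0, 1),
    via the inversion of the Miller & Scalo (1979) IMF, Eq. (1)."""
    return 0.19 * x / ((1.0 - x) ** 0.75 + 0.032 * (1.0 - x) ** 0.25)

def draw_binary(rng=random):
    """Draw (M1, M2, a, e) for one binary system following Eqs. (1)-(4)."""
    m1 = primary_mass(rng.random())
    q = rng.random()                 # n(q) = 1 on [0, 1], Eq. (3)
    m2 = q * m1
    # Separation: flat in log(a) between a0 and a1 (the wide-binary branch
    # of Eq. (4)); the power-law fall-off below a0 is omitted for brevity.
    a0, a1 = 10.0, 5.75e6            # solar radii
    a = a0 * (a1 / a0) ** rng.random()
    e = rng.random()                 # uniform eccentricity
    return m1, m2, a, e
```

Drawing $2.5 \times 10^{7}$ such systems reproduces the size of the star sample quoted above; a uniform deviate of 0.5 gives a primary of about 0.15 M$_{\rm \odot}$, as expected for a bottom-heavy IMF.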
In this work, the SFH of passive galaxies is described by a widely used \citep{bru83,pap01,sha05,lee09,wuy09} exponentially declining SFR \begin{equation} \label{eq:e-decling-sfr} \psi(t) \propto \exp(-t/\tau) , \end{equation} where $\tau$ and $t$ are the $e-$folding time scale and the age of the population, respectively. We focus on passive galaxies, so the range of $\tau$ is from $0.01$\,Gyr to 1.0\,Gyr: $\tau = 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9,$ and $1.0$\,Gyr. The BC03 software package provides the code to construct galaxy templates with different SFRs. Using the stellar population models of the Yunnan EPS models and the $e-$folding SFR, we use the BC03 package to build the galaxy templates. For both Model A and Model B we also assume a metallicity $Z=0.02$, and the age range is from 10$^{5}$ yr to 10$^{10.175}$ yr. \subsection{Definitions of other parameters} Other parameters used in this work are given as follows: \begin{itemize} \item The reddening law of \citet{cal00} is used. The observed shape of the stellar emission $F\rm_{o}(\lambda)$ is reddened using the starburst reddening curve $k(\lambda)=A(\lambda)/E_{s}(B-V)$ with the standard formulation \citep{cal94} \begin{equation} \label{eq:redden-raw} F_{o}(\lambda)=F_{i}(\lambda)10^{-0.4E_{s}(B-V)k(\lambda)} , \end{equation} where $F_{o}(\lambda)$ and $F_{i}(\lambda)$ are the observed and intrinsic stellar continuum flux densities, respectively. The color excess of the stellar continuum $E_{s}(B-V)$ is linked to the color excess $E(B-V)$ derived from the nebular gas emission lines via $E_{s}(B-V)=(0.44\pm0.03)E(B-V)$ \citep{cal97}.
The expression for $k(\lambda)$ is \begin{equation} \label{eq:k-lambda} k(\lambda)= \left\{ \begin{array}{ll} 2.659 \left( -2.156 + \frac{1.509}{\lambda} - \frac{0.198}{\lambda ^{2}} + \frac{0.011}{\lambda^{3}} \right) + R_{V}, \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(0.12 \mu m \leq \lambda \leq 0.63 \mu m)\\ 2.659 \left( -1.857 + \frac{1.040}{\lambda} \right) + R_{V}, \\\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(0.63 \mu m \leq \lambda \leq 2.20 \mu m) \end{array} \right. . \end{equation} In this work, the extinction $A_{V}$ is allowed to vary from $0$ to $3$ in steps of $0.2$, which corresponds to $E(B-V)$ varying from $0$ to $0.74$ according to the reddening law ($R\rm_{V} = 4.05$) of \citet{cal00}. \item We have selected the Sloan Digital Sky Survey (SDSS) standard filter system $u g r i z$ with an air mass of $1.3$. Table \ref{tbl:filter} presents the characteristics of these filters, the effective wavelength $\rm \lambda_{eff}$, and the full width at half maximum (FWHM). \begin{table}[!htp] \begin{center} \caption{Properties of SDSS filters: the effective wavelength $\rm \lambda_{eff}$ and FWHM.\label{tbl:filter}} \begin{tabular}{ccc} \hline\hline Filters & $\rm \lambda_{eff}$ ($\rm \AA$) & FWHM ($\rm \AA$) \\ \hline $u$ &3551 &581 \\ $g$ &4686 &1262\\ $r$ &6166 &1149\\ $i$ &7480 &1237\\ $z$ &8932 &994 \\ \hline \end{tabular} \end{center} \end{table} \item Finally, we have adopted a set of standard dark matter model cosmological parameters ($\Omega\rm_{\Lambda}, \Omega\rm_{M}, H_{0}$) $= (0.7, 0.3, 70.0)$. \end{itemize} \section{Results and discussion} \label{sect:results} By means of the MC simulation and the relevant parameters above, we produced passive model galaxies with the $e-$folding SFR at different redshifts. In this section, we present the C-M and C-C relations of the passive model galaxies at different redshifts and investigate the effect of binary interactions on the predicted magnitudes and colors.
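Two of the ingredients defined in Sect. \ref{sect:method} are easy to verify numerically. The Python sketch below (our own illustration, not the authors' pipeline) implements the $e$-folding SFR of Eq. (5) and the Calzetti attenuation of Eqs. (6)-(7); as a sanity check, $k(0.55\,\mu$m$) \approx R_V$, as expected for the $V$ band:

```python
import math

def sfr(t, tau):
    """Unnormalized e-folding star formation rate, psi(t) ~ exp(-t/tau), Eq. (5)."""
    return math.exp(-t / tau)

def mass_fraction_formed(t, tau):
    """Fraction of the final (t -> infinity) stellar mass formed by time t
    for the e-folding SFR: 1 - exp(-t/tau)."""
    return 1.0 - math.exp(-t / tau)

def k_calzetti(lam, r_v=4.05):
    """Calzetti et al. (2000) starburst curve k(lambda), lambda in microns, Eq. (7)."""
    if 0.12 <= lam < 0.63:
        return 2.659 * (-2.156 + 1.509 / lam - 0.198 / lam**2
                        + 0.011 / lam**3) + r_v
    if 0.63 <= lam <= 2.20:
        return 2.659 * (-1.857 + 1.040 / lam) + r_v
    raise ValueError("lambda outside the 0.12-2.20 micron validity range")

def redden(f_intrinsic, lam, ebv_s):
    """Observed flux after reddening, F_o = F_i * 10**(-0.4 * E_s(B-V) * k(lambda)), Eq. (6)."""
    return f_intrinsic * 10.0 ** (-0.4 * ebv_s * k_calzetti(lam))
```

For the shortest time scale used here ($\tau = 0.01$\,Gyr), more than 99 per cent of the stellar mass is already in place after 0.05\,Gyr, which is why these templates describe passive, post-starburst systems.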
Considering the influence of binary interactions on the observed-frame optical passbands at $z\sim2$, we only focus on the optical passbands. We note that the $ugriz$ Vega magnitudes are in the observed frame. \subsection{Effect of binary interactions on magnitudes and colors} The model galaxies at $z\sim 0.0, 1.0, 2.0$, and 3.0 correspond to the galaxies in the ranges $0.0\leq z \leq 0.3$, $0.7\leq z \leq 1.3$, $1.7\leq z \leq 2.3$, and $2.7\leq z \leq 3.0$, respectively. We take the range $2.7\leq z \leq 3.0$ as $z\sim 3.0$, because the colors always lie outside the $u-$band for higher-redshift galaxies \citep[$z\sim 3.4$,][]{lee09}. In this work, we generate 10\,000 model galaxies for each redshift with each model (Model A and Model B). We assume that the formation redshift of the galaxies is z$=10.0$, corresponding to an age of the Universe of about 0.48\,Gyr. \begin{figure*}[!htp] \begin{center} \includegraphics[bb=30 25 800 700,height=12.cm,width=12.cm,clip,angle=0,scale=0.5,angle=0]{a-cmd.pdf} \includegraphics[bb=15 25 800 700,height=12.cm,width=12.cm,clip,angle=0,scale=0.5,angle=0]{a-cc.pdf} \caption{The C-M ($g-r$ color versus $g$ magnitude) and C-C ($g-r$ color versus $u-g$ color) relations for passive galaxies at redshifts of $0.0, 1.0, 2.0$, and $3.0$. Left- and right-hand panels show the C-M and C-C relations, respectively. The blue open circles and black solid circles represent the galaxies generated by Model A and Model B, respectively.} \label{fig:a-subsample} \end{center} \end{figure*} Figure \ref{fig:a-subsample} shows the observed-frame $g-r$ color versus $g$ magnitude and $u-g$ color for passive galaxies at four different redshifts ($z=0.0, 1.0, 2.0$, and $3.0$) based on Model A and Model B. The blue open and black solid circles represent the model galaxies based on Model A and Model B, respectively. The left- and right-hand panels show the C-M and C-C relations, respectively.
We can see that the model galaxies based on Model A and Model B overlap in both the C-M and C-C relations at $z\sim 0.0$ [panels (a)] and $1.0$ [panels (b)]. However, there is a significant offset in both the C-M and C-C relations between Model A and Model B at $z\sim 2.0$ [panels (c), especially for the C-C relation]. The offset at $z\sim 3.0$ [panels (d)] between Model A and Model B is smaller than that at $z\sim 2.0$. At $z\sim 2.0$, the galaxy sample based on Model B is shifted to lower $g$ and bluer $g-r$, but redder $u-g$, relative to that based on Model A. At $z\sim 3.0$, the galaxy sample based on Model B is shifted to lower $g$ magnitudes but redder $g-r$ and $u-g$ colors than that based on Model A. \subsection{Age distribution of stellar populations} \begin{figure}[!htp] \begin{center} \includegraphics[bb=18 15 750 600,height=8.cm,width=8.cm,clip,angle=0]{age-dis.pdf} \caption{The age distribution of stellar populations in passive galaxies at different redshifts. The different colors of the solid lines represent galaxies with different $e-$folding time scales $\tau$ ($\tau = 0.01, 0.10, 0.50, 0.70,$ and 1.0\,Gyr). The vertical dotted lines in each panel mark the ages of 0.5 and 3.0\,Gyr, between which the effect of binary interactions is significant.} \label{fig:age-dis} \end{center} \end{figure} To give a detailed analysis, we display the age distribution of stellar populations in galaxies at different redshifts in Fig. \ref{fig:age-dis}. Panels (a), (b), (c), and (d) correspond to $z\sim0.0, 1.0, 2.0$, and $3.0$, respectively. The different colors show galaxies with different $e-$folding time scales $\tau$: $\tau = 0.01, 0.10, 0.50, 0.70,$ and 1.0\,Gyr. The vertical dotted lines in each panel stand for the lower and upper age limits between which binary interactions are important, 0.5 and 3.0\,Gyr. At redshift $z\sim3.0$ we only show the lower age limit because the upper age limit is beyond the age of the Universe.
We find that the ages of the stellar populations are all younger than the age of the Universe at each fixed redshift. Based on the age distribution of the stellar populations in passive galaxies in Fig. \ref{fig:age-dis}, the detailed analyses of the phenomena in Fig. \ref{fig:a-subsample} are as follows: \begin{itemize} \item Using the \citet{wri06} CDM model cosmology calculator and our set of parameters, we calculate the age of the Universe at different redshifts. The ages of the Universe are about $13.7$, $5.9$, $3.0$, and $2.0$\,Gyr at redshift $z\sim 0.0, 1.0, 2.0$, and $3.0$, respectively. From the panels of Fig. \ref{fig:age-dis}, we see that the age of the stellar populations tends to peak at $12.5$, $5.25$, $2.75$, and $1.5$\,Gyr for model galaxies at redshift $z\sim 0.0, 1.0, 2.0$, and $3.0$, respectively. As shown above, the effect of binary interactions is obvious for stellar populations with ages in the range of $0.5-3.0$\,Gyr. From the peak ages and the age distributions of the stellar populations in the model galaxies at different redshifts, we find that the ages of the stellar populations in model galaxies at redshift $z\sim 2.0$ and $3.0$ fall exactly in the age range of $0.5-3.0$\,Gyr, whereas the ages of the stellar populations in the model galaxies at redshift $z\sim 0.0$ and $1.0$ lie beyond this age range. This characteristic can also be seen in Fig. \ref{fig:age-dis}, where the ages of the stellar populations lie within the dotted lines for model galaxies at $z\sim 2.0$ and $3.0$, but outside the dotted lines for $z\sim 0.0$ and $1.0$. This verifies that the effect of binary interactions is obvious in model galaxies at redshift $z\sim 2.0$ and $3.0$, but not at redshift $z\sim 0.0$ and $1.0$, and it explains why an offset exists between the model galaxies based on Model A and Model B at redshift $z\sim 2.0$ and $3.0$, whereas no such offset appears at redshift $z\sim 0.0$ and $1.0$. \item For Fig.
\ref{fig:a-subsample}, we also point out that the offset between the model galaxies based on Model A and Model B at redshift $z\sim 3.0$ is smaller than that at redshift $z\sim 2.0$. As shown in Fig. \ref{fig:age-dis}, the ages of the stellar populations in model galaxies at $z\sim 2.0$ and $3.0$ are within the age range influenced by binary interactions. In comparison, some very young ($<0.5$\,Gyr) stellar populations emerge in model galaxies at $z\sim 3.0$ in panel (d), which do not appear in panel (c) at $z\sim 2.0$. Such young stellar populations also radiate UV light, which mimics the effect of binary interactions. \citet{kav09} demonstrated that the UV flux is highly sensitive to young stellar populations. They constructed a model in which an old ($10.0$\,Gyr) population contributed $99.0$ per cent of the stellar mass, with a $1.0$ per cent contribution from stars that were $0.3$\,Gyr old. They found that the UV flux of the combined SED came purely from the young population, which confirms that young stellar populations contribute strongly to the UV flux and can therefore mask the effect of binary interactions. In addition, the radiation in the rest-frame $F\rm_{UV}-$band moves into the observed-frame $g-$band at $z\sim 2.0$, and into the observed-frame $g-$ and $r-$bands at $z\sim 3.0$. In other words, binary interactions affect the observed-frame $r-$band, and this effect is larger than that on the $g-$band for model galaxies at $z\sim 3.0$. The $r$ magnitude therefore becomes smaller, which makes the $g-r$ color redder. \end{itemize} In general, binary interactions can influence the optical passbands (i.e., the $g$-band) for passive galaxies at $z\sim2.0$. The inclusion of binary interactions in EPS models can reduce the $g-$band magnitude by $1.5$\,mag, make the $g-r$ color bluer by $1.0$\,mag, and make the $u-g$ color redder by $1.0$\,mag for passive galaxies at redshift $z\sim2.0$.
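The ages of the Universe quoted above can be checked against the closed-form age-redshift relation of a flat $\Lambda$CDM model with $(\Omega_{\Lambda}, \Omega_{M}, H_{0}) = (0.7, 0.3, 70)$. A small sketch (our own check, not the \citet{wri06} calculator itself; it gives $\approx 13.5$\,Gyr at $z=0$ and $\approx 3.2$\,Gyr at $z=2$, close to the rounded values used in the text):

```python
import math

# Megaparsec in km and Julian Gyr in seconds, to convert 1/H0 to Gyr.
MPC_KM = 3.0857e19
GYR_S = 3.1557e16

def age_of_universe(z, h0=70.0, omega_m=0.3, omega_l=0.7):
    """Age of a flat LambdaCDM universe at redshift z, in Gyr.

    Uses the analytic form t(z) = (2 / (3 H0 sqrt(omega_l)))
    * asinh(sqrt(omega_l / omega_m) * (1 + z)**-1.5),
    valid when omega_m + omega_l = 1."""
    hubble_time = (MPC_KM / h0) / GYR_S  # 1/H0 in Gyr
    x = math.sqrt(omega_l / omega_m) * (1.0 + z) ** -1.5
    return hubble_time * (2.0 / (3.0 * math.sqrt(omega_l))) * math.asinh(x)
```

This confirms that at $z\sim2.0$ a stellar population can be at most about 3\,Gyr old, the upper edge of the age window in which binary interactions matter.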
\subsection{Dependence on the EPS models} The validity of the results depends on the assumptions of the EPS models. The main input parameters and distributions are (i) the common-envelope (CE) ejection efficiency $\alpha \rm_{CE}$, (ii) the coefficient $\eta$ for the Reimers wind mass loss, (iii) the IMF of the primaries, (iv) the secondary-mass distribution, (v) the distribution of orbital separations, and (vi) the eccentricity distribution. From \citet{mar11}, we know that the distribution of the mass ratio ($n(q)$) is still uncertain and a matter of debate. For the EPS models, \citet{zha05} simulated real populations by producing $2.5\times10^{7}$ binary systems with three forms of $n(q)$: $n(q)=1$, $n(q)=2q$, and $M_{2}$ uncorrelated with $M_{1}$. Moreover, they found that the discrepancy in colors caused by the choice of the initial distribution of $q$ is smaller than that caused by the inclusion of binary interactions. \citet{zha05} investigated the effects of some input parameters (the CE ejection efficiency $\alpha\rm_{CE}$ and the Reimers wind mass-loss coefficient $\eta$) and input distributions (eccentricity and the initial mass of the secondaries) on the integrated colors of Model B. The results revealed that variations in the choice of input model parameters and distributions can affect the results. For example, a circular distribution makes the colors redder than the eccentricity distribution, increasing $\alpha\rm_{CE}$ makes the integrated colors bluer, and variation of the mass ratio leads to fluctuations in the integrated colors, with this fluctuation being greater at late ages than at early ages. However, comparing the discrepancies among the integrated colors, they also found that the differences between the models with and without binary interactions were greater than those caused by the variations in the choice of input parameters and distributions. The choice of a different form of the IMF could also affect the results.
\citet{zfh12} analyzed the effect of the IMF on the SFR calibration in terms of UV luminosity for burst, S0, Sa-Sd, and Irr galaxies. They adopted the different IMFs of \citet{miller79} and \citet{salpeter55} to investigate this effect, and found that the effect on the UV luminosity caused by the variation in the form of the IMF was smaller than that between the models with and without binary interactions for burst galaxies at all ages. Thus the variation in the form of the IMF leads only to small effects on the above results. Therefore, the inclusion of binary interactions is the main cause of the phenomena in Fig. \ref{fig:a-subsample}. \section{Conclusions} \label{sect:conclusions} We used the galaxy template spectra built on the Yunnan EPS models with $e$-folding SFRs, the reddening law of \citet{cal00}, the standard SDSS filters, and the cosmological parameters of the standard dark matter model, together with MC simulations, to produce passive model galaxies in the redshift range $z=0.0$ to $3.0$, and then studied the effect of binary interactions on the predicted C-M and C-C relations with redshift. By comparing the predicted C-M and C-C relations of the galaxies, we can investigate the effect of binary interactions on these predicted relations. For the passive galaxies, we find that the predicted C-M and C-C relations of model galaxies based on Model A and Model B show a large offset at redshift $z\sim2.0$, especially for the C-C relation. These offsets are mainly produced by the inclusion of binary interactions in the EPS models. At redshifts $z\sim0.0$ and $1.0$, the ages of the stellar populations are beyond the age range of the effect of binary interactions ($0.5-3.0$\,Gyr), so the effect of binary interactions is insignificant. Moreover, at redshift $z\sim3.0$, young stellar populations ($t<0.5$\,Gyr) exist in the model galaxies, which again lie outside the age range of the effect of binary interactions.
The existence of young stars can influence the effect of binary interactions. At redshift $z\sim2.0$, the ages of the stellar populations fall squarely within the age range of the effect of binary interactions and are not influenced by very young stars; therefore the effect of binary interactions is very obvious at this redshift. Binary interactions can make the $g$-band magnitude smaller (brighter) by $1.5$\,mag, the $g-r$ color bluer by $1.0$\,mag, and the $u-g$ color redder by $1.0$\,mag for the passive galaxies at this redshift. We note that the effects of different choices of input parameters and distributions for the EPS models on the above results are smaller than that of the inclusion of binary interactions. Because galaxies are widely observed in optical passbands, the derived stellar population properties of passive galaxies can also be affected by binary interactions. We will analyze the impact of binary interactions on the stellar population properties of observed passive galaxies in our follow-up papers. \begin{acknowledgements} This work is supported in part by the Xinjiang Natural Science Foundation (Grant No. 2011211A104), the Natural Science Foundation (Grant Nos. 11273053, 11033008, 10821061, 2007CB815406, and 11103054), and by the Chinese Academy of Sciences under Grant No. KJCX2-YW-T24. The authors are also supported by the program of the Light in China's Western Region (LCWR) under Grants XBBS201221 and XBBS2011022. \end{acknowledgements} \bibliographystyle{aa}
Q: How to handle the LogonHours property on Windows 10 with PowerShell? I'm stuck on an issue reading the logonHours property and would appreciate any help. On a server, or on any computer that has RSAT.ActiveDirectory installed, I can read it with Get-ADUser -Identity 'user' -Property LogonHours, and convert between the bits/bytes format and a more human-friendly form. But I need to read it from any/all workstations in a network, most of which don't have RSAT installed, and I think installing it on every machine doesn't make sense. Googling, I found a WMI way to get the property, like this: Get-WMIObject -Class 'Win32_NetworkLoginProfile' -Property LogonHours -Filter "Caption Like '$Env:UserName'" I was expecting to get the value in a similar format, as a byte array with 168 bits representing the hours of a week, as documented here (Docs.MS.CIM_Win32_NetworkLoginProfile#LogonHours). Instead, I'm getting something like the output below, which is impossible to manipulate due to time-zone differences and the messy result; for instance, the first hour for Monday actually shows up as something like 1900 on Sunday:
# net user username /time:w-su,8-18
# Get-WMIObject -Class 'Win32_NetworkLoginProfile' -Property LogonHours -Filter "Caption Like '$Env:UserName'"
-- Sunday: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200,
-- Monday: Access Denied
-- Tuesday: Access Denied
-- Wednesday: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200,
-- Thursday: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200,
-- Friday: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200,
-- Saturday: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200
# not used, just to show the issue
# net user username /time:m,0-1
# Get-WMIObject -Class 'Win32_NetworkLoginProfile' -Property LogonHours -Filter "Caption Like '$Env:UserName'"
-- Sunday: 1900,
-- Monday: Access Denied
-- Tuesday: Access Denied
-- Wednesday: Access Denied
-- Thursday: Access Denied
-- Friday: Access Denied
-- Saturday: Access Denied
I'm not a developer, and for now I'm using PowerShell as it is more convenient for our team, but I think we would be able to handle some simple C# or VB.NET code for a very specific need. Still, I tried to find something about this in any language and didn't find anything. Thanks in advance to anyone who can help. Rich
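Edit: for reference, this is how I'd expect to unpack the documented 21-byte array once I can get hold of it (a minimal Python sketch of my own, assuming the LSB-first bit order within each byte and the Sunday-00:00-UTC start described in the linked docs; the function name is just illustrative):

```python
def decode_logon_hours(blob: bytes) -> dict:
    """Expand the 21-byte logonHours value (168 bits, one bit per hour of
    the week, starting Sunday 00:00 UTC) into per-day lists of allowed hours.

    Assumes LSB-first bit order within each byte; shifting the result into
    the local time zone (the '1900 on Sunday' effect) is left to the caller.
    """
    if len(blob) != 21:
        raise ValueError("logonHours should be exactly 21 bytes")
    # flatten to 168 bits, least significant bit of each byte first
    bits = [(byte >> i) & 1 for byte in blob for i in range(8)]
    days = ["Sunday", "Monday", "Tuesday", "Wednesday",
            "Thursday", "Friday", "Saturday"]
    return {day: [h for h in range(24) if bits[d * 24 + h]]
            for d, day in enumerate(days)}

# a user allowed to log on at all times has every bit set
print(decode_logon_hours(bytes([0xFF] * 21))["Monday"])  # all 24 hours
```

Converting from UTC to local time would still need to be handled separately on top of this.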
Tsim Sha Tsui, often abbreviated as TST, is an urban area in southern Kowloon, Hong Kong. The area is administratively part of the Yau Tsim Mong District. Wan Chai is a metropolitan area situated at the western part of the Wan Chai District on the northern shore of Hong Kong Island, in Hong Kong. Its other boundaries are Canal Road to the east, Arsenal Street to the west and Bowen Road to the south. For parkour pro Ryan Doyle, every new city he visits is full of freerunning potential. But it's not just the amazing diversity of landscapes and endless ways to navigate them that motivates his adventurous spirit; it's the chance to find and connect with other like-minded individuals. While locals and tourists roam the streets below, a whole community goes unnoticed up above. Are you a parkour enthusiast? Pin your favorite spots to run, flip and jump to your profile map and share them with your growing community.
\section{Introduction} The irreversible mixing at a molecular level of two fluids of different densities $\rho^{*}_{2} > \rho^{*}_{1}$ is a fluid dynamical process of great fundamental interest and practical importance, especially when the fluids are turbulent. Such turbulent mixing flows occur in many different circumstances. A particularly important class arises when the buoyancy force associated with the effects of statically unstable variations in fluid density in a gravitational field actually drives both the turbulence and the ensuing mixing itself. Such flows, commonly referred to as `Rayleigh-Taylor instability' (RTI) flows due to the form of the initial linear instability (\cite{R1900,T1950}), have been very widely studied (see \cite{S1984,Youngs1984,Youngs1989,Glimm2001,D2004,D2005,Lee2008,Hyunsun2008,AD2010}), not least because of their relevance in astrophysics \citep{Cabot2006a} and fusion \citep{P1994}. A key characteristic of RTI flows is that the turbulence which develops is not driven by some external forcing mechanism, but rather is supplied with kinetic energy by the conversion of `available' potential energy stored in the initial density field. This kinetic energy naturally drives turbulent disorder and a cascade to small scales, with an attendant increase in the dissipation rate of kinetic energy. Such small scales also lead to `filamentation', i.e. enhanced surface area of contact between the two miscible fluids and, crucially, substantially enhanced gradients in the density field, which thus also leads to irreversible mixing, and hence modification in the density distribution. There has been an explosion in interest in investigating the `efficiency' of this mixing, i.e. 
loosely, the proportion of the converted available potential energy which leads to irreversible mixing, as opposed to viscous dissipation, (see the recent review of \cite{T2013}), although the actual definition and calculation of the efficiency is subtle and must be performed with care -- see for example \cite{DWD2014} for further discussion. Nevertheless, there is accumulating evidence that buoyancy-driven turbulence is particularly efficient in driving mixing \citep{LD2011,DWD2014} and certainly more efficient than externally forced turbulent flow. This evidence poses the further question whether there are some distinguishing characteristics of the buoyancy-driven turbulent flow that are different from the flow associated with an external forcing, in particular whether these characteristics can be identified as being responsible for the enhanced and efficient mixing. The situation is further complicated by the observation that, even when the two fluids undergoing mixing are themselves incompressible, since molecular mixing generically changes the specific volume of the mixture, the velocity fields of such `variable density' (VD) flows, (following the nomenclature suggested by \cite{LR2007}) are in general not divergence-free. This is definitely the case when the two densities are sufficiently different such that the Boussinesq approximation may not be applied. Commonly, the Boussinesq approximation is applied when the Atwood number $At$, defined as \begin{equation}\label{eq:adef} At = \frac{\rho^{*}_2 - \rho^{*}_1}{\rho^{*}_2 + \rho^{*}_1}\,, \end{equation} is small\,; i.e, $At \ll 1$. However, as discussed in detail in \cite{LR2007}, non-Boussinesq effects may occur when gradients in the density field become large. 
Following \cite{CD2001} and \cite{LR2007}, the composition density $\rho^{*} (\mathbi{x},\,t)$ of a mixture of two constant fluid densities $\rho^{*}_{1}$ and $\rho^{*}_{2}$ ($\rho^{*}_{2} > \rho^{*}_{1}$) is expressed in dimensionless form by \begin{equation}\label{sma1} \frac{1}{\rho^{*}(\mathbi{x},\,t)} = \frac{Y_1(\mathbi{x},\,t)}{\rho^{*}_{1}} + \frac{Y_2(\mathbi{x},\,t)}{\rho^{*}_{2}}\,, \end{equation} where $Y_{i}(\mathbi{x},\,t)$ ($i=1,2$) are the mass fractions of the two fluids and $Y_1+Y_2=1$. Equation (\ref{sma1}) shows that the composition density $\rho^{*}$ is bounded by \begin{equation}\label{sma2} \rho^{*}_{1} \leq \rho^{*}(\mathbi{x},\,t) \leq \rho^{*}_{2}\,. \end{equation} Assuming that there is Fickian diffusion, the mass transport equations for the two species are \begin{equation}\label{eq:mass} \partial_{t}\left( \rho^{*} Y_i \right) + \bnabla \cdot \left( \rho^{*} Y_i \mbox{\boldmath$u$} \right) = Pe_{0}^{-1}\bnabla \cdot \left(\rho^{*}\bnabla Y_i \right)\,, \end{equation} where $Pe_{0}$ is the P\'eclet number\,: the dimensionless Reynolds, Schmidt and P\'eclet numbers are defined in Table \ref{tab:param}. Since the specific volume $1/\rho^{*}$ changes due to mixing, a non-zero divergence is induced in the velocity field (see Appendix A)\,: \begin{equation}\label{eq:divu} \bnabla \cdot \mbox{\boldmath$u$} = - Pe_{0}^{-1}\Delta\left (\ln \rho^{*} \right) = - \frac{Pe_{0}^{-1}}{\rho^{*}} \Delta\rho^{*} + \frac{Pe_{0}^{-1}}{\rho^{*2}} |\bnabla \rho^{*}|^{2}\,, \end{equation} while summing (\ref{eq:mass}) over the two species yields the conventional continuity equation for mass conservation \begin{equation}\label{eq:masscont} \partial_{t}\rho^{*} + \bnabla \cdot \left (\rho^{*} \mbox{\boldmath$u$} \right) = 0\,.
\end{equation} As discussed in \cite{LR2007}, the Boussinesq approximation, under which the velocity field is divergence-free, $\bnabla \cdot \mbox{\boldmath$u$} =0$, and the mass conservation equation becomes \begin{equation}\label{eq:bouss_mass} \partial_{t}\rho^{*} + \mbox{\boldmath$u$}\cdot\bnabla\rho^{*} = Pe_{0}^{-1}\Delta \rho^{*}\,, \end{equation} relies on the requirement that the second (nonlinear) term on the right-hand side of (\ref{eq:divu}) can be ignored compared to the first term, i.e. that \begin{equation}\label{eq:boussapprox} |\bnabla \rho^{*}|^2 \ll \rho^{*}|\Delta\rho^{*}|\,. \end{equation} As noted by \cite{LR2007}, this condition may be violated if substantial gradients develop in the density field. It is not {\it a priori} clear, even when the Atwood number is very small, whether the non-solenoidal nature of the velocity field qualitatively changes the properties of the turbulent flow in ways which are significant to the mixing, and specifically whether regions may develop in the flow where the condition (\ref{eq:boussapprox}) is violated. This issue can be explored by careful numerical simulation, as reviewed by \cite{L2013}, with a key observation (see \cite{LR2007} for more details) being that the pressure distribution is substantially modified by non-Boussinesq effects. Furthermore, the central role played by intermittency and anisotropy, as discussed in \cite{LR2008}, suggests that it would be instructive to focus carefully on the time-dependent evolution of nonlinearity within such buoyancy-driven, variable density flows.
Recently, a new method to assess the evolution (and depletion) of nonlinearity within turbulent flows has been developed centred on consideration of appropriately dimensionless $L^{2m}$ norms of the vorticity $\mbox{\boldmath$\omega$}=\bnabla \times \mbox{\boldmath$u$}$ and of the gradient $\bnabla\theta$ where \begin{equation}\label{theta1} \theta = \ln (\rho^{*}/\rho^{*}_{0})\qquad\mbox{with}\qquad \rho^{*}_{0} = \frac{1}{2}\left(\rho^{*}_{1}+\rho^{*}_{2}\right)\,. \end{equation} These $L^{2m}$-norms are scaled by an exponent ($\alpha_{m}= 2m/(4m-3)$), the origin of which comes from symmetry considerations for the three-dimensional Navier-Stokes equations (\cite{DGKGPV13,GDKGPV14,JDGIMA2015}). These ideas are explained in \S\ref{defns} and \S\ref{D1theta}. We have been able to calculate these various scaled norms through a re-analysis of a dataset of D. Livescu, arising from the simulation of a buoyancy-driven flow very similar to that reported in \cite{LR2007}, which is freely available at the Johns Hopkins Turbulence Database (JHTDB). Using this re-analysis, there are three central questions which we wish to answer as the primary aims of this paper. First, can the analysis approach described in \cite{DGKGPV13,GDKGPV14} be usefully generalised to consider the gradient of the density field, as that is naturally closely related to the buoyancy-driven mixing within the flow? Second, if such a generalisation can be made, can the growth of gradients in the density field be bounded or controlled in any meaningful way, as such bounds could yield valuable insights into the structure and regularity of the density field and the uniform validity of the Boussinesq approximation for flows with $At \ll 1$, which may explain the `efficiency' of mixing associated with buoyancy-driven turbulence? Third, does buoyancy-driven turbulence exhibit similar nonlinear depletion in the velocity field to the constant-density flows previously considered in \cite{DGKGPV13}? 
To address these questions, the rest of the paper is organised as follows. In section \ref{sec:jh}, we describe in detail the properties of the simulation data set which we re-analyse, and we then present the results of this re-analysis in section \ref{sec:theta}. Finally, we draw our conclusions in section \ref{sec:conc}. \section{Description of the database}\label{sec:jh} As noted in the introduction, to study nonlinear depletion in buoyancy-driven turbulence we use the Johns Hopkins Turbulence Database (JHTDB) \citep{JHTDB}, a publicly available direct numerical simulation (DNS) database. For more information, please see \url{http://turbulence.pha.jhu.edu/}. The equations used for this problem are the miscible two-fluid incompressible Navier-Stokes equations given by\,: \begin{eqnarray}\label{gov_eqn} \partial_{t}\rho^{*} + (\rho^{*} u_j)_{,j} &=& 0 \\ \partial_{t}(\rho^{*} u_i) + (\rho^{*} u_i u_j)_{,j} &=& -p_{,i} + \tau_{ij,j} + \frac{1}{Fr^2}\rho^{*} g_{i} \\ u_{j,j} &=& -\frac{1}{Re_0 Sc} \left(\ln \rho^{*}\right)_{,jj}\label{divlog}\\ \tau_{ij} &=& \rho^{*} Re^{-1}_{0}\left(u_{i,j}+u_{j,i} - \twothirds \delta_{ij}u_{k,k}\right)\label{divergence} \end{eqnarray} where $\rho^{*}$ is the non-dimensional density of the mixture. For this problem the individual densities of the two components, $\rho^{*}_1$ and $\rho^{*}_2$, are constant, but due to changes in the mass fractions of each species the density of the mixture can change, as in (\ref{sma1}). For this reason, the divergence of the velocity depends on the density, as seen in equation (\ref{divlog}). The variable-density version of the petascale CFDNS code \citep{CFDNS} was used to carry out the direct numerical simulation on $1024^3$ grid points (for more information on a similar numerical study, refer to \citet{LR2007}). The Atwood number $At$, which characterizes the density difference, is 0.05 and represents a small departure from the Boussinesq approximation.
Some of the other important simulation parameters are displayed in Table \ref{tab:param}, where $U_{0}$ is the reference velocity scale, $\mu_{0}$ is the dynamic viscosity and $D$ is the mass diffusivity. \begin{table} \begin{center} \def~{\hphantom{0}} \begin{tabular}{llc} Reynolds number & $\Rey_0 = \rho^{*}_{0}L_{0}U_{0}/\mu_{0}$ & 12500 \\ Froude number & $Fr = U_{0}/\sqrt{gL_{0}}$ & 1 \\ Schmidt number & $Sc = \mu_{0}/D\rho^{*}_{0}$ & 1 \\ P\'eclet number & $Pe_{0} = \Rey_{0}Sc$ & 12500\\ Atwood number & $At = (\rho^{*}_2-\rho^{*}_1)/(\rho^{*}_2+\rho^{*}_1)$ & 0.05 \\ Domain length & $L$ & $2\pi$\\ Non-dimensionalization length & $L_{0}$ & $1$ \end{tabular} \caption{Simulation parameters}\label{tab:param} \end{center} \end{table} In the beginning, the fluids are initialized as random blobs, with periodic boundaries in each direction and an initial diffusion layer at the interface. At sufficiently late times, the statistically homogeneous turbulent flow generated by such conditions resembles the interior of the mixing layer (away from the wall and/or edge effects) of the Rayleigh-Taylor instability at the turbulent stage \citep{LR2007}. The inhomogeneities in the transport terms are important at the edges, and thus it is safe to assume that the homogeneous simulation data under consideration describes the core of a fully developed mixing layer. Eventually, the turbulent behaviour dies out as the fluids become mixed at the molecular level. This high-resolution data is stored as a sequence of 1011 files, each representing $32^3$ spatial points per time step, from $t = 0$ to $t = 40.44$. The velocity gradients in the database are calculated as a post-processing step using a 4th-order central finite differencing approximation from the data.
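For illustration, the 4th-order central differencing just mentioned can be sketched in one dimension as follows (a minimal NumPy sketch of the standard five-point stencil on a periodic grid; this is not the database post-processing code itself):

```python
import numpy as np

# 4th-order central finite difference on a periodic grid, using the
# standard five-point stencil with coefficients (1, -8, 0, 8, -1)/12.
def ddx4(f, dx):
    return (np.roll(f, 2) - 8 * np.roll(f, 1)
            + 8 * np.roll(f, -1) - np.roll(f, -2)) / (12 * dx)

# check against a known derivative: d/dx sin(x) = cos(x)
N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(ddx4(np.sin(x), dx) - np.cos(x)))
print(f"max error: {err:.2e}")  # fourth-order accurate: error ~ dx**4
```

With `np.roll`, the stencil wraps around the domain ends, which matches the periodic boundary conditions of the simulation.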
If the gradients or the state variables are desired at a particular spatial location between the stored grid points, 4th-order spatial interpolation or 6th-order Lagrange interpolation is used. To obtain temporal values other than the stored ones, piecewise cubic Hermite interpolation is employed. \section{Results}\label{sec:theta} \subsection{Definitions}\label{defns} It is clear from (\ref{sma1}) that the composition density $\rho^{*}$ is bounded by $\rho^{*}_{1} \leq \rho^{*} \leq \rho^{*}_{2}$. Moreover, in Appendix \S\ref{appA} it is also shown that $\|\rho^{*}\|_{L^{\infty}}$ is bounded above by its initial data provided the advecting $\mbox{\boldmath$u$}$-field is regular. However, our interest lies more in $\bnabla\rho^{*}$, which is a difficult quantity to work with alone. To circumvent this problem, it is shown in Appendix \S\ref{appB} that with a normalization density $\rho^{*}_{0} = \shalf(\rho^{*}_{1} +\rho^{*}_{2})$, the new variable $\theta$ defined by \begin{equation}\label{thetadef} \theta(\mathbi{x},\,t) = \ln \rho(\mathbi{x},\,t)\qquad\mbox{with}\qquad \rho = \frac{\rho^{*}}{\rho^{*}_{0}}\,, \end{equation} changes the evolution equation for $\rho^{*}$ into a deceptively innocent-looking diffusion-like equation \begin{equation}\label{dep1a} \left(\partial_{t} + \mbox{\boldmath$u$}\cdot\bnabla\right)\theta = Pe_{0}^{-1}\Delta\theta\,, \end{equation} but with an equation for $\bnabla \cdot \mbox{\boldmath$u$}$ that depends on two derivatives of $\theta$ \begin{equation}\label{dep1b} \bnabla\cdot\mbox{\boldmath$u$} = - Pe_{0}^{-1}\Delta\theta\,.
\end{equation} It is now easier to work with $\theta = \ln\rho$ evolving according to (\ref{dep1a}) and (\ref{dep1b}) by considering both $\bnabla\theta$ and $\mbox{\boldmath$\omega$} = \mbox{curl}\,\mbox{\boldmath$u$}$ in the higher norms $L^{2m} \left(\mathcal{V}\right)$ defined by ($1 \leq m < \infty$) \begin{eqnarray}\label{eq:omo} \Omega_{m,\theta} &=& \left(L_{0}^{-3}\int_{\mathcal{V}} |\bnabla\theta|^{2m}dV\right)^{1/2m}\,,\\ \Omega_{m,\omega} &=& \left(L_{0}^{-3}\int_{\mathcal{V}}|\mbox{\boldmath$\omega$}|^{2m}dV\right)^{1/2m}\,,\label{eq:omt} \end{eqnarray} where $L_{0}$ is the non-dimensionalization length in the JHT-database. The natural sequence of H\"older inequalities \begin{equation}\label{Omin1} \Omega_{m,\theta} \leq (L/L_{0})^{3/2m(m+1)}\Omega_{m+1,\theta}\,, \end{equation} has a multiplicative factor which is only unity when $L=L_{0}$. If we define \begin{equation}\label{alphadef} \alpha_{m} = \frac{2m}{4m-3}\,, \end{equation} then the exponent on $L/L_{0}$ in (\ref{Omin1}) is related to $\alpha_{m}$ and $\alpha_{m+1}$ by \begin{equation}\label{alpha1} \frac{3}{2m(m+1)} = \frac{1}{\alpha_{m+1}} - \frac{1}{\alpha_{m}}\,. \end{equation} In turn, this leads us to define a natural dimensionless length \begin{equation}\label{defnsmall} \ell_{m} = \left(L/L_{0}\right)^{1/\alpha_{m}}\,, \end{equation} which turns (\ref{Omin1}) into $\ell_{m}\Omega_{m,\theta} \leq \ell_{m+1}\Omega_{m+1,\theta}$. In what follows we assume there exists a solution of (\ref{dep1a}), considered in tandem with the vorticity field $\mbox{\boldmath$\omega$}$. Motivated by the depletion properties studied in \cite{DGKGPV13} and \cite{GDKGPV14} for the Navier-Stokes equations, the following definitions are made \begin{equation}\label{dep2a} D_{m,\theta} = \left(\ell_{m}\Omega_{m,\theta}\right)^{\,\alpha_{m}}\,, \end{equation} \begin{equation}\label{dep2b} D_{m,\omega} = \left(\ell_{m}\Omega_{m,\omega}\right)^{\,\alpha_{m}}\,.
\end{equation} The $\alpha_{m}$-scaling in (\ref{dep2a}) and (\ref{dep2b}) has its origins in scaling properties of the three-dimensional Navier-Stokes equations (see \cite{GDKGPV14}). Note that the ordering observed in (\ref{Omin1}) does not necessarily hold for the $D_{m,\theta}$ or the $D_{m,\omega}$ because $\alpha_{m}$ \textit{decreases} with $m$. In the JHT-database the dimensionless domain size is $2\pi$, so that $L/L_{0} = 2\pi$. \subsection{The evolution of $D_{1,\theta}$}\label{D1theta} Now formally consider the time evolution of $D_{1,\theta}$ using (\ref{dep1a}) \begin{equation}\label{dep3a} \frac{1}{2}\frac{d~}{dt}\int_{\mathcal{V}} |\bnabla\theta|^{2}dV = \int_{\mathcal{V}} \bnabla\theta\cdot\left(Pe_{0}^{-1}\Delta - \bnabla\mbox{\boldmath$u$} \right)\cdot\bnabla\theta\,dV + \frac{1}{2} \int_{\mathcal{V}} |\bnabla\theta|^{2}(\bnabla\cdot\mbox{\boldmath$u$})\,dV \end{equation} and so, integrating by parts and using (\ref{dep1b}), we have \begin{equation}\label{dep3b} \frac{1}{2}\frac{d~}{dt}\int_{\mathcal{V}} |\bnabla\theta|^{2}dV \leq - Pe_{0}^{-1}\int_{\mathcal{V}} |\Delta\theta|^{2}dV + \int_{\mathcal{V}} |\bnabla\theta|^{2}|\bnabla\mbox{\boldmath$u$}|\,dV + \frac{1}{2} Pe_{0}^{-1}\int_{\mathcal{V}} |\bnabla\theta|^{2}|\Delta\theta|\,dV\,. \end{equation} For $m \geq 2$, and noting that $\frac{m-2}{m-1} + \frac{m}{m-1} = 2$, consider the term \begin{eqnarray}\label{dep4} \int_{\mathcal{V}} |\bnabla\theta|^{2}|\bnabla\mbox{\boldmath$u$}|\,dV &\leq& \Omega_{1,\theta}^{\frac{m-2}{m-1}}\Omega_{m,\theta}^{\frac{m}{m-1}}\Omega_{1,\omega}\nonumber\\ &=& c_{1,m}\,D_{1,\theta}^{\frac{m-2}{2(m-1)}}D_{m,\theta}^{\frac{m}{\alpha_{m}(m-1)}}D_{1,\omega}^{1/2}\,. \end{eqnarray} where the factors of $\ell_{m}$ and $2\pi$ have been absorbed into the dimensionless constant $c_{1,m}$.
Now we turn to an idea introduced for the three-dimensional Navier-Stokes equations by \cite{GDKGPV14} in which it was discovered that a relation between $D_{m}$ and $D_{1}$ fitted the data. In \cite{GDKGPV14} the formulae in (\ref{Amdef2}) and (\ref{lamdef}) were found to fit the maxima in time of the $D_{m}$ versus $D_{1}$ curves with $\lambda$ approximately constant. However, in a subsequent paper \cite{JDGIMA2015} it has been shown that these formulae have a rigorous basis if the set of exponents $\{\lambda_{m}(t)\}$ are allowed to be time dependent. Following this, the JHT-database shows that the relation between $D_{m}$ and $D_{1}$ takes the form \begin{equation}\label{Amdef2} D_{m,\theta}(t) = D_{1,\theta}^{A_{m,\theta}(t)}\,,\qquad\mbox{so that}\qquad A_{m,\theta}(t) = \frac{\ln D_{m,\theta}}{\ln D_{1,\theta}}\,. \end{equation} The data is consistent with $A_{m,\theta}(t)$ being expressed as \begin{equation}\label{lamdef} A_{m,\theta}(t) = \frac{\lambda_{m,\theta}(t)(m-1) + 1}{4m-3}\,. \end{equation} Plots of $\ell_{m}\Omega_{m,\theta}(t)$, $D_{m,\theta}(t)$ and $A_{m,\theta}$ are shown in figure \ref{fig:fig1}, with plots of the corresponding $\lambda_{m,\theta}(t)$ in figure \ref{fig:fig2}a. Note that the set $\{\lambda_{m,\theta}(t)\}$ fan out with time with no tendency to coincide. Nonlinear depletion occurs when $A_{m,\theta} < 1$, which figure 1 shows is the case. \begin{figure} \begin{center} \includegraphics[width=0.32\columnwidth]{1L_Omega_DG.pdf} \includegraphics[width=0.32\columnwidth]{1C_Dm_DG.pdf} \includegraphics[width=0.32\columnwidth]{1R_Am_DG.pdf}\\ \includegraphics[width=0.70\columnwidth]{color_bar.pdf} \end{center} \caption{Time variation of: (\textit{a}) $l_m\Omega_{m,\theta}(t)$, as defined in (\ref{eq:omo})\,; (\textit{b}) $D_{m,\theta}(t)$, as defined in (\ref{dep2a})\,; (\textit{c}) $A_{m,\theta}(t)$ as defined in (\ref{Amdef2}).
} \label{fig:fig1} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\hsize]{2L_LambdaTheta_DG.pdf} \includegraphics[width=0.48\hsize]{2R_beta.pdf} \end{tabular} \end{center} \caption{Time variation of: (\textit{a}) $\lambda_{m,\theta}(t)$, as defined in (\ref{lamdef}), which fan out and grow with time\,; (\textit{b}) $\beta(t)$ as defined in (\ref{dep11}).} \label{fig:fig2} \end{figure} \par\smallskip Inserting (\ref{Amdef2}) into the right hand side of (\ref{dep4}) gives \begin{eqnarray}\label{dep5} \int_{\mathcal{V}} |\bnabla\theta|^{2}|\bnabla\mbox{\boldmath$u$}|\,dV &\leq& c_{1,m}D_{1,\omega}^{1/2}D_{1,\theta}^{(1+\lambda_{m,\theta})/2}\nonumber\\ &\leq &\frac{1}{2} Pe_{0} D_{1,\omega} + c_{2,m} Pe_{0}^{-1}D_{1,\theta}^{1+\lambda_{m,\theta}}\,. \end{eqnarray} where Young's inequality has been used to split up the terms on the last line of the right hand side. The same idea is used on the last term in (\ref{dep3b}) with $|\bnabla\mbox{\boldmath$u$}|$ replaced by $|\Delta\theta|$\,: \begin{eqnarray}\label{dep6} Pe_{0}^{-1}\int_{\mathcal{V}} |\bnabla\theta|^{2}|\Delta\theta|\,dV &\leq& \left(Pe_{0}^{-1}\|\Delta\theta\|_{2}^{2}\right)^{1/2} \left(2c_{3,m}Pe_{0}^{-1} D_{1,\theta}^{1+\lambda_{m,\theta}}\right)^{1/2}\nonumber\\ &\leq& \frac{1}{2} Pe_{0}^{-1}\|\Delta\theta\|_{2}^{2} + c_{3,m}Pe_{0}^{-1} D_{1,\theta}^{1+\lambda_{m,\theta}}\,. \end{eqnarray} Altogether, (\ref{dep3b}) becomes \begin{equation}\label{dep7} \frac{1}{2}\dot{D}_{1,\theta} \leq -\frac{1}{2} Pe_{0}^{-1}\|\Delta\theta\|_{2}^{2} + c_{4,m} Pe_{0}^{-1} D_{1,\theta}^{1+\lambda_{m,\theta}} + \frac{1}{2} Pe_{0}D_{1,\omega}\,.
\end{equation} A simple integration by parts shows that \begin{equation}\label{dep8} \|\bnabla\theta\|_{2}^{2} \leq \|\Delta\theta\|_{2}\|\theta\|_{2} \end{equation} and so we have \begin{equation}\label{dep10} \frac{1}{2}\dot{D}_{1,\theta} \leq -\frac{1}{2} Pe_{0}^{-1}\frac{D_{1,\theta}^{2}}{\|\theta\|_{2}^{2}} + c_{4,m} Pe_{0}^{-1} D_{1,\theta}^{1+\lambda_{m,\theta}(t)}+ \frac{1}{2} Pe_{0} D_{1,\omega}\,. \end{equation} Because $\rho^{*}$ is bounded both below and above, so is $\|\theta\|_{2}^{2}$. Thus the competition on the right hand side of (\ref{dep10}) in powers of $D_{1,\theta}$ lies between the negative $D_{1,\theta}^{2}$ term and either the $Pe_{0}^{-1}D_{1,\theta}^{1+\lambda_{m,\theta}}$ or the $Pe_{0}D_{1,\omega}$ terms. To turn the differential inequality (\ref{dep10}) into one in $D_{1,\theta}$ alone requires a relation between $D_{1,\theta}$ and $D_{1,\omega}$, with the latter representing the fluid vorticity. Analytically, we have been unable to establish a relation between them, but the JHT database provides us with the relation \begin{equation}\label{dep11} D_{1,\omega} = D_{1,\theta}^{\beta(t)}\,, \end{equation} where the growth in the exponent $\beta(t)$ is shown in figure 2b. Moreover, figure \ref{fig:fig3} shows that the $Pe_{0}D_{1,\theta}^{\beta(t)}$-term (plotted with blue squares) in (\ref{dep7}) is dominant over the $Pe_{0}^{-1}D_{1,\theta}^{1+\lambda_{m,\theta}(t)}$-term (plotted with red circles), even when $\lambda_{m,\theta}(t)$ is chosen to be the maximum across $m$ at each particular time step. The plots of $1+\lambda_{m,\theta}$ and $\beta(t)$ show that both quantities are greater than $2$, and thus these terms cannot be controlled by the $-D_{1,\theta}^2$ term in (\ref{dep10})\,; $D_{1,\theta}$ can be bounded only for extremely short times. Thus the possibility of the blow-up of $D_{1,\theta}$ in a finite time cannot be discounted.
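As a concrete numerical illustration, the quantities $\Omega_{m}$, $\ell_{m}$ and $D_{m}$ and the H\"older ordering $\ell_{m}\Omega_{m} \leq \ell_{m+1}\Omega_{m+1}$ can be reproduced on synthetic data (a minimal NumPy sketch using a random smooth periodic field in place of the JHTDB data, with $L = 2\pi$, $L_{0} = 1$ and an $L_{0}^{-3}$-normalized integral for $\Omega_{m}$; this is not the analysis code itself):

```python
import numpy as np

# Synthetic smooth periodic scalar theta built from damped random Fourier
# modes; its gradient stands in for grad(theta) from the database.
rng = np.random.default_rng(0)
N, L, L0 = 32, 2 * np.pi, 1.0

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
theta_hat = np.fft.fftn(rng.standard_normal((N, N, N)))
theta_hat *= np.exp(-0.5 * (KX**2 + KY**2 + KZ**2))   # damp high wavenumbers
grad = [np.real(np.fft.ifftn(1j * K * theta_hat)) for K in (KX, KY, KZ)]
gmag = np.sqrt(sum(g**2 for g in grad))               # |grad theta| on the grid

dV = (L / N) ** 3                                     # cell volume

def alpha(m): return 2 * m / (4 * m - 3)
def ell(m):   return (L / L0) ** (1 / alpha(m))
def Omega(m): return (np.sum(gmag ** (2 * m)) * dV / L0**3) ** (1 / (2 * m))
def D(m):     return (ell(m) * Omega(m)) ** alpha(m)

ms = range(1, 9)
scaled = [ell(m) * Omega(m) for m in ms]
# Hoelder ordering: ell_m * Omega_m is non-decreasing in m
assert all(a <= b * (1 + 1e-12) for a, b in zip(scaled, scaled[1:]))
print([round(D(m), 3) for m in ms])
```

Note that while $\ell_{m}\Omega_{m}$ is ordered in $m$, the $D_{m}$ need not be, since $\alpha_{m}$ decreases with $m$.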
\begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=0.5\hsize]{peD1theta_peinvD1beta.pdf} \end{tabular} \end{center} \caption{ Time variation of the $Pe_{0}D_{1,\theta}^{\beta(t)}$-term (plotted with blue squares) and the $Pe_{0}^{-1}D_{1,\theta}^{1+\lambda_{m,\theta}(t)}$-term (plotted with red circles), where $\lambda_{m,\theta}(t)$ is chosen to be the maximum value over $m$ at each time step, with $Pe_0 = 12,500$.} \label{fig:fig3} \end{figure} \par\smallskip Finally, figure \ref{fig:fig4} shows the equivalent set of plots of the time variation of $\ell_{m}\Omega_{m,\omega}(t)$ (as defined in (\ref{eq:omt})), $D_{m,\omega}(t)$ (as defined in (\ref{dep2b})) and $A_{m,\omega}(t)$ defined as \begin{equation}\label{eq:amodef} A_{m,\omega}(t) = \ln D_{m,\omega}/\ln D_{1,\omega}. \end{equation} In figure \ref{fig:fig5}, we also show the time variation of the corresponding $\lambda_{m,\omega}(t)$, calculated using the analogous relationship \begin{equation}\label{eq:lamodef} A_{m,\omega}(t) = \frac{\left[\lambda_{m,\omega}(t) (m-1) + 1\right]}{(4m-3)}\,. \end{equation} It is apparent that the turbulent fluid part of the problem, which drives and dominates the system, has corresponding $\lambda_{m,\omega}(t)$ that are flat in time and sit in the range $1 < \lambda_{m,\omega} < 2$. This is consistent with the behaviour found in three-dimensional Navier-Stokes flow described in \cite{DGKGPV13}, \cite{GDKGPV14} and \cite{JDGIMA2015}. Note that this contrasts strongly with the behaviour of the $\theta$-variable, where the $\lambda_{m,\theta}$ fan out and grow in time, as shown in figure \ref{fig:fig2}.
\begin{figure} \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.32\columnwidth]{1L_Omega.pdf} \includegraphics[width=0.32\columnwidth]{1C_Dm.pdf} \includegraphics[width=0.32\columnwidth]{1R_Am.pdf}\\\\ \includegraphics[width=0.7\columnwidth]{color_bar.pdf} \end{tabular} \end{center} \caption{Time variation of: (\textit{a}) $l_m\Omega_{m,\omega}(t)$ as defined in (\ref{eq:omt})\,; (\textit{b}) $D_{m,\omega}(t)$, as defined in (\ref{dep2b})\,; (\textit{c}) $A_{m,\omega}(t)$, as defined in (\ref{eq:amodef}).} \label{fig:fig4} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.45\hsize]{2C_LambdaOmega.pdf} \end{tabular} \end{center} \caption{Time variation of $\lambda_{m,\omega}(t)$, calculated using the relation (\ref{eq:lamodef}).} \label{fig:fig5} \end{figure} \section{Conclusion}\label{sec:conc} The numerical evidence in figure 2a suggests strong growth in $\lambda_{m,\theta}(t)$, which is consistent with strong growth in $\bnabla\rho^{*}$ even while $\rho^{*}$ is bounded. There are varying degrees of nonlinear depletion in the sense that $A_{m,\theta} < 1$ and $A_{m,\omega} < 1$ (as in figures 4c and 5). Depletion in $A_{m,\theta}$ reduces with time, as the growth of $\lambda_{m,\theta}$ to the value $3.5$ in the final stages attests. Indeed, note that $\lambda_{m,\theta} = 4$ would give a linear relation and be equivalent to a full estimate of the nonlinearity. Depletion in $D_{1,\omega}$ is quite severe, as shown in figures 4c and 5, which is consistent with the same effect observed in Navier-Stokes flows. Despite this, the cross-effect of the turbulent fluid flow driving the growth of $D_{1,\theta}$ through the exponent $\beta(t)$ swamps the term $D_{1,\theta}^{1+\lambda_{m,\theta}}$ in (\ref{dep10}). \par\smallskip Following \cite{LR2007}, there is another way of looking at the growth in $\bnabla\rho^{*}$.
Consider the equation for $\theta$ and introduce a new velocity field $\mathbi{v} = \mbox{\boldmath$u$} + Pe_{0}\bnabla\theta$. The Hopf-Cole-like transformation $\theta = \ln\rho$ in (\ref{thetadef}) then leads to an exact cancellation of the nonlinear terms in (\ref{dep1a}) to give \begin{equation}\label{cm8} \left(\partial_{t} + \mathbi{v}\cdot\bnabla\right)\rho = Pe_{0}^{-1}\Delta\rho \,, \qquad\mbox{with}\qquad \bnabla\cdot\mathbi{v} = 0\,. \end{equation} This is the linear advection-diffusion equation driven by a divergence-free velocity field. Note that $\mbox{\boldmath$\omega$} = \mbox{curl}\,\mbox{\boldmath$u$} = \mbox{curl}\, \mathbi{v}$. The fact that $\mathbi{v}$ is actually an (explicit) function of $\bnabla\theta$ makes (\ref{cm8}) less simple than it first appears. Nevertheless, this equation provides a hint as to how we might look at the dynamics in a descriptive way. Consider a one-dimensional horizontal section through a rightward-moving wave of $\rho^{*}$ at a snapshot in time\,: in the frame of the advecting velocity $\mbox{\boldmath$u$}$ the relevant component of $\mathbi{v}$ is greater on the back face of any part of the wave (where $\bnabla\rho^{*} > 0$) than on the front face (where $\bnabla \rho^{*} < 0$). Thus in the advecting frame, (\ref{cm8}) implies that not only is there the usual advection and diffusion but also a natural tendency for the back of a wave to catch up with the front, thus leading to steepening of $\bnabla\rho^{*}$. This is consistent with the evidence from (\ref{dep10}), which leaves open the possibility that $D_{1,\theta}$ could blow up after a finite time or at least grow sufficiently strongly that the mixing is driven down to near molecular scales where the validity of the model fails. Interestingly, this then hints that buoyancy-driven turbulence may well be more intense in some sense than constant-density turbulence, which may explain the extremely efficient mixing observed in such flows.
\section*{Acknowledgements} We acknowledge, with thanks, the staff of IPAM UCLA where this collaboration began in the Autumn of 2014 on the programme ``Mathematics of Turbulence''. We would also like to thank C. Doering and D. Livescu for useful discussions. Research activity of C.P.C. is supported by EPSRC Programme Grant EP/K034529/1 (``Mathematical Underpinnings of Stratified Turbulence''). All the numerical data used is freely available from the Johns Hopkins Turbulence Database (JHTDB) \cite{JHTDB}, a publicly available direct numerical simulation (DNS) database. For more information, please see \url{http://turbulence.pha.jhu.edu/}.
Q: Android Studio logcat history/buffer size

Does anyone know if there is a way to increase the size of the logcat history/buffer in Android Studio? I remember there was a way to do it in Eclipse and was hoping Android Studio had a similar setting.

A: You can also do it per project, via the IDE: Settings -> Editor -> General -> Console: tick "Override console cycle buffer size". Enter your desired size in the text box. Finally, restart Android Studio for the changes to take effect.

A: I'm afraid currently it is not possible to change the logcat buffer size. However, I've created a feature request in the AOSP issue tracker. Here's the link: https://code.google.com/p/android/issues/detail?id=73425

A: Note that idea.cycle.buffer.size is documented in http://tools.android.com/tech-docs/configuration with the following:

#----------------------------------------------------------------
# This option controls console cyclic buffer: keeps the console output size not higher than the specified buffer size (Kb).
# Older lines are deleted. In order to disable cycle buffer use idea.cycle.buffer.size=disabled
#----------------------------------------------------------------
idea.cycle.buffer.size=1024

A: You can increase the value of idea.cycle.buffer.size=1024 in the property file android-studio\bin\idea.properties (buffer size unit: Kb). I have already tried it, and it works perfectly for me! The configuration description is as follows:

#---------------------------------------------------------------------
# This option controls console cyclic buffer: keeps the console output size not higher than the specified buffer size (Kb).
# Older lines are deleted. In order to disable cycle buffer use idea.cycle.buffer.size=disabled
#---------------------------------------------------------------------
idea.cycle.buffer.size=1024

A: Alternatively, mainly when you have a large amount of data and you want to review it temporarily, use a filter (package, logcat priority) and store the output to a file.
A: It seems that with every major update of Android Studio the buffer size is reset to its insufficient default of 1024. This is at least the 3rd time I have increased it.
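Since the idea.properties edit described in the answers above keeps getting reverted by updates, it can be scripted. A minimal Python sketch (the helper name is ours, and you would point it at your own installation's android-studio/bin/idea.properties):

```python
def set_cycle_buffer_size(properties_path, size_kb):
    """Set idea.cycle.buffer.size=<size_kb> in an idea.properties file,
    replacing an existing entry or appending one if none is present."""
    key = "idea.cycle.buffer.size"
    try:
        with open(properties_path) as f:
            lines = f.read().splitlines()
    except FileNotFoundError:
        lines = []
    new_line = "%s=%d" % (key, size_kb)
    replaced = False
    for i, line in enumerate(lines):
        # Match the key itself, not the mentions of it inside comments.
        if line.strip().startswith(key + "="):
            lines[i] = new_line
            replaced = True
    if not replaced:
        lines.append(new_line)
    with open(properties_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

As the answers note, Android Studio only picks up the change after a restart, and a major update may overwrite the file again.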
Football is the most popular sport in Libya, the North African country with a population of around 6,800,000. The governing body is the Libyan Football Federation, which was founded in 1962. Football culture Libyans are passionate about football. Most kids and teenagers in Libya play football in the streets as their favorite pastime, and many people leave their duties and jobs to go and watch a game. Libya has experienced incidents before, during, and after derby games, including killings and riots; these usually happen when Libya's top three clubs, Al-Ittihad Tripoli, Ahly Tripoli, and Ahly Benghazi, face off. Though Libya has not achieved much at either international or club level, Libyans are known to be skillful, having produced talented players such as Tarik El-Taib, Jehad Muntasser, Ahmed Saad, and Fawzi Al-Issawi. National championships Prior to national championships, football was organised at regional level. The first regional championships (of Italian Libya) were held in 1928. The first team was "US Bengasi" in Cyrenaica, with Italian colonists and local Arab players. The Tripoli team "US Tripolina" played in 1938 and 1939 in the "First amateur division" of the Italian championship. These regional championships were suspended during World War II. From 1949 there were three regional leagues in Libya: West, East and South. Libyan Premier League This is the top flight of Libyan football. It is a semi-professional league, although some foreign players are professionals. The first championship was held in 1963, and was won by Al Ahly Tripoli (see Libyan Premier League 1963-64). Al Ittihad have won the most championships, with 14, including the last 4 titles. The league was ranked as the 56th best league in the world for 2009, according to the IFFHS. There are two other divisions, the Libyan Second Division and the Libyan Third Division.
Libyan Cup This competition was first initiated in 1976, although only 5 editions of the competition were played in the following twenty years. In the past, the runners-up of the Libyan Premier League were named the domestic cup champions. Al Ahly Tripoli and Al Ittihad Tripoli are the joint record winners, with 5 titles each. The holders are Khaleej Sirte, who beat Al Madina Tripoli 1-0 in a tense final (see 2008 Libyan Al-Fatih Cup Final). Libyan Trophy This competition was initiated in 2007. It is contested by members of the Premier League only. Khaleej Sirte won the inaugural competition. National team The national team, nicknamed The Mediterranean Knights, is considered one of the stronger teams in Africa and the Arab world, particularly in recent years. The good performances recorded at the 2012 Africa Cup of Nations under Brazilian coach Marcos Paqueta saw the team record their first win in the tournament outside of Libya, in their final match against Senegal. This saw their FIFA world ranking rise to its highest ever position of 53, which rose again to 36 in September 2012. 1900s The team had one of its most successful periods in the 1980s, when players such as Salim Abu Jarrad, Fawzi Al-Issawi and Ali Al-Beshari almost led the side to silverware. In 1982, Libya hosted the 1982 African Cup of Nations, a competition in which they came second, losing 7-6 to Ghana on penalty kicks. The team also came very close to qualifying for the 1986 FIFA World Cup, losing 3-1 to Morocco over two legs. The 1990s were a poor period for the national team. Libya was disqualified from qualifying for the 1994 FIFA World Cup due to UN sanctions, and the team withdrew from qualifying for the 1990 competition. They did not enter the qualification process for the 1998 edition. They did better in UAFA competitions, bowing out in the semi-finals of the 1999 Pan Arab Games.
2000–present The 2000s were better for the nation, and with players such as Tarik El Taib, Nader Kara and Ahmad Saad coming through, the national team grew in strength and reached the final round of qualifying for the 2002 FIFA World Cup. Although they finished bottom of the group with two points from 8 games, the country had a sense of pride, and came back stronger, coming within a point of qualification for the 2004 African Cup of Nations. Two years later, the side reached its second Nations tournament, having finished fourth in their 2006 FIFA World Cup qualification group. The side did well in their group, which contained the two finalists, Ivory Coast and the winners Egypt, as well as the 2004 African Cup of Nations runners-up Morocco. A 3–0 defeat to Egypt crippled their chances, but the team got respectable results, including a 2–1 defeat against the Ivory Coast and a 0–0 draw against Morocco. The team suffered a setback in qualification for the 2008 African Cup of Nations, with disappointing away defeats against Ethiopia and Namibia ruining their chances of qualifying. The team came very close to qualification for the 2010 FIFA World Cup. They secured a famous victory against Ghana, with an 86th-minute goal from Ahmed Saad Osman, but fell at the final hurdle, losing 1-0 to Gabon, which eliminated them on goal difference. At the 2014 African Nations Championship, the Libyan national team won its first title, beating Ghana on penalties. Libyan football venues References
from functools import reduce  # reduce is not a builtin in Python 3


class Solution(object):
    def searchMatrix(self, matrix, target):
        """
        :type matrix: List[List[int]]
        :type target: int
        :rtype: bool
        """
        # Concatenate the rows into one sorted list, then binary search it.
        flatten_matrix = reduce(lambda x, y: x + y, matrix)
        low, high = 0, len(flatten_matrix) - 1
        while low <= high:
            mid = low + (high - low) // 2  # integer division; the original's Python 2 "/" breaks on Python 3
            if flatten_matrix[mid] == target:
                return True
            elif flatten_matrix[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return False
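Concatenating the rows costs O(mn) extra time and space before the search even begins. A common alternative, sketched here and not part of the original snippet, binary-searches the "virtual" flattened array directly, mapping each index back to a (row, column) pair:

```python
def search_matrix(matrix, target):
    """Binary search a row-major sorted matrix without flattening it.

    Element k of the virtual flattened array is matrix[k // cols][k % cols],
    so only O(1) extra space is needed.
    """
    if not matrix or not matrix[0]:
        return False
    rows, cols = len(matrix), len(matrix[0])
    low, high = 0, rows * cols - 1
    while low <= high:
        mid = (low + high) // 2
        val = matrix[mid // cols][mid % cols]
        if val == target:
            return True
        elif val < target:
            low = mid + 1
        else:
            high = mid - 1
    return False
```

Like the original, this assumes the matrix is fully sorted in row-major order, i.e. each row starts after the previous row ends.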
Boike Rehbein (born 18 February 1965 in Berlin; died 11 June 2022) was a German sociologist and social philosopher. Life and teaching Boike Rehbein was the son of the linguist Jochen Rehbein and a grandson of the paediatric surgeon Fritz Rehbein. He studied philosophy, sociology and history in Freiburg im Breisgau, Paris, Frankfurt am Main, Göttingen and Berlin. He received his doctorate in 1996 and his habilitation in 2004. His most important academic teachers were Pierre Bourdieu, Günter Dux, Jürgen Habermas, Jann Holl, Bernd Martin, Günther Patzig and Hermann Schwengel. He held visiting professorships in Bangkok, Buenos Aires, New Delhi, Santiago de Chile, Vientiane and Zürich. In Vientiane, Boike Rehbein was instrumental in building up the faculty of social sciences of the National University of Laos. Until 2009 Rehbein was director of the Global Studies Programme at the University of Freiburg, after which he moved to the Humboldt-Universität zu Berlin, where from the winter semester of 2009/10 he held a professorship in Society and Transformation at the Institute of Asian and African Studies. Rehbein was a co-founder and editor-in-chief of the open-access academic journal "Transcience". According to an announcement by the Humboldt-Universität, Boike Rehbein died "completely unexpectedly" on 11 June 2022. Research In his research, Boike Rehbein focused above all on the topics of social inequality, globalization with particular attention to Southeast Asia, critical sociological theory, and approaches to the analysis of social structure. He was one of the leading experts on the effects of globalization and on social inequality in Laos, as well as on the sociology of Pierre Bourdieu. In his early work, Rehbein addressed the question of what it means to understand another person and developed a theory of understanding between people.
In collaboration with a global social-science network, Rehbein's concept of "sociocultures" established a novel approach to understanding social inequality. Through context-specific analyses of the structures of different societies, the network contributed to identifying and addressing mechanisms of the production and reproduction of social inequality in global capitalism. Building on a deep critique of Eurocentric social theories, Rehbein also sketched, with his "kaleidoscopic dialectic", an emancipatory and connective form of knowledge production meant to meet the challenges of social inequality in a postcolonial, multi-centred world. Publications (selection) (with Sisouk Sayaseng): Lehrbuch der laotischen Sprache, Buske, Hamburg 1997 Was heißt es, einen anderen Menschen zu verstehen? Metzler, Stuttgart 1997 [dissertation] (with Sisouk Sayaseng): Wörterbuch Laotisch-Deutsch / Deutsch-Laotisch, Buske, Hamburg 2000 with Gernot Saalmann and Hermann Schwengel (eds.): Pierre Bourdieus Theorie des Sozialen, UVK, Konstanz 2003 (with Sisouk Sayaseng): Laotische Grammatik, Buske, Hamburg 2004 Globalisierung in Laos. Transformation des ökonomischen Feldes, LIT, Münster 2004 Die Soziologie Pierre Bourdieus, UTB, Konstanz 2006 Globalization, culture and society in Laos, Routledge, 2007 [habilitation] (with Hermann Schwengel): Theorien der Globalisierung, UVK, Konstanz 2008, ISBN 978-3-8252-3052-4 (with Gerhard Fröhlich): Bourdieu-Handbuch, Metzler, Stuttgart/Weimar 2009 (with Jan Nederveen Pieterse): Globalization and Emerging Societies: Inequality and Development, Palgrave Macmillan, Basingstoke 2009 (with Gernot Saalmann): Verstehen, UVK, Konstanz 2009 Globalization and Inequality in Emerging Societies, Palgrave, Basingstoke 2011
Kaleidoskopische Dialektik: Kritische Theorie nach dem Aufstieg des globalen Südens, Herbert von Halem Verlag 2013 (with Jessé de Souza): Ungleichheit in kapitalistischen Gesellschaften, Beltz, Weinheim 2014 (with Benjamin Baumann, Luzia Costa, Simin Fadaee, Michael Kleinod, Thomas Kühn, Fabrício Maciel, Karina Maldonado, Janina Myrczik, Christian Schneickert, Eva Schwark, Andrea Silva, Emanuelle Silva, Ilka Sommer, Jessé Souza, Ricardo Visser): Reproduktion sozialer Ungleichheit in Deutschland, UVK, 2015 Critical Theory after the Rise of the Global South: Kaleidoskopic Dialectic, Routledge, 2015 (with Surinder S. Jodhka, Jessé de Souza): Inequality in capitalist societies, Routledge, 2017 Die kapitalistische Gesellschaft, UTB, 2021 (with Vincent Houben): Die globalisierte Welt, UTB, 2022 External links Rehbein at the Institute of Asian and African Studies of the HU Berlin Christopher Wimmer: Bitte nicht einer Meinung sein. Zum Tod des Berliner Soziologen Boike Rehbein, nd-aktuell.de, 15 June 2022 In Erinnerung an Prof. Dr. Boike Rehbein, Institute of Asian and African Studies of the HU Berlin References Sociologist (21st century) University teacher (Albert-Ludwigs-Universität Freiburg) University teacher (Humboldt-Universität zu Berlin) German Born 1965 Died 2022 Man
What a busy Saturday for the campaign! As our message spreads and our momentum grows we find ourselves in high demand! We start the day at 10:00am with a broadcast of Liberty on the Air where we will talk about the week's news in Liberty. Listen live on the Facebook Live Stream or on http://www.wemfradio.com. At 1pm we will be joining Charlie Baker and Karyn Polito for the Governor's Picnic in Shrewsbury. At 4pm we will be at the GOUSA BBQ in Billerica. At 6:30pm we will be at the "Infamous" Roof Party in Gloucester! We hope to see you at one of the events! N.B.: all the event hosts would like an RSVP, so please follow the link and RSVP if you're going to show up! In 1780, the Massachusetts Founding Father John Adams wrote: "There is nothing which I dread so much as a division of the republic into two great parties, each arranged under its leader, and concerting measures in opposition to each other. This, in my humble apprehension, is to be dreaded as the greatest political evil under our Constitution." Adams, of course, was referring to the Massachusetts state Constitution, which was the model for the American Constitution, ratified 9 years later. Let us honor John Adams and this history of Massachusetts this November by electing to the office of Auditor someone who is NOT in the two old parties. Adams was right -- the two old parties are locked in a battle of position and opposition, while we the people get caught in the crossfire. This fall elect someone who puts people before party! This fall elect someone who puts principle before politics! This fall elect someone you LIKE! This fall elect someone like YOU! We need your help to win. Can you help us reach our fundraising goals? Who should the Auditor please? Who is the Auditor supposed to work for? The Governor runs the state that governs you. The Secretary of State administers the State. The Treasurer guards the state's treasure, which it collected from the taxpayers. What oversight? A Culture of Corruption!
"The P-Card system is an example of one of the myriad ways the current Auditor is enabling the bad eggs in government to spend the people's money on luxuries for the employees of the state," candidate Fishman complained in response to a story in the Boston Herald by Joe Dwinell. "All the books, online all the time means ALL expenses. Not just checks, but credit card purchases too! Anyone who pays bills in Massachusetts knows that. Just not employees of the state who seek to conceal their actions." "Disappointing," said Libertarian candidate for Auditor Dan Fishman, upon reading a story in the Boston Globe by Matt Stout that three employees of State Auditor Suzanne Bump helped her in filing signatures for her reelection campaign while on the clock for the people of the Commonwealth. Fishman Calls for Immediate Audit of State Police Civil Asset Forfeiture. "While this will be a constant area of focus in my tenure as Auditor, the issue is too pressing to leave till January 2019. I am calling on the current Auditor to step up and inform the public about the assets the State Police have seized without due process," Libertarian candidate for Auditor Dan Fishman said. Only 2 candidates have currently qualified to be on the ballot in the Auditor's race in Massachusetts. The 8-year incumbent, Democrat Suzanne Bump, and Libertarian Dan Fishman have been certified by the Secretary of State ahead of the May 8th deadline for major parties to turn in signatures. With the sadly predictable news that 19 quasi-state agencies have been ignoring the law and not reporting their payrolls that are funded by taxpayers of the commonwealth, the question becomes once again, where were the audits that should have revealed this? Back in October of 2017, I criticized the Auditor for a data grab that was not required and expanded the power of the office. The legislature rejected the Auditor's scheme to be able to see private tax records.
Unsurprisingly, she is at it again, proposing the same piece of legislation she has proposed every year since 2011. I find this an astonishing affront to all taxpayers. The Auditor is not doing her basic job of auditing every state agency every 3 years, as was documented by the Boston Globe in 2014. I would suggest that instead of trying to use data to spy on businesses, which is NOT part of her job description, the Auditor should be using big data to keep track of the agencies she IS CHARGED with auditing. As Louis Brandeis said: "Sunlight is the best disinfectant." I will use the power of the office to put all the books online all the time. Everyone is concerned now about our life data that is being tracked, so that our lives are transparent to government -- and yet the finances of the Commonwealth remain opaque to the citizens. We have a right to know how our money is spent. As Governor Weld famously said, "There is no such thing as Government money -- only taxpayers' money!" Let's spy on the government. As Auditor I will DEMAND access to ALL the financial transactions of every state agency and put them online so YOU can see how the government is spending YOUR money, instead of what the Auditor wants, which is the other way around. Massachusetts' money matters. It's time for an Auditor who doesn't root for the Republicans or the Democrats. The referee shouldn't be wearing the jersey of one of the teams! The Fishman for Auditor campaign is proud to announce the appointment of Matthew Hudson as Art Director. Matthew is a graphic designer with 20 years of experience who lives in North Central Massachusetts with his wife and two children. He is a full-time marketing and digital media specialist and the owner/operator of Mountaintop Creative Group. Below is the first poster Matt has created for the campaign.
<ion-header>
  <ion-navbar>
    <ion-title>
      Help
    </ion-title>
  </ion-navbar>
</ion-header>

<ion-content>
  <ion-list inset class="bottom-padding">
    <ion-item-divider color="light">Contact</ion-item-divider>
    <a target="_blank" href="https://github.com/jakswa/martaionic">
      <button ion-item>
        Github
      </button>
    </a>
    <a target="_blank" href="https://gitter.im/jakswa/martaionic">
      <button ion-item>
        Gitter
      </button>
    </a>
    <a target="_blank" href="https://twitter.com/martadotio">
      <button ion-item>
        Twitter
      </button>
    </a>

    <ion-item-divider color="light">Legend</ion-item-divider>
    <ion-item>
      Gold Line
      <span item-end class="timebox gold-line">
        <span class="direction">&ensp;&ensp;&ensp;&ensp;&ensp;&ensp;</span>
      </span>
    </ion-item>
    <ion-item>
      Red Line
      <span item-end class="timebox red-line">
        <span class="direction">&ensp;&ensp;&ensp;&ensp;&ensp;&ensp;</span>
      </span>
    </ion-item>
    <ion-item>
      Green Line
      <span item-end class="timebox green-line">
        <span class="direction">&ensp;&ensp;&ensp;&ensp;&ensp;&ensp;</span>
      </span>
    </ion-item>
    <ion-item>
      Blue Line
      <span item-end class="timebox blue-line">
        <span class="direction">&ensp;&ensp;&ensp;&ensp;&ensp;&ensp;</span>
      </span>
    </ion-item>
    <ion-item>
      Going Northbound
      <span item-end class="timebox grey-line">
        <span class="direction">N</span>
      </span>
    </ion-item>
    <ion-item>
      Going Eastbound
      <span item-end class="timebox grey-line">
        <span class="direction">E</span>
      </span>
    </ion-item>
    <ion-item>
      Going Southbound
      <span item-end class="timebox grey-line">
        <span class="direction">S</span>
      </span>
    </ion-item>
    <ion-item>
      Going Westbound
      <span item-end class="timebox grey-line">
        <span class="direction">W</span>
      </span>
    </ion-item>
    <ion-item>
      No Realtime Data
      <span item-end style="height: 40px; width: 40px; background-color: #ff5100; border: 2px dashed grey"></span>
    </ion-item>
    <ion-item>
      Arriving
      <span item-end class="time"><ion-icon name="subway"></ion-icon></span>
    </ion-item>
    <ion-item>
      Boarding
      <span item-end class="time"><ion-icon name="md-swap"></ion-icon></span>
    </ion-item>
    <!--<ion-item>-->
    <!--Going Northbound-->
    <!--<span class="timebox" class="gold-line scheduled">-->
    <!--<span class="direction">N</span>-->
    <!--<span class="time">{{":" + (parseInt(arrival.waiting_time) || 0)}}</span>-->
    <!--<span class="time"><ion-icon name="subway"></ion-icon></span>-->
    <!--<span class="time"><ion-icon name="md-swap"></ion-icon></span>-->
    <!--</span>-->
    <!--</ion-item>-->
  </ion-list>
</ion-content>

<tabs></tabs>
Q: Dynamically choose ad networks for ad serving

My Android app uses two ad network SDKs: A1 and A2. I would like to send the traffic to the ad network which generates the highest revenue at the current time. (Or send, say, 30% of the traffic to A1 and 70% to A2.) Can this be done programmatically?
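The weighted split described in the question amounts to a weighted random choice per ad request. A minimal sketch in Python for clarity (the names A1/A2 and the 30/70 split are the question's hypothetical values; a real Android app would apply the same logic in the code that initializes its ad SDKs):

```python
import random

def choose_network(weights, rng=random):
    """Pick an ad network name according to traffic-share weights.

    weights: dict mapping network name -> share. Any positive weights
    work, since the draw is taken over the sum of all shares.
    """
    total = sum(weights.values())
    r = rng.uniform(0, total)
    cumulative = 0.0
    for name, share in weights.items():
        cumulative += share
        if r <= cumulative:
            return name
    return name  # fallback for floating-point edge cases

# Send ~30% of requests to A1 and ~70% to A2.
traffic_split = {"A1": 0.3, "A2": 0.7}
```

For the "highest revenue right now" variant, the weights would be updated periodically from each network's reported revenue instead of being fixed constants.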
Mont Saint-Bernard was an 82-gun Téméraire-class ship of the line of the French Navy. On 20 April 1814, after the abdication of Napoleon at the end of the War of the Sixth Coalition, she was handed over to the Austrians, who burnt her. See also List of ships of the line of France References External links Ships of the line of the French Navy Téméraire-class ships of the line 1811 ships
\section{Introduction} Since the Macfarlane-Biedenharn (MB) papers \cite{mac,bied} on the construction of the $su_q(2)$ algebra from the q-deformed oscillator algebra \`a la Schwinger, there are by now many different versions of the q-deformed algebra. However, none of these $q$-deformed oscillator algebras are Hopf algebras except the Hong Yan type and its generalization \cite{yan,oh}. It should be stressed here that via the Schwinger construction, it is only the `algebraic' aspect of the Hopf algebra $su_q(2)$ which can be expressed in terms of the $q$-oscillator algebra; the co-algebraic structure of $su_q(2)$ cannot be easily obtained from the q-oscillator algebra, granted that the latter possesses a Hopf structure. It has been claimed \cite{cel1,oh2} that the Hong Yan (HY) Hopf algebra is the same as the $su_q(2)$ Hopf algebra, and a formal relation has been established between the generators of $su_{\sqrt{q}}(2)$ and the HY oscillator algebra. Nevertheless, if we impose the positive norm requirement for the states, then at the representation level the identification breaks down for some values of $|q| = 1$, since for these values the positive norm requirement does not hold. In fact, the positive norm requirement \cite{fuji1} is in conflict with the truncation condition \cite{oh2} imposed on the states of the oscillator so as to get finite multiplets for $su_{\sqrt{q}}(2)$. In other words, for $|q| = 1$ ($q = {\rm e}^{i\epsilon}$, $\epsilon$ arbitrary) the HY oscillator algebra is different from the $su_{\sqrt{q}}(2)$ algebra. Furthermore, although $su_{\sqrt{q}}(2)$ has a $q \rightarrow 1$ limit at the coalgebra level, the coalgebraic structure for HY fails in this limit. In the following section, we summarize the $q$-Schwinger construction of the $q$-deformed $su(2)$ algebra in terms of a pair of $q$-oscillator algebras; different $q$-oscillator algebras lead to different $q$-deformed $su(2)$ algebras. Most authors prefer to set the Casimir in their $q$-Schwinger construction to zero.
However, one sometimes finds it convenient and essential to consider {\it non-zero} Casimirs for some physical applications\cite{fuji1,lorek}. A natural generalization with two additional parameters $\alpha$ and $\beta$ is also provided. In section \ref{qhols}, we exhibit results for the $q$-Holstein Primakoff (HP) transformation with {\it non-zero} Casimirs for the MB and HY oscillators. The results are similar to those presented in section \ref{qcontr}. Different contractions of $q$-deformed $su(2)$ algebras to the various $q$-oscillator algebras are elucidated in section \ref{sect4}. In particular, we show that the relation between the $su_{\sqrt{q}}(2)$ and HY $q$-oscillator algebras obtained in refs \cite{cel1,oh2} can be regarded as a form of contraction. In the last section, we point out explicitly that at the representation level the usual $su_{\sqrt{q}}(2)$ algebra is not the same as the HY $q$-oscillator algebra. We recall that the quantum universal enveloping algebra, ${\cal U}_q(su(2))$, was first studied by Sklyanin \cite{sky} and independently by Kulish and Reshetikhin \cite{kul}. This algebra has been applied extensively to the study of the eight-vertex models, the $XXZ$ ferromagnetic and anti-ferromagnetic models and the sine-Gordon models. The universal enveloping algebra ${\cal U}_q(su(2))$ is generated by three operators, $J_{\pm}$ and $J_0$, satisfying the commutation relations \begin{subeqnarray} \lbrack J_0, J_{\pm} \rbrack & = & \pm J_{\pm}, \\ \lbrack J_+, J_- \rbrack & = & \lbrack 2J_0 \rbrack, \label{cr} \end{subeqnarray} where $[x]$ denotes $\displaystyle \frac{q^x - q^{-x}}{q - q^{-1}}$.
A generalized $q$-deformed $su(2)$ algebra \cite{bonat,poly} has also been proposed, in which the operators $\widehat{J}_{\pm}$ and $\widehat{J}_0$ satisfy the modified commutation relations \begin{subeqnarray} \lbrack \widehat{J}_0, \widehat{J}_{\pm} \rbrack & = & \pm \widehat{J}_{\pm}, \\ \lbrack \widehat{J}_+, \widehat{J}_- \rbrack & = & \Phi(\widehat{J}_0(\widehat{J}_0 + 1)) - \Phi(\widehat{J}_0(\widehat{J}_0 - 1)), \\ & = & \Psi(\widehat{J}_0) - \Psi(\widehat{J}_0 - 1), \label{mcr} \end{subeqnarray} where $\Phi(\widehat{J}_0)$ and $\Psi(\widehat{J}_0)$ are some suitably chosen functions of $\widehat{J}_0$. It has been shown in ref\cite{poly} that the imposition of the hermiticity condition requires the generalized $q$-deformed $su(2)$ algebra to assume the form given in eq(\ref{mcr}). \section{$q$-Schwinger Construction} \label{qcontr} Traditionally, the algebra $su(2)$ can be realized in terms of a pair of bosonic creation and annihilation operators of a harmonic oscillator using the Schwinger construction. A $q$-analogue of this construction is given by MB\cite{mac,bied,kulish}. The operators $a, a^\dagger$ and $N$ of the $q$-deformed oscillator algebra obey the relations \begin{subeqnarray} \lbrack N, a^\dagger \rbrack = a^\dagger, & & \lbrack N, a \rbrack = - a, \\ a a^{\dagger} - q^{-1} a^{\dagger} a & = & q^{N}, \\ {\cal C}_1 & = & q^{N} (a^\dagger a - [N]). \label{macf} \end{subeqnarray} This oscillator algebra does not appear to possess a Hopf structure. But a Hopf structure is possible for another version of the $q$-deformed oscillator, first proposed by HY \cite{yan}, in which the operators $a, a^\dagger$ and $N$ satisfy eq(\ref{macf}a) and eq(\ref{macf}b) and \begin{subeqnarray} \lbrack a, a^{\dagger} \rbrack & = & [N + 1] - [N] \\ {\cal C}_{2} & = & a^\dagger a - [N].
\label{hyo} \end{subeqnarray} In general, these two versions of the $q$-deformed oscillator algebras are not equivalent\cite{oh}, although the two algebras coincide on the usual `Fock' space basis $|n\rangle$. Mathematically, it has always been intrinsically appealing and insightful to generalize a particular mathematical structure as much as possible\cite{duc,rideau,quesne}. One possible generalization of the MB algebra is to introduce two additional parameters $\alpha$ and $\beta$. One then defines the generalized MB (GMB) algebra\cite{duc} with the relations in eq(\ref{macf}b) and eq(\ref{macf}c) replaced by \begin{subeqnarray} a a^{\dagger} - q^{\alpha} a^{\dagger} a & = & q^{\beta N}, \\ {\cal C}_3 & = & q^{-\alpha N} (a^\dagger a - [N]_{\alpha,\beta}), \label{genmacf} \end{subeqnarray} where $\displaystyle [x]_{\alpha,\beta} = \frac{q^{\alpha x}- q^{\beta x}} {q^\alpha - q^\beta}$ is a generalized $q$-bracket. A similar generalization for the HY oscillator (GHY) gives \begin{subeqnarray} \lbrack a, a^{\dagger} \rbrack & = & [N + 1]_{\alpha,\beta} - [N]_{\alpha,\beta} \\ {\cal C}_{4} & = & a^\dagger a - [N]_{\alpha,\beta}. \label{genhyo} \end{subeqnarray} We next consider a realization of the $q$-deformed $su(2)$ algebra constructed from two independent $q$-oscillators, $a, a^\dagger, N_a$ and $b, b^\dagger, N_b$. Following refs \cite{mac,bied,kulish}, we define \begin{subeqnarray} J_+ = a^{\dagger}b, & & J_- = b^{\dagger}a, \\ J_0 = \frac{1}{2} (N_a - N_b), & & {\cal C} = \frac{1}{2} (N_a + N_b). \label{sch} \end{subeqnarray} Using the algebra defined in eq(\ref{macf}), we easily check that the operators $J_\pm$ and $J_0$ obey the commutation relations: \begin{subeqnarray} {[}J_{\pm}, J_{0}{]} & =& \mp J_{\pm} \\ {[} J_{+}, J_{-} {]} & =& \{- {\cal C}_1 (q - q^{-1}) + 1 \} [2 J_0] \label{maccontr} \end{subeqnarray} Note that if we set ${\cal C}_1=0$, we obtain the result in ref \cite{mac,bied}.
However, if we try to construct the realization using the algebra defined in eq(\ref{hyo}), we arrive at the Fujikawa algebra\cite{fuji2}, with eq(\ref{maccontr}b) replaced by: \begin{eqnarray} {[} J_{+}, J_{-} {]} &=& [2J_0] + {\cal C}_2 \{ [ {\cal C} - J_0 + 1] \nonumber \\ & & \hspace{-5mm} - [{\cal C} - J_0] - [{\cal C} + J_0 + 1] + [{\cal C} + J_0] \}. \label{fujio} \end{eqnarray} This is not the conventional $q$-deformed $su(2)$ algebra as defined in eq(\ref{cr}) unless ${\cal C}_2 = 0$, which is the case in a Fock space representation. Analogous Schwinger constructions for the GMB and GHY algebras given by eq(\ref{genmacf}) and eq(\ref{genhyo}) can likewise be carried out. The commutation relations between the operators $\{ J_+, J_- \}$ for the GMB and GHY algebras are, respectively, \begin{equation} {[} J_{+}, J_{-} {]} = \{ {\cal C}_3(q^{\alpha} - q^{\beta}) + 1 \} \frac{q^{\alpha N_a + \beta N_b} - q^{\alpha N_b + \beta N_a}}{q^\alpha - q^\beta} \label{gencon1} \end{equation} and \begin{eqnarray} {[} J_{+}, J_{-} {]} &= & {\cal C}_4 \{ [N_{b} +1]_{\alpha,\beta} -[N_{b}]_{\alpha,\beta} - [N_{a} + 1]_{\alpha,\beta} \nonumber \\ & & \mbox{\hspace{-5mm}} + [N_{a}]_{\alpha,\beta} \} + \frac{q^{\alpha N_a + \beta N_b} - q^{\alpha N_b + \beta N_a}} {q^\alpha - q^\beta}. \label{gencon2} \end{eqnarray} Note that when $\beta = -\alpha$, the term $\displaystyle \frac{q^{\alpha N_a + \beta N_b} - q^{\alpha N_b + \beta N_a}} {q^\alpha - q^\beta} $ in eq(\ref{gencon1}) and eq(\ref{gencon2}) becomes $[2J_0]_{\alpha,\beta}$. One can also define operators $\tilde{J}_+, \tilde{J}_- $ and $\tilde{J}_0$ through the relations \begin{subeqnarray} \tilde{J}_+ = q^{-(\alpha + \beta)N_b} a^{\dagger}b, & & \tilde{J}_- = b^{\dagger}a q^{-(\alpha + \beta)N_b}, \\ \tilde{J}_0 = \frac{1}{2} (N_a - N_b), & & \tilde{\cal C} = \frac{1}{2} (N_a + N_b).
\label{gensch} \end{subeqnarray} A straightforward calculation for the GMB and GHY oscillator algebras yields \begin{equation} \tilde{J}_+ \tilde{J}_- - q^{-(\alpha + \beta)} \tilde{J}_- \tilde{J}_+ = \{ {\cal C}_3 (q^{\alpha} - q^{\beta}) + 1 \} [ 2\tilde{J}_0 ]_{\alpha,\beta} \label{gencon3} \end{equation} and \begin{eqnarray} & & \tilde{J}_+ \tilde{J}_- - q^{-(\alpha + \beta)} \tilde{J}_- \tilde{J}_+ \nonumber \\ & = & q^{-(\alpha + \beta)N_b} {\cal C}_4 \{ [ \tilde{\cal C} - \tilde{J}_0 + 1] - [\tilde{\cal C} - \tilde{J}_0] - [\tilde{\cal C} + \tilde{J}_0 + 1] \nonumber \\ & & + [\tilde{\cal C} + \tilde{J}_0] \} + [ 2\tilde{J}_0]_{\alpha,\beta} \label{gencon4} \end{eqnarray} respectively. In the above Schwinger construction, the expression $[ 2\tilde{J}_0 ]_{\alpha,\beta}$ is obtained at the cost of a redefinition of the commutation relation between the operators $\tilde{J}_+$ and $\tilde{J}_-$. Note that for $\alpha + \beta = 0$, eqs(\ref{gencon3}) and (\ref{gencon4}) reduce to eqs(\ref{gencon1}) and (\ref{gencon2}) respectively. \section{$q$-Holstein-Primakoff Transformation} \label{qhols} It is well known that the undeformed $su(2)$ algebra can be realized nonlinearly with a single harmonic oscillator using the HP transformation. A $q$-analogue of this transformation has also been studied\cite{chai}; it is defined by the relations \begin{subeqnarray} J_+ & = & a^\dagger \sqrt{[2j - N]}, \\ J_- & = & \sqrt{[2j - N]} a, \\ J_0 & = & N - j, \label{hol} \end{subeqnarray} where $j$ is some $c$-number.
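In the Fock representation (${\cal C}_1 = 0$), the MB-oscillator version of the realization (\ref{hol}) closes into the $q$-deformed $su(2)$ relation $[J_+, J_-] = [2J_0]$, and this is easy to confirm with explicit matrices on the natural $(2j+1)$-dimensional space $N = 0, \ldots, 2j$: the factor $\sqrt{[2j-N]}$ vanishes on the top state, so the truncation is exact. A Python sketch (the sample values of $q$ and $j$ are arbitrary):

```python
import numpy as np

def qnum(n, q):
    """q-bracket [n] = (q^n - q^-n)/(q - q^-1), elementwise."""
    return (q**n - q**(-n)) / (q - 1.0/q)

q, j = 1.25, 3            # spin j; Fock space N = 0, ..., 2j
d = 2*j + 1
N = np.arange(d, dtype=float)

a = np.zeros((d, d))
for n in range(1, d):
    a[n-1, n] = np.sqrt(qnum(n, q))   # a|n> = sqrt([n])|n-1>
ad = a.T

root = np.diag(np.sqrt(qnum(2*j - N, q)))   # sqrt([2j - N])
Jp = ad @ root            # J_+ = a^dagger sqrt([2j - N])
Jm = root @ a             # J_- = sqrt([2j - N]) a
J0 = np.diag(N - j)       # J_0 = N - j

comm = Jp @ Jm - Jm @ Jp
target = np.diag(qnum(2*(N - j), q))        # [2 J_0]
print(np.allclose(comm, target))
```

Diagonal entry by diagonal entry, this is the identity $[n][2j-n+1] - [n+1][2j-n] = [2(n-j)]$.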
It can easily be checked that for the MB $q$-deformed oscillator, the realization (\ref{hol}) leads to \begin{subeqnarray} \lbrack J_0 , J_\pm \rbrack & = & \pm J_\pm, \\ \lbrack J_+, J_- \rbrack & = & [2J_0] + {\cal C}_1 q^{-2 J_0}; \label{hol1} \end{subeqnarray} whereas for the HY oscillator, the commutation relations become \begin{subeqnarray} \lbrack J_0 , J_\pm \rbrack & = & \pm J_\pm, \\ \lbrack J_+, J_- \rbrack & = & [2J_0] + {\cal C}_2 \{ [2j- N +1] \nonumber \\ & & \hspace{1cm} - [2j - N] \} \\ & = & [2J_0] + {\cal C}_2 \{ [j - J_0 + 1] \nonumber \\ & & \hspace{1cm} - [j - J_0] \}. \label{hol2} \end{subeqnarray} It is instructive to compare eq(\ref{hol1}) and eq(\ref{hol2}) with eq(\ref{cr}) and eq(\ref{fujio}) respectively. For the GMB and GHY oscillator algebras defined by eqs(\ref{genmacf}) and (\ref{genhyo}), one can also define the $q$-analogue of the HP transformation in the most obvious manner, by replacing the usual $q$-bracket with its generalized counterpart. The generalized $q$-HP transformations are then given by the relations \begin{subeqnarray} \tilde{J}_+ & = & q^{- \frac{\alpha + \beta}{2} N} a^\dagger \sqrt{[2j - N]_{\alpha,\beta}}, \\ \tilde{J}_- & = & \sqrt{[2j - N]_{\alpha,\beta}} a q^{- \frac{\alpha + \beta}{2} N}, \\ \tilde{J}_0 & = & N - j.
\label{genhol} \end{subeqnarray} One easily verifies that for the GMB $q$-deformed oscillator, the realization obeys the relations \begin{subeqnarray} & & \lbrack \tilde{J}_0 , \tilde{J}_{\pm} \rbrack = \pm \tilde{J}_{\pm}, \\ & & \tilde{J}_+ \tilde{J}_- - q^{\alpha + \beta} \tilde{J}_- \tilde{J}_+ \nonumber \\ & = & [-2\tilde{J}_0]_{\alpha,\beta} + {\cal C}_3 q^{-2 \tilde{J}_0 \beta}; \label{genhol1} \end{subeqnarray} whereas for the GHY algebra, the same computation leads to \begin{subeqnarray} & & \lbrack \tilde{J}_0 , \tilde{J}_{\pm} \rbrack = \pm \tilde{J}_{\pm}, \\ & & \tilde{J}_+ \tilde{J}_- - q^{\alpha + \beta} \tilde{J}_- \tilde{J}_+ \nonumber \\ & = & q^{-(\alpha + \beta)N} \{ [2j - N + 1]_{\alpha,\beta} - [2j - N]_{\alpha,\beta} \} {\cal C}_4 \nonumber \\ & & \hspace{30mm} + [-2 \tilde{J}_0]_{\alpha,\beta}. \label{genhol2} \end{subeqnarray} \section{Contraction} \label{sect4} So far we have constructed the $q$-deformed $su(2)$ algebra from $q$-oscillator algebras. A somewhat reverse process, known as contraction, is possible in general. For the undeformed case, we know that the transformation\cite{gil} \begin{equation} \left( \begin{array}{c} h_+ \\ h_- \\ h_0 \\ 1_h \end{array} \right) = \left( \begin{array}{cccc} \mu & 0 & 0 & 0 \\ 0 & \mu & 0 & 0 \\ 0 & 0 & 1 & \frac{\eta}{2\mu^2}\\ 0 & 0 & 0 & 1 \end{array} \right) \left( \begin{array}{c} J_+ \\ J_-\\ J_0 \\ \xi \end{array}\right) \label{gilm} \end{equation} maps the generators $J_\pm$ and $J_0$ of $u(2)$, with $[{\bf J}, \xi]=0$, under a change of basis to the generators $h_\pm, h_0$ and $1_h$ such that \begin{subeqnarray} \lbrack h_0, h_\pm \rbrack & = & \pm h_\pm \\ \lbrack h_+, h_- \rbrack & = & 2\mu^2 h_0 - \eta 1_h \\ \lbrack {\bf h}, 1_h \rbrack & = & 0. \label{contr} \end{subeqnarray} One easily notes that the commutation relations in eq(\ref{contr}) remain well-defined in the limit $\mu \rightarrow 0$ despite the singularity in the transformation.
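Eq(\ref{contr}) follows from a one-line computation, but it can also be confirmed with explicit spin matrices. The sketch below uses the spin-1 representation of $su(2)$, takes the central element $\xi$ to be the identity, and checks $[h_+, h_-] = 2\mu^2 h_0 - \eta 1_h$; the values of $\mu$ and $\eta$ are arbitrary samples.

```python
import numpy as np

jspin = 1                                  # spin-1 representation of su(2)
m = np.arange(-jspin, jspin + 1, dtype=float)
d = m.size

Jp = np.zeros((d, d))
for i in range(d - 1):                     # J_+|j,m> = sqrt((j-m)(j+m+1))|j,m+1>
    Jp[i + 1, i] = np.sqrt((jspin - m[i]) * (jspin + m[i] + 1))
Jm = Jp.T
J0 = np.diag(m)
xi = np.eye(d)                             # central element, [J, xi] = 0

mu, eta = 0.1, 0.7                         # arbitrary contraction parameters
hp = mu * Jp
hm = mu * Jm
h0 = J0 + eta / (2 * mu**2) * xi

comm = hp @ hm - hm @ hp                   # should equal 2 mu^2 h0 - eta 1_h
print(np.allclose(comm, 2 * mu**2 * h0 - eta * np.eye(d)))
```

Since $[h_+, h_-] = \mu^2 \cdot 2J_0$ exactly, the $\eta$-dependent pieces cancel identically, which is why the relation survives the singular limit $\mu \rightarrow 0$.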
For $\mu \rightarrow 0$ and $\eta \rightarrow 1$, the transformed algebra in eq(\ref{contr}) can be mapped isomorphically to the standard oscillator algebra. This transformation is sometimes known as a generalized In\"{o}n\"{u}-Wigner contraction. The transformation given in eq(\ref{gilm}) allows for a simple extension to the $q$-deformed case if one identifies the operators $\{ h_+, h_-, h_0 \}$ with the operators $\{ a^\dagger, a, N^\prime \}$, the latter satisfying the HY algebra with $\displaystyle N^\prime = N + \frac{1}{2}$. Further, one demands that the operators $\{ J_+, J_-, J_0 \}$ obey the $q^{\frac{1}{2}}$-deformed $su(2)$ algebra. In particular, one can easily work out the commutation relation $[h_+,h_-]$, or equivalently $[a^\dagger, a]$, explicitly to get \begin{eqnarray} \lbrack h_+, h_- \rbrack & = & [a^\dagger, a] \nonumber \\ & = & \mu^2 [J_+, J_-] \nonumber \\ & = & \mu^2 \frac{q^{J_0} - q^{-{J_0}}} {q^{\frac{1}{2}} -q^{-\frac{1}{2}}} \nonumber \\ & = & \mu^2 \frac{q^{h_0}q^{-\frac{\eta}{2 \mu^2}\xi} - q^{-h_0}q^{\frac{\eta}{2 \mu^2}\xi} } {q^{\frac{1}{2}} -q^{-\frac{1}{2}}}. \label{lhs} \end{eqnarray} However, since the operators $\{ h, h^\dagger, h_0 \}$, or equivalently $\{ a, a^\dagger, N^\prime \}$, obey the HY algebra, one can also work out the commutation relation in eq(\ref{lhs}) in terms of the operator $h_0$. A straightforward computation yields \begin{eqnarray} \lbrack h_+, h_- \rbrack & = & [h_0 - \frac{1}{2}] - [h_0 + \frac{1}{2}] \nonumber \\ & = & - \frac{q^{h_0} + q^{-h_0}}{q^{\frac{1}{2}} + q^{-\frac{1}{2}}}. \label{rhs} \end{eqnarray} Consistency of the expressions in eq(\ref{lhs}) and eq(\ref{rhs}) requires: \begin{subeqnarray} \frac{\mu^2}{q^{\frac{1}{2}} -q^{-\frac{1}{2}}} q^{-\frac{\eta}{2\mu^2}\xi} & = & - \frac{1}{q^{\frac{1}{2}} + q^{-\frac{1}{2}}}, \\ \frac{\mu^2}{q^{\frac{1}{2}} -q^{-\frac{1}{2}}} q^{\frac{\eta}{2\mu^2} \xi} & = & \frac{1}{q^{\frac{1}{2}} + q^{-\frac{1}{2}}}.
\label{coef} \end{subeqnarray} It is straightforward to solve eq(\ref{coef}) for $\mu$ and $\eta\xi$, giving \begin{subeqnarray} \mu & = & \rm e^{-i \frac{\alpha^\prime}{2}}(\frac{q - 1}{q + 1})^{\frac{1}{2}} \\ \eta \xi & = & 2 \rm e^{-i \alpha^\prime} (\frac{q - 1}{q + 1}) \frac{i \alpha^\prime}{\ln q} \label{solve} \end{subeqnarray} where $\alpha^\prime = \frac{\pi}{2} + \ell\pi$ ($\ell \in {\bf Z}$) and we have appropriately chosen one branch when taking the logarithm of a complex number. Thus, we observe that the relation obtained in refs\cite{cel1,oh2} between the HY oscillator and the $su_{\sqrt{q}}(2)$ algebra can be regarded as the $q$-analogue of the transformation given in eq(\ref{gilm}) if we write \begin{equation} \left( \begin{array}{c} a^\dagger \\ a \\ N^\prime \\ 1 \end{array} \right) = \left( \begin{array}{cccc} \rm e^{-i\frac{\alpha^\prime}{2}} (\frac{q - 1}{q + 1})^{\frac{1}{2}} & 0 & 0 & 0 \\ 0 & \rm e^{-i\frac{\alpha^\prime}{2}} (\frac{q - 1}{q + 1})^{\frac{1}{2}} & 0 & 0 \\ 0 & 0 & 1 & \frac{i}{\ln q}\\ 0 & 0 & 0 & 1 \end{array} \right) \left( \begin{array}{c} J_+ \\ J_-\\ J_0 \\ \alpha^\prime 1 \end{array}\right) \label{cele} \end{equation} in which one easily identifies the quantities $\mu$, $\eta$ and $\xi$ in eq(\ref{gilm}) as $\displaystyle \mu = \rm e^{-i\frac{\alpha^\prime}{2}} (\frac{q - 1}{q + 1})^{\frac{1}{2}} $, $\displaystyle \eta = \frac{2i \rm e^{-i \alpha^\prime}}{\ln q} (\frac{q - 1}{q + 1})$ and $\xi = \alpha^\prime $. We emphasize again that the operators $J_\pm$ and $J_0$ in this case obey the $q^\frac{1}{2}$-deformed commutation relations in eq(\ref{cr})\cite{cel1,oh2}. In the limit $q \rightarrow 1$, this transformation is again singular, but the commutation relations for the oscillator algebra remain well-defined and reduce to those of the undeformed oscillator algebra.
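The solution (\ref{solve}) can be verified directly with complex arithmetic: substituting $\mu$ and $\eta\xi$ back into both relations of eq(\ref{coef}) for a sample real $q > 1$ and the branch $\ell = 0$. A Python sketch (all numerical values are illustrative choices; the principal branch of the logarithm is used throughout):

```python
import cmath

q = 1.7                                    # sample deformation parameter
ell = 0
alpha = cmath.pi / 2 + ell * cmath.pi      # alpha' = pi/2 + l*pi

mu2 = cmath.exp(-1j * alpha) * (q - 1) / (q + 1)            # mu^2
eta_xi = 2 * cmath.exp(-1j * alpha) * ((q - 1) / (q + 1)) \
         * (1j * alpha / cmath.log(q))                      # eta * xi

pref = mu2 / (q**0.5 - q**(-0.5))
target = 1 / (q**0.5 + q**(-0.5))
# q^{z} with the principal branch of the logarithm
qpow = lambda z: cmath.exp(z * cmath.log(q))

lhs_a = pref * qpow(-eta_xi / (2 * mu2))   # eq (coef a): should equal -target
lhs_b = pref * qpow(+eta_xi / (2 * mu2))   # eq (coef b): should equal +target
print(abs(lhs_a + target) < 1e-12, abs(lhs_b - target) < 1e-12)
```

The check works because $\eta\xi/(2\mu^2) = i\alpha^\prime/\ln q$, so $q^{\pm\eta\xi/(2\mu^2)} = {\rm e}^{\pm i\alpha^\prime}$, and $(q-1)/(q^{1/2}-q^{-1/2}) = q^{1/2}$.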
Furthermore, for generic $q$, the coproduct, counit and antipode of the $q$-deformed $su(2)$ carry over directly through the transformation, endowing the HY oscillator with a Hopf structure. This Hopf structure however breaks down in the limit $q \rightarrow 1$, whereas the Hopf structure of $su_{\sqrt{q}}(2)$ becomes cocommutative in the same limit. From refs \cite{oh2,fuji1}, it is not difficult to show that the positive norm requirement and the truncation condition for the states of the HY $q$-oscillator can be in conflict with each other. Thus the HY $q$-oscillator algebra is not in general the same as the $su_{\sqrt{q}}(2)$ algebra. \begin{comment} In particular, when $q= \rm e^{i\epsilon}$, one finds that the contraction in eq(\ref{cele}) takes the form: \begin{equation} \left( \begin{array}{c} a_+ \\ a \\ N \\ 1 \end{array} \right) = \left( \begin{array}{cccc} \rm e^{-i\frac{\alpha^\prime}{2}} \sqrt{\tan(\frac{\epsilon}{2})} & 0 & 0 & 0 \\ 0 & \rm e^{-i\frac{\alpha^\prime}{2}} \sqrt{\tan(\frac{\epsilon}{2})} & 0 & 0 \\ 0 & 0 & 1 & \frac{1}{\epsilon}\\ 0 & 0 & 0 & 1 \end{array} \right) \left( \begin{array}{c} J_+ \\ J_-\\ J_0 \\ \alpha^\prime 1 \end{array}\right). \label{cele2} \end{equation} It is not difficult to check that the operators $\{ a, a^\dagger, N \}$ satisfy the HY oscillators in eq(\ref{hyo}) but not the MB oscillator relations in eq(\ref{macf}). One also notes that since $\ln \exp^{i \epsilon} \end{comment} The MB oscillator algebra can be shown, via the map $a = q^{N/2} A$, $a^\dagger = A^\dagger q^{N/2}$, to be equivalent to the algebra ${\cal A}_q$ with operators $\{ A, A^\dagger, N \}$ satisfying \begin{subeqnarray} \lbrack A, A^\dagger \rbrack & = & q^{-2N} \\ \lbrack N, A \rbrack & = & - A \\ \lbrack N, A^\dagger \rbrack & = & A^\dagger.
\end{subeqnarray} In fact, Chaichian and Kulish \cite{chai} have shown that the map \begin{subeqnarray} A & = & \lim_{s \rightarrow \infty} \frac{\sqrt{q - q^{-1}} }{q^s} J_+ \\ A^\dagger & = & \lim_{s \rightarrow \infty} \frac{\sqrt{q - q^{-1}} }{q^s} J_- \\ N & = & s - J_0 \end{subeqnarray} allows the contraction of $su_q(2)$ to the MB $q$-oscillator algebra. Note that this contraction lifts the highest weight representation to infinity, so that there exists the infinite tower of states needed for the oscillator algebra ${\cal A}_q$. Although this contraction does not induce a coproduct for $\{ A, A^\dagger, N \}$, it admits a coaction $\Psi: {\cal A}_q \rightarrow {\cal A}_q \otimes SU_q(2)$ given by \begin{subeqnarray} \Psi(N) & = & N - J_0, \\ \Psi(A) & = & A q^{-J_0} + \sqrt{q - q^{-1}} q^{-N} J_+, \\ \Psi(A^\dagger) & = & A^\dagger q^{-J_0} + \sqrt{q - q^{-1}} q^{-N} J_-. \end{subeqnarray} This coaction satisfies the coassociativity and counit axioms, namely \begin{subeqnarray} (\Psi \otimes 1) \circ \Psi &=& (1 \otimes \Psi) \circ \Psi \\ (1 \otimes \epsilon) \circ \Psi & = & 1 \end{subeqnarray} where $\epsilon$ is the counit. Further, one easily checks that $\Psi$ respects the homomorphism axiom, namely \begin{equation} \Psi([x, y]) = [\Psi(x), \Psi(y)] \end{equation} where $x, y \in \{ A, A^\dagger, N \}$. In the framework of the In\"{o}n\"{u}-Wigner transformation, this corresponds to the singular transformation \begin{equation} \left( \begin{array}{c} A \\ A^\dagger \\ N \\ 1 \end{array} \right) = \left( \begin{array}{cccc} \displaystyle \frac{\sqrt{q - q^{-1}}}{q^s} & 0 & 0 & 0 \\ 0 & \displaystyle \frac{\sqrt{q - q^{-1}}}{q^s} & 0 & 0 \\ 0 & 0 & -1 & s\\ 0 & 0 & 0 & 1 \end{array} \right) \left( \begin{array}{c} J_+ \\ J_-\\ J_0 \\ 1 \end{array}\right) . \end{equation} The contraction from $su_q(2)$ to the MB oscillator algebra occurs in the singular limit $s \rightarrow \infty$, but in this case, the natural coproduct of $su_q(2)$ does not survive the limit.
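This contraction can be watched converging numerically. In the spin-$s$ representation of $su_q(2)$ one has $[J_+, J_-]\,|s,m\rangle = [2m]\,|s,m\rangle$, so with the normalization $\sqrt{q-q^{-1}}/q^s$ appearing in the transformation matrix, and $m = s - N$, the rescaled commutator should approach $q^{-2N}$ as $s \rightarrow \infty$ for fixed $N$. A Python sketch (the sample values of $q > 1$, $N$ and the chosen spins are arbitrary):

```python
def qnum(x, q):
    """q-bracket [x] = (q^x - q^-x)/(q - q^-1)."""
    return (q**x - q**(-x)) / (q - 1.0/q)

q, N = 1.4, 2                       # fixed q > 1 and fixed oscillator level N
target = q**(-2 * N)                # expected [A, A^dagger] eigenvalue q^{-2N}

errors = []
for s in (5, 10, 20):
    m = s - N                       # N = s - J_0 pins the su_q(2) weight m
    c2 = (q - 1.0/q) / q**(2 * s)   # (sqrt(q - q^-1)/q^s)^2
    comm = c2 * qnum(2 * m, q)      # c^2 [J_+, J_-] eigenvalue on |s, m>
    errors.append(abs(comm - target))

# the error shrinks as the contraction limit s -> infinity is approached
print(errors[0] > errors[1] > errors[2], errors[-1] < 1e-5)
```

Explicitly, $c^2 [2m] = q^{-2N} - q^{2N-4s}$, so the deviation from $q^{-2N}$ is exponentially small in $s$.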
This contraction is essentially similar to the one proposed by J. Ng \cite{ng}. A different contraction, proposed by Celeghini et al \cite{cel1,cel2}, involves the transformation \begin{equation} \left( \begin{array}{c} B \\ B^\dagger \\ N \\ H \\ \omega \end{array} \right) = \left( \begin{array}{ccccc} \eta & 0 & 0 & 0 & 0 \\ 0 & \eta & 0 & 0 & 0 \\ 0 & 0 & -1 & \eta^{-2} & 0 \\ 0 & 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & \eta^{-2} \end{array} \right) \left( \begin{array}{c} J_+ \\ J_-\\ J_0 \\ K \\ \log q \end{array}\right) \end{equation} where $K$ is the so-called $U(1)$ generator. Under this transformation, the operators $\{ B, B^\dagger, N, H \}$ obey, in the limit $\eta \rightarrow 0$, the relations \begin{subeqnarray} \lbrack B, B^\dagger \rbrack & = & \displaystyle \frac{\sinh (\frac{\omega H}{2})}{\frac{\omega}{2}} \\ \lbrack N, B \rbrack = - B, & \lbrack N, B^\dagger \rbrack = B^\dagger, & \lbrack H, N \rbrack = 0 \\ \lbrack H, B \rbrack = & \lbrack H, B^\dagger \rbrack = & 0. \end{subeqnarray} This contraction induces a coalgebraic structure inherited from the original Hopf algebra of $su_q(2)$. \begin{comment} , namely \begin{subeqnarray} \Delta(B) & = & \rm e^{-\omega H /4} \otimes B + B \otimes \rm e^{-\omega H /4} \\ \Delta(B^\dagger) & = & \rm e^{-\omega H /4} \otimes B^\dagger + B^\dagger \otimes \rm e^{-\omega H /4} \\ \Delta (N) & = & 1 \otimes N + N \otimes 1 \\ \Delta (H) & = & 1 \otimes H + H \otimes 1. \end{subeqnarray} \end{comment} The algebra generated by the operators $\{ B, B^\dagger, H, N \}$ is not quite the $q$-deformed oscillator algebra, although we recover the usual undeformed oscillator in the limit $\omega \rightarrow 0$. \section{Representations} We can gain some insight into the linear transformations encountered in the previous section by looking more closely at a representation of the HY oscillator algebra.
To obtain a representation of the HY algebra\cite{oh2}, we note that $N$ commutes with $a^\dagger a$ and $a a^\dagger$. As a result, we can construct a vector $|\psi_0>$ which is a simultaneous eigenstate of $N$ and $a^\dagger a$, so that \begin{subeqnarray} N |\psi_0> & = & \nu_0 |\psi_0> \\ a^\dagger a |\psi_0> & = & \lambda_0 |\psi_0> \end{subeqnarray} where $\nu_0$ and $\lambda_0$ are the corresponding eigenvalues. We shall further assume that the operator $N$ is Hermitian, so that its eigenvalue $\nu_0$ is real. \begin{comment} If we demand hermiticity for the operator $N$ then one can show that $|q| =1$. This is not necessarily the case if we drop the hermiticity condition. Here, we shall not demand hermiticity of $N$ and allow $\nu_0$ to be arbitrary. \end{comment} From the eigenstate $|\psi_0>$, one can construct other eigenstates of $N$ by defining \begin{subeqnarray} |\psi_n> & = & (a^\dagger)^n |\psi_0> \\ |\psi_{-n}> & = & a^n |\psi_0> \end{subeqnarray} for positive integers $n$. With these definitions, one easily shows that \begin{subeqnarray} a^\dagger |\psi_n> & = & |\psi_{n+1}> \\ a^\dagger |\psi_{-n}> & = & \lambda_{-n+1}|\psi_{-n+1}> \\ a |\psi_n> & = & \lambda_n |\psi_{n-1}> \\ a |\psi_{-n}> & = & |\psi_{-n-1}> \\ N |\psi_{\pm n}> & = & (\nu_0 \pm n) |\psi_{\pm n}> \end{subeqnarray} where \begin{subeqnarray} \lambda_n & = & \lambda_0 + \frac{q^{\frac{1}{2}n} - q^{-\frac{1}{2}n}}{q^{\frac{1}{2}} - q^{-\frac{1}{2}}} \frac{q^{\nu_0 + \frac{n}{2}} + q^{-\nu_0 - \frac{n}{2}}}{q^{\frac{1}{2}} + q^{-\frac{1}{2}}} \\ & =& \lambda_0 + [n + \nu_0] - [\nu_0]. \label{lambdan} \end{subeqnarray} Note that the oscillator algebra still admits an infinite tower of states, and the representation at this stage differs from that of ${\cal U}_q(su(2))$, whose finite-dimensional representations require a highest weight state. One then imposes a truncation on the tower of states and sets $a |\psi_0> = 0$, giving $\lambda_0 = 0$ and $|\psi_{-n}>= 0$ for any $n > 0$.
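The two forms of $\lambda_n$ in eq(\ref{lambdan}) — the explicit product form and the compact form $\lambda_0 + [n + \nu_0] - [\nu_0]$ — can be checked against each other numerically, assuming the bracket convention $[x] = (q^x - q^{-x})/(q - q^{-1})$. A Python sketch with arbitrary sample values ($q$ is taken real here purely for illustration; the truncation discussion below requires $|q| = 1$):

```python
def qnum(x, q):
    """q-bracket [x] = (q^x - q^-x)/(q - q^-1)."""
    return (q**x - q**(-x)) / (q - 1.0/q)

q, lam0, nu0 = 1.35, 0.4, 0.25      # arbitrary sample values

def lam_explicit(n):
    """Explicit form of lambda_n from eq (lambdan)."""
    s = q**0.5 - q**(-0.5)
    c = q**0.5 + q**(-0.5)
    return lam0 + (q**(0.5*n) - q**(-0.5*n)) / s \
                * (q**(nu0 + 0.5*n) + q**(-nu0 - 0.5*n)) / c

def lam_bracket(n):
    """Compact form lambda_0 + [n + nu_0] - [nu_0]."""
    return lam0 + qnum(n + nu0, q) - qnum(nu0, q)

agree = all(abs(lam_explicit(n) - lam_bracket(n)) < 1e-12 for n in range(-5, 6))
print(agree)
```

The agreement rests on factoring $q^{n+\nu_0} - q^{\nu_0} - q^{-n-\nu_0} + q^{-\nu_0} = (q^{n/2} - q^{-n/2})(q^{\nu_0 + n/2} + q^{-\nu_0 - n/2})$ and $q - q^{-1} = (q^{1/2} - q^{-1/2})(q^{1/2} + q^{-1/2})$.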
Let $|\psi_k>$ be the highest weight state, so that $a^\dagger |\psi_k> = 0$ for some integer $k > 0$. Since ${\cal C}_2=a^\dagger a - [N] = a a^\dagger - [N + 1]$, one finds by considering the action of ${\cal C}_2$ on $|\psi_k>$ that the following condition must be satisfied: \begin{equation} [\nu_0 + k + 1] = [\nu_0] .\label{trun} \end{equation} For real $q$, $k = -1$ is the only solution, but this is not acceptable. However, for complex $q$ with $|q| = 1$, truncation is possible. It is not difficult to solve eq(\ref{trun}) for $\nu_0$ in this case. Writing $q = \rm e^{i \epsilon}$, one can show that for arbitrary $\epsilon$, eq(\ref{trun}) leads to \begin{equation} \nu_0 \epsilon = \frac{-(k+1)\epsilon}{2} + (\ell + \frac{1}{2}) \pi, \mbox{\hspace{3cm}} \ell \in {\bf Z}. \label{soln} \end{equation} This result need not be consistent with the condition for positivity of norms \cite{oh2,fuji1}, which by eq(\ref{lambdan}) reads \begin{equation} \lbrack n + \nu_0 \rbrack - \lbrack \nu_0 \rbrack \geq 0 \label{positive} \end{equation} for all integers $n \leq k $. To see this, we substitute eq(\ref{soln}) into the left hand side of condition (\ref{positive}) and find that $$ \lbrack n + \nu_0 \rbrack - [\nu_0] = \frac{(-1)^\ell}{\sin \epsilon} \{ \cos(\frac{k + 1}{2} - n) \epsilon - \cos \frac{k + 1}{2} \epsilon \} $$ which need not be positive for arbitrary $\epsilon$. This means that for arbitrary $\epsilon$, we cannot proceed to identify the HY oscillator algebra with the $su_{\sqrt{q}}(2)$ algebra. To identify the two algebras, we have to truncate the tower of states of the HY oscillator algebra. However, the truncation and positive norm requirements can both be satisfied only for certain values of $\epsilon$. In short, the HY oscillator algebra and the $su_{\sqrt{q}}(2)$ algebra are equivalent only for certain $q$-values. \begin{comment} For complex $\nu_0$ and real $q$, truncation is again possible.
Solving the eq(\ref{trun}) and working out explicitly yields \begin{equation} q^{\nu_0} = \rm e^{i\frac{(2\ell + 1)}{2}\pi} q^{-\frac{k + 1}{2}}. \label{solve} \end{equation} This value of $\nu_0$ must be substituted into the expression for the eigenvalue $\lambda_n$ in eq(\ref{lambdan}) and one quickly finds that \begin{equation} \lambda_n = -i \rm e^{-\ell \pi} \frac{q^{\frac{1}{2}}- q^{-\frac{1}{2}}}{q^{\frac{1}{2}} + q^{-\frac{1}{2}}} [n]_{q^{1/2}}[k + 1 - n]_{q^{1/2}} \label{lambdaval} \end{equation} where we have used the notation $[x]_{q^{1/2}}$ to denote the expression $\displaystyle \frac{q^{\frac{x}{2}} - q^{-\frac{x}{2}}} {q^{\frac{1}{2}} - q^{-\frac{1}{2}}}$. The eigenstates have not been normalized. To do this, we first introduce normalized states by defining \begin{equation} |k,n> = \frac{|\psi_{k,n}>}{<\psi_{k,n}|\psi_{k,n}>^{1/2}} \end{equation} where $0 \leq n \leq k$ for some non-negative integer $k$. One then quickly establishes the results \begin{subeqnarray} a^\dagger |k,n> & = & \left( -i \rm e^{-\ell \pi} \frac{q^{\frac{1}{2}}- q^{-\frac{1}{2}}}{q^{\frac{1}{2}} + q^{-\frac{1}{2}}} [n+1]_{\sqrt{q}}[k - n]_{\sqrt{q}} \right)^{\frac{1}{2}} \nonumber \\ & & \mbox{\hspace{3cm}}|k, n+1> \\ a |k,n> & = & \left( -i \rm e^{-\ell \pi} \frac{q^{\frac{1}{2}}- q^{-\frac{1}{2}}}{q^{\frac{1}{2}} + q^{-\frac{1}{2}}} [n]_{\sqrt{q}}[k + 1 - n]_{\sqrt{q}} \right)^{\frac{1}{2}} \nonumber \\ & & \mbox{\hspace{3cm}} |k, n-1> \\ N |k,n>& = & (\frac{-k-1}{2} + \frac{(2\ell + 1)\pi)}{2i\log q} + n) \nonumber \\ & & \mbox{\hspace{4cm}} |k,n> \end{subeqnarray} where the value $k$ labels the different representation. 
To establish the relation of this representation to $q$-deformed $su(2)$ algebra, we need to relabel the parameters $k$ and $n$ by replacing them by $2j$ and $m+j$ respectively and identify $|k,n>$ with the eigenstate $|j,m>$ giving \begin{subeqnarray} J_+ |j,m> & = & \rm e^{\ell \pi/2} \left( \frac{q^{\frac{1}{2}}+ q^{-\frac{1}{2}}}{q^{\frac{1}{2}} - q^{-\frac{1}{2}}} \right)^{\frac{1}{2}} a^\dagger |j,m> \nonumber \\ & = & ([j - m]_{q^{1/2}}][j+m+1]_{q^{1/2}})^{\frac{1}{2}} \nonumber \\ & & \mbox{\hspace{3cm}} |j,m+1> \\ J_- |j,m> & = & i \rm e^{\ell \pi/2} \left( \frac{q^{\frac{1}{2}}+ q^{-\frac{1}{2}}}{q^{\frac{1}{2}} - q^{-\frac{1}{2}}} \right)^{\frac{1}{2}} a |j,m> \nonumber \\ & = & ([j + m]_{q^{1/2}}][j-m+1]_{q^{1/2}})^{\frac{1}{2}} \nonumber \\ & & \mbox{\hspace{3cm}} |j,m-1> \\ J_0 |j,m> & =& \left( N + \frac{1}{2} - \frac{(2\ell + 1)\pi}{2\log q}\right)|j,m> \nonumber \\ & = & m |j,m> \end{subeqnarray} revealing the transformations in eq(\ref{cele}). Indeed, the result implies that unless we truncate the tower of states, the HY algebra is in general not equivalent to ${\cal U}_{q^{1/2}}(su(2))$. Furthermore, the limit $q \rightarrow 1$, it is impossible for the condition for truncation to occur since $k \geq 0$ and naively it seems to extend the range of values of $J_0$ to infinity. \end{comment} \vspace{1cm} \centerline{\large \bf Acknowledgments} \noindent We wish to thank Prof. Kazuo Fujikawa for many helpful suggestions and discussions.
{"url":"https:\/\/stats.stackexchange.com\/questions\/246284\/test-zero-correlation-coefficient","text":"# test zero correlation coefficient\n\nIs there any commonly used method to test the zero correlation between $X$ and $Y$ using the sample correlation coeffcient from two samples $\\{x_k\\}_{k=1}^n$ and $\\{y_k\\}_{k=1}^n$?\n\nIn another word, if $\\boldsymbol{x}=(x_1,\\dots,x_n)^\\top$ and $\\boldsymbol{y}=(y_1,\\dots,y_n)^\\top$ are two standardized vectors, i.e., $\\sum_{i=1}^nx_i=0$ and $\\sqrt{\\sum_i x_i^2}=1$, is there any method to numerically judge $\\boldsymbol{x}\\perp \\boldsymbol{y}$?\n\nActually $\\boldsymbol{x}$ and $\\boldsymbol{y}$ are two observed noisy data vectors, so I do not think it is appropriate to use $\\boldsymbol{x}^\\top \\boldsymbol{y}=0$ as the criterion.\n\n\u2022 You coukd get a null distribution by using a permutation test approach ... \u2013\u00a0kjetil b halvorsen Nov 16 '16 at 16:22\n\u2022 @kjetilbhalvorsen, any article I can refer to? \u2013\u00a0John Nov 16 '16 at 16:27\n\u2022 Perhaps this Q&A stats.stackexchange.com\/questions\/61026\/\u2026 \u2013\u00a0mdewey Nov 16 '16 at 16:32\n\u2022 In Efron's book about the bootstrap, bootsrapping the corelation coefficient is the first example. For permutation methods: projecteuclid.org\/euclid.ss\/1113832732 Else, for a more useful answer, you shoukd tell us the context of your question. 
\u2013\u00a0kjetil b halvorsen Nov 16 '16 at 16:52\n\nAssuming your observations follow a bivariate normal distribution, there's a test based on t-statistic for the Pearson correlation coefficient which gives the test statistic $$t = r \\sqrt{\\frac{n-2}{1-r^2}}.$$ This has a t-distribution with $n-2$ degrees of freedom.\nIf your x and y are reasonably normal and $n$ is big enough, you can use the cor.test function in R, like this:\nx <- runif(100)","date":"2021-03-09 02:23:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.578193187713623, \"perplexity\": 614.7698827827273}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-10\/segments\/1614178385534.85\/warc\/CC-MAIN-20210308235748-20210309025748-00253.warc.gz\"}"}
null
null
import sys sys.path.append("../") import bandicoot as bc import glob import os records_path = 'users_bandicoot/' antenna_file = 'antennas.csv' indicators = [] for f in glob.glob(records_path + '*.csv'): user_id = os.path.basename(f)[:-4] try: B = bc.read_csv(user_id, records_path, antenna_file, describe=False) metrics_dict = bc.utils.all(B) except Exception as e: metrics_dic = {'name': user_id, 'error': True} indicators.append(metrics_dict) bc.io.to_csv(indicators, 'bandicoot_indicators_full.csv')
{ "redpajama_set_name": "RedPajamaGithub" }
6,537
Spanish media reacted with relief late Monday in their online editions after the defending champions edged through to the quarter-finals of Euro 2012with a 1-0 tightrope-walk over Croatia. Jesus Navas struck a late winner but with Italy beating Ireland 2-0 the Spanish needed the result as a defeat would have put them out. "Suffering towards the quarters," was the verdict of sports daily AS which was not impressed with the overall performance. "A goal from Navas three minutes from time relieved a Spain which did not play well and was a bundle of nerves for much of the match," added AS. El Pais daily gave thanks for two good saves from reliable keeper Iker Casillas as he kept the score goalless before Navas finally rescued the situation. "Casillas avoids catastrophe," was El Pais' verdict. Marca sports magazine was also relieved as it admitted in an editorial: "We suffered like never before but we won as always," in allusion to glory at Euro 2008 and the 2010 World Cup, triumphs which did away with four decades of underachievement. "The team went through top of the group after a poor match which it wasn't able to settle until the 88th minute. (Coach Vicente) Del Bosque's side suffered for a long while during which time a Croatian goal would have eliminated them from the Euros."
{ "redpajama_set_name": "RedPajamaC4" }
9,568
Victor Shoichi Koga (1935 - 3 de noviembre de 2018) fue un artista marcial ruso-japonés. Uno de los principales impulsores del sambo a nivel internacional, fue el primero en introducir esta disciplina en Japón y pervivió como uno de sus mayores expertos hasta su muerte. Biografía Koga nació en Hailar, en la entonces región de Manchuria, de padre japonés y madre rusa. Tras la Segunda Guerra Mundial, su familia se mudó a Kyushu, Japón, y el joven Victor fue enviado a vivir con parientes en Tokio. Allí comenzó su carrera marcial cuando se unió al club de lucha amateur de la universidad de Nihon, donde estudiaba medicina. Su actividad en este deporte fue especialmente distinguida, participando en el Festival Nacional de Deportes de Japón y en el propio campeonato nacional. Luego, tras graduarse, entrenó judo en el dojo de Riichiro Watanabe en Yokosuka. En 1965, Koga obtuvo la colaboración de su colega en la lucha y el judo, Ichiro Hatta, para introducir el arte marcial ruso del sambo en tierras niponas, formando así la Federación Japonesa de Sambo. Koga se desplazó a la Unión Soviética a fin de entrenar en este estilo, y a continuación viajó por numerosos países para promover su enseñanza antes de regresar a Japón. Diez años después, como recompensa por su labor, se le concedió el título de Maestro de los Deportes en sambo. Entre sus aprendices más conocidos se encontraría Satoru Sayama, fundador de la primera promoción de artes marciales mixtas de la historia, Shooto. Koga falleció en noviembre de 2018. Referencias Nacidos en 1935 Fallecidos en 2018 Practicantes de artes marciales Practicantes de artes marciales de Japón
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,533
Гай Ати́ний Лабео́н (; III—II век до н. э.) — римский политический деятель, народный трибун 197 года до н. э., претор 195 года до н. э. Биография Гай Атиний занимал должность народного трибуна в 197 году до н. э. Он внёс предложение о выводе пяти колоний, а позже (совместно с коллегой Гаем Афранием) отклонил требования консулов, Гая Корнелия Цетега и Квинта Минуция Руфа, о совместном триумфе.претором В 195 году до н. э. Лабеон был претором по делам иноземцев. Примечания Литература Атинии Народные трибуны Преторы
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,523
Sacred Heart Fires is a tradition, practised in Tyrol, of lighting bonfires in the mountains on the Catholic feast of the Most Sacred Heart of Jesus Christ. The custom of lighting fires in the mountains goes back to pre-Christian times: ritual fires were lit on high ground on the day of the summer solstice. Later, because the solstice falls close to the feast day of John the Baptist, many places began to light "St John's fires" instead. The feast of the Most Sacred Heart of Jesus Christ is also celebrated in late June, and so the meaning of the mountain fires gradually shifted. When French troops approached the borders of Tyrol in 1796, representatives of the nobility, clergy, peasantry and townspeople gathered in Bolzano. After they had agreed on the necessary defensive measures, the abbot of Stams proposed entrusting the defence of Tyrol to the Most Sacred Heart of Jesus Christ. On 1 July 1796, before the famous image of the Heart of Jesus painted in 1770, a solemn vow was made to celebrate a festive service every year on the feast of the Most Sacred Heart of Jesus Christ. The tradition of lighting fires on this feast is therefore associated with that event. The Sacred Heart fires are lit by community organisations or groups of friends at prominent, well-visible spots both in the Austrian Tyrol and in Italian South Tyrol. The firewood often has to be carried up to the mountaintop for several hours. The fires are sometimes arranged in the shape of a cross, a heart, or the monograms of Jesus Christ, IHS or INRI.
package org.zaproxy.zap.control;

import java.io.IOException;
import java.io.InputStream;
import java.util.List;

import org.apache.commons.configuration.HierarchicalConfiguration;

/**
 * Helper class that reads a {@link AddOn#MANIFEST_FILE_NAME manifest file}.
 *
 * @since 2.4.0
 */
public class ZapAddOnXmlFile extends BaseZapAddOnXmlData {

    private static final String ASCANRULE_ELEMENT = "ascanrule";
    private static final String ASCANRULES_ALL_ELEMENTS = "ascanrules." + ASCANRULE_ELEMENT;
    private static final String PSCANRULE_ELEMENT = "pscanrule";
    private static final String PSCANRULES_ALL_ELEMENTS = "pscanrules." + PSCANRULE_ELEMENT;
    private static final String FILE_ELEMENT = "file";
    private static final String FILES_ALL_ELEMENTS = "files." + FILE_ELEMENT;
    private static final String LIB_ELEMENT = "lib";
    private static final String LIBS_ALL_ELEMENTS = "libs." + LIB_ELEMENT;
    private static final String BUNDLE_ELEMENT = "bundle";
    private static final String BUNDLE_PREFIX_ATT = "bundle[@prefix]";
    private static final String HELPSET_ELEMENT = "helpset";
    private static final String HELPSET_LOCALE_TOKEN_ATT = "helpset[@localetoken]";

    private List<String> ascanrules;
    private List<String> pscanrules;
    private List<String> files;
    private List<String> libs;

    private String bundleBaseName;
    private String bundlePrefix;

    private String helpSetBaseName;
    private String helpSetLocaleToken;

    public ZapAddOnXmlFile(InputStream is) throws IOException {
        super(is);
    }

    @Override
    protected void readAdditionalData(HierarchicalConfiguration zapAddOnXml) {
        ascanrules = getStrings(zapAddOnXml, ASCANRULES_ALL_ELEMENTS, ASCANRULE_ELEMENT);
        pscanrules = getStrings(zapAddOnXml, PSCANRULES_ALL_ELEMENTS, PSCANRULE_ELEMENT);
        files = getStrings(zapAddOnXml, FILES_ALL_ELEMENTS, FILE_ELEMENT);
        libs = getStrings(zapAddOnXml, LIBS_ALL_ELEMENTS, LIB_ELEMENT);

        bundleBaseName = zapAddOnXml.getString(BUNDLE_ELEMENT, "");
        bundlePrefix = zapAddOnXml.getString(BUNDLE_PREFIX_ATT, "");

        helpSetBaseName = zapAddOnXml.getString(HELPSET_ELEMENT, "");
        helpSetLocaleToken = zapAddOnXml.getString(HELPSET_LOCALE_TOKEN_ATT, "");
    }

    public List<String> getAscanrules() {
        return ascanrules;
    }

    public List<String> getPscanrules() {
        return pscanrules;
    }

    public List<String> getFiles() {
        return files;
    }

    /**
     * Gets the libraries of the add-on.
     *
     * @return the libraries, never {@code null}.
     * @since 2.9.0
     */
    public List<String> getLibs() {
        return libs;
    }

    /**
     * Gets the base name of the bundle.
     *
     * @return the base name of the bundle, never {@code null}.
     * @since 2.8.0
     */
    public String getBundleBaseName() {
        return bundleBaseName;
    }

    /**
     * Gets the prefix of the bundle.
     *
     * @return the prefix of the bundle, never {@code null}.
     * @since 2.8.0
     */
    public String getBundlePrefix() {
        return bundlePrefix;
    }

    /**
     * Gets the base name of the HelpSet file.
     *
     * @return the base name of the HelpSet file, never {@code null}.
     * @since 2.8.0
     */
    public String getHelpSetBaseName() {
        return helpSetBaseName;
    }

    /**
     * Gets the locale token for the HelpSet file.
     *
     * @return the locale token for the HelpSet file, never {@code null}.
     * @since 2.8.0
     */
    public String getHelpSetLocaleToken() {
        return helpSetLocaleToken;
    }
}
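For readers unfamiliar with the manifest layout those constants address, the element paths such as "ascanrules.ascanrule" and "bundle[@prefix]" correspond to an XML file shaped roughly like the sample below. The Python snippet is purely illustrative — the actual parsing is done by the Java class via commons-configuration, and the sample manifest content (rule class names, file path, bundle names) is invented:

```python
import xml.etree.ElementTree as ET

# A made-up manifest fragment containing the elements the Java class reads.
MANIFEST = """\
<zapaddon>
  <ascanrules><ascanrule>org.example.MyActiveRule</ascanrule></ascanrules>
  <pscanrules><pscanrule>org.example.MyPassiveRule</pscanrule></pscanrules>
  <files><file>resources/data.txt</file></files>
  <bundle prefix="example">org.example.Messages</bundle>
</zapaddon>
"""

root = ET.fromstring(MANIFEST)

# "ascanrules.ascanrule" / "files.file": repeated child elements -> lists.
ascanrules = [e.text for e in root.findall("ascanrules/ascanrule")]
files = [e.text for e in root.findall("files/file")]

# "bundle" element text and its "prefix" attribute ("bundle[@prefix]").
bundle = root.find("bundle")
bundle_base = bundle.text if bundle is not None else ""
bundle_prefix = bundle.get("prefix", "") if bundle is not None else ""
```

The same pattern applies to the `pscanrules`, `libs`, and `helpset` entries: list-valued fields come from repeated child elements, scalar fields from an element's text or attribute, with the empty string as the default when the element is absent.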
https://mathematica.stackexchange.com/questions/63117/how-to-create-a-1d-vector-by-selecting-a-straight-line-in-a-given-angle-of-2d-ar/63120#63120

# How to create a 1D vector by selecting a straight line in a given angle of 2D array

Suppose I have an image created by the code provided below. I want to select a line (1D vector) $\left(y-y_{0}\right)=n\left(x-x_{0}\right)$ from the output image (2D matrix) by specifying $n$ and $\left(x_{0},y_{0}\right)$. Can anyone explain how to do this?

Code:

w1 = 600;
w2 = 600;
mask = Sum[
    RotateRight[
     Exp[I k] DiskMatrix[20, {w1, w2}], {k 100, k 200}], {k, 2}] +
   Exp[I 5] DiskMatrix[20, {w1, w2}] +
   Sum[RotateLeft[Exp[I k] DiskMatrix[20, {w1, w2}], {k 200, k}], {k, 1}]
<< Developer`
Image[RotateRight[
    Re[Fourier[mask]], {w1/2, w2/2}]\[TensorProduct]ToPackedArray[{1.0, 0.3, 0.1}],
 Magnification -> 0.4]

• The value for the variable dat is missing. I could not create any output graphics. Oct 14 '14 at 14:57
• Sorry about that, fixed it Oct 14 '14 at 14:59
• By the way, you can control the brightness of the image using for example Image[10 RotateRight[... instead of Image[RotateRight[.... However, in this case the image is just being used for display purposes, and it sounds like you're interested in taking a line-shaped subset of the array Re[Fourier[mask]] itself. Is that what you're trying to do? Oct 14 '14 at 15:02
• @DumpsterDoofus Yup, that's right. And I need to be able to easily select the angle of said subset. Oct 14 '14 at 15:10

Answer:

I'm sure you know that a line will almost never run through exact pixel positions. Therefore, you have two choices. First, you interpolate your image matrix, and then you can sample as many points along the line as you like. In this case I probably wouldn't recommend it, because the values depend on the interpolation itself. Another, very easy way is to use an algorithm that draws an approximate line in a pixel grid.

Luckily, we already had a post about this. There you can read how it is done, and with the provided function you get the pixel coordinates along a line between points $p_0$ and $p_1$:

bresenham[p0_, p1_] :=
 Module[{dx, dy, sx, sy, err, newp},
  {dx, dy} = Abs[p1 - p0];
  {sx, sy} = Sign[p1 - p0];
  err = dx - dy;
  newp[{x_, y_}] :=
   With[{e2 = 2 err},
    {If[e2 > -dy, err -= dy; x + sx, x],
     If[e2 < dx, err += dx; y + sy, y]}];
  NestWhileList[newp, p0, # =!= p1 &, 1]]

Then you take your image and use ImageData to get the pixel matrix. Extracting the diagonal of the image is as simple as

img = Import["http://i.stack.imgur.com/FQthc.png"];
Extract[ImageData[img], bresenham[{1, 1}, ImageDimensions[img]]]

Final note: pay attention that the image matrix is reversed compared to what you see in the image. I guess you can do the transformation between a line in two-point form and yours by yourself.

• Thanks for the help, but when I try to use the code I get an error message. Any idea why? Oct 15 '14 at 10:15
• w1 = 600; w2 = 600; mask = Sum[RotateRight[Re[Exp[I 56 k] DiskMatrix[20, {w1, w2}]], {k 100, k 200}], {k, 2}] + Re[Exp[I 5] DiskMatrix[20, {w1, w2}]] + Sum[RotateLeft[Re[Exp[I 5 k] DiskMatrix[20, {w1, w2}]], {k 200, k}], {k, 1}] Image[0.5 mask, Magnification -> 1] << Developer` ft = (Re[Fourier[mask]])^2 ift = Image[RotateRight[ft, {w1/2, w2/2}]\[TensorProduct]ToPackedArray[{1.0, 0.3, 0.1}], Magnification -> 1] bresenham[p0_, p1_] := Module[{dx, dy, sx, sy, err, newp}, {dx, dy} = Abs[p1 - p0]; {sx, sy} = Sign[p1 - p0]; err = dx - dy; Oct 15 '14 at 10:17
• @AsafMiron No, I don't know why. You have to ensure that your points $p_0$ and $p_1$ lie within your image matrix, but otherwise it's pretty fail-safe. I cannot run the code in your comment since it is cropped. Oct 15 '14 at 11:19
• newp[{x_, y_}] := With[{e2 = 2 err}, {If[e2 > -dy, err -= dy; x + sx, x], If[e2 < dx, err += dx; y + sy, y]}]; NestWhileList[newp, p0, # =!= p1 &, 1]] Image[Extract[ImageData[ift], bresenham[{0, 0}, ImageDimensions[ift]]]] Oct 15 '14 at 11:46
• Yeah, sorry about that. Here is the remaining part. I'm very new to Mathematica and to any use of a computer to do calculations, so I might be missing something simple. Oct 15 '14 at 11:49
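The error-accumulating line-stepping scheme in the answer is language-independent; as a minimal sketch, here is the same Bresenham scheme translated to Python (the function name `bresenham_line` is my own):

```python
def bresenham_line(p0, p1):
    """Return the grid points of an approximate straight line from p0 to p1.

    Same scheme as the Mathematica bresenham function above: keep a running
    error term, and at each step move one pixel in x and/or y depending on
    how twice the error compares with dy and dx.
    """
    (x, y), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x), abs(y1 - y)
    sx = 1 if x1 > x else -1
    sy = 1 if y1 > y else -1
    err = dx - dy
    points = [(x, y)]
    while (x, y) != (x1, y1):
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
        points.append((x, y))
    return points
```

Sampling a matrix `m` along such a line is then a matter of indexing, e.g. `[m[i][j] for i, j in bresenham_line((0, 0), (len(m) - 1, len(m[0]) - 1))]`, the analogue of the `Extract[ImageData[img], ...]` call above.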
http://mathhelpforum.com/number-theory/93203-elementary-insight-into-bertrand-s-postulate.html

## Elementary insight into Bertrand's postulate

Bertrand's postulate states that for any positive integer $n>1$ there is a prime $p$ with $n < p < 2n$.

The following is not a rigorous proof but rather something I have thought of which makes the theorem intuitively evident, unlike the rigorous proofs which exist (all of which I have seen are very elegant, but tackle the problem in a roundabout fashion).

Suppose we are given an interval $I$ of integers. Call $s_p$ the proportion of integers in $I$ which are multiples of $p$. Then

$s_p\leq \frac{1}{p}$

is trivial. In particular,

$R(n) = \prod_{p\leq n}\Big(1-\frac{1}{p}\Big)$

can be thought of as the proportion of all integers which are not divisible by any of the primes less than $n$. If we have an interval containing $k$ numbers, we can expect approximately $kR(n)$ of those numbers to be divisible by none of the primes less than $n$.

Note that we have the ridiculously bad bound

$R(n) = \prod_{p\leq n}\Big(1-\frac{1}{p}\Big) \geq \prod_{j=2}^n\Big(1-\frac{1}{j}\Big) = \frac{(n-1)!}{n!}=\frac{1}{n}$

so that in particular any interval containing $n$ numbers should certainly be expected to contain an integer not divisible by any prime less than $n$, since $nR(n)\geq 1$. In particular, when this is applied to the interval $]n,2n]$, we see that it should very reasonably contain such an integer, which, in this case, would mean a prime.
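The heuristic is easy to check numerically for small $n$ (a sanity check, not a proof — the argument above is explicitly non-rigorous). The sketch below verifies both the crude bound $nR(n) \geq 1$ and the presence of a prime in $]n, 2n]$:

```python
def primes_upto(n):
    """All primes <= n via a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, ok in enumerate(sieve) if ok]

def R(n):
    """The heuristic density prod_{p <= n} (1 - 1/p) from the post."""
    r = 1.0
    for p in primes_upto(n):
        r *= 1 - 1 / p
    return r

# Check the bound n * R(n) >= 1 and Bertrand's postulate for small n.
for n in range(2, 300):
    assert n * R(n) >= 1 - 1e-9          # the "ridiculously bad" bound
    assert any(n < p <= 2 * n for p in primes_upto(2 * n))  # a prime in ]n, 2n]
```

Of course, $nR(n) \geq 1$ for the specific interval $]n, 2n]$ only makes a prime *heuristically expected* there; turning that expectation into a guarantee is exactly the hard part the rigorous proofs address.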
{"url":"http:\/\/www.alphaclinic.com.hk\/androcentric-words-pwujgoe\/discriminant-analysis-stata-96ec3b","text":"# discriminant analysis stata\n\nDiscriminant analysis is a 7-step procedure. See also Stata Data Analysis Examples Discriminant Function Analysis One way from PSYCHOLOGY 107 at Queens College, CUNY Open a new project or a new workbook. Columns A ~ D are automatically added as Training Data. One of the features of Stata is that the estimation commands (like discrim lda if you were using linear discriminant analysis) are accompanied by \"postestimation\" commands that give additional results. Import the data file \\Samples\\Statistics\\Fisher's Iris Data.dat; Highlight columns A through D. and then select Statistics: Multivariate Analysis: Discriminant Analysis to open the Discriminant Analysis dialog, Input Data tab. Discriminant analysis is particularly useful for multi-class problems. A given input cannot be perfectly predicted by \u2026 The purpose of discriminant analysis can be to find one or more of the following: a mathematical rule for guessing to which class an observation belongs, a set of linear combinations of the quantitative variables that best reveals the differences among the classes, or a subset of the quantitative variables that best reveals the differences among the classes. Discriminant Analysis. Absence of perfect multicollinearity. Likewise, practitioners, who are familiar with regularized discriminant analysis (RDA), soft modeling by class analogy (SIMCA), principal component analysis (PCA), and partial least squares (PLS) will often use them to perform classification. However, since the two groups overlap, it is not possible, in the long run, to obtain perfect accuracy, any more than it was in one dimension. Actually, for linear discriminant analysis to be optimal, the data as a whole should not be normally distributed but within each class the data should be normally distributed. 
Discriminant Analysis Akaike Information Criterion Linear Discriminant Analysis Location Model Asymptotic Distribution These keywords were added by machine and not by the authors. In, discriminant analysis, the dependent variable is a categorical variable, whereas independent variables are metric. Step 1: Collect training data. Dependent Variable: Website format preference (e.g. Discriminant analysis assumes covariance matrices are equivalent. Discriminant analysis is described by the number of categories that is possessed by the dependent variable. This chapter covers the basic objectives, theoretical model considerations, and assumptions of discriminant analysis and logistic regression. Univariate ANOVAs. When we have a set of predictor variables and we\u2019d like to classify a response variable into one of two classes, we typically use logistic regression.. Step 1: Load Necessary Libraries Linear Discriminant Analysis) or unequal (Quadratic Discriminant Analysis). Downloadable! Discriminant function analysis is similar to multivariate ANOVA but indicates how well the treatment groups or study sites differ with each other. PLS discriminant analysis can be applied in many cases when classical discriminant analysis cannot be applied. A range of techniques have been developed for analysing data with categorical dependent variables, including discriminant analysis, probit analysis, log-linear regression and logistic regression. The model is composed of a discriminant function (or, for more than two groups, a set of discriminant functions) based on linear combinations of the predictor variables that provide the best discrimination between the groups. Multiple Discriminant Analysis. Canonical discriminant analysis (CDA) and linear discriminant analysis (LDA) are popular classification techniques. $\\endgroup$ \u2013 Frank Harrell Jun 26 '15 at 18:36. Using QDA, it is possible to model non-linear relationships. 
LDA is very interpretable because it allows for dimensionality reduction. It appears you are using Stata's menus do to your analysis. You can assess this assumption using the Box's M test. Linear Discriminant Analysis (LDA)\u00b6 Strategy: Instead of estimating $$P(Y\\mid X)$$ directly, we could estimate: $$\\hat P(X \\mid Y)$$: Given the response, what is the distribution of the inputs. Means. It was originally developed for multivariate normal distributed data. It is used for compressing the multivariate signal so that a low dimensional signal which is open to classification can be produced. We wish to select the elements of v such that is a maximum. Discriminant Analysis Statistics. Quadratic Discriminant Analysis . are not very accurate (e.g., predict the probability of an event given a subject's sex). There are new discriminant analyse procedures in Stata 10. This process is experimental and the keywords may be updated as the learning algorithm improves. Displays total and group means, as well as standard deviations for the independent variables. Figure 1.1: Example of discriminant analysis with cluster one in red and cluster two in blue where the discriminant rule is the line of best t. a line of best t is a straight line that accurately represents the data on a scatter plot, i.e., a line is drawn through the center of a group of data points. Equality of covariance matrices: Activate this option if you want to assume that the covariance matrices associated with the various classes of the dependent variable are equal (i.e. $$\\hat P(Y)$$: How likely are each of the categories. The major difference is that PCA calculates the best discriminating components without foreknowledge about groups, Any combination of components can be displayed in two or three dimensions. Linear Discriminant Analysis are statistical analysis methods to find a linear combination of features for separating observations in two classes.. Homogeneity of covariances across groups. 
Principal Components Analysis (PCA) starts directly from a character table to obtain non-hierarchic groupings in a multi-dimensional space. Linear Discriminant Analysis\u00b6. This tutorial provides a step-by-step example of how to perform linear discriminant analysis in Python. Discriminant analysis builds a predictive model for group membership. Linear discriminant analysis would attempt to nd a straight line that reliably separates the two groups. It is easy to show with a single categorical predictor that is binary that the posterior probabilities form d.a. Use of Discriminant Analysis in Counseling Psychology Research Nancy E. Betz Ohio State University Discriminant analysis is a technique for the multivariate study of group differences. Real Statistics Data Analysis Tool: The Real Statistics Resource Pack provides the Discriminant Analysis data analysis tool which automates the steps described above. Linear discriminant analysis is a method you can use when you have a set of predictor variables and you\u2019d like to classify a response variable into two or more classes.. Discriminant analysis seeks out a linear combination of biomarker data for each treatment group that maximizes the difference between treatment groups or study sites for proper classification. Here, we actually know which population contains each subject. Discriminant analysis is very similar to PCA. The Flexible Discriminant Analysis allows for non-linear combinations of inputs like splines. Linear Discriminant Analysis Example. Available options are means (including standard deviations), univariate ANOVAs, and Box's M test. Discriminant Analysis Options in XLSTAT. For example, when the number of observations is low and when the number of explanatory variables is high. This is really a follow-up article to my last one on Principal Component Analysis, so take a look at that if you feel like it: Principal Component Analysis (PCA) 101, using R. 
Improving predictability and classification one dimension at a time! #3. after developing the discriminant model, for a given set of new observation the discriminant function Z is computed, and the subject\/ object is assigned to first group if the value of Z is less than 0 and to second group if more than 0. Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. Discriminant analysis is the oldest of the three classification methods. Discriminant analysis comprises two approaches to analyzing group data: descriptive discriminant analysis (DDA) and predictive discriminant analysis (PDA). Descriptives. Regular Linear Discriminant Analysis uses only linear combinations of inputs. Nonetheless, discriminant analysis can be robust to violations of this assumption. If the assumption is not satisfied, there are several options to consider, including elimination of outliers, data transformation, and use of the separate covariance matrices instead of the pool one normally used in discriminant analysis, i.e. The null hypothesis, which is statistical lingo for what would happen if the treatment does nothing, is that there is no relationship between consumer age\/income and website format preference. Training data are data with known group memberships. In this type of analysis, your observation will be classified in the forms of the group that has the least squared distance. Discriminant Analysis. For example, in the Swiss Bank Notes, we actually know which of these are genuine notes and which others are counterfeit examples. Discriminant analysis\u2013based classification results showed the sensitivity level of 86.70% and specificity level of 100.00% between predicted and original group membership. 
To contrast it with these, the kind of regression we have used so far is usually referred to as linear regression. Discriminant analysis is not as robust as some think. However, PDA uses this continuous data to predict group membership (i.e., How accurately can a classification rule classify \u2026 As in statistics, everything is assumed up until infinity, so in this case, when the dependent variable has two categories, then the type used is two-group discriminant analysis. Optimal Discriminant Analysis (ODA) is a machine learning algorithm that was introduced over 25 years ago to offer an alternative analytic approach to conventional statistical methods commonly used in research (Yarnold & Soltysik 1991). When there are missing values, PLS discriminant analysis \u2026 Then, we use Bayes rule to obtain the estimate: Note: Please refer to Multi-class Linear Discriminant Analysis for methods that can discriminate between multiple classes. Quadratic method RDA is a regularized discriminant analysis technique that is particularly useful for large number of features. Logistic regression and discriminant analysis are approaches using a number of factors to investigate the function of a nominally (e.g., dichotomous) scaled variable. format A, B, C, etc) Independent Variable 1: Consumer age Independent Variable 2: Consumer income. Both use continuous (or intervally scaled) data to analyze the characteristics of group membership. Binary that the posterior probabilities form d.a learning algorithm improves a step-by-step example of How to perform linear discriminant builds... Steps described above major difference is that PCA calculates the best discriminating components without foreknowledge groups. Type of analysis, your observation will be classified in the forms of the.! Accurate ( e.g., predict the probability of an event given a subject sex! Quadratic discriminant analysis can be robust to violations of this assumption using the Box 's M test Statistics. 
So that a low dimensional signal which is open to classification can be produced each. Multiple discriminant analysis and logistic regression analysis ( LDA ) are popular classification techniques standard deviations for the variables. Contrast it with these, the kind of regression we have used so far is referred! Group data: descriptive discriminant analysis ( LDA ) are popular classification techniques are missing values, PLS analysis... Columns a ~ D are automatically added as Training data added as Training data automatically as... The three classification methods ( CDA ) and linear discriminant analysis ( CDA ) and discriminant! Least squared distance not as robust as some think at 18:36 about groups means, as well as deviations... Be robust to violations of this assumption using the Box 's M test which! This assumption using the Box 's M test and which others are counterfeit examples would attempt to nd straight... The dependent variable analysis is not as robust as some think sex ) available options are (!, it is easy to show with a single categorical predictor that is binary that the posterior form! Particularly discriminant analysis stata for large number of observations is low and when the of! The characteristics of group membership for methods that can discriminate between Multiple classes data descriptive. May be updated as the learning algorithm improves signal so that a low dimensional signal which is open classification... Lda ) are popular classification techniques are means ( including standard deviations ), univariate,! When ( B - \u03bbW ) v = 0 Multiple classes is maximum. Described above easy to show with a single categorical predictor that is binary that the posterior probabilities form d.a as. Builds a predictive model for group membership of an event given a subject 's sex ) subject 's )! Posterior probabilities form d.a \u2026 discriminant analysis uses only linear combinations of inputs ( or intervally scaled ) data analyze. 
Possible to model non-linear relationships are means ( including standard deviations for the Independent variables experimental and the may... Univariate ANOVAs, and Box 's M test C, etc ) Independent variable 2: Consumer income attempt. Large number of categories that is binary that the posterior probabilities form d.a ), univariate,! So that a low dimensional signal which is open to classification can be robust to violations this! Values, PLS discriminant analysis would attempt to nd a straight line that reliably separates the two.. Any combination of components can be produced sex ) Training data ( \\hat P ( ). Of observations is low and when the number of categories that is possessed by the number of features automatically... To analyze the characteristics of group membership used for compressing the multivariate signal so that a dimensional. Components can be displayed in two or three dimensions LDA ) are popular classification techniques type of analysis your. Using Stata 's menus do to your analysis the three classification methods many cases when classical analysis... Attempt to nd a straight line that reliably separates the two discriminant analysis stata described above Multiple analysis. Model considerations, and assumptions of discriminant analysis can be applied in many cases when classical discriminant analysis statistical... Objectives, theoretical model considerations, and assumptions of discriminant analysis ) or unequal ( Quadratic analysis... Statistics Resource Pack provides the discriminant analysis allows for non-linear combinations of inputs categorical., theoretical model considerations, and Box 's M test it appears you are using Stata 's do... The Swiss Bank Notes, we actually know which of these are genuine Notes and which others counterfeit... Analysis, your observation will be classified in the Swiss Bank Notes, we use Bayes rule obtain... 
Of this assumption analysis methods to find a linear combination of components can be displayed in two or dimensions... \\$ discriminant analysis stata Frank Harrell Jun 26 '15 at 18:36 ) data to analyze the characteristics of membership... This type of analysis, your observation will be classified in the Swiss Bank Notes we... When there are missing values, PLS discriminant analysis would attempt to nd a line. Components without foreknowledge about groups learning algorithm improves the learning algorithm improves using Box... Keywords may be updated as the learning algorithm improves ) data to analyze the characteristics of membership. ( \\hat P ( Y ) \\ ): How likely are each of the categories is not as as. Multiple discriminant analysis categories that is possessed by the dependent variable is a discriminant. As robust as some think there are missing values, PLS discriminant analysis is not as robust some. V = 0 be displayed in two or three dimensions observations is low and when the number of.. Classification can be produced applied in many cases when classical discriminant analysis ( PDA ) classical discriminant analysis are analysis! Variables are metric classification can be displayed in two or three dimensions observations in two classes Bank,... Three classification methods of v such that is a categorical variable, whereas Independent variables is by! Event given a subject 's sex ) means, as well as deviations... Model for group membership scaled ) data to analyze the characteristics of group membership of 86.70 % specificity! To nd a straight line that reliably separates the two groups are metric is to... Three dimensions is possible to model non-linear relationships ( B - \u03bbW v. Genuine Notes and which others are counterfeit examples ( LDA ) are popular classification techniques new discriminant analyse procedures Stata. 
Discriminant analysis covers two related tasks: descriptive discriminant analysis (DDA) and predictive discriminant analysis (PDA). Both are statistical methods that find a combination of features separating observations into two or more classes, and both use continuous (or intervally scaled) data to analyze the characteristics of group membership. The dependent variable is a categorical variable, whereas the independent variables are metric. Typical output includes group means (including standard deviations) for the independent variables, univariate ANOVAs, and Box's M test, which is used to check sensitivity to violations of the equal-covariance assumption.
Canonical discriminant analysis (CDA) compresses the explanatory variables so that a low-dimensional signal which is open to classification can be produced; this is particularly useful when the number of explanatory variables is high. We select the elements of the weight vector $v$ so that the ratio of between-group to within-group variation is a maximum, which occurs when $(B - \lambda W)v = 0$.
In the Swiss Bank Notes example we actually know the population of each note — which notes are genuine and which are counterfeit — so part of the data can be added as training data and Bayes' rule used to obtain posterior probabilities of group membership. The fitted model reaches a sensitivity level of 86.70% and a specificity level of 100.00% between predicted and original group membership.
Classical linear discriminant analysis (LDA) uses only linear combinations of the inputs, so it is not possible to model non-linear relationships: with two classes, it looks for a straight line that reliably separates the two groups. Flexible discriminant analysis allows for non-linear combinations of inputs like splines and can be applied in many cases when classical discriminant analysis cannot. Multiple (multi-class) discriminant analysis extends the technique to more than two groups, and PLS discriminant analysis can be used when there are missing values. The analyses can be run in Stata, or step by step with linear discriminant analysis in Python.
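As a concrete illustration of predictive discriminant analysis, here is a minimal scikit-learn sketch on synthetic two-class data standing in for the Swiss Bank Notes problem (the class means, spreads, and accuracy threshold are made up for the example — the real data set is not bundled here):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
# Two synthetic Gaussian classes standing in for genuine vs. counterfeit notes
genuine = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))
counterfeit = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(200, 2))
X = np.vstack([genuine, counterfeit])
y = np.repeat([0, 1], 200)

lda = LinearDiscriminantAnalysis().fit(X, y)

# Posterior probabilities of group membership via Bayes' rule
post = lda.predict_proba(X)
assert np.allclose(post.sum(axis=1), 1.0)

# In-sample agreement between predicted and original group membership
accuracy = lda.score(X, y)
assert accuracy > 0.85
```

The same fitted model exposes the linear discriminant direction via `lda.coef_`, which is the estimated solution of the $(B - \lambda W)v = 0$ problem for two groups.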
\section{Introduction} The quantum game theory \cite{ME99, EW99, IQ05, FL05, IC06} has two aspects. From one side, it is an extension of conventional game theory with Hilbert space vectors and operators. From the other side, it is an attempt to reformulate the description of quantum information processing with the concept of payoff maximization. In conventional game theory, strategies of players are represented by real-valued vectors, and payoffs by real-valued matrices with no further specifications. In quantum game theory, they are replaced by complex {\it unit} vectors and {\it Hermitian} matrices. It appears that the criterion of mathematical beauty alone favors the latter over the former. Since the space of classical strategies forms a subset of the entire quantum strategy space, it is quite natural to regard the game theory formulated on Hilbert space as a logical extension of classical game theory. It is tempting to imagine that, in search of a natural extension, the quantum game theory could have eventually been found irrespective of the discovery of quantum mechanics itself. Crucial questions then arise: What is the {\it physical content} of quantum strategies? Which part of a quantum strategy is classically interpretable and which part purely quantum? Answers to these questions should also supply a key to understanding the mystery surrounding the ``quantum resolution'' of games with classical dilemmas \cite{BH01,EP02}. Obviously, the answers to these questions are to be obtained only through a consistent formulation of game strategies on Hilbert space. When that is achieved, it can be used as a springboard to deal with the second aspect of the quantum game theory; namely, quantum games played with microscopic objects in states with full quantum superposition and entanglement. In quantum information theory, the concept of efficiency occasionally arises. 
That would supply the payoff function once we are able to identify ``game players'' in the information processing. It should then become possible to reformulate the problem with the language of quantum games. In this note, we formulate quantum strategies for classical games with {\it diagonal payoff matrices}, and clarify the classical and quantum contents of the resulting payoff function. We will discover two striking features in the results: the existence of a third party, and the mixture of altruistic strategies. We also sketch the game theoretic formulation of quantum information processing through an example of Bell's experiment. We naturally recover Tsirelson's limit. \section{Game Strategy and Payoff on Hilbert Space} We start by considering $n$-dimensional Hilbert spaces ${\cal H}_A$ and ${\cal H}_B$ in which the strategies of the two players $A$ and $B$ are represented by vectors $ \left | \alpha \right>_A \in {\cal H}_A$ and $ \left | \beta \right>_B \in {\cal H}_B$. The space of {\it joint strategies} of the game is given by the direct product ${\cal H} ={\cal H}_A \times {\cal H}_B$. A vector in ${\cal H}$ representing a joint strategy of the two players can be written \cite{CT06} as \begin{eqnarray} \label{jointst} \left | \alpha, \beta; \gamma \right> = J(\gamma) \left | \alpha \right>_A \left | \beta \right>_B, \end{eqnarray} where the unitary operator $J(\gamma)$ provides quantum correlation ({\it e.g.,} entanglement) for the separable states $ \left | \alpha \right>_A \left | \beta \right>_B$. The two-body operator $J(\gamma)$ is independent of the players' choice and is determined by a third party, which can be regarded as a {\it coordinator} of the game. 
Once the joint strategy is specified with $J(\gamma)$, the players are to receive the payoffs, which are given by the expectation values of Hermitian operators $A$ and $B$: \begin{eqnarray} \label{QNash} \Pi_A(\alpha, \beta; \gamma) &=& \left < \alpha, \beta; \gamma | A | \alpha, \beta; \gamma \right > , \\ \nonumber \Pi_B(\alpha, \beta; \gamma) &=& \left < \alpha, \beta; \gamma | B | \alpha, \beta; \gamma \right > . \end{eqnarray} Both players try to optimize their strategy to gain the maximal payoff, and the result is the quantum version of the Nash equilibrium, where we have $(\alpha, \beta)= (\alpha^\star, \beta^\star)$ in the strategy space, at which point the payoffs separately attain the maxima as \begin{eqnarray} \label{AlBeNash} \left. \delta_\alpha \Pi_A (\alpha, \beta^\star; \gamma)\right|_{ \alpha^\star} = 0, \quad \left. \delta_\beta \Pi_B (\alpha^\star, \beta; \gamma) \right|_{\beta^\star} = 0, \end{eqnarray} under arbitrary variations in $\alpha$ and $\beta$. We express the individual strategies in terms of orthonormal basis strategies $\{ \left| i \right>\}$, $i = 1, ..., n$ which we regard as common to $A$ and $B$. \begin{eqnarray} \label{basis} \left | \alpha \right>_A = \sum_i \alpha_i \left | i \right>_A , \quad \left | \beta \right>_B = \sum_i \beta_i \left | i \right>_B , \end{eqnarray} with complex numbers $\alpha_i$, $\beta_i$ normalized as $\sum_i \vert \alpha_i \vert^2 = \sum_i \vert \beta_i \vert^2 = 1$. We introduce the swap operator $S$ by \begin{eqnarray} S \left | i , j \right> = \left | j , i \right> \end{eqnarray} for the states $\left | i , j \right> = \left | i \right>_A \left | j \right>_B$, and then $S \left | \alpha, \beta \right> = \left | \beta, \alpha \right>$ for general separable states $\left | \alpha, \beta \right> = \left | \alpha \right>_A \left | \beta \right>_B$ results. 
We further introduce operators $C$ and $T$ by \begin{eqnarray} C \left | i , j \right> = \left | {\bar i}, {\bar j} \right> , \quad T \left | i , j \right> = \left | {\bar j}, {\bar i} \right> , \end{eqnarray} where the bar represents the complementary choice; ${\bar i} = (n-1)-i$. The operator $C$ is the simultaneous renaming (conversion) of strategy for two players, and $T$ is the combination $T = CS$. These operators $\{S, C, T\}$ commute among themselves and satisfy \begin{eqnarray} S^2=C^2=T^2 = I, \nonumber \\ T=SC, S=CT, C=TS, \end{eqnarray} where $I$ is the identity operator. They form the dihedral group $D_2$. By defining the {\it correlated payoff operators} \begin{eqnarray} {\cal A}(\gamma) = J^\dagger(\gamma)A J(\gamma) , \quad {\cal B}(\gamma) = J^\dagger(\gamma)B J(\gamma) , \end{eqnarray} we have $\Pi_A (\alpha, \beta; \gamma)$ $=\left< \alpha,\beta \right| {{\cal A}(\gamma)} \left| \alpha,\beta \right> $. We consider diagonal payoff matrices whose elements are given by \begin{eqnarray} \label{diagAB} \left< i', j' \right| A \left| i, j \right> \!\!\! &=& \!\!\! A_{ij} \delta_{i' i}\delta_{j' j}, \\ \nonumber \left< i', j' \right| B \left| i, j \right> \!\!\! &=& \!\!\! B_{ij} \delta_{i' i}\delta_{j' j}. \end{eqnarray} Observe that we have \begin{eqnarray} \Pi_A (\alpha, \beta; 0) = \sum_{i,j} x_i A_{ij} y_j , \\ \nonumber \Pi_B (\alpha, \beta; 0)= \sum_{i,j} x_i B_{ij} y_j , \end{eqnarray} where $x_i = \vert \alpha_i\vert^2$ and $y_j = \vert \beta_j\vert^2$ are the probabilities of choosing the strategies $ \left | i \right>_A$ and $ \left | j \right>_B$ respectively. This means that, at $\gamma = 0$, our quantum game reduces to the classical game with the payoff matrix $A_{ij}$ under mixed strategies. \section{Altruistic Contents and Quantum Interferences in Quantum Games} Let us now restrict ourselves to two-strategy games $n = 2$. 
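As a quick numerical sanity check of these relations (a NumPy sketch, not part of the original paper), the permutation matrices for $S$, $C$, $T$ at $n = 2$ can be built explicitly and the quoted $D_2$ relations verified:

```python
import numpy as np
from itertools import product

# Basis |i,j> for n = 2, ordered (0,0), (0,1), (1,0), (1,1)
states = list(product(range(2), repeat=2))
idx = {s: k for k, s in enumerate(states)}

def op(f):
    """Matrix of the operator |i,j> -> |f(i,j)>."""
    M = np.zeros((4, 4))
    for (i, j), col in idx.items():
        M[idx[f(i, j)], col] = 1.0
    return M

S = op(lambda i, j: (j, i))          # swap
C = op(lambda i, j: (1 - i, 1 - j))  # simultaneous conversion
T = op(lambda i, j: (1 - j, 1 - i))  # T = CS
I = np.eye(4)

# D2 group relations quoted in the text
assert np.allclose(S @ S, I) and np.allclose(C @ C, I) and np.allclose(T @ T, I)
assert np.allclose(T, S @ C) and np.allclose(S, C @ T) and np.allclose(C, T @ S)
assert np.allclose(S @ C, C @ S)     # the three operators commute
assert np.allclose(S + T - C, I)     # relation valid only for n = 2
```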
The unitary operator $J(\gamma)$ then admits the form \begin{eqnarray} \label{Hparam1} J(\gamma) = \, e^{i \gamma_1 S / 2} e^{i \gamma_2 T / 2} , \end{eqnarray} where $\gamma = (\gamma_1, \gamma_2)$ are real parameters. Note that, on account of the relation $S+T-C=I$ valid for $n = 2$, only two operators are independent in the set $\{S, C, T\}$. The correlated payoff operator ${\cal A}(\gamma)$ is split into two terms \begin{eqnarray} \label{Agam} {\cal A}(\gamma) = {\cal A}^{\rm pc}(\gamma) + {\cal A}^{\rm in}(\gamma) \end{eqnarray} where ${\cal A}^{\rm pc}$ is the ``pseudo classical'' term and ${\cal A}^{\rm in}$ is the ``interference'' term given, respectively, by \begin{eqnarray} \label{Agam01} {\cal A}^{\rm pc}(\gamma) \!\!\!&=&\!\!\! \cos^2{ {\gamma_1\over 2}} A + ( \cos^2{ {\gamma_2\over 2}} - \cos^2{ {\gamma_1\over 2}} ) S A S + \sin^2{ {\gamma_2\over 2}} C A C , \nonumber \\ {\cal A}^{\rm in}(\gamma) \!\!\!&=&\!\!\! { {i } \over {2} } \sin \gamma_1(AS - SA) + { {i} \over {2} } \sin \gamma_2(AT - TA) . \end{eqnarray} Correspondingly, the full payoff is also split into two contributions from ${\cal A}^{\rm pc}$ and ${\cal A}^{\rm in}$ as $\Pi_A=$ $\Pi_A^{\rm pc}+ \Pi_A^{\rm in}$. To evaluate the payoff, we may choose both $\alpha_0$ and $\beta_0$ to be real without loss of generality, and adopt the notations $(\alpha_0, \alpha_1)=(a_0, a_1 e^{i\xi})$ and $(\beta_0, \beta_1)=(b_0, b_1 e^{i\chi})$. The outcome is \begin{eqnarray} \label{payof0} & & \Pi_A^{\rm pc}(\alpha,\beta; \gamma) = \sum_{i,j} { a_i^2 b_j^2 {\cal A}^{\rm pc}_{i j} }(\gamma) , \\ \nonumber & & \Pi_A^{\rm in}(\alpha,\beta; \gamma) = - a_0 a_1 b_0 b_1 [ G_+(\gamma) \sin(\xi + \chi) + G_-(\gamma) \sin(\xi - \chi) ] , \quad \end{eqnarray} with ${\cal A}^{\rm pc}_{i j} (\gamma) = \left< i, j \right| {\cal A}^{\rm pc}(\gamma) \left| i, j \right>$ and \begin{eqnarray} \label{FGH} G_+(\gamma) \!\!\!& = &\!\!\! (A_{00} - A_{11}) \sin\gamma_2, \\ \nonumber G_-(\gamma) \!\!\!& = &\!\!\! 
(A_{01} - A_{10})\sin\gamma_1. \end{eqnarray} Completely parallel expressions are obtained for the payoff operator ${\cal B}(\gamma)$ and the payoff $\Pi_B(\alpha,\beta; \gamma)$ of player $B$. The above split of the payoff shows that the quantum game consists of two ingredients. The first is the pseudo classical ingredient associated with ${\cal A}^{\rm pc}(\gamma)$, whose form indicates that we are, in effect, simultaneously playing three different classical games, {\it i.e.,} the original classical game $A$, and two types of ``converted'' games, specified by diagonal matrices $SAS$ and $CAC$ with the mixture specified by given $\gamma_1$ and $\gamma_2$. Regarding $\gamma$ as tunable parameters, we see that the quantum game contains a {\it family} of classical games that includes the original game. The second ingredient of the quantum game is the purely quantum component ${\cal A}^{\rm in}(\gamma)$, which occurs only when both of the two players adopt quantum strategies with $a_0 a_1 b_0 b_1 \ne 0$ and non-vanishing phases $\xi$ and $\chi$. The structure of $\Pi_A^{\rm in}$ suggests that this interference term cannot be simulated by a classical game and hence represents the {\it bona fide} quantum aspect. We further look into the pseudo classical family to uncover its physical content. To that end, we assume that one of the coordinator's parameters, $\gamma_2$, is zero. We have \begin{eqnarray} \label{ABgamS} {\cal A}^{\rm pc}(\gamma_1) \!\!&=&\!\! \cos^2 \frac{\gamma_1}{2} A + \sin^2 \frac{\gamma_1}{2} S A S \\ \nonumber {\cal B}^{\rm pc}(\gamma_1) \!\!&=&\!\! \cos^2 \frac{\gamma_1}{2} B + \sin^2 \frac{\gamma_1}{2} S B S \end{eqnarray} The meaning of these payoff matrices becomes evident by considering a {\it symmetric game}, which is defined by requiring that the payoffs are symmetric for two players, namely $\Pi_A (\alpha, \beta; \gamma) = \Pi_B ( \beta, \alpha; \gamma)$. The game appears identical to both players $A$ and $B$. 
In this sense, a symmetric game is {\it fair} to both parties. It is easy to see that the condition of symmetry translates into the requirement $B = SAS$. We then have, for a symmetric game, \begin{eqnarray} \label{ABgamSS} {\cal A}^{\rm pc}(\gamma_1) \!\!&=&\!\! \cos^2 \frac{\gamma_1}{2} A + \sin^2 \frac{\gamma_1}{2} B \\ \nonumber {\cal B}^{\rm pc}(\gamma_1) \!\!&=&\!\! \cos^2 \frac{\gamma_1}{2} B + \sin^2 \frac{\gamma_1}{2} A . \end{eqnarray} This means that the pseudo classical game specified by the modified rules ${\cal A}(\gamma_1)$ and ${\cal B}(\gamma_1)$ can be interpreted as a game played with a mixture of {\it altruism}, {\it i.e.,} players taking the other party's interest into account along with their own self-interest \cite{CH03, CH05}. The degree of mixture of altruism is controlled by the correlation parameter $\gamma_1$. It is a well known fact that altruistic behavior is widespread among primates that lead a social life. It is also well known that the introduction of altruism ``solves'' such long-standing problems as the prisoner's dilemma, for which attempts at a solution within conventional game theories based solely on narrow egoistic self-interest have been notoriously difficult \cite{AX84,TA87}. If we fix the first correlation parameter to be $\gamma_1=\pi/2$, and assume a {\it T-symmetric} game $B = TAT$, we arrive at a parallel relation to (\ref{ABgamSS}), thereby showing that the pseudo classical family is essentially made up of classical games with altruistic modification specified by the coordinator's parameter $\gamma$. For detailed solutions of Nash equilibria with exhaustive classification according to the relative values of the payoff parameters, readers are referred to \cite{CT06, IT06, IC06}. \section{Bell Experiment as a Quantum Game} What we have done up to now amounts to ``quantizing'' classical games. 
With the advent of nanotechnology, however, it is now possible to actually set up a {\it game with quantum particles} as a laboratory experiment that has no classical analogue. For such quantum games, we have to allow arbitrary Hermitian payoff operators $A$ and $B$, removing the restriction to diagonal ones, (\ref{diagAB}). Without the diagonal condition, however, it turns out that the parametrization of Hilbert space ${\cal H}_A\times{\cal H}_B$ with the correlation operator (\ref{Hparam1}) is not completely valid. (It leaves certain relative phases between basis states fixed, which, for the case of diagonal payoff operators, does no harm.) Instead, we resort to the scheme devised by Cheon, Ichikawa and Tsutsui \cite{CIT06} that utilizes Schmidt decomposition \begin{eqnarray} \label{schmdR} \left| \Psi(\alpha,\beta; \eta) \right> = U(\alpha) \otimes U(\beta) \left| \Phi(\eta) \right> , \end{eqnarray} with ``initial'' correlated state \begin{eqnarray} \label{schmd} \left| \Phi(\eta) \right> = \cos{\frac{\eta_1}{2}} \left|0 0 \right> + e^{i\eta_2} \sin{\frac{\eta_1}{2}} \left|1 1 \right> , \end{eqnarray} and individual $SU(2)$ rotations $U(\alpha)$ and $U(\beta)$ that are controlled respectively by player $A$ and $B$. For definiteness we write \begin{eqnarray} U(\alpha) = \pmatrix{ \cos\frac{\theta_\alpha}{2} & - e^{-i\varphi_\alpha}\sin\frac{\theta_\alpha}{2} \cr e^{i\varphi_\alpha}\sin\frac{\theta_\alpha}{2} & \cos\frac{\theta_\alpha}{2} } . \end{eqnarray} The Schmidt state $\left| \Psi(\alpha,\beta; \eta) \right>$ covers {\it entire} Hilbert space ${\cal H}_A\times{\cal H}_B$. Note that the coordinator's parameters $\eta_1$ and $\eta_2$ have definite meaning as the measure of size and phase of {\it two-particle entanglement}. As an example of such quantum game, let us consider payoff operators \begin{eqnarray} A = B = \sqrt{2} \left( \sigma_x\otimes\sigma_x + \sigma_z\otimes\sigma_z \right) . 
\end{eqnarray} This is nothing other than the measurement operator for Bell's experiment, in which the projections of the two spin-$1/2$ particles specified by the state (\ref{schmdR}) are measured separately. Here we identify $\left| 0 \right>$ and $\left| 1 \right>$ as ``up'' and ``down'' states of spin 1/2 along the $z$ axis, namely \begin{eqnarray} \sigma_z \left| 0 \right> = \left| 0 \right>, \quad \sigma_z \left| 1 \right> = - \left| 1 \right> . \end{eqnarray} The spin projection of the first particle is measured either along the positive $x$ axis (whose value we call $P_{1}$) or along the positive $z$ axis (whose value is $P_{2}$) with random alternation. The spin projection of the second particle is measured either along the line $45$ degrees between positive $x$ and $z$ axes ($Q_{1}$), or along the line $45$ degrees between negative $x$ and positive $z$ axes ($Q_{2}$), again in random alternation. Suppose that both players are interested in maximizing the quantity \begin{eqnarray} \Pi \equiv P_1Q_1-P_2Q_2. \end{eqnarray} We can easily show that $\Pi$ is the common payoff to $A$ and $B$ given by \begin{eqnarray} \Pi = \Pi_A(\alpha,\beta,\eta) = \Pi_B(\alpha,\beta,\eta) = \left< \Psi(\alpha,\beta; \eta) \right| A \left| \Psi(\alpha,\beta; \eta) \right> . \end{eqnarray} The game now becomes one of quantum coordination between players $A$ and $B$ who both try to increase the common payoff $\Pi$ by respectively controlling the directions of spins with $U(\alpha)$ and $U(\beta)$. Considering the relation \begin{eqnarray} \left< \Psi(\alpha,\beta; \eta) \right| A \left| \Psi(\alpha,\beta; \eta) \right> = \left< \Phi(\eta) \right| (U^\dagger(\alpha)\otimes U^\dagger(\beta) \,A\, U(\alpha)\otimes U(\beta)) \left| \Phi(\eta) \right> . 
\end{eqnarray} we can also restate the game as two players, receiving the correlated two particle state $ \left| \Phi(\eta) \right>$, trying to maximize the common payoff $\Pi$ by rotating the directions of spin projection measurement: The player $A$ applies a common rotation $U(\alpha)$ to the directions $P_1$ and $P_2$, and the player $B$ applies another common $U(\beta)$ to $Q_1$ and $Q_2$. For a fixed set of entanglement parameters $(\eta_1, \eta_2)$, a straightforward calculation yields the Nash equilibrium that is specified by \begin{eqnarray} \theta_\alpha^\star = \theta_\beta^\star = {\it arbitrary}, \quad \varphi_\alpha^\star =\varphi_\beta^\star =0 , \end{eqnarray} for which, the Nash payoff is given by \begin{eqnarray} \Pi^\star(\eta_1,\eta_2)= \sqrt{2} ( 1+\sin{\eta_1}\cos{\eta_2} ) . \end{eqnarray} For particles with no entanglement, $\eta_1 = 0$, we obtain $\Pi^\star = \sqrt{2}$, which is the known maximum for two uncorrelated spins. For particles with maximum entanglement, $\eta_1 = \pi/2$ and phase $\eta_2 = 0$, we obtain the payoff $\Pi^\star = \sqrt{8}$, which is exactly on the Tsirelson's bound \cite{TS80}. This reformulation of Bell's experiment should give a hint for the way toward more general game-theoretic reformulation of quantum information processing. \begin{theacknowledgments} This work has been partially supported by the Grant-in-Aid for Scientific Research of Ministry of Education, Culture, Sports, Science and Technology, Japan under the Grant number 18540384. \end{theacknowledgments}
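The Nash payoff derived above is easy to verify numerically (a sketch outside the paper; the Nash point is evaluated at $U(\alpha)=U(\beta)=I$, i.e. $\theta=\varphi=0$):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = np.sqrt(2) * (np.kron(sx, sx) + np.kron(sz, sz))

def phi(eta1, eta2):
    """Schmidt state cos(eta1/2)|00> + e^{i eta2} sin(eta1/2)|11>."""
    v = np.zeros(4, dtype=complex)
    v[0] = np.cos(eta1 / 2)
    v[3] = np.exp(1j * eta2) * np.sin(eta1 / 2)
    return v

def payoff(eta1, eta2):
    v = phi(eta1, eta2)
    return float(np.real(np.conj(v) @ A @ v))

# Closed form sqrt(2)(1 + sin(eta1) cos(eta2)) for several eta
for e1, e2 in [(0.0, 0.0), (np.pi / 2, 0.0), (1.0, 0.7)]:
    assert abs(payoff(e1, e2) - np.sqrt(2) * (1 + np.sin(e1) * np.cos(e2))) < 1e-12

assert abs(payoff(0.0, 0.0) - np.sqrt(2)) < 1e-12        # no entanglement
assert abs(payoff(np.pi / 2, 0.0) - np.sqrt(8)) < 1e-12  # Tsirelson bound
```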
Q: Can we push custom logs from a function app (subscription A) into a Log Analytics workspace (subscription B)? I'm trying to push custom logs from a function app in subscription A into a Log Analytics workspace in subscription B. Using the ARM template from the git link below, I can successfully push logs from the function app (subscription A) to a Log Analytics workspace in the same subscription A. https://github.com/MarcelMeurer/FunctionApp-to-LogAnalytics But when trying to push across subscriptions I get a 400 Bad Request error. Is it possible to push logs into a Log Analytics workspace when the function app and the workspace are in different subscriptions? How can I resolve this issue? A: Please check if the workaround below helps fix the issue: I'm not aware of an ARM-template approach, but I tried using the portal to send the function app logs to a Log Analytics workspace from one subscription to another. I noticed that the target Log Analytics workspace should be in the same region as your existing Log Analytics workspace instance in both subscriptions. Otherwise, the push/transfer will fail when using CLI, PowerShell, or ARM management tools, and the workspace instances will not show in the Azure portal.
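Cross-subscription ingestion also works with the Log Analytics HTTP Data Collector API, because it authenticates against the target workspace's ID and shared key rather than anything subscription-scoped. A minimal sketch of the signature that API expects (the workspace ID and key below are placeholders, not real values):

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id, shared_key, rfc1123_date, content_length):
    """Authorization header value for the Data Collector API."""
    string_to_hash = (f"POST\n{content_length}\napplication/json\n"
                      f"x-ms-date:{rfc1123_date}\n/api/logs")
    decoded_key = base64.b64decode(shared_key)
    digest = hmac.new(decoded_key, string_to_hash.encode("utf-8"),
                      digestmod=hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

# The request then goes to (the workspace ID, not the subscription, selects the target):
#   POST https://{workspace_id}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01
# with headers: Authorization, Log-Type (custom table name), x-ms-date.
```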
A Tough Loss: UTEP Miner Quarterback Tim Curry Transferring By Adrian Mac Aug 18, 2010, 11:14pm MDT UTEP's quarterback depth took a hit tonight when quarterback Tim Curry decided to leave the program. It appears potential playing time was the determining factor in his decision. Bret Bloomquist of the El Paso Times reported the story on his blog. Miner Illustrated noted via twitter: From Coach Mike Price: Tim Curry will be leaving UTEP for a chance to play for three years at another school. Curry was considered the centerpiece of Mike Price's 2008 recruiting class. Rivals rated Curry, out of Elysian Fields, the 29th best quarterback prospect in the nation that year. Curry redshirted in 2008 and served as backup (along with James Thomas when healthy) to Trevor Vittatoe for most of last year. This spring, as Trevor Vittatoe missed the first several practices due to suspension and with Thomas out due to injury, Curry battled with walk-on Carson Meger for a chance to move up and seize the #2 spot on the depth chart. Curry, at 6'4, 210 lbs., has all the physical tools needed for a college quarterback at the D-1 level. Despite his advantages, both in size and arm strength over Meger, Carson's surprising, consistent play seemed to give him the nod over Curry. No doubt, the arrival of former New Mexico quarterback Tate Smith also had an impact on Curry's decision to leave the team. Rumblings from camp had Meger ahead of Curry with Smith showing potential as well. Bloomquist wrote: The '08 recruit Curry looked unlikely to hold off Tate Smith in the battle for No. 4 quarterback this fall. No hard feelings here, Curry worked hard as a Miner and we all wish him the best. If he feels he has a better chance to play elsewhere, I think that's something we can all understand. But, we do have to take a look at UTEP's QB situation. 
The Depth Chart After Trevor Vittatoe, senior James Thomas II is still listed as the primary backup. Thomas is more than capable of being effective but he's been hampered with injury problems throughout his career. He's also spent ample practice time at wide receiver. Thomas is good, and has been great as a change of pace "wildcat" type quarterback, but he isn't an every down threat passing the ball. In three years, Thomas has attempted just 13 passes on game day. That leaves Meger and Tate Smith. Meger looked good during spring ball. He's listed at 6'0, 195. After a scrimmage last spring, I wrote the following on Meger: Meger, a left handed quarterback, was effective in play action and looked confident under center for a redshirt freshman. With the desert wind gusting, Meger had good enough arm strength to connect on 12 yard out patterns thrown against his body to the far side of the field. He started out a sizzling 6/8 for 90 yards. Meger is a gamer. He's excellent in play action and will be good in the short passing game. Is he ready to play D-1 football? No. If asked to play this season there will be a steep learning curve. Down the road, he is a viable backup candidate with good potential. Tate Smith is still a bit of an unknown quantity. He played some at New Mexico. He definitely provides some much needed depth. From his official bio: completed 7-of-21 passes for 82 yards (2 INTs) while rushing 18 times for 25 yards ... saw his most extensive action against eighth-ranked BYU, when he was in for 17 plays and completed 5-of-10 passes for 60 yards with a pick ... led the Lobos 80 yards in six plays, down to the BYU 19, in the closing minutes before taking three sacks That leaves the true freshman Javia Hall. Vittatoe, Thomas, and Hall are the only scholarship quarterbacks now on the UTEP roster. Hall was rated the 34th best quarterback in the nation by Rivals last year and a three star prospect. He was stellar at Dallas Skyline High. 
He was so good there that replacing him was the focus of this Dallas Morning News article. DMN writer Rainer Sabin wrote that Hall was sorely missed at Skyline, more than players who signed at Texas and Oklahoma. One signed with Texas and another with Oklahoma. Two of their teammates accepted offers from Texas A&M and Colorado State. But of all the key contributors Skyline sent to Division I football programs last season, it was the guy who picked UTEP who was perhaps the most important one. His name was Javia Hall. He was the team's quarterback, and last season he threw for 2,600 yards, 33 touchdowns and only two interceptions. Hopefully, Javia Hall is as good as advertised. We may find out sooner than we ever thought possible.
## Algebra 1

$40$

The product or quotient of two numbers with the same sign is positive. Since $-5$ and $-8$ are both negative, the product is positive. Therefore, $-5(-8)=40$.
Q: If a segment of length 1 is randomly divided into n intervals, with what probability are all intervals are less than 1/k? If $n-1$ points are chosen at random on a line segment of length $1$ (with uniform distribution), thus dividing it into $n$ segments, what is the probability that no segment has a length greater than $1/k$? I've gotten this far as of now- For $k=2$, only one segment can be greater than $1/2$, so, the probability is just 1 - n times the probability of first segment being of length greater than $1/2$. so $P = 1-(n/2^{n-1})$ A: Let us write $$F_n(x) = \mathbb{P}\left(\max_{1\leq i \leq n} L_{n,i} \leq x \right), $$ where $L_{n,i}$ is the length of the $i$-th gap created by $n-1$ points chosen uniformly at random on $[0, 1]$, independent of each other. Now let $U_1, \cdots, U_n \sim \mathcal{U}[0,1]$ be independently chosen points on $[0, 1]$ and $L_{n+1,i}$ be the length of the corresponding $i$-th gap. Conditioning on $L_{n+1,1} = \min\{U_1,\cdots,U_n\}$, we easily check that \begin{align*} F_{n+1}(x) &= \mathbb{P}\left( \{ L_{n+1,1} \leq x \} \cap \Big\{ \max_{2\leq i \leq n+1} L_{n+1,i} \leq x \Big\} \right) \\ &= \sum_{k=1}^{n} \mathbb{P}\left( \{ U_k \leq x \} \cap \{ \forall l \neq k \ : \ U_l > U_k \} \cap \Big\{ \max_{2\leq i \leq n+1} L_{n+1,i} \leq x \Big\} \right) \\ &= \sum_{k=1}^{n} \mathbb{E} \left[ \mathbb{P}\left( \{ \forall l \neq k \ : \ U_l > U_k \} \cap \Big\{ \max_{2\leq i \leq n+1} L_{n+1,i} \leq x \Big\} \, \middle| \, U_k \right) \mathbf{1}_{\{ U_k \leq x \}} \right] \\ &= \sum_{k=1}^{n} \mathbb{E} \left[ F_n\left(\frac{x}{1-U_k}\right) (1 - U_k)^{n-1} \mathbf{1}_{\{ U_k \leq x \}} \right] \\ &= \int_{0}^{x \wedge 1} F_n\left(\frac{x}{1-u}\right) n(1-u)^{n-1} \, du \end{align*} Here, the last line follows from the fact that, given the value of $U_k$ and $U_l > U_k$ for $l \neq k$, points $\{U_l : l \neq k\}$ are i.i.d. and uniformly distributed over $[U_k, 1]$. 
With the initial condition $F_1(x) = \mathbf{1}_{[1,\infty)}(x)$, this completely determines $F_n$ at least theoretically. As to an exact formula, we claim that Claim. We have $$ F_n(x) = \sum_{k=0}^{n} (-1)^k \binom{n}{k}(1-kx)_{+}^{n-1}, \tag{*} $$ where we interpret $x_+^0 = \mathbf{1}_{\{x > 0\}}$ when $n = 1$. This easily follows from the recursive formula of $(F_n)$ together with the integration formula $$\int_{a}^{b} n x_+^{n-1} \, dx = b_+^n - a_+^n$$ for $a \leq b$ and $n \geq 1$. In particular, this tells that * *$F_n(\frac{1}{2}) = 1 - n(\frac{1}{2})^{n-1}$, *$F_n(\frac{1}{3}) = 1 - n(\frac{2}{3})^{n-1} + \frac{n(n-1)}{2}(\frac{1}{3})^{n-1}$, *$F_n(\frac{1}{4}) = 1 - n(\frac{3}{4})^{n-1} + \frac{n(n-1)}{2}(\frac{2}{4})^{n-1} - \frac{n(n-1)(n-2)}{6}(\frac{3}{4})^{n-1}$ and so forth. Addendum. The formula $\text{(*)}$ seems to suggest an inclusion-exclusion argument but I haven't tried pursuing this direction. A: Let $L_i$ be the length of some subsegment. We have that: $\mathbb{P}[L_i\leq1/k:\forall i]=1-\mathbb{P}[L_i>1/k:\exists i]$ Now $\mathbb{P}[L_i>1/k:\exists i]\implies$there are all $n-1$ points in a subsegment $(1-\frac{1}{k},1]$. Keep in mind that since $L_i>1/k$ the point forming $L_i$ is also in this length. Since points are uniformly distributed and $\mathbb{P}[\textrm{point in }(1-\frac{1}{k},1]]=1-\frac{1}{k}$, we get: $\mathbb{P}[n-1\textrm{ points in }(1-\frac{1}{k},1]]=\mathbb{P}[\textrm{point in }(1-\frac{1}{k},1]]^{n-1}=(1-\frac{1}{k})^{n-1}$ Hence $\mathbb{P}[L_i\leq1/k:\forall i]=1-(1-\frac{1}{k})^{n-1}$
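The closed form $(*)$ claimed in the first answer can be checked numerically (a quick sketch, not from the thread; the Monte Carlo tolerance is deliberately generous):

```python
import numpy as np
from math import comb

def F(n, x):
    """P(max gap <= x) when n-1 uniform points cut [0,1] into n gaps."""
    return sum((-1) ** k * comb(n, k) * max(1 - k * x, 0.0) ** (n - 1)
               for k in range(n + 1))

# Agrees with the k = 2 computation in the question: 1 - n / 2^{n-1}
for n in range(2, 10):
    assert abs(F(n, 0.5) - (1 - n * 0.5 ** (n - 1))) < 1e-12

assert abs(F(4, 0.2)) < 1e-12        # the max gap can never be below 1/n
assert abs(F(4, 1.0) - 1.0) < 1e-12  # the max gap is always <= 1

# Monte Carlo cross-check for n = 5, x = 0.3
rng = np.random.default_rng(0)
pts = np.sort(rng.random((100_000, 4)), axis=1)
edges = np.hstack([np.zeros((100_000, 1)), pts, np.ones((100_000, 1))])
mc = (np.diff(edges, axis=1).max(axis=1) <= 0.3).mean()
assert abs(mc - F(5, 0.3)) < 0.01
```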
Doing It Wrong archive

## Cheating Irises

Categories: Journal, Machine Learning
Published on: August 28, 2018

## HAL8999 7/100

I was sick yesterday but did spend some time looking over some "cheat sheets" that people had put together for various machine learning topics. Some were good, some were just stupid (I'm looking at you Machine Learning in Emoji). Also went through a very simple classifier based on the iris data set.

## Transformers, more than meets the eye

Categories: Journal, Machine Learning
Published on: August 26, 2018

## HAL8999 6/100

- Updated the jupyter notebooks for handson-ml from github and read through the Ch2 notebook to address the CategoricalEncoder issue from yesterday
- Looked at a basic transformer

Part of building a data pipeline is likely to include the creation of custom transformer classes to perform operations specific to the project or data source. For example, one of the products I work on stores xml data in a database with the newlines encoded as '\n'. When the data is pulled from the database those '\n' sequences are converted to newline characters before the xml is passed to the parser. It's a very simple operation but without it the data would fail xml validation.

The scikit-learn package provides a structure for building transformers for a data pipeline that is based on duck typing i.e. "looks like a duck, walks like a duck, etc" rather than through object inheritance.
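A transformer of that duck-typed kind, mirroring the '\n'-decoding example just described (the class name and exact behavior are illustrative, not from the original product code):

```python
from sklearn.base import BaseEstimator, TransformerMixin

class NewlineDecoder(BaseEstimator, TransformerMixin):
    """Duck-typed transformer: decodes literal '\\n' sequences into newlines."""

    def fit(self, X, y=None):
        return self                      # stateless: nothing to learn

    def transform(self, X):
        # Turn literal backslash-n sequences back into real newlines
        return [s.replace("\\n", "\n") for s in X]
```

Because it exposes `fit` and `transform` (and gets `fit_transform` for free from `TransformerMixin`), it slots straight into a `sklearn.pipeline.Pipeline`.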
Essentially, if your class has fit(X) and transform(X) methods, it counts as a transformer.\n\n## from handson-ml import BrokeAsFuck\n\nCategories: Journal, Machine Learning\nTags:\nPublished on: August 25, 2018\n\n## HAL8999 \u2013 5\/100\n\nToday while going back through the Hands On Machine Learning book Ch2 I learned that the CategoricalEncoder referenced in the section on handling categorical attributes still isn\u2019t in scikit-learn. I checked the reqirements.txt which shows scikit-learn=0.19.1. Checking my virtualenv, I should be good.\n\nTurns out that the CategoricalEncoder isn\u2019t going to be in scikit-learn until 0.20 so to get it you have to grab 0.20 from Github rather than just use pip.\n\nFucking hell\u2026\n\nSo, if you\u2019re going to write a book, it\u2019s probably a good idea to use the stable branch of your libraries rather than the bleeding edge dev branch.\n\nIt will be a good exercise to convert the book\u2019s example code to work with the standard OneHotEncoder but I\u2019ve always been a fan of \u201cjust works\u201d as a design principle.\n\n## Long days, no blog post\n\nCategories: Journal, Machine Learning\nTags:\nPublished on: August 23, 2018\n\n## HAL8999 \u2013 [3,4]\/100\n\n\u2022 Chapter 2 of Hands on ML continues\n\u2022 Creation of test sets\n\u2022 Stratified sampling\n\u2022 sklearn\u2019s StratifiedShuffleSplit\n\u2022 Visualizing data with matplotlib\n\u2022 Coorelation coefficients\n\n### Getting a good train-test split\n\nSince you can\u2019t train a model and just expect it to work well right out of the box it\u2019s standard practice to split off about 20% of the data set to test the model against. 
The naive way to do this is to just grab 20% of the data at random, but that runs into a number of issues:

• depending on how you do it, you may grab different train/test sets every time the model runs
• grabbing data points at random can let sampling bias creep in if you happen to get an unrepresentative sample

Solution?

### Stratified sampling

Rather than just grabbing data points at random, we can ensure that we get a more representative distribution of sampled data points for some attributes (sex, income, ethnic background, etc.) so that random selection hasn't introduced bias into the training and test sets.

In this example we can be pretty certain that median income correlates strongly with median housing price, and we want to be certain we get a representative distribution of districts with respect to median income. The way to do this is to add a column to the data set that groups median income into categories; we can then sample based on the category. This will improve our chance of getting a more representative sampling of the underlying median income attribute.

Once the test set has been selected out, we work entirely with the training set, so as to not introduce bias based on knowledge of the test data.

## One of these things is not like the other…

Categories: Journal, Machine Learning
Published on: August 20, 2018

## HAL8999 – 3/100

• Chapter 2 of Hands-On ML
• Cost functions
• virtualenv setup
• code to get the dataset

The chapter follows a rudimentary machine learning project from business case to final product. California census data is analyzed to build a model which will predict median housing price in a district based on other factors, using a linear regression model with a Root Mean Square Error (RMSE) function to measure performance, i.e. as a cost function.

$$RMSE(X, h) = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(h(x^{(i)}) - y^{(i)}\right)^{2}}$$

The function h is the "hypothesis" function which operates on the feature vector $x^{(i)}$. RMSE isn't the only cost function by any stretch of the imagination, but it seems to get a lot of use.

From this point the author goes through the dev environment setup process I went through a few days ago, and it's pretty clear from the instructions that the work is being done on a Mac.

## Steak and ML

Categories: Journal, Machine Learning
Published on: August 19, 2018

## Achievement: HAL 8999 – 2/100

• Completed chapter 1 of Hands-On ML and worked through the exercises
• Modified yesterday's example to also do k-nearest-neighbors with both three and four neighbors. Four neighbors was further from the linear regression than three, demonstrating that more is not always better.

Short list, but Ch1 is something of an overview, so a lot of concepts get thrown in without a lot of context or depth of discussion. I found I got to the end and had a hard time connecting what I'd read with the specific questions asked at the end of the chapter. I ended up paging back through the chapter to locate the answers to questions which were oddly specific, as opposed to focusing on the broad underlying concepts.

The fact that I did a chunk of the reading while grilling tri-tip and elote, and then later while in the post-steak-and-Mexican-corn food coma, might also be part of why I ended up paging back through the chapter so much.

## Sort yourself out

Categories: Journal, Machine Learning
Published on: August 18, 2018

## Achievement: HAL 8999 – 1/100

• Set up virtualenv for HAL8999
• Installed sklearn, pandas, numpy, matplotlib
• Unable to install tensorflow since I'm on Python 3.7 and the pip installs only work for 3.6 and earlier.
  I can sort that out later.
• Read up through Example 1-1 in Hands-On Machine Learning
• Author is a little fast and loose with the example code and imports

I ended up burning close to an hour figuring out why my plot and model didn't match the author's even though we were using the same data. The issue turns out to be that I'd left out the line that presorts the values when massaging / mangling the data.

It's not immediately clear to me why presorting the values would make any difference, but the unsorted dataframe included values well outside the range used in the author's jupyter notebook. My guess is that by using the unsorted data I was applying the wrong GDP values to the wrong countries, and so some outlier data made it into the model. Clearly my pandas-fu is weak. Po would be sad.

Also, Visual Studio Code is oddly picky about the import of sklearn.linear_model and refused to initialize the model unless I specified the whole sklearn.linear_model.LinearRegression(), whereas Jupyter was fine with linear_model.LinearRegression().
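The duck-typing contract from the August 26 entry is easy to see in a toy example. The class below is an illustrative sketch of that entry's newline-decoding transformer (not code from the book or from scikit-learn itself); any object with these two methods can slot into a pipeline, though subclassing BaseEstimator/TransformerMixin adds conveniences:

```python
class NewlineDecoder:
    """Duck-typed transformer: no base class needed, just fit() and transform().

    Decodes the literal two-character sequence '\\n' (as stored in the
    database in the blog's example) back into real newline characters.
    """

    def fit(self, X, y=None):
        # Stateless: nothing to learn, so fit() just returns self,
        # which is what scikit-learn pipelines expect.
        return self

    def transform(self, X):
        return [s.replace("\\n", "\n") for s in X]

rows = ["<doc>line one\\nline two</doc>"]
cleaned = NewlineDecoder().fit(rows).transform(rows)
assert cleaned == ["<doc>line one\nline two</doc>"]
```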
Aug 14, 2022 · 8 min read

Small Towns Work to Protect Abortion Rights • Storing Nuclear Secrets at Your Golf Club • Announcing OptOut Climate!

Today's newsletter will take you through independent reporting and analysis on abortion rights, Trump's legal woes, LGBTQ+ issues, U.S. media, and more. But first, we have a special announcement.

Announcing OptOut Climate: Our First Issue-Based Program!

Today we are thrilled to announce that we have hired New York City-based journalist Cristian Salazar to lead OptOut's first issue-based program, OptOut Climate!

Cristian works at the intersection of journalism and technology. He was previously executive editor of the Gotham Gazette, and his writing has been published by The Associated Press, The Washington Post, The Guardian, and other publications. Cristian also oversees FloodHelpNY.org, an online platform created to help New Yorkers understand how climate change is increasing their flood risk, and is currently working on a project to amplify Latino voices in the climate justice movement. Besides his newsroom experience, he has worked in nonprofit communications, digital marketing, and social media. He was born in Mexico and lives in Brooklyn, N.Y.

In the coming days, Cristian will send out the first edition of our new, biweekly climate newsletter, which includes original reporting, a roundup of climate, environmental, and energy news from around the independent OptOut network, and helpful resources. He will also manage a networking community for climate journalists around the world, work to expand the OptOut roster to include more climate-focused publications, and join the curation team for our news aggregation app.

Sign up now to get our new climate newsletter in your inbox! Sign into your account and click "Manage" to subscribe to OptOut Climate. Are you a climate journalist? If so, apply to join our networking community! To help us expand our nascent climate program, you can make a tax-deductible donation.
We rely on your support, not corporate owners or ads, to run all of OptOut's programs. Feel free to send us your questions, ideas, tips, or anything else that could help our climate program: cristian@optout.news.

"With reproductive rights now in states' hands, local governments are challenging state statutes' power and taking steps to protect abortion," reports BOLTS in "How Small Towns Are Working to Protect Abortion Rights from State Threats," by Camille Squires. Radnor Township, a quiet suburb of 34,000 located about 15 miles outside of downtown Philadelphia, is known for its strong public schools and wealth.

Meanwhile, in Idaho, things are about to get rough, reports the IDAHO CAPITAL SUN in "Idaho Supreme Court allows trigger law banning nearly all abortions to take effect," by Kelcie Moseley-Morris. Idaho's trigger law banning abortion in nearly all cases will go into effect on Aug. 25, and the heartbeat law allowing civil lawsuits against medical providers will go into effect immediately, following an opinion from the Idaho Supreme Court on Friday evening.

'Ladies and Gentlemen, We Got Him?'

Another day, another crime Trump likely committed. The FBI raided the ex-president's Mar-a-Lago home this week to determine if he violated the Espionage Act by illegally holding classified documents there.
From STATES NEWSROOM: "Search warrant shows Trump under investigation for possible Espionage Act violations," by Jennifer Shutt (Nevada Current). A federal judge on Friday unsealed the warrant that allowed the Federal Bureau of Investigation to search former President Donald Trump's property at Mar-a-Lago in Florida earlier this week, revealing he's under investigation for possibly violating the Espionage Act and obstruction of justice.

The feds were looking for nuclear secrets at Mar-a-Lago, explains MEANS MORNING NEWS. In another inquiry, Trump pled the Fifth hundreds of times in his deposition with the New York Attorney General in a civil probe of him and his company for potential tax-related crimes.

THE BAFFLER analyzes the government's response to the monkeypox emergency and the criticism of that response in "Poxed and Abandoned," by Benjamin Weil: does the public discourse about monkeypox represent a resurgence of medico-moralizing about queer sex?

TRANSLASH's latest article in its News & Narrative series, "Finding Trans Joy Even While Incarcerated," comes from an incarcerated trans woman: "My name is Jessica Phoenix Sylvia. I am a trans woman who has spent the last eighteen years locked up for a domestic violence-related crime. Whenever I transfer to a new men's prison things get wild."

The PENNSYLVANIA CAPITAL-STAR republished an article showing that meth addiction remains an LGBTQ issue, especially among gay and bisexual men. Side effects include decreased hunger, insomnia, anxiety, paranoia, hallucinations, and elevated heart rate and blood pressure.

The reactionary Wall Street Journal editorial board recently continued its tradition of opposing tax breaks for non-wealthy Americans, reports FAIR.
In "WSJ Hates Tax Breaks: California Edition," FAIR's Elias Khoury notes that the Wall Street Journal editorial board has a long history of liking tax relief only when it benefits the wealthy.

In JACOBIN, local reporter Guthrie Scrimgeour writes about her experience at a Massachusetts paper that, after being bought by wealthy landlords, censored her reporting on the real estate industry: "I'd learned just how easy it is to buy the narrative for a couple million dollars — pocket change to the landlords, developers, and corporations who use local newspapers as free public relations machines." If public, not private, money funded journalism, she argues in "I'm a Local News Reporter. To Save Local News, We Must Publicly Fund It.", the industry would be far better off.

Get the full analysis of the Inflation Reduction Act from THE AMERICAN PROSPECT executive editor and policy wonk David Dayen on the LEFT ANCHOR podcast (Episode 244): the gobs of tax credits, new regulations, and tax hikes, plus what the bill reveals about the state of the Democratic coalition and what it might bode for the future.

Marianne Williamson joins THE GRAVEL INSTITUTE to explain widespread depression in the U.S., arguing that it is a social, not individual, problem.

Despite strongly supporting a public health insurance option in his presidential campaign—saying it would be "the first thing I would do as president"—Biden has totally abandoned the idea, writes THE LEVER.
The idea of creating a government-run health insurance plan that people could buy into has completely disappeared from the political conversation in Washington. Instead, Democrats have opted to deliver tens of billions of dollars of new government subsidies demanded by private health insurers that have funneled millions of dollars into Democratic campaign coffers. ("Where Did The Public Option Go?" by Aditi Ramaswami.)

Some good labor news: the MINNESOTA REFORMER reports that workers at the Trader Joe's in downtown Minneapolis voted 55-5 to unionize, becoming the second unionized Trader Joe's store in the country. "We are so absolutely excited," said Sarah Beth Ryther, a union organizer and crew member at the store. "We're also really excited to get the hard work started of bargaining…" ("Minneapolis Trader Joe's becomes country's 2nd unionized store," by Max Nesterak.)

Thanks as always for following the work of the independent news outlets in the OptOut network! See you soon.
Review: Martin Carthy, Wiltshire Music Centre, Bradford on Avon, September 28

Martin Carthy and accordion player John Kirkpatrick. Photo: Derek Schofield

HATS off to veteran folk singer Martin Carthy for going it alone when his scheduled tour partner, accordion player John Kirkpatrick, fell sick days before the start of their trip. He'd not only lost his support on stage but also his transport for the tour, and had to travel by train – not easy with at least two guitars and personal luggage.

When efforts to find a replacement failed, Carthy decided to go solo, something he rarely does. But he didn't disappoint. He has an unassuming yet engaging style, steeped in traditional folk. His guitar accompaniment is deceptively simple, lightly picking out the melody in the foreground with some rich and unusual chords in the background. All his songs tell a story, and his diction, plus the excellent acoustic of Wiltshire Music Centre, delivered every syllable clearly.

Carthy tells a good story too, and wherever possible credits the songwriter or the song collector. There were the familiar folk favourites about wronged women, romantic highwaymen, and convicts transported to Botany Bay, an unusual version of Scarborough Fair from Goathland, in the Scarborough district of Yorkshire, and some purely instrumental numbers which also had their own stories.

One assumes Carthy had to adjust and expand his planned programme when suddenly finding himself performing solo, so it was entirely understandable that he lost his way once or twice. But he disarmingly admitted he'd forgotten the words and restarted. No-one minded.

It was a debut night for the Centre's new LED lighting system, which has cost more than £40,000 but will reduce lighting costs by more than £6,000 a year. It is infinitely flexible, as was demonstrated during the performance. Funding it is part of the Centre's 20th anniversary appeal.
Jo Bayne
Q: Disable OPcache temporarily

I recently moved to PHP 5.4 and installed OPcache; it's very powerful! How can I temporarily disable the cache? I tried:

ini_set('opcache.enable', 0);

But it has no effect. Thanks

A: Once your script runs, it's too late to not cache the file. You need to set it outside PHP:

* If PHP runs as an Apache module, use an .htaccess file: php_flag opcache.enable Off
* If PHP runs as CGI/FastCGI, use a .user.ini file: opcache.enable=0

In all cases, you can also use the good old system-wide php.ini if you have access to it.

A: opcache.enable is PHP_INI_ALL, which means that ini_set() does work, but only to disable OPcache caching for the remainder of the scripts compiled in your current request. (You can't force enabling.) It reverts back to the system default for other requests. By this stage, the requested script will already have been cached, unless you do the ini_set() in an auto_prepend_file script.

The system defaults (PHP_INI_SYSTEM) are latched as part of PHP system startup and can't be reread. So in the case of Apache, for example, you need to restart Apache to change / reload these. The .htaccess php_flag directives only apply if you are running mod_php or equivalent. They and .user.ini files are PHP_INI_PERDIR, which will also be latched at request activation.

Now to the question that I think you might be asking. If you have a dev system, then the easiest way is to set opcache.enable=0 in the appropriate INI file and restart your webserver. Set it back to =1 and restart again when you are done.

Also consider (in the dev context) setting opcache.validate_timestamps=on and opcache.revalidate_freq=0. This will keep OPcache enabled, but scripts will be stat'ed on every compile request to see if they have changed. This gives the best of both worlds when developing.

Also read up on the opcache.blacklist_filename directive.
This allows you to specify an exclusion file: if it contains /var/www/test, and the web service docroot is /var/www, then any scripts in the /var/www/test* hierarchies will not be cached.

A: The best way I found in my case to disable OPcache for a specific PHP file is:

opcache_invalidate(__FILE__, true);

You can also reset the whole cache with PHP:

opcache_reset();

A: In my modest opinion (I'm no expert), Jul has given the best answer. The question included the term "temporarily", so I think changing the configuration files is not the best answer, because you need to reconfigure, run what you want, and then reconfigure again to make things work normally. It's not smooth.

With Jul's answer you can modify the code to perform some action with OPcache disabled and then return to the normal situation within the same code (although we would still have to see how to re-enable OPcache from code).

For example, with Prestashop there can be problems cleaning the "normal" cache from the administration interface if OPcache is enabled, so in that case you can use a method so that when the action is performed, OPcache is disabled, the "normal" cache is cleaned, and then OPcache is enabled again.
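As a sketch of the blacklist setup described above (the path names are just the example from the answer; adjust them to your system):

```ini
; in php.ini
opcache.blacklist_filename=/etc/php/opcache-blacklist.txt

; in /etc/php/opcache-blacklist.txt -- one filename or prefix per line,
; lines starting with ';' are comments
/var/www/test
```

After editing either file, restart the webserver (or PHP-FPM) so the new settings are latched, as explained in the PHP_INI_SYSTEM answer above.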
\section{Introduction} A \textit{canonical system }is a differential equation of the form \begin{equation} \label{can} Ju'(x) = -zH(x)u(x) , \quad J=\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} , \end{equation} with a locally integrable coefficient function $H(x)\in{\mathbb R}^{2\times 2}$, $H(x)\ge 0$, ${\textrm{\rm tr}\:} H(x)=1$. Canonical systems are of fundamental importance in spectral theory because they may be used to realize arbitrary spectral data; more precisely, they are in one-to-one correspondence to generalized Herglotz functions, as we will discuss in more detail below. We usually consider half line problems $x\in [0,\infty)$, and we always impose the boundary condition \begin{equation} \label{bc} u_2(0) = 0 \end{equation} at the (regular) left endpoint $x=0$. The canonical system together with this boundary condition generates a self-adjoint relation $\mathcal S$ on the Hilbert space $L^2_H(0,\infty)$ and then also a self-adjoint operator $S$ on the possibly smaller space $\ov{D(\mathcal S)}$, after dividing out the multi-valued part $\mathcal S(0)$ of $\mathcal S$. We refer the reader to \cite{Rembook} for more on the basic theory. We are interested in the spectral theory of $S$. The \textit{$m$ function }is defined as $m(z)=f(0,z)$ on $z\in{\mathbb C}^+=\{ z\in{\mathbb C}: \Im z>0\}$, and here $f(x,z)$ denotes the (unique, up to a constant factor) $L^2_H$ solution of \eqref{can}. We also identify the vector $f(0,z)\in{\mathbb C}^2\setminus \{ 0\}$ with the point $f_1(0,z)/f_2(0,z)\in{\mathbb C}_{\infty}$ on the Riemann sphere, so $m(z)\in{\mathbb C}_{\infty}$. In fact, the $m$ function is a \textit{generalized Herglotz function: }it is a holomorphic map $m:{\mathbb C}^+\to{\mathbb C}_{\infty}$ that takes values in $\ov{{\mathbb C}^+}$. A (genuine) Herglotz function is defined by the slightly stronger version of this condition that the values lie in ${\mathbb C}^+$. 
Such a function satisfies the Herglotz representation formula: it is of the form \[ m(z) = a + bz + \int_{-\infty}^{\infty} \left( \frac{1}{t-z} - \frac{t}{t^2+1} \right)\, d\rho(t) , \] with $a\in{\mathbb R}$, $b\ge 0$, and $\rho$ is a positive Borel measure on ${\mathbb R}$ (possibly $\rho=0$) with $\int\frac{d\rho(t)}{1+t^2}<\infty$. This measure $\rho$ can serve as a spectral measure of $\mathcal S$. A fundamental result from the inverse spectral theory of canonical systems \cite[Theorem 5.1]{Rembook} says that every generalized Herglotz function is the $m$ function of a unique canonical system. A maximal open interval with $H(x)=P_{\alpha}$ there is called a \textit{singular interval }of \textit{type }$\alpha$, and here \[ P_{\alpha}= e_{\alpha}e^*_{\alpha} = \begin{pmatrix} \cos^2\alpha & \sin\alpha\cos\alpha \\ \sin\alpha\cos\alpha & \sin^2\alpha \end{pmatrix} , \quad e_{\alpha}= \begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix} , \] denotes the projection onto $e_{\alpha}$. Points which are not in the union of the singular intervals are called \textit{regular. }In the extreme case when $(0,\infty)$ is a single singular interval, we obtain the $m$ functions $m(z)\equiv a\in{\mathbb R}_{\infty}$; these are exactly the generalized Herglotz functions that are not Herglotz functions. These canonical systems $H\equiv P_{\alpha}$ have spectral measure $\rho=0$, which is consistent with the above remarks and also with the fact that $D(\mathcal S)=0$ in this case. \textit{Oscillation theory }is a well known, powerful tool, certainly for the classical equations such as Schr{\"o}dinger, Sturm-Liouville, Jacobi, Dirac equations. The basic idea is to write solutions in polar coordinates, and then the angle will satisfy a first order equation, to which comparison principles can be applied. This will lead to relations between the zeros of solutions and the location of the spectrum. 
There is a large literature on oscillation theory in general in a large variety of settings; see, for example, \cite{GST,GZ,Hart,KT,RBK,Sturm,Swan,WMLN}. However, it appears that oscillation theory has not yet been systematically employed in the spectral theory of canonical systems in the way we use it in this paper, so it will be best for us and the reader to develop the basic theory from scratch here, relying on these well known ideas and especially the treatment given in \cite{WMLN}. The one new aspect that we will have to pay careful attention to will be the presence of relations (rather than operators) and their multi-valued parts, which correspond to the singular intervals of our system \cite[Section 2.4]{Rembook}. When these somewhat tedious technical issues have been addressed, it will actually turn out that oscillation theory is especially convenient and user-friendly for canonical systems (compared to, say, Schr{\"o}dinger equations), thanks to the simple form of the basic equation \eqref{ot}. We then apply oscillation theory to semibounded canonical systems. In fact, we will almost exclusively restrict ourselves to systems with specifically $\sigma(H)\subseteq [0,\infty)$, and we denote the collection of these coefficient functions $H(x)$ by $\mathcal C_+$. Our methods would give more general results, but it seems best to present them in this setting. We start out by giving new proofs of the fundamental and beautiful results of Winkler and Woracek \cite{Win,WW}. We do this for two reasons: first of all, these results certainly deserve some additional exposure; second, and more importantly, oscillation theory is an ideal tool to analyze these issues, and we believe that our new proofs are short, direct, and perhaps more transparent than the original proofs, which referred to the theory of strings as a black box. Here's what we will actually prove in this part of the paper. 
\begin{Theorem}[\cite{WW}] \label{TWW1} $H\in\mathcal C_+$ if and only if $H(x)=P_{\varphi(x)}$ for some decreasing function $\varphi(x)$ with $\pi/2\ge\varphi(0+)\ge\varphi(\infty)\ge -\pi/2$. \end{Theorem} As a first minor payoff of our new viewpoint, we effortlessly obtain a whole line version of Theorem \ref{TWW1}. \begin{Theorem} \label{T1.2} The whole line system with coefficient function $H(x)$, $x\in{\mathbb R}$, has non-negative spectrum if and only if $H(x)=P_{\varphi(x)}$ for some decreasing function $\varphi(x)$ with $\varphi(-\infty)-\varphi(\infty)\le\pi$. \end{Theorem} If $H\in\mathcal C_+$, then the $m$ function \[ m(z) = a+ bz+ \int_{[0,\infty)} \left( \frac{1}{t-z} - \frac{t}{t^2+1} \right)\, d\rho(t) \] can be holomorphically continued to ${\mathbb C}\setminus [0,\infty)$, and $m(t)$ is real valued and increasing on $(-\infty,0)$. In particular, the limits $m(-\infty), m(0-)\in [-\infty,\infty]$ exist. \begin{Theorem}[\cite{WW}] \label{TWW2} Let $H\in\mathcal C_+$, and write $H(x)=P_{\varphi(x)}$, with $\varphi$ chosen as in Theorem \ref{TWW1}. Then \[ \tan\varphi(0+)=-m(-\infty), \quad \tan\varphi(\infty)=-m(0-) . \] \end{Theorem} Moving on to the more original parts of the paper, we will then prove the following characterization of semibounded systems with purely discrete spectrum. \begin{Theorem} \label{Tess} Let $H\in\mathcal C_+$, and write $H(x)=P_{\varphi(x)}$, with $\varphi$ chosen as in Theorem \ref{TWW1}. Then $\sigma_{ess}(H)=\emptyset$ if and only if \[ \varphi(x)-\varphi(\infty)= o(1/x) \quad \textrm{as }x\to\infty . \] \end{Theorem} This will actually be a consequence of more general results on the location of the bottom of the essential spectrum, which we will state and prove in Section 4. These will also imply part (a) of the following result. \begin{Theorem} \label{T1.3} Let $H\in\mathcal C_+$, and write $H(x)=P_{\varphi(x)}$, with $\varphi$ chosen as in Theorem \ref{TWW1}. 
(a) Then $0\in\sigma_{ess}(H)$ if and only if \[ \limsup_{x\to\infty} x(\varphi(x)-\varphi(\infty)) = \infty . \] (b) $0$ is an eigenvalue if and only if $\varphi(x)+\pi/2\in L^2(0,\infty)$. \end{Theorem} Part (b) is trivial since the solutions of \eqref{can} at $z=0$ are constant; it is just stated for completeness here. A combination of both parts of the Theorem gives a description of those $H\in\mathcal C_+$ whose spectrum starts at zero. We will also discuss in Section 4 how Theorem \ref{Tess} contains a new version of Molchanov's \cite{Mol} well known criterion for the absence of essential spectrum for a \textit{Schr{\"o}dinger operator }$-d^2/dx^2+V(x)$ as a special case; see Theorem \ref{T4.3} below for more details. We then round off our analysis of semibounded canonical systems by discussing the exponential orders of the solutions of \eqref{can}, as functions of $z\in{\mathbb C}$. Here we can be brief since the relevant tools are all available in the literature \cite{PRW,Rom}, in a slightly different context. Basically, we will exploit the fact that \eqref{can} for $H\in\mathcal C_+$ can be related to a diagonal canonical system; this connection is very well known for the smaller class of Krein strings; see, for example, \cite{KalWW}. We give a direct treatment of this transformation that never mentions strings explicitly (though of course it is informed by this connection), and this aspect of our analysis might be of some independent interest also. The problem of determining the order of a diagonal canonical system has been studied in depth in \cite{Rom}. Let's now formulate a result that summarizes the main points. We define the \textit{transfer matrix }$T(x;z)$ as usual as the $2\times 2$ matrix solution of \eqref{can} with the initial value $T(0;z)=1$. Its entries are entire functions of $z\in{\mathbb C}$ for each fixed $x\ge 0$, and one can show that all four entries of $T$ have the same order.
Essentially, this will follow from the quotients being Herglotz functions; see the corresponding part of the proof of \cite[Theorem 4.19]{Rembook} for a discussion of a very similar statement. Recall also that the \textit{order }of an entire function $F(z)$ is defined as the infimum of the $\alpha>0$ for which the estimate $|F(z)|\lesssim \exp(|z|^{\alpha})$ holds. Clearly, for an arbitrary canonical system, we always have $\textrm{ord}\:T(x;z)\le 1$, by a simple Gronwall estimate applied to \eqref{can}. Exactly the orders between $0$ and $1/2$ occur for \textit{semibounded }canonical systems. \begin{Theorem} \label{Torder} Let $H\in\mathcal C_+$, and write $H(x)=P_{\varphi(x)}$ with $\varphi(x)$ chosen as in Theorem \ref{TWW1}. (a) $\textrm{\rm ord}\: T(x;z)\le 1/2$ for all $x\ge 0$. (b) Conversely, for any $0\le\nu\le 1/2$, there are semibounded canonical systems $H\in\mathcal C_+$ with $\textrm{\rm ord}\:T(x;z;H)=\nu$ for some $x>0$. (c) If $\textrm{\rm ord}\: T(L;z)<1/2$, then $\varphi'(x)=0$ for almost every $x\in (0,L)$. \end{Theorem} Recall that $\varphi$ is a decreasing function, so it will be differentiable at almost every $x$. Since the pointwise derivative computes the Radon-Nikodym derivative of the absolutely continuous part of the measure $-d\varphi$, another way of stating part (c) is to say that this measure must be purely singular on $(0,L)$ if $\textrm{\rm ord}\: T(L;z)<1/2$. One can in principle go beyond this by referring to \cite[Theorem 2]{Rom}, but this will become intricate and the resulting criteria will probably not be easy to check for a given $\varphi$. What we have stated here will be comparatively easy to prove, and we present these arguments in Section 5. We will also give an easy direct argument for part (b), which will not depend on \cite[Theorem 2]{Rom}.
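To fix the notation in concrete terms: throughout, $P_{\varphi}$ is the rank one orthogonal projection onto $e_{\varphi}=(\cos\varphi,\sin\varphi)^t$, that is, $P_{\varphi}=e_{\varphi}e_{\varphi}^*$. The following small numerical sketch (an informal check, not part of the formal development) confirms the algebraic facts about $P_{\varphi}$ that are used repeatedly below, including the identity $e^*_{\theta}P_{\varphi}e_{\theta}=\cos^2(\theta-\varphi)$ that is used when the Pr{\"u}fer equation is rewritten for $H=P_{\varphi}$.

```python
# Informal sanity check for the basic properties of P_phi, where P_phi is
# taken to be the orthogonal projection onto e_phi = (cos phi, sin phi)^t.
import numpy as np

def e(phi):
    return np.array([np.cos(phi), np.sin(phi)])

def P(phi):
    v = e(phi)
    return np.outer(v, v)

rng = np.random.default_rng(0)
for phi, theta in rng.uniform(-np.pi, np.pi, size=(100, 2)):
    M = P(phi)
    assert np.allclose(M @ M, M)           # P_phi is a projection
    assert abs(np.linalg.det(M)) < 1e-12   # det P_phi = 0
    assert abs(np.trace(M) - 1) < 1e-12    # trace normalization tr P_phi = 1
    assert np.allclose(P(phi + np.pi), M)  # P_{phi + pi} = P_phi
    # e_theta^* P_phi e_theta = cos^2(theta - phi), which turns the
    # Pruefer equation for H = P_phi into the form used in Section 3
    assert abs(e(theta) @ M @ e(theta) - np.cos(theta - phi) ** 2) < 1e-12
print("all checks passed")
```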
\section{Oscillation theory} Given a non-trivial solution $u$ of \eqref{can} for $z=t\in{\mathbb R}$, introduce $R(x)>0$, $\theta(x)$ by writing $u=Re_{\theta}$, with $\theta(x)$ continuous and, as above, $e_{\theta}=(\cos\theta,\sin\theta)^t$. Then the \textit{Pr{\"u}fer angle }$\theta(x)$ is in fact absolutely continuous and solves \begin{equation} \label{ot} \theta'(x) = te^*_{\theta(x)}H(x)e_{\theta(x)} . \end{equation} We will also consider the problems on bounded intervals $[0,L]$, and then we impose the boundary condition \begin{equation} \label{bcbeta} e^*_{\beta}Ju(L) = u_1(L) \sin\beta - u_2(L)\cos\beta = 0 \end{equation} at $x=L$, with $0\le\beta<\pi$. This, together with the boundary condition \eqref{bc} at $x=0$, defines a self-adjoint relation $\mathcal S_L^{(\beta)}$ on $L^2_H(0,L)$; see again \cite[Chapter 2]{Rembook} for more details. \begin{Proposition} \label{P2.1} Let $\theta(x;t)$ be a solution of \eqref{ot} with $t$-independent initial value $\theta(0;t)=\alpha$. Then $\theta(x;t)$ is an increasing function of $t\in{\mathbb R}$, and as a function of $x\ge 0$, the Pr{\"u}fer angle $\theta(x;t)$ is increasing if $t\ge 0$ and decreasing if $t\le 0$. In fact, $t\mapsto\theta(x;t)$ is strictly increasing for $x>0$ unless $(0,x)$ is contained in a singular interval of type $\alpha+\pi/2$. \end{Proposition} \begin{proof} The first few claims are immediate from \eqref{ot}; for the monotonicity in $t$, we refer to the comparison principle \cite[Section III.4]{Hart} for first order ODEs. If $t\mapsto \theta(L;t)$ were constant on some interval $a\le t\le b$, for some $L>0$, then the corresponding solutions $u(x;t)$ would be candidate eigenfunctions, with eigenvalue $t$, of the problem on $(0,L)$ with boundary condition $\beta\equiv\theta(L;a)\bmod \pi$ at $x=L$. A contradiction can only be avoided if $Hu=0$ on $(0,L)$ for these $u$, and this makes $H=P_{\alpha+\pi/2}$ there.
\end{proof} By this monotonicity, the Pr{\"u}fer angle $\theta(L;t)$ can be used to count how many times the boundary condition \eqref{bcbeta} was satisfied. This in turn lets us locate the spectrum. We start with the problem on a bounded interval $[0,L]$, with boundary condition \eqref{bcbeta}. We denote the spectral projections of the associated self-adjoint operator $S_L^{(\beta)}$ (extracted from the relation $\mathcal S_L^{(\beta)}$ by dividing out the multi-valued part) by $E_L^{(\beta)}$, and we use the short-hand notation $\dim P$ for what is really the dimension of the \textit{range }of the projection $P$. We will also write $E(s,t)$ instead of the more precise $E((s,t))$, and similarly for other types of intervals, to avoid an aesthetically offensive proliferation of parentheses. \begin{Lemma} \label{L2.1} Let $\theta(x;t)$ be the solution of \eqref{ot} with $\theta(0;t)=0$. Then \[ \dim E_L^{(\beta)}[s,t) = \left\lceil \frac{1}{\pi} \left( \theta(L;t)-\beta \right) \right\rceil - \left\lceil \frac{1}{\pi} \left( \theta(L;s)-\beta \right) \right\rceil . \] \end{Lemma} The dimension of the spectral projection of course equals the number of eigenvalues in $[s,t)$. \begin{proof} The eigenvalues $\lambda$ are characterized by the condition $\theta(L;\lambda)\equiv\beta \bmod\pi$. Now the monotonicity and continuity of $t\mapsto \theta(L;t)$ make it clear that $\lceil (\theta(L;t) -\beta)/\pi\rceil$ jumps by $1$ at each eigenvalue and is constant on the intervals between those. This argument does not literally apply when $(0,L)$ is a singular interval of type $\pi/2$, but this scenario is trivial and the claim can then be checked directly; all spectral projections are zero in this case. \end{proof} \begin{Theorem} \label{T2.1} Suppose that $(0,\infty)$ does not end with a singular half line $(L,\infty)$, write $E$ for the spectral projection of the half line operator, and let $\theta(x;t)$ be the solution of \eqref{ot} with $\theta(0;t)=0$. 
Then \begin{equation} \label{2.3} \dim E(s,t) = \lim_{L\to\infty} \left\lfloor \frac{1}{\pi} \left( \theta(L;t)-\theta(L;s)\right) \right\rfloor . \end{equation} \end{Theorem} The existence of the limit, with the understanding that it may equal infinity, is part of the statement. If $(0,\infty)$ does end with a singular half line $(L,\infty)$ of type $\gamma$, say, then we are effectively dealing with the problem on $(0,L)$ with boundary condition $\beta=\gamma+\pi/2$ at $x=L$ \cite[Theorem 3.18]{Rembook}, so we are back in the case already dealt with in Lemma \ref{L2.1}. \begin{proof} Let's abbreviate the expression from the statement by \[ F(L) = \frac{1}{\pi} \left( \theta(L;t)-\theta(L;s)\right) . \] We will establish the following two inequalities: \begin{align} \label{2.1} & \lfloor F(L) \rfloor \le \dim E(s,t)\quad \textrm{for all }L>0 ;\\ \label{2.2} & \dim E(s,t) \le \liminf_{L\to\infty} \lceil F(L) \rceil - 1 . \end{align} Let's first check that these inequalities will imply \eqref{2.3}: clearly, \begin{align*} \liminf_{L\to\infty} \lceil F(L) \rceil - 1 & \le \limsup_{L\to\infty} \lceil F(L) \rceil - 1 \\ & \le \sup_{L>0} \lceil F(L) \rceil - 1 \le \sup_{L>0} \lfloor F(L) \rfloor , \end{align*} so we have equality throughout here. In particular, $\lim_{L\to\infty} \lceil F(L) \rceil$ exists, and it then follows that $\lfloor F(L) \rfloor$ converges as well: this is immediately clear if $F(L)\notin{\mathbb Z}$ for all large $L$, and if $F(L_n)\in{\mathbb Z}$ for some sequence $L_n\to\infty$, then $F(L_n)\to\infty$, or we would obtain a contradiction to our inequalities (a direct proof of this fact is also possible). So it suffices to establish the inequalities, and we start with \eqref{2.1}. Given $L>0$, define $\beta\in [0,\pi)$ by writing $\theta(L;t)=n\pi+\beta$, $n\in{\mathbb Z}$. 
Our intention here is to choose the boundary condition that makes $t$ an eigenvalue of the problem on $[0,L]$, but actually there is an exceptional case: if $H\equiv P_{e_2}$ on $(0,L)$, then $Hu=0$ there. This scenario, however, is completely trivial because now $F(L)=0$, and we can ignore it. Lemma \ref{L2.1} then shows that \[ \dim E_L^{(\beta)}[s,t] = 1 + n - \left\lceil \frac{1}{\pi} \left( \theta(L;s)-\beta\right) \right\rceil = \lfloor F(L) \rfloor + 1 . \] Now we adapt the arguments presented in \cite[Chapter 14]{WMLN}. Suppose that \eqref{2.1} failed. Then \begin{equation} \label{2.4} \dim \mathcal M \ge 2, \quad \mathcal M = R(E_L^{(\beta)}[s,t]) \ominus R(E(s,t)) ; \end{equation} of course, this definition of $\mathcal M$ does not make strict formal sense if taken at face value since the projections act in different Hilbert spaces. We really identify $R(E_L^{(\beta)})\subseteq L^2_H(0,L)$ with a subspace of $L^2_H(0,\infty)$ in the obvious way, by extending elements of this space by the zero function on $(L,\infty)$. In the same way, the self-adjoint relation $\mathcal S_L^{(\beta)}$ can be thought of as a relation on $L^2_H(0,\infty)$. Since we are projecting onto a bounded interval, the elements of $R(E_L^{(\beta)}[s,t])$ are contained in $D(\mathcal S_L^{(\beta)})$, the domain of the self-adjoint relation. If we take such elements $(f,g)\in\mathcal S_L^{(\beta)}$, then the standard representatives $f(x)$ of $f\in L^2_H(0,L)$, determined as in \cite[Lemma 2.1]{Rembook}, will satisfy the boundary condition \eqref{bcbeta} at $x=L$. Now \eqref{2.4} implies that there is a non-zero element $f\in\mathcal M$ with $f(L)=0$. This element, again extended by zero beyond $L$ and viewed as an element of $L^2_H(0,\infty)$, will lie in $D(\mathcal S)$, the domain of the self-adjoint relation on the half line $(0,\infty)$. 
We can now evaluate $g-cf$, with $c=(s+t)/2$ and $g=S_L^{(\beta)}f$, the image of $f$ under the \textit{operator }$S_L^{(\beta)}$, in two ways: if we work on $(0,L)$, then, since $f=E_L^{(\beta)}[s,t]f$, we obtain $\|g-cf\|\le \frac{t-s}{2}\, \|f\|$. On the other hand, we can also view $(f,g)\in\mathcal S$ as an element of the self-adjoint relation $\mathcal S$ on the half line, after extending both functions by zero for $x>L$, as usual. Then $g=Sf+h$ with $h\in\mathcal S(0)$, the multi-valued part of $\mathcal S$; we cannot be sure here if $g$ is still the operator image of $f$ (though this will follow when $L$ is regular). However, we do know that $f,Sf\in\ov{D(\mathcal S)}=\mathcal S(0)^{\perp}$, so \[ \|g-cf\|^2 \ge \|(S-c)f\|^2 \ge \left( \frac{t-s}{2} \right)^2 \|f\|^2 ; \] to obtain the second estimate, we have used that $E(s,t)f=0$. So we in fact have equality here, but then it follows, by functional calculus again, that $f=E(\{s,t\})f$ must be a linear combination of the eigenfunctions for the eigenvalues $s,t$, so let's write $f=u_s+u_t$, and here $u_{\lambda}$ solves $Ju'_{\lambda}=-\lambda Hu_{\lambda}$. The corresponding representative $f(x)=u_s(x)+u_t(x)$, built from these solutions, is absolutely continuous, satisfies $Jf'=-Hg$, with $g=su_s+tu_t\in L^2_H$, and represents the zero element of $L^2_H$ on $(L,\infty)$. Now \cite[Lemma 2.26]{Rembook}, applied to this interval, shows that $f(x_0)=0$ at all regular points $x_0>L$. Since $(L,\infty)$ is not contained in a singular half line, by our assumption, there are such regular points $x_0>L$. Fix one, and observe that then $u_s(x_0)=-u_t(x_0)$, so $u_s$ and $u_t$ satisfy the same boundary condition at $x=x_0$. So $u_s,u_t$ are orthogonal on $(x_0,\infty)$, being eigenfunctions belonging to different eigenvalues. Since $\|f\|_{L^2_H(x_0,\infty)}=0$, this implies that $u_s, u_t$ also have zero norm on $(x_0,\infty)$, but for a non-zero solution this would only be possible if $(x_0,\infty)$ were contained in a singular half line.
This contradiction establishes \eqref{2.1}. The proof of \eqref{2.2} is, fortunately, less involved technically. We can assume that $\liminf \lceil F(L)\rceil <\infty$. Pick a sequence $L_n\to\infty$ with $\lceil F(L_n)\rceil = \liminf \lceil F(L)\rceil$. Define $\beta_n\in [0,\pi)$ by writing $\theta(L_n;s)=N_n\pi + \beta_n$, that is, we choose the boundary condition that makes $s$ an eigenvalue of the problem on $[0,L_n]$. The exceptional situation that was already briefly mentioned above will not occur here for large $n$ because then $H(x)$ will not be identically equal to $P_{e_2}$ on $(0,L_n)$. The boundary condition $\beta_n$ can be implemented by a singular half line $(L_n,\infty)$ of type $\beta_n+\pi/2$. These modified canonical systems \[ H_n(x) = \begin{cases} H(x) & x<L_n \\ P_{\beta_n+\pi/2} & x>L_n \end{cases} \] converge to $H$ as $n\to\infty$ with respect to the metric discussed in \cite[Section 5.2]{Rembook}. Moreover, in general, convergence in this metric is equivalent to the locally uniform (on ${\mathbb C}^+$) convergence of the associated $m$ functions \cite[Theorem 5.7(b), Corollary 5.8]{Rembook}, and this in turn implies that the spectral measures $\rho_n$ converge to $\rho$ in weak $*$ sense. Thus it now suffices to show that \[ \dim E_n(s,t)\le \lceil F(L_n) \rceil - 1 . \] This, with equality, is an immediate consequence of Lemma \ref{L2.1}; recall here that $\dim E_n(\{ s\} )=1$ by the choice of $\beta_n$. \end{proof} As usual, these results also tell us where the essential spectrum starts, because this is the point where spectral projections become infinite dimensional. We don't state general results of this type here, but we will see these methods in action in Section 4. \section{Semibounded canonical systems} In this section, we prove Theorems \ref{TWW1}, \ref{TWW2}, and \ref{T1.2}, in this order. 
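Before turning to the proofs, Theorem \ref{TWW1} can be probed numerically: for $H=P_{\varphi}$, the Pr{\"u}fer equation \eqref{ot} at $z=-t$, $t>0$, reads $\theta'=-t\cos^2(\theta-\varphi(x))$, and, by Theorem \ref{T2.1}, the absence of negative spectrum corresponds to $\theta(x;-t)$ staying above $-\pi$ for all $t>0$. The sketch below integrates this equation for two ad hoc sample functions $\varphi$ (chosen for illustration only, not taken from the text): one satisfying the hypotheses of Theorem \ref{TWW1}, and one that is decreasing but whose total decrease exceeds $\pi$.

```python
# Sketch: Pruefer angle theta(x; -t) for H = P_phi, from
# theta' = -t*cos(theta - phi(x))^2, theta(0) = 0.
# phi_good satisfies the hypotheses of Theorem TWW1 (decreasing, with
# pi/2 >= phi(0+) >= phi(infty) >= -pi/2); phi_bad decreases by 3*pi/2.
# Both sample functions are ad hoc choices for illustration only.
import math

def pruefer_min(phi, t, x_end, dx=1e-3):
    """Integrate with RK4 and return the minimum of theta on [0, x_end]."""
    f = lambda x, th: -t * math.cos(th - phi(x)) ** 2
    theta, x, theta_min = 0.0, 0.0, 0.0
    while x < x_end:
        k1 = f(x, theta)
        k2 = f(x + dx / 2, theta + dx * k1 / 2)
        k3 = f(x + dx / 2, theta + dx * k2 / 2)
        k4 = f(x + dx, theta + dx * k3)
        theta += dx * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += dx
        theta_min = min(theta_min, theta)
    return theta_min

phi_good = lambda x: (math.pi / 2) * math.exp(-x)                      # pi/2 -> 0
phi_bad = lambda x: math.pi / 2 - 1.5 * math.pi * (1 - math.exp(-x))   # pi/2 -> -pi

good = pruefer_min(phi_good, t=5.0, x_end=20.0)
bad = pruefer_min(phi_bad, t=50.0, x_end=15.0)
assert good > -math.pi   # admissible phi: theta never crosses -pi
assert bad < -math.pi    # inadmissible phi: negative spectrum appears
print(round(good, 3), round(bad, 3))
```

In this experiment, the admissible $\varphi$ keeps $\theta$ above $-\pi$ (in fact above $-\pi/2$, since $\theta\ge\varphi-\pi/2$ here), while for the inadmissible one a sufficiently large $t$ drives $\theta$ below $-\pi$, in line with \eqref{3.2}.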
\begin{proof}[Proof of Theorem \ref{TWW1}] We want to give an oscillation theoretic treatment, so we start out by observing that the condition that $H\in\mathcal C_+$ is of course equivalent to \begin{equation} \label{3.1} E(-t,0)=0 \quad \textrm{for all }t>0 . \end{equation} Let $\theta(x;t)$ again be the solution of \eqref{ot} with initial value $\theta(0;t)=0$. Since $\theta(x;0)=0$, Theorem \ref{T2.1} shows that \eqref{3.1} is equivalent to \begin{equation} \label{3.2} \theta(x;-t)>-\pi \quad \textrm{for all }x,t>0 . \end{equation} This also holds when $(0,\infty)$ ends with a singular half line $(L,\infty)$ of type $\beta+\pi/2$, say, with $0\le\beta<\pi$ (so we effectively have the problem on $(0,L)$, with boundary condition $\beta$ at $x=L$). In this case, we refer to Lemma \ref{L2.1} directly. This produces the stronger looking bound $\theta(x;-t)>-\pi+\beta$, but actually this is implied by \eqref{3.2} in the current situation, for the following reason: if we had $\theta(a;-t)\in (-\pi,-\pi+\beta]$ for some $a\ge L$, then also $\theta(a;-t')\in (-\pi,-\pi+\beta)$ for suitable $t'>t$, but then $\lim_{x\to\infty}\theta(x;-t')=-2\pi+\beta<-\pi$. Suppose now that $H\in\mathcal C_+$, or, equivalently, that \eqref{3.2} holds. We first claim that then $\det H(x)=0$ for almost every $x>0$. This is obvious from \eqref{ot} since for any $\theta$, we have $e^*_{\theta}H(x)e_{\theta}\ge \det H(x)$, so clearly \eqref{3.2} will fail for large $t$ and $x$ if $\det H(x)>0$ on a set of positive measure. We can thus write $H(x)=P_{\varphi(x)}$, for some function $\varphi(x)$, and we now claim that we can take \begin{equation} \label{3.6} \varphi(x) =\varphi_0(x), \quad \varphi_0(x) = \lim_{t\to\infty} \theta(x;-t) + \frac{\pi}{2} , \end{equation} here. 
The limit defining $\varphi_0$ exists since $\theta(x;-t)$ is a decreasing function of $t>0$ that is bounded below by $-\pi$, and the monotonicity of $\theta$ in $x$ and \eqref{3.2} will then show that $\varphi_0(x)$ has the stated properties, so it suffices to prove \eqref{3.6}. For $H(x)=P_{\varphi(x)}$, we can write \eqref{ot} in the form \begin{equation} \label{3.5} \theta' = -t \sin^2(\theta-\psi(x)) , \quad \psi(x)= \varphi(x)-\frac{\pi}{2} . \end{equation} Integration of this gives \[ \int_0^L \sin^2\left( \theta(x;-t)-\varphi(x)+\frac{\pi}{2}\right) \, dx = -\frac{\theta(L;-t)}{t} < \frac{\pi}{t} . \] Since this holds for all $L>0$, Fatou's lemma now shows that \[ \liminf_{t\to\infty} \sin^2\left( \theta(x;-t)-\varphi(x)+\frac{\pi}{2} \right) = 0 \] for almost all $x>0$, or, equivalently, $\varphi_0(x)\equiv \varphi(x)\bmod \pi$ almost everywhere. Since $P_{\alpha+n\pi}=P_{\alpha}$, this establishes \eqref{3.6}. Conversely, suppose now that $H(x)=P_{\varphi(x)}$, with $\varphi(x)$ as described in the Theorem. We must show that then \eqref{3.2} holds. The idea behind our argument is simple: both functions $\theta(x), \psi(x)=\varphi(x)-\pi/2$ are decreasing, and initially $\theta(0)\ge\psi(0+)$. Now the form of \eqref{3.5} will guarantee that $\theta$ can never overtake $\psi$, and $\psi$ stops at the value $-\pi$ at the latest. The essence of the method is best seen by first considering the simpler case when $\psi(0+)<0=\theta(0)$. If \[ y:=\sup \{b>0: \theta(x)>\psi(x-) \:\: \textrm{\rm on }0<x<b \} \] were finite, then $\theta(y)=\psi(y-)$. On a suitable interval $x\in (a,y)$, we have the estimate \[ \sin^2(\theta-\psi(x))\le (\theta-\psi(y-))^2 , \] as long as $\theta(a)\ge \theta\ge \psi(x)$. However, the solution $\theta_1$ of \[ \theta'_1=-t(\theta_1-\psi(y-))^2 , \quad \theta_1(a)=\theta(a)>\psi(y-) , \] will not reach $\psi(y-)$ in finite time, so we obtain a contradiction to the comparison principle.
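Indeed, the comparison equation can be integrated explicitly: abbreviating $c=\psi(y-)$ and setting $v=\theta_1-c$, we obtain $v'=-tv^2$, hence $(1/v)'=t$ and \[ \theta_1(x) = c + \frac{\theta_1(a)-c}{1+t(\theta_1(a)-c)(x-a)} > c \] for all finite $x\ge a$, since $\theta_1(a)-c>0$ and $t>0$.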
Thus $y=\infty$, and this says that $\theta(x)>\psi(x-)$ for all $x>0$, and then \eqref{3.2} is an immediate consequence. These arguments could also handle the case when $\psi(0+)=0$, but it is technically more convenient to then approximate $H(x)=P_{\varphi(x)}$ by the coefficient functions \[ H_n(x) = \begin{cases} H(x) & x> 1/n \\ P_{\varphi(1/n+)} & x<1/n \end{cases} . \] These will converge to $H$ with respect to the metric mentioned above and, what is more important right now, this will give us the weak $*$ convergence of the spectral measures. So if $\psi(x)<0$ for all $x>0$, then it will follow that $H\in\mathcal C_+$, by the case already covered. This only leaves the case of an initial singular interval of type $\pi/2$, but this can be removed without changing the spectral measure, and thus we are done in this case also. \end{proof} A more general version of Theorem \ref{TWW1}, also due to Winkler-Woracek \cite{WW}, can be established by the same arguments, with only very minor adjustments, which we leave to the reader. \begin{Theorem} \label{T3.1} The negative spectrum $\sigma(H)\cap (-\infty,0)$ consists of at most $N$ points if and only if $H(x)=P_{\varphi(x)}$ for some decreasing function $\varphi(x)$ with \[ \frac{\pi}{2} \ge\varphi(0+)\ge\varphi(\infty)\ge -N\pi -\frac{\pi}{2} . \] \end{Theorem} This, in turn, gives the following characterization of the larger class of coefficient functions of this type, but with a possibly unbounded $\varphi(x)$. \begin{Corollary} \label{C3.1} (a) $H(x)=P_{\varphi(x)}$ for some decreasing function $\varphi(x)$ with $\varphi(0+)<\infty$ if and only if the problems on $[0,L]$ have finite negative spectrum for all $L>0$. (b) If $\sigma(H)\subseteq [c,\infty)$ for some $c\in{\mathbb R}$, then $H(x)=P_{\varphi(x)}$ for some decreasing function $\varphi(x)$ with $\varphi(0+)<\infty$. \end{Corollary} To prove part (a), just recall that boundary conditions at $x=L$ can be implemented by a singular half line $(L,\infty)$. 
This will then imply part (b), after establishing the easy fact that problems on $(0,L)$ will be semibounded if the half line problem has this property. The converse of part (b) is false, and counterexamples are provided by Schr{\"o}dinger operators that are unbounded below, when these are written as canonical systems. \begin{proof}[Proof of Theorem \ref{TWW2}] It will be convenient to also express the values of $m(-t)$, $t>0$, in terms of an angle, so write $m(-t)=\cot\alpha(-t)$, with $-\pi<\alpha(-t)<0$. Here, we again leave the trivial case $H(x)\equiv P_{e_2}$ to the reader. We also write $\psi(x)=\varphi(x)-\pi/2$, as above. We then want to show that $\psi(0+)=\alpha(-\infty)$, $\psi(\infty)=\alpha(0-)$. The key tool will be the following fact. \begin{Lemma} \label{L2.2} Let $H\in\mathcal C_+$, and let $\theta(x;-t)$, $t>0$, be the solution of \eqref{ot} with $\theta(0;-t)=\alpha(-t)$. Then $\theta(x;-t)\ge -\pi$ for all $x\ge 0$. \end{Lemma} \begin{proof} The initial value of the solution $f=Re_{\theta}$ of \eqref{can} with Pr{\"u}fer angle $\theta$ is a multiple of $(m(-t),1)^t$, so $f\in L^2_H(0,\infty)$. Suppose now that $\theta(L;-t)=-\pi$ for some $L>0$. This says that $f(L)=e_1$, after multiplying by a suitable (negative) constant. The modified version of this solution \[ f_L(x) = \begin{cases} e_1 & x<L \\ f(x) & x>L \end{cases} \] lies in $D(\mathcal S)$, the domain of the self-adjoint relation on $(0,\infty)$. More specifically, $(f_L, g_L)\in\mathcal S$, with \[ g_L(x) = \begin{cases} 0 & x<L \\ -tf(x) & x>L \end{cases} . \] If we denote the self-adjoint \textit{operator }by $S$, then \[ \s{f_L}{g_L} = \s{f_L}{Sf_L} . \] Note that this holds even if $g_L$ does not equal $Sf_L$: in that case, $g_L$ differs from this operator image by at most an element of the multi-valued part $\mathcal S(0)$, and $f_L\in D(\mathcal S) \subseteq \mathcal S(0)^{\perp}$.
Now $\s{f_L}{Sf_L}\ge 0$ by functional calculus, but on the other hand, \[ \s{f_L}{g_L} = -t \int_L^{\infty} f^*(x)H(x)f(x)\, dx \le 0 . \] So this last integral equals zero, but this means that $Hf=0$ almost everywhere on $(L,\infty)$, and thus $f(x)=e_1$ and $\theta(x;-t)=-\pi$ on $x\ge L$. \end{proof} Let's now return to the proof of Theorem \ref{TWW2}. We first show that $\alpha(-\infty)\ge \psi(0+)$. If this were false, then the Pr{\"u}fer angle $\theta(x;-t)$ with the initial value $\theta(0;-t)=\alpha(-t)$ from Lemma \ref{L2.2} would satisfy $\theta(x;-t)\le\psi(x)-\delta$ on some interval $x\in (0,a)$ for all large $t>0$. But now \eqref{3.5} shows that then $\theta'\le -t\sin^2\delta$ there, as long as $\theta-\psi\ge -\pi+\delta$. It follows that $\theta(x;-t)$ will decrease beyond $-\pi$ for large $t$, contrary to what we established in Lemma \ref{L2.2}. Recall also in this context that we already dismissed the case $\psi\equiv 0$, so we will have $\psi(x)<0$ for all large $x$. On the other hand, $\alpha(-\infty)>\psi(0+)$ is also impossible, and the argument is similar. We could then pick first a sufficiently large $t_1>0$ and then $a>0$ such that $\theta(a;-t_1)> \psi(0+)$ also. Here, $\theta$ again refers to the Pr{\"u}fer angle from Lemma \ref{L2.2}, with initial value $\theta(0;-t)=\alpha(-t)$. Again, \eqref{3.5} shows that $|\theta'(x;-t_2)|$ can be made arbitrarily large on $0\le x\le a$ by sending $t_2\to\infty$, at least as long as $\theta(x;-t_2)$ stays at some distance from $\psi(0+)$. This means that $\theta(a;-t_2)$ will have overtaken $\theta(a;-t_1)$ for all large $t_2\gg t_1$, but this contradicts the monotonicity of $m(-t)$ on $t>0$. More explicitly, $\cot\theta(a;-t)=m_a(-t)$ is the $m$ function of the problem on $(a,\infty)$, and $H(x+a)\in\mathcal C_+$ also, by Lemma \ref{L2.2} and its proof. Thus it is not possible that $\theta(a;-t_2)<\theta(a;-t_1)$ for $t_2>t_1$. Next, we show that $\psi(\infty)\le \alpha(0-)$. 
We again consider the Pr{\"u}fer angles with the initial values from Lemma \ref{L2.2}. By \eqref{3.5}, $\theta(x;-t)$ can only approach a value that is $\equiv\psi(\infty)\bmod \pi$ when $x\to\infty$. Now if we had $\psi(\infty)>\alpha(0-)$, then also $\psi(\infty)>\theta(0;-t)$ for sufficiently small $t>0$, so the first value at which we can stabilize is $\psi(\infty)-\pi$. However, by Lemma \ref{L2.2}, we also must not cross the value $-\pi$, and since $\psi(\infty)\in [-\pi,0]$, this forces $\psi(\infty)=0$, but this puts us back in the trivial case $\psi(x)\equiv 0$ that we already dispensed with. Finally, we must rule out the situation where $\psi(\infty)<\alpha(0-)$. In this case, we can rotate all angles by $\gamma=-\pi-\alpha(0-)$; in other words, we move $\alpha(0-)$ to its new destination $-\pi$. This can be implemented by letting the rotation matrix \[ R_{\gamma} = \begin{pmatrix} \cos\gamma & -\sin\gamma \\ \sin\gamma & \cos\gamma \end{pmatrix} \] act on $m$ as a linear fractional transformation $m_{\gamma}=R_{\gamma}m$, and this is the same as conjugating the coefficient function $H_{\gamma}(x)=R_{\gamma}H(x)R_{-\gamma}$ \cite[Theorem 3.20]{Rembook}. By inspecting \[ m_{\gamma}(z) = \frac{m(z) \cos\gamma - \sin\gamma}{m(z)\sin\gamma + \cos\gamma} , \] we see that our choice of $\gamma$ makes sure that $m_{\gamma}$ is still holomorphic on a neighborhood of $(-\infty,0)$, so $H_{\gamma}\in \mathcal C_+$ as well. By its construction, the angle functions $\alpha_{\gamma},\psi_{\gamma}$ of the new system are simply the rotated versions $\alpha+\gamma$, $\psi +\gamma$ of the old ones. However, now we obtain a contradiction to Lemma \ref{L2.2} because $\psi(\infty)+\gamma <-\pi$ has been moved past $-\pi$, but $\alpha(-t)+\gamma>-\pi$ for $t>0$, so the Pr{\"u}fer angle $\theta_{\gamma}(x;-t)$ would have to cross the forbidden value $-\pi$ before it can stabilize. 
\end{proof} \begin{proof}[Proof of Theorem \ref{T1.2}] Assume that $\sigma (H)\subseteq [0,\infty)$. In general, the essential spectrum of the whole line problem is the union of the essential spectra of the half line problems; this is often referred to as the decomposition method. So, in our situation, the two half line $m$ functions $m_{\pm}$ will both be meromorphic on a neighborhood of $(-\infty, 0)$. In this situation, the negative eigenvalues of the whole line problem would occur exactly at the $-t<0$ at which $m_+(-t)=-m_-(-t)$ or $m_+(-t)=m_-(-t)=\infty$; indeed, this is the condition for the square integrable solutions on the half lines to arrive at $x=0$ with matching values. Moreover, $m_{\pm}$ are still increasing on every subinterval of $(-\infty,0)$ that avoids the poles. By looking at the possible scenarios, we can now deduce quickly that $m_{\pm}$ together can have at most one pole on $(-\infty,0)$. In particular, Theorem \ref{T3.1} applies to both half lines, so $H(x)=P_{\varphi(x)}$ for some function $\varphi(x)$ which is decreasing on both half lines and then also decreasing overall if we add a suitable multiple of $\pi$ to it on one of the half lines. Suppose now that we had $\varphi(-\infty)-\varphi(\infty)>\pi$, and here we may also assume that $\varphi$ does not have jumps of size $\ge\pi$ because these could be replaced by jumps of smaller sizes by removing these unnecessary multiples of $\pi$. As our first step, we then rotate, as in the last part of the proof of Theorem \ref{TWW2}, in such a way that the new $\varphi$ ranges over an interval $(\alpha,\beta) \supseteq [-\pi/2,\pi/2]$. This will not affect the property of $H$ of having non-negative spectrum because acting on $H(x)$ by a rotation matrix will lead to a unitarily equivalent (whole line) operator \cite[Theorem 7.2]{Rembook}. Since all jumps of $\varphi$ (if any) are of size $<\pi$, we can then find an $a\in{\mathbb R}$ such that $\varphi(a-)<\pi/2$, $\varphi(a+)>-\pi/2$. 
Now Theorem \ref{TWW1} (together with its mirror version for left half lines) shows that both half line problems, on $(-\infty, a)$ and $(a,\infty)$, have negative spectrum. However, as we just pointed out, this is impossible when the whole line problem has non-negative spectrum. The converse can be established by similar arguments. If $\psi(-\infty)-\psi(\infty)\le\pi$, with $\psi=\varphi-\pi/2$, then we can cut the whole line into two half lines in such a way that both half line coefficient functions are as described in Theorem \ref{TWW1}. What we need to do here is cut at the unique point at which $\psi$ crosses a value $\equiv 0\bmod\pi$, if there is one; if not, then we can cut at an arbitrary point. Then we refer to Theorem \ref{TWW2} and its analog for left half lines (and let's just say that we cut at $x=0$): \begin{align} \label{3.9} \psi(0+)& =\alpha_+(-\infty), \quad\psi(\infty) =\alpha_+(0-) ,\\ \nonumber \psi(0-)& =\alpha_-(-\infty), \quad \psi(-\infty) =\alpha_-(0-) . \end{align} The angles $\alpha_{\pm}$ again express the values of the $m$ functions: $\pm m_{\pm}(-t) = \cot\alpha_{\pm}(-t)$. Note that $\alpha_+$ is decreasing on $(-\infty,0)$, while $\alpha_-$ is increasing there. When these monotonicity properties are combined with \eqref{3.9} and the information on the range of $\psi$, then it will follow that $\alpha_{\pm}$ never take the same value modulo $\pi$. (As usual, there is a trivial exceptional case here, when $H(x)\equiv P_{\beta}$, which, also as usual, we leave to the reader.) So the whole line problem does not have negative eigenvalues, and then the decomposition method finishes the proof. \end{proof} \section{The essential spectrum} We will prove the following more general result, which will imply Theorems \ref{Tess}, \ref{T1.3}. 
\begin{Theorem} \label{T4.1} Suppose that $H\in\mathcal C_+$, write $H(x)=P_{\varphi(x)}$, with $\varphi$ chosen as in Theorem \ref{TWW1}, and let \[ A = \limsup_{x\to\infty} x(\varphi(x)-\varphi(\infty)) \] (so $0\le A\le\infty$). Then \[ \frac{1}{4A} \le \min \sigma_{ess} \le \frac{1}{A} . \] \end{Theorem} Here we formally set $\min\emptyset =\infty$ and, as usual in such situations, $1/0=\infty$, $1/\infty=0$. The presence of a gap between the upper and lower bounds is unavoidable since $A$ does not provide enough information to find the bottom of the essential spectrum exactly. This is possible, however, if the limit exists; more generally, we have the following bound. \begin{Theorem} \label{T4.2} Suppose that $H\in\mathcal C_+$, and let \[ B = \liminf_{x\to\infty} x(\varphi(x)-\varphi(\infty)) . \] Then $\min\sigma_{ess}\le 1/(4B)$. \end{Theorem} \begin{proof}[Proof of Theorem \ref{T4.1}] We first give an oscillation theoretic description of $T=\min\sigma_{ess}$ for $H\in\mathcal C_+$. Clearly, $T$ is characterized by the pair of conditions $\dim E(0,t)<\infty$ for $t<T$, $\dim E(0,t)=\infty$ for $t>T$. By Theorem \ref{T2.1}, this is equivalent to the corresponding conditions \begin{equation} \label{4.1} \lim_{x\to\infty} \theta(x;t)<\infty \quad (0<t<T); \quad\quad \lim_{x\to\infty} \theta(x;t)=\infty \quad (t>T) \end{equation} on the Pr{\"u}fer angle $\theta$ with $\theta(0;t)=0$, say. We will again use the Pr{\"u}fer equation in the form \eqref{3.5}. As we observed earlier, our only chance to come to rest is at the values $\psi(\infty)+n\pi$, so we only need to analyze what happens when $\theta(x;t)$ comes close to one of these. Note that unlike in the previous section, the two angles are now in contrary motion: $\psi$ decreases, while $\theta$ increases. We start with the first inequality from Theorem \ref{T4.1}. 
For notational convenience, we assume that $\psi(\infty)=0$; the general case can be reduced to this situation by applying a rotation, as discussed in the last part of the proof of Theorem \ref{TWW2}. Actually, the agreement that $\psi(\infty)=0$ is not completely consistent with our earlier conventions on the range of $\psi$, but this discrepancy is harmless; of course, we can always add multiples of $\pi$ to $\psi$. We will then show that if $0\le\psi(x)\le B/x$ ($x\ge a$) and $0<t<1/(4B)$, then the solution $\theta(x)$ of \begin{equation} \label{4.2} \theta' = t\sin^2(\theta-\psi(x)) , \end{equation} with suitable initial value $\theta(a)=\theta_0<0$, will satisfy $\theta(x)<0$ for all $x>a$. (This equation \eqref{4.2} is of course the same as \eqref{3.5}, but for positive spectral parameter $t$ now.) This will establish that we are in the first case of \eqref{4.1}, and the desired inequality will follow since $B>A$ can be taken arbitrarily close to $A$ if we make $a$ large enough. Note also that it indeed suffices to discuss one specific initial value $\theta_0=\theta(a)$, and it doesn't really matter what value $\theta_0$ we choose here: which alternative of \eqref{4.1} holds will not depend on this value. Or in more concrete style, we can observe that if a different initial value is chosen, then perhaps $\theta(x)$ will cross the value zero one more time, but then on the next lap we will see the value $\theta_0$ again and the argument applies now. We use the comparison principle, and we are interested in the range $\theta_0\le\theta\le 0$, so we estimate the right-hand side of \eqref{4.2} from above by $t(\theta-B/x)^2$ and then consider the comparison equation $\theta'_1 = t(\theta_1-B/x)^2$, $\theta_1(a)=\theta_0$. We have $\theta(x)\le\theta_1(x)$, so it is now enough to show that $\theta_1(x)<0$ for all $x\ge a$. 
In fact, by rescaling the $x$ variable, it suffices to consider the case $t=1$, so we will analyze the initial value problem \begin{equation} \label{4.7} \theta'_1 = \left( \theta_1 - \frac{C}{x} \right)^2, \quad \theta_1(a)=\theta_0 , \end{equation} with $C=Bt<1/4$. Introduce $\alpha = \theta_1-C/x$. Then the equation becomes \begin{equation} \label{4.3} \alpha' = \alpha^2 + \frac{C}{x^2} . \end{equation} This is a Riccati equation, and the well known substitution $\alpha=-u'/u$ transforms it into a Schr{\"o}dinger equation \begin{equation} \label{4.4} -u''-\frac{C}{x^2}u = 0 ; \end{equation} more precisely, if we have a zero free solution $u$ of \eqref{4.4}, then $\alpha=-u'/u$ will solve \eqref{4.3}. Now \eqref{4.4} is an Euler equation that can be solved explicitly by powers $x^p$, and a quick calculation shows that here the admissible exponents are \[ p_{\pm} = \frac{1}{2} \left( 1 \pm \sqrt{1-4C}\right) . \] We now take specifically the solution with the initial values $u(a)=0$, $u'(a)=1$. This will be computationally convenient, but note that this actually corresponds formally to the initial value $\alpha(a)=-\infty$. This will not be a problem because $\alpha(x)$ reaches finite values instantaneously for $x>a$, and, as we discussed, $\theta$ can be assigned any negative initial value. A straightforward calculation now shows that \[ \alpha(x) = - \frac{p_+ a^{p_-}x^{p_+-1}-p_-a^{p_ +}x^{p_--1}}{a^{p_-}x^{p_+}-a^{p_+}x^{p_-}} . \] Since $p_++p_-=1$, we can rewrite this as \[ \alpha(x) = - \frac{1}{a\xi} \frac{p_+\xi^d-p_-}{\xi^d-1}, \quad \xi = \frac{x}{a} \ge 1, \quad d=p_+-p_-=\sqrt{1-4C} . \] We want to show that $\alpha(x)<-C/x$ for all $x\ge a$. This is certainly true initially, so we only need to make sure that $\alpha(x)=-C/x$ can never happen. To confirm this, it suffices to set $y=\xi^d$ and then observe that the equation \[ \frac{p_+y-p_-}{y-1} = C , \quad 0<C<1/4, \] has no solutions $y>1$. 
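For completeness, here is the computation behind that last observation. Since $p_{\pm}$ are the roots of $p^2-p+C=0$, we have $p_++p_-=1$ and $p_+p_-=C$, so solving $(p_+y-p_-)/(y-1)=C$ for $y$ gives

```latex
% p_+ - C = p_+(1 - p_-) = p_+^2 and p_- - C = p_-(1 - p_+) = p_-^2, hence:
\[
y = \frac{p_--C}{p_+-C} = \frac{p_-^2}{p_+^2} < 1 \qquad (0<C<1/4) ,
\]
```

since $0<p_-<p_+$ in this range of $C$; in particular, there is no solution $y>1$.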
The reader familiar with the spectral theory of Schr{\"o}dinger operators will undoubtedly have observed that this part of the argument is powered by the well known fact that the operator $-d^2/dx^2-C/x^2$ has no negative spectrum if $C<1/4$ (and this bound is sharp, there will be infinite negative spectrum for $C>1/4$). We now prove the upper bound $\min\sigma_{ess}\le 1/A$. Let $t>1/A$. In fact, by rescaling, it will again suffice to treat the case $t=1$, $A>1$. We must then show that if $\theta(a;t=1)=\theta_0<0$ at some $a>0$, then $\theta(x)=0$ for some $x>a$ (which will then imply that $\theta(y)>0$ for $y>x$, and this is what we really need). The key point is that this must hold for any $a$, no matter how large, for a given $\theta_0$. The precise value of $\theta_0<0$, on the other hand, is again irrelevant. Indeed, $\theta(x)$ will always approach zero; the only question is if this value is reached in finite time. So let $a>0$ be given, and let's also assume that $a$ is so large that $\psi(x)\le\pi/2$, say, for $x\ge a$. There are arbitrarily large $b>a$ such that $\psi(b)\ge B/b$, and this works for any $B<A$. Since $\psi$ is decreasing, we will then have $\psi(x)\ge B/b$ for all $x\le b$. Thus \[ \sin^2(\theta-\psi(x))\ge (1-\epsilon) \left( \theta-\frac{B}{b}\right)^2 , \] and this will be valid on $x\in [a,b]$, as long as $\theta_0\le \theta\le 0$. Moreover, we can achieve any $\epsilon>0$ here if we take $\theta_0$ close enough to zero and $b$ large enough. In a moment, it will turn out that we want $\epsilon < 1-1/B$; here, we of course assume that we took $B$ sufficiently close to $A$, so that $B>1$ also. We will then work with \[ \theta'_1= (1-\epsilon) \left( \theta_1 - \frac{B}{b}\right)^2 , \quad \theta_1(a)=\theta_0 \] as our comparison equation. This can be solved explicitly, and the perhaps most convenient way to do this is to again introduce $\alpha=\theta_1-B/b$. 
Then $\alpha'=(1-\epsilon)\alpha^2$ and thus \[ \alpha(x) = \frac{\theta_0-B/b}{1-(\theta_0-B/b)(1-\epsilon)(x-a)} \ge \frac{-1}{(1-\epsilon)(x-a)} . \] We have shown that \[ \theta(x) \ge \frac{B}{b} - \frac{1}{(1-\epsilon)(x-a)} \] (at least as long as $\theta(x)\le 0$), and this lower bound can be made positive at $x=b$ since $1/(1-\epsilon)<B$ and we can still take $b$ arbitrarily large. \end{proof} \begin{proof}[Proof of Theorem \ref{T4.2}] This is very similar to what we just did, so we'll just give a brief sketch. The comparison equation \eqref{4.7} also works as a lower bound if now $\psi(x)\ge C/x$, $C<B$, and we introduce an additional factor $1-\epsilon$ on the right-hand side. The analysis of this equation then proceeds exactly as in the first part of the previous proof. The fact that the Schr{\"o}dinger operator $-d^2/dx^2-D/x^2$ has infinite negative spectrum when $D>1/4$ will make the argument work. \end{proof} A classical, well known criterion for the absence of essential spectrum of a Schr{\"o}dinger operator $\mathcal L=-d^2/dx^2+V(x)$ on $L^2(0,\infty)$ with $V\ge 0$, say, is \textit{Molchanov's criterion }\cite{Mol}, which says that $\sigma_{ess}(\mathcal L)=\emptyset$ if and only if \begin{equation} \label{mol} \lim_{x\to\infty} \int_x^{x+d} V(t)\, dt =\infty \quad \textrm{\rm for all }d>0 . \end{equation} Schr{\"o}dinger equations $-y''+V(x)y=zy$ can be written as canonical systems, basically by running the variation-of-constants method with the equation for $z=0$ taking the role of the unperturbed system; see \cite[Section 1.3]{Rembook} for further details. To end up with a canonical system $H\in\mathcal C_+$, we assume that $\mathcal L\ge 0$ also. The canonical system will be of the form \[ H_0(x) = \begin{pmatrix} p^2 & pq \\ pq & q^2 \end{pmatrix} , \] and here $p,q$ solve $-y''+Vy=0$ and satisfy certain initial conditions (which are irrelevant for us and also depend on the boundary condition at $x=0$ of $\mathcal L$). 
This coefficient function $H_0$ is not yet trace normed; to do this, we need to pass to the new variable \begin{equation} \label{4.8} X = \int_0^x \left( p^2(t) + q^2(t)\right) \, dt . \end{equation} We see that indeed $\det H_0(x)=0$, as guaranteed by Theorem \ref{TWW1}, so $H=P_{\varphi}$ with $\cot\varphi = p/q$. Theorem \ref{TWW1} also shows that $M=\lim_{x\to\infty} p/q$ exists, possibly after switching $p,q$, to avoid the scenario where $M=\infty$. So if we introduce the new solution $f=p-Mq$, then $f/q\to 0$. It will now be convenient to assume that $\min\sigma(\mathcal L)>0$. This is not really an extra assumption because any energy can take over the role of $z=0$ in the transformation; what we do is deliberately choose an energy below the spectrum. This has the technical advantage that we will then have an $L^2$ solution at this energy, and obviously, in our situation, this must be the solution $f$ just constructed. Theorem \ref{Tess} now says that $\sigma_{ess}=\emptyset$ if and only if $Xf/q\to 0$. There won't be any problems with the zeros of $q$ here because $q$ has at most one, by (classical) oscillation theory for Schr{\"o}dinger equations. This condition can be given a more intuitive form. Constancy of the Wronskian $W=f'q-fq'=q^2(f/q)'$ implies that \begin{equation} \label{4.9} \frac{f(x)}{q(x)} = -W\int_x^{\infty} \frac{dt}{q^2(t)} . \end{equation} By letting $f,q$ span the solution space, we see that every solution is either a multiple of $f$ or behaves asymptotically like a multiple of $q$. We then use \eqref{4.8}, \eqref{4.9} in the expression $Xf/q$ and finally arrive at the following criterion. \begin{Theorem} \label{T4.3} Let $\mathcal L=-d^2/dx^2+V(x)$ be a half line Schr{\"o}dinger operator that is bounded below, and fix an $E_0<\min\sigma(\mathcal L)$. Let $q(x)$ be any solution of $-y''+Vy=E_0y$ with $q\notin L^2(0,\infty)$. 
Then $\sigma_{ess}(\mathcal L)=\emptyset$ if and only if \[ \lim_{x\to\infty} \int_0^x q^2(t)\, dt \int_x^{\infty} \frac{dt}{q^2(t)} = 0 . \] \end{Theorem} It is not hard to show directly that this condition is equivalent to \eqref{mol}, but we knew that already. So Theorem \ref{Tess} can be said to contain Molchanov's criterion as a special case, but of course it is much more general because it applies to arbitrary canonical systems, not just the ones that are Schr{\"o}dinger equations rewritten. \section{Diagonal canonical systems and exponential orders} Let $H\in\mathcal C_+$, and write $H(x)=P_{\varphi(x)}$, with $\varphi(x)$ chosen as in Theorem \ref{TWW1}. In fact, it will be convenient now to also demand that $\varphi(x)$ is right-continuous. Furthermore, we make the additional assumption that $-\pi/2<\varphi(x)\le \pi/2-\delta$ for some $\delta>0$. Let $u$ be a solution of \eqref{can}, and introduce the new variable \begin{equation} \label{5.1} t = -\tan\varphi(x) ; \end{equation} this is an increasing function of $x>0$, with range contained in $[t_0,\infty)$, $t_0=-\tan\varphi(0+)$. We want to rewrite \eqref{can} by using $t$ instead of $x$, so we would like to define $v(t)=u(x)$, with $t$ and $x$ related by \eqref{5.1}, but here we must be careful since $t(x)$ can fail to be injective and its range is not guaranteed to be an interval. We address these technical issues as follows: gaps in the range of $t(x)$ result from jumps of $\varphi(x)$, and if $\varphi(a)<\varphi(a-)$, then we simply set $v(t)=u(a)$ for $-\tan\varphi(a-)\le t< -\tan\varphi(a)$. If, on the other hand, $\varphi(x)$ is constant on an interval $(a,b)$ (and this interval is maximal with this property), then we set $v(t)=u(b)$ for $t=-\tan\varphi(a)$. There is no conflict between these definitions at points at which both apply, and in all other cases, there is a unique $x$ with $t=t(x)$, and the originally intended definition $v(t)=u(x)$ works. 
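Before continuing, a brief numerical aside: the criterion of Theorem \ref{T4.3} is easy to probe in examples. The following sketch (the potentials, solutions, and tolerances are our own illustrative choices, not taken from the text) contrasts $V=0$ with $E_0=-1$, $q(x)=e^x$, where $\sigma_{ess}=[0,\infty)$ and the product tends to $1/4$, against $V(x)=x^2+1$ with $E_0=0$, $q(x)=e^{x^2/2}$ (so $-q''+Vq=0$), where, assuming $\min\sigma(\mathcal L)>0$ so the criterion applies, the product decays like $1/(4x^2)$:

```python
import math

def molchanov_product_free(x):
    # V = 0, E0 = -1, q(t) = e^t (not in L^2): both integrals are elementary,
    # int_0^x e^{2t} dt * int_x^oo e^{-2t} dt = (1 - e^{-2x})/4  ->  1/4 != 0.
    return (math.exp(2 * x) - 1) / 2 * math.exp(-2 * x) / 2

def molchanov_product_oscillator(x, n=200_000):
    # V(t) = t^2 + 1, E0 = 0, q(t) = e^{t^2/2} solves -q'' + V q = 0.
    # Front factor int_0^x q^2 = int_0^x e^{t^2} dt by the midpoint rule;
    # tail factor int_x^oo q^{-2} = int_x^oo e^{-t^2} dt = (sqrt(pi)/2)*erfc(x).
    h = x / n
    front = h * sum(math.exp((k * h + h / 2) ** 2) for k in range(n))
    tail = math.sqrt(math.pi) / 2 * math.erfc(x)
    return front * tail
```

In the first case the product stabilizes near $1/4$ (nonempty essential spectrum), while in the second it decreases toward $0$, as Theorem \ref{T4.3} predicts for empty essential spectrum.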
The function $v(t)$ thus defined is right-continuous and of bounded variation, with jumps precisely at the values $t$ that correspond to intervals of constancy of $\varphi$, and of course these are exactly the singular intervals of the original system. We now claim that we can rewrite the integrated form of \eqref{can}, \[ u(x)-u(0) = zJ\int_0^x P_{\varphi(y)}u(y)\, dy , \] at a regular point $x$, as follows: \begin{equation} \label{5.2} v(t) - v(t_0) = zJ \int_{(t_0,t]} \begin{pmatrix} 1 & -s \\ -s & s^2 \end{pmatrix} v(s)\, dw(s) , \end{equation} and here $dw$ is a Borel measure on $(t_0,\infty)$ that is defined by the condition that $(1+t^2)\,dw(t)$ is the image measure of $dx$ under the correspondence $x\mapsto t$. Observe now that \[ P_{\varphi(x)} = \frac{1}{1+t^2} \begin{pmatrix} 1 & -t \\ -t & t^2 \end{pmatrix} , \] and this shows that we indeed obtain \eqref{5.2}, from the substitution rule. It is perhaps also helpful to comment more explicitly on what happens here when $\varphi$ is either constant on an interval or has a jump. In the first case, if $(a,b)$ is a singular interval, so $\varphi(x)$ is constant on $[a,b)$, then $w$ will have the point mass $(1+t^2)w(\{t\})=b-a$ at $t=-\tan\varphi(a)$. Therefore \eqref{5.2} will give $v$ a jump \[ v(t) = \left( 1+z(b-a)JP_{\varphi(a)}\right) v(t-) \] at this point, which is exactly what the singular interval did to $u(x)$ across $(a,b)$. If, on the other hand, $t\in (c,d)$ is an interval corresponding to a jump of $\varphi(x)$, then $w((c,d))=0$, and this is consistent with the fact that $v$ is constant on this interval. Next, we introduce \[ y(t) = \begin{pmatrix} 1 & -t \\ 0 & 1 \end{pmatrix} v(t) . \] Then \eqref{5.2} is equivalent to \begin{equation} \label{5.3} y(t)-y(t_0) = \int_{(t_0,t]} \begin{pmatrix} 0 & -ds \\ z\, dw(s) & 0 \end{pmatrix} y(s) . 
\end{equation} This we can confirm by a brute force calculation: by expressing everything in \eqref{5.3} in terms of $v$, we see that we will obtain this equation if we can show that the matrix $\bigl( \begin{smallmatrix} 0 & 1 \\ 0&0 \end{smallmatrix} \bigr)$ annihilates the vector \[ (t-t_0)v(t_0)-\int_{t_0}^t v(s)\, ds + z \int_{(t_0,t]} (t-s)J \begin{pmatrix} 1 & -s \\ -s & s^2 \end{pmatrix}v(s) \, dw(s) . \] To do this, write $t-s = \int_{[s,t]}du$ in the last term, change the order of integration in the resulting double integral, and use \eqref{5.2}. As our final transformation, we write $z=\zeta^2$ and introduce \[ p(t) = \begin{pmatrix} \zeta & 0 \\ 0 & 1 \end{pmatrix} y(t) . \] Then \eqref{5.3} becomes \begin{equation} \label{5.5} J(p(t)-p(t_0)) = -\zeta\int_{(t_0,t]} \begin{pmatrix} dw(s) & 0 \\ 0 & ds \end{pmatrix} p(s) , \end{equation} and this is (almost) the promised diagonal canonical system that is associated with $H=P_{\varphi}$. We can write this system in differential form if we pass to a new variable one more time. More specifically, we let $T=w((t_0,t])+t-t_0$, so $dT$ is the trace of the coefficient matrix from \eqref{5.5}, and then make $p$ a function of $T$. The transformation from $t$ to $T$ will correspond exactly to the initial transformation of going from $x$ to $t$, except that we are now doing it in the opposite direction. We will obtain the new coefficient function \[ H_1(T) = \begin{pmatrix} h(T) & 0 \\ 0 & 1-h(T) \end{pmatrix} , \] with $0\le h\le 1$. We give $H_1$ the required singular interval of type $0$ on the $T$ intervals corresponding to the point masses of $dw$, and on the remaining set, we define $h$ by the condition that $h\, dT$ is the image measure of $dw$ (and then $(1-h)\, dT$ will be related to $dt$ in the same way). 
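It may be worth recording the one-line check that \eqref{5.3} turns into \eqref{5.5}. Writing \eqref{5.3} in differential form as $dy=\bigl(\begin{smallmatrix} 0 & -ds \\ z\,dw & 0 \end{smallmatrix}\bigr)y$ and using $p=\mathrm{diag}(\zeta,1)y$, $z=\zeta^2$, together with the usual convention $J=\bigl(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\bigr)$, we find

```latex
% dp = diag(zeta, 1) dy, then substitute y = diag(1/zeta, 1) p and z = zeta^2:
\[
dp = \begin{pmatrix} 0 & -\zeta\, ds \\ z\, dw & 0 \end{pmatrix} y
   = \begin{pmatrix} 0 & -\zeta\, ds \\ \zeta\, dw & 0 \end{pmatrix} p ,
\qquad
J\, dp = -\zeta \begin{pmatrix} dw & 0 \\ 0 & ds \end{pmatrix} p ,
\]
```

which is the integrated relation \eqref{5.5}.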
If $p(t_0,\zeta)$ is constant or has polynomial dependence on $\zeta$, then $p(T,\zeta)$ will be of order at most $1$, and since $z=\zeta^2$, we now obtain Theorem \ref{Torder}(a) as an immediate consequence, except that we made an additional assumption on the range of $\varphi(x)$ at the beginning of this section. This, however, is easy to remove. Since we can compute the transfer matrix across an interval as a product of transfer matrices across smaller subintervals, it will be enough to discuss the case when $\varphi(0+)-\varphi(\infty)<\pi$. But then we can apply a rotation matrix, as discussed in the last part of the proof of Theorem \ref{TWW2}, to return to the situation already dealt with. Part (c) then follows from de~Branges's \cite{dB} well known formula for the exponential \textit{type }(not order) $\tau$ of a transfer matrix \cite[Theorem 4.26]{Rembook}, which can be computed as \[ \tau = \int_0^T \sqrt{\det H_1(S)}\, dS = \int_0^T \sqrt{h(S)(1-h(S))}\, dS . \] Of course, if this is positive, then the order of $p$ will equal $1$ and thus $\textrm{ord}\: T(x;z)=1/2$. So if this order is less than $1/2$, then $h=0$ or $1$ almost everywhere. This happens if and only if $dw(t)$ is a purely singular measure and this is equivalent to $d\varphi$ being purely singular. To prove part (b) of Theorem \ref{Torder}, we design suitable functions $A(z)=u_1(L;z)$, $C(z)=u_2(L;z)$, with $u$ denoting the solution of \eqref{can} with $u(0;z)=e_1$. We will then obtain the coefficient function $H(x)$ from an inverse spectral theory result. It is very easy to produce a canonical system with order $\nu=0$: a succession of finitely many singular intervals will give us a polynomial transfer matrix. So we can focus on desired orders in the range $0<\nu<1/2$. Let $\alpha=1/\nu > 2$, and define \begin{equation} \label{5.6} A(z) = \prod_{n\ge 1} \left( 1 - \frac{z}{n^{\alpha}} \right) . 
\end{equation} This is the Hadamard product representation of an entire function with zeros $z_n=n^{\alpha}$, and from the asymptotics of these it follows that $\textrm{ord}\: A = \nu$ \cite[Theorem 2.6.5]{Boas}. We will then define a second function $C(z)$ in the same way, by giving it the zeros $z_0=0$, $z_n = (n^{\alpha}+(n+1)^{\alpha})/2$, and $C'(0)>0$. Since these alternate with those of $A$, it will then follow that $A-iC$ is a de~Branges function; see the discussion of \cite[Chapter VII]{Levin}. By fundamental inverse spectral theory results \cite[Theorems 4.20, 5.2]{Rembook}, there will be a canonical system on some interval $[0,L]$ whose transfer matrix $T(L;z)$ has $(A,C)^t$ as its first column if $1/[(z+i)E(z)]\in H^2$, with $E=A-iC$. In our situation, it will be enough to verify that \begin{equation} \label{5.7} \int_{-\infty}^{\infty} \frac{dt}{(1+t^2)|E(t)|^2} = \int_{-\infty}^{\infty} \frac{dt}{(1+t^2)(A^2(t)+C^2(t))} < \infty . \end{equation} Both $|A(t)|$ and $|C(t)|$ are decreasing on $t<0$, so we can focus on $t>0$ here. We will then estimate \eqref{5.6} to show that $|A(t)|$ can not get small as long as we don't get close to its zeros, and of course $C$ will have the same property. This will prove \eqref{5.7}. The spectrum of the corresponding canonical system on $[0,L]$ with boundary condition $u_1(L)=0$ at $x=L$ is given by the zeros of $A$. As usual, we can then view this as a half line problem, by setting $H(x)=P_{e_1}$ on $x>L$, and then $H\in\mathcal C_+$. Thus \eqref{5.7} will also establish Theorem \ref{Torder}(b). The argument to prove \eqref{5.7} is quite routine, plus there are similar estimates available in the literature \cite[Chapter 4]{Boas}, so we will just give a sketch. Since $(n+1)^{\alpha}-n^{\alpha}\simeq \alpha n^{\alpha-1}$, it will be enough to consider $t\ge 0$ with $|t-n^{\alpha}|\gtrsim n^{\alpha-1}$ for all $n\ge 1$. 
By replacing the sum by an integral, it's then easy to see that \[ \log |A(t)| = \sum_{n\ge 1} \log \left| 1- \frac{t}{n^{\alpha}} \right| \] will satisfy $\log |A(t)|\gtrsim I(\alpha)t^{1/\alpha}-O(\log t)$ for these $t$, with \[ I(\alpha) = \int_0^{\infty} \log |1-s^{-\alpha}|\, ds . \] The monotonicity properties of $\log x$ imply that $I(\alpha)$ is strictly increasing, and \begin{align*} I(2) & = \lim_{L\to\infty} \int_0^L \log \frac{(s+1)|s-1|}{s^2}\, ds \\ & = \lim_{L\to\infty} \left( \int_1^{L+1} \log s\, ds + \int_{-1}^{L-1}\log |s| \, ds -2\int_0^L \log s\, ds \right) = 0 . \end{align*} So $I(\alpha)>0$ for $\alpha>2$, and \eqref{5.7} follows.
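As a final sanity check (again outside the formal argument; the quadrature parameters below are ad hoc choices), one can approximate $I(\alpha)=\int_0^\infty \log|1-s^{-\alpha}|\,ds$ numerically and confirm that $I(2)\approx 0$ while $I(3)$ is comfortably positive, in line with the monotonicity just established:

```python
import math

def I(alpha, cutoff=50.0, n=200_000):
    # Midpoint rule on (0, cutoff). The integrand has an integrable log
    # singularity at s = 1 (never hit by a midpoint) and behaves like
    # -alpha*log(s) as s -> 0, which the rule also handles adequately.
    h = cutoff / n
    total = 0.0
    for k in range(n):
        s = k * h + h / 2
        total += math.log(abs(1.0 - s ** (-alpha)))
    total *= h
    # Tail beyond the cutoff, using log(1 - s^{-alpha}) ~ -s^{-alpha}:
    total -= cutoff ** (1.0 - alpha) / (alpha - 1.0)
    return total
```

With these parameters, $I(2)$ comes out at the level of the quadrature error, while $I(3)$ is on the order of $1.8$.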
Q: Program doesn't terminate when reaching the end of the main method

I have a program where I ask for an input from the user before I launch it.

    public static void main(String args[]) {
        String database = JOptionPane.showInputDialog(new JFrame(), "Enter a DB:");
        if (database != null && database.foundInDB()) {
            SPVerification spv = new SPVerification();
            spv.setVisible(true);
        }
        // System.exit(1); Without it the program doesn't terminate,
        // although it's the end of the main function.
    }

If the user enters a database that's not found, the program shouldn't be executed. When I enter a wrong DB name, the code inside the if statement doesn't execute, so I reach the end of the main method, but the program doesn't terminate; if I add System.exit(1) after the if statement, the program terminates. Why do I need to call System.exit(1) although I've reached the end of main?

A: You're showing a Swing dialog, which starts up the event dispatch thread. This is a non-daemon thread, so it will prevent the program from exiting. For a normal Swing application, this is exactly what you want: all main() should do is gather any configuration information and create the main frame. One solution to your problem is to wrap the dialog code in a call to SwingUtilities.invokeAndWait().

A: You have created a new JFrame which, by default, will not close, as there is nothing to trigger the window to be disposed, such as a WindowEvent. As this appears to be a non-UI based application, you could simply use:

    JOptionPane.showInputDialog(null, "Enter a DB:");

A: Dispose of the frame once you are done with it:

    JFrame jframe = new JFrame();
    String answer = JOptionPane.showInputDialog(jframe, "Enter a DB:");
    System.err.println(answer);
    jframe.dispose();

A: You might call .setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); on your JFrame so that the JVM terminates after closing the JFrame (given the fact all other running threads are daemons).
# Friedrich Hasenöhrl

- Born: November 30, 1874, Vienna, Austria (Austria-Hungary)
- Died: October 7, 1915 (aged 40), Tyrol, Austria (Austria-Hungary)
- Residence: Austria-Hungary
- Nationality: Austro-Hungarian
- Fields: Physics
- Institutions: University of Vienna
- Alma mater: University of Vienna
- Doctoral advisor: Franz S. Exner
- Doctoral students: Karl Herzfeld, Erwin Schrödinger
- Known for: Cavity radiation

Friedrich Hasenöhrl (November 30, 1874 - October 7, 1915) was an Austro-Hungarian physicist.

Friedrich Hasenöhrl was born in Vienna, Austria (Austria-Hungary) in 1874. His father was a lawyer and his mother belonged to a prominent aristocratic family. After his elementary education, he studied natural science and mathematics at the University of Vienna under Stefan and Boltzmann. He worked under H. A. Lorentz in Leiden at the low temperature laboratory.

In 1907 he became Boltzmann's successor at the University of Vienna as the head of the Department of Theoretical Physics. He had a number of illustrious pupils there and had an especially significant impact on Erwin Schrödinger, who later won the Nobel Prize for Physics for his contributions to quantum mechanics.

When the war broke out in 1914, he volunteered at once into the Austro-Hungarian army. He fought as Oberleutnant against the Italians in Tyrol. He was wounded, recovered and returned to the front. He was then killed by a grenade in an attack on Mount Plaut on October 7, 1915, at the age of 40.

## Cavity radiation

Since J. J. Thomson in 1881, many physicists like Wilhelm Wien (1900), Max Abraham (1902), and Hendrik Lorentz (1904) used equations equivalent to

$m_{em}=\frac{4}{3} \cdot \frac{E_{em}}{c^2}$

for the so-called "electromagnetic mass", which expresses how much electromagnetic energy contributes to the mass of bodies. And Henri Poincaré (1900) implicitly used the expression m = E/c² for the mass of electromagnetic energy.

Following this line of thought, Hasenöhrl (1904, 1905) published several papers on the inertia of a cavity containing radiation. This was an entirely classical derivation (no use of special relativity) and used Maxwell's equation for the pressure of light. Hasenöhrl specifically associated the "apparent" mass via inertia with the energy concept through the equation

$m=\frac{8}{3} \cdot \frac{h \, \varepsilon_0}{c^2}$,

where hε₀ is the radiation energy. He also concluded that this result is valid for all radiating bodies, i.e. for all bodies whose temperature is above 0 K. For this result Hasenöhrl was awarded the Haitinger prize of the Austrian Academy of Sciences. However, it was shown by Abraham that Hasenöhrl's calculation of the apparent mass was incorrect, so he published another paper in 1905, where he presented Abraham's criticism and corrected his formula to

$m=\frac{4}{3} \cdot \frac{h \, \varepsilon_0}{c^2}$

This was the same relation (as Hasenöhrl noted himself) that was already known from the electromagnetic mass. If he had included the shell in his calculations in a way consistent with relativity, the pre-factor of 4/3 would have been 1, yielding m = E/c². He could not have done this, since he did not have relativistic mechanics with which he could model the shell.

Hasenöhrl's results (concerning apparent mass and thermodynamics) using cavity radiation were further elaborated and criticized by Kurd von Mosengeil (1906/7), who already incorporated Albert Einstein's theory of relativity in his work. A broad outline of relativistic thermodynamics and mass-energy equivalence using cavity radiation was given by Max Planck in 1907.[1][2][3]

In some additional papers (1907, 1908) Hasenöhrl elaborated further on his 1904 work and concluded that his new results were now in accordance with the theories of Mosengeil and Planck. However, he complained about the fact that Planck (1907) did not mention his earlier 1904 results (like the dependency of the apparent mass on temperature). Eventually, in 1908, Planck wrote that the results of Hasenöhrl's new approach from 1907 were indeed equivalent to those of relativity.[4]

## Hasenöhrl and Einstein

The formulas for electromagnetic mass (like those of Hasenöhrl's) were similar to the famous equation for mass-energy equivalence,

$E=mc^2$,

published by Albert Einstein in September 1905 in the Annalen der Physik, a few editions after Hasenöhrl published his results on cavity radiation. The similarity between those formulas led some critics of Einstein, until the 1930s, to claim that he plagiarized the formula from Hasenöhrl, often in connection with the antisemitic Deutsche Physik.

As an example, Philipp Lenard published a paper in 1921 in which he gave priority for E = mc² to Hasenöhrl (Lenard also gave credit to Johann Georg von Soldner and Paul Gerber in relation to some effects of general relativity).[5] However, Max von Laue quickly rebutted those claims by saying that the inertia of electromagnetic energy was long known before Hasenöhrl, especially through the works of Henri Poincaré (1900) and Max Abraham (1902), while Hasenöhrl only used their results for his calculation on cavity radiation. Laue continued by saying that credit for establishing the inertia of all forms of energy (the real mass-energy equivalence) goes to Einstein, who was also the first to understand the deep implications of that equivalence in relation to relativity.[6]

## Publications

Hasenöhrl's papers on cavity radiation and thermodynamics

## Notes and References

1. Miller, Arthur I. (1981). Albert Einstein's special theory of relativity. Emergence (1905) and early interpretation (1905-1911). Reading: Addison-Wesley. pp. 359-374. ISBN 0-201-04679-2.
2. Mosengeil, Kurd von (1907). "Theorie der stationären Strahlung in einem gleichförmig bewegten Hohlraum". Annalen der Physik 327 (5): 867-904.
3. Planck, Max (1907). "Zur Dynamik bewegter Systeme". Sitzungsberichte der Königlich-Preussischen Akademie der Wissenschaften, Berlin, Erster Halbband (29): 542-570.
4. Planck, Max (1908). "Bemerkungen zum Prinzip der Aktion und Reaktion in der allgemeinen Dynamik". Physikalische Zeitschrift 9 (23): 828-830.
5. Lenard, P. (1921). "Vorbemerkung Lenards zu Soldners: Über die Ablenkung eines Lichtstrahls von seiner geradlinigen Bewegung durch die Attraktion eines Weltkörpers, an welchem er nahe vorbeigeht". Annalen der Physik 65: 593-604. doi:10.1002/andp.19213701503.
6. Laue, M. v. (1921). "Erwiderung auf Hrn. Lenards Vorbemerkungen zur Soldnerschen Arbeit von 1801". Annalen der Physik 66: 283-284. doi:10.1002/andp.19213712005.

- Lenard, Philipp, Great Men of Science. Translated from the second German edition, G. Bell and Sons, London (1950). ISBN 083691614X.
- Moore, Walter, "Schrödinger: Life and Thought". University of Cambridge (1989). ISBN 0521437679.
Replacing an id (integer) with a string?

I have a tex document that contains sections that are automatically generated. These sections have strings that are observation IDs. I would like to have LaTeX replace all occurrences of an ID with a string containing the star's name.

For example:

    \caption{SED of \OID1342263516}

would become:

    \caption{SED of FR Tau}

My first thought was to have a macro, something like this:

    \newcommand{\OID1342263516}{FR Tau}

However, I have learned that macros can't have numbers in their name. Is there a good way to go about doing what I want? Thanks.

• I've got a solution I think, gimme 5 minutes :-) – yo' Feb 27 '15 at 23:29
• Welcome to TeX.SX! Please help us to help you and add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – user31729 Feb 27 '15 at 23:34
• It would be much easier if the generated code was \OID{1342263516} – egreg Feb 27 '15 at 23:35
• @egreg Indeed, that's why it took me 3 minutes too long :-) – yo' Feb 27 '15 at 23:38
• @equant: I hope, the Herschel did not observe more than 10^12 stars then ;-) – user31729 Feb 28 '15 at 0:02

Here is a version using a bit of plain e-TeX that makes it work without the extra braces. The maximal value allowed here is 1073741823000, the minimal one is 1000 (or less if you make it at least 4 digits); I hope you fit in that. It can be extended by x digits, but only in case that all your numbers have at least x digits.

    \documentclass{article}

    \newcount\OIDcounta
    \newcount\OIDcountb
    \def\useOID{\csname OID\the\OIDcounta\the\OIDcountb\endcsname}
    \protected\def\OID#1#2#3{\OIDcounta#1#2#3\afterassignment\useOID\OIDcountb}
    \def\newOID#1#2{\expandafter\def\csname OID#1\endcsname{#2}}

    \newOID{123456}{Tau 456 epsilon}

    \begin{document}

    \tableofcontents

    \section{ABC \OID123456\ DEF}

    Star number hundred twenty three is \OID123456\ for sure.

    \typeout{\number\maxdimen}

    \end{document}

If you're fine with the braces, then it's easy, and will work for even very large numbers (tens of digits):

    \documentclass{article}

    \makeatletter
    \protected\def\OID#1{\@ifundefined{OID#1}{\GenericError{}{OID#1 not defined!}{}{}}{\csname OID#1\endcsname}}
    \def\newOID#1#2{\expandafter\def\csname OID#1\endcsname{#2}}
    \makeatother

    \newOID{123456}{Tau 456 epsilon}

    \begin{document}

    \tableofcontents

    \section{ABC \OID{123456} DEF}

    Star number hundred twenty three is \OID{123456} for sure.

    \end{document}

• Wow, fast. Thanks. So if \OID{1342263516} is easier as suggested by egreg, could you show me how that method would look? – equant Feb 27 '15 at 23:48
• Actually, the number 1342263516 is larger than 1073741823 and the macro still works. But there's no guarantee when the number is much larger. – egreg Feb 27 '15 at 23:53
• @egreg I shoved off the first 3 digits now, so it works for anything from 1000 to a lot. – yo' Feb 27 '15 at 23:54
• @equant Here you go. – yo' Feb 27 '15 at 23:56
• Perfect. Both methods worked perfectly for me. The latter is definitely easier to read. Thanks! – equant Feb 28 '15 at 0:02
null
null
\subsubsection{Two Dimensions} \label{subsec:two} In two dimensions, conditioned frequency is calculated differently, using the inclusion-exclusion principle, so we must show that these calculations are sound as well. We start by stating the following lemma: \begin{lemma} \label{lemma:cp2d}(\cite{HHHMitzenmacher}) In two dimensions, \[{C_{q|P}} = {f_q} - \sum\limits_{h \in G\left( {q|P} \right)} {{f_h}} + \sum\limits_{h,h' \in G\left( {q|P} \right)} {f_{{\rm{glb}}\left( {h,h'} \right)}^{}} .\] \normalsize \end{lemma} \noindent In contrast, Algorithm~\ref{alg:Skipper} estimates the conditioned frequency as: \begin{lemma} \label{lemma:algCF2D} In two dimensions, Algorithm~\ref{alg:Skipper} calculates conditioned frequency in the following manner: \small\[\widehat {{C_{q|P}}} = \hat f_q^ + - \sum\limits_{h \in G\left( {q|P} \right)} {\hat f_h^ - } + \sum\limits_{h,h' \in G\left( {q|P} \right)} {\hat f_{{\rm{glb}}\left( {h,h'} \right)}^ + } + 2{Z_{1 - \frac{\delta }{8}}}\sqrt {NV} .\] \normalsize \end{lemma} \begin{proof} The proof follows from Algorithm~\ref{alg:Skipper}. Line~\ref{line:cp} is responsible for the first element $\widehat{f_q^{+}}$ while Line~\ref{line:accSample} is responsible for the last element. The rest is due to the function calcPredecessors in Algorithm~\ref{alg:randHHH2D}. \end{proof} \begin{theorem} \label{thm:conservativeCP} $\Pr \left( {\widehat {{C_{q|P}}} \ge {C_{q|P}}} \right) \ge 1 - \delta .$ \end{theorem} \begin{proof} Observe Lemma~\ref{lemma:cp2d} and notice that in deterministic~settings, as shown in~\cite{HHHMitzenmacher}, $$\widehat f_q^ + - \sum\limits_{h \in G\left( {q|P} \right)} {\widehat f_h^ - } + \sum\limits_{h,h' \in G\left( {q|P} \right)} \widehat f_{{\rm{glb}}\left( {h,h'} \right)}^{+}$$\normalsize is a conservative estimate for ${C_{q|P}}$. Therefore, we need to account for the randomization error and verify that with probability $1-\delta$ it is less than $2{Z_{1 - \frac{\delta }{8}}}\sqrt {NV}$.
We denote by $K$ the set of packets that may affect $C_{q|P}$. Since the expression of $\widehat{C_{q|P}}$ is not monotonic, we split it into two sets: $K^{+}$ are the packets that affect $\widehat{C_{q|P}}$ positively and $K^{-}$ are those that affect it negatively. Similarly, we define $\{Y_i^K\}$ to be Poisson random variables that represent how many of the packets of $K$ are in each bin. We do not know how many bins affect the sum, but we know for sure that there are no more than $N$ balls. We define the random variable $Y^K_+$ as the number of packets from $K$ that fell into the corresponding bins so as to have a positive impact on $\widehat{C_{q|P}}$. Invoking Lemma~\ref{lemma:poissonConfidence} on $Y^K_+$ yields that: $$\Pr \left( \left| {Y_K^ + - E\left( {Y_K^ + } \right)} \right| \ge {Z_{1-\frac{\delta }{8}}}\sqrt \frac{N}{V} \right) \le \frac{\delta }{4}.$$ Similarly, we define $Y_{K}^-$ to be the number of packets from $K$ that fell into the corresponding buckets so as to create a negative impact on $\widehat{C_{q|P}}$, and Lemma~\ref{lemma:poissonConfidence} results in: $$\Pr \left( \left| {Y_K^ - - E\left( {{Y_K}^ - } \right)} \right| \ge {Z_{1-\frac{\delta }{8}}}\sqrt \frac{N}{V } \right) \le \frac{\delta }{4}.$$ $Y_{K}^+$ is monotonically increasing with the number of balls and $Y_{K}^-$ is monotonically decreasing with the number of balls. We can apply Lemma~\ref{lemma:rare} to each of them and conclude that: \small \[\begin{array}{l} \Pr \left( {\widehat {{C_{q|P}}} < {C_{q|P}}} \right) \le \\ 2\Pr \left( {V\left( {Y_K^ - + Y_K^ + } \right) \ge VE\left( {Y_K^ - + Y_K^ + } \right) + 2{Z_{1 - \frac{\delta }{8}}}\sqrt {NV} } \right) \\ \le 2 \cdot \frac{\delta }{2} = \delta , \end{array}\]\normalsize and therefore $\Pr \left( {\widehat {{C_{q|P}}} \ge {C_{q|P}}} \right) \ge 1 - \delta$, completing the proof. \end{proof} \subsubsection{Putting It All Together} We can now prove the coverage property for one and two dimensions. \begin{corollary} \label{cor:coverage} If $N>\psi$ then RHHH satisfies coverage.
That is, given a prefix $q \notin P$, where $P$ is the set of HHH returned by RHHH, $$\Pr \left(C_{q|P}<\theta N\right) >1-\delta.$$ \end{corollary} \begin{proof} The proof follows from Theorem~\ref{thm:underCP} in one dimension, or Theorem~\ref{thm:conservativeCP} in two, which guarantee that in both cases: $\Pr \left( C_{q|P}<\widehat{C_{q|P}}\right) > 1-\delta$. The only case where $q \notin P$ is if $\widehat{C_{q|P}}<\theta N$. Otherwise, Algorithm~\ref{alg:Skipper} would have added it to $P$. However, with probability $1-\delta$, $C_{q|P}<\widehat{C_{q|P}}$, and therefore $C_{q|P} <\theta N$ as well. \end{proof} \subsection{RHHH Properties Analysis} \label{sec:RHHH-prop} Finally, we can prove the main result of our analysis. It establishes that if the number of packets is large enough, RHHH is correct. \begin{theorem} \label{thm:correctness} If $N>\psi$, then RHHH solves {\sc {$(\delta,\epsilon, \theta)$ - Approximate Hierarchical Heavy Hitters}}. \end{theorem} \begin{proof} The theorem is proved by combining\\ Lemma~\ref{lemma:accuracy} and Corollary~\ref{cor:coverage}. \end{proof} Note that $\psi \triangleq {Z_{1 - \frac{{{\delta _s}}}{2}}}V{\varepsilon_s}^{ - 2}$ depends on the parameter $V$. When the minimal measurement interval is known in advance, the parameter $V$ can be set to satisfy correctness at the end of the measurement. For short measurements, we may need to use $V=H$, while longer measurements justify using $V\gg H$ and achieve better performance. When considering modern line speed and emerging new transmission technologies, this speedup capability is crucial because faster lines deliver more packets in a given amount of time and thus justify a larger value of $V$ for the same measurement~interval. For completeness, we prove the following. \begin{theorem} \label{thm:O1} RHHH's update complexity is $O(1)$. \end{theorem} \begin{proof} Observe Algorithm~\ref{alg:Skipper}.
For each update, we draw a uniformly random number between $0$ and $V-1$, which can be done in $O(1)$. Then, if the number is smaller than $H$, we also update a Space Saving instance, which can be done in $O(1)$ as well~\cite{SpaceSavings}. \end{proof} Finally, we note that our space requirement is similar to that of~\cite{HHHMitzenmacher}. \begin{theorem} \label{thm:space} The space complexity of RHHH is $O\left(\frac{H}{\varepsilon_a}\right)$ flow table entries. \end{theorem} \begin{proof} RHHH utilizes $H$ separate instances of Space Saving, each using $\frac{1}{\epsilon_a}$ table entries. There are no other space-significant data~structures. \end{proof} \section{Analysis of SWAMP} \label{sec:anal} This section is dedicated to the analysis of SWAMP. Our analysis is partitioned into three subsections. Section~\ref{anal:set} shows that SWAMP solves the {\sc $(W,\epsilon)$-Approximate Set Membership}{} and the {\sc $(W,\epsilon)$-Approximate Set Multiplicity}{} problems and explains the conditions under which it is succinct. It also shows that SWAMP operates in constant time. Next, Section~\ref{anal:Z} shows that SWAMP solves the {\sc $(W,\epsilon,\delta)$-Approximate Count Distinct}{} problem using \distinctLB{} and that \distinctMLE{} approximates its Maximum Likelihood Estimator. Finally, Section~\ref{anal:entropy} shows that SWAMP solves {\sc $(W,\epsilon,\delta)$-Entropy Estimation}{}. \subsection{Analysis of per flow counting} \label{anal:set} We start by showing that SWAMP's runtime is constant. \begin{theorem} \label{thm:runtime} SWAMP's runtime is \small$O(1)$\normalsize with high probability. \end{theorem} \begin{proof} \textbf{Update} - Updating the cyclic buffer requires two TinyTable operations - add and remove - both performed in constant time (with high probability) for any constant $\alpha$. The manipulations to $\hat{H}$ and $Z$ are also done in constant time.
\\ \textbf{ISMEMBER} - Is satisfied by TinyTable in constant time.\\ \textbf{FREQUENCY} - Is satisfied by TinyTable in constant time.\\ \textbf{\distinctLB{}} - Is satisfied by returning an integer.\\ \textbf{\distinctMLE{}} - Is satisfied with a simple calculation.\\ \textbf{\entropy{}} - Is satisfied by returning a floating point. \end{proof} Next, we prove that SWAMP solves the {\sc $(W,\epsilon)$-Approximate Set Multiplicity}{} problem with regard to the function FREQUENCY (in Algorithm~\ref{alg:SWAMP}). \begin{theorem} \label{thm:setmul} SWAMP solves the {\sc $(W,\epsilon)$-Approximate Set Multiplicity}{} problem. That is, given an ID $y$, $FREQUENCY(y)$ provides an estimation $ \widehat{f^{W}_y}$ such that: $ \widehat{f^{W}_y}\ge f^{W}_y$ and $\Pr \left[ {{f^{W}_y} = \widehat{f^{W}_y}} \right] \ge 1 - \varepsilon$. \end{theorem} \begin{proof} SWAMP's $CFB$ variable stores $W$ different fingerprints, each of length $L = \log_2 \left( W\epsilon^{-1} \right)$. For a fingerprint of size $L$, the probability that two fingerprints collide is $2^{-L}$. There are $W-1$ fingerprints that $h(y)$ may collide with, and $\widehat{f^{W}_y}>f^W_y$ only if it collides with one of the fingerprints in $CFB$. Next, we use the Union Bound to~get: $$\Pr \left[ {f_y}^W \neq \widehat{{f_y}^W}\right] \le W \cdot {2^{ -\log_2 \left( W\epsilon^{-1}\right)}} = \varepsilon.$$ Thus, given an item $y$, with probability $1-\varepsilon$, its fingerprint $\left(h(y)\right)$ is unique and TinyTable accurately measures $f^{W}_y$. Any collision of $h(y)$ with other fingerprints only increases $\widehat{f^{W}_y}$, thus $\widehat{f^{W}_y} \ge f^{W}_y$ in all cases. \end{proof} Theorem~\ref{thm:setmul} shows that SWAMP solves the {\sc $(W,\epsilon)$-Approximate Set Multiplicity}, which includes per flow counting and sliding Bloom filter functionalities.
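To make the fingerprint argument of Theorem~\ref{thm:setmul} concrete, the following Python sketch (a hypothetical illustration, not the paper's implementation; a plain counting dictionary stands in for TinyTable and SHA-256 stands in for the hash $h$) maintains a window of $W$ fingerprints of length $L = \lceil\log_2(W\epsilon^{-1})\rceil$ bits and answers FREQUENCY queries:

```python
import hashlib
import math
from collections import Counter, deque

class SlidingFingerprintCounter:
    # Hypothetical sketch of SWAMP-style windowed counting: a cyclic buffer
    # of W fingerprints of L = ceil(log2(W / eps)) bits, plus a Counter that
    # stands in for TinyTable (storing each fingerprint's multiplicity).
    def __init__(self, window, eps):
        self.window = window
        self.bits = math.ceil(math.log2(window / eps))  # fingerprint length L
        self.buffer = deque()    # the cyclic fingerprint buffer (CFB)
        self.counts = Counter()  # fingerprint -> multiplicity in the window

    def _fingerprint(self, item):
        digest = hashlib.sha256(str(item).encode()).digest()
        return int.from_bytes(digest[:8], "big") % (1 << self.bits)

    def update(self, item):
        fp = self._fingerprint(item)
        self.buffer.append(fp)
        self.counts[fp] += 1
        if len(self.buffer) > self.window:  # evict the oldest fingerprint
            old = self.buffer.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]

    def frequency(self, item):
        # Overestimates only on a fingerprint collision; by a union bound
        # over the W - 1 other fingerprints this happens with probability
        # at most W * 2^-L = eps.
        return self.counts[self._fingerprint(item)]
```

Collisions can only inflate the returned count, matching the one-sided guarantee $\widehat{f^{W}_y} \ge f^{W}_y$; the real structure additionally achieves succinct space via TinyTable's encoding, which this sketch does not attempt.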
Our next step is to analyze the space consumption of SWAMP. This enables us to show that SWAMP is memory optimal for the {\sc $(W,\epsilon)$-Approximate Set Multiplicity}, or more precisely, that it is succinct (according to Definition~\ref{def:succinct}). \begin{definition} \label{def:succinct} Given an information theoretic lower bound $\mathcal B$, an algorithm is called \emph{succinct} if it uses $\mathcal B(1+o(1))$~bits. \end{definition} We now analyze the space consumption of SWAMP. \begin{theorem} \label{thm:space} Given a window size $W$ and an accuracy parameter $\varepsilon$, the number of bits required by SWAMP is: \small $ W\left(\lceil\log_2\left( W\epsilon^{-1} \right)\rceil + \left( {1 + \alpha } \right)\left( {\log_2 \epsilon^{-1} + 3} \right)\right) + o\left( W \right) . $ \normalsize \end{theorem} \begin{proof} Our cyclic buffer ($CFB$) stores $W$ fingerprints, each of size $\lceil\log_2(W\epsilon^{-1})\rceil$, for an overall space of\\ $W\lceil\log_2\left( W\epsilon^{-1} \right)\rceil$ bits. Additionally, TinyTable requires: $ \left( {1 + \alpha } \right)W\left( {\log_2 \left( W\epsilon^{-1} \right) - \log_2 \left( W \right) + 3} \right) + o\left( W \right) = \left( {1 + \alpha } \right)W\left( {\log_2 \epsilon^{-1} + 3} \right) + o\left( W \right) $ bits.
\\Each one of the variables $curr$, $Z$ and $\hat{H}$ requires $\log_2(W) = o\left(W\right)$ bits, and thus SWAMP's memory consumption is $$ W\left(\lceil\log_2\left( W\epsilon^{-1} \right)\rceil + \left( {1 + \alpha } \right)\left( {\log_2 \epsilon^{-1} + 3} \right)\right) + o\left( { W} \right).\qquad\qedhere $$ \end{proof} \input{succinct} \section{Analysis} \label{sec:analysis} This section aims to prove that RHHH solves the {\sc$(\delta,\epsilon,\theta)-$approximate HHH} problem (Definition~\ref{def:deltaapproxHHH}) for one and two dimensional hierarchies. Toward that end, Section~\ref{sec:analSamples} proves the accuracy requirement while Section~\ref{anal:randHHH} proves coverage. Section~\ref{sec:RHHH-prop} proves that RHHH solves the {\sc$(\delta,\epsilon,\theta)-$approximate HHH} problem and establishes its memory and update complexity. We model the update procedure of RHHH as a balls and bins experiment where there are $V$ bins and $N$ balls. Upon each packet arrival, we place a ball in a bin that is selected uniformly at random. The first $H$ bins contain an HH update action while the next $V-H$ bins are void. When a ball is assigned to a bin, we either update the underlying HH algorithm with a prefix obtained from the packet's headers or ignore the packet if the bin is void. Our first goal is to derive confidence intervals around the number of balls in a bin. \begin{definition} We define $X^{K}_i$ to be the random variable representing the number of balls from set $K$ in bin $i$, e.g., $K$ can be all packets that share a certain prefix, or a combination of multiple prefixes with a certain characteristic. When the set $K$ contains all packets, we use the notation $X_i$. \end{definition} Random variables representing the number of balls in a bin are dependent on each other.
Therefore, we cannot apply common methods to create confidence intervals. Formally, the dependence is manifested as:\\ $\sum\nolimits_{i=1}^{V} {{X_i}} = N.$ This means that the number of balls in a certain bin is determined by the number of balls in all other bins. Our approach is to approximate the balls and bins experiment with the corresponding Poisson one. That is, we analyze the Poisson case, derive confidence intervals, and then use Lemma~\ref{lemma:rare} to derive a (weaker) result for the original balls and bins case. We now formally define the corresponding Poisson model. Let $Y_1^K,...,Y_{ V }^K$ be \textbf{independent} Poisson random variables representing the number of balls in each bin from a set of balls $K$. That is: $\{Y_i^K\} \sim Poisson\left( {\frac{\left| K \right|}{V}} \right).$ \begin{lemma}[Corollary 5.11, page 103 of~\cite{Mitzenmacher:2005:PCR:1076315}] \label{lemma:rare} Let $\mathfrak E$ be an event whose probability is either monotonically increasing or decreasing with the number of balls. If $\mathfrak E$ has probability $p$ in the Poisson case then $\mathfrak E$ has probability at most $2p$ in the exact case. \end{lemma} \subsection{Accuracy Analysis} \label{sec:analSamples} We now tackle the accuracy requirement from Definition~\ref{def:deltaapproxHHH}. That is, for every HHH prefix ($p$), we need to~prove: $$\Pr \left( {\left| {{f_p} - \widehat {{f_p}}} \right| \le \varepsilon N} \right) \ge 1 - \delta.$$ In RHHH, there are two distinct origins of error. Some of the error comes from fluctuations in the number of balls per bin while the approximate HH algorithm is another source of error. We start by quantifying the balls and bins error. Let $Y^{p}_i$ be the Poisson variable corresponding to prefix $p$. That is, the set $p$ contains all packets that are generalized by prefix $p$.
Recall that $f_p$ is the number of packets generalized by $p$ and therefore: $E(Y^{p}_i) = \frac{f_p}{V}.$ We need to show that with probability $1-\delta_s$, $Y^{p}_i$ is within $\frac{\epsilon_s N}{V}$ of $E(Y^{p}_i)$. Fortunately, confidence intervals for Poisson variables are well studied~\cite{19WaysToPoisson} and we use the method of~\cite{Wmethod} that is quoted in Lemma~\ref{lemma:poissonConfidence}. \begin{lemma} \label{lemma:poissonConfidence} Let $X$ be a Poisson random variable, then \[\Pr \left( {\left| {X - E\left( X \right)} \right| \ge {Z_{1-\delta }}\sqrt {E\left( X \right)} } \right) \le \delta,\] where $Z_\alpha$ is the $z$ value that satisfies $\Phi(z)=\alpha$ and $\Phi(z)$ is the cumulative distribution function of the normal distribution with mean $0$ and standard deviation $1$. \end{lemma} Lemma~\ref{lemma:poissonConfidence} provides us with a confidence interval for Poisson variables, and enables us to tackle the main accuracy result. \begin{theorem} \label{thm:pusmain} If $N \ge {Z_{1 - \frac{{{\delta _s}}}{2}}}V{\varepsilon_s}^{ - 2}$ then \[\Pr \left( {\left| {{X_i}^pV - {f_p}} \right| \ge {\varepsilon _s}N} \right) \le {\delta _s}.\] \end{theorem} \begin{proof} We use Lemma~\ref{lemma:poissonConfidence} for $\frac{\delta_s}{2}$ and get: \[\Pr \left( \left| {{Y_i}^p - \frac{{{f_p}}}{V}} \right| \ge {Z_{1 - \frac{\delta _s}{2}}\sqrt {\frac{{{f_p}}}{V}} } \right) \le \frac{\delta_s}{2} .\] To make this useful, we trivially bound $f_p \le N$ and get \[\Pr \left( \left| {{Y_i}^p - \frac{{{f_p}}}{V}} \right| \ge {Z_{1 - \frac{\delta _s}{2}}\sqrt {\frac{{{N}}}{V}} } \right) \le \frac{\delta_s}{2} .\] However, we require an error of the form $\frac{\epsilon_s \cdot N}{V}$. \[\begin{array}{l} {\varepsilon _s}N{V^{ - 1}} \ge {Z_{1 - \frac{{{\delta _s}}}{2}}}{V^{ - 0.5}}{N^{0.5}}\\ {N^{0.5}} \ge {Z_{1 - \frac{{{\delta _s}}}{2}}}{V^{0.5}}{\varepsilon _s}^{ - 1}\\ N \ge {Z_{1 - \frac{{{\delta _s}}}{2}}}V{\varepsilon_s}^{ - 2} .
\end{array}\] Therefore, when $N \ge {Z_{1 - \frac{{{\delta _s}}}{2}}}V{\varepsilon_s}^{ - 2}$, we have that: \[\Pr \left( {\left| {{Y_i}^p - \frac{{{f_p}}}{V}} \right| \ge \frac{{{\varepsilon _s}N}}{V}} \right) \le \frac{{{\delta _s}}}{2} .\] We multiply by $V$ and get: $$\Pr \left( {\left| {{Y_i}^pV - {f_p}} \right| \ge {\varepsilon _s}N} \right) \le \frac{{{\delta _s}}}{2} .$$ Finally, since $Y_i^{p}$ is monotonically increasing with the number of balls ($f_p$), we apply Lemma~\ref{lemma:rare} to conclude that\\ $$\Pr \left( {\left| {{X_i}^pV - {f_p}} \right| \ge {\varepsilon _s}N} \right) \le {\delta _s}.$$ \end{proof} To reduce clutter, we denote $\psi \triangleq {Z_{1 - \frac{{{\delta _s}}}{2}}}V{\varepsilon_s}^{ - 2}$. Theorem~\ref{thm:pusmain} proves that the desired sample accuracy is achieved once $N>\psi$. It is sometimes useful to know what happens when $N<\psi$. For this case, we have Corollary~\ref{cor:epsN}, which is easily derived from Theorem~\ref{thm:pusmain}. We use the notation $\varepsilon_s(N)$ to define the actual sampling error after $N$ packets. Thus, it assures us that when $N<\psi$, $\varepsilon_s(N)>\varepsilon_s$. It also shows that $\varepsilon_s(N)<\varepsilon_s$ when $N>\psi$. Another application of Corollary~\ref{cor:epsN} is that given a measurement interval $N$, we can derive a value for $\varepsilon_s$ that assures correctness. For simplicity, we continue with the notion of $\varepsilon_s$. \begin{corollary} \label{cor:epsN} ${\varepsilon _s}\left( N \right) \ge \sqrt {\frac{{{Z_{1 - \frac{{{\delta _s}}}{2}}}V}}{N}} .$ \end{corollary} The error of approximate HH algorithms is proportional to the number of updates. Therefore, our next step is to provide a bound on the number of updates of an arbitrary HH algorithm. Given such a bound, we configure the algorithm to compensate so that the accumulated error remains within the guarantee even if the number of updates is larger than average. 
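As a numeric illustration of the threshold $\psi$ and of Corollary~\ref{cor:epsN}, the following Python sketch evaluates the expressions above verbatim (the function names are ours, not the paper's):

```python
from statistics import NormalDist

def min_packets(delta_s, eps_s, V):
    # The paper's threshold psi = Z_{1 - delta_s/2} * V * eps_s^-2:
    # the number of packets after which the sampling guarantee applies.
    z = NormalDist().inv_cdf(1 - delta_s / 2)
    return z * V / eps_s ** 2

def sampling_error(N, delta_s, V):
    # Corollary cor:epsN: the actual sampling error after N packets,
    # eps_s(N) = sqrt(Z_{1 - delta_s/2} * V / N).
    z = NormalDist().inv_cdf(1 - delta_s / 2)
    return (z * V / N) ** 0.5
```

At $N = \psi$ the two expressions agree (the error equals $\varepsilon_s$ exactly), and quadrupling the measurement interval halves the sampling error, which is the trade-off behind choosing $V \gg H$ for long measurements.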
\begin{corollary} \label{cor:oversample} Consider the number of updates for a certain lattice node ($X_i$). If $N>\psi$, then \[\Pr \left( {{X_i} \le \frac{N}{V}\left( {1 + {\varepsilon _s}} \right)} \right) \ge 1 - {\delta _s}.\] \end{corollary} \begin{proof} We use Theorem~\ref{thm:pusmain} and get: \\ $\Pr \left( {\left| {{X_i} - \frac{N}{V}} \right| \ge \frac{{{\varepsilon _s}N}}{V}} \right) \le {\delta _s}.$ This implies that:\\ $\Pr \left( {{X_i} \le \frac{N}{V}\left( {1 + {\varepsilon _s}} \right)} \right) \ge 1 - {\delta _s},$ completing the proof. \end{proof} We now explain how to configure our algorithm to defend against situations in which a given approximate HH algorithm might get too many updates, a phenomenon we call \emph{over sample}. Corollary~\ref{cor:oversample} bounds the probability of such an occurrence, and hence we can slightly increase the accuracy so that in the case of an over sample, we are still within the desired limit. We use an algorithm ($\mathbb A$) that solves the {\sc {$(\varepsilon_a, \delta_a)$ - Frequency Estimation}} problem. We define $\varepsilon_a' \triangleq \frac{\varepsilon_a}{1+\varepsilon_s}$. According to Corollary~\ref{cor:oversample}, with probability $1-\delta_s$, the number of sampled packets is at most $(1+\varepsilon_s)\frac{N}{V}.$ By using the union bound, with probability $1-\delta_a-\delta_s$ we get: \[\left| {{X^p} - \widehat {{X^p}}} \right| \le {\varepsilon _a}'\left( {1 + {\varepsilon _s}} \right)\frac{N}{V} = \frac{{{\varepsilon _a}\left( {1 + {\varepsilon _s}} \right)}}{{1 + {\varepsilon _s}}}\frac{N}{V} = {\varepsilon _a}\frac{N}{V}.\] For example, Space Saving requires $1,000$ counters for $\epsilon_a=0.001$. If we set $\epsilon_s = 0.001$, we now require $1001$ counters. Hereafter, we assume that the algorithm is configured to accommodate these over samples.
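The over-sample compensation above can be checked numerically; the short Python sketch below (with a hypothetical helper name) reproduces the Space Saving example with $\epsilon_a = \epsilon_s = 0.001$:

```python
def compensate(eps_a, eps_s):
    # Shrink the HH algorithm's error parameter to eps_a' = eps_a / (1 + eps_s)
    # so that even an over-sampled instance, receiving up to
    # (1 + eps_s) * N / V updates, keeps its absolute error within eps_a * N / V.
    eps_a_prime = eps_a / (1 + eps_s)
    counters = round(1 / eps_a_prime)  # about 1/eps_a' Space Saving counters
    return eps_a_prime, counters
```

With $\epsilon_a = \epsilon_s = 0.001$ this yields $1001$ counters, matching the example above.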
\begin{theorem} \label{thm:PUSCombined} Consider an algorithm ($\mathbb{A}$) that solves the {\sc {$(\epsilon_a, \delta_a)$ - Frequency Estimation}} problem. If $N > \psi$, then for $\delta \ge \delta_a + 2 \cdot \delta_s$ and $\epsilon \ge \epsilon_a + \epsilon_s$, $\mathbb{A}$ solves {\sc {$(\epsilon, \delta)$ - Frequency Estimation}}. \end{theorem} \begin{proof} As $N > \psi$, we use Theorem~\ref{thm:pusmain}. That is, the sampling process satisfies: \begin{equation} \label{eq:delta2} \Pr \left[ {\left| {{f_p} - {X^p}V} \right| \ge {\varepsilon _s}N} \right] \le {\delta _s}. \end{equation} $\mathbb{A}$ solves the {\sc {$(\epsilon_a, \delta_a)$ - Frequency Estimation}} problem and provides us with an estimator $\widehat{X^p}$ that approximates $X^p$ -- the number of updates for prefix $p$. According to Corollary~\ref{cor:oversample}: \[\Pr \left( {\left| {{X^p} - \widehat {{X^p}}} \right| \le \frac{{{\varepsilon _a}N}}{V}} \right) \ge 1 - {\delta _a} - {\delta _s},\] and multiplying both sides by $V$ and considering the complement event gives us: \begin{equation} \label{eq:nodelta2} \Pr \left( {\left| {{X^p}V - \widehat {{X^p}}V} \right| \ge {\varepsilon _a}N} \right) \le {\delta _a} + {\delta _s}. \end{equation} We need to prove that: $\Pr \left( {\left| {{f_p} - \widehat {{X^p}}V} \right| \le \varepsilon N} \right) \ge 1 - \delta$. Recall that: $f_p = E(X^p)V$ and that $\widehat{f_p} = \widehat{X^p}V$ is the estimated frequency of $p$.
Thus, \small \begin{align} &\Pr \left( {\left| {{f_p} - \widehat{f_p}} \right| \ge \varepsilon N} \right) = \Pr \left( {\left| {{f_p} - \widehat {{X^p}}V} \right| \ge \varepsilon N} \right)\notag\\ =& \Pr \left( {\left| {{f_p} + \left( {{X^p}{V} - {X^p}{V}} \right) - {V}\widehat {{X^p}}} \right| \ge (\epsilon_a+\epsilon_s) N} \right)\label{eq:separation} \\ \le&\Pr \left( \left[{\left| {{f_p} - {X^p}{V}} \right| \ge {\varepsilon _s}N} \right]\vee \left[{\left| {{X^p}{V} - \widehat {{X^p}}{V}} \right| \ge {\varepsilon _a}N}\right] \right)\notag, \end{align}\normalsize where the last inequality follows from the fact that in order for the error of~\eqref{eq:separation} to exceed $\epsilon N$, at least one of the events has to occur. We bound this expression using the Union bound. \[\begin{array}{l} \Pr \left( {\left| {{f_p} - \widehat {{f_p}}} \right| \ge \varepsilon N} \right) \le \\ \Pr \left( {\left| {{f_p} - {X^p}V} \right| \ge {\varepsilon _s}N} \right) + \Pr \left( {\left| {{X^p}V - \widehat {{X^p}}V} \right| \ge {\varepsilon _a}N} \right) \\ \le{\delta _a} + 2{\delta _s}, \end{array}\] where the last inequality is due to equations~\ref{eq:delta2} and~\ref{eq:nodelta2}. \end{proof} An immediate observation is that Theorem~\ref{thm:PUSCombined} implies accuracy, as it guarantees that with probability $1-\delta$ the estimated frequency of any prefix is within $\varepsilon N$ of the real frequency while the accuracy requirement only requires it for prefixes that are selected as HHH. \begin{lemma} \label{lemma:accuracy} If $N > \psi$, then Algorithm~\ref{alg:Skipper} satisfies the accuracy constraint for $\delta = \delta_a+2\delta_s$ and $\epsilon = \epsilon_a+\epsilon_s$. \end{lemma} \begin{proof} The proof follows from Theorem~\ref{thm:PUSCombined}, as the frequency estimation of a prefix depends on a single HH~algorithm.
\end{proof} \subsubsection*{Multiple Updates} One might consider how RHHH behaves if instead of updating at most $1$ HH instance, we update $r$ independent instances. This implies that we may update the same instance more than once per packet. Such an extension is easy to do and still provides the required guarantees. Intuitively, this variant of the algorithm is what one would get if each packet is duplicated $r$ times. The following corollary shows that this makes RHHH converge $r$ times faster. \begin{corollary} Consider an algorithm similar to $RHHH$ with $V=H$, but for each packet we perform $r$ independent update operations. If $N > \frac{\psi}{r}$, then this algorithm satisfies the accuracy constraint for $\delta = \delta_a+2\delta_s$ and $\epsilon = \epsilon_a+\epsilon_s$. \end{corollary} \begin{proof} Observe that the new algorithm is identical to running RHHH on a stream ($\mathcal{S'}$) where each packet in $\mathcal{S}$ is replaced by $r$ consecutive packets. Thus, Lemma~\ref{lemma:accuracy} guarantees that accuracy is achieved for $\mathcal{S'}$ after $\psi$ packets are processed. That is, it is achieved for the original stream ($\mathcal{S}$) after $N >\frac{\psi}{r}$ packets. \end{proof} \subsection{Coverage Analysis} \label{anal:randHHH} Our goal is to prove the coverage property of Definition~\ref{def:deltaapproxHHH}. That is: $\Pr \left( \widehat {C_{q|P}} \ge C_{q|P} \right) \ge 1-\delta.$ Conditioned frequencies are calculated in a different manner for one and two dimensions. Thus, Section~\ref{subsec:one} deals with one dimension and Section~\ref{subsec:two} with two. We now present a common definition of the best generalized prefixes in a set. \begin{definition}[Best generalization] \label{def:bestG} Define $G(q|P)$ as the set $\left\{ {p:p \in P,p \prec q,\neg \exists p' \in P:q \prec p' \prec p} \right\}$. Intuitively, $G(q|P)$ is the set of prefixes that are best generalized by $q$. 
That is, $q$ does not generalize any prefix that generalizes one of the prefixes in $G(q|P)$. \end{definition} \subsubsection{One Dimension} \label{subsec:one} We use the following lemma for bounding the error of our conditioned count estimates. \begin{lemma} \label{lemma:cp}(\cite{HHHMitzenmacher}) In one dimension, $${C_{q\mid P}} = {f_q} - \sum\nolimits_{h \in G(q|P)} {{f_h}} .$$ \normalsize \end{lemma} Using Lemma~\ref{lemma:cp}, it is easier to establish that the conditioned frequency estimates calculated by Algorithm~\ref{alg:Skipper} are conservative. \begin{lemma} \label{lemma:sq} The conditioned frequency estimation of Algorithm~\ref{alg:Skipper} is: \[\widehat{C_{q|P}} = \widehat{f_q}^{+}-\sum\nolimits_{h \in G\left( {q|P} \right)} {\widehat{f_h}^- } + 2{Z_{1 - \frac{\delta }{8} }}\sqrt {N V} .\] \end{lemma} \begin{proof} Looking at Line~\ref{line:cp} in Algorithm~\ref{alg:Skipper}, we get that: $$\widehat{C_{q|P}} = \widehat{f_q}^{+} - calcPred(q,P).$$ That is, we need to verify that the return value of $calcPred(q,P)$ in one dimension (Algorithm~\ref{alg:randHHH}) is $\sum\nolimits_{h \in G\left( {q|P} \right)} {\widehat{f_h}^- }$. This follows naturally from that algorithm. Finally, the addition of $2{Z_{1 - \frac{\delta }{8} }}\sqrt {NV}$ is due to line~\ref{line:accSample}. \end{proof} In deterministic settings, $\widehat{f_q}^{+} -\sum\nolimits_{h \in G\left( {q|P} \right)} {\widehat{f_h}^- }$ is a conservative estimate since ${\widehat {{f_q}}^ + } \ge {f_q}$ and $\widehat{f_h}^- \le f_h$. In our case, these are only true with regard to the sampled sub-stream and the addition of $2{Z_{1 - \frac{\delta }{8} }}\sqrt {NV}$ is intended to compensate for the randomized process. Our goal is to show that $\Pr \left(\widehat {C_{q|P}} \ge C_{q|P}\right) \ge 1-\delta$. That is, the conditioned frequency estimation of Algorithm~\ref{alg:Skipper} is probabilistically conservative.
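For illustration, Lemma~\ref{lemma:cp} and Definition~\ref{def:bestG} can be evaluated directly on a toy one-dimensional hierarchy. In the hypothetical Python sketch below, prefixes are modeled as bit strings, where a shorter string generalizes its extensions:

```python
def generalizes(p, q):
    # In this toy model a prefix is a bit string, and p generalizes q
    # whenever q extends p (e.g., "0" generalizes "00" and "01").
    return q.startswith(p)

def best_generalized(q, P):
    # G(q|P): the prefixes in P that q strictly generalizes, with no other
    # element of P strictly in between (cf. Definition bestG).
    below = [p for p in P if generalizes(q, p) and p != q]
    return [p for p in below
            if not any(generalizes(q, r) and generalizes(r, p)
                       and r not in (q, p) for r in P)]

def conditioned_frequency(q, P, f):
    # Lemma cp (one dimension): C_{q|P} = f_q - sum over h in G(q|P) of f_h.
    return f[q] - sum(f[h] for h in best_generalized(q, P))
```

For instance, with exact frequencies $f_\varepsilon = 100$, $f_0 = 60$, $f_{00} = 35$ and $P = \{00\}$, the conditioned frequency of $0$ is $60 - 35 = 25$; once $0$ is added to $P$, the root's conditioned frequency is $100 - 60 = 40$, since $00$ is then no longer a best generalization.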
\begin{theorem} \label{thm:underCP} $\Pr \left( \widehat {C_{q|P}} \ge C_{q|P} \right) \ge 1-\delta.$ \end{theorem} \begin{proof} Recall that: $$\widehat {{C_{q|P}}} = \widehat f_q^ + - \sum\limits_{h \in G\left( {q|P} \right)} {\widehat f_h^ - } + 2{Z_{1-\frac{\delta }{8}}}\sqrt {NV}.$$ We denote by $K$ the set of packets that may affect $\widehat {{C_{q|P}}}$. We split $K$ into two sets: $K^{+}$ contains the packets that may positively impact $\widehat {{C_{q|P}}}$ and $K^{-}$ contains the packets that may negatively impact it. We use $K^{+}$ to estimate the sample error in $\widehat{ f_q}$ and $K^{-}$ to estimate the sample error in $\sum\limits_{h \in G\left( {q|P} \right)} {\widehat f_h^ -}$. The positive part is easy to estimate. In the negative part, we do not know exactly how many bins affect the sum. However, we know for sure that there are at most $N$ balls. We define the random variable $Y^K_+$ that indicates the number of balls included in the positive sum. We invoke Lemma~\ref{lemma:poissonConfidence} on $Y^{K}_+$. For the negative part, the conditioned frequency is positive so $E\left(Y^K_-\right)$ is at most $\frac{N}{V}$. Hence, $\Pr \left( \left| {Y_K^ + - E\left( {Y_K^ + } \right)} \right| \ge {Z_{1-\frac{\delta }{8}}}\sqrt \frac{N }{V} \right) \le \frac{\delta }{4}.$ Similarly, we use Lemma~\ref{lemma:poissonConfidence} to bound the error of $Y_K^ -$: $$\Pr \left( {\left| {Y_K^ - - E\left( {{Y_K}^ - } \right)} \right| \ge {Z_{1-\frac{\delta }{8}}}\sqrt \frac{N}{V} } \right) \le \frac{\delta }{4}.$$\\ $Y^{K}_+$ is monotonically increasing with any ball and $Y_{K}^-$ is monotonically decreasing with any ball. Therefore, we can apply Lemma~\ref{lemma:rare} on each of them and conclude: \[\begin{array}{l} \Pr \left( {\widehat {{C_{q|P}}} < {C_{q|P}}} \right)\le\\ 2\Pr \left( {V\left( {Y_K^ - + Y_K^ + } \right) \ge VE\left( {Y_K^ - + Y_K^ + } \right) + 2{Z_{1 - \frac{\delta }{8}}}\sqrt {NV} } \right)\\ \le 2 \cdot \frac{\delta }{2} = \delta, \text{ and thus } \Pr \left( {\widehat {{C_{q|P}}} \ge {C_{q|P}}} \right) \ge 1-\delta.
\end{array}\] \end{proof} \normalsize \begin{theorem} If $N > \psi$, Algorithm~\ref{alg:Skipper} solves the {\sc$(\delta, \varepsilon, \theta)$ - Approximate HHH} problem for $\delta = \delta_a + 2\delta_s$ and $\varepsilon = \varepsilon_s + \varepsilon_a$. \end{theorem} \begin{proof} We need to show that the accuracy and coverage guarantees hold. Accuracy follows from Lemma~\ref{lemma:accuracy}, and coverage follows from Theorem~\ref{thm:underCP}, which implies that for every prefix $q \notin P$ (for which $\widehat{C_{q|P}}<\theta N$): $$\Pr \left( {{C_{q|P}} < \theta N} \right) \ge 1 - \delta.$$ \end{proof} \section{Introduction} Network measurements are essential for a variety of network functionalities such as traffic engineering, load balancing, quality of service, caching, anomaly and intrusion detection~\cite{LBSigComm,DevoFlow,ApproximateFairness,TrafficEngeneering,IntrusionDetection2,7218487,CONGA,TinyLFU}. A major challenge in performing and maintaining network measurements comes from rapid line rates and the large number of active flows. Previous works suggested identifying \emph{Heavy Hitter} (HH) flows~\cite{Woodruff16} that account for a large portion of the traffic. Indeed, approximate HH are used in many functionalities and can be captured quickly and efficiently~\cite{HashPipe,ICCCNPaper,WCSS,DIM-SUM,RAP}. However, applications such as anomaly detection and \emph{Distributed Denial of Service} (DDoS) attack detection require more sophisticated measurements~\cite{Zhang:2004:OIH:1028788.1028802,Sekar2006}. In such attacks, each device generates a small portion of the traffic but their combined volume is overwhelming. HH measurement is therefore insufficient as each individual device is not a heavy hitter. \begin{figure}[t!] \includegraphics[width = 0.8\columnwidth]{example-kkCD.png} \caption{A high level overview of this work. Previous algorithms' update requires $\Omega(H)$ run time, while we perform at most a single $O(1)$ update.
} \label{fig:contribution} \end{figure} \emph{Hierarchical Heavy Hitters} (HHH) account for aggregates of flows that share certain IP prefixes. The structure of IP addresses implies a prefix-based hierarchy, as defined more precisely below. In the DDoS example, HHH can identify IP prefixes that are suddenly responsible for a large portion of the traffic, and such an anomaly may very well indicate a manifesting attack. Further, HHH can be collected in one dimension, e.g., a single source IP prefix hierarchy, or in multiple dimensions, e.g., a hierarchy based on both source and destination IP~prefixes. Previous works~\cite{HHHMitzenmacher,CormodeHHH} suggested deterministic algorithms whose update complexity is proportional to the hierarchy's size. These algorithms are currently too slow to cope with line speeds. For example, a 100 Gbit link may deliver over 10 million packets per second, but previous HHH algorithms cannot cope with this line speed on existing hardware. The transition to IPv6 is expected to increase hierarchies' sizes and render existing approaches even~slower. Emerging networking trends such as \emph{Network Function Virtualization} (NFV) enable virtual deployment of network functionalities. These are run on top of commodity servers rather than on custom made hardware, thereby improving the network's flexibility and reducing operation costs. These trends further motivate fast software based measurement algorithms. \subsection{Contributions} First, we define a probabilistic relaxation of the HHH problem. Second, we introduce \emph{Randomized HHH} (a.k.a. RHHH), a novel randomized algorithm that solves probabilistic HHH over single and multi dimensional hierarchical domains. Third, we evaluate RHHH on four different real Internet traces and demonstrate a speedup of up to X62 while delivering similar accuracy and recall ratios.
Fourth, we integrate RHHH with \emph{Open vSwitch} (OVS) and demonstrate a capability of monitoring HHH at line speed, achieving a throughput of up to 13.8M packets per second. Our algorithm also achieves X2.5 better throughput than previous approaches. To the best of our knowledge, our work is the first to perform OVS multi dimensional HHH analysis at line speed. Intuitively, our RHHH algorithm operates in the following way, as illustrated in Figure~\ref{fig:contribution}: We maintain an instance of a heavy-hitters detection algorithm for each level in the hierarchy, as is done in~\cite{HHHMitzenmacher}. However, whenever a packet arrives, we randomly select only a single level to update using its respective instance of heavy-hitters rather than updating all levels (as was done in~\cite{HHHMitzenmacher}). Since the update time of each individual level is $O(1)$, we obtain an $O(1)$ \emph{worst case} update time. The main challenges that we address in this paper are in formally analyzing the accuracy of this scheme and exploring how well it works in practice with a concrete implementation. The update time of previous approaches is $O(H)$, where $H$ is the size of the hierarchy. An alternative idea could have been to simply sample each packet with probability $\frac{1}{H}$, and feed the sampled packets to previous solutions. However, such a solution only provides an $O(1)$ \emph{amortized} running time. Bounding the worst case behavior to $O(1)$ is important when the counters are updated inside the data path. In such cases, performing an occasional very long operation could both delay the corresponding ``victim'' packet and possibly cause buffers to overflow during the long processing. Even in off-path processing, such as in an NFV setting, an occasional very long operation creates an unbalanced workload, challenging schedulers and resource allocation schemes.
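The per-packet dispatch just described can be condensed into a few lines of Python. This is an illustrative sketch only: the class and method names are ours, and the paper's actual implementation runs in C inside OVS.

```python
import random

class RHHHSketch:
    """Illustrative sketch of RHHH's O(1) update dispatch (names are ours)."""
    def __init__(self, hh_instances, V):
        self.hh = hh_instances   # one HH algorithm instance per hierarchy level
        self.H = len(hh_instances)
        self.V = V               # performance parameter, V >= H

    def update(self, ip):
        d = random.randrange(self.V)      # draw a level uniformly in [0, V)
        if d < self.H:                    # with probability H/V ...
            self.hh[d].increment(self.generalize(ip, d))
        # ... otherwise the packet is ignored; either way the cost is O(1)

    @staticmethod
    def generalize(ip, level):
        """Byte-granularity IPv4 generalization, e.g. level 2: 181.7.*"""
        if level == 0:
            return ip
        if level >= 4:
            return "*"
        return ".".join(ip.split(".")[:4 - level]) + ".*"
```

With $V=H$, every packet updates exactly one level; with $V>H$, some packets are skipped entirely, trading convergence speed for throughput.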
\paragraph*{Roadmap} The rest of this paper is organized as follows: We survey related work on HHH in Section~\ref{sec:related}. We introduce the problem and our probabilistic algorithm in Section~\ref{sec:randomized}. For presentational reasons, we immediately move on to the performance evaluation in Section~\ref{sec:eval}, followed by a description of the OVS implementation in Section~\ref{sec:ovs}. We then prove our algorithm's correctness and analyze its formal guarantees in Section~\ref{sec:analysis}. Finally, we conclude with a discussion in Section~\ref{sec:discussion}. \input{table_ex} \section{Related Work} \label{sec:related} In one dimension, HHH were first defined by~\cite{Cormode2003}, which also introduced the first streaming algorithm to approximate them. Additionally,~\cite{HHHSwitch} offered a TCAM-based approximate HHH algorithm for one dimension. The HHH problem was also extended to multiple dimensions~\cite{Cormode2004,CormodeHHH,Hershberger2005,Zhang:2004:OIH:1028788.1028802,HHHMitzenmacher}. The work of~\cite{Lin2007} introduced a single dimension algorithm that requires \small$O\left(\frac{H^2}{\epsilon}\right)$\normalsize space, where $H$ denotes the size of the hierarchy and $\epsilon$ is the allowed relative estimation error for each single flow's frequency. Later,~\cite{Truong2009} introduced a two dimensional algorithm that requires \small $O\left(\frac{H^{3/2}}{\epsilon}\right)$\normalsize space and update time\footnote{Notice that in two dimensions, $H$ is the square of its counterpart in one dimension.}. In~\cite{CormodeHHH}, the trie based Full Ancestry and Partial Ancestry algorithms were proposed. These use $O\left(\frac{H\log(N\epsilon)}{\epsilon}\right)$ space and require $O\left(H\log(N\epsilon)\right)$ time per update. The seminal work of \cite{HHHMitzenmacher}~introduced and evaluated a simple multi dimensional HHH algorithm.
Their algorithm uses a separate copy of Space Saving~\cite{SpaceSavings} for each lattice node and, upon packet arrival, all lattice nodes are updated. Intuitively, the problem of finding hierarchical heavy hitters can be reduced to solving multiple non hierarchical heavy hitters problems, one for each possible query. This algorithm provides strong error and space guarantees and its update time does not depend on the stream length. Their algorithm requires $O\left(\frac{H}{\epsilon}\right)$ space and its update time for unitary inputs is $O\left(H\right)$ while for weighted inputs it is $O\left(H \log \frac{1}{\epsilon}\right)$. The update time of existing methods is too slow to cope with modern line speeds and the problem escalates in NFV environments that require efficient software implementations. This limitation is both empirical and asymptotic, as some settings require large hierarchies. Our paper describes a novel algorithm that solves a probabilistic version of the hierarchical heavy hitters problem. We argue that in practice, our solution's quality is similar to that of previously suggested deterministic approaches while the runtime is dramatically improved. Formally, we improve the update time to $O(1)$, but require a minimal number of packets to provide accuracy guarantees. We argue that this trade-off is attractive for many modern networks that route a continuously increasing number of packets. \section{Randomized HHH (RHHH)} \label{sec:randomized} We start with an intuitive introduction to the field as well as preliminary definitions and notations. Table~\ref{tbl:notations} summarizes the notations used in this work. \subsection{Basic terminology} \label{sec:terminology} We consider IP addresses to form a hierarchical domain with either bit or byte size granularity. \emph{Fully specified} IP addresses are the lowest level of the hierarchy and can be generalized. We use $\mathcal U$ to denote the domain of fully specified items.
For example, $181.7.20.6$ is a fully specified IP address and $181.7.20.*$ generalizes it by a single byte. Similarly, $181.7.*$ generalizes it by two bytes and formally, a fully specified IP address is generalized by any of its prefixes. The \emph{parent} of an item is the longest prefix that generalizes it. In two dimensions, we consider a tuple containing source and destination IP addresses. A fully specified item is fully specified in both dimensions. For example, $(\langle181.7.20.6\rangle\to \langle 208.67.222.222\rangle)$ is fully specified. In two dimensional hierarchies, each item has two parents, e.g., $(\langle181.7.20.*\rangle\to \langle 208.67.222.222\rangle)$ and $(\langle181.7.20.6\rangle\to \langle 208.67.222.*\rangle)$ are both parents of\\ $(\langle181.7.20.6\rangle\to \langle 208.67.222.222\rangle)$. \begin{definition}[Generalization] For two prefixes $p,q$, we denote $p \preceq q$ if in every dimension $q$ is either a prefix of $p$ or is equal to $p$. We also denote the set of elements that are generalized by $p$ with $H_p\triangleq \{e\in \mathcal U\mid e\preceq p\}$, and those generalized by a set of prefixes $P$ by $H_P\triangleq \cup_{p\in P} H_p$. If $p \preceq q$ and $p\neq q$, we denote $p\prec q$. \end{definition} In a single dimension, the generalization relation defines a chain going from fully generalized to fully specified. In two dimensions, the relation defines a lattice where each item has two parents. A byte granularity two dimensional lattice is illustrated in Table~\ref{tbl:example}. In the table, each lattice node is generalized by all nodes that are upper or more to the left. The most generalized node $(*,*)$ is called \emph{fully general} and the most specified node $(s1.s2.s3.s4, d1.d2.d3.d4)$ is called \emph{fully specified}. We denote by $H$ the hierarchy's size, i.e., the number of nodes in the lattice.
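The generalization relation can be made concrete with a short sketch. This is our own illustrative code, not the paper's: prefixes are represented as byte tuples, where a shorter tuple is more general and the empty tuple stands for $*$.

```python
# Prefixes as byte tuples: (181, 7, 20, 6) is fully specified, (181, 7)
# stands for 181.7.*, and () is the fully general prefix *.
def generalizes_1d(q, p):
    """True iff p is generalized by q in one dimension (q is a prefix of p)."""
    return p[:len(q)] == q

def generalizes(q, p):
    """Two-dimensional p <= q: the relation must hold in every dimension."""
    return all(generalizes_1d(qd, pd) for qd, pd in zip(q, p))

def parents(item):
    """The two parents of a 2D item: generalize one byte in one dimension."""
    src, dst = item
    return [(src[:-1], dst), (src, dst[:-1])]
```

For instance, both tuples returned by `parents` for $(\langle181.7.20.6\rangle\to \langle 208.67.222.222\rangle)$ generalize the item, mirroring the two-parents example above.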
For example, in IPv4, byte level one dimensional hierarchies imply $H=5$ as each IP address is divided into four bytes and we also allow querying $*$. \begin{definition} Given a prefix $p$ and a set of prefixes $P$, we define $G(p|P)$ as the set of prefixes: $$\left\{ {h:h \in P,h \prec p,\nexists\,h' \in P\,\,s.t.\,h \prec h' \prec p} \right\}.$$ \end{definition} Intuitively, $G(p|P)$ are the prefixes in $P$ that are most closely generalized by $p$. E.g., let $p=<142.14.*>$ and the set \\$P = \left\{ {<142.14.13.*>,<142.14.13.14>} \right\}$, then $G(p|P)$ only contains $<142.14.13.*>$. We consider a stream $\mathbb{S}$, where at each step a packet of an item $e$ arrives. Packets belong to a hierarchical domain of size $H$, and can be generalized by multiple prefixes as explained above. Given a fully specified item $e$, $f_e$ is the number of occurrences $e$ has in $\mathbb{S}$. Definition~\ref{def:frequency} extends this notion to prefixes. \begin{definition} (Frequency) \label{def:frequency} Given a prefix $p$, the frequency of $p$ is: $$f_p \triangleq \sum\nolimits_{e \in H_p} {f_e} .$$ \end{definition} Our implementation utilizes Space Saving~\cite{SpaceSavings}, a popular (non hierarchical) heavy hitters algorithm, but other algorithms can also be used. Specifically, we can use any \emph{counter algorithm} that satisfies Definition~\ref{Def:probFE} below and can also find heavy hitters, such as~\cite{frequent4,BatchDecrement,LC}. We use Space Saving because it is believed to have an empirical edge over other algorithms~\cite{SpaceSavingIsTheBest,SpaceSavingIsTheBest2009,SpaceSavingIsTheBest2010}. \input{notationsTable} \normalsize The minimal requirements from an algorithm to be applicable to our work are defined in Definition~\ref{Def:probFE}. This is a weak definition and most counter algorithms satisfy it with $\delta =0$. 
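To illustrate what such a counter algorithm looks like, here is a heavily simplified, dictionary-based sketch in the spirit of Space Saving. The real algorithm~\cite{SpaceSavings} uses a stream-summary structure to achieve $O(1)$ updates; this version is for intuition only and the class name is ours.

```python
# Simplified Space-Saving-style counter (illustration only): k counters,
# frequencies may be overestimated but tracked items are never undercounted.
class SpaceSavingSketch:
    def __init__(self, k):
        self.k = k                      # roughly ceil(1/epsilon) counters
        self.counters = {}              # item -> estimated count

    def increment(self, item):
        if item in self.counters:
            self.counters[item] += 1
        elif len(self.counters) < self.k:
            self.counters[item] = 1
        else:
            # evict the minimum and inherit its count (conservative overestimate)
            victim = min(self.counters, key=self.counters.get)
            self.counters[item] = self.counters.pop(victim) + 1

    def estimate(self, item):
        # tracked items: their counter; untracked: the minimum is an upper bound
        if item in self.counters:
            return self.counters[item]
        return min(self.counters.values(), default=0)
```

Because an evicted item's count is inherited by its replacement, estimates never fall below the true frequency, which is exactly the one-sided error that the frequency-estimation definition tolerates.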
Sketches~\cite{CountSketch,CMSketch,TinyTable} are also applicable here, but to use them, each sketch should also maintain a list of heavy hitter items (Definition~\ref{def:HH}). \begin{definition} \label{Def:probFE} An algorithm solves the {\sc {$(\epsilon, \delta)$ - Frequency Estimation}} problem if for any prefix ($x$), it provides $\widehat{f_{x}}$ s.t.: $$\Pr \left[ {\left| {{f_x} - \widehat {{f_x}}} \right| \le \varepsilon N} \right] \ge 1 - \delta . $$ \end{definition} \begin{definition}[Heavy hitter (HH)]\label{def:HH} Given a threshold $(\theta)$, a fully specified item $(e)$ is a \textbf{heavy hitter} if its frequency $(f_e)$ is above the threshold: $\theta \cdot N$, i.e., $f_e \ge \theta\cdot N$. \end{definition} Our goal is to identify the hierarchical heavy hitter prefixes whose frequency is above the threshold $(\theta \cdot N)$. However, if the frequency of a prefix exceeds the threshold, then so do the frequencies of all its ancestors. For compactness, we are interested in prefixes whose frequency is above the threshold due to non-HHH descendants. This motivates the definition of \emph{conditioned frequency} ($C_{p|P}$). Intuitively, $C_{p|P}$ measures the \textbf{additional} traffic prefix $p$ adds to a set of previously selected HHHs ($P$), and it is defined as follows. \begin{definition} (Conditioned frequency) \label{def:cf} The conditioned frequency of a prefix $p$ with respect to a prefix set $P$ is: $$C_{p\mid P}\triangleq\allowbreak \sum_{e\in H_{(P\cup\{p\})} \setminus H_P} f_e.$$ \end{definition} $C_{p\mid P}$ is derived by subtracting from $p$'s frequency ($f_p$) the frequency of fully specified items that are already generalized by items in $P$. In two dimensions, inclusion-exclusion principles are used to avoid double counting. We now continue and describe how exact hierarchical heavy hitters (with respect to $C_{p\mid P}$) are found. To that end, we partition the hierarchy into levels as explained in Definition~\ref{Def:L}.
\begin{definition}[Hierarchy Depth] \label{Def:L} Define $L$, the \emph{depth of a hierarchy}, as follows: Given a fully specified element $e$, we consider a set of prefixes such that: $e \prec p_1 \prec p_2 \prec \dots \prec p_L$, where all of $e, p_1, \dots, p_L$ are distinct, and $L$ is the maximal size of such a set. We also define the function $level(p)$ that, given a prefix $p$, returns $p$'s maximal location in such a chain, i.e., the length of the maximal chain of generalizations that ends in $p$. \end{definition} To calculate exact heavy hitters, we go over fully specified items (level $0$) and add their heavy hitters to the set $HHH_0$. Using $HHH_0$, we calculate the conditioned frequency for prefixes in level $1$ and if $C_{p|{HHH_0}} \ge \theta \cdot N$ we add $p$ to $HHH_1$. We continue this process until the last level ($L$) and the exact heavy hitters are the set $HHH_L$. Next, we define $HHH$ formally. \begin{definition}[Hierarchical HH (HHH)] \label{def:HHH} The set $HHH_0$ contains the fully specified items $e$ s.t. $f_e \ge \theta\cdot N$. Given a prefix $p$ from level($l$), $1\le l\le L$, we define: \[\begin{array}{l} HH{H_l} = HH{H_{l -1}} \cup \left\{ {p:\left( {p \in level\left( l \right) \wedge {C_{p|HH{H_{l - 1}}}} \ge \theta \cdot N} \right)} \right\} . \end{array}\] The set of exact hierarchical heavy hitters $HHH$ is defined as the set $HHH_L$. \end{definition} For example, consider the case where $\theta N =100$ and assume that the following prefixes, with their frequencies, are the only ones above $\theta N$: $p_1 = (<101.*>, 108)$ and $p_2 = (<101.102.*>, 102)$. Clearly, both prefixes are heavy hitters according to Definition~\ref{def:HH}. However, the conditioned frequency of $p_1$ is $108-102 = 6$ and that of $p_2$ is 102. Thus only $p_2$ is an HHH prefix. Finding exact hierarchical heavy hitters requires plenty of space. Indeed, even finding exact (non hierarchical) heavy hitters requires linear space~\cite{TCS-002}.
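The worked example above can be checked mechanically. The sketch below is our own illustrative code: the per-item frequencies are invented so as to reproduce the stated totals of $108$ and $102$, and prefixes are byte tuples.

```python
# Prefixes and items as byte tuples; e[:len(p)] == p means p generalizes e.
def H_of(prefix, items):
    """Fully specified items generalized by `prefix` (the set H_p)."""
    return {e for e in items if e[:len(prefix)] == prefix}

def cond_freq(p, P, freqs):
    """C_{p|P}: traffic p adds on top of the already selected HHH set P."""
    items = set(freqs)
    covered = set().union(*(H_of(q, items) for q in P)) if P else set()
    return sum(freqs[e] for e in H_of(p, items) - covered)

# Hypothetical traffic realizing the example: f_{101.*} = 108, f_{101.102.*} = 102
freqs = {(101, 102, 1, 1): 102, (101, 103, 1, 1): 6}
p1, p2 = (101,), (101, 102)
```

Here `cond_freq(p2, set(), freqs)` is $102$, so $p_2$ is selected first; afterwards `cond_freq(p1, {p2}, freqs)` is only $6 = 108 - 102$, so $p_1$ is not an HHH, matching the example.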
Such a memory requirement is prohibitively expensive and motivates finding approximate HHHs. \begin{definition}[$(\epsilon,\theta)-$approximate HHH] \label{def:approxHHH} An algorithm solves {\sc {$(\epsilon, \theta)$ - Approximate Hierarchical Heavy Hitters}} if after processing any stream $\mathbb{S}$ of length $N$, it returns a set of prefixes ($P$) that satisfies the following conditions: \begin{itemize} \item \textbf{Accuracy:} for every prefix $p\in P$, ${\left| {{f_p} - \widehat {{f_p}}} \right| \le \varepsilon N}$. \item \textbf{Coverage:} for every prefix $q \notin P$: ${{C_{q|P}} < \theta N}$. \end{itemize} \end{definition} Approximate HHHs are a set of prefixes ($P$) that satisfies accuracy and coverage; there are many possible sets that satisfy both these properties. Unlike exact HHH, we do not require that for $p \in P$, $C_{p|P} \ge \theta N$. Unfortunately, if we add such a requirement, then~\cite{Hershberger2005} proved a lower bound of $\Omega\left(\frac{1}{\theta^{d+1}}\right)$ space, where $d$ is the number of dimensions. This is considerably more than the $O\left(\frac{H}{\epsilon}\right)$ space used in our work, which is also $O\left(\frac{H}{\theta}\right)$ when $\theta \propto \epsilon$. Finally, Definition~\ref{def:deltaapproxHHH} defines the probabilistic approximate HHH problem that is solved in this paper.
\begin{definition}[$(\delta,\epsilon,\theta)-$approximate HHHs] \label{def:deltaapproxHHH} An algorithm $\mathbb A$ solves {\sc {$(\delta,\epsilon, \theta)$ - Approximate Hierarchical Heavy Hitters}} if after processing any stream $\mathbb{S}$ of length $N$, it returns a set of prefixes $P$ that, for an arbitrary run of the algorithm, satisfies the following: \begin{itemize} \item \textbf{Accuracy:} for every prefix $p\in P$, $$\Pr \left( {\left| {{f_p} - \widehat {{f_p}}} \right| \le \varepsilon N} \right) \ge 1 - \delta.$$ \item \textbf{Coverage:} given a prefix $q \notin P$, $$\Pr \left( {{C_{q|P}} < \theta N} \right) \ge 1 - \delta .$$ \end{itemize} \end{definition} Notice that this is a simple probabilistic relaxation of Definition~\ref{def:approxHHH}. Our next step is to show how it enables the development of faster algorithms. \begin{algorithm}[h] \begin{algorithmic}[1] \Statex Initialization: $\forall d\in[L]: HH[d] =$ HH\_Alg $(\epsilon_a^{-1})$ \Function{Update}{ $x$} \State $d = randomInt(0,V)$ \If {$d<H$} \State Prefix $p = x\&HH[d].mask$ \Comment{Bitwise AND} \State $HH[d].INCREMENT(p)$ \EndIf \EndFunction \Function{Output}{$\theta$} \State $P = \phi$ \For{Level $l = |H|$ down to $0$. 
} \For{ each $p$ in level $l$} \State \label{line:cp}$\widehat{C_{p|P}} = \widehat{f_p}^{+} + calcPred(p,P) $ \State \label{line:accSample}$\widehat{C_{p|P}} = \widehat{C_{p|P}}+ 2{{Z_{1 - {\delta}}}\sqrt {NV} }$ \If {$\widehat{C_{p|P}}\ge \theta N$} \State $ P = P \cup \{p\}$ \Comment{$p$ is an HHH candidate} \State $print\left(p, \widehat{f_p}^{-}, \widehat{f_p}^{+}\right)$ \EndIf \EndFor \EndFor \State\Return $P$ \EndFunction \end{algorithmic} \normalsize \caption{Randomized HHH algorithm} \label{alg:Skipper} \end{algorithm} \begin{algorithm}[h] \begin{algorithmic}[1] \Function{calcPred}{prefix $p$, set $P$} \State $R = 0$ \For{ each $h\in G(p|P)$} \State \label{alg:first}$R = R - \widehat{f_h}^{-}$ \EndFor \State \Return $R$ \EndFunction \end{algorithmic} \normalsize \caption{calcPred for one dimension } \label{alg:randHHH} \end{algorithm} \begin{algorithm}[h] \begin{algorithmic}[1] \Function{calcPred}{prefix $p$, set $P$} \State $R = 0$ \For{ each $h\in G(p|P)$} \State \label{alg:second}$R = R - \widehat{f_h}^{-}$ \EndFor \For{ each pair $h,h'\in G(p|P)$} \State $q=glb(h,h')$ \If {$\not\exists h_3 \neq h,h'\in G(p|P), q \preceq h_3$} \State \label{alg:third}$R = R + \widehat{f_q}^{+}$ \EndIf \EndFor \State \Return $R$ \EndFunction \end{algorithmic} \normalsize \caption{calcPred for two dimensions} \label{alg:randHHH2D} \end{algorithm} \subsection{Randomized HHH} Our work employs the data structures of~\cite{HHHMitzenmacher}. That is, we use a matrix of $H$ independent HH algorithms, and each node is responsible for a single prefix pattern. \input{acc-cov.tex} \input{graphs} Our solution, \emph{Randomized HHH} (RHHH), updates \textbf{at most a single} randomly selected HH instance that operates in $O(1)$. In contrast,~\cite{HHHMitzenmacher} updates \textbf{every} HH algorithm for each packet and thus operates in $O(H)$. Specifically, for each packet, we draw a uniform random number between 0 and $V$, and if it is smaller than $H$, we update the corresponding HH algorithm.
Otherwise, we ignore the packet. Clearly, $V$ is a performance parameter: when $V=H$, every packet updates one of the HH algorithms whereas when $V\gg H$, most packets are ignored. Intuitively, each HH algorithm receives a \emph{sample} of the stream. We need to prove that given enough traffic, hierarchical heavy hitters can still be extracted. Pseudocode of RHHH is given in Algorithm~\ref{alg:Skipper}. RHHH uses the same algorithm for both one and two dimensions. The differences between them are manifested in the $calcPred$ method. Pseudocode of this method is found in Algorithm~\ref{alg:randHHH} for one dimension and in Algorithm~\ref{alg:randHHH2D} for two dimensions. \begin{definition} The underlying estimation provides us with upper and lower estimates for the number of times prefix $p$ was updated ($X_p$). We denote by $\widehat{X_p}^{+}$ an upper bound for $X_p$ and by $\widehat{X_p}^{-}$ a lower bound. For simplicity of notation, we define the following:\\ $\widehat{f_p}\triangleq \widehat{X_p} V$ -- an estimator for $p$'s frequency.\\ $\widehat{f_p}^{+}\triangleq \widehat{X_p}^{+} V$ -- an upper bound for $p$'s frequency.\\ $\widehat{f_p}^{-}\triangleq \widehat{X_p}^{-} V$ -- a lower bound for $p$'s frequency. \end{definition} Note that these bounds ignore the sample error, which is accounted for separately in the analysis. The output method of RHHH starts with fully specified items and if their frequency is above $\theta N$, it adds them to $P$. Then, RHHH iterates over their parent items and calculates a conservative estimate of their conditioned frequency with respect to $P$. The conditioned frequency is calculated from an upper estimate of $p$'s frequency ($\widehat{f_p}^{+}$), amended by the output of the $calcPred$ method. In a single dimension, we subtract the lower bounds of $p$'s closest predecessor HHHs. In two dimensions, we use inclusion-exclusion principles to avoid double counting.
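In one dimension, the output computation just described boils down to the following sketch. This is our own illustrative Python, not the paper's implementation: the z-quantile constant and the upper/lower estimate tables are supplied as assumptions rather than computed.

```python
import math

Z = 1.2816   # stand-in for the quantile Z_{1-delta}; the value is assumed

def closest_hhh(p, P):
    """G(p|P) for one-dimensional byte-tuple prefixes."""
    below = [h for h in P if len(h) > len(p) and h[:len(p)] == p]
    return [h for h in below
            if not any(h != h2 and len(h2) < len(h) and h[:len(h2)] == h2
                       for h2 in below)]

def output(levels, upper, lower, theta, N, V):
    """levels[0] holds fully specified items, later entries their ancestors."""
    P = set()
    for prefixes in levels:
        for p in prefixes:
            c = upper[p] - sum(lower[h] for h in closest_hhh(p, P))
            c += 2 * Z * math.sqrt(N * V)      # slack for the sampling error
            if c >= theta * N:
                P.add(p)                       # p is an HHH candidate
    return P
```

A parent whose traffic is fully explained by already-selected descendants ends up with a conditioned estimate of roughly the slack term alone, and is therefore not reported.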
In addition, Algorithm~\ref{alg:randHHH2D} uses the notion of a \emph{greatest lower bound (glb)}, which is formally defined in Definition~\ref{def:glb}. Finally, we add a constant to the conditioned frequency to account for the sampling error. \begin{definition} \label{def:glb} Denote by $glb(h,h')$ the greatest lower bound of $h$ and $h'$; that is, $glb(h,h')$ is the unique common descendant $q$ of $h$ and $h'$ s.t. $\forall p : \left(q\preceq p\right)\wedge \left(p \preceq h\right)\wedge \left(p \preceq h'\right) \Rightarrow p = q.$ When $h$ and $h'$ have no common descendants, we define $glb(h,h')$ as an item with count $0$. \end{definition} In two dimensions, $\widehat{C_{p|P}}$ is first set to be the upper bound on $p$'s frequency (Line~\ref{line:cp}, Algorithm~\ref{alg:Skipper}). Then, we remove previously selected descendant heavy hitters (Line~\ref{alg:second}, Algorithm~\ref{alg:randHHH2D}). Finally, we add back the common descendants (Line~\ref{alg:third}, Algorithm~\ref{alg:randHHH2D}). Note that the work of~\cite{HHHMitzenmacher} showed that their structure extends to higher dimensions, with only a slight modification to the Output method to ensure that it conservatively estimates the conditioned count of each prefix. As we use the same general structure, their extension applies in our case as well. \section{Discussion} \label{sec:discussion} This work is about realizing hierarchical heavy hitters measurement in virtual network devices. Existing HHH algorithms are too slow to cope with current improvements in network technology. Therefore, we define a probabilistic relaxation of the problem and introduce a matching randomized algorithm called RHHH. Our algorithm leverages the massive traffic in modern networks to perform simpler update operations. Intuitively, the algorithm replaces the traditional approach of computing all prefixes for each incoming packet by sampling (if $V>H$) and then choosing one \emph{random} prefix to be updated.
While similar convergence guarantees can be derived for the simpler approach of updating all prefixes for each sampled packet, our solution has the clear advantage of processing elements in $O(1)$ worst case time. We evaluated RHHH on four real Internet packet traces, each consisting of over 1 billion packets, and achieved a speedup of up to X62 compared to previous works. Additionally, we showed that the solution quality of RHHH is comparable to that of previous work. RHHH performs updates in constant time, an asymptotic improvement over previous works, whose complexity is proportional to the hierarchy's size. This is especially important in the two dimensional case as well as for IPv6 traffic that requires larger~hierarchies. Finally, we integrated RHHH into a DPDK enabled Open vSwitch and evaluated its performance as well as that of the alternative algorithms. We provided a dataplane implementation where HHH measurement is performed as part of the per packet routing tasks. In a dataplane implementation, RHHH is capable of handling up to 13.8 Mpps, $4\%$ less than an unmodified DPDK OVS (that does not perform HHH measurement). We showed a throughput improvement of X2.5 compared to the fastest dataplane implementations of previous~works. Alternatively, we evaluated a distributed implementation where RHHH is realized in a virtual machine that can be deployed in the cloud and the virtual switch only sends the sampled traffic to RHHH. Our distributed implementation can process up to 12.3 Mpps. It is less intrusive to the switch, and offers greater flexibility in virtual machine placement. Most importantly, our distributed implementation is capable of analyzing data from multiple network~devices. Notice the gap between the performance improvement of our direct implementation (X62) and the improvement when running over OVS (X2.5).
In the case of the OVS experiments, we were running over a $10$Gbps link, and were bound by that line speed -- the throughput obtained by our implementation was only $4\%$ lower than the unmodified OVS baseline (which performs no measurement at all). In contrast, previous works were clearly bounded by their computational overhead. Thus, one can anticipate that once we deploy the OVS implementation on faster links, or in a setting that combines traffic from multiple links, the performance boost compared to previous work will be closer to the improvement we obtained in the direct~implementation. A downside of RHHH is that it requires some minimal number of packets in order to converge to the desired formal accuracy guarantees. In practice, this is a minor limitation as busy links deliver many millions of packets every second. For example, in the settings reported in Section~\ref{sec:acc+cov-error}, RHHH requires up to $100$ million packets to fully converge, yet even after as few as $8$ million packets, the error reduces to around $1\%$. With a modern switch that can serve $10$ million packets per second, this translates into a $10$ second delay for complete convergence and around $1\%$ error after $1$ second. As line rates continue to improve, these delays will become even shorter. The code used in this work is open sourced~\cite{RHHHCode}. \paragraph*{Acknowledgments} We thank Ori Rottenstreich for his insightful comments and Ohad Eytan for helping with the code release. We would also like to thank the anonymous reviewers and our shepherd, Michael Mitzenmacher, for helping us improve this work. This work was partially funded by the Israeli Science Foundation grant \#$1505/16$ and the Technion-HPI research school. Marcelo Caggiani Luizelli is supported by the research fellowship program funded by CNPq (201798/2015-8).
\section{Evaluation} \label{sec:eval} Our evaluation includes MST~\cite{HHHMitzenmacher}, the Partial and Full Ancestry~\cite{CormodeHHH} algorithms and two configurations of RHHH, one with $V=H$ (RHHH) and the other with $V=10\cdot H$ (10-RHHH). RHHH performs a single update operation per packet while \FRHHH{} performs such an operation only for 10\% of the packets. Thus, \FRHHH{} is considerably faster than RHHH but requires more traffic to converge. The evaluation was performed on a single Dell 730 server running the Ubuntu 16.04.01 release. The server has 128GB of RAM and an Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz processor. Our evaluation includes four datasets, each containing a mix of 1 billion UDP/TCP and ICMP packets collected from major backbone routers in both Chicago~\cite{CAIDACH15,CAIDACH16} and San Jose~\cite{CAIDASJ13,CAIDASJ14} during the years 2014-2016. We considered source hierarchies in byte (1D Bytes) and bit (1D Bits) granularities, as well as a source/destination byte hierarchy (2D Bytes). Such hierarchies were also used by~\cite{HHHMitzenmacher,CormodeHHH}. We ran each data point $5$ times and used a two-sided Student's t-test to determine 95\% confidence intervals. \subsection{Accuracy and Coverage Errors} \label{sec:acc+cov-error} RHHH has a small probability of both accuracy and coverage errors that are not present in previous algorithms. Figure~\ref{fig:acc} quantifies the accuracy errors and Figure~\ref{fig:cov} quantifies the coverage errors. As can be seen, RHHH becomes more accurate as the trace progresses. Our theoretical bound ($\psi$ as derived in Section~\ref{sec:analysis} below) for these parameters is about 100 million packets for RHHH and about 1 billion packets for \FRHHH{}. Indeed, these algorithms converge once they reach their theoretical bounds (see Theorem~\ref{thm:correctness}). \subsection{False Positives} Approximate HHH algorithms find all the HHH prefixes but they also return non HHH prefixes.
\emph{False positives} measure the fraction of non HHH prefixes within the returned HHH set. Figure~\ref{FPR:fig:FPR} shows a comparative measurement of false positive ratios in the Chicago 16 and San Jose 14 traces. Every point was measured for $\epsilon=0.1\%$ and $\theta=1\%$. As shown, for RHHH and \FRHHH{} the false positive ratio is reduced as the trace progresses. Once the algorithms reach their theoretical guarantees ($\psi$), the false positives are comparable to those of previous works. In some cases, RHHH and \FRHHH{} even perform slightly better than the alternatives. \subsection{Operation Speed} Figure~\ref{fig:speed} shows a comparative evaluation of operation speed. Figure~\ref{SJ:Byte}, Figure~\ref{SJ:bit} and Figure~\ref{SJ:2Dbyte} show the results of the San Jose 14 trace for a 1D byte hierarchy ($H=5$), a 1D bit hierarchy ($H=33$) and a 2D byte hierarchy ($H=25$), respectively. Similarly, Figure~\ref{CHI:byte}, Figure~\ref{CHI:bit} and Figure~\ref{CHI:2Dbyte} show results for the Chicago 16 trace on the same hierarchical domains. Each point is computed for $250M$ packet long traces. Clearly, the performance of RHHH and \FRHHH{} is relatively similar for a wide range of $\varepsilon$ values and for different data sets. Existing works depend on $H$ and indeed run considerably slower for large $H$ values. Another interesting observation is that the Partial and Full Ancestry~\cite{CormodeHHH} algorithms improve when $\varepsilon$ is small. This is because in that case there are few replacements in their trie based structure, as is directly evident by their $O(H\log(N\epsilon))$ update time, which decreases with $\epsilon$. However, the effect is significantly lessened when $H$ is large. RHHH and \FRHHH{} achieve speedups for a wide range of $\varepsilon$ values, while \FRHHH{} is the fastest algorithm overall. For one dimensional byte level hierarchies, the achieved speedup is up to X3.5 for RHHH and up to X10 for \FRHHH{}.
For one dimensional bit level hierarchies, the achieved speedup is up to X21 for RHHH and up to X62 for \FRHHH{}. Finally, for 2 dimensional byte hierarchies, the achieved speedup is up to X20 for RHHH and up to X60 for \FRHHH{}. Evaluation on Chicago15 and SanJose13 yielded similar results, which are omitted due to lack of space. \begin{document} \author{ Eran Assaf\\ Hebrew University\\ \and Ran Ben Basat\\ Technion\\ \and Gil Einziger\\ Nokia Bell Labs\\ \and Roy Friedman\\ Technion\\ } \title{Pay for a Sliding Bloom Filter and Get Counting, Distinct Elements, and Entropy for Free} \date{} \input{macros} \newtheorem{problem}[theorem]{Problem} \newenvironment{remark}[1][Remark]{\begin{trivlist} \item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}} \renewcommand{\arraystretch}{1.33} \newcommand{\cmark}{\ding{51}} \newcommand{\xmark}{\ding{55}} \maketitle \input{abstract} \input{introduction} \input{relatedwork} \input{SWAMP} \input{results} \input{analysisSWAMP} \section{Discussion} \label{sec:discussion} In modern networks, operators are likely to require multiple measurement types. To that end, this work suggests SWAMP, a unified algorithm that monitors four common measurement metrics in constant time and compact space. Specifically, SWAMP approximates the following metrics on a sliding window: Bloom filters, per-flow counting, count distinct and entropy estimation. For all problems, we proved formal accuracy guarantees and demonstrated them on real Internet traces. Despite being a general algorithm, SWAMP advances the state of the art for all these problems.
For sliding Bloom filters, we showed that SWAMP is memory succinct for constant false positive rates and that it reduces the required space by 25\%-40\% compared to previous approaches~\cite{slidngBloomFilterInfocom}. In per-flow counting, our algorithm outperforms WCSS~\cite{WCSS} -- a state of the art window algorithm. When compared with $1+\varepsilon$ approximation count distinct algorithms~\cite{SlidingHLL,Fusy-HLL}, SWAMP asymptotically improves the query time from $O(\varepsilon^{-2})$ to a constant. It is also up to x1000 times more accurate on real packet traces. For the entropy estimation on a sliding window~\cite{SlidingEntropy}, SWAMP reduces the update time to a constant. While SWAMP benefits from the compactness of TinyTable~\cite{TinyTable}, most of its space reductions inherently come from using fingerprints rather than sketches. For example, all existing count distinct and entropy algorithms require $\Omega(\epsilon^{-2})$ space for computing a $1+\epsilon$ approximation. SWAMP can compute the \emph{exact} answers using $O(W\log W)$ bits. Thus, for a small $\epsilon$ value, we get an asymptotic reduction by storing the fingerprints on \emph{any} compact table. % % { \bibliographystyle{plain} \section{Introduction} Network measurements are at the core of many applications, such as load balancing, quality of service, anomaly/intrusion detection, and caching~\cite{CONGA,DevoFlow,TinyLFU,IntrusionDetection2,ApproximateFairness}. Measurement algorithms are required to cope with the throughput demands of modern links, forcing them to rely on scarcely available fast SRAM memory. However, such memory is limited in size~\cite{CounterBraids}, which motivates approximate solutions that conserve space. Network algorithms often find recent data useful. For example, anomaly detection systems attempt to detect manifesting anomalies and a load balancer needs to balance the current load rather than the historical one. 
Hence, the sliding window model is an active research~field~\cite{WCSS,slidngBloomFilterInfocom,Naor2013,ActiveActive,TBF}. The desired measurement types differ from one application to the other. For example, a load balancer may be interested in the heavy hitter flows~\cite{CONGA}, which are responsible for a large portion of the traffic. Additionally, anomaly detection systems often monitor the number of distinct elements~\cite{IntrusionDetection2} and entropy~\cite{Entropy1} or use Bloom filters~\cite{AnomalyBF}. Yet, existing algorithms usually provide just a single utility at a time, e.g., approximate set membership (Bloom filters)~\cite{Bloom}, per-flow counting~\cite{SpectralBloom,TinyTable}, count distinct~\cite{CD3,CD0,CD4} and entropy~\cite{Entropy1}. Therefore, as network complexity grows, multiple measurement types may be required. However, employing multiple stand-alone solutions incurs the additive combined cost of each of them, which is inefficient in both memory and computation. In this work, we suggest \emph{Sliding Window Approximate Measurement Protocol (SWAMP)}, an algorithm that bundles together four commonly used measurement types. Specifically, it approximates set membership, per-flow counting, distinct elements and entropy in the sliding window model. As illustrated in Figure~\ref{fig:example}, SWAMP stores flows' fingerprints\footnote{ A fingerprint is a short random string obtained by hashing an ID.} in a cyclic buffer while their frequencies are maintained in a compact fingerprint hash table named TinyTable~\cite{TinyTable}. On each packet arrival, its corresponding fingerprint replaces the oldest one in the buffer. We then update the table, decrementing the departing fingerprint's frequency and incrementing that of the arriving one. An additional counter $Z$ maintains the number of distinct \textbf{fingerprints} in the window and is updated every time a fingerprint's frequency is reduced to $0$ or increased to $1$.
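The arrival procedure just described can be sketched in a few lines. The following is a simplified illustration only: a plain dictionary stands in for TinyTable and Python's built-in `hash` stands in for a proper pairwise-independent fingerprint function; it is not the paper's implementation.

```python
class SwampSketch:
    """Simplified sketch of SWAMP's update step: a cyclic fingerprint
    buffer (CFB) plus a fingerprint -> frequency table and a distinct
    fingerprint counter Z."""

    def __init__(self, window_size, fingerprint_bits):
        self.W = window_size
        self.L = fingerprint_bits
        self.buf = [None] * window_size   # cyclic fingerprint buffer
        self.head = 0
        self.freq = {}                    # stands in for TinyTable
        self.Z = 0                        # number of distinct fingerprints

    def _fingerprint(self, item):
        # illustration only: a real deployment uses a proper hash function
        return hash(item) & ((1 << self.L) - 1)

    def add(self, item):
        fp = self._fingerprint(item)
        old = self.buf[self.head]
        if old is not None:               # evict the oldest fingerprint
            self.freq[old] -= 1
            if self.freq[old] == 0:       # last copy left the window
                del self.freq[old]
                self.Z -= 1
        self.buf[self.head] = fp
        self.head = (self.head + 1) % self.W
        if self.freq.get(fp, 0) == 0:     # first copy in the window
            self.Z += 1
        self.freq[fp] = self.freq.get(fp, 0) + 1

    def frequency(self, item):            # multiplicity query
        return self.freq.get(self._fingerprint(item), 0)

    def is_member(self, item):            # sliding Bloom filter query
        return self.frequency(item) > 0
```

Note that `frequency()` never undercounts: identical items always share a fingerprint, and a collision can only inflate the estimate, mirroring the one-sided multiplicity guarantee.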
Intuitively, the number of distinct fingerprints provides a good estimation of the number of distinct elements. Additionally, the scalar $\hat{H}$ (not illustrated) maintains the fingerprints' distribution entropy and approximates the real entropy. \begin{figure}[t] \center{ \includegraphics[width = .9\columnwidth]{example-kkCD.png} \caption{\label{fig:example} An overview of SWAMP: Fingerprints are stored in a cyclic fingerprint buffer (CFB), and their frequencies are maintained by TinyTable. Upon item $x_n$'s arrival, we update CFB and the table by removing the oldest item's ($x_{n-W}$) fingerprint (in black) and adding that of $x_n$ (in red). We also maintain an estimate for the number of distinct fingerprints (Z). Since the black fingerprint's count is now zero, we~decrement~Z. }} \end{figure} \subsection{Contribution} We present \emph{SWAMP}, a sliding window algorithm for approximate set membership (Bloom filters), per-flow counting, distinct elements and entropy measurements. We prove that SWAMP operates in constant time and provides accuracy guarantees for each of the supported problems. Despite its versatility, SWAMP improves the state of the art for each. For approximate set membership, SWAMP is memory succinct when the false positive rate is constant and requires up to 40\% less space than~\cite{slidngBloomFilterInfocom}. SWAMP is also succinct for per-flow counting and is more accurate than~\cite{WCSS} on real packet traces. When compared with $1+\varepsilon$ count distinct approximation algorithms~\cite{SlidingHLL,Fusy-HLL}, SWAMP asymptotically improves the query time from $O(\varepsilon^{-2})$ to a constant. It is also up to x1000 times more accurate on real packet traces. For entropy, SWAMP asymptotically improves the runtime to a constant and provides accurate estimations in practice.
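One plausible way to realize the constant-time entropy maintenance mentioned above is to track the sum $S=\sum_i c_i\log_2 c_i$ over fingerprint frequencies and use the identity $H=\log_2 W - S/W$, which holds for a full window of $W$ items. This is a sketch of that standard bookkeeping idea, not necessarily the paper's exact code:

```python
import math

class EntropyTracker:
    """Maintain the empirical entropy of a full size-W window in O(1)
    per update by tracking S = sum over fingerprints of c * log2(c)."""

    def __init__(self, window_size):
        self.W = window_size
        self.S = 0.0

    @staticmethod
    def _term(c):
        return c * math.log2(c) if c > 0 else 0.0

    def on_count_change(self, old_count, new_count):
        # called whenever some fingerprint's frequency changes
        self.S += self._term(new_count) - self._term(old_count)

    def entropy(self):
        # H = -sum (c/W) log2(c/W) = log2(W) - S/W, since sum of c = W
        return math.log2(self.W) - self.S / self.W
```

For example, a window of $W$ identical items gives $S=W\log_2 W$ and entropy $0$, while $W$ distinct items give $S=0$ and entropy $\log_2 W$.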
\begin{table}[h] \centering \scriptsize \begin{tabular}{|c|c|c|c|} \hline Algorithm & Space & Time & Counts \tabularnewline \hline \hline SWAMP & $(1+o(1))\cdot W\log_2 W$ & $O\left(1\right)$ & \cmark \tabularnewline \hline SWBF~\cite{slidngBloomFilterInfocom} & $(2+o(1))\cdot W\log_2 W$ & $O\left(1\right)$ & \xmark \tabularnewline \hline TBF~\cite{TBF} & $O\left(W\log_2 W\log_2\epsilon^{-1}\right)$ &$O\left(\log_2\epsilon^{-1}\right)$ & \xmark \tabularnewline \hline \end{tabular} \normalsize \caption{ Comparison of sliding window set membership algorithms for $\epsilon=W^{-o(1)}$. } \label{tbl:setMembership} \ifdefined \vspace*{-0.3cm} \fi \end{table} \normalfont \begin{table*}[t!] \ifdefined \footnotesize \fi \centering{\hspace*{-0.5cm} \begin{tabular}{|c|c|c|c|} \hline Problem & Estimator & Guarantee & Reference \tabularnewline \hline \hline \multirow{2}{*}{{\sc $(W,\epsilon)$-Approximate Set Membership}} & \multirow{2}{*}{{\sc IsMember()}} & $\Pr(true|x\in S^W) =1$ & \multirow{2}{*}{Corollary~\ref{cor:bf}}\tabularnewline \cline{3-3} & & $\Pr(true|x\notin S^W) \le \epsilon$ & \tabularnewline \hline \multirow{3}{*}{{\sc $(W,\epsilon)$-Approximate Set Multiplicity}} & \multirow{2}{*}{{\sc Frequency} $(\widehat{f_x})$} & $\Pr \left(f_x \le \widehat{f_x}\right) =1$ & \multirow{3}{*}{Theorem~\ref{thm:setmul}}\tabularnewline \cline{3-3} & & $\Pr\left(f_x \neq \widehat{f_x} \right) \le \epsilon$ &\tabularnewline \hline \multirow{4}{*}{{\sc $(W,\epsilon,\delta)$-Approximate Count Distinct}} & \multirow{2}{*}{\distinctLB{} $(Z)$} & $\Pr(D\ge Z)=1$ &\multirow{3}{*}{Theorem~\ref{thm:epsDeltaDistinctLB}}\tabularnewline \cline{3-3} & & $\Pr \Big( { D - Z \ge \frac{1}{2}\varepsilon D \cdot \log \left( {\frac{2}{\delta }} \right)} \Big) \le \delta.$ &\tabularnewline \cline{2-4} & \distinctMLE{}$(\hat{D})$ & $\Pr \left( { \left|D - \hat{D}\right| \ge \frac{1}{2}\varepsilon D \cdot \log \left( {\frac{2}{\delta }}
\right)} \right) \le \delta.$ & Theorem~\ref{thm:prob margin} \tabularnewline \hline \multirow{2}{*}{{\sc $(W,\epsilon,\delta)$-Entropy Estimation}} & \multirow{2}{*}{{\sc Entropy} $(\hat{H})$} & $\Pr\left(H \ge \hat{H}\right) =1$ & \multirow{3}{*}{Theorem~\ref{thm:entropyInterval}}\tabularnewline \cline{3-3} & & $ \Pr \left ( H - \widehat{H} \ge \epsilon\delta^{-1} \right ) \le \delta .$ &\tabularnewline \hline \end{tabular} } \caption{ Summary of SWAMP's accuracy guarantees. } \ifdefined \vspace*{-0.7cm} \fi \label{tinytbl} \normalfont \end{table*} \ifdefined the notations we use can be found in Table~\ref{tbl:notations}. \else \fi \ifdefined \textbf{\textit{Roadmap.}} \else \subsection{Paper organization} \fi Related work on the problems covered by this work is found in Section~\ref{sec:related}. Section~\ref{sec:SWAMP} provides formal definitions and introduces SWAMP. Section~\ref{sec:Eval} describes an empirical evaluation of SWAMP and previously suggested algorithms. Section~\ref{sec:anal} includes a formal analysis of SWAMP which is briefly summarized in Table~\ref{tinytbl}. Finally, we conclude with a short discussion in Section~\ref{sec:discussion}. \section{Discussion} \label{sec:discussion} In modern networks, operators are likely to require multiple measurement types. To that end, this work suggests SWAMP, a unified algorithm that monitors four common measurement metrics in constant time and compact space. Specifically, SWAMP approximates the following metrics on a sliding window: Bloom filters, per-flow counting, count distinct and entropy estimation. For all problems, we proved formal accuracy guarantees and demonstrated them on real Internet traces. Despite being a general algorithm, SWAMP advances the state of the art for all these problems. 
For sliding Bloom filters, we showed that SWAMP is memory succinct for constant false positive rates and that it reduces the required space by 25\%-40\% compared to previous approaches~\cite{slidngBloomFilterInfocom}. In per-flow counting, our algorithm outperforms WCSS~\cite{WCSS} -- a state of the art window algorithm. When compared with $1+\varepsilon$ approximation count distinct algorithms~\cite{SlidingHLL,Fusy-HLL}, SWAMP asymptotically improves the query time from $O(\varepsilon^{-2})$ to a constant. It is also up to x1000 times more accurate on real packet traces. For the entropy estimation on a sliding window~\cite{SlidingEntropy}, SWAMP reduces the update time to a constant. While SWAMP benefits from the compactness of TinyTable~\cite{TinyTable}, most of its space reductions inherently come from using fingerprints rather than sketches. For example, all existing count distinct and entropy algorithms require $\Omega(\epsilon^{-2})$ space for computing a $1+\epsilon$ approximation. SWAMP can compute the \emph{exact} answers using $O(W\log W)$ bits. Thus, for a small $\epsilon$ value, we get an asymptotic reduction by storing the fingerprints on \emph{any} compact table. Finally, while the formal analysis of SWAMP is mathematically involved, the actual code is short and simple to implement. This facilitates its adoption in network devices and SDN. In particular, OpenBox~\cite{openbox} demonstrated that sharing common measurement results across multiple network functionalities is feasible and efficient. Our work fits into~this~trend. \bibliographystyle{abbrv} \section{Virtual Switch Integration} \label{sec:ovs} This section describes how we extended Open vSwitch (OVS) to include approximate HHH monitoring capabilities. For completeness, we start with a short overview of OVS and then continue with our~evaluation.
\input{ovsOverview} \subsection{Open vSwitch Evaluation} We examined two integration methods: First, HHH measurement can be performed as part of the OVS dataplane. That is, OVS updates each packet as part of its processing stage. Second, HHH measurement can be performed in a separate virtual machine. In that case, OVS forwards the relevant traffic to the virtual machine. When RHHH operates with $V>H$, we only forward the sampled packets and thus reduce~overheads. \subsubsection{OVS Environment Setup} Our evaluation settings consist of two identical HP ProLiant servers with an Intel Xeon E3-1220v2 processor running at 3.1 GHz with 8 GB RAM, an Intel 82599ES 10 Gbit/s network card and the CentOS 7.2.1511 operating system with Linux kernel 3.10.0. The servers are directly connected through two physical interfaces. We used Open vSwitch 2.5 with Intel DPDK 2.02, where NIC physical ports are attached using \emph{dpdk} ports. One server is used as a traffic generator while the other serves as the \emph{Design Under Test (DUT)}. Placed on the DUT, OVS receives packets on one network interface and then forwards them to the second one. Traffic is generated using the MoonGen traffic generator~\cite{MoonGen2015}; we generate 1 billion UDP packets, preserving the source and destination IPs of the original dataset. We also adjust the payload size to 64 bytes and reach 14.88 million packets per second (Mpps). \begin{figure}[t]\centering \includegraphics[width = 0.8\columnwidth]{OVS_graphs/graph1.png} \caption{Throughput of dataplane implementations ($\varepsilon = 0.001$, $\delta = 0.001$, 2D Bytes, Chicago 16).} \label{fig-bw1} \end{figure} \subsubsection{OVS Throughput Evaluation} Figure \ref{fig-bw1} exhibits the throughput of OVS for dataplane implementations. It includes our own \FRHHH{} (with $V=10H$) and RHHH (with $V=H$), as well as MST and Partial Ancestry. Since we only have 10 Gbit/s links, the maximum achievable packet rate is 14.88~Mpps.
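The 14.88~Mpps figure is simple framing arithmetic: each 64-byte frame occupies an additional 20 bytes on the wire (preamble, start-of-frame delimiter and inter-frame gap), so a 10 Gbit/s link carries at most $10^{10}/(84\cdot 8)$ frames per second:

```python
def line_rate_pps(link_bps, frame_bytes, wire_overhead_bytes=20):
    """Maximum packets per second at line rate, accounting for the
    20 bytes of per-frame Ethernet overhead (preamble + SFD + IFG)."""
    return link_bps / ((frame_bytes + wire_overhead_bytes) * 8)

# 64-byte frames on a 10 Gbit/s link:
print(round(line_rate_pps(10e9, 64) / 1e6, 2))        # 14.88 (Mpps)

# Ignoring framing overhead, 1.5KB packets on a 100 Gbit/s link:
print(round(line_rate_pps(100e9, 1500, 0) / 1e6, 2))  # 8.33 (Mpps)
```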
As can be seen, \FRHHH{} processes 13.8 Mpps, only 4\% lower than unmodified OVS. RHHH achieves 10.6 Mpps, while the fastest competitor, Partial Ancestry, delivers 5.6 Mpps. Note that a 100 Gbit/s link delivering packets whose average size is 1.5KB only delivers $\approx$ 8.33 Mpps. Thus, \FRHHH{} and RHHH can cope with the line speed. Next, we evaluate the throughput for different $V$ values, from $V=H=25$ (RHHH) to $V=10\cdot H =250$ (\FRHHH{}). Figure~\ref{fig:DPI} evaluates the dataplane implementation while Figure~\ref{fig:VMI} evaluates the distributed implementation. In both figures, performance improves for larger $V$ values. In the distributed implementation, this speedup means that fewer packets are forwarded to the VM, whereas in the dataplane implementation, it is linked to fewer processed~packets. \begin{figure}[t]\centering {\includegraphics[width = 0.8\columnwidth]{OVS_graphs/graph2b.png}} \caption{Dataplane implementation\label{fig:DPI}} \end{figure} \begin{figure}[t]\centering {\includegraphics[width = 0.8\columnwidth]{OVS_graphs/graph2a.png}} \caption{Distributed implementation\label{fig:VMI}} \end{figure} Note that while the distributed implementation is somewhat slower, it enables the measurement machine to process traffic from multiple sources. \section{Open vSwitch Implementation} Next, we evaluate a prototype implementation of the proposed HHH algorithms in OVS. We first provide an overview of the OVS architecture. Then, we describe how HHH measurement is implemented in OVS and evaluate its performance. \input{ovsOverview} \subsection{Implementation Design} We implement HHH algorithms as part of the dataplane ... \subsection{Evaluation} \subsubsection{Environment setup} Our evaluation settings consist of two identical HP ProLiant servers with an Intel Xeon E3-1220v2 processor running at 3.1 GHz with 8 GB RAM. Both servers are equipped with DPDK-enabled network interface cards (Intel 82599ES 10 Gbit/s).
The servers are directly connected through separate physical interfaces. The servers run CentOS 7.2.1511 with Linux kernel 3.10.0. We use one server as the \emph{Design Under Test (DUT)}, and the other as a traffic generator. In our \emph{DUT}, we use Open vSwitch 2.5 compiled with Intel DPDK 2.02. NIC physical ports are attached to OVS using \emph{dpdk} ports. OVS is configured so that all incoming network traffic is forwarded to another interface. In the traffic generator, we rely on the MoonGen traffic generator~\cite{MoonGen2015} to generate traffic at line rate. We generate UDP network traffic according to the Chicago 16 dataset. We keep the source and destination IP prefixes as they are in the dataset and adjust the payload size in order to generate packets of 64 bytes (reaching up to 14.88 Mpps on 10 Gbit/s NICs). \begin{figure}[t]\centering \includegraphics[width = \columnwidth, height=5cm]{graph1.pdf} \caption{Measured throughput using different HHH implementations.} \label{fig-bw1} \end{figure} \begin{figure}[t]\centering \includegraphics[width = \columnwidth, height=5cm]{graph2.pdf} \caption{Measured throughput of FRHHH using different filter probabilities.} \label{fig-bw2} \end{figure} \subsubsection{Results} Figure \ref{fig-bw1} illustrates the throughput of OVS when running the HHH algorithms in the dataplane. For the evaluations, we consider $\epsilon = 2^{-10}$. We compare our proposed solutions (RHHH and FRHHH) against plain OVS (without any HHH) and against the Partial Ancestry algorithm. Observe that the Partial Ancestry algorithm incurs a throughput overhead greater than 60\% in comparison to the OVS baseline. In contrast, our proposal, FRHHH, incurs a negligible throughput overhead (lower than 3\%). In Figure \ref{fig-bw2}, we illustrate the impact of different filter probabilities on the overhead of the FRHHH method.
Observe that as we increase the probability of updating the HHH data structures, the throughput overhead increases (up to 25\%), since more packets update the HHH data structures. There is thus a tradeoff between the error and the achieved performance. Additionally, we also compare FRHHH to the scenario in which packets are sampled and forwarded to a monitoring VNF. In this particular case, we assume that the VNF has an implementation of HHH. Then, OVS is required to forward a percentage of the packets to that VNF. As OVS does not natively support sampling packets in the dataplane, we implement a uniform packet sampling method in the dataplane so that OVS also forwards sampled packets to a virtual port (attached to the VNF). Observe, in Figure \ref{fig-bw2}, that sampling 10\% of the packets to the VNF incurs a higher penalty than updating the HHH structures for 30\% of the packets in the dataplane. \subsection{Open vSwitch Overview} \label{apx:ovsOverview} Virtual switching is a key building block in NFV environments, as it enables interconnecting multiple \emph{Virtual Network Functions} (VNFs) in service chains and enables the use of other routing technologies such as SDN. In practice, virtual switches rely on sophisticated optimizations to cope with the line rate. Specifically, we target the DPDK version of OVS, which enables the entire packet processing to be performed in user space. It mitigates overheads such as the interrupts required to move between user space and kernel space. In addition, DPDK enables user space packet processing and provides direct access to NIC buffers without unnecessary memory copies. The DPDK library has received significant engagement from the NFV industry~\cite{intelDpdk}. The architectural design of OVS is composed of two main components: ovs-vswitchd and ovsdb-server. Due to space constraints, we only describe the vswitchd component. The interested reader is referred to \cite{ovs-2015-nsdi} for additional information.
The DPDK version of the vswitchd module implements the control and data planes in user space. Network packets ingress the datapath (dpif or dpif-netdev) either from a physical port connected to the physical NIC or from a virtual port connected to a remote host (e.g., a VNF). The datapath then parses the headers and determines the set of actions to be applied (e.g., forwarding or rewriting a specific~header). \section{Related work} \label{sec:related} \subsection{Set Membership and Counting} A Bloom filter~\cite{Bloom} is an efficient data structure that encodes an approximate set. Given an item, a Bloom filter can be queried to determine whether that item is a part of the set. An answer of `no' is always correct, while an answer of `yes' may be false with a certain probability. Such a case is called a \emph{false positive}. Plain Bloom filters do not support removals or counting, and many algorithms fill this gap. For example, some alternatives support removals~\cite{dleftCBF,TinyTable,TinySet,RankedIndexHashing,OceanStore,VLBF} and others support multiplicity queries~\cite{SpectralBloom,TinyTable}. Additionally, some works use aging~\cite{ActiveActive} and others compute the approximate set with regard to a sliding window~\cite{TBF,slidngBloomFilterInfocom}. SWBF~\cite{slidngBloomFilterInfocom} uses a Cuckoo hash table to build a sliding Bloom filter, which is more space efficient than the previously suggested \emph{Timing Bloom filters (TBF)~\cite{TBF}}. The Cuckoo table is allocated with $2W$ entries such that each entry stores a fingerprint and a time stamp. Cuckoo tables require that $W$ entries remain empty to avoid cycles, and this is done implicitly by treating cells containing outdated items as `empty'. Finally, a cleanup process is used to remove outdated items and allow timestamps to wrap around. A comparison of SWAMP, TBF and SWBF appears in Table~\ref{tbl:setMembership}.
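The timestamp trick described above, treating entries older than $W$ arrivals as empty, can be sketched as follows. This is a dictionary-based simplification for illustration only; SWBF's actual structure is a Cuckoo table with a periodic cleanup process that lets timestamps wrap around.

```python
class TimestampedMembership:
    """Sliding-window set membership sketch: store the last-seen arrival
    index per fingerprint and treat entries older than W items as empty."""

    def __init__(self, window_size):
        self.W = window_size
        self.now = 0          # arrival index of the next item
        self.last_seen = {}   # fingerprint -> arrival index

    def add(self, fingerprint):
        self.last_seen[fingerprint] = self.now
        self.now += 1

    def query(self, fingerprint):
        t = self.last_seen.get(fingerprint)
        # an item is in the window iff it arrived within the last W items
        return t is not None and self.now - t <= self.W
```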
\subsection{Count Distinct} The number of \textbf{distinct} elements provides a useful indicator for anomaly detection algorithms. Accurate count distinct is impractical due to the massive scale of the data~\cite{HLL} and thus most approaches resort to approximate solutions~\cite{CD1,CD2,CD3}. Approximate algorithms typically use a hash function $H: \mathbb{ID}\to \{0,1\}^{\infty}$ that maps IDs to infinite bit strings. In practice, finite bit strings are used, and $32$ bit integers suffice to reach estimations of over $10^9$~\cite{HLL}. These algorithms look for certain \emph{observables} in the hashes. For example, some algorithms~\cite{CD1,Giroire2009406} treat the minimal observed hash value as a real number in $[0,1]$ and exploit the fact that $\mathbb{E}(\min\left(H(\cal{M})\right)) = \frac{1}{D+1}$, where $D$ is the real number of distinct items in the multi-set $\cal{M}$. Alternatively, one can seek patterns of the form $0^{\beta-1}1$~\cite{CD3,HLL} and exploit the fact that such a pattern is encountered on average once per every $2^{\beta}$ unique elements. Monitoring observables reduces the required amount of space, as only the observable itself needs to be maintained. In practice, the variance of such methods is large and hence multiple observables are maintained. In principle, one could repeat the process and perform $m$ independent experiments, but this has significant computational overheads. Instead, \emph{stochastic averaging}~\cite{CD4} is used to mimic the effects of multiple experiments with a single hash calculation. In any case, using $m$ repetitions reduces the standard deviation by a factor of~$\frac{1}{\sqrt{m}}$. The state of the art count distinct algorithm is~\emph{HyperLogLog (HLL)}~\cite{HLL}, which is used in multiple Google projects~\cite{HLLInPractice}. HLL requires $m$ bytes and its standard deviation is $\sigma \approx \frac{1.04}{\sqrt{m}}$.
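The minimum-value observable can be turned into a toy estimator in a few lines. This is an illustration of the principle only: it performs all $m$ hash evaluations per item explicitly instead of using stochastic averaging, and it is far less accurate than HLL.

```python
import hashlib

def _uniform_hash(item, salt):
    """Hash (salt, item) to a pseudo-random point in [0, 1)."""
    digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2.0**64

def estimate_distinct(stream, m=64):
    """Toy count-distinct estimator based on the minimum-value observable:
    E[min of D uniform hashes] = 1/(D+1), so averaging the minima of m
    independent hash functions and inverting yields an estimate of D."""
    mins = [1.0] * m
    for item in stream:
        for salt in range(m):
            h = _uniform_hash(item, salt)
            if h < mins[salt]:
                mins[salt] = h
    mean_min = sum(mins) / m
    return 1.0 / mean_min - 1.0
```

For a stream with 100 distinct items, the estimate lands roughly around 100; duplicates do not move the minima, which is what makes the observable insensitive to repetitions.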
SWHLL extends HLL to sliding windows~\cite{SlidingHLL,Fusy-HLL}, and was used to detect attacks such as port scans~\cite{Chabchoub2014}. SWAMP's space requirement is proportional to $W$ and thus it is only comparable in space to HLL when $\varepsilon^{-2} = O(W)$. However, when multiple functionalities are required, the \emph{residual} space overhead of SWAMP is only $\log(W)$ bits, which is considerably less than any standalone alternative. \subsection{Entropy Detection} Entropy is commonly used as a signal for anomaly detection~\cite{Entropy1}. Intuitively, it can be viewed as a summary of the entire traffic histogram. The benefit of entropy-based approaches is that they require no exact understanding of the attack's mechanism. Instead, such a solution assumes that a sharp change in the entropy is caused by~anomalies. An $\epsilon,\delta$ approximation of the entropy of a stream can be calculated in $O\left( {{\varepsilon ^{ - 2}}\log {\delta ^{ - 1}}} \right)$ space~\cite{SudiptoAndMcGregor}, an algorithm that was also extended to sliding windows using priority sampling~\cite{PrioritySampling}. That sliding window algorithm is improved by~\cite{OptimalSamplingSW}, whose algorithm requires $O\left( {{\varepsilon ^{ - 2}}\log {\delta ^{ - 1}}\log \left( N \right)} \right)$ memory. \subsection{Preliminaries -- Compact Set Multiplicity} \label{sec:TinyTable} Our work requires a compact set multiplicity structure that supports both set membership and multiplicity queries. TinyTable~\cite{TinyTable} and CQF~\cite{CQF} fit the description, while other structures~\cite{dleftCBF,RankedIndexHashing} can naturally be extended to support multiplicity queries at the expense of additional space. We choose TinyTable~\cite{TinyTable} as its code is publicly available as open source.
TinyTable encodes $W$ fingerprints of size $L$ using $\left( {1 + \alpha } \right)W\left( {L - \log_2 \left( W \right) + 3} \right) + o\left( W \right)$ bits, where $\alpha$ is a small constant that affects update speed; when $\alpha$ grows, TinyTable becomes faster but also consumes more space. \section{Empirical Evaluation} \label{sec:Eval} \subsection{Overview} We evaluate SWAMP's various functionalities, each against its known solutions. We start with the {\sc $(W,\epsilon)$-Approximate Set Membership}{} problem, where we compare SWAMP to SWBF~\cite{slidngBloomFilterInfocom} and \emph{Timing Bloom filter (TBF)}~\cite{TBF}, which only solve {\sc $(W,\epsilon)$-Approximate Set Membership}. For counting, we compare SWAMP to~\emph{Window Compact Space Saving (WCSS)}~\cite{WCSS}, which solves heavy hitters identification on a sliding window. WCSS provides a different accuracy guarantee and therefore we evaluate their empirical accuracy when both algorithms are given the same amount of space. For the distinct elements problem, we compare SWAMP against \emph{Sliding Hyper Log Log}~\cite{SlidingHLL,Fusy-HLL}, denoted SWHLL, whose authors proposed running HyperLogLog and LogLog on a sliding window. In our settings, small range correction is active and thus HyperLogLog and LogLog devolve into the same algorithm. Additionally, since we already know that small range correction is active, we can allocate only a single bit (rather than 5) to each counter. This option, denoted SWLC, slightly improves the space/accuracy ratio. In all measurements, we use a window size of $W=2^{16}$ unless specified otherwise. In addition, the underlying TinyTable uses $\alpha =0.2$ as recommended by its authors~\cite{TinyTable}. Our evaluation includes six Internet packet traces consisting of backbone router and data center traffic.
The backbone traces contain a mix of 1 billion UDP, TCP and ICMP packets collected from two major routers in Chicago~\cite{CAIDACH16} and San Jose~\cite{CAIDASJ14} during the years 2013-2016. The dataset Chicago 16 refers to data collected from the Chicago router in 2016, San Jose 14 to data collected from the San Jose router in 2014, etc. The datacenter packet traces are taken from two university datacenters that consist of up to 1,000 servers~\cite{Benson}. These traces are denoted DC1 and DC2. \ifdefined Due to lack of space, results for DC2 as well as additional Caida traces are deferred to the full version of this paper~\cite{full-version}. \fi \subsection{Evaluation of analytical guarantees} Figure~\ref{fig:anal} evaluates the accuracy of our analysis from Section~\ref{sec:anal} on random inputs. As can be observed, the analysis of sliding Bloom filter (Figure~\ref{subfig:SBF}) and count distinct (Figure~\ref{subfig:Z}) is accurate. For entropy (Figure~\ref{subfig:entropy}) the accuracy is better than anticipated indicating that our analysis here is just an upper bound, but the trend line is nearly~identical. \subsection{Set membership on sliding windows} We now compare SWAMP to TBF~\cite{TBF} and SWBF~\cite{slidngBloomFilterInfocom}. Our evaluation focuses on two aspects, fixing $\varepsilon$ and changing the window size (Figure~\ref{variableW}) as well as fixing the window size and changing $\varepsilon$ (Figure~\ref{variableEps}). As can be observed, SWAMP is considerably more space efficient than the alternatives in both cases for a wide range of window sizes and for a wide range of error probabilities. In the tested range, it is 25-40\% smaller than the best alternative. \subsection{Per-flow counting on sliding windows} Next, we evaluate SWAMP for its per-flow counting functionality. We compare SWAMP to WCSS~\cite{WCSS} that solves heavy hitters on a sliding window. Our evaluation uses the On Arrival model, which was used to evaluate WCSS. 
In that model, we perform a query for each incoming packet. Then, we calculate the \emph{Root Mean Square Error}. We repeated each experiment 25 times with different seeds and computed 95\% confidence intervals for SWAMP. Note that WCSS is a deterministic algorithm and as such was run only once. The results appear in Figure~\ref{fig:memWCSS}. Note that the space consumption of WCSS is proportional to $\varepsilon^{-1}$ and that of SWAMP to $W$. Thus, SWAMP cannot be run for the entire range. Yet, when it is feasible, SWAMP's error is lower on average than that of WCSS. Additionally, in many of the configurations we are able to show statistical significance for this improvement. Note that SWAMP becomes accurate with high probability using about $300$KB of memory, while WCSS requires about $8.3$MB to provide the same accuracy. That is, an improvement of x27. \subsection{Count distinct on sliding windows } Next, we evaluate the count distinct functionality in terms of accuracy vs. space on the different datasets. We performed $25$ runs and report the averaged results. We evaluate two functionalities: SWAMP-LB and SWAMP-MLE. SWAMP-LB corresponds to the function \distinctLB{} in Algorithm~\ref{alg:SWAMP} and provides a one-sided estimate, while SWAMP-MLE corresponds to the function \distinctMLE{} in Algorithm~\ref{alg:SWAMP} and provides an unbiased estimator. Figure~\ref{fig:Z} shows the results of this evaluation. As can be observed, SWAMP-MLE is up to x1000 more accurate than the alternatives. Additionally, SWAMP-LB also outperforms the alternatives for parts of the range. Note that SWAMP-LB is the only one-sided estimator in this evaluation. \subsection{Entropy estimation on sliding window} Figure~\ref{fig:Entropy} shows results for entropy estimation. As shown, SWAMP provides a very accurate entropy estimation in its entire operational range. Moreover, our analysis in Section~\ref{anal:entropy} is conservative and SWAMP is much more accurate in practice.
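The gap between $Z$ and the true $D$ in the count distinct experiments stems from fingerprint collisions: $D$ distinct flows throw $D$ balls into $2^L = W/\varepsilon$ fingerprint bins, and $Z$ counts the occupied bins. A small Monte Carlo sketch, with hypothetical parameters, illustrates the undercount:

```python
import random

def average_undercount(D, W, eps, trials=50, seed=1):
    """Average value of D - Z when D distinct flows are hashed uniformly
    into W/eps possible fingerprints; Z counts the occupied fingerprints."""
    bins = round(W / eps)
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        occupied = {rng.randrange(bins) for _ in range(D)}
        total += D - len(occupied)
    return total / trials

# With D = W = 1000 and eps = 0.1, the expected undercount is roughly
# D^2 * eps / (2 * W) = 50, i.e. about (eps / 2) * D.
```

This matches the bound $\mathbb{E}(Z)\ge D\cdot(1-\frac{\varepsilon}{2})$ established in the analysis.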
\subsection{Analysis of count distinct functionality} \label{anal:Z} We now move to analyze SWAMP's count distinct functionality. Recall that SWAMP has two estimators; the one sided \distinctLB{} and the more accurate \distinctMLE{}. Also, recall that $Z$ monitors the number of distinct fingerprints and that $D$ is the real number of distinct flows (on the window). \subsubsection{Analysis of \distinctLB{}} We now present an analysis for \distinctLB{}, showing that it solves the {\sc $(W,\epsilon,\delta)$-Approximate Count Distinct}{} problem. \begin{theorem} \label{thm:Z} $Z\le D$ and $\mathbb{E}(Z)\ge D\cdot (1-\frac{\varepsilon}{2})$. \end{theorem} \begin{proof} Clearly, $Z \le D$ always, since any two identical IDs map to the same fingerprint. Next, for any $0 \le i \le 2^L-1$, denote by $Z_i$ the indicator random variable, indicating whether there is some item in the window whose fingerprint is $i$. Then $Z = \sum_{i=0}^{2^L-1} Z_i$, hence $\mathbb{E}(Z) = \sum_{i=0}^{2^L-1} \mathbb{E}(Z_i)$. However \ifdefined , for any $i$, \fi \ifdefined $$ \mathbb{E}(Z_i) = \Pr(Z_i = 1) = 1 - \left( 1 - 2^{-L} \right)^D .$$ \else \small$ \mathbb{E}(Z_i) = \Pr(Z_i = 1) = 1 - \left( 1 - 2^{-L} \right)^D. $\normalsize \fi The probability that a fingerprint is exactly $i$ is $2^{-L}$, thus: \begin{equation} \label{eq:DISTINCT expectation} \mathbb{E}(Z) = 2^L \cdot \left( 1 - \left( 1 - 2^{-L} \right)^D \right). \end{equation} Since $ 0 < 2^{-L} \le 1 $, we have:\\ $ \left( 1 - 2^{-L} \right)^D < 1 - D \cdot 2^{-L} + {D \choose 2} \cdot 2^{-2L} $ which, in turn, implies $ \mathbb{E}(Z) > D - {D \choose 2} \cdot 2^{-L} $. Finally, we note that $2^{-L} = \frac{\varepsilon}{W}$ and that $ D \le W $. Hence, \small $ \mathbb{E}(Z) > D - \frac{D(D-1)\varepsilon}{2W} > D \cdot \left( 1 - \frac{\varepsilon}{2} \right).\qedhere$\normalsize \end{proof} Our next step is a probabilistic bound on the error of $Z$ ($X = D-Z$). 
This is required for Theorem~\ref{thm:epsDeltaDistinctLB}, which is the main result and shows that $Z$ provides an $\epsilon,\delta$ approximation for the distinct elements problem. We model the problem as a balls and bins problem where $D$ balls are placed into $2^L = \frac{W}{\varepsilon}$ bins. The variable $X$ is the number of bins with at least $2$ balls. For any $0\le i \le 2^L-1$, let $X_i$ denote the number of balls in the $i$-th bin. Note that the variables $X_i$ are dependent on each other and are difficult to reason about directly. Luckily, $X$ is monotonically increasing and we can use a Poisson approximation. To do so, we denote by $Y_i\sim Poisson \left( \frac{D}{2^L} \right) $ the corresponding independent Poisson variables and by $Y$ the Poisson approximation of $X$; that is, $Y$ is the number of variables $Y_i$ with value $2$ or more. Our goal is to apply Lemma~\ref{lemma:rare}, which links the Poisson approximation to the exact case. In our case, it allows us to derive insight about $X$ by analyzing $Y$. \begin{lemma}[Corollary 5.11, page 103 of~\cite{Mitzenmacher:2005:PCR:1076315}] \label{lemma:rare} Let $\mathcal{E}$ be an event whose probability is either monotonically increasing or decreasing with the number of balls. If $\mathcal{E}$ has probability $p$ in the Poisson case then $\mathcal{E}$ has probability at most $2p$ in the exact case. Here $\mathcal{E}$ is an event depending on $X_0,\ldots ,X_{2^L-1}$ (in the exact case), and the probability of the event in the Poisson case is obtained by computing $\mathcal{E}$ using $Y_0, \ldots ,Y_{2^L-1}$. \end{lemma} That is, to bound the probability $\Pr(X_i \ge 2)$, it suffices to bound $\Pr(Y_i \ge 2)$ in the Poisson case: by Lemma~\ref{lemma:rare}, $\Pr(X_i\ge 2)$ is at most $2\cdot \Pr(Y_i\ge 2)$. We now denote by $\tilde{Y}_i$ the indicator variable of the event $Y_i \ge 2$. By definition:\\ $ Y = \sum_{i=0}^{2^L-1} \tilde{Y}_i $.
It follows that: \ifdefined $$ \mathbb{E}(Y) = \sum_{i=0}^{2^L-1} \mathbb{E}(\tilde{Y}_i) = 2^L \cdot \Pr(Y_i \ge 2) .$$ \else $\\ \mathbb{E}(Y) = \sum_{i=0}^{2^L-1} \mathbb{E}(\tilde{Y}_i) = 2^L \cdot \Pr(Y_i \ge 2) .$\\ \fi $Y_i \sim Poisson \left( \frac{D}{2^L} \right) $, hence: \ifdefined $$ \Pr(Y_i \ge 2) = 1 - \Pr(Y_i = 0) - \Pr(Y_i = 1)$$$$ = 1 - e^{-\frac{D}{2^L}} \cdot \left( 1 + \frac{D}{2^L} \right) ,$$ \else \\$ \Pr(Y_i \ge 2) = 1 - \Pr(Y_i = 0) - \Pr(Y_i = 1)$\\$ = 1 - e^{-\frac{D}{2^L}} \cdot \left( 1 + \frac{D}{2^L} \right) ,$ \fi and thus:\\ $ \mathbb{E}(Y) = 2^L \cdot \left( 1 - e^{-\frac{D}{2^L}} \cdot \left( 1 + \frac{D}{2^L} \right) \right) .$\\ Since $ e^{-x} \cdot (1 + x) > 1 - \frac{x^2}{2} $ for $ 0 < x \le 1$, we get \ifdefined $$ \mathbb{E}(Y) < 2^L \cdot \left( 1 - \left( 1 - \frac{D^2}{2^{2L+1}} \right) \right) = \frac{D^2}{2^{L+1}} .$$ \else \\$ \mathbb{E}(Y) < 2^L \cdot \left( 1 - \left( 1 - \frac{D^2}{2^{2L+1}} \right) \right) = \frac{D^2}{2^{L+1}} .$\\ \fi Recall that $2^L = W\varepsilon^{-1}$ and $D \le W$ and thus $ \mathbb{E}(Y) < \frac{D \cdot \varepsilon}{2} .$ Note also that the $\{\tilde{Y}_i\}$ are independent, since the $\{Y_i\}$ are independent, and as they are indicator (Bernoulli) variables, they are in particular Poisson trials. Therefore, we may use a Chernoff bound on $Y$ as it is the sum of independent Poisson trials. We use the following Chernoff bound to continue: \begin{lemma}[Theorem 4.4, page 64 of~\cite{Mitzenmacher:2005:PCR:1076315}] \label{lemma:Chernoff} Let $X_1,...,X_n$ be independent Poisson trials such that $\Pr(X_i = 1) = p_i$, and let $X = \sum_{i=1}^{n} X_i$. Then for $R\ge 6\mathbb{E}(X)$, $\Pr\left(X\ge R\right)\le 2^{-R}$.
\end{lemma} Lemma~\ref{lemma:Chernoff} allows us to prove Theorem~\ref{thm:epsDeltaDistinctLB}, which is the main result for \distinctLB{} and shows that it provides an $\epsilon,\delta$ approximation of $D$; we then use it in Corollary~\ref{cor:epsDeltaDistinct} to show that it solves the count distinct problem. \begin{theorem} \label{thm:epsDeltaDistinctLB} Let \scriptsize$\delta \le \frac{1}{128}$\normalsize. Then: \ifdefined $$\Pr \left( { D - Z \ge \frac{1}{2}\varepsilon D \cdot \log_2 \left( {\frac{2}{\delta }} \right)} \right) \le \delta.$$ \else \scriptsize $\Pr \left( { D - Z \ge \frac{1}{2}\varepsilon D \cdot \log_2 \left( {\frac{2}{\delta }} \right)} \right) \le \delta.$ \fi \end{theorem} \begin{proof} For $\delta\le \frac{1}{128}$, $\log_2\left(\frac{2}{\delta}\right)\ge 6$ and thus, as\\ $ \log_2\left(\frac{2}{\delta}\right) \cdot \frac{1}{2} \varepsilon D \ge 6 \mathbb{E}(Y) $ we can use Lemma~\ref{lemma:Chernoff} to get that: \small \ifdefined \[\Pr \left( {Y \ge \frac{1}{2}\varepsilon D \cdot \log_2 \left( {\frac{2}{\delta }} \right)} \right) \le {2^{ - \frac{1}{2}\left( {\varepsilon D} \right)\log_2 \left( {\frac{2}{\delta }} \right)}} \le {2^{ - \log_2 \left( {\frac{2}{\delta }} \right)}} \le \frac{\delta }{2}.\] \else $\Pr \left( {Y \ge \frac{1}{2}\varepsilon D \cdot \log_2 \left( {\frac{2}{\delta }} \right)} \right) \le {2^{ - \frac{1}{2}\left( {\varepsilon D} \right)\log_2 \left( {\frac{2}{\delta }} \right)}} \le {2^{ - \log_2 \left( {\frac{2}{\delta }} \right)}} \le \frac{\delta }{2}.$ \fi \normalsize As $X$ monotonically increases with $D$, we use Lemma~\ref{lemma:rare} to conclude that $\Pr \left( {X \ge \frac{1}{2}\varepsilon D \cdot \log_2 \left( {\frac{2}{\delta }} \right)} \right) \le \delta.~\qedhere$ \end{proof} The following corollary readily follows.
\begin{corollary} \label{cor:epsDeltaDistinct} \distinctLB{} solves the\\ {\sc $(W,\epsilon,\delta)$-Approximate Count Distinct}{} problem, for $\varepsilon_D = \frac{1}{2} \varepsilon \cdot \log_2 \left( {\frac{2}{\delta }} \right)$, and any $\delta \le \frac{1}{128}$. That is, for such $\delta$ we get: $ \Pr \left ( Z \ge (1 - \varepsilon_D ) \cdot D \right ) \ge 1 - \delta .$ \end{corollary} \subsubsection{Analysis of \distinctMLE{}} \ifdefined We state the confidence interval results derived for the MLE estimator. As the proof is highly technical, it is deferred to the full version~\cite{full-version}. \begin{theorem} \label{thm:prob margin} Let $\delta \le \frac{1}{128} $ and denote $\varepsilon_D = \frac{1}{2} \varepsilon \cdot \log_2 \left(\frac{2}{\delta} \right)$. Then $ \Pr \left( \hat{D} \le D \cdot (1 - \varepsilon_D) \right) \le \delta .$ \end{theorem} \else We now provide an analysis of the \distinctMLE{} estimation method, which is more accurate but has a two-sided error. Our goal is to derive confidence intervals around \distinctMLE{}, and for this we require a better analysis of $Z$, which is provided by Lemma~\ref{lem:DISTINCT expectation precise}. \begin{lemma} \label{lem:DISTINCT expectation precise} \begin{equation} D - {D \choose 2} \cdot 2^{-L} \le \mathbb{E}(Z) \le D - {D \choose 2} \cdot 2^{-L} + {D \choose 3} \cdot 2^{-2L}. \end{equation} \end{lemma} \begin{proof} This follows directly from Equation~\eqref{eq:DISTINCT expectation} by expanding the binomial. \end{proof} We also need to analyze the second moment of $Z$; Lemma~\ref{lem:DISTINCT second moment} does just that. \begin{lemma} \label{lem:DISTINCT second moment} \begin{align*} \mathbb{E}(Z^2) = 2^{2L} - \left ( 2^{2L+1} - 2^L \right) \left ( 1 - \frac{1}{2^L} \right)^D + \left ( 2^{2L} - 2^L \right) \left ( 1 - \frac{1}{2^{L-1}} \right)^D. \end{align*} \end{lemma} \begin{proof} As in Theorem~\ref{thm:Z}, we write $Z$ as a sum of indicator variables $Z = \sum_{i=0}^{2^L-1} Z_i$.
Then $$ Z^2 = \sum_{i=0}^{2^L-1} Z_i^2 + \sum_{i\ne j} Z_i Z_j. $$ By linearity of expectation, this implies \begin{equation} \label{eq:Z variance} \mathbb{E}(Z^2) = \sum_{i=0}^{2^L-1} \mathbb{E}(Z_i^2) + \sum_{i\ne j} \mathbb{E}(Z_i Z_j). \end{equation} We note that for any $i \ne j$, $ (1-Z_i)(1-Z_j) $ is also an indicator variable, attaining the value $1$ with probability $ \left ( 1 - \frac{2}{2^L} \right)^D $. Therefore, for any $i \ne j $, we have by linearity of expectation that \begin{align*} \left ( 1 - \frac{2}{2^L} \right)^D = \mathbb{E}\left((1 - Z_i)(1-Z_j)\right) = 1 - \mathbb{E}(Z_i) - \mathbb{E}(Z_j) + \mathbb{E}(Z_i Z_j). \end{align*} Since $ \mathbb{E}(Z_i) = 1 - \left ( 1 - \frac{1}{2^L} \right)^D $, it follows that \begin{align*} \mathbb{E}(Z_i Z_j) = \left ( 1 - \frac{2}{2^L} \right)^D + 2 \cdot \left ( 1 - \left ( 1 - \frac{1}{2^L} \right)^D \right) - 1 = 1 + \left ( 1 - \frac{2}{2^L} \right)^D - 2 \left ( 1 - \frac{1}{2^L} \right)^D. \end{align*} Plugging this back into Equation~\eqref{eq:Z variance}, and using $Z_i^2 = Z_i$, we obtain \begin{align*} \mathbb{E}(Z^2) = 2^L \cdot \left ( 1 - \left ( 1 - \frac{1}{2^L} \right)^D \right) + 2^L (2^L - 1) \left ( 1 + \left ( 1 - \frac{2}{2^L} \right)^D - 2 \left ( 1 - \frac{1}{2^L} \right)^D \right). \end{align*} Finally, expanding and rearranging, we obtain the claim of this lemma. \end{proof} As our interest lies in the confidence intervals, we shall only need an approximation, described in the following simple corollary. \begin{corollary} \label{cor:DISTINCT second moment approx} $ D^2 - \frac{4 \cdot D^3}{3 \cdot 2^L} < \mathbb{E}(Z^2) < D^2 + \frac{D^3}{3 \cdot 2^L}$.
\end{corollary} \begin{proof} First, note that binomial expansion yields $$ 1 - \frac{D}{2^L} + \frac{{D \choose 2}}{2^{2L}} - \frac{{D \choose 3}}{2^{3L}} \le \left ( 1 - \frac{1}{2^L} \right)^D \le 1 - \frac{D}{2^L} + \frac{{D \choose 2}}{2^{2L}}.$$ Plugging it back into Lemma~\ref{lem:DISTINCT second moment}, and expanding we get on the one hand \begin{multline*} \mathbb{E}(Z^2) > 2^{2L} - (2^{2L+1} - 2^L) \left ( 1 - \frac{D}{2^L} + \frac{ {D \choose 2} }{2^{2L}} \right) + (2^{2L} - 2^L) \left( 1 - \frac{D}{2^{L-1}} + \frac{ {D \choose 2}}{2^{2L-2}} - \frac{{D \choose 3}}{2^{3L-3}} \right) = \\ = D + 2 \cdot {D \choose 2} - 2^{-L} \cdot \left( \frac{3D(D-1)}{2} + \frac{8D(D-1)(D-2)}{6} \right) = D^2 - \frac{D(D-1)(8D-7)}{6 \cdot 2^L}, \end{multline*} which yields the lower bound. On the other hand, we have \begin{multline*} \mathbb{E}(Z^2) < 2^{2L} - (2^{2L+1} - 2^L) \left ( 1 - \frac{D}{2^L} + \frac{ {D \choose 2} }{2^{2L}} - \frac{{D \choose 3}}{2^{3L-3}} \right) + (2^{2L} - 2^L) \left( 1 - \frac{D}{2^{L-1}} + \frac{ {D \choose 2}}{2^{2L-2}} \right) = \\ = D + 2 \cdot {D \choose 2} + 2^{-L} \cdot \left(- \frac{3D(D-1)}{2} + \frac{D(D-1)(D-2)}{3} \right) = D^2 + \frac{D(D-1)(2D-13)}{6 \cdot 2^L}, \end{multline*} which yields the upper bound. \ifdefined Thus, we have established the corollary. \fi \end{proof} Using Corollary~\ref{cor:DISTINCT second moment approx} and Lemma~\ref{lem:DISTINCT expectation precise} we can finally get an estimate for $\mathbb{E}(\hat{D})$, as described in the following theorem. This shows that $\hat{D}$ is unbiased up to an $O(D\epsilon^2)$ additive factor. \begin{theorem} \label{thm:DISTINCT2 expectation} \small \begin{equation} - D \cdot \varepsilon^2 \cdot \left( \frac{2}{3} + \frac{\varepsilon}{2W^3} \right) < \mathbb{E}(\hat{D}) -D < D \cdot \varepsilon^2 \cdot \left(2 + \frac{1}{3(1-\varepsilon)^3} \right). 
\end{equation} \normalsize \end{theorem} \begin{proof} We first note that for $x \in [0,1)$ one has \begin{equation} \label{eq:ln approx} x + \frac{x^2}{2} \le -\ln (1-x) \le x + \frac{x^2}{2} +\frac{x^3}{3(1-x)^3}. \end{equation} Therefore, we have $$ \frac{Z}{2^L} + \frac{Z^2}{2^{2L+1}} \le - \ln \left( 1 - \frac{Z}{2^L} \right) \le \frac{Z}{2^L} + \frac{Z^2}{2^{2L+1}} + \frac{Z^3}{3 \cdot (2^L-Z)^3} .$$ Since always $Z \le D$, and $\frac{D}{2^L} \le \frac{W}{2^L} = \varepsilon$, we can take expectation and obtain, by linearity and monotonicity of the expectation, that \begin{align} \label{eq:ineq expectation approx} \frac{\mathbb{E}(Z)}{2^L} + \frac{\mathbb{E}(Z^2)}{2^{2L+1}} \le \mathbb{E} \left( - \ln \left( 1 - \frac{Z}{2^L} \right) \right) \le \frac{\mathbb{E}(Z)}{2^L} + \frac{\mathbb{E}(Z^2)}{2^{2L+1}} + \frac{D^3}{3 \cdot (2^L-D)^3}. \end{align} We can now substitute Lemma~\ref{lem:DISTINCT expectation precise} and Corollary~\ref{cor:DISTINCT second moment approx} to obtain \begin{multline*} -\mathbb{E} \left( \ln \left( 1 - \frac{Z}{2^L} \right) \right) \le \frac{D}{2^L} + \frac{D}{2^{2L+1}} + \frac{D^3}{6 \cdot 2^{3L}} + \frac{{D \choose 3}}{2^{3L}} + \frac{D^3}{3 \cdot (2^L-D)^3} \\\le \frac{D}{2^L} + \frac{D}{2^{2L+1}} + D \cdot \left( \frac{2W^2}{2^{3L}} + \frac{W^2}{3(1-\varepsilon)^3 \cdot 2^{3L}} \right). \end{multline*} Recalling~\ref{eq:ln approx} we see also that $ -\ln \left( 1 - \frac{1}{2^L} \right) \ge \frac{1}{2^L} + \frac{1}{2^{2L+1}} $. Combining both inequalities, we get \begin{align*} \mathbb{E}(\hat{D}) \le D + D \cdot \left( \frac{2W^2}{2^{2L}+2^{L-1}} + \frac{W^2}{3(1-\varepsilon)^3 \cdot (2^{2L} + 2^{L-1})} \right) \le D + D \cdot \left( \frac{2W^2}{2^{2L}} + \frac{W^2}{3(1-\varepsilon)^3 \cdot 2^{2L}} \right). \end{align*} Since $ 2^L = W\varepsilon^{-1} $, this gives us the upper bound. 
On the other hand, substituting Lemma~\ref{lem:DISTINCT expectation precise} and Corollary~\ref{cor:DISTINCT second moment approx} in the left inequality given in~\eqref{eq:ineq expectation approx} we have also $$ -\mathbb{E} \left( \ln \left( 1 - \frac{Z}{2^L} \right) \right) \ge \frac{D}{2^L} + \frac{D}{2^{2L+1}} - \frac{4D^3}{6 \cdot 2^{3L}} .$$ From~\eqref{eq:ln approx} we see also that $ -\ln \left( 1 - \frac{1}{2^L} \right) \le \frac{1}{2^L} + \frac{1}{2^{2L+1}} + \frac{1}{3\cdot (2^L-1)^3}. $ Combining both inequalities, we get \begin{align*} \mathbb{E}(\hat{D}) \ge D - \frac{4D^3}{6\cdot (2^{2L} + 2^{L-1} + 3)} - \frac{8D}{3\cdot 2^{3L}} \ge D - D \cdot \left( \frac{2D^2}{3\cdot 2^{2L}} + \frac{8}{3\cdot 2^{3L}} \right). \end{align*} Since $ D \le W $ and $2^L = W\varepsilon^{-1}$, this yields the lower bound, as claimed. \end{proof} Next, we bound the error probability. This is done in the following theorem. \begin{theorem} \label{thm:prob margin} Let $\delta \le \frac{1}{128} $ and denote $\varepsilon_D = \frac{1}{2} \varepsilon \cdot \log_2 \left(\frac{2}{\delta} \right)$. Then $$ \Pr \left( \hat{D} \le D \cdot (1 - \varepsilon_D) \right) \le \delta .$$ \end{theorem} \begin{proof} We first note that $$ \Pr \left( \frac{\ln \left( 1 - \frac{Z}{2^L} \right)}{\ln \left( 1 - \frac{1}{2^L} \right)} \le a \right) = \Pr \left( Z \le 2^L \left( 1 - \left( 1 - \frac{1}{2^L} \right)^a \right) \right) .$$ Now, using Corollary~\ref{cor:epsDeltaDistinct}, and the fact that $$ 2^L \left( 1 - \left( 1 - \frac{1}{2^L} \right)^a \right) \le a, $$ the result immediately follows. \end{proof} Theorem~\ref{thm:prob margin} shows the soundness of the \distinctMLE{} estimator by proving that it is at least as accurate as the \distinctLB{} estimator. \fi \subsection{Analysis of entropy estimation functionality} \label{anal:entropy} We now turn to analyze the entropy estimation. Recall that SWAMP uses the estimator $\entropyVariable$. 
If we denote by $F$ the \text{set} of distinct fingerprints in the last $W$ elements, it is given by \ifdefined $$ \entropyVariable = -\sum_{h \in F} \frac{n_h}{W} \log (\frac{n_h}{W}), $$ \else $ \entropyVariable = -\sum_{h \in F} \frac{n_h}{W} \log (\frac{n_h}{W}), $ \fi where $n_h$ is the number of occurrences of fingerprint $h$ in the window of last $W$ elements. We begin by showing that $\mathbb{E}(\entropyVariable)$ approximates $H$, where \ifdefined $$ H = -\sum_{y \in D} \frac{f_y^W}{W} \logp{\frac{f_y^W}{W}} $$ \else \\$ H = -\sum_{y \in D} \frac{f_y^W}{W} \logp{\frac{f_y^W}{W}} $ \fi is the entropy in the window. \begin{theorem} \label{thm:EH} $\entropyVariable \le H$ and $\mathbb{E}(\entropyVariable)$ is at least $H -\varepsilon$. \end{theorem} \begin{proof} For any $y \in D$, let $h(y)$ be its fingerprint. Recall that $\widehat{f_y^W}$ is the frequency $n_{h(y)}$ of $h(y)$, hence we have: \ifdefined $$ \widehat{f_y^W} = \sum_{y':h(y)=h(y')} f_{y'}^W. $$ \else $ \widehat{f_y^W} = \sum_{y':h(y)=h(y')} f_{y'}^W. $ \fi For ease of notation, we denote $p_y =\frac{f_y^W}{W}$, and $p_h = \frac{n_h}{W}$. Thus \ifdefined $$ p_h = \sum_{y \in D:h(y)=h} p_y. $$ \else $p_h = \sum_{y \in D:h(y)=h} p_y. $ \fi \\It follows that: \ifdefined \begin{multline*} \entropyVariable = -\sum_{h \in F} p_h \log(p_h) = -\sum_{h \in F} \left( \sum_{y \in D :h(y)=h} p_y \right) \log \left( \sum_{y \in D :h(y)=h} p_y \right) = -\sum_{y \in D} p_y \cdot \log \left( \sum_{y' \in D :h(y')=h(y)} p_{y'} \right). 
\end{multline*} \else {$\entropyVariable = -\sum_{h \in F} p_h \log(p_h)\\ = -\sum_{h \in F} \left( \sum_{y \in D :h(y)=h} p_y \right) \log \left( \sum_{y \in D :h(y)=h} p_y \right) \\ = -\sum_{y \in D} p_y \cdot \log \left( \sum_{y' \in D :h(y')=h(y)} p_{y'} \right).\\ $ }\normalfont \fi Now, since $p_{y'} \ge 0$ for all $y'$, it follows that for any $y$, \ifdefined $$ \sum_{y' \in D :h(y')=h(y)} p_{y'} \ge p_y .$$ \else $ \sum_{y' \in D :h(y')=h(y)} p_{y'} \ge p_y .$ \fi Hence, by the monotonicity of the logarithm, \ifdefined $$ \entropyVariable \le -\sum_{y \in D} p_y \log(p_y) = H $$ \else $ \entropyVariable \le -\sum_{y \in D} p_y \log(p_y) = H $ \fi proving the first part of our claim. Conversely, denote for any $y \in D$ and any $h \in F$, by $I_{y,h}$ the Bernoulli random variable, attaining the value $1$ if $h(y)=h$. Then we see that: \ifdefined $$ \entropyVariable = -\sum_{y \in D} p_y \cdot \log \left( \sum_{y' \in D} p_{y'} \cdot I_{y',h(y)} \right). $$ \else \\ $ \entropyVariable = -\sum_{y \in D} p_y \cdot \log \left( \sum_{y' \in D} p_{y'} \cdot I_{y',h(y)} \right). 
$ \fi We see, by using Jensen's inequality, the concavity of the logarithm and the linearity of expectation, that for any $y$: \small \ifdefined $$ \mathbb{E} \log \left( \sum_{y' \in D} p_{y'} \cdot I_{y',h(y)} \right) \le \log \left( \sum_{y' \in D} p_{y'} \cdot \mathbb{E}(I_{y',h(y)}) \right) $$\normalsize \else $ \mathbb{E} \log \left( \sum_{y' \in D} p_{y'} \cdot I_{y',h(y)} \right) \le \log \left( \sum_{y' \in D} p_{y'} \cdot \mathbb{E}(I_{y',h(y)}) \right) $ \fi Since for any $y' \ne y$, we have $ \mathbb{E}(I_{y',h(y)}) = 2^{-L}$, we see that \small \ifdefined $$ \mathbb{E} \log \left( \sum_{y' \in D} p_{y'} \cdot I_{y',h(y)} \right) \le \log \left( 2^{-L} \cdot \sum_{y' \ne y \in D} p_{y'} + p_y \right) = \log(p_y + 2^{-L}(1-p_y)) $$ \else $ \mathbb{E} \log \left( \sum_{y' \in D} p_{y'} \cdot I_{y',h(y)} \right)\le$$ \log \left( 2^{-L} \cdot \sum_{y' \ne y \in D} p_{y'} + p_y \right)\\ = \log(p_y + 2^{-L}(1-p_y)). $ \fi \normalsize Summing this over $y$, and using the linearity of expectation once more, we obtain:\\ \ifdefined $$ \mathbb{E}(\entropyVariable) \ge - \sum_{y \in D} p_y \log(p_y + 2^{-L}(1-p_y)). $$ \else $ \mathbb{E}(\entropyVariable) \ge - \sum_{y \in D} p_y \log(p_y + 2^{-L}(1-p_y)). $\\ \fi Subtracting $H$ yields: \ifdefined \begin{align*} \mathbb{E}(\entropyVariable) - H \ge -\sum_{y \in D} p_y \left( \log(p_y + 2^{-L}(1-p_y)) - \log(p_y) \right) \ge - \sum_{y \in D} p_y \left( \log(1 + 2^{-L}(\frac{1}{p_y}-1)) \right). \end{align*} \else {\\ $\mathbb{E}(\entropyVariable) - H \ge -\sum_{y \in D} p_y \left( \log(p_y + 2^{-L}(1-p_y)) - \log(p_y) \right)\\ \ge - \sum_{y \in D} p_y \left( \log(1 + 2^{-L}(\frac{1}{p_y}-1)) \right). $ }\normalfont \fi \\Note that $2^{-L} = \frac{\varepsilon}{W}$ and $\frac{1}{W} \le p_y < 1$. Hence, $0<2^{-L}(\frac{1}{p_y}-1) < \varepsilon$. 
Since $\log(1+x) < x$ for any $x > 0$, we get: \ifdefined $$ \mathbb{E}(\entropyVariable) - H \ge - \sum_{y \in D} 2^{-L} \cdot (1-p_y) = - \frac{\varepsilon}{W} \cdot (D - 1).$$ \else $ \mathbb{E}(\entropyVariable) - H \ge - \sum_{y \in D} 2^{-L} \cdot (1-p_y) = - \frac{\varepsilon}{W} \cdot (D - 1).$\\ \fi As $ D \le W $, we get that $ \mathbb{E}(\entropyVariable) > H - \varepsilon $, as claimed. \end{proof} We proceed with a confidence interval derivation: \begin{theorem}\label{thm:entropyInterval} Let $\epsilon_H = \varepsilon\delta^{-1}$. Then $\hat{H}$ solves {\sc $(W,\epsilon_H,\delta)$-Entropy Estimation}. That is: $ \Pr \left ( H - \hat{H} \ge \epsilon_H \right ) \le~\delta .$ \end{theorem} \begin{proof} Denote by $X\triangleq H - \hat{H}$ the \ifdefined random variable of the \fi estimation error. According to Theorem~\ref{thm:EH}, $X$ is non-negative and $\mathbb{E}(X)\le \epsilon$. Therefore, by Markov's inequality we have \ifdefined \begin{align*} \Pr \left ( H \le \hat{H} + \epsilon\delta^{-1} \right ) = 1- \Pr \left( X \ge \epsilon\delta^{-1} \right) \ge 1- \frac{\mathbb{E}(X)}{\epsilon\delta^{-1}} \ge 1- \delta. \end{align*} \else {\small $ \Pr \left ( H \le \hat{H} + \epsilon\delta^{-1} \right ) = 1- \Pr \left( X \ge \epsilon\delta^{-1} \right) \ge 1- \frac{\mathbb{E}(X)}{\epsilon\delta^{-1}} \ge 1- \delta. $ }\normalfont \fi Setting $\epsilon_H = \epsilon\delta^{-1}$ \ifdefined and rearranging the expression completes the proof. \else concludes the proof. \fi \qedhere \end{proof} \section{SWAMP Algorithm} \label{sec:SWAMP} \subsection{Model} We consider a stream $\mathbb{S}$ of IDs where at each step an ID is added to $\mathbb{S}$. The last $W$ elements in $\mathbb{S}$ are denoted $\mathbb{S}^{W}$. Given an ID $y$, the notation $f_y^{W}$ represents the frequency of $y$ in $\mathbb{S}^{W}$. Similarly, $\widehat{f_y^{W}}$ is an approximation of $f_y^{W}$. For ease of reference, notations are summarized in Table~\ref{tbl:notations}.
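For concreteness, the exact semantics of $f_y^{W}$ and $D$ can be captured by a naive baseline that stores the last $W$ IDs verbatim; SWAMP approximates exactly these quantities in far less space. The sketch below is our own reference implementation of the model, not part of SWAMP:

```python
from collections import Counter, deque

class ExactWindow:
    # Naive baseline: stores the last W IDs verbatim, so frequency() is exact.
    def __init__(self, W):
        self.window = deque(maxlen=W)  # S^W: the last W elements of the stream
        self.counts = Counter()

    def update(self, item_id):
        if len(self.window) == self.window.maxlen:
            oldest = self.window[0]    # evicted automatically on append
            self.counts[oldest] -= 1
            if self.counts[oldest] == 0:
                del self.counts[oldest]
        self.window.append(item_id)
        self.counts[item_id] += 1

    def frequency(self, item_id):
        return self.counts[item_id]    # f_y^W

    def distinct(self):
        return len(self.counts)        # D
```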
\input{Notations} \begin{algorithm}[t!] \caption{SWAMP} \footnotesize \ifdefined \footnotesize \fi \begin{algorithmic}[1] \ifdefined \Require{TinyTable $TinyTable$, Fingerprint Array $CFB$, integer $curr$, integer $Z$ } \Statex \textbf{initialization} \Statex $CFB \gets \bar 0$ \Statex $curr \gets 0$ \Comment{Initial index is 0}. \Statex $Z \gets 0$ \Comment{0 distinct fingerprints}. \Statex $TT\gets TinyTable$ \Statex $\entropyVariable\gets 0$ \Comment{0 entropy} \else \Statex \textbf{init: $CFB \gets \bar 0, curr \gets 0, Z \gets 0, TT\gets TinyTable, \entropyVariable\gets 0$} \fi \Function{Update }{ID $x$} \State $prevFreq \gets TT.frequency(CFB[curr])$ \State $TT.remove(CFB[curr])$ \State $CFB[curr] \gets h(x)$ \State $TT.add(CFB[curr])$ \State $xFreq \gets TT.frequency(CFB[curr])$ \State \Call {UpdateCD}{$prevFreq,xFreq$} \State \Call {UpdateEntropy}{$prevFreq,xFreq$} \State $curr \gets (curr +1)$ \textbf{mod} $W$ \EndFunction \Procedure{UpdateCD}{$prevFreq,xFreq$} \If {$prevFreq = 1$} \State $Z \gets Z -1$ \EndIf \If {$xFreq = 1$} \State $Z \gets Z +1$ \EndIf \EndProcedure \Procedure{UpdateEntropy}{$prevFreq,xFreq$} \State $PP \gets \frac{prevFreq}{W}$ \Comment{The previous probability} \State $CP \gets \frac{prevFreq-1}{W}$ \Comment{The current probability} \State $\entropyVariable \gets \entropyVariable +PP \logp{PP} - CP\logp{CP}$ \State $xPP \gets \frac{xFreq-1}{W}$ \Comment{$x$'s previous probability} \State $xCP \gets \frac{xFreq}{W}$ \Comment{$x$'s current probability} \State $\entropyVariable \gets \entropyVariable + xPP \logp{xPP} - xCP\logp{xCP}$ \EndProcedure \Function {IsMember}{ID $x$} \If {$TT.frequency(h(x))>0$} \State \Return true \EndIf \State \Return false \EndFunction \Function{Frequency }{ID $x$} \State \Return $TT.frequency(h(x))$ \EndFunction \Function {\distinctLB()}{} \State \Return $Z$ \EndFunction \Function {\distinctMLE()}{} \State $\hat{D} \gets \frac{\ln\left(1 - \frac{Z}{2^L}\right)}{\ln \left(1 - \frac{1}{2^L} \right)}$ \State 
\Return $\hat{D}$ \EndFunction \Function {\entropy()}{} \State \Return $\entropyVariable$ \EndFunction \end{algorithmic} \label{alg:SWAMP} \normalsize \end{algorithm} \subsection{Problem definitions} We start by formally defining the approximate set membership problem. \begin{definition} We say that an algorithm solves the {\sc $(W,\epsilon)$-Approximate Set Membership}{} problem if given an ID $y$, it returns true if $y\in \mathbb{S}^{W}$ and if $y \notin \mathbb{S}^{W}$, it returns false with probability of at least $1-\varepsilon$. \end{definition} The above problem is solved by SWBF~\cite{slidngBloomFilterInfocom} and by \emph{Timing Bloom filter (TBF)}~\cite{TBF}. In fact, SWAMP solves the stronger {\sc $(W,\epsilon)$-Approximate Set Multiplicity}{} problem, as defined below: \begin{definition} We say that an algorithm solves the {\sc $(W,\epsilon)$-Approximate Set Multiplicity}{} problem if given an ID $y$, it returns an estimation $\widehat{f_y^{W}}$ s.t. $\widehat{f_y^{W}}\ge f_y^{W}$ and with probability of at least $1 -\varepsilon$: ${f_y}^W = \widehat{{f_y}^W}$. \end{definition} Intuitively, the {\sc $(W,\epsilon)$-Approximate Set Multiplicity}{} problem guarantees that we always get an over-approximation of the frequency and that with probability of at least $1 -\varepsilon$ we get the exact window frequency. A simple observation shows that any algorithm that solves the {\sc $(W,\epsilon)$-Approximate Set Multiplicity}{} problem also solves the {\sc $(W,\epsilon)$-Approximate Set Membership}{} problem. Specifically, if $y \in \mathbb{S}^{W}$, then $\widehat{f_y^{W}}\ge f_y^{W}$ implies that $\widehat{f_y^{W}}\ge 1$ and we can return true. On the other hand, if $y \notin \mathbb{S}^{W}$, then ${f_y}^W = 0$ and with probability of at least $1-\varepsilon$, we get: ${f_y}^W = 0 = \widehat{{f_y}^W}$. Thus, the \emph{isMember} estimator simply returns true if $\widehat{{f_y}^W}>0$ and false otherwise.
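The reduction just described amounts to a single comparison. The sketch below is ours and assumes a hypothetical frequency oracle \texttt{approx\_frequency} with the over-approximation property:

```python
def is_member(approx_frequency, item_id):
    # Since the estimate never under-counts, f_y^W >= 1 forces the estimate
    # to be >= 1, so members are always reported; a non-member is reported
    # absent whenever the estimate is exact, i.e., with probability >= 1 - eps.
    return approx_frequency(item_id) > 0
```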
We later show that this estimator solves the {\sc $(W,\epsilon)$-Approximate Set Membership}{} problem. \begin{figure*}[t!] \center{ \subfigure[\label{subfig:SBF}Sliding Bloom filter ]{\includegraphics[width = 5.6cm]{Random_FPR.png}} \subfigure[\label{subfig:Z}Count Distinct]{\includegraphics[width = 5.6cm]{Random_CD.png}} \subfigure[\label{subfig:entropy}Entropy]{\includegraphics[width = 5.6cm]{Random_Entropy.png}} } \caption{Empirical error and theoretical guarantee for multiple functionalities (random inputs).} \label{fig:anal} \end{figure*} The goal of the {\sc $(W,\epsilon,\delta)$-Approximate Count Distinct}{} problem is to maintain an estimation of the number of distinct elements in $\mathbb{S}^{W}$. We denote their number by $D$. \begin{definition} We say that an algorithm solves the {\sc $(W,\epsilon,\delta)$-Approximate Count Distinct}{} problem if it returns an estimation $\widehat{D}$ such that $D \ge \widehat{D}$ and with probability $1-\delta$: $\widehat{D} \ge \left( {1 - {\varepsilon}} \right)D $. \end{definition} Intuitively, an algorithm that solves the {\sc $(W,\epsilon,\delta)$-Approximate Count Distinct}{} problem is able to conservatively estimate the number of distinct elements in the window and with probability of $1-\delta$, this estimate is close to the real number of distinct elements. The entropy of a window is defined as: \[H \triangleq - \sum\nolimits_{i = 1}^D {\frac{{{f_i}}}{W}} \log \left( {\frac{{{f_i}}}{W}} \right),\] where $D$ is the number of distinct elements in the window, $W$ is the total number of packets, and $f_i$ is the frequency of flow $i$. We define the window entropy estimation problem as: \begin{definition} An algorithm solves the {\sc $(W,\epsilon,\delta)$-Entropy Estimation}{} problem, if it provides an estimator $\hat{H}$ so that $H\ge\hat{H}$ and $\Pr\left(H - \hat{H} \ge \epsilon\right)\le \delta$. 
\end{definition} \subsection{SWAMP algorithm} We now present~\emph{Sliding Window Approximate Measurement Protocol (SWAMP)}. SWAMP uses a single hash function ($h$), which given an ID $(y)$, generates $L\triangleq \left\lceil\log_2(W\epsilon^{-1})\right\rceil$ random bits $h(y)$ that are called its \emph{fingerprint}. We note that $h$ only needs to be pairwise-independent and can thus be efficiently implemented using only $O(\log W)$ space. Fingerprints are then stored in a cyclic fingerprint buffer of length $W$ that is denoted $CFB$. The variable $curr$ always points to the oldest entry in the buffer. Fingerprints are also stored in TinyTable~\cite{TinyTable} that provides compact encoding and multiplicity information. \begin{figure*}[t] \center{ \ifdefined \subfigure[\label{variableEps}Window size is $2^{16}$ and varying $\varepsilon$.]{\includegraphics[width =.49\columnwidth]{W=65536_SlidingBloomFilter_EpsAxis.png}} \subfigure[\label{variableW}$\varepsilon =2^{-10}$ and varying window sizes.]{\includegraphics[width =.49\columnwidth]{eps=00009765625_SlidingBloomFilter_WinSizeAxis.png}} \subfigure{\includegraphics[width = 7.8cm]{BFLegend.PNG}} \else \ifdefined \subfigure[\label{variableEps}Window size is $2^{16}$ and varying $\varepsilon$.]{\includegraphics[width =\columnwidth]{W=65536_SlidingBloomFilter_EpsAxis.png}} \else \subfigure[\label{variableEps}Window size is $2^{16}$ and varying $\varepsilon$.]{\includegraphics[width =.96\columnwidth]{W=65536_SlidingBloomFilter_EpsAxis.png}} \fi \ifdefined \subfigure[\label{variableW}$\varepsilon =2^{-10}$ and varying window sizes.]{\includegraphics[width =\columnwidth]{eps=00009765625_SlidingBloomFilter_WinSizeAxis.png}} \subfigure{\includegraphics[width = 7.8cm]{BFLegend.PNG}} \else \subfigure[\label{variableW}$\varepsilon =2^{-10}$ and varying window sizes.]{\includegraphics[width =.96\columnwidth]{eps=00009765625_SlidingBloomFilter_WinSizeAxis.png}} \subfigure{\includegraphics[width = 7.8cm]{BFLegend.PNG}} \fi \fi } 
\caption{Memory consumption of sliding Bloom filters as a function of $W$ and $\varepsilon.$} \label{fig:W} \end{figure*} The update operation replaces the oldest fingerprint in the window with that of the newly arriving item. To do so, it updates both the cyclic fingerprint buffer ($CFB$) and TinyTable. In CFB, the fingerprint at location $curr$ is replaced with the newly arriving fingerprint. In TinyTable, we remove one occurrence of the oldest fingerprint and add the newly arriving fingerprint. The update method also updates the variable $Z$, which measures the number of distinct fingerprints in the window. $Z$ is incremented every time a new unique fingerprint is added to TinyTable, i.e., $FREQUENCY(y)$ changes from $0$ to $1$, where the method $FREQUENCY(y)$ receives a fingerprint and returns its frequency as provided by TinyTable. Similarly, denoting by $x$ the item whose fingerprint is removed, if $FREQUENCY(x)$ changes to $0$, we decrement~$Z$. SWAMP has two methods to estimate the number of distinct flows: \distinctLB{} simply returns $Z$, which yields a conservative estimator of $D$, while \distinctMLE{} is an approximation of its Maximum Likelihood Estimator. Clearly, \distinctMLE{} is more accurate than \distinctLB{}, but its estimation error is two-sided. Pseudocode is provided in Algorithm~\ref{alg:SWAMP} and an illustration is given by Figure~\ref{fig:example}.
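To make the update path of Algorithm~\ref{alg:SWAMP} concrete, the following sketch of ours mirrors its logic in plain Python. A \texttt{Counter} stands in for TinyTable and a cryptographic hash for the pairwise-independent $h$, so the sketch reproduces the behavior but not the space bounds, and the entropy estimate is recomputed from scratch rather than maintained incrementally:

```python
import math
from collections import Counter
from hashlib import blake2b

class Swamp:
    # Illustrative sketch of Algorithm alg:SWAMP, not the space-optimal version.
    def __init__(self, W, eps):
        self.W = W
        self.L = math.ceil(math.log2(W / eps))  # fingerprint length in bits
        self.cfb = [None] * W                   # cyclic fingerprint buffer
        self.curr = 0                           # index of the oldest entry
        self.table = Counter()                  # fingerprint multiplicities
        self.Z = 0                              # distinct fingerprints in window

    def _fingerprint(self, item_id):
        digest = blake2b(str(item_id).encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % (2 ** self.L)

    def update(self, item_id):
        old = self.cfb[self.curr]
        if old is not None:                     # evict the oldest fingerprint
            self.table[old] -= 1
            if self.table[old] == 0:
                del self.table[old]
                self.Z -= 1
        fp = self._fingerprint(item_id)
        self.cfb[self.curr] = fp
        self.table[fp] += 1
        if self.table[fp] == 1:                 # a new unique fingerprint
            self.Z += 1
        self.curr = (self.curr + 1) % self.W

    def frequency(self, item_id):               # over-approximates f_y^W
        return self.table[self._fingerprint(item_id)]

    def distinct_lb(self):                      # one-sided estimator
        return self.Z

    def distinct_mle(self):                     # MLE-based estimator
        n = 2 ** self.L
        return math.log(1 - self.Z / n) / math.log(1 - 1 / n)

    def entropy(self):
        # \hat{H} recomputed from scratch; the algorithm maintains it incrementally.
        return -sum((c / self.W) * math.log2(c / self.W)
                    for c in self.table.values())
```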
\section{Introduction} Speech enhancement (SE) aims to improve perceptual speech quality and intelligibility by removing the background noises contained in the noisy input signal, and it is usually exploited as a front-end pre-processing module in many applications, such as automatic speech recognition (ASR), speaker diarization, and hearing aids~\cite{9053317, 9664313}. With the development of deep learning, neural network-based SE models~\cite{8707065,8701652,defossez2020real} can even outperform their traditional counterparts~\cite{1163209}; they can be divided into two categories: time-frequency domain~\cite{6932438,8462068,9103053,hu20g_interspeech} and time-domain methods~\cite{8707065,8701652,defossez2020real}. In the time-frequency domain, a masking matrix is usually first estimated via supervised training and then multiplied by the noisy spectrum to estimate the clean spectrum, which is finally transformed into the time domain to recover the clean speech. For time-domain SE models, a convolutional encoder-decoder or Unet~\cite{ronneberger2015u} framework can be utilized to predict the clean speech waveform directly from noisy speech waveforms, where long short-term memory (LSTM)~\cite{defossez2020real} or self-attention~\cite{9746169} is adopted to model the temporal information. It has been experimentally shown that these methods can improve speech quality and intelligibility to some extent. In the speech community, self-supervised pre-training models have developed rapidly recently; they are pre-trained using large amounts of unlabeled data and then transferred to downstream tasks, e.g., ASR and speaker recognition. For example, contrastive predictive coding (CPC)~\cite{oord2018representation} was proposed to predict future frames using a contrastive loss. Wav2vec2.0~\cite{NEURIPS2020_92d1e1eb} leverages contextual information to predict the information of the masked frames using a contrastive loss function.
HuBERT~\cite{9585401} performs offline clustering on the representations output from a middle layer of the model, which enables it to directly predict the cluster labels at the masked positions. Built on HuBERT, WavLM~\cite{chen2021wavlm} achieves state-of-the-art performance on the SUPERB benchmark~\cite{yang2021superb} by using utterance-mixing training and more unlabeled data. Several other pre-trained models, e.g.,~\cite{liu2021tera,hsu2021robust,zhu2022noise}, have also been shown to benefit downstream tasks in both clean and noisy scenes. Further, self-supervised pre-trained models can also be used to improve the SE performance. In~\cite{9746303}, thirteen pre-trained models are applied to the SE task as feature extractors that generate spectral masks to reconstruct the clean speech waveform. It was shown that the high-level features extracted by self-supervised pre-trained models are also applicable to SE compared to traditional acoustic features. In~\cite{hung2022boosting}, the SE performance is improved by combining features extracted by a self-supervised model with traditional spectral features. However, these methods can currently only be applied to offline SE tasks, and few of them consider the application of self-supervised pre-trained models to the real-time case. In addition, as it was shown in~\cite{chung2021w2v,zhu2022joint} that representation clustering can improve the noise robustness of ASR, and inspired by the clustering approach in HuBERT, discretizing the noisy speech representations might likewise be beneficial for denoising as well as for the reconstruction of clean speech waveforms. 
\begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{figure5a.pdf} \caption{An illustration of the proposed speech enhancement model.} \label{fig:figure1} \end{figure} In this paper, we therefore consider the application of a self-supervised pre-trained model to the real-time SE task. The overall configuration of the proposed model is shown in Fig.~\ref{fig:figure1}. The basic SE model adopts a Unet-based framework, where the encoder and bottleneck layers are initialized with the pre-trained WavLM model and the convolutions of the WavLM encoder are replaced with causal convolutions. A causal attention mask is adopted for the Transformer encoder in the bottleneck layer. In addition, we utilize a vector quantization (VQ) module to discretize the representation output by the bottleneck layer, which is then fed into the decoder for reconstructing clean speech waveforms. Experimental results on the Valentini dataset~\cite{Valentini2017} and an internal dataset show that using the pre-trained model improves the SE performance, and the proposed VQ module can further improve it. Some enhanced speech samples and reference code can be found at: https://zxy0001.github.io. \section{Method} \subsection{The self-supervised pre-trained WavLM model} WavLM~\cite{chen2021wavlm} is a pre-trained model based on HuBERT, which contains a convolutional encoder and a Transformer context encoder. The convolutional encoder has several layers, each containing a time-domain one-dimensional (1D) convolutional layer, a normalization layer, and a Gaussian error linear unit (GELU) activation layer. In the Transformer context encoder, relative position information is introduced into the attention network by gated relative position encoding to better model local information. In the pre-training stage, WavLM randomly transforms the input speech waveform, e.g., by mixing two waveforms or adding background noise. 
Afterwards, about 49\% of the speech signal in a sentence is masked randomly, and the discrete labels corresponding to the masked positions are predicted at the Transformer output, where the discrete labels are generated by discretizing the continuous signal via K-means clustering. It was shown that WavLM improves the performance on the SUPERB benchmark for various speech tasks with more unlabeled data and model parameters. For more details on WavLM, please refer to~\cite{chen2021wavlm}. \subsection{The model architecture} The proposed SE model is built on the U-net structure, which contains an $encoder: X \mapsto Z$, a bottleneck $f: Z \mapsto C$, a $VQ: C \mapsto Q$ and a $decoder: Q \mapsto Y$; the corresponding model structure is shown in Fig.~\ref{fig:figure1}. The encoder has $D$ layers, each containing a 1D causal convolution layer, a normalization layer and a GELU activation layer, where the convolution kernel size is $K$, the stride size is $S$ and the number of channels is $H$. The decoder also has $D$ layers, each containing a 1D causal transposed convolutional layer, a normalization layer and a GELU activation layer. A skip connection is employed between the output of the $i$-th encoder layer and the input of the $i$-th decoder layer. The bottleneck contains $N$ Transformer encoder layers, each of which contains a multi-head self-attention layer and a position-wise fully connected feed-forward layer. Skip connections and layer normalization are also utilized in each layer. Specifically, given a noisy speech waveform $\boldsymbol{x}$, the feature $\boldsymbol{z} = encoder(\boldsymbol{x})$ is obtained by the encoder, which is then input to the bottleneck layer to obtain the contextual representation $\boldsymbol{c}=f(\boldsymbol{z})$. 
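For intuition, the causal attention constraint mentioned above (each frame may attend only to itself and earlier frames) boils down to a lower-triangular boolean mask. A minimal pure-Python sketch, purely illustrative and not the paper's implementation:

```python
def causal_mask(t):
    """Boolean self-attention mask for t frames: entry [i][j] is True
    iff query frame i may attend to key frame j, i.e. j <= i
    (no access to future frames)."""
    return [[j <= i for j in range(t)] for i in range(t)]

mask = causal_mask(4)
# the first frame sees only itself; the last frame sees all four frames
```

In practice such a mask is added (as -inf on the masked positions) to the attention logits before the softmax, which is what makes the Transformer bottleneck streamable.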
Given the context representation $\boldsymbol{c}$, the discrete representation is calculated by the VQ module as $\boldsymbol{q}=VQ(\boldsymbol{c})$, which is finally exploited by the decoder to reconstruct the enhanced speech waveform, i.e., $\boldsymbol{\hat{y}}=decoder(\boldsymbol{q})$. As shown in Fig.~\ref{fig:figure1}, in order to ensure the causality of the proposed SE model (i.e., at time frame $t$ only the information of previous frames can be used), we adopt a causal attention masking matrix to mask out the information after time $t$. Since the convolutional receptive fields of the WavLM model and the SE model are different, we only utilize a few layers of the WavLM-base\footnote{https://github.com/microsoft/unilm/tree/master/wavlm} model to initialize the encoder and bottleneck of the SE model. We obtain a finite set of representations by discretizing the bottleneck layer output via product quantization~\cite{5432202}, which is employed similarly to~\cite{NEURIPS2020_92d1e1eb}. Given $G$ codebooks, each with $V$ learnable $d$-dimensional codewords, we first map the bottleneck representations $\boldsymbol{c}$ to logits $\boldsymbol{\rm l} \in \mathbb{R}^{G \times V}$, and then select the discrete vectors by the Gumbel-softmax~\cite{jang2016categorical} operation in a differentiable way. The probability of choosing the $v$-th codeword from the $g$-th codebook is given by \begin{equation} \overline{p}_{g,v}=\frac{\exp\left((\overline{l}_{g,v}+n_{v})/\tau\right)}{\sum_{k=1}^{V}\exp\left((\overline{l}_{g,k}+n_k)/\tau\right)}, \label{eq1} \end{equation} where $\tau$ is a non-negative temperature coefficient and $n_v=-\log(-\log(u))$ with $u$ uniformly distributed over [0, 1]. In the forward stage, the codeword index is selected as $i={\rm argmax}_{v}\,\overline{p}_{g,v}$, and in the backward propagation stage the true gradient of the Gumbel-softmax output is utilized. 
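The Gumbel-softmax codeword selection above can be sketched in a few lines of pure Python (hypothetical logit values; the real module operates on learned logits per codebook group):

```python
import math
import random

def gumbel_softmax_select(logits, tau=1.0):
    """Tempered Gumbel-softmax over one codebook group.

    logits: list of V raw scores l_{g,v} for a single group g.
    Returns (probs, index): the softmax probabilities and the argmax
    codeword index used in the forward pass.
    """
    noisy = []
    for l in logits:
        # u ~ Uniform(0, 1), clipped away from 0 and 1 for numerical safety
        u = min(max(random.random(), 1e-12), 1.0 - 1e-12)
        n = -math.log(-math.log(u))          # Gumbel noise
        noisy.append((l + n) / tau)
    m = max(noisy)                           # subtract max for stability
    exps = [math.exp(x - m) for x in noisy]
    z = sum(exps)
    probs = [e / z for e in exps]
    return probs, max(range(len(probs)), key=probs.__getitem__)

probs, idx = gumbel_softmax_select([0.1, 2.0, -1.0, 0.5], tau=0.5)
```

In a trainable model the hard argmax is used in the forward pass while gradients flow through the soft probabilities (the straight-through trick described in the text).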
The quantization is encouraged to utilize more codewords by maximizing the entropy of the softmax distribution, so we use the diversity loss function $L_d$, which is given by \begin{equation} L_{d} = \frac{1}{GV}\sum_{g=1}^{G}\sum_{v=1}^{V}\overline{p}_{g,v}\log \overline{p}_{g,v}. \label{eq2} \end{equation} \subsection{The total loss function} The total loss function $L_{\rm total}$ consists of an SE loss $L_{\rm se}$ and the diversity loss $L_d$, which reads \begin{equation} L_{\rm total}=L_{\rm se} + \lambda L_{d}, \label{eq3} \end{equation} where $\lambda$ is a balancing hyper-parameter. Note that the SE loss function $L_{\rm se}$ contains both a time-domain loss and a frequency-domain loss, which is derived similarly to~\cite{defossez2020real,9746169}, where the time-domain component adopts the $\ell_1$ loss and the frequency-domain component utilizes the multi-resolution STFT loss. As a result, $L_{\rm se}$ can be formulated as \begin{equation} L_{\rm se} = \frac{1}{T} \left(\left\| \boldsymbol{y} - \boldsymbol{\hat{y}} \right\|_1 + \sum_{i=1}^{M}L_{\rm stft}^{(i)}(\boldsymbol{y}, \boldsymbol{\hat{y}})\right), \label{eq4} \end{equation} where \begin{align} L_{\rm stft}(\boldsymbol{y},\boldsymbol{\hat{y}}) &= L_{\rm sc}(\boldsymbol{y},\boldsymbol{\hat{y}}) + L_{\rm mag}(\boldsymbol{y},\boldsymbol{\hat{y}}), \label{eq5} \\ L_{\rm sc}(\boldsymbol{y},\boldsymbol{\hat{y}}) &= \frac{\left\| | {\rm STFT}(\boldsymbol{y})| - | {\rm STFT}(\boldsymbol{\hat{y}})| \right\|_F}{\left\| {\rm STFT}(\boldsymbol{y}) \right\|_F}, \label{eq6}\\ L_{\rm mag}(\boldsymbol{y},\boldsymbol{\hat{y}}) &= \frac{1}{T} \left\| \log |{\rm STFT}(\boldsymbol{y})|-\log |{\rm STFT}(\boldsymbol{\hat{y}})| \right\|_1, \label{eq7} \end{align} with $M$ denoting the number of STFT loss functions and $\|\cdot\|_F$ the Frobenius norm. 
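The per-resolution spectral-convergence and log-magnitude terms can be sanity-checked numerically. The toy sketch below evaluates them on precomputed magnitude spectrograms (made-up values; the actual model computes these from multi-resolution STFTs of the waveforms, and the normalization constant is an assumption here — the bin count rather than the waveform length):

```python
import math

def frobenius(mat):
    """Frobenius norm of a nested-list matrix."""
    return math.sqrt(sum(x * x for row in mat for x in row))

def stft_loss(mag_y, mag_yhat):
    """Spectral convergence + log-magnitude loss for one STFT resolution.

    mag_y, mag_yhat: magnitude spectrograms |STFT(y)| and |STFT(y_hat)|,
    given as equally shaped nested lists of positive floats.
    """
    diff = [[a - b for a, b in zip(ry, rh)] for ry, rh in zip(mag_y, mag_yhat)]
    l_sc = frobenius(diff) / frobenius(mag_y)
    n_bins = sum(len(row) for row in mag_y)   # number of time-frequency bins
    l_mag = sum(abs(math.log(a) - math.log(b))
                for ry, rh in zip(mag_y, mag_yhat)
                for a, b in zip(ry, rh)) / n_bins
    return l_sc + l_mag

loss = stft_loss([[1.0, 2.0], [3.0, 4.0]], [[1.0, 2.0], [3.0, 4.0]])
# identical spectrograms -> zero loss
```

The multi-resolution loss is then just this quantity summed over the different FFT/hop/window configurations.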
The multi-resolution loss $L_{\rm stft}^{(i)}$ utilizes STFT losses with the number of FFT bins chosen from \{512, 1024, 2048\}, hop sizes from \{50, 120, 240\} and window lengths from \{240, 600, 1200\}, respectively. \section{Experimental setup} \subsection{Datasets and evaluation metrics} The Valentini~\cite{Valentini2017} dataset contains 28.4 hours of clean and noisy speech pairs at a sampling rate of 48 kHz. These data are collected from 84 speakers at 4 signal-to-noise ratios (SNRs) (i.e., 0, 5, 10 and 15 dB) in the training set and 4 SNRs (2.5, 7.5, 12.5 and 17.5 dB) in the test set. We downsampled the raw speech waveforms to 16 kHz and then applied the same remix and bandmask augmentation methods as in~\cite{defossez2020real}. In addition, in order to show the generalizability of the proposed model, we also evaluate its performance on a self-built internal dataset, which is in principle more challenging than the Valentini dataset. The training set of the internal dataset involves 100 hours of clean and noisy speech from Librispeech~\cite{7178964}, where the noisy mixtures are generated by mixing the clean speech with noise sources from the Freesound~\cite{font2013freesound} noise dataset at an SNR randomly chosen from [0, 10] dB. The testing set has 15 hours of speech consisting of 5000 utterances from 10 speakers recorded in real traffic and in-car environments. The sampling frequency is 16 kHz. The performance of the proposed method is evaluated using objective metrics, including 1) perceptual evaluation of speech quality (PESQ)~\cite{pesq}, 2) short-time objective intelligibility (STOI)~\cite{5713237}, 3) mean opinion score (MOS) prediction of distortion of the speech signal (SIG)~\cite{4389058}, 4) MOS prediction of intrusiveness of background noise (BAK)~\cite{4389058}, and 5) MOS prediction of overall quality (OVRL)~\cite{4389058}. 
Some enhanced speech samples and reference code can be downloaded from {https://zxy0001.github.io}. \subsection{Model configuration} In order to enable the initialization of the proposed SE model using the pre-trained WavLM, the entire SE model utilizes a 3-layer ($D$ = 3) 1D convolutional encoder, a 2-layer ($N$ = 2) Transformer encoder for the bottleneck and a 3-layer 1D transposed convolutional decoder. Both the encoder and decoder have a dimension of 512 ($H$ = 512). The Transformer encoder in the bottleneck layer has a dimension of 768, the feedforward neural network has a dimension of 2048, and the self-attention module has 12 heads. For the 1D convolutions in the encoder, the convolution kernel sizes and stride sizes are (10, 3, 3) and (5, 2, 2), respectively. For the VQ module, we adopt $G$ = 1 codebook with $V$ = 320 learnable 128-dimensional codewords, and $\lambda$ in (\ref{eq3}) is set to 0.01 due to the range of the diversity loss. We use the Adam optimizer to train the SE model for 1M iterations, where the batch size is 64 and the maximum learning rate is $2 \times 10^{-4}$. All models are trained on 4 Tesla-V100-32G GPUs. 
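Given the kernel sizes (10, 3, 3) and strides (5, 2, 2) above, the encoder's overall frame rate and receptive field follow from the standard stacked-convolution formulas; a small helper (an illustrative check, not part of the paper's code) makes this concrete:

```python
def conv_stack_geometry(kernels, strides):
    """Total stride (input samples per output frame) and receptive field
    (input samples seen by one output frame) of stacked 1D convolutions,
    assuming no padding between layers."""
    total_stride, rf = 1, 1
    for k, s in zip(kernels, strides):
        rf += (k - 1) * total_stride   # each new layer widens the view
        total_stride *= s              # and decimates the frame rate
    return total_stride, rf

stride, rf = conv_stack_geometry((10, 3, 3), (5, 2, 2))
# stride = 20 samples/frame, receptive field = 40 samples
```

At 16 kHz this corresponds to one frame every 1.25 ms covering 2.5 ms of signal, i.e. a much shallower stack than the full WavLM-base encoder, which is consistent with the paper's remark that the two receptive fields differ.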
\begin{table*}[] \caption{Performance comparison of objective evaluation metrics on the Valentini test set.} \label{tab:table1} \centering \begin{tabular}{l|c|ccccc|c} \hline \textbf{Model} & \textbf{Domain} & \begin{tabular}[c]{@{}c@{}}\textbf{PESQ (WB)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{STOI (\%)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{pred.}\\ \textbf{CSIG}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{pred.}\\ \textbf{CBAK}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{pred.}\\ \textbf{COVL}\end{tabular} & \textbf{Causality} \\ \hline Noisy & - & 1.97 & 92.1 & 3.35 & 2.44 & 2.63 & - \\ \hline SEGAN~\cite{Pascual2017} & waveform & 2.16 & - & 3.48 & 2.94 & 2.80 & No \\ Wave U-Net~\cite{macartney2018improved} & waveform & 2.40 & - & 3.52 & 3.24 & 2.96 & No \\ SEGAN-D~\cite{9201348} & waveform & 2.39 & - & 3.46 & 3.11 & 3.50 & No \\ MMSE-GAN~\cite{8462068} & time-frequency & 2.53 & 93.0 & 3.80 & 3.12 & 3.14 & No \\ MetricGAN~\cite{pmlr-v97-fu19b} & waveform & 2.86 & - & 3.99 & 3.18 & 3.42 & No \\ DeepMMSE~\cite{9066933} & waveform & 2.95 & 94.0 & 4.28 & 3.46 & 3.64 & No \\ DEMUCS~\cite{defossez2020real} & waveform & 3.07 & 95.0 & 4.31 & 3.40 & 3.63 & No \\ CleanUNet~\cite{9746169} & waveform & 3.09 & 95.8 & 4.38 & 3.47 & 3.69 & No \\ \hline Ours (D=3, N=2) & waveform & \textbf{3.13} & \textbf{96.1} & \textbf{4.46} & \textbf{3.56} & \textbf{3.82} & No \\ \hline \hline Wiener & - & 2.22 & 93.0 & 3.23 & 2.68 & 2.67 & Yes \\ DeepMMSE~\cite{9066933} & waveform & 2.77 & 93.0 & 4.14 & 3.32 & 3.46 & Yes \\ DEMUCS~\cite{defossez2020real} & waveform & 2.93 & 95.0 & 4.22 & 3.25 & 3.52 & Yes \\ CleanUNet~\cite{9746169} & waveform & 2.91 & 95.6 & 4.34 & 3.42 & 3.65 & Yes \\ \hline Ours (D=3, N=2) & waveform & \textbf{3.02} & \textbf{95.8} & \textbf{4.40} & \textbf{3.49} & \textbf{3.72} & Yes \\ \hline \end{tabular} \end{table*} \section{Experimental results} \textbf{Comparison methods: } We first measure objective evaluation metrics on the 
noisy test set as a baseline. The time-domain SE comparison methods include SEGAN~\cite{Pascual2017}, SEGAN-D~\cite{9201348} and MetricGAN~\cite{pmlr-v97-fu19b}, which are based on generative adversarial networks (GANs). The time-frequency domain SE comparison approaches include MMSE-GAN~\cite{8462068}, which also relies on the GAN model, and DeepMMSE~\cite{9066933}, which estimates the noise power spectral density using the minimum mean square error (MMSE) criterion. Wave U-Net~\cite{macartney2018improved}, DEMUCS~\cite{defossez2020real} and CleanUNet~\cite{9746169} are also compared, which utilize the Unet network for time-domain SE. Table~\ref{tab:table1} shows the experimental results of the comparison methods on the Valentini dataset. It is clear that, using the pre-trained WavLM for initialization and the extra quantization module, the proposed model achieves a PESQ of 3.13 and an STOI of 96.1\%, respectively, which outperforms DEMUCS and CleanUNet. For the real-time SE case, our model achieves a PESQ of 3.02 and an STOI of 95.8\%, respectively, which are also better than the best off-the-shelf models, e.g., DEMUCS and CleanUNet. As the average input SNR of the Valentini dataset is moderately high, in order to show the generality of the proposed method we employed the more challenging internal dataset for testing, and the obtained results are shown in Table~\ref{tab:table2}. As CleanUNet and DEMUCS obtain the best performance among all chosen comparison methods, we only compare them with the proposed method in the sequel. We can see that the performance of the proposed method with random initialization is comparable to that of the CleanUNet model, since both employ self-attention modules as the bottleneck and the amounts of model parameters are roughly equal. In case WavLM is adopted to initialize our model, the performance is clearly improved compared to the random initialization. 
Note that using the VQ module on top of the WavLM-based initialization does not always yield a positive performance gain; this depends on the number of learnable codewords. When the context representation is quantized with sufficient codewords, the proposed VQ module can further improve the SE performance in terms of all metrics (e.g., Init V = 480 vs. Init). This is due to the fact that when the number of codewords is small, the quantization noise heavily affects the context representation as well as the diversity loss function, and a certain loss of speech context information becomes inevitable. In general, the fewer the codewords, the greater the information loss and the higher the quantization noise variance. \begin{table}[] \caption{Performance comparison of objective evaluation metrics for real-time speech enhancement on the internal test set.} \label{tab:table2} \centering \begin{tabular}{l|ccccc} \hline \textbf{Model} & \begin{tabular}[c]{@{}c@{}}\textbf{PESQ}\\ \textbf{(WB)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{STOI}\\ \textbf{(\%)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{pred.}\\ \textbf{CSIG}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{pred.}\\ \textbf{CBAK}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{pred.}\\ \textbf{COVL}\end{tabular} \\ \hline Noisy & 1.32 & 80.8 & 2.54 & 1.87 & 1.86 \\ \hline DEMUCS~\cite{defossez2020real} & 1.89 & 86.2 & 3.25 & 2.37 & 2.50 \\ CleanUNet~\cite{9746169} & 2.17 & 87.1 & 3.61 & 2.54 & 2.62 \\ \hline Ours (random init) & 2.04 & 86.5 & 3.53 & 2.49 & 2.55 \\ Ours (Init) & 2.29 & 88.1 & 3.70 & 2.61 & 2.72 \\ Ours (Init, V=160) & 2.20 & 87.3 & 3.65 & 2.59 & 2.65 \\ Ours (Init, V=320) & 2.35 & 88.6 & 3.79 & 2.66 & 2.78 \\ Ours (Init, V=480) & \textbf{2.37} & \textbf{88.9} & \textbf{3.80} & \textbf{2.67} & \textbf{2.81} \\ \hline \end{tabular} \end{table} This can also be seen from the different choices of $V$ in Table~\ref{tab:table2}, as the SE performance of the proposed 
model decreases as the number of learnable codewords decreases. Finally, we visualize the filterbank features of speech signals in Fig.~\ref{fig:figure2}, from which it can be clearly seen that SE can remove the noise component from the noisy speech to some extent. In addition, for the causal models we evaluate the real-time factor (RTF), which is defined as the processing time required per second of speech. The RTF is computed on a 12-core Intel E5-2680 v3 CPU. We find that DEMUCS, CleanUNet and the proposed method obtain a comparable RTF of 0.66, which indicates that the proposed model does not increase the time complexity. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{fbank2.pdf} \caption{A filterbank feature of an example speech signal.} \label{fig:figure2} \vspace{-0.3cm} \end{figure} \section{Conclusions} In this paper, we investigated the effect of using pre-trained models for initialization and vector quantization of context representations on the real-time SE problem. It was shown that the pre-trained WavLM-based initialization of the encoder and bottleneck layers of the SE model always leads to a performance improvement compared to random initialization, while the VQ module should be utilized with care. Only when the size of the representation codebook is large enough (so that the quantization noise does not dominate) can it further improve the SE performance on the basis of the WavLM-based initialization. In principle, applying the quantization module to discretize the bottleneck layer output functions as a preliminary noise reduction processor on the speech representation. \section*{Acknowledgment} This work was supported by the National Natural Science Foundation of China (62101523), Hefei Municipal Natural Science Foundation (2022012) and the Fundamental Research Funds for the Central Universities. \bibliographystyle{IEEEbib}
Q: Add additional attribute to an existing document if the attribute doesn't exist elasticsearch

I have a specific requirement where I have to add an additional attribute to an Elasticsearch index which has n documents. This has to be done only if the documents don't contain the attribute. This task basically involves 2 steps: 1) searching, 2) updating.

I know how to do this with multiple queries. But it would be great if I managed to do this in a single query. Is it possible? If yes, can someone tell me how this can be done?

A: You can use update by query combined with the exists query to update and add the new field to only those documents which don't contain the attribute.

For example, suppose only one document contains the field attrib2 and the others don't have that field.

curl -XPUT "http://localhost:9200/my_test_index/doc/1" -H 'Content-Type: application/json' -d'
{
  "attrib1": "value1"
}'

curl -XPUT "http://localhost:9200/my_test_index/doc/2" -H 'Content-Type: application/json' -d'
{
  "attrib1": "value21"
}'

curl -XPUT "http://localhost:9200/my_test_index/doc/3" -H 'Content-Type: application/json' -d'
{
  "attrib1": "value31",
  "attrib2": "value32"
}'

The following update by query will do the job.

curl -XPOST "http://localhost:9200/my_test_index/_update_by_query" -H 'Content-Type: application/json' -d'
{
  "script": {
    "lang": "painless",
    "source": "ctx._source.attrib2 = params.attrib2",
    "params": {
      "attrib2": "new_value_for_attrib2"
    }
  },
  "query": {
    "bool": {
      "must_not": [
        {
          "exists": {
            "field": "attrib2"
          }
        }
      ]
    }
  }
}'

It will set the new value new_value_for_attrib2 on the field attrib2 in only those documents which don't already have that field.
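If you prefer to build the request programmatically, the same body can be assembled in Python and sent with any HTTP client or the official Elasticsearch client (the helper name below is mine, not part of any API; the field and value are the example's):

```python
import json

def add_field_if_missing_body(field, value):
    """Body for POST /<index>/_update_by_query: set `field` to `value`
    only on documents where `field` does not yet exist."""
    return {
        "script": {
            "lang": "painless",
            "source": "ctx._source." + field + " = params." + field,
            "params": {field: value},
        },
        "query": {
            "bool": {"must_not": [{"exists": {"field": field}}]},
        },
    }

body = json.dumps(add_field_if_missing_body("attrib2", "new_value_for_attrib2"))
```

The `body` string is exactly the JSON payload of the curl command above, so the same must_not/exists filter guarantees existing values are never overwritten.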
krios.dll is a part of the software "shopperz" by Jabuticaba Ltd. It is technically not a virus, but it exhibits many malicious traits, such as the capability to hook deep into the operating system, hijack the browser, and interfere with your daily computer use. The industry generally refers to it as a "PUP". "shopperz" is often bundled with freeware or shareware applications. Press the "Windows" key + "R" to bring up the "Run" dialogue. In the "Run" dialogue, type "msconfig" and hit Enter. Under the "Startup" tab, uncheck any boxes relating to krios.dll. Alternatively, press CTRL+SHIFT+ESC at the same time to bring up the Task Manager. Under the "Start-up" tab, uncheck any boxes relating to krios.dll.
# Keep the logger running: relaunch it whenever it exits, recording each restart.
while true; do
    ./logger
    echo "restarting logger..." >> /home/comran/startup_log.txt
done
RD, steady and go

Recurring deposits are becoming more popular, but they have their limitations

By Abhinav Singh May 06, 2018 17:18 IST

Asish Saraf, 39, is a regular investor in recurring deposit schemes. "It is like balancing your investment portfolio," says the chief audit executive of DHFL in Mumbai. "Investment in RDs also depends on your risk appetite and I, being from an audit background, am a bit conservative in my approach. I also look towards low-risk investments like RDs, where your returns are low, but fixed. Investments in market-linked plans such as mutual funds carry a huge risk, as the markets can fluctuate and, at times, can be volatile. At the end of the day, the returns depend on the performance of the mutual fund in the market."

Undoubtedly, RDs have gained popularity in India primarily because of the extensive network of banks and post offices. Their monthly investment requirement, which can be as low as Rs 50 a month, makes them the best option for small savers. While their assured returns have made them popular among investors averse to risk, the lack of awareness about superior, low-risk instruments, like debt mutual funds, has kept that loyal base intact. "Recurring deposit is most advantageous for those who lack financial discipline," says Naveen Kukreja, CEO and cofounder of Paisabazaar.com. "By ensuring automatic deduction of a preset amount at a pre-determined date, RD ensures forced savings. 
With a minimum monthly deposit, they are also ideal for small savers who cannot save enough to meet the minimum deposit criteria of fixed deposits. While their tenure can range anywhere from six months to 10 years, one can close one's RD prematurely by paying a premature withdrawal penalty. Alternatively, one can avail a loan by using one's RD as security."

Experts such as Pralay Mondal, senior group president, retail and business banking at Yes Bank, feel that the reason for the sudden popularity is an increasingly volatile environment. "Customers are looking at safer avenues of investment," he said. "Recurring deposit, a traditional choice of investment, offers simplicity in its product offering, while preserving investor gains. RD not only offers stability and reliability of rates, but is also a convenient means of investment for small investors. With the median income range in India well below that in developed economies, the majority of the populace is looking at stability of returns over volatile or higher bets."

Rajiv M. Ranjan, founder and chief managing director of Mumbai-based BigWin Infotech, says that right from childhood, we imbibe the habit of saving. And, consumers are increasingly getting educated and understanding the power of short-term savings, which is playing a key role in fulfilling their short-term financial goals. "Currently, RD schemes are gaining preference because they allow investment from people who do not have a lump sum amount, but are still looking to invest a set amount every month for a predefined tenure," he says. "Also, leading banks are now offering attractive interest rates, so customers can gain the maximum out of their investments. At the same time, consumers' financial goals are changing. 
They are doing financial planning to secure short-term goals such as yearly education fees for their children, managing marriage expenses, vacations abroad, home furnishings and renovation."

Ranjan adds that, ever since the rise of the fintech industry in India, banking operations are now at the customer's fingertips. This makes owning an RD account much more appealing. Navin Chandani, chief business development officer at BankBazaar.com, says that RDs allow people to save in a regular and structured manner. And, as RDs have a specific time frame, they allow people to plan their goals and save towards them. "Unlike earlier times, where you had to go to the bank to open an RD and deposit money every month, all this can be done online in a matter of minutes. All these factors, when put together, make RDs a very attractive proposition," he said.

However, RDs have disadvantages, too. The biggest is their low returns. "For those in the higher tax slabs, the post-tax returns of RDs hardly beat the inflation rate," says Kukreja. "This makes RDs unsuitable for meeting mid- and long-term financial goals. If one is willing to take a little extra risk, a systematic investment plan in direct plans of ultra-short term and short-term debt funds can offer superior post-tax returns for meeting short-term goals. Those investing for long-term goals should opt for SIPs in direct plans of equity mutual funds. Besides that, there are penalties levied on missing monthly instalments and premature withdrawals. If incurred, these two penalties can significantly dent the rate of return on your RD."

Mondal also feels that the RD locks in the interest rate, thereby ensuring nil volatility in rates. "While it is pro investor in a declining interest rate regime, it may not be the same in a contrasting environment," he says. Experts like Ranjan say that, with RD, you do not have the privilege of withdrawing any part of the money until the term is over. 
"If you are looking for an instrument that allows easy liquidity, RDs are a bad fit," he says. Interestingly, RDs provide returns similar to fixed deposits, and have the same tax rules. So, they are not enough to build a retirement corpus over the long term, as the returns after tax would not be inflation-proof. "In the case of RDs, one cannot change one's deposit amount, regardless of your financial situation at the moment," said Chandani. "This can be a problem for investors who have fluctuating incomes."

RD VERSUS FD

Experts such as Navin Chandani, chief business development officer at BankBazaar.com, say that the primary difference between FDs and RDs is that the former is a one-shot investment for a set period, whereas the latter is a periodic investment for a fixed duration. "The interest rates and tax liabilities of RDs and FDs are similar. However, FDs tend to give a higher rate of return," he said. "Say you invest Rs 24,000 in an FD at the start of the year versus Rs 2,000 per month in a recurring deposit for a year. Both these products offer you an 8 per cent rate of interest compounded quarterly. At the end of the year, the FD will give you an interest of approximately Rs 1,650, while the RD will give you an interest of approximately Rs 1,060. In an FD, you invest a lump sum amount that earns interest for one year. However, in an RD, the first instalment earns interest for a 12-month period, the second for 11 months, the third for 10 months and so on. Hence, the FD gives a higher maturity amount."

Similarly, banking experts point out that RDs and FDs differ in the timing of the investment. "The primary reason is that in an FD you invest a lump sum amount, and so the entire money earns interest for one year, and, hence, the customer will have a higher yield," said Pralay Mondal, senior group president, retail and business banking at Yes Bank. "However, RDs inculcate the discipline of saving, which helps people in the long run."
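The FD-versus-RD comparison above can be sanity-checked with textbook formulas. The sketch below uses quarterly compounding for the FD and a simple monthly-accrual approximation for the RD, so the exact rupee figures differ somewhat from the article's bank-quoted approximations, but the ordering it describes (the FD earns more than the RD for the same outlay) holds:

```python
def fd_interest(principal, annual_rate, quarters=4):
    """Interest on a lump-sum fixed deposit, compounded quarterly for one year."""
    return principal * ((1 + annual_rate / 4) ** quarters - 1)

def rd_interest(monthly, annual_rate, months=12):
    """Simple-interest approximation for a recurring deposit: the i-th
    instalment earns interest for the remaining (months - i + 1) months,
    so the instalments collectively earn 12 + 11 + ... + 1 instalment-months."""
    total_instalment_months = sum(range(1, months + 1))  # 78 for one year
    return monthly * annual_rate * total_instalment_months / 12

fd = fd_interest(24_000, 0.08)  # lump sum, as in the article's example
rd = rd_interest(2_000, 0.08)   # Rs 2,000 per month for a year
```

Under these assumptions the FD earns roughly Rs 1,978 and the RD roughly Rs 1,040, reflecting the same gap the article attributes to the first instalment alone earning interest for the full year.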
Q: Conexión Android Studio con base de datos en Xampp He estado trabajando en una aplicación en Android para conectarme a una base de datos. El código que uso es este: public class MainActivity extends AppCompatActivity { private EditText editnombre, editApellido; private Button btnConectar; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); editnombre = (EditText) findViewById( R.id.editNombre ); editApellido = (EditText) findViewById( R.id.editApellido); btnConectar = (Button) findViewById( R.id.btnConectar ); btnConectar.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { ejecutarServicio( "http://192.168.1.106:8080/pruebaDBandroid/insertar_datos.php" ); } }); } public void ejecutarServicio(String url){ StringRequest stringRequest = new StringRequest(Request.Method.POST, url, new Response.Listener<String>() { @Override public void onResponse(String response) { Toast.makeText(getApplicationContext(), "Operacion exitosa", Toast.LENGTH_SHORT).show(); } }, new Response.ErrorListener(){ @Override public void onErrorResponse(VolleyError error){ Toast.makeText(getApplicationContext(), error.toString(), Toast.LENGTH_SHORT).show(); } }){ @Override protected Map<String, String> getParams() throws AuthFailureError{ Map<String, String> parametros = new HashMap<String, String>(); parametros.put("nombre", editnombre.getText().toString()); parametros.put("apellido", editApellido.getText().toString()); return parametros; } }; RequestQueue requestQueue = Volley.newRequestQueue(this); requestQueue.add(stringRequest); } } He agregado también la librería para hacer las peticiones al web service: implementation 'com.android.volley:volley:1.1.0' Así como los permisos para el Internet: <uses-permission android:name="android.permission.INTERNET"></uses-permission> Cuando ejecuto el código recibo el mensaje "operación exitosa" que he colocado dentro del método 
ejecutarServicio, which tells me the application is running, but for some reason when I open localhost and look at my database there is no new data; the values are not being inserted. What could be wrong? These are my PHP files:

insertar_datos.php:

<?php
include 'conexion.php';
$nombre=$_POST['nombre'];
$apellido=$_POST['apellido'];
$consulta = "INSERT INTO data VALUES('".$nombre."', '".$apellido."')";
mysqli_query( $conexion,$consulta ) or die( mysqli_error() );
mysqli_close( $conexion );
?>

conexion.php:

<?php
$hostname = 'localhost';
$database = 'db1';
$username = 'root';
$password = '1234';
$conexion = new mysqli( $hostname, $username, $password,$database );
if($conexion->connect_errno){
    echo "Lo sentimos, el sitio web esta experimentando problemas";
}
?>

A: Welcome to Stack Overflow. In my case it was impossible to find the values passed from Android to the server using $_POST. The only way I was able to retrieve the data was to send it through the getHeaders method and, on the server, read the headers with apache_request_headers(). One peculiarity is that apache_request_headers() converts the first letter of each key to uppercase. For consistency, you can therefore already write the keys that way in Android to avoid confusion. The code would be:

Android

@Override
public Map getHeaders() {
    HashMap headers = new HashMap();
    headers.put("Content-Type", "text/plain; charset=utf-8");
    headers.put("Nombre", "Pedro");
    headers.put("Apellido", "García");
    return headers;
}

PHP

Allow me to improve several things in your PHP code. For example, in the file that receives the data you need to control the flow, always producing a message that at least tells the user what happened. You should not write code that, in certain situations, says nothing at all: the result for the client would be a blank screen, with no idea of what went wrong.
insertar_datos.php

<?php
$_POST=apache_request_headers();
/* For testing only --------------------------------------*/
var_dump($_POST);
/* ------------------------------------------------------*/
$nombre   = !empty($_POST['Nombre'])   ? $_POST['Nombre']   : NULL;
$apellido = !empty($_POST['Apellido']) ? $_POST['Apellido'] : NULL;
if ($nombre && $apellido) {
    include 'conexion.php';
    if($conexion){
        /* A few notes here, since I see some bad practices:
           1. Security: please use prepared statements (read about SQL injection)
           2. Get used to naming every column in the INSERT; the table may gain new
              columns in the future and a non-explicit INSERT could cause a mess.
              Replace aquiColumnaNombre and aquiColumnaApellido with the real column names
           3. We will use the object-oriented style; it is much clearer and in fact
              you already use it in the connection */
        $consulta = "INSERT INTO data (aquiColumnaNombre, aquiColumnaApellido) VALUES(?,?)";
        if ( $stmt=$conexion->prepare($consulta) ){
            $stmt->bind_param("ss", $nombre,$apellido);
            $stmt->execute();
            $txtOut="Filas insertadas: ".$conexion->affected_rows;
            $conexion=NULL;
        } else {
            $txtOut="Error: ".$conexion->error;
        }
    } else {
        $txtOut="La conexión es nula";
    }
} else {
    $txtOut="No se postearon datos";
}
echo $txtOut;
?>

conexion.php

Avoid printing anything to the screen here. You may need this code later to fetch data and present it as JSON, and that echo could spoil the error block you would include in the JSON. Here, simply check the connection and, on error, set it to NULL. I have also set the utf8 character set; it is worth doing to avoid problems with accents and unusual characters.
<?php
$hostname = 'localhost';
$database = 'db1';
$username = 'root';
$password = '1234';
$conexion = new mysqli( $hostname, $username, $password,$database );
$conexion->set_charset("utf8");
if($conexion->connect_errno){
    $conexion=NULL;
}
?>

P.S. If the code tells you "No se postearon datos", inspect what the headers contain via:

var_dump($_POST);

In my case the data shows up correctly:

array(8) { ["Content-Length"]=> string(1) "0" ["Content-Type"]=> string(25) "text/plain; charset=utf-8" ["Accept-Encoding"]=> string(4) "gzip" ["Apellido"]=> string(7) "García" ["Host"]=> string(17) "www.example.com" ["Nombre"]=> string(5) "Pedro" ["User-Agent"]=> string(69) "Dalvik/2.1.0 (Linux; U; Android 5.1.1; Redmi 3 MIUI/V9.6.2.0.LAIMIFD)" ["X-Forwarded-For"]=> string(14) "XX.YYY.ZZZ.150" }

In Android, you can append the response to the Toast so you can see what happened:

public void onResponse(String response) {
    Toast.makeText(getApplicationContext(), "Operacion exitosa"+response, Toast.LENGTH_SHORT).show();
}
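Since apache_request_headers() reports each key with its first letter capitalized, another way to keep client and server consistent is to normalize the keys on the Android side before sending them. A small, framework-free sketch of that normalization (the HeaderParams helper below is hypothetical, not part of Volley):

```java
import java.util.HashMap;
import java.util.Map;

public class HeaderParams {
    // Capitalizes the first letter of each key, matching what the answer
    // says PHP's apache_request_headers() will report on the server.
    static Map<String, String> normalized(Map<String, String> params) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : params.entrySet()) {
            String k = e.getKey();
            String key = k.isEmpty() ? k
                    : Character.toUpperCase(k.charAt(0)) + k.substring(1);
            out.put(key, e.getValue());
        }
        return out;
    }
}
```

You would then return HeaderParams.normalized(parametros) from getHeaders() instead of building the capitalized map by hand.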
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,700
using System;
using System.Collections.Generic;
using JetBrains.Application.Progress;
using JetBrains.ProjectModel;
using JetBrains.ReSharper.Feature.Services.Bulbs;
using JetBrains.ReSharper.Feature.Services.Intentions;
using JetBrains.ReSharper.Feature.Services.QuickFixes;
using JetBrains.ReSharper.Intentions.Util;
using JetBrains.ReSharper.Plugins.Unity.CSharp.Daemon.Errors;
using JetBrains.ReSharper.Plugins.Unity.CSharp.Feature.Services.ContextActions;
using JetBrains.ReSharper.Plugins.Unity.Resources;
using JetBrains.ReSharper.Plugins.Unity.UnityEditorIntegration.Api;
using JetBrains.ReSharper.Psi;
using JetBrains.ReSharper.Psi.CSharp;
using JetBrains.ReSharper.Psi.CSharp.Tree;
using JetBrains.ReSharper.Psi.ExtensionsAPI;
using JetBrains.ReSharper.Psi.Tree;
using JetBrains.TextControl;
using JetBrains.Util;

#nullable enable

namespace JetBrains.ReSharper.Plugins.Unity.CSharp.Feature.Services.QuickFixes
{
    [QuickFix]
    public class MarkSerializableQuickFix : IQuickFix
    {
        private readonly IAttribute myAttribute;

        public MarkSerializableQuickFix(RedundantSerializeFieldAttributeWarning highlighting)
        {
            myAttribute = highlighting.Attribute;
        }

        public IEnumerable<IntentionAction> CreateBulbItems()
        {
            var api = myAttribute.GetSolution().GetComponent<UnityApi>();
            var typeDeclaration = myAttribute.GetContainingTypeDeclaration();
            var declaration = AttributesOwnerDeclarationNavigator.GetByAttribute(myAttribute)
                .FirstOrDefault(d =>
                    // We ignore constants - if we marked the type as serialisable, the constant declaration wouldn't
                    // suddenly become serialisable
                    d is IFieldDeclaration { IsStatic: false, IsReadonly: false } ||
                    (d is IPropertyDeclaration { IsAuto: true, IsStatic: false, IsReadonly: false } &&
                     myAttribute.Target == AttributeTarget.Field));
            if (declaration == null)
                return EmptyList<IntentionAction>.Enumerable;

            if (ValidUtils.Valid(typeDeclaration) &&
                typeDeclaration.DeclaredName != SharedImplUtil.MISSING_DECLARATION_NAME &&
                !api.IsUnityType(typeDeclaration.DeclaredElement))
            {
                return new MakeSerializable(typeDeclaration).ToQuickFixIntentions();
            }

            return EmptyList<IntentionAction>.Enumerable;
        }

        public bool IsAvailable(IUserDataHolder cache) => ValidUtils.Valid(myAttribute);

        private class MakeSerializable : BulbActionBase
        {
            private readonly ICSharpTypeDeclaration myTypeDeclaration;

            public MakeSerializable(ICSharpTypeDeclaration typeDeclaration)
            {
                myTypeDeclaration = typeDeclaration;
            }

            protected override Action<ITextControl>? ExecutePsiTransaction(ISolution solution, IProgressIndicator progress)
            {
                AttributeUtil.AddAttributeToSingleDeclaration(myTypeDeclaration,
                    PredefinedType.SERIALIZABLE_ATTRIBUTE_CLASS, myTypeDeclaration.GetPsiModule(),
                    CSharpElementFactory.GetInstance(myTypeDeclaration));
                return null;
            }

            public override string Text =>
                string.Format(Strings.MakeSerializable_Text_Make_type___0___serializable, myTypeDeclaration.DeclaredName);
        }
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
9,384
Dactylispa malabikae is a species of beetle in the leaf beetle family (Chrysomelidae). The scientific name of the species was published in 1977 by Basu & Saha.
{ "redpajama_set_name": "RedPajamaWikipedia" }
567
NEWLY LAUNCHED. Located in Zone 4 and just a short walking distance away from Mill Hill East Underground station providing direct access into central London for those commuting and working in the city, these properties are suitable for raising a family in natural settings with beautiful valley views across surrounding natural landscapes. Offering the best of both worlds, this project is located on a hilltop with tremendous views across green fields and has fast links towards the city centre of London, situated within a conservation area and a highly desired part of North London. Constructed by a large developer, this project offers a range of one, two, and three bedroom apartments as well as four and five bedroom detached houses – providing something for small and large families looking to make a life in London. 460 homes in total with the first stage completed by the end of summer 2019. Situated in natural landscaping, the development includes a vast amount of gardens and outdoor amenity space for residents to relax. There are 545 parking spaces, a fitness suite, space for a cafe, and lots more to discover inside. Inside, flats are modern with spacious layouts granting room for families to spend quality time together whilst enjoying privacy in large bedrooms. Master bedrooms have en-suite private bathrooms. Balconies offer excellent nature views. For full information about this development, including the very latest in prices, availability, and updates, please call or contact Property UK today to speak with our trained professionals in London. Located on a hilltop setting in Mill Hill, this project is just 15 minutes walking distance away from Mill Hill East Underground station with London Euston reachable within 22 minutes and Tottenham Court Road, Waterloo within 30 minutes. The location is perfect for those working in the city centre and looking to commute from one of the most sought after settings in North London. 
Mill Hill is a vibrant, family-friendly area and is home to some of the best-performing schools in the entire city of London, a range of leisure centres, top-quality sports facilities, a large retail and shopping scene, and plenty of things to do. Set in a regeneration area with a greenbelt corridor, these hilltop properties are highly recommended for viewing and have just been launched, making this an opportunity to purchase at the lowest prices possible.
{ "redpajama_set_name": "RedPajamaC4" }
4,828
Gilmar Jalith Mayo Lozano (Chocó Department, 30 September 1969) is a former Colombian athlete who specialised in the high jump. His personal best is 2.33 m, achieved at an altitude venue (1,400 m) in the Colombian city of Pereira; this mark currently stands as the South American record. He was Ibero-American champion in 1994, 2000 and 2002 and South American champion in 1991, 1995, 1997 and 2005. He competed at the Atlanta 1996 and Sydney 2000 Olympic Games, failing to reach the final on both occasions. At one stage of his career he also competed in the triple jump, in which he has a best mark of 16.04 m, set in 1994. External links Colombian high jumpers Athletes at the 1996 Summer Olympics Athletes at the 2000 Summer Olympics
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,402
<?php

namespace OneMightyRoar\PHP_ActiveRecord_Components\Exceptions;

use \LogicException;

/**
 * ReadOnlyAttributeException
 *
 * Exception thrown when writing to a read-only attribute.
 *
 * @package OneMightyRoar\PHP_ActiveRecord_Components\Exceptions
 */
class ReadOnlyAttributeException extends LogicException
{
}
{ "redpajama_set_name": "RedPajamaGithub" }
8,693
"You Lost Me" is a song by American singer Christina Aguilera, taken from her fourth studio album, Bionic. Described by Aguilera as the "heart of the album", the song was written by Aguilera herself, Sia Furler and Samuel Dixon, who also produced it. Lyrically, the song is about an unfaithful man who has left the world, in Aguilera's view, "infected". It was released to radio on June 29, 2010 and on iTunes on July 6, 2010. After the announcement that it would be an official single, the song was praised for its vocal display, and MTV called it the heart of the album. Billboard commented that the song and its video are polar opposites of Bionic's first release, the single "Not Myself Tonight". Commercially, the song performed weakly, becoming her first single not to enter the Billboard Hot 100, although it reached number 1 on the Billboard Hot Dance Club Play chart, as "Not Myself Tonight" had, giving her two consecutive number ones on that chart and five number ones up to that point. It performed best in Israel, where it placed inside the top 10 of the country's main chart. In the Netherlands it had moderate success, entering at number 91 and, curiously, re-entering the charts two years later, in 2012, at number 79. Background Sia reported that Christina Aguilera's management team contacted her to discuss the possibility of the two artists working together. The singer said: "Christina had a couple of records like Zero 7 and knew that I had written with and for Zero 7. My management didn't even know how to pronounce her name! They called her 'Christine' with an 'e' at the end instead of an 'a'. I bet that annoys her people."
Prior to this collaboration, Sia had asked her managers to contact Aguilera about collaborating on her track "Death by Chocolate"; however, it did not happen. Aguilera said: "I'm definitely a fan of Sia. I was thrilled that she also wanted to work together and, in turn, was a fan of mine." When Aguilera's management contacted Sia, they asked whether a phone call could be set up between her and Aguilera; Sia stated, "she called me and I asked her, what do you want?, basically." Sia was then included on a list of Aguilera's favourite artists for the album Bionic. Aguilera asked Sia if she wanted to bring someone to her first recording session, which Sia did: her regular co-writer, the bassist Samuel Dixon. Together they wrote four songs for Bionic: "All I Need", "I Am", "Stronger Than Ever" and "You Lost Me", the latter described as "the heart of the album". After the collaboration, Sia said: "She was excited to go and work with artists she loves. There's a misconception that she's some kind of Middle-America person, but she's a bit of a nonconformist... She goes home and sits by the fire with some wine, and what's playing on the sound system? The Knife and Arthur Russell. She doesn't listen to pop music." "You Lost Me" was announced as the second single from Bionic on June 22, 2010. Its cover art was revealed three days later through her official website. Becky Bain of Idolator said the cover recalls the artwork of Lady Gaga's 2009 song "Paparazzi". Release The single was released to United States radio on June 29, 2010, while its CD single was to be released in July. In the United Kingdom, according to MTV UK, and contradicting the rumour that "I Hate Boys" would be the release for the international market, MTV UK announced that "You Lost Me" would also be a single in England, to be released in September.
MTV.co.uk stated: "The ballad 'You Lost Me' will be released in September. Christina Aguilera announced that her next single in England, taken from Bionic, will be 'You Lost Me'. The ballad, which will be remixed for its release as a single, has been described by Christina on her official site as the 'heart' of the record." Controversy Speculation about the second official single from Bionic began with "WooHoo", but when it was revealed that the track's appearance online and on iTunes was not an official single, only a promotional one, speculation continued, this time increasingly favouring "You Lost Me" as the second official single. On June 21, 2010 it was confirmed that "You Lost Me" was indeed the next single. The song was received by critics overall as a relief compared to its predecessor "Not Myself Tonight"; it shows that Aguilera is still a voice that can dominate the sky and gratify the listener (All Music Blog). Music video The music video for the single was shot at the end of June. It was directed by Anthony Mandler, who also produced the concept, which he described: "From the first keys of the track's opening, we know we're about to see something that is going to move and transform us. By the end, we've travelled through Christina's world. One that is physically, mentally and spiritually charged, yet nothing is what it seems. Everything is fractured and evolving, sinking, so fragile." Aguilera stated that she does not strip in the video, unlike in the previous music video for "Not Myself Tonight". Aguilera said: "I love a big production and some of my elaborate make-up styles, I totally get it, but for this particular song it was really important that I convey the simplicity of the raw emotion that takes place in the song itself, without any kind of theatrics involved."
In the music video there are scenes showing an empty room with mattresses and burnt pillowcases whenever the chorus line "It feels like our world's been infected" is performed. Aguilera stated, "that's exactly what we wanted to incorporate into the vision of the song", explaining: "I'm literally being torn apart. When I take off my shirt it's a moment where I say I'm taking control of the situation, I'm shedding my skin." The video premiered on July 22, 2010 on her official VEVO account on YouTube. The music video currently has more than 93 million views. Plot This music video is notable for containing no explicit content, leaving behind the overly provocative image of "Not Myself Tonight". In the video, Christina appears with coppery blonde hair, natural make-up, red lips and black clothing (presumably signalling sadness), in a dark room with close-ups of her face while dim lights illuminate her. She then moves through the room, and a transition effect, as if entering another room, leads to a scene where she lies in a cave-like setting, her make-up running as three tears fall from her eyes. Lying down, she grieves, followed by another close-up of her face; the cave then fills with a yellow light and she sits up, only to lie down again as a man enters the scene and tries to lift her from the ground (the song is about infidelity, so this man would be the unfaithful one). She resists, and he shakes her violently while she laments, until he manages to lift her and she pushes him away violently and removes her blouse (she wears another underneath). A very bright light accompanied by golden sparkles moves the video to another scene, where she sings with a little more energy wearing an outfit identical to the black one, but this time white (signalling her recovery from the infidelity), intercut with close-ups of her face in the dark room while, behind, she kneels in the brightly lit room in her white clothes. In the final scene she appears in the white outfit under a very bright blue light that shows only her face; as the light dims, half her body becomes visible, and the video ends as the lights go out and the image fades from the outside inward. Critical reception Billboard: "Christina Aguilera cries, and looks great doing it, in her new video for the piano ballad 'You Lost Me'. Both the song and the video are polar opposites of Bionic's first release, the single 'Not Myself Tonight', representing something of a return to form for the singer." Entertainment Weekly: "Christina Aguilera gets her heart broken in her 'You Lost Me' video. Her last video may not have been the most original, but this time Christina Aguilera has quality material on her hands. The ballad is about being cheated on by a partner who eventually abandons her. In the video, Christina is in something of a dream world. She starts out crying in a room, then collapses into something like a cave before physically pushing the cheater away. Don't miss this video directed by Anthony Mandler. Do you think this video is better than 'Not Myself Tonight'?" Meanwhile, the singer Pink commented on the song from her Twitter page: "Just heard the new Christina Aguilera song, and I gotta say, damn! that girl can sing. Love this song." Likewise, singer Adam Lambert, who launched his successful career after the eighth season of American Idol, wrote from his Twitter page: "Christina Aguilera's new video You Lost Me is simply stunning. Elegant and understated." Commercial performance The song achieved moderate success. In the United States, it debuted on the Billboard Bubbling Under Hot 100 for the issue dated June 26, 2010, at number 20.
The song also spent four weeks on the Billboard Adult Contemporary chart, where it peaked at number 28, and achieved better results on the Dance/Club Play Songs chart, where it reached number 1, as had Aguilera's previous single "Not Myself Tonight". It also reached number 153 on the United Kingdom chart through digital sales following the release of the album Bionic. It likewise charted in the Netherlands for one week at number 91 and, curiously, re-entered two years later, in 2012, peaking at number 79. In Slovakia, the song spent four weeks on the airplay chart, reaching number 78. In Israel it peaked at number 10. Live performances The song was premiered on May 26, 2010 on the finale of American Idol, accompanied by "sombre" strings and a piano. With her hair pulled back into the buns featured on the cover of Bionic and wearing a "modest black outfit", Aguilera began softly, performing the lyrics of the first verse, after which MTV noted that she "gained strength" as the music "swirled beneath her". The middle section of the performance, according to James Montgomery, was "smoky and solemn", as Aguilera "stretched" notes with her eyes closed and her right arm extended. Her vocals gradually grew bigger as the song entered its final stretch, until she ended with the "breathy" closing lines of "You Lost Me". The performance drew a huge ovation from the American Idol audience, with MTV saying "it became clear that Aguilera still has the goods to go up against anyone". Aguilera performed the song on The Today Show in a mini-concert held in New York City; however, the performance was not included in the television special.
MTV responded well to the performance, saying she "brought a subtle, soft vibe to her set as she sang the ballad". On June 11, 2010, Aguilera gave a performance on The Early Show where, among other new songs from Bionic, she performed "You Lost Me", of which Robbie Daw of Idolator stated that "the blonde vocal powerhouse put on a partial see-through coat and sang the new songs 'Not Myself Tonight' and 'You Lost Me'". Aguilera also performed the track on the Late Show with David Letterman, wearing red lipstick, diamond-patterned tights and a white glitter cut-out-style blouse with bright red stiletto heels, and a sparkling microphone to match. Aguilera performed the song again as part of her set on VH1 Storytellers. Covers by other singers American R&B and soul singer Marcus Canty performed the song in the first season of the American version of The X Factor. Canadian-Cypriot-Portuguese singer Nikki Ponte performed the song in the third week of the third season of The X Factor (Greece). "You Lost Me" was also covered in a studio version on The X Factor (Poland, series 2) by semi-finalist Ewelina Lisowska. The song was also performed at the auditions of the fourth season of The X Factor (Germany) by Spanish singer Alberto Bellido Márquez. It was also covered on The X Factor (Ukraine) by semi-finalist Vladyslav Kurasov in the second season of the series.
Charts Release dates Versions Digital download "You Lost Me" (Radio Remix) – 4:19 Digital remix – dub "You Lost Me" (Hex Hector Mac Quayle Ghettohouse Dub) – 6:57 Digital remix – radio edit "You Lost Me" (Hex Hector Mac Quayle Ghettohouse Mix) – 3:37 Digital remix – club mix "You Lost Me" (Hex Hector Mac Quayle Ghettohouse Extended Mix) – 6:53 CD single "You Lost Me" (Radio Remix) – 4:19 "You Lost Me" (Hex Hector Mac Quayle Radio Edit) – 3:37 Digital EP "You Lost Me" (Radio Remix) – 4:19 "You Lost Me" (Hex Hector Mac Quayle Radio Edit) – 3:37 "Not Myself Tonight" (Laidback Luke Radio Edit) – 3:39 References Christina Aguilera songs 2010 singles Songs written by Sia Songs about infidelity
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,846
Foliation II (Furnishing Fabric), 1976 (produced and reprinted 1988)
Designed by Ben Rose (American, 1916–2004)
Produced by Ben Rose, Inc.
United States, Illinois, Chicago
Designed 1976
Modacrylic and rayon, plain weave; screen printed
Inscription (stamped on selvage): Ben Rose Hand Print-1946 Reprinted-1988
328.4 × 125.1 cm (128 1/8 × 49 1/4 in.); Repeat: 60.6 × 61.1 cm (23 7/8 × 24 in.)
Gift of Mr. and Mrs. Ben Rose
Thurman, Christa C. Mayer. Rooted in Chicago: Fifty Years of Textile Design Traditions. Chicago: The Art Institute of Chicago Museum Studies, 1997. p. 60 (Image)
The Art Institute of Chicago, Elizabeth F. Cheney and Agnes Allerton Textile Galleries, Rooted in Chicago: Fifty Years of Textile Design Traditions, February 12–July 27, 1997
Ben Rose (Designer)
Weaving - printed
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,598
package org.abego.treelayout;

/**
 * Provides the extent (width and height) of a tree node.
 * <p>
 * Also see <a href="package-summary.html">this overview</a>.
 *
 * @author Udo Borkowski (ub@abego.org)
 *
 * @param <TreeNode> Type of elements used as nodes in the tree
 */
public interface NodeExtentProvider<TreeNode> {

    /**
     * Returns the width of the given treeNode.
     *
     * @param treeNode &nbsp;
     * @return [result &gt;= 0]
     */
    double getWidth(TreeNode treeNode);

    /**
     * Returns the height of the given treeNode.
     *
     * @param treeNode &nbsp;
     * @return [result &gt;= 0]
     */
    double getHeight(TreeNode treeNode);
}
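As a brief illustration (not part of the library source), a typical implementation simply reads the size stored on each node. The SizedNode class below is hypothetical, and the interface is restated so the sketch compiles on its own:

```java
// Restated from the interface above so this sketch is self-contained.
interface NodeExtentProvider<TreeNode> {
    double getWidth(TreeNode treeNode);
    double getHeight(TreeNode treeNode);
}

// Hypothetical node type that carries its own dimensions.
class SizedNode {
    final String label;
    final double width;
    final double height;

    SizedNode(String label, double width, double height) {
        this.label = label;
        this.width = width;
        this.height = height;
    }
}

// Extent provider that reports each node's stored size to the layout engine.
class SizedNodeExtentProvider implements NodeExtentProvider<SizedNode> {
    @Override
    public double getWidth(SizedNode treeNode) {
        return treeNode.width;
    }

    @Override
    public double getHeight(SizedNode treeNode) {
        return treeNode.height;
    }
}
```

Both methods must return non-negative values, per the interface contract above.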
{ "redpajama_set_name": "RedPajamaGithub" }
1,512
Absolutely great Trial Xtreme 4 v1. Web Anti-Virus Remover 2. Collect these each month (and add your own favorite song cards) and you have a great song collection for transition times throughout the day. Dwarf Fortress is a game for Windows, Linux and Mac, developed by Bay 12 Games featuring two modes of play, as well as distinct, randomly-generated worlds (complete with terrain, wildlife and legends), gruesome combat mechanics and ubiquitous alcohol dependancy by dwarves. Which means they need to identify items of Muah origins in deko first crack realflight 6 demo. The next scene shows them all at that relaflight same house, living, and rayman legends 3dm crack only to finally be free, in a place that they all call home. This site always has the hard-to-find or random parts that you might need. We continue to work through issues," Little said. Latest Topic - How to crack realflight 6 demo the Audio to External Speaker using. Pathways to assistance for victims of intimate partner violence. I then converted the file to 16 bits in order to reduce the size for realflighh. Because of the high number of people that have made the repair, those who need it the most and those who wanted to demi in line with it, I recently began a chat group. Make sure sites continue linking to you - without wasting time. Updates:: Windows Update Kb2919355 Causes Crack realflight 6 demo Store Corruption. Scleroderma is reaoflight connective tissue disease that involves changes in the skin, blood vessels, time traveling witch). Finance Ministry launches its official YouTube channel to disseminate information in real time. Log in, to view your saved searches and add to your favorite listings. Oregon Small Farms Program Videos from events such as the Oregon Small Farms Conference, Congress leader, Gandhi Nagar, Cracj. 
Kyle Jarlsberg captured his first feature of the 2010 season crack realflight 6 demo exciting crack realflight 6 demo as he crossed the stripe sideways to win the Jefferson Window and Door 50 lap Late Model feature. Read All 9 Posts RELEVANCY SCORE 1. And within weeks, an industry directory listing all sorts of advisers and brokers will be realfflight to savers. The Dictionary is an efficient colleague realflght travellers and those who concern about Thai and English language. By identifying crack realflight 6 demo spots, 18 January 2015. The awesome Harry Potter spinoffs that we want WP Freebies We create awesome free WordPress themes for you to use however you want. The handbook, now in its second online edition and fully rsalflight, is the work of but you will need to CHANGE the USN to publish the correction on a new. You can also use the same relflight to access other blocked sites and not just Runescape. Coordinate the reporting crack realflight 6 demo documenting of ARES activities in your district of jurisdiction. Your CD Baby online session is due to expire shortly. A New Review of Classic Home Video Games, type a style such as bold or italic, a font family, or any other part of a font name. He was recently named by Worth Magazine as one of the Top 100 Attorneys in the country representing affluent families and individuals, including in the areas of private foundations and philanthropy, as well as a Pennsylvania Super Lawyer in these areas. I am doing it very good now. Buy Watersnake Pro Freedom Kayak Fishing PFD and be safe on water. For over 25 years, Pixar has touched the hearts of millions of audiences around the world through its 15 feature-length masterpieces and countless short de,o. This site will remain accessible during the federal government shutdown. They cracm many photos while walking around the Singapore Botanic Gardens, they crack realflight 6 demo told to choose one of their best shot to be printed out.
{ "redpajama_set_name": "RedPajamaC4" }
9,447
Paul Zarb, a beloved member of the Fellowship of Friends, completed his task on Saturday, April 29, 2017. Paul was sixty years old. Paul re-joined the School in London, on August 11, 2008. During these years in the School, Paul consistently worked with his illness, a continual example to the Center of working with suffering.
{ "redpajama_set_name": "RedPajamaC4" }
2,165
Zoom: Academy for Superheroes Jack Shepard a.k.a. "Zoom" is an out-of-shape auto shop owner, far removed from the man who once protected the world's freedom. Reluctantly called back into action by the government, Jack is charged with turning a ragtag group of kids with special powers into a new generation of superheroes to save the world from certain destruction. Based on Jason Lethcoe's popular graphic novel "Zoom's Academy for the Super Gifted." Jennifer Todd Todd Garner Suzanne Todd Adam Rifkin David Berenbaum Tim Allen, Courteney Cox, Chevy Chase, Spencer Breslin, Rip Torn © 2006 Revolution Studios Distribution Company, LLC. All Rights Reserved. Dull superhero tale has comic book violence, potty humor. Parents need to know that Zoom: Academy for Superheroes is a 2006 movie in which Tim Allen plays a washed-up former superhero who is brought back by the government to train a ragtag group of kids and teens with superpowers. There's a ton of disrespectful behavior from both the adults and kids in this movie. Before warming up to the kids, Jack is downright mean, calling them names and treating them badly. There's also lots of crude behavior involving farting, burping, and a huge snot-bubble that bursts and covers everyone with green goo. In one scene, the kids trap a scientist in an environmental simulator and subject him to falling rocks, a cyclone, and a rainstorm, then laugh at him. Outtakes during the end credits show the cast singing "We like to poop in our pants." Also, the parents in this movie are conspicuously missing, and the superheroes form their own "family." There's some mild profanity and comic-book style violence (kicking, punching, throwing, shattering glass). There is also some blatant consumerism: M&M's featured prominently, a scene centered on the characters' spaceship going through a Wendy's drive-through, and a robot named "Mr. Pibb." Genre:Family, Action, Sci-Fi Release Date:August 11, 2006
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,689
The Fédération Internationale Féline (FIFe) is an international organisation for the breeding of cats and the development of new cat breeds. It is one of the nine members of the World Cat Congress (WCC). FIFe can be thought of as a United Nations of cat federations. In practice it is a federation of national members, currently representing 40 countries through 42 full members, a number that continues to grow. History Founded by Marguerite Ravel, the federation was informally created in 1949 in Paris, France. It was formally constituted at the organisation's first general assembly in Ghent, Belgium. Its original name was Fédération Internationale Féline d'Europe (FIFE). In 1972 the Brazilian Cat Club joined the organisation, which made it necessary to change the Eurocentric name; the updated name is FIFe. The official statute was registered in Luxembourg in 1998. FIFe's membership comprises 42 cat organisations around the world, with more than 100,000 individual members across the various FIFe organisations. More than 110,000 pedigrees and 3,000 cattery names are issued per year. There are more than 220 international judges and 140 student judges. The federation holds around 600 shows a year, at which more than 200,000 cats are exhibited. On 1 March 2022 FIFe issued a statement condemning the Russian invasion of Ukraine. Under a decision published on the federation's official website, no cat bred in Russia may be imported or registered in any pedigree book issued under the auspices of FIFe until 31 May (with the possibility of review if necessary). In addition, FIFe allocated part of its budget to support cat breeders and fanciers in Ukraine.
Objectives The aim of FIFe is to unite all clubs, associations and federations of clubs, regardless of their nationality or location. FIFe encourages the breeding of cats and the improvement of breeds. In particular, the federation is responsible for: standardising the rules concerning judges, shows, titles and the like; defining breeds and standardising breed standards; recognising and harmonising the pedigree registers (LO) and the initial and experimental register (RIEX) of each country (FIFe may freely inspect these registers without encroaching on the independence of each member); organising and regulating the international register of prefixes (cattery names); compiling the official list of judges authorised by the federation; issuing any necessary licences for national and international shows. Composition The FIFe Board is the coordinating body at the international level. It consists of 6 people from the member countries, elected by its members. They manage the organisation in accordance with its statute. There are 5 auxiliary commissions responsible for carrying out the decisions of the General Assembly and the Board: the Judges and Standards Commission; the Breeding and Registration Commission; the Show Commission; the Health and Welfare Commission; and the Disciplinary Commission. The structure of the organisation guarantees every member equal voting rights at the level of the General Assembly. The General Assembly convenes annually to discuss and vote on members' proposals, elect new officers, debate new rules and adopt common strategies. Breed recognition FIFe currently recognises 48 official cat breeds for championship competition. All breeds are divided into 4 categories and identified by a three-letter code under the Easy Mind System (EMS). EMS is the system used by FIFe and all its members to identify cats easily by alphanumeric codes. Notes Sources Official FIFe website Schematic diagram of the links between FIFe and other cat fancy organisations International non-governmental organisations
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,532
Weekly Shift Delivered every Monday by 10 a.m., Weekly Shift examines the latest news in employment, labor and immigration politics and policy. DOL retreats on misclassification By TED HESSON With help from Marianne LeVine, Ian Kullgren and Mel Leonor DOL RETREATS ON MISCLASSIFICATION: The Labor Department announced Wednesday that Secretary Alexander Acosta would withdraw the department's informal guidance regarding whom DOL would consider an employee and whom an employer. A 2015 guidance on misclassification offered an "economic realities" test to determine whether an employee was wrongly classified as an independent contractor, and a 2016 guidance on joint employment advised how to establish whether one business controlled or supervised the employees of another business to the extent that DOL would consider both businesses to be their employers. The two guidances, now inoperative, were issued under David Weil, DOL Wage and Hour administrator under President Barack Obama and author of "The Fissured Workplace," an influential book about the increasingly arms-length relationship between deep-pocketed corporations and lower-wage workers. Republicans, predictably, lined up to praise the change. Senate HELP Committee Chair Lamar Alexander (R-Tenn.) said it "was absolutely right;" House Education and the Workforce Chair Virginia Foxx (R-N.C.) and Subcommittee Chair Bradley Byrne (R-Ala.) said the Obama-era guidance "only served to empower union leaders while hurting small businesses." But Weil told Morning Shift that "retracting the two administrator interpretations in and of itself does not mean anything" because "they are based on established law, regulation and judicial opinions." Weil said he was more concerned about what the change might presage for the Wage and Hour Division under Acosta. More on the moves here. GOOD MORNING.
It's Thursday, June 8, and this is Morning Shift, POLITICO's daily tipsheet on employment and immigration policy. Send tips, exclusives, and suggestions to [email protected], [email protected], [email protected], [email protected] and [email protected]. Follow us on Twitter at @tedhesson, @marianne_levine, @MelLeonor, @IanKullgren and @TimothyNoah1. THE INNOVATION ISSUE: How should Washington think about innovation? What's its role in the economy — and what does it take for government to foster new technologies that help the whole nation, not just a favored few? In a month-long Special Report, The Agenda takes a deep look at the surprising new politics of innovation, and ideas for how to drive it in a new era. In this package, you'll read about how AOL founder Steve Case became the first call for Congresspeople who want to bring innovation to the heartland; a critical look at Challenge.gov, the federal government's prize competition designed to spur innovation; and the surprisingly innovative history of the U.S. Post Office, which was long on the forefront of technology before turning into a lesson in what *not* to do. Read the entire package here. REAL-LIFE 'APPRENTICE': President Donald Trump will travel to the Labor Department on Wednesday to deliver an address on his workforce training plan, the Wall Street Journal reports. Reed Cordish, one of the president's tech advisers, revealed plans for the talk during a panel discussion at the Business Roundtable. Cordish said Trump will talk about ways his administration can "expand apprenticeship programs and accreditation for vocational programs and community colleges," according to the Journal. He also said the administration would "propose ways to expand student aid for vocational training and apprenticeship programs by curbing regulation." Neither the White House nor the Labor Department would confirm the details of Trump's speech independently.
It seems a little strange that Trump wants to talk up worker training, given that his DOL budget proposal would cut funding for training and employment services by 36 percent. But Acosta defended the cuts on Wednesday, telling a Senate appropriations panel that the administration was making "hard but responsible choices," and prioritizing programs based on effectiveness. Chief among these at DOL appears to be Reemployment Services and Eligibility Assessments, which is budgeted for $130 million, a 13 percent increase. More here. LMCI, MIA: The Federal Reserve's Labor Market Conditions Index has gone missing again. The index typically comes out on the Monday after the Labor Department's jobs report, but as of Wednesday the Fed still hadn't posted the indicator for May. Morning Shift is … despondent. The LMCI aggregates 19 indicators, including the unemployment rate, labor force participation, and average hourly earnings, giving extra weight to indicators that move in the same direction. If you know what happened to May's LMCI, please email [email protected]. PERSUADER UPDATE: The White House Office of Management and Budget completed Tuesday its review of a proposal to rescind the Obama Labor Department's persuader rule. The rule, which increased disclosure requirements for the hiring of union-busting attorneys, never took effect because a federal judge issued an injunction against it. The Trump Labor Department has until June 16 to decide whether to continue to defend the rule in court. That seems unlikely. NLRB SIDES WITH USC UNION: The National Labor Relations Board ruled Wednesday that the University of Southern California refused, unlawfully, to negotiate with a union certified to represent non-tenure track faculty at its Roski School of Art and Design. 
"By failing and refusing … to recognize and bargain with the union as the exclusive collective-bargaining representative of the employees in the appropriate unit," the NLRB ruled, "the respondent has engaged in unfair labor practices affecting commerce within the meaning [of the act]." More here. CISSNA FACES JUDICIARY: Francis Cissna, Trump's nominee to helm U.S. Citizenship and Immigration Services, will head back to the Senate Judiciary Committee at 9:30 this morning. The committee began consideration of the nomination on May 24, when Cissna faced questions about his role in drafting Trump's original travel ban policy. "I was not the author of that executive order," he said at the time, adding that he offered his expertise on "a variety of different immigration-related matters." Cissna also affirmed he would administer an Obama-era deportation relief program "as it stands." The Deferred Action for Childhood Arrivals program, which grants deportation relief to undocumented immigrants brought to the U.S. at a young age, maintains the tenuous blessing of the Trump administration. The committee meeting takes place in 226 Dirksen. Watch it here. TAKING SXSW HOSTAGE: Sens. Bob Menendez (D-N.J.) and Catherine Cortez Masto (D-Nev.) sent a letter Wednesday to the chief executive of the South by Southwest Conference and Festivals that asked him to consider moving the premier event out of Austin until the state of Texas repealed its harsh immigration law. The measure, known as SB 4, empowers police to ask about immigration status during routine stops and threatens chiefs with a misdemeanor if they fail to enforce federal "detainer" requests, which direct local authorities to hold a suspected undocumented immigrant for 48 hours beyond their release time. The Texas law has been compared with Arizona's SB 1070, a similar clampdown that led opponents to launch a movement to "boycott Arizona." 
While the results of that protest weren't totally clear, the Phoenix Convention Center later attributed a drop in projected visitors to the negative branding around the law. The measure in Texas hasn't engendered the same response, but the American Immigration Lawyers Association announced Wednesday that they would move their annual 3,000-attendee conference to another state, which is "no small matter," in the words of AILA President William Stock. The call to relocate SXSW (one of the country's best-known gatherings of artists, musicians and thinkers) might be a heavier lift. In response to the letter, organizers told KVUE that they will continue to speak out against the legislation, but won't uproot the event. "We agree with the senators that the law stands diametrically opposed to the spirit of SXSW and respect their call to action," they wrote. "For us this is not a solution. Austin is our home and an integral part of who we are. We will stay here and continue to make our event inclusive while fighting for the rights of all." More from KVUE here. INSIDE DEPORTATION CENTRAL: The Guardian's Oliver Laughland got a close-up look at the LaSalle detention facility in Jena, Louisiana, a remote center that sits at the heart of the Trump administration's plan to rapidly remove immigrants. "Hearings take place in five poky courtrooms behind reinforced grey doors where the public benches, scratched with graffiti, are completely empty," Laughland writes. "There is no natural light. The hallways are lined with detainees in yellow jumpsuits awaiting their turn before a judge. The five sitting judges were quietly flown in by the U.S. justice department from cities across the United States and will be rotated again within two weeks."
"This is the LaSalle detention facility that, since March this year, has been holding removal proceedings for hundreds of detained migrants in courtrooms adjoining a private detention center, which incarcerates more than 1,100 men and women and has the highest number of prisoner deaths of any in America over the past two years," reports Laughland. "The new setup is part of Donald Trump's attempts to ramp up deportations by vastly expanding the arrest powers of federal immigration enforcement and prioritising more vulnerable groups of detained migrants in new court locations around the country." More here. KELLY'S ETHICS WAIVER: DHS Secretary John Kelly was granted an ethics waiver in March related to his previous work for the government of Australia, according to documents published Wednesday by the Office of Government Ethics. The documents included waivers granted to a half dozen officials who have been permitted to work in the Trump administration, the New York Times reports. Kelly sought the waiver to participate in matters involving Australia, where he had given a speech at that government's expense in December (an earlier financial disclosure form put the value of the anticipated honorarium between $1,001 and $15,000). More from the Times here. HOUSE PASSES POLYGRAPH BILL: The House passed a bill that would allow some U.S. Customs and Border Protection job applicants to bypass a polygraph examination as part of the hiring process. The measure, introduced by Rep. Martha McSally (R-Ariz.), would create an exception for current and former law enforcement officers and military personnel who have cleared certain security hurdles in the past. The bill, passed by a vote of 282-137, would make it easier for the Trump administration to hire an additional 5,000 Border Patrol agents it's seeking in the fiscal year 2018 budget. As it stands, the agency has a difficult time meeting its existing staffing goals. Sen. Jeff Flake (R-Ariz.) has a companion bill in the Senate. 
Read the McSally bill here. RETALIATION CLAIM DENIED: The 4th Circuit Court of Appeals ruled Wednesday against an employee who was fired from her job and argued that it was retaliation. Patricia Villa, an employee of Cava Mezze Grill Mosaic, claimed her boss had offered a former coworker a raise in exchange for sex. The coworker initially denied that the incident took place, leading to Villa's firing. Shortly after, the coworker confirmed that she had told Villa about the raise-for-sex proffer, but said that she'd made it up. The 4th Circuit upheld the lower court decision, writing that the fired worker failed to establish that she wouldn't have been fired even if the company had confirmed the allegations were true. Read the opinion here. NO TIME-AND-A-HALF: Cars handled by workers as part of a valet service qualify as goods "under the physical possession of the ultimate consumer" under the Fair Labor Standards Act, the 11th Circuit ruled Wednesday. The case involved a worker who claimed his employer was covered under the FLSA and thus owed him overtime pay (to the tune of time-and-a-half), which is called for by the act. PENSION PUNISHMENTS: "Public employees convicted of felonies 'that breach public trust' have to give up their retirement benefits under legislation Gov. Rick Snyder signed into law Wednesday," Michael Gerstein writes in the Detroit News. "Snyder's signature follows a series of high-profile corruption scandals involving former Detroit Public Schools officials and the Republican governor says the law will make sure public benefits don't go to the wrong people." More here. BUY A BRIDGE IN BROOKLYN?: The Wall Street Journal's Ted Mann and Ryan Dezember zeroed in on the biggest question about Trump's infrastructure plan: how the president will convince businesses and investors to foot the bill. "Under the new approach, Mr.
Trump's advisers said they can get private investors to flock to put up the capital for such projects by curtailing permitting requirements and regulations, and by offering incentives to states and cities to turn to the private sector for financing," they write. "It isn't clear, however, that private investors will swarm to some of the country's most seriously decrepit infrastructure projects because not all of them will provide commercial returns." More here. Meanwhile, Trump put pressure on Democrats during his speech in Cincinnati on Wednesday. "People don't want to see what's going on," he said. "They want to see us all come together but I just don't see them coming together … I'm calling on all Democrats and Republicans to join together, if that's possible, in the great rebuilding of America." More from POLITICO here. OIG REPORT ON FAMILY DETENTION: The DHS Office of Inspector General performed unannounced spot inspections at three family detention centers and found they met U.S. Immigration and Customs Enforcement standards, according to a report released Wednesday. "Nothing came to our attention that represented an immediate, unaddressed risk or an egregious violation of the Family Residential Standards," the report stated. Read it here. — "Lobbyists, industry lawyers were granted ethics waivers to work in the Trump administration," from The New York Times — "Study: Colorado one of the best states to be unemployed," from the Coloradoan — "Minneapolis moves forward with $15 minimum wage plan," from CBS Minnesota — "Marty Walsh strikes deal with AFSCME for new city contract," from the Boston Globe — "Dallas joins fight against sanctuary cities bill," from Dallas News — "Trump proposed a 120-day travel ban to improve vetting. It's been 129 days," from The Washington Post — "Judge rejects Uber's bid to pause self-driving car lawsuit," from BuzzFeed — "Report: Homeland Security now investigating basketball recruiting scandal at N.J.
school," from USA Today High School Sports THAT'S ALL FOR MORNING SHIFT. About The Author: Ted Hesson Ted Hesson is an employment and immigration reporter with POLITICO Pro. Prior to joining POLITICO in October 2016, Hesson spent more than a decade as a writer and editor with a focus on immigration policy. His work has appeared in National Journal, The Atlantic and VICE, among other outlets. From 2012 to 2015, he worked as immigration editor at Fusion, a joint venture of ABC News and Univision. Hesson holds a master's degree from the Columbia University Graduate School of Journalism and a bachelor's degree from Boston College. Born and raised in Philadelphia, he lived in New York City before relocating to Washington, D.C. In his free time, he enjoys playing guitar, listening to podcasts and practicing Spanish.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,884
Find helpful customer reviews and review ratings for WWE: The Best of Raw and SmackDown. Read honest and unbiased product reviews from our users. With just over a week until the Money In The Bank pay-per-view, CM Punk returns to SmackDown as he vies for his third career briefcase in the Money in the Bank All-Stars Ladder Match. Since the mid-90's, no pay-per-view series produced more epic matches, historic moments or championship showdowns than WWE In Your House. Debuting in 1995, In Your House pay-per-views were the preeminent setting for the top Superstars in WWE to settle their scores. SummerSlam ( ) was a professional wrestling pay-per-view (PPV) event produced by WWE that took place on August 18, at Staples Center in Los Angeles, California. It was the twenty-sixth annual SummerSlam, and the fifth consecutive one held at Staples Center. The event received 296,909 buys, down from last year's event of 358,000. SummerSlam ( ) was a professional wrestling pay-per-view (PPV) event produced by WWE that took place on August 14. It was the twenty-fourth annual SummerSlam event and the third consecutive SummerSlam at Staples Center in Los Angeles, California. It was also the final WWE event before the dissolution of the original brand extension, which was introduced in.
{ "redpajama_set_name": "RedPajamaC4" }
9,927
11 kilometre is a passenger railway stopping point of the Crimean directorate of railway transport of the Prydniprovska Railway on the Ostriakove – Yevpatoria-Kurort line, between the stations Ostriakove (9 km) and Yarka (10 km). It is located next to an industrial building in the Simferopol district of the Autonomous Republic of Crimea. Passenger service Four pairs of suburban electric trains on the Yevpatoria-Kurort – Simferopol route run through the stopping point daily, but they pass without stopping. Sources External links Stopping points of the Prydniprovska Railway Transport of the Simferopol district Stopping points of Crimea
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,498
Q: how to execute function in txt file I have functions def output5() : print("OK") def output6() : print("no") def output7() : print("yes") So if I call them like this output5() output6() output7() it will say OK no yes And I have a file output.txt with the following contents: output5() output6() output7() I wanted to run something in Python like: a=open('output.txt').readlines() a so that OK no yes would be printed, but it only says: 'output5()\n', 'output6()\n', 'output7()\n' not OK no yes How can I execute the functions listed in the txt file? A: Warning: Using exec or execfile is generally considered bad practice and possibly unsafe (Why should exec() and eval() be avoided?). While it is the answer to your question, you should probably use another solution. If I interpret the question correctly, you could do one of these, depending on your Python version: Python 2: you can use execfile('output.txt') Python 3: exec(open('output.txt').read()) Note that this is not the usual way to do things in Python. You should probably use a module (https://docs.python.org/2/tutorial/modules.html), unless you have strong reasons not to.
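A safer pattern than exec, in the spirit of the module suggestion above, is an explicit dispatch table: read each line of the file and look it up in a whitelist of known functions, so nothing arbitrary ever gets executed. A minimal sketch (the function names mirror the question; `run_commands` and the `ALLOWED` table are illustrative names, not a standard API):

```python
# Whitelist dispatch: only the functions listed in ALLOWED can ever run,
# no matter what the text file contains.
def output5():
    print("OK")

def output6():
    print("no")

def output7():
    print("yes")

ALLOWED = {"output5()": output5, "output6()": output6, "output7()": output7}

def run_commands(path):
    with open(path) as f:
        for line in f:
            func = ALLOWED.get(line.strip())
            if func is not None:  # unknown/malicious lines are skipped
                func()

# run_commands("output.txt")
```

Unlike exec, a line such as `os.remove('/tmp/x')` in the file is simply ignored because it is not a key in the whitelist.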
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,213
using System; using System.Security.Cryptography; using JetBrains.Annotations; // ReSharper disable InconsistentNaming - we want to keep the inconsistent naming for SHA1 namespace Reusable.Cryptography { public static class SHA1 { [NotNull] public static byte[] ComputeHash([NotNull] byte[] source) { if (source == null) throw new ArgumentNullException(nameof(source)); using (var sha1 = new SHA1Managed()) { return sha1.ComputeHash(source); } } } }
{ "redpajama_set_name": "RedPajamaGithub" }
1,807
The Vampire Diaries After Show recaps, reviews and discusses episodes of The CW's The Vampire Diaries. Show Summary: The Vampire Diaries is an American supernatural drama television series developed by Kevin Williamson and Julie Plec, based on the popular book series of the same name written by L. J. Smith.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,168
{"url":"https:\/\/www.semanticscholar.org\/topic\/Nielsen%E2%80%93Ninomiya-theorem\/967034","text":"# Nielsen\u2013Ninomiya theorem\n\nKnown as: Nielsen-Ninomiya theorem\nThe Nielsen\u2013Ninomiya theorem is a no-go theorem in physics, in particular in lattice gauge theory, concerning the possibility of defining a theory of\u2026\u00a0Expand\nWikipedia\n\n## Papers overview\n\nSemantic Scholar uses AI to extract papers important to this topic.\n2017\n2017\nThe Nielsen-Ninomiya theorem implies that any local, Hermitian and translationally invariant lattice action in even-dimensional\u2026\u00a0Expand\nIs this relevant?\n2015\n2015\nThe same leptoquarks that explain the recently observed anomaly in $R_K$ can generate naturally small Majorana neutrino masses at\u2026\u00a0Expand\nIs this relevant?\n2013\n2013\nLet $${(\\phi, \\psi)}$$(\u03d5,\u03c8) be an (m, n)-valued pair of maps $${\\phi, \\psi : X \\multimap Y}$$\u03d5,\u03c8:X\u22b8Y, where $${\\phi}$$\u03d5 is an m\u2026\u00a0Expand\nIs this relevant?\n2011\n2011\nNumerous empirical studies have shown that certain exponential Levy models are able to fit the empirical distribution of daily\u2026\u00a0Expand\nIs this relevant?\n2009\n2009\nAmong the soil formation factors, relief is one of the most used in soil mapping, because of its strong correlation with the\u2026\u00a0Expand\nIs this relevant?\n2002\n2002\nFundamental properties of unstable particles, including mass, width, and partial widths, are examined on the basis of the Nielsen\u2026\u00a0Expand\nIs this relevant?\n1999\n1999\nAbstract We study a leptogenesis via decays of heavy Majorana neutrinos produced non-thermally in inflaton decays. 
We find that\u2026\u00a0Expand\nIs this relevant?\n1998\n1998\nThe index theorem is employed to extend the no-go theorem for lattice chiral Dirac fermions to translation non-invariant and non\u2026\u00a0Expand\nIs this relevant?\n1986\n1986\nStanding stocks and production rates of phytoplankton and planktonic copepods were investigated at 15 stations in the Inland Sea\u2026\u00a0Expand\nIs this relevant?\n1982\n1982\nThe Nielsen-Ninomiya theorem asserts the impossibility of constructing lattice models of non-selfinteracting chiral fermions. A\u2026\u00a0Expand\nIs this relevant?","date":"2019-10-20 06:19:18","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7827978134155273, \"perplexity\": 3245.525108977399}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-43\/segments\/1570986703625.46\/warc\/CC-MAIN-20191020053545-20191020081045-00221.warc.gz\"}"}
null
null
Campaign to fill 1,500 Irish IT jobs Footballer-turned-entrepreneur Niall Quinn has launched IT's Happening Here. IT's Happening Here is a programme which targets software professionals both in Ireland and overseas who may have an interest in the 1,500 jobs available in the country. The campaign is supported by Enterprise Ireland and will heavily feature social media marketing. Interested applicants are driven to a site hosting jobs for over 600 Irish-owned software firms. Quinn says "We are looking to attract the very best talent from graduate to director level to keep us at the top of our game." See our Graduate IT homepage for more information about careers in IT. To find out more and see the available jobs visit IT's Happening Here.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,871
package org.locationtech.geomesa.features.kryo.json import java.lang.ref.SoftReference import com.jayway.jsonpath.Option.{ALWAYS_RETURN_LIST, DEFAULT_PATH_LEAF_TO_NULL, SUPPRESS_EXCEPTIONS} import com.jayway.jsonpath.{Configuration, JsonPath} import org.geotools.factory.Hints import org.geotools.filter.expression.{PropertyAccessor, PropertyAccessorFactory} import org.geotools.util.Converters import org.locationtech.geomesa.features.kryo.KryoBufferSimpleFeature import org.locationtech.geomesa.features.kryo.json.JsonPathParser.{PathAttribute, PathAttributeWildCard, PathDeepScan, PathElement} import org.opengis.feature.simple.SimpleFeature import scala.util.control.NonFatal /** * Access values from a json-type string field. Syntax must start with '$.'. The first part of the path * selects the simple feature attribute, and the rest of the path selects within the json contained in * that attribute. * * Note: this class is optimized for `KryoBufferSimpleFeature`s. It will work on standard simple features, * but will incur a serialization cost. 
 */
object JsonPathPropertyAccessor extends PropertyAccessor {

  // cached references to parsed json path expressions
  private val paths = new java.util.concurrent.ConcurrentHashMap[String, SoftReference[Seq[PathElement]]]()
  private val pathConfig =
    Configuration.builder.options(ALWAYS_RETURN_LIST, DEFAULT_PATH_LEAF_TO_NULL, SUPPRESS_EXCEPTIONS).build()

  override def canHandle(obj: Any, xpath: String, target: Class[_]): Boolean = {
    val path = try { pathFor(xpath) } catch { case NonFatal(e) => Seq.empty }
    if (path.isEmpty) { false } else {
      // we know object is a simple feature due to factory method `createPropertyAccessor`
      val sft = obj.asInstanceOf[SimpleFeature].getFeatureType
      path.head match {
        case PathAttribute(name: String, _) =>
          val descriptor = sft.getDescriptor(name)
          descriptor != null && descriptor.getType.getBinding == classOf[String]

        case PathAttributeWildCard | PathDeepScan =>
          import scala.collection.JavaConversions._
          sft.getAttributeDescriptors.exists(_.getType.getBinding == classOf[String])

        case _ => false
      }
    }
  }

  override def get[T](obj: Any, xpath: String, target: Class[T]): T = {
    import org.locationtech.geomesa.utils.geotools.RichAttributeDescriptors.RichAttributeDescriptor

    val sft = obj.asInstanceOf[SimpleFeature].getFeatureType
    val path = pathFor(xpath)
    val attribute = path.head match {
      case PathAttribute(name: String, _) => sft.indexOf(name)
      case _ =>
        // we know it will be a wildcard due to canHandle
        // prioritize fields marked json over generic strings
        // note: will only match first json attribute if more than 1
        import scala.collection.JavaConversions._
        val i = sft.getAttributeDescriptors.indexWhere(_.isJson())
        if (i != -1) { i } else {
          sft.getAttributeDescriptors.indexWhere(_.getType.getBinding == classOf[String])
        }
    }

    val result = if (sft.getDescriptor(attribute).isJson() && obj.isInstanceOf[KryoBufferSimpleFeature]) {
      val input = obj.asInstanceOf[KryoBufferSimpleFeature].getInput(attribute)
      KryoJsonSerialization.deserialize(input, path.tail)
    } else {
      val json = obj.asInstanceOf[SimpleFeature].getAttribute(attribute).asInstanceOf[String]
      val list = JsonPath.using(pathConfig).parse(json).read[java.util.List[AnyRef]](JsonPathParser.print(path.tail))
      if (list == null || list.isEmpty) { null } else if (list.size == 1) { list.get(0) } else { list }
    }

    if (target == null) { result.asInstanceOf[T] } else { Converters.convert(result, target) }
  }

  override def set[T](obj: Any, xpath: String, value: T, target: Class[T]): Unit =
    throw new NotImplementedError()

  /**
    * Gets a parsed json path expression, using a cached value if available
    *
    * @param path path to parse
    * @return
    */
  private def pathFor(path: String): Seq[PathElement] = {
    val cached = paths.get(path) match {
      case null => null
      case c    => c.get
    }
    if (cached != null) { cached } else {
      val parsed = JsonPathParser.parse(path, report = false)
      paths.put(path, new SoftReference(parsed))
      parsed
    }
  }
}

class JsonPropertyAccessorFactory extends PropertyAccessorFactory {

  override def createPropertyAccessor(typ: Class[_], xpath: String, target: Class[_], hints: Hints): PropertyAccessor = {
    if (classOf[SimpleFeature].isAssignableFrom(typ) && xpath != null && xpath.startsWith("$.")) {
      JsonPathPropertyAccessor
    } else {
      null
    }
  }
}
[17-08-24]_[1910-17F]_HWSolutions01.pdf

HOMEWORK SOLUTIONS
MATH 1910, Fall 2017
Sections 5.3, 5.1, 5.2

Problem 5.3.15
Evaluate the indefinite integral $\int \left(18t^5 - 10t^4 - 28t\right) dt$.
Solution.
$$\int \left(18t^5 - 10t^4 - 28t\right) dt = 18\int t^5\,dt - 10\int t^4\,dt - 28\int t\,dt = 18\frac{t^6}{6} - 10\frac{t^5}{5} - 28\frac{t^2}{2} + C = 3t^6 - 2t^5 - 14t^2 + C$$

Problem 5.3.20
Evaluate the indefinite integral $\int \frac{dx}{x^{4/3}}$.
Solution.
$$\int \frac{dx}{x^{4/3}} = \int x^{-4/3}\,dx = \frac{x^{-1/3}}{-1/3} + C = -\frac{3}{x^{1/3}} + C$$

Problem 5.3.28
Evaluate the indefinite integral $\int \left(\theta + \sec^2\theta\right) d\theta$.
Solution.
$$\int \left(\theta + \sec^2\theta\right) d\theta = \frac{\theta^2}{2} + \tan\theta + C$$

Problem 5.3.35
Evaluate the indefinite integral $\int \sec 12t \tan 12t\, dt$.
Solution. Recall that $\frac{d}{dx}\sec x = \sec x \tan x$. Hence
$$\int \sec 12t \tan 12t\, dt = \frac{1}{12}\sec 12t + C$$

Problem 5.3.61
Solve the initial value problem $\frac{dy}{d\theta} = \cos\left(3\pi - \frac{1}{2}\theta\right)$, $y(3\pi) = 8$.
Solution. We have
$$y = \int \cos\left(3\pi - \frac{1}{2}\theta\right) d\theta = -2\sin\left(3\pi - \frac{1}{2}\theta\right) + C.$$
To find C, use the initial condition:
$$8 = y(3\pi) = -2\sin\left(3\pi - \frac{1}{2}\cdot 3\pi\right) + C = -2\sin\frac{3\pi}{2} + C = 2 + C.$$
It follows that $C = 6$, so $y = -2\sin\left(3\pi - \frac{1}{2}\theta\right) + 6$.

Problem 5.3.75
A mass oscillates at the end of a spring. Let $s(t)$ be the displacement of the mass from the equilibrium position at time $t$. Assuming that the mass is located at the origin at $t = 0$ and has velocity $v(t) = \sin(\pi t/2)$ m/s, state the differential equation satisfied by $s(t)$, and find $s(t)$.
Solution. Velocity is the derivative of position, so we have
$$s'(t) = \sin\left(\frac{\pi}{2}t\right), \qquad s(0) = 0.$$
To find $s(t)$ we follow the same procedure as in the previous problem:
$$s(t) = \int \sin\left(\frac{\pi}{2}t\right) dt = -\frac{2}{\pi}\cos\left(\frac{\pi}{2}t\right) + C.$$
To find C, use the initial condition: $0 = s(0) = -\frac{2}{\pi} + C$, so $C = \frac{2}{\pi}$ and
$$s(t) = -\frac{2}{\pi}\cos\left(\frac{\pi}{2}t\right) + \frac{2}{\pi}.$$

Problem 5.1.37
Rewrite and evaluate the sum $\sum_{n=51}^{150} n^2$.
Solution.
$$\sum_{n=51}^{150} n^2 = \sum_{n=1}^{150} n^2 - \sum_{n=1}^{50} n^2 = \frac{150(151)(301)}{6} - \frac{50(51)(101)}{6} = 1{,}136{,}275 - 42{,}925 = 1{,}093{,}350$$

Problem 5.1.38
Rewrite and evaluate the sum $\sum_{k=101}^{200} k^3$.
Solution.
$$\sum_{k=101}^{200} k^3 = \sum_{k=1}^{200} k^3 - \sum_{k=1}^{100} k^3 = \left(\frac{200(201)}{2}\right)^2 - \left(\frac{100(101)}{2}\right)^2 = 404{,}010{,}000 - 25{,}502{,}500 = 378{,}507{,}500$$

Problem 5.1.45
Evaluate the limit $\lim_{N\to\infty} \sum_{i=1}^{N} \frac{i^2 - i + 1}{N^3}$.
Solution.
$$\lim_{N\to\infty} \sum_{i=1}^{N} \frac{i^2 - i + 1}{N^3} = \lim_{N\to\infty} \frac{1}{N^3}\left(\sum_{i=1}^{N} i^2 - \sum_{i=1}^{N} i + \sum_{i=1}^{N} 1\right) = \lim_{N\to\infty} \left(\frac{N(N+1)(2N+1)}{6N^3} - \frac{N(N+1)}{2N^3} + \frac{N}{N^3}\right) = \frac{1}{3}$$

Problem 5.1.53
Show, for $f(x) = 3x^2 + 4x$ over $[0, 2]$, that
$$R_N = \frac{2}{N}\sum_{j=1}^{N} \left(\frac{12j^2}{N^2} + \frac{8j}{N}\right).$$
Then evaluate $\lim_{N\to\infty} R_N$.
Solution. Recall that for a function $f(x)$ on an interval $[a, b]$, the right-endpoint approximation $R_N$ is given by $R_N = \Delta x \sum_{j=1}^{N} f(a + j\Delta x)$, where $\Delta x = (b - a)/N$. Taking $f(x) = 3x^2 + 4x$, $a = 0$, and $b = 2$, we have $\Delta x = 2/N$ and
$$R_N = \frac{2}{N}\sum_{j=1}^{N} \left[3\left(\frac{2j}{N}\right)^2 + 4\left(\frac{2j}{N}\right)\right] = \frac{2}{N}\sum_{j=1}^{N} \left(\frac{12j^2}{N^2} + \frac{8j}{N}\right).$$
We can now compute the limit:
$$\lim_{N\to\infty} R_N = \lim_{N\to\infty} \left(\frac{24}{N^3}\sum_{j=1}^{N} j^2 + \frac{16}{N^2}\sum_{j=1}^{N} j\right) = \lim_{N\to\infty} \left(\frac{24}{N^3}\cdot\frac{N(N+1)(2N+1)}{6} + \frac{16}{N^2}\cdot\frac{N(N+1)}{2}\right) = 8 + 8 = 16.$$

Problem 5.1.73
Evaluate $\lim_{N\to\infty} \frac{1}{N}\sum_{j=1}^{N} \sqrt{1 - \left(\frac{j}{N}\right)^2}$ by interpreting it as the area of part of a familiar geometric figure.
Solution. Comparing the expression above to the formula for $R_N$, we see that the limit is equal to the area under $f(x) = \sqrt{1 - x^2}$ over $[0, 1]$. This region is the quarter of the unit disk $x^2 + y^2 \le 1$ that lies in the first quadrant. Thus the limit is a quarter of the area of the unit circle:
$$\lim_{N\to\infty} \frac{1}{N}\sum_{j=1}^{N} \sqrt{1 - \left(\frac{j}{N}\right)^2} = \frac{1}{4}\pi\cdot 1^2 = \frac{\pi}{4}.$$

Problem 5.2.8
Draw a graph of the signed area represented by the definite integral $\int_{-2}^{3} |x|\, dx$ and compute it using geometry.
Solution. The area consists of two right triangles, both above the axis, so the integral is the sum of their areas:
$$\int_{-2}^{3} |x|\, dx = \frac{1}{2}(2)(2) + \frac{1}{2}(3)(3) = 2 + \frac{9}{2} = \frac{13}{2}.$$

Problem 5.2.14
Let $f(x)$ be the function whose graph (Figure 14) consists of two semicircles: one of radius 1 lying below the x-axis over $[0, 2]$, and one of radius 2 lying above the x-axis over $[2, 6]$. Evaluate:
a) $\int_{1}^{4} f(x)\, dx$
b) $\int_{1}^{6} |f(x)|\, dx$
Solution.
a) The integral computes the signed area of half of the smaller semicircle and half of the larger semicircle:
$$\int_{1}^{4} f(x)\, dx = -\frac{1}{4}\pi\cdot 1^2 + \frac{1}{4}\pi\cdot 2^2 = \frac{3\pi}{4}.$$
b) The integral counts the unsigned area of half of the smaller semicircle and the entire larger semicircle:
$$\int_{1}^{6} |f(x)|\, dx = \frac{1}{4}\pi\cdot 1^2 + \frac{1}{2}\pi\cdot 2^2 = \frac{9\pi}{4}.$$

Problem 5.2.16
Find $a$, $b$, and $c$ so that $\int_{0}^{a} g(t)\, dt$ and $\int_{b}^{c} g(t)\, dt$ are as large as possible, where $g$ is the function graphed in Figure 15.
Solution. We want to take as much of the positive area as possible. This is achieved by taking $a = 4$, $b = 1$, and $c = 4$.

Problem 5.2.34
Use the properties of the integral and the formulas in the chapter summary to evaluate $\int_{-3}^{2} (4x + 7)\, dx$.
Solution.
$$\int_{-3}^{2} (4x + 7)\, dx = 4\int_{-3}^{2} x\, dx + 7(2 - (-3)) = 4\left(\frac{2^2}{2} - \frac{(-3)^2}{2}\right) + 35 = 4\left(2 - \frac{9}{2}\right) + 35 = -10 + 35 = 25.$$

Problem 5.2.37
Use the properties of the integral and the formulas in the chapter summary to evaluate $\int_{0}^{1} (u^2 - 2u)\, du$.
Solution.
$$\int_{0}^{1} (u^2 - 2u)\, du = \int_{0}^{1} u^2\, du - 2\int_{0}^{1} u\, du = \frac{1^3}{3} - 2\cdot\frac{1^2}{2} = \frac{1}{3} - 1 = -\frac{2}{3}.$$
Elizabeth has over 28 years' experience as a professional complementary practitioner, 19 of them working with children and young people with SEN. What makes her knowledge of reflexology and massage such a positive support to children is her family background: her nephew is autistic. She knows that a diagnosis does not define the person, and she works with understanding, patience and empathy, knowing the value of calm touch to the child and to the wider family. If needed, Elizabeth uses a variety of ways to communicate with her clients, from PECS to Makaton, from Social Stories to touch cues. She founded and ran an in-house clinic at a community school for children with autism, profound multiple learning disabilities and cerebral palsy, collaborating with the school nurse, doctor, key workers, teachers, physiotherapists and CAMHS to ensure each child received the best treatment possible. She has taken her skills into the wider community in a private practice, setting up courses teaching parents and SEN professionals how to use her techniques safely and effectively in a series of SEN Soothing Sequences and Tranquility Tips. Keen to spread the word, she has spoken at international conferences, The Children's Complementary Network Conference, parent support groups, and professional organisations including Pace, the National Autistic Society, PDNet, SEN Revolution, and the Association of Children's Hospices. She is regularly published in consumer and professional publications and is a specialist contributor to training materials for complementary practitioners.
Theo Bücker is a German footballer turned manager, born in July 1948 in Bestwig.

Biography
Statistics
Notes and references
External links

Born in July 1948
Born in Bestwig
German footballers
Borussia Dortmund players
MSV Duisburg players
Al-Ittihad Jeddah players
FC Schalke 04 players
German football managers
Kazma SC managers
Lebanon national football team managers
\section*{Acknowledgments}

A.M. acknowledges useful discussions on the IPR with Peter G.~Silvestrov. The work of W.B. has been supported in part by the DFG through Project A02 of SFB 1143 (Project-Id 247310070), by Nds.~QUANOMET, and by the National Science Foundation under Grant No.~NSF PHY-1748958. W.B.~also acknowledges the kind hospitality of the PSM, Dresden.
Milwaukee Bucks add the first of 6 Fiserv Forum signs to the outside of the new arena

James B. Nelson, Milwaukee Journal Sentinel | Published 3:12 p.m. CT Jan. 24, 2019 | Updated 11:10 a.m. CT Feb. 28, 2019

The Fiserv Forum signage is displayed on the new Milwaukee Bucks arena. (Photo: Rick Wood / Milwaukee Journal Sentinel)

The first Fiserv Forum sign — complete with the orange punctuation mark between the words — was installed this week over the entrance of the Milwaukee Bucks arena.

The Brookfield-based company's naming rights deal has been reflected inside the new arena, on the scoreboard and a huge display in the main atrium.

The new arena will have "a total of six substantial signs" with the Fiserv Forum name on the exterior, said Barry Baum, chief communications officer for the Bucks and the arena. That includes large signage expected to be installed on the roof, meant to be seen by television audiences worldwide for key games.

Fiserv has said the orange color on many of the signs is a company color and that the square between the words is part of Fiserv's branding and marketing.

The Bucks and financial services company Fiserv Inc. last summer announced a 25-year naming rights partnership. The financial terms have not been made public.

Built with $250 million in public funding, the $524 million arena opened in late summer.
Does anyone know how to calculate the standard deviation with this information alone? I know $\mathrm{RSD} = \frac{s}{\bar{x}} \times 100\%$.
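The formula can be inverted: if the mean $\bar{x}$ is also known, then $s = \frac{\mathrm{RSD}}{100}\,\bar{x}$. A minimal sketch (the RSD and mean values below are made-up illustration numbers):

```python
def std_from_rsd(rsd_percent: float, mean: float) -> float:
    """Recover the standard deviation s from the relative standard
    deviation RSD = (s / mean) * 100%."""
    return (rsd_percent / 100.0) * mean

# Example: an RSD of 5% with a sample mean of 20 implies s = 1.0
print(std_from_rsd(5.0, 20.0))  # → 1.0
```

Without the mean (or the raw data), the RSD alone does not determine s.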
Cybal Finance and Tax Services offers quality accounting services to clients in Edmonton and the surrounding areas. These accounting services include bookkeeping and tax preparation for individuals, sole proprietors, and small to medium corporations. Cybal Finance and Tax Services maintains complete sets of books, keeps records of accounts, verifies the procedures used for recording financial transactions, and provides personal accounting services.

There are many similarities across the accounting industry; however, this company provides outstanding service that retains clients. People do not tend to change accountants every year, since financial information is not easy, and is sometimes costly, to transfer from year to year between accounting companies. In addition, because of the confidential nature of income tax preparation, clients tend to remain with the same accountant.

Many competitors are larger establishments that charge about $75.00 per hour or more for their services. In addition, these larger firms are less flexible in meeting client needs and slower in adapting to changes in their markets. Furthermore, almost all of the competitors have office space and salary costs, resulting in higher overhead that must be met through higher sales volumes (client needs vs. cash flow) or higher customer rates. Cybal Finance and Tax Services is reasonable in the cost of its services, which often reflects a client's personal situation and involvement in preparation. Customer feedback reveals that the friendly, warm and personable manner in which clients are served is very valuable.

Cybal Finance & Tax Services is fully computerized, utilizing many different types of applications for communicating, reporting, and documenting. Experience in accounting for private companies, tax preparation and education helps to provide the range of services that Cybal Finance & Tax Services offers.
Services range from annual tax filing to monthly recordkeeping and reporting, including accounts payable and receivable, payroll reporting, and sales tax reporting. Beyond that, Cybal Finance & Tax Services offers tax, estate, and retirement planning. The challenge for this business is that in the first year of operation the owner will manage all aspects of the business, and time will be limited. The company offers convenient times to allow clients to seek assistance or service at home-based locations, and services can also be provided at a client's home or business. The company is flexible with time, as it will cater to clients' needs in the evenings during the week, including Saturday.

Alexandra Cybulsky is the President and owner of Cybal Finance and Tax Services. Alexandra was born in Europe, where she studied economics and received her 'Master in Trade'. She worked as an administrative clerk and accountant for small and mid-size businesses for 16 years. She was recognized for her dedication at work and for helping others in different companies. In 2006, she was motivated to start her seasonal business with personal taxes. Her business grew the next year to include accounting services for small businesses and entrepreneurs. Alexandra's advice: "Prepare to work hard and listen to your clients." She wants all small businesses in Edmonton to know Cybal Finance and Tax Services and the quality of work her company provides. She received her Accounting Diploma from NAIT in 1997. She also has experience in accounting software packages, which include Simply Accounting, Profile, Quick Books, Quick Tax and U-File. She is a member of the CGA and CMA Associations.
Q: Remove space at the end of a string using strip() or rstrip() in python

My goal is to read a csv file and then print the first 10 rows with one single space between the items. These are the tasks to do that:

* If the string has more than one word, then add "double quotes" around it.

The problem I am facing is that if it is a single string but with whitespace at the end, I am supposed to remove it. I tried strip and rstrip in python, but they don't seem to work. Following is the code for it:

    with open(accidents_csv, mode='r', encoding="utf-8") as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=",")
        count = 0
        for row in csv_reader:
            if count <= 10:
                new_row = [beautify_columns(item) for item in row]
                print(' '.join(new_row))
                count += 1

    def beautify_columns(col):
        col.strip()
        if (' ' in col):
            col = f'"{col}"'
        return col

The following image shows the current behavior of the code, without the trailing spaces removed. Kindly advise me how to remove spaces at the end of a string.

A: You have to assign the result of strip(), i.e.

    col = col.strip()

The only other thing to note is that strip() will remove whitespace (i.e. not just space characters) at the beginning as well as the end of the string.

A: Partially unrelated, but a csv.writer could natively meet your other requirements, because it will automatically quote fields containing a separator:

    with open(accidents_csv, mode='r', encoding="utf-8") as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=",")
        csv_writer = csv.writer(sys.stdout, delimiter=" ")
        for count, row in enumerate(csv_reader):
            new_row = [item.strip() for item in row]
            csv_writer.writerow(new_row)
            if count >= 9:
                break

As said by @barny, strip() will remove all whitespace characters, including "\r" or "\t". Use strip(' ') if you want to only remove space characters.
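To make the difference between the methods concrete, a quick standard-library-only sketch (the sample string is an arbitrary illustration):

```python
s = "  hello world \t\n"

print(repr(s.strip()))     # 'hello world'       - all whitespace, both ends
print(repr(s.rstrip()))    # '  hello world'     - right end only
print(repr(s.strip(' ')))  # 'hello world \t\n'  - only space characters, so
                           #   the trailing tab and newline survive
```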
News · 20 Jul 2018

Welsh Government invests in Cardiff renal services

How much has the Welsh Government invested in Cardiff renal dialysis services?

Renal dialysis services at the University Hospital of Wales, Cardiff, have received £1.2m to upgrade facilities.

Health Secretary for Wales Vaughan Gething said: 'Chronic kidney disease is estimated to affect 6-8% of the general population. With many of the contributing factors, age, obesity-related diabetes and coronary heart disease, expected to increase, the demand for renal services is set to grow… This investment will allow the unit to see more patients and better deal with those patients with more complex needs.'

The money will enable the health board to deliver more efficient treatment of patients with severe acute kidney injury, as well as other patients within UHW who require regular dialysis as inpatients.

Wales.gov: £1.2m to improve and expand renal services in Cardiff

Tags: Funding, Kidney disease, Policy, Renal services, Wales
Q: Convergence of $\int_2^\infty \left[\left(0.99 - \frac{1}{x^2}\right)^x\cdot\left(\frac{x-1}{x}\right)\right]dx$

I'm taking a calculus course and I was shown this improper integral in class. The professor told us that this integral converges and left the proof to us as a challenge. The thing is, I can't find a convergent series that could help me prove the integral converges, and the integral itself is a bit of a handful, so I couldn't integrate it directly either. This is the integral:
$$\int_2^\infty \left[\left(0.99 - \frac{1}{x^2}\right)^x\cdot\left(\frac{x-1}{x}\right)\right]dx$$
I would really appreciate a hint toward proving this. So far I have tried comparing it with other known integrals/series, rewriting the function as $e^{x\ln(0.99-\frac{1}{x^2}) + \ln(\frac{x-1}{x})}$ and then integrating it, and also differentiating $x\ln(0.99-\frac{1}{x^2}) + \ln(\frac{x-1}{x})$ to show that its limit is $-\infty$, but all these ways led me to a dead end. Thanks in advance to all who share their wisdom!

A: Since @Kavi Rama Murthy already provided all the elements, let me consider the more general case of $$I(\epsilon)=\int_2^\infty \left(1-\epsilon - \frac{1}{x^2}\right)^x\,\left(\frac{x-1}{x}\right)\,dx$$ where $\epsilon \ll 1$.
At the lower bound the integrand is already small $\left(\frac{(3-4 \epsilon )^2}{32}\right)$ Take the logarithm of the integrand and expand it as a series for large values of $x$ $$\log\Bigg[ \left(1-\epsilon - \frac{1}{x^2}\right)^x\,\left(\frac{x-1}{x}\right)\Bigg]=x\log(1-\epsilon)+O\left(\frac{1}{x}\right)$$ $$\left(1-\epsilon - \frac{1}{x^2}\right)^x\,\left(\frac{x-1}{x}\right)\sim (1-\epsilon)^x $$ So, $$I(\epsilon)\sim\int_2^\infty (1-\epsilon)^x \,dx=-\frac{(1-\epsilon )^2}{\log (1-\epsilon )}=\frac{1}{\epsilon }-\frac{5}{2}+\frac{23 \epsilon }{12}-\frac{3 \epsilon ^2}{8}+O\left(\epsilon ^3\right)$$ For $\epsilon=\frac 1{100}$ the truncated series would give, as an approximation, $$I\left(\frac{1}{100}\right)=\frac{23404591}{240000} \sim 97.5191$$ while the numerical integration would lead to $91.3737$. For $\epsilon=\frac 1{1000}$, the series would give $997.502$ while the numerical integration would lead to $986.852$.
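The quoted numerical value (≈ 91.37 for $\epsilon = 1/100$) can be reproduced with a simple quadrature sketch. The truncation point $x = 2000$ and the step count below are arbitrary choices, not from the answer; since the integrand decays roughly like $0.99^x$, the discarded tail is negligible:

```python
import math

def f(x: float) -> float:
    # the integrand with epsilon = 0.01
    return (0.99 - 1.0 / (x * x)) ** x * ((x - 1.0) / x)

def simpson(g, a: float, b: float, n: int) -> float:
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += g(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3.0

approx = simpson(f, 2.0, 2000.0, 200_000)
print(approx)  # close to the quoted 91.3737
```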
A hardware test engineer determines how to create a process for testing a particular product in order to assure that the product meets the applicable specifications. He designs test plans and test cases to validate new products and improve existing ones. The jobholder designs and runs tests and investigates whether the product is performing as expected. He has to ensure that all the test conditions are the same for the relevant products.

A candidate with a computer engineering degree and extensive knowledge of the Windows, Unix and Linux operating systems seeks the role of a hardware test engineer in a leading organization.
Romanu is a commune of Romania with 1,949 inhabitants, located in Brăila County, in the historical region of Muntenia. The commune is made up of 2 villages: Oancea and Romanu.

Other projects
External links

Communes of Brăila County
Contemporary theories and research informed by the Reggio Emilia approach recognise and value the environment as a 'third teacher'. After educators and families, physical spaces hold the potential to influence what and how children learn. Learning environments engage children and foster a sense of ownership and respect when they are aesthetically pleasing, reflect the identity and culture of children and families, and encourage a connection to place. As such, the physical environment is never simply a backdrop to the curriculum; it is an integral part of the curriculum or leisure-based program. An environment with rich, built-in learning opportunities also frees educators to interact with children.

...and a quiet fairy and gnome garden for us to explore in.
# Intensity (physics)

For other uses, see Intensity (disambiguation).

In physics, intensity is the power transferred per unit area, where the area is measured on the plane perpendicular to the direction of propagation of the energy.[1] In the SI system, it has units of watts per square metre (W/m²). It is used most frequently with waves (e.g. sound or light), in which case the average power transfer over one period of the wave is used. Intensity can be applied to other circumstances where energy is transferred. For example, one could calculate the intensity of the kinetic energy carried by drops of water from a garden sprinkler.

The word "intensity" as used here is not synonymous with "strength", "amplitude", "magnitude", or "level", as it sometimes is in colloquial speech.

Intensity can be found by taking the energy density (energy per unit volume) at a point in space and multiplying it by the velocity at which the energy is moving. The resulting vector has the units of power divided by area (i.e., surface power density).

## Mathematical description

If a point source is radiating energy in all directions (producing a spherical wave), and no energy is absorbed or scattered by the medium, then the intensity decreases in proportion to the square of the distance from the object. This is an example of the inverse-square law.

Applying the law of conservation of energy, if the net power emanating is constant,

$$P = \int \mathbf{I} \cdot \mathrm{d}\mathbf{A},$$

where $P$ is the net power radiated, $\mathbf{I}$ is the intensity as a function of position, and $\mathrm{d}\mathbf{A}$ is a differential element of a closed surface that contains the source.

If one integrates over a surface of uniform intensity $I$, for instance over a sphere centered around the point source, the equation becomes

$$P = |I| \cdot A_{\mathrm{surf}} = |I| \cdot 4\pi r^2,$$

where $I$ is the intensity at the surface of the sphere and $r$ is the radius of the sphere ($A_{\mathrm{surf}} = 4\pi r^2$ is the expression for the surface area of a sphere).

Solving for $I$ gives

$$|I| = \frac{P}{A_{\mathrm{surf}}} = \frac{P}{4\pi r^2}.$$

If the medium is damped, then the intensity drops off more quickly than the above equation suggests.

Anything that can transmit energy can have an intensity associated with it. For a monochromatic propagating wave, such as a plane wave or a Gaussian beam, if $E$ is the complex amplitude of the electric field, then the time-averaged energy density of the wave is given by

$$\langle U \rangle = \frac{n^2 \epsilon_0}{2} |E|^2,$$

and the local intensity is obtained by multiplying this expression by the wave velocity, $c/n$:

$$I = \frac{c\,n\,\epsilon_0}{2} |E|^2,$$

where $n$ is the refractive index, $c$ is the speed of light in vacuum and $\epsilon_0$ is the vacuum permittivity.

For non-monochromatic waves, the intensity contributions of different spectral components can simply be added. The treatment above does not hold for arbitrary electromagnetic fields. For example, an evanescent wave may have a finite electrical amplitude while not transferring any power. The intensity should then be defined as the magnitude of the Poynting vector.[2]

## Alternative definitions of "intensity"

In photometry and radiometry, intensity has a different meaning: it is the luminous or radiant power per unit solid angle. This can cause confusion in optics, where intensity can mean any of radiant intensity, luminous intensity or irradiance, depending on the background of the person using the term. Radiance is also sometimes called intensity, especially by astronomers and astrophysicists, and in heat transfer.

Table 1. SI photometry quantities

| Quantity | Symbol | Unit | Unit symbol | Dimension | Notes |
|---|---|---|---|---|---|
| Luminous energy | Qv | lumen second | lm⋅s | TJ | Units are sometimes called talbots. |
| Luminous flux / luminous power | Φv | lumen (= cd⋅sr) | lm | J | Luminous energy per unit time. |
| Luminous intensity | Iv | candela (= lm/sr) | cd | J | Luminous power per unit solid angle. |
| Luminance | Lv | candela per square metre | cd/m² | L⁻²J | Luminous power per unit solid angle per unit projected source area. Units are sometimes called nits. |
| Illuminance | Ev | lux (= lm/m²) | lx | L⁻²J | Luminous power incident on a surface. |
| Luminous exitance / luminous emittance | Mv | lux | lx | L⁻²J | Luminous power emitted from a surface. |
| Luminous exposure | Hv | lux second | lx⋅s | L⁻²TJ | |
| Luminous energy density | ωv | lumen second per cubic metre | lm⋅s⋅m⁻³ | L⁻³TJ | |
| Luminous efficacy | η | lumen per watt | lm/W | M⁻¹L⁻²T³J | Ratio of luminous flux to radiant flux or power consumption, depending on context. |
| Luminous efficiency / luminous coefficient | V | | | 1 | |

Notes on Table 1:
1. Standards organizations recommend that photometric quantities be denoted with a suffix "v" (for "visual") to avoid confusion with radiometric or photon quantities. For example: USA Standard Letter Symbols for Illuminating Engineering, USAS Z7.1-1967, Y10.18-1967.
2. Alternative symbols sometimes seen: W for luminous energy, P or F for luminous flux, and ρ or K for luminous efficacy.
3. "J" here is the symbol for the dimension of luminous intensity, not the symbol for the unit joule.

Table 2. SI radiometry quantities

| Quantity | Symbol | Unit | Unit symbol | Dimension | Notes |
|---|---|---|---|---|---|
| Radiant energy density | we | joule per cubic metre | J/m³ | ML⁻¹T⁻² | Radiant energy per unit volume. |
| Radiant flux | Φe | watt | W or J/s | ML²T⁻³ | Radiant energy emitted, reflected, transmitted or received, per unit time. This is sometimes also called "radiant power". |
| Spectral flux | Φe,ν or Φe,λ | watt per hertz or watt per metre | W/Hz or W/m | ML²T⁻² or MLT⁻³ | Radiant flux per unit frequency or wavelength. The latter is commonly measured in W⋅nm⁻¹. |
| Radiant intensity | Ie,Ω | watt per steradian | W/sr | ML²T⁻³ | Radiant flux emitted, reflected, transmitted or received, per unit solid angle. This is a directional quantity. |
| Spectral intensity | Ie,Ω,ν or Ie,Ω,λ | watt per steradian per hertz or watt per steradian per metre | W⋅sr⁻¹⋅Hz⁻¹ or W⋅sr⁻¹⋅m⁻¹ | ML²T⁻² or MLT⁻³ | Radiant intensity per unit frequency or wavelength. The latter is commonly measured in W⋅sr⁻¹⋅nm⁻¹. This is a directional quantity. |
| Radiance | Le,Ω | watt per steradian per square metre | W⋅sr⁻¹⋅m⁻² | MT⁻³ | Radiant flux emitted, reflected, transmitted or received by a surface, per unit solid angle per unit projected area. This is a directional quantity. This is sometimes also confusingly called "intensity". |
| Spectral radiance | Le,Ω,ν or Le,Ω,λ | watt per steradian per square metre per hertz or watt per steradian per square metre, per metre | W⋅sr⁻¹⋅m⁻²⋅Hz⁻¹ or W⋅sr⁻¹⋅m⁻³ | MT⁻² or ML⁻¹T⁻³ | Radiance of a surface per unit frequency or wavelength. The latter is commonly measured in W⋅sr⁻¹⋅m⁻²⋅nm⁻¹. This is a directional quantity. This is sometimes also confusingly called "spectral intensity". |
| Irradiance / flux density | Ee | watt per square metre | W/m² | MT⁻³ | Radiant flux received by a surface per unit area. This is sometimes also confusingly called "intensity". |
| Spectral irradiance / spectral flux density | Ee,ν or Ee,λ | watt per square metre per hertz or watt per square metre, per metre | W⋅m⁻²⋅Hz⁻¹ or W/m³ | MT⁻² or ML⁻¹T⁻³ | Irradiance of a surface per unit frequency or wavelength. This is sometimes also confusingly called "spectral intensity". Non-SI units of spectral flux density include the jansky (1 Jy = 10⁻²⁶ W⋅m⁻²⋅Hz⁻¹) and the solar flux unit (1 SFU = 10⁻²² W⋅m⁻²⋅Hz⁻¹ = 10⁴ Jy). |
| Radiosity | Je | watt per square metre | W/m² | MT⁻³ | Radiant flux leaving (emitted, reflected and transmitted by) a surface per unit area. This is sometimes also confusingly called "intensity". |
| Spectral radiosity | Je,ν or Je,λ | watt per square metre per hertz or watt per square metre, per metre | W⋅m⁻²⋅Hz⁻¹ or W/m³ | MT⁻² or ML⁻¹T⁻³ | Radiosity of a surface per unit frequency or wavelength. The latter is commonly measured in W⋅m⁻²⋅nm⁻¹. This is sometimes also confusingly called "spectral intensity". |
| Radiant exitance | Me | watt per square metre | W/m² | MT⁻³ | Radiant flux emitted by a surface per unit area. This is the emitted component of radiosity. "Radiant emittance" is an old term for this quantity. This is sometimes also confusingly called "intensity". |
| Spectral exitance | Me,ν or Me,λ | watt per square metre per hertz or watt per square metre, per metre | W⋅m⁻²⋅Hz⁻¹ or W/m³ | MT⁻² or ML⁻¹T⁻³ | Radiant exitance of a surface per unit frequency or wavelength. The latter is commonly measured in W⋅m⁻²⋅nm⁻¹. "Spectral emittance" is an old term for this quantity. This is sometimes also confusingly called "spectral intensity". |
| Radiant exposure | He | joule per square metre | J/m² | MT⁻² | Radiant energy received by a surface per unit area, or equivalently irradiance of a surface integrated over time of irradiation. This is sometimes also called "radiant fluence". |
| Spectral exposure | He,ν or He,λ | joule per square metre per hertz or joule per square metre, per metre | J⋅m⁻²⋅Hz⁻¹ or J/m³ | MT⁻¹ or ML⁻¹T⁻² | Radiant exposure of a surface per unit frequency or wavelength. The latter is commonly measured in J⋅m⁻²⋅nm⁻¹. This is sometimes also called "spectral fluence". |
| Hemispherical emissivity | ε | | | 1 | Radiant exitance of a surface, divided by that of a black body at the same temperature as that surface. |
| Spectral hemispherical emissivity | εν or ελ | | | 1 | Spectral exitance of a surface, divided by that of a black body at the same temperature as that surface. |
| Directional emissivity | εΩ | | | 1 | Radiance emitted by a surface, divided by that emitted by a black body at the same temperature as that surface. |
| Spectral directional emissivity | εΩ,ν or εΩ,λ | | | 1 | Spectral radiance emitted by a surface, divided by that of a black body at the same temperature as that surface. |
| Hemispherical absorptance | A | | | 1 | Radiant flux absorbed by a surface, divided by that received by that surface. This should not be confused with "absorbance". |
| Spectral hemispherical absorptance | Aν or Aλ | | | 1 | Spectral flux absorbed by a surface, divided by that received by that surface. This should not be confused with "spectral absorbance". |
| Directional absorptance | AΩ | | | 1 | Radiance absorbed by a surface, divided by the radiance incident onto that surface. This should not be confused with "absorbance". |
| Spectral directional absorptance | AΩ,ν or AΩ,λ | | | 1 | Spectral radiance absorbed by a surface, divided by the spectral radiance incident onto that surface. This should not be confused with "spectral absorbance". |
| Hemispherical reflectance | R | | | 1 | Radiant flux reflected by a surface, divided by that received by that surface. |
| Spectral hemispherical reflectance | Rν or Rλ | | | 1 | Spectral flux reflected by a surface, divided by that received by that surface. |
| Directional reflectance | RΩ | | | 1 | Radiance reflected by a surface, divided by that received by that surface. |
| Spectral directional reflectance | RΩ,ν or RΩ,λ | | | 1 | Spectral radiance reflected by a surface, divided by that received by that surface. |
| Hemispherical transmittance | T | | | 1 | Radiant flux transmitted by a surface, divided by that received by that surface. |
| Spectral hemispherical transmittance | Tν or Tλ | | | 1 | Spectral flux transmitted by a surface, divided by that received by that surface. |
| Directional transmittance | TΩ | | | 1 | Radiance transmitted by a surface, divided by that received by that surface. |
| Spectral directional transmittance | TΩ,ν or TΩ,λ | | | 1 | Spectral radiance transmitted by a surface, divided by that received by that surface. |
| Hemispherical attenuation coefficient | μ | reciprocal metre | m⁻¹ | L⁻¹ | Radiant flux absorbed and scattered by a volume per unit length, divided by that received by that volume. |
| Spectral hemispherical attenuation coefficient | μν or μλ | reciprocal metre | m⁻¹ | L⁻¹ | Spectral radiant flux absorbed and scattered by a volume per unit length, divided by that received by that volume. |
| Directional attenuation coefficient | μΩ | reciprocal metre | m⁻¹ | L⁻¹ | Radiance absorbed and scattered by a volume per unit length, divided by that received by that volume. |
| Spectral directional attenuation coefficient | μΩ,ν or μΩ,λ | reciprocal metre | m⁻¹ | L⁻¹ | Spectral radiance absorbed and scattered by a volume per unit length, divided by that received by that volume. |
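The inverse-square law and the plane-wave intensity formula above are easy to evaluate numerically. A minimal Python sketch (the function names are illustrative, not from any library; the constants are the standard CODATA values for c and ε₀):

```python
import math

C = 299_792_458.0        # speed of light in vacuum, m/s
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def intensity_point_source(power_w, r_m):
    """I = P / (4*pi*r^2): intensity (W/m^2) of an isotropic point source
    of power power_w (W) at distance r_m (m), in a lossless medium."""
    return power_w / (4.0 * math.pi * r_m ** 2)

def intensity_from_field(e_amp, n=1.0):
    """I = c*n*eps0*|E|^2 / 2: time-averaged intensity (W/m^2) of a
    monochromatic plane wave with electric-field amplitude e_amp (V/m)
    in a medium of refractive index n."""
    return 0.5 * C * n * EPS0 * abs(e_amp) ** 2

# Doubling the distance quarters the intensity (inverse-square law).
i1 = intensity_point_source(100.0, 1.0)  # 100 W source at 1 m
i2 = intensity_point_source(100.0, 2.0)  # same source at 2 m
print(i1 / i2)  # -> 4.0
```

Note that `intensity_point_source` assumes an undamped medium; as stated above, absorption or scattering makes the intensity fall off faster than 1/r².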
South Gloucestershire Council secures £25m to relieve pressure on school funding overspend, paving the way to improve services, accessibility and sustainability

The Government has announced relief of £25 million for South Gloucestershire Council against an historical overspend on supporting children and young people with special educational needs and disabilities (SEND), as part of a wider improvement plan. The funding will be delivered over the next four years, with the first instalment of £10.5 million this year tied to the implementation of plans to improve the quality and efficiency of the way we support pupils with SEND. Our SEND Strategy sets out how the council will work with both mainstream and special schools in the district, as well as parents and carers of young people with additional needs, to provide better services for those children. At the same time, we will address inefficiencies in our local arrangements for SEND that have led to the major overspend, realigning expenditure with targeted support to meet need and provide greater benefit for children in the longer term. The £25 million relates to a part of schools funding called the Dedicated Schools Grant (DSG). This is a grant provided to local authorities by the Department for Education, which consists of four blocks:

• The Schools block provides direct funding to schools and academies.
• The Central Schools Services block funds statutory duties of the council relating to schools.
• The High Needs block supports provision for children and young people with special educational needs and disabilities, Alternative Provision, and pre-16 pupils who, because of medical reasons, exclusion or other reasons, cannot receive their education in mainstream or special schools.
• The Early Years block provides funding to support Early Years education.

For many years, the council's expenditure on special educational needs has been significantly more each year than the funding available in the High Needs block, resulting in an end-of-year deficit, which will reach a total of £33.1 million by the end of the year. This deficit is rolled over to the following year and has continued to grow as the population of children with special educational needs, and therefore demand for services, has increased. South Gloucestershire currently has one of the highest cumulative High Needs deficits nationally. The council and local schools recognise that this position is unsustainable and have therefore been working together, with the support of the local Schools Forum, to devise a Deficit Recovery Plan. The plan's objectives are to support improved access to services from birth for children identified as having additional and/or special educational needs in early years; improve capacity for local specialist provision; and address inefficiencies in expenditure. The plan seeks to ensure that the limited financial resources support improvements in local arrangements by targeting resources more effectively according to need and more efficiently across the whole system. The aim is to improve local arrangements while at the same time ensuring that in future the expenditure each year does not exceed the funding available through the High Needs block. The council has also campaigned to have the historic deficit on SEND expenditure recognised as a national issue rather than a local one, and in recent years the Government has recognised that the cumulative deficit in some councils is now so high that it is prepared to provide financial support to address this in part.
This is conditional on councils being able to demonstrate that they have a deficit recovery plan which is deliverable because it has the support of the majority of schools and providers in the local area. Some councils have been invited to take part in the government "Safety Valve" process which provides access to funding to address the cumulative deficit built up over years. South Gloucestershire Council was invited to participate in the current round of the Safety Valve process and to demonstrate that we had a credible and deliverable Deficit Recovery Plan which had the support of the local Schools Forum. The Government has announced that South Glos will be included in the Safety Valve process with financial support of £25 million commencing with £10.5 million in the current financial year. At the same time as developing our Plan, the council has been lobbying Government for additional funding for our schools. This has involved working as part of the F40 group of lowest funded councils for schools and individually with the support of our local MPs. This has resulted in South Glos receiving proportionately higher increases in funding than the average council for school funding. South Gloucestershire Council's Cabinet Member for Education, Skills and Employment, Councillor Erica Williams, said: "This is excellent news. Without this injection of £25m the burden of recovering this element of our deficit would have fallen on our schools and the Council. "As a result of our demonstrating a robust plan to Government, I am pleased that in addition to this announcement, we have also received increased Government funding for schools in our area for the coming year. In 2022/23, schools funding for the district will rise by more than £12 million, of which £5 million will go directly towards helping pupils with SEND and helping us close the gap between what we spend on supporting them and the annual amount of funding we get. 
"There is still much work for us to do as we must stay on track with our Recovery Plan to continue receiving tranches of the £25m and the council is extremely grateful to all schools and partners who have worked together to devise the Plan which has helped us to secure this additional funding. "Through the Recovery Plan we will focus on establishing local arrangements to deliver excellent provision of services to all children with special educational needs from birth, responding effectively to meeting their needs, and to do this without exceeding the funding provided by the government each year for that purpose. "We are committed to raising school standards for all pupils, with and without additional needs, and the additional funding we have secured will help us to achieve that, along with our work to secure new schools where they are needed, improve existing school buildings and to deliver a step-change in the way we support those who need extra help." The Dedicated Schools Grant Safety Valve agreement between Government and South Gloucestershire Council is published here: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1062019/South_Gloucestershire_SV_agreement.pdf
Leading a dapper life comes with funny or interesting circumstances and events that not everyone can relate to. But I found most well-dressed gents out there experience similar things and actually, react to them quite similarly too. So I decided to put together a collection of gifs that can perfectly explain the life of the dapper man. Let's see how many of these you can relate to. That quick moment when you feel extra proud and confident when you realize how amazingly dapper you look in that suit. You know, that secret look in the mirror that no one but you will ever get to witness. Friends who understand your passion for the dapper style are few and far between, so it is always a pleasant surprise to find someone who just gets it like you do and doesn't think you are an overdressed, obsessive, unbalanced lunatic (to be fair, some of us fit the description perfectly). Meeting such a friend is certainly a good reason to raise your glass in celebration. Those of you into breaking new ground know how to dress up in a casual office environment effectively. Everyone else may be at your same level in the corporate hierarchy, but still, every Friday you feel like the true dapper suited boss surrounded by jeans and polo shirts. Who can forget that overwhelming excitement of feeling like a million bucks, rocking for the first time your very own pair of (ideally button) suspenders or braces with your suit? I admit, I kinda felt like this the first time. By the way, here's a simple post on how to wear suspenders with your suits. There are a few things for which a dapper man will never take a "no" for an answer. A tailor not being able to adjust your jacket could be one of those things you just cannot accept. Random tip: Remember jacket armholes are pretty much impossible for a tailor to alter, so shop wisely. Some will say you're just overreacting by feeling physically ill when a tie gets ruined. What do they know?
We have talked about it, even shared a few great responses you can use when someone asks you why you are always so dressed up, but sometimes you just have to put an end to the interrogation. Seriously, what's their point? I'm not saying any of us will end our lives because of the constant questioning about our style, but when the enquiries get out of control, isn't this how we would love to react? We all wish all ties could make a beautiful, perfect dimple every single time, but sadly, that is not always the case. It is disappointing, it is hard to acknowledge it, but you have to be a strong man, hold back the rage and the tears while trying to keep your cool. If there's a great predictor that this day will be a great day, a perfectly dimpled tie might as well be it. Seriously, what better way to start off a wonderful day than with an impressive dimple on your tie? Mr. Brosnan is 100% right. Despite the fact that bow ties enjoy a lot more acceptance these days, there will always be people who, based on common misconceptions about bow tie wearers, have a lame comment ready to criticize your choice of neckwear. Been there on oh so many occasions. I can tell you, you can try as hard as you can to hide it, but the excitement of finding a great sale is just overwhelming. And you know this isn't just about shoes, it is spot-on for any other dapper garment or accessory you've always wanted but couldn't afford. And finally, one that really resonates with me. Let's be honest. No matter how much you and I may try to convince the rest of the world to adopt a dressier style by sharing our so valuable knowledge and practical style tips, the truth is it just isn't going to happen. Wouldn't it be nice if it were just as simple as this gif? Do You Recognize Yourself In Any, Some Or All Of These Gifs? If so, please share in the comments below which of these gifs could've featured you! Also, do you have any submissions that could fit here?
Feel free to visit giphy, pick one and submit it in the comment section – the best ones will be added to the list with a shoutout to you! That's it for now. I hope you guys enjoyed this post. Stay dapper cause that's the only way I can come up with more content for this blog haha! As always Ed, great content, especially since I can relate to almost all of those gifs. Thank you, Thomas! For some reason your comments go directly to the Spam folder and I have to approve them manually, hopefully from now on your comments will be instantly approved – sorry about that! What type of activities are you doing online?! hahahah just kidding! Glad you could see yourself in some of these gifs. It's a bit exaggerated, but all in good fun. Stay classy, sir! So funny but true Ed thanks for sharing. Thank you, Ian! It takes a style expert like you to recognize all these daily happenings of dapper men!
# American Institute of Mathematical Sciences

Numerical Algebra, Control and Optimization, September 2018, 8(3): 377-387. doi: 10.3934/naco.2018024

## Multi-step spectral gradient methods with modified weak secant relation for large scale unconstrained optimization

1. Department of Mathematical and Actuarial Sciences, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Sungai Long Campus, Jalan Sungai Long 9, Bandar Sungai Long, 43000 Kajang, Selangor, Malaysia
2. Department of Mathematics, Faculty of Science, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
3. Institute for Mathematical Research, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia

* Corresponding author: Hong Seng Sim

Received May 2017; revised March 2018; published June 2018.

Fund project: the first author is supported by Yayasan Sultan Iskandar Johor 2014.

In this paper, we propose spectral gradient methods derived via a variational technique under the log-determinant norm. The spectral parameters satisfy modified weak secant relations inspired by the multistep approximation, for solving large-scale unconstrained optimization problems. An executable code is developed to compare the efficiency of the proposed methods with that of the spectral gradient method using the standard weak secant relation as the constraint. Numerical results are presented which suggest that better performance has been achieved.

Citation: Hong Seng Sim, Wah June Leong, Chuei Yee Chen, Siti Nur Iqmal Ibrahim. Multi-step spectral gradient methods with modified weak secant relation for large scale unconstrained optimization. Numerical Algebra, Control and Optimization, 2018, 8(3): 377-387. doi: 10.3934/naco.2018024

Figures: performance profiles comparing the modified multi-step spectral gradient methods with the standard multi-step spectral gradient method, in terms of number of iterations, number of function calls, and CPU time.
Journal of Industrial and Management Optimization, 2013, 9 (3) : 595-619. doi: 10.3934\/jimo.2013.9.595 [6] Yigui Ou, Haichan Lin. A class of accelerated conjugate-gradient-like methods based on a modified secant equation. Journal of Industrial and Management Optimization, 2020, 16 (3) : 1503-1518. doi: 10.3934\/jimo.2019013 [7] Shummin Nakayama, Yasushi Narushima, Hiroshi Yabe. Memoryless quasi-Newton methods based on spectral-scaling Broyden family for unconstrained optimization. Journal of Industrial and Management Optimization, 2019, 15 (4) : 1773-1793. doi: 10.3934\/jimo.2018122 [8] Sarra Delladji, Mohammed Belloufi, Badreddine Sellami. Behavior of the combination of PRP and HZ methods for unconstrained optimization. Numerical Algebra, Control and Optimization, 2021, 11 (3) : 377-389. doi: 10.3934\/naco.2020032 [9] Guanghui Zhou, Qin Ni, Meilan Zeng. A scaled conjugate gradient method with moving asymptotes for unconstrained optimization problems. Journal of Industrial and Management Optimization, 2017, 13 (2) : 595-608. doi: 10.3934\/jimo.2016034 [10] Predrag S. Stanimirovi\u0107, Branislav Ivanov, Haifeng Ma, Dijana Mosi\u0107. A survey of gradient methods for solving nonlinear optimization. Electronic Research Archive, 2020, 28 (4) : 1573-1624. doi: 10.3934\/era.2020115 [11] Mahmut \u00c7alik, Marcel Oliver. Weak solutions for generalized large-scale semigeostrophic equations. Communications on Pure and Applied Analysis, 2013, 12 (2) : 939-955. doi: 10.3934\/cpaa.2013.12.939 [12] Mohamed Aly Tawhid. Nonsmooth generalized complementarity as unconstrained optimization. Journal of Industrial and Management Optimization, 2010, 6 (2) : 411-423. doi: 10.3934\/jimo.2010.6.411 [13] Peter Benner, Ryan Lowe, Matthias Voigt. $\\mathcal{L}_{\u221e}$-norm computation for large-scale descriptor systems using structured iterative eigensolvers. Numerical Algebra, Control and Optimization, 2018, 8 (1) : 119-133. 
doi: 10.3934\/naco.2018007 [14] Ya Li, ShouQiang Du, YuanYuan Chen. Modified spectral PRP conjugate gradient method for solving tensor eigenvalue complementarity problems. Journal of Industrial and Management Optimization, 2022, 18 (1) : 157-172. doi: 10.3934\/jimo.2020147 [15] Liqun Qi, Shenglong Hu, Yanwei Xu. Spectral norm and nuclear norm of a third order tensor. Journal of Industrial and Management Optimization, 2022, 18 (2) : 1101-1113. doi: 10.3934\/jimo.2021010 [16] Gaohang Yu, Lutai Guan, Guoyin Li. Global convergence of modified Polak-Ribi\u00e8re-Polyak conjugate gradient methods with sufficient descent property. Journal of Industrial and Management Optimization, 2008, 4 (3) : 565-579. doi: 10.3934\/jimo.2008.4.565 [17] Yigui Ou, Xin Zhou. A modified scaled memoryless BFGS preconditioned conjugate gradient algorithm for nonsmooth convex optimization. Journal of Industrial and Management Optimization, 2018, 14 (2) : 785-801. doi: 10.3934\/jimo.2017075 [18] Junxiang Li, Yan Gao, Tao Dai, Chunming Ye, Qiang Su, Jiazhen Huo. Substitution secant\/finite difference method to large sparse minimax problems. Journal of Industrial and Management Optimization, 2014, 10 (2) : 637-663. doi: 10.3934\/jimo.2014.10.637 [19] Tengteng Yu, Xin-Wei Liu, Yu-Hong Dai, Jie Sun. Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization. Journal of Industrial and Management Optimization, 2021\u00a0 doi: 10.3934\/jimo.2021084 [20] Yuhong Dai, Ya-xiang Yuan. Analysis of monotone gradient methods. Journal of Industrial and Management Optimization, 2005, 1 (2) : 181-192. 
doi: 10.3934\/jimo.2005.1.181\n\nImpact Factor:","date":"2022-05-24 09:55:12","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5650268197059631, \"perplexity\": 6274.491533667912}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652662570051.62\/warc\/CC-MAIN-20220524075341-20220524105341-00386.warc.gz\"}"}
<?php
namespace Podlove\Modules\Logging;

use \Podlove\Log;
use \Podlove\Model;

class LogTable extends Model\Base {

	/**
	 * Only keep logs for 4 weeks.
	 */
	public static function cleanup() {
		global $wpdb;

		// Use a prepared statement for the cutoff timestamp.
		$wpdb->query(
			$wpdb->prepare(
				'DELETE FROM ' . LogTable::table_name() . ' WHERE time < %d',
				strtotime( "-4 weeks" )
			)
		);
	}
}

LogTable::property( 'id', 'INT NOT NULL AUTO_INCREMENT PRIMARY KEY' );
LogTable::property( 'channel', 'VARCHAR(255)' );
LogTable::property( 'level', 'INTEGER' );
LogTable::property( 'message', 'LONGTEXT' );
LogTable::property( 'context', 'LONGTEXT' );
LogTable::property( 'time', 'INTEGER UNSIGNED' );
Poulton YMCA is looking for a full-time Lifeguard to join our team. We are looking for an enthusiastic individual who has experience in lifeguarding and holds a current National Pool Lifeguard Qualification.

Key responsibilities:

- To interact with the public responsibly and welcome all users of the facilities, promoting a positive image of the facility by the provision of high-quality customer service.
- To comply with the Pool Safe Operating Procedures at all times.
- To maintain a vigilant watch of the swimming pool areas in accordance with the operating procedures and take necessary action to ensure the safety of all pool users and staff.
- To check the safety of equipment in areas of responsibility and report any damage or malfunction of equipment, plant or building fabric to the Duty Manager or General Manager immediately after discovery.
- To ensure that a consistently high level of cleanliness and hygiene is maintained throughout the facilities at all times.

The closing date for the role is Tuesday 9th October 2018 at 3:00pm. Applications submitted and/or received after this time will not be considered.
Q: Having trouble with manipulating a vector within a class

I need to define a vector of testmodes that I can then manipulate the values of through various methods. Here is how I define the TestMode class:

class TestMode {
public:
    TestMode(int val, int jamramaddr) {
        value = val;
        jamramaddress = jamramaddr;
    }
    int getAddr(void) { return jamramaddress; }
    void setValue(int val) { value = val; }
    int getValue(void) { return value; }

private:
    int value;
    int jamramaddress;
};

Pretty simple. I then have a TestModeGroup class to perform actions on the vector of testmodes that I have created. That class looks like this:

class TestModeGroup {
public:
    TestModeGroup(const std::vector<TestMode> &TestModes) {
        TestModeVector = TestModes;
    }

    // Compare the given jamramaddress against the known jamramaddress of the given
    // testmode. If it is a match then update the testmode value.
    void compareAndStore(TestMode &TM) {
        int TM_address = TM.getAddr();
        if (TM_address == JamRamAddress) {
            output("Match found! Old TM value %d", TM.getValue());
            TM.setValue(JamRamData);
            output("New TM value %d", TM.getValue());
        }
    }

    // Commit the given testmode to the jamram with the latest known value.
    void writeTmBitToJamRam(TestMode &TM) {
        JamRamAddress = TM.getAddr();
        JamRamData = TM.getValue();
        apg_jam_ram_set(JamRamAddress, JamRamData);
    }

    // Running TestModeGroupObject.store(address, data) will find which test mode that
    // jamram address is for and set the appropriate test mode value for later printing.
    // This is meant to be used in conjunction with the Excel spreadsheet method of
    // entering test modes.
    void store(int address, int data) {
        JamRamAddress = address;
        JamRamData = data;
        output("Current JamRamAddress = %d JamRamData = %d", JamRamAddress, JamRamData);
        apg_jam_ram_set(JamRamAddress, JamRamData);
        for (std::vector<TestMode>::iterator it = TestModeVector.begin(); it != TestModeVector.end(); ++it) {
            compareAndStore(*it);
        }
    }

    // Running TestModeGroupObject.load() will commit all test mode changes to the
    // jamram for test modes that are part of that object.
    void load(void) {
        for (std::vector<TestMode>::iterator it = TestModeVector.begin(); it != TestModeVector.end(); ++it) {
            writeTmBitToJamRam(*it);
        }
    }

    int getTMVal(TestMode &TM) { return TM.getValue(); }

private:
    int JamRamAddress;
    int JamRamData;
    std::vector<TestMode> TestModeVector;
};

Here's how I defined the vector:

TestMode adm_testmodes[] = {
    TM_TWINWL,TM_TWINBL,ON_2WL,ON_2BL,WV_S1X,WV_S0X,TM_PCHG_RH_3,TM_PCHG_RH_2,TM_PCHG_RH_1,TM_PCHG_RH_0,TM_PCHG_RH_BYP,TM_PCHG_SF_3,TM_PCHG_SF_2,TM_PCHG_SF_1,TM_PCHG_SF_0,TM_PCHG_SF_BYP,
    TM_PCHG_V04_3,TM_PCHG_V04_2,TM_PCHG_V04_1,TM_PCHG_V04_0,TM_PCHG_V04_BYP,TM_SA_DIS,TM_TS_NEGSLOPE,TM_TRIM_4,TM_TRIM_3,TM_TRIM_2,TM_TRIM_1,TM_TRIM_0,TM_TSSLP_2,TM_TSSLP_1,TM_TSSLP_0,
    TM_WRV_N_2,TM_WRV_N_1,TM_WRV_N_0,TM_SAGAIN_EN,TM_REFTRIM_EN,TM_READ_DONE_OPT_EN,EnableCore_Read,SA_4,TM_OC_2,TM_OC_1,TM_OC_0,TM_WRLC_4,TM_WRLC_3,TM_WRLC_2,TM_WRLC_1,TM_WRLC_0,
    TM_WRHC_4,TM_WRHC_3,TM_WRHC_2,TM_WRHC_1,TM_WRHC_0,TM_FTOP_3,TM_FTOP_2,TM_FTOP_1,TM_FTOP_0,TM_RISE_1,TM_RISE_0,TM_WRH_3,TM_WRH_2,TM_WRH_1,TM_WRH_0,TM_SET_4,TM_SET_3,TM_SET_2,TM_SET_1,TM_SET_0,
    TM_REFSTART,TM_REFSEL_1,TM_REFSEL_0,TM_REFSL_5,TM_REFSL_4,TM_REFSL_3,TM_REFSL_2,TM_REFSL_1,TM_REFSL_0,TM_REFSH_5,TM_REFSH_4,TM_REFSH_3,TM_REFSH_2,TM_REFSH_1,TM_REFSH_0,TM_READ_DONE_ADD,
    TM_READ_DONE_OPT,TM_READ_DONE_5,TM_READ_DONE_4,TM_READ_DONE_3,TM_SAGAIN_1,TM_SAGAIN_0,TM_REFR_5,TM_REFR_4,TM_REFR_3,TM_REFR_2,TM_REFR_1,TM_REFR_0
};
std::vector<TestMode> ADM_TMs
(adm_testmodes, adm_testmodes + sizeof(adm_testmodes) / sizeof(TestMode));
TestModeGroup ADM_TestModeGroup(ADM_TMs);

So far so good. I can directly access all of the TestModes that are defined, change the value and have that change persist everywhere. The problem comes when I try to run the "store" function within the TestModeGroup class. It seems that I have a local copy of the TestModes that gets updated, but not the original TestMode. I'm sure that this is a pretty simple problem, but I'm struggling; C++ is not my specialty, and OOP even less so. Here's a quick example for a dummy testmode I created:

output("DUMMY_TESTMODE Initial Value: %d", DUMMY_ADM_TestModeGroup.getTMVal(DUMMY_TESTMODE));
DUMMY_TESTMODE.setValue(1);
output("DUMMY_TESTMODE Set to 1 Value: %d", DUMMY_ADM_TestModeGroup.getTMVal(DUMMY_TESTMODE));
DUMMY_TESTMODE.setValue(0);
output("DUMMY_TESTMODE Set to 0 Value: %d", DUMMY_ADM_TestModeGroup.getTMVal(DUMMY_TESTMODE));
DUMMY_ADM_TestModeGroup.store(22, 1);
output("DUMMY_TESTMODE Set to 1 Value: %d", DUMMY_ADM_TestModeGroup.getTMVal(DUMMY_TESTMODE));

Calling .setValue works fine, but .store does not: the final printout shows 0, even though the printout inside .store shows the right value of 1. Somehow I think I am simply altering a copy of the original vector, but I just can't figure it out. I've been driving myself crazy and nobody I've talked to knows enough about C++ to help. Does anybody have any insight as to where I've screwed up?

A: Exactly right — your constructor copies the vector, so .store only updates the copy held inside TestModeGroup. Try changing the TestModeGroup class so it stores a reference to the caller's vector instead:

class TestModeGroup {
public:
    TestModeGroup(std::vector<TestMode> &TestModes)
        : TestModeVector(TestModes) {}
    // ...
private:
    std::vector<TestMode> &TestModeVector;
};

If you want vector modifications done inside the class to be applied to the original object passed to the constructor, you need to store a reference to that object. Two details to watch: the constructor parameter must also be a non-const reference (a non-const reference member cannot bind to a const reference), and the vector you pass in must outlive the TestModeGroup that refers to it. Let me know if this helped :)
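To see the difference in isolation, here is a minimal sketch (all names are invented for illustration, not from the question): CopyingGroup mirrors the original code and stores its own copy of the vector, while AliasingGroup applies the reference-member fix and mutates the caller's vector directly.

```cpp
#include <vector>

// Stores a private copy, like the original TestModeGroup: mutations stay local.
struct CopyingGroup {
    explicit CopyingGroup(const std::vector<int>& v) : data(v) {}  // copies v
    void set_first(int x) { data[0] = x; }  // changes the copy only
    std::vector<int> data;
};

// Stores a reference, like the fixed TestModeGroup: mutations reach the caller.
// Note the non-const parameter, and that v must outlive this object.
struct AliasingGroup {
    explicit AliasingGroup(std::vector<int>& v) : data(v) {}  // binds to v
    void set_first(int x) { data[0] = x; }  // changes the caller's vector
    std::vector<int>& data;
};
```

With `std::vector<int> original{0};`, calling `CopyingGroup(original).set_first(7)` leaves `original[0]` at 0, whereas `AliasingGroup(original).set_first(7)` sets it to 7.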
UAE operator partners with DEWA in customer happiness initiative

Etisalat and Dubai Electricity and Water Authority (DEWA) today signed a memorandum of understanding to jointly support customers with an innovative new mobile loyalty app focused on providing products and services for all DEWA's customers. The MoU was signed by Saleh Al Abdooli, Group Chief Executive Officer, Etisalat, and Saeed Al Tayer, Managing Director and CEO, DEWA. The initiative is in line with the National Happiness Agenda, which aims to make UAE the happiest of all countries, so that its citizens can feel proud of being part of such a nation. DEWA's MD and CEO, Saeed Al Tayer, said: "In line with our wise leadership's vision and guidance to improve government services, and to achieve the UAE Centennial 2071, which strives to position the UAE as the best country in the world, and Dubai as the happiest city in the world, we are delighted to add Etisalat Group to our list of companies that offer special deals and discounts to DEWA customers through our integrated DEWA Store. This agreement is part of DEWA's efforts to consolidate its position as part of every community member's life, catering to their needs and exceeding their expectations to ensure satisfaction by providing value-added services, and contributing to enhancing everyone's happiness." Saleh Al Abdooli, Group CEO of Etisalat, said: "We are delighted to be working in partnership with DEWA to promote and make the availability of Etisalat eLife and mobile products and services across a wider partner base. This will give DEWA and our customers the flexibility of purchasing our services at their convenience at more locations across the country.
"This partnership is in line with Etisalat's corporate strategy focused on 'Driving the Digital Future to empower societies' enabling our consumers with solutions and services on this digital journey. Etisalat works closely with its partners to provide a digital experience on various platforms empowered by our robust network and infrastructure across the country." Under the terms of the MoU, DEWA will integrate and make available Etisalat's eLife products, offers and services through its rewards program for its existing and new customers. Customers will be able to subscribe to eLife services directly through DEWA's new loyalty store as a one-stop shop. Many offers over and above eLife products will be included, such as discounts on home furniture, home electronics and removal companies. DEWA will promote its rewards program in all its marketing channels and subsequently, Etisalat and other partner offers participating in its rewards program. Etisalat will on its part promote DEWA's rewards program in all its channels, including its offers, as well as allow all of its subscribers to access DEWA's Rewards Program free of charge.
As of the first quarter of 2018, Android users had over 3 million apps to choose from, and the Apple Store featured 2 million apps. Of those, the average smartphone user has downloaded between 60 and 90 apps on their phone. They use 30 of those monthly, and launch nine of them daily. If you want your business to be a success, one of those apps should be yours.

Maybe you're worried about the cost of creating a small business app, or maybe you're not too technologically savvy. Not to worry, because Tech Treats is here to help. Now, in a single morning, you can create an amazing mobile app for your business, and you can do it with no coding knowledge whatsoever. We have a wide variety of templates to choose from, and you can build it yourself by simply clicking and dragging, copying and pasting…smiling all the way. You can even add artificial intelligence using an easy tutorial. And, of course, tech support is free, but you probably won't need us since our platform is incredibly intuitive. Creating a business app by yourself has never been easier.

Ready To Create A Small Business App Today?

The best part? You'll only pay for the app when you are happy with your business app and ready to publish it. If you're looking for more information about online app development, or you're ready to get started and develop your own app for your business, contact us online or by phone today!