Copyright 2002 Vault Inc. "Insider career network™" is a trademark of Vault Inc. For information about permission to reproduce selections from this book, contact Vault Inc., 150 W. 22nd St., 5th Floor, New York, New York 10011, (212) 366-4212. Library of Congress CIP Data is available. ISBN 1-58131-170-2. Printed in the United States of America.

ACKNOWLEDGEMENTS

Vault would like to acknowledge the assistance and support of Matt Doull, Ahmad Al-Khaled, Lee Black, Eric Ober, Hollinger Ventures, Tekbanc, New York City Investment Fund, American Lawyer Media, Globix, Hoover's, Glenn Fischer, Mark Fernandez, Ravi Mhatre, Carter Weiss, Ken Cron, Ed Somekh, Isidore Mayrock, Zahi Khouri, Sana Sabbagh and other Vault investors. Many thanks to our loving families and friends. Special thanks to Deborah Adeyanju and Evan Cohen. Thanks also to H.S. Hamadeh, Val Hadjiyski, Marcy Lerner, Chris Prior, Rob Schipano, Ed Shen, and Tyya N. Turner and the rest of the Vault staff for their support.

INTRODUCTION
Practice Makes Perfect
Fit Questions
Sample Fit Questions and Answers
Commonly Asked Basic Finance Questions

Skills for Sales & Trading
Sales & Trading Questions

RESEARCH/INVESTMENT MANAGEMENT
Skills for Research and Investment Management
Research Questions

FINANCE GLOSSARY

Visit the Vault Finance Career Channel, with insider firm profiles, message boards, the Vault Finance Job Board and more.

Do you have an interview coming up with a financial institution? Unsure how to handle a finance interview? This guide covers the major categories of finance interview questions:

Fit questions and general finance questions. These questions, found in this introduction, are commonly asked of all finance interview candidates. They are intended to test a jobseeker's basic fitness for a finance position in terms of temperament, interest in financial markets, and basic finance knowledge.

Corporate finance/M&A questions. These are the questions commonly faced in investment banking interviews, as well as in interviews for an internal corporate finance position. This section will also be helpful for those pursuing a career in commercial banking.

Sales & trading questions. These questions are applicable to both the sell-side and the buy-side.

Research/investment management questions. These are questions one might field on either the buy-side or the sell-side for a research position.

You may want to browse through more than one of these sections. If you are pursuing a position at a hedge fund, for example, you may find that your position entails some trading AND research. A general management program at an asset management firm or a rating agency might require some knowledge of all of the above subjects, and so forth. Also, we stress that these categories are basic groupings that reflect the likelihood of a question being asked in a specific type of interview; you may encounter any of these questions in any finance interview, depending on what financial product you're likely to be working with (fixed income vs. equity vs. derivatives, etc.) and how frisky your interviewer is feeling.
The vast majority of the questions in this guide are finance-related (technical) questions that you'd receive in an interview with a line professional. However, we stress that preparing for fit questions is vital; in some interviews, even with finance professionals, you may face a greater proportion of these so-called behavioral questions. Samples of these questions begin on the next page.

Fit Questions

Below are some of the most commonly asked fit questions, all of which you should think about before you go into your interviews.

1. Why did you choose to go to _____ college or university?
2. Why did you major in _____?
3. What was your overall GPA (if not on your resume)? What was your SAT/GMAT?
4. What courses did you do the best/worst in?
5. Tell me about your college/grad school experience.
6. What appeals to you about this position?
7. Why would you be a good choice for this position? Why should we hire you?
8. What do you think this position requires, and how well do you match those requirements?
9. Why did you leave your last position?
10. What did you learn about yourself at your last job?
11. Describe the most relevant and specific items in your background that show that you are qualified for this job.
12. What matters most to you in your next position?
13. Give me an example where you came up with a creative solution to a problem.
14. Give me an example where you successfully persuaded others to think or do what you wanted.
15. Give me an example where you sought out a problem to solve because it represented a challenge for you.
16. Give me examples of your leadership abilities.
17. Describe a project in which you went beyond what was expected of you.
18. What events have had the most significant impact on your life?
19. What motivates you?
20. What kind of activities do you enjoy?
21. Discuss something about yourself that I cannot learn from your resume.
22. Tell me about your reasons for selecting this industry.
23. What is it about our company that interests you?
24. What would be an ideal environment for you?
25. What would you do if you did not have to work for money? How does that relate to this job opportunity?
26. How do you define stress, and how do you manage it?
27. Describe your ideal job.
28. Give examples of how you have used your greatest skills.
29. What is your major weakness?
30. What have been your major successes and accomplishments? How did you achieve these?
31. What were your failures, and what did you learn from them?
32. What role do you usually take in a team?
33. Do you have any questions for me?
34. Tell me about your biggest regret.

Because the answers to these types of questions will vary depending on the person, we've focused on answers to technical questions in this guide. However, you will find some sample answers to behavioral questions later in this guide. We do suggest that you write out answers to at least some of the above questions, as well as to the questions contained later on in this book. While you do not necessarily need to type up answers as long as the sample fit answers you'll see later, you should be able to tailor your responses to your background. Looking over your own answers to typical questions will prove helpful before an interview. We have all walked out of interviews thinking "God! Why didn't I say ________ when s/he asked ____!" Thinking about potential questions before interviews will make you seem less nervous and more polished, and help you land the finance job of your dreams.

1. Why do you want to work here?

This question is designed to demonstrate how much research you have done on the firm, as well as to see if you might be a good fit.
To get further information about a particular firm, you should read recent press stories, visit its web page, and read the Vault guide written about it. This answer should be based on your actual reasons; you don't want to get caught in a lie. You should still manage to show that you know a bit about the firm, its people, its culture, and its specialties in your answer. For example, you might want to emphasize your desire for a strong team mentality at virtually all of the banks (but especially Goldman). If you are interviewing at a firm where entry-level financial analyst and associate-level hires go through a rotation program before getting placed, you might want to emphasize that you like the fact that one can see more than one area before a final decision is made. (Note: for summer internships, some firms will rotate you through two areas of the banking department.)

Other things to know and weave into your answer include: Is the firm small and ostensibly hoping to stay small, or trying to get bigger? Is the structure flat, with few layers of management, or are there several titles between analyst and managing director? Is the firm part of a commercial bank, or is it a pure brokerage and investment bank? If you are interviewing for an internal corporate finance position, do you have to (or can you) rotate through various finance and/or non-financial parts of the business (marketing, sales, etc.)?

Most important, you should emphasize the people. Many banking professionals maintain that things are the same no matter where you work, but the people you work with can have very different personalities. You should have met at least three people whose names and titles you can recite at the interview; five to ten would be even better, even if they were not all in the corporate finance or M&A departments.
You should discuss why you like the people you've met and why this makes you want to become part of the team. It is good to talk about the firm's culture, but not okay to blatantly state that you want to work for a prestigious firm (for reasons similar to why you should not discuss wanting to make money).

2. What skills can you bring to the table?

Your answer should match the desired skills mentioned above. If you have no financial or analytical background, discuss any accounting, finance or economics courses you have taken, or ways in which you have analyzed problems at school or in past jobs. Talk about any personal investing you have done through E*Trade or Schwab. Emphasize any activities you have participated in that involved a great deal of dedication and endurance. (Have you run a marathon, played a sport, served in the Peace Corps or the military, or trained for years to be a top ballerina?)

3. What about you might be a disadvantage at this firm?

This is a variation of the old weakness question. You should find a weakness that you can turn into a positive. For example: driving yourself too hard, or putting the needs of others before your own too often.

4. At what other firms are you looking?

This is another key question. Even if you are looking at every major Wall Street firm, and a few minor ones, your interviewer wants to hear that you are focused, and they hear this when you (truthfully) state that you are talking to similar firms. For example, Morgan Stanley probably wants to hear that you are talking to Goldman or Merrill (bulge-bracket). Merrill probably wants to hear Morgan Stanley, Goldman, or Citigroup/SSB. Bear would want to hear Lehman, CSFB, or Citigroup/SSB (similar cultures at CSFB and SSB, similar smaller-firm feel at Lehman). Bank of America sees itself as the next Citigroup, and so forth.
There is no correct answer, since every interviewer is different. However, if you tell Goldman that you are interviewing only at Goldman and Bear Stearns but are not interested in Morgan Stanley, or you tell Citigroup/SSB that it is between them and Lazard, your interviewer may look at you askance. That said, the more wanted you are by other firms, the more desirable you will appear to your interviewer. If you are interviewing at 12 firms, all else being equal your interviewer will take more notice than if you are interviewing at just two. If a prestigious firm seems interested in you, by all means let this be known. This also improves your cachet. Do not lie about any of this, however, since recruiters do talk to each other and you may end up blackballed across the Street. If you are interviewing both at the banks and for internal finance jobs, you may want to mention this to the banks while making clear that banking is your top choice, and vice versa. Your first choice is always to be wherever you are interviewing. Always.

5. How would you say our firm compares to these others: _____?

This is designed to show your overall knowledge of the industry. You should demonstrate how much you know about the firm you are interviewing with and its competition, without insulting or being overly critical of another firm. Badmouthing another bank is considered poor form.

6. What are the major criteria that you will use to select an employer?

This should match your response to the "Why have you chosen this firm?" question.

7. Where do you expect to be in 5 years? In 10 years?

This question can be asked in any interview, but the interviewer is looking for you to show that you have a genuine interest in the markets and research. Thus, stating that you want to be a top analyst or strategist or a managing director in five years shows ambition, and saying that you may want to start your own hedge fund in 10 to 15 years is not out of line.
Saying that you hope to make a quick million and then become a filmmaker does not sound so good.

1. What do you think is going to happen with interest rates over the next six months?

This is another way of asking "What has the market been doing? What do you think the market will do in the coming 12 months?" If you have been reading The Wall Street Journal, The Economist, analyst reports, etc., this should not pose a problem. If not, start reading them today.

2. What is a bulge bracket firm?

"Bulge bracket" is a term that loosely translates into the largest full-service brokerages/investment banks as measured by various league table standings. Goldman Sachs, Morgan Stanley, and Merrill Lynch are considered the ultimate examples (sometimes called the "Super Bulge Bracket"). Of late, Citigroup/Salomon Smith Barney, CSFB and, increasingly, J.P. Morgan Chase are considered to have joined the U.S. bulge bracket. Globally, J.P. Morgan Chase, Deutsche Bank and UBS Warburg/PaineWebber are typically thrown in with the U.S. top five to form the so-called Global Bulge Bracket. (Outside of the U.S., Deutsche Bank, J.P. Morgan and UBS frequently outrank Goldman in the league tables, for example.)

If you are at a bulge bracket firm, you believe that only the very largest firms and niche firms will survive over the next few years. If you are applying to a bulge bracket aspirant (DB and UBS for a U.S.-based position; Bank of America, Lehman, Bear, ABN Amro, DKW, or BNP Paribas globally), you want to demonstrate your knowledge of how the firm at which you are interviewing is moving up various league tables and will soon join the ranks of the Global Bulge Bracket. Or how the firm is essentially already a bulge bracket firm in many areas. Or how you want to be part of a firm with room for growth.
If you are interviewing with a boutique or regional firm (Lazard, TWP or Jefferies, at the time of publication), you should emphasize your belief that firms able to carve out a niche and build strong relationships will survive and even thrive.

3. How do you stay on top of the markets?

You want to demonstrate that you read the key publications (and you should). The Wall Street Journal, the Financial Times, The Economist, and BusinessWeek should be on your reading list. You should watch CNBC, Bloomberg Television, and CNNfn. You will get bonus points for reading analysts' research reports (especially those of the firm at which you are interviewing).

4. Where and what is the Dow? Where are the 1-year, 5-year, and 10-year Treasury yields? What is the price of gold? Where is the S&P 500? Where is the U.S. trade deficit?

They really do ask these sorts of questions, especially of people from nonfinancial backgrounds. You should keep track of these and other key financial numbers on at least a weekly basis. While you do not have to be exact, if you say that the NASDAQ is around 12,000 and the Nikkei is about 400, your attempt to convince Merrill that you are really interested in global finance will fall short.

5. What is unique about the U.S. Treasury market vs. the rest of the debt market?

The U.S. federal government's bonds are considered riskless, since the U.S. has never defaulted and is the world's strongest economy. All other bonds trade at, and are quoted at, a certain spread (in percentage points or basis points) over Treasuries (except in the case of a few other AAA-rated countries like France or the U.K.).

6. What is junk?

Called high-yield bonds by the investment banks (never call it junk yourself), these bonds are below investment grade and are generally unsecured debt. Below investment grade means at or below BB (by Standard & Poor's) or Ba (by Moody's).
Some less creditworthy companies issue debt at high yields because they have difficulty securing bank debt or tapping the equity markets. Sometimes high-yield debt starts out investment grade and then crosses over to high yield. (Think of Kmart or the Gap, which had their ratings lowered in 2002.) Bonds from extremely high credit risk companies, like Enron in early 2002, are categorized as distressed debt.

7. Tell me what the repeal of Glass-Steagall means to me as a capital markets participant.

This sort of question is aimed at finding out how much in-depth market knowledge you have. If you claim that you have always followed or have always been interested in the markets, but can't answer a question along these lines, you may be in trouble. What is commonly referred to as the Glass-Steagall law is actually the Banking Act of 1933, which erected a wall between commercial banking and securities/brokerage. Commercial banking and insurance were separated by the Bank Holding Company Act of 1956. The Gramm-Leach-Bliley Act repealed these laws in 1999.

What the repeal has done is pave the way in the U.S. for so-called universal banks, what Europeans sometimes call "bancassurance" firms. While the Europeans have always allowed such firms to exist, the U.S. (until 1999) and the Japanese have forbidden them. Examples of truly universal banks (investment banks as well as insurance companies and full-fledged commercial banks) include Citigroup/SSB, Credit Suisse/CSFB, Allianz/Dresdner Kleinwort Wasserstein, and ABN AMRO. Firms that have both investment and commercial banks include J.P. Morgan Chase, Bank of America, Deutsche Bank, and UBS. Goldman, Merrill, Morgan Stanley, Lehman, and Bear are still pure brokerage firms, for the most part.
Many believe the recent consolidation wave (Travelers/Citibank, Dresdner/Wasserstein/Allianz, Credit Suisse/DLJ, UBS/PaineWebber, J.P. Morgan/Chase) will inevitably result in these last few holdouts merging with a large commercial bank and/or an insurance company. Many believe that having a large balance sheet and numerous corporate banking relationships will increasingly allow universal banks to use loans and their other relationships to gain greater market share in higher-margin areas like M&A and underwriting. Indeed, Citibank and, to a lesser extent, J.P. Morgan and Bank of America have moved up in several league tables. ABN AMRO and Allianz/Dresdner have slipped by some measures, and the jury is still out on how much being a full-service firm has helped some others. The results are similarly mixed for the pure brokers, though many point to the huge losses taken by Citigroup and J.P. Morgan Chase on loans to Enron and Global Crossing as proof that a balance sheet does not always help. Some have speculated that J.P. Morgan may lose more on bad loans to Enron than it has made in investment banking fees for that client and several others combined. In any event, J.P. Morgan's CEO recently acknowledged that such a strategy is risky. The bottom line is: if you are talking to a pure brokerage firm like Goldman, you want to spell out the threat from universal banks, but stress that pure brokerage can and will succeed. If you are at Bank of America, you believe universal banks are the wave of the future.

8. Tell me three major investment banking industry trends and describe them briefly.

Here are four possible answers:

I. Consolidation: More firms are teaming up. Examples include Citibank and Travelers/SSB, Morgan Stanley/Dean Witter, Deutsche Banc/Alex.
Brown, BNP/Paribas, Dresdner/Wasserstein/Allianz, Credit Suisse/DLJ, UBS/PaineWebber, Merrill Lynch and brokerages in the U.K., Canada, Australia, Japan, Spain, and South Africa, and J.P. Morgan and Chase, all driven largely by the need to increase capital bases and geographic reach.

II. Expansion in Europe: More U.S. firms see the ending of corporate cross-holdings and the increasing use of capital markets to raise financing, along with pension reform, as leading to greater growth opportunities for their European-based businesses.

III. Technology: Increasingly, firms are using ECNs (Electronic Communication Networks, like Archipelago, Island or Instinet) to route and execute trades. Even in traditional forms of trading, technology is lowering costs, but it is simultaneously lowering margins and commoditizing many markets. In addition, increasingly sophisticated derivative and risk management products and wider distribution of information have been made possible by recent advances in computing and telecommunications technology.

IV. Demographic shift: The baby boomers in all of the advanced industrial countries are nearing retirement. Simultaneously, the boomers' parents and grandparents will leave their estates to their children and grandchildren, leading to the single greatest inter-generational transfer of wealth the world has ever seen. Over the next few decades there should therefore be a sharp rise, around the world, in the demand for investment services and products to support these boomers through their retirement years.

9. What is a hedge fund?

Hedge funds are loosely regulated investment pools (they are limited partnerships). They are generally open only to the wealthy or to institutions. Hedge funds use many strategies to hedge against risk, with the goal of making a profit in any market environment. Hedge funds may short stock, use leverage, options, or futures, or employ a risk arbitrage strategy, among other things. Hedge funds, though, do not always hedge or sell short.
Some funds had virtually all of their money long during the bull market of the late nineties, for example. What unifies hedge funds is the fact that, unlike mutual funds, they can invest in whatever they please (as long as it is legal) and do not have to issue prospectuses or follow the other limits and regulations that mutual fund managers must. In addition, they usually charge much higher fees than traditional fund managers. Finally, they are limited to fewer than 500 or 100 investors (depending on how they are structured), whereas a mutual fund can have thousands of investors. While hedge funds usually have less under management than a traditional institutional investor, the fact that they trade relatively often makes them valuable customers for brokerage firms.

10. Why do we care about housing starts?

The housing industry accounts for over 25% of investment spending in the U.S. and approximately 5% of U.S. GDP. The housing starts figure is considered a leading indicator: housing starts rise before an economic uptick and decline before a slowdown.

11. What has the market been doing? Why? What do you think the market will do in the coming 12 months?

If you have been reading The Wall Street Journal, The Economist, analyst reports, etc., this should not pose a problem.

12. What is the difference between senior and junior bondholders?

Senior bondholders get paid first (and as a result their bonds pay a lower rate of interest, all else being equal). The order in which creditors get paid in the case of bankruptcy is generally: commercial debts (vendors), mortgage lenders, other bank lenders, senior secured bondholders, subordinated (junior) secured bondholders, debenture (unsecured) holders, preferred stockholders, and finally straight equity (common stockholders).

13.
What is the best story you read this week in The Wall Street Journal?

This question can ruin an otherwise great interview. The interviewer is trying to find out whether you read more than just the front page of the Journal, and whether you read it fairly regularly. It does not have to be a story that shows your depth of knowledge about the market; it could be a human-interest story. If you don't remember a recent WSJ story, try recounting a BusinessWeek, Economist, or even a CNBC story.

14. Tell me about some stocks you follow. Should I buy any of them?

This question often comes up in sales & trading or research interviews (often posed as "Sell me a stock"), but it can also come up in banking interviews to test your general market knowledge. You may find that as you begin to talk about Viacom or GM, the interviewer will interrupt you and ask for a small-cap or non-U.S. name instead. Your best bet is to be prepared with knowledge of at least four varied companies: a large-cap U.S. company, a small-cap U.S. company, a non-U.S. company, and a short-sell pitch (a stock you would recommend an investor sell rather than buy).

You should try to read a few analysts' reports and press stories on your companies. At the very least you should know the name, the ticker symbol, the CEO's name, a brief description of the company's line of business, and three points supporting your argument (if you feel strongly that one should buy or sell). You should also know who (if anyone) covers the stock at the firm you are interviewing with, and their rating. You should be able to recite (if asked) the basic valuation metrics (P/E, growth rate, etc.). You should also be prepared to answer common criticisms of your pitch (if you believe that one should buy or sell the stock). ("Isn't GM in an industry facing overcapacity?"
"Yes, but according to your firm's auto analyst, management has succeeded in streamlining costs and increasing profitability...", etc.) The point is not to be correct or to agree with the interviewer or their firm's analyst, but to be persuasive and to demonstrate your knowledge of the markets. Be prepared. Be very, very prepared.

Do you have the wherewithal to work at 100% for 80-100+ hours a week for weeks on end, even at the expense of your personal life? Are you capable of assessing business line and divisional results versus a (your) company's targets? Can you evaluate a (your) firm's relative position? Will you be able to analyze and quickly understand single-company information and industry-wide issues, and how they might be affected by macroeconomic trends? Are you good at building ongoing relationships with, and getting information from, company management, research analysts, capital markets professionals, lawyers, (other firms') bankers and others with whom you will have regular contact? Can you alternate between being courteous, professional, gregarious, and sycophantic with all types of people, from secretaries to CEOs? Are you very good at financial modeling and valuation, especially when using Excel? Do you understand various valuation methods and procedures (discounted cash flow techniques, WACC, free cash flow, comparable analysis, sensitivity analysis, etc.)? Are you an expert at accounting? Can you quickly analyze financial statements quantitatively and qualitatively? Will you be able to accurately project earnings, cash flow statement and balance sheet trends?
Do you have exceptional presentation, selling and marketing skills in both formal and informal settings? Will you eventually be able to bring in new business to the firm? Are you a stickler for detail and extremely well organized? Are you personally passionate about the market?

Below are some specific questions asked during past corporate finance/M&A interviews, along with possible answers.

Even so, a banker will likely gain a better bird's-eye view of AOL Time Warner's finances while working on a bond offering than one of AOL's home video division financial analysts will. Conversely, the banker will probably never learn as much about DVD sales. Similarly, someone working in AOL's treasury department will gain an even greater understanding of the company's finances than either the home video financial analyst or the banker. However, the treasury department employee will likely also know less about DVD sales than the analyst, and less about the issues facing other media and telecommunications companies than the banker.

One internal finance professional might help determine how to deploy the company's capital going forward. (For example: maybe the company should try to increase sales in one country, exit one product line altogether, and buy a company in an entirely new business.) You might then set a plan against which one could later judge the company's performance. Another might act as a sort of internal corporate and investment banker, structuring major financing deals for IBM customers and then syndicating these deals in much the same way Citigroup or Bank of America would a loan. A fourth might work in IBM's treasury department and assist in managing the company's cash.

Risk, rewards and lifestyle: It is no big secret that investment bankers put in long hours and frequently must put their personal lives on hold ("The client doesn't care that you have a wedding to go to. This merger needs to go through while the stock is still near its high."). Bankers may work 80 to 100 hours a week or travel on a regular basis. Does it equal out?
That depends on your priorities. It is not surprising that, given the level of endurance and commitment one needs to be an investment banker, those in internal finance positions are more likely to have come from banking than the other way around (again, particularly at the junior and mid-level). On the other hand, spending any time as a junior banker might be too high a price for some.

3. Let's say retail sales figures just came out, and they were far below what economists were expecting. What will this do to stock prices and the strength of the dollar?

Bad news might drive the market lower, but if interest rates have been relatively high, such news may lead the Street to expect the Federal Reserve to ease monetary policy, which actually may be bullish for the stock market. Since bad economic news usually leads to the Fed easing interest rates, the dollar will weaken versus most leading foreign currencies, and U.S. companies may benefit, all else being equal.

4. How do you value a bond?

The value of a bond is derived from the present value of the expected payments (cash flows) from the bond, discounted at an interest rate that reflects the default risk associated with those cash flows. The formula for the present value of a bond is:

PV of a Bond = Sum (t = 1 to N) of [ Coupon(t) / (1 + r)^t ] + Face Value / (1 + r)^N

where Coupon(t) = the coupon expected in period t, Face Value = the face value of the bond, N = the number of periods (usually years or half-years), and r = the discount rate for the cash flows.

For example, let us say that you had an 8% coupon, 30-year maturity bond with a par (face) value of $1,000 that pays its coupons twice a year. If the interest rate on the bond has changed from the coupon rate (it is now 10%, or 5% per half-year period), one would value the bond over its 60 semiannual periods thusly:

PV = Sum (t = 1 to 60) of [ 40 / (1.05)^t ] + 1,000 / (1.05)^60

The equation can be written out as $40 x Annuity Factor (5%, 60) + $1,000 x PV Factor (5%, 60) = $757.17 + $53.54 = $810.71.
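The arithmetic above is easy to check with a short script. The sketch below (the function and variable names are ours, not from the guide) prices the bond using the same annuity-factor formula, and also backs out the yield from a market price by the trial-and-error approach, here automated with bisection:

```python
def bond_price(face, coupon_rate, periods_per_year, years, market_rate):
    """Present value of a bond: discounted coupons plus discounted face value."""
    n = periods_per_year * years                 # total number of periods
    c = face * coupon_rate / periods_per_year    # coupon paid each period
    r = market_rate / periods_per_year           # discount rate per period
    annuity = (1 - (1 + r) ** -n) / r            # annuity factor for the coupons
    return c * annuity + face * (1 + r) ** -n

def yield_to_maturity(price, face, coupon_rate, periods_per_year, years):
    """Find the annual yield that equates the bond's PV to its market price."""
    lo, hi = 1e-9, 1.0                           # bracket the yield between ~0% and 100%
    while hi - lo > 1e-10:
        mid = (lo + hi) / 2
        # Price falls as yield rises: if the model price is still above the
        # market price, the yield must be higher than mid.
        if bond_price(face, coupon_rate, periods_per_year, years, mid) > price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The guide's example: 8% coupon paid semiannually, 30 years, 10% market yield
print(round(bond_price(1000, 0.08, 2, 30, 0.10), 2))   # 810.71
print(round(yield_to_maturity(810.71, 1000, 0.08, 2, 30), 4))
```

The bisection solver works because bond price is strictly decreasing in yield; financial calculators and spreadsheet IRR functions do essentially the same search internally.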
These calculations can be done on any standard financial calculator. (Enter n = 60, PMT = 40, FV = 1,000, interest rate = 5%, and then hit the PV button. You should get 810.71 for present value; it appears as negative because one has to pay this amount to own the bond.)

The discount rate depends upon default risk. Higher rates are used for riskier bonds and lower rates for safer ones. Rating agencies like Standard & Poor's or Moody's assign ratings to bonds. Highly rated bonds like U.S. Treasuries generally pay the lowest rates, while higher-risk bonds (like, say, those of an Argentine steel company) pay higher rates. If the bond is traded and thus has a market price, one can compute the internal rate of return (IRR) for the bond (the rate at which the present value of the coupons and the face value is equal to the market price). This is commonly called the yield to maturity on the bond. Unfortunately, IRR and YTM must be computed by trial and error, although financial calculators have functions for computing this.

While you will most likely never have to calculate the value of a bond by hand in a sales and trading interview, it is important to intuitively understand how a bond is valued. Another way to think of it is as follows: suppose you have two credit cards through your local bank, a Visa, which charges you 10% a year, and a MasterCard, which charges 20%. You owe $10,000 on each and can transfer all of your debt to one or the other. Which would you choose? Clearly you would put your debt on the Visa. Now, which card would your bank rather you use? $20,000 in debt on the MasterCard at 20% is clearly worth more to the bank. Bonds work the same way. They, like credit card receivables for a bank, represent future interest payments for bondholders. Higher rates for the same face value increase the expected present value of an issue, all else being equal.

5. What is the difference between preferred stock and regular stock?
Unlike regular (common) stock, preferred stock not only provides the security's owner with an equity stake in the company, but also provides certain bond-like qualities for the owner. Preferred shares usually pay a dividend. Unlike bond yields, preferred dividends can be changed or cut, though they are generally cut only after common dividends are halted. Should a company run into financial trouble or go bankrupt, preferred holders have a right to earnings and assets after bondholders but before common stockholders. As with their bonds, riskier companies must pay a relatively higher yield on their preferred stock in order to attract investors.

Preferred stock frequently has a conversion option embedded, allowing one to trade in the security for common stock. (See the question on pricing convertible bonds, no. 14 in this chapter.) Institutions are the main purchasers of preferred stock. Companies issue preferred stock for a number of reasons, including the fact that companies view it as a cheaper form of financing than common equity, and that it can be constructed so that it is viewed as equity by the rating agencies and debt by the tax authorities. (Note: Do not confuse this with Class A versus Class B stock or the like. Lettered classes of stock refer to voting and non-voting shares.)

6. What is disintermediation?

According to the original usage of the term as listed in the Oxford Dictionary, it means "a reduction in the use or role of banks and savings institutions as intermediaries between lenders and borrowers; the transfer of savings and borrowings away from the established banking system." Of late, the term has taken on new meanings as it relates to the world of finance.
The word means, literally, to remove intermediaries from the trading process, so that buyers can deal more directly with sellers. This is also known as "cutting out the middleman." Disintermediation is a hot buzzword in many areas (eBay is a tool for disintermediation; direct selling also affects insurance companies and travel agencies). The term was particularly in vogue when B2B was all the rage. In the banking and brokerage business, many firms have seen traditional customers move towards trading directly by telephone or the Internet (such as when using Ameritrade, or when buying a mutual fund directly from Fidelity or a CD from a new online bank rather than at your local branch). Disintermediation is occurring even with corporate and institutional clients: U.S. Treasury securities are often traded electronically without the use of a human trader or brokerage firm, and certain large corporations have issued securities directly to investors without the use of an investment bank. All of this is lowering costs but simultaneously lowering margins and commoditizing many markets for investment banks (and their clients).

7. How would you value a stock or a company?

Three common methods are used: discounted cash flow valuation (DCF), which values a company based on the present value of the expected future cash flows produced by that asset (much like valuing a bond in a previous question); relative valuation, which estimates value by looking at the prices of comparable companies' equity via common ratios such as price/earnings, enterprise value/EBITDA, or price/book value; and real option theory, which utilizes option pricing models.
To estimate value using DCF, one can measure cash flows in the form of free cash flows to the firm (or FCFF, which includes the value of cash flows eventually payable to debt and equity holders and thus values the whole company), dividends (the dividend discount valuation, or DDV), or free cash flows to equity (FCFE; DDV and FCFE value only the company's equity). Regardless of which method one chooses, the DCF method is essentially the same as valuing a bond:

Value = Sum (t = 1 to n) of [CF_t / (1 + r)^t]

Where CF_t = the cash flow in period t, r = the discount rate (determined by the risk level of the cash flows in question) and n = the life of the asset.

When valuing a company using the dividend discount method (which is generally now only considered appropriate in valuing financial services companies), the CF would be dividends, while the discount rate would be the cost of equity for the asset. When valuing cash flows to equity, one would also use the cost of equity (or Ke) for the discount rate. When valuing cash flows to the firm, one would use the weighted average cost of capital (or WACC, which is the weighted cost of the firm's equity and debt) for the discount rate. The various cash flows can be determined as follows:

Dividends = Net Income x Payout Ratio, while the expected growth in dividends = Retention Ratio x Return on Equity.

Free Cash Flow to Equity = Net Income - (Capital Expenditures - Depreciation) x (1 - Debt to Capital Ratio) - Change in Working Capital x (1 - Debt to Capital Ratio), while the expected growth in FCFE = Retention Ratio x Return on Equity.

Free Cash Flow to Firm = Earnings Before Interest and Taxes x (1 - tax rate) - (Capital Expenditures - Depreciation) - Change in Working Capital, while the expected growth in FCFF = Reinvestment Rate x Return on Firm Capital.
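As a sketch, the cash flow definitions above translate directly into code (Python, with argument names of our own choosing):

```python
def fcfe(net_income, capex, depreciation, change_wc, debt_to_capital):
    """Free cash flow to equity, per the formula above."""
    return (net_income
            - (capex - depreciation) * (1 - debt_to_capital)
            - change_wc * (1 - debt_to_capital))

def fcff(ebit, tax_rate, capex, depreciation, change_wc):
    """Free cash flow to the firm, per the formula above."""
    return ebit * (1 - tax_rate) - (capex - depreciation) - change_wc

def dcf_value(cash_flows, r):
    """Present value of projected cash flows (years 1..n) at discount rate r."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))
```

Note that `dcf_value` is the same discounting loop used for the bond earlier; only the cash flows and the discount rate change.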
The appropriate discount rates can be determined as follows:

Cost of Equity = Appropriate Risk Free Rate + Beta x Equity Risk Premium

WACC = [Cost of Equity x (Market Value of Equity/Market Value of Equity and Debt)] + [Cost of Debt x (1 - Tax Rate) x (Market Value of Debt/Market Value of Equity and Debt)]

The risk-free rate must match the firm's cash flows. In the U.S., the risk-free rate would be the U.S. Treasury rate with the most similar maturity, while in the Euro-zone it would be a German (or maybe French) government bond, and so forth. Beta is a measure of how changes in a firm's stock price deviate from changes in the market. (In the U.S., usually either the S&P 500 or the Wilshire Total Market Index is used as a measure of the market.) Beta is thus the responsiveness of a security to macroeconomic events. High betas can be found in technology companies, smaller firms or highly cyclical industries, while lower betas would be found in steady industries like grocery chains or tobacco companies (since in good times or bad, people eat and smoke). For valuation purposes, beta can be a historical beta (which can be found using data services like Bloomberg or Yahoo! Finance) or a "bottom-up" beta, which is based on the betas of the firm's peers and the firm's leverage (debt level). The bottom-up method should be used unless the company in question has no real publicly traded peers (like Eurotunnel or the Boston Celtics) or is a financial services firm. The equity risk premium is either the historical or the presently implied level of average return investors demand over the risk-free rate in order to invest in stocks. No two analysts or finance professors use the same number for this, but we will say here that stocks in the U.S. generally earn 4.5 percentage points above long-term U.S. Treasuries. The risk premium would be higher for emerging market countries.
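The cost of equity and WACC formulas can be sketched the same way (a Python illustration; the sample inputs are arbitrary):

```python
def cost_of_equity(risk_free, beta, equity_risk_premium):
    """CAPM: risk-free rate plus beta times the equity risk premium."""
    return risk_free + beta * equity_risk_premium

def wacc(cost_equity, cost_debt, tax_rate, mv_equity, mv_debt):
    """Weighted average cost of capital, with tax-deductible interest."""
    total = mv_equity + mv_debt
    return (cost_equity * mv_equity / total
            + cost_debt * (1 - tax_rate) * mv_debt / total)

# A 1.2-beta stock with a 5% risk-free rate and a 4.5% equity risk premium:
ke = cost_of_equity(0.05, 1.2, 0.045)   # 0.104, i.e. 10.4%
```

Notice that only the debt leg is multiplied by (1 - tax rate): interest is deductible, dividends are not.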
The cost (and value) of debt can be estimated using either the market interest rate on the firm's outstanding debt or the borrowing rate associated with firms that have the same debt rating as the company in question. If these two methods cannot be used, use the borrowing rate and debt rating associated with firms that have financial ratio values similar to the company in question (such as EBIT/Interest Expense, also called the interest coverage ratio). Once the rate is known, all debt on the books can be valued at this rate like a bond. In the above WACC formula, we assume that interest is tax deductible (hence the (1 - Tax Rate) term).

When valuing a firm using any DCF method, one must assume that the firm is either steadily growing and will remain this way forever (like a typical grocery store chain), or that it will grow for a few years at a rate faster than the growth of the overall GDP and then abruptly begin steady growth (like some automobile companies), or that it will grow fast and then gradually move towards steady growth (like a technology company).

[Charts: growth rate over time for steady-growth, 2-stage growth and 3-stage growth companies.]

To value a company growing forever at the same rate, one would simply value the estimated future cash flows using the above formula. For 2-stage growth companies, one must estimate the NPV of the cash flows over the high-growth stage, and then add to this amount the NPV of perpetual steady growth based on cash flows at the end of the high-growth period (called the terminal value). For 3-stage growth companies, one must also compute the NPV for each of the intervening years between high and steady growth and add this to the value.

Now, let us take a simple example of using the FCFF method. We will use a fictitious U.S. publicly traded manufacturing company (Vault Machines, Inc.).
Vault Machines' earnings are in steady growth forever (5% a year; we are using nominal values for all measures, and if you use real measures, taking inflation into account, you must do so throughout). The company has bonds with a $1 billion market value and no other debt. These bonds currently trade at a 10% yield (and at par). The 10-year U.S. Treasury is trading at 5%. The firm, like others in its industry, has a marginal tax rate of 40%. The firm just reported revenues for the most recent year of $1 billion and earnings of $50 million. Depreciation expenses were $25 million, and working capital increased by $10 million. The firm purchased a factory for $30 million and made no other capital expenditures during the year. Companies in Vault Machines' industry have an average beta of 1.2 and an average debt/equity ratio of 25%. The firm currently has 100 million shares of stock outstanding trading at $20 a share. Is this the appropriate value of the firm's stock? What about for the entire company?

First we will determine the free cash flow to the firm:

Free Cash Flow to Firm = Earnings Before Interest and Taxes x (1 - tax rate) - (Capital Expenditures - Depreciation) - Change in Working Capital

EBIT in this case can be found as follows: Since the tax rate is 40%, and the firm earned $50 million after taxes, earnings before taxes = 50/(1 - 0.4) = $83.33 million. We know the firm has $1 billion (market value) in debt outstanding and is paying 10%, thus interest expense = $100 million. Adding interest expense, we get EBIT = $183.33 million.

FCFF = 183.33 x (1 - 0.4) - (30 - 25) - 10 = $95 million.
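As a quick check, the arithmetic above can be reproduced in a few lines of Python (a sketch; the variable names are our own):

```python
tax_rate = 0.40
net_income = 50.0                               # $mm, after taxes
pre_tax_income = net_income / (1 - tax_rate)    # 83.33
interest_expense = 1000.0 * 0.10                # $1bn of debt at 10%
ebit = pre_tax_income + interest_expense        # 183.33

capex, depreciation, change_wc = 30.0, 25.0, 10.0
fcff = ebit * (1 - tax_rate) - (capex - depreciation) - change_wc
print(round(fcff, 2))  # 95.0
```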
Recall that the appropriate discount rates can be determined as follows:

WACC = [Cost of Equity x (Market Value of Equity/Market Value of Equity and Debt)] + [Cost of Debt x (1 - Tax Rate) x (Market Value of Debt/Market Value of Equity and Debt)]

To determine the WACC, we first need to determine the cost of equity. This means we must determine the bottom-up beta of the company. The formula for unlevering beta (to determine what the beta would be in the absence of any debt, used since more debt makes a firm more sensitive to macroeconomic changes) is:

Beta(unlevered) = Beta(levered) / [1 + (1 - tax rate) x (Market Debt/Equity Ratio)]

Since we know the industry's levered beta and D/E ratio, we will first determine the industry's unlevered beta, and then relever it using Vault Machines' actual (different) debt/equity ratio:

Beta(unlevered) of industry = 1.2 / [1 + (1 - 0.4) x 0.25] = 1.04

Vault Machines' equity is worth $2 billion (100 million shares at $20), so its debt/equity ratio is $1 billion/$2 billion = 0.5, and its relevered beta = 1.04 x [1 + (1 - 0.4) x 0.5] = 1.36. Using the 4.5% equity risk premium from above, the cost of equity = 5% + 1.36 x 4.5% = 11.1%, and the WACC = 11.1% x (2/3) + 10% x (1 - 0.4) x (1/3) = 9.4%.

Since Vault Machines is expected to grow forever at the same rate, we can simplify the process and value the cash flows like a perpetuity:

Value of the firm = $95 million/(9.4% - 5%) = $2.159 billion.

Since we know the market is valuing the debt at $1 billion, we subtract this amount out and divide the remainder by 100 million shares outstanding. The firm should be trading at $11.59 a share, not $20 (by this simple analysis). So it is trading above fair valuation. In real life, the company's growth might actually have been expected to accelerate in the future, and thus our one-stage model might have provided too low a valuation. Using a 2-stage or 3-stage model would require a great deal more work if done by hand, and thus would entail the use of an Excel spreadsheet.

Another common method used to value the equity portion of a company is called relative valuation (or using "comparables"). If you needed to sell your car or home, you might look at what similar cars or homes sold for. Similarly, many analysts compare the value of a stock to the market values of comparable stocks using ratios such as price to earnings and enterprise value to EBITDA.
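Returning to the DCF example for a moment, the discount-rate and perpetuity steps can also be reproduced in Python (a self-contained sketch using the same figures; the small difference from the hand-rounded $11.59 comes from carrying full precision):

```python
tax, rf, erp = 0.40, 0.05, 0.045
debt_value, cost_of_debt = 1000.0, 0.10       # $mm, trading at par
equity_value = 100.0 * 20.0                   # 100mm shares at $20

# Unlever the industry beta, then relever at Vault Machines' own D/E
unlevered = 1.2 / (1 + (1 - tax) * 0.25)
beta = unlevered * (1 + (1 - tax) * debt_value / equity_value)

ke = rf + beta * erp                          # cost of equity via CAPM
total = equity_value + debt_value
wacc = ke * equity_value / total + cost_of_debt * (1 - tax) * debt_value / total

fcff, g = 95.0, 0.05
firm_value = fcff / (wacc - g)                # growing perpetuity
per_share = (firm_value - debt_value) / 100.0
print(round(wacc, 3), round(per_share, 2))    # 0.094 11.58
```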
We'll use a valuation of GM during the summer of 2002 to illustrate this method:

Company                     Price     P/E      Price to Book   Forward EPS   Forward P/E
DaimlerChrysler AG          $47.62    11.62x   1.20x           $2.15         22.15x
Fiat S.p.A.                 $12.38    8.13x    0.53x           $0.45         27.51x
Ford Motor Company          $16.97    NA       4.35x           $0.22         77.14x
Honda Motor Co., Ltd.       $21.54    7.14x    1.03x           $1.38         15.61x
Nissan Motor Co., Ltd.      $14.43    11.29x   NA              $1.28         11.27x
Toyota Motor Corporation    $53.52    NA       1.69x           $2.58         20.74x
Average                               7.64x    1.76x                         29.07x
General Motors              $59.07    33.14x   1.70x           $5.47         10.80x

At $59.07, GM appears to be overvalued using trailing P/E as a proxy. Since GM made $1.84 in the trailing twelve months, it should be trading at 7.64 times this amount, or $14.06 a share. Analysts often use forward P/E ratios instead. In this case, GM's estimated forward EPS times the average forward P/E would lead to a predicted price of $159.01, meaning that GM is undervalued. Using price to book as a proxy, the firm looks closer to fair value: GM's book value per share at this time was $34.93; multiplied by 1.76, this gives us a value of $61.48 per share.

As you can see, this is an inexact method. One area of the analysis that is not straightforward is which firms to use as comparables: here we could have added BMW or Renault, but GM at the time of this analysis owned Hughes Electronics, while DaimlerChrysler owned a portion of a defense/aerospace company. Are these two really the same as the other peers that had no holdings in the defense industry? Also, Ford and Nissan were at the time of this analysis experiencing troubles, had no earnings, and their forward earnings were significantly lower than the rest of the group's; should they therefore be left off this list? An analyst needs to make judgment calls on these sorts of issues; there is no one correct answer. Another factor that might make a difference in this case is the fact that firms with higher growth rates generally have higher P/Es than those with lower growth rates.
We do not have the various growth rates in this example.

It should be noted that different sector analysts use different multiples. The chart below details multiples commonly used by various industries.

Characteristics of Company               Multiple                                            Note
Cyclical manufacturing                   P/E and P/E relative to the market                  Analysts often "normalize" earnings to take cyclicality into account
High growth                              PEG (P/E over growth rate in earnings)              Used because there are often large differences in growth rates
High growth with low or no earnings      P/S (market cap over total revenues), V/S (market   This assumes that margins will improve in the future
                                         cap + market value of debt - cash, over total
                                         revenues)
Heavy infrastructure                     V/EBITDA (market cap + market value of debt -       Applies to companies in sectors that generally see losses in early
                                         cash, over earnings before interest, taxes,        years, and whose earnings differ because of differing depreciation
                                         depreciation and amortization)                      methods
REITs                                    Price/Cash Flow                                     Usually, there are no capital expenditures from equity earnings
Financial services                       P/BV                                                The book value of equity is regularly marked to market
Retailing (if D/E levels are similar)    P/S
Retailing (if D/E levels vary widely)    V/S

Finally, analysts sometimes use option theory to value a stock. Real option valuation holds that the company has the option to delay making an investment, to adjust or alter production as prices change, to expand production if things seem to be going well, or to abandon projects if they are no longer worthwhile. For example, an oil company may have a DCF-based valuation of $10 billion, but a market cap of $20 billion. The extra value may come from the fact that the company has unused or underutilized oil reserves that can be tapped should oil prices increase; the firm has the option to expand. In this case, real option valuation using the Black-Scholes method may be appropriate.
Recall that the Black-Scholes equations (including dividends) are:

C (call) = S e^(-yt) N(d1) - K e^(-rt) N(d2)
P (put) = K e^(-rt) [1 - N(d2)] - S e^(-yt) [1 - N(d1)]

where y (the dividend yield) = annual dividend/price of the asset, and

d1 = [ln(S/K) + (r - y + sigma^2/2) t] / (sigma x sqrt(t))
d2 = d1 - sigma x sqrt(t)

where
S = current price of the underlying asset (stock or otherwise)
K = strike price of the option (sometimes called X, for exercise price, instead)
t = time until the option expires
r = riskless interest rate (should be for the time period closest to the lifespan of the option)
sigma^2 = variance of the price of the underlying asset
y = dividend yield

In the case of an American oil company with untapped reserves, the probable inputs would be:

S = total value of the developed reserves, discounted back over the time development takes at the dividend rate (y below)
K = present value of the development cost (discounted at the firm's WACC)
t = the weighted average time until the option to develop these oil reserves expires
r = the appropriate riskless interest rate (if the oil reserve rights last 5 years, the 5-year U.S. Treasury rate, for example)
sigma^2 = the variance in oil prices over the recent past (which could also be the implied volatility of prices based on oil futures)
y = net oil production revenue from reserves / value of reserves

As a banker or internal finance professional working on such valuations, you would add the option value of the oil reserves to the DCF valuation estimate to come up with the total firm value. This technique is also used to value patents for pharmaceutical, biotech or technology companies, among other things, and can often explain why DCF valuations fall far short of a stock's assigned market price.

8. What does liquidity allow an investor?

Liquidity allows an investor to move in and out of an asset class quickly, enabling one to capitalize on any upside or quickly get out to avoid downside. All else being equal, one can get a better price for a more liquid asset, since there is less risk of not being able to sell it.
For example, assuming you needed $100,000 cash immediately, would you rather have to sell $100,000 (nominal market value) in Disney stock or $100,000 (nominal market value) in baseball cards?

9. If you worked for the finance division of our company, how would you decide whether or not to invest in a project?

Investing in a project could mean entering a new area of business, buying another company, expanding or broadening an existing business, or changing the way a business is run. The basic test is: will this project earn more money than it will cost from this point forward?

There are several common methods for determining this. One accounting-based method is to compare the firm's cost of capital (COC) versus the project's after-tax return on capital (ROC). If the projected after-tax ROC is higher than the COC, the project is a good one. (The firm's WACC does not necessarily equal the project's COC; a company with a low debt rating might obtain better interest rates for a relatively less risky project if the loan or bond payments are directly tied to the cash flows generated by the project.) If the project will be funded entirely by equity, one would similarly compare the projected return on equity (ROE) to the cost of equity (COE). Another method measures the Economic Value Added, or EVA, of a project: EVA = (ROC - COC) x (Capital Invested in Project), and Equity EVA = (ROE - COE) x (Equity Invested in Project). A positive estimated EVA means that the project is a good one.

Two cash flow-based measures of investment return use the net present value (NPV) or the internal rate of return (IRR) to determine the merits of a project. The NPV of a project is the sum of the present values of all cash flows less any initial investments, excluding sunk costs. This last distinction is key: sunk costs should not be taken into account.
For example, what if you just spent $500 painting half of your car, only to find that a finished paint job will increase your resale value by only $750? You might think that something that costs $1,000 in total but nets you only $750 is a bad investment; had you known this before starting the paint job, you never would have begun. But from this point on, you will probably earn $0 extra for selling a half-painted car, versus netting an extra $250 ($750 minus the remaining $500 of painting costs) if you have the job completed. Similarly, only incremental cash flows should be taken into account when deciding on investing in a project. Determining a project's NPV is essentially the same as valuing a company or a bond:

NPV of Project = Sum (t = 1 to n) of [CF_t / (1 + r)^t] - Initial Investment

Where CF_t = the cash flow in period t, r = the discount rate (either COC or COE) and n = the life of the project.

One may argue that any positive NPV, even if it is only $1, is good for a company, since it can only make the company richer. In practice, however, since companies have only a limited amount of money to invest, the project or combination of projects that generates the highest possible NPV is the best, all else being equal.

The IRR (internal rate of return) is the discounted cash flow equivalent of accounting rates of return (like ROC or ROE). The IRR of a project is the discount rate that makes the NPV of the project zero. For example, let us say you invested $1,000,000 in a factory that earned $300,000, $400,000 and $500,000 in years one, two and three respectively. You then sold the factory for $600,000 in year four. At a 24.89% discount rate, the NPV of this project would be zero. Hence, the IRR of this project is 24.89%. If the company can raise money for less than the IRR, then the project is a good one by this measure.
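The NPV and IRR calculations above can be sketched in Python (function names our own; the IRR is found by simple bisection, the same trial and error a financial calculator automates):

```python
def npv(rate, cash_flows, initial_investment):
    """NPV: discounted cash flows (years 1..n) minus the upfront investment."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1)) - initial_investment

def irr(cash_flows, initial_investment, lo=0.0, hi=1.0, tol=1e-6):
    """The discount rate that sets NPV to zero, found by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows, initial_investment) > 0:
            lo = mid        # NPV still positive: the rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

# The factory example: $1mm in, then $300k, $400k, $500k and a $600k sale
print(round(irr([300, 400, 500, 600], 1000) * 100, 2))  # 24.89
```

Bisection works here because NPV falls steadily as the discount rate rises for this cash flow pattern; projects with sign-changing cash flows can have multiple IRRs, as noted below.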
Think of it this way: you wouldn't buy a bank CD that earns 5% if the only way you could finance it was through a cash advance on a credit card charging 10%. The chart below summarizes this analysis:

Method Used                 It's a Good Project If         It's a Bad Project If
Accounting: COC & ROC       After-tax ROC > COC            After-tax ROC < COC
Accounting: COE & ROE       ROE > COE                      ROE < COE
Accounting: EVA             EVA > 0                        EVA < 0
Accounting: Equity EVA      Equity EVA > 0                 Equity EVA < 0
Cash Flow: NPV              NPV > 0                        NPV < 0
Cash Flow: IRR              IRR > cost of raising funds    IRR < cost of raising funds

Which method is best? In general, the cash flow-based methods are more in vogue, since accounting returns are not always the best measure of financial performance. IRR and NPV tend to lead to the same investment decisions, but anomalies do sometimes arise. For example, you may run into situations where your calculations yield more than one IRR. In these cases, it is easier to use NPV. In other cases, IRR and NPV calculations may lead to different investment recommendations when deciding between two or more mutually exclusive projects of greatly different size. A larger project might look better on an IRR basis, but worse from an NPV point of view. One way to solve for this is to look at the profitability index (PI), which is NPV/Initial Investment. A higher PI means a better project or set of projects. If the conflict between what the IRR and NPV calculations yield cannot be resolved, it is best to favor NPV.

10. How does the yield curve work? What does it mean when it is upward sloping?

This is more likely to come up in a fixed income research or sales and trading interview, but do not be surprised if it is asked in a banking interview (just to test your knowledge). The yield curve generally refers to points on a yield/time-to-maturity graph of various U.S. Treasury securities. While yields to maturity on bonds of different maturities are often similar, yields generally do differ. Shorter-maturity bonds tend to offer lower yields to maturity, while longer-term bonds tend to offer higher ones.
This is shown graphically as the yield curve, which is sometimes called the term structure of interest rates. There are a few reasons why yields may differ as maturities change. One theory is the expectations theory, which states that the slope of the yield curve is determined by expectations of changes in short-term rates. Higher yields on longer-term bonds reflect a belief that rates will increase in the future. If the curve is downward sloping, this theory holds that rates will fall (probably because the economy is slowing, easing fears of inflation and raising the expectation that rates will fall in tandem). If the curve falls then rises again, it may signal that rates will go down temporarily and then rise again (perhaps because of monetary easing by the Fed due to an economic slowdown). If the yield curve is upward sloping, the economy is expected to do well in the future; a sharply rising curve suggests a boom. Another supposition is the liquidity preference theory. This theory states that since shorter-term bonds tend to be more liquid than longer-term ones, investors are more willing to hold shorter-term bonds even though they do not pay relatively high yields. A third hypothesis is the market segmentation (or preferred habitat) theory, which states that long- and short-term bonds trade differently because different types of investors seek to purchase each on an ongoing basis. It is likely that all of these theories are true and work in tandem.

11. Tell me how you would go about valuing a privately held construction company.

Valuing a private company is essentially no different than valuing a publicly held company (see the earlier question on valuing a company or stock): one uses some combination of DCF, relative (comparable) and option valuation techniques for any type of firm. (Remember, however, that relative valuation and the FCFE or DDM methods of DCF valuation should only be used to value the firm's equity, not the firm as a whole.)
For a DCF valuation, whether to use a one-, two- or three-stage valuation depends on the market the company is in and the macroeconomic environment. A home building company in Las Vegas operating during an economic boom will likely have two or three stages of growth, whereas a firm specializing in erecting steel plants in the Northeast U.S. might only grow at one steady rate.

There are differences in the details when it comes to valuing a private company in any industry, however. These differences arise mainly when it comes to DCF valuation. When it comes to relative valuation, one would follow the same steps as outlined in the answer to the valuing-a-stock question. For a construction company, using the P/E multiples of publicly traded peers is probably the best choice. Real option valuation might be appropriate if the company has the exclusive right to build potentially valuable properties. If so, the only difference between a public and a private company using real options lies in the determination of the weighted average cost of capital (WACC).

For a DCF valuation, the determination of the cost of equity (COE) or the WACC will also differ. Remember, the WACC is based partly on the COE. The cost of equity estimation usually depends on either a historical regression beta (found on Bloomberg, among other data sources) or a bottom-up beta estimation. Since no historical regression betas are available (private companies do not trade), one must perform a bottom-up beta estimation using an average of the regression betas of similarly sized publicly traded peers as a proxy. In this case, one would look at similar-sized construction companies (as we did in the earlier question). Recall:

Beta(levered) = Beta(unlevered) x [1 + (1 - tax rate) x (Market Debt/Equity Ratio)]

One might also assume that over time the firm will move towards the D/E ratio of its peer group if it is currently far out of line, thus gradually adjusting the WACC as well.
Once one has determined the WACC, there are a few more issues one must take into account when valuing a privately held firm. First, private firms may have a shorter history than most publicly traded firms, in which case more extrapolation is required when making future cash flow projections. Second, private firms often use accounting techniques that would not be acceptable for public companies. Third, many private companies (especially "mom and pop" businesses) list what would otherwise be personal expenses as business expenses. Fourth, and similarly, owner/operators pay themselves salaries and may also pay themselves dividends. Past numbers may therefore have to be adjusted to show what they would have been had the firm been public; this is true both for relative valuation and for the DCF valuation of a private company slated to go public or be purchased by a public firm. Future cash flows may also have to be adjusted to reflect the expense of salaried employees replacing owner/operators.

It is important to note that in both DCF and relative valuation, it makes a difference why one is valuing a company. If the company intends to stay private or is being priced for sale to another private company, the valuation will likely be lower than if it is to be sold to a publicly traded company or if it is planning an IPO. First, private businesses cannot be bought and sold as easily as public ones; thus one must estimate an illiquidity discount for a firm that will remain in private hands. As a shortcut, many bankers lower the valuation by 20% (this is what past studies have shown the average discount to be) and then adjust this percentage upwards or downwards depending on the size of the firm in question. (Surely, multi-billion-dollar private companies like Hallmark or Cargill would have a lower illiquidity discount than a local construction company.)
Second, once the beta has been determined, it may need to be adjusted upwards further, since the beta estimate of a publicly traded firm is a measure of market risk and assumes that the firm's stockholders are well diversified. Private firm owners tend to have a large portion, if not the majority, of their wealth tied up in the business. In this case, if the construction firm is being valued for purchase by another private company (or is not going to be purchased at all), the beta should reflect the increased risk associated with this lack of diversification. This is done as follows:

Total Beta = Preliminary Beta/Firm's Correlation with the Market

This is why it is generally preferable for the owners of a private company seeking to liquidate their holdings to go public or be purchased by a publicly traded company rather than be bought by another private firm; being public generally means a lower COE and thus a higher valuation.

12. Why might a technology company be more highly valued in the market in terms of P/E than a steel company stock?

You may recall from the question on how to value a stock that relative P/E is affected by the growth rate in earnings. Generally speaking, higher-growth companies (and industries) have higher P/E ratios than lower-growth ones. Mathematically this can be shown by breaking the ratio down into its equity DCF components:

Using the dividend discount model (DDM): P = Dividends per Share/(r - g). Dividing both sides by EPS:

P/E = (Payout Ratio)(1 + g)/(r - g)

Using the FCFE method of valuation: P = FCFE/(r - g), and

P/E = (FCFE/Earnings)(1 + g)/(r - g)

All else being equal, higher growth (g in the equation) means higher P/E ratios. One can safely assume that technology companies (even in the down market of 2002) will grow faster than steel companies.

13. When should a company raise money via equity?
When should a company raise funds using debt?

As you may recall:

Cost of Equity = Appropriate Risk-Free Rate + Beta(Equity Risk Premium)

Only very low-beta companies will have a cost of equity that approaches their cost of debt; in most cases debt is cheaper than equity from a firm's point of view. (Issuing stock is not free, since it dilutes the ownership stakes of the firm's existing owners.) However, coupon-bearing debt requires regular payments. Therefore, younger and smaller firms with good growth prospects but more volatile cash flows are better suited for equity, while mature companies or those with steady cash flows tend to use more debt. Even zero-coupon debt, which requires only a balloon payment at maturity, is only appropriate for firms with fairly certain, large future cash flows.

Some would argue that a company might want to use equity (such as when making an acquisition) in cases where management believes its own stock to be very highly (or even over-) valued; thus AOL used stock to buy Time Warner, and many dot-coms sold stock before March 2000 even though they did not need the funds. In other cases, existing bank or debt covenants may require a certain debt/equity ratio or cash balance, in which case a mature company with too much debt may issue stock to stay within its agreements.

14. How would one price the different elements of a convertible bond?

Although this question often pops up in sales and trading interviews, many banks staff convertible bond capital markets/origination desks with bankers, so it sometimes comes up in corporate finance interviews as well. In addition, as a banker you may need to discuss potential convertible offerings with a CFO; as an internal corporate finance professional you may help decide whether your firm should raise money via equity, debt or hybrid securities like convertibles.
First, the textbook definition of a convertible bond: a convertible bond is a bond that gives bondholders the option, but not the obligation, to convert the bond into a certain number of shares. This option is usually triggered if the issuer's stock hits a certain, higher price in the future. Conversion becomes more likely as the underlying stock price increases. Firms normally use convertibles to lower their interest payments; all else being equal, the right to convert into equity gives the bond more value, so it can carry a lower coupon.

From a trading standpoint, a convertible bond behaves at different times like:

A bond. When it is deep out of the money (the underlying stock price is far below the conversion level), the overwhelming majority of the value comes from the interest payments;

An option. Since the option to convert is no different from a traditional stock option, traditional option valuation techniques can be used;

Straight equity. When the convertible is deep in the money (the issuer's stock far above the conversion price), it becomes a near certainty that the bondholder will exercise the right to convert.

For valuation purposes, a convertible can be broken down into a straight bond and a conversion option. To value the bond component, take the coupon (face value interest) rate on the convertible bond and compare it to the interest rate the company would have had to pay if it had issued a straight bond. This should be what the company pays on similar outstanding issues. If the company has no bonds outstanding, one can infer from its bond rating what the company might pay. Using the maturity, coupon rate and the market interest rate of the convertible bond, one can estimate the value of the bond component as the sum of the present value of the coupons at the market interest rate and the present value of the face value of the bond at the market interest rate. Whatever is left over is the equity portion.
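The decomposition just described can be sketched as follows; the coupon, straight-debt yield, maturity and convertible market price below are hypothetical numbers chosen only to show the mechanics:

```python
# Split a convertible's market price into a straight-bond value and an
# equity (conversion option) portion. All figures are hypothetical.

face = 1000          # face value
coupon_rate = 0.04   # the convertible's below-market coupon
market_rate = 0.07   # rate the issuer would pay on a straight bond
years = 5
convertible_price = 1020.0  # observed market price of the convertible

coupon = face * coupon_rate

# Straight-bond value: PV of coupons plus PV of face at the straight-debt rate
bond_value = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
bond_value += face / (1 + market_rate) ** years

# "Whatever is left over is the equity portion"
option_value = convertible_price - bond_value
print(round(bond_value, 2), round(option_value, 2))  # 876.99 143.01
```

Here the bond floor accounts for most of the price, and the remainder is attributed to the embedded conversion option.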
If it were a convertible offering yet to be issued, one would estimate the price of the call options embedded in the convertible issue (using the option pricing methods discussed in a previous question) and subtract this amount from what the value of an ordinary bond of equal maturity and face value would be. To determine the interest rate the issuer would then need to pay, one would find the rate which, when plugged into the bond pricing equation, is equal to this new amount (ordinary bond less option value).

15. How would you value a non-U.S. company?

Valuing a foreign company involves the same steps as outlined in the question on how to value a company or stock. As with the question on valuing a private company, it is the small details that make such a valuation different.

When employing relative valuation, one should use local companies as peers whenever possible. Thus one would compare small German banks on a price-to-book basis against other German banks (or, if need be, other E.U.-based banks). In certain cases, there are no local peers (Nokia is the only Finnish mobile phone manufacturer) or there are only a few players, all of whom operate globally (such as in the auto industry). In such cases, using international peers is appropriate, though one must keep in mind local differences that may account for inconsistent ratio levels between companies. For example, if one compared, say, South African Breweries with Heineken and Anheuser-Busch on a P/E or V/EBITDA basis in 2002, one must take into account SAB's greater exposure to emerging markets, which (all else being equal) might increase the firm's risk and thus lower its relative valuation. Even in two markets of similar risk, different factors might affect what might otherwise be a clean comparison. For example, lower interest rates mean lower discount rates and thus also higher P/E ratios.
Japanese companies tend to have higher P/Es than their European counterparts because of Japan's relatively low rates. When comparing peers of or from different countries, such subtle differences can affect a valuation greatly. Finally, while many large non-U.S. companies like Sony and DaimlerChrysler report using U.S. accounting standards, many more do not. One should therefore make certain to adjust for any accounting differences between the U.S. and the company in question's home country.

If one utilizes real options in a non-U.S. company valuation, remember that three of the inputs are:

K = (Exercise or Strike Price) = Present value of the development cost (discounted at the firm's WACC).

r = The appropriate riskless interest rate (if the oil reserve rights last five years, the 5-year U.S. Treasury rate, for example).

σ² = The variance in the asset price.

Each of these variables may be different for a non-U.S. company. K will differ if the WACC is different (more on this as we discuss using DCF to price a foreign company). The riskless rate will almost certainly be different from a similar-maturity U.S. Treasury security. And the variance may depend upon the volatility of similar assets in the firm's local market, which will likely differ from the volatility of such assets in the U.S. S, t and D (the underlying asset value, time and dividend yield) are not affected.

When it comes to using DCF methods, one should make certain to adjust for any accounting differences between the U.S. and the company in question's home country when looking at past numbers or making forward financial estimates. One must use the appropriate currency for all calculations. Thus, a Mexican company should be valued in pesos throughout, and the Mexican risk-free rate should be used. Once a value has been determined, it can be translated into dollar terms.
Nominal amounts should be used unless the company is in a very high inflation environment; only then should one compute all numbers in real, or constant (inflation-neutral), terms.

For a non-U.S. company's cost of debt, one would use whatever cost of debt is appropriate given the firm's bond rating. If there is no rating available, one should estimate one and adjust the WACC by adding a risk premium. Without a rating, the cost of and relative weight of debt can be calculated using the firm's financial characteristics and/or interest expenses, just as with a U.S. firm. If the company is in a country with a lower debt rating than the U.S., one would generally then add the difference between where sovereign debt trades in the company's home country and in the U.S. For example, let us take a Southeast Asian real estate company in an emerging market country where 10-year government bonds trade at, say, 10.1% versus 5.1% in the U.S. The so-called default spread would thus be 5%. If this Southeast Asian real estate company had an S&P, Moody's or Fitch-assigned rating, one would use the cost of debt appropriate for companies with that rating. If not, one would look at what similar firms in the U.S. had as a cost of debt and add the default spread. If companies in the U.S. with similar interest coverage ratios can borrow at 9%, we should assume that the real estate company could borrow at no better than 14%.

The COE would also differ. There are several prevalent methods for determining by how much. One (and probably the easiest) is called the Bond Rating method. Since sovereign bond ratings by S&P, Moody's and Fitch take into account country risk, one assumes that:

Country risk premium = Risk Premium(US) + Default Spread on Sovereign Bonds

The equity risk premium would be equal to or higher than in the U.S. using this approach.
For example, the bonds of a country like the U.K. or France would trade at the same interest rate as U.S. Treasuries, so using this method the risk premium would still be 4.5% (again, this is a number we are using for convenience). For an emerging market country with a 5% default spread, the appropriate equity risk premium would be 4.5% + 5% = 9.5%, which would significantly increase the COE and thus the WACC, all else being equal.

Another method is the Relative Equity Market method. Since certain equity markets are more or less volatile than the U.S. market, this method assumes that:

Country risk premium = Equity Risk Premium(US) × (σ of Country Equity Market/σ of U.S. Equity Market)

As you may recall:

Cost of Equity = Appropriate (Local) Risk-Free Rate + Beta(Equity Risk Premium)

A non-U.S. company will likely have a different risk-free rate and beta (although for large companies in small markets, like Nortel in 2000 vis-à-vis Canada, one might substitute the S&P 500 and thus use the same market to calculate beta). The equity risk premium would be higher for more volatile markets and lower for less volatile ones using this approach. The equity risk premium would also be higher for companies in non-AAA-rated countries.

A third method is to use the Bond Rating method in conjunction with equity market volatility. Here one assumes that:

Country risk premium = Risk Premium(US) × (σ of the Most Comprehensive Local Equity Index/σ of Local Government Long-Term Bonds)

A fourth method involves estimating the implied equity premium of a market. This is a rather complicated method, but in a nutshell it assumes that:

Total Value of All Stocks = (Dividends and Stock Buybacks Expected Next Year)/(Required Return on Stocks - Expected Growth Rate)

We can use a local index as a proxy for the Total Value of All Stocks.
We can use analysts' estimates of what dividends on all stocks in the local index will be for Dividends, and analysts' growth estimates for Growth. It is then algebraically possible to extract the expected return on stocks. By subtracting out the local risk-free rate, one can find the equity risk premium. Since most stock markets are made up of companies that will grow in one, two or three stages, in reality the calculations are much more complicated.

No matter which of these methods for determining the WACC one chooses, one should take into account what proportion of a firm's revenues and operations are actually located in its headquarters country. For example, Cemex, a large Mexican cement manufacturer, gets a large percentage of its revenue from, and has many of its factories in, Europe and the U.S. Many South African firms are listed on the London Stock Exchange and have moved their headquarters to London, but have most of their operations in, and derive most of their revenue from, Africa. UBS, Roche and Nestlé are clearly more than just Swiss companies. In these sorts of cases, one may want to weight the COE proportionately based on the origin of cash flows and the location of the firm's operations.

16. What is operating leverage?

Operating leverage refers to the percentage of costs that are fixed versus variable. An airline, manufacturing or hotel company with lots of long-term property leases and unionized employees must make lease and salary payments whether sales rise or fall. At the other extreme, a consulting firm that has many employees working on site with clients, or a technology company with high R&D expenditures, might have the flexibility to lay off employees or lower R&D expenses should sales falter, or to hire more employees or increase spending if sales rise. On the other hand, firms with high degrees of operating leverage generally experience significant increases in operating income as sales increase.
A firm's degree of operating leverage is defined as:

DOL = 1 + (fixed costs/profits) = % change in profits/% change in sales

17. Your client wants to buy one of two banks. One is trading at a 12x P/E, and the other trades at a 16x P/E. Which should your client try to buy? Do you even have enough information to determine this?

There are two ways in which this question is tricky. First, P/E is usually analyzed in relation to expected future growth in earnings. Higher-growth companies tend to have higher P/Es, all else being equal. Since we do not know the banks' growth rates, we cannot say for certain. Second, in the answer to an earlier question we stated that price to book is a better measure of relative value for the financial services industry, since the book value of equity is regularly marked to market at banks, brokerages and insurance companies. Therefore, we couldn't make as good a guess as possible even if the growth rates were known. Return on equity is the variable that best matches P/BV. Mathematically this can also be shown as follows:

According to the dividend discount model, P = Dividends per Share/(discount rate - growth in earnings). Also, return on equity (ROE) = EPS/Book Value of Equity, and by combining these two formulas the value of equity is:

P = BV(ROE)(Payout Ratio)(1 + g)/(r - g)

or P/BV = (ROE)(Payout Ratio)(1 + g)/(r - g)

where Growth = (1 - Payout Ratio)(ROE)

Either way, higher returns on equity mean higher growth rates and also a higher P/BV, and thus a higher valuation for the financial services firm in question, all else being equal.

18. What are some ways to determine if a company might be a credit risk?

The easiest method, of course, is to look at what the rating agencies (S&P, Moody's and/or Fitch) say about a company; it is their job to analyze such risk. These ratings may be unavailable, however, or you may wish to do further due diligence. There are several potential sources of risk any company faces.
When analyzing the credit risk of a potential recipient of financing, one should examine all of these from a subjective standpoint. There are international risks (host-government changes in law, political unrest, currency risk); domestic risks (recession, inflation or deflation, interest rate risk, demographic shifts, political and regulatory risk); industry risks (technological change, increased competition, increasing supply costs, unionization); and company-specific risks (management (in)competence, strategic outlook, legal action).

Additionally, one must objectively look for signs that any subjectively determined risk scenarios will affect the finances of a company. Short-term liquidity risk can be analyzed by looking at some of the following accounting ratios:

Current Ratio = Current Assets/Current Liabilities

This ratio has been moving below 2 for most U.S. companies, and some industries average below 1. Generally, it is somewhere between 1 and 2. A falling ratio can mean the firm is getting better at managing inventory; a sharp rise either means an expected boom in business or an overstocked warehouse.

Quick Ratio = (Cash + Marketable Securities + Receivables)/Current Liabilities

In some industries, where inventory can be quickly liquidated:

Quick Ratio = (Cash + Marketable Securities + Receivables + Inventory)/Current Liabilities

Quick ratios are usually around 0.5 to 1.0. The higher the number, the faster a company can pay its debt in a worst-case scenario. An increasing quick ratio may also mean, however, that the company is not managing inventory or receivables as well as it could.
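The two ratios just defined can be computed from a simplified balance sheet; the dollar figures below are made up purely for illustration:

```python
# Current and quick ratios from a toy balance sheet (figures in $ millions).

cash = 50
marketable_securities = 30
receivables = 120
inventory = 200
current_liabilities = 250

current_assets = cash + marketable_securities + receivables + inventory
current_ratio = current_assets / current_liabilities

# The quick ratio excludes inventory, the least liquid current asset
quick_ratio = (cash + marketable_securities + receivables) / current_liabilities

print(round(current_ratio, 2), round(quick_ratio, 2))  # 1.6 0.8
```

Here the hypothetical firm lands at 1.6 and 0.8, inside the typical ranges mentioned above.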
Other short-term measurements (the norms of which vary from industry to industry) are:

Operating Cash Flow (CFO) to Current Liabilities (CL) = CFO/Average CL

Accounts Receivable (AR) Turnover = Sales/Average AR

Inventory Turnover = Cost of Goods Sold/Average Inventories

Accounts Payable Turnover = Purchases/Average Accounts Payable

If the company in question is far from its industry's norm for such ratios, it may be a short-term credit risk. A company might be a short-term credit risk but a safe bet longer term (or vice versa). Accounting ratios used to determine longer-term risk include:

Long-Term Debt Ratio = Long-Term Debt/(Long-Term Debt + Shareholders' Equity)

Debt/Equity Ratio = Long-Term Debt/Shareholders' Equity

Liabilities/Assets Ratio = Total Liabilities/Total Assets

Operating Cash Flow to Total Liabilities Ratio = Cash Flow From Continuing Activities/Average Total Liabilities

Operating Cash Flow to Capital Expenditures Ratio = Cash Flow From Continuing Activities/Capital Expenditures

Again, if any of these longer-term ratios are far worse than they are for the industry as a whole, you may have a longer-term credit risk. The most commonly used long-term risk measure is the Interest Coverage Ratio (ICR) (use this if you have to pick only one):

ICR = EBITDA/Interest Expense

This ratio is less frequently looked at as:

ICR = (Net Income + Interest Expense + Income Tax Expense + Minority Interest in Earnings)/Interest Expense

For larger U.S. companies, S&P usually assigns its best rating of AAA to firms with an ICR higher than 8.5; financial services firms must generally have an ICR above 3, and smaller firms one above 12.5, to get an AAA rating.
Larger firms with an ICR below 2.5, financial services firms with an ICR below 0.8, and smaller companies with an ICR below 3.5 tend to get a BB or lower rating and thus are classified as not credit-worthy, or "junk."

19. How does compounding work? Would I be better off with 10% annually, semi-annually, or daily?

Daily would pay the most. Paid semiannually, $1 invested at rate r will grow to [1 + (r/2)]^(2T) after T years. Paid monthly, $1 invested at rate r will grow to [1 + (r/12)]^(12T). It can be proven mathematically that the larger the number of compounding periods gets, the larger the final value gets. If one could pay interest every instant, $1 would grow to e^(rT), where e is approximately 2.71828. To put it another way, your credit card company may charge you 1.5% a month, but you can see that this costs you more per year than 18%: the effective annual rate is (1.015)^12 - 1, or 19.56%. Try it on your calculator and see.

20. What is duration?

All firms expect their bankers to be knowledgeable about bonds (particularly if you are placed in debt capital markets/origination). Remember, most larger corporations issue more debt than equity.

Think of bond payments like children on a seesaw. If a bond pays $10 a year for 29 years, and then pays $400 in the 30th and last year, at what point would the present values of these cash flows balance if they were instead children on the seesaw? Duration is this measure.

[Illustration: Duration as a seesaw, with the yearly $10 payments on one side balancing the final $400 payment on the other.]

Why is duration important? There are two main reasons. First, when bond prices rise or fall, interest rates fall or rise. For small changes in interest rates, the duration of a bond will allow you to figure out how much the bond's price will change; it is a measure of interest rate sensitivity. Second, knowing the duration of a bond or a portfolio of bonds (or loans) is important in order to match assets with liabilities.
A classic example of a failure to do so sparked the Savings and Loan crisis in the U.S. in the late 1970s. Banks took in deposits, which were generally withdrawn in well under five years. Meanwhile, they loaned out the same money in the form of 30-year mortgages or invested in long-term U.S. government bonds. When interest rates spiked in the 1970s, S&Ls found themselves paying out double-digit interest on short-term deposits while collecting low single-digit interest on bonds and home loans. This was a classic case of duration mismatch. On a personal level, you may have experienced a time when you were eventually going to receive more money than you owed, but your credit card or other bills were due before your cash inflows arrived; this is a sort of duration mismatch. By computing the duration of one's portfolio, one can hedge or otherwise structure it to help ensure that cash inflows match cash outflows.

Classic, or Macaulay's, duration measures the effective maturity of a bond, defined as the weighted average of the times until each payment, with weights proportional to the present values of the payments. Put another way, it is how long, on average, bondholders will have to wait to get their money back. Mathematically, the weight on the payment at time t is:

w(t) = [CF(t)/(1 + y)^t]/Price

Where CF(t) = the cash flow made at time t and y = the bond's yield.

And duration (D) is:

D = Sum over all t of [t × w(t)]

For example, for a bond with an 8% annual coupon, a four-year maturity, $1,000 face value and a 10% yield:

Duration = [80(1)/(1.1) + 80(2)/(1.1)^2 + 80(3)/(1.1)^3 + 1,080(4)/(1.1)^4] / [80/(1.1) + 80/(1.1)^2 + 80/(1.1)^3 + 1,080/(1.1)^4] = 3.56 years

Another duration equation assumes that one knows the slope of the price change per change in interest rate of a bond. This formula is written:

D = -(slope)(1 + y/2)/P

Where P = price and y = yield. Alternatively, this can be stated as:

D = -(ΔP/P)/[Δy/(1 + y/2)]

With ΔP/P being the percentage change in price.
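The 8% coupon worked example above can be reproduced with a short script (annual compounding assumed):

```python
# Macaulay duration: the average waiting time for cash, with each payment
# date weighted by the present value of the cash flow received on it.

def macaulay_duration(cash_flows, y):
    """cash_flows: list of (time_in_years, amount); y: annual yield."""
    pvs = [(t, cf / (1 + y) ** t) for t, cf in cash_flows]
    price = sum(pv for _, pv in pvs)
    return sum(t * pv for t, pv in pvs) / price

# $1,000 face, 8% annual coupon, 4-year maturity, valued at a 10% yield
bond = [(1, 80), (2, 80), (3, 80), (4, 1080)]
print(round(macaulay_duration(bond, 0.10), 2))  # 3.56 (years)
```

The same function works for any stream of promised payments, so it can be applied to a whole loan portfolio when matching assets against liabilities.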
Some more things you may want to remember (believe it or not, some of these have come up in interviews):

a. The duration of a zero-coupon bond equals its time to maturity, while duration is always less than maturity for coupon-bearing bonds. Thus a three-year coupon-bearing bond has a lower duration than a three-year zero.

b. Holding the coupon rate constant, a bond's duration and interest rate sensitivity generally increase with time to maturity.

c. The slope of a duration graph is less than one (duration increases by less than a year for each year's increase in maturity). (This statement is true only for coupon bonds. For zero-coupon bonds, duration increases on a one-to-one basis with maturity.)

d. Holding time to maturity and YTM constant, a bond's duration and interest rate sensitivity increase as the coupon rate decreases.

Bonus hard question: What is the difference between Macaulay's and modified duration? (This is an actual question asked in a banking interview.)

Answer: Modified duration is the percentage change in price for a given change in yield. Mathematically:

Modified Duration = Duration/(1 + y/2)

or Modified Duration = -(ΔP/P)/Δy

21. What would happen to a company's stock if it announced a large loss due to a write-down of goodwill?

First, what is goodwill? When one firm purchases another, the acquiring firm must allocate assets to the new, combined company's balance sheet. Value is assigned to identifiable and tangible assets like land, buildings and equipment first. Next, value is assigned to identifiable intangible assets like patents, customer lists or trade names. The remainder is listed on the balance sheet as goodwill. In short, you can just say goodwill is the difference between the book value of the purchased company and the actual price paid.
Let's say General Electric purchases an advertising company like Omnicom and a brand-driven firm like Coca-Cola. The amount of goodwill left on the balance sheet of the combined company would be much greater than if GE bought General Motors or Equity Office Properties. This is because a relatively larger portion of GM's and Equity Office's value is derived from their ownership of car factories or office buildings. Omnicom derives much of its value from the skills and knowledge of its employees; as much as managers tout "human capital" these days, there is no balance sheet item for the term. Similarly, the majority of Coke's value comes from its brand names and secret formula (which is not patented).

U.S. GAAP (Generally Accepted Accounting Principles) used to require firms to amortize goodwill much the same way they depreciate tangible assets. This is no longer required: firms must now write down goodwill only when it becomes impaired. For example, if after GE purchased Coca-Cola, scientists discovered that orange juice causes cancer, the ability of GE to earn money off the Minute Maid brand name would become impaired. Thus some of the NPV of future income that had been projected to come from the Minute Maid division of Coca-Cola/GE would have to be subtracted from goodwill on the balance sheet. This subtraction would also be taken from net income as a one-time loss.

Consequently, if a company announces a big write-down of goodwill, this means that the company no longer expects to earn as much money as it had hoped from the intangible and unidentifiable assets in question. This would lower net income; in the case of AOL Time Warner in 2002 or Nortel in 2001, it could lower accounting earnings by tens of billions. However, this is considered a non-cash charge, since it does not affect actual cash coming into the firm for the quarter the charge is taken. Indeed, it may not affect cash flows for several quarters or even years.
If analysts and investors foresee such a charge being taken (which they often do), this massive loss is ignored, and only operating earnings are considered. In the case of AOL Time Warner and Nortel, investors had already seen the fortunes of Yahoo!, Lucent, JDS Uniphase and the like tumble before the announcement of the write-downs. Many dot-coms, technology and telecommunication equipment companies (like JDS Uniphase) had themselves already announced write-downs. The stocks of AOL Time Warner and Nortel were therefore largely unaffected. A firm's stock will only fall if the write-down is completely unexpected, or much larger than expected.

22. What is convexity?

As bond prices rise and fall along with interest rate changes, they do not do so in the linear way assumed by duration. A straight-line approximation is valid for small changes in interest rates; for larger jumps, another measure is needed. Convexity is a second derivative of the price function and measures the actual curvature of the price-yield curve of a bond.

[Illustration: the convex price-yield curve of a bond, with price on the vertical axis and yield on the horizontal axis.]

Convexity is generally considered desirable in a bond, since the greater the curvature, the more prices will rise when yields decrease and the less they will fall when yields increase. Convexity also means that as yields rise, the price-yield curve becomes flatter. Convexity is important because it allows one to improve on the duration-based approximation of a bond's price change. Mathematically, it can be written:

Convexity = [1/(P(1 + y)^2)] × Sum over t from 1 to n of [CF(t)/(1 + y)^t](t^2 + t)

Where n = time until maturity of the bond, CF(t) = the cash flow paid to the bondholder at time t, P = price, and y = yield. You will never have to know this equation in an interview, but those of you familiar with calculus might find it helpful in understanding convexity.

23. What's deferred tax?

A deferred tax liability is a non-cash balance sheet item stemming from differences between reported accounting income and taxable income.
Future tax deductions stemming from similar differences are classified as a deferred tax asset. An example is when a manufacturer of vacuum cleaners increases the estimated number of warranties to be redeemed, due to a defect found in a top-selling model. This would immediately lower accounting earnings, but the charge would not be immediately tax-deductible. It would instead create a deferred tax asset, since the firm would not be able to deduct the expense for tax purposes until the warranty repairs were actually made.

24. What is securitization?

Securitization is the immediate monetizing of future cash flows. Recall that earlier, in our discussion of duration, we said that when interest rates spiked in the 1970s, S&Ls found themselves paying out double-digit interest on short-term deposits while collecting low single-digit interest on bonds and home loans. This was a classic case of duration mismatch. The way out of this mess for many S&Ls was to sell the rights to future mortgage payments (via investment banks) to institutional investors, just like bonds (now called CMOs, or collateralized mortgage obligations). Since the mortgage-backed bond market was started in the late 1970s, virtually any sort of predictable cash flow that can be securitized has been. Examples include credit card payments, auto loans, student loans, even songwriting royalties ("Bowie Bonds") and several states' tobacco case settlement money. The advantage to the seller of the bond is that it receives cash immediately and mitigates any risk of suffering from future defaults by debtors.

25. Rising U.S. trade deficits are a problem. We need to get our deficit lower. Do you agree?

Rising deficits are not necessarily a problem. After all, Japan had a trade surplus throughout the 1990s while being mired in recession, while the U.S. had the world's strongest economy and high deficits during this time. The trade deficit is defined as follows:

Trade Balance = Exports - Imports of Goods = NX
If the balance is negative, we have a trade deficit. You should recall from basic economics that GDP = C + I + G + NX (with C = consumer spending, I = investment, and G = government spending). Lower NX must be offset by higher C, I, or G. In the case of the U.S. during the '90s, C (consumption) and I (investment) were both higher. The total dollars invested in the U.S. economy during this time included massive amounts invested by foreigners. Fast-growing economies like the U.S. or the so-called Asian Tigers attract foreign investment while having trade deficits. Hence, higher trade deficits are not necessarily a sign of economic weakness but can be a sign of confidence in an economy by global investors (assuming the deficit is financing investment and not consumption or government budget deficits).

26. How do you calculate market capitalization of a company?

Multiply the current stock price by the number of shares outstanding. For example, if MSFT is trading at $52/share and there are 5,415.46 million shares outstanding, the market cap would be calculated as $52/share x 5,415.46 million shares = $281,603.9 million (roughly $281.6 billion).

27. How do you calculate the market value of a firm?

Total firm value (V) equals the sum of the market value of the firm's debt (D) and equity (E). Or, V = D + E.

28. What is the breakup value of a firm?

The breakup value of the firm is determined by analyzing the liquidation value of all tangible assets (A) and liabilities (L). These are netted (i.e., A - L) and the result is the residual value accruing to shareholders.

29. How can a company raise its stock price?

There are many ways in which a company could do this. These include:

Initiation of a large public stock repurchase. This sends a signal to the market that the company feels that its stock is undervalued. This means that the company can't find better investment alternatives than its own stock.
Financial ratios such as earnings per share and return on equity should increase, since there are fewer outstanding shares (corporate share repurchases go into the treasury stock account, which is not included in the number of shares outstanding).

Announcement of planned changes to organizational structure, such as mounting a major cost-cutting campaign, consolidating or refocusing product lines, making management changes and so on.

Structural changes, including mergers, acquisitions and divestitures.

Announcing an increase in dividends. Since the share price should be valued as the discounted value of future expected cash flows, if expected cash flows increase, the share price should increase.

30. What are some reasons companies carry out mergers?

A merger occurs when two companies agree to combine operations and go forward as a single company. The assets and liabilities of the two companies are combined. Mergers can occur for a variety of reasons. Benefits can include cost reduction, economies of scale or the ability to enter a new market to exploit new opportunities at a lower cost than would be possible without the merger. There are different kinds of mergers as well.

Horizontal merger: a merger that takes place between two competitors (companies in the same line of business), such as one airline purchasing another. The Daimler-Chrysler merger is an example. Horizontal mergers can result in industry consolidation. If industry competition is sufficiently diluted after the horizontal merger, the new company may be able to raise prices and keep them high.
For example, the Federal Trade Commission reports that had the merger between Staples (an office supply store) and Office Depot been allowed to go through, Staples would have been the only office supply store in many areas, and would have been able to raise prices 13 percent.

Vertical merger: a merger of non-competing companies, where one makes a component needed by the other. These firms have a buyer-seller relationship. Example: Time Warner's merger with Turner Broadcasting System. Vertical mergers can increase barriers to entry by either refusing to sell needed production inputs to competitors or raising their prices. These types of mergers can also result in lowered prices that competitors cannot match, synergies in product design, and more efficient use of resources.

Conglomerate merger: a combination between companies in unrelated businesses. Goals of conglomerate mergers can include diversification. The theory: the companies are worth more together than separately. Conglomerates have fallen out of favor in recent years; investors today prefer to diversify their portfolios, not purchase diversified companies. Divestitures and breakups of huge conglomerates built during the 1960s and 1970s are now common.

31. What is an acquisition?

An acquisition is where one company (the acquiring firm) seeks to gain control over another (the target). An acquisition may be the cheapest way to buy desired assets or gain entry into a particular field. The takeover may be friendly or hostile. In a friendly takeover, the acquiring firm makes an offer to the target firm's management and board of directors to establish either a parent-subsidiary relationship, a merger or a consolidation of the two firms. In a hostile takeover, the acquiring firm proceeds against the wishes of the management/board, normally by accumulating stock and making tender offers directly to the shareholders.
32. What are some of the defensive tactics that a target firm may employ to block a hostile takeover?

There are a variety of strategies a target firm may employ, many designed to make the takeover of the target economically unattractive.

Changing the bylaws of the corporation: for example, by implementing a staggered board so that only 1/3 of the directors are elected each year and/or requiring a supermajority (for example, 75%) to approve a merger or acquisition.

Poison pill (shareholder rights plan): the target company uses this tactic to make its stock less attractive to the potential acquirer. According to the Delaware Supreme Court, "A poison pill is a defensive mechanism adopted by corporations that wish to prevent unwanted takeovers. Upon the occurrence of certain triggering events, such as a would-be acquirer's purchase of a certain percentage of the target corporation's shares, or the announcement of a tender offer, all existing shareholders, except for the would-be acquirer, get the right to purchase debt or stock of the target at a discount. This action dilutes the would-be acquirer's stake in the company and increases the costs of acquisition."

Golden parachutes: lucrative benefits, such as stock options, severance pay and so on, given to top management in the event of a hostile takeover. Not only does this increase the cost of the merger, but a takeover may also result in the loss of talented managers and other employees, who may be assets coveted by the acquirer.

Employee Stock Option Plan (ESOP): allows lower-level employees to purchase shares and appoint trustees who will (presumably) support management in a defense against a hostile takeover.

Shareholder Stock Purchase Program (SSPP): a program allowing shareholders to purchase additional shares at a low price should a triggering event occur.
The triggering event can be, for example, whenever an outside entity acquires more than 20% of the stock.

Defensive litigation: seek help from the Justice Department on the grounds that the merger will be anti-competitive.

White knight: another acquirer more acceptable to management.

33. Who is Alan Greenspan and why is he followed so closely by the press? How will the stock market be affected if he announces an increase in interest rates?

Alan Greenspan is the Chairman of the Federal Reserve Board. He is closely followed by the press because the Federal Reserve Board has a number of tools it can use to stimulate or slow economic growth. One of these tools is the ability to alter the target interest rate charged between banks for overnight loans (the federal funds rate). This rate is widely used as a benchmark against which other rates are compared. The effect of a rise in interest rates depends on whether the announced increase was anticipated or not. If the markets anticipate a rate increase of 50bp (50 basis points, or half a percent) and Greenspan announces a rate increase of 50bp, there should be no change in stock prices, since stocks would already have been priced according to this expectation. On the other hand, if rates rise higher than expected, stock prices must change to reflect the new reality and the overall market indices could be expected to fall. (Although, for example, stocks of financial corporations such as banks and other lenders may rise if a significant part of their revenues come from interest payments.)

34. Why is a firm's credit rating important?

Credit ratings are assigned by the major ratings agencies (Moody's, S&P, Fitch). Firms can raise cash by going to the equity markets (IPO, secondary offering, etc.) or through the debt markets (commercial paper, bonds, etc.). The credit rating directly affects the firm's cost of debt. The lower the rating, the higher the borrowing cost.
If the rating is low enough, the firm may be rated non-investment grade, at which point many institutional investors will be prohibited from owning its debt. There are times when a firm will not be actively engaged in the capital markets. If a firm is rated AA, for example, it can pretty much raise cash whenever it wants by borrowing.

35. Why would a firm choose short-term over long-term debt?

Short-term debt is generally cheaper and easier to obtain, but risky because the lender can cut you off at any time, for example, if your credit rating worsens. If you borrow at a floating rate, the risk of short-term debt increases, because the rate at which you must roll over the debt is a random variable influenced by uncontrollable factors such as inflation. Short-term debt is appropriate only for short time horizons or when assets are liquid.

36. If a firm needs to raise cash by issuing a bond, but is worried about the fact that interest rates may drop in the future, what strategy should it employ?

It can issue long-term callable debt if it is concerned about locking in a high interest rate. If rates do fall, the firm can call the bond and refinance at the lower rate.

37. What is a leveraged buyout (LBO)? Why lever up?

A leveraged buyout is a strategy to buy a company using borrowed funds. The acquiring company can use its own assets as collateral, in anticipation that future cash flows from the resulting acquisition will cover the interest payments. LBOs enable shareholders to change the capital structure of the firm. They can buy up all of the stock and retire most of it, leaving the corporation with a structure of, for example, 90 percent debt and 10 percent equity. Benefits may include improved corporate governance: since debt payments have to be made, there are managerial incentives for good performance, thus better aligning managerial and shareholder interests.

38. How would you value a company with no earnings, such as a start-up?

You can't use the DDM or DCF. You would use multiples such as Price/Sales or comparables.
Recall the caveats in using comparables: the key is to choose the right comparables. No two companies are exactly alike. Certainly it is necessary to choose companies in the same industry, but also consider capital structure, size, operating margins and any seasonality effects.

39. Your boss uses the discounted free cash flow model to value high-growth stocks with low earnings. What do you think of this strategy?

It is very dangerous to value high-growth, low-earnings companies by the traditional DCF model. First of all, important financial variables such as profit margins, working capital requirements and revenue growth rates are not distributed as they might be for more stable companies, but rather are bimodally distributed. The return on such a company is usually much better or much worse than expected, so traditional methods that trade off expected return against expected risk do not work. Risk is very high, making it very difficult to determine an appropriate discount rate. This is the same problem faced by venture capitalists, who instead might just use a very high hurdle rate (the minimum return that someone must receive before they'll invest in something) of, say, 50% or so. Also, it is very difficult to determine the length of time over which the growth will occur before diminishing to stable growth. The uncertainty in all of these critical variables means that DCF analysis can produce very misleading and inaccurate results.

40. Why might high-tech stocks have high prices even though they have little or no earnings?

Investors are expecting high future growth.

41. How do you calculate a discount rate?

You would use the CAPM. While beta is an admittedly flawed estimate of risk, it is the best risk measure we have.
The Capital Asset Pricing Model says that the proper discount rate to use is the risk-free rate plus the company risk premium, which reflects the particular company's market risk, or beta.

42. What is beta?

Beta is a scalar measure of risk relative to the overall market. A financial instrument with a beta of 1 is perceived to be as risky as the market, and moves with the market. Note: betas are always weighted averages of market values, not book values.

43. How do you calculate an equity beta?

You can perform a linear regression of the historical stock returns against market returns. The slope of the regression line is beta. Value Line, S&P and other data providers publish equity betas. If you can't get a beta (for example, if the company is private), use the beta of a comparable company as an estimate.

44. Is beta constant or does it vary over time?

As companies mature, beta should approach one in the limit. Some models of beta (for example, Barra) use a weighted average, giving 2/3 weight to the regression beta and weighting the rest at 1 to take this into account. Thus:

beta = (2/3) x beta(historical) + (1/3) x (1)

45. What do you think the beta of General Motors is? What about a high-tech stock, such as Cisco Systems?

Since beta measures the sensitivity of the stock to the overall market, mature companies such as GM are generally expected to have betas close to 1. This means that they are about as risky as the overall market. A high-tech stock is perceived as more risky than the market and would probably have a beta higher than 1, perhaps closer to 2. Note that the more pronounced the growth orientation of the firm/industry, the higher beta is likely to be. Betas vary significantly between industries.

46. Why do you unlever beta?

You need a discount rate for free cash flows, which can be obtained through the CAPM: E[r(firm)] = E[rf] + beta(firm) x (rm - rf).
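The regression estimate of beta described in question 43, and the CAPM cost of equity it feeds into, can be sketched in a few lines. Everything here is illustrative: the return series, risk-free rate and market risk premium are made-up numbers, and beta is computed as the regression slope, cov(stock, market) / var(market).

```python
# Estimate beta as the slope of a regression of stock returns on market
# returns (slope = cov(stock, market) / var(market)), then plug it into
# the CAPM to get a cost of equity. All inputs are illustrative only.

def beta(stock_returns, market_returns):
    n = len(market_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / n
    var = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var

market = [0.01, -0.02, 0.03, 0.00, 0.02]
stock = [1.5 * m + 0.001 for m in market]  # behaves like a beta-1.5 stock

b = beta(stock, market)
cost_of_equity = 0.05 + b * 0.06  # CAPM with assumed rf = 5%, premium = 6%
print(round(b, 2), round(cost_of_equity, 3))
```

Because the synthetic stock series is an exact linear function of the market series, the regression slope recovers 1.5 exactly; with real return data the estimate would be noisy, which is one reason beta is "an admittedly flawed" risk measure.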
The beta of a firm is supposed to be an unbiased measure of the firm's risk. The value of the firm's business should be independent of the amount of financial leverage, so you unlever beta to strip out the effect of debt. Since the levered beta averages the risk borne by the firm's creditors and shareholders, if the firm looks risky, ask: is it risky because of the nature of the business or the nature of the financing? If the latter, you unlever beta to get at the underlying business risk.

47. What is the weighted average cost of capital and how do you calculate it? Why is it important?

The weighted average cost of capital, WACC, is the expected return on a portfolio consisting of all of the entity's securities. It is used as the discount factor in capital budgeting decisions, and reflects the risk of the company. WACC = rA = rE x (E/V) + rD x (D/V), where rE is the expected return on equity (return on stock), rD is the cost of debt, E and D are the market values of equity and debt, respectively, and V is the market value of the firm, V = D + E. The WACC is used as the discount rate in valuation.

48. How would you calculate WACC for a private company?

If the company is not publicly traded, you would try to find comparable companies but include a control premium.

49. Why would a firm try to optimize its capital structure?

The idea here, due to MM (Modigliani-Miller), is that there is an optimal capital structure for a firm. The capital structure is the mix between debt and equity. If the capital structure is optimized, the return on equity (ROE) should also be optimized, because the firm possesses the optimal amount of equity to produce its income. According to MM (once corporate taxes are taken into account), firms should rely almost exclusively on debt to finance their operations. They don't do this, in practice, for a variety of reasons, including reduced liquidity, increased risk of financial distress, agency costs, etc.

50. What are some functions of an investment banker?
Helps provide financing for a company by bringing new issues public. This involves performing due diligence and valuation analysis in order to price the issue.

Acts as an intermediary between the investing public and security issuers.

Provides advice and guidance to security issuers through and subsequent to offerings.

May provide temporary stabilization of the bid price during the offering and distribution period.

51. How can a firm raise cash?

It can go to the capital markets (selling stock through an IPO or secondary offering), it can issue debt, or it can sell off assets.

Skills for Sales & Trading

Interviewers will want to know whether you have strong interpersonal and sales skills. Will you be able to:

Manage and expand your client base to increase profitability?

Work efficiently under pressure?

Handle multiple tasks effectively at one time?

They will also wonder whether you have strong financial skills. Will you be able to:

Understand how financial markets operate from a broad perspective as well as relative to a particular marketplace (i.e., the foreign exchange, stock, or Treasury markets)?

Communicate important quantitative investment criteria to a client base?

Do you have strong analytical skills? Will you be able to:

Listen to the firm's analysts and strategists and also read and quickly interpret their research reports? Can you quickly and briefly relay the key ideas to clients?

Evaluate the firm's competition and effectively market the firm against that competition?

For trading positions, the interviewer will also want to know if you have good interpersonal skills as well as the following:

Are you able to work efficiently under pressure?

Can you handle and prioritize multiple tasks at one time? Can you do the same repeatedly throughout the day?

Are you decisive?

Can you manage people efficiently?

Can you act as liaison between multiple (sometimes aggravated) parties?

Traders must also have good financial and analytical skills:

Do you understand how financial markets operate from a broad perspective as well as relative to a particular marketplace (i.e.,
the foreign exchange, stock or Treasury markets)?

Will you be able to assess and/or initiate risk positions for various markets?

Will you understand various products on a macro and micro level?

Can you analyze and improve information flow among traders, clients, and salespeople on various desks?

With these skills in mind, let us dive into some typical questions and effective answers for those of you seeking positions in sales & trading. These questions are not grouped together by topic, since a typical applicant will be asked completely unrelated questions throughout his or her interview process and is expected to be able to shift gears quickly and continually (like an actual trader or salesperson).

1. Why have you chosen sales and trading?

This is a rather basic question that often begins an interview, but it is one that could easily kill your chances before you really begin. You should give your own genuine answer, drawing on your own skills and background. You nonetheless want to demonstrate that you are drawn to the activities, and have the skills, of a salesperson or trader. In other words: you like dealing with lots of people on the phone, you like a fast-paced environment, you work well under pressure, you like the entrepreneurial atmosphere of the trading floor, you like working on a team, you are interested in the markets (don't lie or be too specific about this last one, unless you are prepared to answer very specific follow-up questions).
(which has led to news stories detailing the spectacular losses suffered by Barings, Orange County in California, and elsewhere). In a corporate finance role, you will mainly have to construct, pitch and/or evaluate simple as well as extremely complex derivative instruments for hedging and financial insurance.

3. What particular markets or instruments are you interested in?

You should be honest in answering this, since it may lead to follow-up questions on whatever market(s) you name. Remember, firms with rotational programs for entry-level analysts and associates (Citigroup, Bear, Lehman) want to hear that you are open to many different areas. This is true even if you have experience in a particular area. For lateral hires, they assume you will want to stay in the same sort of area.

4. Why debt, equity, or currency and commodities? Why cash or derivatives?

Again, this should be answered honestly. Debt (or fixed income, as it is often called) is viewed as more quantitative than equity. The debt markets are also more attuned to broad macroeconomic trends, such as interest rate changes and GDP figures. Equity is viewed as more storytelling and more microeconomic in nature. Derivatives are viewed as very quantitative, and many would say that one can make money in derivatives whether the markets go up or down. One should be careful when answering this, however. Equity interviewers do not want to hear that you are NOT quantitative, and convertible bond desks are generally part of a firm's equity division but require knowledge of equities, bonds AND derivatives. Similarly, there is an element of storytelling to areas of fixed income, particularly in high yield and emerging markets sales.

5. What previous experiences have you had that relate to sales and trading?

Hopefully you can demonstrate that your past experience relates to the skills listed at the beginning of this chapter. Any sort of sales or financial markets experience is relevant.
Even if you have not sold or worked anywhere near the Street, talk about your personal experience managing your E*TRADE account, or talk about the fast-paced and high-pressure environment of a past job, or about how you have been good at persuading people in the past.

6. Do you want to sell or trade?

Answer this honestly (you don't want to end up somewhere you will later regret). Again, firms with rotational programs want to know that you are open. You should be able to demonstrate that you have the skills mentioned at the beginning of this chapter. If you have not sold for a living before, discuss ways in which you have persuaded people in the past.

8. How do you price an option?

While this question is typically asked in sales and trading interviews, it may also come up in a banking interview to test your basic financial knowledge. It may also come up at firms where derivative or convertible bond capital markets/origination is part of a banking rotation program. In addition, so-called "real options" are increasingly used in equity valuation, particularly in valuing pharmaceutical/biotech and natural resources-dependent companies.

There are two main ways to price an option. One is using a binomial pricing model. Binomial option pricing (which is also referred to as the two-state option-pricing model) is based on the theory that no arbitrage opportunities will become available, or if they do, they will be immediately arbitraged away. First introduced in 1979, binomial pricing and its variants are probably the most common model used for equity calls and puts today. The binomial option pricing model is essentially based on the idea that an asset price will move up or down in a given time period in only one of two possible ways. For example, let us take a simple, two-step binomial model, where the initial price of a stock is 100. The price can either go up in the next time period to 110 or down to 90. The current risk-free (or U.S. Treasury) interest rate is 2% per period.
How would we price a put with a strike price of 95? (That is, the right to sell the stock to the writer of the put at 95.) S=100, u=1.10, d=0.90, K=95, and r=1.02. The binomial tree thus looks like the following:

                121
        110
100             99
        90
                81

Thus the option payoffs (what the payoffs would be if the put is not exercised until maturity) are:

Put(uu) = max{0, K - u^2 S} = 0
Put(ud) = max{0, K - udS} = 0
Put(du) = max{0, K - duS} = 0
Put(dd) = max{0, K - d^2 S} = 14

In other words, only if the stock goes down twice is the put in the money (the stock price of 81 is below the $95 strike price). Otherwise, it is worthless. Since r=1.02, the risk-neutral probability p is:

p = (r - d)/(u - d) = (1.02 - 0.90)/(1.10 - 0.90) = 0.60

If we look at node dS=90, the value from immediate exercise of the option is (K - dS), or 5; at uS=110, (K - uS) = -15, so the put would not be exercised there. The value from not exercising at dS would be 0 if the price of the stock goes up, or Put(dd)=14 if the price goes down again. Using the risk-neutral probability and discounting by the risk-free rate, the value from not exercising here is:

(1/1.02)[(0.60)(0) + (0.40)(14)] = 5.49

Immediate exercise at the first step, S=100, gives K - S = -5; since it is not optimal to exercise with a negative result, the exercise value is max{K - S, 0} = 0. Not exercising leads to Put(u) = 0 if node uS is the result, and Put(d) = 5.49 if dS is the result. Since (again) it is not optimal to exercise early at the first step, the initial value of the put is:

P = (1/1.02)[(0.60)(0) + (0.40)(5.49)] = 2.15

Now imagine taking this process out hundreds or even thousands of steps. You can see why you will not be asked to solve a problem like this in an interview. As an options salesperson or trader, complex computer programs thankfully do binomial pricing for you. The other way to price options is by using the Black-Scholes equation, which was first proposed in 1973.
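The two-step binomial put just worked through can be checked with a short script. This is a sketch of the textbook recursion, not production pricing code; it checks early exercise at every node, as in the example.

```python
# Price an American put on a recombining binomial tree by stepping
# backward from the terminal payoffs, comparing holding vs. exercising.

def binomial_put(S, K, u, d, r, steps):
    """American put value; r is the gross per-step rate (e.g. 1.02)."""
    p = (r - d) / (u - d)  # risk-neutral probability of an up move
    # Terminal payoffs: after `steps` moves with j up-moves, price = S*u^j*d^(steps-j)
    values = [max(0.0, K - S * u**j * d**(steps - j)) for j in range(steps + 1)]
    for step in range(steps - 1, -1, -1):
        values = [
            max(
                # value of holding: discounted risk-neutral expectation
                (p * values[j + 1] + (1 - p) * values[j]) / r,
                # value of exercising now at this node's stock price
                max(0.0, K - S * u**j * d**(step - j)),
            )
            for j in range(step + 1)
        ]
    return values[0]

price = binomial_put(S=100, K=95, u=1.10, d=0.90, r=1.02, steps=2)
print(round(price, 2))  # matches the 2.15 computed by hand above
```

Raising `steps` into the hundreds is exactly the "taking this process out thousands of steps" that trading-desk software does.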
The original version of the equation assumed that all options were European (which means that they cannot be exercised before maturity) and that the underlying pays no dividends.

A call option can be valued by the Black-Scholes equation using these variables:

S = Current price of the underlying asset (stock or otherwise)
K = Strike price of the option (sometimes called X, for exercise price, instead)
t = Time until the option expires
r = Riskless interest rate (should be for the time period closest to the lifespan of the option)
sigma^2 = Variance in the log-normal value of the underlying asset (volatility)

Once one has these variables, one plugs them into these equations:

Price of call = S N(d1) - K e^(-rt) N(d2)
Price of put = K e^(-rt) N(-d2) - S N(-d1)

where d1 = [ln(S/K) + (r + sigma^2/2)t] / (sigma sqrt(t)) and d2 = d1 - sigma sqrt(t)

Dividend payments reduce the price of a stock (you may have noticed that a stock's price almost always declines on the ex-dividend day). The equation was later modified to take steady dividend payments into account. If the dividends payable on the underlying asset are expected to remain constant over the life of the option, the equations become:

C (call) = S e^(-yt) N(d1) - K e^(-rt) N(d2)
P (put) = K e^(-rt) [1 - N(d2)] - S e^(-yt) [1 - N(d1)]

where y (yield) = annual dividend / price of the asset, d1 = [ln(S/K) + (r - y + sigma^2/2)t] / (sigma sqrt(t)) and d2 = d1 - sigma sqrt(t)

N(d) is the probability that a standard normal variable will be less than d. These values can be found (or approximated) using a computer.
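The no-dividend formulas above translate directly into a few lines of code, with N(d) built from the standard error function. The inputs chosen below (an at-the-money option with one year to expiry, 5% rate, 20% volatility) are illustrative assumptions, not figures from the text.

```python
# A sketch of the Black-Scholes formulas for a European call and put
# on a non-dividend-paying stock.
from math import log, sqrt, exp, erf

def norm_cdf(d):
    """N(d): probability that a standard normal variable is below d."""
    return 0.5 * (1.0 + erf(d / sqrt(2.0)))

def black_scholes(S, K, t, r, sigma):
    """Return (call, put) prices per the formulas in the text."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    call = S * norm_cdf(d1) - K * exp(-r * t) * norm_cdf(d2)
    put = K * exp(-r * t) * norm_cdf(-d2) - S * norm_cdf(-d1)
    return call, put

call, put = black_scholes(S=100, K=100, t=1.0, r=0.05, sigma=0.20)
print(round(call, 2), round(put, 2))
```

A useful sanity check on any implementation is put-call parity: call - put should equal S - K e^(-rt), whatever the volatility.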
As already stated, you will never actually have to price an option during an interview, but it is important that you be able to tell your interviewer the two main pricing methods (binomial and Black-Scholes) and the basic elements that go into determining price under both methods: current price of the underlying asset, strike price of the option, time until option expiration, riskless interest rate, volatility in the price of the underlying asset, and dividends (if applicable).

9. Tell me what an institutional investor is.

Buyers of stocks, bonds, and other investment instruments are generally governments, corporations, individual investors (either retail, like buying through Schwab or Merrill Lynch, or via a trust or private bank like J.P. Morgan for the very wealthy) and institutional investors. Institutional investors are non-governmental institutions that manage and invest money for themselves or others. They include mutual funds, which pool and manage money for large groups of retail investors; pension funds, which handle retirement money for a company's or state's defined benefit retirement plan; insurance companies, which invest either to earn enough to pay out policies in the future or to hedge their liabilities; and endowment funds (like a museum's left-over donation money or a university endowment). Certain professional investment firms (like Fidelity or the Capital Group) manage money on an outsourced basis for both retail clients through mutual funds as well as for institutions like endowments or for government or corporate clients. These are also commonly referred to as institutional investors. Hedge funds are often lumped in with institutional investors. Some firms break sales coverage (especially in derivatives) into corporate (non-pension-related), insurance, hedge funds and institutional (everything else save individual investors).

10. Does the price of an option go up or down when interest rates rise?

This is a classic trick question in two ways.
First, most interviewees have drilled it into their heads that when interest rates go up, bond prices go down, and vice versa. Thus your gut reaction is to say "down." Don't rush to answer this one: the question is about options, not bonds. Second, the answer depends on what kind of option one is talking about. If it is a call, the price will go up when interest rates rise. If it is a put, the price will decrease.

One can explain this several ways. If you are very comfortable with math and the Black-Scholes equation, you might see that as r goes up, C does as well, while P decreases:

C (call) = S e^(-yt) N(d1) - K e^(-rt) N(d2)
P (put) = K e^(-rt) [1 - N(d2)] - S e^(-yt) [1 - N(d1)]

where y (yield) = annual dividend / price of the asset, d1 = [ln(S/K) + (r - y + sigma^2/2)t] / (sigma sqrt(t)) and d2 = d1 - sigma sqrt(t)

Basically, money now is worth more than money later, and as interest rates rise, the present value of the final exercise price is reduced. Under the Black-Scholes framework, we price options as though we are in a risk-neutral economy, which means we assume that the underlying security's price will bring future returns equal to the risk-free interest rate. Let us take a world with two periods, 1 (today) and 2 (tomorrow). If the call option is in the money at expiration, it will pay us S2 - K. The present value of this payment is (S2 - K)/(1+r), or S2/(1+r) - K/(1+r). Since the underlying security appreciates at the risk-free rate, S2 = S1(1+r), the present value of a possible future payoff is S1 - K/(1+r). As r rises, the K/(1+r) term shrinks, so the value of the call rises.

For puts, put-call parity breaks the valuation down into: put = call + the present value of the strike price - the underlying stock price + the present value of dividends. A put is the right to sell something at a set price at a future date. Rising interest rates make the present value of what you will get less valuable, all else being equal.
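This parity intuition can be checked with a few lines of arithmetic: only the present value of the strike depends on r, and since call - put = S - PV(K), a higher rate widens the spread in favor of the call. The numbers below are illustrative assumptions.

```python
# Put-call parity: call - put = S - K*e^(-r*t). Only the discounted
# strike depends on r, so raising r favors the call over the put.
from math import exp

S, K, t = 100.0, 95.0, 1.0  # illustrative (assumed) inputs

for r in (0.02, 0.08):
    pv_strike = K * exp(-r * t)  # present value of the strike price
    spread = S - pv_strike       # call value minus put value, by parity
    print(f"r={r:.0%}: PV(K)={pv_strike:.2f}, call - put = {spread:.2f}")
```

At the higher rate the discounted strike is smaller, so for the same underlying price the call must be worth more relative to the put, which is the answer the interviewer is looking for.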
Alternatively, just think of what one would have to do if one could not invest in options but wanted the same result: one would sell short an amount of the stock (represented by delta, in options parlance the amount of the underlying needed to produce the replicating portfolio) and lend out the present value of the strike price, weighted by the probability of paying the strike price at expiration. Since owning a bond is equivalent to lending, and the price of a bond goes down when interest rates rise, the value of a put will similarly fall when interest rates rise. Of course, when interest rates rise or fall, the overall stock market often moves in the opposite direction, which is why we state all else being equal. In the real world, changes in interest rates might affect the underlying price enough to offset the predicted change in an option's price under Black-Scholes. (For example, rising interest rates should increase the value of a call, but the underlying stock of an interest-rate-sensitive company like a commercial bank may go down as a result of the rate increase.) The way the various option valuation inputs affect value is summarized in the chart below.

CHANGE IN INPUTS AND EFFECT ON OPTION PRICES

  Input (increase)       Call value    Put value
  Stock price            Goes up       Goes down
  Strike price           Goes down     Goes up
  Volatility             Goes up       Goes up
  Time to expiration     Goes up       Goes up
  Interest rate          Goes up       Goes down
  Dividends              Goes down     Goes up

11. What would happen to the price of an option if Iraq invaded Kuwait again?

Increases in volatility are good for option prices, all else being equal, because the more prices jump around, the more likely an option will expire in the money. Thus, an invasion, in theory, would increase the value of all options. In practice, the overall stock market might plunge, while certain sectors (such as defense and energy) would soar. Therefore it is unlikely that all else would remain equal.
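The first half of that reasoning, that higher volatility raises option values, can also be checked numerically. Here is a standalone Black-Scholes call pricer (no dividends; the inputs are illustrative only):

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def bs_call(S, K, t, r, sigma):
    """European call price under Black-Scholes, no dividends."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return S * N(d1) - K * exp(-r * t) * N(d2)

# Doubling volatility raises the option's value, all else held equal:
calm = bs_call(S=100, K=100, t=0.5, r=0.03, sigma=0.20)
crisis = bs_call(S=100, K=100, t=0.5, r=0.03, sigma=0.40)
assert crisis > calm
```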
It is likely that underlying prices would fall enough in certain sectors (like airlines or brokerage stocks) to cancel out whatever gains were created by a spike in volatility. Thus there is no unambiguous answer to this question.

12. What are some limitations of the Black-Scholes equation?

The Black-Scholes equation applies only to European-style options. American-style options are generally more valuable (all else being equal) than European options, since the holder of an American option can exercise the option at any time before maturity (European option holders can only exercise at maturity). In addition, the Black-Scholes formula assumes that asset price changes follow a lognormal distribution. In reality, though, prices can jump around far more than such a distribution would predict, and thus the actual distribution can have fatter tails. This means that there is a greater chance of an option being exercised than the formula suggests. Thus, option prices (European- and American-style) tend to be slightly higher in the real world.

13. What do you do for fun?

This is a fairly common question for sales & trading interviews. You should try to think of something truthful yet interesting that will help you stand out. Remember, salespeople and traders tend to be outgoing and gregarious types, so athletics, outdoor activities, or something totally unique like being in a band would go over better than, say, macramé.

14. What is a yield curve and how is it constructed?

A yield curve is the plot of current spot yields against maturity. For example, a Bloomberg machine (a good thing to mention if you are interviewing with a data vendor: know their product and refer to it in your answers) continually updates the current yield curve. The yield curve is constructed of current, on-the-run benchmark Treasuries.
[Figure: Yield Curve as a Function of Maturity, plotting yield (%) against maturity in years.]

As noted previously, the shape of the yield curve can be upward sloping (normal), downward sloping (inverted), flat, humped and so on. Missing maturities such as one-year, three-year and so on may be obtained by straight linear interpolation. For this method to be valid, we assume that the yield curve is constructed of piecewise linear segments. The missing one-year value, then, would be estimated by interpolating between the closest given points that bound it. In this case, we have the six-month yield and the two-year yield and nothing in between. The general interpolation formula is derived from the first-order Taylor series approximation of f, where f is the continuous function of yields:

f(x) = f(x0) + f'(x0)(x - x0)

We estimate the derivative as the slope between the known points:

f'(x0) = dy/dx = (y1 - y0)/(x1 - x0)

Here, y1, y0, x1 and x0 are known, and x is the point at which we seek the estimate: x = 1 year, x0 = 0.5 years, x1 = 2 years, y0 = yield at 0.5 years = 1.61% and y1 = yield at 2 years = 2.13%. Then:

f(1) = f(0.5) + [(2.13 - 1.61)/(2 - 0.5)](1 - 0.5) = 1.61% + 0.17% = 1.78%

There are par yield curves, forward yield curves and spot yield curves. If the yield curve is upward sloping, the forward yield curve is above the spot curve, which is above the par yield curve. If the yield curve is downward sloping, the par yield curve is above the spot curve, which is above the forward curve.

Some definitions:

On-the-run: Newly auctioned securities are referred to as on-the-run, while older, seasoned issues (i.e., those sold in previous auctions) are referred to as off-the-run.

Benchmark Treasury: A benchmark Treasury is a reference Treasury having a specific maturity, such as three months, two years and so on.
These are securities against which other bonds may be measured, usually in terms of yield comparison. The news media will often say things like, "the benchmark 10-year Treasury closed up 30 basis points."

Basis point: A basis point is 1/100th of a percentage point. If someone says that the yield on the two-year benchmark Treasury rose by 100 bp, they mean that the yield increased by 100 hundredths of a percentage point, or a full one percent. Basis points provide a convenient means of speaking about fractions of percentage points. They can be used to report the cost of borrowing in terms of a spread to some reference, as in, "Ford Credit issued bonds at 25 bp over LIBOR," implying that the cost of borrowing was LIBOR + 0.25%; that is, if LIBOR is 4%, the total borrowing cost would be 4.25%.

15. Define the term structure of interest rates.

The term structure of interest rates is the relationship between the yield to maturity of risk-free zero coupon securities (usually Treasuries) and their maturities. The yield of a newly issued risk-free zero coupon bond (pure discount bond) is called the spot rate, and the relationship between these spot rates and the bond maturities is called the spot yield curve.

16. What is a spot rate?

The spot rate is the rate at which you could purchase the asset today. There are spot interest rates, spot rates for currencies, spot prices for commodities and so forth.

17. What is a forward rate?

A forward rate is an interest rate prevailing at some later time that can be locked in today. For example, if we are going to need a one-year loan in one year's time, we could go to the bank today and lock in the rate we will pay. We can get an idea of the market's opinion of future rates by calculating forward rates from the yield curve.

18. Do forward rates predict the rates that ultimately prevail in later periods?

No.
The expectations hypothesis holds that forward rates are unbiased predictors of future spot rates, but in practice, numerous studies (most notably one by Fama in 1976 and another paper by Fama in 1984) have shown that forward rates have very low predictive power over long time periods. Fama found mixed results over different time intervals: for example, he found one-month forward rates have some predictive power to forecast the spot rate one month ahead. Since the forward rate embeds two elements, the expected future spot rate and the risk premium, he hypothesized that this is due to the failure of models to control for this term premium in the forward rates. Unless this risk premium is controlled for, the best use of forward rates may just be as insight into the market's opinion of future spot rates.

19. I was just looking at Bloomberg and noticed that I can earn 3.872% on a one-year bond in the U.K. and can borrow at 2% here in the U.S. Can I make a risk-free profit by doing this?

Not necessarily. The difference in interest rates across countries is primarily due to different expectations of inflation. High interest rates in the U.K. relative to the U.S. indicate that the currency is expected to depreciate relative to the U.S. dollar, eroding the apparent carry profit.

20. If you were trading for a pension fund, would you recommend tax-free munis or corporate bonds? Why?

Corporates. The pension fund is already tax-exempt, so it would be disadvantageous to invest in tax-free munis, which offer lower rates than corporates because of their tax-free status.

21. Why are yields on corporate bonds higher than Treasury bonds of the same maturity?

Because of the risk involved. Treasury bonds are generally considered to be risk free, backed by the full faith and credit of the United States Government.
Corporate bonds involve some risk of default, credit downgrades and so on, so investors demand a higher yield (lower price) to compensate them for the increased risk of the corporate bond.

22. Does treasury stock receive dividends? Is it included in the market capitalization of a company? What happens to a company's ROE if shares are repurchased?

Treasury stock does not receive dividends. It is not included in the market capitalization of a company, since market cap includes only outstanding shares. Retirement of stock through corporate repurchases raises financial ratios such as ROE and EPS, since the number of outstanding shares, used in the denominator of these ratios, decreases.

23. What is LIBOR? Why is it important?

The London InterBank Offered Rate. It is important because LIBOR is the primary reference rate used in the Euromarkets. Even in the U.S., many floating rates are quoted as LIBOR plus or minus a spread. For example, in a swap the floating payer may be quoted as paying LIBOR + 25 bp.

24. What is a defensive stock?

A defensive stock is the stock of a company that is not affected much by downturns in the economy. It may be used as a diversification element in a client portfolio. Defensive stocks typically include stocks of corporations that manufacture consumer essentials, such as food, clothing and pharmaceuticals, which people still need even during recessions. On the other hand, stocks in sectors such as automotive, heavy construction or steel are highly sensitive to economic conditions.

25. A U.S. Government bond is selling in the market at 96.25. What is the price of this bond?

Government bond prices are quoted as a percentage of par in 32nds, so 96.25 means 96 and 25/32 percent of par, which is 96.78% of par, or $967.81 per $1,000 face. For a quick mental approximation, 25/32 is close to 24/32 = 3/4, giving roughly 96.75% of par, or $967.50 per $1,000 face.
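That 32nds conversion is easy to script. A minimal sketch (the function name is just for illustration):

```python
def treasury_dollar_price(handle, ticks, face=1000):
    """Dollar price of a bond quoted in 32nds: a quote of 96-25
    means 96 + 25/32 percent of par value."""
    pct_of_par = handle + ticks / 32
    return pct_of_par / 100 * face

print(treasury_dollar_price(96, 25))  # 967.8125
```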
This is a discount bond, meaning it sells below par.

26. A prisoner was to be executed but, after begging for his life, was given one chance to live. He was given 100 balls, 50 black and 50 white, and told to distribute them between two urns in any way he liked. He would then draw a ball at random from one of the urns, and his life would be spared if the ball were white. Legend has it that he drew a white ball and was released. What possible strategy could he have used to maximize his chances of success?

If he had just distributed the balls evenly among the two urns, his chances would have been 50-50 no matter which urn he drew from. So he put one white ball in urn A and the other 49 white balls in urn B, along with all 50 black balls. If he was handed urn A, he was certain to draw a white ball, and if handed urn B, he had a slightly less than 50-50 (49/99, to be exact) chance of drawing a white ball. His total probability of drawing a white ball was therefore 1/2 x 1 + 1/2 x 49/99 = 74/99, just slightly less than 75%.

27. If you believe that there is a 40% chance of earning a 10% return on a stock, a 50% chance of losing 5% and a 10% chance of losing 20%, what is the expected gain/(loss) on the stock?

The expected return is the probability-weighted average of the possible returns: E[R] = (0.40)(10%) + (0.50)(-5%) + (0.10)(-20%) = 4% - 2.5% - 2% = -0.5%, an expected loss of half a percent.

For one-on-one coaching with a finance interview expert, get Vault's Finance Interview Prep.

Buy-side skills

Your exact duties will vary depending on where you are applying. Nonetheless, your interview will most likely be trying to determine:

- Will you be able to analyze and quickly understand single company information, industry-wide issues, and macro-economic trends?
- Are you good at building ongoing relationships with, and getting information from, sell-side analysts and salespeople, company management, suppliers, industry experts and others from whom you will gain data?
- Do you understand various valuation methods (discounted cash flow techniques, comparable company analysis, etc.)?
- Are you an expert at accounting (or economics, if applicable)?
- Will you be able to accurately project earnings, cash flow statements, balance sheets, and/or overall industry, market and economic trends?
- Can you effectively communicate investment ideas and recommendations to your firm's investment committee and the media? Can you defend those same ideas?
- If you become part of your firm's decision-making process, will you be able to successfully evaluate investment recommendations?
- Are you personally passionate about the market?
- Are you able to understand the risk levels of various investments and balance your firm's exposure to various sectors in order to keep risk levels tolerable?
- Are you assertive, autonomous, probing and innovative?

Sell-side skills

Whether covering stocks, bonds, the entire market or the economy as a whole, your interview will essentially be trying to find out:

- Will you be able to analyze and quickly understand single company information, industry-wide issues, and macro-economic trends?
- Are you good at building ongoing relationships with, and getting information from, company management, suppliers, industry experts and others from whom you will gain data?
- Are you assertive, autonomous, probing and innovative?
- Will you be able to get along with bankers, salespeople, and traders?
- Are you very good at financial modeling, especially in Excel?
- Do you understand various valuation methods (discounted cash flow techniques, comparable analysis, etc.)?
- Are you an expert at accounting (or economics, if applicable)?
- Will you be able to accurately project earnings, cash flow statements, balance sheets, and/or overall industry, market and economic trends?
- Can you effectively communicate investment ideas and recommendations to institutional clients, retail customers, the media, bankers, salespeople and traders, in person, via email and on the phone? Can you defend those same ideas?
- Do you have good presentation and marketing skills? Will you be able to bring in new business to the firm (either new institutional investors or banking clients)?
- Can you produce well-written morning call notes, updates and research reports on companies, industries and/or economics under tight deadlines?
- Are you a stickler for detail and extremely well organized?
- Are you personally passionate about the market?

The following pages include some specific questions asked during past research interviews, along with possible answers.

Interested in a research or asset management career? Get the Vault Career Guide to Investment Management for a detailed look at career paths in portfolio management, investment research, and sales and marketing for investment management firms. The guide covers mutual funds, institutional investors, and high-net-worth clients.

1. Why buy- vs. sell-side (or vice versa)?

There are several good responses to this question, and you should tailor your response so that it is truthful and fits in with your goals. If you are interviewing for both buy- and sell-side positions, you should be honest about this and talk about your interest in uncovering undervalued securities. You should also make certain that your answers mesh with the desired skills mentioned above. If you are going for only buy- or sell-side positions, you should not deride the area you are not interested in. Many of your interviewers will have spent part of their career on both sides of the divide.
You should also not state that you want the buy side because you think the hours are better (even though they generally are), because you don't want to come across as lazy. You also don't want to say that you want the sell side because you want to focus on a particular industry. Most brokerages only place new associates in particular areas if they have expertise (i.e., someone who worked at Disney before business school in the media group, or a medical doctor in health care). Financial analysts are even less likely to get the group they want. Most likely, you will end up wherever there is an opening. So even if you really want biotech, be prepared to cover the automobile industry. (Note: as in sales and trading and in banking, sell-side research hires just out of undergrad are generally called financial analysts, analysts or F.A.s. The next level is associate (usually those with MBAs and/or CFAs), while research analysts are generally those at the assistant vice president (AVP) level or above.)

2. What courses have you taken/will take to prepare you for a career in asset management/research?

Again, discuss any accounting, finance or economics courses you have taken or will take in the following year(s) if it is a summer position. Do not forget to discuss other less obvious courses, like Conflict Resolution or the like, since they too can be relevant. If you have any professional designations (even pending) like a C.P.A., C.F.A., or M.D., certainly mention them.

3. What would you buy? What would you short?

This is a variation of the earlier stock pitch question. If you are interviewing for a position in fixed income, derivatives, or strategy research, you might want to have some idea what sorts of bonds or derivatives analysts find attractive. For example, if you believe that there will be increased tension in the Middle East, you might argue that one should go long volatility in the oil sector.
If you expect inflation to pick up, you might suggest shorting inflation-sensitive bonds (since an increase in inflation generally leads to an increase in interest rates, and thus a decrease in existing bond prices).

4. What do you think about index funds? Do you subscribe to the Random Walk theory?

Those who tout index funds (funds that do not pick stocks but rather mimic a particular index like the S&P 500 or the Dow Jones Eurostoxx 50) and the author of A Random Walk Down Wall Street (Burton Gordon Malkiel) maintain that, on average, most active portfolio managers and sell-side analysts underperform the broader stock market. Many studies have supported this theory (often called the efficient markets theory). Supporters of this view maintain that investors should not pay the extra fees that active mutual fund managers and stockbrokers charge, but should simply buy index funds. You do believe in investors diversifying their holdings, but unless you are interviewing at Vanguard or another index fund firm, you do NOT believe in what the author of Random Walk says or that investors should use index funds. You believe that a good stock (or bond) picker can find mispriced securities and/or exploit the market's occasional inefficiencies. Just think about it: if everyone believed in the efficient markets theory, no one would buy mutual funds, invest in hedge funds, or use the advice of sell-side analysts. Your interviewers would all be out of work. Do not fall into this trap during an interview, no matter what your finance professor told you in class.

5. Stocks historically outperform bonds over the long term. If I am a long-term investor, I don't need any bonds in my portfolio. True or false?

This is false. It is true that, measured both arithmetically and geometrically, stocks in the U.S.
(as measured by the S&P or Dow Jones Industrial indexes) have outperformed the various key bond categories since at least 1926. However, when the broader stock market rises, bonds tend to go down, and vice versa, and there is not perfect covariance between them. You should be familiar with the phrase "No risk, no reward." Higher-returning assets tend to be riskier than lower-returning assets. By holding a certain amount of bonds in a portfolio, one can exploit the lack of perfect covariance to earn the same return with lower risk. Let's take the simple example of a two-security portfolio to illustrate.

First, assume that these securities have a single-period investment horizon, that returns are independent between periods, that there are no transaction costs, and that the assets' returns follow a normal distribution. Let R be the return of a security or portfolio, mu its mean return, and X1 and X2 the portfolio weights of the two securities, so that X1 + X2 = 1. The portfolio's expected return is:

mu_p = X1*mu1 + X2*mu2

The variance of the return on this two-security portfolio is:

sigma_p^2 = X1^2*sigma1^2 + X2^2*sigma2^2 + 2*X1*X2*Cov(R1,R2)

(Note: the variance and standard deviation of the portfolio are NOT a weighted average of the individual securities' variances and standard deviations, since X1^2 + X2^2 <= 1.) Finally, define the correlation rho = Cov(R1,R2)/(sigma1*sigma2), where rho = +1 means perfect positive correlation, rho = 0 means no correlation whatsoever, and rho = -1 means perfect negative correlation. Thus:

sigma_p^2 = X1^2*sigma1^2 + X2^2*sigma2^2 + 2*X1*X2*rho*sigma1*sigma2

Take a simple case where each security can return 0% or 100%, so each has an expected return of 50%:

  Weights                E[R]    Variance
  X1 = 1, X2 = 0         50%     High
  X1 = 0, X2 = 1         50%     High
  X1 = 0.5, X2 = 0.5     50%     Lower (if rho < 1)

In this case, risk is lower even though the mean return is the same in all three cases. If rho < 1, diversification delivers the same expected return with less risk.
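The diversification effect in that table can be confirmed with a short sketch. The inputs are hypothetical: two securities, each with a 50% expected return and a 50% standard deviation, uncorrelated:

```python
def portfolio_mean_var(x1, mu1, mu2, s1, s2, rho):
    """Expected return and variance of a two-security portfolio
    with weights x1 and 1 - x1."""
    x2 = 1 - x1
    mean = x1 * mu1 + x2 * mu2
    var = x1**2 * s1**2 + x2**2 * s2**2 + 2 * x1 * x2 * rho * s1 * s2
    return mean, var

# All-in-one-security vs. a 50/50 split of two uncorrelated securities:
m_one, v_one = portfolio_mean_var(1.0, 0.5, 0.5, 0.5, 0.5, rho=0.0)
m_mix, v_mix = portfolio_mean_var(0.5, 0.5, 0.5, 0.5, 0.5, rho=0.0)
assert m_mix == m_one  # the same expected return...
assert v_mix < v_one   # ...at half the variance when rho = 0
```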
Graphically, the risk-reward trade-off falls into three cases:

rho = +1: There is perfect correlation between assets 1 and 2 and no gain or loss from diversification. The variance collapses to a perfect square, sigma_p^2 = (X1*sigma1 + X2*sigma2)^2, so sigma_p = X1*sigma1 + X2*sigma2 and the risk-return trade-off is a straight line between the two assets.

rho = -1: There is perfect negative correlation. Here sigma_p^2 = (X1*sigma1 - X2*sigma2)^2, so sigma_p = |X1*sigma1 - X2*sigma2|, and risk can be driven all the way to zero by choosing X1 = sigma2/(sigma1 + sigma2). One can achieve higher returns with less risk!

Generally, correlation falls between 0 and 1, so the risk-reward frontier will be curved rather than angular. Still, as long as securities' returns are not perfectly correlated, one can construct a two-security portfolio with a higher return and less risk than one could earn with just one security. This notion can be expanded to a portfolio with dozens, hundreds, or even thousands of securities. Therefore, in theory at least, if one can find appropriately uncorrelated assets (technology stocks and grocery store stocks, or stocks in general and bonds, or bonds and gold, etc.), a portfolio manager can use bonds to lower the risk in his or her portfolio without lowering returns.

6. How would you invest $1 million?

You should be able to weave whatever specific recommendations you have for long or short pitches into an overall investment theme. You should know what you are talking about when you answer this: don't B.S. Ask friends or alumni in the industry for their advice, read investment strategy research reports if you can, or at the very least read investment advice articles in BusinessWeek if you are totally clueless. A typical answer might be something like: I would invest $200,000 in these four specific stocks [give brief versions of your pitches if you have not already], $300,000 in U.S.
small caps, since they tend to outperform large caps in the early stages of a recovery, $200,000 in East Asian equities, since the U.S. recovery should lead to increased exports for the Asian economies, and $300,000 in a mixture of investment-grade and high-yield corporate bonds.

7. What investment philosophy do you subscribe to and why?

There are several main investment strategies:

- Aggressive growth investors want to maximize rapid capital appreciation. This implies a readiness for high risk. It may also imply the use of alternative investment vehicles like private equity, venture capital, derivatives and hedge funds.
- Growth investors seek capital appreciation, but with less risk (and thus lower returns) than aggressive growth investors. This still usually means stocks rather than bonds.
- Income investors concentrate on current and steady income. Such investors usually hold a mix of bonds and preferred stocks. They may also hold dividend-paying stocks and coupon-bearing convertible bonds, usually from blue-chip companies like IBM or G.E.
- Growth-income or balanced investors seek a mix of growth and income.
- Conservative or defensive investors seek capital preservation at all costs. This means investing only in high-grade bonds and the like.

There are also a few main investment styles, including:

- Value investing, which emphasizes securities (mainly stocks) whose market price is far below whatever value analysts have computed for the security, whether based on DCF, P/E, or what have you.
- Contrarian investing, which means investing contrary to the current market direction or beliefs. This is an exaggerated version of value investing, since contrarians seek out-of-favor investments with the potential for large gains through turnarounds.
- Momentum investing or market timing, which uses economic figures or technical analysis rather than fundamental analysis to make investment decisions. The idea is to be invested when prices are rising and to sell just as the market begins falling; this style generally uses technical analysis to time an investment in a particular stock or bond, with the goal of investing when the security's price movement can be anticipated.

Additional common terms:

- Fundamental analysis means valuing a security using DCF, P/E, talks with management, examination of the income statement and balance sheet, or other fundamental techniques.
- Technical analysis means basing investment decisions on price charts, trading volume, resistance levels, and the like. Technical analysts do not care about management, the balance sheet, earnings, or other fundamentals.
- Indexing or passive investing means not picking individual securities but rather mimicking a particular index like the S&P 500 or the Dow Jones Eurostoxx 50. Adherents maintain that, on average, most active portfolio managers and sell-side analysts underperform the broader market; many studies have supported this theory (often called the efficient markets theory).

There is no one correct answer to this question in a buy-side interview. You should make sure you know which styles and philosophies the firm employs before the interview, and make sure your answer is in line with what the firm espouses. On the sell side, most analysts would consider themselves fundamental analysts, but are agnostic when it comes to growth versus income. Few believe in technical analysis (since it would mean that what they do is meaningless), though many sell-side firms have technical analysts, so be careful not to downplay any one technique, style or philosophy.

8. What would you do if a stock you just recommended lost 10% to 15% of its value?

Situations like this happen quite frequently on the buy and sell side alike.
You should prepare an answer to a question like this one that demonstrates that you are level-headed, analytical, articulate, and able to learn from any mistakes you may have made.

9. Are you a top-down or bottom-up investor?

Top-down investors evaluate the economy as a whole, determine which sectors they believe will outperform the broader market, and invest in those. For example, if the economy is entering a recession, such investors may seek recession-proof stocks like Kraft or Philip Morris. If oil prices are expected to rise, they might seek energy sector investments. Bottom-up investors seek investments that are compelling values based on fundamental analysis (DCF, relative valuation or otherwise), regardless of overall economic conditions. Many portfolio managers are a combination of both. If you are interviewing at a buy-side firm, you should research the firm's philosophy and match your answer accordingly.

10. In 2002 the S&P is trading at a P/E multiple much higher than it was in the 1970s or even during the booming 1980s. Does this mean that stocks as a whole are currently overvalued?

It does not necessarily mean that they are overvalued. Mathematically, using the constant-growth dividend discount model P0 = D1/(r - g) with D1 = E1(1 - b), where b is the earnings retention rate, r the discount rate and g the growth rate, the P/E ratio can be broken down as:

P0/E1 = (1 - b)/(r - g)

Higher growth means higher P/E ratios, and, all else being equal, lower interest rates mean lower discount rates and thus also higher P/E ratios. One could argue about whether corporate earnings were growing faster than in the 1970s and 1980s. Interest rates in 2002 were clearly far lower, however, which allows all stocks to trade at higher multiples.

11. What kinds of things make a stock extremely volatile in the short term?
Uncertainty about the economy or the sector that a stock is in; varying news from competitors (for example, if Ford says business is weak but GM says it is strong, DaimlerChrysler's stock may move about wildly until it discusses its own outlook); the firm may be in a highly cyclical sector (like semiconductors or oil); lots of momentum players in the stock (investors who bet that the direction of a stock will continue, rather than those who perform fundamental analysis); the company may be in a newer, less proven and/or high-growth industry (like biotech); and legislative uncertainty (will the government raise or lower tariffs for steel makers?).

12. How do you calculate the return on a stock?

The return on a stock is the percentage gain over the investment horizon, including any dividends received. If P0 is the price at the beginning of the investment horizon, P1 is the price at the end of the horizon, and D is the dividends paid, then:

r = (P1 + D - P0) / P0

13. What is an option?

An option is a contract between buyer and seller that provides the buyer the right, but not the obligation, to enter into a transaction at some future date, while the seller is obliged to honor the transaction. Options are derivatives whose value depends on the value of the underlying. For instance, one can buy a call option on a specific stock. The option will be defined by the exercise price (strike price) and the time to expiry. As an example, you buy a June call option on IBM stock with a strike of $80 when IBM is trading at $75. This gives you the option to purchase IBM at a price equal to the strike price, before or at the expiration date depending on whether the option is American or European style.

14. What is put-call parity? How is it used?

Put-call parity provides a relationship between call and put prices on a stock that should hold in equilibrium.
It is based on a no-arbitrage argument, and asserts that

p + S0 = c + Xe^(-rT)

where p and c are the prices of the put and call, respectively, S0 the stock price, X the strike price, r the risk-free rate, and T the time to expiry of the options. It is used to find the price of a put having strike X and time to expiry T if the price of the call with the same parameters is known, and vice versa. It can also be used to determine whether arbitrage opportunities exist: given observed put and call prices, the stock price, the exercise price and the time to expiry, one can use put-call parity to determine an implied interest rate. Comparing this to the available rate will allow a decision on whether the market prices permit an arbitrage opportunity.

15. Can you describe a situation when it would be optimal to exercise an American call option before the expiry date? Does it matter whether the stock pays dividends or not?

What do you think: is it ever optimal to exercise an option early? Purchasing an American option rather than a European option confers the right to do this, but it comes at a higher premium cost. So there must be some point at which early exercise is optimal, or no one would buy American options. There can be only one possible benefit: the ability to receive cash flows earlier than the exercise date. However, if the option is being held as part of a hedging strategy, one gives up the insurance provided by the option when it is exercised early. Consider that you hold a large block of stock and a put to hedge against price declines. If the put is sold, you are then exposed to adverse price movements below the strike price of the put. The gain you realize by early exercise must at least offset the loss of the insurance.

16. What are the important factors affecting the value of an option?
The important factors include:

The moneyness of the option: how close the underlying is to the strike price.

Time to expiration: the longer the time to expiry, the higher the probability that the option will finish in the money.

Volatility: this increases the value of the option for the same reason as above.

Other factors include the risk-free rate and dividends paid (or the foreign interest rate received, if valuing a currency option).

17. If the price of a stock increases by $1, how should the price of a call option change? What about a put option?

It depends on delta (denoted Δ, delta is the change in value of an option for a change in value of the underlying asset). If the stock is deep in the money, Δ ≈ 1 and there is roughly a 1-for-1 relationship between c and S. But deep out of the money, it does not matter if the stock price goes up by $1; the call option will still be nearly valueless. Only when we get sufficiently close to X, so that N(d1) = Δ > 0, will changes in the underlying affect the value of the option. The same argument holds for a put, but the put has value only when S < X.

18. What is a warrant? Do warrants affect a firm's financial ratios such as ROE?

A warrant is a security similar to a call option on a stock, except a warrant usually has a much longer time to expiry. Warrants may often be attached to issues of preferred stock or bonds in order to make the issue more attractive to investors, as they offer the opportunity for some participation in stock appreciation. When the warrant is exercised, the owner pays the stated strike price in exchange for shares of stock. Thus, warrants result in the issue of new shares of common stock and are dilutive. All other things equal, measures such as ROE and EPS should decrease as the number of shares increases.

19.
If Microsoft announces a new issue of 2,000,000 units at $120, with each unit consisting of one share of common, one share of preferred and a warrant for 1/4 of a share of common, how many new shares of common will be issued, assuming that the offering is successful and all warrants are exercised?

Each warrant entitles the owner to redeem it for 1/4 of a share, so since 2,000,000 warrants were issued, there will be 2,000,000 × (1/4) = 500,000 new shares of common, in addition to the 2,000,000 new shares of common issued with the units, for a total of 2,500,000 new shares.

20. What would be a good instrument to use to hedge a portfolio of preferred stock?

Since preferred stock is similar to a bond that never matures (a perpetual bond), the best hedging instrument would be a long-maturity, risk-free instrument such as a T-bond option based on long-term Treasuries.

21. If you are buying corporate bonds, which is more speculative: A, Aa, Baa or B?

B is the most speculative of these Moody's ratings.

22. If a client purchases a 6%, $1,000 bond selling at a yield to maturity of 7%, what is the amount of the semi-annual interest payment?

Yield is unimportant here. What's important is the coupon payment: 6% of $1,000. So each year the payout is $60, or $30 every six months (semi-annually). Don't get confused if the interviewer adds extra information to the question.

23. Suppose you have an investment earning 1% per month. How do you convert this to an annual rate?

This question is simple, but in an interview, where you may be nervous, such simple questions can really trip you up. Do not assume that you won't be asked questions of this type just because they appear trivial. It is a good idea to talk through your thought process out loud and make use of a pad and paper if available.
This talk might go something like the following: Assuming the client earns a flat monthly rate r over the year, an investment of $P will be worth $P plus interest on $P, or $P(1+r), after one month. This amount is reinvested at the beginning of the second month, so you will have $P(1+r)(1+r) at the end of that month. This process continues so that by the end of the year, you have a total of $P(1+r)^12. The annualized rate earned is then (1+r)^12 - 1; with r = 1% per month, this is about 12.68%.

24. If you earn 6% a year using simple compounding, how would you calculate how much you would earn in a 90-day period?

You would just adjust for the earning period. If you earn 6% using simple compounding, assuming a 30/360 day-count convention (make sure to state your day-count assumption, as day counts are very important in bond questions), you would earn 6% × 90/360 = 1.5% over a one-quarter period.

25. You have a client who wishes to be invested in a bond portfolio. Would you recommend short- or long-term bonds for this client, and why?

It depends on what you expect the yield curve to do. Is it upward sloping now and expected to flatten? Turn it around on the interviewer (though this can be dangerous, as they can then turn it back on you) by asking, "What do you think interest rates are going to do?" In general, remember that the price of a bond moves inversely to its yield: if interest rates are expected to rise, the price of a bond should fall. Long-term bonds are usually much more sensitive to interest rate movements than short-term bonds, so you would tend to stay on the long end of the curve to get the maximum profit from rate movements (and also, of course, the maximum exposure/potential loss). So, if the client wanted to profit from a rise in interest rates, you might short the long bonds.
If rates are expected to decline, you could buy the long bonds.

26. A client expects the market to move significantly but wants to be hedged against either direction. What strategy would you recommend? Explain.

A straddle. This way, the client profits no matter which way the market moves. Note that if the market does not move, or moves only slightly, the strategy loses money. A straddle consists of the purchase of both a call and a put having the same strike price and expiry date. The upfront costs, apart from transaction costs, are the premiums paid for the call and the put.

27. A client purchased a 10-year 5% par bond that will yield 6% if called at the first call date in two years. If the client holds the bond to maturity, what will the yield on the bond be? What if he sells the bond prior to maturity?

If he holds the bond to maturity, the client will receive a yield equal to the stated coupon rate of 5%: he paid par, he will receive par, and he earns a 5% return in the form of coupon payments. He would only receive a different yield if the bond is surrendered prior to maturity. If the bond is called after two years, he would receive a yield of 6%; otherwise, the yield would have to be calculated based on the price of the bond when sold.

28. A corporate treasurer is borrowing at LIBOR to fund automobile loans. She wants to hedge against anticipated rises in the short rate. What hedging strategy would you recommend?

Enter an interest rate swap as the fixed-rate payer/LIBOR receiver, with swap dates arranged to coincide with the borrowing dates.

29. What factors influence the price of a bond?

The main factors are the perceived risk of the bond, its yield and the issuer's cash flows.

30. If a fixed income client is interested in capital appreciation, in what type of interest rate environment should he buy bonds?
For capital appreciation you need the bond price to rise. Since bond prices move inversely with yields, you need a falling interest rate environment.

31. A client in the 28% tax bracket has a choice between a tax-free municipal bond yielding 7% and a corporate bond yielding 8.5%. Which should he choose? What would the yield on the corporate bond have to be in order to be equivalent to the tax-free bond?

We have to compare the instruments on the same basis. Since the muni bond is tax-free, the after-tax yield of the corporate bond is the comparator. Take the corporate bond first and consider a one-year period for simplicity. Suppose the client invested $1,000 and earned 8.5%. Of this, 28% will be taxed, so his gain is (1 - t) × y × $1,000 = (1 - 0.28) × 0.085 × $1,000 = $61.20. This is equivalent to a tax-free yield of 6.12%. Since the yield of the tax-free bond is greater than the after-tax yield of the corporate bond, he should choose the muni. To determine the yield that gives parity between the corporate bond and the muni, set the after-tax yield on the corporate bond equal to the tax-free rate: (1 - t) × y_corp = y_tax-free, so y_corp = y_tax-free / (1 - t). For this example, the yield on the corporate bond would have to be 0.07 / (1 - 0.28) = 9.722% in order to be equivalent to the tax-free bond. If corporate bond yields are lower than 9.722%, choose the muni; otherwise, choose the corporate bond, since the higher yield will offset the cost of the tax.

32. A convertible bond is selling at $1,200. It is convertible into 80 shares of stock. What would the stock price have to be for the convertible bond to be at parity with the stock?

At parity, the value of the shares received on conversion must equal the value of the bond, so the stock position must also be worth $1,200 in total. The stock price would then be $1,200 / 80 shares = $15 per share.

33. The U.S. Treasury sells bonds at 1-year, 2-year, 5-year and 10-year maturities. You need the yield on a 7-year Treasury bond.
How do you get it?

You would interpolate between known values. To get the 7-year yield, interpolate between the 5- and 10-year yields. Linear interpolation would probably be sufficient, but splines and other smoothing techniques are sometimes used.

34. What are the factors affecting refinancing and prepayments of mortgages?

Not only the current level of interest rates, but also the path of rates: the level relative to prior levels and to anticipated levels. General economic conditions also matter: people tend to refinance when interest rates fall. However, there is a burn-out effect: if people have already refinanced, and rates then rise and fall again, the pool of people refinancing may be diminished because those eligible to refinance have already done so. Unpredictable events such as fires, divorces, marriages, relocations, winning the lottery and so on may also prompt people to sell homes and buy new ones.

35. How can you reduce the risk of a portfolio?

You add instruments for diversification. Ideally these instruments are not well correlated with each other, so that overall they reduce risk. For equities, theoretically, you need about 30 different stocks for efficient diversification. There are many forms of risk: credit risk, liquidity risk, country risk, market risk, firm-specific risk and so on. You can also include hedging instruments; for example, if you own a particular equity, you could buy put options on it.

Accretive merger: A merger in which the acquiring company's earnings per share increase.

Call option: An option that gives the holder the right to purchase an asset for a specified price on or before a specified expiration date.

Capital Asset Pricing Model (CAPM): A model used to calculate the discount rate of a company's cash flows.

Commercial bank: A bank that lends, rather than raises, money.
For example, if a company wants $30 million to open a new production plant, it can approach a commercial bank like Bank of America or Citibank for a loan. (Increasingly, commercial banks are also providing investment banking services to clients.)

Commercial paper: Short-term corporate debt, typically maturing in nine months or less.

Commodities: Assets (usually agricultural products or metals) that are generally interchangeable with one another and therefore share a common price. For example, corn, wheat, and rubber generally trade at one price on commodity markets worldwide.

Common stock: Also called common equity, common stock represents an ownership interest in a company (as opposed to preferred stock, see below). The vast majority of stock traded in the markets today is common, as common stock enables investors to vote on company matters. An individual with 51 percent or more of the shares controls a company's decisions and can appoint anyone he or she wishes to the board of directors or to the management team.

Comparable transactions (comps): A method of valuing a company for a merger or acquisition that involves studying similar transactions.

Convertible preferred stock: A relatively uncommon type of equity issued by a company, often when it cannot successfully sell either straight common stock or straight debt. Preferred stock pays a dividend, similar to how a bond pays coupon payments, but ultimately converts to common stock after a period of time. It is essentially a mix of debt and equity, most often used as a means for a risky company to obtain capital when neither debt nor equity works.

Capital market equilibrium: The principle that there should be equilibrium in the global interest rate markets.

Convertible bonds: Bonds that can be converted into a specified number of shares of stock.

Cost of Goods Sold: The direct costs of producing merchandise.
Includes costs of labor, equipment, and materials to create the finished product, for example.

Coupon payments: The payments of interest that the bond issuer makes to the bondholder.

Credit ratings: The ratings given to bonds by credit agencies. These ratings indicate the risk of default.

Currency appreciation: When a currency's value is rising relative to other currencies.

Currency depreciation: When a currency's value is falling relative to other currencies.

Dividend: A payment by a company to shareholders of its stock, usually as a way to distribute some or all of the profits to shareholders.

EBIAT: Earnings Before Interest After Taxes. Used to approximate earnings for the purposes of creating free cash flow for a discounted cash flow.

EBIT: Earnings Before Interest and Taxes.

EBITDA: Earnings Before Interest, Taxes, Depreciation and Amortization.

Enterprise Value: The levered value of the company: the equity value plus the market value of debt.

Equity: In short, stock. Equity means ownership in a company that is usually represented by stock.

The Fed: The Federal Reserve Board, which gently (or sometimes roughly) manages the country's economy by setting interest rates.

Fixed income: Bonds and other securities that earn a fixed rate of return. Bonds are typically issued by governments, corporations and municipalities.

Float: The number of shares available for trade in the market times the price. Generally speaking, the bigger the float, the greater the stock's liquidity.

Floating rate: An interest rate that is benchmarked to other rates (such as the rate paid on U.S. Treasuries), allowing the interest rate to change as market conditions change.

Forward contract: A contract that calls for future delivery of an asset at an agreed-upon price.

Forward exchange rate: The price of currencies at which they can be bought and sold for future delivery.

Forward rates (for bonds): The agreed-upon interest.
Glass-Steagall Act: Part of the legislation passed during the Depression (Glass-Steagall was passed in 1933) designed to help prevent future bank failures; the establishment of the F.D.I.C. was also part of this movement. The Glass-Steagall Act split America's investment banking (issuing and trading securities) operations from commercial banking (lending). For example, J.P. Morgan was forced to spin off its securities unit as Morgan Stanley. Since the late 1980s, the Federal Reserve has steadily weakened the act, allowing commercial banks such as NationsBank and Bank of America to buy investment banks like Montgomery Securities and Robertson Stephens.

Leveraged buyout (LBO): The buyout of a company with borrowed money, often using that company's own assets as collateral.

Municipal bonds (munis): Bonds issued by state and local governments. Their main attraction is that investors earn interest payments without having to pay federal taxes. Sometimes investors are exempt from state and local taxes, too. Consequently, municipalities can pay lower interest rates on muni bonds than on other bonds of similar risk.

Net present value (NPV): The present value of a series of cash flows generated by an investment, minus the initial investment. NPV is calculated because of the important concept that money today is worth more than the same money tomorrow.

Non-convertible preferred stock: Sometimes companies issue non-convertible preferred stock, which remains outstanding in perpetuity and trades like a stock. Utilities represent the best example of non-convertible preferred stock issuers.

Par value: The total amount a bond issuer will commit to pay back when the bond expires.

P/E ratio: The price-to-earnings ratio. This is the ratio of a company's stock price to its earnings per share.
The higher the P/E ratio, the more expensive a stock is (and also the faster investors believe the company will grow). Stocks in fast-growing industries tend to have higher P/E ratios.

Selling, General & Administrative Expense (SG&A): Costs not directly involved in the production of revenues. SG&A is subtracted from Gross Profit to get EBIT.

Spot exchange rate: The price of currencies for immediate delivery.

Statement of Cash Flows: One of the four basic financial statements, the Statement of Cash Flows presents a detailed summary of all of the cash inflows and outflows during a specified period.
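The NPV entry in the glossary above can be illustrated with a short calculation. This is a minimal sketch; the cash flows, discount rate and function name are invented for illustration and do not come from the guide:

```python
# Net present value: discount each future cash flow back to today,
# then subtract the initial investment.
def npv(rate, initial_investment, cash_flows):
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return pv - initial_investment

# Hypothetical project: invest 1,000 today, receive 400 a year for 3 years,
# discounted at 10% per year.
result = npv(0.10, 1000, [400, 400, 400])
print(round(result, 2))  # -5.26: slightly negative, so the project destroys value
```

Note how the same nominal cash flows (1,200 in total) produce a negative NPV once discounting is applied, which is exactly the "money today is worth more than money tomorrow" point the glossary makes.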
Originally posted by Marc Peabody:
For instance, a friend of mine was asked in an interview to use Java to find the intersection of two lists. This requires some thought using Java, but I was able to solve and test this problem in a matter of seconds using Groovy:

def a = [1,3,5,7,9,11]
def b = [5,4,7,2,2,3,1]
return a-(a-b)

Result: [1, 3, 5, 7]

Originally posted by Himanshu Gupta:
I want to know whether Groovy can be used in Java classes themselves, or whether it is something like ANT. Is it integrable with Java code? Can we use Groovy in between Java code in methods?

Originally posted by Prad Dip:
I have an issue with Groovy: sometimes it does not spit out the proper exception messages which one would normally see when running Java programs.
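For comparison, here is one way the "intersection of two lists" interview question from the first post could be answered in plain Java. This is a sketch, not code from the thread; it uses `List.retainAll`, which keeps only the elements also present in the other collection:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Intersection {
    static List<Integer> intersect(List<Integer> a, List<Integer> b) {
        // Copy a so the original list is untouched, then keep only the
        // elements that also appear in b (preserving a's order).
        List<Integer> result = new ArrayList<>(a);
        result.retainAll(b);
        return result;
    }

    public static void main(String[] args) {
        List<Integer> a = Arrays.asList(1, 3, 5, 7, 9, 11);
        List<Integer> b = Arrays.asList(5, 4, 7, 2, 2, 3, 1);
        System.out.println(intersect(a, b)); // [1, 3, 5, 7]
    }
}
```

Groovy's `a-(a-b)` is doing essentially the same thing: `a-b` removes everything in b from a, and subtracting that remainder from a leaves the common elements.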
Practice Questions

Problem 14.2. The volatility of a stock price is 30% per annum. What is the standard deviation of the percentage price change in one trading day?

The standard deviation of the percentage price change in time Δt is σ√Δt, where σ is the volatility. In this problem σ = 0.3 and, assuming 252 trading days in one year, Δt = 1/252 ≈ 0.004, so that σ√Δt = 0.3√0.004 = 0.019, or 1.9%.

Problem 14.7. A stock price is currently $40. Assume that the expected return from the stock is 15% and its volatility is 25%. What is the probability distribution for the rate of return (with continuous compounding) earned over a two-year period?

In this case μ = 0.15 and σ = 0.25. From equation (14.7), the probability distribution for the rate of return over a two-year period with continuous compounding is

φ(μ - σ²/2, σ²/T) = φ(0.15 - 0.25²/2, 0.25²/2)

i.e., φ(0.11875, 0.03125). The expected value of the return is 11.875% per annum and the standard deviation is √0.03125 = 17.7% per annum.

Problem 14.8. A stock price has an expected return of 16% and a volatility of 35%. The current price is $38.

a) What is the probability that a European call option on the stock with an exercise price of $40 and a maturity date in six months will be exercised?
b) What is the probability that a European put option on the stock with the same exercise price and maturity will be exercised?

a) The required probability is the probability of the stock price being above $40 in six months' time. Suppose that the stock price in six months is ST. Then

ln ST ~ φ(ln 38 + (0.16 - 0.35²/2) × 0.5, 0.35² × 0.5)

i.e., ln ST ~ φ(3.687, 0.247²)

Since ln 40 = 3.689, the required probability is

1 - N((3.689 - 3.687)/0.247) = 1 - N(0.008)

From normal distribution tables, N(0.008) = 0.5032, so the required probability is 0.4968.

b) In this case the required probability is the probability of the stock price being less than $40 in six months' time. It is 1 - 0.4968 = 0.5032.

Problem 14.13. What is the price of a European call option on a non-dividend-paying stock when the stock price is $52, the strike price is $50, the risk-free interest rate is 12% per annum, the volatility is 30% per annum, and the time to maturity is three months?

In this case S0 = 52, K = 50, r = 0.12, σ = 0.30 and T = 0.25.
d1 = [ln(52/50) + (0.12 + 0.30²/2) × 0.25] / (0.30√0.25) = 0.5365

d2 = d1 - 0.30√0.25 = 0.3865

The price of the European call is

52 N(0.5365) - 50 e^(-0.12 × 0.25) N(0.3865) = 52 × 0.7042 - 50 e^(-0.03) × 0.6504 = 5.06

or $5.06.
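The arithmetic in Problem 14.13 can be double-checked with a short script. This is a sketch of the standard Black-Scholes call formula (not code from the text); it uses only the Python standard library:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, K, r, sigma, T):
    # Black-Scholes price of a European call on a non-dividend-paying stock.
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf  # standard normal CDF
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Problem 14.13: S0 = 52, K = 50, r = 12%, sigma = 30%, T = 0.25 years.
price = bs_call(52, 50, 0.12, 0.30, 0.25)
print(round(price, 2))  # 5.06
```

The intermediate values d1 ≈ 0.5365 and d2 ≈ 0.3865 match the worked solution above.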
3 Sep 2006 22:03

%cstring_output_allocate_size and perl

Rene Rebe <rene <at> exactcode.de> 2006-09-03 20:03:35 GMT

Hi all,

while using SWIG to bridge a C++ (imaging) library with e.g. Perl I ran into a problem with %cstring_output_allocate_size. Among the functions I export is one to store the encoded image in system memory. According to the SWIG documentation, %cstring_output_allocate_size should be what I need: "This macro is used to return strings that are allocated within the program and ... the returned string may contain binary data." However, in my case the Perl scalar I get is terminated at the first embedded binary 0. I use Perl-5.8.8 and SWIG-1.3.29.

Fragments of my code so far (simplified for the list):

#ifdef SWIG
%cstring_output_allocate_size( char ** s, int *slen, free(*$1))
#endif

void encodeImage (char **s, int *slen, Image* image, const char* codec, int quality, const char* compression);

...

void encodeImage (char **s, int *slen, Image* image, const char* codec, int quality, const char* compression)
{
  // just for testing ...
  std::ostringstream stream (""); // empty string to start with
  stream << '1' << '2' << '\0' << '3' << '4';
  stream.flush();
  std::cerr << "c++> size: " << stream.str().size() << std::endl;

  char* payload = (char*) malloc (stream.str().size());
  memcpy (payload, stream.str().c_str(), stream.str().size());

  *s = payload;
  *slen = stream.str().size();
}

and when I call it from Perl:

$image_bits = ExactImage::encodeImage ($image, "jpeg", 80, "");
print "perl> size: " . length($image_bits) . "\n";

I get:

c++> size: 5
perl> size: 2

When I look at the code generated by SWIG I noticed:

if (argvi >= items) EXTEND(sp,1);
ST(argvi) = SWIG_FromCharPtrAndSize(*arg1,*arg2);
argvi++;

and wonder if that function is the right thing to call there, as it only sets the pointer and no size in the target object. Can someone confirm I hit a SWIG bug to be fixed, or is there some other function that I should use instead?
Thanks in advance,

-- René Rebe - ExactCODE - Berlin (Europe / Germany) | +49 (0)30 / 255 897 45
POSIX_OPENPT(3P) POSIX Programmer's Manual POSIX_OPENPT(3P)

PROLOG
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

NAME
posix_openpt — open a pseudo-terminal device

SYNOPSIS
#include <stdlib.h>
#include <fcntl.h>

int posix_openpt(int oflag);

DESCRIPTION
The posix_openpt() function shall establish a connection between a master device for a pseudo-terminal and a file descriptor. The file descriptor shall be allocated as described in Section 2.14, File Descriptor Allocation, and can be used by other I/O functions that refer to that pseudo-terminal.

RETURN VALUE
Upon successful completion, the posix_openpt() function shall open a file descriptor for a master pseudo-terminal device and return a non-negative integer representing the file descriptor. Otherwise, -1 shall be returned and errno set to indicate the error.

ERRORS
The posix_openpt() function shall fail if:

EMFILE All file descriptors available to the process are currently open.

ENFILE The maximum allowable number of files is currently open in the system.

The posix_openpt() function may fail if:

EINVAL The value of oflag is not valid.

EAGAIN Out of pseudo-terminal resources.

ENOSR  Out of STREAMS resources.

The following sections are informative.

SEE ALSO
Section 2.14, File Descriptor Allocation, grantpt(3p), open(3p), ptsname(3p), unlockpt(3p)

The Base Definitions volume of POSIX.1‐2017, fcntl.h(0p), stdlib.h(0p)

POSIX_OPENPT(3P)

Pages that refer to this page: stdlib.h(0p), grantpt(3p), ptsname(3p), unlockpt(3p)
Generate some random data to work with

First, we'll generate some random numeric data to work with in the form of a Numpy array. (Because the data is random, the outputs below will differ from run to run unless you seed the generator.)

import numpy as np
data = np.random.randint(100, size=(3, 5))

Calculate the mean

We can use Numpy's np.mean() function to calculate the mean of the values in the data array:

mean = np.mean(data)
print("The mean value of the dataset is", mean)

Out: The mean value of the dataset is 61.333333333333336

Calculate the median

Numpy also has an np.median() function, which is deployed like this:

median = np.median(data)
print("The median value of the dataset is", median)

Out: The median value of the dataset is 80.0

Calculate the mode

Numpy doesn't have a built-in function to calculate the modal value within a range of values, so we use the stats module from the scipy package. Note that by default stats.mode() works along axis 0, so it returns the modal value of each column:

from scipy import stats
mode = stats.mode(data)
print("The mode is {}".format(mode[0]))

Out: The mode is [[55 6 56 35 7]]
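As a scipy-free cross-check of the column-wise mode, `collections.Counter` can do the same job. The data array here is a small fixed example invented for repeatability, not the random data above:

```python
from collections import Counter
import numpy as np

# Fixed data so the result is repeatable (invented for this example).
data = np.array([[55,  6, 56, 35, 7],
                 [55,  6, 99, 35, 7],
                 [12,  6, 56, 10, 7]])

# For each column, Counter.most_common(1) gives the most frequent value.
col_modes = [int(Counter(col).most_common(1)[0][0]) for col in data.T]
print(col_modes)  # [55, 6, 56, 35, 7]
```

This mirrors scipy's default behavior of taking the mode along axis 0 (per column); for ties, Counter returns whichever tied value it encountered first, whereas scipy returns the smallest.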
StyleBook to upload SSL certificate and certificate key files to Citrix ADM

When creating a StyleBook configuration that uses the SSL protocol, you must upload the SSL certificate files and certificate key files required by the StyleBook parameters. StyleBooks allow you to upload the SSL files and key files directly from your local system by using the Citrix ADM GUI. You can also use Citrix ADM APIs to upload certificate files and key files that are already managed by Citrix ADM.

StyleBook configuration

This document assists you in creating your own StyleBook - Load Balancing Virtual Server (SSL) - with components to upload SSL certificates and key files. The StyleBook provided here as an example creates a basic load balancing virtual server configuration on the selected Citrix ADC instance. The configuration uses the SSL protocol. To create a configuration using this StyleBook, you must provide the name and IP address of the virtual server, select the load balancing method parameters, and upload the certificate file and the certificate key file for the virtual server, or use a certificate file and certificate key file that are already present in Citrix ADM. These are specified in the "parameters" section of the StyleBook.

Two components are then created in the components section of the StyleBook. The "my-lbvserver-comp" component is of type ns::lbvserver, where:

- "ns" is the prefix that refers to the built-in namespace netscaler.nitro.config and version 10.5 that you specified in the import-stylebooks section.
- "lbvserver" is a built-in StyleBook in this namespace. It corresponds to the Citrix ADC NITRO load balancing virtual server resource of the same name.

The second component, "lbvserver-certificate-comp", is of type stlb::vserver-certs-binds. The prefix "stlb" refers to the namespace "com.citrix.adc.stylebooks" and version 1.0 that is specified in the import-stylebooks section of the StyleBook.
If the "com.citrix.adc.stylebooks" namespace can be thought of as a folder, "vserver-certs-binds" is another StyleBook (or a file) in that folder. StyleBooks in the "com.citrix.adc.stylebooks" namespace are shipped as part of Citrix ADM. The "vserver-certs-binds" StyleBook, used by user-defined StyleBooks, allows you to easily configure certificates by uploading the certificate and key files to the target Citrix ADC instance, and by binding the certificate and key files to the appropriate virtual servers. The properties for this component are the name of the LB virtual server and the names of the SSL certificates that you provide while creating the configpack.

When you use the API to create a configuration from such a StyleBook, use just the file names (not the full file paths). These files are expected to already be available in the certificate and key file folders on Citrix ADM. The uploaded SSL certificate files are stored on Citrix ADM in the /var/mps/tenants/…/ns_ssl_certs directory, and the SSL certificate key files in the /var/mps/tenants/…/ns_ssl_keys directory.

Creating configurations to upload SSL files

The following procedure creates a basic load balancing virtual server configuration on a selected Citrix ADC instance, using the SSL protocol, from the StyleBook specified above. You can use this procedure to upload the SSL certificate files and the certificate key files to Citrix ADM.

To create a configuration for uploading files:

1. In Citrix ADM, navigate to Applications > Configuration > StyleBooks. The StyleBooks page displays all the StyleBooks available in your Citrix ADM.

2. Scroll down and select Load Balancing Virtual Server (SSL), or type Load Balancing Virtual Server (SSL) in the search field and press the Enter key.

3. Click the Create Configuration link in the StyleBook panel.
The StyleBook parameters appear as a user-interface page that allows you to enter values for all the parameters defined in this StyleBook.

- Enter the name of the load balancer and the virtual IP address in the basic load balancer settings section.
- In the SSL Certificates Settings section, select the respective files from your local storage folder. Alternatively, you can select files already present on the Citrix ADM itself.
- Select the target Citrix ADC instance on which the configuration needs to be created, and click Create.

Note: You can also click the refresh icon to add recently discovered Citrix ADC instances in Citrix ADM to the list of available instances in this window.

Note: In Citrix ADM, the following default StyleBooks, which are shipped as part of Citrix ADM, enable you to create SSL support by uploading the SSL certificates and keys:

- HTTP/SSL LoadBalancing StyleBook (lb)
- HTTP/SSL LoadBalancing (with Monitors) StyleBook (lb-mon)
- HTTP/SSL Content Switched Application with Monitors (cs-lb-mon)
- Sample Application StyleBook using CS, LB and SSL features (sample-cs-app)

You can also create your own StyleBooks that make use of SSL certificates in the same way as described in the StyleBook above.

Build your StyleBook

The file lb-vserver-ssl.yaml begins as follows:

name: lb-vserver-ssl
description: "This stylebook defines a load balancing virtual server configuration."
display-name: "Load Balancing Virtual Server (SSL)"
namespace: com.example.ssl.stylebooks
schema-version: "1.0"
version: "0.1"
import-stylebooks:
  -
    namespace: netscaler.nitro.config
    prefix: ns
    version: "10.5"
  -
    namespace: com.citrix.adc.stylebooks
    prefix: stlb
    version: "1.0"

Using the Citrix ADM API to create a configuration pack

You can also use the Citrix ADM API to create a configpack that uploads the certificate and key files to the selected Citrix ADC instance. For more information on how to use the APIs, see How to Use API to Create Configurations to Upload Cert and Key Files.
Viewing the objects defined on the Citrix ADC instance

After the StyleBook configuration pack (configpack) is created on Citrix ADM, click View objects created to display all the Citrix ADC objects created on the target Citrix ADC instance.
Procedural Level Generation in Games Tutorial: Part 1

A tutorial on procedural level generation using the Drunkard Walk algorithm.

Note from Ray: This is a brand new Sprite Kit tutorial released as part of the iOS 7 Feast. Enjoy!

Most games you play have carefully designed levels that always remain the same. Experienced players know exactly what happens at any given time, when to jump and which button to press. While this is not necessarily a bad thing, it does to some extent reduce the game's lifespan with a given player. Why play the same level over and over again?

One way to increase your game's replay value is to let the game generate its content programmatically – also known as adding procedurally generated content. In this tutorial, you will learn to create tiled, dungeon-like levels using an algorithm called the Drunkard Walk. You will also create a reusable Map class with several properties for controlling a level's generation.

This tutorial uses Sprite Kit, a framework introduced with iOS 7, so you will also need Xcode 5. If you are not already familiar with Sprite Kit, I recommend you read the Sprite Kit Tutorial for Beginners on this site. For readers who are not yet ready to switch to Sprite Kit, fear not: you can easily rewrite the code in this tutorial to use Cocos2D.

Getting Started

Before getting started, let's clear up one possible misconception: procedural should not be confused with random. Random means that you have little control over what happens, which should not be the case in game development. Even in procedurally generated levels, your player should be able to reach the exit. What would be the fun of playing an endless runner like Canabalt if you came to a gap between buildings that was impossible to jump? Or playing a platformer where the exit is in a place you cannot reach?

In this sense, it might be even harder to design a procedurally generated level than to carefully craft your level in Tiled.
I assume, being the bad-ass coder that you are, that you scoff at such cautionary statements.

To get started, download the starter project for this tutorial. Once downloaded, unzip the file, open the project in Xcode, and build and run. You should now see a screen similar to this:

The starter project contains the basic building blocks of the game, including all necessary artwork, sound effects and music. Take note of the following important classes:

- Map: Creates a basic 10×10 square that functions as the level for the game.
- MapTiles: A helper class that manages a 2D grid of tiles. I will explain this class later in the tutorial.
- DPad: Provides a basic implementation of a joystick to control the player's character, a cat.
- MyScene: Sets up the Sprite Kit scene and processes game logic.

Spend a few moments getting familiar with the code in the starter project before moving on. There are comments to help you understand how the code works. Also, try playing the game by using the DPad at the bottom-left corner to move the cat to the exit. Notice how the start and exit points change every time the level begins.

The Beginnings of a New Map

If you played the starter game more than once, you probably discovered that the game isn't very fun. As Jordan Fisher writes in Gamasutra, game levels, especially procedurally generated ones, need to nail these three criteria to be successful:

- Feasibility: Can you beat the level?
- Interesting design: Do you want to beat it?
- Skill level: Is it a good challenge?

Your current level fails two of these three criteria: the design is not very interesting, as the outer perimeter never changes, and it is too easy to win, as you can always see where the exit is when the level starts. Hence, to make the level more fun, you need to generate a better dungeon and make the exit harder to find.

The first step is to change the way you generate the map. To do so, you'll delete the Map class and replace it with a new implementation.
Select Map.h and Map.m in the Project Navigator, press Delete and then select Move to Trash. Next go to File\New\New File…, choose the iOS\Cocoa Touch\Objective-C class and click Next. Name the class Map, make it a Subclass of SKNode and click Next. Make sure the ProceduralLevelGeneration target is selected and click Create.

Open Map.h and add the following code to the @interface section:

@property (nonatomic) CGSize gridSize;
@property (nonatomic, readonly) CGPoint spawnPoint;
@property (nonatomic, readonly) CGPoint exitPoint;

+ (instancetype) mapWithGridSize:(CGSize)gridSize;
- (instancetype) initWithGridSize:(CGSize)gridSize;

This is the interface that MyScene expects from the Map class. You specify here where to spawn the player and exit, and create some initializers to construct the class given a certain size.

Implement these in Map.m by adding this code to the @implementation section:

+ (instancetype) mapWithGridSize:(CGSize)gridSize
{
    return [[self alloc] initWithGridSize:gridSize];
}

- (instancetype) initWithGridSize:(CGSize)gridSize
{
    if (( self = [super init] ))
    {
        self.gridSize = gridSize;
        _spawnPoint = CGPointZero;
        _exitPoint = CGPointZero;
    }
    return self;
}

Here you add a stub implementation that simply sets the player spawn and exit points to CGPointZero. This gives you a simple starting point – you'll fill these out to be more interesting later.

Build and run, and you should see the following:

Gone are the borders of the map, and the feline hero gets sucked right into the exit, making the game unplayable – or really, really easy, if you are a glass-half-full kind of person. Not really the a-maze-ing (pun intended) game you were hoping for, right? Well, time to put down some floors. Enter the Drunkard Walk algorithm.

The Drunkard Walk Algorithm

The Drunkard Walk algorithm is a kind of random walk and one of the simplest dungeon generation algorithms around.
In its simplest implementation, the Drunkard Walk algorithm works as follows:

- Choose a random start position in a grid and mark it as a floor.
- Pick a random direction to move (up, down, left or right).
- Move in that direction and mark the position as a floor, unless it already is a floor.
- Repeat steps 2 and 3 until the desired number of floors have been placed in the grid.

Nice and simple, eh? Basically, it is a loop that runs until a desired number of floors have been placed in the map.

To allow the map generation to be as flexible as possible, you will start implementing the algorithm by adding a new property to hold the number of tiles to generate. Open Map.h and add the following property:

@property (nonatomic) NSUInteger maxFloorCount;

Next, open Map.m and add the following method:

- (void) generateTileGrid
{
    CGPoint startPoint = CGPointMake(self.gridSize.width / 2, self.gridSize.height / 2);

    NSUInteger currentFloorCount = 0;
    while ( currentFloorCount < self.maxFloorCount )
    {
        currentFloorCount++;
    }
}

The above code begins to implement step 1 in the basic Drunkard Walk algorithm loop, but there is one significant difference. Can you spot it?

[spoiler title="Solution"]startPoint is defaulted to the center of the grid instead of a random position. You do this to prevent the algorithm from butting up against the edges and getting stuck. More about that in the second part of the tutorial.[/spoiler]

generateTileGrid begins by setting a start position and then enters a loop that runs until currentFloorCount is equal to the desired number of floors defined by the maxFloorCount property.

When you initialize a Map object, you should invoke generateTileGrid to ensure that you create the grid. So, add the following code to initWithGridSize: in Map.m, after the _exitPoint = CGPointZero line:

[self generateTileGrid];

Build and run to make sure the game compiles as expected. Nothing has changed since the last run.
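The four steps above can be sketched as a small, self-contained routine. This is a plain-C illustration of the idea only, not the Map class code from the tutorial; the tiny pseudo-random generator and the rule that the walker may wander back over existing floors are assumptions made here so the sketch cannot stall at the border.

```c
/* Tiny deterministic pseudo-random generator (illustrative only). */
static unsigned long long rng_state = 88172645463325252ULL;

static unsigned rng_next(void)
{
    rng_state = rng_state * 6364136223846793005ULL + 1442695040888963407ULL;
    return (unsigned)(rng_state >> 33);
}

/* Drunkard Walk over a flat width*height grid: 0 = empty, 1 = floor.
   Starts at the center, walks randomly inside the border, and marks
   floors until maxFloors cells are set. Returns the floor count. */
int drunkard_walk(int *grid, int width, int height, int maxFloors)
{
    int x = width / 2;
    int y = height / 2;                       /* step 1: start position */
    grid[y * width + x] = 1;
    int floors = 1;

    while (floors < maxFloors)
    {
        unsigned dir = rng_next() % 4u;       /* step 2: pick a direction */
        int nx = x, ny = y;
        if (dir == 0u)      ny -= 1;          /* up    */
        else if (dir == 1u) ny += 1;          /* down  */
        else if (dir == 2u) nx -= 1;          /* left  */
        else                nx += 1;          /* right */

        /* step 3: move, staying off the outer border so walls fit later */
        if (nx > 0 && nx < width - 1 && ny > 0 && ny < height - 1)
        {
            x = nx;
            y = ny;
            if (grid[y * width + x] == 0)
            {
                grid[y * width + x] = 1;      /* mark a new floor */
                floors++;                     /* step 4: repeat until done */
            }
        }
    }
    return floors;
}
```

Note one deliberate difference from the Objective-C version built later in the tutorial: there, the position only advances when a brand-new floor is placed, while this sketch lets the walker re-cross existing floors, which is one common way to keep the walk from getting boxed in.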
The cat is still sucked into the exit and there are still no walls. You still need to write the code to generate the floor, but before you do that, you need to understand the MapTiles helper class.

Managing the Tile Grid

The MapTiles class is essentially a wrapper for a dynamic C array that manages a 2D grid for the Map class.

Note: If you're wondering why I chose to use a C array instead of an NSMutableArray, it comes down to personal preference. I generally do not like boxing primitive data types like integers into objects and then unboxing them again to use them, and since the MapTiles grid is just an array of integers, I prefer a C array.

The MapTiles class is already in your project. If you've taken a glance through it and feel you understand how it works, feel free to skip ahead to the next section, Generating the Floor. But if you're unsure about how it works, keep reading to learn how to recreate it step-by-step, and I'll explain how it works along the way.

To start, select MapTiles.h and MapTiles.m in the Project Navigator, press Delete and then select Move to Trash. Go to File\New\File..., choose the iOS\Cocoa Touch\Objective-C class and click Next. Name the class MapTiles, make it a subclass of NSObject and click Next. Be sure the ProceduralLevelGeneration target is selected and click Create.

In order to make it easy to identify the type of a tile, add this enum below the #import statement in MapTiles.h:

typedef NS_ENUM(NSInteger, MapTileType)
{
    MapTileTypeInvalid = -1,
    MapTileTypeNone = 0,
    MapTileTypeFloor = 1,
    MapTileTypeWall = 2,
};

If you later want to extend the MapTiles class with further tile types, you should put those in this MapTileType enum.

Note: Notice the integer values you assign to each of the enums. They weren't picked at random. Look in the tiles.atlas texture atlas and click the 1.png file, and you will see that it is the texture for the floor, just as MapTileTypeFloor has a value of 1.
This makes it easy to convert the 2D grid array into tiles later on.

Open MapTiles.h and add the following properties and method prototypes between @interface and @end:

@property (nonatomic, readonly) NSUInteger count;
@property (nonatomic, readonly) CGSize gridSize;

- (instancetype) initWithGridSize:(CGSize)size;
- (MapTileType) tileTypeAt:(CGPoint)tileCoordinate;
- (void) setTileType:(MapTileType)type at:(CGPoint)tileCoordinate;
- (BOOL) isEdgeTileAt:(CGPoint)tileCoordinate;
- (BOOL) isValidTileCoordinateAt:(CGPoint)tileCoordinate;

You've added two read-only properties: count provides the total number of tiles in the grid and gridSize holds the width and height of the grid in tiles. You'll find these properties handy later on. I'll explain the five methods as you implement the code.

Next, open MapTiles.m and add the following class extension right above the @implementation line:

@interface MapTiles ()
@property (nonatomic) NSInteger *tiles;
@end

This code adds a private property, tiles, to the class. This is a pointer to the array that holds information about the tile grid.

Now implement initWithGridSize: in MapTiles.m after the @implementation line:

- (instancetype) initWithGridSize:(CGSize)size
{
    if (( self = [super init] ))
    {
        _gridSize = size;
        _count = (NSUInteger) size.width * size.height;
        self.tiles = calloc(self.count, sizeof(NSInteger));
        NSAssert(self.tiles, @"Could not allocate memory for tiles");
    }
    return self;
}

You initialize the two properties in initWithGridSize:. Since the total number of tiles in the grid is equal to the width of the grid multiplied by the grid height, you assign this value to the count property. Using this count, you allocate the memory for the tiles array with calloc, which ensures all variables in the array are initialized to 0, equivalent to the enumerated value MapTileTypeNone.

Because ARC will not manage memory allocated using calloc or malloc, you should release the memory whenever you deallocate the MapTiles object.
Before initWithGridSize: but after @implementation, add the dealloc method:

- (void) dealloc
{
    if ( self.tiles )
    {
        free(self.tiles);
        self.tiles = nil;
    }
}

dealloc frees the memory when you deallocate an object and resets the tiles property pointer to avoid it pointing to an array that no longer exists in memory.

Apart from construction and destruction, the MapTiles class also has a few helper methods for managing tiles. But before you start implementing these methods, you need to understand how the tiles array exists in memory versus how it is organized as a grid.

Figure 1: How calloc organizes the variables in memory. Each number is the index of the variable in memory.

When you allocate memory for the tiles using calloc, it reserves n bytes for each array item, depending on the data type, and puts them end-to-end in a flat structure in memory (see Figure 1). This organization of tiles is hard to work with in practice. It is much easier to find a tile by using an (x,y) pair of coordinates, as illustrated in Figure 2, so that is how the MapTiles class should organize the tile grid.

Thankfully, it is very easy to calculate the index of a tile in memory from an (x,y) pair of coordinates, since you know the size of the grid from the gridSize property. The numbers outside the square in Figure 2 illustrate the x- and y-coordinates, respectively. For example, the (x,y) coordinates (1,2) in the grid correspond to index 9 of the array. You calculate this using the formula:

index in memory = y * gridSize.width + x

With this knowledge, you can start implementing a method that calculates an index from a pair of grid coordinates. For convenience, you will also create a method to ensure the grid coordinates are valid.
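The formula above is easy to check in isolation. A few lines of plain C (a standalone sketch, not part of the MapTiles class itself) reproduce it:

```c
/* Row-major index math: the tile at (x, y) in a grid that is
   gridWidth tiles wide lives at index y * gridWidth + x. */
int tile_index(int x, int y, int gridWidth)
{
    return y * gridWidth + x;
}
```

With a 4-tile-wide grid, tile_index(1, 2, 4) gives 9, matching the (1,2) → index 9 example from Figure 2.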
In MapTiles.m, add the following new methods:

- (BOOL) isValidTileCoordinateAt:(CGPoint)tileCoordinate
{
    return !( tileCoordinate.x < 0 ||
              tileCoordinate.x >= self.gridSize.width ||
              tileCoordinate.y < 0 ||
              tileCoordinate.y >= self.gridSize.height );
}

- (NSInteger) tileIndexAt:(CGPoint)tileCoordinate
{
    if ( ![self isValidTileCoordinateAt:tileCoordinate] )
    {
        NSLog(@"Not a valid tile coordinate at %@", NSStringFromCGPoint(tileCoordinate));
        return MapTileTypeInvalid;
    }
    return ((NSInteger)tileCoordinate.y * (NSInteger)self.gridSize.width + (NSInteger)tileCoordinate.x);
}

isValidTileCoordinateAt: tests whether a given pair of coordinates is within the bounds of the grid. Notice how the method checks whether the coordinates are outside the bounds and then returns the opposite result: if the coordinates are outside the bounds, it returns NO, and if they are not, it returns YES. This is faster than checking if the coordinates are within the bounds, which would require the conditions to be AND-ed together instead of OR-ed.

tileIndexAt: uses the equation discussed above to calculate an index from a pair of coordinates, but before doing so, it tests that the coordinates are valid. If not, it returns MapTileTypeInvalid, which has a value of -1.

With the math in place, it is now possible to easily create the methods to return or set the tile type.
So, add the following two methods after initWithGridSize: in MapTiles.m:

- (MapTileType) tileTypeAt:(CGPoint)tileCoordinate
{
    NSInteger tileArrayIndex = [self tileIndexAt:tileCoordinate];
    if ( tileArrayIndex == -1 )
    {
        return MapTileTypeInvalid;
    }
    return self.tiles[tileArrayIndex];
}

- (void) setTileType:(MapTileType)type at:(CGPoint)tileCoordinate
{
    NSInteger tileArrayIndex = [self tileIndexAt:tileCoordinate];
    if ( tileArrayIndex == -1 )
    {
        return;
    }
    self.tiles[tileArrayIndex] = type;
}

The two methods calculate the index from the pair of coordinates passed in, using the tileIndexAt: method you just added, and then either set or return the MapTileType from the tiles array.

Last but not least, add a method to determine whether a given pair of tile coordinates is at the edge of the map. You'll later use this method to ensure you do not place any floors at the edge of the grid, which would make it impossible to enclose all floors behind walls.

- (BOOL) isEdgeTileAt:(CGPoint)tileCoordinate
{
    return ((NSInteger)tileCoordinate.x == 0 ||
            (NSInteger)tileCoordinate.x == (NSInteger)self.gridSize.width - 1 ||
            (NSInteger)tileCoordinate.y == 0 ||
            (NSInteger)tileCoordinate.y == (NSInteger)self.gridSize.height - 1);
}

Referring to Figure 2 above, notice that border tiles are any tiles with an x-coordinate of 0 or gridSize.width – 1, since the grid indices are zero-based. Equally, a y-coordinate of 0 or gridSize.height – 1 marks a border tile.

Finally, when testing, it's nice to be able to see what your procedural generation is actually generating.
Add the following implementation of description, which will output the grid to the console for easy debugging:

- (NSString *) description
{
    NSMutableString *tileMapDescription = [NSMutableString stringWithFormat:@"<%@ = %p | \n", [self class], self];
    for ( NSInteger y = ((NSInteger)self.gridSize.height - 1); y >= 0; y-- )
    {
        [tileMapDescription appendString:[NSString stringWithFormat:@"[%i]", y]];
        for ( NSInteger x = 0; x < (NSInteger)self.gridSize.width; x++ )
        {
            [tileMapDescription appendString:[NSString stringWithFormat:@"%i", [self tileTypeAt:CGPointMake(x, y)]]];
        }
        [tileMapDescription appendString:@"\n"];
    }
    return [tileMapDescription stringByAppendingString:@">"];
}

This method simply loops through the grid to create a string representation of the tiles.

That was a lot of text and code to take in, but what you've built will make the procedural level generation much easier, since you can now abstract the grid handling from the level generation. Now it's time to lay down some ground.

Generating the Floor

You're going to place ground or floor tiles procedurally in the map using the Drunkard Walk algorithm discussed above. In Map.m, you already implemented part of the algorithm so that it finds a random start position (step 1) and loops a desired number of times (step 4). Now you need to implement steps 2 and 3 to generate the actual floor tiles within the loop you created.

To make the Map class a bit more flexible, you'll start by adding a dedicated method to generate a procedural map. This will also be handy if you later need to regenerate the map. Open Map.h and add the following method declaration to the interface:

- (void) generate;

In Map.m, add the following import to the top of the file:

#import "MapTiles.h"

Add the following code right above the @implementation line:

@interface Map ()
@property (nonatomic) MapTiles *tiles;
@end

The class extension holds one private property, which is a pointer to a MapTiles object.
You'll use this object for easy grid handling in the map generation. You're keeping it private since you don't want the MapTiles object changed from outside the Map class.

Next, implement the generate method in Map.m:

- (void) generate
{
    self.tiles = [[MapTiles alloc] initWithGridSize:self.gridSize];
    [self generateTileGrid];
}

First the method allocates and initializes a MapTiles object; then it generates a new tile grid by calling generateTileGrid.

In Map.m, go to initWithGridSize: and delete this line:

[self generateTileGrid];

You deleted that line because map generation should no longer occur immediately when you create a Map object.

It's time to add the code to generate the floor of the dungeon. Do you remember the remaining steps of the Drunkard Walk algorithm? You choose a random direction and then place a floor at the new coordinates. The first step is to add a convenience method that returns a random number between two values. Add the following method in Map.m:

- (NSInteger) randomNumberBetweenMin:(NSInteger)min andMax:(NSInteger)max
{
    return min + arc4random() % (max - min + 1);
}

You'll use this method to return a random number between min and max, both inclusive. Note the + 1 in the modulus: without it, max itself could never be returned.
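The inclusive-range trick is worth seeing on its own. Below is a plain-C sketch of the same arithmetic where the random value is passed in as a parameter (my addition, purely so the behavior is easy to verify); the + 1 in the modulus is what makes max itself reachable.

```c
/* Maps an arbitrary non-negative random value r into [min, max],
   with both endpoints included. */
int random_between(int min, int max, unsigned r)
{
    return min + (int)(r % (unsigned)(max - min + 1));
}
```

For the direction roll used in the tutorial (min 1, max 4), all four values 1 through 4 can come out, so every case of the direction switch is reachable.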
Return to generateTileGrid and replace its contents with the following:

CGPoint startPoint = CGPointMake(self.tiles.gridSize.width / 2, self.tiles.gridSize.height / 2);
// 1
[self.tiles setTileType:MapTileTypeFloor at:startPoint];
NSUInteger currentFloorCount = 1;
// 2
CGPoint currentPosition = startPoint;
while ( currentFloorCount < self.maxFloorCount )
{
    // 3
    NSInteger direction = [self randomNumberBetweenMin:1 andMax:4];
    CGPoint newPosition;
    // 4
    switch ( direction )
    {
        case 1: // Up
            newPosition = CGPointMake(currentPosition.x, currentPosition.y - 1);
            break;
        case 2: // Down
            newPosition = CGPointMake(currentPosition.x, currentPosition.y + 1);
            break;
        case 3: // Left
            newPosition = CGPointMake(currentPosition.x - 1, currentPosition.y);
            break;
        case 4: // Right
            newPosition = CGPointMake(currentPosition.x + 1, currentPosition.y);
            break;
    }
    // 5
    if ( [self.tiles isValidTileCoordinateAt:newPosition] &&
         ![self.tiles isEdgeTileAt:newPosition] &&
         [self.tiles tileTypeAt:newPosition] == MapTileTypeNone )
    {
        currentPosition = newPosition;
        [self.tiles setTileType:MapTileTypeFloor at:currentPosition];
        currentFloorCount++;
    }
}
// 6
_exitPoint = currentPosition;
// 7
NSLog(@"%@", [self.tiles description]);

This is what the code is doing:

- It marks the tile at coordinates startPoint in the grid as a floor tile and therefore initializes currentFloorCount with a count of 1. currentPosition is the current position in the grid. The code initializes it to the startPoint coordinates where the Drunkard Walk algorithm will start.
- Here the code chooses a random number between 1 and 4, providing a direction to move (1 = up, 2 = down, 3 = left, 4 = right).
- Based on the random number chosen in the above step, the code calculates a new position in the grid.
- If the newly calculated position is valid, is not an edge, and does not already contain a tile, this part adds a floor tile at that position and increments currentFloorCount by 1.
- Here the code sets the last tile placed to the exit point.
This is the goal of the map.

- Lastly, the code prints the generated tile grid to the console.

Build and run. The game runs with no visible changes, but it also fails to write the tile grid to the console. Why is that?

[spoiler title="Solution"]You never call generate on the Map class during MyScene initialization. Therefore, you created the map object but never actually generated the tiles.[/spoiler]

To fix this, go to MyScene.m and in initWithSize:, replace the line self.map = [[Map alloc] init] with the following:

self.map = [[Map alloc] initWithGridSize:CGSizeMake(48, 48)];
self.map.maxFloorCount = 64;
[self.map generate];

This creates a new map with a grid size of 48 by 48 tiles and a desired maximum floor count of 64. Once you have set the maxFloorCount property, you generate the map.

Build and run again, and you should see output similar to – but, since it's random, probably not exactly like – the following:

HOORAY!! You have generated a procedural level. Pat yourself on the back and get ready to show your masterpiece on the big – or small – screen.

Converting a Tile Grid into Tiles

Plotting your level in the console is a good way to debug your code but a poor way to impress your player. The next step is to convert the grid into actual tiles. The starter project already includes a texture atlas containing the tiles.
To load the atlas into memory, add a private property to the class extension of Map.m, as well as a property to hold the size of a tile:

@property (nonatomic) SKTextureAtlas *tileAtlas;
@property (nonatomic) CGFloat tileSize;

Initialize these two properties in initWithGridSize:, just after setting the value of _exitPoint:

self.tileAtlas = [SKTextureAtlas atlasNamed:@"tiles"];
NSArray *textureNames = [self.tileAtlas textureNames];
SKTexture *tileTexture = [self.tileAtlas textureNamed:(NSString *)[textureNames firstObject]];
self.tileSize = tileTexture.size.width;

After loading the texture atlas, the above code reads the texture names from the atlas. It uses the first name in the array to load a texture and stores that texture's width as tileSize. This code assumes the textures in the atlas are squares (same width and height) and are all the same size.

Note: Using a texture atlas reduces the number of draw calls necessary to render the map. Every draw call adds overhead to the system, because Sprite Kit has to perform extra processing to set up the GPU for each one. By using a single texture atlas, the entire map may be drawn in as few as a single draw call. The exact number will depend on several things, but in this app, those won't come into play. To learn more, check out Chapter 25 in iOS Games by Tutorials, Performance: Texture Atlases.
Still inside Map.m, add the following method:

- (void) generateTiles
{
    // 1
    for ( NSInteger y = 0; y < self.tiles.gridSize.height; y++ )
    {
        for ( NSInteger x = 0; x < self.tiles.gridSize.width; x++ )
        {
            // 2
            CGPoint tileCoordinate = CGPointMake(x, y);
            // 3
            MapTileType tileType = [self.tiles tileTypeAt:tileCoordinate];
            // 4
            if ( tileType != MapTileTypeNone )
            {
                // 5
                SKTexture *tileTexture = [self.tileAtlas textureNamed:[NSString stringWithFormat:@"%i", tileType]];
                SKSpriteNode *tile = [SKSpriteNode spriteNodeWithTexture:tileTexture];
                // 6
                tile.position = tileCoordinate;
                // 7
                [self addChild:tile];
            }
        }
    }
}

generateTiles converts the internal tile grid into actual tiles as follows:

- Two for loops, one for x and one for y, iterate through each tile in the grid.
- This converts the current x- and y-values into a CGPoint structure for the position of the tile within the grid.
- Here the code determines the type of the tile at this position within the grid.
- If the tile type is not an empty tile, the code proceeds with creating the tile.
- Based on the tile type, the code loads the respective tile texture from the texture atlas and assigns it to an SKSpriteNode object. Remember that the tile type (an integer) matches the file name of the texture, as explained earlier.
- The code sets the position of the tile to the tile coordinate.
- Then it adds the created tile node as a child of the map object. This ensures proper scrolling by grouping the tiles under the map they belong to.

Finally, make sure the grid is actually turned into tiles by inserting the following line into the generate method in Map.m, after [self generateTileGrid]:

[self generateTiles];

Build and run — but the result is not as expected. The game incorrectly places the tiles in a big pile, as illustrated here:

The reason is straightforward: when positioning each tile, the current code sets the tile's position to its position within the internal grid, not relative to screen coordinates.
You need a new method to convert grid coordinates into screen coordinates, so add the following to Map.m:

- (CGPoint) convertMapCoordinateToWorldCoordinate:(CGPoint)mapCoordinate
{
    return CGPointMake(mapCoordinate.x * self.tileSize,
                       (self.tiles.gridSize.height - mapCoordinate.y) * self.tileSize);
}

By multiplying the grid (map) coordinate by the tile size, you calculate the horizontal position. The vertical position is slightly more complicated. Remember that the coordinates (0,0) in Sprite Kit represent the bottom-left corner, while in the tile grid, (0,0) is the top-left corner (see Figure 2 above). Hence, in order to position the tile correctly, you need to invert its vertical placement. You do this by subtracting the tile's y-position in the grid from the total height of the grid and multiplying the result by the tile size.

Revisit generateTiles and change the line that sets tile.position to the following:

tile.position = [self convertMapCoordinateToWorldCoordinate:CGPointMake(tileCoordinate.x, tileCoordinate.y)];

Also, change the line that sets _exitPoint in generateTileGrid to the following:

_exitPoint = [self convertMapCoordinateToWorldCoordinate:currentPosition];

Build and run – oh no, where did the tiles go? Well, they are still there – they're just outside the visible area. You can easily fix this by changing the player's spawn position. You will apply a simple yet effective strategy where you set the spawn point to the position of the startPoint in generateTileGrid.

Go to generateTileGrid and add the following line at the very bottom of the method:

_spawnPoint = [self convertMapCoordinateToWorldCoordinate:startPoint];

The spawn point is the pair of screen coordinates where the game should place the player at the beginning of the level. Hence, you calculate the world coordinates from the grid coordinates.

Build and run, and take the cat for a walk around the procedural world. Maybe you will even find the exit?
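The same conversion can be checked outside of Sprite Kit. This plain-C sketch (the struct and function names are mine, not Sprite Kit's) mirrors the arithmetic of convertMapCoordinateToWorldCoordinate::

```c
/* Converts a (column, row) tile-grid coordinate with a top-left origin
   into world coordinates with a bottom-left origin, as Sprite Kit uses. */
typedef struct { double x; double y; } WorldPoint;

WorldPoint map_to_world(int mapX, int mapY, int gridHeight, double tileSize)
{
    WorldPoint p;
    p.x = mapX * tileSize;                 /* columns grow rightwards */
    p.y = (gridHeight - mapY) * tileSize;  /* flip the y axis */
    return p;
}
```

For a 48-tile-high grid with 32-point tiles, grid coordinate (1, 2) lands at (32, 1472) in world space: larger grid y-values produce smaller world y-values, which is exactly the inversion described above.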
Try playing around with different grid sizes and maximum numbers of floor tiles to see how they affect the map generation.

One obvious issue now is that the cat can stray from the path. And we all know what happens when cats stray, right? All the songbirds of the world shiver. So, time to put up some walls.

Adding Walls

Open Map.m and add the following method:

- (void) generateWalls
{
    // 1
    for ( NSInteger y = 0; y < self.tiles.gridSize.height; y++ )
    {
        for ( NSInteger x = 0; x < self.tiles.gridSize.width; x++ )
        {
            CGPoint tileCoordinate = CGPointMake(x, y);
            // 2
            if ( [self.tiles tileTypeAt:tileCoordinate] == MapTileTypeFloor )
            {
                for ( NSInteger neighbourY = -1; neighbourY < 2; neighbourY++ )
                {
                    for ( NSInteger neighbourX = -1; neighbourX < 2; neighbourX++ )
                    {
                        if ( !(neighbourX == 0 && neighbourY == 0) )
                        {
                            CGPoint coordinate = CGPointMake(x + neighbourX, y + neighbourY);
                            // 3
                            if ( [self.tiles tileTypeAt:coordinate] == MapTileTypeNone )
                            {
                                [self.tiles setTileType:MapTileTypeWall at:coordinate];
                            }
                        }
                    }
                }
            }
        }
    }
}

- The strategy applied by generateWalls is to first loop through each tile of the grid.
- It does this until it identifies a floor tile (MapTileTypeFloor).
- It then checks the surrounding tiles and marks them as walls (MapTileTypeWall) if no tile is placed there already (MapTileTypeNone).

The inner for loops (after // 2) might seem a bit strange at first. They look at each tile that surrounds the tile at coordinate (x,y). Take a peek at Figure 3 and see how the tiles you want are one less than, equal to, and one more than the original index. The two for loops give just that, starting at -1 and looping through to +1. By adding one of these integers to the original index inside the loop, you find each neighbor.

What if the tile you're checking is at the border of the grid? In that case, the check would fail, as the index would be invalid, correct? Yes, but luckily this situation is handled by the tileTypeAt: method of the MapTiles class.
If an invalid coordinate is sent to tileTypeAt:, the method will return a MapTileTypeInvalid value. Consider the line after // 3 in generateWalls and notice it only changes the tile to a wall tile if the returned tile type is MapTileTypeNone.

To generate the wall tiles, go back to generate in Map.m and add the following line of code after [self generateTileGrid] and before [self generateTiles]:

    [self generateWalls];

Build and run. You should now see wall tiles surrounding the floor tiles. Try moving the cat around – notice anything strange? Walls are kind of pointless if you can walk right through them. There are several ways to fix this problem, one of which is described in the Collisions and Collectables: How To Make a Tile-Based Game with Cocos2D 2.X, Part 2 tutorial on this site. In this tutorial you will do it a bit differently by using the built-in physics engine in Sprite Kit. Everyone likes new tech, after all.

Procedural Collision Handling: Theory

There are many ways you could turn wall tiles into collision objects. The most obvious is to add a physicsBody to each wall tile, but that is not the most efficient solution. Another way, as described by Steffen Itterheim, is to use the Moore Neighborhood algorithm, but that is a tutorial in its own right. Instead, you will implement a fairly simple method where connected wall segments are combined into a single collision object. Figure 4 illustrates this method.

The method will iterate over all tiles in the map using the following logic:

- Starting at (0,0), iterate the tile grid until you find a wall tile.
- When you find a wall tile, mark the tile grid position. This is the starting point for the collision wall.
- Move to the next tile in the grid. If this is also a wall tile, then increase the number of tiles in the collision wall by 1.
- Continue step 3 until you reach a non-wall tile or the end of the row.
- When you reach a non-wall tile or the end of the row, create a collision wall starting at the marked point, with a size equal to the number of tiles in the collision wall.
- Resume the iteration, going back to step 2, and repeat until you've turned all wall tiles in the grid into collision walls.

Note: The method described here is very basic and could be optimized further. For instance, you could iterate the map both horizontally and vertically. Iterating the map horizontally would omit all collision walls that are the size of one tile. You would then pick these up when iterating the map vertically, further decreasing the number of collision objects, which is always a good thing.

It's time to put theory into practice.

Procedural Collision Handling: Practice

Look at initWithSize: in MyScene.m and see that the code to activate the physics engine is already in the starter project. Since Ray did an excellent job explaining how to set up the physics engine in the Sprite Kit for Beginners tutorial, I'll only explain it here in the context of procedural level generation.

When the code creates the physicsBody of the player object, it sets it to collide with walls by adding CollisionTypeWall to the collisionBitMask. That way, the physics engine will automatically bounce the player off any wall objects. However, when you created the walls in generateWalls, you didn't create them as physics objects – only as simple SKSpriteNodes. Hence, when you build and run the game, the player will not collide with the walls. You're going to simplify wall collision object creation by adding a helper method.
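Before writing the Objective-C version, it may help to see the row-merging logic from the steps above in isolation. Here is an illustrative Python sketch; the function name and the 'W'/'.' cell encoding are my own, not part of the tutorial's code:

```python
def merge_row(row):
    """Collapse consecutive 'W' (wall) cells of one grid row into
    (start_index, length) runs - one collision wall per run."""
    walls, start, length = [], 0, 0
    for x, cell in enumerate(row + ['.']):  # sentinel closes a run at row end
        if cell == 'W':
            if length == 0:
                start = x
            length += 1
        elif length > 0:
            walls.append((start, length))
            length = 0
    return walls

print(merge_row(list('WW..WWW.')))  # [(0, 2), (4, 3)]
```

The appended sentinel plays the same role as iterating one index past the last column, so a run that touches the end of the row is still closed off.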
Open Map.m and add the following code:

    // Add at the top of the file together with the other #import statements
    #import "MyScene.h"

    // Add with the other methods
    - (void)addCollisionWallAtPosition:(CGPoint)position withSize:(CGSize)size
    {
        SKNode *wall = [SKNode node];
        wall.position = CGPointMake(position.x + size.width * 0.5f - 0.5f * self.tileSize,
                                    position.y - size.height * 0.5f + 0.5f * self.tileSize);
        wall.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:size];
        wall.physicsBody.dynamic = NO;
        wall.physicsBody.categoryBitMask = CollisionTypeWall;
        wall.physicsBody.contactTestBitMask = 0;
        wall.physicsBody.collisionBitMask = CollisionTypePlayer;
        [self addChild:wall];
    }

This method creates and adds an SKNode to the map with the passed position and size. It then creates a non-movable physics body for the node, sized to match, and ensures that the physics engine performs collision handling when the player collides with the node.

It's time to implement the collision wall generation. Add the following method:

    - (void)generateCollisionWalls
    {
        for (NSInteger y = 0; y < self.tiles.gridSize.height; y++) {
            CGFloat startPointForWall = 0;
            CGFloat wallLength = 0;
            for (NSInteger x = 0; x <= self.tiles.gridSize.width; x++) {
                CGPoint tileCoordinate = CGPointMake(x, y);
                // 1
                if ([self.tiles tileTypeAt:tileCoordinate] == MapTileTypeWall) {
                    if (startPointForWall == 0 && wallLength == 0) {
                        startPointForWall = x;
                    }
                    wallLength += 1;
                }
                // 2
                else if (wallLength > 0) {
                    CGPoint wallOrigin = CGPointMake(startPointForWall, y);
                    CGSize wallSize = CGSizeMake(wallLength * self.tileSize, self.tileSize);
                    [self addCollisionWallAtPosition:[self convertMapCoordinateToWorldCoordinate:wallOrigin]
                                            withSize:wallSize];
                    startPointForWall = 0;
                    wallLength = 0;
                }
            }
        }
    }

Here you perform the six steps described earlier.

- You iterate through each row until you find a wall tile. You set a starting point (a tile coordinate pair) for the collision wall and then increase wallLength by one.
Then you move to the next tile. If this is also a wall tile, you repeat these steps.

- If the next tile is not a wall tile, you calculate the size of the wall in points by multiplying the wall length by the tile size, and you convert the starting point into world coordinates. By passing the starting point (as world coordinates) and the size (in points), you generate a collision wall using the addCollisionWallAtPosition:withSize: helper method you added above.

Go to generate in Map.m and add the following line of code after [self generateTiles] to ensure the game generates collision walls when it generates a tile map:

    [self generateCollisionWalls];

Build and run. Now the cat is stuck within the walls. The only way out is to find the exit – or is it?

Where to Go from Here?

You now have a basic understanding of how to generate procedural levels in your game. Here is the full source code for the first part of the tutorial.

In the second part of this tutorial, you will extend the map generation code even further by adding rooms. You'll also make map generation more controllable by adding several properties that will influence the process.

If you have any comments or suggestions related to this tutorial, please join the forum discussion below.
https://www.raywenderlich.com/2637-procedural-level-generation-in-games-tutorial-part-1
IRC log of gld on 2012-05-03 Timestamps are in UTC. 14:01:24 [RRSAgent] RRSAgent has joined #gld 14:01:24 [RRSAgent] logging to 14:01:26 [trackbot] RRSAgent, make logs world 14:01:26 [Zakim] Zakim has joined #gld 14:01:28 [trackbot] Zakim, this will be GLD 14:01:28 [Zakim] ok, trackbot, I see T&S_GLDWG()10:00AM already started 14:01:29 [trackbot] Meeting: Government Linked Data Working Group Teleconference 14:01:29 [trackbot] Date: 03 May 2012 14:01:30 [Zakim] +mhausenblas 14:02:15 [gatemezi] gatemezi has joined #gld 14:02:55 [cygri] cygri has joined #gld 14:03:00 [Zakim] +gatemezi 14:03:15 [cygri] zakim, who is on the phone? 14:03:15 [Zakim] On the phone I see George, +3539149aaaa, Sandro, mhausenblas, gatemezi 14:03:17 [Zakim] +??P29 14:03:39 [cygri] zakim, mhausenblas is temporarily me 14:03:39 [Zakim] +cygri; got it 14:03:50 [fadmaa] Zakim, +35391 is me 14:03:50 [Zakim] +fadmaa; got it 14:04:33 [cygri] scribe: cygri 14:04:54 [gatemezi] +1 14:04:56 [fadmaa] Zakim, mute me 14:04:56 [cygri] topic: Admin 14:04:57 [sandro] sandro has changed the topic to: Government Linked Data (GLD) WG -- 14:04:59 [Zakim] fadmaa should now be muted 14:05:08 [cygri] PROPOSAL: accept last meetings' minutes 14:05:16 [cygri] RESOLUTION: accept last meetings' minutes 14:05:52 [cygri] minutes were here: 14:06:04 [fadmaa] zakim, unmute me 14:06:04 [Zakim] fadmaa should no longer be muted 14:06:11 [George] 14:06:17 [cygri] agenda: 14:06:23 [cygri] chair: George Thomas 14:06:38 [cygri] topic: Main Vocab Agenda: DCAT 14:07:21 [sandro] cygri: What I've been saying about this is "Oh Yeah, We'll be looking intot this... as part of void or dcat or something." 14:07:31 [sandro] ... This needs to be worked out. Some people have looked at this 14:07:50 [sandro] ... semantic CKAN, which unfort. is defunct now. It was emitting both -- we can look at how they did it. 14:08:08 [sandro] ... I don't have a clear answer on how they can use them together. 14:08:19 [sandro] ... 
that should be published as part of one or the other. 14:08:35 [BenediktKaempgen] BenediktKaempgen has joined #gld 14:08:55 [cygri] George: do we want to do this in GLD? or do we defer to void? 14:09:01 [sandro] q+ 14:09:27 [cygri] ack sandro 14:09:31 [Zakim] +BenediktKaempgen 14:09:56 [gatemezi] I suggest we have the discussion in GLD.. 14:10:12 [cygri] sandro: procedurally, it would fall on void to explain how they work together because void is not on REC track and hence not so stable 14:10:13 [cygri] q+ 14:10:19 [BenediktKaempgen] zakim, who's here? 14:10:19 [Zakim] On the phone I see George, fadmaa, Sandro, cygri, gatemezi, DaveReynolds, BenediktKaempgen 14:10:22 [Zakim] On IRC I see BenediktKaempgen, cygri, gatemezi, Zakim, RRSAgent, fadmaa, MacTed, DaveReynolds, George, danbri_, rreck, trackbot, sandro 14:10:46 [fadmaa] zakim, mute me 14:10:46 [Zakim] fadmaa should now be muted 14:12:51 [sandro] q? 14:12:57 [sandro] ack cygri 14:15:53 [cygri] sandro: GLD should probably address it somehow, because people already use void 14:16:04 [George] i'll ping the CKAN list and see when semantic.ckan.net might be back 14:16:35 [George] then we'll use how it emits both DCAT and VOID to seed a GLD wiki page on the topic 14:17:04 [George] cygri: start with a wiki page and collect the obvious material - dcat example, void example, some options 14:17:42 [George] sandro: start with ReSpec? 14:17:54 [George] cygri: what are the possible/desired outcomes? 14:18:55 [George] sandro: could be a working group note instead 14:19:24 [George] cygri: separate doc is overkill - preference is to put this in the void note 14:19:47 [George] ... could setup a void ED soon 14:19:58 [George] ... but start with wiki to iterate a bit 14:20:29 [gatemezi] +1 to create a wiki page 14:20:40 [cygri] ACTION: cygri to create wiki page for void-dcat mapping 14:20:41 [trackbot] Created ACTION-69 - Create wiki page for void-dcat mapping [on Richard Cyganiak - due 2012-05-10]. 
14:21:09 [cygri] topic: DCAT Product Tracker 14:21:16 [George] 14:21:27 [cygri] topic: DCAT Product Tracker Actions/Issues 14:22:25 [George] 14:22:36 [cygri] cygri: i'll update DERI dcat page to point to the WD 14:24:15 [cygri] ACTION: sandro to put in place proper HTML and RDF for the dcat namespace 14:24:15 [trackbot] Created ACTION-70 - Put in place proper HTML and RDF for the dcat namespace [on Sandro Hawke - due 2012-05-10]. 14:24:52 [fadmaa] zakim, unmute me 14:24:52 [Zakim] fadmaa should no longer be muted 14:24:59 [fadmaa] 14:25:51 [sandro] issue-7? 14:25:51 [trackbot] ISSUE-7 -- Drop dcat:accessUrl, use the URI of the dcat:Download resource instead -- raised 14:25:51 [trackbot] 14:25:51 [fadmaa] 14:26:41 [cygri] fadmaa: on ISSUE-7, this is an old issue, i think olyerickson tried to follow up with edsu and got no reply 14:26:52 [cygri] DaveReynolds: i thought we saw a response from Ed? 14:26:54 [cygri] q+ 14:27:17 [sandro] 14:27:18 [cygri] george: really? 14:30:03 [DaveReynolds] +1 to cygri - propose resolutions ahead of time to allow for discussion and preparation 14:30:53 [danbri] danbri has joined #gld 14:31:09 [cygri] cygri: it would be helpful to propose resolutions before via email and have them explicitly on the agenda 14:31:23 [cygri] george: yes, and it would be good if editors are prepared to address issues 14:31:48 [cygri] sandro: i think it makes sense to close the loop with ed. tell him to please come back to the WG if you care 14:32:43 [cygri] DaveReynolds: edsu offered to join a meeting where these issues are discussed. john's reply didn't respond to that 14:33:03 [cygri] george: we'll ping him 14:33:23 [cygri] ACTION: george to ping Ed Summers regarding DCAT ISSUE-7/8/9 14:33:23 [trackbot] Created ACTION-71 - Ping Ed Summers regarding DCAT ISSUE-7/8/9 [on George Thomas - due 2012-05-10]. 14:33:37 [cygri] ACTION-71? 
14:33:37 [trackbot] ACTION-71 -- George Thomas to ping Ed Summers regarding DCAT ISSUE-7/8/9 -- due 2012-05-10 -- OPEN 14:33:37 [trackbot] 14:34:28 [cygri] sandro: should dcat be on the agenda for every vocabulary call, or every *other* vocab call? 14:34:49 [cygri] ... we'd like him to participate in more than one meeting 14:35:29 [fadmaa] +1 14:35:41 [George] q? 14:35:55 [cygri] ack me 14:35:55 [fadmaa] q+ 14:36:19 [gatemezi] @Sandro: you mean having at least one dcat call per month? 14:37:02 [George] fadmaa: Rufus from CKAN has some DCAT input 14:37:11 [cygri] fadmaa: i got some personal feedback from Rufus Pollock 14:37:20 [George] sandro: he should send an email to the comments list if he would 14:37:24 [cygri] ... i told him to raise issues but he can't cause not WG member 14:37:35 [cygri] sandro: ask him to mail the comments list, or forward his mail 14:38:04 [cygri] fadmaa: he suggested to distinguish mandatory and optional properties 14:38:19 [cygri] ... so consumers can be sure that some basic properties will be present 14:38:42 [cygri] ... and he suggested to drop some properties because they are underspecified 14:39:14 [cygri] ... things like dataQuality or dataDictionary 14:39:41 [cygri] ... vague definitions don't help much, and precise definitions might require more work and might be out of scope 14:39:50 [cygri] ... i found this reasonable 14:40:00 [cygri] q+ 14:40:14 [cygri] george: so should something like dataDictionary be optional, or completely removed? 14:40:24 [cygri] fadmaa: he says completely removed/deprecated 14:40:35 [George] ack fadmaa 14:42:09 [cygri] ack me 14:44:45 [cygri] cygri: defining something like "minimally complete record" is probably good 14:45:13 [cygri] george: this might relate to the new Linked Data Protocol work 14:45:44 [cygri] ... you want to say, here are properties that you can expect 14:46:05 [cygri] ... 
on dataDictionary, it's possible that the CKAN folks don't find it useful, but we find it useful on data.gov 14:48:50 [gatemezi] So maybe we can provide in the usage page here how useful is on data.gov 14:51:29 [George] q? 14:52:38 [George] cygri: conversations regarding DCAT as JSON-LD 14:53:06 [George] ... need to have an answer to 'can you use DCAT with JSON-LD?' kind of thing 14:53:42 [George] DaveReynolds: specific to JSON-LD? what about Talis JSON-RDF? 14:56:56 [cygri] (discussion on RDF in JSON) 14:58:08 [gatemezi] JSON-LD unofficial Draft here.. 14:58:24 [DaveReynolds] +1 to cygri, vocabularies should be agnostic to RDF serializations 14:59:26 [sandro] note that has every example in FIVE syntaxes, with toggles. 14:59:49 [cygri] sandro: the owl2 primer has all examples in five different syntaxes with a JS picker. we might want to do something like this 15:00:10 [sandro] zakim, who is on the call? 15:00:13 [Zakim] On the phone I see George, fadmaa, Sandro, cygri, gatemezi, DaveReynolds, BenediktKaempgen 15:00:31 [rreck] like i said i couldnt call in 15:01:56 [sandro] W3C Members, please give feedback on -- AC Members, please vote now. 15:01:58 [Zakim] -cygri 15:02:06 [Zakim] -fadmaa 15:02:09 [DaveReynolds] DaveReynolds has left #gld 15:02:13 [cygri] RRSAgent, make minutes public 15:02:13 [RRSAgent] I'm logging. I don't understand 'make minutes public', cygri. Try /msg RRSAgent help 15:02:17 [Zakim] -BenediktKaempgen 15:02:17 [gatemezi] -gatemezin 15:02:18 [Zakim] -DaveReynolds 15:02:20 [Zakim] -Sandro 15:02:24 [gatemezi] -gatemezi 15:02:24 [Zakim] -George 15:02:25 [cygri] RRSAgent, make logs public 15:02:51 [rreck] invited experts cannot vote? 15:03:56 [George] thanks for scribing richard! 15:04:01 [cygri] rreck, i don't think so. it's one vote per member org? 15:04:51 [rreck] rreck has left #gld 15:05:04 [sandro] correct. this a vote as to what W3C will work on, so it's paying members only. public input has been and remains welcome, though. 
15:05:14 [sandro] of course, rreck just left. 15:35:00 [Zakim] disconnecting the lone participant, gatemezi, in T&S_GLDWG()10:00AM 15:35:02 [Zakim] T&S_GLDWG()10:00AM has ended 15:35:02 [Zakim] Attendees were George, +3539149aaaa, Sandro, gatemezi, DaveReynolds, cygri, fadmaa, BenediktKaempgen 17:21:20 [Zakim] Zakim has left #gld 18:39:19 [bhyland] bhyland has joined #gld
http://www.w3.org/2012/05/03-gld-irc
Limit or increase the quantity of content that is crawled (Office SharePoint Server)

Updated: October 23, 2008

Applies To: Office SharePoint Server 2007

You can increase or limit the quantity of content that is crawled by using:

Crawl settings in the content sources

For example, you can specify to crawl only the start addresses that are specified in a particular content source, or you can specify how many levels deep in the namespace (from those start addresses) to crawl and how many server hops to allow. Note that the options that are available within a content source for specifying the quantity of content that is crawled vary by content-source type.

File type inclusions

You can choose the file types that you want to crawl.

Crawl rules

You can use crawl rules to exclude all items in a given path from being crawled. This is a good way to ensure that subsites that you do not want to index are not crawled with a parent site that you are crawling. You can also use crawl rules to increase the amount of content that is crawled – for example, crawling complex URLs for a given path.

Crawl settings

The options available in the properties for each content source vary depending upon the content source type that is selected. The following table describes the crawl settings options for each content source type. As the preceding table shows, shared services administrators can use crawl setting options to limit or increase the quantity of content that is crawled. The following table describes best practices when configuring crawl setting options.

File-type inclusions and IFilters

Content is only crawled if the relevant file name extension is included in the file-type inclusions list and an IFilter is installed on the index server that supports those file types. Several file types are included automatically during initial installation.
Office SharePoint Server 2007 provides several IFilters, and more are available from Microsoft and third-party vendors. If necessary, software developers can create IFilters for new file types. To install and register additional IFilters provided by Microsoft with Office SharePoint Server 2007, see the File types and IFilter reference (Office SharePoint Server).

Limit or exclude content by using crawl rules
https://technet.microsoft.com/en-us/library/cc262531(v=office.12)
Let me tackle that question by sorting the kinds of problems for which you would use XML.

Just about every software application needs to store some data. There are look-up tables, work files, preference settings, and so on. XML makes it very easy to do this. Say, for example, you've created a calendar program and you need a way to store holidays. You could hardcode them, of course, but that's kind of a hassle since you'd have to recompile the program if you need to add to the list. So you decide to save this data in a separate file using XML. Example 1-4 shows how it might look.

    <caldata>
      <holiday type="international">
        <name>New Year's Day</name>
        <date><month>January</month><day>1</day></date>
      </holiday>
      <holiday type="personal">
        <name>Erik's birthday</name>
        <date><month>April</month><day>23</day></date>
      </holiday>
      <holiday type="national">
        <name>Independence Day</name>
        <date><month>July</month><day>4</day></date>
      </holiday>
      <holiday type="religious">
        <name>Christmas</name>
        <date><month>December</month><day>25</day></date>
      </holiday>
    </caldata>

Now all your program needs to do is read in the XML file and convert the markup into some convenient data structure using an XML parser. This software component reads and digests XML into a more usable form. There are lots of libraries that will do this, as well as standalone programs. Outputting XML is just as easy as reading it. Again, there are modules and libraries people have written that you can incorporate in any program.

XML is a very good choice for storing data in many cases. It's easy to parse and write, and it's open for users to edit themselves. Parsers have mechanisms to verify syntax and completeness, so you can protect your program from corrupted data. XML works best for small data files or for data that is not meant to be searched randomly.
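As a sketch of that read-and-convert step, here is how a program might digest the holiday file with Python's standard library parser (one option among the many libraries mentioned above; the dictionary layout is my own choice):

```python
import xml.etree.ElementTree as ET

# A trimmed version of the Example 1-4 holiday file
CALDATA = """<caldata>
  <holiday type="national">
    <name>Independence Day</name>
    <date><month>July</month><day>4</day></date>
  </holiday>
</caldata>"""

root = ET.fromstring(CALDATA)
# Convert the markup into a convenient in-memory structure
holidays = {
    h.findtext("name"): (h.findtext("date/month"), int(h.findtext("date/day")))
    for h in root.iter("holiday")
}
print(holidays)  # {'Independence Day': ('July', 4)}
```

In a real calendar program you would read the file from disk with ET.parse() instead of embedding the string, but the conversion step is the same.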
A novel is a good example of a document that is not randomly accessed (unless you are one of those people who peek at the ending of a novel before finishing), whereas a telephone directory is randomly accessed and therefore may not be the best choice to put in a single, enormous XML document.

If you want to store huge amounts of data and need to retrieve it quickly, you probably don't want to use XML. It's a sequential storage medium, meaning that any search would have to go through most of the document. A database program like Oracle or MySQL would scale much better, caching frequently used data and using a hash table to zero in on records with lightning speed.

I mentioned before that a large class of XML documents are narrative, meaning they are for human consumption. But we don't expect people to actually read text with XML markup. Rather, the XML must be processed to put the data in a presentable form. XML has a number of strategies and tools for turning the unappealing mishmash of marked-up plain text into eye-pleasing views suitable for web pages, magazines, or whatever you like.

Most XML markup languages focus on the task of how to organize information semantically. That is, they describe the data for what it is, not in terms of how it should look. Example 1-2 encodes a mathematical equation, but it does not look like something you'd write on a blackboard or see in a textbook. How you get from the raw data to the finished product is called formatting.

There are a number of different strategies for formatting. The simplest is to apply a Cascading Style Sheet (CSS) to it. This is a separate document (not itself XML) that contains mappings from element names to presentation details (font style, color, margins, and so on). A formatting XML processor, such as a web browser, reads the XML data file and the stylesheet, then produces a formatted page by applying the stylesheet's instructions to each element. Example 1-5 shows a typical example of a CSS stylesheet.
    telegram {
      display: block;
      background-color: tan;
      color: black;
      font-family: monospace;
      padding: 1em;
    }
    message {
      display: block;
      margin: .5em;
      padding: .5em;
      border: thin solid brown;
      background-color: wheat;
      white-space: normal;
    }
    to:before {
      display: block;
      color: black;
      content: "To: ";
    }
    from:before {
      display: block;
      color: black;
      content: "From: ";
    }
    subject:before {
      color: black;
      content: "Subject: ";
    }
    to, from, subject {
      display: block;
      color: blue;
      font-size: large;
    }
    emphasis { font-style: italic; }
    name { font-weight: bold; }
    villain { color: red; font-weight: bold; }

To apply this stylesheet, you need to add a special instruction to the source document. It looks like this:

    <?xml-stylesheet type="text/css" href="ex2_memo.css"?>

This is a processing instruction, not an element. It will be ignored by any XML processing software that doesn't handle CSS stylesheets. To see the result, you can open the document in a web browser that accepts XML and can format with CSS. Figure 1-1 shows a screenshot of how it looks in Safari version 1.0 for Mac OS X.

CSS is limited to cases where the output text will be in the same order as the input data. It would not be so useful if you wanted to show only an excerpt of the data, or if you wanted it to appear in a different order from the data. For example, suppose you collected a lot of phone numbers in an XML file and then wanted to generate a telephone directory from that. With CSS, there is no way to sort the listings in alphabetical order, so you'd have to do the sorting in the XML file first.

A more powerful technique is to transform the XML. Transformation is a process that breaks apart an XML document and builds a new one. The new document may or may not use the same markup language (in fact, XML is only one option; you can transform XML into any kind of text). With transformation, you can sort elements, throw out parts you don't want, and even generate new data such as headers and footers for pages.
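To see why transformation can do what CSS cannot, here is a toy sketch of sorting a tiny, made-up phone directory while rebuilding the document. Python stands in for a real transformation language here purely as an illustration; the listing data is invented:

```python
import xml.etree.ElementTree as ET

SOURCE = """<directory>
  <listing><name>Zonky</name><phone>555-0001</phone></listing>
  <listing><name>Bellum</name><phone>555-0002</phone></listing>
</directory>"""

root = ET.fromstring(SOURCE)
# Rebuild the document with listings in alphabetical order by name -
# the reordering that a CSS stylesheet has no way to express.
root[:] = sorted(root, key=lambda listing: listing.findtext("name"))
names = [listing.findtext("name") for listing in root]
print(names)  # ['Bellum', 'Zonky']
```

The output document keeps the same markup language; only the order of its elements has changed, which is exactly the kind of restructuring transformation is for.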
Transformation in XML is typically done with the language XSLT, essentially a programming language optimized for transforming XML. It requires a transformation instruction, which happens to be called a stylesheet (not to be confused with a CSS stylesheet). The process looks like the diagram in Figure 1-2.

A popular use of transformations is to change a non-presentation XML data file into a format that combines data with presentational information. Typically, this format will throw away semantic information in favor of device-specific and highly presentational descriptions. For example, elements that distinguish between filenames and emphasized text would be replaced with tags that turn on italic formatting. Once you lose the semantic information, it is much harder to transform the document back to the original data-specific format. That is okay, because what we get from presentational formats is the ability to render a pleasing view on screen or printed page.

There are many presentational formats. Public domain varieties include the venerable troff, which dates back to the first Unix system, and TeX, which is still popular in universities. Adobe's PostScript and PDF and Microsoft's Rich Text Format (RTF) are also good candidates for presentational formats. There are even some XML formats that can be included in this domain. XHTML is rather generic and presentational for narrative documents. SVG, a graphics description language, is another format you could transform to from a more semantic language.

Example 1-6 shows an XSLT stylesheet that changes any telegram document into HTML. Notice that XSLT is itself an XML application, using namespaces (an XML syntax for grouping elements by adding a name prefix) to distinguish between XSLT commands and the markup to be output. For every element type in the source document's markup language, there is a corresponding rule in the stylesheet describing how to handle it. I don't expect you to understand this code right now.
There is a whole chapter on XSLT (Chapter 7), after which it will make more sense to you.

    <xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                   version="1.0">

      <xsl:template match="telegram">
        <html>
          <head><title>telegram</title></head>
          <body>
            <div style="background-color: wheat; padding=1em; ">
              <h1>telegram</h1>
              <xsl:apply-templates/>
            </div>
          </body>
        </html>
      </xsl:template>

      <xsl:template match="from">
        <h2><xsl:text>from: </xsl:text><xsl:apply-templates/></h2>
      </xsl:template>

      <xsl:template match="to">
        <h2><xsl:text>to: </xsl:text><xsl:apply-templates/></h2>
      </xsl:template>

      <xsl:template match="subject">
        <h2><xsl:text>subj: </xsl:text><xsl:apply-templates/></h2>
      </xsl:template>

      <xsl:template match="message">
        <blockquote>
          <font style="font-family: monospace">
            <xsl:apply-templates/>
          </font>
        </blockquote>
      </xsl:template>

      <xsl:template match="emphasis">
        <i><xsl:apply-templates/></i>
      </xsl:template>

      <xsl:template match="name">
        <font color="blue"><xsl:apply-templates/></font>
      </xsl:template>

      <xsl:template match="villain">
        <font color="red"><xsl:apply-templates/></font>
      </xsl:template>

      <xsl:template match="graphic">
        <img width="100">
          <xsl:attribute name="src">
            <xsl:value-of select="@fileref"/>
          </xsl:attribute>
        </img>
      </xsl:template>

    </xsl:transform>

When applied against the document in Example 1-1, this script produces the following HTML. Figure 1-3 shows how it looks in a browser.

    <html>
      <head>
        <meta content="text/html; charset=UTF-8" http-equiv="Content-Type"/>
        <title>telegram</title>
      </head>
      <body><div style="background-color: wheat; padding=1em; ">
        <h1>telegram</h1>
        <h2>to: Sarah Bellum</h2>
        <h2>from: Colonel Timeslip</h2>
        <h2>subj: Robot-sitting instructions</h2>
        <blockquote><font style="font-family: monospace">Thanks for watching my
        robot pal <font color="blue">Zonky</font> while I'm away. He needs to be
        recharged <i>twice a day</i> and if he starts to get cranky, give him a
        quart of oil. I'll be back soon, after I've tracked down that evil
        mastermind <font color="red">Dr. Indigo Riceway</font>.
        </font></blockquote>
      </div></body>
    </html>

Transforming XML into HTML is fine for online viewing. It is not so good for print media, however.
HTML was never designed to handle the complex formatting of printed documents, with headers and footers, multiple columns, and page breaks. For that, you would want to transform into a richer format such as PDF. A direct transformation into PDF is not so easy to do, however. It requires extensive knowledge of the PDF specification, which is huge and difficult, and much of the content is compressed. A better solution is to transform your XML into an intermediate format, one that is generic and easy for humans to understand. This is XSL-FO, the style language for formatting objects.

A formatting object is an abstract representation for a portion of a formatted page. You use XSLT to map elements to formatting objects, and an XSL formatter turns the formatting objects into pages, paragraphs, graphics, and other presentational components. The process is illustrated in Figure 1-4. The source document on the left is first transformed, using an XSLT stylesheet and XSLT processor, into a formatting object tree. This intermediate file is then fed into the XSL formatter, which processes it into a presentational format, such as PDF.

The beauty of this system is that it is modular. You can use any compliant XSLT processor and XSL formatter. You don't need to know anything about the presentational format because XSL is so generic, describing layout and style attributes in the most declarative form. I will describe XSL in more detail in Chapter 8.

Finally, if stylesheets do not fit the bill, which may be the case if your source data is just too raw for direct transformation, then you may find a programming solution to be to your liking. Although XSLT has much to offer in transformation, it tends to be rather weak in some areas, such as processing character data. I often find that, despite my best efforts to stay inside the XSLT paradigm, I sometimes have to resort to writing a program that preprocesses my XML data before a transformation.
Or I may have to write a program that does the whole processing from source to presentational format. That option is always available, and we will see it in detail in Chapter 10.

Trust is important for data: trust that it hasn't been corrupted, truncated, mistyped, or left incomplete. Broken documents can confuse software, format as gibberish, and result in erroneous calculations. Documents submitted for publishing need to be complete and use only the markup that you specify. Transmitting and converting documents always entails risk that some information may be lost. XML gives you the ability to guarantee a minimal level of trust in data. There are several mechanisms.

First, there is well-formedness. Every XML parser is required to report syntax errors in markup. Missing tags, malformed tags, illegal characters, and other problems should be immediately reported to you. Consider this simple document with a few errors in it:

    <announcement>
    <TEXT>Hello, world! I'm using XML & it's a lot of fun.</Text>
    </anouncement>

When I run an XML well-formedness checker on it, here is what I get:

    > xwf t.xml
    t.xml:2: error: xmlParseEntityRef: no name
    <TEXT>Hello, world! I'm using XML & it's a lot of fun.</Text>
                                       ^
    t.xml:2: error: Opening and ending tag mismatch: TEXT and Text
    <TEXT>Hello, world! I'm using XML & it's a lot of fun.</Text>
                                                                ^
    t.xml:3: error: Opening and ending tag mismatch: announcement and anouncement
    </anouncement>
                  ^

It caught two mismatched tags and an illegal character. And not only did it tell me what was wrong, it showed me where the errors were, so I can go back and correct them more easily.

Checking if a document is well-formed can pick up a lot of problems:

- Mismatched tags, a common occurrence if you are typing in the XML by hand. The start and end tags have to match exactly in case and spelling.
- Truncated documents, which would be missing at least part of the outermost element (both start and end tags must be present).
- Illegal characters, including reserved markup delimiters like <, >, and &. There is a special syntax for complex or reserved characters, which looks like &lt; for <. If any part of that is missing, the parser will get suspicious. Parsers should also warn you if characters in a particular encoding are not correctly formed, which may indicate that the document was altered in a recent transmission. For example, transferring a file through FTP as ASCII text can sometimes strip out the high-bit characters.

The well-formedness check has its limits. The parser doesn't know if you are using the right elements in the right places. For example, you might have an XHTML document with a p element inside the head, which is illegal. To catch this kind of problem, you need to test if the document is a valid instance of XHTML. The tool for this is a validating parser.

A validating parser works by comparing a document against a set of rules called a document model. One kind of document model is a document type definition (DTD). It declares all the elements that are allowed in a document and describes in detail what kind of elements they can contain. Example 1-7 is a small DTD for telegrams.

<!ELEMENT telegram (from,to,subject,graphic?,message)>
<!ATTLIST telegram pri CDATA #IMPLIED>
<!ELEMENT from (#PCDATA)>
<!ELEMENT to (#PCDATA)>
<!ELEMENT subject (#PCDATA)>
<!ELEMENT graphic EMPTY>
<!ATTLIST graphic fileref CDATA #REQUIRED>
<!ELEMENT message (#PCDATA|emphasis|name|villain)*>
<!ELEMENT emphasis (#PCDATA)>
<!ELEMENT name (#PCDATA)>

Before submitting the telegram document to a parser, I need to add this line to the top:

<!DOCTYPE telegram SYSTEM "/location/of/dtd">

Where "/location..." is the path to the DTD file on my system. Now I can run a validating parser on the telegram document. Here's the output I get:

> xval ex1_memo.xml
ex1_memo.xml:13: validity error: No declaration for element villain
mastermind <villain>Dr. Indigo Riceway</villain>.
^
ex1_memo.xml:15: validity error: Element telegram content doesn't follow the DTD
</telegram>
^

Oops! I forgot to declare the villain element, so I'm not allowed to use it in a telegram. No problem; it's easy to add new elements. This shows how you can detect problems with structure and grammar in a document.

The most important benefit of using a DTD is that it allows you to enforce and formalize a markup language. You can make your DTD public by posting it on the web, which is what organizations like the W3C do. For instance, you can look at the DTD for "strict" XHTML version 1.0 on the W3C's web site. It's a compact and portable specification, though a little dense to read.

One limitation of DTDs is that they don't do much checking of text content. You can declare an element to contain text (called PCDATA in XML), or not, and that's as far as you can go. You cannot check whether an element that should be filled out is empty, or if it follows the wrong pattern. Say, for example, I wanted to make sure that the to element in the telegram isn't empty, so I have at least someone to give it to. With a DTD, there is no way to test that. An alternative document modeling scheme provides the solution. XML Schemas provide much more detailed control over a document, including the ability to compare text with a pattern you define. Example 1-8 shows a schema that will test a telegram for completely filled-out elements.

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="telegram">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="from" type="nonEmptyString"/>
        <xs:element name="to" type="nonEmptyString"/>
        <xs:element name="subject" type="nonEmptyString"/>
        <xs:element name="graphic" type="graphicType" minOccurs="0"/>
        <xs:element name="message" type="messageType"/>
      </xs:sequence>
      <xs:attribute name="pri" type="xs:string"/>
    </xs:complexType>
  </xs:element>
  <xs:simpleType name="nonEmptyString">
    <xs:restriction base="xs:string">
      <xs:minLength value="1"/>
    </xs:restriction>
  </xs:simpleType>
  <xs:complexType name="graphicType">
    <xs:attribute name="fileref" type="xs:string" use="required"/>
  </xs:complexType>
  <xs:complexType name="messageType" mixed="true">
    <xs:choice minOccurs="0" maxOccurs="unbounded">
      <xs:element name="emphasis" type="xs:string"/>
      <xs:element name="name" type="xs:string"/>
      <xs:element name="villain" type="xs:string"/>
    </xs:choice>
  </xs:complexType>
</xs:schema>

So there are several levels of quality assurance available in XML. You can rest assured that your data is in a good state if you've validated it.
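The book's xwf command-line checker is not shown; as an illustration of my own (not from the book), the same kind of well-formedness check can be scripted with Python's standard library, which reports the error position much like the output above:

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return (True, '') if the document parses, else (False, error message)."""
    try:
        ET.fromstring(xml_text)
        return True, ''
    except ET.ParseError as err:
        # err.position is a (line, column) tuple pointing at the problem
        line, column = err.position
        return False, '%s (line %d, column %d)' % (err, line, column)

# The same kind of error as in the broken announcement document above:
broken = '<announcement>\n<TEXT>Hello, world!</Text>\n</announcement>'
good = '<announcement><TEXT>Hello, world!</TEXT></announcement>'

ok, msg = is_well_formed(broken)   # False: mismatched TEXT/Text tags
```

A raw & in character data fails the same check, since the parser expects an entity reference to follow it.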
XML wants to be useful to the widest possible community. Things that have limited other markup languages from worldwide acceptance have been reworked. The character set, for starters, is Unicode, which supports hundreds of scripts: Latin, Nordic, Arabic, Cyrillic, Hebrew, Chinese, Mongolian, and many more. It also has ample supplies of literary and scientific symbols. You'd be hard-pressed to think of something you can't express in XML.

To be flexible, XML also supports many character encodings. The difference between a character set and a character encoding can be a little confusing. A character set is a collection of symbols, or glyphs. For example, ASCII is a set of 128 simple Roman letters, numerals, symbols, and a few device codes. A character encoding is a scheme for representing the characters numerically. All text is just a string of numbers that tell a program what symbols to render on screen. An encoding may be as simple as mapping each byte to a unique glyph. Sometimes the number of characters is so large that a different scheme is required. For example, UTF-8 is an encoding for the Unicode character set. It uses an ingenious algorithm to represent the most common characters in one byte, some less common ones in two bytes, rarer ones in three bytes, and so on. This makes the vast majority of ASCII files in existence already compatible with UTF-8, and it makes most UTF-8 documents compatible with most older, 1-byte character processing software.

There are many other encodings, such as UTF-16 and ISO-8859-1. You can specify the character encoding you want to use in the XML prologue like this:

<?xml version="1.0" encoding="iso-8859-1"?>

This goes at the very top of an XML document so it can prepare the XML parser for the text to follow. The encoding parameter and, in fact, the whole prologue, is optional. Without an explicit encoding parameter, the XML processor will assume you want UTF-8 or UTF-16, depending on the first few bytes of the file.
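The set-versus-encoding distinction is easy to see in a few lines of Python (an illustration of mine, not part of the book's text): the same character occupies a different number of bytes in different encodings, while plain ASCII text is already valid UTF-8 byte for byte.

```python
import xml.etree.ElementTree as ET

ch = '\u00e7'                      # 'c' with cedilla, code point 231 in Unicode

utf8 = ch.encode('utf-8')          # two bytes in UTF-8
latin1 = ch.encode('iso-8859-1')   # one byte in ISO-8859-1

ascii_text = 'Hello, world!'       # ASCII bytes are unchanged under UTF-8

# A parser honors the encoding declared in the prologue when given raw bytes:
doc = '<?xml version="1.0" encoding="iso-8859-1"?><p>\u00e7</p>'.encode('iso-8859-1')
root = ET.fromstring(doc)          # root.text is the cedilla character again
```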
It is inconvenient to insert exotic characters from a common terminal. XML provides a shorthand, called character entity references. If you want a letter "c" with a cedilla (ç), you can express it numerically like this: &#231; (decimal) or &#xE7; (hexadecimal), both of which use the position of the character in Unicode as an identifier.

Often, there may be one or more translations of a document. You can keep them all together using XML's built-in support for language qualifiers. In this piece of XML, two versions of the same text are kept together for convenience, differentiated by labels:

<para xml:lang="en">There is an answer.</para>
<para xml:lang="de">Es gibt eine Antwort.</para>

This same system can even be used with dialects within a language. In this case, both are English, but from different locales:

<para xml:lang="en-US">Consult the program.</para>
<para xml:lang="en-GB">Consult the programme.</para>
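Both mechanisms can be exercised with Python's standard XML parser. This is a sketch of my own, and the pick helper below is a hypothetical convenience, not a standard API; note that xml:lang lives in the reserved XML namespace, so the parser reports it under that namespace URI:

```python
import xml.etree.ElementTree as ET

# Numeric character references: both forms denote U+00E7,
# and the parser resolves them to the same character
p = ET.fromstring('<p>&#231; &#xE7;</p>')

# xml:lang is reported under the reserved XML namespace
XML_LANG = '{http://www.w3.org/XML/1998/namespace}lang'

root = ET.fromstring(
    '<doc>'
    '<para xml:lang="en">There is an answer.</para>'
    '<para xml:lang="de">Es gibt eine Antwort.</para>'
    '</doc>')

def pick(tree, lang):
    # Hypothetical helper: text of the first paragraph with a matching label
    for para in tree:
        if para.get(XML_LANG) == lang:
            return para.text
```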
http://etutorials.org/Programming/Learning+xml/Chapter+1.+Introduction/1.3+What+Can+I+Do+with+XML/
papyrus_ogcproxy

Project description

Using a proxy for the proxy

If the requests made by the OGC proxy should themselves be made through a proxy, the additional package pysocks is required. After the installation of this package, configure the proxy:

from papyrus_ogcproxy import views as ogcproxy_views
from httplib2 import ProxyInfo
import socks

ogcproxy_views.proxy_info = ProxyInfo(socks.SOCKS5, 'localhost', 1080)

With this configuration the OGC proxy will make requests through the proxy localhost:1080. For more information please refer to the documentation of PySocks and httplib2.

Set up a development environment

To set up a development environment with virtualenv, run the following commands:

$ virtualenv venv
$ venv/bin/python setup.py develop
$ venv/bin/pip install -r requirements-dev.txt

Run the tests

To run the tests:

$ venv/bin/nosetests --with-coverage

One test assumes that a proxy server is running at localhost:1080. To start a proxy run:

$ ssh -N -D 0.0.0.0:1080 localhost

0.2: Do not verify the certificate of the remote server. From @sbrunner. (We may want to revisit this later.)
0.1: First version
https://pypi.org/project/papyrus_ogcproxy/
If we want to look at a scene as if we had photographed it with a camera, we must first define some things:

- The position from which the scene is viewed, the eye position pos.
- The point we look at, the target. It is also common to define the direction in which we look instead. Technically we need a line of sight. A straight line in space is mathematically defined either by 2 points or by a point and a vector. The first part of the definition is the eye position and the 2nd is either the target or the line of sight vector los.
- The upwards direction of the camera, the up vector up.
- The vertical field of view fov_y. This means the angle between the two straight lines, starting at the eye position and ending at the bottommost point and the topmost point which can be seen simultaneously.
- The size of the viewport vp.
- The near plane near and the far plane far. The near plane is the distance from the eye position to the plane from which on the objects become visible to us. The far plane is the distance from the eye position to the plane up to which the objects of the scene are visible to us. An explanation of why the near plane and the far plane are needed will follow later.

A definition of this data in C++ and in Python may look like this:

C++

using TVec3 = std::array<float, 3>;
using TSize = std::array<int, 2>;

struct Camera
{
    TVec3 pos    {0.0, -8.0, 0.0};
    TVec3 target {0.0, 0.0, 0.0};
    TVec3 up     {0.0, 0.0, 1.0};
    float fov_y  {90.0};
    TSize vp     {800, 600};
    float near   {0.5};
    float far    {100.0};
};

Python

class Camera:
    def __init__(self):
        self.pos    = (0, -8, 0)
        self.target = (0, 0, 0)
        self.up     = (0, 0, 1)
        self.fov_y  = 90
        self.vp     = (800, 600)
        self.near   = 0.5
        self.far    = 100.0

In order to take all this information into consideration when drawing a scene, a projection matrix and a view matrix are usually used. In order to arrange the individual parts of a scene in the scene, model matrices are used. However, these are mentioned here only for the sake of completeness and will not be dealt with here.

Projection matrix: The projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport.
View matrix: The view matrix defines the eye position and the viewing direction on the scene.

Model matrix: The model matrix defines the location and the relative size of an object in the scene.

After we have filled the data structures above with the corresponding data, we have to translate them into the appropriate matrices. In the OGL compatibility mode, this can be done with the gluLookAt and gluPerspective functions that set the built-in uniforms gl_ModelViewMatrix, gl_NormalMatrix, and gl_ModelViewProjectionMatrix. In OGL 3.1 and GLSL #version 150 the built-in uniforms were removed, because the entire fixed-function matrix stack became deprecated. If we want to use OGL high level shaders with GLSL version 330 or even higher, we have to define and set the matrix uniforms ourselves (apart from the use of the GLSL compatibility keyword).

A point on the viewport is visible when it is in the native AABB (axis aligned bounding box) defined by the points (-1.0, -1.0, -1.0) and (1.0, 1.0, 1.0). This is called the Normalized Device Coordinates (NDC). A point with the coordinates (-1.0, -1.0, z) will be painted to the lower left corner of the viewport and a point with the coordinates (1.0, 1.0, z) will be painted to the upper right corner of the viewport. The Z-coordinate is mapped from the interval (-1.0, 1.0) to the interval (0.0, 1.0) and written into the Z-buffer.

All we can see from the scene is within a 4-sided pyramid. The top of the pyramid is the eye position. The 4 sides of the pyramid are defined by the field of view (fov_y) and the aspect ratio (vp[0]/vp[1]). The projection matrix has to map the points from inside the pyramid to the NDC defined by the points (-1.0, -1.0, -1.0) and (1.0, 1.0, 1.0). At this point our pyramid is infinite, it has no end in depth, and we can not map an infinite space to a finite one. For this we now need the near plane and the far plane; they transform the pyramid into a frustum by cutting the top and limiting the pyramid in the depth.
The near plane and the far plane have to be chosen in such a way that they include everything that should be visible from the scene.

The mapping from the points within a frustum to the NDC is pure mathematics and can be generally solved. The development of the formulas was often discussed and repeatedly published throughout the web. Since you can not insert a LaTeX formula into a Stack Overflow documentation, this is dispensed with here and only the completed C++ and Python source code is added. Note that the eye coordinates are defined in the right-handed coordinate system, but NDC uses the left-handed coordinate system. The projection matrix is calculated from the field of view fov_y, the aspect ratio vp[0]/vp[1], the near plane near and the far plane far.

C++

using TVec4  = std::array<float, 4>;
using TMat44 = std::array<TVec4, 4>;

TMat44 Camera::Perspective( void )
{
    float fn  = far + near;
    float f_n = far - near;
    float r   = (float)vp[0] / vp[1];
    float t   = 1.0f / tan( ToRad( fov_y ) / 2.0f );
    return TMat44{
        TVec4{ t / r, 0.0f,  0.0f,                     0.0f },
        TVec4{ 0.0f,  t,     0.0f,                     0.0f },
        TVec4{ 0.0f,  0.0f, -fn / f_n,                -1.0f },
        TVec4{ 0.0f,  0.0f, -2.0f * far * near / f_n,  0.0f } };
}

Python

def Perspective(self):
    fn  = self.far + self.near
    f_n = self.far - self.near
    r   = self.vp[0] / self.vp[1]
    t   = 1 / math.tan( math.radians( self.fov_y ) / 2 )
    return numpy.matrix( [
        [ t/r, 0, 0, 0 ],
        [ 0, t, 0, 0 ],
        [ 0, 0, -fn/f_n, -1 ],
        [ 0, 0, -2 * self.far * self.near / f_n, 0 ] ] )

In the coordinate system on the viewport, the Y-axis points upwards (0, 1, 0) and the X-axis points to the right (1, 0, 0). This results in a Z-axis which points out of the viewport ( (0, 0, 1) = cross( X-axis, Y-axis ) ). In the scene, the X axis points to the east, the Y axis to the north, and the Z axis to the top.
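To convince yourself that this matrix really maps the frustum to the NDC, you can multiply a few eye-space points through it with plain Python (a standalone check of my own, independent of OpenGL): a point on the near plane should land at Z = -1 and a point on the far plane at Z = +1 after the perspective divide by w.

```python
import math

def perspective(fov_y, aspect, near, far):
    # The same formula as the Perspective method above, as a list of rows
    fn, f_n = far + near, far - near
    t = 1.0 / math.tan(math.radians(fov_y) / 2.0)
    return [[t / aspect, 0, 0, 0],
            [0, t, 0, 0],
            [0, 0, -fn / f_n, -1],
            [0, 0, -2.0 * far * near / f_n, 0]]

def project(v, m):
    # Row vector times matrix, followed by the perspective divide by w
    clip = [sum(v[i] * m[i][k] for i in range(4)) for k in range(4)]
    return [c / clip[3] for c in clip[:3]]

prj = perspective(90.0, 800 / 600, 0.5, 100.0)

# Eye space looks down the negative Z axis
on_near = project([0.0, 0.0, -0.5, 1.0], prj)
on_far = project([0.0, 0.0, -100.0, 1.0], prj)
```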
The X axis of the viewport (1, 0, 0) matches the X axis of the scene (1, 0, 0), the Y axis of the viewport (0, 1, 0) matches the Z axis of the scene (0, 0, 1), and the Z axis of the viewport (0, 0, 1) matches the negated Y axis of the scene (0, -1, 0). Each point and each vector from the reference system of the scene must therefore be converted first into viewport coordinates. This can be done by some swapping and inverting operations on the vector components:

x  y  z
--------
1  0  0 | x' =  x
0  0  1 | y' =  z
0 -1  0 | z' = -y

To set up a view matrix, the position pos, the target target and the up vector up have to be mapped into the viewport coordinate system, as described above. This gives the 2 points p and t and the vector u, as in the following code snippet. The Z axis of the view matrix is the inverse line of sight, which is calculated by p - t. The Y axis is the up vector u. The X axis is calculated by the cross product of the Y axis and the Z axis. For orthonormalizing the view matrix, the cross product is used a second time, to calculate the Y axis from the Z axis and the X axis (of course the Gram-Schmidt orthogonalization would work just as well). At the end, all 3 axes must be normalized and the eye position pos has to be set as the origin of the view matrix.
The code below defines a matrix that exactly encapsulates the steps necessary to calculate a look at the scene:

C++

template< typename T_VEC >
TVec3 Cross( T_VEC a, T_VEC b )
{
    return { a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0] };
}

template< typename T_A, typename T_B >
float Dot( T_A a, T_B b )
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

template< typename T_VEC >
void Normalize( T_VEC & v )
{
    float len = sqrt( v[0] * v[0] + v[1] * v[1] + v[2] * v[2] );
    v[0] /= len; v[1] /= len; v[2] /= len;
}

TMat44 Camera::LookAt( void )
{
    TVec3 mz = { pos[0] - target[0], pos[1] - target[1], pos[2] - target[2] };
    Normalize( mz );
    TVec3 my = { up[0], up[1], up[2] };
    TVec3 mx = Cross( my, mz );
    Normalize( mx );
    my = Cross( mz, mx );

    TMat44 v{
        TVec4{ mx[0], my[0], mz[0], 0.0f },
        TVec4{ mx[1], my[1], mz[1], 0.0f },
        TVec4{ mx[2], my[2], mz[2], 0.0f },
        TVec4{ -Dot( mx, pos ), -Dot( my, pos ), Dot( TVec3{ -mz[0], -mz[1], -mz[2] }, pos ), 1.0f } };
    return v;
}

Python

def LookAt(self):
    mz = Normalize( (self.pos[0] - self.target[0], self.pos[1] - self.target[1], self.pos[2] - self.target[2]) )
    mx = Normalize( Cross( self.up, mz ) )
    my = Normalize( Cross( mz, mx ) )
    tx = -Dot( mx, self.pos )
    ty = -Dot( my, self.pos )
    tz = Dot( (-mz[0], -mz[1], -mz[2]), self.pos )
    return numpy.matrix( [
        [ mx[0], my[0], mz[0], 0 ],
        [ mx[1], my[1], mz[1], 0 ],
        [ mx[2], my[2], mz[2], 0 ],
        [ tx, ty, tz, 1 ] ] )

The matrices are finally written to uniforms and used in the vertex shader to transform the model positions.

Vertex shader

In the vertex shader, one transformation after the other is performed.

#version 400

layout (location = 0) in vec3 inPos;
layout (location = 1) in vec3 inCol;

out vec3 vertCol;

uniform mat4 u_projectionMat44;
uniform mat4 u_viewMat44;
uniform mat4 u_modelMat44;

void main()
{
    vertCol = inCol;
    vec4 modelPos = u_modelMat44 * vec4( inPos, 1.0 );
    vec4 viewPos  = u_viewMat44 * modelPos;
    gl_Position   = u_projectionMat44 * viewPos;
}

Fragment shader

The fragment shader is listed here only for the sake of completeness. The work was done before.

#version 400

in vec3 vertCol;
out vec4 fragColor;

void main()
{
    fragColor = vec4( vertCol, 1.0 );
}

After the shaders are compiled and linked, the matrices can be bound to the uniform variables.

C++

int shaderProg = ;
Camera camera;
// ...
int prjMatLocation  = glGetUniformLocation( shaderProg, "u_projectionMat44" );
int viewMatLocation = glGetUniformLocation( shaderProg, "u_viewMat44" );
glUniformMatrix4fv( prjMatLocation, 1, GL_FALSE, camera.Perspective().data()->data() );
glUniformMatrix4fv( viewMatLocation, 1, GL_FALSE, camera.LookAt().data()->data() );

Python

shaderProg =
camera = Camera()
# ...

prjMatLocation  = glGetUniformLocation( shaderProg, b"u_projectionMat44" )
viewMatLocation = glGetUniformLocation( shaderProg, b"u_viewMat44" )
glUniformMatrix4fv( prjMatLocation, 1, GL_FALSE, camera.Perspective() )
glUniformMatrix4fv( viewMatLocation, 1, GL_FALSE, camera.LookAt() )

In addition, I have added the entire code dump of a Python example (to add the C++ example would unfortunately exceed the limit of 30000 characters). In this example, the camera moves elliptically around a tetrahedron located at a focal point of the ellipse. The viewing direction is always directed to the tetrahedron.

Python

To run the Python script, NumPy must be installed.
from OpenGL.GL import *
from OpenGL.GLUT import *
from OpenGL.GLU import *
import numpy
from time import time
import math
import sys

def Cross( a, b ):
    return ( a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0], 0.0 )

def Dot( a, b ):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def Normalize( v ):
    len = math.sqrt( v[0] * v[0] + v[1] * v[1] + v[2] * v[2] )
    return (v[0] / len, v[1] / len, v[2] / len)

class Camera:
    def __init__(self):
        self.pos = (0, -8, 0)
        self.target = (0, 0, 0)
        self.up = (0, 0, 1)
        self.fov_y = 90
        self.vp = (800, 600)
        self.near = 0.5
        self.far = 100.0

    def Perspective(self):
        fn, f_n = self.far + self.near, self.far - self.near
        r, t = self.vp[0] / self.vp[1], 1 / math.tan( math.radians( self.fov_y ) / 2 )
        return numpy.matrix( [ [t/r,0,0,0], [0,t,0,0], [0,0,-fn/f_n,-1], [0,0,-2*self.far*self.near/f_n,0] ] )

    def LookAt(self):
        mz = Normalize( (self.pos[0] - self.target[0], self.pos[1] - self.target[1], self.pos[2] - self.target[2]) )
        mx = Normalize( Cross( self.up, mz ) )
        my = Normalize( Cross( mz, mx ) )
        tx = -Dot( mx, self.pos )
        ty = -Dot( my, self.pos )
        tz = Dot( (-mz[0], -mz[1], -mz[2]), self.pos )
        return numpy.matrix( [ [mx[0], my[0], mz[0], 0], [mx[1], my[1], mz[1], 0], [mx[2], my[2], mz[2], 0], [tx, ty, tz, 1] ] )

# shader program object
class ShaderProgram:
    def __init__( self, shaderList, uniformNames ):
        shaderObjs = []
        for sh_info in shaderList:
            shaderObjs.append( self.CompileShader(sh_info[0], sh_info[1] ) )
        self.LinkProgram( shaderObjs )
        self.__uniformLocation = {}
        for name in uniformNames:
            self.__uniformLocation[name] = glGetUniformLocation( self.__prog, name )
            print( "uniform %-30s at location %d" % (name, self.__uniformLocation[name]) )

    def Use(self):
        glUseProgram( self.__prog )

    def SetUniformMat44( self, name, mat ):
        glUniformMatrix4fv( self.__uniformLocation[name], 1, GL_FALSE, mat )

    # read shader program and compile shader
    def CompileShader(self, sourceFileName, shaderStage):
        with open( sourceFileName, 'r' ) as sourceFile:
            sourceCode = sourceFile.read()
        nameMap = { GL_VERTEX_SHADER: 'vertex', GL_FRAGMENT_SHADER: 'fragment' }
        print( '\n%s shader code:' % nameMap.get( shaderStage, '' ) )
        print( sourceCode )
        shaderObj = glCreateShader( shaderStage )
        glShaderSource( shaderObj, sourceCode )
        glCompileShader( shaderObj )
        result = glGetShaderiv( shaderObj, GL_COMPILE_STATUS )
        if not (result):
            print( glGetShaderInfoLog( shaderObj )
            )
            sys.exit()
        return shaderObj

    # link shader objects to shader program
    def LinkProgram(self, shaderObjs):
        self.__prog = glCreateProgram()
        for shObj in shaderObjs:
            glAttachShader( self.__prog, shObj )
        glLinkProgram( self.__prog )
        result = glGetProgramiv( self.__prog, GL_LINK_STATUS )
        if not ( result ):
            print( 'link error:' )
            print( glGetProgramInfoLog( self.__prog ) )
            sys.exit()

# vertex array object
class VAObject:
    def __init__( self, dataArrays, tetIndices ):
        self.__obj = glGenVertexArrays( 1 )
        self.__noOfIndices = len( tetIndices )
        self.__indexArr = numpy.array( tetIndices, dtype='uint' )
        noOfBuffers = len( dataArrays )
        buffers = glGenBuffers( noOfBuffers )
        glBindVertexArray( self.__obj )
        for i_buffer in range( 0, noOfBuffers ):
            vertexSize, dataArr = dataArrays[i_buffer]
            glBindBuffer( GL_ARRAY_BUFFER, buffers[i_buffer] )
            glBufferData( GL_ARRAY_BUFFER, numpy.array( dataArr, dtype='float32' ), GL_STATIC_DRAW )
            glEnableVertexAttribArray( i_buffer )
            glVertexAttribPointer( i_buffer, vertexSize, GL_FLOAT, GL_FALSE, 0, None )

    def Draw(self):
        glBindVertexArray( self.__obj )
        glDrawElements( GL_TRIANGLES, self.__noOfIndices, GL_UNSIGNED_INT, self.__indexArr )

# glut window
class Window:
    def __init__( self, cx, cy ):
        self.__vpsize = ( cx, cy )
        glutInitDisplayMode( GLUT_RGBA | GLUT_DOUBLE | GLUT_ALPHA | GLUT_DEPTH )
        glutInitWindowPosition( 0, 0 )
        glutInitWindowSize( self.__vpsize[0], self.__vpsize[1] )
        self.__id = glutCreateWindow( b'OGL window' )
        glutDisplayFunc( self.OnDraw )
        glutIdleFunc( self.OnDraw )

    def Run( self ):
        self.__startTime = time()
        glutMainLoop()

    # draw event
    def OnDraw(self):
        self.__vpsize = ( glutGet( GLUT_WINDOW_WIDTH ), glutGet( GLUT_WINDOW_HEIGHT ) )
        currentTime = time()
        # set up camera
        camera = Camera()
        camera.vp = self.__vpsize
        camera.pos = self.EllipticalPosition( 7, 4, self.CalcAng( currentTime, 10 ) )
        # set up attributes and shader program
        glEnable( GL_DEPTH_TEST )
        glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT )
        prog.Use()
        prog.SetUniformMat44(
b"u_projectionMat44", camera.Perspective() ) prog.SetUniformMat44( b"u_viewMat44", camera.LookAt() ) # draw object modelMat = numpy.matrix(numpy.identity(4), copy=False, dtype='float32') prog.SetUniformMat44( b"u_modelMat44", modelMat ) tetVAO.Draw() glutSwapBuffers() def Fract( self, val ): return val - math.trunc(val) def CalcAng( self, currentTime, intervall ): return self.Fract( (currentTime - self.__startTime) / intervall ) * 2.0 * math.pi def CalcMove( self, currentTime, intervall, range ): pos = self.Fract( (currentTime - self.__startTime) / intervall ) * 2.0 pos = pos if pos < 1.0 else (2.0-pos) return range[0] + (range[1] - range[0]) * pos def EllipticalPosition( self, a, b, angRag ): a_b = a * a - b * b ea = 0 if (a_b <= 0) else math.sqrt( a_b ) eb = 0 if (a_b >= 0) else math.sqrt( -a_b ) return ( a * math.sin( angRag ) - ea, b * math.cos( angRag ) - eb, 0 ) # initialize glut glutInit() # create window wnd = Window( 800, 600 ) # define tetrahedron vertex array opject sin120 = 0.8660254 tetVAO = VAObject( [ (3, [ 0.0, 0.0, 1.0, 0.0, -sin120, -0.5, sin120 * sin120, 0.5 * sin120, -0.5, -sin120 * sin120, 0.5 * sin120, -0.5 ]), (3, [ 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, ]) ], [ 0, 1, 2, 0, 2, 3, 0, 3, 1, 1, 3, 2 ] ) # load, compile and link shader prog = ShaderProgram( [ ('python/ogl4camera/camera.vert', GL_VERTEX_SHADER), ('python/ogl4camera/camera.frag', GL_FRAGMENT_SHADER) ], [b"u_projectionMat44", b"u_viewMat44", b"u_modelMat44"] ) # start main loop wnd.Run()
https://riptutorial.com/opengl/example/32043/implement-a-camera-in-ogl-4-0-glsl-400
I really love crafting web applications with the Ruby on Rails framework. Creating small maintainable applications is not a problem at all – your code is beautifully structured and everything fits in. However, as your app grows, it may become significantly harder to organize parts of the code so that it does not turn into something monstrous and unmanageable. This, of course, applies to internationalization as well. In this article, you will learn about seven I18n best practices, including advice that will help you organize your translations better and take full advantage of Rails' power. I am assuming that you already know the basics behind internationalization and localization in Rails, but if you wish to refresh your knowledge, take a look at the official guide or our article describing the topic thoroughly. Happy reading!

Split Translations

Large applications may have hundreds of translation keys. If they are stored in a single file it becomes really hard to manage them, therefore it's a good idea to split your messages into various files stored in separate folders. For example, you may have a folder named models storing translations for the models' attributes, forms to translate form-related stuff, etc.:

- locales
  - models
    - en.yml
    - ru.yml
  - forms
    - en.yml
    - ru.yml

However, by default this is not going to work, as Rails does not load translations from the nested directories. To fix this, you'll need to add the following line of code inside your config/application.rb file:

config.i18n.load_path += Dir[Rails.root.join('config', 'locales', '**', '*.{rb,yml}')]

load_path is a setting to announce your custom translation files, and it should do the trick.
Take Advantage of Nesting

It is not advised to store all your translation messages on the same level of nesting (using flat naming). For example, this is not a recommended practice:

- submit: 'Submit'
- log_out: 'Log Out'
- blog: 'Browse Blog'
- errors: 'Errors were found:'

As you see, all the messages here are messed up – they relate to different pieces of the application but still they are mixed together. It is much better to introduce parent keys to nest translations of the same type. For example, you might have something like this:

- forms:
  - submit: 'Submit'
  - errors: 'Errors were found'
- main_menu:
  - log_out: 'Log Out'
  - blog: 'Browse Blog'

This way the messages are grouped and it is easier to manage them. By the way, when translating Rails models you must follow this principle, as attributes' names should be nested properly:

en:
  activerecord:
    attributes:
      category:
        name: 'Name'

Give Keys Sensible Names

This sounds like obvious advice, but still don't forget about it. Your translation keys should be named in such a manner that it is easy to understand their purpose. This becomes even more important for large applications. I really recommend browsing the translations for the Devise gem as a nice example of how to organize and name your keys.

Employ "Lazy" Lookups

One cool thing about I18n in Rails is that when naming and nesting your keys properly, you can write less code inside the views and controllers. How is that possible? Well, let's say you have a view called about.html.erb stored inside the app/views/pages folder.
Inside you wish to display the page's title:

<%= t('pages.about.title') %>

However, this code can be as simple as:

<%= t(:title) %>

In order for this to work, you need to nest the title key under the pages.about scope:

pages:
  about:
    title: 'About us'

This works because the scope is written properly: it contains the folder's and the view's name (pages and about respectively). The same approach works for controllers, so having the following translation in place:

orders:
  create:
    success: 'Your order is created!'

You may create a flash message easily:

class OrdersController < ApplicationController
  def create
    # ...
    flash[:success] = t(:success)
  end
end

It's really convenient, so don't be lazy to employ the "lazy" lookups!

Enforce Available Locales

By default each Rails application uses the English locale and does not explicitly say which languages are supported. This is okay for applications that are not meant to be localized, but if you are planning to support multiple locales, then list them inside your config/application.rb file:

config.i18n.available_locales = [:en, :de, :ru]

This is convenient, because later you can take advantage of this array (fetch it using the I18n.available_locales method). For instance, this array may be employed to generate the proper routing scopes based on the languages' codes:

scope "(:locale)", locale: /#{I18n.available_locales.join("|")}/ do

So you will get routes like /en/blog or /ru/shop. Even if the available locales change, the routes file will not require any changes. Moreover, this setting can come in handy when implementing a locale-switching feature. Specifically, it can be used to check whether a user is requesting a supported locale and fall back to the default one if not.
It does not really matter how exactly this feature is built, but suppose you have a method to extract locale data:

def extract_locale
  parsed_locale = request.subdomains.first
end

You may then easily check whether the requested locale is supported or not:

def extract_locale
  parsed_locale = request.subdomains.first
  I18n.available_locales.map(&:to_s).include?(parsed_locale) ? parsed_locale : nil
end

available_locales returns an array of symbols, so here we convert them to strings and then check whether the requested language code is among them. You can read more on this technique in our Setting and Managing Locales in Rails I18n article.

Utilize Localized Views

Some developers tend to place all translations inside the YAML files regardless of their length and complexity. In some cases, however, that's not really convenient. Suppose you have a page that looks totally different depending on the chosen language. Of course, you might do something like this:

<h1><%= t(:welcome) %></h1>
<p><%= t(:special_offer) %></p>
<p><%= t(:about) %></p>

Then you would have to define these keys, but the corresponding messages are too large:

welcome: 'Welcome!'
special_offer: 'Here is our cool special offer! And then more text here...'
about: 'Long text goes here.... and more text... even more...'

Alternatively, you may take advantage of HTML translations, but that's not going to help a lot – you still have long messages that are hard to maintain. What you can do instead is stick with localized views, which allow you to store totally different content for each locale. They are employed by simply adding the locale's code to the view's name, for example, about.ru.html.erb and about.en.html.erb. Rails will render the proper view automatically depending on the value returned by I18n.locale.

Take Advantage of Variables

Another important thing some novice developers tend to forget about is the fact that you can pass variables to your translations in Rails.
Suppose, for example, you wish to display how many new messages the user has received. Of course, you may simply employ string interpolation like this:

<%= "#{t(:message_count)}#{@messages.count}" %>

But even though this piece of code is small, it looks overly complex and too ugly. Instead, let's allow message_count to accept a variable:

message_count: "%{count} new messages"

Then simply pass a hash to the t method as the second argument:

<%= t(:message_count, count: @messages.count) %>

Clean and simple. You can further extend this example by introducing pluralization rules that also rely on variables.

Helper Methods

Rails offers us a bunch of powerful helper methods that can be used to display datetime and month select boxes, convert numbers to currency, and display distance of time in words. Some of them work as magic and can really save you from writing many lines of code every time. However, be warned that some of these methods are somewhat expensive in terms of processing time. distance_of_time_in_words is one such method – it says how much time has passed since the provided datetime. If you are planning to use this method extensively in some view (the typical example is a comments block saying how old each comment is), then maybe it is better to reconsider and put it away. You might think about employing fragment caching here, but then you'll need to constantly expire it. Time runs fast and the phrase "the comment was published X days ago" should be constantly updated. Another solution to this problem is performing the calculation on the client side. For example, there is a great library called Moment.js that supports this feature (and many others).

So, in this article we have discussed some common best practices advised when working with I18n in Rails. I hope it will help you craft maintainable and beautiful applications!
Still, if you think that I have missed something, do not hesitate to share your opinion in the comments. Also, if you wish to learn more about internationalizing Rails applications, you may be interested in our article The Last Rails I18n Guide You’ll Ever Need. That’s all for today, folks. I thank you for staying with me and happy coding!
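As a standalone footnote (my illustration, not part of the article itself): plain Ruby's String#% understands the same %{...} placeholder style used in Rails translation files, so the interpolation idea from the Variables section is easy to try outside of Rails:

```ruby
# Plain-Ruby sketch of %{...} interpolation, the same placeholder style
# used in translation YAML entries like "%{count} new messages".
template = "%{count} new messages"

# String#% accepts a hash whose symbol keys fill the named placeholders.
puts template % { count: 3 }   # => 3 new messages
puts template % { count: 0 }   # => 0 new messages
```

The same hash-based substitution is what the t helper performs under the hood when you pass extra keyword arguments.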
https://phrase.com/blog/posts/rails-i18n-best-practices/
The new Leo tree data model has been evolving during the last few weeks. Some things have changed since my last post. I guess now may be the right time to explain how it works and why it was designed the way it is. But before I start to explain the model, let me give a short summary of the features the new model supports.

- super fast tree drawing
- selecting nodes
- moving nodes left, right, up and down, as well as moving from one position in the outline to any other position
- clone node
- copy and paste node/subtree
- delete node/subtree
- promote/demote node
- reading XML leo files
- reading external files
- writing external files
- reading at-auto python files
- writing at-auto files
- unlimited universal undo/redo functionality (a single method entry handles any kind of tree/node modification, including changes to body and/or headline)

All these features are achieved in one Python module with some 1669 LOC (2299 lines with comments included). All these commands for modification of the outline run much faster than the analogous Leo commands. For example, executing the promote/demote pair of commands takes Leo on average 113.6ms (measured 100 times with one clone node and three following sibling nodes). The new data model performs this in 3.7ms, which is about 30 times faster than Leo. Comparing all other outline commands gives similar speed ratios. Also, all outline traversals are faster than Leo's iterators. For example, executing:

    import timeit

    def tf():
        res = []
        for p in c.all_positions():
            res.append(('----'*p.level()) + p.h)
        return '\n'.join(res)

    t1 = timeit.timeit(tf, number=100)*1000/100
    g.es('Average: %.1fms\n'%t1)

inside LeoPyRef.leo prints Average: 66.8ms on my machine, while computing the same value from the new model prints Average: 5.4ms. New model traversal in this case is 12 times faster.

The new data model is very well tested using the excellent hypothesis framework.
Testing involves creating 90 random outlines and performing a series of 1000 random tree operations, checking certain invariants after every operation. This kind of testing revealed several bugs that would have been very hard to find using hand-written test cases. An interesting way to understand how the code works is to comment out any line and run the tests again. Failure is almost certain, and in case of failure hypothesis will display a minimal sequence of commands that leads to the failure.

How can Leo benefit best from this code

There may be several ways for Leo to benefit from this code. It may be used as an inspiration for changing Leo's VNode and Position classes, and possibly some others as well, like undoer, importCommands, leoTree, ... But there is no guarantee that it would be possible to implement all these features by changing Leo's original classes in a backward compatible way. Plus, this would also mean lots of additional work.

A better way would be to add this single module in leo/core/ and use it internally. Redirect top-level commands (like cut/copy/paste, clone, delete-outline, move node, expand, collapse, promote, demote, undo, redo) to delegate their work to the new data model. Also, selecting nodes and drawing the tree should be delegated to the new model. All these changes can be done with minimal changes to the current Leo. For example:

    def clone_node(self):
        if g.USE_NEW_MODEL:
            return self.ltm.clone_node()
        ... the rest of the method remains unchanged

The same approach can be used for all other methods. Finally, the execute-script command can be changed to do the following:

    if g.USE_NEW_MODEL:
        c.ltm.store_to_vnode(c.hiddenRootNode)
        old_USE_NEW_MODEL = True
        g.USE_NEW_MODEL = False
    else:
        old_USE_NEW_MODEL = False
    try:
        ... old body of execute script method
    finally:
        g.USE_NEW_MODEL = old_USE_NEW_MODEL
        ltm.restore_from_vnode(c.hiddenRootNode)

All scripts will continue to work as they used to, without any change.
If, one day, the LeoTreeModel class gets a C or Rust implementation, the only thing Leo should do to use the power of such an extension is to change one import, so that the LeoTreeModel class comes from that extension and not from the ordinary Python module.

Helper data classes

There are two new data classes that the new model uses. While prototyping I was using tuples and lists for holding the necessary data. In the end I converted those tuples and lists into specialized data classes.

At the lowest level there is the NData class, which contains data related to a single node in the outline. It has at least the following five fields:

- h - headline
- b - body
- children - a list of gnx-es of children nodes
- parents - a list of gnx-es of parent nodes
- size - the total number of cells this node and its subtree occupy in the outline lists. Basically it is the same value that you would get from the Leo position class with the following code: len(list(p.self_and_subtree())).

One could easily imagine adding more fields to this class. For example, unknownAttributes or u is a good candidate. But for the sake of simplicity, I left this for later.

The other data class is a named tuple LTMData, which contains all relevant data for the outline. It has the following fields:

- positions is a list of unique doubles. It could be a list of unique items of any type. I have used doubles and random.random() to populate it, but really it can be anything hashable and immutable. In some testing that I have done using the hypothesis framework, it turned out that there was some interference between hypothesis using the random module and the model itself, which made some tests fail. To avoid this interference I have changed the usage of random.random() to a function that calculates (1 - 1/x), where x is an integer increased every time a new position is generated. And all tests have passed with this.

- nodes is a list of gnx-es of all nodes in outline order.
It is the same value you would get from Leo using: list(x.gnx for x in c.all_positions()).

- levels is a bytearray containing the level of each node in outline order. The same thing you would get from: bytearray(x.level() for x in c.all_positions()). This restricts the depth of the outline to 255 levels, but I don't expect that anyone would mind this restriction. Using a bytearray here is essential, because it implements the rfind method for searching backwards, and plain Python lists do not implement this method. It is a pity that Python lists do not have this method. One possible solution would be to store values in reversed order and then use the index method for searching the list. In case the restriction to 255 levels should be relaxed, this is something that must be done.

- attrs is a dict: keys are gnx-es and values are NData instances. Perhaps the name attrs is not very intuitive. It started as a place to store attributes of nodes such as headline, body, ... Later all these attributes were joined into a single NData instance. It may be better if this field was named nodes, with nodes renamed to gnxes, gnxlist or even outline.

- expanded is a set of positions that should be drawn as expanded.

- marked is a set of gnx-es of all marked nodes.

One thing to keep in mind is that in the new Leo tree data model, the root node is considered to be a part of the outline. It is the first item in nodes, positions and levels, and its size is equal to the length of these lists. In the above comprehension expressions this rule is not always obeyed; those expressions were shown just as an illustration.

In the previous version of LTMData there were a few other fields, like parPos, which was a list of positions. For each item in the positions list, the item at the same index in parPos was the position of the parent node. Later I noticed that this position can be deduced from the levels list. The parent position is at the nearest preceding index whose level is one less than this node's level.
For example, let i be the index of any given node in the outline. The index of its parent pi must satisfy the following equation:

    pi = max { j : j < i and levels[j] == levels[i] - 1 }

which, thanks to the bytearray, is just levels.rfind(levels[i] - 1, 0, i). There is also a special case: when levels[i] <= 1, the index of the parent node is zero, i.e. the parent node is in fact the root node, which has a fixed index, so we don't need to search for it.

The other field that was removed is gnx2pos. It was used to keep track of all occurrences of a single node in the outline. Keys were gnx-es and values were lists of positions. However, later I realized that in every usage of this field, positions were looked up in the list of positions by the index method of the list class. So there was no real benefit from this index. The same value is retrieved just by searching for the gnx in the list of nodes.

Here is how an experimental outline is represented inside an LTMData object. To make this image more readable, actual values of gnx-es are replaced by single upper case letters and positions are represented as P<index> instead of their float values. Also, for each node, after the headline, all positions the node is located at are listed, followed by the children and parents lists.

For example, let's look at the node in position P24. From the above picture we can see its headline is clone 1, its gnx is C, it is at level 3, and its size is 2 (self_and_subtree has a total of 2 nodes). Reading the text to the right of the headline we can see that this same node can be found at positions P2, P6, P8, P24 and P26. It has a single child node with the gnx K, and its parents are the nodes with the gnx-es B, B and N.

In the following post, I'll try to explain how some of the tree operations work.
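As an aside, the parent lookup from the levels list can be sketched in a few lines of standalone Python (my illustration, not the module's actual code; the levels values are a made-up outline):

```python
# Sketch: find the parent of node i using only the levels bytearray.
# Index 0 is the hidden root node at level 0; the rest is a small
# invented outline of six nodes.
levels = bytearray([0, 1, 2, 2, 1, 2, 3])

def parent_index(levels, i):
    if levels[i] <= 1:
        return 0  # the root node has a fixed index, no search needed
    # bytearray.rfind searches backwards for the nearest preceding
    # index holding a level one less than levels[i]
    return levels.rfind(levels[i] - 1, 0, i)

print(parent_index(levels, 2))  # 1: level-2 node under the node at index 1
print(parent_index(levels, 6))  # 5: level-3 node under the node at index 5
print(parent_index(levels, 4))  # 0: level-1 node hangs off the root
```

Note that rfind accepts an integer subsequence here, which is exactly why a bytearray works where a plain list would not.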
https://computingart.net/leo-tree-model-4.html
The code for this series is based on JSF 2 running in an enterprise container, such as GlassFish or Resin. The last section in this article is a step-by-step tutorial on installing and running the article's code with GlassFish.

A JSF autocomplete custom component

Made famous by Google's search field, autocomplete fields (also known as suggest boxes) are a staple of many Web applications. They are also a typical use case for Ajax. Autocomplete fields come with most Ajax frameworks, such as Scriptaculous and JQuery, as Figure 1 — a look at AjaxDaddy's collection of autocomplete components (see Resources) — attests:

Figure 1. AjaxDaddy autocomplete components

This article will explore one way to implement an Ajax-enabled autocomplete field with JSF. You'll see how to implement the autocomplete field shown in Figure 2, which shows a short list of fictional countries (culled from Wikipedia's "List of fictional countries" article; see Resources):

Figure 2. The autocomplete field

Figure 3 and Figure 4 show the autocomplete field in action. In Figure 3, when Al is typed into the field, the country list is reduced to names that start with those two letters:

Figure 3. Completion items that start with Al

Similarly, Figure 4 shows the results when Bar is typed into the field. The list shows only country names that begin with Bar:

Figure 4. Completion items that start with Bar

Using the autocomplete component

The Locations autocomplete field is a JSF composite component, and it is used in a facelet, as shown in Listing 1:

Listing 1. The facelet

The facelet in Listing 1 uses the autoComplete composite component by declaring an appropriate namespace — util — and using the component's associated tag, <util:autoComplete>. Notice the two attributes for the <util:autoComplete> tag in Listing 1:

- value is the country property of a managed bean named user.
- completionItems is the initial set of completion items for the field.
The User class is a simple managed bean, obviously contrived for just this occasion. Its code is shown in Listing 2:

Listing 2. The User class

Notice the @Named annotation, which, along with @SessionScoped, instantiates a managed bean named user and places it in session scope the first time JSF encounters #{user.country} in a facelet. This application's only reference to #{user.country} takes place in Listing 1, where I specify the country property of the user managed bean as the value for the <util:autoComplete> component.

Listing 3 shows the AutoComplete class, which defines the countries property that I specified as the autocomplete component's list of completion items:

Listing 3. The completion items

That's all there is to using the autocomplete component. Now you'll see how it works.

How the autocomplete component works

The autocomplete component is a JSF 2 composite component, so, like most composite components, it is implemented in an XHTML file. The component consists of a text input and a listbox, and some JavaScript. Initially, the listbox's style is display: none, which makes the listbox invisible. The autocomplete component responds to three events:

- keyup events in the text input
- blur (losing focus) events in the text input
- change (selection) events in the listbox

When the user types in the text input, the autocomplete component calls a JavaScript function for every keyup event. That function coalesces keystroke events to make no more than one Ajax call every 350ms. So, in response to keyup events in the text input, the autocomplete component makes an Ajax call, at most every 350ms, to the server. (All of that is to prevent fast typists from flooding the server with Ajax calls. In practice, coalescing events may be overrated in this case, but it affords an opportunity to illustrate coalescing events in JavaScript, which in general is a useful tool.)
When the user selects an item from the listbox, the autocomplete component makes another Ajax call to the server. Both the text input and the listbox have listeners attached to them that do most of the meaningful work on the server during Ajax calls. In response to keyup events, the text input's listener updates the listbox's completion items. In response to listbox selection events, the listbox's listener copies the listbox's selected item into the text input and hides the listbox.

Now that you have a good idea of how the autocomplete component works, you're ready to take a look at its implementation.

Implementing the autocomplete component

The autocomplete component implementation consists of these artifacts:

- A composite component
- A handful of JavaScript functions
- A value-change listener that updates completion items

I'll start with the composite component in Listing 4:

Listing 4. The autoComplete component

Three things are going on in Listing 4's implementation section. First, the component makes Ajax calls in response to keyup events in the text input, and it hides the listbox when the text input loses focus, by virtue of JavaScript functions assigned to keyup and blur events in the text input. Second, the component makes Ajax calls in response to change events in the listbox with JSF 2's <f:ajax> tag. When the user makes a selection from the listbox, JSF makes an Ajax call to the server and updates the text input's value when the Ajax call returns. Third, both the text input and the listbox have value-change listener methods attached to them, so when JSF makes Ajax calls in response to the user typing in the text input, JSF invokes the text input's value-change listener on the server. When the user selects an item from the listbox, JSF makes an Ajax call to the server and invokes the listbox's value-change listener.

Listing 5 shows the JavaScript used by the autocomplete component:

Listing 5. The JavaScript

The JavaScript in Listing 5 consists of three functions that I placed inside a namespace named com.corejsf. I implemented the namespace (which is technically a JavaScript literal object) to prevent someone from accidentally (or not) clobbering any of my three functions. If those functions were not tucked away inside com.corejsf, someone could implement their own updateCompletionItems function, thereby replacing my implementation with theirs. It's feasible that some JavaScript library might implement a function named updateCompletionItems, but it's a pretty good bet nobody's going to come up with com.corejsf.updateCompletionItems. (In retrospect, dropping the com and going with corejsf.updateCompletionItems probably would've sufficed, but sometimes it's easy to get carried away.)

So, what do the functions do? The updateCompletionItems() function makes an Ajax request to the server — by calling JSF's jsf.ajax.request() function — asking only that JSF render the listbox component when the Ajax call returns. The updateCompletionItems() function also passes two extra parameters to jsf.ajax.request(): the x and y coordinates of the upper left-hand corner of the listbox. The jsf.ajax.request() function turns those function parameters into request parameters that it sends with the Ajax call.

JSF calls the inputLostFocus() function when the text input loses focus. That function simply hides the listbox, using Prototype's Element object.

Both updateCompletionItems() and inputLostFocus() store their functionality in a function and then schedule that function to execute in 350ms and 200ms, respectively. In other words, each function has a job to do, but it delays that job for either 350ms or 200ms. The text input delays after a keyup event, so that the updateCompletionItems() method sends an Ajax request once per 350ms, at most. The idea is that if the user is an (extremely!) fast typist, you don't want to flood the server with Ajax calls.
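As a side note, the coalescing idea described above — delay the work and cancel any previously scheduled run so that a burst of events triggers at most one call — can be sketched in a few lines of standalone JavaScript. This is my illustration of the general technique (a debounce), not the article's actual listing:

```javascript
// Generic event-coalescing helper: wraps fn so that a rapid burst of
// calls results in a single execution, delayMs after the last call.
function coalesce(fn, delayMs) {
  let pending = null;
  return function (...args) {
    if (pending !== null) clearTimeout(pending); // drop the queued run
    pending = setTimeout(function () {
      pending = null;
      fn(...args); // run with the most recent arguments
    }, delayMs);
  };
}

// Simulate three quick keyup events; only the last one fires a "request".
const sendAjax = coalesce(q => console.log('request for: ' + q), 350);
sendAjax('A');
sendAjax('Al');
sendAjax('Alb'); // after 350ms prints: request for: Alb
```

The article's listing goes further by routing the call through jsf.ajax.request(), but the timer bookkeeping is the same shape.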
The inputLostFocus() function, called when the text input loses focus, delays its work for 200ms. That delay is needed because the value will be copied out of the listbox when the Ajax call returns, and the listbox must be visible for that to work.

Finally, notice the getListBoxId() function. That helper function obtains the client identifier of the listbox from the client identifier of the text input. The function is able to do that because it's in cahoots with the autoComplete component in Listing 4. The autoComplete component assigns input and listbox as the component identifiers for the text input and listbox, respectively, so the getListBoxId() function merely chops off input and appends listbox to get from the text input's client identifier to the listbox's.

Listing 6 shows the implementation of the listener that pulls everything together:

Listing 6. The listener

JSF invokes the listener's valueChanged() method during Ajax calls in response to keyup events in the text input. That method creates a new set of completion items and then sets the listbox's items to this new set. The method also sets style attributes for the listbox that determine whether the listbox is displayed when the Ajax call returns. The setListboxStyle() method in Listing 6 uses the x and y request parameter values that I specified when I made the Ajax call in Listing 5.

JSF invokes the listener's only other public method, completionItemSelected(), during Ajax calls in response to selection events in the listbox. That method copies the listbox's value into the text input and hides the listbox.

Notice that the valueChanged() method also stores the original completion items in an attribute of the listbox. Because each autoComplete component maintains its own list of completion items, multiple autoComplete components can peacefully coexist in the same page without stomping on one another's completion items.
Running the examples with GlassFish and Eclipse

The code in this series of articles is best suited to a Java EE 6 container, such as GlassFish or Resin. You can get things to work with a servlet container, such as Tomcat, but note the "get things to work" part. Since my goal is to focus on your getting the full potential out of JSF 2 and Java EE 6, and not on configuration issues, I will stick to GlassFish v3. For the rest of this article, I'll show you how to run this article's sample code using GlassFish v3 and Eclipse. The instructions here will also suffice for the code from the rest of this series of articles. (I'm using Eclipse 3.4.1, so the closer you can match that when running the examples, the better.)

Figure 5 shows the directory structure that you'll find in the code for this article. (See Download to get the code now.) There's an autoComplete directory containing the application and an empty workspace directory for Eclipse.

Figure 5. Source code in this article's download

Now that you have the code, you're almost ready to get it running. First, you need the GlassFish Eclipse plug-in, which you can download at, shown in Figure 6:

Figure 6. The GlassFish Eclipse plug-in

Follow the installation instructions for the plug-in, and you're ready to go. To install the code for this article, create a Dynamic Web project in Eclipse. You can do that from the File > New menu: if you don't see Dynamic Web project there, select Other, and in the ensuing dialog open the Web folder and select Dynamic Web Project, as shown in Figure 7:

Figure 7. Creating a Dynamic Web project

The next step is to configure the project. Make the following selections on the first screen of the New Dynamic Web Project wizard, as shown in Figure 8:

- Under Project contents, leave the Use default box unchecked. In the Directory field, enter (or browse to) the sample code's autoComplete directory.
- For Target Runtime, select GlassFish v3 Java EE 6.
- For Dynamic Web Module version, enter 2.5.
- For Configuration, select Default Configuration for GlassFish v3 Java EE 6.
- Under EAR Membership, leave the Add project to an EAR box unchecked, and enter autoCompleteEAR in the EAR Project Name field.

Figure 8. Configuring the application, step 1

Click Next, then enter the values shown in Figure 9:

- For Context Root, enter autoComplete.
- For Content Directory, enter web.
- For Java Source Directory, enter src/java.

Leave the Generate deployment descriptor box unchecked.

Figure 9. Configuring the application, step 2

Now you should have an autoComplete project, visible in Eclipse's Project Explorer view, as shown in Figure 10:

Figure 10. The autoComplete project

Now select the project, right click on it, and select Run on Server, as shown in Figure 11:

Figure 11. Run on server in Eclipse

Select GlassFish v3 Java EE 6 from the list of servers in the Run On Server dialog, shown in Figure 12:

Figure 12. Selecting GlassFish

Click Finish. Eclipse should start GlassFish and, subsequently, the autoComplete application, as shown in Figure 13:

Figure 13. Running in Eclipse

JSF 2 makes it easy to create powerful Ajax-enabled custom components. You don't have to implement a Java-based component or renderer, declare that component or renderer in XML, or integrate third-party JavaScript to make Ajax calls. With JSF 2, all you need to do is create a composite component, with markup almost identical to any JSF 2 facelet view, and perhaps add a little JavaScript or Java code, and voilà — you have a cool custom component that will make data input a breeze for your application's users.

In the next installment of JSF fu, I'll discuss more aspects of implementing Ajaxified JSF custom components, such as integrating the <f:ajax> tag so your custom components can participate in Ajax initiated by others.

Learn

- The JSF homepage: Find more resources about developing with JSF.
- AjaxDaddy: AjaxDaddy offers Ajax examples, JavaScript scripts, and Web 2.0 demos.
- List of fictional countries: This article's example uses Wikipedia's list of fictional countries.
http://www.ibm.com/developerworks/java/library/j-jsf2fu-0410/index.html
I am trying to write a program that takes a series of numbers input by a user and performs some calculations on it. While I have most of the code put together and working perfectly fine, I haven't been able to write code that will refuse the user's input if it is a number followed by letters (i.e., "12a" or "12f345"). For some reason, this causes the program to close. Below is the snippet of code isolated to test with. How can I get the code to display a message that the user input was not a number, to ask for a valid number, and to not close the program? Thank you, All...

    #include <stdio.h>

    int main(void)
    {
        float num1;
        printf("Please enter a number:");
        while (!scanf("%f", &num1))
        {
            printf("\n\tInvalid entry, please try again: ");
            fflush(stdin);
        }
        printf("The number you entered was %.2f\n", num1);
        getchar();
    }

Edited by Dani: Formatting fixed
https://www.daniweb.com/programming/software-development/threads/14914/need-help-with-error-checking-user-input
On Tue, 20 Jan 2009 22:15:04 -0800 (PST) David Miller <davem@davemloft.net> wrote:
> From: Sam Ravnborg <sam@ravnborg.org>
> Date: Wed, 21 Jan 2009 06:33:10 +0100
>
> > On Wed, Jan 21, 2009 at 10:20:17AM +0530, Jaswinder Singh Rajput wrote:
> > > On Wed, Jan 21, 2009 at 6:36 AM, Krzysztof Halasa <khc@pm.waw.pl> wrote:
> > > > Jaswinder Singh Rajput <jaswinder@kernel.org> writes:
> > > >
> > > >> usr/include/linux/if_frad.h is covered with CONFIG_DLCI from many years
> > > >> and no body is complaining about it so it means no body is using it.
> > > >>
> > > >> So should we need to drop #if / #endif pair or the whole file from
> > > >> userspace.
> > > >
> > > > I think the file. "Empty file exported to userspace", long unused. We
> > > > can also have it back there, and it being not exported is an indicator
> > > > that it's not used. I guess the #ifdef __KERNEL__ can be removed, too.
> > >
> > > I will definately define #ifdef __KERNEL__ but I am also curious after
> > > defining it there is no point of making empty
> > > usr/include/linux/if_frad.h
> >
> > Googling a bit did not turn up _any_ non-kernel hits that has relevance.
> > So based on the information given in this thread I strongly
> > suggest to drop the export of this header.
>
> Sure, but on the other hand this makes all of the userland APIs
> essentially inaccessible and undefined.
>
> I bet Sangoma's internal tools reference this stuff.
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at

There are no references to if_frad.h in the version of Sangoma (out of tree) code that we use in Vyatta.
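For readers unfamiliar with the guard being discussed, the #ifdef __KERNEL__ pattern looks like this (a generic sketch, not the actual contents of if_frad.h): everything inside the guard is stripped when the header is exported to userspace, so if nothing remains outside it, the exported file is effectively empty.

```c
/* Generic sketch of a kernel header split with #ifdef __KERNEL__.
   The exported (userspace) copy keeps only the parts outside the
   guard; kernel builds see everything. */
#ifndef _LINUX_EXAMPLE_H
#define _LINUX_EXAMPLE_H

struct example_user_api {      /* visible to userspace programs */
    int id;
};

#ifdef __KERNEL__
struct example_kernel_only {   /* removed from the exported header */
    int internal_state;
};
#endif /* __KERNEL__ */

#endif /* _LINUX_EXAMPLE_H */
```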
http://lkml.org/lkml/2009/1/21/23
by Niriel » Mon Apr 30, 2012 8:50 am

by coppertop » Mon Apr 30, 2012 9:07 am

by Niriel » Mon Apr 30, 2012 9:43 am

    TypeError: BulletWorld.attach_character() argument 1 must be BulletBaseCharacterControllerNode, not MyCharacterControllerNode

    from panda3d.bullet import BulletBaseCharacterControllerNode

    class MyCharacterControllerNode(BulletBaseCharacterControllerNode):
        def __init__(self, shape, step_height, name):
            pass

by coppertop » Mon Apr 30, 2012 10:06 am

by enn0x » Mon Apr 30, 2012 10:41 am

by coppertop » Mon Apr 30, 2012 11:15 am

by Niriel » Mon Apr 30, 2012 11:21 am

by enn0x » Tue May 01, 2012 4:14 am

coppertop wrote: I must add my two cents. IIRC, Panda's PhysX wrapper doesn't have continuous collision detection support, which makes it unsuitable for many use cases -- any thrown/falling object (be it a grenade or even a crate) will often just fly through walls. This is an important thing to know, if you don't want to spend a lot of time building stuff around PhysX just to find that one feature is missing.

by coppertop » Tue May 01, 2012 7:04 am

Um... you could have told me that CCD is not exposed. Or maybe you did, and I forgot. Anyway, added CCD support for PhysX on the trunk. Hasn't been much work. I'm not particularly attached to Bullet.

by enn0x » Tue May 01, 2012 3:55 pm

by Niriel » Wed May 02, 2012 4:34 am
http://www.panda3d.org/forums/viewtopic.php?p=84475
Connected paths in a $2 \times n$ grid

By Vamshi Jandhyala

February 28, 2020

Problem

Consider a 2x12 rectangle with dotted lines marking the 24 squares. How many sets R are there such that R is connected, R is obtained by cutting on the dotted lines, and R contains at least one square at the left end and at least one square at the right end? "Connected" means that one can get from any square in R to any other by a path of adjacent squares, adjacency meaning that at least one edge is shared. Symmetry is ignored: if R is congruent to R' but they involve different squares, they count as different. There is in fact a formula for the count for the $2 \times n$ case (and some extensions to $m \times n$). Here is an example of a set R. For a $2 \times 2$ rectangle there are seven such sets, shown below.

Source: S. Durham and T. Richmond, Connected subsets of an nx2 rectangle, College Math. J., 51, (Jan. 2020), 32-42.

Solution

Let $A(n)$ be the number of sets as in the problem for a $2 \times n$ grid. Let $L(n)$ be the number of sets as in the problem that have in the rightmost column a square on the top but not the bottom. Note that this count is the same, by symmetry, as the number of sets that have a square on the bottom but not the top in the last column. Let $B(n)$ be the number of sets as in the problem whose rightmost column contains both squares. Note that $B(n) = A(n-1)$.
We have $A(n) = B(n) + 2 L(n)$, so

$$
\begin{equation} \label{eq1}
2 L(n) = A(n) - B(n) = A(n) - A(n - 1)
\end{equation}
$$

Also $L(n) = L(n-1) + B(n - 1) = L(n-1) + A(n - 2)$, so

$$
\begin{equation} \label{eq2}
2 L(n) = 2 L(n - 1) + 2 A(n - 2)
\end{equation}
$$

Now \ref{eq1} and \ref{eq2} give,

$$
\begin{align}
A(n) - A(n - 1) &= 2 L(n - 1) + 2 A(n - 2) \nonumber\\
&= A(n - 1) - A(n - 2) + 2 A(n - 2) \nonumber\\
\implies A(n) &= 2 A(n - 1) + A(n - 2) \label{eq3}
\end{align}
$$

Solving the recurrence relation \ref{eq3} using the initial conditions $A(1) = 3$ and $A(2) = 7$ gives us the sequence $3, 7, 17, 41,\dots$ for $A(n)$.

What is interesting about this sequence, $3, 7, 17, 41, 99, 239, 577, \dots$, is that it is identical with the numerators of the convergents of the continued fraction of $\sqrt{2}$: $1/1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408, \dots, 47321/33461$.

Here is the Python code for calculating $A(n)$ and generating the connected paths.

    def num_connected_paths(n):
        if n == 1:
            return 3
        if n == 2:
            return 7
        cnt = 2
        t_n_2, t_n_1 = 3, 7
        while cnt != n:
            t_n_2, t_n_1 = t_n_1, 2*t_n_1 + t_n_2
            cnt += 1
        return t_n_1

    print(num_connected_paths(5))

    def connected_paths(n):
        if n == 1:
            return [[(1,0)], [(0,1)], [(1,1)]]
        ext_paths = []
        for path in connected_paths(n-1):
            s1, s2 = path[-1]
            ext_paths.append(path + [(1,1)])
            if (s1, s2) == (1,1):
                ext_paths.append(path + [(1,0)])
                ext_paths.append(path + [(0,1)])
            else:
                ext_paths.append(path + [(s1,s2)])
        return ext_paths

    print(len(connected_paths(5)))
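As a quick sanity check (my addition, not part of the original post): the recurrence can also be solved in closed form. The characteristic equation $x^2 = 2x + 1$ has roots $1 \pm \sqrt{2}$, and matching $A(1) = 3$, $A(2) = 7$ gives $A(n) = \frac{(1+\sqrt{2})^{n+1} + (1-\sqrt{2})^{n+1}}{2}$, which we can verify numerically against the recurrence:

```python
from math import sqrt

def A(n):
    # iterate the recurrence A(n) = 2*A(n-1) + A(n-2), A(1)=3, A(2)=7
    a, b = 3, 7
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, 2 * b + a
    return b

def closed_form(n):
    r, s = 1 + sqrt(2), 1 - sqrt(2)
    return round((r ** (n + 1) + s ** (n + 1)) / 2)

assert all(A(n) == closed_form(n) for n in range(1, 15))
print([A(n) for n in range(1, 8)])  # [3, 7, 17, 41, 99, 239, 577]
```

Since both roots satisfy the recurrence, this also explains the link to the $\sqrt{2}$ convergents noted above.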
https://vamshij.com/blog/2020-02-28-connected-paths/
Databot

High Performance Python data-driven programming framework for web crawler, ETL, and data pipeline work.

- Data-driven programming framework
- Paralleled in coroutines and ThreadPool
- Type- and content-based route function

Installing

Install and update using pip:

    pip install -U databot

What's data-driven programming?

All functions are connected by pipes (queues) and communicate by data. When data come in, the function will be called and return the result. Think about the pipeline operation in unix: ls|grep|sed.

Benefits:

- Decouple data and functionality
- Easy to reuse

Databot provides pipe and route. It makes data-driven programming and powerful data flow processes easier.

Databot is...

- Simple

Databot is easy to use and maintain, does not need configuration files, and knows about asyncio and how to parallelize computation. Here's one of the simple applications you can make: load the price of Bitcoin every 2 seconds. An arbitrage price aggregator sample can be found here <>.

.. code-block:: python

    from databot.flow import Pipe, Timer
    from databot.botframe import BotFrame
    from databot.http.http import HttpLoader

    def main():
        Pipe(
            Timer(delay=2),  # send timer data to pipe every 2 sec
            "",  # send url to pipe when timer triggers
            HttpLoader(),  # read url and load http response
            lambda r: r.json['bpi']['USD']['rate_float'],  # parse response as json and read the rate
            print,  # print out
        )

        BotFrame.render('simple_bitcoin_price')
        BotFrame.run()

    main()

The flow graph below is the flow graph generated by databot.

- Fast

Nodes will be run in parallel, and they will perform well when processing stream data.

- Visualization

With the render function:

    BotFrame.render('bitcoin_arbitrage')

databot will render the data flow network into a graphviz image.

- Replay-able

With replay mode enabled:

    config.replay_mode = True

when an exception is raised at step N, you don't need to run from step 1 to N.
Databot will replay the data from nearest completed node, usually step N-1. It will save a lot of time in the development phase.
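The pipe idea itself needs nothing from Databot; here is a minimal, library-free sketch of functions connected by queues (the names `stage` and `pipeline` are mine for illustration, not Databot's API):

```python
import queue
import threading

def stage(func, inbox, outbox):
    """Consume items from inbox, apply func, push results to outbox."""
    while True:
        item = inbox.get()
        if item is None:            # sentinel: propagate shutdown downstream
            outbox.put(None)
            break
        outbox.put(func(item))

def pipeline(data, *funcs):
    """Connect funcs by queues, feed data through, collect the results."""
    queues = [queue.Queue() for _ in range(len(funcs) + 1)]
    threads = [
        threading.Thread(target=stage, args=(f, queues[i], queues[i + 1]))
        for i, f in enumerate(funcs)
    ]
    for t in threads:
        t.start()
    for item in data:               # data "arrives" at the first pipe
        queues[0].put(item)
    queues[0].put(None)
    results = []
    while True:                     # drain the last pipe
        out = queues[-1].get()
        if out is None:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

print(pipeline([1, 2, 3], lambda x: x * 2, str))  # ['2', '4', '6']
```

Each stage runs in its own thread and is driven purely by data arriving on its input queue, which is the decoupling the README describes.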
https://pythonawesome.com/high-performance-python-data-driven-programming-framework-for-web-crawler/
Namespace: MailBee.Pop3Mail

When bodyLineCount is 0, only the message header is downloaded. When bodyLineCount is -1, this method is equivalent to the DownloadEntireMessage(Int32) method. Setting bodyLineCount to a positive value lets the developer implement a message-body preview feature. In that case, it's recommended to set bodyLineCount >= 20, since the first 5-15 lines of the message source body are often filled with special information and do not contain the actual body text.

If bodyLineCount is set to a certain value (such as 100), small messages having fewer than 100 lines in the message source body will be downloaded completely; larger messages will be parsed partially. For instance, if 100 body lines of the message have been received, and the message contains an attachment which starts at the 80th line and ends at the 150th line of the message source body (so it has not fit within the 100 lines received), MailBee will still add this attachment to the Attachments collection, but the attachment's binary data will obviously be incomplete.

    // To use the code below, import MailBee namespaces at the top of your code.
    using MailBee;
    using MailBee.Pop3Mail;
    using MailBee.Mime;

    // The actual code (put it into a method of your class).
    Pop3 pop = new Pop3();
    pop.Connect("mail.domain.com");
    pop.Login("jdoe", "secret");
    MailMessage msg = pop.DownloadMessageHeader(pop.InboxMessageCount, 20);
    Console.WriteLine(msg.BodyPlainText);
    pop.Disconnect();

    ' To use the code below, import MailBee namespaces at the top of your code.
    Imports MailBee
    Imports MailBee.Pop3Mail
    Imports MailBee.Mime

    ' The actual code (put it into a method of your class).
    Dim pop As New Pop3
    pop.Connect("mail.domain.com")
    pop.Login("jdoe", "secret")
    Dim msg As MailMessage
    msg = pop.DownloadMessageHeader(pop.InboxMessageCount, 20)
    Console.WriteLine(msg.BodyPlainText)
    pop.Disconnect()
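The bodyLineCount rules can be distilled into a small decision function. This plain-Python sketch is purely illustrative — it is not part of MailBee — but it mirrors the documented semantics (the POP3 TOP command underlying such preview features takes a similar line count):

```python
def lines_to_download(body_line_count, message_body_lines):
    """Mirror the documented bodyLineCount semantics:
     0 -> header only (no body lines),
    -1 -> the entire message body,
     n -> up to n body lines (short messages arrive complete)."""
    if body_line_count == 0:
        return 0
    if body_line_count == -1:
        return message_body_lines
    return min(body_line_count, message_body_lines)

print(lines_to_download(100, 60))   # 60: the whole body fits, downloaded completely
print(lines_to_download(100, 150))  # 100: a larger message is parsed partially
```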
https://afterlogic.com/mailbee-net/docs/MailBee.Pop3Mail.Pop3.DownloadMessageHeader_overload_2.html
Bart Schuller <schuller@lunatech.com> writes:

>.

I believe for clarity people call these XSL-FO and XSLT (the latter being the transformation language). XSLT is more mature and usable at this point. I would estimate that XSL-FO reaching print-output quality matching existing DSSSL implementations (jade) is around 18 months off -- the spec itself is still very unstable.

>.

*blush* Well, actually, XSLT and the Jade markup transformation language (aka 'jade -t sgml' or 'jade -t xml', which is used, say, in docbook-stylesheets for HTML output) are both based on transformations of a grove, which is a representation of an SGML or XML document as a tree of trees. DSSSL for print output is also based on groves -- the XSL analogue for this is XSL-FO. However, the flow-object models vary a bit between the standards.

To give a historic overview, DSSSL was an 8-year-in-the-making standard, which merges in architectural and grove work from the HyTime group. Many consider it overly difficult to program with, comparing it with proprietary systems such as Belize or Omnimark. Many consider it bizarre because they have what I would consider to be an unfair aversion to Lisp and Lisp dialects. Personally, I love DSSSL hacking; I see it as a very elegant use of functional/recursive programming with a grove model (traversing nodes on an acyclic directed graph is a nice application for recursive programming techniques). If you don't know Scheme, it can be a bit intimidating, but I really do find it a pleasurable programming experience.

XSL is basically an attempt by the former DSSSL spec folks to make a version of DSSSL which is palatable for the web. The original idea is still to find a way to have a single stylesheet for both print and for web. However, IMHO, the whole XSL-FO effort is still seriously in doubt, whereas XSLT (which was just a splinter issue) is actually getting work done today.
XSLT is derived from the grove model as well as the "proprietary" jade ML transformation extensions. Basically it's all about transforming one grove into another. One strange thing about all XSL flavors is that they are "declarative languages" and all XSL stylesheets are couched as instances of an XML file. Rather than a big Scheme file like DSSSL, XSL is like a big XML file with namespaces (it is rather ugly until you get used to it) with little tiny Scheme snippets in it. XSL is also based on an 'if pattern matches, apply this template' model.

CSS is just an annotation model -- you just attach style to rectangles of text. You can't use it to do things you might do in DSSSL or XSL. For instance, you couldn't create a TOC in the stylesheet with CSS.

> Here's my view on the stylesheets for XML situation:
>
> Viewing XML directly inside browsers is limited to IE 4 and 5 and
> Mozilla. Only IE5 does XSL (but not flow objects), the rest can use CSS.

There is also (from) a MSIE ActiveX DSSSL thing, I think.

> The only widely accepted XML formatting standard is to use XSLT to
> generate HTML. Both print and native browser rendering is currently in
> turmoil.

Actually, you can use jade/DSSSL to render XML or SGML. See the docbook-xml or the website DTD from.

--
.....Adam Di Carlo....adam@onShore.com.....<URL:>
https://lists.debian.org/debian-devel/1999/06/msg01851.html
On Wed, Apr 07, 2004 at 06:23:21PM -0700, Michael A. Peters wrote:
> New to list, I'm sure this topic has been discussed before.
> I've put together a small article discussing the issue of standard
> users being able to install software on the system that is available
> for all users, without having to be root to do so.
>
> This I think is critical for acceptance of LOTD but also has benefits
> in other areas too.
>
> The article is at
>
> I am seeking comments on it.

This kind of thing always gets me thinking of per-process namespaces like, I am led to believe, there are in Plan 9, Inferno, and Hurd systems. Some more OS organization that would lead to more resources being available as files would probably be needed for many things. And instead of using separate scattered filesystem trees, each user (process) would just "see" (have access to, be able to probe/browse) a different view of the host.

It might also be possible to do some of this by using something like union mount or supermount or whatever that missing copy-on-write layered filesystem is called, and letting each user have a whole "layer" to themselves. Or perhaps even a whole user-mode Linux kernel.

--
If xawtv dumps core, you can fix this with "ulimit -c 0".
http://www.redhat.com/archives/rpm-list/2004-April/msg00037.html
You've just seen how the parts of a SAX2 application fit together, so now you're ready to see how the data is actually handled as it arrives. Here we focus on the events that deal with the core XML data model of elements, attributes, and text. To work with that model, you need to use only a handful of methods from the ContentHandler interface.

As mentioned earlier, this class is a convenient way to start using SAX2 because it provides stubs for many of the handler methods. You can just override those stubs with methods to do real work. Using DefaultHandler as a base class is just an implementation option. It's often just as convenient not to use such a base class. The class is used in this chapter to avoid explaining handler methods that you don't really need. In some scenarios, Sun's JAXP requires you to use DefaultHandler as a base class. That's much more of a restriction than SAX itself makes. If you stick to using the SAX XMLReader API, as recommended in this book, you'll still have the option of using DefaultHandler as a base class, but this policy won't be imposed on your application code. For example, you can have separate objects to encapsulate policies such as error handling, so you won't need to hardwire all such policies into a single class.

Let's use this simple XML document to learn the most essential SAX callbacks:

    <stanza>
      <line>In a cavern, in a canyon,</line>
      <line>Excavating for a mine,</line>
      <line>Dwelt a miner, forty-niner,</line>
      <line>And his daughter Clementine.</line>
    </stanza>

This is a simple document, only elements and text, with no attributes, DTD, or namespaces to complicate the code we're going to write. When SAX2 parses the document, our ContentHandler implementation will see events reported for those elements and for the text.
The calls will be more or less as follows; they're indented here to correspond to the XML text, and the characters() calls show strings, since slices of character arrays are awkward to present:

    startElement ("", "", "stanza", empty)
    characters ("\n  ")
      startElement ("", "", "line", empty)
      characters ("In a cavern, i");
      characters ("n a canyon,");
      endElement ("", "", "line")
    characters ("\n  ")
      startElement ("", "", "line", empty)
      characters ("Excavating for a mine,");
      endElement ("", "", "line")
    characters ("\n  ")
      startElement ("", "", "line", empty)
      characters ("Dwelt a miner, forty-niner,");
      endElement ("", "", "line")
    characters ("\n  ")
      startElement ("", "", "line", empty)
      characters ("And his daughter");
      characters (" Clementine.");
      endElement ("", "", "line")
    characters ("\n")
    endElement ("", "", "stanza")

Notice that SAX does not guarantee that all logically consecutive characters will appear in a single characters() event callback. With this simple text, most parsers would deliver it in one chunk, but your application code can't rely on that always being done. Also, notice that the first two parameters of startElement() are empty strings; they hold namespace information, which we explain toward the end of this chapter. For now, ignore them and the last parameter, which is for the element's attributes.

For our first real work with XML, let's write code that prints only the lyrics of that song, stripping out the element markup. We'll start with the characters() method, which delivers characters in part of a character buffer, with a method signature like the analogous java.io.Reader.read() method. This looks like Example 2-2.[9] If you run it with a URL for the XML text shown earlier, you'll see the output.

[9] On some systems, the user will need to provide a system property on the command line, passing -Dorg.xml.sax.driver=..., as shown in Section 3.2, "Bootstrapping an XMLReader" in Chapter 3, "Producing SAX2 Events".
    $ java Skeleton
    In a cavern, in a canyon,
     Excavating for a mine,
     Dwelt a miner, forty-niner,
     And his daughter Clementine.
    $

You'll notice some extra space. It came from the whitespace used to indent the markup! If we had a DTD, the SAX parser might well report this as "ignorable whitespace." (See Section 4.1.1, "Other ContentHandler Methods" in Chapter 4, "Consuming SAX2 Events" for information about this callback.) But we don't have one, so to get rid of that markup we should really print only text that's found inside of <line> elements. In this case, we can use code like Example 2-3 to avoid printing that extra whitespace; however, we'll have to add our own line ends, since the input lines won't have any. A handler that ignored its position in the document wouldn't work. SAX content handlers are often written to understand particular content models and to carefully track application state within parses. They often keep a stack of open element names and attributes, along with other state that's specific to the particular task the content handler performs (such as the "ignored" flag in this example). A full example of an element/attribute stack is shown later, in Example 5-1.[10]

[10] Whitespace handling in text can get quite messy. XML defines an xml:space attribute that may have either of two values in a document: default, signifying that whatever your application wants to do with whitespace is fine, and preserve, which suggests that whitespace such as line breaks and indentation should be preserved. W3C XML Schemas replace default with two other options to provide a partial match for the whitespace normalization rules that apply to attribute values.

In simple cases like this, where namespaces aren't involved, once you follow this example you'll understand an essential part of how SAX works.
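The same extraction logic translates almost verbatim to Python's xml.sax module, whose ContentHandler mirrors the SAX2 callbacks described here. This sketch (mine, not the book's code) accumulates characters() output only while inside a <line> element, so indentation whitespace is dropped:

```python
import io
import xml.sax

class LyricsHandler(xml.sax.ContentHandler):
    """Collect text only while inside a <line> element."""

    def __init__(self):
        super().__init__()
        self.in_line = False
        self.buf = []
        self.lines = []

    def startElement(self, name, attrs):
        if name == "line":
            self.in_line = True
            self.buf = []

    def characters(self, content):
        # May be called several times per text node -- accumulate chunks.
        if self.in_line:
            self.buf.append(content)

    def endElement(self, name):
        if name == "line":
            self.lines.append("".join(self.buf))
            self.in_line = False

doc = """<stanza>
  <line>In a cavern, in a canyon,</line>
  <line>Excavating for a mine,</line>
  <line>Dwelt a miner, forty-niner,</line>
  <line>And his daughter Clementine.</line>
</stanza>"""

handler = LyricsHandler()
xml.sax.parse(io.StringIO(doc), handler)
print("\n".join(handler.lines))
```

Note that the handler joins buffered chunks in endElement() rather than assuming each text node arrives in one characters() call — exactly the caution the chapter raises.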
    public class Example extends DefaultHandler {
        private boolean ignoring = true;   // toggled as <line> elements open and close

        public void characters (char buf [], int offset, int length)
        {
            if (ignoring)
                return;
            System.out.print (new String (buf, offset, length));
        }
    }

Although they didn't appear in this simple scenario, most startElement() callbacks will have if/then/else decision trees that compare element names -- though the element name isn't always what you should check first. One policy is just to ignore unexpected elements, which is what most HTML browsers do with unexpected tags. Another policy is to treat them as some kind of document validity error.

In the previous section, we skipped over the attributes provided with each element. Let's look at them in a bit more detail. SAX2 wraps the attributes of an element into a single Attributes object. For any attribute, there are three things to know: its name, its value, and its type. There are two basic ways to get at the attributes: by an integer index (think "array") or by name. The only real complication is that there are two kinds of attribute name, courtesy of the XML Namespaces specification.

You often need to write handler code that uses the value of a specific attribute. To do this, use code that accesses attribute values directly, using the appropriate type of name as arguments to a getValue() call. If the attribute name has a namespace URI, you'll pass the URI and the local name (as discussed later in this chapter). Otherwise you'll just pass a single argument. A value that is an empty string would be a real attribute value, but if a null value is returned, no value was known. In such a case, your application might need to infer some nonempty attribute value. (This is common for #IMPLIED attributes.)
Consider this XML element:

    <billable label='finance' xmlns:units=""
            units:currency='USD'>
        25000
    </billable>

Application code might need to enforce a policy that it won't present documents with such data to users who aren't permitted to see "finance" information. You can't rely on the units: prefix itself; the prefix is only used to identify that namespace:

    String currency;

    currency = atts.getValue ("", "currency");
    // what's the best exchange rate today?

There are corresponding getType() accessors, which accept both types of attribute names, but you shouldn't often need them (attribute types are discussed below).

You might need to look at all the attributes provided with an element, particularly when you're building infrastructure components. Here's how you might use an index to iterate over all the attributes you were given in a startElement() callback and print all the important information. This code uses a few methods that we'll explain later when we discuss namespace support. getLength() works like the "length" attribute on an array.

    Attributes atts = ...;
    int length = atts.getLength ();

    for (int i = 0; i < length; i++) {
        String uri = atts.getURI (i);
        String local = atts.getLocalName (i);
        String qName = atts.getQName (i);
        String value = atts.getValue (i);
        System.out.println (uri + " " + local + " " + qName + " = " + value);
    }

You'll notice that accommodating input documents that use XML namespaces has complicated this code. It's rarely safe to assume your input documents will only use one kind of name. It's often good practice to scan through all the attributes for an element and report some kind of validity error if a document has unexpected attributes. (These might include xmlns or xmlns:* attributes, but often it's best to just ignore those.) This can serve as a sanity check or a kind of procedural validation. For example, if you validated the input against its own DTD, that DTD might have been modified (using the internal subset or some other mechanism) so that it no longer meets your program's expectations. Such a scan over attribute values can be a good time to make sure your application does the right thing with any attributes that need to be #IMPLIED, or have type ID. Attribute values will always be whitespace-normalized as required by the XML specification.
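Python's xml.sax exposes the same two Attributes access styles — direct lookup by name and index-free iteration. A quick sketch using xml.sax.xmlreader.AttributesImpl, the class parsers use to hand attributes to startElement() (the attribute names and values here are made up for illustration):

```python
from xml.sax.xmlreader import AttributesImpl

# Mimic what a parser would pass to startElement().
atts = AttributesImpl({"label": "finance", "currency": "USD"})

# Direct lookup by name, as in the getValue() call above.
print(atts.getValue("currency"))          # USD

# Iterate over every attribute, infrastructure-style.
for name in atts.getNames():
    print(name, "=", atts.getValue(name))

print(atts.getLength())                   # 2
```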
This means that the only whitespace in an attribute will be space characters or whitespace provided by character references to a tab, newline, or carriage return. If the type isn't CDATA, leading and trailing spaces will also have been trimmed and internal runs of spaces collapsed. You can get the declared attribute types from the DTD. (See Section 4.3.1, "The DeclHandler Interface" in Chapter 4, "Consuming SAX2 Events".) Or you can use the Attributes.getType() methods, if you can deal with incomplete reporting for enumerated types. (You won't get complete type information that way; see Chapter 5, "Other SAX Classes", Section 5.1.1, "The AttributesImpl Class".)

The methods in the Attributes interface are summarized in Appendix A, "SAX2 API Summary". For more information, consult the SAX javadoc.

In the earlier code example, we used some callbacks without really explaining what they did and what their parameters were. This section provides more details. In the summaries of handler callbacks presented in this book, the event signatures are omitted. This is just for simplicity: with a single exception (ContentHandler.setDocumentLocator()), the event signature is always the same. Every handler can throw a SAXException to terminate parsing, as well as java.lang.RuntimeException and java.lang.Error, which any Java method can throw. Handlers can throw such exceptions directly, or, as a slightly more advanced technique, they can delegate the error-handling policies to an ErrorHandler and recover cleanly if those calls return instead of throwing exceptions. (ErrorHandler is discussed later in this chapter.)

The ContentHandler callbacks include startElement(), whose name parameters are:

uri -- For elements associated with a namespace URI, this is the URI. For other kinds of elements, this is the empty string.

localName -- For elements associated with a namespace URI, this is the element name with any prefix removed. For other kinds of elements, this is the empty string.

qName -- This is the element name as found in the XML text, but for elements associated with a namespace URI, this might be the empty string.
(Don't rely on it being nonempty unless the URI is empty, or you've configured the parser in "mixed" namespace reporting mode, as described later in this chapter in Section 2.6.3, "Namespace Feature Flags".)

An element's endElement() callback takes the same name parameters. Sometimes it's just a quick state cleanup (popping stacks), and sometimes it's where all the work queued during an element's processing is finally performed.

The characters() callback has these parameters:

ch -- A character array holding the text being provided. You must ignore characters in this buffer that are outside of the specified range.

start -- The index of the first character from the buffer that is in range.

length -- The number of text characters that are in the range's buffer, beginning at the specified start index.

(See Section 4.1.2, "The Locator Interface" in Chapter 4, "Consuming SAX2 Events".) Most parsers have only a limited amount of buffer space and will flush characters whenever the buffer fills; flushing can improve performance because it eliminates a need for extra buffer copies. Excess buffer copying is a classic performance killer in all I/O-intensive software.

The XML specification guarantees that you won't see CRLF- or CR-style line ends here. All the line ends from the document will use single newline characters ("\n"). However, some perverse documents might have placed character references to carriage returns into their text; if you see them, be aware that they're not real line ends!

There are many other methods in the ContentHandler interface, discussed later in Section 4.1.1, "Other ContentHandler Methods" in Chapter 4, "Consuming SAX2 Events".
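The line-end guarantee is easy to check with any conforming parser; here's a quick sketch using Python's xml.sax, feeding character data that contains a literal CRLF:

```python
import io
import xml.sax

class Collector(xml.sax.ContentHandler):
    """Accumulate every characters() chunk."""

    def __init__(self):
        super().__init__()
        self.text = []

    def characters(self, content):
        self.text.append(content)

h = Collector()
# The source document uses a CRLF line end inside character data...
xml.sax.parse(io.StringIO("<doc>first\r\nsecond</doc>"), h)
# ...but the handler sees a single "\n", as the XML spec requires.
assert "".join(h.text) == "first\nsecond"
print(repr("".join(h.text)))
```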
https://docstore.mik.ua/orelly/xml/sax2/ch02_03.htm
#include <MFnCharacter.h>

Maya offers "character" nodes to simplify the setting of keyframes on collections of attributes. The implementation of the "character" is a subclass of MFnSet, taking advantage of the fact that attributes can be represented as MObjects and can be made members of a set. The fact that sets also derive from MObject means that characters may have other character sets as members, thus establishing a hierarchy. Only attributes and characters can be members of a character set. The character node will disallow the addition of other objects to its set. Character sets are also part of a partition, meaning that membership of character sets cannot overlap with other character sets. Thus, when an attribute already in a character is added to another character, it must be removed from the original character.

Characters are integral to Maya's nonlinear animation system, "Trax". Trax allows the user to create "animation clips", which bundle a set of animation curves so that they can be reused multiple times, with different timing than the original clip. When a clip is created, Maya finds the animation curves that are attached to the attributes in the character set and moves those animation curves into the newly created clip. The MFnClip function set is the Maya function set for clips.

Clips in Maya can be of two types: source clips and scheduled clips. In the Maya UI, source clips are visible in the Visor, while scheduled clips are visible in Trax. A source clip contains the animation curves for the clip. A scheduled clip contains data about the placement of an instance of a source clip in the Maya timeline. In this context, an "instance" means that the animation curves from the source clip are shared by the scheduled clip. Scheduled clips never contain their own animation curves; they always refer to a source clip's curves.
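The partition rule described above — an attribute belongs to at most one character set, and adding it to another character moves it — can be modeled in a few lines of plain Python. This is a toy sketch of the rule, not the Maya API:

```python
class CharacterSet:
    """Toy model of Maya's partition rule -- NOT the Maya API.
    An attribute may belong to at most one character set at a time."""

    _owner = {}  # attribute name -> owning CharacterSet

    def __init__(self, name):
        self.name = name
        self.members = set()

    def add(self, attr):
        prev = CharacterSet._owner.get(attr)
        if prev is not None and prev is not self:
            prev.members.discard(attr)  # moving removes it from the original set
        CharacterSet._owner[attr] = self
        self.members.add(attr)

legs = CharacterSet("legs")
arms = CharacterSet("arms")
legs.add("knee.rotateZ")
arms.add("knee.rotateZ")                # re-homing the attribute...
print("knee.rotateZ" in legs.members)   # False -- removed from 'legs'
```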
For example, if you create a clip called "run" in Maya that lasts from frames 1-20, a source clip node will be created with a start of 1, a duration of 19, and dependency graph connections to all of the animation curves that make up the "run". If you then place an instance of the run clip at frame 5 and another instance of the run clip at frame 20, you have two scheduled clips: one with a start frame of 5 and one with a start frame of 20. As mentioned in the previous paragraph, only a single set of animation curves exists for the run, regardless of the number of times the run is scheduled.

Trax also allows you to create "blends" between clips, which enable you to control the transition between the clips. A blend is represented in the dependency graph by an "animBlendInOut" node, which uses an animation curve to determine the transition type.

In the dependency graph, when a character has animation clips, the character node will always be connected to a "clipLibrary" node and a "clipScheduler" node. The clipLibrary node is connected to all of the source clips and their animation curves. The clipScheduler node is connected to the scheduled clips and blends. It is the clipScheduler that computes the final animation by looking at the placement and overlap of the clips and feeding the attribute data back into the character set.

Constructor. Class constructor that initializes the function set to the given MObject.

Constructor. Class constructor that initializes the function set to the given MObject.

Function set type. Returns the class type: MFn::kCharacter. Reimplemented from MFnSet.

Attaches a given source clip node (created using MFnClip::createSourceClip) to the character's clipLibrary. Attaches a clipLibrary and clipScheduler to the character if they are not attached already.

Attaches an instance of a clip to the character. If the source clip related to the clip instance is not already attached to the character, it will be attached as well.
This command will fail if the instanced clip passed in is not associated with a source clip. The best way to associate an instanced clip with a source clip is to create the instanced clip using MFnClip::createInstancedClip. Adds an animation curve to a clip. The user must provide the animation curve, and the related source clip node (typically created using MFnCharacter::createSourceClip). The user must also specify the plug that the clip will drive. The plug must be a member of the character managed by this MFnCharacter. Creates a blend between two instanced clips on the character. The blend is defined by a specified paramCurve, which should be keyed between times of 0 and 1. Time 0 corresponds to the start time of the blend. Time 1 corresponds to the end time of the blend. The blend will be performed on the clips according to the keyed value of the blend curve, using the equation: (value)*clip1 + (1-value)*clip2. For example, let's say the blend curve goes from a value of (0,0) to (1,1). At the start of the blend you will have 100% of clip1, and 0% of clip2. At the end of the blend you will have 0% of clip1, and 100% of clip2. Return true if a blend exists between the two instanced clips on the character. If a blend exists, the animBlend node related to the blend is also returned. Remove the blend between the two instanced clips on the character. If a blend exists and was deleted, returns true. If a blend did not exist, returns false. Given a plug, test the plug to see if it is owned by a character. If a character controls this plug, the character will be returned Get the members of the character set that are attributes. Return them as a plug array. A character set can contain only attributes and subcharacters. To get all of the members of the character, use MFnSet::getMembers. To get the subcharacters, use MFnCharacter::getSubcharacters. Get a list of the subcharacters that are members of the character set. 
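The blend equation quoted above is plain linear interpolation. Sketched in Python (note that the formula, taken literally, yields 100% of clip2 when value is 0; the sketch follows the formula as written):

```python
def blend(value, clip1, clip2):
    """The documented mix: (value)*clip1 + (1 - value)*clip2."""
    return value * clip1 + (1.0 - value) * clip2

# Sampling a blend curve keyed from (0,0) to (1,1):
print(blend(0.0, 10.0, 20.0))  # 20.0 -- value=0 yields clip2
print(blend(0.5, 10.0, 20.0))  # 15.0 -- an even mix
print(blend(1.0, 10.0, 20.0))  # 10.0 -- value=1 yields clip1
```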
Get the clipScheduler node that manages the playback of clips on this character. If no clips have been created for this character, this method will return an empty MObject. Return the number of clips that have been scheduled on this character. Return the scheduled animClip node corresponding to the specified index. The specified index should range from 0 to clipCount-1 where clipCount is the value returned by MFnCharacter::getScheduledClipCount. Return the number of source clips managed by the clipLibrary node of this character. For more information on source clips, refer to the description of the MFnCharacter node. Return the animClip node corresponding to the specified index. The animClip node will be a source clip node. The specified index should range from 0 to clipCount-1 where clipCount is the value returned by MFnCharacter::getSourceClipCount. Return the number of blends that have been added to clips on this character. Return the animBlendInOut node corresponding to the specified index. Returns the clip nodes that are blended by the blend node corresponding to the specified index.
http://download.autodesk.com/us/maya/2009help/api/class_m_fn_character.html
Thank you for your answers. I understand better now why Struts2 uses Dojo. I think it's very important to explain the Dojo choice in the Struts2 documentation (portlet environment, ...), because a lot of people use Prototype/Scriptaculous (like AjaxTags). I forwarded your mail to the Rails forums, to see if there is a solution for using Prototype in a portlet environment. If I don't find a solution, I will try the new version of Dojo to see how it works. But it seems that Scriptaculous effects, like drag/drop, are better than Dojo's. In my open source project, I use a great treeview, TafelTree (based on Scriptaculous); see the demo at :

There is a lot of functionality, like open with AJAX, drag/drop, copy tree node, edit node, ... I must study Dojo first.

Thank you again

2006/11/5, Musachy Barroso <musachy@gmail.com>:
>
> I had never used Dojo before I started playing with struts. The thing I
> didn't like was the lack of documentation, but with 0.4 they improved it a
> lot (?). Another thing is that everything seems
> to change really fast, but it is shaping up, and the namespaces are a
> welcomed addition. When my patch gets through I will start working on the
> documentation of the widgets that are already implemented, and start working
> on the autocomplete widget. So Angelo, if you decide to take Dojo for a
> spin, there are plenty of things to do over here :)
>
> musachy
>
> On 11/4/06, Frank W. Zammetti <fzlists@omnytex.com> wrote:
> >
> > On Sat, November 4, 2006 5:29 pm, Martin Cooper wrote:
> > > It's not a question of which one has the most widgets. Prototype, and
> > > hence script.aculo.us, is fragile, especially in a portlet environment,
> > > so we cannot, in good conscience, encourage people to use that to build
> > > robust enterprise-ready applications. Since Struts supports portlet
> > > development, we don't want to have to say "oh, but you shouldn't use
> > > our AJAX tags if you're building portlets".
> > > > Many people are rather fond of Prototype, so I think it might be a good > > thing to explain why Martin calls it "fragile", for those that might not > > be aware... > > > > Prototype modifies some intrinsic Javascript objects. Arrays for > example > > have some additional methods, among other things. Some of this can > > conceivably (and in practice sometimes) break other code that depends on > > those intrinsic objects working a certain way. Especially in a portlet > > environment, where you aren't in complete control of the full page, this > > can lead to some very unexpected consequences... it would really suck to > > create a portlet that your company gives to its clients that you've > tested > > every which way you can and found it to work, then find it breaks in > your > > clients' portals because they have some portlet you don't and which > > doesn't play nice with these changes Prototype makes. > > > > Another problem with Prototype is that it isn't properly (or at least > > fully) namespaced... for example, Prototype defines a Field and Form > > object in global scope (well, it DID... I haven't looked to see if > recent > > versions may have corrected this). Especially these two examples, which > > are clearly pretty common names that other developers may choose as > well, > > can easily lead to conflicts. Again, in a portal environment, where you > > aren't developing a complete page and therefore can't be sure what might > > be present on the page at any given time, you can run into some big > > problems because of this. > > > > I don't think anyone is saying Prototype is inherently bad... if you are > > writing a typical webapp where you are in control of the entire page, > you > > can quite easily work around these issues, or never run into them in the > > first place, and be perfectly happy with Prototype, and to be sure, many > > people are (as well as scriptaculous, and others that use > Prototype). 
In > > a portal environment though, the rules of the game are quite different, > > and Prototype can lead to issues because of these two points. > > > > > I'm not sure why you say you can't write valid XHTML with Dojo; you > can. > > > There are three ways of adding Dojo widgets to your apps. Yes, not all > > of > > > them will give you XHTML that will validate, but at least one of them > > > does. > > > > I think Angelo is clearly referring to the markup approach to widget > > creation... correct me if I'm wrong Martin, but isn't it in fact true > that > > with that approach you cannot write valid XHTML because of widgetId, > > dojoType, etc? Of course your right, that's not the only way to use > > widgets... but you mentioned three ways... out of curiosity, what's the > > third, aside from markup and programmatic creation? > > > > > And as for effects, they're getting better all the time. Have you > tried > > > 0.4yet? > > > > Indeed... with Dojo, it's important to realize that it's still > relatively > > early in its lifecycle... with each new versions comes pretty big > > improvements... I looked at it for the first time roughly a year or so > > ago, and it looked interesting, but very immature (I in fact wrote a > > warning because of this in my AJAX book)... note that this isn't just a > > quality of code concern, or a functionality concern, it also includes > > documentation, support, examples, etc... looking at it now though, you > can > > see a really vast improvement compared to where it was just a short time > > ago... there's still things to not be thrilled with, but most people > tend > > to agree that the pluses outweigh the minuses by a good margin at this > > point. > > > > > Martin Cooper > > > > Frank > > > > --------------------------------------------------------------------- > > To unsubscribe, e-mail: dev-unsubscribe@struts.apache.org > > For additional commands, e-mail: dev-help@struts.apache.org > > > > > > > -- > "Hey you! 
Would you help me to carry the stone?" Pink Floyd > >
http://mail-archives.apache.org/mod_mbox/struts-dev/200611.mbox/%3Cc77e343c0611050157ta7d91cexdc7da0472dc6e2ba@mail.gmail.com%3E
23 July 2012 13:10 [Source: ICIS news] LONDON (ICIS)--Investment bank Credit Suisse on Monday raised its target share price for Yara International after the Norway-based fertilizer producer's second-quarter earnings beat expectations. Credit Suisse raised Yara's target share price to Norwegian kroner (NKr) 297 ($49, €40) from NKr282 and reiterated a "Neutral" rating for the company. "Yara beat expectations in Q2 benefiting from its flexible production system to maximise margins in a quarter with volatile prices," the investment bank said. "We reiterate our Neutral rating believing the share is pricing in a good balance between the tailwind from near-term strong grain prices and medium-term headwinds from increasing nitrogen supply," Credit Suisse added. Yara on 18 July said upstream operations delivered its second "best ever" quarter thanks to a combination of high volumes and prices. The company posted a 26% year on year jump in second-quarter net income to NKr2.80bn as it benefited from better margins and more fertilizer deliveries due to "tight balance in most agricultural markets". Second-quarter sales increased 15% year on year to NKr21.4bn, while earnings before interest, tax, depreciation and amortisation (EBITDA) excluding special items surged 47% year on year to NKr5.20bn. Looking ahead, Credit Suisse said urea prices are expected to remain volatile, although prices have come down sharply from the peaks in second-quarter trading. "The recent spike in grain prices should lend support to [urea] prices but we do not expect urea prices to sustainably follow corn higher," it said. Credit Suisse added that it expects new low-cost capacity from the Middle East and "Falling nitrogen prices should directly hurt Yara's margins at its own production facilities and weigh on margins in Yara's global downstream operations," it said. ($1 = NKr6.09, €1 = NKr7.38) Additional reporting by Richard E
http://www.icis.com/Articles/2012/07/23/9580256/credit-suisse-raises-its-target-share-price-for-norways.html
43.3. Root resource classes

Overview

A root resource class is the entry point into a JAX-RS implemented RESTful Web service. It is decorated with a @Path annotation that specifies the root URI of the resources implemented by the service. Its methods either directly implement operations on the resource or provide access to sub-resources.

Requirements

In order for a class to be a root resource class it must meet the following criteria:
- The class must be decorated with the @Path annotation. The specified path is the root URI for all of the resources implemented by the service. If the root resource class specifies that its path is widgets and one of its methods implements the GET verb, then a GET on widgets invokes that method. If a sub-resource specifies that its URI is {id}, then the full URI template for the sub-resource is widgets/{id} and it will handle requests made to URIs like widgets/12 and widgets/42.
- The class must have a public constructor for the runtime to invoke. The runtime must be able to provide values for all of the constructor's parameters. The constructor's parameters can include parameters decorated with the JAX-RS parameter annotations. For more information on the parameter annotations see Chapter 44, Passing Information into Resource Classes and Methods.
- At least one of the class's methods must either be decorated with an HTTP verb annotation or the @Path annotation.

Example

Example 43.3, “Root resource class” shows a root resource class that provides access to a sub-resource.

Example 43.3. Root resource class

package demo.jaxrs.server;

import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Path("/customerservice/") 1
public class CustomerService
{
    public CustomerService() 2
    {
        ...
    }

    @GET 3
    public Customer getCustomer(@QueryParam("id") String id)
    {
        ...
    }

    @DELETE
    public Response deleteCustomer(@QueryParam("id") String id)
    {
        ...
    }

    @PUT
    public Response updateCustomer(Customer customer)
    {
        ...
    }

    @POST
    public Response addCustomer(Customer customer)
    {
        ...
    }

    @Path("/orders/{orderId}/") 4
    public Order getOrder(@PathParam("orderId") String orderId)
    {
        ...
    }
}

The class in Example 43.3, “Root resource class” meets all of the requirements for a root resource class.
- 1 - The class is decorated with the @Path annotation. The root URI for the resources exposed by the service is customerservice.
- 2 - The class has a public constructor. In this case the no argument constructor is used for simplicity.
- 3 - The class implements each of the four HTTP verbs for the resource.
- 4 - The class also provides access to a sub-resource through the getOrder() method. The URI for the sub-resource, as specified using the @Path annotation, is customerservice/orders/{orderId}. The sub-resource is implemented by the Order class. For more information on implementing sub-resources see Section 43.5, “Working with sub-resources”.
https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.3/html/apache_cxf_development_guide/RootResourceClass
Hrm. I think an analogy/metaphor might help understand the CND vs OCM annotation issue. The CND specification and language allows for specifying constraints and relationships between 'stuff' in the JCR. It's a bit like throwing a constraint on an RDBMS table with DDL, although a better idea would be to create a class of tables that share the same constraints and foreign keys. OCM operates at a level or tier above the JCR, similar to ORM tools. Of course, all the varied ways of defining and typing objects apply, with much more flexibility than the JCR specs. But you don't get the namespace enforcement at the storage layer - which means a second tool that doesn't use your OCM implementation could violate constraints and namespaces established in your OCM implementation. One of the questions I like to ask teams when starting projects is: Should we handle constraints at the storage layer or persistence layer? And which constraints where? Juan Pereyra wrote: > Hi all, > > Here goes another question about the OCM. According to the documentation, one would define new namespaces and new nodetypes as part of a CND configuration. However, I noticed that most things that you define on the CND file, you could define them as annotations (i.e. jcrSuperTypes, etc.). Trying to get OCM to load the new types from annotations I got to the AnnotationDescriptorReader class, whose JavaDoc is this: > > <code> > /** > * Helper class that reads the xml mapping file and load all class descriptors into memory (object graph) > * > * @author <a href="mailto:christophe.lombart@gmail.com">Lombart Christophe </a> > * @author : <a href="mailto:boni.g@bioimagene.com">Boni Gopalan</a> > * > */ > public class AnnotationDescriptorReader implements DescriptorReader > </code> > > However, the class reads annotation descriptors, not an XML, and interestingly enough, the JavaDoc is identical to the one in the class DigesterDescriptorReader.
> > So, summarizing, is it possible to load namespace definitions without the CND or an XML? just with annotations? Everything that's needed seems to be there already. > > Many thanks guys! > Juan >
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200907.mbox/%3C4A4BC95D.7080806@tacitknowledge.com%3E
Wraptor 0.6.0

Useful decorators and other utility functions.

Decorators

Memoize

Add a cache to a function such that multiple calls with the same args will return cached results. Supports an optional cache timeout which will flush items from the cache after a set interval for recomputation.

from wraptor.decorators import memoize

@memoize()
def foo(bar, baz):
    print(bar, baz)

foo(1, 2) # prints (1, 2)
foo(3, 4) # prints (3, 4)
foo(1, 2) # no-op

Supports timeouts!

@memoize(timeout=.5)
def foo(bar, baz):
    print(bar, baz)

foo(1, 2) # prints (1, 2)
foo(1, 2) # no-op

import time
time.sleep(2)

foo(1, 2) # prints (1, 2)

Supports attaching to an instance method!

from random import random

class foo(object):
    @memoize(instance_method=True)
    def bar(self, a, b):
        return random()

f = foo()
f2 = foo()

# they don't share a cache!
f.bar(1,2) != f2.bar(1,2)

Throttle

Throttle a function to fire at most once per interval. The function is fired on the forward edge (meaning it will fire the first time you call it).

from wraptor.decorators import throttle
import time

@throttle(.5)
def foo(bar, baz):
    print(bar, baz)

foo(1, 2) # prints (1, 2)
foo(3, 4) # no-op
time.sleep(1)
foo(5, 6) # prints (5, 6)

Supports attaching to an instance method!

arr = []

class foo(object):
    @throttle(1, instance_method=True)
    def bar(self):
        arr.append(1)

x = foo()
x2 = foo()
x.bar()
x2.bar()

# they don't share the same throttle!
assert arr == [1, 1]

Timeout

Timeout uses signal under the hood to allow you to add timeouts to any function. The only caveat is that signal.alarm can only be used in the main thread of execution (so multi-threading programs can’t use this decorator in sub-threads). The timeout value must be a positive integer.
from wraptor.decorators import timeout, TimeoutException
import time

@timeout(1)
def heavy_workload():
    # simulate heavy work
    time.sleep(10)

try:
    heavy_workload()
except TimeoutException:
    print('workload timed out')

You can also catch the timeout exception from inside the function:

@timeout(1)
def heavy_workload():
    try:
        # simulate heavy work
        time.sleep(10)
    except TimeoutException:
        print('workload timed out')

Exception Catcher

exception_catcher is a helpful method for dealing with threads that may raise an exception. It is especially useful for testing.

import threading

from wraptor.decorators import exception_catcher

@exception_catcher
def work():
    raise Exception()

t = threading.Thread(target=work)
t.start()
t.join()

try:
    work.check()
except Exception as e:
    print e

Context Managers

Throttle

Throttle a with statement to execute its body at most once per interval. The body is fired on the forward edge (meaning it will fire the first time you call it).

from wraptor.context import throttle
import time

throttler = throttle(seconds=3)

def foo():
    with throttler:
        print 'bar'

foo() # prints bar
time.sleep(2)
foo() # does nothing
time.sleep(2)
foo() # prints bar

Maybe

Execute a with block based on the results of a predicate.

from wraptor.context import maybe

def foo(cond):
    with maybe(lambda: cond == 5):
        print 'bar'

foo(5) # prints bar
foo(3) # does nothing

Timer

Time a block of code.

from wraptor.context import timer

def foo():
    with timer('my slow method') as t:
        expensive_stuff()
    print t

foo() # prints "my slow method took 435.694 ms"

- Author: Carl Sverre
- License: LICENSE.txt
- Package Index Owner: carlsverre
- DOAP record: Wraptor-0.6.0.xml
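Wraptor's own source is not shown on this page, but the timeout behaviour of memoize described above can be sketched in a few lines. This is a minimal illustration of the idea (caching on the positional argument tuple, timestamping each entry), not the package's actual implementation:

```python
import functools
import time

def memoize(timeout=None):
    """Cache results per argument tuple; recompute entries older than `timeout` seconds."""
    def decorator(fn):
        cache = {}  # maps args -> (result, timestamp)

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.time()
            hit = cache.get(args)
            if hit is not None and (timeout is None or now - hit[1] < timeout):
                return hit[0]  # fresh cache entry: skip the real call
            result = fn(*args)
            cache[args] = (result, now)
            return result
        return wrapper
    return decorator

calls = []

@memoize(timeout=0.2)
def add(a, b):
    calls.append((a, b))
    return a + b

add(1, 2)
add(1, 2)                         # served from cache
assert calls == [(1, 2)]
time.sleep(0.3)
add(1, 2)                         # entry expired, recomputed
assert calls == [(1, 2), (1, 2)]
```

Keying the cache on the positional argument tuple is what makes the repeated `foo(1, 2)` call a no-op in the examples above; keyword arguments and per-instance caches (the instance_method=True variant) need extra bookkeeping.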
https://pypi.python.org/pypi/Wraptor/0.6.0
Tree-structured menuing application for Django.

Project Description

This is a simple and generic tree-like menuing system for Django with an easy-to-use admin interface. It covers all the essentials for building tree-structured menus and should be enough for a lot of projects. It is also easily extendable if you need to add some special behaviour to your menu items. django-treemenus works with Django 1.0 and above and with Python 2.5 and above.

Installation

Installing an official release

django-treemenus is available on PyPI, and can be installed using Pip:

pip install django-treemenus

Alternatively, official source releases are made available at Download the .zip distribution file and unpack it. Inside is a script named setup.py. Run this command:

python setup.py install

…and the package will install automatically.

Installing the development version

If you prefer to update Django Tree Menus occasionally to get the latest bug fixes and improvements before they are included in an official release, do a git clone instead:

git clone

Then add the treemenus folder to your PYTHONPATH or symlink (junction, if you're on Windows), such as in your Python's site-packages directory.

Basic use

To build a menu, log into the admin interface, and click "Menus" under the Treemenus application section, then click "Add menu". Give your new menu a name and then save. Then, to create menu items, click on your menu in the menu list. You will then see a table in the bottom part of the page with only one item: the menu's root. Click "Add an item", select its parent (obviously, since this is the first item you're creating you can only select the root). Fill out the item's details and click "Save". The new item now shows up in the table. Now keep going to build the whole structure of your tree menu by creating as many branches as you like.
When you've finished building your menu from the admin interface, you will have to write the appropriate templates to display the menu on your site (see below).

Attributes and methods

As you've guessed, you can manipulate two types of objects: menus and menu items. In this section I present their attributes and methods, which you can use in your templates.

Customizing/Extending

The attributes and methods enumerated above provide the essential behaviour for a tree-structured menu. If that is not enough for you, it is also possible to add customized behaviour by extending the menu item definition. To do so, you need to create a model class that will contain all the extra attributes for your menu items. To illustrate this, let's say that you'd like to add a published attribute to your menu items so that they only show up on your site if published is turned to True. To do so, create a new application (let's call it menu_extension), with the following structure:

menu_extension
    __init__.py
    models.py
    forms.py

Then, in menu_extension/models.py add the following:

from django.db import models
from treemenus.models import MenuItem

class MenuItemExtension(models.Model):
    menu_item = models.OneToOneField(MenuItem, related_name="extension")
    published = models.BooleanField(default=False)

It is required that your extension object has the attribute menu_item that is a unique link to a menu item object. This is what makes the extension possible. Then you can notice our attribute published; feel free to add any other attribute there to customize your menu items. You then need to create the database table that will store your extension data by adding menu_extension to the INSTALLED_APPS setting of your Django project, and then running the following command from the root of your project:

python manage.py syncdb

Now, you need to specify a form to let you edit those extra attributes from the admin interface.
In your project's admin.py or your extension menu app's admin.py, add the following:

from django.contrib import admin
from treemenus.admin import MenuAdmin, MenuItemAdmin
from treemenus.models import Menu
from menu_extension.models import MenuItemExtension

class MenuItemExtensionInline(admin.StackedInline):
    model = MenuItemExtension
    max_num = 1

class CustomMenuItemAdmin(MenuItemAdmin):
    inlines = [MenuItemExtensionInline,]

class CustomMenuAdmin(MenuAdmin):
    menu_item_admin_class = CustomMenuItemAdmin

admin.site.unregister(Menu) # Unregister the standard admin options
admin.site.register(Menu, CustomMenuAdmin) # Register the new, customized, admin options

And that's it! Now, when creating or editing a menu item, you'll see an inline form with all the extension attributes (in this example, the published check box). Now, if you want to use the published attribute in your template, you need to use the menu item's extension method, as follows:

{% if menu_item.extension.published %}
    <li><a href="{{ menu_item.url }}">{{ menu_item.caption }}</a></li>
{% endif %}

Your menu items will now only appear if their published check box has been ticked. Using this technique, you can obviously extend your menu items with whatever attribute you'd like. Other examples might be that you want to add special CSS styles to certain menu items, or to make some of them show up only if the user is logged in, etc. Simply add attributes in your extension model and make use of them in your templates to create special behaviour. See the 'Tips and Tricks' section for more ideas.

Tips and tricks

In this section I give some examples on using or extending menus. These may just cover some of your own specific needs or at least inspire you and get you started to make the most out of your menus.

Internationalization

Making your menus multi-lingual is very easy if you use the Django internationalization module. What you can do is apply the translation to the caption attribute of a menu_item.
For example:

{% load i18n %}
...
<li><a href="{{ menu_item.url }}">{% trans menu_item.caption %}</a></li>

Then, manually add the translation entries in your *.po file. If you use more complex or custom translation systems, you may simply define your extension class (or create it if you don't already have one) with a method to manage the translation, for example:

class MenuItemExtension(models.Model):
    menu_item = models.OneToOneField(MenuItem, related_name="extension")
    ...

    def translation(self):
        translation = do_something_with(self.menu_item.caption)
        return translation

And then in your template:

<li><a href="{{ menu_item.url }}">{% trans menu_item.extension.translation %}</a></li>

Login restriction

If you want to make some of your menu items private and only available to logged in users, that's simple! Simply define your extension class (or create it if you don't already have one) like the following:

class MenuItemExtension(models.Model):
    menu_item = models.OneToOneField(MenuItem, related_name="extension")
    protected = models.BooleanField(default=False)
    ...

And then in your template:

{% if menu_item.extension.protected %}
    {% if user.is_authenticated %}
        <li><a href="{{ menu_item.url }}">{{ menu_item.caption }}</a></li>
    {% endif %}
{% else %}
    <li><a href="{{ menu_item.url }}">{{ menu_item.caption }}</a></li>
{% endif %}

(assuming that the context variable 'user' represents the currently logged-in user) That's it!!

Please log any issue or bug report at Enjoy!

Julien Phalip (project developer)
https://pypi.org/project/django-treemenus2/
I have created a very basic validator class. My base code is in my src/ folder, which gets autoloaded with "kevdotbadger\\Validator\\": "src/". This works fine, so that when I instantiate a new "kevdotbadger\Validator\Validator" it gives me src/Validator.php. My Validator.php class then loads a bunch of sub-classes in my src/Rules directory. These are magically loaded using __call, so ->between() should look for src/Rules/between.php. However, for some reason it won't load despite it being set up in my composer.json file. My whole codebase is available at Have I set up my namespace correctly? I think the problem might be with PHP version 5.3, however I need to use version 5.3. Thanks.
http://www.howtobuildsoftware.com/index.php/how-do/boF/php-namespaces-psr-0-psr-4-autoloading-nested-classes-with-composer
BatteryChargingState

Since: BlackBerry 10.0.0

#include <bb/device/BatteryChargingState>

To link against this class, add the following line to your .pro file: LIBS += -lbbdevice

The set of possible charging states for the battery.

Public Types

The set of possible charging states for the battery. Since: BlackBerry 10.0.0

- Unknown 0 Battery state could not be determined.
- NotCharging 1 Battery is plugged in, but is not receiving enough power to charge. Since: BlackBerry 10.0.0
- Charging 2 Battery is charging. Since: BlackBerry 10.0.0
- Discharging 3 Battery is not plugged in. Since: BlackBerry 10.0.0
- Full 4 Battery is plugged in and fully charged. Since: BlackBerry 10.0.0
http://developer.blackberry.com/native/reference/cascades/bb__device__batterychargingstate.html
> But this thread-local attribute on the function seems bizarre to me.
> I would prefer another way to get the errno. I can see two alternatives:
> - the function returns a tuple (normalresult, errno) on each call.
> - when errno is not zero, EnvironmentError (or WindowsError) is raised.

I'd strongly prefer NOT to add errno to the function return value.

Raising an Exception when errno or LastError != zero is wrong. There are functions that set the errno or LastError value even if they actually succeed.

The recommended way to check for errors that I had in mind is in the 'errcheck' result checker:

func = CDLL(..., errno=True)
func.argtypes = [...]
func.restype = ...

def errcheck(result, func, args):
    if result == -1: # function failed
        raise EnvironmentError(func.errno)

func.errcheck = errcheck

Of course, an alternative to a thread local storage attribute would be to pass the error value to the errcheck function. I just felt it would be better not to change the signature, but maybe I was wrong.

Anyway, this patch should be extended so that it is also possible to create a foreign function using the described calling convention from a prototype created by CFUNCTYPE or WINFUNCTYPE.
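For context, the design discussed here is essentially what ctypes later shipped as use_errno=True together with ctypes.get_errno(). A runnable sketch of the errcheck pattern on a Unix-like system follows; the choice of libc's read as the wrapped function is just an illustration, not part of the original message:

```python
import ctypes
import ctypes.util
import errno
import os

# Load libc with thread-local errno tracking enabled.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6", use_errno=True)

def errcheck(result, func, args):
    # read() signals failure with -1; fetch the thread-local errno then.
    if result == -1:
        e = ctypes.get_errno()
        raise OSError(e, os.strerror(e))
    return result

read = libc.read
read.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_size_t]
read.restype = ctypes.c_ssize_t
read.errcheck = errcheck

buf = ctypes.create_string_buffer(16)
try:
    read(-1, buf, 16)  # -1 is never a valid file descriptor
except OSError as exc:
    assert exc.errno == errno.EBADF
```

Because the error check lives in errcheck rather than in the call machinery itself, functions that set errno on success (as the message points out some do) are left alone.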
https://bugs.python.org/msg59835
This post is based on a talk I gave to the Sydney Go users group in mid April 2013 describing the Go build process.

Frequently on the mailing list or IRC channel there are requests for documentation on the details of the Go compiler, runtime and internals. Currently the canonical source of documentation about Go's internals is the source, which I encourage everyone to read. Having said that, the Go build process has been stable since the Go 1.0 release, so documenting it here will probably remain relevant for some time.

This post walks through the nine steps of the Go build process, starting with the source and ending with a fully tested Go installation. For simplicity, all paths mentioned are relative to the root of the source checkout, $GOROOT/src.

For background you should also read Installing Go from source on the golang.org website.

Step 1. all.bash

% cd $GOROOT/src
% ./all.bash

The first step is a bit anticlimactic as all.bash just calls two other shell scripts; make.bash and run.bash. If you're using Windows or Plan 9 the process is the same, but the scripts end in .bat or .rc respectively. For the rest of this post, please substitute the extension appropriate for your operating system.

Step 2. make.bash

. ./make.bash --no-banner

make.bash is sourced from all.bash so that calls to exit will terminate the build process properly. make.bash has three main jobs, the first job is to validate the environment Go is being compiled in is sane. The sanity checks have been built up over the last few years and generally try to avoid building with known broken tools, or in environments where the build will fail.

Step 3. cmd/dist

gcc -O2 -Wall -Werror -ggdb -o cmd/dist/dist -Icmd/dist cmd/dist/*.c

Once the sanity checks are complete, make.bash compiles cmd/dist. cmd/dist replaces the Makefile based system which existed before Go 1 and manages the small amounts of code generation in pkg/runtime.
cmd/dist is a C program which allows it to leverage the system C compiler and headers to handle most of the host platform detection issues. cmd/dist always detects your host's operating system and architecture, $GOHOSTOS and $GOHOSTARCH. These may differ from any value of $GOOS and $GOARCH you may have set if you are cross compiling. In fact, the Go build process is always building a cross compiler, but in most cases the host and target platform are the same.

Next, make.bash invokes cmd/dist with the bootstrap argument which compiles the supporting libraries, lib9, libbio and libmach, used by the compiler suite, then the compilers themselves. These tools are also written in C and are compiled by the system C compiler.

echo "# Building compilers and Go bootstrap tool for host, $GOHOSTOS/$GOHOSTARCH."
buildall="-a"
if [ "$1" = "--no-clean" ]; then
	buildall=""
fi
./cmd/dist/dist bootstrap $buildall -v # builds go_bootstrap

Using the compiler suite, cmd/dist then compiles a version of the go tool, go_bootstrap. The go_bootstrap tool is not the full go tool, for example pkg/net is stubbed out which avoids a dependency on cgo. The list of directories containing packages or libraries to be compiled, and their dependencies, is encoded in the cmd/dist tool itself, so great care is taken to avoid introducing new build dependencies for cmd/go.

Step 4. go_bootstrap

Now that go_bootstrap is built, the final stage of make.bash is to use go_bootstrap to compile the complete Go standard library, including a replacement version of the full go tool.

echo "# Building packages and commands for $GOOS/$GOARCH."
"$GOTOOLDIR"/go_bootstrap install -gcflags "$GO_GCFLAGS" \
	-ldflags "$GO_LDFLAGS" -v std

Step 5. run.bash

Now that make.bash is complete, execution falls back to all.bash, which invokes run.bash. run.bash's job is to compile and test the standard library, the runtime, and the language test suite.
bash run.bash --no-rebuild

The --no-rebuild flag is used because make.bash and run.bash can both invoke go install -a std, so to avoid duplicating the previous effort, --no-rebuild skips the second go install.

# allow all.bash to avoid double-build of everything
rebuild=true
if [ "$1" = "--no-rebuild" ]; then
	shift
else
	echo '# Building packages and commands.'
	time go install -a -v std
	echo
fi

Step 6. go test -a std

echo '# Testing packages.'
time go test std -short -timeout=$(expr 120 \* $timeout_scale)s
echo

The next job of run.bash is to run the unit tests for all the packages in the standard library, which are written using the testing package. Because code in $GOPATH and $GOROOT live in the same namespace, we cannot use go test ... as this would also test every package in $GOPATH, so an alias, std, was created to address the packages in the standard library. Because some tests take a long time, or consume a lot of memory, some tests filter themselves with the -short flag.

Step 7. runtime and cgo tests

The next section of run.bash runs a set of tests for platforms that support cgo, runs a few benchmarks, and compiles miscellaneous programs that ship with the Go distribution. Over time this list of miscellaneous programs has grown as it was found that when they were not included in the build process, they would inevitably break silently.

Step 8. go run test

(xcd ../test
unset GOMAXPROCS
time go run run.go
) || exit $?

The penultimate stage of run.bash invokes the compiler and runtime tests in the test folder directly under $GOROOT. These are tests of the low level details of the compiler and runtime itself. While the tests exercise the specification of the language, the test/bugs and test/fixedbugs sub directories capture unique tests for issues which have been found and fixed. The test driver for all these tests is $GOROOT/test/run.go which is a small Go program that runs each .go file inside the test directory.
Some .go files contain directives on the first line which instruct run.go to expect, for example, the program to fail, or to emit a certain output sequence.

Step 9. go tool api

echo '# Checking API compatibility.'
go tool api -c $GOROOT/api/go1.txt,$GOROOT/api/go1.1.txt \
	-next $GOROOT/api/next.txt -except $GOROOT/api/except.txt

The final step of run.bash is to invoke the api tool. The api tool's job is to enforce the Go 1 contract; the exported symbols, constants, functions, variables, types and methods that made up the Go 1 API when it shipped in 2012. For Go 1 they are spelled out in api/go1.txt, and for Go 1.1, api/go1.1.txt. An additional file, api/next.txt, identifies the symbols that make up the additions to the standard library and runtime since Go 1.1. Once Go 1.2 ships, this file will become the contract for Go 1.2, and there will be a new next.txt. There is also a small file, except.txt, which contains exceptions to the Go 1 contract which have been approved. Additions to the file are not expected to be taken lightly.

Additional tips and tricks

You've probably figured out that make.bash is useful for building Go without running the tests, and likewise, run.bash is useful for building and testing the Go runtime. This distinction is also useful as the former can be used when cross compiling Go, and the latter is useful if you are working on the standard library.

Update: Thanks to Russ Cox and Andrew Gerrand for their feedback and suggestions.
http://dave.cheney.net/2013/06/04/how-go-uses-go-to-build-itself
NAME

PRANG::Graph::Meta::Element - metaclass metarole for XML elements

SYNOPSIS

use PRANG::Graph;

has_element 'somechild' =>
    is => "rw",
    isa => "Some::Type",
    xml_required => 0,
    ;

# equivalent alternative - plays well with others!
has 'somechild' =>
    is => "rw",
    traits => [qw/PRANG::Element/],
    isa => "Some::Type",
    xml_required => 0,
    ;

DESCRIPTION

The PRANG concept is that attributes in your classes are marked to correspond with attributes and elements in your XML. This class is for marking your class' attributes as XML elements. For marking them as XML attributes, see PRANG::Graph::Meta::Attr.

Non-trivial elements - and this means elements which contain more than a single TextNode element within - are mapped to Moose classes. The child elements that are allowed within that class correspond to the attributes marked with the PRANG::Element trait, either via has_element or the Moose traits keyword.

Where it makes sense, as much as possible is set up from the regular Moose definition of the attribute. This includes the XML node name, the type constraint, and also the predicate.

If you like, you can also set the xmlns and xml_nodeName attribute property, to override the default behaviour, which is to assume that the XML element name matches the Moose attribute name, and that the XML namespace of the element is that of the enclosing class (ie, $class->xmlns), if defined.

The order of declaring element attributes is important. They implicitly define a "sequence". To specify a "choice", you must use a union sub-type - see below. Care must be taken with bundling element attributes into roles as ordering when composing is not defined.

The predicate property of the attribute is also important. If you do not define predicate, then the attribute is considered required. This can be overridden by specifying xml_required (it must be defined to be effective).

The isa property (type constraint) you set via 'isa' is required. The behaviour for major types is described below.
The module knows about sub-typing, and so if you specify a sub-type of one of these types, then the behaviour will be as for the type on this list. Only a limited subset of higher-order/parametric/structured types are permitted as described.

- Bool sub-type

If the attribute is a Bool sub-type (er, or just "Bool"), then the element will marshall to the empty element if true, or no element if false. The requirement that predicate be defined is relaxed for Bool sub-types. ie, Bool will serialise to:

<object>
  <somechild />
</object>

For true and

<object>
</object>

For false.

- Scalar sub-type

If it is a Scalar subtype (eg, an enum, a Str or an Int), then the value of the Moose attribute is marshalled to the value of the element as a TextNode; eg

<somechild>somevalue</somechild>

- Object sub-type

If the attribute is an Object subtype (ie, a Class), then the element is serialised according to the definition of the Class defined. eg, with;

{
    package CD;
    use Moose;
    use PRANG::Graph;
    has_element 'author' => qw( is rw isa Person );
    has_attr 'name' => qw( is rw isa Str );
}
{
    package Person;
    use Moose;
    use PRANG::Graph;
    has_attr 'group' => qw( is rw isa Bool );
    has_attr 'name' => qw( is rw isa Str );
    has_element 'deceased' => qw( is rw isa Bool );
}

Then the object;

CD->new(
    name => "2Pacalypse Now",
    author => Person->new(
        group => 0,
        name => "Tupac Shakur",
        deceased => 1,
    )
);

Would serialise to (assuming that there is a PRANG::Graph document type with cd as a root element):

<cd name="2Pacalypse Now">
  <author group="0" name="Tupac Shakur">
    <deceased />
  </author>
</cd>

- ArrayRef sub-type

An ArrayRef sub-type indicates that the element may occur multiple times at this point. Bounds may be specified directly - the xml_min and xml_max attribute properties. Higher-order types are supported; in fact, to not specify the type of the elements of the array is a big no-no. If xml_nodeName is specified, it refers to the items; no array container node is expected.

For example;
For example;

    has_attr 'name' =>
        is => "rw",
        isa => "Str",
        ;
    has_attr 'releases' =>
        is => "rw",
        isa => "ArrayRef[CD]",
        xml_min => 0,
        xml_nodeName => "cd",
        ;

Assuming that this property appeared in the definition for 'artist', and that CD has_attr 'title'..., it would let you parse:

    <artist>
      <name>The Headless Chickens</name>
      <cd title="Stunt Clown">...</cd>
      <cd title="Body Blow">...</cd>
      <cd title="Greedy">...</cd>
    </artist>

You cannot (currently) Union an ArrayRef type with other simple types.

- Union types

Union types are special; they indicate that any one of the types indicated may be expected next. By default, the name of the element is still the name of the Moose attribute, and if it happens that a particular element may just be repeated any number of times, this is fine. However, this can be inconvenient in the typical case where the alternation is between a set of elements which are allowed in the particular context, each corresponding to a particular Moose type. Another case is mixed XML, where there may be text, then XML fragments, more text, more XML, etc.

There are two relevant questions to answer. When marshalling OUT, we want to know what element name to use for the attribute in the slot. When marshalling IN, we need to know what element names are allowable, and potentially which sub-type to expect for a particular element name.

After applying much DWIMery, the following scenarios arise;

- 1:1 mapping from Type to Element name

This is often the case for message containers that allow any number of a collection of classes inside. For this case, a map must be provided to the xml_nodeName function, which allows marshalling in and out to proceed.

    has_element 'message' =>
        is => "rw",
        isa => "my::unionType",
        xml_nodeName => {
            "nodename" => "TypeA",
            "somenode" => "TypeB",
        };

It is an error if types are repeated in the map. The empty string can be used as a node name for text nodes; otherwise they are not allowed.
This case is made of win because no extra attributes are required to help the marshaller; the type of the data is enough. An example of this in practice;

    subtype "My::XML::Language::choice0" =>
        as join("|", map { "My::XML::Language::$_" } qw( CD Store Person ) );

    has_element 'things' =>
        is => "rw",
        isa => "ArrayRef[My::XML::Language::choice0]",
        xml_nodeName => +{ map {( lc($_) => $_ )} qw(CD Store Person) },
        ;

This would allow the enclosing class to have a 'things' property, which contains all of the elements at that point, which can be cd, store or person elements. In this case, it may be preferable to pass a role name as the element type, and let this module construct the xml_nodeName map itself.

- more types than element names

This happens when some of the types have different XML namespaces; the type of the node is indicated by the namespace prefix. In this case, you must supply a namespace map, too.

    has_element 'message' =>
        is => "rw",
        isa => "my::unionType",
        xml_nodeName => {
            "trumpery:nodename" => "TypeA",
            "rubble:nodename" => "TypeB",
            "claptrap:nodename" => "TypeC",
        },
        xml_nodeName_prefix => {
            "trumpery" => "uri:type:A",
            "rubble" => "uri:type:B",
            "claptrap" => "uri:type:C",
        },
        ;

FIXME: this is currently unimplemented.

- more element names than types

This can happen for two reasons: one is that the schema that this element definition comes from is re-using types. Another is that you are just accepting XML without validation (eg, XMLSchema's processContents="skip" property). In this case, there needs to be another attribute which records the names of the node.

    has_element 'message' =>
        is => "rw",
        isa => "my::unionType",
        xml_nodeName => {
            "nodename" => "TypeA",
            "somenode" => "TypeB",
            "someother" => "TypeB",
        },
        xml_nodeName_attr => "message_name",
        ;

If any node name is allowed, then you can simply pass in * as an xml_nodeName value.
- more namespaces than types

The principal use of this is PRANG::XMLSchema::Whatever, which converts arbitrarily namespaced XML into objects. In this case, another attribute is needed, to record the XML namespaces of the elements.

    has 'nodenames' =>
        is => "rw",
        isa => "ArrayRef[Maybe[Str]]",
        ;
    has 'nodenames_xmlns' =>
        is => "rw",
        isa => "ArrayRef[Maybe[Str]]",
        ;
    has_element 'contents' =>
        is => "rw",
        isa => "ArrayRef[PRANG::XMLSchema::Whatever|Str]",
        xml_nodeName => { "" => "Str", "*" => "PRANG::XMLSchema::Whatever" },
        xml_nodeName_attr => "nodenames",
        xmlns => "*",
        xmlns_attr => "nodenames_xmlns",
        ;

FIXME: this is currently unimplemented.

- unknown/extensible element names and types

These are indicated by specifying a role. At the time that the PRANG::Graph::Node is built for the attribute, the currently available implementors of these roles are checked; they must all implement PRANG::Graph. They are treated as if there were an xml_nodeName entry for the class, mapping the root_element value for the class to the type. This allows writing extensible schemas.

SEE ALSO

PRANG::Graph::Meta::Attr, PRANG::Graph::Meta::Element, PRANG::Graph::Node

AUTHOR AND LICENCE

Development commissioned by NZ Registry Services, and carried out by Catalyst IT. Copyright 2009, 2010, NZ Registry Services.

This module is licensed under the Artistic License v2.0, which permits relicensing under other Free Software licenses.
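The Bool-to-empty-element and Scalar-to-TextNode conventions above are not Perl-specific. Here is a minimal Python sketch of the same mapping rules - this is not PRANG, just an illustration of how declared attribute order defines a sequence and how a boolean becomes an empty element; all names are hypothetical:

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass, fields

@dataclass
class Person:
    # Declaration order defines the element "sequence", as in PRANG.
    name: str
    deceased: bool = False

def to_xml(obj, tag):
    """Serialise a dataclass the way PRANG maps Moose attributes:
    a Bool becomes an empty element if true and is absent if false;
    a Scalar becomes TextNode content."""
    el = ET.Element(tag)
    for f in fields(obj):
        value = getattr(obj, f.name)
        if isinstance(value, bool):
            if value:
                ET.SubElement(el, f.name)  # e.g. <deceased />
        else:
            ET.SubElement(el, f.name).text = str(value)
    return el

xml_bytes = ET.tostring(to_xml(Person("Tupac Shakur", deceased=True), "person"))
print(xml_bytes)
```

The false case simply omits the element, matching the `<object></object>` serialisation shown earlier.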
https://metacpan.org/pod/PRANG::Graph::Meta::Element
CC-MAIN-2015-22
refinedweb
1,431
50.36
[adding gnulib]

On 09/04/2012 10:52 AM, Jasper Lievisse Adriaanse wrote:
>> I'd still like to know the compiler error you got when <sys/socket.h>
>> was not present, but just guessing from the source code, I see one call
>> of socket() (protected behind #if defined(HAVE_NET_IF_H) &&
>> defined(SIOCBRADDBR), but maybe those are both true for OpenBSD?). Even
>> though I'm pushing, I would STILL like to know why.
>
> Of course, here it is:
>
> In file included from util/virnetdevbridge.c:35:
> /usr/include/net/if.h:276: warning: 'struct sockaddr' declared inside
> parameter list

Ouch. The POSIX definition of <net/if.h> doesn't include any interface that needs to use struct sockaddr. Which OpenBSD extension function is triggering this warning?

According to POSIX, this .c file should compile:

    #define _POSIX_C_SOURCE 200809L
    #include <net/if.h>
    #include <sys/socket.h>
    struct if_nameindex i;

and it might just compile on OpenBSD (I haven't checked myself); the difference is that we have explicitly asked for namespace pollution beyond what _POSIX_C_SOURCE guarantees, which may explain why 'struct sockaddr' is interfering. But since <net/if.h> is required to be self-contained when in a strict environment, it makes sense for it to also be self-contained in an extension environment.

It sounds like gnulib should consider providing a replacement <net/if.h> to work around this lameness.

--
Eric Blake   address@hidden   +1-919-301-3266
Libvirt virtualization library

signature.asc
Description: OpenPGP digital signature
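As an aside, the POSIX if_nameindex interface that the test program above exercises is also exposed by Python's standard socket module on Unix, which makes for a quick interactive portability check (a sketch; the interface names printed depend on the machine):

```python
import socket

# POSIX if_nameindex(): one (index, name) pair per network interface,
# e.g. [(1, 'lo'), (2, 'eth0')] on a typical Linux box.
interfaces = socket.if_nameindex()

for index, name in interfaces:
    print(index, name)
```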
https://lists.gnu.org/archive/html/bug-gnulib/2012-09/msg00026.html
CC-MAIN-2019-35
refinedweb
246
59.6
nng_ctx_get(3)

NAME

nng_ctx_get - get context option

SYNOPSIS

    #include <nng/nng.h>

    int nng_ctx_get(nng_ctx ctx, const char *opt, void *val, size_t *valszp);
    int nng_ctx_get_bool(nng_ctx ctx, const char *opt, bool *bvalp);
    int nng_ctx_get_int(nng_ctx ctx, const char *opt, int *ivalp);
    int nng_ctx_get_ms(nng_ctx ctx, const char *opt, nng_duration *durp);
    int nng_ctx_get_size(nng_ctx ctx, const char *opt, size_t *zp);
    int nng_ctx_get_string(nng_ctx ctx, const char *opt, char **strp);
    int nng_ctx_get_uint64(nng_ctx ctx, const char *opt, uint64_t *u64p);

DESCRIPTION

The nng_ctx_get() functions are used to retrieve option values for the context ctx. The actual options that may be retrieved in this way vary. A number of them are documented in nng_options(5).

Forms

In all of these forms, the option opt is retrieved from the context ctx. The forms vary based on the type of the option they take. The details of the type, size, and semantics of the option will depend on the actual option, and will be documented with the option itself.

nng_ctx_get()

This function is untyped and can be used to retrieve the value of any option. The caller must ensure that the size returned in valszp does not exceed the original buffer size. It is acceptable to pass NULL for val if the value in valszp is zero. This can be used to determine the size of the buffer needed to receive the object.

nng_ctx_get_bool()

This function is for options which take a Boolean (bool). The value will be stored at bvalp.

nng_ctx_get_int()

This function is for options which take an integer (int). The value will be stored at ivalp.

nng_ctx_get_ms()

This function is used to retrieve time durations (such as timeouts), stored in durp as a number of milliseconds. (The special value NNG_DUR_INFINITE means an infinite amount of time, and the special value NNG_DUR_DEFAULT means a context-specific default.)
nng_ctx_get_size()

This function is used to retrieve a size into the pointer zp, typically for buffer sizes, message maximum sizes, and similar options.

nng_ctx_get_string()

This function is used to retrieve a string into strp. This string is created from the source using nng_strdup() and consequently must be freed by the caller using nng_strfree() when it is no longer needed.

nng_ctx_get_uint64()

This function is used to retrieve a 64-bit unsigned value into the value referenced by u64p. This is typically used for options related to identifiers, network numbers, and similar.

RETURN VALUES

These functions return 0 on success, and non-zero otherwise.
https://nng.nanomsg.org/man/tip/nng_ctx_get.3.html
When playing with a new bit of language, it can be helpful to restrict the problem space to an old, well understood algorithm. For me at least, learning one thing at a time is easier! For this post, it'll be prime sieves, and I'll be exploring Clojure reducers.

A quick recap: the sieve of Eratosthenes is a not-maximally-non-optimal way of finding primes. It's usually expressed as follows:

    To find primes below n:
      generate a list of n integers greater than 1
      while the list is not empty:
        take the head of the list and:
          add it to the output
          remove all numbers evenly divisible by it from the list

In Clojure, something like:

    (defn sieve
      ([n] (sieve [] (range 2 n)))
      ([primes xs]
       (if-let [prime (first xs)]
         (recur (conj primes prime)
                (remove #(zero? (mod % prime)) xs))
         primes)))

    (sieve 10)
    ;= [2 3 5 7]

Which is fine, but I'd like it lazy so I only pay for what I use, and I can use as much as I'm willing to pay for. Let's look at lazy sequences. Luckily for us, there is an example of exactly this in the lazy-seq documentation, which we slightly modify like so:

    (defn lazy-sieve [s]
      (cons (first s)
            (lazy-seq (lazy-sieve (remove #(zero? (mod % (first s))) (rest s))))))

    (defn primes []
      (lazy-seq (lazy-sieve (iterate inc 2))))

    (take 5 (primes))
    ;= (2 3 5 7 11)

So now we have a nice generic source of primes that grows only as we take more. But is there another way?

A few months ago Rich Hickey introduced reducers. By turning the concept of 'reducing' inside out, the new framework allows a parallel reduce (fold) in some circumstances. Which doesn't apply here. But let's see if we can build a different form of sieve using the new framework.

First a quick overview (cribbing from the original blog post): collections are now 'reducible', in that they implement a reduce protocol. Filter, map, etc. are implemented as functions that can be applied by a reducible to itself to return another reducible, but lazily, and possibly in parallel.
So in the example below we have a reducible (a vector) that maps inc over itself to return a reducible, which is then wrapped with a filter on even?, returning a further reducible that reduce then collects with +.

    (require '[clojure.core.reducers :as r])

We'll be referring to r here and there - just remember it's the clojure.core.reducers namespace.

    (reduce + (r/filter even? (r/map inc [1 1 1 2])))
    ;= 6

These are composable, so we can build 'recipes'.

    ;; red is a reducer awaiting a collection
    (def red (comp (r/filter even?) (r/map inc)))
    (reduce + (red [1 1 1 2]))
    ;= 6

into uses reduce internally, so we can use it to build collections instead of reducing:

    (into [] (r/filter even? (r/map inc [1 1 1 2])))
    ;= [2 2 2]

So here's the core of 'reducer', which "Given a reducible collection, and a transformation function xf, returns a reducible collection, where any supplied reducing fn will be transformed by xf. xf is a function of reducing fn to reducing fn."

    (defn reducer
      ([coll xf]
       (reify
         clojure.core.protocols/CollReduce
         (coll-reduce [_ f1 init]
           (clojure.core.protocols/coll-reduce coll (xf f1) init)))))

And we can then use that to implement mapping like so:

    (defn mapping [f]
      (fn [f1]
        (fn [result input]
          (f1 result (f input)))))

    (defn rmap [f coll]
      (reducer coll (mapping f)))

    (reduce + 0 (rmap inc [1 2 3 4]))
    ;= 14

Fine. So what about sieves? One thought is we could build up a list of composed filters, built as new primes are found (see the lazy-seq example above). But there's no obvious place to do the building, as applying the reducing functions is left to the reducible implementation.

Another possibility is to introduce a new type of reducing function, the 'progressive filter', which keeps track of past finds and can filter against them.

    (defn prog-filter [f]
      (let [flt (atom [])]
        (fn [f1]
          (fn [result input]
            (if (not-any? #(f input %) @flt)
              (do (swap! flt conj input)
                  (f1 result input))
              result)))))

    (defn progressive-filter [f coll]
      (reducer coll (prog-filter f)))

And we then reduce with a filtering function that is a function of the current candidate and one of the list of found primes (see the #(f input %) bit above):

    (into [] (progressive-filter #(zero? (mod %1 %2)) (range 2 10)))
    ;= [2 3 5 7]

It's nicely lazy, so we can use iterate to generate integers, and take only a few (r/take, as it's operating on a reducer):

    (into [] (r/take 5 (progressive-filter #(zero? (mod %1 %2)) (iterate inc 2))))
    ;= [2 3 5 7 11]

Or even:

    (def primes (progressive-filter #(zero? (mod %1 %2)) (iterate inc 2)))
    (into [] (r/take 5 primes))
    ;= [2 3 5 7 11]

You get the idea.
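For readers more at home outside Clojure, the same "progressive filter" trick - a stateful filter that accumulates every item it lets through - translates directly to a Python generator. A rough sketch of the idea, not part of the original post:

```python
import itertools

def progressive_filter(pred, iterable):
    """Yield items that fail `pred` against every previously kept item -
    the same stateful filter the post builds with an atom."""
    kept = []
    for candidate in iterable:
        if not any(pred(candidate, old) for old in kept):
            kept.append(candidate)
            yield candidate

# Sieving: drop any candidate divisible by an already-found prime.
primes = progressive_filter(lambda n, p: n % p == 0, itertools.count(2))
print(list(itertools.islice(primes, 5)))  # → [2, 3, 5, 7, 11]
```

Like the reducer version, it is lazy: nothing past the fifth prime is ever examined.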
https://tech.labs.oliverwyman.com/blog/2013/07/31/expanding-reducers/
With the release and announcement of Visual Studio 2012 RC, you get, among other things, an updated version of C++ AMP. Our goal was that you could recompile your Visual Studio 11 Beta projects with the Visual Studio 2012 RC bits and see no breaking changes in your usage of C++ AMP - we came extremely close to that goal. Read on.

If you are wondering what is new for C++ AMP in RC (besides numerous bug fixes), the biggest item is performance. We have a couple of performance items that did not make it into the RC (and are on track for the RTM version), but most of our planned performance improvements are in the RC bits that you can download today. As a reminder, there are a couple of useful links for measuring performance with C++ AMP. Recompile, re-measure the performance, and let us know if you think there are areas we can still improve.

Beyond performance, we have made one medium API change to improve asynchronous operations, which we already shared on our blog. We hope you like what we've done there...

In the Beta we introduced for the first time amp_graphics.h and the concurrency::graphics namespace, with Short Vector Types and Texture support. In the RC you'll find a behavioral change, including a rename of bits_per_channel (now bits_per_scalar_element). All our related blog posts have been updated, and I encourage you to start with this post and follow the links from there.

We have re-read all our C++ AMP blog posts, including re-testing all the samples, to ensure everything is still valid with the latest Release Candidate. So if you are new here, start with the links on the side and read our blog content, the de facto reference for learning C++ AMP.

As usual, your comments are welcome in our MSDN Forum.
http://blogs.msdn.com/b/nativeconcurrency/archive/2012/05/31/what-is-new-in-the-release-candidate-for-c-amp.aspx
- 19 times out of 20 we already have dynflags in scope. We could just always use `return dflags`. But this is in fact not free. When looking at some STG code I noticed that we always allocate a closure for this expression in the heap. Clearly a waste in these cases. For the other cases we can either just modify the call site to get dynflags or use the _D variants of withTiming I added, which will use getDynFlags under the hood.

- 28 Aug, 2019 - 1 commit

  This generalizes code generators (outputAsm, outputLlvm, outputC, and the call site codeOutput) so that they'll return the return values of the passed Cmm streams. This allows accumulating data during Cmm generation and returning it to the call site in HscMain. Previously the Cmm streams were assumed to return (), so the code generators returned () as well. This change is required by !1304 and !1530. Skipping CI as this was tested before and I only updated the commit message. [skip ci]

- 14 Jul, 2019 - 1 commit

  John Ericson authored: Instead, following @angerman's suggestion, put them in the config file. Maybe we could re-key llvm-targets someday, but this is good for now.

- 24 Jun, 2019 - 1 commit

  LLVM version numbering changed recently. Previously, releases were numbered 4.0, 5.0 and 6.0, but with version 7 they dropped the redundant ".0". Fix requires for LLVM detection and some code.

- 15 Mar, 2019 - 1 commit

  Ryan Scott authored: This moves all URL references to Trac tickets to their corresponding GitLab counterparts.

- 22 Nov, 2018 - 1 commit

  Gabor Greif authored: only concat once

- 15 Jan, 2017 - 1 commit
https://gitlab.haskell.org/nineonine/ghc/-/commits/ed62b6657f71cfe88b170ba55ce534f8026e944f/compiler/llvmGen/LlvmCodeGen.hs
rabacus 0.9.5

Calculates analytic cosmological radiative transfer solutions in simplified geometries.

Description

Rabacus is a Python package for performing analytic radiative transfer calculations in simple geometries relevant to cosmology and astrophysics. It also contains tools to calculate cosmological quantities such as the power spectrum and mass function.

Prerequisites

The Rabacus package requires three other Python packages and a Fortran compiler:

- Scipy
- Numpy (version 1.7 or later)
- Quantities
- Fortran compiler in your path

Installing prerequisites with pip

A simple way to install Python packages is using the package manager pip. To check if you have pip installed on your system, type pip at the command line,

    pip

If this produces usage instructions then congratulations, you have pip installed. If not, instructions for installing pip can be found here.

To check if the Python packages are installed on your system, attempt to import them from the Python command prompt,

    >>> import scipy
    >>> import numpy as np
    >>> import quantities as pq

If any of these import commands produces an error message you will need to install the proper software before installing Rabacus. Once you have access to pip, you can install any missing prerequisites using the following commands,

    sudo pip install scipy
    sudo pip install numpy
    sudo pip install quantities

If you do not have root access on your system you can pass the --user flag, which will install the packages into a hidden folder called .local in your home directory,

    pip install --user scipy
    pip install --user numpy
    pip install --user quantities

Installing prerequisites on Debian (Ubuntu)

On Debian based systems (such as Ubuntu) you may prefer to install these prerequisites using the APT tool,

    sudo apt-get install python-scipy python-numpy python-quantities

To increase the speed of execution, much of Rabacus is written in Fortran 90 and then wrapped using the f2py tool that is part of numpy.
For the installation to be successful, a Fortran compiler must be in your executable path. If you don't already have one, I recommend the GNU Fortran compiler gfortran. On Debian based systems (such as Ubuntu) you can install this compiler using the APT tool,

    sudo apt-get install gfortran

Installation

With the prerequisites installed on your system, you are ready to install the Rabacus package itself.

Setting the F90 environment variable

Rabacus makes use of OpenMP directives in the Fortran code base, so we have to make sure the code is compiled correctly. In order to do this, you have to let the build system know what Fortran 90 compiler you are going to be using. The simplest way to do this is to set the environment variable F90 before following the installation instructions below. Rabacus has been tested with the Intel compiler and the GNU gfortran compiler. For other compilers you will have to follow the Manual Install instructions below.

To use the gfortran compiler, type the following at the command line (in Bash),

    export F90=gfortran

To use the Intel compiler, set

    export F90=ifort

Single command install

If you have made the appropriate sacrifices to the computer gods, you should be able to install an OpenMP enabled version of Rabacus with a single command line call to pip,

    sudo pip install rabacus

As was the case for the prerequisites, if you do not have root access on your system you can pass the --user flag, which will install Rabacus into a hidden folder called .local in your home directory,

    pip install --user rabacus

If the last two lines printed to the screen are,

    Successfully installed rabacus
    Cleaning up...

then congratulations, you have a working copy of Rabacus. To double check, begin an ipython session and attempt an import,

    import rabacus as ra

Packages installed with pip can be uninstalled in the same way,

    pip uninstall rabacus

Manual install

If the above process fails for any reason, we can always download Rabacus and manually invoke the setup script.
The first step is to download and untar the Rabacus tar.gz file from the PyPI site and change into the main Rabacus directory,

    gunzip rabacus-x.x.x.tar.gz
    tar xvf rabacus-x.x.x.tar
    cd rabacus-x.x.x

Now we have direct access to the setup.py file, which gives us a lot more freedom, but it comes at the cost of slightly more complexity. First it's a good idea to see which Fortran compilers are detected on your machine. The following command will list all of the Fortran compilers found on your system and all the compilers available for your system but not found,

    f2py -c --help-fcompiler

For example, on my machine I get the following,

    Fortran compilers found:
      --fcompiler=gnu95    GNU Fortran 95 compiler (4.8.1-10)
      --fcompiler=intelem  Intel Fortran Compiler for 64-bit apps (14.0.2.144)
    Compilers available for this platform, but not found:
      --fcompiler=absoft   Absoft Corp Fortran Compiler
      --fcompiler=compaq   Compaq Fortran Compiler
      --fcompiler=g95      G95 Fortran Compiler
      --fcompiler=gnu      GNU Fortran 77 compiler
      --fcompiler=intel    Intel Fortran Compiler for 32-bit apps
      --fcompiler=intele   Intel Fortran Compiler for Itanium apps
      --fcompiler=lahey    Lahey/Fujitsu Fortran 95 Compiler
      --fcompiler=nag      NAGWare Fortran 95 Compiler
      --fcompiler=pathf95  PathScale Fortran Compiler
      --fcompiler=pg       Portland Group Fortran Compiler
      --fcompiler=vast     Pacific-Sierra Research Fortran 90 Compiler

Now we decide which of the Fortran compilers to use and which flags to pass the build command. Suppose you wanted to use the Intel compiler. Edit the setup.py file such that the variable f90_flags is a list of compile flags and omp_lib is a list containing the linking flags. For example,

    f90_flags = ["-openmp", "-fPIC", "-xHost", "-O3", "-ipo",
                 "-funroll-loops", "-heap-arrays", "-mcmodel=medium"]
    omp_lib = ["-liomp5"]

These variables are already defined near the top of the setup.py file and will need to be overwritten.
Once this is done, we give the build command to the setup.py script,

    python setup.py build --fcompiler=intelem

After the package is built, give the install command to actually install it,

    sudo python setup.py install --record rabacus_install_files.txt

The last part of the command is to allow for easy uninstall. This process just involves deleting all installed files, which will be listed in the file rabacus_install_files.txt. This can be accomplished using the following command,

    cat rabacus_install_files.txt | xargs sudo rm -rf

The install can also be done locally for those without root permission on their system by passing the --user flag to the install command,

    python setup.py install --user --record rabacus_install_files.txt

Note that if you previously did an install of Rabacus that required the sudo command, you will likely need to delete the rabacus.egg-info directory and some directories inside the build directory, as they will need to be modified but will be owned by root. If you are only doing a local install then this shouldn't be necessary.

This procedure should work for any Fortran compiler supported by f2py (i.e. any compiler in the list returned when using the --help-fcompiler flag).

Testing install

Detailed examples of using Rabacus are available by following the link to the users guide below. However, we present a short example with the expected output below as a way to quickly test that a new installation has basic functionality. We first import Rabacus and then create an object that gives access to the metagalactic radiation background described in Haardt & Madau 2012. Finally, we ask for the photo-heating rate of He I at a redshift of 3.0.

    import rabacus as ra
    hm12 = ra.HM12_Photorates_Table()
    z = 3.0
    print hm12.He1h(z)

The expected output from a working Rabacus installation is given below. Note that there may be differences in the last significant figure due to different processor architectures.
    3.39163517433e-12 eV/s

Project URLs

- PyPI
- documentation
- version control

Downloads (All Versions):

- 0 downloads in the last day
- 0 downloads in the last week
- 87 downloads in the last month

Author: Gabriel Altay
Documentation: rabacus package documentation
License: Free BSD
Platform: linux

Categories

- Development Status :: 4 - Beta
- Intended Audience :: Science/Research
- License :: OSI Approved :: BSD License
- Natural Language :: English
- Operating System :: POSIX :: Linux
- Programming Language :: Fortran
- Programming Language :: Python
- Topic :: Education
- Topic :: Scientific/Engineering :: Astronomy
- Topic :: Scientific/Engineering :: Physics

Package Index Owner: gabriel.altay
DOAP record: rabacus-0.9.5.xml
https://pypi.python.org/pypi/rabacus
A Python wrapper around the cmprsk R package.

Project description

cmprsk - Competing Risks Regression

Regression modeling of sub-distribution functions in competing risks. A Python wrapper around the cmprsk R package.

Description: Estimation, testing and regression modeling of sub-distribution functions in competing risks.

Original package documentation

Requirements

This package uses rpy2 in order to import the cmprsk R package, and therefore the requirements for rpy2 must be met.

TL;DR

- Unix like OS: Linux, MacOS, BSD. (May work on Windows; look at rpy2 binaries.)
- python >= 3.5
- R >= 3.3 (how to install R)
- readline 7.0 - should be installed as part of rpy2 (how to install on MacOS; see also the following issue)
- The cmprsk R library (open the R console and run install.packages('cmprsk'))

Quickstart

Example: crr

    import pandas as pd
    import cmprsk.cmprsk as cmprsk
    from cmprsk import utils

    data = pd.read_csv('my_data_file')
    # assuming that x1, x2, x3, x4 are covariates,
    # x1 and x4 are categorical, with baseline 'd' for x1 and 5 for x4
    static_covariates = utils.as_indicators(
        data[['x1', 'x2', 'x3', 'x4']], ['x1', 'x4'], bases=['d', 5]
    )
    crr_result = cmprsk.crr(ftime, fstatus, static_covariates)
    report = crr_result.summary
    print(report)

ftime and fstatus can be numpy arrays or pandas series, and static_covariates is a pandas DataFrame. The report is a pandas DataFrame as well.
Example: cuminc

    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd

    from cmprsk import cmprsk

    data = pd.read_csv('cmprsk/cmprsk/tests/test_set.csv')
    print(data)

    cuminc_res = cmprsk.cuminc(data.ss, data.cc, group=data.gg, strata=data.strt)

    # print
    cuminc_res.print

    # plot using matplotlib
    _, ax = plt.subplots()
    for name, group in cuminc_res.groups.items():
        ax.plot(group.time, group.est, label=name)
        ax.fill_between(group.time, group.low_ci, group.high_ci, alpha=0.4)
    ax.set_ylim([0, 1])
    ax.legend()
    ax.set_title('foo bar')
    plt.show()

How to update the package:

- update the version in setup.py
- remove the dist directory: rm -fr dist
- python setup.py sdist bdist_wheel
- twine upload dist/* --verbose
https://pypi.org/project/cmprsk/
SETREUID(2)                Linux Programmer's Manual                SETREUID(2)

NAME
       setreuid, setregid - set real and/or effective user or group ID

RETURN VALUE
       On success, zero is returned. On error, -1 is returned, and errno is
       set appropriately.

       Note: there are cases where setreuid() can fail even when the caller
       is UID 0; it is a grave security error to omit checking for a failure
       return from setreuid().

CONFORMING TO
       POSIX.1-2001, 4.3BSD (the setreuid() and setregid() function calls
       first appeared in 4.2BSD).

NOTES
       Setting the effective user (group) ID to the saved set-user-ID (saved
       set-group-ID) is possible since Linux 1.1.37 (1.1.38).

SEE ALSO
       getgid(2), getuid(2), seteuid(2), setgid(2), setresuid(2), setuid(2),
       capabilities(7), user_namespaces(7)

COLOPHON
       This page is part of release 3.80 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and the
       latest version of this page, can be found at.

Linux                            2014-09-21                         SETREUID(2)
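The failure-checking advice above applies from high-level languages too; Python exposes the same system call as os.setreuid, which raises OSError instead of returning -1. A small sketch - the value -1 passed for either ID leaves that ID unchanged, so the call below is a safe no-op even for an unprivileged process:

```python
import os

def set_ids_or_die(real_uid, effective_uid):
    """Call setreuid and treat failure as fatal - the man page warns
    that even UID 0 callers must check for a failure return."""
    try:
        os.setreuid(real_uid, effective_uid)
    except OSError as exc:
        raise SystemExit(f"setreuid failed: {exc}")

# -1 means "leave this ID unchanged".
set_ids_or_die(-1, -1)
print(os.getuid(), os.geteuid())
```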
http://man7.org/linux/man-pages/man2/setreuid.2.html
Common Python Security Pitfalls and How to Avoid Them

Being such a widely-used language makes Python a target for malicious hackers. Let's see a few ways to secure your Python apps and keep the black-hats at bay.

Introduction

Python is undoubtedly a popular language. It consistently ranks among the most popular and most loved languages year after year. That's not hard to explain, considering how fluent and expressive it is. Its pseudocode-like syntax makes it extremely easy for beginners to pick it up as their first language, while its vast library of packages (including the likes of giants like Django and TensorFlow) ensures that it scales up for any task required of it.

Being such a widely-used language makes Python a very attractive target for malicious hackers. Let's see a few simple ways to secure your Python apps and keep the black-hats at bay.

Problems and Solutions

Python places a lot of importance on zen, or developer happiness. The clearest evidence of that lies in the fact that the guiding principles of Python are summarized in a poem. Try `import this` in a Python shell to read it. Here are some security concerns that might disturb your zen, along with solutions to restore it to a state of calm.

Unsafe Deserialization

OWASP Top Ten, a basic checklist for web security, mentions unsafe deserialization as one of the ten most common security flaws. While it's common knowledge that executing anything coming from the user is a terrible idea, serializing and deserializing user input does not seem equally serious. After all, no code is being run, right? Wrong.

PyYAML is the de-facto standard for YAML serialization and deserialization in Python. The library supports serializing custom data types to YAML and deserializing them back to Python objects. See this serialization code here and the YAML produced by it. Deserializing this YAML gives back the original data type.
    $ python deserialize.py
    <Person Dhruv - 24>

As you can see, the line `!!python/object:__main__.Person` in the YAML describes how to re-instantiate objects from their text representations. But this opens up a slew of attack vectors that can escalate to RCE when this instantiation can execute code.

Solution

The solution, as trivial as it may seem, is to use safe loading, by swapping out the `yaml.Loader` loader in favor of `yaml.SafeLoader`. This loader is safer because it completely blocks the loading of custom classes.

    $ python deserialize.py
    ConstructorError: could not determine a constructor for the tag
    'tag:yaml.org,2002:python/object:__main__.Person'
      in "person.yml", line 1, column 1

Standard types like hashes and arrays can still be serialized to and deserialized from YAML documents just like before. Most people, probably including you, won't even realize the difference.

    age: 24
    name: Dhruv

    $ python deserialize.py
    {'age': 24, 'name': 'Dhruv'}

Dynamic Execution

Python has a pair of very dangerous functions: exec and eval. Both are very similar in terms of what they do: process the strings passed to them as Python code. exec expects the string to be a statement, which it will execute and not return a value. eval expects the string to be an expression and will return the computed value of the expression. Here is an example of both of these functions in action.

    eval('2 + 5')  # returns 7
    exec('print("Hello")')  # prints "Hello", no return

You could, in theory, pass a statement to eval and get a similar effect as exec, because in Python returning None is virtually the same as not returning anything at all.

    eval('print("Hello")')  # prints "Hello", returns None

The danger of these functions lies in their ability to execute virtually any code in the same Python process.
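When all you actually need is to parse a literal value out of a string, the standard library offers a much safer drop-in: ast.literal_eval accepts only Python literals (strings, numbers, tuples, lists, dicts, sets, booleans, None) and refuses anything that would execute code. A quick sketch:

```python
import ast

# Plain literals parse fine.
value = ast.literal_eval("[1, 2, {'a': 3}]")
print(value)  # → [1, 2, {'a': 3}]

# Anything with names or calls in it is rejected outright.
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except ValueError as exc:
    print("blocked:", exc)
```

Unlike eval with a stripped-down __builtins__, there is no sandbox to escape here - non-literal syntax never gets evaluated at all.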
Passing them any input that you cannot be 100% certain about is akin to handing over your server keys to malicious hackers on a plate. This is the very definition of RCE.

Solution

There are ways to mitigate the access that eval has. You can restrict access to globals and locals by passing dictionaries as the second and third arguments to eval, respectively. Remember that locals take priority over globals in case of a conflict.

x = 3
eval('x + 5')                            # returns 8
eval('x + 5', { 'x': 2 })                # returns 7
eval('x + 5', { 'x': 2 }, { 'x': 1 })    # returns 6

That makes the code safer, yes. At least it somewhat prevents the data in your variables from being leaked. But it still doesn't prevent the string from accessing built-ins like pow or, more dangerously, __import__. To counter that, you need to override __builtins__.

eval("__import__('math').sqrt(5)", {}, {})    # returns 2.23606797749979

eval(
    "__import__('math').sqrt(5)",
    { "__builtins__": None },    # restricts access to built-ins
    {}
)    # error

Safe to expose now? Not quite. Regardless of how secure you make eval, its job is to evaluate an expression, and nothing can stop that expression from taking too long and freezing the server.

eval("2 ** 2147483647", { "__builtins__": None }, {})    # goodbye!

As we Python developers like to say, "eval is evil."

Dependency Management

Python's popularity draws the attention of white-hat security researchers just as much as it does that of hackers with malicious intent. As a result, new security vulnerabilities are constantly discovered, disclosed, and patched. To keep the malicious hackers at bay, your software needs to keep all of its dependencies up to date. A common technique for pinning packages in Python is the ubiquitous requirements.txt file, a simple file that lists all the dependencies and the exact versions needed by your project. Say you install Django. As of this writing, Django depends on three more packages.
If you freeze your dependencies, you end up with the following requirements. Note that only one of these dependencies was installed by you; the other three are sub-dependencies.

$ pip freeze
asgiref==3.3.1
Django==3.1.5
pytz==2020.5
sqlparse==0.4.1

pip freeze does not place dependencies in levels, and that's a problem. For smaller projects with a few dependencies that you can keep track of mentally, this is not a big deal. But as your projects grow, so will your top-level dependencies. Conflicts arise when sub-dependencies overlap, and updating individual dependencies is a mess too, because the graph relationship between the dependencies is not clear from a plain text file.

Solution

Pipenv and Poetry are two tools that help you manage dependencies better. I prefer Pipenv, but Poetry is equally good. Both package managers build on top of pip. (Fun fact: DeepSource is compatible with both Pipenv and Poetry as package managers.)

Pipenv, for example, tracks your top-level dependencies in a Pipfile and then does the hard work of locking down dependencies in a lockfile named Pipfile.lock, similar to how npm manages Node.js packages. Here's an example Pipfile.

[[source]]
name = "pypi"
url = ""
verify_ssl = true

[dev-packages]

[packages]
django = "*"

[requires]
python_version = "3.9"

With this, you get a clear picture of the top-level dependencies of your app. Updating dependencies is also much easier, because you just need to update the top-level packages and the locking algorithm will figure out the most compatible and up-to-date versions of all the sub-dependencies. Here is the same example with Django. Notice how Pipenv can identify your top-level dependencies, their dependencies, and so on.
$ pipenv graph
Django==3.1.5
  - asgiref [required: >=3.2.10,<4, installed: 3.3.1]
  - pytz [required: Any, installed: 2020.5]
  - sqlparse [required: >=0.2.2, installed: 0.4.1]

If your code is hosted on GitHub, make sure you turn on and configure Dependabot as well. It's a nifty little bot that alerts you if any of your dependencies has gone out of date or if a vulnerability has been identified in the pinned version of a dependency. Dependabot will also make PRs to your repo, automatically updating your packages. Very handy indeed!

Runtime Assertions

Python has a special assert keyword for guarding against unexpected situations. The purpose of assert is simple: verify a condition and raise an error if the condition is not fulfilled. In essence, assert evaluates the given expression and:

- if it evaluates to a truthy value, execution moves along;
- if it evaluates to a falsy value, it raises an AssertionError with the given message.

Consider this example.

def do_something_dangerous(user, command):
    assert user.has_permissions(command), f'{user} is not authorized'
    user.execute(command)

This is a very simple example where we check whether the user has permission to perform an action before performing it. If user.has_permissions() returns False, the assertion raises an AssertionError and execution is halted. Seems pretty safe, right?

No. Assertions are tools for developers during the development and debugging phases, and they should not be used to guard critical functionality. When Python runs with the -O (optimize) flag, the constant __debug__ is set to False and assert statements are stripped out of the compiled bytecode as a performance optimization. Running optimized code therefore leaves the function completely unguarded. Another reason to avoid asserts is that assertion errors are not helpful for debugging: they provide no information other than the fact that an assertion did not hold.
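The stripping behavior can be demonstrated without restarting the interpreter: the built-in compile() takes an optimize argument, and optimize=1 applies the same optimization that python -O uses, removing assert statements from the generated bytecode.

```python
# A statement that should always fail if asserts are honored.
source = "assert False, 'guard tripped'"

# Normal compilation keeps the assert.
plain = compile(source, "<demo>", "exec")
try:
    exec(plain)
    survived = True
except AssertionError:
    survived = False
print("plain bytecode raised:", not survived)

# optimize=1 (what `python -O` uses) strips the assert entirely.
optimized = compile(source, "<demo>", "exec", optimize=1)
exec(optimized)  # no exception: the guard is simply gone
print("optimized bytecode raised: False")
```

If that assert had been protecting a permission check, the optimized build would silently skip it, which is exactly why the article recommends explicit exceptions instead.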
Defining apt exception classes and raising their instances is a much more solid solution.

Solution

For an alternative approach, go back to basics. Here's the same program, this time using if/else and raising a PermissionError (a built-in exception in Python 3; you could also define a custom exception class) when the required condition is not fulfilled.

def do_something_dangerous(user, command):
    if user.has_permissions(command):
        # safer, as it will not be removed by the compiler
        user.execute(command)
    else:
        raise PermissionError(f'{user} is not authorized')  # suitable error class

This code uses straightforward Python constructs, works the same whether __debug__ is True or False, and raises a clear exception that can be handled with much more clarity.

Achieving Zen

The key takeaway from all of these examples is to never trust your users. Input provided by users should not be blindly serialized, deserialized, evaluated, executed, or rendered. To be safe, you must be careful about what you write and thoroughly audit the code after it's been written. But do you know what's better than scanning for vulnerabilities in code after it's been written? Getting vulnerabilities highlighted as soon as you write them.

Bandit

Static code analysis tools such as linters and vulnerability scanners can help you find a lot of issues before they get exploited in the wild. An excellent tool for finding security vulnerabilities in Python is Bandit. Bandit goes through each file, generates an abstract syntax tree (AST) for it, and then runs a whole slew of tests on this AST. Bandit can detect many vulnerabilities out of the box and can also be extended for specific scenarios and framework compatibility via plugins. As a matter of fact, Bandit is capable of detecting all of the security shortcomings mentioned above. If you're a Python developer, I cannot recommend Bandit enough. I could write a whole article extolling its virtues; I might even do that.
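The if/else guard shown in this section can be fleshed out into a runnable sketch; the User class here is a hypothetical stand-in for whatever your application actually uses.

```python
class User:
    """Minimal stand-in user with a set of allowed commands."""

    def __init__(self, name, allowed):
        self.name = name
        self.allowed = set(allowed)

    def has_permissions(self, command):
        return command in self.allowed

    def execute(self, command):
        return f"{self.name} executed {command}"


def do_something_dangerous(user, command):
    # An explicit check: behaves identically with or without -O,
    # unlike an assert-based guard.
    if user.has_permissions(command):
        return user.execute(command)
    raise PermissionError(f"{user.name} is not authorized")


alice = User("alice", ["deploy"])
print(do_something_dangerous(alice, "deploy"))  # alice executed deploy
```

Calling do_something_dangerous(alice, "shutdown") raises PermissionError, which the caller can catch and turn into a proper 403 response or log entry.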
DeepSource

You should also consider automating this entire audit-and-review process with a code review automation tool like DeepSource, which scans your code on every commit and every PR through its linters and security analyzers and can automatically fix a multitude of issues. DeepSource also has its own custom-built analyzers for most languages, constantly improved and kept up to date. And it's incredibly easy to set up!

version = 1

[[analyzers]]
name = "python"
enabled = true

[analyzers.meta]
runtime_version = "3.x.x"
max_line_length = 80

Who knew it could be so simple? Experience the Zen of Python, and be careful not to let black-hats disturb your peace!

Published at DZone with permission of Dhruv Bhanushali. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/common-python-security-pitfalls-and-how-to-avoid-t
Hey, I'm trying to get a sprite to always face the mouse. How do I do this? Is there a formula I need to calculate the angle based on the mouse position? If so, what is it? Thanks, KG

Depends a lot on your game. If this is some top-down angle where the camera is perfectly perpendicular to the playfield, easy as pie (your alliteration skill increases by 1). If not, it gets stickier ...
--Software Development == Church Development. Step 1. Build it. Step 2. Pray.

Here you go:

// include math.h
#include <math.h>

// define pi.
#ifndef PI
#define PI 3.1415927 //++
#endif

// you need a bitmap...
BITMAP *test;

// you need to store a value...
double player_angle;

// you need some logic.
player_angle = (atan2(mouse_y - pY, mouse_x - pX) * 128 / PI) + V;

mouse_y is the vertical screen position of the mouse pointer; you can figure mouse_x out yourself. pY is the second point needed for the angle calculation, which I suggest should be your bitmap's rotation point. V is an extra offset added to the calculated direction; adjust it until it looks good. NOTE: instead of a 360-degree circle, Allegro uses 256 units.

// and you need to draw the sprite.
pivot_sprite(buffer, test, pX, pY, cX, cY, itofix(player_angle));

pX and pY is the position of the sprite, and cX and cY is the point the sprite should be rotated around. Enjoy! This should work, I think...

#ifndef PI
#define PI 3.1415927 //++
#endif

What is the //++ for?

It was my extremely bad way of saying: "you can use more decimals if you want". ><

Why not just use M_PI?

> and you need to draw the sprite.
Slight correction:

pivot_sprite(buffer, test, pX, pY, cX, cY, itofix(player_angle * (256 / 360.0f)));

But personally I would leave the number in radians, which would result in this (slightly simpler) code:

player_angle = atan2(mouse_y - pY, mouse_x - pX) + V;
pivot_sprite(buffer, test, pX, pY, cX, cY, itofix(player_angle * (128 / M_PI)));

How is my posting? - Order Hero - SF Bike Rentals - Mouse Mash - Got Tod? - Taco Roco [The Musings of a Lost Programmer] todo Thing Library 1.0 Punchcard Metrics

What is M_PI? -.-

It's a constant that represents pi. Part of some standard and included with your compiler via cmath or math.h.

Oops. Should have read.

Thanks, everybody. I wasn't expecting this many replies! I'll try those methods and tell you how they worked.

EDIT: Well, they were a bit confusing at first, but I got them to work! Now, is there any way to rotate a sprite without distortion?
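The radians-to-256-units conversion in this thread can be sanity-checked outside Allegro; here is a quick Python sketch (the function name is mine):

```python
import math

def angle_to_mouse(mx, my, px, py):
    """Angle from the sprite at (px, py) toward the mouse at (mx, my),
    in Allegro's 256-step circle: 0 = right, 64 = down (screen y grows
    downward), 128 = left, 192 = up."""
    return math.atan2(my - py, mx - px) * 128 / math.pi

print(angle_to_mouse(200, 100, 100, 100))  # 0.0  (mouse directly right)
print(angle_to_mouse(100, 200, 100, 100))  # 64.0 (mouse directly below)
```

The quarter-turn check confirms the factor: pi/2 radians times 128/pi gives 64, one quarter of the 256-unit circle that itofix expects.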
http://www.allegro.cc/forums/thread/591216/669439#target
I'm a beginner in Python and having trouble with this function. I'm trying to count the values in list xs that are greater than v, as well as the duplicates. I'm getting an error for line 3; should I get rid of j and just use i to represent the values in the list? Code below that I have:

def count_greater(xs, v):
    count_list = 0
    for i in (xs) and j in (v):
        if i > j:
            count_list += 1
        else:
            count_list += 0
    return count_list

count_greater([12, 0, 20, 34, 0, 20], 3)

I think this is what you are after:

def count_greater(xs, v):
    count = 0
    for i in xs:
        if i >= v:
            count += 1
    return count

count_greater([12, 0, 20, 34, 0, 20], 3)

It returns the number of times that a value in the list is greater than or equal to the second argument (3 in the example). The value returned in this case is 4. Note the >= comparison: if you want strictly greater than, use > instead; with this list and v = 3 both give 4, since no element equals 3.
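Since comparisons return booleans and booleans are integers in Python, the accepted answer can also be collapsed into a one-liner (shown here with a strict > comparison, matching the question's wording):

```python
def count_greater(xs, v):
    # True counts as 1 and False as 0, so summing the comparison
    # results counts the elements greater than v.
    return sum(x > v for x in xs)

print(count_greater([12, 0, 20, 34, 0, 20], 3))  # 4
```

The generator expression avoids building an intermediate list, so this stays O(1) in extra memory even for large inputs.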
https://codedump.io/share/vVwl1gCZRVRB/1/finding-the-min
You are familiar with static type checking through languages such as C# and Java. In these languages, the type checking is straightforward but rather crude, and can be seen as an annoyance compared with the freedom of dynamic languages such as Python and Ruby. But in F# the type system is your friend, not your enemy. You can use static type checking almost as an instant unit test, making sure that your code is correct at compile time. In the earlier posts we have already seen some of the things that you can do with the type system in F#. In this post and the next we will focus on using the type system as an aid to writing correct code. I will demonstrate that you can create designs such that, if your code actually compiles, it will almost certainly work as designed.

In C#, you use the compile-time checks to validate your code without even thinking about it. For example, would you give up List<string> for a plain List? Or give up Nullable<int> and be forced to use object with casting? Probably not. But what if you could have even more fine-grained types? You could have even better compile-time checks. And this is exactly what F# offers. The F# type checker is not that much stricter than the C# type checker. But because it is so easy to create new types without clutter, you can represent the domain better and, as a useful side effect, avoid many common errors. Here is a simple example:

//define a "safe" email address type
type EmailAddress = EmailAddress of string

//define a function that uses it
let sendEmail (EmailAddress email) =
    printfn "sent an email to %s" email

//try to send one
let aliceEmail = EmailAddress "alice@example.com"
sendEmail aliceEmail

//try to send a plain string
sendEmail "bob@example.com"   //error

By wrapping the email address in a special type, we ensure that normal strings cannot be used as arguments to email-specific functions.
(In practice, we would also hide the constructor of the EmailAddress type, to ensure that only valid values could be created in the first place.) There is nothing here that couldn't be done in C#, but it would be quite a lot of work to create a new value type just for this one purpose, so in C# it is easy to be lazy and just pass strings around.

Before moving on to the major topic of "designing for correctness", let's see a few of the other minor, but cool, ways that F# is type-safe.

Here is a minor feature that demonstrates one of the ways that F# is more type-safe than C#, and how the F# compiler can catch errors that would only be detected at runtime in C#. Try evaluating the following and look at the errors generated:

let printingExample =
    printf "an int %i" 2                        // ok
    printf "an int %i" 2.0                      // wrong type
    printf "an int %i" "hello"                  // wrong type
    printf "an int %i"                          // missing param

    printf "a string %s" "hello"                // ok
    printf "a string %s" 2                      // wrong type
    printf "a string %s"                        // missing param
    printf "a string %s" "he" "lo"              // too many params

    printf "an int %i and string %s" 2 "hello"  // ok
    printf "an int %i and string %s" "hello" 2  // wrong type
    printf "an int %i and string %s" 2          // missing param

Unlike C#, the compiler analyses the format string and determines what the number and types of the arguments are supposed to be. This can be used to constrain the types of parameters without explicitly having to specify them. So, for example, in the code below the compiler can deduce the types of the arguments automatically.

let printAString x = printf "%s" x
let printAnInt x = printf "%i" x

// the result is:
// val printAString : string -> unit   //takes a string parameter
// val printAnInt : int -> unit        //takes an int parameter

F# has the ability to define units of measure and associate them with floats. The unit of measure is then "attached" to the float as a type and prevents mixing different types. This is another feature that can be very handy if you need it.
// define some measures
[<Measure>] type cm
[<Measure>] type inches
[<Measure>] type feet =
    // add a conversion function
    static member toInches(feet : float<feet>) : float<inches> =
        feet * 12.0<inches/feet>

// define some values
let meter = 100.0<cm>
let yard = 3.0<feet>

//convert to different measure
let yardInInches = feet.toInches(yard)

// can't mix and match!
yard + meter

// now define some currencies
[<Measure>] type GBP
[<Measure>] type USD

let gbp10 = 10.0<GBP>
let usd10 = 10.0<USD>

gbp10 + gbp10    // allowed: same currency
gbp10 + usd10    // not allowed: different currency
gbp10 + 1.0      // not allowed: didn't specify a currency
gbp10 + 1.0<_>   // allowed using wildcard

One final example. In C#, any class can be equated with any other class (using reference equality by default). In general, this is a bad idea! For example, you shouldn't really be able to compare a string with a person at all. Here is some C# code which is perfectly valid and compiles fine:

using System;
var obj = new Object();
var ex = new Exception();
var b = (obj == ex);

If we write the identical code in F#, we get a compile-time error:

open System
let obj = new Object()
let ex = new Exception()
let b = (obj = ex)   // error

Chances are, if you are testing equality between two different types, you are doing something wrong. In F#, you can even stop a type from being compared at all! This is not as silly as it seems. For some types there may not be a useful default, or you may want to force equality to be based on a specific field rather than the object as a whole. Here is an example of this:

// deny comparison
[<NoEquality; NoComparison>]
type CustomerAccount = { CustomerAccountId: int }

let x = { CustomerAccountId = 1 }

x = x                                       // error!
x.CustomerAccountId = x.CustomerAccountId   // no error
https://fsharpforfunandprofit.com/posts/correctness-type-checking/
Introduction

Plone is a fantastic content management system. Out of the box, it contains a number of useful features, and managing content is stunningly easy. A number of third-party utilities also exist that can be used to expand the capabilities of a Plone site. However, each website has its own unique needs. While there's a good chance that one of Plone's many third-party content types will meet your needs, there's also a chance that you won't be able to find exactly what you want. In this case, you are forced to create your own content types. This sounds like quite a task, but, thankfully, it really isn't. Creating content types in Plone can actually be extremely easy, and the tool that makes it all possible is Archetypes.

In this article, we'll take a look at using Archetypes to create a content type for Plone. As I said, this can be done very easily, and only minimal knowledge of Python is needed to create something simple.

Archetypes Overview

The way Archetypes works is incredibly simple, which enables you as a developer to easily add new content types made for specific purposes. Say you wanted to build a collection of quotations which interest you. Now, you could just shove them into separate files and format them as best you could. However, this would be very hard to maintain, especially if you wanted to make a universal change to things. This is where Archetypes comes in.

The quotation pages can be broken down into basic fields. Here, there would be a field for the speaker of the quotation and a field for the quotation itself. Both would be represented as basic text. To implement these fields in Archetypes, you would create something called a schema defining the two fields. The schema would then be put into a Python script with a few more instructions. Next, you'd create a few more files with more information on the content type as well as instructions telling Plone how to install it. Finally, you'd simply install your new content type.
Archetypes would generate the look of the page as well as a management interface based on the data in the Python script. That's all there is to something as simple as this example. As you can tell, Archetypes is an extremely powerful tool that can handle a lot of work for you.

A Quotation Content Type

We'll start with the example I described in the previous section, a simple quotation content type. When an object based on the content type is added, text fields for the speaker of the quotation and the quotation will be presented. After they are filled out and the form is submitted, the object will be created. When the object is viewed, it will display the speaker and the quotation.

Create a folder called Quotation in the Products directory. This is where we'll store our content type's files. The first file we will create is config.py. This defines some constants that we'll use throughout our content type:

from Products.CMFCore.CMFCorePermissions import AddPortalContent

ADD_CONTENT_PERMISSION = AddPortalContent
GLOBALS = globals()
PROJECTNAME = "Quotation"

In config.py, we first define the permission required to manipulate our content type. Since our content type really isn't anything special, we just assign it to the generic AddPortalContent permission. We then define a constant called GLOBALS, which is used during the installation of our content type. Finally, we give our content type (or project/product, rather, since it's possible to have multiple content types in the same package) a name.

Next, we'll have to create our content type's schema, which will define the fields associated with the content type. When we edit quotations, we'll want a text field for the speaker and a larger text area for the quotation itself. All of this is specified in the schema:
All of this is specified in the schema: from Products.Archetypes.public import * from Products.Quotation.config import * schema = BaseSchema + Schema(( TextField(‘speaker’, required = True), TextField(‘quotation’, required = True, widget = TextAreaWidget) )) class Quotation(BaseContent): “A simple quotation content type.” schema = schema registerType(Quotation, PROJECTNAME) The first thing we do is create our content type’s schema. We create two text fields, one for the speaker and one for the quotation. Both are required to be filled out. The second one is also set to a text area. Next, we create the content type’s class, where we simply copy the schema, and, finally, we register the content type. Next is __init__.py, where we glue what we’ve done so far together: from config import * from Products.Archetypes import process_types, listTypes from Products.CMFCore import utils def initialize(context): import Quotation content_types, constructors, ftis = process_types(listTypes (PROJECTNAME), PROJECTNAME) utils.ContentInit(PROJECTNAME, content_types = content_types, permission = ADD_CONTENT_PERMISSION, extra_constructors = constructors, fti = ftis).initialize(context) Besides importing our configuration file and various methods that we’ll be using, the first thing we do is define a function and import our Quotation.py file. From there, we get some information about our package, the content types associated with it, constructors and factory type information, in that order. We then create a utils.ContentInit with this information, along with the package name and the permission we defined in config.py. Finally, we have to create Install.py, which is responsible for installing our product. 
Create a folder called Extensions inside Quotation to place the install script in:

from Products.Archetypes.Extensions.utils import installTypes
from Products.Archetypes.public import listTypes
from Products.Quotation.config import PROJECTNAME, GLOBALS
import StringIO

def install(self):
    out = StringIO.StringIO()
    installTypes(self, out, listTypes(PROJECTNAME), PROJECTNAME)
    out.write("Installed: " + PROJECTNAME)
    return out.getvalue()

This simply installs our content type and inserts a short message into the installation log. Our Quotation content type is now complete. Restart Zope and then install the product in the "Add/Remove Products" section of Plone. You should now be able to add an object based on our content type to a folder. You can also create the files version.txt and README.txt. The contents of the former will be appended to the product's name, and the contents of the latter will be displayed as the product's description.

Mutators and Accessors

Archetypes enables you to manipulate the values of fields. When you manipulate the value of a field as it is set, you are using a mutator. When you manipulate the value of a field as it is requested, you are using an accessor. Both mutators and accessors are simple to create and use in content types.

Let's take another look at our Quotation product above. Say we want to modify the field for the quotation so that when the user sets the value, we put quotation marks around it. To do this, we simply create a method called setQuotation. The method will be automatically called when the value of the field is set, and it goes into our Quotation class:

class Quotation(BaseContent):
    ...
    def setQuotation(self, value):
        value = '"' + value + '"'
        self.getField('quotation').set(self, value)

Restart Zope and install the product again. Add a quotation and look at the result: the value of the quotation field now has quotation marks around it. However, our system has a serious flaw.
Click the "edit" tab and notice how the value of the text area has quotation marks around it. Save it and look again. There are now two sets of quotation marks, which is no good. While we could modify our mutator to work around this, the easiest way to fix this is to use an accessor instead. Accessors, too, are methods, but they don't take a value argument. Here's an accessor that does what we want:

class Quotation(BaseContent):
    ...
    def getQuotation(self):
        return '"' + self.getField('quotation').get(self) + '"'

Remember to delete the mutator we defined above in order for our content type to function properly. Restart Zope and reinstall our product. Create a quotation, or edit one, and save it. If quotation marks are left over from our mutator, make sure to delete them. You should now see quotation marks wrapped around your quotation. Notice that they do not appear when editing.

Validators

A problem with our Quotation product is that people can enter anything they want when creating a new object. For example, I could set the speaker of the quotation to "Benj4min Frankl1n", which is unrealistic. Because of this, it might be a good idea to restrict what the user can enter as the speaker.

We can restrict what a user can enter by using things called validators. When a user submits data, any validators attached to a field are run. They look at the information submitted and make sure that everything is valid. If it is, the data is accepted; if it's not, an error is returned. There is more than one way to create a validator but, in my opinion, the easiest is to build one from the RegexValidator class. It accepts a regular expression and matches the value of any associated fields against that regular expression. In our example, we want to make sure that there are no numbers in the speaker's name.
Add this code to your __init__.py file:

def initialize(context):
    from Products.validation.validators.RegexValidator import RegexValidator
    from Products.validation import validation
    validation.register(RegexValidator('isValidSpeaker', r'\A\D*\Z',
        errmsg = 'contains irregular characters.'))
    ...

This registers our validator under the name "isValidSpeaker" so that we can attach it to any field we'd like. Note that the registration of our validator must come before everything else in initialize. Modify the schema variable in Quotation.py and assign our validator to the field containing the speaker's name:

schema = BaseSchema + Schema((
    TextField('speaker', required = True, validators = ['isValidSpeaker']),
    TextField('quotation', required = True, widget = TextAreaWidget)
))

Restart Zope and reinstall the product. Try to create a quotation with numbers in the speaker's name and examine what happens.

Conclusion

What's covered in this article is not, of course, all there is to Archetypes. Archetypes contains many more features: fields, widgets, built-in validators, and many more interesting things. However, from what's been covered, I think it's pretty safe to draw a few conclusions about Archetypes. Archetypes is a utility that allows developers to create Plone products very easily. A schema is created and put into a class. The schema contains fields which content types are built around. An __init__.py file is then created, along with an installation script. Archetypes then does its magic, generating interfaces for modifying and viewing objects. Using this process, a developer can create a simple content type with just a few lines of Python code.
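The no-digits rule that isValidSpeaker is meant to enforce can be sanity-checked with Python's re module; a pattern of the form \A\D*\Z accepts only strings that contain no digit characters at all.

```python
import re

# \A and \Z anchor the whole string; \D matches any non-digit character.
no_digits = re.compile(r'\A\D*\Z')

print(bool(no_digits.match("Benjamin Franklin")))   # True:  no digits
print(bool(no_digits.match("Benj4min Frankl1n")))   # False: contains digits
```

This is the same check the RegexValidator performs against the speaker field before the object is saved.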
http://www.devshed.com/c/a/Zope/Plone-Content-Types-With-Archetypes/
A function in Python is a group of statements within a program that performs a specific task. Usually functions take input data, process it, and "return" a result. Once a function is written, it can be used repeatedly. We'll be covering the following topics in this tutorial:

Types of Functions

Functions can be of two types: built-in functions and user-defined functions.

Built-in functions

We have already seen several built-in functions: print() is a built-in function, input() is a built-in function, min() is a built-in function, and so on. Usually a function does one task at a time: print() prints whatever input you give it, input() takes some input from the user, and min() finds the minimum of some values.

Defining a function

def nameoffunction(arg1, arg2, arg3):
    print("")

To define a function, you use the keyword def, then the name of the function, then parentheses () in which you list the arguments or parameters. You can give any number of arguments to your function. After the closing parenthesis you add a colon, and then under the function signature you write the statements you want to execute when the function is called. For example, you can calculate the product or the sum of two or three variables inside the function body.

Calling a function

I'm going to define a very simple function which adds two values, and I'll name it sum:

def sum(args1, args2):
    print(args1 + args2)

sum(10, 10)

Output: 20

It takes two arguments, args1 and args2, and after the colon (:) it adds the two values with print(args1 + args2). This is how you declare a user-defined function. After declaring a function, you also need to call it: you use the name of the function and provide the arguments it requires. Our function requires two arguments, so if I provide 10 as the first argument and 10 as the second and run the code, the function prints 20, the sum of the two values.

Also, if you remember, the '+' operator can be used to concatenate two strings. I can call this sum function with 'Hello ' as the first parameter and 'World' as the second, run the program, and it's going to print Hello World.
It is a very simple function which takes two arguments and then adds these two arguments and print them. This is how you declare a user-defined function. Now after declaring a function. You also need to call this function so in order to call the function you use the name of the function, and then you provide the arguments. Which is required by the function. Our function requires two-argument args1 and args2. we are going to give these two values. Let’s say I want to provide 10 as the argument-1, and I will provide 10 as the second value now let’s run the code and let’s see what happens so when we run the code you can see our function prints 20 which is the sum of these two values, which we have provided as an argument to this function. Now also if you remember this '+'operator. You can also use to concatenate two strings. I can use this sum function, and this time I’m going to provide for example hello as the first parameter and then world as the second parameter, and then I’m going to run the program, and it’s going to print hello world. def sum(args1, args2): print(args1 + args2) sum('Hello ', 'World') Output: Hello World in addition we can provide to float numbers for example I will provide 15.647 and the second argument I’m going to provide is 80.258 and this is also allowed. I’m going to just run this code and it gives us the sum of these two values. def sum(args1, args2): print(args1 + args2) sum(15.647, 80.258) Output: 95.905 Above function, the sum is doing one task: to add two values, whether it’s a string or a number, or a float value. You may also observe that when I provide a string as a first argument, and I will give a number as a second argument. Will this work. def sum(args1, args2): print(args1 + args2) sum('Hello ', 80) Let’s see so it’s going to give us an error and this error says Can't convert 'int' object to str implicitly. This is a problem. 
To solve this problem, we can add a simple condition that checks the type of both arguments:

def sum(args1, args2):
    if type(args1) != type(args2):
        print("Please give the args of same type")
        return
    print(args1 + args2)

sum('Hello ', 80)

Here we use the '!=' operator: if the type of args1 is not equal to the type of args2, we print a message and return. Let's run the code, and you can see it prints a message which says "Please give the args of same type". If the arguments are not of the same type — whether integer, string, or float — for example the first argument is a string and the second is an integer, this condition is true, the print statement is executed, the message appears, and then return is called, so whatever statements come after it are not executed.

You can also return values from a function. When you don't give any value after the return keyword, the function returns nothing. But let's return the addition of the two arguments using the return keyword. Now if we run the code once again, you can see the sum is executed, but the result is not printed:

def sum(args1, args2):
    if type(args1) != type(args2):
        print("Please give the args of same type")
        return
    return(args1 + args2)

To get the result of the above function when it returns something, we need to save the return value in a variable. Let's save this value into a variable, and then use the variable to print the value of the sum:

def sum(args1, args2):
    if type(args1) != type(args2):
        print("Please give the args of same type")
        return
    return(args1 + args2)

s = sum(10, 80)
print(s)
print(sum(10, 80))

You can also enclose the call to the sum function directly inside a print function, and it will print the result. So you can either assign the result of the sum function to a variable, or use the print function directly to get the result and print it.

Benefits of using functions in Python

• Functions make your code simpler, because without them you would need to write the same code again and again wherever you want that functionality in your program.
• Functions make your code reusable. The same code is used to add two integer values, to concatenate two strings, to add two float values, and to report an error if you provide arguments of different types. You write the code once and use it multiple times, which results in faster development: using functions, you can develop your code much faster than without them.
• When you put code in functions, you can test and debug it more easily.
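Pulling the pieces of this tutorial together, the finished function might look like the sketch below. Two details are additions of this sketch, not of the original tutorial: the function is renamed to sum_values, and it explicitly returns None on a type mismatch.

```python
def sum_values(args1, args2):
    # Guard clause from the tutorial: refuse arguments of different types.
    if type(args1) != type(args2):
        print("Please give the args of same type")
        return None
    # Return the result instead of printing it, so the caller decides what to do.
    return args1 + args2

print(sum_values(10, 10))             # two integers
print(sum_values('Hello ', 'World'))  # two strings
print(sum_values(15.647, 80.258))     # two floats
print(sum_values('Hello ', 80))       # mismatch: message printed, None returned
```

The renaming is only a precaution: the tutorial's name sum works too, but it hides Python's built-in sum() for the rest of the program.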
https://ecomputernotes.com/python/functions-in-python
#include <complex.h>

Complex numbers are numbers of the form z = a+b*i, where a and b are real numbers and i = sqrt(−1), so that i*i = −1.

There are other ways to represent that number. The pair (a,b) of real numbers may be viewed as a point in the plane, given by X- and Y-coordinates. This same point may also be described by giving the pair of real numbers (r,phi), where r is the distance to the origin O, and phi the angle between the X-axis and the line Oz. Now z = r*exp(i*phi) = r*(cos(phi)+i*sin(phi)).

The basic operations are defined on z = a+b*i and w = c+d*i as:

addition: z+w = (a+c) + (b+d)*i
multiplication: z*w = (a*c − b*d) + (a*d + b*c)*i
division: z/w = ((a*c + b*d)/(c*c + d*d)) + ((b*c − a*d)/(c*c + d*d))*i

Nearly all math functions have a complex counterpart, but there are some complex-only functions. Your C compiler can work with complex numbers if it supports the C99 standard. Link with −lm. The imaginary unit is represented by I.

/* check that exp(i * pi) == −1 */
#include <math.h>        /* for atan */
#include <stdio.h>
#include <complex.h>

int main(void)
{
    double pi = 4 * atan(1.0);
    double complex z = cexp(I * pi);
    printf("%f + %f * i\n", creal(z), cimag(z));
}
http://manpages.courier-mta.org/htmlman7/complex.7.html
This tutorial is most relevant to Sencha Touch 1.x.

Sencha Touch allows you to create applications that work on both mobile phone and tablet devices, as well as use layouts that cater to different screen sizes. In addition to the display differences between types of devices, users also have certain expectations about apps' user-interface conventions. In this two-part series, we show how, with a single code-base, we can create an app which responds to these conventions, and which, through the use of the Sencha Touch 'application profiles' mechanism, delivers familiar user interfaces to both phone and tablet users. (If you want to skip ahead, part two is here.)

The Basics

A modern trend in web design is to build web sites that are 'responsive' - meaning that they employ fluid layouts and techniques such as CSS media queries to adapt to a wide range of screen sizes. Whether this also lets us deliver services tailored to particular user contexts is another matter; the question this article sets out to answer is: can we do something similar for mobile and tablet web apps?

The good news is that Sencha Touch provides a subsystem especially for this purpose, using the Ext.Application class to define, and respond to, multiple 'profiles'. In this article, we'll show how to use application profiles to handle layouts for various screen configurations. For the purposes of this walk-through, our goal will be to deliver idiomatic UIs to both phone and tablet users, in both portrait and landscape modes. These are the four profiles that we will define and work with: portrait phone, landscape phone, portrait tablet, and landscape tablet.

Note that Sencha Touch allows you to define as many different profiles as you'd like. You simply need to create the rules that allow the framework to decide which one it is in at any given point. You might like to create different profiles for different operating systems perhaps: removing app-defined back buttons when you know the device has a physical back button, for instance.
Our application is going to be a very simple one, but the principles should hold for more complex implementations. It's the 'Piet Mondrian' app - slightly contrived, admittedly - which shows information about four periods of the painter's life. The data set is going to be burnt into the app, but of course you could easily wire up an app like this to an online data source of some sort.

The key to making everything work is to define our app using the Ext.Application class. This is the standard way to construct consistent MVC-style applications, and although we're not strictly following a fully-fledged MVC pattern here, it's still good practice to use this as an architectural entry point (rather than just an ad-hoc Ext.onReady-style approach) for all Sencha Touch and Ext JS apps. Before we go any further, you might like to read the detail in the Ext.Application API docs. Also, you might want to take a sneak peek at the finished application here (with a smartphone, tablet, or WebKit desktop browser) so you know where we are heading. As we go through this tutorial, you can stay abreast of the code by following the step-by-step branches of its associated GitHub repo.

Application Structure

Let's quickly get our Mondrian app's architecture bootstrapped. Make yourself the folder structure shown below, or checkout or download the GitHub repo's first branch, named 1_structure. Copy or symlink the Sencha Touch SDK as touch within the lib directory.
The index.html file links to the Sencha Touch JavaScript, the app's two files, app.js and data.js, and a custom stylesheet, mondrian.css:

<!DOCTYPE html>
<html>
<head>
    <title>Mondrian</title>
    <script src="lib/touch/sencha-touch.js" type="text/javascript"></script>
    <script src="app.js" type="text/javascript"></script>
    <script src="data.js" type="text/javascript"></script>
    <link href="theming/mondrian.css" rel="stylesheet" type="text/css" />
</head>
<body></body>
</html>

For now, start with a simple application instance in app.js:

new Ext.Application({
    name: 'mondrian',
    launch: function() {
        var app = this;

        // construct UI
        var viewport = this.viewport = new Ext.Panel({
            fullscreen: true,
            layout: 'card'
        });
    }
});

The name property sets up a namespace for the application, and the launch function is our start-up code. In it, we create a handy reference to the application (so we can close over that variable in any other functions defined in launch), and instantiate a fullscreen root Ext.Panel called viewport. We make it a card layout since in one profile at least (for portrait phones), we'll be transitioning between two panes.

data.js you can leave empty for now. In the theming directory, we'll be using Sass and Compass to compile the app's stylesheet from a single custom Sass file. We'll return to this in part two, but for now, either use the code from the 1_structure branch of the GitHub repo, or just copy in the standard sencha-touch.css file from the resources/css part of the SDK and rename it to mondrian.css.

If all goes well, your app should load up from the index.html file. Don't get too excited yet - it's nothing more than a light gray screen - but let's move on quickly.

Data and a Basic UI

We're going to display four pages of information about Mondrian, each with a title and some HTML. For this, we instantiate an Ext.data.Store, containing records of a very simple Ext.data.Model with id, title and content fields.
We then declare in-line data for the content itself (with attribution to Wikipedia):

mondrian.stores.pages = new Ext.data.Store({
    model: Ext.regModel('', {
        fields: [
            {name: 'id', type: 'int'},
            {name: 'title', type: 'string'},
            {name: 'content', type: 'string'}
        ]
    }),
    data: [
        {id: 1, title: 'Introduction', content: "<p>Pieter Cornelis 'Piet' Mondriaan" + ... },
        {id: 2, title: 'Cubism', content: "<p>In 1911, Mondrian moved to Paris" + ... },
        ...
    ]
});

Note how we can use the mondrian.stores sub-namespace to put this store in. This was created automatically by the name: 'mondrian' configuration of the main Ext.Application. Needless to say, a typical application would probably pull data from an online source. The full file is available in the GitHub repo's 2_data branch.

Let's also get a simple UI going. In the launch method, add the following component instantiations:

// the page that displays each chapter
var page = viewport.page = new Ext.Panel({
    cls: 'page',
    styleHtmlContent: true,
    tpl: '<h2>{title}</h2>{content}',
    scroll: 'vertical'
});

This is the detail page containing the main text of each page. It has a cls option to set a CSS class on the DOM element that we can use to lightly style it, and styleHtmlContent so basic HTML styling will be displayed. tpl is the template - simply the title and content fields of a model record - and then we want to ensure vertical scrolling of the page.
// the data-bound menu list
var menuList = viewport.menuList = new Ext.List({
    store: this.stores.pages,
    itemTpl: '{title}',
    allowDeselect: false,
    singleSelect: true
});

// a wrapper around the menu list
var menu = viewport.menu = new Ext.Panel({
    items: [menuList],
    layout: 'fit',
    width: 150,
    dock: 'left'
});

// a button that toggles the menu when it is floating
var menuButton = viewport.menuButton = new Ext.Button({
    iconCls: 'list',
    iconMask: true
});

The menu is a list of the chapter titles, so we use an Ext.List bound to our app's stores.pages store, with the appropriate, simple, itemTpl template. Since you can only view one page at a time, we set two selection mode flags accordingly. We also wrap the list itself in an Ext.Panel container, since we will need to float it for the landscape phone and portrait tablet profiles. Lastly, we also need a button, decorated with a 'list' icon, that will toggle it on and off in that mode.

// a button that slides page back to list (portrait phone only)
var backButton = viewport.backButton = new Ext.Button({
    ui: 'back',
    text: 'Back'
});

// a button that pops up a Wikipedia attribution
var infoButton = viewport.infoButton = new Ext.Button({
    iconCls: 'info',
    iconMask: true
});

The back button is only used in the portrait phone profile and will slide the detail page back to the list. Its ui option gives us the left-hand arrow styling. Also, a simple information button will appear on all profiles and will pop up the Wikipedia attribution.

// the toolbar across the top of the app, containing the buttons
var toolbar = this.toolbar = new Ext.Toolbar({
    ui: 'light',
    title: 'Piet Mondrian',
    items: [backButton, menuButton, {xtype: 'spacer'}, infoButton]
});

The final part of the jigsaw is the lightly-colored Ext.Toolbar across the top of the application. It hosts our app's title, as well as the three buttons. We use xtype: 'spacer' to push the information button to the far right of the toolbar.
Finally, dock the toolbar to the top of the root viewport, and ensure the page is part of the card layout (and activate it):

// stitch the UI together and create an entry page
viewport.addDocked(toolbar);
viewport.setActiveItem(page);
page.update('<img class="photo" src="head.jpg">');

The final line just puts a picture of the esteemed artist onto the page panel. You could equally force the first record of the store to be live (the 'Introduction', for example), but this technique will act as a sort of splash screen until the user chooses one of the menu items. If everything is in order, we should now have something on our screen(s):

This cosmetic car-crash (as well as the head.jpg image file) is available in the GitHub repo's 3_components branch.

Describing the Profiles

Apart from missing icons and an uninspiring blue look, our main issue is that all of the button components are showing on the toolbar - on all devices - and that our menu is nowhere to be seen. Let's define our four profiles and make sure things appear and disappear when they are supposed to.

The profiles are defined as the profiles property of our Ext.Application. Place the following configuration alongside (not inside) the launch function:

profiles: {
    portraitPhone: function() {
        return Ext.is.Phone && Ext.orientation == 'portrait';
    },
    landscapePhone: function() {
        return Ext.is.Phone && Ext.orientation == 'landscape';
    },
    portraitTablet: function() {
        return !Ext.is.Phone && Ext.orientation == 'portrait';
    },
    landscapeTablet: function() {
        return !Ext.is.Phone && Ext.orientation == 'landscape';
    }
}

For each profile we're targeting, we create a unique name and use that as a property containing a function which returns a boolean result. When the application starts up (and when orientation or screen size changes), Sencha Touch will evaluate these functions. When one returns a truthy value, that name becomes the current profile.
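The selection mechanism just described is easy to picture. This framework-free sketch (plain JavaScript with a hypothetical environment object, not Sencha Touch code) mimics how the first predicate returning a truthy value wins:

```javascript
// Pick the first profile whose predicate returns a truthy value,
// mimicking how Ext.Application evaluates its profiles object.
function determineProfile(profiles, env) {
  for (var name in profiles) {
    if (profiles[name](env)) {
      return name;
    }
  }
  return null; // no rule matched
}

// Profile rules over a made-up "env" instead of Ext.is / Ext.orientation.
var rules = {
  portraitPhone:   function (env) { return env.phone && env.orientation === 'portrait'; },
  landscapePhone:  function (env) { return env.phone && env.orientation === 'landscape'; },
  portraitTablet:  function (env) { return !env.phone && env.orientation === 'portrait'; },
  landscapeTablet: function (env) { return !env.phone && env.orientation === 'landscape'; }
};

console.log(determineProfile(rules, { phone: true, orientation: 'landscape' }));
// -> "landscapePhone"
```

Because the winner is simply the first truthy predicate, the rules must be mutually exclusive, which is exactly the point the article makes next.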
It's important to note that JavaScript does not guarantee the order of properties in an object, so you can't be sure of the order in which the functions are called. Be careful to ensure that only one of the functions will return a truthy value at any given time. Hopefully, the rules we've defined here are very self-explanatory. Note that, rather than explicitly testing Ext.is.Tablet, we're using !Ext.is.Phone. This means that the last two profiles will also apply to desktop browser windows: useful for testing.

Now that we've defined the profiles, we need to get them to affect the components. Sencha Touch will call the setProfile method on each component within your application, if it's present, so we add such functions to the components as required. When modeling the appearance or disappearance of different controls for the four different profiles, you might want to check back with the table at the beginning of the tutorial.

// add profile behaviors for relevant controls
viewport.setProfile = function (profile) {
    if (profile == 'portraitPhone') {
        this.setActiveItem(this.menu);
    } else if (profile == 'landscapePhone') {
        this.remove(this.menu, false);
        this.setActiveItem(this.page);
    } else if (profile == 'portraitTablet') {
        this.removeDocked(this.menu, false);
    } else if (profile == 'landscapeTablet') {
        this.addDocked(this.menu);
    }
};

The viewport (this) as a whole changes for each of the four profiles. The function is called with the name of the profile, so we check to see which is in play and act accordingly. (If you have any more profiles than this, you might prefer to use a switch statement.)

So what is going on here? Portrait phones need the menu to be an active card of the whole viewport, and landscape phones need to have it removed (so it can float), and the page made active instead. Portrait tablets also need a floating menu, and landscape tablets need it docked.
The false argument on the remove and removeDocked methods simply ensures that the menu is not destroyed in either case and is merely removed from its container, so it is ready to float. It's worth keeping in mind which transitions are likely to occur between profiles, so you can keep this state machine terse. While you should certainly expect orientation changes between portrait and landscape profiles, you'll never see a phone turning into a tablet or vice versa. So in the code above, we only need to have pairs of profiles reversing each other's transitions.

In addition to the viewport as a whole, let's implement similar transitions for the other UI components. Firstly the menu, which we want to have sized and floating for the landscape phone and portrait tablet profiles, and not floating (either as a card or a docked sidebar) for the portrait phone and landscape tablet profiles:

menu.setProfile = function (profile) {
    if (profile == "landscapePhone" || profile == "portraitTablet") {
        this.hide();
        if (this.rendered) {
            this.el.appendTo(document.body);
        }
        this.setFloating(true);
        this.setSize(150, 200);
    } else {
        this.setFloating(false);
        this.show();
    }
};

Note that we hide the floating menu by default, so it appears only when the user clicks the list icon in the toolbar. (The appendTo line may seem a little cryptic, but it ensures that the element containing the list is at the top level of the DOM and can float freely, rather than having it constrained down inside the viewport element - which can adversely affect its positioning.)
Finally, two simple toggles: the menu button that needs to appear when we know the menu itself is floating, and the back button that only needs to appear when we're using the card transitions on the portrait phone profile:

menuButton.setProfile = function (profile) {
    if (profile == "landscapePhone" || profile == "portraitTablet") {
        this.show();
    } else {
        this.hide();
    }
};

backButton.setProfile = function (profile) {
    if (profile == 'portraitPhone') {
        this.show();
    } else {
        this.hide();
    }
};

Of course, assuming you had the correct references, it would be possible to alter the entire UI from just one of these setProfile methods. However, by dictating the profile-specific behavior of each component within its own method, we've increased the encapsulation and maintainability of the app as the UI gets more complex.

Fire this up in phone and tablet simulators, and try orienting them. You should see something like this:

Hopefully you can see what is going on, based on the profile events we have implemented above. The code at this point is available in the GitHub repo's 4_profiles branch.

(PS: if you want to leave a comment on this article, please do so at the end of part two...)
http://www.sencha.com/learn/idiomatic-layouts-with-sencha-touch
Welcome to F# Weekly, a roundup of F# content from this past week:

News
- Announcing F# support for .NET Core and .NET Standard projects in Visual Studio
- Rider’s F# plugin is now open source
- Visual Studio 2017 version 15.5 – Preview Release Notes (F# for .NET Core is officially supported)
- F# Software Foundation News – Q3 2017 Edition
- The F# Mirror: Board member plans for the winter
- F# Advent Calendar 2017
- LambdUp: Prague’s biggest functional programming event of the year
- Welcome to C# 7.2 and Span
- Introducing Nullable Reference Types in C#
- Announcing the Windows Compatibility Pack for .NET Core
- Introducing Tensor for multi-dimensional Machine Learning and AI data
- Mono’s New .NET Interpreter
- Xamarin Workbooks is open sourced
- .NET Core November 2017 Update
- Modernize existing .NET apps with Windows Containers and Azure
- Pivotal Contributes Steeltoe to the .NET Foundation

Videos & Slides
- Binding Redirects – Immo Landwerth
- Passing and returning functions in F# – Casual F# with Kit Eason
- DevOps for Everyone with Donovan Brown and Damian Brady
- NDC Sydney 2016 – Ask Me Anything! with Mark Seemann (C# vs F#, Functional Programming, Unit Testing)
- The Cynical Developer Episode 49 – What is F#? with Mårten Rånge
- Faking Typeclasses in F# (Monoid Example) – Michael Gilliland
- How do we make code fun and more intuitive? – Ramón Soto Mathiesen
- What’s new in C# – Mads Torgersen
- C# 7.2: Understanding Span – Jared Parsons
- Introducing Visual Studio Live Share – Chris Dias, Dan Fernandez, Amanda Silver
- .NET Framework 4.7.1 improvements – Jeff Fritz
- The Gamma: Democratizing Data Science – Tomas Petricek
- Would Aliens Understand: Lambda Calculus? – Tomas Petricek

Blogs
- Episerver UI tests with canopy – Māris Krivtežs
- Combining Suave and Azure Storage – Anders Kofoed
- Visualizing performance tests execution times – Michał Niegrzybowski
- F# – Idiomatic Huffman Coding – Ramón Soto Mathiesen
- Functional Factory – Daniel Oliver
- Increasing the level of de-obsfucation – Elliott V. Brown
- How to test CSV files content in end-to-end testing – Michał Niegrzybowski
- ts2fable 0.2.0 written in F# – Cameron Taggart
- ts2fable 0.3.0 generating comments – Cameron Taggart
- Querying Last.fm web API with F# – Bohdan Stupak
- Exploring Azure Container Instances and Web App for Containers – Lena Hall
- Azure F#unctions Talk at FSharping Meetup in Prague – Mikhail Shilkov
- Going Down the Property Based Testing Rabbit Hole – Michael Newton
- How close are two words? – Elliott V. Brown
- Quickstart WPF F#-only app in VSCode – Alex Netkachov
- This tutorial will get you started with Azure service bus, dotnet core and F# – Jan Tourlamain
- Documentation: it’s really important – Elliott V. Brown

F# vNext
- Mixed F#/C# solutions now work in VS Code thanks to this recently merged PR to OmniSharp
- If you want (.Prop) or _.Prop added to F# as first-class property accessors, please react to this comment to express a preference
- New F#-lang and tooling RFCs:
- New ideas:

Open source projects
- fsharp-support – F# support in JetBrains Rider
- visualfsharpdocs – Documentation for Visual F#
- CNTK.FSharp – F# utilities to make the CNTK .NET API pleasant to use from the F# scripting environment
- fsharpstation – An editor for FSharp Development, powered by Chrome, WebSharper & CodeMirror
- SAFE-BookStore – Working sample of a Suave + Azure + Fable + Elmish aka SAFE-Stack project with hot reloading
- fable-validation – An isomorphic validation library for Fable/F#, inspired by elm-validate
- ConfPlanner – SAFE CQRS sample project
- RogueText – A .NET natural language library for rogue-like game engines

New Releases
- Fable 1.3.0
- SwaggerProvider v0.9.0 with new TypeProvider.SDK
- FSharp.Azure.StorageTypeProvider 1.9.5 (hot schema loading)
- Unquote 4.0.0 released with support for NetStandard 2.0
- Hedgehog 0.6.0
- Persimmon 3.0.0
- TaskBuilder.fs 1.0.0
- Legivel 0.1.0
- Fez First Alpha
- New release of Ionide, adding autocomplete for external (unopened namespaces/modules) symbols, background project parsing and more!
- VSCode 1.18
- Rider 2017.2.1 is released with sweet new F# features

That’s all for now. Have a great week.

Previous F# Weekly edition – #44
https://sergeytihon.com/2017/11/18/f-weekly-45-47-vs-support-for-net-core-new-rfcs-f-plugin%E2%80%8B-for-rider-lambdup-and-more/
Showing 4 results of 4

> Hi all,
>
> Is there a fast way to make the export put a title at the beginning of the
> document (PDF, RTF, XLS).
>
> Thanks,
> Narcis

Hi

I answer you the same way I answered someone else asking about PDF exporting: if you want to use the provided PDF export class org.displaytag.export.PdfView as a base, then I suggest you learn how to use iText (since that class already uses it). And you can apply the same to RTF and XLS exporting as well. Just check from the code what libraries they use.

Regards,
Ilari

As a work-around I have my decorator create a MyDateString object which wraps a string, implements Comparable and overrides toString. compareTo() is never called, but toString is. No joy there.

-------------- Original message ----------------------
From: developer.dude@...
> Oh, and I am using version 1.1.1
>
> -------------- Original message ----------------------
> From: developer.dude@...
> > I searched an archive of the list and the closest I have come to the problem I
> > am having is this (which pretty much describes my problem):

Oh, and I am using version 1.1.1

-------------- Original message ----------------------
From: developer.dude@...
> I searched an archive of the list and the closest I have come to the problem I
> am having is this (which pretty much describes my problem):

I searched an archive of the list and the closest I have come to the problem I am having is this (which pretty much describes my problem):

Except I am not using the EL TLD. In short, I have written a comparator, I have set the comparator property for a column, but the compare() method is never being called. Here is what I have. The relevant part of the table:

<jsp:root version="1.2" xmlns:jsp=""; xmlns:display="urn:jsptld:"; xmlns:
<jsp:directive.page
<display:table
<display:column
>> more columns

The comparator.
public class ContractStartDateComparator extends DefaultComparator {

    private static final Log LOG = LogFactory.getLog(ContractStartDateComparator.class);

    public ContractStartDateComparator() {
        super();
        LOG.debug(" >>>>>>>>>>>> constructor called! <<<<<<<<<<<");
    }

    public int compare(Object objectLhs, Object objectRhs) {
        LOG.debug(" >>>>>>>>>>>> lhs = " + objectLhs + " rhs " + objectRhs);
        // return some int from compare logic here.
    }
}

Although I am seeing log statements from my decorator, I see no log statements from the compare method of my comparator when displaying the table or when I click on the column to change the order. The constructor is being called, because I get a log statement from it, but not the compare method.

I haven't spelunked into the DisplayTag code yet - I was hoping that there was something obvious to the list about what I am doing wrong, because I have seen where other people referenced using custom comparators and therefore *somebody* got them to work. Right?

Just a thought - the decorator is called before the comparator - right? Because I am decorating a date property that can be null - if it is null I return "ASAP", if not null I return the date as a string.

Any help appreciated. I found this very late in a dev cycle, too late, because the default sort order was what I was expecting and my unit tests worked fine with the comparator itself, but now the requirement is that the sort order be reversed and I am unable to comply.

Thanks in advance.
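For readers puzzling over the null-handling the poster describes, here is a framework-free sketch of that ordering in plain Java. It substitutes a plain Comparator for displaytag's DefaultComparator so the snippet compiles on its own; the class name and "ASAP" convention follow the thread, everything else is illustrative.

```java
import java.util.Comparator;

// Null-tolerant ordering: a missing start date is shown as "ASAP" and
// sorts before any real date. ISO-style date strings compare lexically.
public class StartDateOrder implements Comparator<String> {
    @Override
    public int compare(String lhs, String rhs) {
        boolean lhsAsap = lhs == null || lhs.equals("ASAP");
        boolean rhsAsap = rhs == null || rhs.equals("ASAP");
        if (lhsAsap && rhsAsap) return 0;
        if (lhsAsap) return -1;    // "ASAP" sorts first
        if (rhsAsap) return 1;
        return lhs.compareTo(rhs); // e.g. "2009-01-21" < "2009-02-01"
    }

    public static void main(String[] args) {
        StartDateOrder order = new StartDateOrder();
        System.out.println(order.compare("ASAP", "2009-01-21"));
        System.out.println(order.compare("2009-01-21", "2009-02-01"));
    }
}
```

Note this deliberately compares the decorated strings, which matters if (as the poster suspects) the decorator runs before the comparator.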
http://sourceforge.net/p/displaytag/mailman/displaytag-user/?viewmonth=200901&viewday=21
John M. Dlugosz has asked for the wisdom of the Perl Monks concerning the following question:

I looked at Moose Manual Attributes to be sure, and it states, "An attribute is a property that every member of a class has."

Why the double-take? Because I see MooseX::Traits adds an attribute to the class called _trait_namespace whose value is used when you write:

my $class = Class->with_traits('Role')->new( foo => 42 );

The instructions on overriding the value of _trait_namespace do not show anything different. So what's going on here?

I want to do something similar, and have a value that is used by some class method (like new) and can be changed by a derived class. This looks like the accessor is a virtual class method (something C++ doesn't have). In Smalltalk, class slots and virtual class methods are simply normal slots and methods on the metaclass instance (the class object itself). So it makes sense that Moose should accommodate such a thing, as it uses the same architecture.

Looking at the source of MooseX::Traits, I see a perfectly ordinary has in a role. However, it is 'bare', so there is no getter and nothing else that would make it usable. Eventually, there is a call to:

my $namespace = $class->meta->find_attribute_by_name('_trait_namespace');
...
$namespace->default
...

The use of such a thing is testimony that "class variables" and "class methods" are indeed wanted!

A class has zero or more attributes. It doesn't say anything about class-level attributes. I'm pretty sure that class methods as supported by Perl's normal dispatch are virtual, though, and that document doesn't say anything other than "Any subroutine you define in your class is a method."
https://www.perlmonks.org/?node_id=902153
Mitaka Series Release Notes

13.1.4

Security Issues

[CVE-2017-7214] Failed notification payload is dumped in logs with auth secrets

13.1.3

Bug Fixes

13.1.2

13.1.1

Known Issues

13.1.0

Upgrade Notes

The record configuration option for the console proxy services (like VNC, serial, spice) is changed from boolean to string. It specifies the filename that will be used for recording websocket frames.

13.0.0

Prelude

Nova 12.0.0 (Liberty) to 13.0.0 (Mitaka). That said, a few major changes are worth noticing here. This is not an exhaustive list of things to notice, rather just important things you need to know:

- Latest API microversion supported for Mitaka is v2.25
- Nova now requires a second database (called the 'API DB')
- A new nova-manage script allows you to perform all online DB migrations once you upgrade your cloud
- EC2 API support is fully removed

New Features

Enables NUMA topology reporting on PowerPC architecture from the libvirt driver in Nova, but with a caveat, as mentioned below. NUMA cell affinity and dedicated CPU pinning code assumes that the host operating system is exposed to threads. PowerPC based hosts use core based scheduling for processes. Due to this, the cores on the PowerPC architecture are treated as threads. Since cores are always less than or equal to the threads on a system, this leads to non-optimal resource usage while pinning. This feature is supported from libvirt version 1.2.19 for PowerPC.

A new REST API to cancel an ongoing live migration has been added in microversion 2.24. Initially this operation will only work with the libvirt virt driver.

It is possible to block live migrate instances with additional cinder volumes attached. This requires the libvirt version to be >= 1.2.17 and does not work when live_migration_tunnelled is set to True.

Project-id and user-id are now also returned in the return data of os-server-groups APIs.
In order to use this new feature, users have to include the request header for microversion v2.13 in the API request.

Add support for enabling UEFI boot with libvirt.

A new host_status attribute for servers/detail and servers/{server_id}. In order to use this new feature, users have to include the request header for microversion v2.16 in the API request. A new policy os_compute_api:servers:show:host_status was added to enable the feature. By default, this is only exposed to cloud administrators.

A new server action trigger_crash_dump has been added to the REST API in microversion 2.17.

When RBD is used for ephemeral disks and image storage, make snapshot use Ceph directly, and update Glance with the new location. In case of failure, it will gracefully fall back to the "generic" snapshot method. This requires changing the typical permissions for the Nova Ceph user (if using authx) to allow writing to the pool where VM images are stored, and it also requires configuring Glance to provide a v2 endpoint with direct_url support enabled (there are security implications to doing this). See for more information on configuring OpenStack with RBD.

A new option "live_migration_inbound_addr" has been added in the configuration file, with None as the default value. If this option is present in pre_migration_data, the IP address/hostname provided will be used instead of the migration target compute node's hostname as the URI for live migration; if it's None, the mechanism remains as before.

Added support for CPU thread policies, which can be used to control how the libvirt virt driver places guests with respect to CPU SMT "threads". These are provided as instance and image metadata options, 'hw:cpu_thread_policy' and 'hw_cpu_thread_policy' respectively, and provide an additional level of control over CPU pinning policy, when compared to the existing CPU policy feature. These changes were introduced in commits '83cd67c' and 'aaaba4a'.
Add support for enabling discard support for block devices with libvirt. This will be enabled for Cinder volume attachments that specify support for the feature in their connection properties. This requires support to be present in the versions of libvirt (v1.0.6+) and qemu (v1.6.0+) used, along with the configured virtual drivers for the instance. The virtio-blk driver does not support this functionality.
A new auto value for the configuration option upgrade_levels.compute is accepted, which allows automatic determination of the compute service version to use for RPC communication. By default, we still use the newest version if not set in the config, a specific version if asked, and only do this automatic behavior if 'auto' is configured. When 'auto' is used, sending a SIGHUP to the service will cause the value to be re-calculated. Thus, after an upgrade is complete, sending SIGHUP to all services will cause them to start sending messages compliant with the newer RPC version.
The libvirt driver in Nova now supports the Cinder DISCO volume driver.
A disk space scheduling filter is now available, which prefers compute nodes with the most available disk space. By default, free disk space is given equal importance to available RAM. To increase the priority of free disk space in scheduling, increase the disk_weight_multiplier option.
A new REST API to force live migration to complete has been added in microversion 2.22.
The os-instance-actions methods now read actions from deleted instances. This means that 'GET /v2.1/{tenant-id}/servers/{server-id}/os-instance-actions' and 'GET /v2.1/{tenant-id}/servers/{server-id}/os-instance-actions/{req-id}' will return instance-action items even if the instance corresponding to '{server-id}' has been deleted.
When booting an instance, its sanitized 'hostname' attribute is now used to populate the 'dns_name' attribute of the Neutron ports the instance is attached to.
This functionality enables the Neutron internal DNS service to know the ports by the instance's hostname. As a consequence, commands like 'hostname -f' will work as expected when executed in the instance. When a port's network has a non-blank 'dns_domain' attribute, the port's 'dns_name' combined with the network's 'dns_domain' will be published by Neutron in an external DNS as a service like Designate. As a consequence, the instance's hostname is published in the external DNS as a service. This functionality is added to Nova when the 'DNS Integration' extension is enabled in Neutron. The publication of 'dns_name' and 'dns_domain' combinations to an external DNS as a service additionally requires the configuration of the appropriate driver in Neutron. When the 'Port Binding' extension is also enabled in Neutron, the publication of a 'dns_name' and 'dns_domain' combination to the external DNS as a service will require one additional update operation when Nova allocates the port during the instance boot. This may have a noticeable impact on the performance of the boot process.
The libvirt driver now has a live_migration_tunnelled configuration option which should be used where the VIR_MIGRATE_TUNNELLED flag would previously have been set or unset in the live_migration_flag and block_migration_flag configuration options.
For the libvirt driver, by default hardware properties will be retrieved from the Glance image, and if none have been provided, it will use a libosinfo database to get those values. If users want to force a specific guest OS ID for the image, they can now use a new glance image property os_distro (e.g. --property os_distro=fedora21). In order to use the libosinfo database, you need to separately install the related native package provided for your operating system distribution.
Add support for allowing Neutron to specify the bridge name for the OVS, Linux Bridge, and vhost-user VIF types.
Added a nova-manage db online_data_migrations command for forcing online data migrations, which will run all registered migrations for the release, instead of there being a separate command for each logical data migration. Operators need to make sure all data is migrated before upgrading to the next release, and the new command provides a unified interface for doing it.
Provides API 2.18, which makes the use of project_ids in API URLs optional.
Libvirt with the Virtuozzo virtualisation type now supports snapshot operations.
Remove the onSharedStorage parameter from the server's evacuate action in microversion 2.14. Nova will automatically detect if the instance is on shared storage. adminPass is also removed from the response body, which makes the response body empty. The user can get the password with the server's os-server-password action.
Add two new list/show APIs for server-migration. The list API will return the in-progress live migration information of a server. The show API will return a specified in-progress live migration of a server. This has been added in microversion 2.23.
A new service.status versioned notification has been introduced. When the status of the Service object is changed, nova will send a new service.update notification with a versioned payload according to bp versioned-notification-api. The new notification is documented in
Two new policies, soft-affinity and soft-anti-affinity, have been implemented for the server-group feature of Nova. This means that the POST /v2.1/{tenant_id}/os-server-groups API resource now accepts 'soft-affinity' and 'soft-anti-affinity' as values of the 'policies' key of the request body.
In Nova Compute API microversion 2.19, you can specify a "description" attribute when creating, rebuilding, or updating a server instance. This description can be retrieved by getting server details, or list details for servers. Refer to the Nova Compute API documentation for more information.
Note that the description attribute existed in prior Nova versions, but was set to the server name by Nova and was not visible to the user. So, servers you created with microversions prior to 2.19 will return a description equal to the name on server details in microversion 2.19.
As part of refactoring the notification interface of Nova, a new config option 'notification_format' has been added to specify which notification format shall be used by nova. The possible values are 'unversioned' (i.e. legacy), 'versioned', and 'both'. The default value is 'both'. The new versioned notifications are documented in
For the VMware driver, the flavor extra specs for quotas have been extended to support:
quota:cpu_limit - The CPU of a virtual machine will not exceed this limit, even if there are available resources. This is typically used to ensure a consistent performance of virtual machines independent of available resources. Units are MHz.
quota:cpu_reservation - guaranteed minimum reservation (MHz)
quota:cpu_shares_level - the allocation level. This can be 'custom', 'high', 'normal' or 'low'.
quota:cpu_shares_share - in the event that 'custom' is used, this is the number of shares.
quota:memory_limit - The memory utilization of a virtual machine will not exceed this limit, even if there are available resources. This is typically used to ensure a consistent performance of virtual machines independent of available resources. Units are MB.
quota:memory_reservation - guaranteed minimum reservation (MB)
quota:memory_shares_level - the allocation level. This can be 'custom', 'high', 'normal' or 'low'.
quota:memory_shares_share - in the event that 'custom' is used, this is the number of shares.
quota:disk_io_limit - The I/O utilization of a virtual machine will not exceed this limit. The unit is number of I/O per second.
quota:disk_io_reservation - Reservation control is used to provide guaranteed allocation in terms of IOPS
quota:disk_io_shares_level - the allocation level.
This can be 'custom', 'high', 'normal' or 'low'.
quota:disk_io_shares_share - in the event that 'custom' is used, this is the number of shares.
quota:vif_limit - The bandwidth limit for the virtual network adapter. The utilization of the virtual network adapter will not exceed this limit, even if there are available resources. Units in Mbits/sec.
quota:vif_reservation - Amount of network bandwidth that is guaranteed to the virtual network adapter. If utilization is less than reservation, the resource can be used by other virtual network adapters. Reservation is not allowed to exceed the value of limit if limit is set. Units in Mbits/sec.
quota:vif_shares_level - the allocation level. This can be 'custom', 'high', 'normal' or 'low'.
quota:vif_shares_share - in the event that 'custom' is used, this is the number of shares.
Upgrade Notes¶
All noVNC proxy configuration options have been added to the 'vnc' group. They should no longer be included in the 'DEFAULT' group.
All VNC XVP configuration options have been added to the 'vnc' group. They should no longer be included in the 'DEFAULT' group.
Upon first startup of the scheduler service in Mitaka, all defined aggregates will have UUIDs generated and saved back to the database. If you have a significant number of aggregates, this may delay scheduler start as that work is completed, but it should be minor for most deployments.
During an upgrade to Mitaka, operators must create and initialize a database for the API service. Configure this in [api_database]/connection, and then run nova-manage api_db sync.
We can not use microversion 2.25 to do live-migration during upgrade; nova-api will raise a bad request if there are still old compute nodes in the cluster.
The option scheduler_driver has been changed to use an entrypoint instead of a full class path. Set one of the entrypoints under the namespace 'nova.scheduler.driver' in 'setup.cfg'. Its default value is 'filter_scheduler'.
The full class path style is still supported in the current release, but it is not recommended because the class path can change, and this support will be dropped in the next major release.
The option scheduler_host_manager has been changed to use an entrypoint instead of a full class path. Set one of the entrypoints under the namespace 'nova.scheduler.host_manager' in 'setup.cfg'. Its default value is 'host_manager'. The full class path style is still supported in the current release, but it is not recommended because the class path can change, and this support will be dropped in the next major release.
The local conductor mode is now deprecated and may be removed as early as the 14.0.0 release. If you are using local conductor mode, plan on deploying remote conductor by the time you upgrade to the 14.0.0 release.
The Extensible Resource Tracker is deprecated and will be removed in the 14.0.0 release. If you use this functionality and have custom resources that are managed by the Extensible Resource Tracker, please contact the Nova development team by posting to the openstack-dev mailing list. There is no future planned support for the tracking of custom resources.
For Liberty compute nodes, the disk_allocation_ratio works as before: you must set it on the scheduler if you want to change it. For Mitaka compute nodes, the disk_allocation_ratio set on the compute nodes will be used only if the configuration is not set on the scheduler. This is to allow, for backwards compatibility, the ability to still override the disk allocation ratio by setting the configuration on the scheduler node. In Newton, we plan to remove the ability to set the disk allocation ratio on the scheduler, at which point the compute nodes will always define the disk allocation ratio and pass it up to the scheduler. None of this changes the default disk allocation ratio of 1.0. This matches the behaviour of the RAM and CPU allocation ratios.
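The disk_allocation_ratio precedence just described can be summarised in a tiny sketch. This is illustrative only, not Nova's actual code; the function name is an assumption:

```python
def effective_disk_allocation_ratio(scheduler_value, compute_value, default=1.0):
    """Mitaka precedence sketch: a ratio configured on the scheduler wins;
    otherwise the compute node's configured value is used; otherwise the
    1.0 default applies (matching the RAM and CPU allocation ratios)."""
    for value in (scheduler_value, compute_value):
        if value is not None:
            return value
    return default

print(effective_disk_allocation_ratio(None, 1.5))   # 1.5 (compute node value)
print(effective_disk_allocation_ratio(2.0, 1.5))    # 2.0 (scheduler override)
print(effective_disk_allocation_ratio(None, None))  # 1.0 (default)
```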
(Only if you do continuous deployment) 1337890ace918fa2555046c01c8624be014ce2d8 drops support for an instance major version, which means that you must have deployed at least commit 713d8cb0777afb9fe4f665b9a40cac894b04aacb before deploying this one.
Nova now requires ebtables 2.0.10 or later.
Nova recommends libvirt 1.2.11 or later.
The filters' internal interface has changed to use the RequestSpec NovaObject instead of the old filter_properties dictionary. If you run out-of-tree filters, you need to modify the host_passes() method to accept a new RequestSpec object and modify the filter internals to use that new object. You can look at other in-tree filters for the logic, or ask for help in the #openstack-nova IRC channel.
The force_config_drive configuration option provided an always value which was deprecated in the previous release. That always value is no longer accepted, and deployments using that value have to change it to True before upgrading.
Support for Windows / Hyper-V Server 2008 R2 was deprecated in Liberty (12.0.0) and is no longer supported in Mitaka (13.0.0). If you have compute nodes running that version, please consider moving the running instances to other compute nodes before upgrading those to Mitaka.
The libvirt driver will now correct unsafe and invalid values for the live_migration_flag and block_migration_flag configuration options. The live_migration_flag must not contain VIR_MIGRATE_SHARED_INC, but block_migration_flag must contain it. Both options must contain VIR_MIGRATE_PEER2PEER, except when using the 'xen' virt type, where this flag is not supported. Both flags must contain the VIR_MIGRATE_UNDEFINE_SOURCE flag and must not contain the VIR_MIGRATE_PERSIST_DEST flag.
The libvirt driver has changed the default value of the 'live_migration_uri' flag, which is now dependent on the 'virt_type'. The old default 'qemu+tcp://%s/system' is now adjusted for each of the configured hypervisors.
For Xen this will be 'xenmigr://%s/system'; for kvm/qemu this will be 'qemu+tcp://%s/system'.
The minimum required libvirt is now version 0.10.2. The minimum libvirt for the N release has been set to 1.2.1.
In order to make project_id optional in URLs, we must constrain the set of allowed values for project_id in our URLs. This defaults to a regex of [0-9a-f\-]+, which will match hex uuids (with / without dashes) and integers. This covers all known project_id formats in the wild. If your site uses other values for project_id, you can set a site-specific validation with the project_id_regex config variable.
The old neutron communication options that were slated for removal in Mitaka are no longer available. This means that going forward, communication to neutron will need to be configured using auth plugins.
All code and tests for Nova's EC2 and ObjectStore API support, which was deprecated in Kilo (), have been completely removed in Mitaka. This has been replaced by the new ec2-api project ().
Warning: Some installation tools (such as packstack) hardcode the value of enabled_apis in your nova.conf. While the defaults for enabled_apis dropped ec2 as a value, if that is hard-coded in your nova.conf, you will need to remove it before restarting Nova's API server, or it will not start.
The commit with change-id Idd4bbbe8eea68b9e538fa1567efd304e9115a02a requires that the nova_api database is set up and Nova is configured to use it. Instructions on doing that are provided below.
Nova now requires that two databases are available and configured. The existing nova database needs no changes, but a new nova_api database needs to be set up. It is configured and managed very similarly to the nova database. A new connection string configuration option is available in the api_database group. An example:
[api_database]
connection = mysql+pymysql://user:secret@127.0.0.1/nova_api?charset=utf8
And a new nova-manage command has been added to manage db migrations for this database.
"nova-manage api_db sync" and "nova-manage api_db version" are available and function like the parallel "nova-manage db …" commands.
A new use_neutron option is introduced which replaces the obtuse network_api_class option. This defaults to 'False' to match existing defaults; however, if network_api_class is set to the known Neutron value, Neutron networking will still be used as before.
The FilterScheduler now includes disabled hosts. Make sure you include the ComputeFilter in the scheduler_default_filters config option to avoid placing instances on disabled hosts.
Upgrade the rootwrap configuration for the compute service, so that patches requiring new rootwrap configuration can be tested with grenade.
For backward-compatible support, the setting CONF.vmware.integration_bridge needs to be set when using the Neutron NSX|MH plugin. The default value has been set to None.
The XenServer hypervisor type has been changed from xen to XenServer. This could impact your aggregate metadata or your flavor extra specs if you provide only the former.
The glance xenserver plugin has been bumped to version 1.3, which includes new interfaces for referencing glance servers by URL. All dom0s will need to be upgraded with this plugin before upgrading the nova code.
Deprecation Notes¶
It is now deprecated to use [glance] api_servers without a protocol scheme (http / https). This is required to support urls throughout the system. Update any api_servers list with fully qualified https / http urls.
The conductor.manager configuration option is now deprecated and will be removed.
Deprecate the compute_stats_class config option. This allowed loading an alternate implementation for collecting statistics for the local compute host. Deployments that felt the need to use this facility are encouraged to propose additions upstream so we can create a stable and supported interface here.
Deprecate the db_driver config option. Previously this let you replace our SQLAlchemy database layer with your own.
This approach is deprecated. Deployments that felt the need to use this facility are encouraged to work with upstream Nova to address db driver concerns in the main SQLAlchemy code paths.
The host, port, and protocol options in the [glance] configuration section are deprecated and will be removed in the N release. The api_servers value should be used instead.
Deprecate the use of nova.hooks. This facility used to let arbitrary out-of-tree code be executed around certain internal actions, but is unsuitable for having a well maintained API. Anyone using this facility should bring forward their use cases in the Newton cycle as nova-specs.
Nova used to support the concept that service managers were replaceable components. There are many config options where you can replace a manager by specifying a new class. This concept is deprecated in Mitaka, as are the following config options:
[cells] manager
metadata_manager
compute_manager
console_manager
consoleauth_manager
cert_manager
scheduler_manager
Many of these will be removed in Newton. Users of these options are encouraged to work with Nova upstream on any features missing in the default implementations that are needed.
Deprecate the security_group_api configuration option. The current values are nova and neutron. In the future, the correct security_group_api option will be chosen based on the value of use_neutron, which provides a more coherent user experience.
Deprecate the vendordata_driver config option. This allowed creating a different class loader for defining vendordata metadata. The default driver loads from a json file that can be arbitrarily specified, so is still quite flexible. Deployments that felt the need to use this facility are encouraged to propose additions upstream so we can create a stable and supported interface here.
The configuration option api_version in the ironic group has been marked as deprecated and will be removed in the future.
The only possible value for that configuration was "1" (because Ironic only has one API version), and the Ironic team came to an agreement that setting the API version via configuration option should not be supported anymore. As the Ironic driver in Nova requests the Ironic v1.8 API, this means that Nova 13.0.0 ("Mitaka") requires Ironic 4.0.0 ("Liberty") or newer if you want to use the Ironic driver.
The libvirt live_migration_flag and block_migration_flag config options are deprecated. These options gave too fine-grained control over the flags used, and, in some cases, misconfigurations could have dangerous side effects. Please note the availability of a new live_migration_tunnelled configuration option.
The network_device_mtu option in Nova is deprecated for removal, since network MTU should be specified when creating the network with nova-network. With Neutron networks, the MTU value comes from the segment_mtu configuration option in Neutron.
The old top-level resource /os-migrations is deprecated and won't be extended anymore. migration_type has been added for /os-migrations, along with a ref link to /servers/{uuid}/migrations/{id} when the migration is an in-progress live-migration. This has been added in microversion 2.23.
Deprecate the volume_api_class and network_api_class config options. We only have one sensible backend for either of these. These options will be removed and turned into constants in Newton.
The option memcached_servers is deprecated in Mitaka. Operators should use oslo.cache configuration instead. Specifically, the enabled option under the [cache] section should be set to True, and the url(s) for the memcached servers should be in the [cache]/memcache_servers option.
The Zookeeper Service Group driver has been removed. The driver has no known users and is not actively maintained. A warning log message about the driver's state was added for the Kilo release. Also, the evzookeeper library that the driver depends on is unmaintained and incompatible with recent eventlet releases.
A future release of Nova will use the Tooz library to track service liveliness, and Tooz supports Zookeeper.
Security Issues¶
[OSSA 2016-001] Nova host data leak through snapshot (CVE-2015-7548)
[OSSA 2016-002] Xen connection password leak in logs via StorageError (CVE-2015-8749)
[OSSA 2016-007] Host data leak during resize/migrate for raw-backed instances (CVE-2016-2140)
Bug Fixes¶
In a race condition, if the base image is deleted by ImageCacheManager while imagebackend is copying the image to the instance path, the instance goes into an error state. In this case, when libvirt has changed the base file ownership to libvirt-qemu while imagebackend is copying the image, we get a permission denied error on updating the file access time using os.utime. Fixed this issue by updating the base file access time with root user privileges using the 'touch' command.
The Conductor RPC API no longer supports v2.x.
The service subcommand of nova-manage is deprecated. Use the nova service-* commands from python-novaclient instead, or the os-services REST resource. The service subcommand will be removed in the 14.0 release.
The Neutron network MTU value is now used when plugging virtual interfaces in nova-compute. If the value is 0, which is the default value for the segment_mtu configuration option in Neutron before Mitaka, then the (deprecated) network_device_mtu configuration option in Nova is used, which defaults to not setting an MTU value.
The sample policy file shipped with Nova contained many policies set to "" (allow all), which was not the proper default for many of those checks. It was also a source of confusion, as some people thought "" meant to use the default rule. These empty policies have been updated to be explicit in all cases. Many of them were changed to match the default rule of "admin_or_owner", which is a more restrictive policy check but does not change the restrictiveness of the API calls overall, because there are similar checks in the database already.
This does not affect any existing deployment, just the sample file included for use by new deployments. Nova's EC2 API support, which was deprecated in Kilo (), is removed from Mitaka. This has been replaced by the new ec2-api project ().
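The project_id URL constraint described in the upgrade notes above can be checked directly in Python. A small sketch of how the default pattern behaves; full-match anchoring is an assumption for illustration:

```python
import re

# Default project_id regex from the upgrade notes: hex uuids
# (with or without dashes) and plain integers all match.
PROJECT_ID_RE = re.compile(r'[0-9a-f\-]+')

def is_valid_project_id(value):
    # fullmatch: the whole URL segment must consist of allowed characters.
    return PROJECT_ID_RE.fullmatch(value) is not None

print(is_valid_project_id('6f70656e737461636b20342065766572'))    # True (hex, no dashes)
print(is_valid_project_id('6f706563-6b20-3420-6576-657220776f77'))  # True (dashed uuid)
print(is_valid_project_id('MyProject'))                             # False
```

Sites with non-hex project ids would hit the False case and need the project_id_regex override mentioned above.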
https://docs.openstack.org/releasenotes/nova/mitaka.html
Basic development environment
- Python 3.6
- PyCharm

import parsel
import requests
import re

Target web page analysis
Today, I'll crawl the international news column of the news network. Click to display more news content, and you can see the relevant data interface, which contains the news title and the URL address of the news details.
How to extract the URL address:
1. Convert to JSON and take the value of the key-value pair;
2. Match the URL address with a regular expression.
Both methods work, depending on personal preference.
Page turning is carried out according to the pager change in the interface data link, which corresponds to the page number.
On the details page, you can see that the news content is in a div tag and p tags. By analysing the website as usual, you can get the news content.
Save mode:
1. You can save TXT text.
2. It can also be saved as PDF.
Previously, I also talked about crawling article content and saving it as PDF. You can click the links below to see the relevant saving methods.
Python crawls the bid winning bid of Bibi network and saves it in PDF format
Python crawls CSDN blog posts and makes them into PDF files
In this article, we use the form of saving TXT text.
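The two extraction options above can be compared on a made-up interface payload. The payload shape here is an assumption for illustration, not the real interface's response:

```python
import json
import re

# A simplified stand-in for the news interface response.
payload = ('{"data":[{"title":"News A","url":"http://example.com/a.html"},'
           '{"title":"News B","url":"http://example.com/b.html"}]}')

# Option 1: parse as JSON and read the key-value pairs.
urls_from_json = [item['url'] for item in json.loads(payload)['data']]

# Option 2: match the URL addresses with a regular expression.
urls_from_regex = re.findall(r'"url":"(.*?)"', payload)

print(urls_from_json == urls_from_regex)  # True
```

Both give the same list here; JSON parsing is more robust if the payload structure changes, while the regex is shorter to write.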
Summary of overall crawling ideas
- On the column list page, click more news content to obtain the interface data URL
- Match the news detail page URLs in the data returned by the interface data URL
- Extract the news content using regular website parsing operations (re, CSS, XPath)
- Save the data

Code implementation

Get the web page source code:

def get_html(html_url):
    """
    Get the web page source code response
    :param html_url: web page URL address
    :return: response
    """
    response = requests.get(url=html_url, headers=headers)
    return response

Get the URL address of each news article:

def get_page_url(html_data):
    """
    Get the URL address of each news article
    :param html_data: response.text
    :return: list of news article URL addresses
    """
    page_url_list = re.findall('"url":"(.*?)"', html_data)
    return page_url_list

File names cannot contain special characters, so the news title needs to be processed:

def file_name(name):
    """
    File naming cannot carry special characters
    :param name: news title
    :return: title without special characters
    """
    replace = re.compile(r'[\\\/\:\*\?\"\<\>\|]')
    new_name = re.sub(replace, '_', name)
    return new_name

Save the data:

def download(content, title):
    """
    Save the news content as TXT with open()
    :param content: news content
    :param title: news title
    :return:
    """
    path = 'news\\' + title + '.txt'
    with open(path, mode='a', encoding='utf-8') as f:
        f.write(content)
        print('Saving', title)

Main function:

def main(url):
    """
    Main function
    :param url: URL address of the news list page
    :return:
    """
    html_data = get_html(url).text  # get interface data response.text
    lis = get_page_url(html_data)  # get the list of news URL addresses
    for li in lis:
        page_data = get_html(li).content.decode('utf-8', 'ignore')  # news details page response.text
        selector = parsel.Selector(page_data)
        title = re.findall('<title>(.*?)</title>', page_data, re.S)[0]  # get the news title
        new_title = file_name(title)
        new_data = selector.css('#cont_1_1_2 div.left_zw p::text').getall()
        content = ''.join(new_data)
        download(content, new_title)

if __name__ == '__main__':
    for page in range(1, 101):
        url_1 = '{}&pagenum=9&t=5_58'.format(page)
        main(url_1)

Operation effect diagram
https://developpaper.com/python-crawls-news-network-data/
'SharePoint Web Services Default'
$meta = New-SPMetadataServiceApplication -HubUri -ApplicationPool $pool -Name 'Tenant Managed Metadata Service'
New-SPMetadataServiceApplicationProxy -Name 'Tenant Managed Metadata Proxy' -DefaultProxyGroup -ServiceApplication $meta

Hopefully that's enough to get you curious and started for now. I'll follow up with additional scripts and the tenant admin app later.

Hi Sieena, I actually have no idea…I'm about the worst person to ask SKU-related questions. 🙂 Sorry about that. Steve

Hello Steve, I'm trying to run the PowerShell scripts provided in this article, and for some reason the SPIisWebServiceApplicationPool object is not identified by my SharePoint 2010 Management Shell. Do you happen to know if this object is specific to the SP 2010 beta version? I'm using the RC version of SP 2010. Thanks a lot, Eugene

Hello again. Yeah, I figured it out – it's actually part of the Microsoft.SharePoint.Administration namespace.

Hi speschka, thank you for your post and information… do you have any information on whether this multi-tenant support is going to be included by default in both standard and enterprise editions? thank you!

With SP2010 RTM, you must replace New-SPIisWebServiceApplicationPool with Set-SPServiceApplicationPool (see technet.microsoft.com/…/ff621077.aspx)

With SP2010 RTM, you must replace New-SPIisWebServiceApplicationPool with New-SPServiceApplicationPool (see technet.microsoft.com/…/ff621077.aspx) (ignore last post)

Great post on multi-tenancy. One of the benefits SharePoint 2010 offers in comparison to previous editions is improved host-named site collections, which means that site collections are a real (scalable) alternative to having a Web app per tenant. I have posted a step-by-step for creating a couple of host-named site collections over at mossblogger.blogspot.com/…/multi-tenancy-in-sharepoint-2010-using.html

Hi Benjamin, how does the alternative URL work in 2010? For example, I create a site called mysharepoint.onmyCloud.com.
My customer will buy a domain called which has to map to mysharepoint.onmycloud.com. How do we do that? Thanks, CR

Pingback from Tips & Tricks – Useful Resources for Implementing Multi-Tenancy Sharepoint 2010 Environment | Linxiao's Sharepoint, Dynamics CRM and BI Space
https://blogs.technet.microsoft.com/speschka/2009/11/30/enabling-multi-tenant-support-in-sharepoint-2010/
PID Table of Contents A proportional-integral-derivative controller (PID controller) is a generic loop feedback mechanism. It measures a process variable (e.g. temperature), and checks it against a desired set point (e.g. 100 degrees celsius); it then attempts to adjust the process variable by changing a controller output (e.g. PWM signal to an electric heater) which will bring the process variable closer to the desired set point. The wikipedia article on PID controllers is an excellent place to start in understanding the basic concepts. The controlguru website also contains a wealth of easy to digest information on how to implement and tune a PID controller. As these resources can already explain the fundamentals and implementation of a PID controller, this page will detail the PID library and then present an example of putting it into practice. Software¶ The PID software runs a loop at a set interval, and performs the following calculation: Where - CO is the controller output - CObias is an optional, user set bias for the controller output - Kc is a proportional tuning constant - e(t) is the error at time t - Ti is an integral tuning constant - Td is a derivative tuning constant - PV is the process variable - dt is the rate the loop runs at The controller works in percentages during the calculations and then scales relevant outputs back into real world values. Initialization¶ When a PID object is created, the three tuning constants (Kc, Ti, Td) and an interval time are passed as parameters. By default the controller starts in manual mode - whenever the controller is changed to auto mode, the working variables are reset (according to the current limits) which allows for "bumpless" transfer between manual and auto mode. The input and output limits are set as 0.0-3.3 [volts], and the tuning constants are slightly modified to make things easier during calculations. 
Finally, appropriate variables (such as the controller output and process variable) are initialized to zero before the main loop method is attached to a Ticker which runs at the rate passed in by the user.

Application¶

In order to set up a PID object for use in a specific application, a typical initialization and loop might look like this.

    #include "PID.h"

    #define RATE 0.1

    //Kc, Ti, Td, interval
    PID controller(1.0, 0.0, 0.0, RATE);
    AnalogIn pv(p15);
    PwmOut co(p26);

    int main(){
        //Analog input from 0.0 to 3.3V
        controller.setInputLimits(0.0, 3.3);
        //Pwm output from 0.0 to 1.0
        controller.setOutputLimits(0.0, 1.0);
        //If there's a bias.
        controller.setBias(0.3);
        controller.setMode(AUTO);
        //We want the process variable to be 1.7V
        controller.setSetPoint(1.7);

        while(1){
            //Update the process variable.
            controller.setProcessValue(pv.read());
            //Set the new output.
            co = controller.getRealOutput();
            //Wait for another loop calculation.
            wait(RATE);
        }
    }

Example: Velocity Control¶

This example will show how to use the PID controller library and a brushed DC motor with a quadrature encoder and H-bridge to perform velocity control. We can calculate the velocity of the motor by taking two samples of the quadrature encoder's pulse count in a short interval and then dividing the difference between them by the length of the interval to get the number of pulses per second. We could turn pulses per second into a more familiar unit, such as metres per second, but we want to try to choose a process variable which is as closely related to what we're measuring as possible; and since what we're measuring (pulses per second) is directly proportional to the velocity, it should provide a much better value to work with during our PID calculations. Our process variable will therefore be the number of pulses per second we've read, and our controller output will be the PWM signal's duty cycle to the H-bridge.
Tuning Method¶

There are many ways to tune the constants in a PID controller, including simple trial and error; the method presented on controlguru involves fitting a simple first order plus dead time dynamic model to process test data that we take - a lot easier than it sounds! This is the method we will follow, but it is not the only way.

Step Test¶

The first thing we need to do is observe how our process variable changes with respect to the controller output. We'll do this by performing a step test - after setting our controller output to a specific value and observing our process variable, we will then "step" our controller output to a new value and watch what happens to our process variable. Here are the results. The number of counts per second was observed while the PWM duty cycle was 70%, and after it was stepped to 60%.

Process Gain¶

The process gain constant, or Kp, describes how the process variable changes when the controller output changes. It is calculated in the following way:

    Kp = dPV / dCO

We can use the data from our step test to calculate Kp. dPV = 1000, and dCO = -0.1; when talking about the controller output, we will use how far "on" or "off" it is as a percentage in our calculations to make things easier. Therefore dCO = -10%.

    Kp = dPV / dCO = 1000 / -10% = -100 counts per second/%

Library¶
https://developer.mbed.org/cookbook/PID
CC-MAIN-2017-34
refinedweb
885
59.84
Created on 2018-08-27 08:44 by hniksic, last changed 2018-09-18 10:37 by xtreak.

Coroutine objects have public methods such as send, close, and throw, which do not appear to be documented. For example, a StackOverflow user asks how to abort an already created (but not submitted) coroutine without a RuntimeWarning, with the answer being to use the close() method. The user asked where one finds the close method. Currently the methods only appear to be documented in PEP 492, which refers to the generator documentation for details. The glossary entry for coroutine (object) links to PEP 492 and to the async def statement. Various places in the documentation, e.g. the index, link to that page, but it is mostly concerned with the usage of coroutines within asyncio, where the methods on individual coroutine objects should not be used. I would expect to find documentation on coroutine objects under built-in types. In comparison, generator-iterator methods are documented in the language reference.

Is this what you are referring to that has docs for send, close and throw in coroutine objects? Coroutine object docs: If the above is the one then I think we can improve the visibility by linking it from other pages, since it doesn't show up with Google for 'coroutine close'. I had to use the Docs folder in the source code and the Sphinx search on docs.python.org with 'coroutine close' to get there. Thanks

That's exactly it, thanks! I have no idea how I missed it, despite looking (I thought) carefully. But yes, they should be linked from . Just as currently there is that links to , there could be a #coroutine-types that links to
Since is linked from many places, it might make sense to mention that *calling* a coroutine immediately returns a coroutine object, with a link to

Since there is work on the asyncio docs overhaul, I just want to bring this to your attention since I don't know if this has already been resolved with the merged PRs to master, and your thoughts on this. Thanks
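For reference, the close() behavior the report is asking to be documented can be demonstrated in a few lines (this example is mine, not from the issue):

```python
async def work():
    return 42

# Creating a coroutine does not run its body; if it is simply dropped,
# CPython emits "RuntimeWarning: coroutine 'work' was never awaited"
# when the object is garbage collected.
coro = work()

# close() marks the coroutine as finished, so no warning is emitted
# and any later attempt to resume it fails cleanly.
coro.close()

try:
    coro.send(None)
    result = "resumed"
except RuntimeError:
    result = "closed"

print(result)  # closed
```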
https://bugs.python.org/issue34518
CC-MAIN-2020-29
refinedweb
368
68.5
g.message.1grass man page

g.message — Prints a message, warning, progress info, or fatal error in the GRASS way. This module should be used in scripts for messages served to the user.

Keywords
general, support, scripts

Synopsis
g.message
g.message --help
g.message [-wedpiv] message=string [debug=integer] [--help] [--verbose] [--quiet] [--ui]

Flags
- -w  Print message as warning
- -e  Print message as fatal error
- -d  Print message as debug message
- -p  Print message as progress info
- -i  Print message in all modes except quiet mode (message is printed on GRASS_VERBOSE>=1)
- -v  Print message only in verbose mode (message is printed only on GRASS_VERBOSE>=3)
- --help  Print usage summary
- --verbose  Verbose module output
- --quiet  Quiet module output
- --ui  Force launching GUI dialog

Parameters
- message=string [required]  Text of the message to be printed (message is printed on GRASS_VERBOSE>=2)
- debug=integer  Level to use for debug messages (options: 0-5, default: 1)

Description
This module should be used in scripts for messages served to the user.

Notes
Messages containing "=" must use the full message= syntax so the parser doesn't get confused. Users are encouraged to single-quote messages that do not require $VARIABLE expansion.

Usage in Python scripts
- The GRASS Python Scripting Library defines special wrappers for g.message.
- Note: the Python shell in the wxGUI can be used for entering the following sample code:

    import grass.script as gcore
    gcore.warning("This is a warning")

is identical with

    g.message -w message="This is a warning"

Verbosity Levels
Controlled by the "GRASS_VERBOSE" environment variable. Typically this is set using the --quiet or --verbose command line options.
- 0 - only errors and warnings are printed
- 1 - progress messages are printed
- 2 - all module messages are printed
- 3 - additional verbose messages are printed

Debug Levels
Controlled by the "DEBUG" GRASS gisenv variable (set with g.gisenv).
Recommended levels:
- 1 - message is printed once or a few times per module
- 3 - each row (raster) or line (vector)
- 5 - each cell (raster) or point (vector)

See Also
GRASS variables and environment variables
g.gisenv, g.parser

Author
Jachym Cepicky

Last changed: $Date: 2015-12-31 09:19:51 +0100 (Thu, 31 Dec 2015) $

© 2003-2016 GRASS Development Team, GRASS GIS 7.0.4 Reference Manual
https://www.mankier.com/1/g.message.1grass
CC-MAIN-2017-17
refinedweb
367
52.6
This is the continuation of the blog presented here. This blog has been split into 3 parts. In this third part we are going to see how to deploy the application to the HANA Cloud Platform. As an optional step I'll show you how to deploy the app to the SAP Fiori Launchpad.

1.4 DEPLOYING THE APP TO HANA CLOUD PLATFORM

In this part of the exercise you will learn how to take the application you have built so far and deploy it to the HCP. After deployment you can even run the app from HCP.

1. As a first step, since the HCP requires an index.html file to start an application, we need to create such a file. So right click on the webapp folder inside your project and create a new file named index.html

2. Paste the following code inside the new file and save it. Just pay attention to the data-sap-ui-resourceroots attribute and the ComponentContainer's name property, where you need to properly specify the namespace (in this case "com.so") used for the project

    <!DOCTYPE HTML>
    <html>
    <head>
        <meta http-equiv="X-UA-Compatible" content="IE=edge" />
        <meta charset="UTF-8">
        <title>DCExercise</title>
        <script id="sap-ui-bootstrap"
            src="resources/sap-ui-core.js"
            data-sap-ui-libs="sap.m"
            data-sap-ui-theme="sap_bluecrystal"
            data-sap-ui-xx-bindingSyntax="complex"
            data-sap-ui-resourceroots='{"com.so": "../webapp/"}'>
        </script>
        <link rel="stylesheet" type="text/css" href="css/style.css">
        <script>
            sap.ui.getCore().attachInit(function() {
                new sap.m.Shell({
                    app: new sap.ui.core.ComponentContainer({
                        height: "100%",
                        name: "com.so"
                    })
                }).placeAt("content");
            });
        </script>
    </head>
    <body class="sapUiBody" id="content">
    </body>
    </html>

3. Open the neo-app.json file and add the line "welcomeFile": "/webapp/index.html", just after the first line so that, when the application is launched, it knows which file to execute first.

4. You can even select the index.html file and click on the Run button in the top toolbar to check that the application runs fine: this means that the file is correct.

5.
Select the name of the project, right click on it and choose Deploy –> Application status. We are starting from this point so that you can verify that so far we have never deployed to HCP

6. Enter your HCP password and click on Login

7. As you can see here, so far we have not yet deployed the app to HCP. So, since we want to do it right now, click on Deploy

8. Here you can enter the application name and its version, and you can decide if you want to activate this version automatically as soon as the application has been deployed. All the fields come pre-filled. Leave everything as default and click on Deploy

9. The application has been deployed to HCP and the first version has been created. Now you can do two things: the first one is to Open the active version of the application

10. Enter again the credentials for the ES1 system, if required, and click on Log In

11. The application is running directly on your HANA Platform. Notice the application's URL, which is no longer tied to the SAP Web IDE tool

12. Alternatively, according to the screen shown at step 9, you can Open the application's page in the SAP HANA Cloud Platform cockpit

13. In this case the following page is opened. Here you can administer your HANA applications. For example, here you can check the application's URL and verify that the application is started

14. When finished, click on the Versioning tab. Here you can check all the versions that have been pushed to the HANA repository. You can activate or deactivate a specific version and perform a lot of other administrative functions. You can now close this page

15. Once back at this message you can click on the Close button. At the moment we are not yet going to register the app to SAP Fiori Launchpad because we'll do it in the next chapter

16. Refresh the SAP Web IDE tool

17. Now you can see some new fancy symbols on the left of each folder or file in the project explorer.
In this case the green ball means that the file/folder has been committed and is up to date with the one on HCP

18. If you search in the documentation you should be able to find a description for all the available decorations

1.5 DEPLOYING THE APP TO SAP FIORI LAUNCHPAD (optional step)

Finally, you can deploy your SAP Fiori application to the SAP Fiori Launchpad in order to be consumed by users. Please ensure that you have properly subscribed to the Fiori Launchpad Portal in the Trial Landscape as described in the Prerequisites chapter in the first part of this blog.

1. Select your project in the Project Explorer, right click on it and select Deploy –> Register to SAP Fiori Launchpad

2. Enter your SCN password if required

3. Enter a description for this new application and click on Next

4. Enter a Title and a Subtitle for your tile in the FLP and click on the Browse button to choose an icon for your app

5. Type "e-learn" in the search box, select the e-learning icon and click on OK

6. The new icon is assigned to the tile. Click on Next

7. Choose the Catalog and the Group for this application. The first two selections are made automatically by the system because they are mandatory. For the third you can choose the Sample Group and click on Next

8. Click on Finish

9. Your application has been successfully registered to the FLP. Now you can open the registered application in your browser

10. The application is running fine. You might need to enter the ES4 credentials again

11. Alternatively, you can open your SAP Fiori Launchpad in another browser tab

12. This is what the tile for your app looks like. By clicking on this tile you can run your application

13. Congratulations! You have successfully deployed your first app to SAP Fiori Launchpad!

Hi, thank you for the blogs. I am getting the error THERE ARE NO CATEGORIES AVAILABLE on the assignment page when launching to Launchpad. Kind regards
If not, you can create your categories and groups by accessing the FLP admin page at the following URL:-<your_account>.dispatcher.hanatrial.ondemand.com/sap/hana/uis/clients/ushell-app/shells/fiori/FioriLau… where you have to replace the string <your_account> with your account ID. Regards, Simmaco Running into uncaught exception for the above index.html, dumping when attempting to initiate sap.m.Shell require code were missing for the above index.html > this will make your code work. Please create a new Discussion to ask your questions. Regards, Mike (Moderator) SAP Technology RIG Hi Michael It only makes sense to ask here since I am getting the error following this blog Thank you Hi Mr. Eli, That’s the only time it does make sense. Even then, you will get more eyes viewing a Discussion, but it is good to bring a possible problem with the content to the author. Thanks, Mike (Moderator) SAP Technology RIG Hi, I have been able to register my App on the trial FLP. But the thing is that although I am able to run the App using Mock data from WebIDe, I am not able to run with data OR no data comes up if I run the App from Fiori LaunchPad. Any idea what step am I missing? I have already configured project settings and Mock data settings. Thanks, Ags As suggested, creating a thread for discussion too. No Mock data when running App from FLP Thanks –PavanG Hi, in point 2 you write Paste the following code to the index.html. But I can’t find a screenshot or something else with the code fro index.html. Can you add? Thanks Jochen The source code should be visible now, before it was not because I formatted it with a wrong format. Thanks for your head up! Simmaco Hi Mounika, without getting more information regarding your problem it would be difficult to help you. I would suggest you to open a new thread in this community and mark it as a question so that you can receive a better assistance to your problem. Please also document your issue with some screenshots or logs if any. 
Regards, Simmaco Hi, thank you for your post, when i deploy an app on sapui5 repository, it doesn’t work when i test it there. Are there any other changes that i should make in my hana app, in order to work on sapui5 repository once i test it ? Hi, Simmaco, When i try to deploy the application by using Deploy->Register to SAP Fiori Launchpad Last option of part 3 excerise(1.5 Deploying the app to sap Fiori Launchpad(optional step)). I am getting error. “The account must have a subscription to SAP Fiori Launchpad to proceed.“ Provider Account * is disable. Please Suggest. Regards Ashok Kumar You need this subscription for the FLP. You can add it by using the button “New Subscription” in the HANA Cloud Platform Cockpit Regards, Simmaco Hi, Thanks. problem solved. but i stuck at another point. When registering for the SAP Fiori Launchpad, then in Assignment section, We have to give site name and this field is disable with error message “There are no sites available”. Please suggest how to proceed. Regards Ashok Kumar 1 – Go on the FLP with the URL-<your_HCP_account>.dispatcher.hanatrial.ondemand.com/sites?hc_reset#Shell-home where <your_HCP_account> must be replaced with your account on HANA Cloud Platform 2 – From the top right menu choose “Manage Site”. 3 – You have to create at least one Catalog and one Tile Group. I suggest you to carefully read the FLP documentation at: SAP Fiori Launchpad CPv2 Regards, Simmaco Hi, Thanks. its working. I have just started Web IDE. Sorry for asking questions on a document blog. i will start new Discussion on this issue. Deeply regretted 🙁 . Regards Ashok Kumar an error ‘HTTP Status 503 – The requested service is not currently available’ when I go to-<my_account>trial.dispatcher.hanatrial.ondemand.com So,when I am trying to deploy to FLP , it is not allowing me to put a Provider Account . I have already subscribed for the FLP . What is it that I am doing wrong Sorry for the trouble . Was able to proceed . 
1 – Enable the Fiori Services (HANA Trial Account -> Services -> Enabled Fiori Mobile and Portal Service) . 2 – after Step 1 , Provider Account got filled in automatically . 3 – Also ,-<my_account>trial.dispatcher.hanatrial.ondemand.com was enabled , I was able to create the Catalog which was mandatory for the next step of deploying it to FLP . Many thanks Simmaco for the excellent blog . Rahul .
https://blogs.sap.com/2015/07/27/creating-and-deploying-sap-fiori-app-dc-exercise-part-3-of-3/
CC-MAIN-2020-40
refinedweb
1,832
74.59
ModTweaker Mod

In reply to jaredlll08: Understood - thanks Jared!

Excuse me, trying to replace the recipes for the mekanism combiner to take one ex nihilo dust block instead of the dusts of the ingots, so I can further the ore multiplication line. Could anyone tell me how I could do this? I don't want to add the block to the ore dictionary, rather just change the recipe, but I am not sure how. I do not understand the instructions on the wiki at all.

Will there be EvilCraft support in the future, such as for the blood infuser? I want to make it possible to infuse a slime block with blood in the blood infuser to get a slime spawn egg. I guess I can make it a crafting recipe with a bucket of blood, but I was hoping for the recipe to be higher on the tech tree than that, such as by requiring Promise of Tenacity III (the diamond one).

1.12? Please?

Is it possible to use transformDamage() with something like the Botania runic altar?

This might be very low priority, but it is something I would like if possible: Forestry Bottler compatibility? I have a food mod (Bird's Foods) that adds bottles of milk, and I'd like to be able to use the bottler to autocraft them.

Any chance of updating to 1.11.2 soon? I mean, 9 of the 10 supported mods for 1.10.2 are already on 1.11.2, and 3 of those are either primarily or exclusively supporting the latter version. It would be nice to be able to use some of this mod's additions on CraftTweaker, for our 1.11.2 modpack :)

Are you all looking towards moving into 1.11?

Are you going to re-add support for Thermal Expansion as it has a 1.10.2 port?

Could someone help me with adding recipes to Embers? I'm trying to allow players to make manyullyn with the Melter/Mixer/Stamper combination, but all I get is an error that says "embersmelting.zs: null", which isn't very helpful.
This is what I've got:

    import mods.embers.Melter;
    import mods.embers.Mixer;
    import mods.embers.Stamper;

    Melter.addOreRecipe(<ore:ingotCobalt>, <liquid:cobalt>, false, false);
    Melter.addOreRecipe(<ore:nuggetCobalt>, <liquid:cobalt>, false, false);
    Melter.addOreRecipe(<ore:oreCobalt>, <liquid:cobalt>, false, false);
    Melter.addOreRecipe(<ore:ingotArdite>, <liquid:ardite>, false, false);
    Melter.addOreRecipe(<ore:nuggetArdite>, <liquid:ardite>, false, false);
    Melter.addOreRecipe(<ore:oreArdite>, <liquid:ardite>, false, false);
    Melter.addOreRecipe(<ore:ingotManyullyn>, <liquid:manyullyn>, false, false);
    Melter.addOreRecipe(<ore:nuggetManyullyn>, <liquid:manyullyn>, false, false);

    Mixer.addRecipe(<liquid:cobalt>, <liquid:ardite>, null, null, <liquid:manyullyn>);

    Stamper.addRecipe(null, <liquid:cobalt>, "bar", <tconstruct:ingots:0>);
    Stamper.addRecipe(null, <liquid:ardite>, "bar", <tconstruct:ingots:1>);
    Stamper.addRecipe(null, <liquid:manyullyn>, "bar", <tconstruct:ingots:2>);

Well, I figured it out. The issue was with Stamper.addRecipe - evidently the booleans that were supposedly optional weren't. I do this now and it works flawlessly:

Well, RIP 1.11 since I can't do much to change recipes yet; hope to see it for 1.11 :3

Would there be any chance of supporting the Draconic Evolution Fusion Crafting? I know it has its own method of adding recipes but it would be awesome to have the ability in Modtweaker/Minetweaker.

Please add the ability to add various liquid fuels for the boiler.

Hey, so I was looking for some support for some mods and found this, I thought "awesome", then I read the comments. So is it right that you guys don't support 1.7.10 anymore?
If yes, that's kinda sad (for me); I was actually looking for some more support but it seems like I have to find another way to do it :D. But anyway, I wanted to thank you for the content you released for 1.7.10, it is making things much easier x'D

Hello, is Tinker's Construct still the only mod which is supported by the 1.10 version?

Can you please add support for the Armor Plus mod? It's increasing in popularity and its 4x4 and 5x5 crafting tables are a modpack dev's dream.

If this is updated to 1.10.2 but requires MineTweaker, which is not updated, how can we use this in 1.10.2?

Use Crafttweaker.

When will we have documentation on the new mod support in ModTweaker?
https://minecraft.curseforge.com/projects/modtweaker?page=5
CC-MAIN-2018-34
refinedweb
737
67.45
Hi! How are ya! Anyway, I've just begun learning programming and I was trying the whole recursive program tutorial. I dunno what the problem is but every time the recursive function returns an integer to main it changes into 30866468. Here is what I got:

Code:
    #include <iostream>
    using namespace std;

    int recursion (int x, int y);

    int main ()
    {
        int num_user;
        int num_result = 0;

        cout << "Input a number\n";
        cin >> num_user;

        num_result = recursion(num_user, 1);
        cout << num_result;

        _flushall();
        cin.get();
    }

    int recursion (int x, int y)
    {
        y *= x;
        if (x == 1)
        {
            return y;
        }
        recursion ((x - 1), y);
    }

Thanks for any help.
http://cboard.cprogramming.com/cplusplus-programming/81757-bloody-recursive-thing.html
CC-MAIN-2016-18
refinedweb
106
74.19
So I have been using mkdir() to make my folders, only problem is that it doesn't create subfolders. I.e. if I call mkdir("c:\a\b\c\d") but c:\a\b\c is not present, it doesn't make the d folder. To circumvent this I made this code; it looks fine and runs fine but crashes at the end. No idea why though :/ Might have missed something silly. Thanks!

Code:
    #include <iostream>
    #include <fstream>
    #include <direct.h>
    using namespace std;

    int main()
    {
        char file_loc[140] = "C:/a/b/c/d/";
        char tempfile_loc[140] = {' '};
        int counter = 0;

        do {
            while( file_loc[counter] != '/' )
            {
                tempfile_loc[counter] = file_loc[counter];
                counter++;
            }
            tempfile_loc[counter] = file_loc[counter];
            counter++;
            cout << tempfile_loc << endl;
            mkdir(tempfile_loc); // problem only makes one folder at a time
        } while (file_loc[counter+1] != ' ' );

        return 0;
    }
https://cboard.cprogramming.com/cplusplus-programming/147367-creating-folder-subfolders.html
CC-MAIN-2017-04
refinedweb
131
73.47
EMF Compare/Roadmap

Contents

- 1 Plan
  - 1.1 Long Term
    - 1.1.1 Framework Enhancements
      - 1.1.1.1 per namespace MatchEngine selection and dynamic switching
      - 1.1.1.2 2 ways and 3 ways merge robustness
      - 1.1.1.3 High Performances during matching phase
      - 1.1.1.4 Match/Diff/Merge on elements not contained in a EMF Resource
      - 1.1.1.5 MatchModel maintenance on incremental changes
      - 1.1.1.6 DiffModel maintenance on incremental changes
      - 1.1.1.7 Re-Usable Comparison UI Component
      - 1.1.1.8 Undo/Redo support in comparison editor
      - 1.1.1.9 ChangeModel to DiffModel and DiffModel to ChangeModel
    - 1.1.2 Tooling Enhancements
  - 1.2 2011
  - 1.3 2012
  - 1.4 2013
- 2 Released bits

Plan

Long Term

EMF Compare's focus is on providing a powerful, stable and proven framework for model comparison and merging. The API should be usable in any context, from a plain Java application to a complete IDE customization.

Framework Enhancements

per namespace MatchEngine selection and dynamic switching

Right now EMF Compare allows adopters to define specific match or diff engines associated with file extensions. As EMF is more and more broadly used, we often encounter cases where a given file contains model elements conforming to several Ecore models. A MatchEngine (and respectively a DiffEngine) should be associated with one or several EPackages, not with the serialization mechanism. Furthermore, that would require the match process in general to be able to switch from one MatchEngine to another depending on the elements it is matching.

2 ways and 3 ways merge robustness

We have a bunch of issues reported by adopters about the merge support; we need to work on building a strong test basis to make sure the merge is correct in any kind of scenario, for instance fragmented models, scoped matching...

High Performances during matching phase

The matching phase is critical for performance; several optimizations could be done (some being addressed by the Google Summer of Code 2010).
Match/Diff/Merge on elements not contained in a EMF Resource

The framework can be used on any kind of EObjects, but as historically it was about providing model comparison support for file-based models, many parts of the code make assumptions about a resource being there or not. We should build a strong test basis with EObjects that are not in resources being compared and merged.

MatchModel maintenance on incremental changes

Right now the match process is not split into several parts, and as such, when one of the matched models has been changed, the whole resource set needs to be matched again. We should be able to maintain parts of the match model and update them when the matched models have changed.

DiffModel maintenance on incremental changes

Once the match model is maintained incrementally, the next step is to maintain the diff model accordingly.

Re-Usable Comparison UI Component

Several adopters expressed their wish to reuse parts of the compare UI. Right now it's not really doable, as the UI is pretty tied to the Eclipse compare framework. This task is about defining a new API one can easily reuse, splitting content providers and viewers just like JFace.

Undo/Redo support in comparison editor

ChangeModel to DiffModel and DiffModel to ChangeModel

Tooling Enhancements

Integration with the Team Model Synchronization UI

Generating a Custom Match Engine from Ecore Annotations

2011

2012

2013

Released bits

2010

2009

2008

- Initial project description: EMF Compare Description (PDF)
https://wiki.eclipse.org/index.php?title=EMF_Compare/Roadmap&direction=next&oldid=217332
CC-MAIN-2020-34
refinedweb
584
54.42
Contents

- std::hash<X>
- hash_append
- Comparing hash_append to operator==
- hash_append for vector<T, A>
- hash_append for std::pair<T, U>
- hash_append for int
- is_contiguously_hashable<T>
- Is hash_append the same thing as boost::hash_combine?
- Is hash_append the same thing as serialization?
- Does hash_append support Pimpl designs?
- What's wrong with the hash_combine solution?
- type_erased_hasher
- debugHasher

This paper proposes a new hashing infrastructure that completely decouples hashing algorithms from individual types that need to be hashed. This decoupling divides the hashing computation among 3 different programmers who need not coordinate with each other:

1. Authors of hashable types (keys of type K) write their hashing support just once, using no specific hashing algorithm. This code resembles (and is approximately the same amount of work as) operator== and swap for a type.

2. Authors of hashing algorithms write a functor (e.g. H) that operates on a contiguous chunk of generic memory, represented by a void const* and a number of bytes. This code has no concept of a specific key type, only of bytes to be hashed.

3. Clients who want to hash keys of type K using hashing algorithm H will form a functor of type std::uhash<H> to give to an unordered container.

    unordered_set<K, uhash<H>> my_set;

Naturally, there could be a default hashing algorithm supplied by the std::lib:

    unordered_set<K, uhash<>> my_set;

To start off with, we emphasize: there is nothing in this proposal that changes the existing std::hash, or the unordered containers. And there is also nothing in this proposal that would prohibit the committee from standardizing both this proposal and either one of N3333 or N3876. N3333 and N3876 contradict each other, and thus compete with each other. Both cannot be standardized. This proposal, on the other hand, addresses a problem not addressed by N3333 or N3876. Nor does this proposal depend upon anything in N3333 or N3876.
This paper simply takes a completely different approach to producing hash codes from types, in order to solve a problem that was beyond the scope of N3333 and N3876. The problem solved herein is how to support the hashing of N different types of keys using M different hashing algorithms, using an amount of source code that is proportional to N+M, as opposed to the current system based on std::hash<T> which requires an amount of source code proportional to N*M. Consequently, in practice today M==1, and the single hashing algorithm is supplied only by the std::lib implementor, as it is too difficult and error prone for the client to supply alternative algorithms for all of the built-in scalar types (int, long, double, etc.). Indeed, it has even been too difficult for the committee to supply hashing support for all of the types our clients might reasonably want to use as keys: pair, tuple, vector, complex, duration, forward_list, etc. This paper makes ubiquitous hash support for most types as easy and as practical as is today's support for swap and operator==.

This paper starts with an assertion: Types should not know how to hash themselves. The rest of this paper begins with demonstrating the problems created when software systems assume that types do know how to hash themselves, and what can be done to solve these problems.

Instead of starting with a basic example like std::string or int, this paper will introduce an example class X that is meant to be representative of a type that a programmer would write, and would want to create a hash code for:

    namespace mine {

    class X
    {
        std::tuple<short, unsigned char, unsigned char> date_;
        std::vector<std::pair<int, int>> data_;
    public:
        X();
        // ...
        friend bool operator==(X const& x, X const& y)
        {
            return std::tie(x.date_, x.data_) == std::tie(y.date_, y.data_);
        }
    };

    } // mine

How do we write the hash function for X?
std::hash<X>

If we standardize N3876, which gives us hash_combine and hash_val from boost, then this is relatively doable:

    } // mine

    namespace std {

    template <>
    struct hash<mine::X>
    {
        size_t
        operator()(mine::X const& x) const noexcept
        {
            size_t h = hash<tuple_element<0, decltype(x.date_)>::type>{}(get<0>(x.date_));
            hash_combine(h, get<1>(x.date_), get<2>(x.date_));
            for (auto const& p : x.data_)
                hash_combine(h, p.first, p.second);
            return h;
        }
    };

    } // std

First we need to break out of our own namespace, and then specialize std::hash in namespace std. And we also need to add a friend statement to our class X:

    friend struct std::hash<X>;

Without hash_combine from N3876 we would have to write our own hash_combine. This could easily result in a bad hash function as aptly described in N3876.

In our first attempt to use the tools presented in N3333, we were surprised at the difficulty, as we were expecting it to be easier. However after studying the reference implementation in LLVM, we succeeded in writing the following friend function of X:

    friend std::hash_code hash_value(X const& x)
    {
        using std::hash_value;
        return std::hash_combine
        (
            hash_value(std::get<0>(x.date_)),
            hash_value(std::get<1>(x.date_)),
            hash_value(std::get<2>(x.date_)),
            std::hash_combine_range(x.data_.begin(), x.data_.end())
        );
    }

We also strongly suspect that with a little more work on the proposal, this could be simplified down to:

    friend std::hash_code hash_value(X const& x)
    {
        using std::hash_value;
        return std::hash_combine(hash_value(x.date_),
                                 std::hash_combine_range(x.data_.begin(), x.data_.end()));
    }

Or possibly even:

    friend std::hash_code hash_value(X const& x) noexcept
    {
        using std::hash_value;
        return std::hash_combine(hash_value(x.date_), hash_value(x.data_));
    }

The reduced burden on the author of X in writing the code to hash X is very much welcomed! However, hashing algorithms are notoriously difficult to write. Has the author of X written a good hashing algorithm?
The answer is that the author of X does not know, until he experiments with his data. The hashing algorithm is supplied by the std::lib implementor. If testing reveals that the algorithm chosen by the std::lib implementor is not appropriate for the client's data set, then everything offered by both N3333 and N3876 is for naught. The author of X is on his own, starting from scratch, to build an alternate hashing algorithm, even if just to experiment.

This concern is not theoretical. If the keys to be hashed can be influenced by a malicious attacker, it is quite possible for the attacker to arrange for many distinct keys that all hash to the same hash code. Even some seeded hashing algorithms are vulnerable to such an attack. Here is a very short and fast C++ program that can generate as many distinct keys as you like which all hash to the same hash code using MurmurHash3, even with a randomized seed. Here is another such C++ program demonstrating a similar seed-independent attack on CityHash64. These attacks do not mean that these are bad hashing algorithms. They are simply evidence that it is not wise to tie yourself down to a single hashing algorithm. And if changing, or experimenting with, hashing algorithms takes effort that is O(N) (where N is the number of types or sub-types to be hashed), then one is tied down.

This paper demonstrates infrastructure allowing the author of X to switch hashing algorithms with O(1) work, regardless of how many sub-types of X need to be hashed. No matter what hashing algorithm is used, the C++ code to hash X is the same:

    template <class HashAlgorithm>
    friend
    void
    hash_append(HashAlgorithm& h, X const& x) noexcept
    {
        using std::hash_append;
        hash_append(h, x.date_, x.data_);
    }

With this proposal, the author of X gets simplicity, without being heavily invested in any single hashing algorithm.
The hashing algorithm is completely encapsulated in the template parameter HashAlgorithm, and the author of X remains fully and gratefully ignorant of any specific hashing algorithm. The key to solving this problem is the recognition of one simple observation: Types should not know how to hash themselves. However, types do know what parts of their state should be exposed to a hashing algorithm. The question now becomes: How do you present X to a general-purpose hashing algorithm without binding it to any specific algorithm?

Just as an example, here is a very simple hashing algorithm that many have used with great success:

    std::size_t
    fnv1a(void const* key, std::size_t len)
    {
        unsigned char const* p = static_cast<unsigned char const*>(key);
        unsigned char const* const e = p + len;
        std::size_t h = 14695981039346656037u;
        for (; p < e; ++p)
            h = (h ^ *p) * 1099511628211u;
        return h;
    }

Although most modern hashing algorithms are much more complicated than the fnv1a shown above, there are similarities among them: nearly all of them take their input as a chunk of contiguous memory specified by a void const* and a size_t length. Not all, but most of the algorithms also have the property that they consume bytes in the order that they are received, possibly with a fixed-size internal buffer. This characteristic can be taken advantage of in order to hash discontiguous memory. For example, consider this minor repackaging of the FNV-1a algorithm:

    class fnv1a
    {
        std::size_t state_ = 14695981039346656037u;
    public:
        using result_type = std::size_t;

        void
        operator()(void const* key, std::size_t len) noexcept
        {
            unsigned char const* p = static_cast<unsigned char const*>(key);
            unsigned char const* const e = p + len;
            for (; p < e; ++p)
                state_ = (state_ ^ *p) * 1099511628211u;
        }

        explicit
        operator result_type() noexcept
        {
            return state_;
        }
    };

Now the algorithm can be accessed in 3 stages:

1. Construction, which initializes the state of the algorithm.
2. Accumulation of the memory to be hashed, done with the operator()(void const* key, std::size_t len) function. Note that this function can be called any number of times. In each call the hashed memory is contiguous.
But there is no requirement at all that separate calls refer to a single block of memory. On each call, the state of the algorithm is recalled from the previous call (or from the initialization step) and updated with the new len bytes located at key.

3. Extraction of the hash code, done with the explicit conversion to result_type (in this case a size_t). This is the finalization stage, which in this case is trivial, but could be arbitrarily complex.

With the FNV-1a algorithm divided into its 3 stages like this, one can call it in various ways, for example:

    fnv1a::result_type
    hash_contiguous(int (&data)[3])
    {
        fnv1a h;
        h(data, sizeof(data));
        return static_cast<fnv1a::result_type>(h);
    }

or

    fnv1a::result_type
    hash_discontiguous(int data1, int data2, int data3)
    {
        fnv1a h;
        h(&data1, sizeof(data1));
        h(&data2, sizeof(data2));
        h(&data3, sizeof(data3));
        return static_cast<fnv1a::result_type>(h);
    }

But either way it is called, given the same input, the algorithm outputs the exact same result:

    int data[] = {5, 3, 8};
    assert(hash_contiguous(data) == hash_discontiguous(5, 3, 8));

We can say that fnv1a meets the requirements of a HashAlgorithm. A HashAlgorithm is a class type that can be constructed (default, or possibly with seeding), and has an operator() member function with the signature shown above. The operator() member function consumes bytes, updating the internal state of the HashAlgorithm. This internal state can be arbitrarily complex. Indeed, an extreme example of internal state could be a copy of every chunk of memory supplied to the HashAlgorithm. Finally, a HashAlgorithm can be explicitly converted to the nested type result_type, which, when used with the unordered containers, should be an alias for size_t. At all times during its lifetime, a HashAlgorithm is CopyConstructible and CopyAssignable, with each copy getting an independent copy of the state of the right-hand side (value semantics: no aliasing among copies).
Thus if one knew that two sequences of data shared a common prefix, one could hash the prefix in just one sequence, make a copy of the HashAlgorithm, and then continue after the prefix in each sequence with the two independent HashAlgorithms. This would be a pure optimization, producing the same results as if one had hashed each sequence in full.

Given the concept of a HashAlgorithm, a universal hash functor, which takes any type T, can now be written (almost):

    template <class HashAlgorithm>
    struct uhash
    {
        using result_type = typename HashAlgorithm::result_type;

        template <class T>
        result_type
        operator()(T const& t) const noexcept
        {
            HashAlgorithm h;
            using std::hash_append;
            hash_append(h, t);
            return static_cast<result_type>(h);
        }
    };

Now one can use uhash<fnv1a> as the hash function for std::unordered_map, for example:

    std::unordered_map<MyKey, std::string, uhash<fnv1a>> the_map;

First note several important attributes of uhash:

- uhash depends only on the hashing algorithm, which is encapsulated in the HashAlgorithm.
- uhash does not depend upon the type T being hashed.
- uhash is simple. Though such a utility should certainly be supplied by the std::lib, any programmer can very easily implement their own variant of uhash for desired customizations (e.g. random seeding, salting, or padding), without having to revisit the hashing code for distinct types.
- The result_type of uhash is taken from the HashAlgorithm, and is not restricted to size_t. For example, this could come in handy for computing a SHA-256 result. And all without having to revisit each individual type!

Let's walk through uhash one step at a time. The HashAlgorithm is constructed (default-constructed in this example, but that is not the only possibility). This step initializes the hashing algorithm encapsulated in the HashAlgorithm. It is then appended to, using t as a key. The function hash_append is implemented for each type that supports hashing. We will see below that such support code need be written only once per type in order to support many hashing algorithms.
It is implemented in the type's own namespace, but there are implementations in namespace std for most scalars (just like swap). Then the HashAlgorithm is explicitly converted to the desired result. This is where the algorithm is "finalized." The above hash functor accesses the generic hashing algorithm through its 3 distinct phases.

Additionally, this hash functor could even be defaulted to use your favorite hash algorithm:

    template <class HashAlgorithm = fnv1a>
    struct uhash;

The question usually arises now: Are you proposing that uhash<> replace hash<T> as the default hash functor in the unordered containers? The answer is that it almost doesn't matter. With templated using declarations, it is just so easy for programmers to specify their own defaults:

    namespace my {

    template <class Key, class T, class Hash = std::uhash<>,
              class Pred = std::equal_to<Key>,
              class Alloc = std::allocator<std::pair<Key const, T>>>
    using unordered_map = std::unordered_map<Key, T, Hash, Pred, Alloc>;

    }  // my

    // ...

    my::unordered_map<MyKey, std::string> the_map;  // uses std::uhash<> instead of std::hash<MyKey>

What is hash_append?

The hash_append function is the way that individual types communicate with the HashAlgorithm. Each type T is responsible only for exposing its hash-worthy state to the HashAlgorithm in the function hash_append. T is not responsible for combining hash codes. Nor is it responsible for any hashing arithmetic whatsoever. It is only responsible for pointing out where its data is, how many different chunks of data there are, and in what order they should be presented to the HashAlgorithm. For example, here is how X might implement hash_append:

    class X
    {
        std::tuple<short, unsigned char, unsigned char> date_;
        std::vector<std::pair<int, int>> data_;

    public:
        // ...
        friend bool operator==(X const& x, X const& y)
        {
            return std::tie(x.date_, x.data_) == std::tie(y.date_, y.data_);
        }

        // Hook into the system like this
        template <class HashAlgorithm>
        friend
        void
        hash_append(HashAlgorithm& h, X const& x) noexcept
        {
            using std::hash_append;
            hash_append(h, x.date_);
            hash_append(h, x.data_);
        }
    };

Like swap, hash_append is a customization point for each type. Only a type knows what parts of itself it should expose to a HashAlgorithm, even though the type has no idea what algorithm is being used to do the hashing. Note that X need not concern itself with details like whether or not its sub-types are contiguously hashable. Those details will be handled by the hash_append overloads for the individual sub-types. The only information the hash_append overload for X must encode is which sub-types need to be presented to the HashAlgorithm, and in what order. Furthermore, the hash_append function is intimately tied to the operator== for the same type. For example, if for some reason x.data_ did not participate in the equality computation, then it should also not participate in the hash_append computation.

The relationship of hash_append to operator==

For every combination of two values of X, x and y, there are two rules to follow in designing hash_append for type X. (Actually the second rule is more of a guideline, but it should be followed as closely as possible.)

1. If x == y, then both x and y shall send the same message to the HashAlgorithm in hash_append.
2. If x != y, then x and y should send different messages to the HashAlgorithm in hash_append.

It is very important to keep these two rules in mind when designing the hash_append function for any type, or for any instantiation of a class template. Failure to follow the first rule will mean that equal values hash to different codes. Clients such as unordered containers will simply fail to work, resulting in run-time errors if this rule is violated.
Failure to follow the second guideline will result in hash collisions for the two different values that send identical messages to the HashAlgorithm, and will thus degrade the performance of clients such as unordered containers.

hash_append for vector<T, A>

For example, std::vector<T, A> would never expose its capacity(), since capacity() can be different for vectors that otherwise compare equal. Likewise it should not expose its allocator_type to hash_append, since this value also does not participate in the equality computation. Should vector expose its size() to the HashAlgorithm? To find out, let's look more closely at the operator== for vector: two vectors x and y compare equal if x.size() == y.size() and x[i] == y[i] for i in the range [0, x.size()).

To meet rule 1, it is sufficient that every element in the vector be sent to the HashAlgorithm as part of the vector's message. A logical convention is that the elements will be sent in order from begin() to end(). But this alone will not satisfy rule 2. Consider:

    std::vector<std::vector<int>> v1{};
    std::vector<std::vector<int>> v2{1};
    assert(v1 != v2);

v1 and v2 are not equal: v1.size() == 0 and v2.size() == 1. However, v2.front().size() == 0. If an empty vector<int> sends no message at all to the HashAlgorithm, then v2, even though it is not empty, also sends no message to the HashAlgorithm. Therefore v1 and v2 send the same (0-length) message to the HashAlgorithm, violating rule 2.

One idea for fixing this is to special-case 0-length vectors to output a special value such as "empty" or 0. However, in the first case the result would be ambiguous with a vector<string> of length 1 containing the string "empty". The second case has the exact same problem, but for vector<int>. The right way to fix this problem is to have vector<T> send its size() in addition to sending all of its members to the HashAlgorithm. Now the only question is: should it send its size before or after sending its members to the HashAlgorithm?
To answer this last question, consider another sequence container: forward_list<T>. It has the exact same issues as we have been discussing for vector<T>, but forward_list<T> has no size() member. In order to send its size(), forward_list<T> has to loop through all of its members just to compute size(). To avoid requiring that hash_append for forward_list<T> make two passes through the list, we should specify that the size() of the container is sent to the HashAlgorithm after all of the elements are sent. And for consistency, we should do this for all std-containers for which hash_append is defined.

    template <class HashAlgorithm, class T, class Alloc>
    void
    hash_append(HashAlgorithm& h, std::vector<T, Alloc> const& v) noexcept
    {
        for (auto const& t : v)
            hash_append(h, t);
        hash_append(h, v.size());
    }

I.e. vector considers itself a message composed of 0 or more sub-messages, and appends each sub-message (in order) to the state of the generic HashAlgorithm. This is followed by a final message consisting of the size() of the vector. Note that as N3333 and N3876 both stand today, this critically important but subtle detail is not treated, and is left up to the client (the author of X) to get right. This proposal states that this is a detail that the hash_append for vector (and every other hashable std-container) is responsible for.

Emphasis: The message a type sends to a HashAlgorithm is part of its public API. E.g. whether or not a container includes its size() in its hash_append message, and if so, whether the size() is prepended or appended to the message, is critical information a type's client needs to know, in order to ensure that their composition of one type's message with another type's message doesn't produce an ambiguous message (doesn't create collisions). The standard should clearly document the message emanating from every hash_append it defines, to the extent possible.
It might not be possible to nail down whether an implementation is using IEEE floating point or two's complement signed integers. But the standard can certainly document the message produced by a vector or any other std-defined class type.

hash_append for std::pair<T, U>

The situation is simpler for std::pair<T, U>:

    template <class HashAlgorithm, class T, class U>
    void
    hash_append(HashAlgorithm& h, std::pair<T, U> const& p) noexcept
    {
        hash_append(h, p.first);
        hash_append(h, p.second);
    }

All there is to do is hash_append the first and second members of the pair.

hash_append for int

Eventually hash_append will drill down to a scalar type such as int:

    template <class HashAlgorithm>
    void
    hash_append(HashAlgorithm& h, int const& i) noexcept
    {
        h(&i, sizeof(i));
    }

whereupon a contiguous chunk of memory is actually accumulated by the HashAlgorithm, using the HashAlgorithm's operator(). Recall that the HashAlgorithm has a member function operator()(void const* key, std::size_t len) noexcept, and the int is just a chunk of contiguous memory that is hashable.

It is now prudent to consider deeply what it means to say that a type (such as int) is contiguously hashable. A type T is contiguously hashable if, for all combinations of two values x and y of that type, x == y implies memcmp(addressof(x), addressof(y), sizeof(T)) == 0. I.e. if x == y, then x and y have the same bit-pattern representation. A 2's complement int satisfies this property because every bit pattern an int can have results in a distinct value (rule 2), and there are no "padding bits" which might take on random values. This property is necessary because if two values are equal, then they must hash to the same hash code (rule 1).

is_contiguously_hashable<T>

With that in mind we can easily imagine a type trait:

    template <class T> struct is_contiguously_hashable;

which derives from either true_type or false_type.
On 2's complement systems, is_contiguously_hashable<int>::value is true. And we might anticipate that some other types, such as char and long long, are also contiguously hashable. With this tool we can now easily write hash_append for all contiguously hashable types:

    template <class HashAlgorithm, class T>
    inline
    std::enable_if_t
    <
        is_contiguously_hashable<T>::value
    >
    hash_append(HashAlgorithm& h, T const& t) noexcept
    {
        h(addressof(t), sizeof(t));
    }

Now the task remains to specialize is_contiguously_hashable properly for those scalars for which we want to use this implementation of hash_append, and to implement hash_append appropriately for any other scalars. As an example of the latter, consider the IEEE floating-point types. An IEEE floating-point type is not contiguously hashable because 0. == -0., yet these two values are represented with different bit patterns. Rule 1 would be violated if such a type were hashed contiguously. Therefore the hash_append for IEEE floating-point types must go to extra effort to ensure that 0. and -0. hash to identical hash codes, but without dictating a specific hash algorithm. This could be done like so:

    template <class HashAlgorithm, class T>
    inline
    std::enable_if_t
    <
        std::is_floating_point<T>::value
    >
    hash_append(HashAlgorithm& h, T t) noexcept
    {
        if (t == 0)
            t = 0;
        h(&t, sizeof(t));
    }

I.e. if the value is -0., reset the value to 0., and then contiguously hash the resulting bits.

N3333 also introduced a very similar is_contiguous_layout trait. Although the paper did not make it perfectly clear, we believe is_contiguously_hashable is approximately the same trait, but with a better name. Just because a type has a contiguous layout does not necessarily imply that the type is contiguously hashable; IEEE floating point is a case in point. An IEEE floating-point type does have a contiguous layout (and is trivially copyable, and has a standard layout), and yet it is still not contiguously hashable because of how its operator== works with signed zeros (violating rule 1).
Class types that are composed of only contiguously hashable types, and that have no padding bytes, may also be considered contiguously hashable. For example, consider this specialization of is_contiguously_hashable for std::pair<T, U>:

    template <class T, class U>
    struct is_contiguously_hashable<std::pair<T, U>>
        : public std::integral_constant<bool,
                     is_contiguously_hashable<T>::value &&
                     is_contiguously_hashable<U>::value &&
                     sizeof(T) + sizeof(U) == sizeof(std::pair<T, U>)>
    {
    };

In English: if the pair's two types are both contiguously hashable, and if the combined size of the two members is the same as the size of the pair (so there are no padding bytes), then the entire pair itself is contiguously hashable! This same logic can be applied to array, tuple, and possibly user-defined types as well (but only with the permission of the user-defined type's author). Consequently, a great many types can be easily and safely classified as contiguously hashable. This is important because with modern hash algorithm implementations, the bigger the chunk of contiguous memory you can send to the HashAlgorithm at one time, the higher the performance (in terms of bytes hashed per second) the HashAlgorithm is likely to achieve.

With that in mind (the bigger the memory chunk the better), consider again hash_append for vector:

    template <class HashAlgorithm, class T, class Alloc>
    inline
    std::enable_if_t
    <
        !is_contiguously_hashable<T>::value
    >
    hash_append(HashAlgorithm& h, std::vector<T, Alloc> const& v) noexcept
    {
        for (auto const& t : v)
            hash_append(h, t);
        hash_append(h, v.size());
    }

    template <class HashAlgorithm, class T, class Alloc>
    inline
    std::enable_if_t
    <
        is_contiguously_hashable<T>::value
    >
    hash_append(HashAlgorithm& h, std::vector<T, Alloc> const& v) noexcept
    {
        h(v.data(), v.size()*sizeof(T));
        hash_append(h, v.size());
    }

I.e.
if T is contiguously hashable, then even though vector itself is not, a huge optimization can still be made by having vector send its entire contiguous data buffer to the HashAlgorithm in a single call. Note that this is a pure optimization: the HashAlgorithm sees the exact same sequence of bytes, in the same order, whether or not this optimization for vector is done. But if it is done, then the HashAlgorithm sees almost all of the bytes at once.

This optimization could be made for vector without any help from the std::lib. Other optimizations are possible, but could only be made from within the std::lib. For example, what if T is bool in the above example? vector<bool> doesn't follow the usual vector rules. What about deque<T>? It could hash its internal contiguous buffers all at once, but there is no way to implement that without intimate knowledge of the internals of the deque implementation. Externally, the best one can do for deque<T> is to send each individual T to hash_append one at a time. This still produces the very same correct message, but is just much slower. Because only the std::lib implementor can fully implement this optimization for types such as deque, bitset and vector<bool>, it is important that we standardize is_contiguously_hashable and hash_append instead of asking the programmer to implement them (for std-defined types).

If you believe your type to be contiguously hashable, you should specialize is_contiguously_hashable<YourType> appropriately, as has been shown for pair. This means that not only is hashing YourType optimized, but hashing vector<YourType>, et al. is also optimized! But note that there is no bulletproof way to automate the registration of YourType with is_contiguously_hashable, as IEEE floating point so ably demonstrates. To do so requires an in-depth analysis of operator== for YourType, which only the author of YourType is qualified to do.

Is hash_append the same thing as boost::hash_combine?

No!
boost::hash_combine is used to combine an already-computed hash code with an object that is to be hashed with boost::hash<T> (and this is also the N3876 hash_combine, modulo using std::hash<T>). The N3333 hash_combine takes two objects, hashes both of them with std::hash<T>, and combines the two resulting hash codes into one. In contrast, hash_append is used to expose an object's hashable state to an arbitrary hashing algorithm. It is up to the generic hashing algorithm to decide how to combine later bytes with earlier bytes.

Is hash_append the same thing as serialization?

It is very closely related. Close enough that there may be a way to elegantly combine the two. Each type can expose its state to a HashAlgorithm or Serializer. However, there are differences, and IEEE floating point is our poster child for them. For hashing, IEEE floating point needs to hide the difference between -0. and 0. For serialization, one needs to keep these two values distinct. Combining these two functions, for now, remains beyond the scope of this paper.

Can we have a variadic hash_append?

Yes, this is easily written as:

    template <class HashAlgorithm, class T0, class T1, class ...T>
    inline
    void
    hash_append(HashAlgorithm& h, T0 const& t0, T1 const& t1, T const& ...t) noexcept
    {
        hash_append(h, t0);
        hash_append(h, t1, t...);
    }

This allows hash_append for X (for example) to be rewritten as:

    template <class HashAlgorithm>
    friend
    void
    hash_append(HashAlgorithm& h, X const& x) noexcept
    {
        using std::hash_append;
        hash_append(h, x.date_, x.data_);
    }

Algorithms such as CityHash are not efficiently adapted to this infrastructure because, as currently coded, CityHash actually hashes the end of the buffer first.
However SpookyHash, which is reported to have quality comparable to CityHash, is trivial to incorporate:

    #include "SpookyV2.h"

    class spooky
    {
        SpookyHash state_;
    public:
        using result_type = std::size_t;

        spooky(std::size_t seed1 = 1, std::size_t seed2 = 2) noexcept
        {
            state_.Init(seed1, seed2);
        }

        void
        operator()(void const* key, std::size_t len) noexcept
        {
            state_.Update(key, len);
        }

        explicit
        operator result_type() noexcept
        {
            std::uint64_t h1, h2;
            state_.Final(&h1, &h2);
            return h1;
        }
    };

MurmurHash2, MurmurHash3, and the cryptographically secure algorithms SipHash and the SHA-2 family are also efficiently adaptable to this framework. Indeed, CityHash is the only hashing algorithm we have come across to date which is not efficiently adapted to this framework.

Given the class X shown above, with its complex state distributed among at least two different contiguous chunks of memory (and potentially many more if the container switched from vector to deque or list), one can create an unordered container with the default hash function like so:

    std::unordered_set<X, std::uhash<>> my_set;

If one instead wanted to specify FNV-1a, the code is easily modified to:

    std::unordered_set<X, std::uhash<fnv1a>> my_set;

This would change the hash code algorithm for every vector, every deque, every string, every char, every int, etc. that X considers part of its hash-worthy state. That is, hashing algorithms are controlled at the top of the data structure chain, at the point where the client (e.g. unordered_map) asks for the hash. It is not controlled down at the bottom of the data structure chain. I.e. int has no clue how to hash itself; it only knows what state needs to be exposed to a hashing algorithm. And there is no combining step. The hash algorithm works identically as if you had copied all of the various discontiguous chunks of state into one big contiguous chunk of memory, and fed that one big chunk to the hash algorithm.
If one wants to use spooky instead, simply make a change in one place:

    std::unordered_set<X, std::uhash<spooky>> my_set;

If a new hashing algorithm is invented tomorrow, and you want to use it, all that needs to be done is to write an adaptor for it:

    class new_hash_function
    {
    public:
        using result_type = std::size_t;

        new_hash_function() noexcept;
        void operator()(void const* key, std::size_t len) noexcept;
        explicit operator result_type() noexcept;
    };

And then use it:

    std::unordered_set<X, std::uhash<new_hash_function>> my_set;

You do not need to revisit the hash_append for X, nor for any of X's sub-types. The N hashing algorithms x M sub-types problem has been solved!

Can hash_append be used with Pimpl designs?

So far, every hash_append function shown must be templated on HashAlgorithm so as to handle any hashing algorithm requested by some unknown, far-away client. But with the Pimpl design, one cannot pass a templated HashAlgorithm across the implementation firewall. Or can you...?

With the help of std::function one can type-erase the templated HashAlgorithm, adapting it to a concrete type, and pass that concrete HashAlgorithm through the implementation firewall. Imagine a class as shown here. Here is how it can support arbitrary hash algorithms with the proposed infrastructure:

    class Handle
    {
        struct CheshireCat;    // Not defined here
        CheshireCat* smile;    // Handle

    public:
        // Other operations...
        // Hash support
        using type_erased_hasher = acme::type_erased_hasher<std::size_t>;

        friend void hash_append(type_erased_hasher&, CheshireCat const&);

        template <class HashAlgorithm>
        friend
        void
        hash_append(HashAlgorithm& h, Handle const& x)
        {
            using std::hash_append;
            if (x.smile == nullptr)
                hash_append(h, nullptr);
            else
            {
                type_erased_hasher temp(std::move(h));
                hash_append(temp, *x.smile);
                h = std::move(*temp.target<HashAlgorithm>());
            }
        }
    };

So you still have to implement a templated hash_append for Handle, but the implementation of that function forwards to a non-template function which can be implemented in the source file, within the definition of CheshireCat:

    friend
    void
    hash_append(Handle::type_erased_hasher& h, CheshireCat const& c)
    {
        using std::hash_append;
        hash_append(h, c.data1_, c.data2_, etc. ...);
    }

Besides the type of the HashAlgorithm, hash_append for CheshireCat looks just like any other hash_append. The magic is in acme::type_erased_hasher<std::size_t>, which is not proposed (thus the namespace acme). Appendix A outlines exactly how to code acme::type_erased_hasher<std::size_t>. In a nutshell, it is a HashAlgorithm adaptor which takes any HashAlgorithm, stores it in a std::function<void(void const*, std::size_t)>, and makes the std::function behave like a HashAlgorithm.

Think about what has just happened here. You've compiled CheshireCat.cpp today. And tomorrow, when somebody invents a brand new hash algorithm, your CheshireCat.cpp uses it, with no recompile necessary, for the cost of a virtual function call (or many such calls) to the HashAlgorithm. And yet no other client of this new HashAlgorithm (outside of those called by CheshireCat) is forced to access the new hashing algorithm via a virtual function call. That borders on magic!
It is this very concern (hashing of Pimpls) that decided the name of the member function of a HashAlgorithm which appends state to the hash algorithm:

    void operator()(void const* key, std::size_t len) noexcept;

Had this member function been given any other name, such as:

    void append(void const* key, std::size_t len) noexcept;

then programmers would not be able to use std::function to create a type-erased wrapper around a templated HashAlgorithm.

Many hash algorithms can be randomly seeded during the initialization stage in such a way that the hash code produced for a type is constant between invocations by a single client (just like a non-seeded algorithm), but varies between clients. The variance might be per-process, but could also be as frequent as per-hash-functor-construction (excluding copy or move construction). In the latter case one might have two distinct unordered_sets (for example) of the same type, even containing the same data, and yet have the two containers produce different hash codes for the same values. Doing so can help harden an application against attacks when the application must hash keys supplied by an untrusted source.

This is remarkably easily done with this proposal. One codes one new hash functor, which can be used with any HashAlgorithm that accepts a seed, and for any type which already has hash_append implemented (even those CheshireCats which have already been compiled and cannot be recompiled).
Here is one possible implementation of a hash functor that is randomly seeded by a seed selected on a per-process basis:

    std::tuple<std::uint64_t, std::uint64_t> get_process_seed();

    template <class HashAlgorithm = acme::siphash>
    class process_seeded_hash
    {
    public:
        using result_type = typename HashAlgorithm::result_type;

        template <class T>
        result_type
        operator()(T const& t) const noexcept
        {
            std::uint64_t seed0;
            std::uint64_t seed1;
            std::tie(seed0, seed1) = get_process_seed();
            HashAlgorithm h(seed0, seed1);
            using std::hash_append;
            hash_append(h, t);
            return static_cast<result_type>(h);
        }
    };

And then in a source file:

    namespace {

    std::tuple<std::uint64_t, std::uint64_t>
    init_seeds()
    {
        std::mt19937_64 eng{std::random_device{}()};
        return std::tuple<std::uint64_t, std::uint64_t>{eng(), eng()};
    }

    }  // unnamed

    std::tuple<std::uint64_t, std::uint64_t>
    get_process_seed()
    {
        static std::tuple<std::uint64_t, std::uint64_t> seeds = init_seeds();
        return seeds;
    }

And then use it:

    std::unordered_set<MyType, process_seeded_hash<>> my_set;

In this example, the hashing algorithm is initialized with a random seed each time process_seeded_hash is invoked. The same seed is used to initialize the algorithm on every hash-functor invocation, and for all copies of the functor, for the life of the process.
Alternatively, one could randomly seed the hash functor on each default construction:

    template <class HashAlgorithm = acme::siphash>
    class randomly_seeded_hash
    {
    private:
        static std::mutex mut_s;
        static std::mt19937_64 rand_s;

        std::size_t seed0_;
        std::size_t seed1_;

    public:
        using result_type = typename HashAlgorithm::result_type;

        randomly_seeded_hash()
        {
            std::lock_guard<std::mutex> _(mut_s);
            seed0_ = rand_s();
            seed1_ = rand_s();
        }

        template <class T>
        result_type
        operator()(T const& t) const noexcept
        {
            HashAlgorithm h(seed0_, seed1_);
            using std::hash_append;
            hash_append(h, t);
            return static_cast<result_type>(h);
        }
    };

    template <class HashAlgorithm>
    std::mutex
    randomly_seeded_hash<HashAlgorithm>::mut_s;

    template <class HashAlgorithm>
    std::mt19937_64
    randomly_seeded_hash<HashAlgorithm>::rand_s{std::random_device{}()};

Perhaps using it like:

    std::unordered_set<MyType, randomly_seeded_hash<acme::spooky>> my_set;

One uses the same technique to apply salting or padding to a type to be hashed. E.g. one would prepend and/or append the salt or padding to the message of T by using additional calls to hash_append in the operator()(T const& t) of the hash functor.

Emphasis

There is no need for the standard to specify a random seeding policy or interface, because using this infrastructure the client can very easily specify his own random seeding policy without having to revisit every type that needs to be hashed, and without having to heavily invest in any given hashing algorithm. It can be done with only a few dozen lines of code. And he can easily do so in a per-use manner: i.e. in use-case A we need to randomly seed the hashing of types X and Y, and in use-case B we need to not seed the hashing of types Y and Z. Type Y is correctly handled in both use-cases, and without having to revisit Y or Y's sub-types.
Y remains ignorant of the detail of whether it is being hashed with a random seed or not (or even with what hashing algorithm). Flexibility is built into this system in exactly the right places so as to achieve maximum options for the programmer with an absolute minimum of programmer intervention. The std::lib merely has to set up the right infrastructure, and provide a simple default.

The unordered containers

The unordered containers present a problem. The problem is not specific to this infrastructure; neither N3333 nor N3876 solves it either. But we highlight the problem here so as to definitively state that we do not solve it here either. Given two unordered_set<int>:

    std::unordered_set<int> s1{1, 2, 3};
    std::unordered_set<int> s2{3, 2, 1};

one can assert that s1 == s2, and yet if one iterates over s1 and s2, one will not (in general) come upon the same elements in the same order. So in what order do you hash the elements of an unordered sequence? Since s1 == s2, hash(s1) == hash(s2) must also be true.

There are several answers to this dilemma that will work. However there is no answer that is definitely better than all other answers. Therefore we recommend that we not standardize a hash_append overload for any of the unordered containers. If a client really wants to hash an unordered container, then they can choose a technique that works for them, and do so.
For example, one could hash each element using a copy of the HashAlgorithm, and then append the sum of all hash codes to the state of the original HashAlgorithm:

    template <class HashAlgorithm, class Key, class Hash, class Pred, class Alloc>
    void
    hash_append(HashAlgorithm& h, std::unordered_set<Key, Hash, Pred, Alloc> const& s)
    {
        using result_type = typename HashAlgorithm::result_type;
        result_type k{};
        for (auto const& x : s)
        {
            HashAlgorithm htemp{h};
            hash_append(htemp, x);
            k += static_cast<result_type>(htemp);
        }
        hash_append(h, k, s.size());
    }

Or one could sort all the elements and hash them in sorted order:

    template <class HashAlgorithm, class Key, class Hash, class Pred, class Alloc>
    void
    hash_append(HashAlgorithm& h, std::unordered_set<Key, Hash, Pred, Alloc> const& s)
    {
        hash_append(h, std::set<Key>(s.begin(), s.end()));
    }

And there are various other schemes. They are all implementable, but each has its advantages and disadvantages. Therefore this proposal proposes none of them. Should the future expose the ideal hash_append specification for unordered containers, it can always be added at that time.

One might reasonably ask what all of this flexibility costs at run time. The answer is: nothing. To demonstrate this, X has been given a randomized default constructor:

    std::mt19937_64 eng;

    X::X()
    {
        std::uniform_int_distribution<short> yeardata(1914, 2014);
        std::uniform_int_distribution<unsigned char> monthdata(1, 12);
        std::uniform_int_distribution<unsigned char> daydata(1, 28);
        std::uniform_int_distribution<std::size_t> veclen(0, 100);
        std::uniform_int_distribution<int> int1data(1, 10);
        std::uniform_int_distribution<int> int2data(-3, 3);
        std::get<0>(date_) = yeardata(eng);
        std::get<1>(date_) = monthdata(eng);
        std::get<2>(date_) = daydata(eng);
        data_.resize(veclen(eng));
        for (auto& p : data_)
        {
            p.first = int1data(eng);
            p.second = int2data(eng);
        }
    }

Given this, one can easily create a great number of random X's and specify any hash algorithm.
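As a flavor of what the quality tester involves, here is a hedged sketch of the simplest metric described below, the duplicate count. The function name and the exact normalization are ours; the paper's tester may compute it differently:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_set>
#include <vector>

// Duplicate-count quality metric: 0 means every hash code is unique,
// 1 means all hash codes are identical (this normalization is ours).
inline double
duplicate_fraction(std::vector<std::uint64_t> const& codes)
{
    if (codes.size() < 2)
        return 0;
    std::unordered_set<std::uint64_t> unique(codes.begin(), codes.end());
    std::size_t duplicates = codes.size() - unique.size();
    return static_cast<double>(duplicates) / (codes.size() - 1);
}
```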
Herein we test 7 implementations of hashing 1,000,000 X's:

- Using std::hash augmented with N3876 as shown in Solution 1.
- Using llvm::hash_value as shown here, which is intended to be representative of N3333.
- Using uhash<fnv1a>, where fnv1a is derived from here.
- Using uhash<jenkins1>, where jenkins1 is derived from here.
- Using uhash<MurmurHash2A>, where MurmurHash2A is derived from here.
- Using uhash<spooky>, where spooky is derived from here.
- Using uhash<siphash>, where siphash is derived from here.

The hash function quality tester suite used herein is described below.

The first test looks at each 64 bit hash code as a collection of 16 hex-digits. The expectation is that each hex-digit should be roughly equally represented in each hexadecimal place of the hash code. The test returns the maximum deviation from the expected average. An ideal score is 0.

The second test simply counts the number of duplicate hashes. A score of 0 indicates each hash code is unique. A score of 1 indicates that all hash codes are the same.

The third test is TestDistribution, gratefully borrowed from the smhasher test suite. An ideal score is 0.

The fourth test hashes the hash codes into a list of buckets sized to the number of hash codes (load factor == 1). It then scores each bucket with the number of comparisons required to look up each element in the bucket, and then averages the number of comparisons per lookup. Given a randomized hash, the result should be lf/2 + 1, where lf is the load factor. This test returns the percent difference above/below the ideal randomized result.

The fifth test hashes the hash codes into a list of buckets sized to the number of hash codes (load factor == 1). It then returns the max collision count among all of the buckets. This represents the maximum cost for a lookup of an element not found. Assuming the not-found key hashes to a random bucket, the average cost of looking up a not-found key is simply the load factor (i.e.
independent of the quality of the hash function distribution).

A million hash codes are generated from a million randomized but unique X's, randomized by a default-constructed std::mt19937_64, and fed to these tests. For each test, the smaller the result, the better. The intent in showing the above table is two-fold: to show that running times are competitive, and that, with the exception of MurmurHash2A and possibly std::hash<X>, the quality results are competitive.

If one insists on picking "the best algorithm" from this table, we caution you with one additional test. The table below represents the same test except that X's data members have been changed as shown:

    std::tuple<short, unsigned char, unsigned char>  date_;
    std::vector<std::pair<int, int>>                 data_;

Other than this change in types, no other change has been made: not even the random values assigned to each type. Here are the results:

While none of the quality metrics changes much with this minor change, the timing tests vary considerably. For example llvm::hash_value was previously one of the faster algorithms (fastest among those with good quality results), and is now 50% slower than the fastest among those with good quality results. The take-away point is that there is no "best algorithm". But there is a lot of value in being able to easily change algorithms for testing, performance and security purposes.

Summary

This paper presents an infrastructure that decouples types from hashing algorithms. This decoupling has several benefits, demonstrated throughout the paper. The proposed interface consists of:

    template <class T> struct is_contiguously_hashable;        // A type property trait

    template <class HashAlgorithm>
    void hash_append(HashAlgorithm& h, T const& t) noexcept;   // overloaded for each type T

    template <class HashAlgorithm = unspecified-default-hasher>
    struct uhash;                                              // A hashing functor

There is an example implementation and lots of example code using the example implementation here. See hash_append.h for the example implementation.
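To make the shape of those three pieces concrete, here is a minimal, self-contained model. This is not the proposed specification: the real proposal places these in namespace std, supports far more types, and leaves the default algorithm implementation-defined; the fnv1a and Point names below are ours:

```cpp
#include <cstddef>
#include <cstdint>
#include <type_traits>

// A toy HashAlgorithm satisfying the proposed requirements.
class fnv1a
{
    std::uint64_t state_ = 14695981039346656037ull;
public:
    using result_type = std::uint64_t;

    void
    operator()(void const* key, std::size_t len) noexcept
    {
        auto p = static_cast<unsigned char const*>(key);
        for (std::size_t i = 0; i < len; ++i)
        {
            state_ ^= p[i];
            state_ *= 1099511628211ull;
        }
    }

    explicit operator result_type() const noexcept {return state_;}
};

// The trait: modeled here only for integral types.
template <class T>
struct is_contiguously_hashable
    : std::integral_constant<bool, std::is_integral<T>::value>
{
};

// hash_append for contiguously hashable types: one call, sizeof(T) bytes.
template <class HashAlgorithm, class T>
typename std::enable_if<is_contiguously_hashable<T>::value>::type
hash_append(HashAlgorithm& h, T const& t) noexcept
{
    h(&t, sizeof(t));
}

// A user type opts in once, and thereby works with every algorithm.
struct Point
{
    int x;
    int y;
};

template <class HashAlgorithm>
void
hash_append(HashAlgorithm& h, Point const& p)
{
    hash_append(h, p.x);
    hash_append(h, p.y);
}

// The adaptor from a HashAlgorithm to the Hash requirements.
template <class HashAlgorithm = fnv1a>
struct uhash
{
    using result_type = typename HashAlgorithm::result_type;

    template <class T>
    result_type
    operator()(T const& t) const noexcept
    {
        HashAlgorithm h;
        hash_append(h, t);
        return static_cast<result_type>(h);
    }
};
```

With this in place, uhash<>{}(Point{1, 2}) hashes Point's two ints through fnv1a, and swapping algorithms means changing only the template argument.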
type_erased_hasher

Though type_erased_hasher is not proposed, it easily could be if the committee so desires. Here is how it is implemented, whether by the programmer or by a std::lib implementor:

    template <class ResultType>
    class type_erased_hasher
    {
    public:
        using result_type = ResultType;

    private:
        using function = std::function<void(void const*, std::size_t)>;

        function hasher_;
        result_type (*convert_)(function&);

    public:
        template <class HashAlgorithm,
                  class = std::enable_if_t
                  <
                      std::is_constructible<function, HashAlgorithm>{} &&
                      std::is_same<typename std::decay_t<HashAlgorithm>::result_type,
                                   result_type>{}
                  >
                 >
        explicit
        type_erased_hasher(HashAlgorithm&& h)
            : hasher_(std::forward<HashAlgorithm>(h))
            , convert_(convert<std::decay_t<HashAlgorithm>>)
        {
        }

        void
        operator()(void const* key, std::size_t len)
        {
            hasher_(key, len);
        }

        explicit
        operator result_type() noexcept
        {
            return convert_(hasher_);
        }

        template <class T>
        T*
        target() noexcept
        {
            return hasher_.target<T>();
        }

    private:
        template <class HashAlgorithm>
        static
        result_type
        convert(function& f) noexcept
        {
            return static_cast<result_type>(*f.target<HashAlgorithm>());
        }
    };

type_erased_hasher must be templated on result_type (or have a concrete result_type), otherwise it cannot have an explicit conversion operator to that type. The type_erased_hasher stores a std::function<void(void const*, std::size_t)>, and a pointer to a function taking such a function and returning a result_type. The latter is necessary to capture the type of the HashAlgorithm in the type_erased_hasher constructor, so that the same HashAlgorithm type can later be used in the conversion to result_type.

The constructor is naturally templated on HashAlgorithm, which can be perfectly forwarded to the underlying std::function. The constructor also initializes the function pointer convert_ using the decayed type of HashAlgorithm. The pointed-to function will extract the HashAlgorithm from the std::function and explicitly convert it to the result_type.
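A quick smoke test of the class above, using a toy algorithm. The toyhash name and the condensed copy of type_erased_hasher below are ours; the condensed copy drops the result_type consistency check for brevity:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <type_traits>
#include <utility>

// Toy HashAlgorithm for the demonstration.
class toyhash
{
    std::uint64_t state_ = 0;
public:
    using result_type = std::uint64_t;

    void
    operator()(void const* key, std::size_t len) noexcept
    {
        auto p = static_cast<unsigned char const*>(key);
        for (std::size_t i = 0; i < len; ++i)
            state_ = state_*131 + p[i];
    }

    explicit operator result_type() const noexcept {return state_;}
};

// Condensed type_erased_hasher (see the full version above).
template <class ResultType>
class type_erased_hasher
{
public:
    using result_type = ResultType;
private:
    using function = std::function<void(void const*, std::size_t)>;
    function hasher_;
    result_type (*convert_)(function&);
public:
    template <class HashAlgorithm,
              class = typename std::enable_if<
                  std::is_constructible<function, HashAlgorithm>::value>::type>
    explicit
    type_erased_hasher(HashAlgorithm&& h)
        : hasher_(std::forward<HashAlgorithm>(h))
        , convert_(convert<typename std::decay<HashAlgorithm>::type>)
    {}

    void operator()(void const* key, std::size_t len) {hasher_(key, len);}
    explicit operator result_type() noexcept {return convert_(hasher_);}

    template <class T> T* target() noexcept {return hasher_.target<T>();}
private:
    template <class HashAlgorithm>
    static result_type convert(function& f) noexcept
    {
        return static_cast<result_type>(*f.target<HashAlgorithm>());
    }
};
```

Feeding the same bytes through the erased hasher and the concrete one must produce the same result, and target() must recover the stored algorithm.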
Note that the conversion to result_type isn't explicitly used in the Pimpl example of this paper. However, the hash_append of the private implementation may need to copy the type_erased_hasher, and possibly convert a copy to a result_type as part of its hash computation. Such code has been prototyped in motivating examples, such as the hash_append for an unordered sequence container. The Pimpl example does need access to the stored HashAlgorithm after the call to hash_append to recover its state. This is accomplished with the target function, which simply forwards to std::function's target function.

In Elements of Programming, Stepanov uses the term uniquely represented for the property this paper refers to as contiguously hashable. Therefore another good name for is_contiguously_hashable is is_uniquely_represented.

debugHasher

Another interesting "hash algorithm" is debugHasher. This is a small utility that can be used to help type authors debug their hash_append function. This utility is not proposed. It is simply presented herein to illustrate the utility of this overall hashing infrastructure design.
    #include <iostream>
    #include <iomanip>
    #include <vector>

    class debugHasher
    {
        std::vector<unsigned char> buf_;
    public:
        using result_type = std::size_t;

        void
        operator()(void const* key, std::size_t len) noexcept
        {
            unsigned char const* p = static_cast<unsigned char const*>(key);
            unsigned char const* const e = p + len;
            for (; p < e; ++p)
                buf_.push_back(*p);
        }

        explicit
        operator std::size_t() noexcept
        {
            std::cout << std::hex;
            std::cout << std::setfill('0');
            unsigned int n = 0;
            for (auto c : buf_)
            {
                std::cout << std::setw(2) << (unsigned)c << ' ';
                if (++n == 16)
                {
                    std::cout << '\n';
                    n = 0;
                }
            }
            std::cout << '\n';
            std::cout << std::dec;
            std::cout << std::setfill(' ');
            return buf_.size();
        }
    };

debugHasher is a fake "hashing algorithm" that does nothing but collect the bytes sent to a hash by the entire collection of the calls to hash_append made by a key and all of its sub-types. The collection of bytes is output to cout when the hasher is converted to its result_type. As can be readily seen, it is not difficult to create such a debugging tool. It is then used just as easily:

    std::vector<std::vector<std::pair<int, std::string>>> v
        {{{1, "abc"}}, {{2, "bca"}, {3, "cba"}}, {}};
    std::cout << uhash<debugHasher>{}(v) << '\n';

Assuming a 32 bit int, 64 bit size_t, and little endian, this will reliably output:

    01 00 00 00 61 62 63 00 01 00 00 00 00 00 00 00
    02 00 00 00 62 63 61 00 03 00 00 00 63 61 62 00
    02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    03 00 00 00 00 00 00 00
    56

The last line is simply the number of bytes that have been sent to the HashAlgorithm. The first 4 lines are those bytes, formatted as two hex digits per byte, with bytes separated by a space, and 16 bytes per line for readability. If one carefully inspects this byte stream and compares it to the data structure which has been "hashed", and to the proposed hash_append above for vector, string and pair, one can verify that the byte stream is consistent with the specification.
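In the same spirit, a do-nothing algorithm can also tally how it was driven, e.g. how many calls it received and how many bytes it consumed. The countingHasher below is ours and, like debugHasher, is not proposed:

```cpp
#include <cstddef>

// A "hash algorithm" that hashes nothing and instead records how many
// times it was called and how many bytes it was handed.
class countingHasher
{
    std::size_t calls_ = 0;
    std::size_t bytes_ = 0;
public:
    using result_type = std::size_t;

    void
    operator()(void const*, std::size_t len) noexcept
    {
        ++calls_;
        bytes_ += len;
    }

    explicit operator result_type() const noexcept {return bytes_;}

    std::size_t calls() const noexcept {return calls_;}
    std::size_t bytes() const noexcept {return bytes_;}
};
```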
Improving debugHasher to collect useful statistics, such as the number of times called and the average number of bytes hashed per call, is left as a fun exercise for the reader.

Proposed wording

Add a new section to [hash.requirements]:

HashAlgorithm requirements [hash.algo.rqmts]

A type H meets the HashAlgorithm requirements if all of the following are met:

- H::result_type is valid and denotes a MoveConstructible type (14.8.2 [temp.deduct]).

- H is either default constructible, or constructible by some documented seed. This construction shall initialize H to a deterministic state such that if two instances are constructed with the same arguments, then they have equivalent state.

- H is CopyConstructible. Updates to the state of one copy shall have no impact on any other copy.

- H is CopyAssignable. Updates to the state of one copy shall have no impact on any other copy.

- void operator()(void const* key, std::size_t len);

  Requires: If len > 0, key points to len contiguous bytes to be consumed by the HashAlgorithm. The conversion to result_type has not been called on this object since construction, or since *this was assigned to.

  Effects: Updates the state of the HashAlgorithm using the len bytes referred to by {key, len}. If len == 0 then key is not dereferenced, and there are no effects. Consider two keys {k1, len1} and {k2, len2}, with len1 > 0 and len2 > 0. If len1 != len2, the two keys are considered not equivalent. If len1 == len2 and memcmp(k1, k2, len1) == 0, the two keys are equivalent, else they are not equivalent. If two instances of HashAlgorithm (e.g. h1 and h2) have the same state prior to an update operation, then given two equivalent keys {k1, len} and {k2, len}, after h1(k1, len) and h2(k2, len), h1 and h2 shall have the same updated state. If {k1, len1} and {k2, len2} are not equivalent, then after h1(k1, len1) and h2(k2, len2), h1 and h2 should have different updated state.
  Given a key {k, len} with len > 0, one can create multiple keys, each with length l_i, where the first key k_0 == k and subsequent keys k_i == k_(i-1) + l_(i-1). Combined with the constraint that ∑ l_i == len, the single key {k, len} shall be equivalent to the application of all of the keys {k_i, l_i} applied in order. The HashAlgorithm shall not access this memory range after the update operation returns.

- explicit operator result_type();

  Requires: This operation has not been called on this object since construction or since *this was assigned to.

  Effects: Converts the state of the HashAlgorithm to a result_type. Two instances of the same type of HashAlgorithm, with the same state, shall return the same value. It is unspecified whether this operation changes the state of the HashAlgorithm.

  Returns: The converted state.

Add a new section to [hash.requirements]:

HashAlgorithm-based Hash requirements [hash.algo.hash.rqmts]

A type H meets the HashAlgorithm-based Hash requirements if all of the following are met:

- H meets the Hash requirements ([hash.requirements]).

- H is a class template instantiation of the form

      template <class HashAlgorithm, class ...Args>
      struct H;

  where Args is zero or more type arguments, and the first template parameter meets the HashAlgorithm requirements ([hash.algo.rqmts]). The HashAlgorithm parameter may be defaulted.

- H has the nested type:

      using result_type = typename HashAlgorithm::result_type;

- H is either default constructible, or constructible by some documented seed. This construction shall initialize H. H may be stateless or have state. If not stateless, different default constructions, and different seeded constructions (even with the same seeds), are not required to initialize H to the same state.

- H is CopyConstructible.

- H is CopyAssignable.

- template <class T> result_type operator()(T const& t) const;

  Requires: HashAlgorithm shall be constructible as specified by a concrete H type.

  Effects: Constructs a HashAlgorithm h with automatic storage.
  Each concrete H type shall specify how h is constructed. However h shall be constructed to the same state for every invocation of (*this)(t). Updates the state of the HashAlgorithm in an unspecified manner, except that there shall be exactly one call to:

      using std::hash_append;
      hash_append(h, t);

  at some time during the update operation. Furthermore, subsequent calls shall update the local h with exactly the same state every time, except as changed by different values for t, unless there is an intervening assignment to *this between calls to this operator.

  Returns: static_cast<result_type>(h). [Note: For the same value of t, the same value is returned on subsequent calls unless there is an intervening assignment to *this between calls to this operator. — end note]

Add a new row to Table 49 — Type property predicates in [meta.unary.prop]:

Add a new section to [unord.hash]:

hash_append [unord.hash_append]

    template <class HashAlgorithm, class T>
    void hash_append(HashAlgorithm& h, T const& t);

  Remarks: This function shall not participate in overload resolution unless is_contiguously_hashable<T>::value is true.

  Effects: h(addressof(t), sizeof(t)).

For any scalar type T, except for member pointers, for which is_contiguously_hashable<T>{} evaluates to false, there shall exist an overload of hash_append similar to that shown above for contiguously hashable types. For each of these overloads for scalar types T, the implementation shall ensure that for two values of T (e.g. t1 and t2), if t1 == t2, then hash_append(h, t1) shall update the state of h to the same state as does hash_append(h, t2). And if t1 != t2, then hash_append(h, t1) should update the state of h to a different state than does hash_append(h, t2). It is unspecified exactly what signature such overloads will have, so it is not portable to form function pointers to these overloads.
[Note: For example, here is a plausible implementation of hash_append for IEEE floating point:

    template <class HashAlgorithm, class T>
    enable_if_t
    <
        is_floating_point<T>{}
    >
    hash_append(HashAlgorithm& h, T t)
    {
        if (t == 0)
            t = 0;
        h(&t, sizeof(t));
    }

This implementation accepts the T by value instead of by const&, and gives -0. and 0. the same bit representation prior to forwarding the value to the HashAlgorithm (since these two values compare equal). And here is a plausible definition for nullptr_t:

    template <class HashAlgorithm>
    void
    hash_append(HashAlgorithm& h, nullptr_t)
    {
        void const* p = nullptr;
        h(&p, sizeof(p));
    }

— end note]

    template <class HashAlgorithm, class T, size_t N>
    void hash_append(HashAlgorithm& h, T (&a)[N]);

  Remarks: This function shall not participate in overload resolution unless is_contiguously_hashable<T>::value is false.

  Effects:

      for (auto const& t : a)
          hash_append(h, t);

  [Note: It is intentional that the hash_append for built-in arrays behaves in exactly this way, sending a "message" to the HashAlgorithm of each element, in order, and nothing else. This "message" to the HashAlgorithm is considered part of a built-in array's API. It is also intentional that for arrays of T that are contiguously hashable, the exact same message is sent to the HashAlgorithm, except in one call instead of many. — end note]

    template <class HashAlgorithm, class T0, class T1, class ...T>
    inline
    void
    hash_append(HashAlgorithm& h, T0 const& t0, T1 const& t1, T const& ...t);

  Effects:

      hash_append(h, t0);
      hash_append(h, t1, t...);

Add a new section to [unord.hash]:

uhash [unord.hash.uhash]

    template <class HashAlgorithm = unspecified>
    struct uhash
    {
        using result_type = typename HashAlgorithm::result_type;

        template <class T>
        result_type
        operator()(T const& t) const;
    };

Instantiations of uhash meet the HashAlgorithm-based Hash requirements ([hash.algo.hash.rqmts]). The template parameter HashAlgorithm meets the HashAlgorithm requirements ([hash.algo.rqmts]).
The unspecified default for this parameter refers to an implementation-provided default HashAlgorithm.

    template <class HashAlgorithm>
    template <class T>
    typename HashAlgorithm::result_type
    uhash<HashAlgorithm>::operator()(T const& t) const;

  Effects: Default constructs a HashAlgorithm with automatic storage duration (for example named h), and calls hash_append(h, t) (unqualified).

  Returns: static_cast<result_type>(h).

Add to [type.info]:

    class type_info
    {
        ...
    };
    ...

    template <class HashAlgorithm>
    void hash_append(HashAlgorithm& h, type_info const& t);

  Effects: Updates the state of h with data that is unique to t with respect to all other type_infos that compare not equal to t.

Add to the synopsis in [syserr]:

    template <class HashAlgorithm>
    void hash_append(HashAlgorithm& h, error_code const& ec);

Add to [syserr.hash]:

    template <class HashAlgorithm>
    void hash_append(HashAlgorithm& h, error_code const& ec);

  Effects: hash_append(h, ec.value(), &ec.category());

Add to the synopsis in [utility]:

    template <class T, class U>
    struct is_contiguously_hashable<pair<T, U>>
        : public integral_constant<bool, is_contiguously_hashable<T>{} &&
                                         is_contiguously_hashable<U>{} &&
                                         sizeof(T) + sizeof(U) == sizeof(pair<T, U>)>
    {};

    template <class HashAlgorithm, class T, class U>
    void hash_append(HashAlgorithm& h, pair<T, U> const& p);

Add a new section to [pairs]: Hashing pair [pairs.hash]

    template <class HashAlgorithm, class T, class U>
    void hash_append(HashAlgorithm& h, pair<T, U> const& p);

  Remarks: This function shall not participate in overload resolution unless is_contiguously_hashable<pair<T, U>>::value is false.
  Effects: hash_append(h, p.first, p.second);

Add to the synopsis in [tuple.general]:

    template <class ...T>
    struct is_contiguously_hashable<tuple<T...>>;

    template <class HashAlgorithm, class ...T>
    void hash_append(HashAlgorithm& h, tuple<T...> const& t);

Add to [tuple.special]:

    template <class ...T>
    struct is_contiguously_hashable<tuple<T...>>;

  Publicly derives from true_type if for each Type in T..., is_contiguously_hashable<Type>{} is true, and if the sum of all sizeof(Type) is equal to sizeof(tuple<T...>); else publicly derives from false_type.

    template <class HashAlgorithm, class ...T>
    void hash_append(HashAlgorithm& h, tuple<T...> const& t);

  Remarks: This function shall not participate in overload resolution unless is_contiguously_hashable<tuple<T...>>::value is false.

  Effects: Calls hash_append(h, get<I>(t)) for each I in the range [0, sizeof...(T)). If sizeof...(T) is 0, the function has no effects.

Add to the synopsis in [template.bitset]:

    template <class HashAlgorithm, size_t N>
    void hash_append(HashAlgorithm& h, bitset<N> const& bs);

Add to [bitset.hash]:

    template <class HashAlgorithm, size_t N>
    void hash_append(HashAlgorithm& h, bitset<N> const& bs);

  Effects: Calls hash_append(h, w) successively for some integral type w for which each bit in w corresponds to a bit value contained in bs. The last w may contain padding bits, which shall be set to 0. After all bits have been appended to h, calls hash_append(h, bs.size()).
Add to the synopsis in [memory.syn]:

    template <class HashAlgorithm, class T, class D>
    void hash_append(HashAlgorithm& h, unique_ptr<T, D> const& p);

    template <class HashAlgorithm, class T>
    void hash_append(HashAlgorithm& h, shared_ptr<T> const& p);

Add to [util.smartptr.hash]:

    template <class HashAlgorithm, class T, class D>
    void hash_append(HashAlgorithm& h, unique_ptr<T, D> const& p);

    template <class HashAlgorithm, class T>
    void hash_append(HashAlgorithm& h, shared_ptr<T> const& p);

  Effects: hash_append(h, p.get());

Add to the synopsis in [time.syn]:

    template <class Rep, class Period>
    struct is_contiguously_hashable<duration<Rep, Period>>
        : public integral_constant<bool, is_contiguously_hashable<Rep>{}>
    {};

    template <class Clock, class Duration>
    struct is_contiguously_hashable<time_point<Clock, Duration>>
        : public integral_constant<bool, is_contiguously_hashable<Duration>{}>
    {};

    template <class HashAlgorithm, class Rep, class Period>
    void hash_append(HashAlgorithm& h, duration<Rep, Period> const& d);

    template <class HashAlgorithm, class Clock, class Duration>
    void hash_append(HashAlgorithm& h, time_point<Clock, Duration> const& tp);

Add a new section to [time.duration]: duration hash [time.duration.hash]

    template <class HashAlgorithm, class Rep, class Period>
    void hash_append(HashAlgorithm& h, duration<Rep, Period> const& d);

  Remarks: This function shall not participate in overload resolution unless is_contiguously_hashable<duration<Rep, Period>>::value is false.

  Effects: hash_append(h, d.count()).

Add a new section to [time.point]: time_point hash [time.point.hash]

    template <class HashAlgorithm, class Clock, class Duration>
    void hash_append(HashAlgorithm& h, time_point<Clock, Duration> const& tp);

  Remarks: This function shall not participate in overload resolution unless is_contiguously_hashable<time_point<Clock, Duration>>::value is false.

  Effects: hash_append(h, tp.time_since_epoch()).
Add to the synopsis in [type.index.synopsis]:

    template <class HashAlgorithm>
    void hash_append(HashAlgorithm& h, type_index const& ti);

Add to [type.index.hash]:

    template <class HashAlgorithm>
    void hash_append(HashAlgorithm& h, type_index const& ti);

  Effects: hash_append(h, *ti.target);

Add to the synopsis in [string.classes]:

    template <class HashAlgorithm, class CharT, class Traits, class Alloc>
    void hash_append(HashAlgorithm& h, basic_string<CharT, Traits, Alloc> const& s);

Add to [basic.string.hash]:

    template <class HashAlgorithm, class CharT, class Traits, class Alloc>
    void hash_append(HashAlgorithm& h, basic_string<CharT, Traits, Alloc> const& s);

  Effects:

      for (auto c : s)
          hash_append(h, c);
      hash_append(h, s.size());

  [Note: If is_contiguously_hashable<CharT>{} is true, then the following may replace the loop (as an optimization):

      h(s.data(), s.size()*sizeof(CharT));

  — end note]

Add to the synopsis of <array> in [sequences.general]:

    template <class T, size_t N>
    struct is_contiguously_hashable<array<T, N>>
        : public integral_constant<bool, is_contiguously_hashable<T>{} &&
                                         sizeof(T)*N == sizeof(array<T, N>)>
    {};

    template <class HashAlgorithm, class T, size_t N>
    void hash_append(HashAlgorithm& h, array<T, N> const& a);

Add to the synopsis of <deque> in [sequences.general]:

    template <class HashAlgorithm, class T, class Allocator>
    void hash_append(HashAlgorithm& h, deque<T, Allocator> const& x);

Add to the synopsis of <forward_list> in [sequences.general]:

    template <class HashAlgorithm, class T, class Allocator>
    void hash_append(HashAlgorithm& h, forward_list<T, Allocator> const& x);

Add to the synopsis of <list> in [sequences.general]:

    template <class HashAlgorithm, class T, class Allocator>
    void hash_append(HashAlgorithm& h, list<T, Allocator> const& x);

Add to the synopsis of <vector> in [sequences.general]:

    template <class HashAlgorithm, class T, class Allocator>
    void hash_append(HashAlgorithm& h, vector<T, Allocator> const& x);

    template <class HashAlgorithm, class Allocator>
    void hash_append(HashAlgorithm& h, vector<bool, Allocator> const& x);

Add to [array.special]:

    template <class HashAlgorithm, class T, size_t N>
    void hash_append(HashAlgorithm& h, array<T, N> const& a);

  Remarks: This function shall not participate in overload resolution unless is_contiguously_hashable<array<T, N>>::value is false.

  Effects:

      for (auto const& t : a)
          hash_append(h, t);

Add to [deque.special]:

    template <class HashAlgorithm, class T, class Allocator>
    void hash_append(HashAlgorithm& h, deque<T, Allocator> const& x);

  Effects:

      for (auto const& t : x)
          hash_append(h, t);
      hash_append(h, x.size());

  [Note: When is_contiguously_hashable<T>{} is true, an implementation may optimize by calling h(p, s) on suitable contiguous sub-blocks of the deque. — end note]

Add to [forwardlist.spec]:

    template <class HashAlgorithm, class T, class Allocator>
    void hash_append(HashAlgorithm& h, forward_list<T, Allocator> const& x);

  Effects:

      typename forward_list<T, Allocator>::size_type s{};
      for (auto const& t : x)
      {
          hash_append(h, t);
          ++s;
      }
      hash_append(h, s);

Add to [list.special]:

    template <class HashAlgorithm, class T, class Allocator>
    void hash_append(HashAlgorithm& h, list<T, Allocator> const& x);

  Effects:

      for (auto const& t : x)
          hash_append(h, t);
      hash_append(h, x.size());

Add to [vector.special]:

    template <class HashAlgorithm, class T, class Allocator>
    void hash_append(HashAlgorithm& h, vector<T, Allocator> const& x);

  Effects:

      for (auto const& t : x)
          hash_append(h, t);
      hash_append(h, x.size());

  [Note: When is_contiguously_hashable<T>{} is true, an implementation may optimize by calling h(x.data(), x.size()*sizeof(T)) in place of the loop. — end note]

Add to [vector.bool]:

    template <class HashAlgorithm, class Allocator>
    void hash_append(HashAlgorithm& h, vector<bool, Allocator> const& x);

  Effects: Calls hash_append(h, w) successively for some integral type w for which each bit in w corresponds to a bit value contained in x.
The last wmay contain padding bits which shall be set to 0. After all bits have been appended to h, calls hash_append(h, x.size()). Add to the synopsis of <map> in [associative.map.syn]: template <class HashAlgorithm, class Key, class T, class Compare class Allocator> void hash_append(HashAlgorithm& h, map<Key, T, Compare, Allocator> const& x); template <class HashAlgorithm, class Key, class T, class Compare class Allocator> void hash_append(HashAlgorithm& h, multimap<Key, T, Compare, Allocator> const& x); Add to the synopsis of <set> in [associative.set.syn]: template <class HashAlgorithm, class Key, class Compare class Allocator> void hash_append(HashAlgorithm& h, set<Key, Compare, Allocator> const& x); template <class HashAlgorithm, class Key, class Compare class Allocator> void hash_append(HashAlgorithm& h, multiset<Key, Compare, Allocator> const& x); Add to [map.special]: template <class HashAlgorithm, class Key, class T, class Compare class Allocator> void hash_append(HashAlgorithm& h, map<Key, T, Compare, Allocator> const& x); Effects:for (auto const& t : x) hash_append(h, t); hash_append(h, x.size()); Add to [multimap.special]: template <class HashAlgorithm, class Key, class T, class Compare class Allocator> void hash_append(HashAlgorithm& h, multimap<Key, T, Compare, Allocator> const& x); Effects:for (auto const& t : x) hash_append(h, t); hash_append(h, x.size()); Add to [set.special]: template <class HashAlgorithm, class Key, class Compare class Allocator> void hash_append(HashAlgorithm& h, set<Key, Compare, Allocator> const& x); Effects:for (auto const& t : x) hash_append(h, t); hash_append(h, x.size()); Add to [multiset.special]: template <class HashAlgorithm, class Key, class Compare class Allocator> void hash_append(HashAlgorithm& h, multiset<Key, Compare, Allocator> const& x); Effects:for (auto const& t : x) hash_append(h, t); hash_append(h, x.size()); Add to the synopsis in [complex.syn]: template <class HashAlgorithm, class T> void 
hash_append(HashAlgorithm& h, complex<T> const& x);

Add to [complex.ops]:

template <class HashAlgorithm, class T>
void hash_append(HashAlgorithm& h, complex<T> const& x);

    Effects: Calls hash_append(h, x.real(), x.imag()).

Add to the synopsis in [thread.thread.id]:

template <class HashAlgorithm>
void hash_append(HashAlgorithm& h, thread::id const& id);

Add to [thread.thread.id]:

template <class HashAlgorithm>
void hash_append(HashAlgorithm& h, thread::id const& id);

    Effects: Updates the state of h with id.

Thanks to Daniel James (et al.) for highlighting the problem of hashing zero-length containers with no message. Thanks to Dix Lorenz (et al.) for pointing out that the result_type of the HashAlgorithm need not be size_t, and indeed cannot be if we want this infrastructure to fully handle cryptographic hash functions (which produce results larger than a size_t). Thanks to Jeremy Maitin-Shepard for pointing out problems in an earlier scheme to hash std::string and arrays of char identically. Also thanks to Jeremy and Chris Jefferson for their guidance on hashing unordered sequences. Additional thanks to Walter Brown, Daniel Krügler, and Richard Smith for their invaluable review and guidance. This research has been generously supported by Ripple Labs. We would especially like to thank our colleagues on the RippleD team.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3980.html
Odoo Help

Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

Odoo/Open ERP Customer Wizard

Hi folks, I'm new to Odoo/OpenERP and I need some help. I've created a wizard to be displayed on the res.partner page, which lists/shows all of the customers. It allows me to select multiple customers and add a parent company to them. However, I don't know where to add the wizard folder in the file structure. I realize that I have to create my __init__ file, but that's easy once I know where this is supposed to go. Any thoughts would be greatly helpful. Thank you!

Realistically, your folder's file structure can look almost however you want, but naming conventions (at least in v7) would look something like this:

my_custom_module/
    wizard/
        __init__.py
        res_partner_batch_parent.py
        res_partner_batch_parent_view.xml
    __init__.py
    __openerp__.py
    res_partner.py
    res_partner_view.xml

As long as your __init__.py files are all importing folders and files properly, and your __openerp__.py file has the proper file structure to import all XML/CSV/CSS/JS/etc. files, you should be fine.

my_custom_module/__init__.py:

import res_partner
import wizard

my_custom_module/wizard/__init__.py:

import res_partner_batch_parent

You could technically have a wizard folder named "asdf" and your wizard Python file named "qwerty.py" and be perfectly fine as long as __init__.py is accurate, but it's terrible naming practice.
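The `__openerp__.py` manifest is the piece that tells OpenERP to load the wizard's XML view. A minimal sketch for the layout described in the answer (the keys shown are standard v7 manifest keys; the values are illustrative, not taken from any real module):

```python
# __openerp__.py -- module manifest (v7 style; values are illustrative)
{
    'name': 'My Custom Module',
    'version': '1.0',
    'depends': ['base'],
    'data': [
        # Views and wizard XML must be listed here, or they will not load.
        'res_partner_view.xml',
        'wizard/res_partner_batch_parent_view.xml',
    ],
    'installable': True,
}
```
Paths in `data` are relative to the module root, which is why the wizard view keeps its `wizard/` prefix.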
https://www.odoo.com/forum/help-1/question/odoo-open-erp-customer-wizard-84155
Richard Stallman <address@hidden> writes:

> Anyway, someone suggested that I try enabling (i.e., uncommenting)
> SIGNALS_VIA_CHARACTERS in s/gnu-linux.h.  I did that, and it worked
> fine -- and in fact has been working fine for months, on both a 2.4
> and a 2.6 kernel.
>
> Ok, let's turn it on.

To play it safe, here is a patch which enables it only for 2.4 and later.

*** gnu-linux.h 06 Feb 2006 18:21:53 +0100      1.94
--- gnu-linux.h 23 Jun 2006 12:06:21 +0200
***************
*** 52,57 ****
--- 52,60 ----
  #if LINUX_VERSION_CODE >= 0x20000
  #define LINUX_MAP_SHARED_DOES_WORK
  #endif /* LINUX_VERSION_CODE >= 0x20000 */
+ #if LINUX_VERSION_CODE >= 0x20400
+ #define LINUX_SIGNALS_VIA_CHARACTERS_WORKS
+ #endif /* LINUX_VERSION_CODE >= 0x20400 */
  #endif /* HAVE_LINUX_VERSION_H */
  #endif /* emacs */
  #endif /* NOT_C_CODE */
***************
*** 247,255 ****
  #define C_DEBUG_SWITCH
  #endif

! /* Let's try this out, just in case.
!    Nah.  Rik Faith <address@hidden> says it doesn't work well.  */
! /* #define SIGNALS_VIA_CHARACTERS */

  /* Rob Malouf <address@hidden> says:
     SYSV IPC is a standard part of Linux since version 0.99pl10,
--- 250,258 ----
  #define C_DEBUG_SWITCH
  #endif

! #ifdef LINUX_SIGNALS_VIA_CHARACTERS_WORKS
! #define SIGNALS_VIA_CHARACTERS
! #endif

  /* Rob Malouf <address@hidden> says:
     SYSV IPC is a standard part of Linux since version 0.99pl10,

--
Kim F. Storm <address@hidden>
https://lists.gnu.org/r/emacs-pretest-bug/2006-06/msg00277.html
I) ... and compile it with -O2, it generates the following very nice core:

$wa1 =
  \ (w_s25b :: Int#) (ww_s25e :: Int#) (w1_s25g :: State# RealWorld) ->
    case <=# w_s25b 1 of _ {
      False -> $wa1 (-# w_s25b 1) (+# 1 ww_s25e) w1_s25g;
      True  -> (# w1_s25g, ((), (I# (+# 1 ww_s25e))) #)
    }

... and runs in 4.6 seconds on my netbook:

time ./writer
((),Sum {getSum = 1000000000})

real    0m4.580s
user    0m4.560s
sys     0m0.008s

... which is about 4.6 nanoseconds per element. This is quite impressive when you consider it is factoring everything through the 'IO' monad. If you use `Identity` as the base monad:

main4 = print $ runIdentity $ runWriterT4 $ replicateM_ n $ tell4 $ Sum (1 :: Int)

... then it gets slightly faster:

real    0m3.678s
user    0m3.668s
sys     0m0.000s

... with an even nicer inner loop:

$wa1 =
  \ (w_s25v :: Int#) (ww_s25y :: Int#) ->
    case <=# w_s25v 1 of _ {
      False -> $wa1 (-# w_s25v 1) (+# 1 ww_s25y);
      True  -> (# (), (I# (+# 1 ww_s25y)) #)
    }

The reason this stalled last time is that Edward and I agreed that I should first investigate if there is a "smaller" type that gives the same behavior. Now I'm revisiting the issue because I can safely conclude that the answer is "no". The StateT implementation is the smallest type that gives the correct behavior.

To explain why, it helps to compare the definition of `(>>=)` for both WriterT and StateT:

-- WriterT
m >>= k = WriterT $ do
    (a, w)  <- runWriterT m
    (b, w') <- runWriterT (k a)
    return (b, w `mappend` w')

-- StateT
m >>= k = StateT $ \s -> do
    (a, s') <- runStateT m s
    runStateT (k a) s'

The `WriterT` fails to run in constant space because of the pattern of binding the continuation before mappending the results. This results in N nested binds before it can compute even the very first `mappend`. This not only leaks space, but also punishes the case where your base monad is a free monad, since it builds up a huge chain of left-associated binds.
The canonical solution to avoid this sort of nested bind is to use a continuation-passing-style transformation where you pass the second `runWriterT` a continuation saying what you want to do with its monoid result. My first draft of such a solution looked like this:

newtype WriterT w m a = WriterT { unWriterT :: (w -> w) -> m (a, w) }

m >>= k = WriterT $ \f -> do
    (a, w) <- unWriterT m f
    unWriterT (k a) (mappend w)

tell w = WriterT $ \f -> return ((), f w)

runWriterT m = unWriterT m id

... but then I realized that there is no need to pass a general function. I only ever use mappend, so why not just pass in the monoid that I want to mappend and let `tell` just supply the `mappend`:

newtype WriterT w m a = WriterT { unWriterT :: w -> m (a, w) }

m >>= k = WriterT $ \w -> do
    (a, w') <- unWriterT m w
    unWriterT (k a) w'

tell w' = WriterT $ \w -> return ((), mappend w w')

runWriterT m = unWriterT m mempty

Notice that this just reinvents the StateT monad transformer. In other words, StateT *is* the continuation-passing-style transformation of WriterT, which is why you can't do any better than to reformulate WriterT as StateT internally.

So I propose that we add an additional, stricter WriterT (under, say, "Control.Monad.Trans.Writer.Stricter") which is internally implemented as StateT, but hide the constructor so we don't expose the implementation:

newtype WriterT w m a = WriterT { unWriterT :: w -> m (a, w) }

instance (Monad m, Monoid w) => Monad (WriterT w m) where
    return a = WriterT $ \w -> return (a, w)
    m >>= f  = WriterT $ \w -> do
        (a, w') <- unWriterT m w
        unWriterT (f a) w'

And define `tell` and `runWriterT` as follows:

tell :: (Monad m, Monoid w) => w -> WriterT w m ()
tell w = WriterT $ \w' -> let wt = w `mappend` w' in wt `seq` return ((), wt)

runWriterT :: (Monoid w) => WriterT w m a -> m (a, w)
runWriterT m = unWriterT m mempty

If we do that, then WriterT becomes not only usable, but actually competitive with expertly tuned code.
http://www.haskell.org/pipermail/libraries/2013-March/019528.html
Default value for a function field?

I have one function field with type integer. I want to set a default value for it. How is that possible? I used that field in another module with a one2many relation and displayed it in a tree view under the other module.

- First make the functional field writable (if you don't want it to be writable by the end user, then additionally you'll need to set it as readonly in the view). In v7.0 it may be something like:

def _set_my_field(self, cr, uid, ids, field_name, field_value, arg, context=None):
    pass

'my_field': fields.function(_compute_my_field, fnct_inv=_set_my_field, method=True, string="My Field", type="integer")

- Then set the default value in the ordinary way, as for other fields.
https://www.odoo.com/forum/help-1/question/default-value-for-function-field-81884
Hello, has anyone done this (e.g. hooks) with C++ and Libtorch? Thanks

Hope this snippet will help you to plot 16 outputs in the subplot:

import matplotlib.pyplot as plt

# act: the activation tensor captured earlier, with 16 feature maps
fig, axarr = plt.subplots(4, 4)
k = 0
for idx in range(act.size(0) // 4):
    for idy in range(act.size(0) // 4):
        axarr[idx, idy].imshow(act[k])
        k += 1

For example, if we consider @ptrblck's code snippet and change the conv2 layer to 16 feature maps for the visualization, the output could look like:

Hi, I have one doubt in addition to this. While calculating the loss, i.e. nn.CrossEntropyLoss(output, labels), we use the output of the last layer, i.e. the final output of the network. But I want to use the feature maps of each convolutional layer in the loss calculation as well, in addition to the above. Can you please throw some light on how it can be done?

It depends which target you would like to use for the intermediate activations. Since they are conv outputs, you won't be able to use e.g. nn.CrossEntropyLoss directly, since these outputs don't represent logits for the classification use case, as they have a different shape. However, you could follow a similar approach as seen in Inception models, which create auxiliary outputs forked from intermediate outputs and use these aux. output layers to calculate the loss.

Actually, I want to use a discriminative loss in addition to the cross-entropy loss. The discriminative loss will be based on the feature maps of each convolutional layer, and the cross-entropy loss remains as usual. But I haven't found any source so far on how to do it!
You could return the intermediate activations in the forward as seen here:

def forward(self, x):
    x1 = self.layer1(x)
    x2 = self.layer2(x1)
    out = self.layer3(x2)
    return out, x1, x2

and then calculate the losses separately:

out, x1, x2 = model(input)
loss = criterion(out, target) + criterion1(x1, target1) + criterion2(x2, target2)
loss.backward()

or alternatively you could use forward hooks to store the intermediate activations and calculate the loss using them. This post gives you an example on how to use forward hooks.

Thank you for the reply @ptrblck. I actually created a new thread because the discussion deviates slightly and I have a few more doubts. Can you please look at this?

If I do something like Loss = my_Loss(x1, original_images) for every layer, the size of the tensor will be different, so it will give an error. I.e. after 1 convolutional layer, x1 has a size of [batch_size, num_features, h, w] = [50, 12, 28, 28], while that of original_images = [50, 3, 32, 32]. So how to calculate the loss then? And one more doubt: can we do something to get all features (num_features = 12 in this case) to be as distinct as possible from one another?

You could use additional conv/pooling/interpolation layers to create the same output size of the intermediate activations as the target (or input) tensor. Alternatively, you could also change the model architecture such that the spatial size won't be changed, but again it depends on your use case. I don't know which approach would create activations meeting this requirement.

As long as we provide some kernel size, there will be a reduction in size: Size = ((W - K + 2P) / S) + 1. I really don't get your point. Can you please give dummy examples of both the scenarios you suggested?

Yes, you cannot simply calculate a loss between arbitrarily sized tensors, so you would need to come up with a way to calculate the loss between intermediate activations (with different sizes) and a target. To get the same spatial size you could e.g.
use pooling/conv/interpolations, while you would still have to make sure to create the same number of channels. This can be done via convs again (or via reductions), but it depends on your use case. Here is a small example:

act = torch.randn(1, 64, 24, 24)
target = torch.randn(1, 3, 28, 28)

# make sure same spatial size
act_interp = F.interpolate(act, (28, 28))

# create same channels
conv = nn.Conv2d(64, 3, 1)
act_interp_channels = conv(act_interp)
print(act_interp_channels.shape)  # has same shape now

loss = F.mse_loss(act_interp_channels, target)

Thanks for the reply. This is something new; I will try it. So, can we say that pooling and interpolation correspond to downscaling and upscaling?

And in another approach, I have an output tensor (still intermediate activations after each convolutional layer) of, say, size (batch_size=100, channels=32, 1, 1), and a target (which are labels now) of size (batch_size=100, 1), given no. of classes = 10. So, for the loss calculation, i.e. Loss = lossFunc(output, labels), is there any loss function which can reduce the dimensionality from 32 to 10 and then calculate the loss? Or do I have to use a conv/linear layer for that and then calculate the loss? Because in that case it adds some more weights, which I want to avoid.

Yes, you could name it like that. Note that you could down- and upscale using an interpolation method.

I assume the shape should be [100, 10]? If so, you could either use a trainable layer (as you've already described) or try to reduce the channels using e.g. mean, sum, etc. However, a reduction might be a bit tricky in this use case, as the target size of 10 doesn't fit nicely into the channel size of 32. Generally, all operations which map 32 values to 10 would be allowed.

OK, that looks super cool. Can you give some tips, code, or steps on how you did it? :)

Hi, thank you so much for this code! I am a beginner and learning CNNs by looking into the examples. It really helps.
Can you guide me on how I can visualize the last layer of my model?

If you would like to visualize the parameters of the last layer, you can directly access them, e.g. via model.last_layer.weight, and visualize them using e.g. matplotlib. The activations would most likely be the return value and thus the model output, which can be visualized in a similar fashion. If you want to grab intermediate activations, you could use forward hooks and plot them after the forward pass was executed. Let me know if that answers the question.

Hi, thanks. As per your comments I have implemented it. Can you tell me how we can interpret the results? As per your comment I got the weights of fully connected layer 1, and the size is torch.Size([500, 800]), but I am unable to plot it, as I am getting this error: TypeError: Invalid shape (800,) for image data. I think the problem is that the fully connected layer is flattened, so it is a 1-D array, and therefore I am getting this error? What is the solution for this?

You should be able to visualize a numpy array in the shape [500, 800] using matplotlib in the same way as done in your previous post. I guess you might be indexing the array and could thus try to plot it directly. I don't know what kind of information you are looking for, but the color map would indicate the value ranges and you could try to interpret them as desired.
(layer1): Sequential(
    (0): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
    (1): ReLU()
    (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(layer2): Sequential(
    (0): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1))
    (1): ReLU()
    (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(fc1): Linear(in_features=800, out_features=500, bias=True)
(dropout1): Dropout(p=0.5, inplace=False)
(fc2): Linear(in_features=500, out_features=10, bias=True)

This is my CNN model, and whenever I try to fetch the weights for layer 1 I am getting this error:

ModuleAttributeError: 'Sequential' object has no attribute 'weight'

Can you please help me with this error?
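The forward hooks recommended several times in this thread boil down to a simple callback mechanism. A framework-free sketch of the idea (the method name mirrors PyTorch's `register_forward_hook`, but this is plain Python, not the torch implementation):

```python
# Minimal model of PyTorch-style forward hooks: each "module" keeps a
# list of callbacks that fire after its forward computation.  This is
# how intermediate activations can be captured for visualization or
# for an extra loss term without changing the forward's return value.
class Module:
    def __init__(self, fn):
        self.fn = fn
        self._forward_hooks = []

    def register_forward_hook(self, hook):
        self._forward_hooks.append(hook)

    def __call__(self, x):
        out = self.fn(x)
        for hook in self._forward_hooks:
            hook(self, x, out)  # torch also passes (module, input, output)
        return out

activations = {}
layer1 = Module(lambda x: x * 2)
layer1.register_forward_hook(lambda mod, inp, out: activations.update(layer1=out))

result = layer1(3)
print(result, activations)  # 6 {'layer1': 6}
```
Because the hook stores the output in an external dict, the caller still gets the normal return value while the captured activation remains available for plotting or a loss calculation afterwards.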
https://discuss.pytorch.org/t/visualize-feature-map/29597/77
The code, scripts, and some of the instructions in this post are adapted from the second chapter of _The Linux Kernel Module Programming Guide_, by Peter Jay Salzman, Michael Burian, and Ori Pomerantz. As of this writing, an electronic version of the book is available at the URL ““.

1. Download and install Oracle VirtualBox. As of this writing, the latest version is available at the URL ““.

2. Download a copy of the Ubuntu operating system in .ISO format. As of this writing, the latest version is available at the URL ““.

3. Run VirtualBox, create a new virtual machine instance, and install Ubuntu on it. Follow the prompts, and accept the default settings wherever possible.

4. Start up the new virtual instance of Ubuntu and allow it to boot. Log in when prompted and wait for the desktop to appear. The remaining instructions in this document should be applied to the virtual Ubuntu instance running in VirtualBox, not to the host operating system.

5. In any convenient location, create a new folder named “DeviceDriverTest”.

6. In the newly created DeviceDriverTest directory, create a new file called “hello-1.c”, containing the following text.

#include <linux/module.h>
#include <linux/kernel.h>

int init_module(void)
{
    printk(KERN_INFO "Hello world 1.\n");
    return 0;
}

void cleanup_module(void)
{
    printk(KERN_INFO "Goodbye world 1.\n");
}

7. Still in the DeviceDriverTest directory, create a new file named “Makefile”, containing the following text. (Note that the indented command lines in a Makefile must begin with a tab character, not spaces.)

obj-m += hello-1.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

8. Open a command prompt console (perhaps by selecting the menu item “Accessories ~ Console” from the Start Menu) and navigate to the DeviceDriverTest directory (perhaps by entering the command “cd Desktop/DeviceDriverTest”).

9. Still in the console window, enter the command “make”.
The file “hello-1.c” will be compiled, and a new kernel module file named “hello-1.ko” will be created.

10. Still in the console window, enter the command “sudo su” and provide the proper password to run commands as the root user.

11. Still in the console window, enter the command “insmod ./hello-1.ko”. The newly created module will be inserted into the Linux kernel. (The module’s printk output goes to the kernel log, which can be viewed by entering the command “dmesg”.)

12. Still in the console window, enter the command “nano /proc/modules”. A text file will be opened in the “nano” text editor. The name of the newly installed “hello-1” module should be visible at the top of the opened file.

13. If desired, remove the module from the Linux kernel by entering the command “rmmod hello-1”.

Reblogged this on Perfectly Opaque and commented: God Bless Open-source!
https://thiscouldbebetter.wordpress.com/2013/02/02/building-a-simple-device-driver-in-linux/
Hello, noob here. I have a 2D project where I want to create a hex grid for unit placement. Currently I have a script that creates a hex mesh and then uses that mesh to create a hex grid. I also implemented a mouse-over color change, which worked great. Then I switched it from reacting to the mouse pointer to reacting to a single hex I moved with the mouse. Again, worked great. But what I really want is for it to react to multiple hexes, as the units will have varying sizes. The smallest unit may be 1 hex, but a larger unit might take 3 or 7 hexes of space.

At my current level of Unity noobness I can't figure out how to do this, nor the best way to go about it. Currently I am applying a script to each individual grid hex that checks for a raycast and changes the color. This was based on an example I found to do the mouse-over. My idea was to create a group of 'selection hexes' that matched the size of the unit, and as I move it around it would highlight a legitimate location to place it (i.e. unit size 7 finds 7 empty hexes). No idea if this is the way to go about it, and currently what I have doesn't work anyway. I just took my working single example and put it in a loop. But, again, it doesn't work; it just highlights 1 hex at a time. Any advice on how to best implement this? Best practices?
Code attached to hexes in the grid:

using UnityEngine;

[RequireComponent(typeof(Collider))]
public class GridHex : MonoBehaviour
{
    Color normalColor;
    GameObject[] selectors;
    //public Transform selectionObject;

    void Awake()
    {
        selectors = GameObject.FindGameObjectsWithTag("selectortile");
        //selectionObject = GameObject.Find("HexagonSelectorTile").transform;
    }

    void Start()
    {
        normalColor = GetComponent<Renderer>().material.color;
    }

    void Update()
    {
        foreach (GameObject selector in selectors)
        {
            //Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            //Ray ray = new Ray(selectionObject.position, Vector3.forward);
            Ray ray = new Ray(selector.transform.position, Vector3.forward);
            RaycastHit hitInfo;
            Debug.DrawRay(ray.origin, ray.direction, Color.green);
            if (GetComponent<Collider>().Raycast(ray, out hitInfo, Mathf.Infinity))
            {
                GetComponent<Renderer>().material.color = UnityEngine.Color.red;
            }
            else
            {
                GetComponent<Renderer>().material.color = normalColor;
            }
        }
    }
}

What it looks like:

Alright. Good luck!

Answer by Zypherr7 · Sep 30, 2015 at 01:01 AM

I see what you are trying to do and I may be able to help. There is a formula you can use that, when the width and length of the tile is entered, can convert a world coordinate into a hex-based coordinate, and back again if you wished. I have actually built a method to do so, but it is slightly confusing just to enter into here. I am going to suggest you use the initial coordinate to find the hexes around it. The offset you need would be on the y, .75*height, and for the x, it would be width/2 every other tile.

Thanks, this has given me something to think about.

Ok, so I had a little more time to work on this. If you wanted to find the six tiles around the original one, you would be looking at tiles relative to it: either (-1,0), (1,0), (0,1), (1,1), (0,-1), (1,-1); or (-1,0), (1,0), (0,1), (-1,1), (0,-1), (-1,-1). It would just depend on the offset of the initial tile.
If it was on a row with tiles further left, it would be the first one. If it was on a row with the tiles further to the right, it would be the second group. Based off of that, you would then have to find the position of the origin tile to find the ones around it. The code would be something like this:

Vector2 tile1;
Vector2 tile2;
Vector2 tile3;
Vector2 tile4;
Vector2 tile5;
Vector2 tile6;

// tile width and height
double tileWidth;
double tileHeight;

public void findSurroundingTiles(Vector2 origin)
{
    // to find the tile to the left
    tile1.x = origin.x + (-1 * coordinateTransferX(tileWidth, tileHeight).x);
    tile1.y = origin.y + (0 * coordinateTransferX(tileWidth, tileHeight).y);

    // to find the tile above
    tile2.x = origin.x + (0 * coordinateTransferY(tileWidth, tileHeight).x);
    tile2.y = origin.y + (1 * coordinateTransferY(tileWidth, tileHeight).y);
}

// finds coordinate of a tile on the same offset as the origin
public Vector2 coordinateTransferX(double width, double height)
{
    Vector2 temp = new Vector2((float)width, (float)height);
    return temp;
}

// finds coordinate of a tile on the other offset
public Vector2 coordinateTransferY(double width, double height)
{
    Vector2 temp = new Vector2((float)(width * .75), (float)(height / 2));
    return temp;
}

Now, this code should work for what you are trying to do, if you can find objects based on their location. Also, you have to take into account that an offset may be left or right; I cannot think of anything that could fix that off the top of my head. On a side note, this may not compile properly if copied and pasted directly into scripts, because I wrote this directly into the comment and may have messed up some syntax. If you need any help you can email me at zypherr7@gmail.com; I will be glad to help with any questions.

Answer by huxley443 · Oct 05, 2015 at 11:38 PM
I not sure I'm a big fan of how they did their Grid data class, but it did make me realize I need a data class of some kind. Barring any other examples though I'll try to semi-clone what they did in that tut and modify it for hex and just basic detection. Combined with your code example I'm sure I can hack something together, though I doubt the data class will follow any kind best practice. I'll report back on my progress... Answer by huxley443 · Oct 05, 2015 at 11:38 PM Well I got this working, but I think it's pretty ugly. I'm not sure I'm passing data back & forth in a good way. And I'm not happy using OnMouseOver & OnMouseExit. I tried some OnCollision events but couldn't make them work. Anyway, there's a lot going on, but it boils down to this. In a Grid class I have: public List<GridTile> GetAdjacentTiles(Vector2 pPosition) { int x = (int)pPosition.x; int y = (int)pPosition.y; int w = gridWidthInHexes - 1; int h = gridHeightInHexes - 1; List<GridTile> Adjacents = new List<GridTile>(); if (x > 0) { Adjacents.Add(hexes[x-1,y].GetComponent<GridTile>()); } if (x < w) { Adjacents.Add(hexes[x+1,y].GetComponent<GridTile>()); } if (y > 0) { //the two above>()); } } if (y < h) { //the two below>()); } } return Adjacents; } And In a Tile class I have: void OnMouseOver() { this.IsSelected = true; List<GridTile> adjecentTiles = gridMap.GetAdjacentTiles(myGridPosition); foreach (GridTile gt in adjecentTiles) { gt.IsSelected = true; gt.SetVisual(); } } void OnMouseExit() { this.IsSelected = false; List<GridTile> adjecentTiles = gridMap.GetAdjacentTiles(myGridPosition); foreach (GridTile gt in adjecentTiles) { gt.IsSelected = false; gt.SetVisual(); } } Which results in this as I move my mouse. Instantiate GameObjects in a Sphere shape 2 Answers Hexagon grid check for specific tile 0 Answers Hexagonal geodesic map 0 Answers Best practices: hierarchy does not seem possible because redefined method need other private methods 0 Answers Move object to grid 1 Answer
https://answers.unity.com/questions/1073767/multiple-hex-placement-grid.html
In the previous post of this series, we looked at how the Template pattern is implemented in both Ruby and C#. In this post, we'll take a look at the Strategy pattern… one of my favorites.

In its classic form, the Strategy pattern consists of a context class and various "strategies" which share a common interface. The context class is given a strategy to which it can delegate work. Think of it as passing an algorithm to a class: to change the way the class works, change the algorithm you feed it. This makes it really easy to adhere to the open-closed principle, which states that an object should be open for extension but closed for modification. We can drastically change how our class operates without changing a line of code in the class itself.

In .Net, the Sort method on generic lists is a perfect example of how the Strategy pattern can be applied. To change how the list is sorted, you simply pass in a method that matches the Comparison<T> delegate or a class that implements IComparer<T>. (See my earlier post on the various ways you can sort a generic list in .Net.) Ruby has a really cool example built in as well with its rdoc utility, which uses strategies both for distilling documentation from various languages (C, Ruby, etc.) and in how it outputs the documentation (HTML, XML, CHM).

In our examples below, we're creating a Driver object that has three different properties: DrivingHabit, MilesDriven, and SpeedingTickets. It also has a method "Drive" which accepts the number of hours that the driver should drive as a parameter. The Drive method delegates the actual "driving" logic to the driving habit that it is composed with. A cautious driver follows the speed limit (55 mph) and never gets a speeding ticket. A reckless driver, however, travels along at 80 mph and gets a speeding ticket every half hour.
C# Example

For the Strategy pattern to work in a static language, we must first define a contract that the strategies themselves must adhere to, as well as the context class. So we'll start by creating the Driver object and defining the IDrivingHabit interface.

namespace DesignPatterns.StategyPattern
{
    public class Driver
    {
        public IDrivingHabit DrivingHabit { get; set; }
        public int MilesDriven { get; set; }
        public int SpeedingTickets { get; set; }

        public Driver(IDrivingHabit drivingHabit)
        {
            DrivingHabit = drivingHabit;
        }

        public void Drive(int hours)
        {
            DrivingHabit.Drive(this, hours);
        }
    }
}

namespace DesignPatterns.StategyPattern
{
    public interface IDrivingHabit
    {
        void Drive(Driver driver, int hours);
    }
}

Now let's create our strategies.

namespace DesignPatterns.StategyPattern
{
    public class CautiousDrivingHabit : IDrivingHabit
    {
        public void Drive(Driver driver, int hours)
        {
            driver.MilesDriven += hours * 55;
        }
    }
}

namespace DesignPatterns.StategyPattern
{
    public class RecklessDrivingHabit : IDrivingHabit
    {
        public void Drive(Driver driver, int hours)
        {
            driver.MilesDriven += hours * 80;
            driver.SpeedingTickets += hours * 2;
        }
    }
}

And to see how this would actually be used, we'll take a look at the unit tests.
using NUnit.Framework;

namespace DesignPatterns.StategyPattern
{
    [TestFixture]
    public class When_a_cautious_driver_is_on_the_road_for_3_hours
    {
        private Driver granny;

        [SetUp]
        public void EstablishContext()
        {
            granny = new Driver(new CautiousDrivingHabit());
            granny.Drive(3);
        }

        [Test]
        public void Should_move_a_total_of_165_miles()
        {
            Assert.That(granny.MilesDriven, Is.EqualTo(165));
        }

        [Test]
        public void Should_not_receive_any_speeding_tickets()
        {
            Assert.That(granny.SpeedingTickets, Is.EqualTo(0));
        }
    }

    [TestFixture]
    public class When_a_reckless_driver_is_on_the_road_for_3_hours
    {
        private Driver speedRacer;

        [SetUp]
        public void EstablishContext()
        {
            speedRacer = new Driver(new RecklessDrivingHabit());
            speedRacer.Drive(3);
        }

        [Test]
        public void Should_move_a_total_of_240_miles()
        {
            Assert.That(speedRacer.MilesDriven, Is.EqualTo(240));
        }

        [Test]
        public void Should_receive_a_speeding_ticket_for_every_half_hour_on_the_road()
        {
            Assert.That(speedRacer.SpeedingTickets, Is.EqualTo(6));
        }
    }
}

Pretty simple, eh? Let's take a look at how we would do this in Ruby.

Ruby Example

First, we'll create the Driver object. So far it doesn't seem that much different from the C# example.

class Driver
  attr_accessor :driving_habit, :miles_driven, :speeding_tickets

  def initialize(driving_habit)
    @driving_habit = driving_habit
    @miles_driven = 0
    @speeding_tickets = 0
  end

  def drive(hours)
    @driving_habit.drive self, hours
  end
end

Next we're going to jump straight into creating our Strategy classes. Since Ruby supports duck typing, we really have no need to create an object that defines what our strategies should look like. We could define a base DrivingHabit object that included a drive method that we can override in our implementations, but that would be producing extra code for no added benefit. We're OK as long as the object we pass in has a drive method that matches the required signature.
class CautiousDrivingHabit
  def drive(driver, hours)
    driver.miles_driven += hours * 55
  end
end

require 'lib/driver'

class RecklessDrivingHabit
  def drive(driver, hours)
    driver.miles_driven += hours * 80
    driver.speeding_tickets += hours * 2
  end
end

And now we'll take a look at the corresponding unit tests.

require 'lib/cautious_driving_habit'

describe "When a cautious driver is on the road for 3 hours" do
  before(:each) do
    @granny = Driver.new(CautiousDrivingHabit.new)
    @granny.drive 3
  end

  it "should move a total of 165 miles" do
    @granny.miles_driven.should == 165
  end

  it "should not receive any speeding tickets" do
    @granny.speeding_tickets.should == 0
  end
end

require 'lib/reckless_driving_habit'

describe "When a reckless driver is on the road for 3 hours" do
  before(:each) do
    @speed_racer = Driver.new(RecklessDrivingHabit.new)
    @speed_racer.drive 3
  end

  it "should move a total of 240 miles" do
    @speed_racer.miles_driven.should == 240
  end

  it "should receive a speeding ticket for every half hour on the road" do
    @speed_racer.speeding_tickets.should == 6
  end
end

Simplifying the Strategy Pattern

For simple scenarios (such as this one) where the strategies are extremely basic, we can actually remove the need for the strategy classes themselves. In C#, we just need to add an overloaded Drive method that accepts a delegate matching the Drive method in the IDrivingHabit interface. (We could have changed the constructor to do this instead.)

public void Drive(int hours, Action<Driver, int> drivingHabit)
{
    drivingHabit(this, hours);
}

And to use it, we pass in a lambda expression that contains our logic.

speedRacer.Drive(3, (driver, hours) =>
{
    driver.MilesDriven += hours * 80;
    driver.SpeedingTickets += hours * 2;
});

We can do something very similar in Ruby. We'll create a new method that accepts a code block as a second parameter. Note that we had to create a new name for this method (Ruby doesn't support method overloading).
def drive_using_habit(hours, &driving_habit)
  driving_habit.call self, hours
end

And the code using this:

speed_racer.drive_using_habit(3) do |driver, hours|
  driver.miles_driven += hours * 80
  driver.speeding_tickets += hours * 2
end

This is awesome stuff. I don't know a ton about Ruby and I've always wondered how different patterns are done in Ruby. Good to see the tests too! Looking forward to the rest of the series.

It is pretty neat seeing examples side by side in different languages. We're working through the Ruby Koans at our bi-weekly get-togethers, then moving onto a Rails app. I'm sure these posts will come in handy! Keep it up John.
http://www.gembalabs.com/2009/07/03/comparing-design-patterns-in-ruby-and-c-the-strategy-pattern/
tafadzwa manzunzu (3,601 Points)

Help please

Create a public method in the NumberAnalysis class called NumbersGreaterThanFive that returns an IEnumerable<int>. Inside the method, use LINQ query syntax to return only the numbers in the _numbers field that are greater than 5.

using System.Collections.Generic;
using System.Linq;

namespace Treehouse.CodeChallenges
{
    public class NumberAnalysis
    {
        private List<int> _numbers;

        public NumberAnalysis()
        {
            _numbers = new List<int> { 2, 4, 6, 8, 10 };
            IEnumerable<int> NumbersGreaterThanFive = from n in _numbers
                                                      where n > 5
                                                      select n;
        }
    }
}

2 Answers

Calin Bogdan (14,614 Points)

Howdy! Instead of creating a method, you created an enumerable with that name in the constructor. You should create a method that returns that enumerable. Good luck!

SPOILER

public IEnumerable<int> NumbersGreaterThanFive()
{
    return from n in _numbers
           where n > 5
           select n;
}

Allan Clark (Treehouse Moderator, 10,771 Points)

First, you are required to create a new method for this task. You currently have your LINQ query in the constructor method. Given that the challenge gives you the name and return type, the method signature should look like this:

public IEnumerable<int> NumbersGreaterThanFive()

Your query looks correct, so I will leave the implementation up to you! Happy coding!!!

tafadzwa manzunzu (3,601 Points)

Thank you very much.
https://teamtreehouse.com/community/help-please-121
Introduction

In previous articles I explained how to open a PDF file in the web browser in ASP.NET using C# and VB.NET. To implement this concept, first create a new website and add one of your existing PDF files to it. Then open the Default.aspx page and write the following code.

Now open the Default.aspx code-behind file and add the following namespaces.

C# Code

Once the namespaces are added, write the following code.

VB.NET Code

Demo

Download Sample Code Attached

40 comments:

nice post.. But if User do not want to let the others to download/Save .pdf file from Website.. then... How is it possible??

how to redirect to siteAnalytics.pdf.. where we have to create pdf file .... Thanks....

but how i can add text in pdf files using itextsharp.dll

How to disable the Right Click, print option and Ctrl key on pdf files

in this page "facebook" pops up while on mouseover effect and then gets actual position after mouseover completes. I want to add such effect in my website but can not identify which control is used over there. plz let me know about it with detail.

How to disable right click in pdf file on browser

How to display xml file in browser using asp.net with c#..

sir how can i get data from web site but this site is open only in Internet Explorer and then how can i automatically open internet and go to login using sending credentials after that i want to extract data and store into data base using C# lang. please help me to achieve

Sir, i dont want to download pdf/doc files, only want to view the files on click event of a button that is stored in sql server 2008 in bytes.. type of column in sql server is varbinary(max) .... pls help...

its very useful to me sir. how to disable the save option in the pdf file. thanks in advance.

hi, its working only for pdf file, but wants to show all file as like doc, xls file.

Hello, Can the file name be set while saving the file using the save button?
how to show multiple pdf files in a slider? i hv multiple pdf files in a folder n i need to show all files in a slider with paging (in C#). Pls give me some suggestion... Parth :)

post is good but i have a problem that i want to open docx file on the browser without using microsoft interop library. plz help me

How can I get the page number which i'm reading from the browser? I need the current page to be extracted as a separate pdf file

Thx for the code and sample. How would you show the PDF inside a jquery dialog box? I have the PDF bytes() but I don't know how to show them inside a DIV/dialog box. Thanks

i have scanned 10 personal files of employees. based on the id number the pdf link should open. thnx plz give suggestions

i want to display that data in pdf file that are shown in a repeater using asp.net with C#. what is the method for this..

how to open any format file?

how to open two files this way? i am getting error in opened file while i am trying to open two files at the same time

How to Print PDF File using Window.Print("D:/a.pdf")?

Is there any possibility to get the currently viewing pdf file page number in a text box? Which means when i am viewing the 4th page of the pdf file, my text box should show 4. after scrolling to next page, text box shd show 5.. like wise i wanted. kindly help me.. we are in urgent to get this code. we found possibility of inserting text, link and all options in pdf.. We can also able to find total pages count. But how to capture current pdf page number in text box.

How can I get the page number which i'm reading from the browser? I need the current page number to be extracted in a text box.. kindly help me with the code.

Thank you suresh so much, U r just brilliant

Great help. But it requires PDF Reader to be installed on client machine. Also I dont want anyone to download my files. Is there any pdf viewer which can help me in this? Thanks in advance

it not working with update panel

But its giving an error when size of pdf file is large

How to stay on same page and open a dialogue to save pdf file. Please reply fast urgent. Thanks in advance

It is working nice. But I want to open pdf file in panel using choose file control instead of iframe src in asp.net using c#

hi, can you please help me?? I'm newbie in ASP.net. I'm trying to open a PDF file using ASP.net but I can't seem to work it out. I want to view the PDF file which is stored in (1) local machine and (2) web server. Thanks.

Thank you so much for posting this! I was using this on IIS6 without issue but when I moved to IIS7.5 we began seeing problems with Internet Explorer displaying gibberish when trying to load another PDF or navigate to another page. Found that I needed to add the following at the end of the above to correct the problem: Response.End()

The page where the report is triggered is closed as soon as the user closes the report in the browser.

Awesome post - helped me get rolling on a back office application I am developing for my business. Thanks a ton.

1) How do i disable Save/print option for pdf file? i want the user can only view the file... 2) How can i view doc/docx/xls file also in browser? Please help. Thanks in advance

Have tried in Mozilla Firefox and IE. this is not working. Do you have any solution?
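The post's code listings themselves are not reproduced above. As a rough sketch of the approach the post describes — the file name SampleFile.pdf and the page class name are illustrative, not taken from the original — a C# code-behind that streams a PDF inline to the browser typically looks like this:

```csharp
using System;
using System.Web;

public partial class Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Illustrative sketch: serve an existing PDF from the site root
        // inline in the browser rather than as a forced download.
        string path = Server.MapPath("~/SampleFile.pdf");
        Response.ContentType = "application/pdf";
        Response.AddHeader("Content-Disposition", "inline; filename=SampleFile.pdf");
        Response.WriteFile(path);
        Response.End(); // as one commenter notes, needed on IIS 7.5
    }
}
```

Changing "inline" to "attachment" in the Content-Disposition header is the usual way to prompt a save dialog instead of displaying the file in the browser.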
https://www.aspdotnet-suresh.com/2012/11/aspnet-open-pdf-file-in-web-browser.html?showComment=1357316833386
acl_from_text()

Create an access control list (ACL) from text

Synopsis:

#include <sys/acl.h>

acl_t acl_from_text( const char *buf_p );

Since: BlackBerry 10.0.0

Arguments:

- buf_p - A pointer to a buffer that contains the text form of the ACL that you want to convert.

Library: libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The acl_from_text() function converts an access control list from text form into the internal form of an ACL. This function accepts the long and short text forms of an ACL:

tag_type:qualifier:permissions

In the long form, tag_type is user, group, other, or mask, the qualifier is the name or numeric ID of a user or group, and the permissions are in the form rwx, with a hyphen (-) replacing any permissions that aren't granted. In the short form, you can abbreviate the tag_type to its first letter, and the permissions can contain at most one each of r, w, and x, in any order.

When you're finished with the resulting ACL, you should call acl_free() to release it.

Returns:

A pointer to the internal representation of the ACL, or NULL if an error occurred (errno is set).

Errors:

- EINVAL - The contents of the buffer couldn't be converted into an ACL.
- ENOMEM - There wasn't enough memory available to allocate for the ACL in working storage.

Classification: This function is based on the withdrawn POSIX draft P1003.1e.

Last modified: 2014-11-17
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/a/acl_from_text.html
Java Programs - Java Beginners

Could you please write the following programs for me. Thanks in advance.
1) Write a program that reads a string composed of 12....
2) Write a program that reads a string s, then another string p contained in s.

Java Programs

Hi, what are Java programs? How do I develop a business application in Java technology? Is there any tool to help a Java programmer in the development of Java programs? Thanks

Java programs on student assessment

1) Write a Java program that reads the details of a student and does the mark assessment.
(i) Input the name of the student and the student id from the user. (Use a String variable to hold the student

Array programs

Write a program in Java to input 10 numbers in an array and print out the Armstrong numbers from the set.

import java.util.*;
class ArmstrongNumbers {
    public static boolean find(int num

Java programs

Please help with this series:
55555
55554
55543
55432
54321

Installing programs over a network using Java

Hi, i want to write a java program that will allow me to install programs from a server to a client machine. Any help will be appreciated. Thanks

How to access the "Add/Remove Programs" list in Control Panel using a Java Program? - Java Beginners

Dear Sir, I'm very interested in creating java programs innovatively. I've planned to write java code for accessing the Add/Remove Programs list.

Java programs

Take one file as input, exam.txt, containing:
1,2,3,4,5
6,7,8,9,10
11,12,13,14,15
and give output in a new file, result.txt, containing:
1,6,11
2,7,12
3,8,13
4,9,14
5,10,15

Java programs

A coin is tossed for three sets of times, i.e. 10, 100, 1000. Write a program to print how many times heads and tails occur in each set of tosses and the total number of heads and tails at the end. Use method calls.

Java programs

Implement a Lisp-like list in Java to perform basic operations such as car, cdr, cons.
Hi Friend, try the following code:

import java.util.*;
class Lisp {
    public int car(List l){
        Object ob=l.get(0
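For the Armstrong-number question above, whose code snippet breaks off mid-method, a complete sketch might look like the following. It reuses the ArmstrongNumbers class and find method names from the truncated snippet; the sample numbers are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

class ArmstrongNumbers {

    // An Armstrong (narcissistic) number equals the sum of its digits,
    // each raised to the power of the digit count, e.g. 153 = 1^3 + 5^3 + 3^3.
    public static boolean find(int num) {
        int digits = String.valueOf(num).length();
        int sum = 0;
        for (int n = num; n > 0; n /= 10) {
            sum += Math.pow(n % 10, digits);
        }
        return sum == num;
    }

    public static void main(String[] args) {
        // Ten sample inputs; the question asks to print the Armstrong numbers.
        int[] numbers = { 153, 120, 370, 371, 25, 407, 7, 100, 500, 371 };
        List<Integer> armstrong = new ArrayList<>();
        for (int n : numbers) {
            if (find(n)) {
                armstrong.add(n);
            }
        }
        System.out.println(armstrong); // prints [153, 370, 371, 407, 7, 371]
    }
}
```

Note that single-digit numbers are trivially Armstrong numbers (7 = 7^1), which is why 7 appears in the output.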
http://roseindia.net/tutorialhelp/comment/99275
need to run hello world in Eclipse 3.2
hi, Need to run the application in eclipse 3.2. can anybody help me in this regard for running the HelloWorld application.

Tutorial doesn't work
Hello, a few things seem wrong in this tutorial:
- how are we supposed to get that Struts 2 tutorial? we never altered index.html from the blank war file.
- the build.xml creates the "classes" folder under the src folder, and not under WEB-INF; do

package name in struts.xml
The HelloWorld.jsp has to be located in struts2tutorial\roseindia\pages NOT in struts2tutorial\pages as mentioned. The package namespace /roseindia in struts.xml anticipates HelloWorld.jsp in struts2tutorial\roseindia\pages. It took me a while to fi

Struts 2 tutorial clarification
HI,
- We have added the index.html file to add the link to call all the tutorials present. You can remove index.html from your project file.
- the build.xml creates the "classes" folder under the src folder... Actually the build.xml file first compiles

Running struts 2 application
Hi Karthik, Run the Struts 2 application on a Tomcat 6 server. Thanks

this app is not working.....
hey let me tell you one thing...... the people who are new to struts2 will try to execute the application but this is not working. it disappoints me a lot. ............Madhu

Running code
Hi Madhu, You download the code and then install it on Tomcat. It will work. If you find an error please post here. Thanks

Error on Struts 2 sample code
I am trying to execute this in Tomcat 5 and i am getting this error.
INFO: Installing web application at context path /struts2tutorial from URL file:D:\Tomcat 5.0\webapps\struts2tutorial
Jul 30, 2007 7:12:26 PM org.apache.catalina.core.StandardCo

Struts 2 Tutorial
I know that the full integrated application works but the application from the tutorial doesn't. It's a good tutorial but maybe some more examples could be added (for testing and learning Struts 2) without errors. Thanks

View the Comments
this tutorial is not perfect. first of all the first jsp is not there, ie homepage.jsp..... rest all is fine but plz provide the full information. one more thing missing from this, for which i had faced the problem, is the Xalan.jar file is mi

struts2 in eclipse 2
Here is a work around to get it working in eclipse 3.2. Get the WTP plugins for eclipse. Create a DynamicWebProject. The below is what the project structure will look like:
projectname --> Name of the project in eclipse
  src
    com.yourcompname

Mr
your configuration does'nt work. I tested twice.

tutorial
this is tutorial ??? step by step ? sorry but I think that is some sh... . When we create a tutorial we must do this well ... and without problems, then the application must run (here is that ?)

Struts 2 code download
Hi Mahendra, I have just added a link to download the full integrated application. You download the application and then deploy it on your server. We have fully tested the application so it will 100% work. Thanks Deepak Kumar

The machinery mapping *.action to struts.xml file
The article says that "By default web.xml file of struts blank application is configured to route all the request for *.action through org.apache.struts2.dispatcher.FilterDispatcher." and the configuration of web.xml is highlighted. The question

Where to keep the struts.xml
struts.xml - There is no Action mapped for action

the ant doesn't work?
Hello, I've tried the download code, there is no problem, but when I rebuilt the project using the ant build.xml, the execution of the batch is ok but the project doesn't work on tomcat. Did you have the same problem? Have you solved it? Thank you

An error
I've got an error listed below after running the example downloaded from your webpage (got the same while trying to do this step by step using the tutorial). And yes, I'm using tomcat 6.0.
Struts Problem Report
Struts has detected an unhandled except

Reason for an unhandled exception:
Hi, when I followed the tutorial by creating files as explained, I got the below error when I ran the application.
Struts Problem Report
Struts has detected an unhandled exception:
# Messages: There is no Action mapped for action name

Error
Hi, I downloaded your zip file and extracted it to my webapps folder in Tomcat 5.5. However this example does not seem to work and i get a 404 error (other apps with struts 1.1 work fine). Even the examples given on Apache's Struts site do not work. Wi

after Rebuild it is not working
Hello, I've tried the download code, there is no problem, but when I rebuilt the project using the ant build.xml, the execution of the batch is ok but the project doesn't work on tomcat. Did you have the same problem? Have you solved it? Thank you

Working
Download the source code from the link provided and place the extracted content in the webapps folder of tomcat and it will work.

The solution
Problem: Struts has detected an unhandled exception: Messages: There is no Action mapped for action name HelloWorld.
Solution: You should also modify the index.html, which the author may have omitted. The problem exists in the following: <M

struts 2 hello world application
hi, after i download and run i got the error msgs below.
C:\Tomcat\webapps\struts2tutorial\WEB-INF\src>ant
Buildfile: build.xml
clean:
[delete] Deleting directory C:\Tomcat\webapps\struts2tutorial\WEB-INF\classes
[mkdir] Created dir:

Null Pointer exception in Struts2HelloWorld
I downloaded "struts2-blank-2.0.9" and configured it as per this site to run the Struts2HelloWorld application. The build was successful.
----------------------
Buildfile: build.xml
clean:
[delete] Deleting directory C:\Program Files\A
I had follow the tutorial and also update the index.html Anyone got the solution? Thanks Struts 2 Hello World Example Issue Dear All, The only missing link in successful execution of this application is that you need to modify the index.html, which is not mentioned in the steps. Change the index.html file, and it works. Here is the file: ****** index.html... *** com.opensymphony.xwork2.ActionSupport Hi , "class file has wrong version 49.0, should be 48.0" This error is coz of the jdk . Install jdk 1.5.x and dont forget to set your classpath. Error still still get the error message "There is no Action mapped for action name HelloWorld" even after I used the new index.html file. NullPointerException (STRUTS 2) WHEN WE OPEN ANY JSP PAGES WHY IT SHOW ON BROWSER : java.lang.NullPointerException org.apache.struts2.views.jsp.TagUtils.getStack(TagUtils.java:58) plz sent me response hurry thanks & regards Build error solution Hi, I had the same problem and solution of it was to install JDK 1.6, my old version was 1.4 Is it correct? Hi, are you sure that struts.xml is correct? I think entries should follow in a different order i want some latest reviews Hello....this is nice site for learners even... am thankful to this site... Thanking You, Regards, Ramana Reddy Plz help I've placed my struts.xml file in classes directory and do some modifications in build.xml as it generate errors. 1. target name="prepare" copy file="classes/struts.xml" todir="src/classes" 2. target name="clean" delete dir="src/classes" m Hi Stefan Stefan, Download the source code from the above given link and see how index.html updated... hope you manage to get this work Harshi HTTP Status 500 I modified the index.html and launched this is not mentioned either and fails on tomcat 6.0.14 so I tried on 5.5.23 and it worked. Not sure why, but it runs! 
Struts2 - action not find I am using struts2 in netbeans.While running the programm i am getting servlet Exception action not mapped -------------- index.jsp ---------- regarding helloworld application hi i too got the same error messages when ant is run D:\JAVA\Tomcat 5.5\webapps\struts2tutorial\WEB-INF\src>ant Buildfile: build.xml clean: [delete] Deleting directory D:\JAVA\Tomcat 5.5\webapps\struts2tutorial\WEB-IN F\classes [mkdi ANT? "I am assuming that you have already installed ant build tool on your machine." Why are you assuming that? this ant-thing has not been mentioned earlier and there is no link to it, correct me if I am wrong. ANT? What is this ant-thing? there is no link to it? index How should I change the index file? What more do I have to do to make this work? Has anyone managed to get this to work, or does anyone know a working tutorial plz help. Thanks Stefan BUILD FAILED I too got the same error... ================================================= C:\Program Files\Tomcat 5.5\webapps\struts2tutorial\WEB-INF\src>ant Buildfile: build.xml clean: [delete] Deleting directory C:\Program Files\Tomcat 5.5\webapps\ I got a struts problem reports like this Struts Problem Report Struts has detected an unhandled exception: Messages: There is no Action mapped for action name HelloWorld. -------------------------------------------------------------------------------- Stacktraces There is no Action Mapped Trying to follow this tutorial but I too get # Messages: There is no Action mapped for action name HelloWorld. Stacktraces There is no Action mapped for action name HelloWorld. - [unknown location] com.opensymphony.xwork2.DefaultAct try that.. I found the same problem and i could continue with tutorial configuring DevMode to false. Then the message disspeared. I hope be helpfull There is no Action mapped for action name HelloWor Hi, I too got the same error: There is no Action mapped for action name HelloWorld. 
Best regards, Mounir Error I am unable to download the source code. It's giving the invalid archive file error. kindly solve the problem ASAP. thanks, Nagesh Bemini at com.opensymphony.xwork2.DefaultActionProxy.prep can some body explain why Iam getting the following error "SEVERE: Could not find action or result There is no Action mapped for action name testAction. - [unknown location]". I guess there is lot of mismatch with the jars that apache published, a still getting same error I tried the below change, but I'm still getting the same error. Does this thing work? Build war file I wonder why the ant doesn't build the war file so that I can work within Eclipse and deploy the war file to TomCat webapp directory, instead of working in TomCat webapp directory. Only jar file is built but war is mention in 'dist' target. solution I found the solution. The problem is the namespace. This was set as /roseindia in the struts.xml file: <package name="roseindia" namespace="/roseindia" extends="struts-default"> Instead, do this: <package name="roseindia" extends="struts-d Error while trying.. Hi, After i did everything i got the status error,"There is no Action mapped for action name HelloWorld." Can anybody tell why this doesnt maps? Also, I dont use ant. But i do it using JDK1.5. Is there any problem using JDK1.5 to compile it Error Hi, After i did everything i got the compilation error. that is C:\Program Files\Tomcat 5.5\webapps\struts2tutorial\WEB-INF\src>ant Buildfile: build.xml clean: [delete] Deleting directory C:\Program Files\Tomcat 5.5\webapps\struts2tutoria ANT Build fails When I try to run my "ant" command. I am getting the following message.. 
BUILD FAILED java.lang.NoSuchMethodError: org.apache.tools.ant.util.FileUtils.getFileUtils()Lorg/apache/tools/ant /util/FileUtils ANT Build fails C:\tomcat5028\webapps\struts2tutorial\WEB-INF\src>ant Buildfile: build.xml BUILD FAILED java.lang.NoSuchMethodError: org.apache.tools.ant.util.FileUtils.getFileUtils()Lorg/apache/tools/ant /util/FileUtils; at org.apache.tools.ant.taskd try ... Make sure you stop tomcat then remove the jar file you built .. C:\Program Files\Apache Software Foundation\Tomcat 5.5\webapps\struts2tutorial\WEB-INF\lib then restart with the required change and your good to go no Action mapped for action name HelloWorld Change the index.html file it will work. the tutorial doesnt say that explicitly. The file is available at the source code struts2tutorial.zip There is no Action mapped for action name HelloWor The solution was posted above: This was set as /roseindia in the struts.xml file: <package name="roseindia" namespace="/roseindia" extends="struts-default"> Instead, do this: <package name="roseindia" extends="struts-default"> wont change index.html Hi, I have a weird problem. When I go to the address: the application works fine, however if I go to then the application redirects me to the implement extra features every thing is good but implement extra features in jdbc it not enough so pls do this . Re: There is No Action Mapped for Action Name Hi Andrew I tried the solution posted by you to remove the namespace entry in the package tag. I still see the same error. Was anyone else successful in resolving this error? Thanks Krishna Thanks for your posted solution for Unkown Action I was stucked in the similar problem that Action is not mapped for unknown action error. I have removed the namespace from struts.xml and the application runs properly. No Action mapped I just downloaded the source code and placed it under tomcat/webapps. 
when i try to run the application I get this message: Struts Problem Report Struts has detected an unhandled exception: Messages: There is no Action mapped for action name H Regarding ant build tool In the helloworld application while building the application ant build tool is used...can any one tell me about this installation...where to install...what is the settings we have to do...i have istalled this from apache site...but where to install i whenever I put web.xml, my application not start I am new to struts 2. I have problem to start the sturts 2 framework. Whenever I put web.xml under WEB-INF direcotry, the application starts to fail, but it works when I remove web.xml. Does anyone oan tell why? Thanks, -- Allen -- Help me My Struts2 application not run. it gives 404 error.Why? i am just download file and put it to web app for sample application but it gives the resoureces not found 404 error. i am using tomcat 5.0 java 1.5 Where the problem please tell help i didn't k help sample app Please can someone fix the cannot find action error? i have tried the things the previous commenter suggested and it is still not working thanks thanks a ton dude...i had already spent 1 day on it no Action mapped for action name HelloWorld. Trying to follow this tutorial but I too get # Messages: There is no Action mapped for action name HelloWorld. Stacktraces There is no Action mapped for action name HelloWorld. - [unknown location] com.opensymphony.xwork2.DefaultActionProx This is fine... just forget the index.html, it's here: [code] <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <html> <head> <title>RoseIndia.Net Struts 2 Tutorial</title> </head> <body> <div align="center"> <center> < mrs Thank u very much for ur comment.....i was facing the same prob.....after removing namespace its wrking fine.....thank u. Struts 2 Example When i tried to run my first Struts 2 Application I got this error. 
org.apache.jasper.JasperException: java.lang.NullPointerException Developer I'm having problems running this. I've followed the instructions closely, but get a message "There is no Action mapped for action name HelloWorld." THANKS FOR THE STARTUP EXAMPLE The error comes because we are trying to put the jar file in lib directory as well as the compiles classes in the classes directory .. please remove all the directory from the classes directory if u are using ant... Excellent Tutorial It was really inspiring for me to go through this tuorial.I would be really proud to & eager to see more tutorials like this to improve our knowldge. Struts problem Hi! I'm having a problem when i test the application "Hello World Application" When i start the application on Tomcat, it appears the following message: Struts Problem Report Struts has detected an unhandled exception: # Messages: There is struts demp it is good and i want to learn from u r site request My Struts2 application not run. it gives 404 error.Why? i am just download file and put it to web app for sample application but it gives the resoureces not found 404 error. i am using tomcat 5.5 java 1.5 Where the problem please tell me? problem loading the page everytime i put the localhost 8080/strutstutorial it takes me to some example folder and runs the other helloworld.jsp There is no Action mapped for action name HelloWor The tutorial does not specify the changes to be made in the index file. Please download the source file given at the end of this tutorial, "struts2tutorial.zip". Your default index file contains "URL=example/HelloWorld.action" which should be "URL=ro There is no Action mapped for action name HelloWor I followed all the instructions carefully, yet getting the following error! "There is no Action mapped for action name HelloWorld." 
somebody suggested removing all directories from classes, the one in WEBINF is empty and the one in src has net( Struts has detected an unhandled exception Struts has detected an unhandled exception: Messages: No result defined for action com.coin.fk.SalesShipmentAction and result input File: file:/H:/sample/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/coinsprakash/W Fix Plz check your web.xml welcome file list to be <welcome-file>index.html</welcome-file> edit index.html and correct the element to be <a href="net/roseindia/HelloWorld.action">Run Struts 2 Hello World Application</a> It should work fine after th doubt Respected sir i run struts2tutorial Helloworld example in above tutorial.but it will give errors something like this "There is no Action mapped for action name HelloWorld".so please give me the correct solution for this erro Filter start Error in Tomcat I wonder how come any body posted question on this issue. its a official problem from struts, I was facing this problem deploying hello world application finally found the solution to it. Refer: There is no action mapped for HelloWorld Check for the existence of index.html in /webapps/struts2tutorial. If not present, copy from the source code download or create one. Getting Exception starting filter struts2 EVERE: Exception starting filter struts2 Unable to load bean: type:com.opensymphony.xwork2.ObjectFactory class:org.apache.struts2.spring.StrutsSpringObjectFactory - bean - jar:file:/C:/Ranjith/J2EE%20Enviornment/MyEclipse_WorkSpace/struts2tutorial/W Getting error while running the example Struts Problem Report Struts has detected an unhandled exception: # Messages: There is no Action mapped for action name HelloWorld. Stacktraces There is no Action mapped for action name HelloWorld. - [unknown location] com.opensymphony. 
Compilation Error
[javac] Compiling 1 source file to C:\Program Files\Apache Software Foundation\Tomcat 5.0\webapps\struts2tutorial\WEB-INF\src\classes [javac] C:\Program Files\Apache Software Foundation\Tomcat 5.0\webapps\struts2tutorial\WEB-INF\src\java\net\rosei

comment on tutorial
This is a simple and beautiful tutorial.

Help for struts
Struts 2 Hello World Application: following the instructions for setting up the application resulted in failure. I found I had to move the pages folder under WebContent to get it to work.

404 Error
Hi! I'm a new learner of Struts and liked the content on this site, but I'm facing a problem in running it, as I receive a 404 error. I guess some users have received the same. Can someone please help? Also, how can I see the solutions for previous queries? Thanks in

downloaded sample code not working
When I start Tomcat, it gives the following error in the log: 2009-10-03 16:55:03 StandardContext[/struts2tutorial] Exception starting filter struts2 javax.xml.transform.TransformerFactoryConfigurationError: Provider org.apache.xalan.processor.Transfor

error in struts with netbeans
How do I solve the exception "package com.opensymphony.xwork2 does not exist"?

"There is no Action mapped for HelloWorld" issue
Instead of, type. This will show the newly created page. Alternatively, update struts2tutorial\index.html to have the URL as "roseindia/HelloWorld.action".

getting error in Struts 2 Hello World Application
Thanks for the example, first of all. I deployed the example successfully; after that, when I clicked on the "Run Struts 2 Hello World Application" link I got the error below: org.apache.struts2.dispatcher.Dispatcher serviceAction SEVERE: Coul

"no Action mapped" error
I fixed the "no action mapped" error by editing index.html and changing the meta tag as follows: <META HTTP- The tutorial doesn't mention changing it, and as-is it redirects to th

COMMENT
It's beautiful, but it could be more helpful if you provided an example based on some IDE (Eclipse). Defining each and every step with respect to the IDE would be more helpful. But still it's very nice. Thanks.

faced difficulty in running first struts2 application
Thanks for the tutorial. The application gave the message "action not found". After a long struggle, and after correcting the package name to net.roseindia, I could run it. If the correction is justified, make the necessary changes so that other learners don

Thanks for Solution
Hi, being a newbie I was struggling for a long time to find a solution, doing Google searches and trying out many things, but nothing worked. Thanks for this simple solution. Regards, Vikash Anand

Good but incomplete
The tutorial out here is not correct and there are some loose gaps which a newbie has to figure out. Anyway, after spending a good number of hours I was finally able to run the example. Two important things are missed out here: 1> in the web.xml a

After session timeout on an action, the user must be redirected
Problem: we need to check whether the session has timed out or the server restarted; if the session is lost in any way, then the user must be redirected to the login page with the message "Session Expired. Please Re-login." The way I tried to resolve the above problem is: I wrote a custom in

struts has detected an unhandled exception
When following the tutorial I get the following error message: Struts has detected an unhandled exception: # Messages: There is no Action mapped for action name HelloWorld.

overcoming errors
I don't know; I usually get errors of 404 file not found and error 500. Suggest me a solution for this.

strut2 with tomcat 5.5
Error: 'ant' is not recognized as an internal or external command in the command prompt for the jar file.

The problem still persists
I've tried every solution that has been mentioned, but the problem is still there. Can someone help me?

Thanks for the help: problem solved!
Thanks for the help. I just followed Bill Edwards' and Andrew's tips and it worked! You just need to remove the namespace=/roseindia to make it blank.

Feedback
Very nice and informative, to learn Struts2 clearly. Thanks. Regards, Veekshith

no action map
This tutorial is good. Make sure to include web.xml, because without it, it gives errors.

software engineer
Nice tutorial; good and simple coding standards.

did this code work?
com.opensymphony.xwork2.ActionSupport; where is this package?
http://roseindia.net/tutorialhelp/allcomments/4063
I was conducting an ASP.NET 4.0 training for a webdev team recently. The participants were working on a project where they were supposed to display a statistical report using charts. With ASP.NET 3.5 this was somewhat challenging, since no default chart controls were provided in 3.5. In ASP.NET 4.0, however, adding charts to applications is much simpler.

In ASP.NET 4.0, the Chart control is provided under the namespace 'System.Web.UI.DataVisualization.Charting'. It provides a Chart class which serves as a root class for all chart controls. This class defines the 'Series' property of the type SeriesCollection, which defines a grouping of related data points and attributes for the series. The Chart class also defines a chart area collection of the type ChartArea, which represents the chart area on the chart image.

Step 1: Open VS2010 and create an ASP.NET 4.0 blank web site; name it 'ASPNET40_Charting'. To this web site, add a new WebForm and name it 'WebForm_Charts.aspx'.

Step 2: Drag-drop a CheckBox, DropDownList and Chart control onto the webform as shown below. Set the control names as below:

Step 3: In the web site, add a new class file and name it 'DataClasses.cs'. Add the following classes in it:

Step 4: In the code-behind, write the following code in the loaded event. The above code gets all chart names from the enumeration 'SeriesChartType' and these names are displayed in the DropDownList.

Step 5: In the SelectedIndexChanged event of the DropDownList, write the following code (note: please read the comments summary).

Step 6: Run the application and select the chart name from the DropDownList; the result will be as shown below. Now check the 'View as 3D' check box and select the chart from the DropDownList. The result will be as shown below:

Conclusion: Since developers are provided with out-of-the-box charts in ASP.NET 4 web development, it is easy for developers to represent statistical and analytical reports.
Download the Source Code

3 comments:

Thanks man, you're rocking :)

Thank you for such a great article! Happy coding. Thanks and regards, Rabbil
https://www.devcurry.com/2011/11/aspnet-40-chart-control-to-display.html
This is the Z Solid database adapter product for the Z Object Publishing Environment.

*** IMPORTANT ***

This product is distributed as a NON-BINARY release! This product requires a compiled Python extension that is NOT included as a binary with this release. You must build or install the required extensions using the instructions below before the product will work properly!

The Z Solid database adapter uses a Solid (i.e. ODBC) extension module, sql. Before using the Z Solid Database Adapter, you must build the sql extension from the sources included in the Z Solid Database Adapter distribution. The source files and associated files required to build the required Solid module are included in this product distribution. Follow the steps below to build the extension on UNIX platforms. Note that the included files do not support building on win32 platforms at this time.

Change to the /src directory of your ZSolidDA product directory and issue the following commands:

    make -f Makefile.pre.in boot
    make

Note that if the Python interpreter that will be used to run Zope is not run with the command python, then you must supply the command used to run Python as an option on the first make command. For example, if you run Python with the command python1.5.1, then use:

    make -f Makefile.pre.in boot PYTHON=python1.5.1

This should create the file sql.so in your src directory. This file is a dynamically-linked library. Some versions of Unix (e.g. HP/UX) use a different suffix for dynamically-linked libraries.

If errors occur when trying to build the extension, you may need to modify the Setup file to ensure that the correct Solid include and lib directory options are being passed to the compiler. For example, it has been reported that Solid 2.3 wants the library solcli to be linked rather than scllux22.a.

Next, start Python and type "import sql" to make sure you can import the newly built module without problems.
If Python is unable to import the module, you may need to try rebuilding the module after adding the switch "-lgcc" to the Setup file.

Finally, copy the output file sql.so up one directory into your ZSolidDA product directory and restart your Zope installation to complete the product installation.

The extension module, sql.c, was generated using the Simplified Wrapper and Interface Generator (SWIG). If you encounter problems building the extension module, you might try re-running swig. Assuming that you have swig installed, you can re-swig the module with the command:

    swig -python -o sql.c sql.i

The connection string used for a Z Solid Database Connection consists of the server name followed by the user ID and password, separated by spaces. The user ID and password may not contain spaces. For example, a connection string to connect to a server name of "upipe solid" with user ID "jim" and password "spam" would be:

    upipe solid jim spam

The server name may be a network name, such as "upipe solid" or "tcpip spiff 1313". A logical data source name may also be used. Data source names are defined in the [Data Sources] section of the solid.ini file for the database being used, as in:

    [Data Sources]
    demo=upipe solid, Default DB at tarzan.digicool.com

Note, however, that to use logical data source names, it is necessary to set the SOLIDDIR environment variable to the location of the solid database being used in your Zope PCGI resource file, as in:

    SOLIDDIR=/usr/local/solid/database/default

Setting this variable may be necessary for other reasons, depending on how Solid is configured on your system.
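As a small illustration of the connection-string format described above (this helper is not part of the product; the function name is invented for this sketch), the string could be composed and validated in Python like so:

```python
def make_solid_connection_string(server, user, password):
    """Compose a Z Solid DA connection string: the server name (which may
    itself contain spaces, e.g. 'upipe solid'), followed by the user ID
    and password, which may not contain spaces."""
    if ' ' in user or ' ' in password:
        raise ValueError('user ID and password may not contain spaces')
    return '{} {} {}'.format(server, user, password)
```

For the example in the text, make_solid_connection_string('upipe solid', 'jim', 'spam') yields the string "upipe solid jim spam".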
http://old.zope.org/Products/ZSolidDA/swpackage_view
Hi everyone. I'm new to this forum, recommended by a friend. I've just finished my first term at uni and learnt some Java. I'm creating a simple helicopter game in Java, and have some issues using timers.

The funny thing was, I started the project on my laptop, and it was going fine. Went back to my desktop, and it lagged. Weird. So I tried it on my laptop again, and it had no lag. By lag, I mean graphically and in mouse movement when the application runs. It's not consuming resources, so that's not the problem. I read something on a forum about it being to do with the fact my desktop has a quad core. If this is the case, how can I work around this? Obviously I will have to use my laptop for now. Laptop is dual core running Vista; desktop is a quad core running XP SP3.

Code listing below (two separate files, but together for this forum):

import javax.swing.JFrame;
import java.awt.*;

public class HeliApp extends Object {
    public static void main(String[] argStrings) throws Exception {
        JFrame frame = new JFrame("Heli");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().add(new HeliPanel());
        frame.pack();
        frame.setVisible(true);
    }
}

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.io.*;

public class HeliPanel extends JPanel {
    int heliX = 50;
    int heliY = 200;
    int yMove = 5;
    private ImageIcon image;
    int delay = 20;
    private Timer timer;

    public HeliPanel() {
        image = new ImageIcon("../black-helicopter.gif");
        setBackground(Color.white);
        setPreferredSize(new Dimension(1000, 400));
        setFocusable(true);
        // Assign to the timer field; a local "Timer timer = ..." here
        // would shadow the field and leave it null.
        timer = new Timer(10, new GameListener());
        timer.start();
    }

    public void paintComponent(Graphics page) {
        super.paintComponent(page);
        image.paintIcon(this, page, heliX, heliY);
    }

    private class GameListener implements ActionListener {
        public void actionPerformed(ActionEvent event) {
            heliY = heliY + yMove;
            repaint();
        }
    }
}
https://www.daniweb.com/programming/software-development/threads/162266/simple-java-timer-lag-issue-caused-by-quad-core
The following code simulates (very approximately) the growth of a polycrystal from a number of seeds. Atoms are added to the crystal lattice of each of the resulting grains until no more will fit, creating realistic-looking boundaries where two grains meet. Two sorts of lattice are supported, 'hex' and 'square'. On each iteration of the main loop in Crystal.grow_crystal() an existing atom is selected at random and its neighbouring lattice sites are examined to see if a new atom will fit there. If no such sites are available, the atom becomes 'inactive' and won't be selected in future. To avoid having to compute the distance to each of the already-placed atoms in the simulation to see if a new atom will fit on a site, the crystal's domain is divided into square cells of length equal to two atom diameters and every atom is assigned to a cell; it is only necessary to examine atoms placed in the same cell as a candidate site and its immediate neighbouring cells – we know that atoms in cells further away cannot be too close to the candidate site. Some example usage cases are given below: extra features include (a) colouring the atoms in each grain; (b) customizing the style in which an atom is drawn; (c) representing the atoms as circles or space-filling polygons appropriate to the lattice type. 
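Before the full listing, the cell-binning idea described above can be isolated in a short sketch. The helper names build_cells and nearby_points are invented for this sketch and do not appear in the program below; they just show why looking in the 3 x 3 block of cells around a query point is enough when the cell side is at least two atom diameters:

```python
import numpy as np

def build_cells(points, d):
    """Bin points on the unit square into an n x n grid of cells whose
    side length is at least two atom diameters, 2*d."""
    n = int(1 / (2 * d))          # number of cells per side
    a = 1 / n                     # cell side length (>= 2*d)
    cells = {}
    for p in points:
        ix, iy = int(p[0] / a), int(p[1] / a)
        cells.setdefault((ix, iy), []).append(p)
    return cells, n, a

def nearby_points(cells, n, a, q):
    """Yield only the points stored in the 3x3 block of cells around q."""
    ix, iy = int(q[0] / a), int(q[1] / a)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if 0 <= ix + dx < n and 0 <= iy + dy < n:
                yield from cells.get((ix + dx, iy + dy), [])

rng = np.random.default_rng(42)
pts = rng.random((500, 2))
cells, n, a = build_cells(pts, d=0.03)
q = np.array([0.5, 0.5])
# Any point within one atom diameter of q must lie in the 3x3 block,
# because the cell side exceeds the atom diameter.
close = [p for p in nearby_points(cells, n, a, q)
         if np.hypot(*(p - q)) < 0.03]
```

The simulation code below applies the same idea via its SimCells class, so each candidate site is checked against only a handful of atoms rather than all of them.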
crystal = Crystal(ngrains=5, seed_minimum_distance=0.3, lattice='square', d=0.03)
crystal.grow_crystal()
crystal.save_atom_positions()
colours = plt.get_cmap("tab10").colors
crystal.plot_crystal(colours=colours)

crystal = Crystal(ngrains=15, seed_minimum_distance=0.1, lattice='hex', d=0.03)
crystal.grow_crystal()
crystal.save_atom_positions()
colours = plt.get_cmap("tab10").colors
crystal.plot_crystal(colours=colours, circular_atoms=False)

crystal = Crystal(ngrains=32, seed_minimum_distance=0.05, lattice='hex', d=0.01)
crystal.grow_crystal()
crystal.save_atom_positions()
crystal.plot_crystal(linewidth=0)

crystal = Crystal(ngrains=10, seed_minimum_distance=0.1, lattice='square', d=0.03)
crystal.grow_crystal()
crystal.save_atom_positions()
colours = ['#d3f8e2', '#e4c1f9', '#f694c1', '#ede7b1', '#a9def9']
crystal.plot_crystal(colours=colours, edgecolor='#444444', circular_atoms=False)


import sys
import random

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection


class Atom:
    """A simple atom in a 2-D crystal grain, with its coordinates."""

    def __init__(self, grain, coords):
        self.grain = grain
        self.coords = coords


class Grain:
    """A grain in a 2-D (poly-)crystal.

    grain_id is the unique ID of the grain, seed is the (x,y) coordinates
    of the first atom placed in the grain, and lattice is a string
    identifying which kind of crystal lattice to use ('hex' or 'square').

    """

    def __init__(self, grain_id, seed, lattice='hex'):
        self.grain_id = grain_id
        self.seed = seed
        self.lattice = lattice
        # Initialize the displacements for other atoms around a reference atom,
        # and the maximum rotation angle, phi, to obtain all orientations.
        if lattice == 'hex':
            # Hexagonal lattice: 6 other atoms in a hexagonal pattern.
            a, b = 0.5, np.sqrt(3)/2
            self.lattice_disp = np.array(
                    [[a,-b],[1,0],[a,b],[-a,b],[-1,0],[-a,-b]]).T
            self.phi = np.pi / 3
        elif lattice == 'square':
            # Square lattice: 4 other atoms placed orthogonally.
            self.lattice_disp = np.array([[1.,0],[0,1.],[-1.,0],[0,-1.]]).T
            self.phi = np.pi / 2
        else:
            sys.exit('Undefined lattice type: {}'.format(lattice))
        # Rotate the displacements by some random angle up to phi.
        self.setup_rotated_displacements()

    def setup_rotated_displacements(self):
        """Rotate atom displacements at random to change the orientation."""

        def _make_rot_matrix(alpha):
            return np.array([[np.cos(alpha), -np.sin(alpha)],
                             [np.sin(alpha), np.cos(alpha)]])

        theta = np.random.rand() * self.phi
        # Two-dimensional rotation matrix.
        self.rot = _make_rot_matrix(theta)
        self.lattice_disp = (self.rot @ self.lattice_disp).T

        patch_rot = _make_rot_matrix(self.phi/2)
        if self.lattice == 'hex':
            a = 1 / np.sqrt(3)
        else:
            a = 1 / np.sqrt(2)
        self.patch_disp = a * (patch_rot @ self.lattice_disp.T).T


def distance(p, q):
    """Return the Euclidean distance between points p and q."""
    return np.hypot(*(p-q))


class SimCells:
    """A region of the simulation area to search for neighbours.

    To save us from calculating all the pairwise distances, keep track of
    the location of atoms in "cells": for a given candidate site, we then
    only need to look within that site's cell and its immediate
    neighbouring cells.

    """

    def __init__(self, d):
        """Initialize the cell size and the array of cells."""
        self.n = int(1 / 2 / d)
        self.a = 1 / self.n
        self.cell_array = [[[] for i in range(self.n)] for j in range(self.n)]

    def _get_cell_indexes_from_atom_coords(self, coords):
        """Return the indexes ix, iy of the cell containing point coords."""
        x, y = coords
        return int(x / self.a), int(y / self.a)

    def _get_atom_cell(self, atom):
        """Return the cell containing atom."""
        ix, iy = self._get_cell_indexes_from_atom_coords(atom.coords)
        return self.cell_array[ix][iy]

    def add_atom_to_cell(self, atom):
        """Add atom to the appropriate cell."""
        self._get_atom_cell(atom).append(atom)

    def neighbouring_atoms_generator(self, coords):
        """Return a generator yielding all atoms "near" point coords."""
        ix, iy = self._get_cell_indexes_from_atom_coords(coords)
        # The 3x3 block of cells centred on (ix, iy); (1,-1) corrects a
        # repeated (-1,1) entry so all eight neighbours are covered.
        dxy = ((0,0), (1,0), (1,1), (0,1), (-1,1), (-1,0), (-1,-1), (0,-1),
               (1,-1))
        for dx, dy in dxy:
            ixx, iyy = ix+dx, iy+dy
            if not (0 <= ixx < self.n and 0 <= iyy < self.n):
                continue
            for atom in self.cell_array[ixx][iyy]:
                yield atom


class Crystal:
    """A simulation of a two-dimensional polycrystal."""

    def __init__(self, ngrains=5, seed_minimum_distance=0.2, lattice='hex',
                 d=0.02):
        """Initialise the polycrystal.

        ngrains is the number of grains, to be placed randomly on the unit
        square with a minimum distance, seed_minimum_distance, between
        them. lattice = 'hex' or 'square' is the crystalline lattice type
        and d is the atom diameter.

        """

        self.ngrains = ngrains
        self.seed_minimum_distance = seed_minimum_distance
        self.lattice = lattice
        self.d = d
        self.atoms, self.grains = [], []

    def seed_grains(self):
        """Place the ngrain seeds randomly, a minimum distance apart."""

        # Reset the crystal.
        self.atoms, self.grains = [], []
        self.sim_cells = SimCells(self.d)
        for i in range(self.ngrains):
            while True:
                site = np.random.random((2,))
                for atom in self.atoms:
                    if distance(site, atom.coords) < self.seed_minimum_distance:
                        # Seed atom too close to another: go back and try again.
                        break
                else:
                    # Initialise a grain and add its seed atom.
                    grain = Grain(i, site, self.lattice)
                    self.grains.append(grain)
                    atom = Atom(grain, site)
                    self.atoms.append(atom)
                    self.sim_cells.add_atom_to_cell(atom)
                    break

    def grow_crystal(self):
        """Grow a new polycrystal."""

        self.seed_grains()
        # i_active is a list of the indices of atoms which have space next
        # to them to place a new atom.
        i_active = list(range(self.ngrains))
        while i_active:
            # Pick a random "active" atom, and get its neighbouring lattice
            # sites with enough space to place a new atom.
            i = np.random.choice(i_active)
            candidate_sites = self.get_neighbour_candidate_sites(self.atoms[i])
            if not candidate_sites:
                # No candidate site was found: the atom is no longer active.
                i_active.remove(i)
                continue
            # Add the atom and mark it as active (until we know better).
            n = len(self.atoms)
            atom = Atom(self.atoms[i].grain, random.choice(candidate_sites))
            self.atoms.append(atom)
            self.sim_cells.add_atom_to_cell(atom)
            i_active.append(n)
        print(len(self.atoms), 'atoms placed')

    def get_neighbour_candidate_sites(self, atom):
        """Return candidate locations next to atom to place a new atom.

        Look for sites on the crystal lattice of the grain of the provided
        atom with enough space to locate a new atom and return a list of
        the site coordinates.

        """

        neighbour_sites = atom.coords + self.d * atom.grain.lattice_disp
        candidate_sites = []
        for site in neighbour_sites:
            if not (0 <= site[0] < 1 and 0 <= site[1] < 1):
                continue
            # neighbouring_atoms_generator spits out atoms in the
            # vicinity of site, using our array of "SimCells".
            neighbouring_atoms_generator = self.sim_cells.\
                            neighbouring_atoms_generator(site)
            for other_atom in neighbouring_atoms_generator:
                if distance(site, other_atom.coords) < self.d * 0.99:
                    break
            else:
                candidate_sites.append(site)
        return candidate_sites

    def save_atom_positions(self, filename='crystal.out'):
        """Save the atom diameter and all atom locations to filename."""

        with open(filename, 'w') as fo:
            print('d =', self.d, file=fo)
            for atom in self.atoms:
                print(atom.coords[0], atom.coords[1], file=fo)

    def _get_patch_vertices(self, atom):
        return atom.coords + self.d * atom.grain.patch_disp

    def plot_crystal(self, filename='crystal.png', circular_atoms=True,
                     colours=None, **kwargs):
        """Create a Matplotlib image of the polycrystal as filename.

        If colours is None, use a single colour for all atoms; otherwise a
        sequence of colours to cycle through for each grain can be
        provided. Additional kwargs are passed straight to the
        PatchCollection call that controls the drawing style of the atoms.
        If circular_atoms is not True, each atom is represented by the
        shape of its lattice (square or hexagon).

        """

        if not colours:
            # Atoms are boring grey if no alternative is provided.
            colours = ['#444444']
        ncolours = len(colours)
        if not kwargs:
            kwargs = {'linewidth': 1, 'edgecolor': 'k'}

        fig, ax = plt.subplots()
        # We have a bit of book-keeping to do: group the atoms into their
        # grains in this dictionary, keyed by the grain_id.
        grains = {}
        for atom in self.atoms:
            grains.setdefault(atom.grain.grain_id, []).append(atom)
        for j, atoms in grains.items():
            if circular_atoms:
                patches = [plt.Circle(atom.coords, radius=self.d/2)
                           for atom in atoms]
            else:
                patches = [plt.Polygon(self._get_patch_vertices(atom))
                           for atom in atoms]
            c = PatchCollection(patches, facecolor=colours[j % ncolours],
                                **kwargs)
            ax.add_collection(c)

        # Ensure the Axes are square and remove the spines, ticks, etc.
        ax.set_aspect('equal', 'box')
        plt.axis('off')
        plt.savefig(filename)
        plt.show()


crystal = Crystal(ngrains=10, seed_minimum_distance=0.2, lattice='square',
                  d=0.02)
crystal.grow_crystal()
crystal.save_atom_positions()
colours = plt.get_cmap("tab10").colors
crystal.plot_crystal(colours=colours)


Comments are pre-moderated. Please be patient and your comment will appear soon.

Sarah, 8 months, 1 week ago:
This is such a nice way to simulate those results with crystals. Since I'm new to the Python language, it was very enlightening how you wrote the code, not too much stuff to make it too hard. Will be trying this out. Thanks.

Anandaram Mandyam, 8 months ago:
Nice, and have rated it at 5!
https://scipython.com/blog/simulating-two-dimensional-polycrystals/
Description of problem:

/usr/include/asm-x86_64/unistd.h defines the _syscall6 macros using __syscall_return, which is defined as follows:

#define __syscall_return(type, res) \
do { \
        if ((unsigned long)(res) >= (unsigned long)(-MAX_ERRNO)) { \
                errno = -(res); \
                res = -1; \
        } \
        return (type) (res); \
} while (0)

MAX_ERRNO is not defined, nor is there any definition found in any other public header. This means that programs that have to use the _syscall macros directly, because they cannot use the glibc syscall() function, now cannot compile anymore unless they define MAX_ERRNO themselves (looking at the kernel sources, it seems the correct definition on x86_64 is 4095, which was also the value used pre-2.6.19).

Version-Release number of selected component (if applicable):
Name   : kernel-headers
Arch   : x86_64
Version: 2.6.19
Release: 1.2895.fc6

You must not include anything from asm/ or linux/ in a user mode program. You can use the definition as a basis to write your own.

This is related to this upstream Frysk bug btw:

The bogus _syscallX macros were removed from the 2.6.18 kernel, but the x86_64 maintainer sneaked them back into 2.6.19. We should remove them again -- they are removed again from upstream.
https://partner-bugzilla.redhat.com/show_bug.cgi?id=224130
/*
 * NS Roll the dice plugin
 *
 * By White Panther and mICKE
 *
 * This Plugin gives players prizes when they Roll the Dice
 *
 * Any say * commands can only be used by clients, NOT SERVER CONSOLE
 *
 * Credits:
 *  - Depot - his assistance in getting bugs fixed
 *  - CheesyPeteza - for help with fake_damage
 *  - Ludwig van - rips from his plugins
 *
 * Commands:
 *  say rolldice / rollthedice / the dice
 *  say_team rolldice / rollthedice / roll the dice
 *  say vote_rtd / vote_roll / vote_dice
 *
 * Changelog:
 * v0.7.4c:
 *  - Initial beta
 *
 * v0.7.7c:
 *  - fixed:
 *    - alien who got crappy weapon still got his normal weapon
 *    - marine could pick up his old weapon after getting crappy weapon
 *    - fixed bug for 32 players
 *    - message hud display was fading in and out
 *  - changed:
 *    - cvar names (to recognize them being from Roll the Dice)
 *    - some cvar values
 *    - kills by time bomb are shown
 *    - now you can have multiple timer prizes (if you manage to get them ;) )
 *
 * v0.7.9:
 *  - fixed:
 *    - bug with constant godmode/noclip
 *    - setting stealth time to 0 (= no time limit) acted as being disabled
 *    - menu did not disappear once voted
 *    - when RtD started disabled and was enabled by vote, timer prizes were bugged
 *    - disabling RtD by vote did not remove a player's prizes
 *    - CO: when winning JP/HA players lost their upgrades
 *
 * v0.8:
 *  - changed:
 *    - moved from pev/set_pev to entity_get/entity_set (no fakemeta)
 *
 * v0.8.4b:
 *  - fixed:
 *    - error with slapdisease
 *    - aliens could see marines with stealth
 *    - minor errors with hud text
 *  - added:
 *    - define to either kill or unstuck player after noclip
 *    - cvar to only allow RTD on specific mode
 *  - changed:
 *    - drunken can now be set as a timer prize too (default is non-timer prize)
 *
 * v0.8.6:
 *  - fixed:
 *    - crash with stealth
 *    - rtd_vote_mode was acting wrong
 *  - changed:
 *    - some tweaks
 *
 * v0.8.7:
 *  - fixed:
 *    - no more godmode/noclip for 1 sec even if time set to zero
 *    - spawnprotect will not interfere with godmode anymore
 *  - changed:
 *    - code improvements
 *
 * v0.8.7b:
 *  - fixed:
 *    - bug with compiling option "kill_if_stuck"
 *
 * v0.9.0:
 *  - fixed:
 *    - some bug fixes
 *  - changed:
 *    - partly rewritten
 *    - removed fun module
 *    - added fakemeta
 */
#include <amxmodx>
#include <amxmisc>
#include <engine>
#include <fakemeta>
#include <ns>
https://forums.alliedmods.net/showthread.php?s=08be881d1af757c1674e7f865a6c1efa&t=7490
Web Character View Tutorial

Before doing this tutorial you will probably want to read the intro in `Basic Web tutorial`_. In this tutorial we will create a web page that displays the stats of a game character. For this, and all other pages we want to make specific to our game, we'll need to create our own Django "app". We'll call our app character, since it will be dealing with character information.

From your game dir, run

    evennia startapp character

This will create a directory named character in the root of your game dir. It contains all the basic files that a Django app needs. To keep mygame well ordered, move it to your mygame/web/ directory instead:

    mv character web/

Note that we will not edit all files in this new directory; many of the generated files are outside the scope of this tutorial. In order for Django to find our new web app, we'll need to add it to the INSTALLED_APPS setting. Evennia's default installed apps are already set, so in server/conf/settings.py, we'll just extend them:

    INSTALLED_APPS += ('web.character',)

Note: That end comma is important. It makes sure that Python interprets the addition as a tuple instead of a string.

The first thing we need to do is to create a view and an URL pattern to point to it. A view is a function that generates the web page that a visitor wants to see, while the URL pattern lets Django know what URL should trigger the view. The pattern may also provide some information of its own, as we shall see.

Here is our character/urls.py file (note: you may have to create this file if a blank one wasn't generated for you):

    # URL patterns for the character app

    from django.conf.urls import url
    from web.character.views import sheet

    urlpatterns = [
        url(r'^sheet/(?P<object_id>\d+)/$', sheet, name="sheet")
    ]

This file contains all of the URL patterns for the application. The url function in the urlpatterns list is given three arguments. The first argument is a pattern-string used to identify which URLs are valid.
Patterns are specified as regular expressions. Regular expressions are used to match strings and are written in a special, very compact, syntax. A detailed description of regular expressions is beyond this tutorial, but you can learn more about them here. For now, just accept that this regular expression requires that the visitor's URL looks something like this:

    sheet/123/

That is, sheet/ followed by a number, rather than some other possible URL pattern. We will interpret this number as an object ID. Thanks to how the regular expression is formulated, the pattern recognizer stores the number in a variable called object_id. This will be passed to the view (see below).

We add the imported view function (sheet) as the second argument. We also add the name keyword to identify the URL pattern itself. You should always name your URL patterns; this makes them easy to refer to in html templates using the {% url %} tag (but we won't get more into that in this tutorial).

Security Note: Normally, users do not have the ability to see object IDs within the game (it's restricted to superusers only). Exposing the game's object IDs to the public like this enables griefers to perform what is known as an account enumeration attack in an effort to hijack your superuser account. Consider this: in every Evennia installation, there are two objects that we can always expect to exist and have the same object IDs -- Limbo (#2) and the superuser you create in the beginning (#1). Thus, the griefer can get 50% of the information they need to hijack the admin account (the admin's username) just by navigating to sheet/1!

Next we create views.py, the view file that urls.py refers to.
    # Views for our character app

    from django.http import Http404
    from django.shortcuts import render
    from django.conf import settings

    from evennia.utils.search import object_search
    from evennia.utils.utils import inherits_from

    def sheet(request, object_id):
        object_id = '#' + object_id
        try:
            character = object_search(object_id)[0]
        except IndexError:
            raise Http404("I couldn't find a character with that ID.")
        if not inherits_from(character, settings.BASE_CHARACTER_TYPECLASS):
            raise Http404("I couldn't find a character with that ID. "
                          "Found something else instead.")
        return render(request, 'character/sheet.html', {'character': character})

As explained earlier, the URL pattern parser in urls.py parses the URL and passes object_id to our view function sheet. We do a database search for the object using this number. We also make sure such an object exists and that it is actually a Character.

The view function is also handed a request object. This gives us information about the request, such as whether a logged-in user viewed it - we won't use that information here, but it is good to keep in mind.

On the last line, we call the render function. Apart from the request object, the render function takes a path to an html template and a dictionary with extra data you want to pass into said template. As extra data we pass the Character object we just found. In the template it will be available as the variable "character".

The html template is created as templates/character/sheet.html under your character app folder. You may have to manually create both the template and its subfolder character.
Here's the template to create:

    {% extends "base.html" %}

    {% block content %}

    <h1>{{ character.name }}</h1>
    <p>{{ character.db.desc }}</p>

    <h2>Stats</h2>
    <table>
      <thead>
        <tr>
          <th>Stat</th>
          <th>Value</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>Strength</td>
          <td>{{ character.db.str }}</td>
        </tr>
        <tr>
          <td>Intelligence</td>
          <td>{{ character.db.int }}</td>
        </tr>
        <tr>
          <td>Speed</td>
          <td>{{ character.db.spd }}</td>
        </tr>
      </tbody>
    </table>

    <h2>Skills</h2>
    <ul>
      {% for skill in character.db.skills %}
      <li>{{ skill }}</li>
      {% empty %}
      <li>This character has no skills yet.</li>
      {% endfor %}
    </ul>

    {% if character.db.approved %}
    <p class="success">This character has been approved!</p>
    {% else %}
    <p class="warning">This character has not yet been approved!</p>
    {% endif %}

    {% endblock %}

In Django templates, {% ... %} denotes special in-template "functions" that Django understands. The {{ ... }} blocks work as "slots". They are replaced with whatever value the code inside the block returns.

The first line, {% extends "base.html" %}, tells Django that this template extends the base template that Evennia is using. The base template is provided by the theme. Evennia comes with the open-source third-party theme prosimii. You can find it and its base.html in evennia/web/templates/prosimii. Like other templates, these can be overwritten.

The next line is {% block content %}. The base.html file has blocks, which are placeholders that templates can extend. The main block, and the one we use, is named content.

We can access the character variable anywhere in the template because we passed it in the render call at the end of views.py. That means we also have access to the Character's db attributes, much like you would in normal Python code. You don't have the ability to call functions with arguments in the template - in fact, if you need to do any complicated logic, you should do it in views.py and pass the results as more variables to the template.
But you still have a great deal of flexibility in how you display the data. We can do a little bit of logic here as well. We use the {% for %} ... {% endfor %} and {% if %} ... {% else %} ... {% endif %} structures to change how the template renders depending on how many skills the user has, or whether the user is approved (assuming your game has an approval system).

The last file we need to edit is the master URLs file. This is needed in order to smoothly integrate the URLs from your new character app with the URLs from Evennia's existing pages. Find the file web/urls.py and update its patterns list as follows:

    # web/urls.py

    custom_patterns = [
        url(r'^character/', include('web.character.urls',
                                    namespace='character', app_name='character')),
    ]

Now reload the server with evennia reload and visit the page in your browser. If you haven't changed your defaults, you should be able to find the sheet for character #1 at the character/sheet/1/ URL we defined above. Try updating the stats in-game and refresh the page in your browser. The results should show immediately.

As an optional final step, you can also change your character typeclass to have a method called get_absolute_url:

    # typeclasses/characters.py

    # inside Character
    def get_absolute_url(self):
        from django.core.urlresolvers import reverse
        return reverse('character:sheet', kwargs={'object_id': self.id})

Doing so will give you a 'view on site' button in the top right of the Django Admin Objects change page that links to your new character sheet, and allow you to get the link to a character's page by using {{ object.get_absolute_url }} in any template where you have a given object.

Now that you've made a basic page and app with Django, you may want to read the full Django tutorial to get a better idea of what it can do. `You can find Django's tutorial here`_.
http://evennia.readthedocs.io/en/latest/Web-Character-View-Tutorial.html
Recently, we had a need to execute multiple SQLAlchemy queries in a parallel fashion against a PostgreSQL database with Python 2 and psycopg2. We didn't really need a full scale multithreading approach, so we turned to gevent. Gevent is an implementation of green threading that uses libevent. This post really isn't about which threading model is better, as that is greatly subjective and changes based on your use case and needs.

Making Psycopg2 Coroutine Friendly

So the first thing we need to do to prepare to leverage gevent is make sure psycopg2 is configured properly. Because the main psycopg2 is a C extension, we cannot just take advantage of gevent's monkey patching. Nonetheless, psycopg2 exposes a hook that can be used with coroutine libraries to integrate with the event scheduler. There are a few libraries to help with this, such as sqlalchemy_gevent or psycogreen, that you can use to automatically set this part up for you. However, the code is fairly short and straightforward. Let's look at the code from psycogreen that I use in my applications.
    import psycopg2
    from psycopg2 import extensions

    from gevent.socket import wait_read, wait_write

    def make_psycopg_green():
        """Configure Psycopg to be used with gevent in non-blocking way."""
        if not hasattr(extensions, 'set_wait_callback'):
            raise ImportError(
                "support for coroutines not available in this Psycopg version (%s)"
                % psycopg2.__version__)
        extensions.set_wait_callback(gevent_wait_callback)

    def gevent_wait_callback(conn, timeout=None):
        """A wait callback useful to allow gevent to work with Psycopg."""
        while True:
            state = conn.poll()
            if state == extensions.POLL_OK:
                break
            elif state == extensions.POLL_READ:
                wait_read(conn.fileno(), timeout=timeout)
            elif state == extensions.POLL_WRITE:
                wait_write(conn.fileno(), timeout=timeout)
            else:
                raise psycopg2.OperationalError(
                    "Bad result from poll: %r" % state)

The code above will block the calling thread with a callback if we are busy reading or writing, which will return control back to the event loop, which could then start working on another thread if one needs to be worked on. The wait_read function blocks the current greenlet until the connection is ready to read from, and wait_write does the same thing for writing to the connection.

With psycopg2 ready to work with gevent, we can start writing the code to execute our queries. I'm going to use a few different elements from gevent to control how the queries are executed, how many queries can be run at once, and how the query results are handled. There are many ways to use gevent and workers to accomplish these tasks, and some are quite simpler. My structure is set up for future options and growth. I call this structure a QueryPool.

Building our Gevent based QueryPool

The QueryPool consists of an input queue to hold the queries we want to run, an output queue to hold the results, an overseer to load up the input queue, workers to actually get the queries and run them, and finally a function to drive the process. The central element of our QueryPool is the input queue.
It is a special kind of FIFO queue called a JoinableQueue, which has a .join() method that blocks until all items in the queue have been acknowledged as processed with a task_done(). The input queue is loaded by an overseer as the first step of running the QueryPool. Then workers grab tasks one at a time from the input queue, and when they finish their work they use the task_done() method to inform the queue that the item has been processed. Let's look at the __init__() for our QueryPool:

    import gevent
    from gevent.queue import JoinableQueue, Queue

    class QueryPool(object):
        def __init__(self, queries, pool_size=5):
            self.queries = queries
            self.POOL_MAX = pool_size
            self.tasks = JoinableQueue()
            self.output_queue = Queue()

So in our __init__() method, we start by storing the queries. Next we set a POOL_MAX, which is the maximum number of workers we are going to use. This controls how many queries can be executed in parallel in our QueryPool. Then we set up the tasks queue, which is where our workers will pick up work from. Finally we set up the output queue, to which workers will publish the query results.

Defining worker methods

Now we can look at the workers. I use two methods in the QueryPool class. The first one contains the logic to prepare a database connection and execute the SQLAlchemy query:

    def __query(self, query):
        conn = engine.connect()
        results = conn.execute(query).fetchall()
        return results

For the worker method within the QueryPool class, we start a loop that is going to run until there are no tasks left in the tasks (input) queue. It will get one task, call the __query() method with the details from the task, handle any exceptions, add the results to the output queue, and then mark the task as done. When adding the results to the output queue, we use the put method of gevent queues to add the results without blocking. This allows the event loop to look for the next thing to work on.
    def executor(self, number):
        while not self.tasks.empty():
            query = self.tasks.get()
            try:
                results = self.__query(query)
                self.output_queue.put(results)
            except Exception as exc_info:
                print exc_info
                print 'Query failed :('
            self.tasks.task_done()

Building an overseer method

Now that we have something to handle the work, we need to load the task (input) queue. I use an overseer method that iterates over the supplied queries and puts them on the task (input) queue.

    def overseer(self):
        for query in self.queries:
            self.tasks.put(query)

Building the run method

Finally, we are ready to build the run method that ties everything together and makes it work. Let's look at the code first and then cover it step by step.

    def run(self):
        self.running = []
        gevent.spawn(self.overseer).join()
        for i in range(self.POOL_MAX):
            runner = gevent.spawn(self.executor, i)
            runner.start()
            self.running.append(runner)
        self.tasks.join()
        for runner in self.running:
            runner.kill()

The run method first spawns the overseer and immediately calls join on it, which blocks until the overseer is done loading items into the tasks (input) queue. This is overkill, but provides more options and would allow us to load asynchronously in the future. Next, we start a number of workers up to the POOL_MAX we defined earlier, and add them to a list so we can control them later. When we start them, they immediately begin working. Then we call .join() on the tasks (input) queue, which will block execution until all the queries in the queue have been executed and acknowledged as completed. Finally, we clean up by using the .kill() method on all of the workers.

Using the QueryPool

Now we are ready to use our QueryPool class! I've created a Jupyter notebook available on GitHub with some example queries against a benchmarking database. I created my test database using the pgbench command to create a 5Gb database.

Execution Results

I set up 6 queries of varying complexity and execution time.
I then executed those queries in a traditional serial execution style and then with our QueryPool using different worker counts.

- Serial: 24 seconds
- QueryPool - 5 workers: 8 seconds
- QueryPool - 3 workers: 9 seconds
- QueryPool - 2 workers: 15 seconds

History

- 2016-04-28 Added missing return thanks to Mike B.
- 2016-04-29 Updated information about the .put() method based on feedback from David S.
- 2016-04-30 Updated while 1 to while True based on feedback from /u/Asdayasman
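As an aside (my own addition, not part of the original post), the overseer/worker/task_done pattern above can be demonstrated in miniature with only the standard library, swapping gevent greenlets for threads; the task_done()/join() handshake works the same way. This sketch is written for Python 3 for convenience:

```python
import threading
from queue import Queue

def run_pool(tasks_to_do, pool_size=3):
    """Run callables from an input queue with a fixed pool of workers."""
    tasks = Queue()
    output = Queue()

    # Overseer: load the input queue.
    for task in tasks_to_do:
        tasks.put(task)

    def executor():
        # Work until the input queue is drained.
        while True:
            try:
                task = tasks.get_nowait()
            except Exception:
                break
            try:
                output.put(task())
            finally:
                tasks.task_done()  # acknowledge the item as processed

    workers = [threading.Thread(target=executor) for _ in range(pool_size)]
    for w in workers:
        w.start()
    tasks.join()  # block until every task is acknowledged done
    for w in workers:
        w.join()
    return [output.get() for _ in range(output.qsize())]

results = run_pool([lambda n=n: n * n for n in range(6)])
print(sorted(results))  # -> [0, 1, 4, 9, 16, 25]
```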
http://www.jasonamyers.com/2016/gevent-postgres-sqlalchemy/
We will use the train as an example and create a train object with relevant methods associated with the object. Let's say we need to calculate the number of days that we need to travel between 2 cities. We would need the speed of the train and the distance between the 2 cities.

    #include <iostream>
    #include <string>

    using namespace std;

    class Train
    {
    public:
        double speed;
        double dist;
        double Days(double, double);
        Train();   // This is the constructor
        ~Train();  // This is the destructor
    };

    // Travel time in days: distance / speed gives hours, divided by 24.
    double Train::Days(double sp, double dist)
    {
        return (dist / sp) / 24;
    }

    Train::Train() {}

    Train::~Train() {}

    int main()
    {
        double speed = 10;
        double distance = 1000;

        Train Train1;  // instance declaration

        cout << "No of days: " << Train1.Days(speed, distance);
        cout << endl;
    }

In this example, there are 2 data members, speed and dist. In our constructor, we did not initialize them. We are assuming that the values can change. So in main(), we declare the instance Train1 without passing in any parameters. As a result, in our method Days, we need to pass in the 2 parameters speed and distance.
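An alternative design, sketched below (my own addition, not part of the original tutorial), initializes the members through the constructor so that Days() can work on the object's own state and needs no parameters:

```cpp
// Alternative Train: members are set once in the constructor,
// so Days() needs no parameters.
class Train
{
public:
    Train(double sp, double d) : speed(sp), dist(d) {}

    // Travel time in days: hours (dist / speed) divided by 24.
    double Days() const
    {
        return (dist / speed) / 24;
    }

private:
    double speed;
    double dist;
};
```

With this design, main() would simply create the object with its data, e.g. Train train1(10, 1000); and then print train1.Days().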
http://codecrawl.com/2015/01/14/cplusplus-creating-object/
NAME

Siebel::COM - Perl extension to access Siebel applications through Microsoft COM

SYNOPSIS

    package Siebel::COM::App;

    use Moose;
    use namespace::autoclean;

    with 'Siebel::COM';

    sub BUILD {
        my $self = shift;
        my $app = Win32::OLE->new( $self->get_ole_class() )
          or confess( 'failed to load ' . $self->get_ole_class() . ': ' . $! );
        $self->_set_ole($app);
    }

DESCRIPTION

Siebel::COM was developed to make it easier to use Microsoft COM to access a Siebel application, either a Siebel Enterprise or a Siebel Client, without having to go down into the details of Win32::OLE.

Inspiration for this distribution came from the article () written by Jason Brazile and the despicable information in the documentation () of Oracle saying that Perl cannot be used to connect to Siebel with COM.

Siebel::COM should be used directly only for maintenance or extensions since it is a Moose role. You probably want to look for subclasses of Siebel::COM::App to start connecting to a Siebel environment.

This role provides the _ole attribute, with the accessors get_ole and the "private" _set_ole, which holds a reference to the Win32::OLE object that is used to really provide functionality from the Siebel DLLs.

EXPORT

None by default.

METHODS

get_ole

Expects no parameter. Returns the Win32::OLE object associated with the class.

CAVEATS

A known issue is that this distribution only works with the Microsoft Windows OS (which should be guaranteed by Devel::AssertOS). Having a full Siebel Client setup in the OS is also required.

It is also known that Siebel COM is not supported on 64 bit systems due to incompatibility of the required Siebel DLLs.

SEE ALSO

Siebel::COM::App::DataServer

Siebel::COM::App::DataControl

Project website:

AUTHOR

Alceu Rodrigues de Freitas Junior, <arfreitas@cpan.org>

This software is copyright (c) 2012 of Alceu Rodrigues de Freitas Junior, <arfreitas@cpan.org>

This file is part of Siebel COM project.
Siebel COM is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. You should have received a copy of the GNU General Public License along with Siebel COM. If not, see <>.
https://metacpan.org/pod/Siebel::COM
Part Number: UCD3138

Tool/software: TI C/C++ Compiler

Hi,

I am using CCS 6.1 with compiler 5.22. I found the result of multiplying two 32-bit values is not correct. I defined the result as 64-bit data. The correct result should be 0x0000 0056 0038 7512. I saw one register R0 has the number 0x0000 0056, and another register R1 has the number 0x0038 7512, which is correct. However, the result I get is 0x0038 7512 0000 0056, which seems like the two registers' data are put in the wrong order. Is it a bug, or did I not configure CCS correctly?

Thanks,

Julie

In reply to Chester Gillon:

In reply to Julie Zhu:

Julie,

I was able to create a small test case using lab 1 of the UCD3138 training labs firmware package and could verify that both CCSv7.1.0 (Windows) and CCSv6.2.0 (Ubuntu 16.04/64) work fine. I am using my UCD3138OL64EVM-031 with a Spectrum Digital XDS200 USB JTAG debug probe.

Therefore I would simply update your copy of CCS to benefit from this bug fix.

Hope this helps,

Rafael

In reply to desouza:

desouza wrote: "I was able to create a small test case using lab 1 of the UCD3138 training labs firmware package and could verify that both CCSv7.1.0 (Windows) and CCSv6.2.0 (Ubuntu 16.04/64) work fine."

Whereas the problem Julie reported was that a 64-bit variable which the compiler optimizer had placed in a pair of 32-bit registers was displayed incorrectly, while a 64-bit global variable was displayed correctly.

Are you able to re-check, with a UCD3138 example which has an optimization level of one (or greater), that CCS v6.2.0 or v7.1.0 correctly displays a 64-bit variable held in a register pair?
It may require some experimentation, but I found that the following code fragment compiled at optimization level one caused the 64-bit sum variable to be stored in a register pair, while the 64-bit scale_product variable was optimized out of the debug information:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t x = 0x8a2290;
        uint32_t y = 0xa000;
        uint64_t scale_product;
        uint64_t sum;
        uint32_t iteration;

        sum = 0;
        for (iteration = 0; iteration < 5; iteration++)
        {
            scale_product = (uint64_t) x * y;
            printf ("0x%x * 0x%x = 0x%llx\n", x, y, scale_product);
            sum += scale_product;
        }

        return sum;
    }

Julie, Chester,

Good point; I tried several things but couldn't find a way for the compiler to place the 64-bit result in a register pair on my system here.

In search of a possible outstanding bug, I found SDSCM00008685 in the file below:

    C:/ti//ccsv7/tools/compiler/ti-cgt-arm_5.2.9/Open_defects.html

Given this is still an outstanding bug even on modern ARM compiler releases, there is a chance the debug team implemented the workaround proposed there: arbitrarily pointing the two halves of the variable at Rn:Rn+1 and placing them in the Variables/Expressions views. As hinted above, since this ARM7 is a big endian platform, it is possible the workaround fails, as the data could have been stored in reverse order.

I searched around in our bug system for any fixes to the debugger, and the closest ones I could find are SDSCM00039488 and SDSCM00042523 - both marked and confirmed as fixed. However, the testcases were targeted at other architectures.

At this point I would need a complete testcase to try to fully reproduce this on this platform. I can't confirm if this is still outstanding in our newest IDEs.

Regards,

desouza wrote: "At this point I would need a complete testcase to try to fully reproduce this in this platform."

TMS570LC43_multiply.zip

The program was tested on a number of CCS and TI ARM compiler releases.
1) CCS 7.1.0.00016 and compiler v5.2.5: In this combination the location for the 64-bit variable sum is reported as the single "Register R5" and the value is incorrectly shown as "0x559A000000000056" instead of the expected "0x00000056559A0000".

2) CCS 7.1.0.00016 and compiler v16.9.3.LTS: In this combination the location for the 64-bit variable is a pair of 32-bit registers "R7:32,R5:32" and the value is correctly shown as "0x00000056559A0000".

3) CCS 6.1.3.00033 and compiler v5.2.5:

4) CCS 6.1.3.00033 and compiler v16.9.3.LTS: In this combination the CCS debugger reports an error that the register for the 64-bit variable is unknown.

The conclusions are:

a) CCS 7.1.0.00016 (as well as CCS 6.1.3.00033) displays the value of a 64-bit variable in a register pair incorrectly, such that the upper and lower 32-bit words are swapped, when used with a TI ARM compiler which only describes the 64-bit variable as being in a single 32-bit register and the ARM device is big-endian.

b) Somewhere between the TI ARM compilers v5.2.5 and v16.9.3.LTS there has been a change to describe the location of a 64-bit variable as a register pair, which allows the CCS debugger to display a 64-bit variable held in a pair of registers on an ARM big-endian device correctly. However, the new debug information is not understood by older CCS versions.

I don't know in which CCS version between 6.1.3 and 7.1 the change to understand the debug information happened.

desouza wrote: "I found out that CCSv7.1.0 + CGT 15.12.5 shows the issue while, as you saw, 16.9.3 does not."
With v16.9.1.LTS a register pair is reported as the location:

    ;* V2 assigned to sum
    $C$DW$7 .dwtag DW_TAG_variable
        .dwattr $C$DW$7, DW_AT_name("sum")
        .dwattr $C$DW$7, DW_AT_TI_symbol_name("sum")
        .dwattr $C$DW$7, DW_AT_type(*$C$DW$T$71)
        .dwattr $C$DW$7, DW_AT_location[DW_OP_reg5 DW_OP_piece 4 DW_OP_reg7 DW_OP_piece 4]

With v15.12.5.LTS only a single register is reported as the location:

    ;* V2 assigned to sum
    $C$DW$8 .dwtag DW_TAG_variable
        .dwattr $C$DW$8, DW_AT_name("sum")
        .dwattr $C$DW$8, DW_AT_TI_symbol_name("sum")
        .dwattr $C$DW$8, DW_AT_type(*$C$DW$T$71)
        .dwattr $C$DW$8, DW_AT_location[DW_OP_reg5]

Where the type of sum is 64 bits via the $C$DW$T$71 type:

    $C$DW$T$71 .dwtag DW_TAG_typedef
        .dwattr $C$DW$T$71, DW_AT_name("uint64_t")
        .dwattr $C$DW$T$71, DW_AT_type(*$C$DW$T$15)
        .dwattr $C$DW$T$71, DW_AT_language(DW_LANG_C)
        .dwattr $C$DW$T$71, DW_AT_decl_file("C:/ti_ccs7_1/ccsv7/tools/compiler/ti-cgt-arm_15.12.5.LTS/include/stdint.h")
        .dwattr $C$DW$T$71, DW_AT_decl_line(0x33)
        .dwattr $C$DW$T$71, DW_AT_decl_column(0x20)

Therefore, between the v15.12.x and v16.9.x series compilers there has been an enhancement to the debug information to allow a 64-bit variable held in a register pair to be given the location of a register pair.
http://e2e.ti.com/support/development_tools/code_composer_studio/f/81/p/594508/2194465
Building HtmlHelper Extension Methods for ASP.NET MVC

ASP.NET MVC is the latest buzz pattern in web development. ASP.NET MVC was recently released and I think it is a great way to develop web applications. If you are new to ASP.NET MVC, check ASP.NET MVC - Some Frequently Asked Questions and Your First ASP.NET MVC Application.

One of the benefits of MVC is you have total control over what and how your code is rendered in the browser. When you create the HTML for your page, a class that is very helpful is the HtmlHelper class. This class represents support for rendering HTML controls in a view. You can use the HtmlHelper class to render controls such as a TextBox, which renders as input type="text", to the browser like the following:

    <%= Html.TextBox("Firstname") %>

In the first release of ASP.NET MVC, the HtmlHelper class does not cover a wide range of HTML controls. This article will show you how to create your own extension method to complement the existing HtmlHelper class.

Before going any further you'll need to install ASP.NET MVC. You can download that here. To begin with, open Visual Studio 2008 and choose File > New > Project > ASP.NET MVC Web Application. By default Visual Studio creates several folders for you, namely Models, Views and Controllers. Start by creating a new folder in the root directory called Common.
In that folder create a new class called Helpers and add the following code:

C#

    public static class Helpers
    {
        public static string Span(this HtmlHelper html, string text)
        {
            var builder = new TagBuilder("span");
            builder.GenerateId("firstName");
            builder.SetInnerText(text);
            return builder.ToString(TagRenderMode.Normal);
        }
    }

VB.NET

    Public Module Helpers
        <System.Runtime.CompilerServices.Extension> _
        Public Function Span(ByVal html As HtmlHelper, ByVal text As String) As String
            Dim builder = New TagBuilder("span")
            builder.GenerateId("firstName")
            builder.SetInnerText(text)
            Return builder.ToString(TagRenderMode.Normal)
        End Function
    End Module

I'm using a feature in C# 3.0 called extension methods. Extension methods enable you to "add" methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type. Using this new feature allows you to write custom methods that will appear in Visual Studio's IntelliSense when working in the HTML designer.

The ASP.NET MVC framework includes a useful utility class named the TagBuilder class that you can use when building HTML helpers. The TagBuilder class, as the name of the class suggests, enables you to easily build HTML tags.

The next step is to make this helper class available to all the Views in the project. You could add this directive to each view:

    <%@ Import Namespace="MvcApplication1.Common" %>

But that would mean you would need to add this directive to every view that needs to use this helper class. That isn't acceptable because of maintenance. What happens if you want to move it to another namespace? A lot of finds and replaces!

The better option is to add the namespace to the web.config file so it's available to every view. Add the following code to the pages/namespaces element:

    <add namespace="MvcApplication1.Common"/>

This helper class can now be used throughout the entire project.
To demonstrate how to use this helper class, open About.aspx in the Home view and add the following code:

    <asp:Content ...>
        <h2>About</h2>
        <p>
            <%= Html.Span("About page...") %>
        </p>
    </asp:Content>

In the code above there is the Span extension method created earlier. As you might expect, it will render a <span> tag in the browser. Pretty neat, huh?!
http://blog.csdn.net/yanzhou1224/article/details/6154923
Since there are k^n possible sequences of n symbols, and every one corresponds to some starting position in the De Bruijn cycle, an element of B(k, n) has to have at least k^n symbols. In fact, the elements of B(k, n) have exactly k^n symbols.

It's not obvious that B(k, n) should be non-empty for all k and n, but it is. And there are algorithms that will take a k and an n and return an element of B(k, n).

The post from last week gave the example of a sequence in B(4, 3) that contains all triples of DNA base pairs:

    AAACAAGAATACCACGACTAGCAGGAGTATCATGATTCCCGCCTCGGCGTCTGCTTGGGTGTTT

Generating De Bruijn cycles

When k is a prime number, i.e. we're working over an alphabet with a prime number of symbols, it is particularly simple to generate De Bruijn sequences [1]. For example, let's work over the alphabet {0, 1, 2}. Then the following code will produce the next symbol in a sequence in B(3, 4).

    def B34(a, b, c, d):
        if (a, b, c, d) == (0, 0, 0, 0):
            return 1
        if (a, b, c, d) == (1, 0, 0, 0):
            return 0
        return (a + b) % 3

We can initialize the sequence wherever we like since it produces a cycle, but if we start with 0000 we get the following:

    000010011012110021020122101011112220112120002002202122001201021120202222111022121

You can verify that every sequence of four elements from {0, 1, 2} is in there somewhere, provided you wrap the end around. For example, 1000 can be found by starting in the last position.

Where did the algorithm above come from? How would you create an analogous algorithm for other values of k and n?

The algorithm goes back to Willem Mantel in 1894, and can be found, for example, in Knuth's TAOCP Volume 4. Here is Mantel's algorithm to generate an element of B(k, n) where k is prime. The function takes the latest n symbols in the De Bruijn cycle and returns the next symbol.

- If (x1, x2, …, xn) = (0, 0, …, 0), return c1.
- If (x1, x2, …, xn) = (1, 0, …, 0), return 0.
- Otherwise return c1x1 + c2x2 + … + cnxn mod k.
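As a quick check (my own addition, not from the original post), we can generate the cycle with B34 and confirm that all 3^4 = 81 windows of length four appear when the sequence is wrapped around:

```python
def B34(a, b, c, d):
    # Mantel's rule for B(3, 4): special-case the two states that splice
    # the all-zero window into the cycle, otherwise use (a + b) mod 3.
    if (a, b, c, d) == (0, 0, 0, 0):
        return 1
    if (a, b, c, d) == (1, 0, 0, 0):
        return 0
    return (a + b) % 3

# Seed with 0000 and keep feeding the last four symbols back in.
seq = [0, 0, 0, 0]
while len(seq) < 3 ** 4:
    seq.append(B34(*seq[-4:]))

# Collect every length-4 window, wrapping around the end of the cycle.
wrapped = seq + seq[:3]
windows = {tuple(wrapped[i:i + 4]) for i in range(len(seq))}
print(len(windows))  # -> 81, i.e. every possible window appears exactly once
```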
In our example above, c1 = c2 = 1 and c3 = c4 = 0, but how do you find the c's in general?

Primitive polynomials

To find the c's, first find a primitive polynomial of degree n over the prime field with k elements. Then the c's are the coefficients of the polynomial, with a sign change, if you write the polynomial in the following form:

    x^n - c_n x^(n-1) - ... - c_2 x - c_1

In the example above, I started with the polynomial

    x^4 + 2x + 2

We can read the coefficients off this polynomial: c1 = c2 = -2 and c3 = c4 = 0. Since -2 and 1 are the same working mod 3, I used c1 = c2 = 1 above.

Backing up a bit, what is a primitive polynomial and how do you find them? A primitive polynomial of degree n with coefficients in GF(k), the finite field with k elements, has leading coefficient 1 and has a root α that generates the multiplicative group of GF(k^n). That is, every nonzero element of GF(k^n) can be written as a power of α.

In my example, I found a primitive polynomial in GF(3^4) by typing polynomials into Mathematica until I found one that worked.

    In[1]:= PrimitivePolynomialQ[x^4 + 2 x + 2, 3]

    Out[1]= True

Since coefficients can only be 0, 1, or 2 when you're working mod 3, it only took a few guesses to find a primitive polynomial [2].

Brute force guessing works fine when k and n are small, but clearly isn't practical in general. There are algorithms for searching for primitive polynomials, but I'm not familiar with them.

The case where k = 2 and n may be large is particularly important in applications, and you can find places where people have tabulated primitive binary polynomials, primitive polynomials in GF(2^n). It's especially useful to find primitive polynomials with a lot of zero coefficients because, for example, this leads to less computation when producing De Bruijn cycles. Finding sparse primitive binary polynomials is its own field of research. See, for example, Richard Brent's project to find primitive binary trinomials, i.e. primitive binary polynomials with only three non-zero coefficients.
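Here is a small self-contained check of that claim (my own sketch, not the post's code): x^4 + 2x + 2 is primitive over GF(3) exactly when powers of x modulo the polynomial first return to 1 at exponent 3^4 - 1 = 80.

```python
# Elements of GF(3^4) as coefficient lists [c0, c1, c2, c3], lowest degree first.
# Since x^4 + 2x + 2 = 0 means x^4 = -2x - 2 = x + 1 (mod 3),
# multiplying by x shifts coefficients up and folds the overflow back in.

def times_x(e):
    carry = e[3]                    # coefficient of x^3 becomes the x^4 term
    out = [0, e[0], e[1], e[2]]     # shift everything up one degree
    out[0] = (out[0] + carry) % 3   # x^4 contributes 1 ...
    out[1] = (out[1] + carry) % 3   # ... plus x
    return out

elem = [0, 1, 0, 0]  # the element x itself
order = 1
while elem != [1, 0, 0, 0]:
    elem = times_x(elem)
    order += 1

print(order)  # -> 80, so x generates all of GF(81)*: the polynomial is primitive
```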
More on binary sequences in the next post on linear feedback shift registers.

***

[1] The same algorithm can be used when k is not prime but a prime power, because you can equate a sequence of length n from an alphabet with k = p^m elements with a sequence of length mn from an alphabet with p elements. For example, 4 is not prime, but we could have generated a De Bruijn sequence for DNA base pair triples by looking for binary sequences of length 6 and using 00 = A, 01 = C, 10 = G, and 11 = T.

[2] The number of primitive polynomials in GF(p^n) is φ(p^n - 1)/n where φ is Euler's totient function. So in our case there were 8 polynomials we could have found.
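The count in footnote [2] is easy to verify with a quick sketch of my own: φ(3^4 - 1)/4 should come out to 8.

```python
from math import gcd

def totient(m):
    # Euler's totient by direct count; fine for small m.
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

# phi(80) = 32, and 32 / 4 = 8 primitive polynomials of degree 4 over GF(3).
print(totient(3 ** 4 - 1) // 4)  # -> 8
```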
https://www.johndcook.com/blog/2019/10/28/generating-de-bruijn/
I'm struggling to find a generic approach to detecting a form in HTML and then submitting it. When the page structure is known in advance for a given page, we of course have several options:

-- Selenium/WebDriver (by filling in the fields and 'clicking' the button)

-- Determining the form of the POST query manually, then reconstructing it with urllib2 directly:

    import urllib2
    import urllib
    import lxml.html as LH

    url = ""
    params = urllib.urlencode([('field_36[]', 73), ('field_37[]', 76), ('field_32[]', 82)])
    response = urllib2.urlopen(url, params)

or with Requests:

    import requests

    r = requests.post("", data = 'Manager')
    r.text

But although most forms involve a POST request, some input fields and a submit button, they vary greatly in their implementation under the hood. When the number of pages to be scraped gets into the hundreds, it's not feasible to define a custom form-filling approach for each.

My understanding is that Scrapy's main added value is its ability to follow links. I presume that this would also include links ultimately arrived at via form submission. Can this ability then be used to build a generic approach to "following" a form submission?

CLARIFICATION: In the case of a form with several dropdown menus, I will typically be leaving these at their default value, and only filling in the search term input field. So locating this field and 'filling it in' is ultimately the main challenge here.

Link Extractors cannot follow form submissions in Scrapy. There is another mechanism called FormRequest that is specifically designed to ease submitting forms. Note that FormRequest cannot handle forms when JavaScript is involved in the submission.

You can look into Selenium with PhantomJS. It can handle JS, and then you can use the CSS selectors from Selenium to pick specific elements on the webpage.
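To illustrate the "detect the form generically" idea, here is a standalone sketch of my own using only the standard library (not Scrapy's FormRequest): parse a page for its first form, collect the named fields with their default values, and fill in the text input before building the request:

```python
from html.parser import HTMLParser

class FormFinder(HTMLParser):
    """Collect the action/method of the first form and its named fields."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.method = None
        self.fields = {}
        self.in_form = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'form' and self.action is None:
            self.in_form = True
            self.action = attrs.get('action', '')
            self.method = attrs.get('method', 'get').lower()
        elif self.in_form and tag == 'input' and 'name' in attrs:
            # Keep default values (e.g. hidden CSRF tokens) as-is.
            self.fields[attrs['name']] = attrs.get('value', '')

    def handle_endtag(self, tag):
        if tag == 'form':
            self.in_form = False

html = """
<form action="/search" method="post">
  <input type="text" name="q" />
  <input type="hidden" name="csrf" value="abc123" />
  <input type="submit" value="Go" />
</form>
"""

finder = FormFinder()
finder.feed(html)
finder.fields['q'] = 'Manager'        # fill in only the search term field
print(finder.method, finder.action)   # -> post /search
print(finder.fields)                  # -> {'q': 'Manager', 'csrf': 'abc123'}
```

The collected fields could then be urlencoded and POSTed to the form's action URL, which is essentially what FormRequest.from_response automates inside Scrapy.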
http://www.devsplanet.com/question/35274729
Recursion occurs when a function calls itself, e.g. void func() { func(); } This is a very bad function, as it will loop forever, constantly calling itself. Eventually, stack space will run out and the program will crash. There therefore needs to be a stopping condition. This condition should be chosen so that the function will always reach it, no matter what parameters are passed to it. You do not want your program crashing every second time you run it. You may have a problem that requires a typical operation to act on a piece of data a number of times. Most recursive functions can be converted into an iterative function (loops), but recursion is usually a simpler approach. You will see how recursion can be used below. Contents of main.cpp : #include <iostream> #include <stdlib.h> using namespace std; Our first recursive function calculates the factorial of a number. The factorial of a number N is calculated as follows : N! = N * (N-1) * (N-2) * ... * 1 int factorial(int num) { Below we provide our stopping condition. We carry on multiplying the current number by the factorial of the value one below it, and eventually the value passed to the function will be 1. We therefore return 1, as this is what the previously calculated value should be multiplied by in the last step. (A more defensive condition is num <= 1, which also handles 0! = 1 and guards against negative arguments.) if (num == 1) return 1; The line below is what makes this function a recursive one. We take the value passed to the function and multiply it by the factorial of the same value minus 1. To make this clearer, we can look at how 3! is calculated. We can see that factorial(1) returns 1 and factorial(2) returns 2 * factorial(1). factorial(2) therefore returns 2 * 1, which is 2. Our original calculation is 3 * factorial(2), which from the above returns 3 * 2, which is 6. The final value returned is therefore 6. return num * factorial(num - 1); } Another example of recursion is given below, where the power of a number is calculated, e.g. 2 to the power of 3 is 8. 
The function takes as its first parameter the base number; the second parameter is the exponent. int power(int base, int index) { Our stopping condition is similar to that of the factorial function above: any number to the power of 0 is 1. if (!index) return 1; Our recursive call is shown below. The base is multiplied by the power of the same base to the exponent minus 1, e.g. 2^3 = 2^1 * 2^2. return base * power(base, index - 1); } The functions are tested below: 5! = 120 and 2^8 = 256. int main() { cout << factorial(5) << endl; cout << power(2, 8) << endl; system("pause"); return 0; } Well done. You should now have an understanding of what recursion is and how it works. Remember not to let your function recurse too deeply, as each time you call the function, new variables are pushed onto the stack. If you run out of space on the stack, your program will crash. Recursion can often make complex problems easily solvable.
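As the tutorial notes, most recursive functions can be converted into iterative ones. A sketch of iterative versions of the two functions above (the helper names are my own, not part of the tutorial's main.cpp) shows the trade-off: a loop and an accumulator replace the chain of stacked calls, so there is no risk of running out of stack space for large inputs:

```cpp
// Iterative factorial: same result as the recursive version, but a loop
// accumulates the product instead of the call stack holding it.
int factorialIter(int num) {
    int result = 1;
    for (int i = 2; i <= num; ++i)
        result *= i;
    return result;
}

// Iterative power: multiplies the base into the result, index times.
int powerIter(int base, int index) {
    int result = 1;
    for (int i = 0; i < index; ++i)
        result *= base;
    return result;
}
```

These produce the same values as the recursive versions: factorialIter(5) is 120 and powerIter(2, 8) is 256.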
http://www.zeuscmd.com/tutorials/cplusplus/34-Recursion.php
Forty three is, as my father-in-law put it, an odd age. You generally know to some accuracy your strengths and your weaknesses. You have become what you will be when you grow up. You realize that your dreams of becoming a world-class artist, musician, or any other alternative path will likely never happen, and what’s more, you’ve come to understand both why and reached an acceptance of these facts. At forty three, you’re equidistant in your career from its beginning (assuming it starts at 21) and your retirement (assuming it ends at 65). Perhaps that has extended - certainly the boomers would prefer to believe that, but my suspicion is that the enhanced quality of life that seems to be so much a part of the self-help literature has more to do with accrued wealth than it does with significant medical advances. I’m not an optimist here. I personally believe that the boomers have lived a privileged existence, one when energy was cheap, when housing was cheap, when significant advances in areas as diverse as computer science, biotechnology, materials science and the like all converged at once. Of course, I was at the very end of that cycle, born a few months before John F. Kennedy felt the assassin’s bullet, my own personal benchmark of the boomer generation’s end, just as VE Day was the more or less official beginning of that generation. It meant that in general the benefits that had accrued from the largest population boom had largely dissipated by the time it came around to me. Perhaps this is why I have become as philosophical as I have - my own particular position as betwixt and between generations has given me foresight due primarily to having seen it all before - the extrapolations are not hard. Indeed, lately, I’ve been reading S.I. 
Hayakawa, one of the seminal figures of semantics and language, and have found surprisingly that words written first in the early days of World War II still have surprising relevance today, perhaps even more so than could be envisioned then. Words are inextricably intertwined with thought, a concept that illuminates why programming in general and the discipline of XML in particular should touch so heavily on the philosophical domains. Semantics is more than just meaning - it is the way in which we map symbols to our interpretations of reality (and vice versa). Cripple or pollute a namespace - make it too limited, or too fraught with resonant (and potentially hateful) meaning - and you will end up with a flawed reality. Certainly practitioners of both marketing and propaganda are, or at the very least should be, aware of this. With marketing, you are attempting to create a set of symbols that not only stand for a product but that represent a desirable version of that product - either by touting the benefits of that product in comparison to its rival, or increasingly by touting the providers of those products as being trustworthy, sexy, cool, desirable. When have you seen an ad for Microsoft Office, or Intel’s processors? Chances are you haven’t, at least not directly - instead, you’ve seen chalk-marked imagery around school kids or guys covered in blue paint, and maybe, as an afterthought, a mention of a given product. The product itself wasn’t important from the marketing standpoint - how much can you say about an office suite in thirty seconds, after all - but the symbolism, oh the symbolism! If you buy from us, if you give us your symbolic representations of wealth, we will give you and your children wisdom and intelligence and success. We will work magic, reach from the world of possibilities and extract from them the ones that will make your children what you want them to be. 
Behold our namespace, and know, by taking it in as your own, that you will partake of our deeper mysteries. When Hayakawa was writing his first, ground-breaking book, Language in Thought and Action, we were, as a society, just beginning to explore the power of such semantic manipulation. Hitler’s name has become synonymous with propaganda, but Hitler’s ability to work with language was remarkably crude, the use of blunt hammers to inflame the passions of a country while hiding the political skullduggery occurring in the background. Sixty years has added refinements in that manipulative power to make it far more adroit, more targeted, more capable of being used as a scalpel rather than a sword. A brand is nothing but a namespace URI for a company’s namespace, a bundle of semantics that extends beyond the shape of words and into the shape of symbols. As I look out the window, I see the Canadian flag flying, a red maple leaf against a white (and red bracketed) background. As a symbol it’s potent, and shapes the thoughts of the people under it as readily as the people shape their perspective of the country around them. It is a symbol of a northern country, one where ice and fallen leaves are far and away the norm. It is a symbol of peace and tranquility, walking through a quiet woods on a late-fall day, a feeling of serenity wrapped around you. Brand Canada works surprisingly well because the semantic associations are, in general, those that appeal strongly to people who seek such tranquility and cooperative spirit. Brand America, on the other hand, seriously needs to hire a new marketing director, because the symbolism and semantics are far more redolent of Hitler’s early days than they are of a sophisticated nation. 
War, crisis, terror, homeland security, bunker-busters, all terrain vehicles, patriotism, hate, militias, insurgents, traitors, border walls, ‘Bring em on!’, ‘You’re either with us or against us’ … you can practically see the distorted swastikas on blood-red banners, Centurion salutes, “Sieg Heil” echoing from the building square, thousands of books blazing in the night sky - the fiery death of one namespace so that another, cruder and with far crueler words, can ascend. My readers, gentle readers, you deal with words at a fundamental level, at a point down deep in the “stack” where such words exist in a pre-semantic fashion, dealing with the bindings and tools to provide for computers (and indirectly for people) a means of handling the semantics, to manipulate it to accomplish tasks far beyond what emerged “naturally”. Yet you whose living is in words and namespaces and the very matrix where the mapmaker works also have a responsibility to show those who are not quite so close where the flaws in the code are, where the hacking is being done that is changing the very shapes of the thoughts in people’s heads. I have talked, more than once, about the similarities between the computational semanticists, the XML gurus of this day and age, and the wizards of our imagination and the alchemists of our history. A wizard, a wise one, commands mystical power to alter and shape the realities of those people around them - something that most certainly describes the semanticists of today’s age - but also has both the wisdom to understand the dangers of such abuse and the ethical responsibility to attempt to stop the abuse of such powers by those who have “gone over to the dark side”. I believe this transcends politics, although ultimately it is also fundamentally about politics. 
We, as stewards of these semantics, have let the language become polluted and divisive, have failed to challenge those who seek to foment hate and bring themselves material gain by subverting the thoughts of others. I do not find any contradiction in the fact that one of the most vocal critics of the “mental pollution” of this day and age is Noam Chomsky, whose initial claims to fame were as a linguist and metalinguist, for he understands full well the power of language, symbolism and their effects on thought. There are few others, however, and it is perhaps extraordinarily important for us as semanticists (and programmers) to realize that neutrality in the face of such corruption is not neutrality at all, only cowardice. For others to dictate the symbols of the namespace is to cede power to them, and power ceded to the venal and avaricious is never a good investment. So I ask you, gentle readers, to consider these words as I cut myself a slice of birthday cake. Perhaps forty three is not such a bad age to be after all - perhaps, in fact, it is that age when I must take up the staff, move beyond the lessons of my training and my journeyman period, and recognize that maturity is ultimately that point in your life when you recognize that there are responsibilities in your life that you must do, not because you are getting paid or it will make you famous, but simply because these things need to be done. * bland statement 1: Why not say 'America is dead wrong' or 'we must change the world' or perhaps something a bit more assured like 'we all die someday'. * Jim, Huh, I would have thought the connection was obvious. XLINK. What else? Had a moment of 'info overload' rage... passed now, and I have Hayakawa on my list of authors to read. ta, Jim Fuller I wish it were so black and white, Kurt. What you will learn in your next ten years is the hazy recursion of boundaries where before you have seen clean borders. 
Symbols have a quality of motion, and the force that creates that motion itself changes the intelligence that gives it impetus. Corruption is made of innocence. -- Len. -- Kurt Very interesting read. As someone who recently turned 40, I've found my vision becoming similar to that which you describe (speaking particularly to the personal aspects of your essay). If those illusions include us vs. Canadians, I can only say, well, the Canadians are certainly willing to do the work for the money, so I don't see it as much more enlightened: just feeling less threatened.
http://www.oreillynet.com/xml/blog/2006/06/ruminations_on_turning_43.html?CMP=OTC-TY3388567169&ATT=Ruminations+on+Turning+43
Reference Table of Contents - Installation - Socket - Transport - Quirks Installation As a browser client Download the package the way you want. Then load it by using either a bundler, such as webpack, Browserify or Rollup, or a script tag. Bundler import cettia from "cettia-client/cettia-bundler"; const socket = cettia.open("/cettia"); script tag <script src="/path/to/cettia-browser.min.js"></script> <script> var socket = cettia.open("/cettia"); </script> As a Node.js client cettia.js is available on npm under the name of cettia-client. Install the module. npm install cettia-client --save It will install the latest version, adding it to the dependencies entry in package.json in the current project folder. Then load it as an npm module. var cettia = require("cettia-client"); var socket = cettia.open(""); Socket The interface to represent a client-side socket. Opening a socket To create a socket and connect to the server, use cettia.open(uri: string, options?: SocketOptions): Socket or cettia.open(uris: string[], options?: SocketOptions): Socket. The returned socket is in the connecting state. Here the URI is used not only to identify the name of an endpoint but also to determine the transport type, so it should follow a specific URI format according to the transport. But it’s allowed to use a plain form of URI like or /cettia for convenience. If a connection is established successfully, then new and open events are fired. If not, the close event is fired. Note - A plain URI is translated to URIs which follow the WebSocket, HTTP Streaming and HTTP Long Polling transports, respectively, in order. To change this default order, you should use fully qualified URIs instead of a plain URI. - A relative URI is valid only in browser. Opening a socket. 
// A plain URI // Internally the URI is translated to fully qualified URIs like the below form cettia.open(""); // Fully qualified URIs // A fully qualified URI follows the corresponding transport's own URI format cettia.open(["ws://localhost/cettia", "", ""]); Properties These are read only. State The current state of the socket. socket.state(); Lifecycle A socket is always in a specific state that can be accessed by the state() method. Note that regardless of the lifecycle, a reference to the socket isn’t affected by disconnection and reconnection, and only the new event among reserved events determines the lifecycle. The following is a list of states which a socket can be in. connecting: The connecting event is fired. Given the URIs, transports are created through the transport factories specified by the transports?: ((uri: string, options: TransportOptions) => Transport)[] option and used to establish a connection over the wire. Each transport should establish a connection within the time specified by the timeout?: number option. If it turns out that a transport corresponding to the current URI is not available, the next URI is tried. State transition occurs to opened: if one of the transports succeeds in establishing a connection. closed: if the close() method is called. closed: if every transport fails to connect in time. opened: The connection is successfully established and communication is possible. If the server issues a new identifier for the socket, the new event is fired as the beginning of the new lifecycle and the end of the old lifecycle. Then, the open event follows. This happens if it’s the first time connecting to the server, so there is no corresponding socket in the server, or if a connection was disconnected and reconnection didn’t occur for a long time, so the socket was deleted from the server. If the server doesn’t issue a new identifier, that is to say, the client reconnects in time, only the open event is fired, which doesn’t affect the current lifecycle. 
Only in this state can the socket send and receive events via the connection. State transition occurs to closed: if the close() method is called. closed: if the connection is closed cleanly. closed: if heartbeat fails. closed: if the connection is disconnected due to some error. closed: The connection has been closed, has been regarded as closed or could not be opened. The close event is fired. If the reconnect?: (lastDelay: number, lastAttempts: number) option is set to false or returns false, the whole lifecycle ends here. In this state, sending and receiving events is not allowed, but events sent in this state are passed to the cache event without throwing an exception, so that you can cache and send them on the next reconnection. The same applies to the server. State transition occurs to waiting: if the reconnect option returns a positive number. waiting: The socket waits out the reconnection delay. The waiting event is fired with the reconnection delay in milliseconds and the total number of reconnection attempts. State transition occurs to connecting: after the reconnection delay. closed: if the close() method is called. Handling errors To capture any error happening in the socket, add an error event handler. As an argument, the Error object in question is passed. Exceptions from the underlying transport are also propagated. Note - In most cases, there is no error that you can ignore safely. You should watch this event and log thrown errors. - Errors thrown by user-created event handlers are not propagated to the error event. Sending and receiving events You can send an event using send(event: string, data?: any) and receive an event using on(event: string, onEvent: (data?: any) => void). Any type of data can be sent and received, regardless of whether it is text, binary or composite. Note - Any event name can be used except the reserved ones: connecting, new, open, cache, waiting and error. - If data or one of its properties is a Buffer in Node or an ArrayBuffer in browser, it is regarded as binary. 
Though, you don’t need to be aware of that. - To manage a lot of events easily, use a URI as the event name format, like /account/update. - If you send an event via a closed socket, it will be delegated to that socket’s cache event, so you don’t need to worry about the socket’s state when sending an event. The client sends an event and the server echoes back to the client. Client cettia.open("", {reconnect: false}) .on("open", function() { if (typeof exports === "object") { // Node this.send("echo", new Buffer("echo")); this.send("echo", {text: "echo", binary: new Buffer("echo")}); } else { // Browser // From Encoding standard var encoder = new TextEncoder(); this.send("echo", encoder.encode("echo")); this.send("echo", {text: "echo", binary: encoder.encode("echo")}); } }) .on("echo", function(data) { console.log(data); }); Server server.on("socket", function(socket) { socket.on("echo", function(data) { console.log(data); this.send("echo", data); }); }); The server sends an event and the client echoes back to the server. Client cettia.open("", {reconnect: false}) .on("echo", function(data) { console.log(data); this.send("echo", data); }) Server server.on("socket", function(socket) { socket.on("open", function() { socket.send("echo", new Buffer("echo")); socket.send("echo", {text: "echo", binary: new Buffer("echo")}); }); socket.on("echo", function(data) { console.log(data); }) }); Reconnection Reconnection has been disabled in the code snippets on this page for convenience of testing, but it’s essential for production, so it’s enabled by default. The default strategy generates a geometric progression with initial delay 500 and ratio 2 (500, 1000, 2000, 4000 …). To change it, set a reconnect?: (lastDelay: number, lastAttempts: number): number function, which receives the last delay in ms (or null at first) and the total number of reconnection attempts so far, and should return a reconnection delay in ms, or false not to reconnect. Note - Don't add event handlers during dispatch. 
Because reconnection doesn’t remove existing event handlers, they would be duplicated. - If reconnection is done in time, the server doesn’t delete the corresponding socket, and the open event is fired without the new event, which doesn’t affect the current lifecycle. But if reconnection isn’t done for a long time, the server deletes the corresponding socket, and after successful reconnection the new event is fired and then the open event is fired, which initiates the new lifecycle. Offline handling Once the underlying transport is disconnected, it’s not possible to send an event through the socket until a new transport establishes a connection. To cache an event which is being passed to the send method while offline, and send it on the next reconnection, add a cache event handler along with new and open event handlers. The cache event is fired if the send method is called when there is no connection, with an array of the arguments used to call the send method. Note - There is no default behavior for offline handling. Caching events while offline and sending them on next reconnection. 
var socket = cettia.open(""); // A queue containing events the client couldn't send to the server while disconnected var cache = []; // Fired if the send method is called when there is no connection socket.on("cache", function(args) { // You can determine whether or not to cache these arguments used to call the send method // For example, in some cases, you may want to avoid caching to deliver live data in time cache.push(args); }); socket.on("open", function() { // Now that communication is possible, you can flush the cache while(socket.state() === "opened" && cache.length) { // Removes the first event from the cache and sends it to the server one by one var args = cache.shift(); socket.send.apply(socket, args); } }); socket.on("new", function() { // The old lifecycle ends and the new lifecycle begins // If the cache is not empty, it means that there are cached messages that should have been sent through the old socket if (cache.length) { // If you don't empty the cache here, cached messages will be sent through the new socket on the following open event } }); Extending the lifecycle to the next page To extend the lifecycle of the socket to the next page, that is to say, for the socket of the next page to inherit the lifecycle of the socket of the current page, set the same name?: string option on both sockets; it is an identifier that uniquely specifies the socket within the document. It enables the server to cache events which cannot be sent to the socket of the previous page due to temporary disconnection during page navigation, and send them on the next reconnection to the socket of the next page. Since these sockets are the same in terms of the lifecycle, you can deal with them using a single socket reference in the server and actually don’t need to know what’s happening in the client. With this option, you don’t need to stick with the single-page application model to avoid message loss from page navigation. 
Note - The lifecycle is extended only within the browsing context. That’s why if you duplicate a tab or window, a socket of the new tab will have a different lifecycle from that of the original tab. - In a page where the socket inherits the lifecycle of the socket of the previous page, the new event is not fired, of course. If some resources are supposed to be initialized on the new event before being used, it won’t work in such pages. - The name option doesn’t require the same URI, so you can include a variable parameter in the URI, like "/cettia?now=" + Date.now(). - This feature monopolizes window.name as a storage for the browsing context. Make sure that none of your application uses window.name. Handling the result of the remote event processing You can get the result of event processing from the server when sending an event using send(event: string, data?: any, onFulfilled?: (data?: any) => void, onRejected?: (data?: any) => void), and set the result of event processing for the server when receiving an event using on(event: string, handler: (data?: any, reply?: {resolve: (data?: any) => void; reject: (data?: any) => void}) => void), in an asynchronous manner. You can apply this functionality to acknowledgements, remote procedure calls and so on. Note - If the server doesn’t call either the attached fulfilled or rejected callback, these callbacks won’t be executed in any way. The same applies to the client. Therefore, it should be dealt with as a kind of contract. - Determine beforehand whether to use the rejected callback or not, to avoid writing unnecessary rejected callbacks. For example, if a required resource is not available, you can execute either the fulfilled callback with null or the rejected callback with an error, e.g. ResourceNotFoundError. The client sends an event attaching callbacks and the server executes one of them with the result of event processing. 
Client cettia.open("", {reconnect: false}) .on("open", function(data) { this.send("/account/find", "flowersinthesand", function(data) { console.log("fulfilled", data); }, function(data) { console.log("rejected", data); }); }); Server server.on("socket", function(socket) { socket.on("/account/find", function(id, reply) { console.log(id); try { reply.resolve(accountService.findById(id)); } catch(e) { reply.reject(e.message); } }); }); The server sends an event attaching callbacks and the client executes one of them with the result of event processing. Client cettia.open("", {reconnect: false}) .on("/account/find", function(id, reply) { console.log(id); try { reply.resolve(accountService.findById(id)); } catch(e) { reply.reject(e.message); } }); Server server.on("socket", function(socket) { socket.on("open", function() { socket.send("/account/find", "flowersinthesand", function(data) { console.log("fulfilled", data); }, function(data) { console.log("rejected", data); }); }); }); Transport The interface to represent a full-duplex connection. Implementation According to the technology, a WebSocket transport factory, an HTTP Streaming transport factory and an HTTP Long Polling transport factory are provided, accessible through cettia.transport.createWebSocketTransport, cettia.transport.createHttpStreamTransport and cettia.transport.createHttpLongpollTransport respectively. Compatibility The compatibility of Cettia JavaScript Client depends on transport compatibility. Browser The browser support policy is the same as that of jQuery. A word in the WebSocket cell stands for the WebSocket protocol the browser implements, and in order to use WebSocket in a given browser, the server should implement the WebSocket protocol that browser implements as well. A word list in the HTTP Streaming and HTTP Long Polling cells stands for the host objects used to establish a read-only channel, and the final host object is determined through feature detection automatically. 
Note - 1: only available in same-origin connections - 2: the xdrURL option is required. - 3: binary features are not available. Node.js Quirks There are problems which can’t be dealt with in a non-invasive way. The browser limits the number of simultaneous connections Applies to: HTTP transport According to the HTTP/1.1 spec, a single-user client should not maintain more than 2 connections. This restriction actually varies with the browser. If you consider multiple topics to subscribe and publish, utilize a custom event using a single connection. Sending an event emits a clicking sound Applies to: cross-origin HTTP connection on browsers not supporting CORS If a given URL is cross-origin and the browser doesn’t support CORS, such as Internet Explorer 6, an invisible form tag is used to send data to the server. Here, a clicking sound occurs every time the form is submitted. There is no workaround.
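Returning to the reconnect option described in the Reconnection section above, here is a minimal custom strategy matching the documented (lastDelay, lastAttempts) shape: a capped exponential backoff. The 10-attempt limit and 30-second cap are illustrative choices, not Cettia defaults:

```javascript
// Custom reconnect strategy: receives the last delay in ms (null on the
// first attempt) and the number of attempts so far; returns the next delay
// in ms, or false to stop reconnecting.
function reconnect(lastDelay, lastAttempts) {
  if (lastAttempts >= 10) {
    return false; // give up after 10 attempts (illustrative limit)
  }
  // Start at 500ms like the default strategy, then double each time
  var delay = lastDelay === null ? 500 : lastDelay * 2;
  return Math.min(delay, 30000); // cap the delay at 30 seconds (illustrative)
}

// It would be passed as an option, e.g.:
// var socket = cettia.open("/cettia", {reconnect: reconnect});
```

This keeps the default geometric progression for early attempts while preventing unbounded delays and endless retries.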
https://cettia.io/projects/cettia-javascript-client/1.0.0-Beta2/reference/
I have to write a program for an online class that accepts data via the function enterData() and passes it to printCheck() to display. I am new to C++, and since it is an online class it is hard to get help. I have to compose a check and be able to use the functions to fill in the date, first and last name, and amount. I have the check set up and it displays properly, but I have no idea where to even begin with the functions. I will put what I have on here. Any suggestions or help would be appreciated. #include "stdafx.h" #include <iostream> using namespace std; void enterData(int date, char firstName, char lastName, double amount); int _tmain(int argc, _TCHAR* argv[]) { int date; char firstName, lastName; double amount; enterData(12102007); cin >> date; enterData(Bruce); cin >> firstName; enterData(Chiesa); cin >> lastName; enterData(547.50); cin >> amount; cout << "Zzyz Corp Date: (date)"<<endl; cout << "1164 Sunrise Avenue "<<endl; cout << "Kalispell, Montana\n "<<endl; cout << "Pay to the order of: (firstName lastName) $ (amount)\n "<<endl; cout << "UnderSecurity Bank "<<endl; cout << "Missoula, MT "<<endl; cout << " ____________________"<<endl; cout << " Authorized Signature"; cout << endl << endl; return 0; }
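One way to structure it (a sketch, not the only approach): let enterData() do all the reading, then hand the values to the display step. Note that a single char can't hold a name, so std::string is used instead. The formatCheck() helper is my own addition so the formatting is separated from the console I/O:

```cpp
#include <iostream>
#include <sstream>
#include <string>
using namespace std;

// Build the check text from the collected values. Returning a string keeps
// the formatting separate from console I/O, which makes it easy to test.
string formatCheck(const string& date, const string& firstName,
                   const string& lastName, double amount) {
    ostringstream out;
    out << "Zzyz Corp                Date: " << date << "\n"
        << "1164 Sunrise Avenue\n"
        << "Kalispell, Montana\n\n"
        << "Pay to the order of: " << firstName << " " << lastName
        << "   $" << amount << "\n"
        << "UnderSecurity Bank\n"
        << "Missoula, MT\n"
        << "        ____________________\n"
        << "        Authorized Signature\n";
    return out.str();
}

// enterData prompts for the values, then passes them on for display
void enterData() {
    string date, firstName, lastName;
    double amount;
    cout << "Date: ";        cin >> date;
    cout << "First name: ";  cin >> firstName;
    cout << "Last name: ";   cin >> lastName;
    cout << "Amount: ";      cin >> amount;
    cout << formatCheck(date, firstName, lastName, amount);
}
```

Then _tmain (or main) just calls enterData(). If the assignment requires printCheck() specifically, make it a thin wrapper that prints formatCheck()'s result.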
https://www.daniweb.com/programming/software-development/threads/91051/help-with-a-function
Object-oriented programming (OOP) is an approach used in the development of software. Before OOP there was the procedure-oriented programming approach (POP). In POP, programmers write functions to accomplish each task and the data is shared among these functions. The limitations of POP are that it is hard to manage the complexity of big projects and to ensure data security. The OOP approach helps overcome these limitations. In OOP, data is treated as critically important and is not allowed to move freely around the system. Data is accessible only to the functions tied to it. In this approach, the problem is decomposed into objects, and it is easier to manage the complexity of a system based on objects. These objects can communicate with each other with the help of functions. The OOP approach follows a bottom-up design approach. Another advantage of OOP is that one can add new data and functions whenever necessary. The basic concepts of OOP are as below: - Class: The class creates a user-defined data type. The class defines the data and behavior of the object; it is the blueprint for objects. Objects are the instances of these classes in OOP. For example, if we take flower as a class, then rose, jasmine, lily etc. are members of the class. Once you define a class, you can create any number of objects of this class. The general syntax for defining a class is the following: class <ClassName> { //Fields declaration //Method definitions } 2. Object: The object is the basic building block in OOP. A programming problem is analyzed in terms of objects and the nature of communication between them. Every object has data and functions to manipulate that data. Objects are basic runtime entities and may represent a real-world thing like a person, car etc., or user-defined data, such as lists, time etc. The data represents the characteristics of the object that differentiate it from other objects of the same class. 
For example, if we take an object of the class Book, then the author, the number of pages, and the name of the book are the characteristics of the object. Objects interact with each other by sending messages. Objects occupy memory. In C# objects are created using the 'new' keyword as below: Type <obj_name> = new Type(); Example: using System; namespace OOPDemo { class Book //Defining class { //Fields public string BookName; public string Author; public int NoOfPages; //Methods public void ShowDetails() { Console.WriteLine("Book Name: {0}",BookName); Console.WriteLine("Author: {0}", Author); Console.WriteLine("No. Of Pages: {0}", NoOfPages); } } class Program { static void Main(string[] args) { Book b1 = new Book(); //Object creation using new operator b1.BookName = "Software Architecture"; b1.Author = "M. S. Murthy"; b1.NoOfPages = 100; Book b2 = new Book(); b2.BookName = "Mathematics"; b2.Author = "B. S. Swami"; b2.NoOfPages = 200; b1.ShowDetails(); b2.ShowDetails(); Console.ReadLine(); } } } The output of the above program is: Book Name: Software Architecture Author: M. S. Murthy No. Of Pages: 100 Book Name: Mathematics Author: B. S. Swami No. Of Pages: 200 3. Data Abstraction: Abstraction means showing only essential information, without adding unnecessary background details, to the outside world. For example, we use cell phones to call or to text, but we don't know how these functionalities are implemented internally. This internal implementation is abstracted from users. In OOP, data abstraction reduces complexity and increases efficiency. 4. Encapsulation: The process of combining data and the functions that operate on that data in a single unit (class) is known as encapsulation. In OOP, data and functions are combined by defining a class. Only the functions within the class have access to the fields of the class. This hiding of data from the outside world is known as data hiding. Encapsulation ensures data security in OOP. 
In C#, encapsulation is achieved by using access modifiers.

5. Inheritance: One of the most important reasons for using the OOP approach is that it supports reusability. Inheritance is the process by which a new class is created from an old class. In OOP terms, the newly created class is called the 'derived class' or 'child class', and the old class is known as the 'base class' or 'parent class'. The derived class inherits the functionality of the base class, and new functionality can also be added to the derived class. The base class defines the common attributes that are shared by the derived classes. For example, suppose vehicle is a base class; two-wheeler and four-wheeler are derived classes that inherit the characteristics of vehicle and also have some specific characteristics of their own. A class in C# is inherited as follows:

class BaseClass
{
    //Fields declaration
    //Method definitions
}

class DerivedClass : BaseClass
{
    //Fields declaration
    //Method definitions
}

Example:

using System;

namespace InheritanceDemo
{
    //Parent Class
    class Parent
    {
        public void Show()
        {
            Console.WriteLine("This is parent class.");
        }
    }

    //Child Class inherits Parent class
    class Child : Parent
    {
        public void Display()
        {
            Console.WriteLine("This is child class.");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Child C = new Child();    //Creating object of child class
            C.Show();                 //Accessing Parent class method
            C.Display();              //Accessing Child class method
            Console.ReadLine();
        }
    }
}

The output of the above program is:

This is parent class.
This is child class.

6. Polymorphism: Another important feature of OOP is polymorphism. Polymorphism is the ability to act differently in different situations. For example, the addition of two numbers produces a third number, but the "addition" of two strings produces a third string that is the concatenation of the two. In this case we can use a single function name, and depending on the inputs the appropriate function is called.
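The number-versus-string addition idea above can be sketched with method overloading (one form of compile-time polymorphism): the same method name, and the compiler picks the overload based on the argument types. The Calculator class name here is illustrative:

```csharp
using System;

// Method overloading sketch: one name (Add), two parameter lists.
// The compiler selects the right overload from the argument types.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;              // adding two numbers gives a third number
    }

    public string Add(string a, string b)
    {
        return a + b;              // "adding" two strings concatenates them
    }
}

public class Program
{
    public static void Main()
    {
        Calculator c = new Calculator();
        Console.WriteLine(c.Add(2, 3));          // int overload: 5
        Console.WriteLine(c.Add("Hel", "lo"));   // string overload: Hello
    }
}
```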
There are two types of polymorphism: static (compile-time) polymorphism and dynamic (runtime) polymorphism. In static polymorphism, the operation to be performed is decided at compile time; in dynamic polymorphism, it is decided at runtime. Function overloading, function overriding, and operator overloading are examples of polymorphism.

Advantages of OOP:

- Software complexity is managed because the system is divided into objects, so it is easier to maintain large systems.
- Code can be reused to a great extent.
- It is easy to partition work based on objects.
- An old system can be upgraded to a new system easily.
- With the help of data hiding, the programmer can build a secure program.
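The function overriding mentioned above is the usual route to runtime (dynamic) polymorphism in C#: a base-class method marked virtual can be replaced in a derived class with override, and the call is resolved at runtime from the actual object type. The Shape/Circle names below are illustrative:

```csharp
using System;

// Runtime polymorphism sketch: the method called through a base-class
// reference is decided at runtime by the object's actual type.
public class Shape
{
    public virtual string Describe()
    {
        return "A generic shape";
    }
}

public class Circle : Shape
{
    public override string Describe()   // replaces the base implementation
    {
        return "A circle";
    }
}

public class Program
{
    public static void Main()
    {
        Shape s = new Circle();          // base-class reference, derived object
        Console.WriteLine(s.Describe()); // resolved at runtime: "A circle"
    }
}
```

This contrasts with the overloading example: there the compiler chooses the method from the argument types, while here the runtime chooses it from the object's actual type.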
http://knowledgetpoint.com/csharp/introduction-to-oops/