Hi, I dug around HPSF and found the following bug. The DocumentSummaryInformation of Word 8.0/97 documents has two sections, but getCategory() (Category is located in the section with index 0) implicitly calls getSingleSection(), which throws an exception if sectionCount != 1. Word 6.0/95 documents have a single section, so they work fine. Here's my solution to the problem until you find a better way; then my class can simply be removed and everything will work OK. After putting the code below through this form it may need some beautifying (indentation). Regards, Mickey

<code>
import java.util.List;

import org.apache.poi.hpsf.DocumentSummaryInformation;
import org.apache.poi.hpsf.PropertySet;

/**
 * This class is a manual workaround for the HPSF
 * DocumentSummaryInformation.getCategory() bug. That method calls
 * getProperty(), which in turn calls getSingleSection().getProperty().
 * getSingleSection() throws a NoSingleSectionException for Word 8.0/97-2000
 * documents because these have two sections and only one is expected.
 * Here's the stack trace:
 *
 *   org.apache.poi.hpsf.NoSingleSectionException: Property set contains 2 sections.
 *     at org.apache.poi.hpsf.PropertySet.getSingleSection(PropertySet.java)
 *     at org.apache.poi.hpsf.SpecialPropertySet.getSingleSection(SpecialPropertySet.java)
 *     at org.apache.poi.hpsf.PropertySet.getProperty(PropertySet.java)
 *     at org.apache.poi.hpsf.DocumentSummaryInformation.getCategory(DocumentSummaryInformation.java)
 *
 * @author Miroslav Obradovic (micky@eunet.yu)
 */
public class MyDocumentSummaryInformation extends DocumentSummaryInformation {

    /**
     * Creates a DocumentSummaryInformation from a given PropertySet.
     */
    public MyDocumentSummaryInformation(final PropertySet ps)
            throws org.apache.poi.hpsf.UnexpectedPropertySetTypeException {
        super(ps);
    }

    /**
     * Returns the stream's category (or <code>null</code>).
     */
    public String getCategory() {
        int pid = org.apache.poi.hpsf.wellknown.PropertyIDMap.PID_CATEGORY; // equals 2
        String category = null;
        List sections = getSections();
        int sectionCount = (int) getSectionCount();
        org.apache.poi.hpsf.Section section = null;
        org.apache.poi.hpsf.Property[] properties = null;

        // Iterate through the sections, get their properties, and look for
        // Category. Category should be found in the section with index 0.
        for (int i = 0; i < sectionCount; i++) {
            try {
                // Get the current section.
                section = (org.apache.poi.hpsf.Section) sections.get(i);

                // Get the section's properties and look for Category.
                properties = section.getProperties();
                for (int j = 0; j < properties.length; j++) {
                    if (properties[j].getID() == pid) {
                        category = (String) properties[j].getValue();
                        break;
                    }
                }

                // If Category was found, stop looking.
                if (category != null) {
                    break;
                }
            } catch (Exception e) {
                category = null;
            }
        }
        return category;
    }
}
</code>

Miroslav, can you please attach your Word file to this bug in Bugzilla? Or better, can you create a minimal Word file which behaves as you described? I need a test case to verify the bug. Thanks!

The author of this bug did not provide a test file, nor did he respond to any e-mail.

Hi there, I'm sorry for the delay. I'm not used to using these forums and such. I just found a workaround for a problem I once had and thought it would be useful to post it in case someone else needs it. It was long ago, but I'll try to find the sample Word file. Best regards, Miroslav

Created attachment 6990 [details]
Here's the Java code (a Word document plain-text content extractor) I developed when I noticed the bug.
Created attachment 6991 [details]
This is the POI library I used for my project when I noticed the bug.

Well, here we are. I have added two attachments, and here are a few words about them. The second attachment (the POI library) is the poi-1.5.1.jar file I was using when I noticed the bug (or what I think was a bug). I don't remember the date well, but I think it was the latest stable version at the time I wrote the code. The first attachment is part of the project I was working on when I noticed this bug: a content (plain-text) extractor for the Word file format. I don't know whether you already have something similar in POI, but if you find this code useful (there are a lot of comments in there!), you can use it freely (though it would be nice if you mentioned me as a developer somewhere :-) ). The problem is that some new Summary Info "pages" have been added in newer versions of MS Word, and I think you have assumed in POI that there is only a single one. I guess you could use a solution similar to the one I attached (in MyDocumentSummaryInfo.java), since Microsoft can keep adding more of these "pages" with new releases of Office. I hope this was useful :-) Best regards, Miroslav

Sorry, I forgot to mention: the sample Word file you requested (sample.doc) is included in the first of the two attachments. Mickey

Oh, I'm the most boring man today... I tried to download the attachments, but I guess you must know which type of binary file each one is to download and save it properly. The first attachment should be saved as .zip (created with WinZip 8.1); the second should be saved as .jar. I hope this is the last one :-) Bye, M

The current CVS HEAD can process your sample application without any flaws. I suggest an upgrade.
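For readers who hit the same exception today, here is a rough sketch of how the category can be read with a more recent POI build, which no longer insists on a single section. The class and method names are quoted from memory and should be checked against the POI release actually in use:

<code>
import java.io.FileInputStream;

import org.apache.poi.hpsf.DocumentSummaryInformation;
import org.apache.poi.hpsf.PropertySetFactory;
import org.apache.poi.poifs.filesystem.DocumentInputStream;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;

public class CategoryDump {
    public static void main(String[] args) throws Exception {
        // Open the OLE2 container of the .doc file passed on the command line.
        POIFSFileSystem fs = new POIFSFileSystem(new FileInputStream(args[0]));

        // Read the DocumentSummaryInformation stream and let HPSF parse it.
        DocumentInputStream stream =
                fs.createDocumentInputStream(DocumentSummaryInformation.DEFAULT_STREAM_NAME);
        DocumentSummaryInformation dsi =
                (DocumentSummaryInformation) PropertySetFactory.create(stream);

        // getCategory() looks up PID_CATEGORY in the first section, so a second
        // (user-defined) section no longer causes a NoSingleSectionException.
        System.out.println("Category: " + dsi.getCategory());
    }
}
</code>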
https://bz.apache.org/bugzilla/show_bug.cgi?id=14734
Dump: CC-MAIN-2020-29 | Source: refinedweb | Word count: 854 | Flesch reading ease: 50.43
I am trying to run the following command:

composer network ping -n calma-network -p hlfv1 -i admin -s adminpw

I get this error:

Error: Error trying to ping. Error: Composer runtime (0.8.0) is not compatible with client (0.11.0)
Command failed

The versions look the same:

composer -v
composer-cli            v0.11.0
composer-admin          v0.11.0
composer-client         v0.11.0
composer-common         v0.11.0
composer-runtime-hlf    v0.11.0
composer-runtime-hlfv1  v0.11.0

The installed client packages are all the same version; the 0.8.0 in the error refers to the Composer runtime that was deployed to the Fabric peers along with the business network, which is older than your client. Maybe it's a bug, but first try rebooting the Hyperledger Fabric like this:

stopFabric.sh
teardownFabric.sh
startFabric.sh

See if it works.
https://www.edureka.co/community/28896/hyperledger-composer-incompatible-versions-error
Dump: CC-MAIN-2019-47 | Source: refinedweb | Word count: 214 | Flesch reading ease: 55.4
CliffsNotes Understanding Life Insurance
By Bart Astor
IDG Books Worldwide, Inc., An International Data Group Company
Foster City, CA • Chicago, IL • Indianapolis, IN • New York, NY

Your shortcut to success in personal finance! IN THIS BOOK:
- Make sense of the different types of life insurance
- Determine which type of life insurance is best for you
- Assess what your life is worth
- Understand the role of life insurance in estate planning
- Choose a company, policy, and agent that match your needs
- Reinforce what you learn with CliffsNotes Review
- Find more life insurance information in CliffsNotes Resource Center and online at www.cliffsnotes.com

Things change. To stay up to date, visit the CliffsNotes Web site and take advantage of additional references with links, interactive tools for selected topics, and the entire CliffsNotes catalog, including titles you can sample or download. ISBN 0-7645-8512-6.

About the Author: Bart Astor, a freelance writer based in Virginia, is the author of six books and numerous articles that have appeared in national magazines. His previous book, Baby Boomers Guide to Caring for Aging Parents, has been featured on several radio and television talk shows, including ABC's Good Morning America, Fox Morning News, and National Public Radio's Marketplace. Astor was also series editor and coauthor of four books on college admission and financial aid, a contributing author and editor for various college guides, and publisher of a national newsletter on college planning.

Copyright © 1999 IDG Books Worldwide, Inc. All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means without the prior written permission of the publisher. Library of Congress Catalog Card No.: 99-64597. ISBN 0-7645-8515-0. Printed in the United States of America. The remainder of the copyright page carries the publisher's standard notices: trademark statements (Cliffs, CliffsNotes, and related logos are trademarks of Cliffs Notes, Inc.; all other brand and product names belong to their respective owners), contact information for sales, educational, reseller, and foreign-rights inquiries, a long list of international distributors, the Publisher's Acknowledgments (editorial project editors Christine Meloy Beck and Pamela Mourouzis; acquisitions editors Mark Butler and Karen Hansen; senior copy editor Tamara Castleman; technical reviewers; indexing, proofreading, and the IDG Books Indianapolis production department), and the usual disclaimer: the book is intended to offer general information on life insurance; the publisher and author are not engaged in rendering legal, tax, accounting, investment, real estate, or similar professional services; they make no representations or warranties about the accuracy or completeness of the contents, disclaim any implied warranties of merchantability or fitness for a particular purpose, and shall not be liable for loss of profit or other damages; laws and their interpretation may have changed since the manuscript was completed, the strategies outlined may not be suitable for every individual, and readers needing expert assistance should obtain the services of a professional.

Table of Contents

Introduction (Why Do You Need This Book?; How to Use This Book; Don't Miss Our Web Site)
Chapter 1: How Life Insurance Differs from Other Types of Insurance (Myths about Life Insurance; The Purposes of Life Insurance)
Chapter 2: How Much Life Insurance Do You Need? (The Economics of Your Life; Your Survivors' Needs; Age — Yours and Your Survivors')
Chapter 3: Life Insurance in Estate Planning (Beneficiaries; Trusts; Rates of Return; Dividends; Annuities; Viatical Settlements)
Chapter 4: Term Insurance (Understanding Term Insurance; Exploring Options in Term Insurance; Comparing Term Insurance Policies)
Chapter 5: Whole Life Insurance (Cash Value; Traditional Whole Life; Interest-Sensitive Whole Life; Single-Premium Whole Life; Comparing Whole Life Insurance Policies)
Chapter 6: Universal Life Insurance (How Universal Life Works; Death Benefits Options; Premiums; Borrowing Against Your Cash Value; Comparing Universal Life Insurance Policies)
Chapter 7: All the Others (Variable Life; Group Life; Industrial Life; Mortgage Insurance; Endowment Policies; Riders)
Chapter 8: Buying Life Insurance (Selecting the Right Company; Shopping for Insurance Online; Selecting the Right Agent; Buying through the Mail; Qualifying for the Coverage You Want)
Chapter 9: How Much Should You Pay? (How Premiums Are Calculated; Mortality Charges; Underwriting; Comparing Rates and Companies)
Chapter 10: Life Insurance Provisions and Options (Death Benefit Payment Options; Ownership; Provisions and Exclusions; Filing a Claim)
CliffsNotes Review; CliffsNotes Resource Center; Index

INTRODUCTION

Life insurance is a unique product. Like other insurance products, its primary purpose is to protect against loss. But unlike other kinds of insurance products, life insurance isn't for you; it's for your survivors. Life insurance has no uncertainties: if you're covered by a life insurance policy, and the company holding the policy has the resources to pay, you can pretty much bet that someday your survivors will receive a benefit. Knowing how much life insurance coverage to buy isn't an easy decision. Evaluating a car or a house is fairly simple, but how do you figure the value of your life?

Why Do You Need This Book?

Can you answer yes to any of these questions?
- Do you need to learn about life insurance fast?
- Do you not have time to read 500 pages on life insurance?
- Do you need to find out about the different types of life insurance?
- Do you need to find an insurance company, an insurance agent, and a specific policy?
If so, then CliffsNotes Understanding Life Insurance is for you!

How to Use This Book

This book discusses key things you need to know to make informed decisions about life insurance. You can read this book straight through or just look for the information you need. You can find information on a particular topic in a number of ways: You can search the index in the back of the book, locate your topic in the Table of Contents, or read the In This Chapter list in each chapter. To reinforce your learning, check out the Review and the Resource Center at the back of the book. To help you find important information in the book, look for the following icons in the text:
- This icon points out something worth keeping in mind.
- This icon clues you in to helpful hints and advice.
- This icon alerts you to something dangerous or to avoid.

Don't Miss Our Web Site

Keep up with the changing world of insurance by visiting the CliffsNotes Web site at www.cliffsnotes.com. Here's what you find: interactive tools that are fun and informative, links to interesting Web sites, and additional resources to help you continue your learning. At www.cliffsnotes.com, you can even register for a new feature called CliffsNotes Daily, which offers you newsletters on a variety of topics, delivered right to your e-mail inbox each business day. If you haven't yet discovered the Internet and are wondering how to get online, pick up Getting on the Internet, new from CliffsNotes. You'll learn just what you need to make your online connection quickly and easily. See you at www.cliffsnotes.com!

CHAPTER 1: HOW LIFE INSURANCE DIFFERS FROM OTHER TYPES OF INSURANCE

IN THIS CHAPTER
- Dispelling life insurance myths
- Looking at the purposes of life insurance

Life insurance is simple, right? You buy a policy to protect your family; you die; your family gets some money. Simple. Easy. Direct. Not so fast. Life insurance is much more complicated than that. This chapter describes some of the myths that surround life insurance and takes a look at the main purposes of having a life insurance policy — not just as income for your survivors, but also as part of your investment portfolio, as a tax shelter, and as part of your estate planning.

Myths about Life Insurance

The following sections work to dispel the three main myths about life insurance.

Myth 1: I only need life insurance if I have kids

Most people think that they need life insurance only if they have a family — to ensure that their survivors aren't left hanging if they die prematurely. But life insurance is important for several other reasons, which you can read about in greater detail later in this chapter: as an investment, as a tax shelter, and as an estate planning tool. Briefly, these other purposes include income replacement for a spouse to help him or her through a difficult adjustment period.

Myth 2: Life insurance is a bad investment

Life insurance may not be the most profitable investment you can make with your money. When you buy a life insurance policy, you put money into an account that will pay your survivors that same money, a portion of that money, or that money plus more when you die. If your survivors get more than you put in (or more than what the money could have earned elsewhere), life insurance is a good investment. If your survivors get less, it isn't such a good investment. If you measure an investment only in how much you get in return, life insurance may or may not be a good investment, but rarely is it a bad one. If you're looking for a pure return on capital, you can find many more lucrative investments — even tax-deferred or tax-free options — that can yield considerably more than your life insurance policy. However, life insurance is much more than return on investment. The primary function of insurance is not as an investment but as protection. Life insurance provides:
- Protection for your dependents
- Peace of mind for you
No other investment can offer the same amount of protection.
Myth 3: Life insurance is unnecessary for older people

Not very long ago, older meant "over 50." But many people need life insurance at 50, 55, 60, or 65. People in their 50s and 60s (and sometimes into their 70s) are in their peak earnings years and have family responsibilities. Today, many more people in their 50s and 60s are still supporting young children. If a woman has a child when she's 40 or 42, that child won't finish college until Mom's at least 62. Or suppose you have a non-working spouse and you die at the age of 60: he or she may not be able to find a job that brings in a comparable income to maintain the same standard of living. Even so-called "older" people may need income protection for their survivors if these heads of household or primary caregivers die prematurely. Age is not always a reason to abandon life insurance. Many older people actually need more life insurance, for a number of reasons:
- You may have less time to make up for the loss of income.
- You may find that inflation has cut into the value of the life insurance benefit.
- You have a greater need for tax planning as you age because you'll most likely earn more, and life insurance can play a significant part in your tax planning.
- You have a greater need for estate planning as you get older because you have less time to carry out your plan.
You may also want to give your loved ones the time to adjust to your death without having to change their normal standards of living.

The Purposes of Life Insurance

The number one reason to have life insurance is, obviously, to protect your beneficiaries if you die prematurely. That's clear. Every other reason is secondary — although for some people, the other purposes can take on greater importance in certain situations.

Providing protection for beneficiaries

Protecting your survivors means replacing the income you bring in if you die prematurely. If you die, your life insurance death benefit replaces those earnings so that your survivors won't have to suffer financially. If you have children, you probably spend your earnings on the costs of bringing them up. Life insurance can help you overcome the difficulty of having to totally change your way of life because you lose half or more of your income. If you have a mortgage on your house, a life insurance death benefit can help your family stay in their home if you die. Lastly, if you are part-owner in a business, the business may purchase a life insurance policy on you so that if you die, your partner can use that death benefit to buy out your share of the business from your heirs.

Using life insurance as an investment

A second purpose of having life insurance is to use it as part of your investment portfolio. Most financial advisors encourage you to balance your investments so that if one kind of investment goes down (the stock market, for example), another one will likely go up (bonds or real estate, perhaps). By balancing your portfolio and diversifying your investments, you can weather storms in one area by having some assets in the other areas that go up or stay level.

Using life insurance as a tax shelter

Some life insurance policies are actually long-term investments, which you can contribute to and withdraw funds from before you die. These so-called cash-value policies — whole life (see Chapter 5) and universal life insurance (see Chapter 6) — are actually savings accounts that accrue a cash value over time and also pay for your protection. Life insurance can play two roles as a tax-sheltered investment:
- The earnings on a cash-value policy are not taxed until you take them out. Your cash-value account yields tax-deferred income, which, in effect, increases the yield. Although these policies don't command the highest interest rates you can find, they are untaxed earnings, so you get a higher return than simply putting your money in a savings account on which you must pay taxes.
- The proceeds of a death benefit settlement are not taxable to your survivors.
Take a look at the example illustrated in Figure 1-1.

Figure 1-1: Comparing two ways to invest $1,000.
- Taxable savings account: $1,000 at a 5% interest rate earns $50 in interest; minus $14 in income tax (28% tax bracket), the yield is $36 (3.6%).
- Tax-deferred account (such as a whole life or universal life insurance policy): $1,000 at a 4% interest rate earns $40; minus $0 (no taxes), the yield is $40 (4%).

Now look at the fact that the death benefit is not taxable to your survivors, and, using the same logic, compare the taxable versus the non-taxable return. Suppose that you currently earn $60,000 a year, and you buy a $180,000 life insurance policy to help your survivors through three years without your income. If you die, your survivors get the full $180,000, and none of it is taxed. Because the proceeds of a death settlement and the earnings of a cash-value life insurance policy are both tax-deferred, they serve as excellent tax shelters. Although the examples may seem a bit complicated, the point is simple.
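The Figure 1-1 comparison is plain compound arithmetic. The following small Java sketch is illustrative only and is not from the book; it uses the book's example rates of 5 percent taxable, 4 percent tax-deferred, and a 28 percent tax bracket, and simply extends the one-year comparison over ten years:

<code>
/**
 * Illustrative only: compares a taxable account at 5% with a tax-deferred
 * cash-value account at 4%, using the 28% bracket from Figure 1-1.
 */
public class YieldComparison {
    public static void main(String[] args) {
        double principal = 1_000.00;
        double taxableRate = 0.05;   // savings account
        double deferredRate = 0.04;  // cash-value life insurance policy
        double taxBracket = 0.28;

        double taxable = principal;
        double deferred = principal;
        for (int year = 1; year <= 10; year++) {
            // Interest in the taxable account is taxed every year...
            taxable += taxable * taxableRate * (1 - taxBracket);
            // ...while earnings in the cash-value account compound untaxed.
            deferred += deferred * deferredRate;
            System.out.printf("Year %2d: taxable $%,.2f   tax-deferred $%,.2f%n",
                    year, taxable, deferred);
        }
        // Year 1 matches Figure 1-1: $36 versus $40 of growth on $1,000.
    }
}
</code>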
Using life insurance as part of your estate planning

In addition to serving as a tax shelter for you and your survivors, life insurance can also be an important part of estate planning — that is, dealing with how to distribute your wealth after you die. Currently, the federal tax laws state that the first $650,000 in inheritance is federally tax-exempt (that amount increases over the next few years — see Chapter 3). Most states allow the same amount or they have no inheritance tax at all. Furthermore, most couples own their property and assets jointly, so surviving spouses or owners don't have to pay inheritance taxes, even if the estate is greater than the amount allowed under the law. Realistically, most people don't need to worry much about taxes eating away their estate. But if your estate is worth more than the law allows, how do you ensure that your wealth goes to your survivors and not to the government? That's where life insurance and life insurance trusts come in. Don't try to wade through this complicated process by yourself; consult an expert who can both counsel you and set up the appropriate vehicles. A qualified professional can help you sort through the fine details and prevent you from making a costly mistake. Briefly, here's how this sort of estate planning works:
1. You and your spouse each leave to your children whatever the law allows at the time.
2. You will the remaining amount to a qualified charity of your choice. In this situation, you donate a large portion of your estate to a charity rather than to the government. If you don't will the remaining amount to a charity, it is considered part of your estate, and your heirs have to pay taxes on it.
3. Using some of your estate, you buy a tax-free life insurance policy so that your heirs get the same amount they would have before any estate taxes — the amount equivalent to your estate. You set up an irrevocable life insurance trust, to which you contribute annually. You can't withdraw that money for any reason (hence the term irrevocable). The trust is, in effect, a life insurance policy, which, by definition, is exempt from inheritance taxes and goes to your children or survivors tax-free, so that money is also tax-free.
In effect, you take the IRS out of this picture. The only party that loses is the IRS (and another party wins — the life insurance company, which charges you a significant amount for that policy over a period of years). But your heirs lose nothing! Isn't that the goal of estate planning?

CHAPTER 2: HOW MUCH LIFE INSURANCE DO YOU NEED?

IN THIS CHAPTER
- Determining your personal economics
- Understanding your survivors' needs
- Dealing with your age

After you understand the purposes of life insurance (see Chapter 1) and decide that you need a life insurance policy, the next step is determining how much protection you should buy. This chapter shows you how to determine how much life insurance you need by explaining how to judge the economic value of your life and your survivors' needs. This chapter also includes a worksheet so that you can make your calculations.

The Economics of Your Life

The basic step in determining how much life insurance you need is to figure out just how much your life is worth. When looking at the economic value of your life, focus on three things:
- Your income
- Your cost of living
- Any uninsured medical costs

Your income

The value of your income is relatively easy to calculate: It's the amount of money you earn, plus the amount of money you'd expect to earn if you hadn't died prematurely. But this figure can be difficult to determine accurately. For one thing, many people have little idea of what they may be earning five years from now, especially younger people who may not have settled into a career yet. Secondly, more and more people change careers (not just jobs, but careers) numerous times in their lives. The average number of careers (again, careers, not jobs) for people is now over five! How can anyone possibly say what his or her income will be in 15 years? Your best bet when figuring the value of your income is to estimate how much your annual salary is likely to increase each year. When completing the worksheet in this chapter, use an increase of about 5 percent per year. If you know your increases will be less than 5 percent, use the lower number; if you know your salary increases will be more than 5 percent per year, use the higher figure. You can round off later in the final formula when you determine how much life insurance you need. You can see an example of how your salary may grow in Figure 2-1.

Figure 2-1: Estimating salary increases for the future.

Estimated Salary Increases at 5% Annually (years from now)
Current salary   5 years    10 years   15 years   20 years   25 years   30 years
$20,000          $25,526    $32,578    $41,579    $53,066    $67,727    $86,439
$30,000          $38,288    $48,867    $62,368    $79,599    $101,591   $129,658
$40,000          $51,051    $65,156    $83,157    $106,132   $135,454   $172,878
$50,000          $63,814    $81,445    $103,946   $132,665   $169,318   $216,097
$60,000          $76,577    $97,734    $124,736   $159,198   $203,181   $259,317
$70,000          $89,340    $114,023   $145,525   $185,731   $237,045   $302,536

Don't forget that in five or ten years, you may quite possibly be working for a different organization or in a different job. If you underestimate your increases, you're also underestimating the amount of income protection that your survivors need.
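The Figure 2-1 values are simple 5 percent compound growth: the future salary equals the current salary times 1.05 raised to the number of years. A short, illustrative Java sketch (not part of the book) that reproduces the $40,000 row:

<code>
/** Illustrative only: 5% compound salary growth, as in Figure 2-1. */
public class SalaryGrowth {
    public static void main(String[] args) {
        double currentSalary = 40_000.00;
        double annualIncrease = 0.05;  // the book's suggested estimate
        for (int years : new int[] {5, 10, 15, 20, 25, 30}) {
            double future = currentSalary * Math.pow(1 + annualIncrease, years);
            // For $40,000 this prints 51,051 / 65,156 / 83,157 / 106,132 / 135,454 / 172,878.
            System.out.printf("%2d years from now: $%,.0f%n", years, future);
        }
    }
}
</code>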
saving for your retirement isn’t something you have to be concerned about if you die. of course. and other expenses for children who will eventually be out on their own and paying their own expenses. you’re also underestimating the amount of income protection that your survivors need. If you underestimate your increases. Your cost of living The economic value of your life is not only how much you will be earning but also the cost of living — that is.12 CliffsNotes Understanding Life Insurance Don’t forget that in five or ten years. how much you actually need to live on. But some other examples of unnecessary costs are clothing. But clearly. to go toward your retirement. food. You still want your survivors to be able to save for some of these items (college expenses. either planned (such as college expenses or weddings. In addition. you may quite possibly be working for a different organization or in a different job. and so on. for example). some expenses may decrease or be eliminated because they are no longer necessary. is the life insurance premium. On the other hand. The budget worksheet that follows can help you determine your cost of living. More importantly. . part of your living costs are more than likely going into some sort of savings — to pay college expenses when your children are old enough. unless your budget includes saving for them) or unexpected (such as medical emergencies or funerals). However. by definition you want to be certain that your survivors can pay for your medical costs should you die. On the other hand. Moreover. most of your medical expenses are covered. your portion is likely to be far greater. benefits. that the budget worksheet doesn’t include paying off any large debts which you’re currently paying over time.Chapter 2: How Much Life Insurance Do You Need? 13 And note. if you have a private plan in which you pay 20 percent of the costs. most experts say that you should always maintain approximately three months worth of living expenses available. Your uninsured medical costs Uninsured medical costs are one of the biggest potential drains on a family budget. add a flat amount at the bottom to pay for these unexpected and uninsured medical costs. and regulations change so quickly. So after completing the budget worksheet. Including uninsured medical costs in your family budget is crucial because health insurance terms. because life insurance protection is related to your health. If you want your life insurance to pay off some or all of these debts. finally. . Only you can really estimate this amount. The figure you decide on will vary depending on what kind of health insurance you have now. If you belong to an HMO. make sure that you increase the death benefit to cover these amounts so that your survivors no longer have to include the debt payments in their budgets. How much to add? Good question. so try adding that amount to the bottom of the worksheet as your emergency fund to cover these uninsured medical costs. electric.14 CliffsNotes Understanding Life Insurance Budget Worksheet: Expenses Fixed Expenses Household: Rent/Mortgage Property Tax/Escrow Lawn and tree service Domestic help Insurance: Homeowners/renters insurance Life insurance Automobile insurance Health/disability insurance Loans: Automobile Other fixed loans or credit ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ Total Fixed Expenses Variable Expenses Household: Utilities (gas. 
sewer) ______ Home repair Telephone Groceries Transportation: Auto licenses and registration Auto gas and oil Auto repair Parking Medical: Doctors/dentists Prescriptions Over the counter drugs Clothing: Laundry/cleaning New purchases ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ . water. Chapter 2: How Much Life Insurance Do You Need? Personal Care: Toiletries Haircuts/styling Miscellaneous Personal: Dining out Entertainment Gifts Subscriptions and books Donations Miscellaneous: Bank charges Investment expenses Legal and professional fees Taxes Other Total Variable Expenses TOTAL EXPENSES (fixed plus variable) 15 ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ ______ Budget Worksheet: Income Fixed income Wages Dividends Certificates of deposit Fixed interest-bearing bank accounts Rental income Child support/alimony Variable income Capital gains/losses ______ Continued ______ ______ ______ ______ ______ ______ . however.000 may seem . he or she faces no tax consequence. your heirs have to pay taxes on the balance over those amounts. You need to account for four expenses that your survivors will need to cover: s s s s Estate taxes Probate costs The cost of dying Planned future expenses Estate taxes Depending on the size of your estate. regardless of the estate’s size. your heirs may have to pay taxes on the amount they inherit. your estate goes to other beneficiaries and your estate is larger than the amount allowed under the tax law. If. Although the 1999 figure of $650.16 CliffsNotes Understanding Life Insurance Budget Worksheet: Income (continued) Variable income Variable-interest bank accounts Tax refunds Gifts Other benefits (assign monetary value) Total Variable Income TOTAL INCOME (fixed plus variable) ______ ______ ______ ______ ______ ______ Your Survivors’ Needs The next step in determining how much life insurance you should buy is to decide just how much your survivors need. If your entire estate goes to your spouse. an attorney may charge $2. Dealing with estates and probate can sometimes get fairly complicated. Be sure to include an amount to cover the estate taxes when you complete the worksheet to determine the amount of protection you need. Many executors choose to get assistance from an attorney to handle the financial affairs (although doing so isn’t required). they won’t have the cash or other liquid assets to pay the estate taxes. You may want to increase your life insurance death benefit by that amount to take care of the probate expenses. Speak to an attorney before handling them yourself. At this point. Depending on the size of the estate. your will is officially registered and the executor of your estate is given the legal right to dispose of your assets.000 to handle probate. and whatever is left over goes to the rightful heirs. all debts and taxes are paid. In that case. many people own homes that have increased in value so much that the equity in their home is over the limits allowed by federal inheritance tax law. heirs may have to pay a significant sum in taxes. particularly if the inheritance is in a form that they can’t easily convert to cash. In these types of situations. .000 to $3. Probate Probate is the process by which your estate is accounted for. they either have to sell the asset or pay the estate taxes from other sources. 
You can help your heirs pay the inheritance taxes by buying a higher amount of life insurance (and thus a higher death benefit).Chapter 2: How Much Life Insurance Do You Need? 17 huge to you. Quite likely. When determining the amount of life insurance to purchase. This conversation may not be easy. burials cost between $5. Of course. if you have one. Future expenses When calculating the needs of your survivors. building in expenses that you know will occur is extremely important. Many funeral homes won’t require payment directly from your survivors but will allow and will help you arrange to be paid directly from the life insurance proceeds. Build in a cost of about $80.000.18 CliffsNotes Understanding Life Insurance The cost of dying The cost of dying refers to the expenses of funerals. Cremations cost considerably less. These expenses are usually the largest factors in determining how much insurance to buy. or to your parents if they are your survivors.000 to $5.000 and $10. The cost of funerals varies enormously. One of the most obvious of these planned future expenses is the cost of attending college. the actual amount your child needs will probably be considerably more later on. about funeral expenses.000 to $100. which is taken into account with the inflation factor in the worksheet or in the insurance policy. but be persistent so that they can honor your wishes (and make theirs known) should you die. or your children if they are old enough. and the disposal of your body. from $2. . How much you pay for these expenses is more than likely up to your survivors. consider including an amount in the death benefit that can cover the cost of the funeral. Talk with your spouse. burials. But on average.000.000 per child in today’s dollars. If you are 30 years old and in good health. You can read more about these kinds of policies in Chapters 5 and 6. build in some “fudge factor” — about 10 percent of your annual income is good — to account for these unplanned costs. the chances are great that you will live another 50 years or more. such as orthodontia. summer camps. what your style of living is. Additionally. You should build these expenses into the worksheets. including a new roof for the house. As medical advancements continue. you can buy an insurance policy in which the death benefit increases in value. Age — Yours and Your Survivors’ One of the determinants of how much life insurance to buy is age — your age and the age of your survivors. special classes for your children. a new car. assuming that you’re not hit by that proverbial truck. or special medical needs. . When you complete the budget worksheet. You may be aware of other expenses that your family will incur. If you’re worried about inflation eating into the death benefit. and what lies ahead for your beneficiaries. you can count on at least one or two of those unexpected expenditures that come up. and medical emergencies for which your health insurance doesn’t pay the entire cost. how much your survivors need.Chapter 2: How Much Life Insurance Do You Need? 19 The amount of life insurance you buy now is the amount your survivors need if you die soon. Life expectancy The amount of insurance you purchase depends very much on your life circumstances. your life span may be even greater. the cheaper the premium. The younger you are. about three-quarters of which goes into his cash-value account. A 38year-old male buying a five-year. and the more likely it is that your living expenses are lower. 
while a 48-year-old may have to pay about twice that amount for the same coverage. and the more likely your spouse is to need medical. you have to take into account how much life insurance you can afford. The cost of insurance goes up every year as you age because your life expectancy is lower and the insurance . s Cost of premiums The age at which you buy life insurance relates directly to the cost of your premium (the amount you must pay for the coverage). or nursing home care. term life insurance policy with a death benefit of $100. the less likely it is that your survivors will have to depend on you to fund a college education. however. skilled-care assistance. When determining how much life insurance you need.20 CliffsNotes Understanding Life Insurance Your life span affects your life insurance needs in these ways: s The younger you are. The older you are. If. that 38-year-male old wants to buy a cash-value life insurance policy — one that not only provides a death benefit when he dies but also builds some value that he can use when he retires (or that adds to the death benefit) — he may have to pay about $600 a year. the longer your survivors are going to need income replacement. the less chance your spouse has to plan for his or her retirement. the more dollars you need to put away for future expenses such as your children’s education.000 may pay only about $175 per year. Chapter 2: How Much Life Insurance Do You Need? 21 company knows it has fewer years before you are expected to die. Decide how much you can afford to pay per year and work with that amount to determine how much life insurance to buy. One way to estimate how much your premiums will be in five or ten years is to find out what the premium would be now if you were five or ten years older. Doing so gives you the price in today’s dollars. You can add about 15 to 20 percent more for five years and about 40 to 50 percent more for ten years to account for inflation. CHAPTER 3 LIFE INSURANCE IN ESTATE PLANNING I N T H I S C HAPT E R s s s s Naming beneficiaries Setting up trusts Tracking your dividends, rates of return, and annuities Looking into viatical settlements Although income protection for their survivors is clearly the main reason that most people buy life insurance, estate planning is a close second. The goal of estate planning is to ensure not only the smooth distribution of your wealth to your heirs, but also that the government doesn’t take too big a bite. And the wealthier you are, the bigger the tax bite. This chapter looks at the role that life insurance plays in planning for your retirement, including dividends, annuities, and tax consequences. The chapter also focuses on how life insurance can help ensure a seamless transfer of your estate to your heirs. Beneficiaries When you purchase a life insurance policy, one of the first things you must do is decide who will be the recipient of the benefits — hence the term beneficiaries. Most people designate their spouse as the primary beneficiary, which means that the spouse gets the entire death benefit when the policyholder dies. If you’re single, your primary beneficiary is likely to be your children, if you have any. However, your circumstances may give you reason to name more than one beneficiary, especially if your estate is sizable. Chapter 3: Life Insurance in Estate Planning 23 Naming additional beneficiaries is extremely important in the event that the primary beneficiary dies at the same time you do or that person dies before you. 
CHAPTER 3
LIFE INSURANCE IN ESTATE PLANNING

IN THIS CHAPTER
- Naming beneficiaries
- Setting up trusts
- Tracking your dividends, rates of return, and annuities
- Looking into viatical settlements

Although income protection for their survivors is clearly the main reason that most people buy life insurance, estate planning is a close second. The goal of estate planning is to ensure not only the smooth distribution of your wealth to your heirs, but also that the government doesn't take too big a bite. And the wealthier you are, the bigger the tax bite. This chapter looks at the role that life insurance plays in planning for your retirement, including dividends, annuities, and tax consequences. The chapter also focuses on how life insurance can help ensure a seamless transfer of your estate to your heirs.

Beneficiaries
When you purchase a life insurance policy, one of the first things you must do is decide who will be the recipient of the benefits, hence the term beneficiaries. Most people designate their spouse as the primary beneficiary, which means that the spouse gets the entire death benefit when the policyholder dies. If you're single, your primary beneficiary is likely to be your children, if you have any. However, your circumstances may give you reason to name more than one beneficiary, especially if your estate is sizable. Naming additional beneficiaries is extremely important in the event that the primary beneficiary dies at the same time you do, or dies before you. The following sections outline four reasons why you may (or may not) choose to name someone other than your spouse or children as your beneficiary.

Minor children
You may not want to choose your children as your beneficiaries if they are still minors. Under the current law, children under the age of 18 can't collect insurance benefits directly, even if they're the rightful heirs. If you die and your children are the beneficiaries, the proceeds can go to them only in a trust fund, which an adult must manage. If you don't select this adult (perhaps a lawyer or an accountant, or an organization such as a bank), the probate court selects someone or some organization to oversee the money. When your children reach the age of maturity, usually 18 years old, the funds automatically go to them. Before that time, the trust fund administrator controls how the funds are invested and spent.

Avoiding tax consequences
As Chapter 1 points out, estates larger than $650,000 (in 1999) may face some potential tax consequences. This applicable exclusion amount, as it's called, is set to increase over the next several years, as illustrated in Figure 3-1; the tax-free portion of your estate will increase to $1 million in 2006. (A short sketch of this check appears a little later, just before the discussion of irrevocable trusts.)

Figure 3-1: Applicable Exclusion Amount by year
1998: $625,000    1999: $650,000    2000: $675,000
2001: $675,000    2002: $700,000    2003: $700,000
2004: $850,000    2005: $950,000    2006 and thereafter: $1,000,000

Spreading the wealth
You may also stray from the norm when selecting your beneficiaries if you want to donate part of your estate to charity.

For business owners
Owning a business may be another reason you choose beneficiaries other than family members. Some business owners who are in partnerships arrange to have the proceeds of their life insurance policy tied to the price of the business. When an owner dies and the death benefit goes to the family (the heirs), that death benefit becomes the payment for the deceased's share of the business. In this way, the other partners can buy the business without having to negotiate the price.

Trusts
A trust is an account that you set up for someone else but for which you decide the terms. You can set up a trust from your assets or from your life insurance policy, and you can continue to add to it or not. The following sections describe the three kinds of trusts that most affect life insurance policies.

Revocable trusts
A revocable trust is a means by which you can allocate your property, including your life insurance death benefit, to another person. You can change the provisions of this trust fund any time during your life, making the trust "revocable." You still manage and have total control of the fund, and you can even abolish it if you choose. Remember that your minor children can't be the direct beneficiaries of your life insurance death benefit; you must set up a trust fund in their names. Trusts serve to ensure that certain funds go directly to a beneficiary (your minor children, for example). But you can also use a trust to save on taxes. For example, you can set up a bypass trust, which allows you to pass some of your estate on to your grandchildren, thereby "bypassing" any tax consequence to your children. Because of the legal and tax implications, consult an attorney if you are considering setting up any sort of trust.
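The exclusion-amount check mentioned under "Avoiding tax consequences" is a straight lookup against the Figure 3-1 schedule. This sketch is mine; the schedule values come from the figure, while the estate value and the class name are assumptions for illustration.

<code>
import java.util.LinkedHashMap;
import java.util.Map;

// Checks an estate value against the applicable exclusion amounts in Figure 3-1.
public class ExclusionCheck {
    public static void main(String[] args) {
        Map<Integer, Long> exclusion = new LinkedHashMap<>();
        exclusion.put(1998, 625_000L);
        exclusion.put(1999, 650_000L);
        exclusion.put(2000, 675_000L);
        exclusion.put(2001, 675_000L);
        exclusion.put(2002, 700_000L);
        exclusion.put(2003, 700_000L);
        exclusion.put(2004, 850_000L);
        exclusion.put(2005, 950_000L);
        exclusion.put(2006, 1_000_000L); // and thereafter

        long estateValue = 800_000L;     // assumed estate value, for illustration only
        for (Map.Entry<Integer, Long> e : exclusion.entrySet()) {
            boolean aboveExclusion = estateValue > e.getValue();
            System.out.println(e.getKey() + ": exclusion $" + e.getValue()
                    + (aboveExclusion ? " -> amount above the exclusion may be taxed"
                                      : " -> fully excluded"));
        }
    }
}
</code>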
Irrevocable trusts
An irrevocable trust, as its name implies, is one whose terms you cannot amend, change, or alter. The trust becomes a legal entity unto itself and, in that sense, has certain rights. People commonly use irrevocable trusts to make large gifts to children, grandchildren, and even great-grandchildren without creating any liability for estate taxes. The tax law permits giving any one individual (or any one irrevocable trust) $10,000 per person per year without that individual having to pay taxes on the gift. When you die, the death benefit payable to an irrevocable trust goes directly to the trust, also with no tax liability. Because your spouse pays no estate tax on funds that go to him or her, you need to set up an irrevocable trust only for your children and/or grandchildren.

Charitable remainder trusts
As its name implies, a charitable remainder trust is set up by people who want to give their property to a charity. The property can come in any form: cash, real estate, stocks, bonds, or the proceeds from a life insurance policy. The charitable remainder trust has two benefits:
- The charity or organization you choose gets the property.
- Your heirs aren't responsible for any taxes on the appreciated value of the property, if the property value has increased (and it probably has). Of course, this benefit matters only if your estate is valued at over the $650,000 exemption.

Rates of Return
The two basic forms of life insurance policies (which I cover in greater detail in the following chapters) are term life, in which you buy protection for a specified period of time (the term), and cash-value. With cash-value insurance policies, you pay more than just the cost of the protection, and the extra goes into an account that is yours and accrues interest; in effect, cash-value policies are like a savings plan. For a life insurance policy to be part of your estate planning, you must know your rate of return, that is, how much interest your money earns. When a company quotes you a price for a cash-value policy, it also quotes a guaranteed rate of return. At the same time, you will likely be given one or two other rates of return and be shown charts and tables demonstrating how much your money will yield after just a few years. Be very wary of these tactics, and don't think you'll make a killing. Assume your money will earn very close to the guaranteed minimum, and consider anything over the guaranteed minimum as a bonus. The return you receive from your cash-value policy is used to increase your surrender value (the amount that you can expect to receive if you withdraw the funds), increase your death benefit, and pay the expected increase in the cost of your protection each year.

Dividends
Many insurance companies are mutual companies, meaning that the policyholders own the company's stock. When the insurance company does well, the owners receive dividends, and the amount of the dividend relates directly to how well the company performs. Dividends from mutual insurance companies go directly to lowering the premiums, and they can be quite substantial, as high as 50 to 70 percent of the premium. You can't count on getting this dividend each year. However, with term insurance, you buy only one term at a time, so if the insurance company doesn't declare a dividend consistently, you can look elsewhere for a better rate from a company that does offer a dividend. On the other hand, you don't want to constantly jump from one company to the next. The sketch below shows how much a dividend of this size changes the premium you actually pay.
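The effect of a mutual-company dividend on what you actually pay is plain arithmetic. The sketch below is only an illustration of the 50-to-70-percent range quoted above; the $600 premium and the class name are assumptions.

<code>
// Effective premium after a mutual-company dividend, per the range quoted above.
public class DividendEffect {
    public static void main(String[] args) {
        double annualPremium = 600;             // assumed quoted premium
        double[] dividendRates = {0.50, 0.70};  // the 50-70% range mentioned in the text
        for (double rate : dividendRates) {
            double effective = annualPremium * (1 - rate);
            System.out.printf("Dividend of %.0f%% -> effective premium about $%.0f%n",
                    rate * 100, effective);
        }
    }
}
</code>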
Choosing the right company at the very beginning is therefore one of the most important decisions you will make. Read Chapter 8 and select a good life insurance company that you know will be around for a while.

Annuities
Most people think of life insurance as the primary way of protecting survivors or heirs. However, some life insurance policies, called annuities, provide an income to the insured person later on in life. Annuities are basically investment vehicles in which you plop down a bunch of money, either as a lump sum or in years of payments, in exchange for a promise from an insurance company that it will pay you a monthly income, usually after a defined period. Of course, that period won't arrive until well after the company has received enough money from you to make paying you financially viable for it. The payments can run either for a set number of years or until you die, based on the amount you have paid in premiums and the terms you agreed upon beforehand. Two types of annuities are available:
- Fixed annuities: With a fixed annuity, the company pays you a guaranteed rate of growth.
- Variable annuities: With variable annuities, you can direct how and where the money is invested. Most insurance companies that offer annuities offer a pretty wide array of funds from which to choose, and you can instruct the company to invest in any combination of funds it offers: stock funds, bond funds, fixed income funds, money market funds, or other investment funds, ranging from growth stock funds to global investment funds. Some are higher risk, some lower, and the amount you're paid varies depending on how successful you and the company's investment funds are.

As an investment opportunity, annuities aren't the greatest options; you can usually do better in other funds and with other accounts. On the other hand, with an annuity you're investing and being insured at the same time, and you can't always be sure that you'll qualify for the life insurance on its own.

Viatical Settlements
Life insurance offers a relatively new benefit called viatical settlements, which may affect your estate planning. With this program, terminally ill patients can, in effect, "sell" the proceeds of their life insurance death benefit to a third party and receive the cash they need while they're alive. The purpose of the settlement is to ensure that terminally ill patients have the bulk of their life insurance benefit available to pay for their medical and living costs. In other words, terminally ill people must decide whether the life insurance benefits are for them or for their survivors, and as you would expect, this issue is fairly controversial. To qualify for this benefit, your doctor must certify that your life expectancy is no more than two years based on the fact that you have a terminal disease; the company then purchases your life insurance policy (including any cash value) for 60 percent of the face value. If your disease has progressed even further, and your doctors certify that you have less than six months, the viatical company will purchase your life insurance policy for up to 80 percent of the face value. The sketch that follows shows the difference those percentages make.
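A minimal sketch of the payout rule described above follows. The face value and the exact month thresholds used are assumptions for illustration; real viatical offers vary by company.

<code>
// Illustrates the viatical payout percentages quoted above (60% vs. up to 80%).
public class ViaticalEstimate {
    static double estimatePayout(double faceValue, int monthsLifeExpectancy) {
        if (monthsLifeExpectancy <= 6) {
            return faceValue * 0.80;   // up to 80% when life expectancy is under six months
        } else if (monthsLifeExpectancy <= 24) {
            return faceValue * 0.60;   // 60% when life expectancy is no more than two years
        }
        return 0;                      // otherwise the policy doesn't qualify
    }

    public static void main(String[] args) {
        double faceValue = 100_000;    // assumed face value
        System.out.printf("24 months: $%,.0f%n", estimatePayout(faceValue, 24));
        System.out.printf(" 6 months: $%,.0f%n", estimatePayout(faceValue, 6));
    }
}
</code>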
You can learn more about this kind of policy by checking with an insurance company that offers viatical settlements. You should also check out the information made available by the Federal Trade Commission (FTC), which you can reach by writing Federal Trade Commission, Room 403, P.O. Box P, Washington, DC 20580, or by visiting its Web site at www.ftc.gov/bcp/online/viatical.htm.

CHAPTER 4
TERM INSURANCE

IN THIS CHAPTER
- Understanding term insurance
- Defining term insurance options
- Looking into employee benefits
- Determining the pros and cons of term insurance
- Choosing a term insurance policy

Some say that every type of life insurance is term insurance: For all forms of life insurance, you pay a monthly, quarterly, semiannual, or annual premium, and the insurance provides a benefit for others if you die during the specified period. The only difference between term insurance and all the others is whether you pay the premiums directly (as with term life insurance) or the payment comes out of the earnings from an investment that the insurance company holds (as with other types of life insurance). In this chapter, I discuss the attributes of term life insurance that distinguish it from the other forms, and I cover the various provisions that differentiate each type of term life insurance.

Understanding Term Insurance
Term insurance is the most basic form of life insurance and, therefore, the easiest to understand. At its simplest level, term insurance provides life insurance for a defined period (usually a one-, five-, or ten-year term), and for that insurance you pay a premium that remains constant during the specified term. Term insurance is not an investment: you receive no benefits other than the security of knowing that if you die during the term, the insurance proceeds go to your beneficiaries. But that's pretty much the only constant. Term insurance comes in many different forms, and you have various options when choosing a policy. I can't even say that the death benefit remains the same for the entire term of the policy, because products such as decreasing term insurance and increasing term insurance change the death benefit each year (rarely more than 20 percent above or below the original policy amount). These two products are distinguished from level term insurance, in which the death benefit remains the same for a specified period. The options you must consider are discussed in the following sections.

Renewable term
The primary purpose of life insurance is for your beneficiaries to receive a benefit if you die. If you buy life insurance, not only do you want your policy to remain in effect during the specific period you designated, but you also want to be able to keep buying the insurance until you decide to stop, not when the company decides that you've become too great a risk. Most term life policies are renewable, but your premium may not be the same for the renewed period: after each term (the one-, five-, or ten-year period that you specified) ends, the amount you pay per year for the next term will increase, because you become a bigger health risk as you age. (I discuss premiums in Chapter 9.) Without renewability, the insurance company can decide that it no longer wants to insure you when the term of your policy ends, and it will likely require that you pass a new medical exam to qualify again; not passing that exam is the danger of not purchasing a renewable policy. With a renewable policy, after you qualify the first time, you don't have to take any additional medical exams to maintain your coverage, so renewable term insurance ensures that you can still buy life insurance regardless of the condition of your health later. Renewable doesn't mean that you can change the face amount of your policy, and a policy may be renewable only for a limited time (ten years, for example), so be sure to check for how long it is renewable. The sketch below shows how the premium for a renewable level term policy typically steps up at each renewal.
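The renewal pattern described above, a level premium within each term followed by a jump at renewal, can be sketched as follows. The per-term premiums are illustrative assumptions, not quotes.

<code>
// A level term policy whose premium is constant within each five-year term
// and steps up at every renewal, as described above. Figures are assumptions.
public class RenewableTermSchedule {
    public static void main(String[] args) {
        double[] premiumPerTerm = {175, 350, 700}; // assumed premiums for three successive terms
        int termLengthYears = 5;
        for (int term = 0; term < premiumPerTerm.length; term++) {
            for (int year = 1; year <= termLengthYears; year++) {
                int policyYear = term * termLengthYears + year;
                System.out.printf("Year %2d: annual premium $%.0f%n",
                        policyYear, premiumPerTerm[term]);
            }
        }
    }
}
</code>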
Age limits
Many term insurance policies have an age limit (specified in your contract) after which the company won't allow you to renew your policy. This age can range from as low as 60 years old to 85, 90, or even older. Most people don't continue to insure themselves after they reach retirement age, usually because they no longer have anyone dependent upon them, but there are exceptions, and 60 or 70 certainly isn't very old; many people at that age want continued coverage. Obviously, at the upper age ranges most people are extremely high risks, so the price of coverage would be so high that it wouldn't pay for you to purchase it. So when shopping for a policy, make sure you give consideration to any age limits imposed in the policy.

Convertible term
If you purchase a convertible policy, you are allowed to convert to a different type of policy, one that builds a cash value, such as whole life or universal life (which I discuss in Chapters 5 and 6), without having to pass another medical exam. Because buying life insurance is, basically, eliminating as much risk as possible, many people think that this provision is an important one. Convertibility may matter to you for three reasons.

First, you may want to keep buying life insurance later in life, even after term insurance is no longer available. For example, take a family of four in which the father is 56, the mother is 43, and the two children are both under 10. The younger child won't start college for another 15 years, and the parents want to make sure the children have sufficient money even if the father dies. These parents may want to keep insuring the father after he reaches the age of 70, the age at which his policy specifies that he can no longer renew his term insurance.

Second, you may want to be able to convert your term insurance if your family has a history of heart disease, cancer, or other serious illness. If your family history makes you more likely to become sick later in life, you may want to ensure that you don't have to pass a medical exam later, because your health is more likely to deteriorate as you age.

A third reason to keep the convertible option has to do with the price of term policies versus cash-value policies. Term policies generally cost considerably less than other types of life insurance, because the others also build value while paying for the insurance. Convertibility may be important to you if you're on a limited budget but want a cash-value policy: you know that you can convert later, when you have greater financial strength. Keep in mind, though, that keeping the option to convert means that your policy will likely cost you more.
Consider the following questions regarding convertibility:
- To what can you convert your policy? Whole life? Universal life? Either one? Any product the company offers later on?
- When can you convert? Some policies specify how many years you have to convert. Obviously, more time to decide gives you more options when you need them.
- When you do decide to convert, will the new premium be based on your age at conversion? Or will the company require you to make a lump-sum payment to "catch up," as though you had purchased the cash-value policy to begin with?

The additional flexibility likely means a higher premium throughout the life of the term policy.

Decreasing term
For most term insurance policies, the death benefit remains constant and the premium increases over time. With a decreasing term life policy, the opposite is true: after the specified term, the face value of the policy decreases, while the amount you pay each year or month remains the same. So the same premium purchases an increasingly lower amount of insurance; the insurance company effectively increases its premium, which it must do because, as you age, you're at a greater risk of death. Because the premiums remain locked in, budgeting is simple: you always pay the same amount. For many people, decreasing term life insurance allows them to be insured to the maximum when they most need it (when their beneficiaries are most dependent on them) but to be insured for lower amounts as their beneficiaries need less.

Mortgage insurance is an example of decreasing term insurance. When you buy mortgage insurance, you're making sure that your home mortgage gets paid off if you die. While you're alive, you're paying off your mortgage principal, so the balance, and with it the coverage you need, keeps declining. (A small sketch of this pattern appears after the discussion of re-entry term below.)

Re-entry term
Renewable term insurance may have a provision called re-entry, which means that the insurance company can ask you to undergo a medical exam before it will renew your policy after the term expires. If your health isn't good and the re-entry clause permits it, the company can cancel your insurance. When purchasing re-entry term insurance, make sure that you keep the right to renew your insurance even if you don't pass a medical exam; that way, at least the company won't be able to cancel your policy. If the re-entry clause doesn't permit the company to cancel your insurance but does allow it to charge you higher premiums, you have to reevaluate your position; some re-entry policies spell out the maximum premium that can be charged. The gamble here is that you will remain healthy. In return, you can purchase renewed insurance at a reduced rate, basically the rate a person who just passed an exam would pay. If you're gambling, you're gambling on money, not your health, and you ought to know how much you're gambling on.
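A minimal sketch of why mortgage insurance is naturally a decreasing term product: the remaining balance of an ordinary amortizing loan falls every year. The loan terms below are assumptions chosen only for illustration.

<code>
// Remaining balance of an amortizing mortgage, illustrating why the coverage a
// mortgage-insurance policy needs to provide shrinks every year. Assumed terms.
public class MortgageBalance {
    public static void main(String[] args) {
        double balance = 150_000;          // assumed mortgage principal
        double annualRate = 0.07;          // assumed interest rate
        int years = 30;
        double monthlyRate = annualRate / 12;
        int months = years * 12;
        // Standard fixed-payment amortization formula:
        double payment = balance * monthlyRate / (1 - Math.pow(1 + monthlyRate, -months));

        for (int m = 1; m <= months; m++) {
            balance = balance * (1 + monthlyRate) - payment;
            if (m % 60 == 0) { // print every five years
                System.out.printf("After %2d years the balance (coverage needed) is $%,.0f%n",
                        m / 12, Math.max(balance, 0));
            }
        }
    }
}
</code>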
Employee benefits
Many companies offer term life insurance policies to their employees as a fringe benefit. The amount often ranges from the equivalent of your annual salary to triple your annual salary, and the employer usually pays the entire premium. Be aware, however, that this benefit is available to you only while you are employed with the company. When you leave, retire, or quit, you lose the protection, and you can no longer buy the coverage; if having that insurance was part of your estate planning, you now have to reevaluate your position.

Also be aware that only the cost of the first $50,000 in employer-paid life insurance is tax-free. Any premium an employer pays on your behalf for an insurance policy over $50,000 is additional income that you must claim on your tax return. (A rough calculation of this appears after the summary table below, and Chapter 7 covers group life insurance in more detail.)

Summing Up
To help you put term life insurance in perspective and determine whether it's a product that you should consider, Table 4-1 lists the pros and cons of this type of insurance.

Table 4-1: Pros and Cons of Term Insurance
Pros:
- At a young age, term insurance is considerably less expensive than cash-value policies are.
- Term insurance is simple to understand and does exactly what it is meant to do: protect your beneficiaries.
- All the money you pay goes toward the death benefit, and very little of what you pay goes toward commissions.
Cons:
- Term insurance becomes more expensive as you age.
- If you outlive the term, you get nothing back; the policy has no cash value.
- There are no tax advantages.
- Due to inflation, a death benefit that remains constant actually declines in purchasing value, because the dollars buy less later on.
- Insurance agents don't often encourage people to buy term policies because they make more money off cash-value policies, and any dividends you receive are often much less than what the company pays to cash-value policyholders.
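A minimal sketch of the "first $50,000 is tax-free" rule mentioned above, using the book's later example of an $850-a-year employer-paid $250,000 policy when $50,000 of coverage would cost $300. The class name and the simple subtraction are mine.

<code>
// Taxable (imputed) income from employer-paid group term life insurance,
// using the figures quoted in the text: $850 total premium for a $250,000
// policy, of which $300 would buy the tax-free first $50,000 of coverage.
public class ImputedIncome {
    public static void main(String[] args) {
        double totalEmployerPremium = 850;   // premium for the $250,000 policy
        double premiumForFirst50k  = 300;    // what $50,000 of coverage would cost
        double taxableIncome = totalEmployerPremium - premiumForFirst50k;
        System.out.printf("Additional income to report on your return: $%.0f%n", taxableIncome);
    }
}
</code>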
Comparing Term Insurance Policies
If you decide that term insurance is for you, the next step is to get quotes from at least three insurance companies and compare them. To help you, I've provided a worksheet for you to fill out. Step 2 of the worksheet asks you to find out the rating of the insurance company; see Chapter 8 for more information about ratings companies, and see the Resource Center in this book for how to obtain ratings.

Term Insurance Worksheet (fill in one column for each of the three companies)
1. Name of company
2. Rating of insurance company
3. Initial death benefit/amount of insurance (use the same amount for each company for easy comparison)
4. 1st year premium
5. Guaranteed 1st year dividend
6. Total paid first year (line 4 minus line 5)
7. 5th year premium
8. Estimated 5th year dividend
9. Total paid 5th year (line 7 minus line 8)
10. 10th year premium
11. Estimated 10th year dividend
12. Total paid 10th year (line 10 minus line 11)
13. Additional cost for options: renewability, convertibility, re-entry, other
14. Other options: age limit? decreasing term?
15. Additional cost if placed in a higher-risk category

CHAPTER 5
WHOLE LIFE INSURANCE

IN THIS CHAPTER
- Understanding cash value
- Looking at traditional whole life
- Exploring interest-sensitive whole life
- Investigating single-premium whole life
- Determining the pros and cons of whole life insurance
- Choosing a whole life insurance policy

Whole life insurance covers you for your entire life, not just a specific period. You can distinguish whole life insurance from term insurance in two significant ways:
- You don't buy insurance for just one year at a time with whole life insurance. Your death benefit remains the same, and so does your premium. (With term insurance, your premium increases as you age to account for the fact that you're a bigger health risk; see Chapter 4 for more information.)
- You build up a cash value with whole life insurance.

Insurance companies keep both the premium and the death benefit constant during the life of a whole life policy by charging you a premium that's higher than the cost of the insurance when you're young and using some of that profit to pay for the higher cost of your insurance when you're older. In this chapter, I explain how cash value works and illustrate how the concept ties into whole life insurance policies. I also discuss the different kinds of whole life policies, the options you have when purchasing a whole life policy, and the pluses and minuses of whole life versus term insurance.

Cash Value
Building a cash value means that your life insurance policy has a value greater than just the death benefit (the face value of the insurance policy) that goes to your beneficiaries when you die. With term insurance, the value is only the amount of death benefit you sign up for. So what's the catch? Why would anyone want term insurance instead of a policy with a value beyond the death benefit? To answer that question, you first need to examine how cash-value whole life insurance works.

The investment concept
When you purchase a whole life policy, a portion of your premium goes toward the life insurance itself, another portion goes toward your cash value, and the rest goes toward administrative costs and your agent's commission. In a sense, the cash-value portion is an investment: the insurance company invests it, and in return, you get some of the profits. Your share goes into your cash value. Like all investments, it has its rewards. Compared to term insurance, then, you do seem to get more than you pay for with whole life. But that's not entirely true.

Tax benefits
One of the biggest differences between this investment and many others is that all of the return on your investment is tax-deferred: you don't have to pay any taxes on the gain until you withdraw it, and, unlike other investments for which you must pay a capital gains tax, you pay taxes only on the difference between the total gain and the total premiums paid, because the portion of your premiums that goes toward purchasing your actual insurance reduces the amount of gain you realize. For example, if you get a 5 percent return on a savings account of $10,000, at the end of the year you have $500 of interest. But you also have to pay 20 percent of that return in income taxes (even the capital gains tax is 20 percent), so you gain only $400 (20 percent of $500 is $100, subtracted from $500). In contrast, if the $10,000 is in a tax-deferred account, such as a whole life insurance policy, you gain all of the $500, so if nothing else, you're getting 20 percent more than if you put this same money in an interest-bearing savings account. Figure 5-1 illustrates this comparison.

Figure 5-1: Comparing two ways of investing $10,000 at 5 percent interest. In a savings account, the $500 of interest loses $100 to the 20 percent capital gains tax, leaving a $400 (4 percent) yield. In a tax-deferred account, such as a whole life insurance policy, no taxes are due, leaving the full $500 (5 percent) yield.

Rates of return
The rate of return you get from an investment in whole life insurance is based on the insurance company's ability to invest wisely. The company combines all the cash-value money it gets from all its policyholders and invests this sum, usually in low-risk investments, which naturally pay lower profits than many other investments. Anything the company makes over that (and over and above its costs) goes toward your cash value, and regardless of what return the company gets on its investment, most whole life policies guarantee at least a minimum rate of return. If you're considering whole life insurance as a good way to invest, keep in mind that the rate of return you receive historically has been pitifully low. Use the protection you're buying, not the rate of return you get, as your basis for judging any type of life insurance as an investment. Table 5-1 gives you an example.

Table 5-1: Sample Whole Life Insurance Summary (the actual-return figures are taken from my personal policy). The policy insures a 38-year-old married man for a $50,000 death benefit at $51 per month ($612 per year), with an initial investment of $1,200. For each year (1 through 25, ages 38 through 62), the original table lists the premium, the total accrued premiums, and the cash value under the 4 percent guarantee, the actual return, and the agent's 8 percent and 10.75 percent projections; the death benefit stays at $50,000 throughout. (The actual-return column shows n/a for the later years; that data was not available.)

In this example, the company guarantees a rate of return of 4 percent but shows projections of double and almost triple that (8 percent and 10.75 percent, respectively). The actual return turned out to be just a bit higher than the minimum guaranty, and significantly lower than either of the more optimistic projections the agent presented. The projections that insurance agents give you are often little more than high hopes, and you shouldn't count on getting anything more than what is guaranteed. Note that the cash value never exceeds the amount of the premiums paid with the 4 percent guaranteed return. On the other hand, in part because of the initial investment, somewhere around the 10th year all of the premium paid goes toward the cash value, meaning that the interest on the reserve amount actually covers the cost of the insurance itself. As you age, though, the cash value increases at a slower rate because your insurance costs more. From the 20th to the 25th year, this policyholder pays over $4,000 in premiums, but his cash value increases only $1,596. That lower yield reflects the very high cost of purchasing insurance at age 60, indicating that unless you still want a death benefit at that age, you may want to cash out the policy.
Dividends
Over and above the guaranteed rate of return, some companies offer policyholders the opportunity to benefit from the company's success. The company pays policyholders dividends, just like shareholders, based on how well the company did the previous year. Policies with this benefit are called participating policies (often referred to as par policies, as opposed to non-par policies, that is, ones that don't pay dividends). The amount of your dividend depends on a number of factors:
- The company's overall financial success
- How well the company anticipated interest rates
- The number of claims the company had to pay
- How well the company did in its investments
- The specific terms of your policy

Dividends are factored into complex formulas (the outcome of which is your premium amount) and into the other options the company offers, such as loan benefits, as well as the guaranteed rate of return. A company makes no guaranty that it will pay this dividend (or any dividend, for that matter), and you should consider that possibility when evaluating competing policies; however, most annual and terminal dividends are, indeed, paid. When the insurance company declares that it will pay a dividend, it can pay the policyholder in cash, put the money into the cash value, use the dividend to pay the cost of the purchased life insurance, or apply the amount toward additional life insurance coverage. The option is usually up to the policyholder, and there are reasons for choosing each one. Naturally, the company hopes you choose to apply the amount toward additional coverage; in fact, most of the illustrations and examples that agents use in their sales presentations select that option. Some companies also pay a terminal dividend, a kind of bonus, if you will, upon the policyholder's death. The portion of an annual dividend paid to you in cash that exceeds the premiums you paid for the year is taxable; a dividend credited to your cash value or applied toward your premium is not taxable until you withdraw funds from the account.

Traditional Whole Life
The following sections look at some of the specific features of traditional whole life insurance.

Termination
If you terminate your whole life insurance, you're entitled to receive a surrender value, which is basically the amount of your accrued cash value. The surrender value continues to increase as you continue to contribute to the policy, but as you saw in Table 5-1, the amount increases at a progressively slower rate.

Death benefit
With whole life, your beneficiaries receive only the death benefit (the face value of the policy, meaning the protection you purchased) when you die, just as with term insurance. None of the cash value you accrue adds to the death benefit, even though the premiums you paid also contributed toward a cash value, because the cash value goes toward paying the higher cost of the insurance as you age.

Policy loans
An additional benefit of whole life policies is the fact that you can borrow against your cash value, usually at interest rates significantly lower than market rates, between 6 and 8 percent. The rates are so low because the lender is assuming absolutely no risk: if you don't pay back the loan, the company can take the money from your account. When you borrow against your policy, you don't have to pay any fees, and you can usually get the money in just a few days. If you die before you pay back the loan, the outstanding amount is deducted from the death benefit; if you terminate the policy, the outstanding loan balance is deducted from the cash surrender value.

Interest-Sensitive Whole Life
Interest-sensitive whole life insurance is a whole life policy in which you are paid an adjustable, variable interest rate rather than a guaranteed rate. Like an adjustable-rate mortgage, the rate you are paid is often tied to an economic indicator such as the Treasury bill rate. Premiums, death benefits, and cash value are all aligned with the variable interest rate, so policyholders have various options. When interest rates rise, you can maintain the same death benefit and the same percentage that goes toward your cash value but lower your premiums, or you can hold the premium and death benefit steady while increasing your cash value; in some policies, you can increase your death benefit while keeping the premium and the cash-value rate of return steady. Naturally, the opposite options apply when interest rates are on the decline: the cash-value return decreases while premiums and death benefit stay constant, or premiums rise so that the cash-value return and death benefit remain the same, or the death benefit decreases while the cash-value return and premiums stay at the same levels. Interest-sensitive whole life policies are very similar to universal life policies (see Chapter 6).

Single-Premium Whole Life
Some insurance companies offer single-premium whole life for people who have a large sum of money available to spend on insurance and who are looking for some tax benefits. You purchase the policy by plunking down one large sum for a specified death benefit, and you don't pay any annual premiums; in essence, you pay all your premiums up front and then let the company have use of your money for the entire period. The insurance company takes your money up front, invests it, and pays you a small return, usually with a guaranteed minimum, and probably just a bit more than the amount guaranteed. In exchange for storing some money in the company's coffers, you get a few benefits that may be important to you:
- You get a substantial discount on the cost of the insurance itself. Your lump-sum payment is less than the accumulated cost of annual premiums on comparable whole life policies, because the early premiums subsidize the higher cost of insurance when you're older.
- You get a significant tax savings. All the money you earn on your single-premium whole life policy is tax-deferred, and the tax savings may be the primary reason that people purchase this kind of insurance.
- You can immediately borrow against the cash value in your account.

Like other whole life policies, a cash value does accrue, and the death benefit either remains constant or increases, depending on the kind of policy you purchase. When the insured dies, the death benefit plus any accrued cash value goes to the beneficiaries. If you're sitting on some cash and you want to buy life insurance, consider this option; if you don't need or want life insurance, look into other, more profitable methods of saving money or lowering your tax burden. If you borrow against your whole life insurance policy, you give yourself the opportunity to take out a loan virtually interest-free: if you borrow only against what the cash value itself earns, you will likely pay approximately the same interest the cash value earns. But pay particular attention to the potential tax consequences. If the amount you borrow includes any of the tax-deferred gain (the interest you've accrued), you may have to pay taxes on the gain, so check with your tax advisor before exercising this option, and don't purchase any kind of insurance policy based on the rate of return or tax benefit you receive. The short sketch that follows shows the loan arithmetic.
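A minimal sketch of the policy-loan mechanics described above: an unpaid balance simply comes off the death benefit, or off the surrender value if you cash out instead. All figures, and the class name, are assumptions for illustration.

<code>
// Effect of an outstanding policy loan on what is actually paid out,
// per the description above. All figures are illustrative assumptions.
public class PolicyLoan {
    public static void main(String[] args) {
        double faceValue = 50_000;       // death benefit
        double cashValue = 7_000;        // accrued cash value (surrender value)
        double loan = 3_000;             // amount borrowed against the policy
        double loanRate = 0.07;          // assumed rate within the 6-8% range

        double owedAfterOneYear = loan * (1 + loanRate);
        System.out.printf("Paid to beneficiaries if you die owing the loan: $%,.0f%n",
                faceValue - owedAfterOneYear);
        System.out.printf("Paid to you if you surrender the policy instead:  $%,.0f%n",
                cashValue - owedAfterOneYear);
    }
}
</code>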
Summing Up
Compare the pros and cons of whole life insurance in Table 5-2.

Table 5-2: Pros and Cons of Whole Life Insurance
Pros:
- An increasing amount of the money you pay goes toward your cash value.
- The premium remains constant for the entire time you're covered (unless you choose a variable-premium policy).
- By borrowing against your policy, you give yourself the opportunity to take out a loan virtually interest-free.
- Your life insurance coverage is totally paid after only a few years by the profits from the cash value.
- Cash-value earnings are tax-deferred, and when you cash out, the premiums you paid reduce the gain.
- Lifelong coverage and no medical exam (unless you make changes to the policy).
Cons:
- Higher premium than term insurance, because you're also contributing toward your cash value and the agent's higher commission.
- Your earnings will most likely be fairly low; whole life is not appropriate to use as an investment due to the poor rate of return.
- Returns aren't as high as other tax-deferred plans such as IRAs and 401(k)s (which also allow you to invest your money wherever you want).
- Death benefit coverage becomes unnecessary later in life when you have no dependent beneficiaries.

Comparing Whole Life Insurance Policies
If you've decided that whole life insurance is for you, the next step is to get quotes from at least three insurance companies and compare them. To help you, I've provided a worksheet for you to fill out. Step 2 of the worksheet asks you to find out the rating of the insurance company; see Chapter 8 for more information about ratings companies, and see the Resource Center at the back of this book for how to obtain ratings.
Whole Life Insurance Worksheet (fill in one column for each of the three companies)
1. Name of company
2. Rating of insurance company
3. Initial death benefit/amount of insurance (use the same amount for each company for easy comparison)
4. Annual premium
5. Guaranteed rate of return
6. Interest-sensitive policy: How often can the interest rate change? Can the premium or expenses change? What are the current and previous year's interest rates?
7. Projected surrender value after the 1st, 5th, and 10th years
8. Dividends: Is there a history of dividends? What are the estimated annual and terminal dividends?
9. Additional costs if placed in a higher-risk category

CHAPTER 6
UNIVERSAL LIFE INSURANCE

IN THIS CHAPTER
- Understanding universal life
- Learning about interest
- Borrowing against your cash value
- Deciding on death benefits
- Determining premiums
- Determining the pros and cons of universal life insurance
- Choosing a universal life insurance policy

Universal life insurance provides flexibility to policyholders. It's similar to whole life, in that you have a cash value that grows, and it encompasses a term life policy where you decide the terms and renew the policy annually. However, universal life isn't simple like term insurance, or even like whole life: balancing the options can be quite complicated. So in this chapter, I discuss the basics of universal life insurance, the elements that you get to compare for your particular needs, and your options in designing the right policy for you.

How Universal Life Works
Universal life insurance is a form of whole life insurance, but with much greater flexibility. Like whole life, you have two components: a term insurance policy and an investment account from which the term insurance premiums are paid.
With universal life, you get to choose your options, and it's much clearer how much of the premiums goes toward your insurance protection, how much toward your cash value, and how much toward administrative expense (including commissions). You start with a planned death benefit that you work out with your agent, and you determine your planned premium based on how much you can afford and the cost of the insurance. The company subtracts an expense charge based on its fees, usually a fixed percentage of the premiums, including the charges for any options (or riders) and monthly administrative expenses. From the cash value, the company subtracts the current cost of insurance (the mortality charge). Then the company adds in the interest that your investment money earns, and you're left with a cash value that generates interest. Your ending cash value is the accumulated value that belongs to you when you cash out (or to your beneficiary when you die). Often, companies charge you a surrender charge to cash out, leaving you with the surrender value; the surrender charge is usually a small percentage of the total cash value.

But understanding universal life insurance doesn't end there. Other aspects of universal life to consider include:
- All the earnings in your investment account are tax-deferred.
- If you stop paying premiums, either because you forget or because the payment is never received, the company continues to pay the premium for you by deducting it from your policy's cash value, and it does so until no cash value is left. This is one way to continue coverage without paying premiums, but it does lower the amount of your cash value.
- You can withdraw money that has accumulated in your cash value. If you do, your death benefit decreases, because it depends partially upon the accumulated cash value.
- You can borrow against the cash value of your policy at a fixed rate, generally below market rates.
- If you increase your coverage, you may have to requalify by taking a medical exam.
- You probably have to pay a termination fee or surrender charge (backloading). This fee decreases each year you have the policy.
- The interest rate is a fixed rate, although it may be a tiered interest rate, in which part of your balance is paid at one rate while the rest is paid at a higher rate. For example, your interest rate may be 4 percent for the first $500 and 7 percent for the balance.

Table 6-1 illustrates the premiums, expenses, cost of insurance, interest credited, cash value, death benefit, and surrender value of a sample universal life insurance policy, month by month for one year. In the sample, a $50,000 policy carries a monthly premium of $51.07, a monthly expense charge of $3.83, and a monthly cost of insurance of roughly $27; roughly $40 of interest is credited each month, and the policy ends the year with an accumulated cash value of about $7,874 and a slightly lower surrender value.

Generating Interest
Insurance companies frequently have two or more interest rates that kick in at different levels. For example, the guaranteed interest rate may be 4 percent on the first $500 and 7 percent on balances over $500. The interest rate is calculated daily, so you get compounded interest (interest on your interest), and the interest earned continues to increase until it eventually equals and then exceeds the amount you contribute. When you're choosing a policy or company from which to buy your universal life policy, pick the one with the highest guaranteed interest so that your cash value grows most quickly.

Borrowing Against Your Cash Value
Universal life policies allow you to borrow against your cash value, usually at interest rates below what you can get elsewhere, even for loans secured against other assets. In effect, the money you pay is going directly into your own account. However, borrowing against your policy generally lowers the interest you receive on your cash value, making it equal to or less than the rate at which you borrow. For example, say that you earn 4 percent on the first $500 of cash value and 7 percent on any amount in excess of that $500, and you have an accumulated cash value, or surrender value, of $7,874. With a balance of almost $8,000, the total interest works out to just under 7 percent. You can borrow $3,000 against this policy at a 6 percent interest rate, well below what you can get at a bank (even for another type of secured loan), so this deal is quite good if you need the cash. But the interest you earn on the cash value is no longer 7 percent on everything over $500. The interest now takes into account the $3,000 you borrowed, and the total interest you earn on your account is
- 4 percent on the first $500,
- 6 percent on the next $3,000 (the amount borrowed), and
- 7 percent on the balance.
The sketch below works through this tiered calculation.
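The tiered crediting just described, before and after a $3,000 policy loan, works out as follows. The balance is the $7,874 from the sample policy and the rates are the ones quoted above; the class name is mine.

<code>
// Tiered interest crediting on a universal life cash value, before and after
// a policy loan, using the rates and balance quoted in the text.
public class TieredInterest {
    public static void main(String[] args) {
        double cashValue = 7_874;
        double loan = 3_000;

        // Before the loan: 4% on the first $500, 7% on the rest.
        double before = 500 * 0.04 + (cashValue - 500) * 0.07;

        // After the loan: 4% on the first $500, 6% on the borrowed $3,000, 7% on the rest.
        double after = 500 * 0.04 + loan * 0.06 + (cashValue - 500 - loan) * 0.07;

        System.out.printf("Interest credited before the loan: $%.2f (%.2f%%)%n",
                before, 100 * before / cashValue);
        System.out.printf("Interest credited after the loan:  $%.2f (%.2f%%)%n",
                after, 100 * after / cashValue);
    }
}
</code>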
Death Benefit Options
With universal life insurance, you can choose how much death benefit is to be paid. You have two options, and although the options appear similar, some subtle differences between them can change the amount dramatically. With both options your premium remains the same; the death benefit and surrender value are what differ.

Option 1: Fixed death benefit
When you choose a fixed death benefit, whatever amount you sign up for (in the example, $50,000) goes to your survivors. In actuality, the face value of the policy, the initial $50,000, decreases by the amount you've accumulated in your cash value account; the death benefit remains the same because the decreased face value and the increased cash value add up to the total amount you chose.

Option 2: Increasing death benefit
With the second option, your death benefit increases in line with the increase in your cash value, so your survivors get more than the original face amount, which certainly appears to be a great deal more for the consumer than what Option 1 provides.

Which option is for me?
If you look at Table 6-2, you can see that the two death benefit options differ significantly. Table 6-2 compares the options year by year for the sample policy with a $612 annual premium: under Option 1, the death benefit stays at $50,000 while the ending cash value grows to about $17,874 by the 25th year; under Option 2, the cash value grows more slowly, but the death benefit keeps climbing. If you die in 25 years, your survivors receive about $18,000 more under Option 2. So what's the catch? Why would anyone choose Option 1? With Option 2, your cash value increases more slowly than with Option 1, so you must continue paying the annual premiums. And if you don't die during that time and instead take your surrender value, often at a point when you no longer need the same kind of protection you did 25 years earlier, you're better off with Option 1. In effect, by choosing Option 1, you're gambling on a long life so that you can withdraw a larger cash value. To determine which option is best for you, you must consider a number of factors:
- Your current age and health
- How much protection your dependents will need as you age
- Whether you can increase your net worth at a greater rate by investing in other options
- How much of a gamble you're willing to take
A short sketch comparing the two payouts follows.
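A minimal sketch of the trade-off in Table 6-2. It treats the Option 2 benefit as the face value plus the accumulated cash value, which is an assumption consistent with the description above but not a quote from the table; the 25th-year cash value figure is approximate and the class name is mine.

<code>
// The two universal life death benefit options, using approximate 25th-year
// figures from the discussion above. The cash value figure is an assumption.
public class DeathBenefitOptions {
    public static void main(String[] args) {
        double faceValue = 50_000;
        double cashValueAtYear25 = 17_800;    // roughly the sample's 25th-year value

        double option1Benefit = faceValue;                     // fixed death benefit
        double option2Benefit = faceValue + cashValueAtYear25; // benefit rises with cash value

        System.out.printf("Option 1 pays survivors: $%,.0f%n", option1Benefit);
        System.out.printf("Option 2 pays survivors: $%,.0f (about $%,.0f more)%n",
                option2Benefit, option2Benefit - option1Benefit);
    }
}
</code>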
Premiums
The amounts of your death benefit, accumulated cash value, and premium are all interrelated with universal life. The more you pay per month in premiums, either the more protection you're buying or the more cash value you're building. The higher the protection you buy, either the higher your premiums or the lower your cash value. And the more cash value you want to build, either the higher the premium or the less protection you're buying. You can't have both.

Choosing your premium
If you want a universal life policy and believe that you can afford the premium, you need to balance three considerations to determine your cash value:
- Determine how much you can afford from your monthly budget.
- Decide how much protection you want to purchase.
- Balance the amount of protection you want to purchase with the amount of premium you can afford.
You invest in a life insurance policy to be covered in case you die, not because life insurance is a good investment; you can generally make more money by putting your money into other investments. Because your primary goal in buying insurance is protection, your primary consideration should be the amount of the death benefit, and your premium should be in line with how much you can afford to pay. Cash value and your premium cost should be secondary factors.

Prepaying your premium
With universal life insurance, you can buy in up-front by putting a significant sum into your account, which allows your cash value to increase more because your account starts at a higher level. Prepaying also allows you to lower your premiums, because the accumulated cash value is earning more, which means that more of your earnings can contribute to the cost of the insurance. People have generally used prepaid insurance as an investment to build cash value without incurring any tax consequences, but with IRAs, Roth IRAs, and 401(k)s available, chances are you won't be interested in this option. Is prepaying for you? Not likely, unless you really want insurance but can't afford monthly payments out of your budget. On the other hand, prepaying means that you must take a lump sum of cash from somewhere, and you have to trust your agent to guide you to the best plan for you. Remember, too, that if your death benefit remains fixed, paying the premiums out of the account reduces the accumulated cash value.

Summing Up
Table 6-3 compares the pros and cons of universal life insurance.

Table 6-3: Pros and Cons of Universal Life Insurance
Pros:
- A lot of flexibility: you can tailor the premiums, cash value, and death benefit to suit your needs, and you can opt for either a fixed or an increasing death benefit.
- The premium remains constant for the entire time you're covered.
- Coverage is totally paid after only a few years by the profits from the cash value, and most of your premiums are going to you.
- Cash-value earnings are tax-deferred, and when you cash out, the premiums you paid reduce the gain.
- Lifetime coverage and no medical exam (unless you make changes to the policy).
Cons:
- Complicated terms and options.
- Premiums are considerably higher than for term insurance.
- Whether it comes from your monthly budget or your accumulated cash value, you're still paying the premium.
- Not appropriate as an investment due to the poor rate of return; returns aren't as high as other tax-deferred plans such as IRAs and 401(k)s (which allow you to invest your money wherever you want).
- Death benefit coverage becomes unnecessary later in life when you have no dependent beneficiaries.

Comparing Universal Life Insurance Policies
If you've decided that universal life insurance is for you, the next step is to get quotes from at least three insurance companies and compare them. To help you, I've provided a worksheet for you to fill out. Step 2 of the worksheet asks you to find out the rating of the insurance company; see Chapter 8 for more information about ratings companies, and see the Resource Center at the back of this book for how to obtain ratings.

Universal Life Insurance Worksheet (fill in one column for each of the three companies)
1. Name of company
2. Rating of insurance company
3. Death benefit/amount of insurance (use the same amount for each company for easy comparison), and the death benefit if it is an increasing policy
4. Annual premium
5. Guaranteed rate of return
6. Guaranteed surrender value after the 1st, 5th, and 10th years
7. Projected surrender value after the 1st, 5th, and 10th years (using the agent's estimated, hoped-for rate of return)
8. Dividends: Is there a history of dividends? What are the estimated annual and terminal dividends?
9. Additional costs if placed in a higher-risk category
10. Special features/options available

CHAPTER 7
ALL THE OTHERS

IN THIS CHAPTER
- Looking at variable life insurance
- Understanding group life insurance
- Finding out about industrial life insurance
- Checking into mortgage insurance
- Exploring endowment life insurance
- Learning about various riders

In this chapter, I talk about an interesting variation on universal life, some of the different options you have with all policies, and some ways you can save money by combining with other people. I also cover riders, which are some of the options you can purchase.

Variable Life
A variation on whole and universal life insurance (see Chapters 5 and 6, respectively) is a variable life insurance policy. Under the terms of this form of insurance, you maintain a cash value from which your term insurance premiums are paid. What makes this type of policy different is that the cash value is invested in mutual funds. You maintain the control over where the money goes, and your cash value goes up and down with your investments.

Variable life investments
Typically, you have the choice of investing your cash value in a common stock mutual fund, a bond fund, a money market fund, or any combination of funds. With some variable life insurance policies, you can change funds, but you're always investing in the insurance company's funds, and the success of these funds depends on how successful the insurance company's investment managers are. Remember that your investments are paying not only for the cost of your term life insurance but also for the cost of the company's investment managers.

Straight versus universal variable life
You can get two different kinds of variable life policies: a straight variable policy, in which your premium is fixed and the death benefit rises and falls as the investments go up and down, and a variable universal policy, in which the premiums vary.
the death benefit is either fixed or increases (just like a normal universal life policy). your investments are paying not only for the cost of your term life insurance but also for the cost of the company’s investment managers. . make certain it has a guaranteed death benefit. Other terms of variable life Most of the other terms of variable life policies are the same as a universal life policy: s You can borrow against your cash value. which adds value for many people. And with most policies. s s You can choose to have an increasing or fixed death benefit. the amount you borrow is limited by the amount that’s invested in a money market fund. If you’re considering purchasing a straight variable life insurance policy. Remember. If your investment’s value decreases. In many policies. which often gets a lower rate of return than the stock or bond funds. so does your death benefit. your death benefit will never decrease below the original face value. variable life is an excellent product because it combines flexibility and bigger returns. in part because the expense charges are much higher.66 CliffsNotes Understanding Life Insurance benefit goes up by that amount. But it also adds in considerably higher costs. This sort of defeats the purpose of buying this kind of policy to take advantage of the more dramatic rises in the stock market. Your cash-value earnings are all tax-deferred. The premiums are much higher than universal life. The bottom line on variable life For many people. employers are allowed to provide up to $50. One of the key things to remember about group life policies is that you don’t own the policy! Although you’re the one who is covered. and cheaper when a lot of people buy it. you’re not insuring yourself to make a killing on the market. Group Life Life insurance. that company can offer rates considerably less than if you buy a policy on your own. income. particularly in a tax-deferred vehicle. If you find a different job. you’re responsible for paying taxes on that income.000 policies). has a guaranteed death benefit. However.Chapter 7: All the Others 67 The purpose of life insurance is to protect your survivors. Be aware that you may suffer tax consequences when your employer pays for your life insurance. Any premium paid for a policy above that minimum is considered . in effect. not you. like any other commodity. under the current tax rules. you can no longer buy the coverage. Typically. or hundreds. or quit.000 policies aren’t double the price of $50. especially a variable life policy. or you no longer are a member in the organization through which you purchased the insurance. If you’re looking for a good return on your money and want to invest.000 in term life insurance for you without your facing any tax consequence. The premium that your employer pays for your life insurance coverage is. group life insurance is arranged through your employer or an organization to which you belong. the policy belongs to your employer or organization. or thousands of people agree to purchase policies from one company. you’re probably better off doing so through an IRA or 401(k). So make sure that any policy you buy. When tens. retire. is cheaper when you buy a lot of it (which is why $100. Therefore. Although you want to choose the right kind of policy for your needs. and so does your mortgage insurance death benefit. is a specialized kind of policy that isn’t too common. however. industrial/burial life insurance is not a good buy. 
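For readers who like to see the group-coverage tax rule spelled out, here is a minimal sketch of the arithmetic this chapter describes, using the chapter's own numbers. The function name and the "premium above the cost of the first $50,000" shortcut are illustrative assumptions; in practice the IRS values excess employer-paid coverage with its own uniform rate table rather than the employer's actual premium, so treat this as a back-of-the-envelope illustration, not tax guidance.

```python
# Illustration only: the simplified "premium above the first $50,000 is taxable"
# arithmetic described in this chapter. (Actual IRS rules value the excess
# coverage with a uniform government rate table, not the employer's real
# premium, so treat these numbers as a sketch, not tax advice.)

TAX_FREE_COVERAGE = 50_000  # employer-paid group term life excluded from income

def imputed_group_life_income(total_coverage, total_premium, premium_for_50k):
    """Return the portion of the employer-paid premium treated as taxable income."""
    if total_coverage <= TAX_FREE_COVERAGE:
        return 0.0
    # Everything the employer pays beyond the cost of the first $50,000
    # of coverage counts as additional income on your W-2.
    return total_premium - premium_for_50k

# The chapter's example: a $250,000 policy costing the employer $850 per year,
# where a $50,000 policy would have cost $300.
taxable = imputed_group_life_income(250_000, 850, 300)
print(f"Taxable imputed income: ${taxable:,.0f}")   # -> Taxable imputed income: $550
```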
the policy pays a small amount to your beneficiaries for your burial expenses. you must count the additional $550 as income on your tax return. If. but the death benefit is specifically tied to the cost of burial. you feel that you would like enough coverage to pay for your burial.68 CliffsNotes Understanding Life Insurance additional income for which you must pay taxes. Mortgage Insurance Mortgage insurance is. Industrial Life Industrial life. In essence. As you pay off your mortgage. The premiums are quite low because the policy’s face value — the cost of burial — is fairly low. It’s usually more expensive than simple term insurance and doesn’t pay your beneficiaries for anything other than your burial.000 term life insurance policy for you. So if your employer purchases a $250. The cost of group term life insurance over $50. . As its name implies.000 policy for $300). the balance declines.000 is item C in Box 13 of your annual W-2 Earnings Summary. you should speak with an agent about buying or increasing a life insurance policy. another term insurance program designed specifically to pay the balance of your home mortgage. industrial life is another term life policy. essentially. For most people. also called burial life. for which the company pays $850 per year (and the company can buy a $50. Chapter 7: All the Others 69 If you want enough life insurance coverage to pay for your mortgage. the company pays your premium from your cash value if you don’t pay. With this rider. Riders are some of the most important parts of the policy. talk it over with a trusted and knowledgeable agent. speak with an agent about buying or increasing a life insurance policy. Endowment Policies Endowment insurance policies are unique because they’re both savings plans that build cash value and term life insurance policies that expire if you don’t die. pay special attention to the riders being offered. Riders Riders is the term used in the insurance business for options that you can purchase as extras. You may want to speak directly to the fundraising director of the organization you want to be the beneficiary. But one. If you’re interested in this kind of plan. an automatic premium loan. People who want the proceeds of their insurance to go to some organization. may take out endowment policies. such as a college or charity. either because you forget or because the payment is never received. Most riders cost you additional fees. is almost always offered to you at no charge if you have a cash-value policy (whole life or universal life) and you check off the box that requests it. Few insurance agents know much about these plans because they’re very specific and people don’t use them very often. . When you purchase a life insurance policy. . either more of your premium goes toward your cash value. Remember. universal life. at the same time. You may also want to cover your spouse in the event of his or her untimely death because both of you bring in income to support the family. so purchasing life insurance for your children is usually unnecessary. using your premium to buy your term life coverage and add to your cash value. You’re building some cash value and. you’re probably going to pay much more than if you broaden your coverage to include the other person. taking this rider is a smart decision. that you’re 40 years old and you buy a universal life or whole life policy. Look before you leap. insurance is protection. Generally. This benefit usually saves you a great deal of money. 
If you go out and buy a separate term life policy for your spouse. for example. and most variable life — is the opportunity to purchase a term life insurance policy for others in your family under your cash-value policy.70 CliffsNotes Understanding Life Insurance For most insurance policyholders. breadwinners use insurance to protect their family. Don’t get caught without protection just because your check gets lost in the mail or you forget to send it in. or the additional amount it earns goes toward paying the higher premiums for your coverage. Family rider protection One of the options available with cash value policies — whole life. Say. As the cash value grows and earns interest. and compare the numbers by using the tables provided in Chapter 10. Although accidental death/double indemnity riders are quite cheap. One of these provisions covers accidental dismemberment. Nondeath and living needs benefits Another rider benefit that many policies (particularly newer ones) offer is a nondeath benefit. for most people they’re not really good deals. you can receive a portion of the total face value. or vision in one or both eyes due to an accident (not an illness or disease). Most policies specify that in order to be covered under this rider. you base how much you insure yourself for on your survivors’ needs. in which you lose an arm. a leg. because illness often brings additional and prolonged costs. they probably need less than if you die from an illness. Typically. the insured must die within 90 days of an accident. and an accidental death won’t increase the amount they need. this rider pays your survivors twice the amount of the policy if you die an accidental death. consider raising your amount of coverage. and the cause of death must be directly related to the accident — and that may be hard to prove. you probably feel under insured. After all. Some policies offer a triple death benefit if the accident occurs while you’re a passenger on a common carrier. Essentially. such as a commercial airline. or bus. depending on .Chapter 7: All The Others 71 Double indemnity Double indemnity is also called accidental death. Accidental death riders sometimes raise questions about whether a death is a result of an accident or an illness or disease. (In fact. Instead. where the insured receives the insurance payout without dying. as opposed to death by disease or illness.) If this coverage appeals to you. train. It works fairly simply: Each year. the amount of death benefit increases based on some neutral statistic. which pays for nursing home care or in-home nursing care. your coverage is lower. and so on. Most companies charge a fee to take advantage of this benefit. or long-term care insurance. if you are terminally ill and have less than one year to live (your doctors must provide certified statements to that effect). and high-tech medical care that extends lives. so if you live longer than expected. one eye or two. which covers your expenses if you become partially or totally disabled and can’t work. Don’t confuse this kind of coverage with disability insurance. Usually. the amount of which depends on how much coverage you have and the percentage you’re withdrawing. With this rider. you can withdraw a percentage of the total coverage — perhaps as much as half. Cost of living adjustments Another rider that many people feel strongly about is the cost of living rider. You can also get a policy rider to cover you if you suffer a catastrophic illness such as a stroke or kidney disease. 
The amount you take out reduces your death benefit. your insurance company will allow you to. in effect. and a rider if you suffer from a terminally ill disease. AIDS. This coverage is an actual payment from your life insurance company because you’re terminally ill and you want to be able to use some of the death benefit before you die. The insurance contract clearly spells out the amounts you receive in each instance. although more likely one-quarter or one-third. This kind of coverage has become increasingly popular as more and more people face the issues of aging.72 CliffsNotes Understanding Life Insurance whether you lose one limb or two. take an advance on the face value of your death benefit. . to $103. Instead. to $105.5 percent. Secondly. you don’t get this additional benefit for free. the inflation rate has been quite low. . The next year.5 percent. with most policies you get to choose each year whether to purchase the higher amount. if you have a cash-value policy and your premium is constant.000. One of the benefits of a cost of living rider is that your coverage goes up without your having to requalify for a higher amount. your coverage rises 2. the coverage goes up 3 percent. On the other hand.Chapter 7: All The Others 73 usually the Consumer Price Index (CPI) put out by the federal government. the face value of your insurance policy automatically increases by this percentage. Naturally. either your premiums rise to pay for the additional coverage or. this rider can be a pretty good deal: Your insurance coverage goes up to reflect the new needs of your survivors without your having to worry that you won’t qualify for the higher amount. In the last several years. you no longer have the option in later years and have to requalify if you want to buy the rider again. the insurance company reduces the amount that goes into your cash value. when the CPI is 2. for example. If you choose this rider. That means no new medical exams later in life when you may not qualify. Adding 2 or 3 percent is not likely to make much difference to your survivors. the amount of increased coverage can be pretty small. But if you decline in any one year. For example: If you have a $100. In that regard.575.000 policy and the CPI is 3 percent. And if some event occurs in your life that warrants increasing your coverage — the birth of a child or a new job that results in a dramatic increase in the cost of your lifestyle. you just purchase a new amount at the prevailing standard rates for someone your age. And with most guaranteed insurability riders. you probably won’t be able to pay your life insurance premium either. Speak with your family. and when your situation changes later on and you need or want additional protection. you just buy more. your financial advisor. For the privilege of being able to buy more insurance later without having to requalify. chances are your concern is really more about the total protection you have purchased than the small yearly increases. You don’t have to worry about requalifying. Furthermore. for example — buy a larger policy. and your insurance agent about these concerns. you pay more now. you buy only what you need now. which means that the additional payments you make along the way are wasted if you don’t increase your coverage by then. . which already cost more.74 CliffsNotes Understanding Life Insurance If this rider appeals to you. Policy protection: Waiver of premium If you become disabled and can no longer work. 
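The cost of living arithmetic described above is simply year-over-year compounding of the face value by the CPI. Here is a minimal sketch, assuming you elect the increase every year (as noted above, declining in a year usually ends the option). The function name is illustrative; real policies spell out their own rounding and election rules.

```python
# A small sketch of how a cost of living rider compounds the face value year
# by year, using the CPI figures from the example in the text.

def face_value_with_col_rider(initial_face_value, annual_cpi_rates):
    """Compound the death benefit by each year's CPI, assuming every increase is accepted."""
    face_value = initial_face_value
    history = [face_value]
    for cpi in annual_cpi_rates:
        face_value *= (1 + cpi)
        history.append(round(face_value, 2))
    return history

# A $100,000 policy, with CPI of 3 percent in year one and 2.5 percent in year two.
print(face_value_with_col_rider(100_000, [0.03, 0.025]))
# -> [100000, 103000.0, 105575.0]
```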
this rider is only offered with cash-value policies. making the guaranteed insurability rider sound enticing. make sure that you know how long you have to exercise your option. If you decide to purchase that added protection. Guaranteed insurability The idea that you can always buy more insurance later if you need to is very appealing to people. you can exercise this option only until you’re 45 or 50 years old (or a defined period of years. With this rider. depending on the terms of your policy). while other waiver of premium riders pay only if you can’t work in any job. You usually have to wait six months from the time you become disabled before this benefit kicks in. You’re better off increasing a disability insurance policy because it would pay for your expenses — including a life insurance premium — if you became disabled. but after that waives the premium only if you can’t work in any job.Chapter 7: All The Others 75 By purchasing a waiver of premium rider. Buying this rider usually costs you 5 to 10 percent more. for example). . the waiver of premium rider works a bit differently. For most life insurance holders. But if you don’t have disability insurance. you ensure that your insurance isn’t canceled. For universal life insurance policyholders. the waiver of premium rider only covers the cost of the insurance and the administrative fees. To qualify. The insurance company will most likely conduct its own medical exam to verify that you are disabled and unable to work. this rider may be an inexpensive means to add that coverage without having to buy a whole new disability policy. even though you don’t pay the premiums. Because a changing portion of your premium goes toward the insurance coverage and the rest for administrative fees and cash value. No additions are made to your cashvalue portion except from any interest that is earned. the company waives the premium if you can’t work in your normal line of work. possibly higher for term life because you’re always renewing it. Some waivers take over the payments if you can’t work in your chosen profession (or a profession in which you have experience or for which you’re trained). you must have documentation from your physician that you’ve become disabled and can’t work. Read the definitions and terms in these riders carefully and discuss them with your agent. a waiver of premium rider is probably not such a good deal. Still others may combine these requirements so that for a period of time (two years. CHAPTER 8 BUYING LIFE INSURANCE I N T H I S C HAPT E R s s s s s Using ratings to select the right company Choosing the right agent Buying online or through the mail Understanding the qualifying medical exam and the high risk categories What to do if you fail the exam After you find out about the different insurance products available. Rule 2: Choose the insurance product. . You have two rules to follow when choosing a life insurance policy: Rule 1: Choose the company. or you work in a high-risk occupation. selecting an agent. with particular emphasis on the medical exam and what to do if you’re not in good health. and selecting the right product. In this chapter. I also talk about qualifying for life insurance. you smoke. you need to start thinking about which product to buy and which company to buy it from. not the agent. I guide you through the process of judging a company by the ratings. not just the company. Make sure that the insurance company backs up its product. even if an individual agent goes out of business. 
Selecting the Right Company

As with any product you purchase, keep the following things in mind during your search for a life insurance provider:

- Make sure that you're selecting the right company for your needs.
- Be certain that if you're dissatisfied, you can cancel your policy without incurring any charges.

Insurance is a unique product. You're buying the future, not the present. The product doesn't even come into being until you die. The whole point of insurance is protection for your survivors, and you have to buy the best product from a company you know can deliver when called upon. So when you're selecting a company from which to buy the product, do what you can to make certain the company will be around to pay off.

Checking into a company's ratings

Five major independent insurance ratings companies analyze almost all the insurance companies (you can find their contact information in the Resource Center at the back of this book):

- A.M. Best Company
- Duff & Phelps Credit Rating Co.
- Moody's Investors Service
- Standard & Poor's Insurance Rating Services
- Weiss Research
You’ve chosen an agent to present you with options. together with the insurance company. if your state association picks up your policy because no other insurance company will. chances are good that your protection will be honored. but actually. you may lose some of your benefits. Although these backups are comforting. These kinds of insurance stores or sales agents act as independent brokers. a state association made up of all the insurers in your state who collectively ensure that you won’t be left in the lurch. bringing you. Hopefully. In this case. your policy may be picked up by a guaranty association.Chapter 8: Buying Life Insurance 79 What if the company goes belly-up? If a company does fail. that agent will present you with products from more than one company. you need to find someone you trust.” using one of the large search engines (www.) . Through an open dialogue.80 CliffsNotes Understanding Life Insurance If the agent can present you with only one company. At the time of this writing. not the agent. Hundreds. Many insurance agents think of themselves as counselors as well as sales agents. or perhaps thousands. one of which you’re interested in buying. Check with others who use a prospective agent. (For more great Web sites. you may run across a slick. your agent should be able to steer you to the right product. selected almost at random. They have a variety of products to sell. with more springing up every day. more and more people are shopping for insurance online. The agent’s job is to help you decide which one. A review of just ten of these matches. Therefore. rethink your position and consider the first rule: Choose the company.com) revealed thousands of matches. showed that most can lead you to a company from which you can buy life insurance. and make sure that he or she is a registered member of the National Association of Life Underwriters (NALU) or the Society of Financial Service Professionals (see the Resource Center for contact information). Shopping for Insurance Online As more and more products become available through the Internet. Beware of high-pressure tactics. see the Resource Center at the back of this book. Although most sales agents are honest. of Web sites are devoted to insurance. hard-sell insurance salesperson who only wants to sell you something without due concern for your needs or your budget. a search for “life insurance. Your insurance sales agent is a knowledgeable resource for you who can serve as a counselor.excite. helping you decide which product is best for you. You generally pay lower prices by buying online than from an insurance “store. You have the tools to shop around for the best company that offers the best product at the best price.Chapter 8: Buying Life Insurance 81 Is shopping online a good idea? Take a look at the pluses and minuses in Table 8-1 and then make your determination. Call and solicit recommendations and advice from the online and telephone assistants (if that option is available). be overwhelming and too timeintensive to weed through. your total portfolio. Narrow your search to the top two or three policies and companies. Do all the shopping and research you can online. Your search for a product is only as good as the search engine(s) you use. Or use e-mail to correspond. . You can do all the research imaginable on a company without leaving your computer. You can choose from a vast The amount of information can range of products. 
3.” You don’t develop a relationship with the online insurance company representatives or know anything about their experience. Table 8-1: Pluses and Minuses of Shopping Online Pluses Minuses You can choose from a You’re on your own. 2. which huge number of companies. means you can’t rely on getting guidance or counseling from a human being. You have to know exactly what you want and need because you won’t have an agent who knows your family. Follow these steps to make your decision on which online company to buy your life insurance from: 1. or your goals — important elements in selecting the right life insurance product. 7. Have that representative give you guidance. Make your decision. and a quote. give the agent a chance to match or beat your online offer. today it pales when compared to the positives of buying online. From friends and relatives. counseling. But if a flyer sparks your interest. Collect all the hard facts about the policy. Although buying a life insurance policy through the mail may have been a decent alternative a few years ago. 8. Do your homework. and talk with a trusted agent (or a recommended agent) to see whether the company is legitimate. get the name of an insurance agent who doesn’t represent an online insurance company. Compare terms. Make sure that you can actually talk with someone at the company itself. . 5. company ratings. consider it just as much as you would an insurance policy you discover through an online search. rates. and the company ratings from the independent ratings companies. Talk to that agent and have him or her give you guid- ance. 6. the company. If the company has both online and agent service. and a quote.82 CliffsNotes Understanding Life Insurance 4. Assuming you feel that the online companies are all good ones. Buying through the Mail Most professionals frown on buying life insurance from a direct-mail flyer sent out to you and thousands of your closest friends promising you personal service. call and make an appointment to meet or talk with a company representative. counseling. so they stack the deck. if it sounds too good to be true. Your only gamble is which insurance company you choose. is hoping that you don’t die before it has a chance to make more on what you pay than the cost of paying your survivors. If. The more likely you are to die sooner rather than later. on the other hand. it doesn’t need as much each month to make a profit from your premiums. the company doesn’t want to insure you because it will probably have to pay off before it can make a profit from your premiums (or it’ll charge you so much that the cost won’t be worth the coverage to you). If you’re too old or too unhealthy.Chapter 8: Buying Life Insurance 83 Like most things. You know that you will die eventually — the only question is when. Qualifying for the Coverage You Want The concept of life insurance is simple: Although you hope you live a long life. the company can count on you being around for a long time. are you a high or low risk? The company then bases your premiums on the answers to those questions. your survivors have something to fall back on. The company. So how does the company make this judgment? It uses the answers to two questions: s s Based on complicated statistical analyses and longevity charts. you also pay a company to make sure that if you die sooner rather than later. the more the company needs to get from you in the early years of your policy. it probably is. But companies don’t like to gamble. . how long are you likely to live? 
Based on your medical history and information. similarly. basing your premiums on your age. and taking basic vital signs (blood pressure and pulse) and a swab of cells from the inside of your cheek. The extent of your physical exam is based on the following: s s s s s Your age Your medical history Your family’s medical history Whether you smoke The amount of insurance you’re seeking to buy . X-rays. and you have to be healthy enough. will ask you to undergo a medical exam. you have to pass two conditions: You have to be young enough. In addition. and urinalysis. for most policies. noting your height and weight. you give the insurance company permission to check with your doctor to follow up on any findings or questionable items on the application. on your insurance application. your family’s medical history. If you’re younger than that. most companies won’t insure you. If you’re older than 60 or 65. the company will ask you to provide your complete medical history and.84 CliffsNotes Understanding Life Insurance To qualify for a life insurance policy. blood tests. The age condition is based strictly on statistics gathered for large numbers of people. Taking the medical exam Medical exams for life insurance can involve anything from a quick review of your medical history. To judge your medical condition. or it can be a more extensive exam requiring an EKG (electrocardiogram). the company will insure you. Go . The insurance company bears the cost of the exam (although. the company may ask you to see a physician or go to a clinic.Chapter 8: Buying Life Insurance 85 For the quick review. usually a health professional comes to your home or office. What if I fail the exam? If the company refuses to insure you because of your medical condition. The company will charge you a higher premium. The company will ignore the high risk (which isn’t likely). What “high risk” means Insurance companies are concerned about a number of highrisk people: s s s s s s Smokers People with high cholesterol Potential diabetics People with indications of coronary disease People with a personal history of cancer or other serious disease People who work in high-risk occupations If you fall into one or more of these categories. such administrative costs are built into the cost of the insurance policy you purchase). first check that the medical report is accurate. you face three possible outcomes: s s s The company won’t insure you. If you need to undergo more testing. Request that the company send a copy either to you (which they may not do) or to your doctor (which they will do). of course. ) In any case. but they’re likely to discover the same problems and raise their premium as well. You can go to a different company. If you find a mistake on your medical report. notify the insurance company in writing immediately. (An independent agent may be able to refer you to another company. Being turned down for life insurance can affect other insurance applications that you may be filing. appeal to the agent who sold you the policy.86 CliffsNotes Understanding Life Insurance over the report with your doctor. You can also try to find an insurance policy that doesn’t require a medical exam or that requires only a limited exam. although chances are you’ll have very little luck changing the amount. discuss the situation with your agent to see whether the two of you can come up with a solution. you either pay the higher premium or lower the face value to lower the premium. 
If the medical report is correct and the company charges you a higher premium. At that point. . You may be facing legitimate health concerns that you need to address. you can just subtract the amount in your account from the amount you paid out. So many of the costs are hidden inside the cash value. But what about next year? What about cancellation charges? The next sections take you through the basics. projected earnings.CHAPTER 9 HOW MUCH SHOULD YOU PAY? I N T H I S C HAPT E R s s s s Calculating premiums Dealing with mortality charges Understanding underwriting Comparing rates and companies Insurance companies use formulas to determine how much risk you pose and weigh that against how much profit they can make. and points out some of the hidden charges you need to know about. You know how much goes out of your checking account each month for the policy. if you have a cash-value policy. you have to know how insurance companies determine the cost. and dividends that figuring out how much you’re actually paying isn’t as easy as it sounds. How Premiums Are Calculated Before you can determine whether a premium is reasonable. The chapter concludes by explaining the basics of how to compare one company’s policy with another’s. even with term insurance. Theoretically. tells you about the factors that affect your rate. And you know at the end of each year how much the surrender value is. . This chapter illustrates how insurance companies calculate their premiums. either the same amount or a higher amount. Whether the amount is higher depends on whether you buy a one-year term. If you buy a five-year term policy. in which the death benefit decreases. or another multiple-year policy. if you choose a decreasing term policy. With multiple-year policies like a five-year term. If your premium is $346 per year and the dividend is $183. built-in inflation factor).88 CliffsNotes Understanding Life Insurance The costs of term insurance Every year (or every month or quarter or semiannually). If you choose to renew for another five years. . you pay a set premium for term insurance. If you add any options (convertibility. If you have a one-year term policy. On the other hand. you don’t know how much you’ll pay three or four years from now. the premium may go up slightly. your premium remains the same. the company declares the $183 dividend for all five years and you pay the same $163 each year. you pay again. such as mortgage insurance. for example — see Chapter 7). it bases the premium on your age as though you were 41 for all five years (plus a slight. With a one-year term policy. you pay the one year and the company raises the premium each year. a five-year term. which covers you for that period. Most companies can give you a pretty good estimate (provided that you select a company that has a proven track record of estimating accurately). But the key is whether the company provides dividends. So if you’re 38 years old and buy a five-year term policy. you pay just $163. the insurance company charges the same premium for all five years. The next period. you pay the next premium minus the dividend rate for five more years (refer to Chapter 3 for more information on dividends and to Chapter 4 for a discussion of term insurance). You don’t know how much you’ll pay next year. the company figures the premium based on the middle year. but also to build up the investment so that the interest it generates will pay the costs of the protection. 
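As a rough illustration of the subtraction just described, and nothing more, here is a short sketch. The premium and surrender figures are made up, and the point of the discussion that follows is precisely that dividends, hidden charges, and projected earnings make the true picture more complicated than this.

```python
# A minimal sketch of the "what has this policy really cost me?" subtraction
# described above: total premiums paid so far minus the current surrender
# value. It deliberately ignores the hidden charges, dividends, and projected
# earnings the chapter warns about.

def net_cost_so_far(annual_premium, years_paid, surrender_value):
    """Rough out-of-pocket cost of a cash-value policy to date."""
    total_paid = annual_premium * years_paid
    return total_paid - surrender_value

# Hypothetical numbers: $1,200 per year for 10 years with a $9,000 surrender value.
print(net_cost_so_far(1_200, 10, 9_000))   # -> 3000
```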
How all those numbers are configured is an extremely complex formula based on amortization tables. So what changes? Well. When you purchase a cash-value policy. actuarial mortality charts. you’re paying more for the flexibility a cash-value policy gives you. and the guaranteed amount that will accrue in your account. isn’t so simple. and commissions. you pay a set premium based on the amount of protection you’re buying. in many cases. Cash-value is also much more expensive because you’re paying not only for the protection. administrative fees. Cash-value. In addition. and contracts with life insurance salespeople. Unless you have a burning desire to wade through the numbers. the only things you really have to be concerned with are s s s How much protection you’re buying (so that the death benefit is sufficient for your needs) The amount of your total premium (to be sure that you can afford the monthly payment) The guaranteed return (so you know how much will go into your investment) .Chapter 9: How Much Should You Pay? 89 The costs of cash-value insurance Term insurance is simple. fees. the amount that goes to your survivors. The one thing that’s simple about cash-value policies is that the premium remains the same for the entire time. and commissions. as you may have guessed. everything else. including. the amount needed to pay the insurance company’s other costs. The insurance company determines your rate as follows: 1. chances are with a term policy you’ll have paid no more than half to two-thirds of that figure. depending on your age when you die. 2. life insurance is about how much you have to pay to insure your loved ones. The insurance company uses these numbers and some others to determine your life expectancy.000 death benefit you will have paid more than $50. the insurance company knows how many men and women in various age ranges (for example. based on its own research or by relying on research done by others. and the company uses these premiums to pay out death benefits. The insurance company then calculates how much money it has to earn on your premiums so that when it pays your survivors. Many other policyholders continue to pay premiums.to 39-year-olds. . In fact.90 CliffsNotes Understanding Life Insurance Mortality Charges When you pare things down to the bare essentials. such as the National Association of Insurance Commissioners. The insurance company and the industry at-large keep statistics on the mortality rates throughout the country.to 34year-olds.to 29-year-olds. and the company gets to keep all that money. 35. 30. The insurance company builds in the commission it pays the agent who sold you the policy. Many people pay premiums for years but stop their insurance before they die. it pays less than what it has made. Through this research.000? No. 4. 3. Does that mean that if you have a $50. and so on) die each year. Definitely not. 25. Keep in mind that the insurance company makes money on the premiums you pay. 6. the insurance company places you into one of three categories of risk. employee benefits. should be able to tell you the exact amount you’re being charged as a mortality charge. salaries. which is the mortality charge. This amount increases with age because it’s based on your life expectancy. if you use one. administrative costs. such as overhead. Even mail-order or online policies involve some costs of doing business. although you do have some flexibility to negotiate a better placement.Chapter 9: How Much Should You Pay? 91 5. 
5. The insurance company adds in the other costs of doing business, such as overhead, employee benefits, salaries, administrative costs, and a profit. Even mail-order or online policies involve some costs of doing business.

6. The insurance company totals all these numbers, which is the mortality charge. This amount increases with age because it's based on your life expectancy. Your agent, if you use one, should be able to tell you the exact amount you're being charged as a mortality charge.

Underwriting

Underwriting — the process by which the insurance company determines whether or not it will insure you — is the very heart of insurance. And if you're deemed insurable, underwriting is the process by which the company determines how much to charge you for the protection.

When determining whether or not you're insurable and what your premium will be, the insurance company places you into one of three categories of risk, although you do have some flexibility to negotiate a better placement. The three levels are:

- Preferred risk
- Standard risk
- Substandard risk

Ideally, you want to fall into the top category. The insurance company looks at four things to determine your category of risk:
Comparing Rates and Companies Chapter 8 discusses how to choose the company from which you buy your life insurance policy. or variable life.Chapter 9: How Much Should You Pay? 93 questions than usual. If you speak with any independent agents. You must also decide which riders. do the following: s Decide which kind of insurance: The first thing you must do is decide which kind of policy you want (if you could afford it): term. Eliminate any companies that you have doubts about. Review the information in the previous chapters. neighbors. This section gives you steps you can use to help you get to the bottom line. Use the information in Chapter 8 to help you rate the insurance companies. s s . Compare the same categories of risk: If one company quotes you a preferred rate while another company quotes you the standard rate. using the quote they will actually offer is essential. On the other hand. If possible. if the companies will actually differ in the category of risk. but the total cost over a long period is higher than from other companies. be sure to include the projected premiums for the next three or four terms (terms.94 CliffsNotes Understanding Life Insurance s Ask for quotes from each of those remaining companies or agents: Be certain that each quote represents the same amount of death benefit with all the same riders/options you want. Keys to comparing term insurance policies To compare policies and riders by using the Term Insurance Worksheet in Chapter 4. find out the history of the dividends so that you can reasonably rely on the projections. remember these four keys: s The premium for the first year does not necessarily reflect the total cost of the insurance: Many companies discount the initial premiums in order to get your business. If the company offers dividends. If you plan to remain insured for 20 or 30 years. ask the agent or company to break down the charges so you can more easily compare the policies — which is particularly important when you evaluate cash-value policies. Only then can you accurately decide which one is best for you. get quotes for all the kinds of policies. you’re not necessarily comparing the rates. If you’re unsure as to which kind of policy to buy. not years). So a 10 percent taxable return actually is the same as a s s . you have to know your marginal tax rate (including your state income tax if applicable). the dividend-paying company is not necessarily the better buy: You can tell only after you get quotes.Chapter 9: How Much Should You Pay? s 95 Compare the term insurance costs with the cash-value insurance costs: If you think you may want to buy a cash-value policy in the future. To calculate the aftertax return. By subtracting the total tax paid from the total return. that means that 28 + 7. Take the projected cash-value earnings with a grain of salt: Companies often show you charts featuring high projected returns. Because earnings on whole life policies are tax-deferred. or 35 percent of a taxable return on investment will be paid to the governments. dividends. cash values. Although one company may pay a dividend while another may not. Use the guaranteed return for your comparisons. you get your after-tax return. you may get a better rate when converting with the same insurance company than buying a new policy. not just the return itself when comparing the return on your policy to a return on a taxable investment. you must consider your after-tax return. 
If you’re in the 28 percent federal income tax bracket and the 7 percent state income tax bracket. you must compare the mortality charges. Keys to comparing whole life insurance policies To compare policies and riders by using the Whole Life Worksheet in Chapter 5. remember these seven keys: s Premiums remain constant for the entire policy: To compare costs and prices. and death benefits. you’re only concerned that the company remains solvent during the term you purchase. You may be just as well off investing your money in another taxdeferred vehicle and purchasing a lower-cost term insurance policy. Find out what happens if you terminate your coverage: Some insurance companies have cancellation or termination charges. Include these charges into your comparisons. but with whole life.5). Insurance is protection. To compare policies and riders by using the Universal Life Worksheet in Chapter 6. you’re buying into the company itself. which. however. A company with a significantly lower rating may guarantee a slightly higher return. Many other investments such as IRAs and 401(k)s can give you a better — and still tax-deferred — return on your dollar. s Put extra stock into the insurance company’s rating: With term insurance.5 percent untaxed return (35 percent of 10 is 3. not investment: If you want to purchase a whole life policy because you think it’s a good investment. but you may want to seriously consider the lower-yielding but higher-rated company for your own peace of mind.5. a number of differences make universal life policies more difficult to compare with each other. remember these four keys (Note: You can use these same keys when comparing variable life insurance policies): . leaves 6. think again.96 CliffsNotes Understanding Life Insurance 6. s s Keys to comparing universal life insurance policies Universal life insurance is similar to whole life. while others build in administrative charges up front. when subtracted from 10. On the other hand. both of which offer the same tax-deferred benefit. including IRAs and 401(k)s. Insurance companies will show you both guaranteed and hopeful projected earnings: Take these hopeful projections with a grain of salt. You may be just as well off putting money in other tax-deferred investments and purchasing a lower-cost term insurance policy. to compare costs and prices.Chapter 9: How Much Should You Pay? s 97 Premiums for universal life don’t necessarily remain constant for the entire policy (unlike whole life): Therefore. consider it a bonus. If one company guarantees a little more but has a significantly lower rating. you may want to seriously consider the lower-yielding but better-rated company. Use the guaranteed earnings. you can also use the company’s historical payback to get an idea of whether the hopeful projections are pie-in-the-sky or reasonably accurate. you have to use the same premium. s s s s . and if you get more. you’re buying into the company itself. not investment: Many other investments can give you a better return on your dollar than universal life insurance. The death benefit in a universal life policy can either remain constant or increase as your cash value increases: Use the same death benefit option with each policy to compare costs and prices with other companies and policies. Insurance is protection. Put extra stock into the insurance company’s rating: With universal and whole life. and details how to file a claim. you get to decide who is the . provides sample benefit tables for life income benefits. 
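To restate the marginal-tax arithmetic above in one place, here is a small sketch. The function name is illustrative, and the 28 percent plus 7 percent split comes from the example in the text; simply adding the federal and state marginal rates is the chapter's shorthand, and a tax advisor may compute your combined rate differently.

```python
# A minimal sketch of the after-tax comparison above: a taxable return is
# worth only (1 - marginal rate) of itself, which is what makes a smaller
# tax-deferred return look better than it first appears.

def after_tax_return(taxable_return, federal_rate, state_rate=0.0):
    """Reduce a taxable rate of return by the combined marginal tax rate."""
    marginal_rate = federal_rate + state_rate
    return taxable_return * (1 - marginal_rate)

# The chapter's example: 28 percent federal plus 7 percent state is 35 percent,
# so a 10 percent taxable return is worth about 6.5 percent after tax.
print(f"{after_tax_return(0.10, 0.28, 0.07):.3f}")   # -> 0.065
```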
This chapter also covers the settlement options that are available to your beneficiaries. Among these are s The right to choose your beneficiaries: As owner of the insurance policy. Follow this basic rule: Don’t sign anything that you don’t understand! Ownership Your insurance policy is property. even if you have a term life insurance policy. and you own it. The policy is real and has value. That means that you have certain rights associated with ownership.CHAPTER 10 LIFE INSURANCE PROVISIONS AND OPTIONS I N T H I S C HAPT E R s s s s Determining your death benefit payment options Reading the fine print for provisions and exclusions Terminating your policy Filing a claim This chapter discusses some of the main articles in a typical life insurance contract and highlights some of the key elements that you should look for. which defeats the whole purpose of buying the conversion rider. The right to exercise any options that your policy includes: If you include any riders or options in your policy. any surrender value goes toward paying the premium (see Chapters 5 and 6). s The right to cancel your policy: If you choose. either through written notice or by ceasing to make premium payments. you can mandate how and when these options are exercised. you may be required to pay cancellation charges. such as divorce agreements in which your dependent children are specifically named as beneficiaries. The right to cash in your surrender value at any time: Just as you have the right to cancel your policy. that with some cash-value policies. The right to convert your policy. s s s .Chapter 10: Life Insurance Provisions and Options 99 beneficiary when you die. Note: If you increase the death benefit or face value. With death benefit payments. You should note. In addition. assuming you buy that right as a rider: If you buy a term life insurance policy but include a rider that allows you to convert to a cash-value policy at a later date without any additional qualifying. the company may require you to undergo a new medical exam and requalify. your best bet may be to leave as many options open for your survivors as possible so that they can use the death benefit to suit their needs. minus any cancellation charges or administrative fees. You can change the beneficiary. however. you have the right to take out whatever cash value you accrue. you can cancel your policy at any point. assuming that you face no other legal restrictions. however. you can opt for this conversion at your discretion. You can decide whether you want to designate a specific time period for these payments or that a specific amount be paid in each payment until the total death benefit runs out. If you need to get official copies of the death certificate. be sure to follow up so that the beneficiary can receive the benefit. too. that notification must be in the form of certified copy of a legal death certificate with an embossed seal. Choosing installment payments As the life insurance policyholder. . but death benefits are paid to beneficiaries in other ways.” later in this chapter. In most instances. Nothing can be further from the truth. So if your loved one dies and you know that he or she has an active insurance policy. Taking a lump-sum payment The lump-sum payment is the most common. over a period of years. See “Filing a Claim. you can usually get copies for a small charge from the funeral home or whomever handled the burial or cremation. 
the beneficiaries automatically get a check in the mail as payment on the insured person’s death benefit. you can designate that your beneficiary is to be paid the death benefit a little at a time.100 CliffsNotes Understanding Life Insurance Death Benefit Payment Options Most people assume that when a person who has life insurance dies. Although executors of estates usually take care of these matters. Any monies that haven’t yet been paid generate interest. for more information. this income is not taxable to the beneficiary. although you and your beneficiary should certainly check with a tax advisor before making that assumption. usually. The insurance company must be notified that one of its policyholders has died. that’s not always the case. . the total amount of the death benefit and cash value. An interest-only payment option is a means of budgeting the funds while maintaining the investment. Because the beneficiary earns interest on the account. Table 10-1 is a sample Life Income Table that specifies the minimum amount your beneficiary will receive each month if you and your beneficiary opt for this method of payment. With this option. and the length of time you specify for the payments. Note that the amount is based on the beneficiary’s age and sex and whether you specify a number of years of payment or whether you leave the number of years open. the life insurance contract specifies a minimum interest rate.Chapter 10: Life Insurance Provisions and Options 101 Maintaining the principal: Interest-only payments You can specify that your beneficiary get a periodic payment of the interest on the death benefit’s cash value. the amount the beneficiary receives is based on the terms you set up. Making it last: Payments for life Another payment option is a payment-for-life program. Beneficiaries usually have the option of withdrawing the principal (although generally not all at once). However. Or they can borrow against the principal by using it as collateral and get a better rate than they can get from a bank loan. As in the interest-only option. your beneficiary gets a periodic payment from the death benefit. which is a means of budgeting the funds while maintaining an investment. Usually. he or she must claim the benefit as taxable income. the beneficiary can’t withdraw the funds but is guaranteed a payment for a specified period of time or for his or her lifetime. which defeats the purpose of this payment option. If you choose a specific time period. As long as whatever arrangements you want to make are legal. he or she receives equal payments at the end of each monthly interval. they’re constantly being revised). more than likely. whichever is longer. of course. . Many attorneys are quite familiar with other ways to designate how your heirs receive your death benefits. that’s up to you and the insurance company. that person’s heirs get a lump sum payment of any balance owed. they will charge a bit more for this creative arrangement because they may have to do some legal work. However. insurance professionals don’t recommend that you make up your own payment options unless you have an attorney review them. As a rule. companies are usually willing to set them up. Joining together: Joint life income This plan uses a schedule of payments that combines the two beneficiaries’ ages and sexes. You can see an example in Table 10-2. Setting up special options If you want to set up any other kind of payment option. The beneficiary receives payments as long as one of the two payees is alive. 
If the beneficiary dies during the specified period. The insurance and tax laws include many nuances (and. you can easily designate a creative payment program without realizing that you’ve created more obstacles for your beneficiaries than you want to.102 CliffsNotes Understanding Life Insurance As long as the beneficiary is alive. then payments continue as long as the beneficiary lives or to the end of the certain period. 32 5.10 Female $4.32 5.Table 10-1: Life Income Table: Monthly Payments per $1.96 9.80 7.83 5.62 7.77 6.85 5.22 10.53 4.18 7.15 5.46 7.54 Life with 5 Years Certain Male Female $4.51 4.20 $4.48 4.84 5.90 5.15 8.52 $4.47 Life with 20 Years Certain Male Female $4.32 5.30 8.66 103 .00 8.48 6.54 4.24 Life with 10 Years Certain Male Female $4.47 5.95 6.02 6.36 6.63 5.90 6.33 4.92 5.74 $4.94 5.20 5.92 8.85 5.60 4.43 6.77 5.43 5.000 in Life Insurance Chapter 10: Life Insurance Provisions and Options Age Last Birthday 50 55 60 65 70 75 Male $4.10 7. 19 4.01 7.79 4.58 4.67 5.53 4.25 5.82 4.15 10.04 5.53 6.50 7.97 6.67 6.29 7.33 4.78 4.47 4.90 6.47 4.72 4.71 50 54 55 59 60 64 65 69 70 74 75 79 80 84 85 & over 89 &over .33 4.71 7.90 4.90 5.48 5.15 4.83 12.72 4.78 8.99 5.81 6.29 6.04 5.61 5.74 4.83 4.52 9.24 5.67 4.90 5.16 5.44 8.44 7.70 10.58 4.74 5.82 5.78 9.52 9.104 CliffsNotes Understanding Life Insurance Table 10-2: Joint Life Income Table: Monthly Payments per $1.97 6.79 5.29 4.25 5.24 5.71 4.01 7.000 in Life Insurance Age Last Birthday Male 50 55 60 65 70 75 80 85+ Male Female Female 54 59 64 69 74 79 84 89+ $4.48 5.53 7.29 5.67 5.16 5.81 5. The insurance company adjusts the policy benefits for the correct age or sex. Rather. In addition. these paragraphs list all the reasons you would not be covered — that is. . reasons your beneficiaries would not get any of the funds you specifically provided for them. “all amounts paid will be in United States dollars”). called date of issuance. listing you either as the wrong sex or as older or younger than you were at the time of application. Age and sex This provision is a way for the insurance company to cover itself in the event that it (or you) made a mistake. your beneficiaries will receive an amount equal to the premiums you paid on the policy from the starting date. most policies also specify that if you do commit suicide within the two-year period. This provision doesn’t exclude all benefits if you commit suicide. it specifies that if you die as a result of suicide within two years from the issue date of the policy. to the date of death by suicide. Suicide Just about every life insurance application contains a suicide provision. Basically. The insurance company cites these two factors because they’re the two critical elements in how much it charges you for the protection. in addition to some fairly obvious concepts (such as.Chapter 10: Life Insurance Provisions and Options 105 Provisions and Exclusions Every insurance contract has — usually in small print — a number of paragraphs that are essential for you to read. your beneficiaries are not eligible for the death benefit (which may be referred to as the Amount of Basic Plan). does not go into effect if the total disability results from “an intended self-injury. specifying that premiums will be waived if you become totally disabled. This provision states that except for nonpayment of premiums. meaning an airplane ride for which no fares are required.106 CliffsNotes Understanding Life Insurance Also note that the waiver of premium benefit. 
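Because the tables above are quoted per $1,000 of insurance, turning a table entry into an actual monthly check is a single multiplication. Here is a minimal sketch; the $5.10 factor and the $150,000 death benefit are placeholders of mine, not values tied to any particular age or sex row in the tables.

```python
# A sketch of how to read the life income tables: the figures are monthly
# payments per $1,000 of insurance, so you scale them by the size of the
# death benefit.

def monthly_life_income(death_benefit, payment_per_1000):
    """Monthly payment for a life-income settlement, given the table factor."""
    return death_benefit / 1_000 * payment_per_1000

# Hypothetical: a $150,000 death benefit with a table factor of $5.10 per $1,000.
print(f"${monthly_life_income(150_000, 5.10):,.2f}")   # -> $765.00
```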
So if the company discovers an error in your application after . Non-commercial aviation Many policies will exclude a death benefit if you die as a result of a noncommercial airplane crash.” War Although some policies specifically exclude paying death benefits when you die from injuries sustained during a war — whether you belong to the military or not — many policies do not. your policy may not cover you. the company will not contest either the policy or the waiver of premium clause after the policy has been in force for a specified period. On the other hand. Check the fine print to be certain — some policies don’t exclude the activity but do charge higher premiums if you admit that you participate in these types of activities. Dangerous activities If you die while engaging in an inherently dangerous activity. Incontestability clause A key provision in most life insurance policies is the incontestability clause. such as skydiving or hang gliding. usually two years. policies often exclude the waiver of premium benefit if you become totally disabled as a result of war. Premium provisions The policy must specify how the premiums are payable and when they are due. if so.Chapter 10: Life Insurance Provisions and Options 107 that period. If you don’t have any dividend or if the dividend is insufficient. Furthermore. as specified in the policy contract (approximately 6 percent per year). . The policy also must specify whether you have a grace period and. if so. you will have to pay all the unpaid premiums plus compounded interest. the policy states whether any cash value will be used to pay the premium and. all riders. Next. the policy lapses and your coverage ceases. Finally. how long the period lasts. This period can be as long as five years after the policy lapses. and the waiver of premium benefit will end. the policy states when the death benefit. it can’t go back and void your policy. You remain covered until the insurance is officially terminated. although you will probably have to show proof that you’re still insurable. This provision covers you even if you misrepresent yourself on your application. If you don’t pay your premium by the end of the grace period. During the grace period. as discussed earlier in this chapter. the insurance company usually extends your coverage. how you will be notified. Termination As the word termination implies. the insurance company will first look to any dividends it owes you to cover the cost of the premium. the policy states how long you have to apply for reinstatement if your policy lapses. unless you misrepresent your age or gender. ironically. thinking about life insurance can provide an enormous amount of relief.000 to $10. Filing a Claim When a loved one dies. take special note of the termination clause and when the policy terminates.000 and can be much more. it’s easy even if you don’t have the actual policy! Steps to filing a claim Here’s how to file a life insurance policy claim: 1. eating more of the premium and surrender value. call the company itself (or any local agent for the company. who will either get you to the appropriate claims agent or handle it herself ). as you age. the cost of the coverage becomes higher. and along with that expense (and others) comes the immediate loss of income. The cost of the funeral itself averages $5. So be sure to withdraw the funds if you no longer want the coverage and would prefer to have the accrued value. 2. 
A loved one’s death brings many things to think about and many expenses that are unplanned and.108 CliffsNotes Understanding Life Insurance Because most policies have a specified age limit after which you are no longer insured. In fact. thinking about the life insurance policy may seem crass. . Toward the end of the insurance contract. Call the insurance agent who sold your loved one the policy. even if you don’t have all the details about the policy. The good news here is that filing a life insurance claim is very easy. often. If no agent is listed or you don’t know who the agent is. Doing so is especially important if you have a cash-value policy with a declining death benefit and you plan on taking out the surrender value. Look around to see whether you can find the written policy. Yet. unaffordable. Get a certified copy of the death certificate. Submit the claim form with the death certificate. Fill out the papers and forms as instructed. The Council will forward your inquiry to about 100 of the largest insurance companies in the country so that they can investigate whether your loved one had a policy with them. The insur- ance agent can usually do most of this task for you. Take special note of these two things: s You can arrange to have the funeral expenses paid directly from the proceeds of your insurance policy. If you know your loved one was insured but you don’t have the policy or know the name of the insurance company that issued the policy. especially if you have the policy itself available. 4. you receive your settlement.Chapter 10: Life Insurance Provisions and Options 109 3. s . 5. In just a matter of a few days to a week or two. you may be able to get the necessary information by contacting the American Council of Life Insurance at 1001 Pennsylvania Ave. You can usually choose from several options. Washington. but the forms should be fairly simple. Choosing among settlement options The most difficult part of the claim process is deciding on the settlement option you want if the policyholder hasn’t already specified the payment plan. DC 20004. NW. The funeral director will arrange for this payment. Will . Variable life insurance is similar to universal life insurance. Q&A 1. After you work through the review questions and the problem-solving scenarios. 2. Powers of attorney c. Your income b. You should base how much life insurance you buy on all of the following EXCEPT: a. What are the two basic rules to follow when selecting a life insurance company? ________________________________ ________________________________________________ 5. Your age 3. you’re well on your way to achieving your goal of figuring out what your life insurance needs are. Irrevocable trusts b. Your assets c. Health care proxy d. Your future expenses e. Your cost of living d. The major difference between the two is that with variable life ___ ________________________________________________ 4. Estate planning consists of all of the following EXCEPT: a. True or false: Older folks usually need less life insurance because their children are grown and no longer dependent on them.CLIFFSNOTES REVIEW Use this CliffsNotes Review to practice what you’ve learned in this book and to build your confidence in doing the job right the first time. CliffsNotes Review 111 6. Your age b. Your children are 12 and 9 years old. Your sex c. All of the above Answers: (1) False. not the agent. the death benefit decreases.000 on it. Scenarios 1. choose the policy. (3) Your cash value account is invested in mutual funds. 
You’re in the 28 percent federal tax bracket and the 31⁄2 percent state tax bracket. (2) b.000 and owe $80. True or False: One of the benefits of universal life insurance is that you pay less because as your need for protection decreases with age. The option to buy term life insurance for your spouse under your cash-value insurance policy is called ________________ 7. You are single with two children whom you have designated as your life insurance beneficiaries. (8) e. The rate of return you’re getting from the money you currently have in a taxable certificate of deposit is 5 percent. (4) Choose the company. Your health status d. The down payment practically depleted your . just turned 30 years old. The equivalent rate of return in an untaxed. (7) False. What should you do to make sure that they get the death benefit if you die prematurely? ________________________________________________ ________________________________________________ 3. Two months ago you purchased a condo for $98. Your father’s health e. life insurance cash-value account is ___________________________ ________________________________________________ 2. You are single. 8. and have no children. not just the company (5) c. (6) Family rider protection. The premium you pay for life insurance is based on: a. In this situation. (2) Appoint a trustee because minors can’t collect life insurance benefits.0 – 1. You checked with an online life insurance company and found out that a $100.000 on it.58 = 3. You have no debts except your car. which your net worth would likely cover. The only expenses they’ll incur is your cost of dying. Term insurance is not something you need right away. 5. . (3) Probably not.112 CliffsNotes Understanding Life Insurance savings. although you still own some stock worth about $6. your heirs won’t be saddled with debt if you die.5 percent of 5.0 = 1. and the condo won’t have lost much value in the two months.000. You’re paying $269 per month for your car (which is worth about $8. Is this a good investment for you? Why or why not? ________________________________________________ ________________________________________________ ________________________________________________ ________________________________________________ ________________________________________________ Answers: (1) 31.42 percent.58.000). which you pay off by its value. however. You have no other debts and pay off your credit cards each month.000 term life insurance policy would cost you about $120 per year. and you still owe about $4. you may want to start investing in a cash-value life insurance policy because at your age the premiums are low. CLIFFSNOTES RESOURCE CENTER The learning doesn’t need to stop here. CliffsNotes Resource Center shows you the best of the best — excellent resources to help you get more information about life insurance. I’ve included some books on related topics, Web sites that can help you do in-depth research on life insurance and get online quotes, and information about the five major independent ratings companies that judge insurance companies. And don’t think that this is all we’ve prepared for you; we’ve put all kinds of pertinent information at. Look for all these great resources at your favorite bookstore or local library and on the Internet. When you’re online, make your first stop, where you’ll find more incredibly useful information about life insurance. Books If you’re ready to move on to the next step, check out some of these other titles. 
Personal Finance For Dummies, by Eric Tyson, helps you extend the learning experience into other areas of your financial life. IDG Books Worldwide, Inc. ISBN 0-7645-5013-6, published 1997. $19.99. Consumer Reports Life Insurance Handbook by Jersey Gilbert, Ellen Schultz, and the editors of Consumer Reports Books. Consumer Reports Books. ISBN 0-8904-3708-4, published 1994. $16.95. Your Life Insurance Options by Alan Lavine. Wiley. ISBN 0-4715-4919-3, published 1993. $12.95 (paper). 114 CliffsNotes Understanding Life Insurance You can find these books in your favorite bookstores (on the Internet and at a store near you). We also have three Web sites that you can use to find out about all the books published by IDG Books Worldwide, Inc.: s s s Internet Some of the largest companies’ Web sites offer good information about life insurance and their other products, calculators to help you determine the amount of coverage you need, and/or online quotes. Find company Web site addresses in ads, by calling a local agent, or just guess — type in www., fill in the name of the company, and then add .com. Check out these Web sites for more information about life insurance. Life-Line, at, gives an excellent introduction to life insurance, provides a calculator to help you determine how much insurance you need, and another one to help you determine your “value.” Insurance Online at InsWeb () offers basic information, a calculator, and instant quotes (based on limited information from you; actual premiums may vary) on term insurance from major insurance companies. This Web site is an excellent means to compare available prices with the price you get from an agent. The Consumer Insurance Guide at provides tips, in-depth stories, and guidance on auto, homeowners, health, life, and business insurance, plus annuities. CliffsNotes Resource Center 115 SafeTnet.com, at, offers in-depth information about insurance and does extensive research about other Web sites. The folks behind SafeTnet.com visit and objectively review a huge number of Web sites that provide advice or insurance products to consumers. Next time you’re on the Internet, don’t forget to drop by. We created an online Resource Center that you can use today, tomorrow, and beyond. Resource Kit In this section, I include some organizations that you may find helpful as you navigate the realm of life insurance. The National Association of Life Underwriters (NALU), 1922 F Street NW, Washington, DC 20006; phone 202-331-6000; Web site. Society of Financial Service Professionals, 270 S. Bryn Mawr Avenue, Bryn Mawr, PA 19010-2195; phone 610-526-2500; fax 610-527-4010. The following five independent insurance company ratings organizations rate over 3,000 insurance companies to help you evaluate which one you should select. When considering any insurance company, check its rating from at least two ratings companies. A.M. Best Company, Inc., Ambest Road, Oldwick, NJ 08858; phone 908-439-2200. This company doesn’t have a Web site, but its ratings are available in libraries. Duff & Phelps Credit Rating, 311 South Wacker Dr., Chicago, IL 60603; phone 312-697-4600 or 312-368-3198 (Ratings Hotline); Web site. Insurance company ratings are available online using a search engine. Standard & Poor’s.htm. the CliffsNotes staff would love to hear from you. we may publish it as part of CliffsNotes Daily. phone 212-553-0300. cliffsnotes. Individual insurance company ratings aren’t easily uncovered online. 
To find out more or to subscribe to a newsletter.com and click the Talk to Us button. but hardcopies are available in libraries.com/ratings/insurance/index. Go to our Web site at www. If you’ve discovered a useful tip that helped you navigate the maze of life insurance more effectively and you’d like to share it. New York. Ratings aren’t easily accessed online. NY 10007. Weiss Ratings. Palm Beach Gardens. NY 10004. New York. but they are available in hardcopy in libraries. Or you’ve found a little-known workaround that gets great results.moodys. 15th Floor. fax 561-625-668. If we select your tip. . FL 33410. phone 561-627-3300.standardandpoors. Individual insurance company ratings.. 4176 Burns Rd. phone 212-438-7200.com on the Web.com.cliffsnotes. Web site www. free e-mail newsletter. Web site CliffsNotes Understanding Life Insurance Moody’s Investors Service. have you ever experienced that sublime moment when you figure out a trick that saves time or trouble? Perhaps you realized you were taking ten steps to accomplish something that could have taken two.. 25 Broadway. listed alphabetically. go to www. Web site www. Send Us Your Favorite Tips In your quest for learning. 99 Church St. our exciting. are available on this Web site.com.weissratings. 71–72 accidental dismemberment. 29 whole life insurance. 17 cost of living. 100 joint-life income. 48 catastrophic illness. 105 C cash-value insurance. 29 loans against. 30. 47 terminal dividends. See also loans burial life insurance. 21 predicting future expenses. 84 provisions and exclusions. 95 selecting the agent. 104 term insurance limits. 25–26 considerations for business owners. 102 lump-sum. 72 age factor in coverage amount. 27 financial needs of estate taxes. 72 charitable remainder trusts. 15 income estimation. 73 coverage amount. 42 of universal life insurance. 80 selecting the company. 27 D dangerous activities provisions and exclusions. 23 survivors’ needs estate taxes. 28 convertible policies. 20 role of life insurance for. 101–102 special options. 31 applicable exclusion amount. 58–60 of variable life insurance. 103 disability insurance. 35–37 cost of living adjustments. 99–100 payments for life. 48 payment options installment payments. 23 factor in qualifying for coverage. 31 variable. 68 buying life insurance online. 29–30 surrender value. 16 B beneficiaries choosing. 47 tax considerations. 73. 35 American Council of Life Insurance. 30 methods of payment.67 of whole life insurance. 89 rates of return. 47 double indemnity. 48 tax consequences. 55.INDEX A accidental death. 83 comparing rates and companies. 21 predicting future expenses. 96 comparing whole life policies. 19 funeral expenses. determining age and life expectancy. 26 automatic premium loans. 70 aviation accident provisions and exclusions. 21–22 probate costs. 71–72 . See death benefits borrowing from policies. 46–47 introduction to. 76 dividends factors determining. 94 comparing universal policies. 108 annuities. 22 budget worksheets. 100 interest-only payments. 51 premium calculation. 77–79 bypass trusts. 14 premiums. 93–97 comparing term policies. 22 factor in premium costs. See also universal life insurance defined. 21–22 probate costs. 105 death benefits defined. 19 funeral expenses. 30 fixed. 20 uninsured medical costs. 81–82 through the mail. 9 benefits. 65. 89 . 90 mortgage insurance. 57 investment aspects of life insurance annuities. 74 H high-risk activities provisions and exclusions. 27 charitable remainder trusts. 106 war. 
69 estate planning beneficiaries choosing. 70 from universal life policies. 70 filing a claim. 31. 68 M medical exams. 24 single-premium policies. 31 whole life insurance. 104 termination.118 CliffsNotes Understanding Life Insurance E employee benefit life insurance. calculation of. 46 participating policies. 98 G group life insurance. 105 non-commercial aviation accidents. 68 interest-sensitive policies. 27 estate taxes. 90 life insurance contracts provisions and exclusions age. 84 convertibility considerations. 23 dividend reduction of. 29–30 introduction to. 37 mortality charges. See also group life insurance endowment policies. 79 family rider protection. 105 premium provisions. 105 incontestability clauses. 106 sex. 46 premiums calculation of. 46 nondeath benefits. 105 living needs benefits. 89 of variable insurance. 72 ownership of policies. 48 tax consequences. 28 revocable trusts. 30 estimating future premiums. 105 industrial life insurance. 87 of cash-value insurance. 88 of universal insurance. 105 high-risk people. 85 I incontestability clauses. 25–27 considerations for business owners. 49–51 variable life insurance. 66 from whole life policies. 19 L life expectancy. 86 re-entry provisions. 11 trusts bypass trusts. 31. 38. 35 failed exams. 51 F failed insurance companies. 42 irrevocable trusts. 65 viatical settlements. 28 defined. 61 of term insurance. 28 P par policies. 49. 49–51. 27 irrevocable trusts. 60. 84. 104 dangerous activities. 72 loans automatic premium loans. 55 from variable life policies. 30 myths about life insurance. 106 defined. 30–31 cash-value insurance. 104 suicide. 69 mutual insurance companies. 21. 27 introduction to. 7–8 non-par policies. 10 single-premium plans. 67. 89 contract provisions. 107–108 funeral expenses. 67 guaranteed insurability. 58–60 defined. 55 surrender value. 38 compared to whole life insurance. 37 terminal dividends. 34 re-entry provisions. 11. 55 whole life insurance. 55 variable life policies. 67 term life insurance age limits. 27 charitable remainder trusts. estimating. 88 pros and cons. 106 trusts bypass trusts. 37 renewability. 35–37 decreasing death benefits. 54–57 comparing policies. 27 U underwriting. 35 as employee benefit.Index probate. 62 surrender charge. 57 premiums. 70 double indemnity. 29 introduction to. 86 renewing term insurance. 39 re-entry provisions. 55 survivor. 56 tax-deferred earnings. 41 comparing policies. 51 single-premium whole life policies. 96 worksheet. 28 revocable trusts. 70 cost of living adjustments. 37. 27 R riders accidental death. 49–51 suicide provisions and exclusions. 83–84 guaranteed insurability rider. 54 ending cash value. 20 in choosing beneficiaries. 20 purposes of life insurance. 9–13 119 Q qualifying for coverage criteria considered. 47 termination clauses. 39–41 convertibility. 60 prepayment. 65 T tax considerations $650. 55 surrender value. 57 loans from. 48. 37 revocable trusts. 28 defined. 55. See also beneficiaries loans from policies. 34. 74 high-risk people. 85 medical exams. 71–72 family rider protection. 91–92 universal life insurance. 14 settlement options. 75 rights of policy owners. 61 pros and cons. 94 worksheet. 74 nondeath benefits. 35. 104 surrender charges. 68 life insurance as tax shelter. 73 defined. 63–64 death benefits options. 55. 9. 49–51 trusts. 84. 10–11 .000 inheritance exemption. 47 estate taxes. 72 waiver of premium if disabled. 33 premiums. See also variable life insurance basics of. 108 sex provisions and exclusions. 55 interest accrual. 26 dividends. 
71–72 automatic premium loans. 19 group life insurance. 98 S salary increases. 27–28 universal life insurance tax-deferred earnings. 55 withdrawing money from. 27 irrevocable trusts. 104 single-premium whole life policies. 37 defined. 70 guaranteed insurability. 113 Consumer Reports. 66 policyholder control of investments. 46–47 interest-sensitive policies. 115 Consumer Insurance Guide. 66 types of tax-deferred earnings. 42 policy loans. 65 premiums. 81 Federal Trade Commission. 1. 113 Life-Line. 113 Standard & Poor’s. 55 V variable life insurance. 114 SafeTnet. 67 whole life insurance. See also universal life insurance cash value. 67 types of. 112 Excite. 48 defined. 72 W waiver of premium if disabled. 111–113. 65 withdrawing money from universal policies. 67 variable nature of. 32 IDG Books. 48 traditional policies. 41 comparing policies. 114 Weiss Ratings. 65 loans against. 52–53 death benefit. 95 worksheet. 42–47 compared to term. 114 National Association of Life Underwriters (NALU). 48 variable life insurance. 79 Duff & Phelps Credit Rating. 75 war provisions and exclusions. 65 defined. 31. 65 viatical settlements. 114 Dummies. See also universal life insurance death benefits guaranteed death benefits. 49–51 surrender value. 114 . 41 dividends. 48 tax benefits. 113 Moody’s Investors Service. 51 rate of return.120 CliffsNotes Understanding Life Insurance whole life insurance. 49 investment aspects. 43 termination. 5–6. 112 Insurance Online. 44–46 single-premium policies. 48 pros and cons. 105 web sites CliffsNotes. COMING SOON FROM CLIFFSNOTES Online Shopping HTML Choosing a PC Beginning Programming Careers Windows 98 Home Networking eBay Online Auctions PC Upgrade and Repair Business Microsoft Word 2000 Microsoft PowerPoint 2000 Finance Microsoft Outlook 2000 Digital Photography Palm Computing Investing Windows 2000 Online Research . Enter your registered eBay username and password and enter the amount you want to bid. After you’re satisfied with your bid. Scroll to the Web page form that is located at the bot- tom of the page on which the auction item itself is presented. click the Place Bid button.COMING SOON FROM CLIFFSNOTES Buying and Selling on eBay Have you ever experienced the thrill of finding an incredible bargain at a specialty store or been amazed at what people are willing to pay for things that you might toss in the garbage? If so. 2. You’ll learn how to: s s s s s Find what you’re looking for. And CliffsNotes Buying and Selling on eBay is the shortest distance to eBay proficiency. and your name appears as the high bidder. A Web page appears that lets you review your bid before you actually submit it to eBay. . Then choose View➪Reload (Netscape Navigator) or View➪Refresh (Microsoft Internet Explorer) to reload the Web page information. Click the Back button on your browser until you return to the auction listing page. then you’ll want to learn about eBay — the hottest auction site on the Internet. 3. from antique toys to classic cars Watch the auctions strategically and place bids at the right time Sell items online at the eBay site Make the items you sell attractive to prospective bidders Protect yourself from fraud Here’s an example of how the step-by-step CliffsNotes learning process simplifies placing a bid at eBay: 1. Your new high bid appears on the Web page. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. 
https://www.scribd.com/document/49230402/0764585150
CC-MAIN-2016-50
refinedweb
32,291
68.47
Named Scope Caching

When working on high-traffic Rails sites, it often becomes necessary to find ways to improve performance with caching. One place we've found this to be most convenient and easy to do is caching an ActiveRecord result set for models that change rarely or not at all. An easy example of this is a Category model. Oftentimes you have a categorization hierarchy that will never or rarely change over the life of an application. Ideally, you would fetch the results once from the database and never have to again. So how do we go about caching this?

First, let's look at our model and create a named_scope for it:

class Category < ActiveRecord::Base
  acts_as_tree

  # Top-level categories have no parent; order them by name
  named_scope :find_top_level,
              :conditions => 'categories.parent_id IS NULL',
              :order => 'categories.name'
end

Next, we need to create a method that fetches the results for our new scope and caches them in a class variable. It should also only do caching in the production environment (alternatively or additionally, we could use the ActionController.perform_caching config value), as this can cause problems in tests.

def self.top_level
  unless ('production' == RAILS_ENV) && ActionController.perform_caching
    # Outside production (or with caching off), always hit the database
    @@top_level_cache = self.find_top_level
  else
    # In production, fetch once and reuse the cached result set
    @@top_level_cache ||= self.find_top_level
  end
end

Finally, we need to create a method to invalidate our cache when records are saved or deleted. Since we know this isn't happening often (if at all), this should rarely be performed, but it is a good safeguard so we know our cache is current.

after_save :reset_cached_finder
after_destroy :reset_cached_finder

def reset_cached_finder
  @@top_level_cache = nil
end

This is something that we could easily see doing in a number of models for a number of finders. Since this involves a lot of similar code, it would be great if we could create some meta code that would allow us to define these caches with a simple one-liner, maybe with syntax like cache_scopes :cached_method_name => :scope_name. For example:

cache_scopes :top_level => :find_top_level

Well, here's the code that does that (see the sketch below). Suggestions for improvement are encouraged, which is easily done with GitHub's new gist. Enjoy and have fun caching!
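The gist embedded in the original post is not reproduced in this extract, so here is a minimal sketch of how such a cache_scopes macro could be written. It is an illustration only: the module name ScopeCaching, the @scope_cache variable, and the exact callback wiring are assumptions rather than the original gist's code, and it targets the Rails 2-era API (named_scope, RAILS_ENV) used elsewhere in the post.

module ScopeCaching
  def cache_scopes(mapping)
    mapping.each do |cached_method, scope_name|
      # Define e.g. Category.top_level as a class method backed by a cache
      (class << self; self; end).send(:define_method, cached_method) do
        @scope_cache ||= {}
        if ('production' == RAILS_ENV) && ActionController.perform_caching
          @scope_cache[cached_method] ||= send(scope_name)
        else
          @scope_cache[cached_method] = send(scope_name)
        end
      end
    end

    # Wipe the cache whenever a record is saved or destroyed
    clear_cache = proc { |record| record.class.instance_variable_set(:@scope_cache, {}) }
    after_save clear_cache
    after_destroy clear_cache
  end
end

# Make the macro available to every model
ActiveRecord::Base.extend(ScopeCaching)

# Then, inside the model:
#   cache_scopes :top_level => :find_top_level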
https://www.viget.com/articles/named-scope-caching
CC-MAIN-2016-30
refinedweb
343
55.64
Rechercher une page de manuel gnunet-search Langue: en Version: 336474 (ubuntu - 24/10/10) Section: 1 (Commandes utilisateur) NAMEgnunet-search - a command line interface to search for content on GNUnet SYNOPSISgnunet-search [OPTIONS] [+]KEYWORD [[+]KEYWORD]* gnunet-search [OPTIONS] [+]URI DESCRIPTION Search for content on GNUnet. The keywords are case-sensitive. gnunet-search can be used both for a search in the global namespace as well as for searching a private subspace. - -a LEVEL, --anonymity=LEVEL - The -a option can be used to specify additional anonymity constraints. If set to 0, GNUnet will try to download the file as fast as possible without any additional slowdown for anonymous routing. Note that you may still have some amount of anonymity depending on the current network load and the power of the adversary. Use at least 1 to force GNUnet to use anonymous routing. This option can be used to limit requests further than that. In particular, you can require GNUnet to have a certain amount of cover traffic from other peers before sending your queries. This way, you can gain very high levels of anonymity - at the expense of much more traffic and much higher latency. So set this option to values beyond 1 only if you really believe you need it. The definition of ANONYMITY-RECEIVE is the following: If the value v is 0, anonymous routing is not required. For 1, anonymous routing is required, but there is no lower bound on how much cover traffic must be present. For values > 1 and < 1000, it means that if GNUnet routes n bytes of messages from foreign peers, it may originate n/v bytes of queries in the same time-period. The time-period is twice the average delay that GNUnet deferrs forwarded queries. If the value v is >= 1000, it means that if GNUnet routes n bytes of QUERIES from at least (v % 1000) peers, it may originate n/v/1000 bytes of queries in the same time-period. The default is 1 and this should be fine for most users. Also notice that if you choose values above 1000, you may end up having no throughput at all, especially if many of your fellow GNUnet-peers do the same. - -c FILENAME, --config=FILENAME - use config file (defaults: ~/.gnunet/gnunet.conf) - -h, --help - print help page - -H HOSTNAME, --host=HOSTNAME - on which host is gnunetd running (default: localhost). You can also specify a port using the syntax HOSTNAME:PORT. The default port is 2087. - ). - -o PREFIX, --output=PREFIX - Writes the encountered (unencrypted) RBlocks or SBlocks to files with name PREFIX.XXX, where XXX is a number. This is useful to keep search results around. - -v, --version - print the version number NOTESAs most GNUnet command-line tools, gnunet-search supports passing arguments using environment variables. This can improve your privacy since otherwise the search terms will likely be visible to other local users. Setting "GNUNET_ARGS" will cause the respective string to be appended to the actual command-line and to be processed the same way as arguments given directly at the command line. You can run gnunet-search with an URI instead of a keyword. The URI can have the format for a namespace search or for a keyword search. For a namespace search, the format is gnunet://ecrs/sks/NAMESPACE/IDENTIFIER. For a keyword search, use gnunet://ecrs: gnunet-download -o "COPYING" gnunet://ecrs/chk/HASH1.HASH2.SIZE Description: The GNU Public License Mime-type: text/plain Public License" and the mime-type (see the options for gnunet-insert on how to supply meta-data by hand). 
FILES
- ~/.gnunet/gnunet.conf - GNUnet configuration file; specifies the default value for the timeout
REPORTING BUGS
Report bugs by using mantis <> or by sending electronic mail to <gnunet-developers@gnu.org>
SEE ALSO
gnunet-gtk(1), gnunet-insert(1), gnunet-download(1), gnunet-pseudonym(1), gnunet.conf(5), gnunetd(1)
http://www.linuxcertif.com/man/1/gnunet-search/
CC-MAIN-2018-30
refinedweb
682
63.39
For this project, we'll assume you have already installed Meteor on your system. If not, you should follow the instructions here: Creating your first Meteor app Using React and Meteor together React and Meteor are two different JavaScript libraries but when used together, they work really well. Since this course is about Meteor with React, let’s talk about the purpose of each in the applications. Meteor keeps track of data in our application while React takes data and produces HTML and handles user events. For example, assuming Twitter as a Meteor application, Meteor keeps track of all the tweets and profiles while the purpose of React is to display this data using appropriate HTML. To install React as a dependency in any project, you can simply type npm install –-save react if you have npm installed. In case you don’t have Node or npm installed, don’t worry, you can run the command meteor npm install –-save react inside the project directory since Meteor comes bundled with Node’s packaging system npm required for running it. Note: We suggest you to have at least the theoretical understanding of the fundamentals of React.js such as components, props, etc. This will lead to faster understanding of the frontend parts of the web application since we are going to focus primarily on the Meteor part of the application in this course. If you need to learn React, we recommend you go through the first two sections of this course: Learn React and Firebase Project Goals We will cover all the fundamentals of Meteor applications including No-SQL database model, publication, subscription, React components, auto-update React interface, templates, styling, etc. Project Overview We are going to create Employee Directory app which lists down all the employees of an organization or company. The webpage will show details of each employee in a card. For better performance, the webpage will load only a small number of employees from the server database and data will be dynamically fetched from the database if the user wishes to see more data. This is what the final product will look like: Let’s point out the challenges we need to overcome for this project. - We need a database to store our data - We need to generate the data - We should send only a small subset of data to the client(frontend) at any time - We need to find the logic to load more data when “Load more” button is clicked Let’s get started. Create Project Fire up your terminal or command prompt and navigate to the directory where you want to create the project. We are going to use Meteor command line tools to generate the project. Run the command: $ meteor create employees_diary That’s it. Meteor will automatically generate a project named employees_diary. You will also get simple instructions on how to run this project on the command line. Let’s start the app by following those instructions. Run these two command to navigate inside the project directory and run the app: $ cd employees_diary$ meteor This will start the application server with default Meteor boilerplate application which can be viewed at in your browser. You can also change the default port 3000 by adding --port flag. For eg, $ meteor --port 8080 This will start the development server at Congratulations! You have just run your Meteor app on a web server. Note: It may take a few minutes for Meteor to generate and start the project for the first time because it downloads some default packages and sets up some configuration. 
Understanding the Meteor File Structure Before proceeding, it is important to understand the flow and directory structure of Meteor application. Choose a text editor you are comfortable with (like Sublime Text 3, Atom, or VSCode) and open the employees_diary folder. You will see this directory structure and files created by Meteor in your project: employee_diary└───.meteor...└───node_modules...└───tests...└───client├───main.html├───main.js├───main.css└───server├───main.js├───.gitignore├───package.json├───package-lock.json Let’s strip down the purpose of each directory. The client folder stores the client side application code which runs only in client. The server folder also has the same functionality except that the files will be loaded only on the server side. The default entry point for client is client/main.js and for server is server/main.js To understand it more clearly, try these simple exercises: - Open the server/main.js file. Delete everything from the file and type: server/main.js console.log("Log from server/main.js"); Save the file and you see that the server restarted automatically on the terminal/cmd and you get that Log message as shown in following figure. - Similarly, open the client/main.js file. Delete everything from this file and write: client/main.js console.log("Log from client/main.js"); Save the file and Meteor will automatically detect the changes to update the webpage. Since the code is written in client folder, we will see the log message only on the web browser. Open your browser and press shortcut Ctrl/Command+Shift+I or go to Menu -> More tools -> Developer tools to open the Developer tool. Navigate to console tab to view the console output as shown in figure. To conclude, code in the client directory will be private to the client while code in the server directory will be private to the server. The file package.json holds the information about the npm packages used in the project. The file package-lock.json generally describes the tree in which the modules were generated so that this project can be installed in exactly the same way on all the machines. These files are maintained by Node and we don’t need to worry about them. Also, the folder node_modules contains all the module files installed as a dependency of the project while .meteor has all the project configuration and module version related stuffs. The tests folder is added in Meteor to write test files for the project. Meteor recommends not to add any files in client and server folder. It is recommended to write all our code in imports directory and then import them into server and client entry files manually as per the requirement. This is also called lazy loading and it is supported by default in Meteor 1.7 in contrast to eager loading in which all files are loaded automatically as the server starts. Eager loading can be initiated in the project by simply removing the entry points definition from package.json file. To see full folder structure and learn more about the concept of eager loading and lazy loading in Meteor, check out We hope you understood the Meteor workflow and directory structure. Let’s dive into actual coding now. Create our React boilerplate Before starting with the app, delete all the default code from client folder as well as server folder. We are going to write each line of code ourselves to understand their purpose clearly. 
Let’s open client/main.html, remove everything from it and write this skeleton code: client/main.html <head><title>Employees Diary</title></head><body><div class="container"></div></body> Here, container is the render-target point for our code that will be generated from React application. Also note that we don’t need to define the DOCTYPE and HTML tag. Meteor does it for us. We just need to focus on the important stuff. Next, we will move to client/main.js file. Remember, this is the entry point of the web browser i.e., the execution starts from here. Write the file as: client/main.js //Code executes only at the client side//Any JavaScript code here will automatically run for us//Importing React librariesimport { Meteor } from 'meteor/meteor';import React from 'react';import ReactDOM from 'react-dom';import './main.html'; //Importing the html file//After App loads into the browser, render my app to the DOMMeteor.startup(() => {}); Here we have just imported React library and defined an empty Meteor startup function which gets triggered on the client side(browser) only after all the client side files have loaded. Since we haven’t installed the React library yet, you may see a message on the console to install the library. Let’s install them by pressing ctrl/command + C to stop the server and then run: $ meteor npm install --save react react-dom Start the server again using the command meteor Now, let’s create our first React component named App as: client/main.js //Code executes only at the client side//Any JavaScript code here will automatically run for us//Importing React librariesimport { Meteor } from 'meteor/meteor';import React from 'react';import ReactDOM from 'react-dom';import './main.html'; //Importing the html file//Creating a React componentconst App = () =>{return(<div> Hello there! </div>);};//After App loads into the browser, render my app to the DOMMeteor.startup(() => {//Next line renders App component to container class of html pageReactDOM.render(<App/>, document.querySelector('.container'));}); Navigate to on your browser to see the message Hello there! on the screen. Also see the title has changed to Employees Diary. Hurray! It worked. We are ready with a skeleton/boilerplate which will be required in almost every project. Note that we are using ES6 syntax for creating the arrow functions. Working with database: Meteor + MongoDB We have got enough code in the frontend to start with. Let’s recall the challenges we listed down earlier for this project. The first one was that we need a database to store the data. The following diagram is of a full-stack Meteor application. As we can see in the figure above, ReactJS takes some data and produces the HTML. This data is handed to React by Meteor application for which we will specifically write the code in the application. When the question about the storage point comes up, that’s where MongoDB comes into play. MongoDB stores the data for us. MongoDB is a NoSQL database which uses collection to store the data. A collection can be assumed as a giant bucket where we can simply keep on adding more and more data. Every MongoDB database has collections and each of them can store an array of objects (also referred as Documents). To get an in-depth overview of MongoDB, you can read the following article: Also, if you know about SQL database, the following figure shows the difference in structure of MongoDB and SQL databases. Meteor comes integrated with MongoDB out of the box. 
Once the data is added to MongoDB database, it is on Meteor to fetch the data from it and send it to the React application. Also, there is a very important low level concept of Minimongo: Meteor makes a duplicate copy of user authorised data on the web browser client known as “Minimongo”. For faster access, Meteor on the browser client loads the data directly from Minimongo instead of requesting it from MongoDB from the backend. Creating a MongoDB collection We previously learned that MongoDB stores data in a data structure called Collection. We will now create a collection which can hold an array of objects where each object represents an employee. Each object will contain details of an employee such as name, number, address, etc. Firstly, we need to find the best spot to declare our collections. Since the collection needs to be accessed in the client as well as the server, we should keep it in the imports folder and then import it wherever required. We create another folder level api/ for better folder structure and code modulation. So, create imports/api/ inside the root directory and a file within it with the name employees.js: imports/api/employees.js //Declare our collectionimport {Mongo} from 'meteor/mongo';export const Employees = new Mongo.Collection('employees'); That’s it. Just these two lines are needed to create a collection. Here, we have created a collection ‘employees’ which can be accessed by the variable Employees. Make sure to use the keyword export in the beginning so that the Employees collection can be imported and accessed by other files in the codebase for making queries on the database. Generating Data with faker Now that we have created the employee collection, let’s insert some data in it. We need to have some data to pre-populate our app with, so that we can display it in our app. Now, since we need thousands of lines of data, we should not do it manually. We need to somehow automate the process of “fake data” creation. You will also learn how to include a library in a Meteor project while learning this. There is a library called faker that can help us out here. As you already know, Npmjs.com is a listing of all the modules that we can download and use in our Node.js projects. Navigate to on your browser and search for the library faker in the search box. You will find this one () on the top. Visit this page and have a look at the documentation and the list of things we can generate using faker. Before using it, we need to include it in our project. To do so, open another terminal/command prompt or stop the existing server if you wish to use the existing one and type: $ meteor npm install --save faker You can omit the keyword Meteor from the command if you have node installed because npm already comes installed with node. We are going to generate data on the server side because generation of data has nothing to do with client/frontend for this project. Also, it doesn’t make sense to generate around 5000 records and save it all in the database from the browser. We will just read and display this data on the browser. Open the server/main.js file which should be empty till now and copy the following code: server/main.js //only executed on the serverimport { Meteor } from 'meteor/meteor';import { Employees } from '../imports/api/employees'Meteor.startup(() => {//Great place to generate data}); Same as on client side, Meteor provides a startup function for the server side too which executes only after all the files on the server side are loaded. 
We will generate data inside this startup function so that it should be generated after the database files and other dependent packages are loaded. It will avoid any conflicts or errors due to improper or incomplete loading of files. Also, since the server/main.js file runs every time the server starts, it will generate data every time. To avoid this, we will provide a condition before data generation that data should be generated only if the count of data in database is 0. After writing the condition, the file looks as follows: server/main.js //only executed on the serverimport { Meteor } from 'meteor/meteor';import { Employees } from '../imports/api/employees'Meteor.startup(() => {//check if data already exists//count the number of records in databaseconst numberRecords = Employees.find({}).count();if(!numberRecords){//Great place to generate data}}); Here, find() returns all the data in the collection and count() counts it. We will generate the data only if number of records is 0. Now since we need to generate thousands of records of employee data, we need some way to loop through specified number of times. We will install a popular package lodash for this purpose. We use lodash instead of for-loop or other approaches to keep our code cleaner. Similar to faker, you can install it by typing this command in the terminal/command prompt: $ meteor npm install --save lodash Once lodash is installed, add the code to generate 5000 data = helpers.createCard().name;const email = helpers.createCard().email;const phone = helpers.createCard().phone;Employees.insert({name: name,email: email,phone: phone,avatar: image.avatar()});});}}); Let’s understand what we have done. The ‘_’ variable provided by lodash is used to generate 5000 data using its function times() which accepts parameters as number of times and the function to be executed given number of times. Inside the function, we generated name, email and phone using faker module and then inserted them into database along with an image url to be used as avatar. Note how we passed a JavaScript object inside the insert function which gets added in the database. Since we are using JavaScript ES6 syntax, why not make use of it to simplify the code. After refactoring, the same code looks, email, phone} = helpers.createCard();Employees.insert({name, email, phone,avatar: image.avatar()}); //name, email,... is equivalent to name:name, ...});}}); You can verify if the data is created or not by writing a console message just below the constant numberRecords as: console.log(numberRecords); You should see the output as 5000 reflecting in your terminal or command prompt. Congratulations! You have now successfully generated a lot of fake data using faker that we will next display on our Employee interface Boilerplate for Employee Interface Now that we have generated the data, let’s move to the React side of things. Remember, the purpose of our React application is solely to display information on the screen. Let’s create a React component to display the list of employees. Following the standard convention, create a new folder named ui/ inside imports and then components/ inside ui directory. We will keep all the React components here. Create a new file as employees_list.js and add the following boilerplate. 
imports/ui/components/employees_list.js

import React from 'react';

const EmployeeList = () => {
  return(
    <div>
      <div className="employee-list">Employee List</div>
    </div>
  );
};

export default EmployeeList;

Now import this component into client/main.js and update the file with React code as below:

client/main.js

//Code executes only at the client side
//Any JavaScript code here will automatically run for us
import { Meteor } from 'meteor/meteor';

//Importing React libraries
import React from 'react';
import ReactDOM from 'react-dom';
import EmployeeList from '../imports/ui/components/employees_list';

//Importing the html file
import './main.html';

//Creating a React component
const App = () => {
  return(
    <div>
      <EmployeeList />
    </div>
  );
};

//After App loads into the browser, render my app to the DOM
Meteor.startup(() => {
  //Next line says the App component is to be rendered at the container class in main.html
  ReactDOM.render(<App/>, document.querySelector('.container'));
});

So, we have imported and used the component inside the div tag. Go to http://localhost:3000 to see the text Employee List on the screen. Now, the next step is to figure out how to fetch data from the database collection and display it in the React component. But before that, we will learn a little more about two Meteor concepts.
https://www.commonlounge.com/discussion/6649877a72bb4405b927c7d7cd5f93c8
CC-MAIN-2021-43
refinedweb
3,113
55.34
Troubleshooting .NET Framework Targeting Errors Visual Studio lets you distribute a lightweight .NET Framework runtime, known as the .NET Framework 4 Client Profile, which is a runtime that includes just a subset of the binaries that are contained in .NET Framework 4. By using .NET Framework 4 Client Profile, you can distribute a smaller .NET Framework library to the users of your application so that they can run the application even if the full .NET Framework 4 is not installed on their systems. When your application targets a particular profile, you might encounter errors if you try to reference an assembly that is not part of that profile. Common errors include the following: The type or namespace name "name" does not exist in the namespace "namespace". (Are you missing an assembly reference?) Type "typename" is not defined. Could not resolve assembly "assembly". The assembly is not listed as part of the "profile" Profile. These errors can result from different actions. This topic includes descriptions of what might have caused the error and how to resolve the issue. For more information about the .NET Framework 4 Client Profile, see .NET Framework Client Profile and How to: Target a Specific .NET Framework Version or Profile. You Have Referenced an Assembly That Is Not Included in the Client Profile If your application tries to reference functionality that is contained in an assembly or dependent assembly that is not included in the .NET Framework 4 Client Profile, run-time error messages may occur. The exact message depends on where the referenced functionality is located. To eliminate such errors, you can either remove the incorrect assembly reference from the project, or set the project to target the full .NET Framework version 4 instead of the .NET Framework 4 Client Profile subset library. You Have Referenced a Project or Assembly That Targets a Different Version of the .NET Framework You can create applications that reference projects or assemblies that target different versions of the .NET Framework. For example, if you create an application that targets the .NET Framework 4 Client Profile, that project can reference an assembly that targets .NET Framework version 2.0. However, if you create a project that targets an earlier version of the .NET Framework, you cannot set a reference in that project to a project or assembly that targets the .NET Framework 4 Client Profile or the .NET Framework 4. To eliminate the error, make sure that the profile targeted by your application is compatible with the profile targeted by the projects or assemblies referenced by your application. You Have Re-Targeted a Project to a Different Version of the .NET Framework If you change the target version of the .NET Framework for your application, Visual Studio changes some of the references. However, you must also make some manual updates. For example, if you create an application that has resources or settings that rely on the .NET Framework 4 Client Profile and then change the application to target .NET Framework 3.5 SP1, you might see one of the previously mentioned errors. As a workaround for application settings, in Solution Explorer, click Show All Files, and then edit the app.config file in the Visual Studio XML Editor. Change the version in the settings to match the version of the .NET Framework. For example, you can change the version setting from 4.0.0.0 to 2.0.0.0. 
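To illustrate the kind of edit involved, the version numbers appear in the type attributes of the settings section declarations near the top of app.config. The following is only a sketch of what such an entry might look like after the change; the userSettings group shown is the layout Visual Studio typically generates, and the MyApp.Properties.Settings name is a placeholder for your own application's namespace:

<configSections>
  <sectionGroup name="userSettings"
                type="System.Configuration.UserSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
    <!-- Version= read 4.0.0.0 before retargeting to the earlier framework -->
    <section name="MyApp.Properties.Settings"
             type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
             allowExeDefinition="MachineToLocalUser" requirePermission="false" />
  </sectionGroup>
</configSections>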
Similarly for an application that has added resources, in Solution Explorer, click Show All Files, expand My Project (Visual Basic) or Properties (C#), and then edit the Resources.resx file in the Visual Studio XML Editor. Change the version setting from 4.0.0.0 to 2.0.0.0. If your application has resources such as icons or bitmaps or has settings such as data connection strings, you can also remedy the problem by removing all the items on the Settings page in the Project Designer and then re-adding the required settings. You Have Re-Targeted a Project to a Different Version of the .NET Framework and References Do Not Resolve In some cases when you retarget a project to a different version of the .NET Framework, your references may not resolve properly, A common cause for this is explicit fully-qualified references to assemblies. You can fix this by removing the references that do not resolve, and then adding them back to the project. Alternatively, you can edit the project file to remove references of the form: <Reference Include="System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL" /> and replace them with the simple form: <Reference Include="System.ServiceModel" /> Note After you close and reopen your project, you should also rebuild it to ensure that all references are correctly resolved. See Also Tasks How to: Target a Specific .NET Framework Version or Profile Concepts .NET Framework Client Profile Other Resources Targeting a Specific .NET Framework Version or Profile
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/cc668079(v=vs.100)?redirectedfrom=MSDN
CC-MAIN-2019-39
refinedweb
824
58.89
What happens when you do this? Is an exception thrown? Does it just hang waiting w/ nothing happening? Also, are you doing this from the console or from modules which just get executed stand alone (e.g. ipy foo.py)?

From: users-bounces at lists.ironpython.com [mailto:users-bounces at lists.ironpython.com] On Behalf Of Vadim Khaskel
Sent: Friday, January 18, 2008 3:19 PM
To: users at lists.ironpython.com
Subject: [IronPython] invoking new form

Hi all,

I'm trying to invoke a form from a python module (mod1), which is not in the same namespace as the Form1 class, like this:

#in mod1
def newWinOpen():
    Application.Run(Form2())
#I did import Form2 into this module#

Doesn't work... In my next attempt I put mod1 (where I'm calling Form2) into the same namespace as Form1 and Form2. In this case the namespace class in Form1 doesn't see mod1... What is the right way to do that....???

Thank you,
Vadim
https://mail.python.org/pipermail/ironpython-users/2008-January/006250.html
CC-MAIN-2014-15
refinedweb
190
75.81
We are going over cstring, pointers, and references. When writing my programs, I usually start off really basic then work my way up to completion. So what I have here is basically the skeleton and my logic of what the final product should somewhat look like. Again, I need to count the # of words which begin w/ an uppercase letter only. My main issue is trying to figure out when a word begins/ends.

Code:
#include <iostream>
#include <string>
#include <cctype>   // for isupper/islower
using namespace std;

int main()
{
    const int max = 50;
    char message [max], ch;

    cout << "enter string \n";
    cin.getline(message, max);
    cout << "Msg is: \n" << message << endl;

    for (int i=0; message[i] !='\0'; i++)
    {
        ch = message [i];
        if (isupper (ch))
            cout << "upper, the word is "<< ch << endl;
        else if (islower (ch))
            cout << "lower, the word is " << ch << endl;
    }
    return 0;
}
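Since the stated problem is knowing where a word begins, one common way to approach it is to track whether the previous character was part of a word: a word starts at any alphabetic character whose predecessor is not alphabetic, and only those starting characters need the uppercase test. The sketch below illustrates that idea; it is not from the original thread, and the variable names (count, inWord) are just illustrative.

Code:
#include <iostream>
#include <cctype>
using namespace std;

int main()
{
    const int max = 50;
    char message[max];

    cout << "enter string \n";
    cin.getline(message, max);

    int count = 0;
    bool inWord = false;            // are we currently inside a word?

    for (int i = 0; message[i] != '\0'; i++)
    {
        unsigned char ch = message[i];
        if (isalpha(ch))
        {
            // A word begins here only if the previous character was not a letter
            if (!inWord && isupper(ch))
                count++;
            inWord = true;
        }
        else
        {
            inWord = false;         // space/punctuation ends the current word
        }
    }

    cout << "words starting with an uppercase letter: " << count << endl;
    return 0;
}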
http://cboard.cprogramming.com/cplusplus-programming/77826-count-number-uppercase-words-array-printable-thread.html
CC-MAIN-2016-18
refinedweb
153
72.05
Introduction Code Contracts is a new feature in the forth-coming .NET 4.0 release. Currently, Code Contracts is being nurtured as an MSDN DevLabs project, which means it has a life on its own outside the official track of the .NET Framework. Being off the beaten path also means that rapid updates are to be expected, just as has happened with the recent web application frameworks like ASP.NET MVC. Code Contracts is planned to become an official part of the framework version 4.0 once it ships. The main idea of Code Contracts is to let developers have an easier way to define a set of rules for your classes. For example, you might have rules to which the properties and fields of a given class should always conform to. These rules can then be checked both statically at compile-time and at runtime when the application executes. Code Contracts combines a .NET class library with a Visual Studio IDE integration package, and is available from all .NET compatible programming languages such as C# or Visual Basic Figure 1. Setting up Code Contracts on your development PC is easy. The age-old problem of maintaining proper internal state of objects has many incarnations: you could raise exceptions, use assertions, or simply fail methods with an error code if the call in the current context would not be valid. Similar things happen when you set property values: a class could want to make sure a percentage value sits between 0 and 100, or a sales price above the cost, for instance. Although other possibilities certainly exist, the basic implementation of state management is usually this: first, a method checks the validity of parameters and object's current state, and then proceeds to do the real work. After this, the method checks to see if all worked correctly, and as the final step, updates the state of itself accordingly. These checks are often called pre and post conditions, respectively. When you take a look at the implementation of complex and critical classes, the overhead of validating parameters and maintaining state compared to the real work done by the method can be large. It is not uncommon to see that parameters validation and state management take twice as many code lines as the real work done by a method. This article will talk about Code Contracts and provide a glimpse of how they can be used in real-world applications. The basics of Code Contracts have already been introduced elsewhere on the Internet.com web sites see (Marius Bancila's article on CodeGuru.com from June, 2009), and this article will only offer a quick recap on the basics, and instead focus in more detail to the workings of Code Contracts. If you want to try Code Contracts yourself, the easiest way to get started is to download Visual Studio 2010 Beta 1. However, you can also use Code Contracts with Visual Studio 2008 if you first download a separate installer from MSDN DevLabs. You can find links to the download pages at the end of this article. When you install the downloaded package, the installation files will be placed in C:\Program Files\Microsoft\Contracts. A quick recap: conditions and invariants To get you up to speed with Code Contracts quickly, the following example shows how Code Contracts can be used from a C# application. Remember that Code Contracts is a technology under construction, and thus changes to the syntax might become necessary as the product evolves. Nonetheless, this is how you could define a Code Contract today: using System.Diagnostics.Contracts; ... 
public class ContractTest
{
    private int percentage;

    public int Percentage
    {
        get { return percentage; }
        set
        {
            Contract.Requires((value >= 0) && (value <= 100));
            percentage = value;
        }
    }
}

In the above code listing, the static Requires method of the Contract class (part of the System.Diagnostics.Contracts namespace) is used to define a pre-condition specifying that when the Percentage property is set, the value must be between 0 and 100 inclusive. In this form, Code Contracts do not differ much from regular assert statements or from throwing, for instance, ArgumentOutOfRangeException objects. It is said that this enforcement creates a contract for the class, and if the condition is not met, the contract has been violated. What makes Code Contracts special is that you can enable static, compile-time checks (Figure 2) to learn about contract violations already at compile time. These checks are by default made asynchronously when you compile your application, and error messages are conveniently shown in the Visual Studio Error List window (Figure 3). Both the static checks and runtime behavior can be configured in the properties window for the project. For instance, you can enable or disable runtime checks for Code Contracts.

Figure 2. The Code Contracts page in project options contains many possibilities to configure the technology.

Figure 3. Contract violations or verification problems are displayed in the Error List window.
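The recap above covers pre-conditions; for completeness, here is a small illustrative class of my own (not from the article) showing a post-condition and an object invariant with the same API:

using System.Diagnostics.Contracts;

public class Account
{
    private int balance;

    public void Deposit(int amount)
    {
        Contract.Requires(amount > 0);                                      // pre-condition
        Contract.Ensures(balance == Contract.OldValue(balance) + amount);   // post-condition
        balance += amount;
    }

    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        // Must hold whenever the object is observable by callers.
        Contract.Invariant(balance >= 0);
    }
}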
http://mobile.developer.com/net/article.php/10916_3836626_2/Understanding-and-Benefiting-from-Code-Contracts-in-NET-40.htm
CC-MAIN-2017-34
refinedweb
820
51.78
User Feedback

When a user experiences an error, Sentry provides the ability to collect additional feedback. You can collect feedback according to the method supported by the SDK.

Use the .NET SDK

User Feedback for ASP.NET or ASP.NET Core supply integrations specific to supporting those SDKs. You can create a form to collect the user input in your preferred framework, and use the SDK's API to send the information to Sentry. You can also use the widget, as described below. If you'd prefer an alternative to the widget or do not have a JavaScript frontend, you can use this API or a Web API.

using Sentry;

var eventId = SentrySdk.CaptureMessage("An event that will receive user feedback.");
SentrySdk.CaptureUserFeedback(eventId, "user@example.com", "It broke.", "The User");

Embeddable JavaScript Widget

Our embeddable JavaScript widget is useful when you may typically render a plain error page (the classic 500.html) on your website. To collect feedback, the widget requests and collects the user's name, email address, and a description of what occurred. When feedback is provided, Sentry pairs the feedback with the original event, giving you additional insights into issues. The screenshot below provides an example of the User Feedback widget, though yours may differ depending on your customization:

Integration

To integrate the widget, you'll need to be running version 2.1 or newer of our JavaScript SDK. The widget authenticates with your public DSN, then passes in the Event ID that was generated on your backend. Make sure you've got the JavaScript SDK available:

<script src="" integrity="sha384-nsIkfmMh0uiqg+AwegHcT1SMiPNWnhZmjFDwTshLTxur6ZPNaGT8vwT+vHwI5Jag" crossorigin="anonymous" ></script>

You'll then need to call showReportDialog and pass in the generated event ID. This event ID is returned from all calls to CaptureEvent and CaptureException. There is also a function called LastEventId that returns the ID of the most recently sent event.

User Feedback API

If you'd prefer an alternative to the widget or do not have a JavaScript frontend, you can use the User Feedback API.

Our documentation is open source and available on GitHub. Your contributions are welcome, whether fixing a typo (drat!) or suggesting an update ("yeah, this would be better").
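On the error page itself, the call described above might look roughly like this (a sketch; the DSN is a placeholder and serverEventId stands in for an event ID produced by the backend, for example the value of SentrySdk.LastEventId rendered into the page):

<script>
  // Placeholder DSN; the real one comes from your Sentry project settings.
  Sentry.init({ dsn: "https://examplePublicKey@o0.ingest.sentry.io/0" });

  // "serverEventId" is a stand-in for the event ID your backend captured.
  Sentry.showReportDialog({ eventId: serverEventId });
</script>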
https://docs.sentry.io/platforms/unity/enriching-events/user-feedback/
CC-MAIN-2021-31
refinedweb
367
57.47
Welcome to Part 8! We are really starting to get close now! In part 8 we will discuss win conditions, loading new scenes and GUI. Please note we are discussing the built in GUI as of version 4.3. I totally recommend you use NGUI as it is much more intuitive and better matches the process flow for the new GUI system that will be coming in Unity 5! We will also answer the challenge from part 7. To view the previous parts, please see the links below. Part 1: Getting Started Part 2: Animations Part 5: Modular Scripting Part 6: Timed Prefab Instantiation Part 7: Health Bars Part 8: Finishing Touches Part 9: Publishing Answer to Part 7’s Challenge! Drop the spider prefab on the scene. Change the tag to “Enemy”. If enemy is not an available tag. Add it. Drop spider back onto the prefab and delete the one from the scene. We need to do this anyways… Create a new script. Name this script “HeroScript”. From Hero Script, add the following variables. //amount of damage the hero does. public int damage = 10; //reference to the character script private CharacterScript character; Add the following function /// <summary> /// Called when a collision happens between the character /// and something else. /// </summary> /// <param name="other">What you collided with.</param> private void OnCollisionEnter2D(Collision2D other) { if (other.gameObject.tag == "Enemy") { other.gameObject.GetComponent<CharacterScript>().AdjustHealth(-1 * this.damage); } } Destroy the Spider Webs, Win The Game and Title Scene Navigate to one of your spider webs, and add a component “Circle Collider 2D”. Select the “Is Trigger” checkbox. Open your timed spawner script Add some code to pick up the trigger, check for the player and then destroy the spider web. Note that we are making a call to a function to the Hero Script that does not exist yet. /// <summary> /// Called when something enters a trigger collider /// </summary> /// <param name="other">what you collided with.</param> private void OnTriggerEnter2D(Collider2D other) { //Check if other is the player if (other.gameObject.tag == "Player") { foreach (GameObject o in spawnedObjects) { Destroy(o); } other.GetComponent<HeroScript>().AdjustDestroyedWebs(1); //destroy the spawner. Destroy(this.gameObject); } } Create a win condition on the player! So above we modified the hooks into when the triggers attached to the Spider Web game object. This event will fire whenever the player enters ANY trigger attached to the game object in which this script is attached to. You can get more complex by building a more interesting object hierarchy with different triggers and hooking into those in particular. Anyways, what we want to do now is create the function: AdjustdestroyedWebs, as well as create all the necessary code for a win condition on the character as well as loading a few levels. If you haven’t already, create a new script, “HeroScript”. Add the following variables… //number of webs to destroy public int websToDestroy = 5; //number of webs destroyed. 
private int destroyedSpiderWebs = 0;
private Action winFunction = () => { };

Add the following functions…

/// <summary>
/// Adjust the number of webs destroyed
/// </summary>
/// <param name="amount">number of webs destroyed</param>
public void AdjustDestroyedWebs(int amount)
{
    //increment
    this.destroyedSpiderWebs += amount;
    //check for win condition
    if (this.destroyedSpiderWebs >= websToDestroy)
    {
        //save number of destroyed webs
        PlayerPrefs.SetInt("DestroyedWebs", destroyedSpiderWebs);
        //load win level
        this.winFunction();
    }
}

public void SetFinishLevelFunction(Action function)
{
    this.winFunction = function;
}

Notice that we have yet to define the win function. For now, set the win function within the Awake function. When you decide to add new levels to the game, you can simply set this function from within your level management script to load the next level. Note the string "GameScene1": if you have named your level's scene anything different, this is where you type that text.

private void Awake()
{
    this.character = this.GetComponent<CharacterScript>();
    character.SetDeathFunction(() => { Application.LoadLevel(Application.loadedLevelName); });
    this.SetFinishLevelFunction(() => { Application.LoadLevel("GameScene1"); });
}

Creating the Win Scene

Create a new scene, save it to the Scenes folder and name it "WinScene". Take a screen shot of your game, save it, and import it into the images folder. Change the type to sprite and drop it onto the scene. Adjust the camera or the image's scale such that the entire view is taken up by the image. Create a new script in the scripts folder and name the script "WinScene". Load the number of webs destroyed and create a GUI that displays "YOU WIN", the number of webs destroyed, and Play Again, where Play Again is a button that loads the game scene. Below is the code in a copy/paste format.

using UnityEngine;
using System.Collections;

public class WinScene : MonoBehaviour
{
    private int destroyedWebs = 0;

    // Use this for initialization
    private void Start ()
    {
        destroyedWebs = PlayerPrefs.GetInt("DestroyedWebs");
    }

    private void OnGUI()
    {
        // starting rectangle for the labels (position/size values here are placeholders)
        Rect drawRect = new Rect(0.0f, 0.0f, Screen.width, 50.0f);
        GUI.Label(drawRect, "You WIN!");
        drawRect.Set(drawRect.xMin, drawRect.yMin + 50.0f, drawRect.width, drawRect.height);
        GUI.Label(drawRect, destroyedWebs + " webs destroyed!");
        drawRect.Set(drawRect.xMin, drawRect.yMin + 50.0f, Screen.width * .25f, Screen.height * .25f);
        if (GUI.Button(drawRect, "Play Again?"))
        {
            Application.LoadLevel("GameScene");
        }
    }
}

Drag the win scene script onto the screen shot backdrop.

Creating the Title Scene!

Create another scene called "TitleScene". Drop the image of the screen shot on the title scene and adjust the camera view such that it takes up the entirety. Create a new script called "TitleScript" and write some code like below… Copy/paste version here:

using UnityEngine;
using System.Collections;

public class TitleScene : MonoBehaviour
{
    private void OnGUI()
    {
        // starting rectangle for the labels (position/size values here are placeholders)
        Rect drawRect = new Rect(0.0f, 0.0f, Screen.width, 50.0f);
        GUI.Label(drawRect, "Welcome to Skeleton Dude!");
        drawRect.Set(drawRect.xMin, drawRect.yMin + 50.0f, drawRect.width, drawRect.height);
        GUI.Label(drawRect, "use the arrow keys to move. Avoid spiders, smash their webs.");
        drawRect.Set(drawRect.xMin, drawRect.yMin + 50.0f, Screen.width * .25f, Screen.height * .25f);
        if (GUI.Button(drawRect, "Play"))
        {
            Application.LoadLevel("GameScene");
        }
    }
}

Drag the titleScene script onto the sprite image.

Finally – Set up the Build settings.

Open each scene in your scenes folder. This should be: WinScene, TitleScene, and GameScene1. While each scene is opened, go to File -> Build Settings. Select: Add Current.
Notice how TitleScene is at the top. Make sure that you click drag and drop the title scene to the top of this list. This will ensure when you start the game, it starts with the title scene. Wow, we have built an end to end game with a GUI!!!! Next article is the last article. How the heck do we publish? If you are having trouble with any parts of these articles, or I missed anything, I've written most of them late at night, like now. So please post comments with questions or use the contact section of the website to reach me directly and I will either respond via email or clarify by updating the articles or answering via the comments section.
http://dacrook.com/build-a-2d-top-down-game-zero-to-published-part-8/
CC-MAIN-2017-30
refinedweb
1,136
58.69
? Field: A field is used to store information on the data for each object. For example, A class Student can contain fields like Name, Class, or Age for the student. ? Constructor: A constructor is used to initialize an object of the class with the given data. A constructor must have same name as the name of the class. For example: public Student (String Name, Integer Class) constructor will take a name and class as input from the user and then initialize a newly created Student object with those parameters. ? Overriding: It is referred to the phenomenon when a child class contains a method with same format (method name and parameters) as that of one of the methods of parent class.In overriding, when a method is called, firstly the method of class object which calls the method is invoked. If it is not found, then its parent class object’s method is called. ? To specify the parent class we use the keyword “extends” while declaring the new class followed by the name of the parent class. For example, class A extends B We can retrieve the parent class using getSuperclass() method. For example, Class p=s.getSuperclass(); Here s is current class object and its superclass is stored in p. ? Yes, even if we do not specify a parent class there is still a parent class. “Object” class is by default the parent class of all the classes that we create. ? We can overload a constructor just by defining another constructor in the same class with different list of parameters. For example: public class Student { public Student() {} public Student( String Name) {this.name = Name;}} ? Following are the differences between override and overload: ? We can declare an array using the following syntax: type[] name or type name[] Example, char[] array1 = new char[]; We can initialize an array either at the time of declaration or later in the code. ? We can access an array element using array’s name followed by the index number of the element enclosed in square brackets. The syntax is arrayname[index value]. The maximum value of index is length of array-1. For example, array1[3] ? We can initialize the array elements at the time of declaration as follows: Double [3] array1 = {1.0, 2.5, 10.0}; We can also initialize individual array elements in the code: double [] Array2 = new double[]; Array2 [0] = 1.0; Array2 [1] = 2.5; ? Javadoc comments are a special class of comments which allows us to embed information about our program into the program itself. It begins with sequence /** and end with */. Javadoc is a utility from Sun that allows us to create HTML (Hyper-Text Markup Language) documentation from the javadoc comments in our source code. Example some of the tags that the javadoc utility recognizes are @author, @deprecated, @exception etc. ? If a class field is declared to be private, it cannot be accessed by objects of a different class. A private field can only be accessed by the objects of the same class within which it is declared by its method. ? If a class field is declared to be public, it can be accessed by objects of different classes (in a different file) as the scope of a public field is global and it can be accessed from anywhere in the program. ? An accessor method is a method of the class which is used to access its fields, that is, to return the values of the required fields of the object of that class. Since fields are declared to be private and can not be accessed by objects of another class, therefore accessor methods are required. 
They also make sure that the object accessing the field values are not capable of changing them. Example, String getName() {return this.name ; } ? A modifier method is a method of the class which is used to modify its fields i.e. to change the values of the required fields of the object of that class. Since fields are declared to be private and can’t be modified by objects of another class, therefore modifier methods are required. They also make sure that the object accessing the field values are not capable of changing any other field values. Example, void setName(String Name) {this.name = Name ; } ? Encapsulation is the mechanism that binds together code and the data it manipulates, and keeps both safe from outside interference and misuse. ? If we use super.method() in a class then the JVM (Java Virtual Machine) while executing the code will directly start looking for that method in the parent class of the class that created the current object. ? super() will generate a call to the no-argument constructor of the parent class of the class that is creating the object. This is done to allow initialization of the inherited fields.
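A small sketch pulling several of these answers together (the class and member names are mine, purely for illustration): a private field, an overloaded constructor, an accessor, a modifier, a subclass constructor calling super(), and an overridden method.

// Illustrative only; not part of the textbook solutions above.
class Person {
    private String name;                   // private field: hidden from other classes

    public Person() { this("unknown"); }   // overloaded no-argument constructor
    public Person(String name) { this.name = name; }

    public String getName() { return this.name; }          // accessor method
    public void setName(String name) { this.name = name; } // modifier method
}

class Student extends Person {
    private int year;

    public Student(String name, int year) {
        super(name);       // initialize the inherited field first
        this.year = year;
    }

    @Override
    public String getName() {              // overriding the parent's method
        return "Student: " + super.getName();
    }
}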
http://www.chegg.com/homework-help/introduction-to-computing-and-programming-in-java-a-multimedia-approach-1st-edition-chapter-11-solutions-9780131496989
CC-MAIN-2014-42
refinedweb
803
62.68
An understanding of Vue’s single-file components (SFCs) and Node Package Manager (NPM) will be helpful for this article. A framework’s command line interface, or CLI, is the preferred method to scaffold a project. It provides a starting point of files, folders, and configuration. This scaffolding also provides a development and build process. A development process provides a way to see updates occurring as you edit your project. The build process creates the final version of files to be used in production. Installing and running Vue.js (“Vue”) can be done with a script tag that points to the Vue content delivery network (CDN). No build or development process is necessary. But, if you use Vue single-file components (SFCs), you need to convert those files into something the browser can understand. The files need to be converted to Hyper-Text Markup Language (HTML), Cascading Style Sheets (CSS), and JavaScript (JS). In this case, a development and build process must be used. Instead of relying on the Vue CLI to scaffold our project and provide us with a development and build process, we will build a project from scratch. We will create our own development and build process using Webpack. What is Webpack? Webpack is a module bundler. It merges code from multiple files into one. Before Webpack, the user included a script tag for each JavaScript file. Although browsers are slowly supporting ES6 modules, Webpack continues to be the preferred way to build modular code. Besides being a module bundler, Webpack can also transform code. For example, Webpack can take modern JavaScript (ECMAScript 6+) and convert it into ECMAScript 5. While Webpack bundles the code itself, it transforms the code with loaders and plugins. Think of loaders and plugins as add-ons for Webpack. Webpack and Vue Single-file components allow us to build an entire component (structure, style, and function) in one file. And, most code editors provide syntax highlighting and linting for these SFCs. Notice the file ends with .vue. The browser doesn’t know what to do with that extension. Webpack, through the use of loaders and plugins, transforms this file into the HTML, CSS, and JS the browser can consume. The Project: Build a Hello World Vue Application Using Single-File Components. Step 1: Create the project structure The most basic Vue project will include an HTML, JavaScript, and a Vue file (the file ending in .vue). We will place these files in a folder called src. The source folder will help us separate the code we are writing from the code Webpack will eventually build. Since we will be using Webpack, we need a Webpack configuration file. Additionally, we will use a compiler called Babel. Babel allows us to write ES6 code which it then compiles into ES5. Babel is one of those “add-on features” for Webpack. Babel also needs a configuration file. Finally, since we are using NPM, we will also have a node_modules folder and a package.json file. Those will be created automatically when we initialize our project as an NPM project and begin installing our dependencies. To get started, create a folder called hello-world. From the command line, change to that directory and run npm init. Follow the on-screen prompts to create the project. Then, create the rest of the folders (except for node_modules) as described above. 
Your project structure should look like this: Step 2: Install the dependencies Here is a quick rundown of the dependencies we are using: vue: The JavaScript framework vue-loader and vue-template-compiler: Used to convert our Vue files into JavaScript. webpack: The tool that will allow us to pass our code through some transformations and bundle it into one file. webpack-cli: Needed to run the Webpack commands. webpack-dev-server: Although not needed for our small project (since we won’t be making any HTTP requests), we will still “serve” our project from a development server. babel-loader: Transform our ES6 code into ES5. (It needs help from the next two dependencies.) @babel/core and @babel/preset-env: Babel by itself doesn’t do anything to your code. These two “add-ons” will allow us to transform our ES6 code into ES5 code. css-loader: Takes the CSS we write in our .vue files or any CSS we might import into any of our JavaScript files and resolve the path to those files. In other words, figure out where the CSS is. This is another loader that by itself won’t do much. We need the next loader to actually do something with the CSS. vue-style-loader: Take the CSS we got from our css-loader and inject it into our HTML file. This will create and inject a style tag in the head of our HTML document. html-webpack-plugin: Take our index.html and inject our bundled JavaScript file in the head. Then, copy this file into the dist folder. rimraf: Allows us, from the command line, to delete files. This will come in handy when we build our project multiple times. We will use this to delete any old builds. Let’s install these dependencies now. From the command line, run: npm install vue vue-loader vue-template-compiler webpack webpack-cli webpack-dev-server babel-loader @babel/core @babel/preset-env css-loader vue-style-loader html-webpack-plugin rimraf -D Note: The “-D” at the end marks each dependency as a development dependency in our package.json. We are bundling all dependencies in one file, so, for our small project, we have no production dependencies. Step 3: Create the files (Except for our Webpack configuration file). <template> <div id="app"> {{ message }} </div> </template> <script> export default { data() { return { message: 'Hello World', }; }, }; </script> <style> #app { font-size: 18px; font-family: 'Roboto', sans-serif; color: blue; } </style> <html> <head> <title>Vue Hello World</title> </head> <body> <div id="app"></div> </body> </html> import Vue from 'vue'; import App from './App.vue'; new Vue({ el: '#app', render: h => h(App), }); module.exports = { presets: ['@babel/preset-env'], } Up to this point, nothing should look too foreign. I’ve kept each file very basic. I’ve only added minimal CSS and JS to see our workflow in action. Step 4: Instructing Webpack what to do All the configuration Webpack needs access to is now present. We need to do two final things: Tell Webpack what to do and run Webpack. Below is the Webpack configuration file ( webpack.config.js). Create this file in the projects root directory. Line-by-line we’ll discuss what is occurring. 
const HtmlWebpackPlugin = require('html-webpack-plugin'); const VueLoaderPlugin = require('vue-loader/lib/plugin'); module.exports = { entry: './src/main.js', module: { rules: [ { test: /\.js$/, use: 'babel-loader' }, { test: /\.vue$/, use: 'vue-loader' }, { test: /\.css$/, use: ['vue-style-loader', 'css-loader']}, ] }, plugins: [ new HtmlWebpackPlugin({ template: './src/index.html', }), new VueLoaderPlugin(), ] }; Lines 1 and 2: We are importing the two plugins we use below. Notice, our loaders don’t normally need to be imported, just our plugins. And in our case, the vue-loader (which we use in line 9) also needs a plugin to work (however, Babel, for example, does not). Line 4: We export our configuration as an object. This gives us access to it when we run the Webpack commands. Line 5: This is our entry module. Webpack needs a place to start. It looks in our main.js file and then starts to comb through our code from that point. Line 6 and 7: This is the module object. Here, we primarily pass in an array of rules. Each rule tells Webpack how to handle certain files. So, while Webpack uses the entry point of main.js to start combing through our code, it uses the rules to transform our code. Line 8 (rule): This rule instructs Webpack to use the babel-loader on any files which end with .js. Remember, Babel will transform ES6+ to ES5. Line 9 (rule): This rule instructs Webpack to use vue-loader (and don’t forget the associated plugin on line 17) to transform our .vue files into JavaScript. Line 10 (rule): Sometimes we want to pass a file through two loaders.. Lines 13: Create a plugins array. Here we will add the two plugins we need. Line: 14 -16 (plugin): The HtmlWebpackPlugin takes the location of our index.html file and adds our bundled JavaScript file to it via a script tag. This plugin will also copy the HTML file to our distribution folder when we build our project. Line 17 (plugin): The VueLoaderPlugin works with our vue-loader to parse our .vue files. Line 18: Close out the plugins array. Line 19: Close out the Webpack object that we are exporting. Step 5: Setting up our package.json file so we can run Webpack Our configuration is complete, now we want to see our application. Ideally, as we make changes to our application, the browser would update automatically. This is possible with webpack-dev-server. Delete the test script in our package.json file, and replace it with a script to serve our application: { "name": "hello-world", "version": "1.0.0", "description": "", "main": "main.js", "scripts": { ": {} } The name of this command is your choice. I chose to call mine serve since we will be serving our application. From our terminal or command line, we can run npm run serve and that in turn will run webpack-dev-server --mode development . The --mode development is what’s called a flag or option. We haven’t talked about this, but it essentially instructs Webpack that you are in development mode. We can also pass in --mode production which we will do when we build our project. These aren’t necessarily required for Webpack to work. Without these, you will get a warning message telling you to provide a mode when you run Webpack . I say “necessarily required” because Webpack will minimize our code in production mode but not in development. So, don’t think those commands don’t do anything–they do. Let’s run npm run serve and see what happens. When we run npm run serve we get some output in our terminal. And, if everything goes well: And if we scroll up a bit: Point your browser to. 
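A minimal scripts block for that serve command might look like this (a sketch; the rest of package.json stays as npm init generated it):

{
  "name": "hello-world",
  "version": "1.0.0",
  "description": "",
  "main": "main.js",
  "scripts": {
    "serve": "webpack-dev-server --mode development"
  }
}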
You will see your Blue Hello World message in Roboto font. Now, let’s update the project and change the message to Hello Universe. Notice that the webpage refreshes automatically. That’s great, right? Can you think of a downside? Let’s change the application just a bit and include an input which we will bind a variable to (with v-model). We will output the variable in an <h2>tag below the input. I’ve also updated the styling section to style the message now. Our App.vue file should look like this: <template> <div id="app"> <input v- <h2 class="message">{{ message }}</h2> </div> </template> <script> export default { data() { return { message: 'Hello world!', }; }, }; </script> <style> .message { font-size: 18px; font-family: 'Roboto', sans-serif; color: blue; } </style> When we serve our application, we will have an input with a message of Hello World below it. The input is bound to the message variable, so as we type, we change the <h2> content. Go ahead, type into the input to change the <h2>content. Now go back to your editor, and below the <h2>tag, add the following: <h3>Some Other Message</h3> Save your App.vue and watch what happens. The h2 we just updated by typing in our input reverted back to Hello World. This is because the browser actually refreshes, and the script tag and page are loaded again. In other words, we were not able to maintain the state of our application. This may not seem like a big deal, but as you are testing your application and adding data to it, it will be frustrating if your app “resets” every time. Fortunately, Webpack offers us a solution called Hot Module Replacement. The hot module replacement is a plugin provided by Webpack itself. Up until this point, we have not used the Webpack object itself in our configuration file. However, we will now import Webpack so we can access the plugin. In addition to the plugin, we will pass one additional option to Webpack, the devServer option. In that option, we will set hot to true. Also, we will make an (optional) update to our build workflow: We will open the browser window automatically when we run npm run serve. We do this by setting true which is also inside the devServer option. const HtmlWebpackPlugin = require('html-webpack-plugin'); const VueLoaderPlugin = require('vue-loader/lib/plugin'); const webpack = require('webpack'); module.exports = { entry: './src/main.js', module: { rules: [ { test: /\.js$/, use: 'babel-loader' }, { test: /\.vue$/, use: 'vue-loader' }, { test: /\.css$/, use: ['vue-style-loader', 'css-loader']}, ] }, devServer: { open: true, hot: true, }, plugins: [ new HtmlWebpackPlugin({ template: './src/index.html', }), new VueLoaderPlugin(), new webpack.HotModuleReplacementPlugin(), ] }; Notice that we’ve imported Webpack so we could access the hotModuleReplacementPlugin. We’ve added that to the plugins array, and then told Webpack to use it with hot: true. We open the browser window automatically when we serve the application with open: true. Run npm run serve: The browser window should open, and if you open your dev tools, you should notice a slight change in the output. It now tells us hot module replacement is enabled. Let’s type in our input to change the <h2> content. Then, change the h3 tag to read: One More Message. Save your file and notice what happens. The browser doesn't refresh, but our <h3>change is reflected! The message we typed in the input remains, but the h3 updates. This allows our application to keep it’s state while we edit it. 
Step 7: Building our project So far, we’ve served our application. But, what if we want to build our application so we can distribute it? If you noticed, when we serve our application, no files are created. Webpack creates a version of these files that only exist in temporary memory. If we want to distribute our Hello World app to our client, we need to build the project. This is very simple. Just like before, we will create a script in our package.json file to tell Webpack to build our project. We will use webpack as the command instead of webpack-dev-server. We will pass in the --mode production flag as well. We will also use the rimraf package first to delete any previous builds we may have. We do this simply by rimraf dist. dist is the folder Webpack will automatically create when it builds our project. “Dist” is short for distribution–i.e. we are “distributing” our applications code. The rimraf dist command is telling the rimraf package to delete the dist directory. Make sure you don’t rimraf src by accident! Webpack also offers a plugin that will accomplish this cleaning process called clean-webpack-plugin. I chose dist show an alternative way. Our package.json file should look like this: { "name": "hello-world", "version": "1.0.0", "description": "", "main": "main.js", "scripts": { "clean": "rimraf dist", "build": "npm run clean && webpack --mode production", ": {} } There are three things to notice: - I’ve created a separate cleanscript so we can run it independently of our build script. npm run buildwill call the independent cleanscript we’ve created. - I have &&between npm run cleanand webpack. This instruction says: “run npm run cleanfirst, then run webpack”. Let’s build the project. npm run build Webpack creates a dist directory, and our code is inside. Since our code makes no HTTP requests, we can simply open our index.html file in our browser and it will work as expected. If we had code that was making HTTP requests, we would run into some cross-origin errors as we made those requests. We would need to run that project from a server for it to work. Let’s examine the index.html that Webpack created in the browser and the code editor. If we open it in our editor or take a look at the source code in our dev tools you will see Webpack injected the script tag. In our editor though, you won’t see the styles because the style tag is injected dynamically at runtime with JavaScript! Also, notice our development console information is no longer present. This is because we passed the --production flag to Webpack. Conclusion Understanding the build process behind the frameworks you use will help you to better understand the framework itself. Take some time to try to build an Angular, React or another Vue Project without the use of the respective CLIs. Or, just build a basic three-file site (index.html, styles.css, and app.js), but use Webpack to serve and build a production version. Thanks for reading! woz
https://www.freecodecamp.org/news/how-to-create-a-vue-js-app-using-single-file-components-without-the-cli-7e73e5b8244f/
CC-MAIN-2021-31
refinedweb
2,828
67.55
I needed to write this little function because I need to add some parameters to a URL that I was going to open with urllib2. The benefit with this script is that it can combine a any URL with some structured parameters. The URL could potentially already contain a query string (aka CGI parameters). Here's how to use it if it was placed in a file called 'urlfixer.py': >>> from urlfixer import parametrize_url >>> parametrize_url('',>> The function needed some extra attention (read hack) if the starting url was of the form which is non-standard. The standard way would be. You can download urlfixer.py or read it here: from urlparse import urlparse, urlunparse from urllib import urlencode def parametrize_url(url, **params): """ don't just add the **params because the url itself might contain CGI variables embedded inside the string. """ url_parsed = list(urlparse(url)) encoded = urlencode(params) qs = url_parsed[4] if encoded: if qs: qs += '&'+encoded else: qs = encoded netloc = url_parsed[1] if netloc.find('?')>-1: url_parsed[1] = url_parsed[1][:netloc.find('?')] if qs: qs = netloc[netloc.find('?')+1:]+'&'+qs else: qs = netloc[netloc.find('?')+1:] url_parsed[4] = qs url = urlunparse(url_parsed) return url look i have a problem i need to create a unique parameter like a id for a url and that parameter get to a form man i dont know how to do it so i need your help Hello, nice site look this: As '?' cannot be in url_parsed.netloc, 'netloc.find('?') > -1' is always false, so that block is useless. Using '.find()' is discouraged, the Pythonic idiom is 'if "?" in netloc'. I guess the hack was necessary exactly becuse the non-standard '' form. As of Python 2.5 this is parsed correctly: >>> u = urlparse('') >>> u.netloc 'myfoo.com' >>> u.query 'a'
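With a couple of placeholder URLs of my own (the post's original example values aren't shown above), the function behaves like this:

>>> from urlfixer import parametrize_url
>>> parametrize_url('http://example.com/page', foo='bar')
'http://example.com/page?foo=bar'
>>> parametrize_url('http://example.com/page?a=1', foo='bar')
'http://example.com/page?a=1&foo=bar'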
http://www.peterbe.com/plog/parametrize_url
CC-MAIN-2015-18
refinedweb
295
65.93
walks over a sequence, returns a sequence of windowed views of the sequence, with window size n (defaults to one), and stride s (defaults to n)

src/r/s/rsarm-0.9/rsarm/rsarm.py

def window(seq, n=1, s=None):
    """walks over a sequence, returns a sequence of windowed views of the
    sequence, with window size n (defaults to one), and stride s (defaults to n)"""
    i = iter(seq)
    if s is None:
        s = n
    b = []
    try:
        for _ in xrange(n):
            b.append(i.next())
    except StopIteration:
        pass
    while True:
        yield b[:]
        try:
            for _ in range(s):
                b.append(i.next())
        except StopIteration:
            del b[:s]
            break
        del b[:s]
    if len(b):
        yield b[:]
    raise StopIteration()

Usage excerpts indexed alongside the definition (fragments, truncated as found):

import reedsolomon
from .utils import read_stream, window

def chunkify(s, l=32):
    did_padding = False
    for w in map(''.join, window(s, l, l)):
        if not len(w)==l:
            yield padded(w, l)

for lnum, chunk in enumerate(chunkify(i, l)):
    echunk = codec.encode(chunk)
    l = ("".join(c) for c in window(("%02x" % c for c in echunk), 3))
    l1 = (" ".join(c) for c in window(l, 2))
    l2 = " ".join(l1)

ochunks = []
for line, echunk in enumerate(window(ifilter(hexchars.__contains__, i), n*2)):
    hexdigits = [int("".join(c), 16) for c in window(echunk, 2)]
    try:
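Tracing the generator by hand (under Python 2, which the snippet targets given xrange and i.next()), example values of my own give:

>>> list(window([1, 2, 3, 4, 5], n=2, s=1))
[[1, 2], [2, 3], [3, 4], [4, 5], [5]]
>>> list(window("abcdef", n=3))   # stride defaults to the window size
[['a', 'b', 'c'], ['d', 'e', 'f']]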
http://nullege.com/codes/search/utils.window
CC-MAIN-2018-13
refinedweb
218
72.66
#include <sys/socket.h> #include <netpacket/packet.h> #include <net/ethernet.h> /* the L2 protocols */ packet_socket = socket(PF_PACKET, int socket_type, int protocol); The socket_type is either SOCK_RAW for raw packets including the link level header or SOCK_DGRAM for cooked packets with the link level header removed. The link level header information is available in a common format in a sockaddr_ll. to an interface. Only the sll_protocol and the sll_ifindex address fields are used for purposes of binding. The connect(2) operation is not supported on packet sockets. When the MSG_TRUNC flag is passed to recvmsg(2), recv(2), recvfrom(2) the real length of the packet on the wire is always returned, even when it is longer than the buffer. network order as defined in the linux/if_ether.h include file. It defaults to the socket's protocol. sll_ifindex is the interface index of the interface (see netdevice(7)); 0 matches any interface (only legal for binding). sll_hatype is.. In addition all standard ioctls defined in netdevice(7) and socket(7) are valid on packet sockets., e.g. eth0. This structure is obsolete and should not be used in new code.. In addition other errors may be generated by the low-level driver. #ifndef SOL_PACKET #define SOL_PACKET 263 #endif. #include <asm/types.h> #include <linux/if_packet.h> #include <linux/if_ether.h> /* The L2 protocols */ RFC 894 for the standard IP Ethernet encapsulation. RFC 1700 for the IEEE 802.3 IP encapsulation. The <linux/if_ether.h> include file for physical layer protocols.
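A minimal sketch of the calls described above (requires CAP_NET_RAW or root; "eth0" and the buffer size are placeholders, not values from the manual page):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netpacket/packet.h>
#include <net/ethernet.h>
#include <linux/if_ether.h>
#include <net/if.h>
#include <arpa/inet.h>

int main(void)
{
    /* Raw packet socket: frames include the link-level header. */
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof(sll));
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex  = if_nametoindex("eth0");   /* 0 would match any interface */

    if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
        perror("bind");
        return 1;
    }

    unsigned char frame[2048];
    ssize_t n = recv(fd, frame, sizeof(frame), 0);
    if (n >= 0)
        printf("received a frame of %zd bytes\n", n);

    close(fd);
    return 0;
}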
http://www.linuxmanpages.com/man7/packet.7.php
crawl-002
refinedweb
251
52.76
Hi Paul, > +/* Return the total number of processors. The result is guaranteed to > + be at least 1. */ > +unsigned long int > +num_processors (void) > +{ > +#ifdef _SC_NPROCESSORS_ONLN > + long int nprocs = sysconf (_SC_NPROCESSORS_ONLN); > + if (0 < nprocs) > + return nprocs; > +#endif It appears that this code is right according to POSIX, but does not catch the entire reality on Linux. (A comment in libgomp/config/linux/proc.c says: "Count only the CPUs this process can use.") And on mingw, a different API should be used. See the implementation of the function omp_get_num_procs() in libgomp, part of GCC: - for POSIX systems: - for Linux: - for mingw: This leads to the question: Why not use the AC_OPENMP macro, and then use the following? unsigned long int num_processors (void) { #ifdef _OPENMP return omp_get_num_procs (); #else ... existing implementation ... #endif } This would get the count right on Linux, mingw, and on POSIX systems, i.e. nearly everywhere. Also, the omp_get_num_procs() function has the advantage that you can influence it through an environment variable, so that users have the ability to let multithreaded programs access only, say, 3 out of 4 CPU cores, if a responsive machine is more important to them than fast execution. Bruno
http://lists.gnu.org/archive/html/bug-gnulib/2009-03/msg00288.html
CC-MAIN-2018-26
refinedweb
193
60.85
Re: Security problem with User Control embedded in Web Page From: Shel Blauman [MSFT] (sheldonb_at_online.microsoft.com) Date: 06/06/03 - ] Date: Fri, 6 Jun 2003 09:24:48 -0700 Not sure why the URL based code group doesn't work for you, but my recommendation would be to use a strong name rather than the URL. Here are some generic instructions for setting up a user control: How to run a user control assembly hosted on an Internet Information Server (IIS) on an Internet Explorer (IE) client. The following applies to an assembly intended to execute with greater permissions than would normally be granted to the zone the assembly belongs to, most likely Internet, Local Intranet or Trusted Sites. 1.. The user control assembly is identifiable in a manner that can be used to set the membership condition in a code group either using the .NET Configuration Tool (Mscorcfg.msc) or caspol.exe. Signing using a strong name or a certificate is preferable, but other sources of identity such as a URL or site can also be used. Although a URL or site can serve as a membership condition, they are not recommended, as they are not as secure as a strong name or a certificate. To create a strong name use sn.exe: sn -k keyPair.snk // This strong name key is used to create a code group that gives // permissions to this assembly. // Sign the assembly with the strong name key. [assembly: AssemblyKeyFile("keyPair.snk")] 2.. If strong named, the user control has the AllowedPartiallyTrustedCallers attribute. // The AllowPartiallyTrustedCallersAttribute requires the assembly to // be signed with a strong name key. // This attribute is necessary since the control is called by either an // intranet or Internet Web page that should be running under // restricted permissions. // The fully attributed assembly should look similar to the following: [assembly: AssemblyKeyFile("snKey.snk")] [assembly: AssemblyVersion("1.0.0.0")] [assembly:AllowPartiallyTrustedCallers] namespace SignedAssembly 3.. The user control asserts permissions it requires which the zone in which it is running would not normally be granted. Permissions should only be asserted if it is positively known the calling application has insufficient permissions. Asserts should not be performed without a strong need.(); 4.. The user control RevertAsserts immediately after performing asserted actions. // It is very important to call RevertAssert to restore the stack walk // for file operations. FileIOPermission.RevertAssert(); 5.. The user control is hosted in an IIS folder on the server that has an "Execute permission" set to either "None" or "Scripts Only". 6.. The client has a code group that the assembly resolves to that grants the permissions the assembly requires. caspol -machine -addgroup All_Code -strong -file signedassembly.exe FullTrust -name FouthCoffeeStrongName -description "Code group granting trust to code signed by FourthCoffee" Alternatively, the code group can be created using the Microsoft .NET Framework Configuration tool (Mscorcfg.msc) found under Administrative Tools. 7.. In Internet Explorer, Internet Options, Advanced Security settings, the "Do not save encrypted pages to disk" should be unchecked if Internet Explorer Enhanced Security Configuration has been enabled for both Administrators and for Other Groups on the server. The Internet Explorer Enhanced Security setting selected is the default on Windows Server 2003. When in effect, one of the invoked features is the encryption of downloaded files. 
Another feature is the automatic setting of "Do not save encrypted pages to disk" on the client. To successfully download a user control under these conditions, the client setting for "Do not save encrypted pages to disk" should be cleared. This functionality is found in Control Panel, Add or Remove Programs, Add/Remove Windows Components, Internet Explorer Enhanced Security Configuration. 8.. The runtime version on the client machine is compatible with the used to compile the assembly. 9.. The code group created for the user control is in the same runtime that the control uses. If problems occur, check the Fusionbinderror log in "C:\Documents and Settings\<username>\Local Settings\Temporary Internet Files" to determine which operations failed. This log must first be copied to another folder before it can be opened. -- This posting is provided "AS IS" with no warranties, and confers no rights. Use of included script samples are subject to the terms specified at "Mr B" <paul.bennett@aurora-uk.com> wrote in message news:OcrR3RELDHA.2256@TK2MSFTNGP11.phx.gbl... > Hello, > > Can anybody help me? > > I have a Win Form User control embedded in a Web Page using <object> tag > (which is pretty standard) > > In the User control, I need to read the session cookie of the web page it is > embedded in. To do this, I have used the WinInet.dll InternetGetCookie > method. It works fine on my PC, but when I put it onto the actual web site, > some PCs are not showing the control. After alot of research, I discovered > that removing the code that references the WinInet.DLL makes the control > work for everyone. > > I have created a specific code group that gives full trust for the web site > (using the URL Condition type for the permission set), which I thought would > allow the control (or, more accuratly, the assembly DLL) to be run on the > PC. With a bit more research, I found that changing the Internet security > level to full access allowed all users to display the control. But as I am > trying to write an installation program, I need to know what permissions to > setup so that everyone can run this User Control. > > It is obviously a security problem, but does anyone know why referencing a > System32 DLL from within a managed DLL with full trust would cause this > problem, or whether this is a "Red Herring" and it is something else not so > obvious. And if anyone has a solution that would be even better. > > cheers > > Paul > > - ]
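To illustrate steps 3 and 4 of the instructions above, a minimal sketch of the assert/revert pattern (the path, method name, and file operation are placeholders, not from the original guidance):

using System.Security.Permissions;

public void SavePreferences(string data)
{
    // Assert only the permission the control actually needs.
    FileIOPermission filePerm =
        new FileIOPermission(FileIOPermissionAccess.Write, @"C:\UserControlData\prefs.txt");
    filePerm.Assert();
    try
    {
        using (System.IO.StreamWriter writer =
            new System.IO.StreamWriter(@"C:\UserControlData\prefs.txt"))
        {
            writer.Write(data);
        }
    }
    finally
    {
        // Restore the normal stack walk immediately after the privileged work.
        FileIOPermission.RevertAssert();
    }
}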
http://www.derkeiler.com/Newsgroups/microsoft.public.dotnet.security/2003-06/0051.html
CC-MAIN-2015-48
refinedweb
963
54.42
Extending is one of the super powers Python, Lua, and Ruby have to offer. Extending is basically the ability to combine code from two or more different languages into one running executable or script. Although this adds a layer of complexity to a project, it gives a developer the ability to pick and choose from the existing toolbox. All of these languages are built around being extensible; extensibility is one of the features that has made them so prolific. The language documentation that comes with each includes a nifty sample and explanation of how to partner with other languages, so this section is more of a brief overview of the process. Languages are extended for many different reasons. A developer may want to use an existing C library or port work from an old project into a new development effort. Often extensible languages are used as prototypes , and then profiling tools are used to see what parts of the code execute slowly, and where pieces should be re-written. Sometimes a developer will need to do something that just isn't possible in the main language, and must turn to other avenues. Extending is mainly used when another language can do the job betterbetter meaning more efficiently or more easily. Most commonly, you will find these languages partnered with C and C++, where the Cs are running code that needs to be optimized for speed and memory. As I've already mentioned, multilanguage development adds an extra layer of complexity. Particular problems with extending are as follows : You must debug in two languages simultaneously . You must develop and maintain glue code that ties the languages together (this might be significantly large amounts of code). Different languages may have different execution models. Object layouts between languages may be completely different. Changes to one side of the code affect the other side, creating dependencies. Functions between languages may be implemented differently. Extended programs can also be difficult to debug. For instance, Ruby uses the GNU debugger, which can look at core dumps but still doesn't have breakpoints or access to variables or online source help. This is really different from the types of tools available for C and C++, where breakpoints and core dumps can be watched and managed during debug execution. Since the tools can differ between two languages, a developer may have to hunt through more than one debugger to find a problem. Also, because high-level language debuggers are usually more primitive, there is less checking during compile time, which could lead to missed code deficiencies. There are some glue code packages that solve some of these problems. These are third-party programs that manage the creation of extended code; Simple Wrapper Interface Generator (SWIG, covered later in the chapter) is one example of such a package. Though adding more than one language to a project gives you more options, as I said, it does add an extra level of complexity. When you add a language, you will need multiple compliers and multiple debuggers, and you will have to develop and maintain the glue code between the two languages. Whether to add a language is a tough management question, one that needs to be answered based on the needs of each particular project. A final issue with having high-level code in a shipped product is that the code reveals much more about the source than does C or C++; this can make it more vulnerable to hacking. 
This doesn't mean that C or C++ cannot be hacked, just that if the variable names and function names are shipped in scripts with the game code in a high-level format, the game can be easier to break into or deconstruct. There are a few built-in ways of integrating Python with C, C++, and other languages. Writing an extension involves creating a wrapper for C that Python imports, builds, and can then execute. Python also provides mechanisms for embedding, which is where C (or an equivalent) is given direct access to the Python interpreter. There are also a number of third-party integration solutions. You must write a wrapper in order to access a second language via a Python extension. The wrapper acts as glue between the two languages, converting function arguments from Python into the second language and then returning results to Python in a way that Python can understand. For example, say you have a simple C function called function : int function (int x) { /*code that does something useful*/ } A Python wrapper for function looks something like the following: #include <Python.h> PyObject *wrap_function(PyObject *self, PyObject *args) { int x, result; if (!PyArg_ParseTuple(args, "i:function",&x)) return NULL; result = function(x); return Py_BuildValue("i",result); } The wrapper starts by including the Python.h header, which includes the necessary commands to build a wrapper, and also a few standard header files (like stdio.h, string.h, errno.h, and dstlib.h). NOTE TIP Python commands that are included with Python.h almost always begin with Py or py, so they are easily distinguished from the rest of the C code. The PyObject wrapper wrap_function has two arguments, self and args (see Figure 12.2). The self argument is used when the C function implements a built-in method. The args argument becomes a pointer to a Python tuple object containing the arguments. Each item of the tuple is a Python object and corresponds to an argument in the call's argument list. The small "i" in the i:function line is short for int. If the function instead required a different type, you would need to use a different letter than "i": i. For an integer. I. For a long integer. s. For a character string. c. For a single character. f. For a floating point number d. For double o. For an object Tuple. Python tuples can hold multiple objects. Together, PyArg_ParseTuple() and PyBuildValue() are what converts data between C and Python (see Figure 12.3). Arguments are retrieved with PyArg_ParseTuple , and results are passed back with Py_BuildValue . Py_BuildValue() returns any values as Python objects.. If a C function returns no useful argument (i.e. void ), then the Python function must return None . In the code snippet an if statement is also used. This structure is there just in case an error is detected in the argument list. If an error is detected , then the wrapper returns NULL . Once a wrapper has been written, Python needs to know about it. Telling Python about the wrapper is accomplished with an initialization function. The initialization function registers new methods with the Python interpreter and looks like this: Static PyMethod exampleMethods[] = { {"function", wrap_function, 1}, {NULL, NULL} }; void initialize_function(){ PyObject *m m = Py_InitModule("example", "exampleMethods"); } Only after a wrapper and an initialization function exist can the code compile. After compilation, the function is part of Python's library directory and can be called at any time, just like a native Python module. 
You can also use a setup file when importing a module. A setup file includes a module name , the location of the C code, and any compile tags needed. The setup file is then pre- processed into a project file or makefile. The compile and build process for extending varies, depending upon your platform, environment, tools, and dynamic/static decision-making, which makes the Python parent documentation extremely valuable when you're attempting this sort of development. Guido Van Rossum has a tutorial on extending and embedding Python within the language documentation, at. The Python C API Reference manual is also extremely helpful if C or C++ is your target language. It's at. The last step in Python extension is to include any wrapped functions (in this case, function ) in the Python code. Do this with a simple import line to initialize the module, like so: import ModuleToImport Then the function can be called from Python just like any other method. ModuleToImport.function(int) Embedding in Python is where a program is given direct access to the Python interpreter, allowing the program the power to load and execute Python scripts and services. This gives a programmer the power to load Python modules, call Python functions, and access Python objects, all from his or her favorite language of comfort . Embedding is powered by Python's API, which can be used in C by including the Python.h header file. This header #include "Python.h" contains all the functions, types, and macro definitions needed to use the API. It is fairly simple to initialize Python in C once the Python header file is included (see Figure 12.4): int main() { Py_Initialize(); PyRun_SimpleFile("<filename>"); Py_Finalize(); return(); } Py_Initialize is the basic initialization function; it allocates resources for the interpreter to start using the API. In particular, it initializes and creates the Python sys , exceptions , _builtin_ , and _main_modules . NOTE CAUTION Py_Initialize searches for mod ules assuming that the Python library is in a fixed location, which is a detail that may need to be altered , depending on the operat ing system. Trouble with this func tion may indicate a need to set the operating system's environment variable paths for PYTHONHOME or PYTHON PATH . Alternately, the mod ule paths can be explicitly set using PySys_SetArgv() . The Pyrun_SimpleFile function is simply one of the very high-level API functions that reads the given file from a pointer ( FILE * ) and executes the commands stored there. After initialization and running any code, Py_Finalize releases the internal resources and shuts down the interpreter. Python's high-level API functions are basically just used for executing given Python source, not for interacting with it in any significant way. Other high-level functions in Python's C API include the following: Py_CompileString(). Parses and compiles source code string. Py_eval_input. Parses and evaluates expressions. Py_file_input. Parses and evaluates files. Py_Main(). Main program for the standard interpreter. PyParser_SimpleParseString(). Parses Python source code from string. PyParser_SimpleParseFile(). Parses Python source code from file. PyRun_AnyFile(). Returns the result of running PyRun_InteractiveLoop or PyRun_SimpleFile(). PyRun_SimpleString(). Runs given command string in _main_ . PyRun_SimpleFile(). As PyRun_SimpleString except source code can be read from a file instead of a string. Py_single_input. Start symbol for a single statement. PyRun_InteractiveOne(). 
Read and execute a single statement from an interactive device file. PyRun_InteractiveLoop(). Read and execute all statements from an interactive device file. PyRun_String(). Execute source code from a string. PyRun_File(). Execute source code from a file. The high-level tools really just scratch the surface, and Python's API allows memory management, object creation, threading, and exception handling, to name a few things. Other commonly used commands include PyImport_ImportModule() , which is for importing and initializing entire Python modules; PyObject_GetAttrString() , which is for accessing a given modules attributes; and PyObject_SetAttrString() , which is for assigning values to variables within modules. So what happens when there is a large integration project and some 100+ C functions must be gift-wrapped for Python? This can be a time-consuming , tedious , error-prone project. Imagine now that the library goes through a major update every four to six months, and each wrapper function will need to be revisited. Now you know what job security looks like! Luckily, there are other options available for extension besides wrappers. SWIG, for instance, is an extension wrapper designed to make extension easier. It can be used to generate interfaces (primarily in C) without having to write a lot of code. Another option is Sip, a relative of SWIG, which focuses on C++. The Boost.Python library is yet another tool that can be used to write small bits of code to create a shared library. Of these three, SWIG is the most popular, probably because it plays well not only with C, C++, Python, and Ruby, but also with Perl, Tcl/Tk, Java, and C#. SWIG is copyrighted software, but it is freely distributed. It is normally found on UNIX but will also operate on Win32 OSs. SWIG automates the wrapper process by generating wrapper code from a list of ANSI C functions and variable declarations. The SWIG language is actually fairly complex and very complete. It supports preprocessing, pointers, classes, inheritance, and even C++ templates. SWIG is typically called from a command prompt or used with NMAKE. Modules can be compiled into a DLL form and then dynamically loaded into Python, or they can be set up as a custom build option in MS Development Studio. SWIG can be found online at Sourceforge (. sourceforge .net/), and Boost.Python, by David Abrahams, can be found online at Python.org (). Lua was built to partner with other languages, and it can be extended with functions written in C just as Python can. These functions must be of the lua_CFunction type: typedef int (*lua_CFunction) (lua_State *L); A C function receives a Lua state and returns an integer that holds the number of values that must return to Lua (see Figure 12.5). The C function receives arguments from Lua in its stack in direct order. Any return values to Lua are pushed onto the stack, also in direct order. When registering a C function to Lua, a built-in macro receives the name the function will have in Lua and a pointer to the function, so a function can be registered in Lua by calling the lua_register macro: lua_register(L, "average", MyFunction); Values can be associated with a C function when it is created. This creates what is called a C closure . The values are then accessible to the C function whenever it is called. 
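To make the Lua side concrete, here is a sketch of a C function that could sit behind the lua_register(L, "average", MyFunction) call shown above. It is written against the classic Lua 5.x C API, and the averaging behavior is simply an assumed example:

#include <lua.h>
#include <lauxlib.h>

/* Receives any number of numeric arguments on the Lua stack, pushes
   their average back, and returns 1 because one value goes back to Lua. */
static int MyFunction(lua_State *L)
{
    int n = lua_gettop(L);            /* number of arguments on the stack */
    double sum = 0.0;
    int i;
    for (i = 1; i <= n; i++)
        sum += luaL_checknumber(L, i);
    lua_pushnumber(L, n > 0 ? sum / n : 0.0);
    return 1;                         /* one result returned to Lua */
}

/* Register it so Lua scripts can simply call average(...). */
void register_average(lua_State *L)
{
    lua_register(L, "average", MyFunction);
}

A function registered this way receives only what is on the stack when it is called; values can also be bundled with the function itself at registration time, which is the C closure mechanism described next.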
To create a C closure, first push the values onto the stack, and then use the lua_pushcclosure command to push the C function onto the stack with an argument containing the number of values that need to be associated with the function: void lua_pushcclosure (lua_State *L, lua_CFunction MyFunction, int MyArgument); Whenever the C function is called, the values pushed up are located at specific pseudo-indices produced by a macro, lua_upvalueindex . The first value is at position lua_upvalueindex(1) , the second at lua_upvalueindex(2) , and so on. Lua also provides a predefined table that can be used by any C code to store whatever Lua value it needs to store. This table is a registry and is really useful when values must be kept outside the lifespan of a given function. This registry table is pseudo-indexed at LUA_REGISTRYINDEX . Any C library can store data into this table. Extending Ruby in C is accomplished by writing C as a bridge between Ruby's C API and whatever you want to add on to Ruby (see Figure 12.6). The Ruby C API is contained in the C header file ruby.h, and many of the common API commands are listed in Table 12.2. Ruby and C must share data types, which is problematic when Ruby only recognizes objects. For C to understand Ruby, some translation must be done with data types. In Ruby, everything is either an object or a reference to an object. For C to understand Ruby, data types must be pointers to a Ruby object or actual objects. You do so by making all Ruby variables in C a VALUE type. When VALUE is a pointer, it points to one of the memory structures for a Ruby class or object structure. VALUE can also be an immediate value such as Fixnum , Symbol , true , false , or nil . A Ruby object is an allocated structure in memory that contains a table of instance variables and other class information. The class is another allocated structure in memory that contains a table of the methods defined for that class. The built-in objects and classes are defined in the C API's header file, ruby.h. Before wrapping up any Ruby in C, you must include this file: #include "ruby.h" You must define a C global function that begins with Init_ when writing new classes or modules. Creating a new subclass of Ruby's object looks like the following: void Init_MyNewSubclass() { cMyNewSubclass = rb_define_class("MyNewSubclass", rb_cObject); } Object is represented by rb_cObject in the ruby.h header file, and the class is defined with rb_define_class . Methods can be added to the class using rb_define_method , like so: void Init_MyNewSubclass() { cMyNewSubclass = rb_define_class("MyNewSubclass", rb_cObject); rb_define_method(cMyNewSubclass, "MyMethod", MyFunction, value ); } Ruby and C can also directly share global values. This is accomplished by first creating a Ruby object in C: VALUE MyString; MyString = rb_str_new(); Then bind the object's address to a Ruby global variable: Rb_define_variable("$String", &MyString); Now Ruby can access the C variable MyString as $String . You may run into trouble with Ruby's garbage collection when extending Ruby. Ruby's GC needs to be handled with kid gloves when C data structures hold Ruby objects or when Ruby objects hold C structures. You can smooth the way by writing a function that registers the objects, passing free() , calling rb_global_variable() on each Ruby object in a structure, or making other special API calls. Once code has been written for an extension, it needs to be compiled in a way that Ruby can use. 
The code can be compiled as a shared object to be used at runtime, or it can be statically linked to the Ruby interpreter. The entire Ruby interpreter can also be embedded within an application. The steps you should take depend greatly on the platform on which the programming is being done; there are instructions for each method on the online Ruby library reference, at. The C API, however, is quite large, and for English users the best source for documentation is likely the source code itself.
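Filling in the chapter's Ruby skeleton, a small but complete extension might look like the following sketch. The class name follows the text, while the method name, its arity of 1, and its doubling behavior are invented placeholders; RUBY_METHOD_FUNC is the standard cast that ruby.h provides for method pointers:

#include "ruby.h"

static VALUE cMyNewSubclass;

/* An instance method: every Ruby-callable C function receives and
   returns VALUEs, so the argument is converted to a C long and back. */
static VALUE MyFunction(VALUE self, VALUE number)
{
    long n = NUM2LONG(number);        /* Ruby object -> C long */
    return LONG2NUM(n * 2);           /* C long -> Ruby object */
}

/* Ruby calls Init_MyNewSubclass when the extension is required. */
void Init_MyNewSubclass(void)
{
    cMyNewSubclass = rb_define_class("MyNewSubclass", rb_cObject);
    /* The final argument is the arity: one explicit argument besides self. */
    rb_define_method(cMyNewSubclass, "my_method",
                     RUBY_METHOD_FUNC(MyFunction), 1);
}

From Ruby, after the extension is built and required, MyNewSubclass.new.my_method(21) would return 42.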
https://flylib.com/books/en/1.77.1.109/1/
CC-MAIN-2020-10
refinedweb
2,935
54.32
Bubble sort and other sorting methods: besides bubble sort, heap sort is also used. Example of heap sort in Java: public class Heap_Sort { public static void main ...
Insertion Sort Timer: Welcome all, I want to write a Java program that times insertion sort and finds its time complexity for random values. Thanks all.
Insertion sort is similar to bubble sort, but insertion sort is more efficient than bubble sort because each element is inserted directly into its proper place.
Selection Sort Java Notes: NOTE: You should never really write your own sort. Use java.util.Arrays.sort(...) or java.util.Collections.sort(...). Like all simple sorts, selection sort is implemented with two nested loops.
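For reference, a minimal insertion sort in Java looks like the sketch below; it is illustrative only, and as the note above says, java.util.Arrays.sort is what real code should use:

public class InsertionSortExample {
    public static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            // Shift larger elements one slot right until key's place is found.
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 6};
        insertionSort(data);
        System.out.println(java.util.Arrays.toString(data));  // prints [1, 2, 5, 6, 9]
    }
}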
http://www.roseindia.net/discussion/48327-Bubble-Sort-in-Java.html
CC-MAIN-2014-10
refinedweb
113
53.41
Isaac Dupree wrote: > Sean Leather wrote: > >> I. >>>> >>>> Well, the warning is right that you don't need to re-export module B: >>> instances are implicitly exported. So you could just remove the export >>> of >>> "module B", unless there's a reason to export it (such as, you might add >>> some exported functions or data types to it later) >>> >> >> >> Hmm, the disappointing result of removing module B from the export list is >> that now it doesn't show up in the Haddock-generated documentation for >> module A. Not that there was anything specific in B to document, because >> it's only instances. But I do want the user to know that importing A also >> imports B. Sigh... I suppose there's CPP if I really want it. >> > So, I tried to do the above with CPP, and I can't get Haddock to recognize that it should list this. Here's what it looks like: module A ( module Z, #ifdef __HADDOCK__ module B #endif ) where ... I added "extensions: CPP" to the .cabal file as I saw instructed in some Cabal thread from long ago. Building works such that I don't get the warning, but "cabal haddock" just acts as if it doesn't recognize __HADDOCK__. Assuming I'm not doing anything wrong, this might be a Cabal problem. You could put a link to the module in the intro-documentation that comes > before the export list, possibly in a sentence saying e.g. "deliberately > exports instances from @module B@" (except I've forgotten Haddock syntax > and might have used it wrong there :-) > Yep, that is an option. I noticed it in the documentation for another package, the name of which escapes me right now..) > Yeah, I think instances should be documented in general. But, I agree with Ross in the previous thread, that's a different story. Sean -------------- next part -------------- An HTML attachment was scrubbed... URL:
http://www.haskell.org/pipermail/glasgow-haskell-users/2008-August/015306.html
CC-MAIN-2014-42
refinedweb
316
71.44
Antony Kummel <antonykummel at yahoo.com> writes: > This is how I understand the registry/manager/wrappers system: > > The meaning of the wrappers is that referenceables are > transferable to third parties who get their flow pass through the > middle process, and that the wrappers get reconnected automatically Close (IMO) - the referenceable to the original object (which is created as a result of passing that object through PB to a third party) is placed in a wrapper which is then given to client code. For example (in ASCII): +---------+ +---Wrapper-----+ | Manager |---[ PB transport ]---|[Referenceable]|<---- Client code +---------+ +---------------+ So the only reference most of the client code maintains is to the wrapper object, which can remain consistent across outages. It handles reconnections to the original object when needed, which will technically create a new referenceable (since PB referenceables can't continue to be used across a disconnect/reconnect), but that is transparent to the client code. > If I understand correctly, the only purpose of the registry is to > provide an interface to enable the re-connection of the wrappers. > The purpose of managers is simply to dispense data and state > objects. The primary reason for the registry in our system is to act as a single management object to retrieve references to our registerable objects (such as managers), whether the request to locate a given registerable object is coming locally or remotely. Much as any other central registry, it permits us to pass a single object reference around to various parts in the system (including remote clients) through which access to other official entry points can be retrieved. But yes, it also simplifies the remote connection process since all we need to do is provide a remote reference to a registry and through that remote (wrapped) references to objects such as managers may be retrieved through the same code that would be used if the registry was local. And yes, as I've described managers are largely data management objects. We do also have higher order registerables (we call them packages) which implement high level functionality - generally to simplify common operations that would otherwise need to interact with several managers simultaneously. > Questions: > > Do managers only dispense state and data, or do they also provide > state control? I suppose it depends on what you would consider covered by "state control," but the general answer would be there's no single rule. Some managers are almost entirely pure data storage/retrieval, while others provide for the retrieval of objects that themselves are fairly complex (such as our cacheable models/controllers). > Do cacheables (state objects) re-connect? Is there any reason why > they shouldnt? Yes, they can be wrapped as well. To the server side instance of the cacheable, a reconnection is just another "new" observer. > Regarding multi-layer wrapping, how do cacheables go from the > original server to the final client without becoming unjellyable in > the middle? (warning - this got very long after I started writing it...) I'm not sure if you meant copyable here instead of cacheable since a cacheable controls it's own transmission of state to the client, as opposed to a copyable which has to be directly jellyable. 
For the cacheable, as long as it implements the Cacheable support, it controls what gets transmitted to any observer, so whether it's the original instance or a client reference to the original instance, it's transmitting the same data. But we have to date handled cacheables with an additional layer. Since we use cacheables typically for models for which users need to monitor changes, we needed something that works the same locally and remotely. We tend to use pydispatcher for signals (or some of our older objects handle the observer pattern directly), and implement our models using that, so all monitoring is technically local. We then have a generic server side wrapper that is a pb.Cacheable, and can observe any such model as its data for the cacheable clients. This might also work by just having the models be directly cacheable, but it's the way the system has grown to date. The key to most of this is that we built a structure where the remote wrapped instance of an object uses the same class definition (directly through inheritance) as the original instance, just with a wrapper mixed-in. Not only does the client side wrapped object "work" like the local object, with the use of callRemote hidden behind the normal interface, but it then can be remotely referenced itself and behave just like the original reference. To try to strip down to a simple example, we were able to encapsulate pretty much everything about the distributed processing part of the system into two package modules - remoteable.py for a server side support, and remote.py for client side. remoteable is thin - we've have copyable/referenceable/cacheable subclasses just to isolate some custom code (lets classes define some fields that should automatically pickle to avoid PB not knowing how to transmit them) and for future expansion. This also houses the server side observer/cacheable wrapper I mentioned above. remote handles the client side. It defines the key wrapper classes (for client side copyable copies, referenceable references, and cacheable caches :-)). These wrapper classes implement reconnections (cacheable/referenceable) and other custom support (like unpickling for copies). They also themselves inherit from the remoteable classes so they can also be passed over a PB session. remote then defines classes that multiply inherit from each of the original classes for those classes that may be distributed, as well as the appropriate wrapper class. In most cases these definitions are simply "pass" but they sometimes define slightly custom functionality for the client side. The only really detailed one is the remote.Registry, which has the knowledge to automatically wrap any retrieved object in the appropriate wrapper. An example may help. Assuming the following classes in remoteable/remote as mentioned above: - - - - - - - - - - - - - - - - - - - - - - - - - remoteable.Copyable, Cacheable, Referenceable - subclasses of pb.* remoteable.ModelCache - wraps an model as a cacheable. We have subclasses of this for each model (so we can register the unjellying) remote.CopyObject - mirror on the remote side for remoteable.Copyable. Is itself also a remoteable.Copyable remote.RemoteWrapper - remote side wrapper for a Referenceable. Is itself also a remoteable.Referenceable. 
- - - - - - - - - - - - - - - - - - - - - - - - - Then, if in a core module in our package (call it aurora.User) in the system we defined some user related objects (that are meant to be distributable), it might look like: - - - - - - - - - - - - - - - - - - - - - - - - - class User(remoteable.Copyable): """A typical data object""" # Attributes and simple methods for manipulating as needed pass class UserModel(remoteable.Cacheable): """A typical cached model""" # Attributes and signal support for notification on changes pass class UserManager(interfaces.IUserManager, remotable.Referenceable): """A typical manager. IUserManager is an interface definition for the public API""" # Methods for accessing/changing User and UserModel objects # Assume that getUser retrieves user and getModel retrievs a UserModel - - - - - - - - - - - - - - - - - - - - - - - - - As it stands above, the user objects would be fully usable in a local context. Access to the UserManager would be through a Registry in which it had been registered, and the UserManager would provide access to either User or UserModel objects. To permit distribution, we'd first add appropriate remote_* (or view_*) entry points to the UserManager. Most would simply mirror their original methods (leaving it up to pb to construct the references). But any methods that returned models would be adjusted so that instead of just returning the model, they wrapped that model in an appropriate remoteable.ModelCache subclass and returned that instead. So something like: - - - - - - - - - - - - - - - - - - - - - - - - - remote_getUser = getUser def remote_getModel(self, *args, **kwargs): return remoteable.UserModel(self.getModel(*args, **kwargs)) - - - - - - - - - - - - - - - - - - - - - - - - - That's the extent to which original objects need to be touched. The only remote entry points are in managers (our referenceables), with data objects being handled by PB as copyable or cacheable. Then in the remote.py module we'd add the following: - - - - - - - - - - - - - - - - - - - - - - - - - class User(aurora.User.User, CopyObject): # CopyObject is our own mirror to remoteable.Copyable pass pb.setUnjellyableForClass(aurora.User.User, User) # Note that a remote copy can be a copy of itself (this handles hops 2+) pb.setUnjellyableForClass(User, User) class UserModel(aurora.User.UserModel, pb.RemoteCache): # Depending on how the model detects state changes, you may need to # do some processing in setCopyableState or you may not. pass pb.setUnjellyableForClass(remoteable.UserModel, UserModel) class UserManager(RemoteWrapper, interfaces.IUserManager): exclude = "remote_getModel" # We still need to locally wrap as a cacheable for hops 2+ def remote_getModel(self, *args, **kwargs): return remoteable.UserModel(self.getModel(*args, **kwargs)) - - - - - - - - - - - - - - - - - - - - - - - - - The last one could probably use some explaining. Our RemoteWrapper class intercepts attribute lookups, and based on any superclass that is one of our interfaces, uses the interface definition to reflect method calls (as well as remote_* versions of them) over callRemote. We permit certain methods to be excluded from the wrapping (via an "exclude" attribute) which lets us handle them locally in the wrapper. In this case, just as the original user object did, we need to wrap the local cache of a UserModel in the cacheable before trying to return to any further remote callers. (This is where having our remote.UserModel be directly a pb.Cacheable might simplify things). 
But the getUser method is basically for free, since PB will handle making a copyable of the original user object which will end up coming across to the client wrapped as a remote.User object. Overall, we don't do that much overriding of the remote methods. One case where we do is for the remote.Registry since it's responsible for always wrapping returned managers in the right remote class. Since our registry lookup method is given an interface to find a manager for, the remote.Registry looks in the local module (remote) for a class definition inheriting from the same interface and then uses that to wrap the returned referenceable, thus more or less transparently making the returned referenceable look just like the original object. These remote.* objects are all themselves copy/cache/referenceable since they also inherit from their remoteable counterparts (or are wrapped by such as in the getModel call). So this can go on for many hops. Now let me see if I can put this together with a few other components. For example, in a two hop setup, you'd get: Server [<--A-->] Client 1 [<---B--->] Client 2 (a) Registry <------ remote.Registry <------ remote.Registry (b) UserManager <--- remote.UserManager <--- remote.UserManager (c) User <---------- remote.User <---------- remote.User (d) UserModel <----- remote.UserModel <----- remote.UserModel (etc...) The connections "A" and "B" are actually paired Server and Client objects of our own (that I mentioned in my last note). During a startup sequence, Server creates the master registry (including instantiating and registering any managers). It then establishes a Server object that provides access to the Registry for network clients. Simultaneously the Registry may be used by local processing. At some point, Client 1 uses its Client object to connect to Server's Server object and retrieve a reference to Registry (a), which is wrapped in a remote.Registry by the Client object. That remote.Registry can then be published by Client 1's Server object (the Server object just knows it has a registry, but can't or needn't distinguish between Registry and remote.Registry), which can be retrieved by Client 2's Client object. Client 2 also gets a remote.Registry, but it's an extra "hop" removed from the original Registry instance. Now sticking with 2 hops, say Client 2 needs some information. First, it'll ask its registry for a reference to the UserManager. The call is reflected by Client 2's remote.Registry up to Client 1, whose remote.Registry reflects it up to Server's Registry. That Registry returns a reference to UserManager which PB sends as a referenceable (shared only between Server and Client 1). The remote.Registry on Client 1 wraps that as a remote.UserManager and then returns it to Client 2, which again causes PB to send a referenceable (shared only between Client 1 and Client 2), which Client 2's remote.Registry again wraps as a remote.UserManager. Now, Client 2 asks its UserManager for a User object. The call reflects up to the Server the same way, but the response this time is a copyable, so PB copies it across Server->Client 1 (which instantiates a remote.User), which is then copied by PB from Client 1->Client 2 (creating another remote.User). And perhaps now Client 2 wants a UserModel (asking the UserManager). 
Call again reflects up to Server, but the remote entry point on the main UserManager wraps the UserModel in a remoteable.UserModel to return to PB, which then treats it as a cacheable down to Client 1, which instantiates it as remote.UserModel. Client 1's remote.UserManager then wraps it in a local remoteable.UserModel to return (as a cacheable) to Client 2, which gets a remote.UserModel. From Server's perspective there is a remoteable.UserModel instance (which is watching signals on the original UserModel) which has Client 1 as a PB observer, and from Client 1's perspective there is a remoteable.UserModel instance (which is watching signals on the local remote.UserModel) which has Client 2 as a PB observer. Still with me? :-) Now let's say there's an outage - say between Server and Client 1. Whatever the next attempt is to use callRemote in any wrapped object will detect the problem and emit a disconnected signal. We also have a periodic Client->Server object "ping" that will pick up an outage in the absence of other calls, which occurs periodically or is triggered automatically upon receiving the disconnected signal from any wrapper object. Upon detection by the Client object of the outage, it then emits its own disconnected signal, upon which various application level operations may take place, officially disconnects the PB socket, and starts attempting to reconnect. Any operations on wrapped objects past this point will generate the normal PB DeadReferenceError exception since we shut down the connection. When Client 1's Client object manages to reconnect, it will immediately re-query the registry from the Server's Server object. Once it has successfully retrieved the new registry, it then emits a newly connected signal which includes the new registry reference. Our remote.Registry object (along with other application level stuff) listens for this signal and upon receipt, updates its internal wrapped reference, and automatically issues a requery to that reference for any managers that had previously been queried through it. When it gets new references to them it updates its internal information, as well as any wrappers that it had previously handed out (it keeps a cache). Once this final step is completed, any application code that had been attempting to use those wrapped references will be working again. The remote copyables don't need any special support since they are still legitimate copies. But remote cacheables also need to be re-connected, and are trickier since it's harder to come up with a single way to retrieve new cacheables, since they are less regular than manager references retrieved through the registry. To date we've handled this on a case by case basis either through the wrapper of the responsible manager, or via application level support for re-retrieving the model upon receipt of the reconnection signal. > P.S. > (...) > The system I had in mind: > > I like and want to adhere to the data/state distinction you made. > Events will be handled locally by remote caches, based on changes in > the cached data (this may be accomplished degenerately, by not > exposing anything other than the event). As mentioned above, in our case we make use of pydispatcher for signals/events within each local application space, using the PB cacheable setup (with wrappers on each end) to reflect the data. This lets client code be written as if it was handling local signals regardless of whether the model object is a cache of a remote object or truly the local instance. 
> Differences from your system: > > I would like all of my referenceables and cacheables in my system to > be re-connecting. This to some extent cancels the need for managers, > because any dynamically changing object is re-connecting. We're pretty much auto-reconnecting (as above). I think you'll probably need something akin to a manager, or at least a registry to perform the reconnection though, or else you won't have a well-defined point at the original to which you can re-issue the original request to get a new referenceable/cacheable on the reconnecting client. > Instead of (or possibly in addition to) manager objects, I want to > have what I call Seed objects, which represent a combination of state, > data and referenceables (all optional). These seeds will be copyable, > and will include the knowledge required to retrieve their components. Sounds reasonable. I still think you'll need a separate construct to "own" access to these Seed objects, or else what is the remote seed reference going to issue a query against in order to rebuild its remote references following an outage? (...) > The main reasons for seeds are: > > I want state, data and control to be provided by the same object for > clarity, and not have each of them require an individual query. For > example, a user will have Name, email address, etc. as data, > online/offline as state, and a send_message method. One thing to consider is the creation of the information/state that the seeds are encapsulating. One of the reasons we ended up going more heavily towards copyable objects (rather than references) is that we can end up creating such objects at various points within the distributed system. So it's very convenient to be able to instantiate a local object instance (say of a user object) in order to begin the process of creating a user, and populating its information, without bringing in the rest of the baggage of the remote connection until it comes time for the "store" operation. Likewise we found it much easier to manage reconnections upon "active" objects with well-defined APIs as opposed to the data objects such as a user record. So even if you have the construct of a seed object to encapsulate remote handling, you might want to consider separating out the data object components into their own class for simpler manipulation, prior to assigning that data to a seed object to become part of the distributed system. > I want state objects, referenceables, and possibly data associated > with a Seed to be retrievable from a Server different from the one who > dispensed the Seed (for example, the database may provide a seed and > the associated user-changeable data, but the state may be kept by a > different server). For example, the users data may be stored in a > database, his online/offline state retrieved from a presence server, > and sending him a message may require connecting to his workstation. This sounds like more of a reason to split some of the functionality into separable entities than trying to combine them all into a single seed object, although I could probably see some argument for combining in order to hide the origin of the data from the end user. But then you're going to have to keep a lot of information in that seed object about where each of its information pieces originally came from and be able to reconstitute the references when needed. And handle what happens if you lose contact with the owner of one piece of the information but not another. 
To a large extent, permitting this sort of breakout is where we headed with our registry/manager structure. To a client, it only has a registry reference, and asks it for managers in order to retrieve/manipulate state. But it doesn't know how the registry locates managers nor how the managers locate their state. So when I ask my "local" registry for a user manager, for all I know that request is replicated across 5 hosts and I eventually get what appears to be a local user manager but is a remote reference to an object instance 5 hosts away. At the same time that same registry when asked for a session manager, might return me a local object from my own local process. The decision about where the managers are located is up to top level application code that instantiates the registry and makes it available on the network (and we have various registry variants for different ways of combining local and remote managers). This lets each "hop" along the way make some of its own decisions independent of other parts of the system, with a given node running a registry in control of what any nodes "behind" it sees, or even what managers are available. This can certainly be incorporated into a single seed object, but I think you'll have to make some decisions about how the original data sources are configured (and does that itself need to be capable of being distributed). If you can own that configuration amongst various centrally maintained servers, and you're operating from primarily a hub and spoke system it'll probably work well. But if you might end up with independently operating clusters of nodes or want to distribute administrative domains over various sorts of data, it might be more of a challenge. Hope this has spurred some more thoughts. Best of luck with your project! -- David
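To make the wrapper idea described in this thread concrete, here is a toy sketch (not the poster's actual RemoteWrapper) of a client-side proxy that forwards attribute access to a pb.RemoteReference through callRemote and lets some reconnection layer swap in a fresh reference after an outage; the manager and method names in the usage comment are made up:

from twisted.spread import pb

class RemoteWrapper:
    def __init__(self, remote_ref):
        self._remote = remote_ref            # a pb.RemoteReference

    def _reconnect(self, new_remote_ref):
        # Called by whatever notices the outage and re-fetches the
        # referenceable, e.g. by re-querying a registry.
        self._remote = new_remote_ref

    def __getattr__(self, name):
        # Any unknown attribute becomes a remote method call returning a
        # Deferred, so client code can treat the object as if it were
        # local apart from the asynchronous result.
        def call(*args, **kwargs):
            return self._remote.callRemote(name, *args, **kwargs)
        return call

# Client code would then read something like:
#   user_manager = RemoteWrapper(ref_from_registry)
#   d = user_manager.getUser("alice")        # really callRemote("getUser", ...)
#   d.addCallback(handle_user)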
http://twistedmatrix.com/pipermail/twisted-python/2005-July/011046.html
CC-MAIN-2014-41
refinedweb
3,593
51.89
Overview
pdf2image subscribes to the Unix philosophy of "Do one thing and do it well", and is only used to convert PDF into images. You can convert from a path or from bytes with the aptly named convert_from_path and convert_from_bytes.
from pdf2image import convert_from_path, convert_from_bytes
images = convert_from_path("/home/user/example.pdf")
# OR
with open("/home/user/example.pdf", "rb") as pdf:
    images = convert_from_bytes(pdf.read())
This is the most basic usage, but the converted images will exist in memory, and that may not be what you want, since you can exhaust resources quickly with big PDFs. Instead, use an output_folder to avoid using the memory directly. The images will still be readable, and Pillow takes care of loading them on demand.
import tempfile
from pdf2image import convert_from_path
with tempfile.TemporaryDirectory() as path:
    images_from_path = convert_from_path("/home/user/example.pdf", output_folder=path)
Got it? Now by default pdf2image uses PPM as its file format. While the logic is abstracted by Pillow, this is still a raw file format that has no compression and is therefore quite big. Why not use good old JPEG?
images_from_path = convert_from_path("/home/user/example.pdf", fmt="jpeg")
Supported file formats are jpeg, png, tiff and ppm. For a more in-depth description of every parameter, see the reference page.
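The pieces above combine naturally; the following small sketch (the file path is made up) writes JPEG pages into a temporary folder instead of keeping uncompressed PPM images in memory, then saves each returned Pillow image:

import tempfile
from pdf2image import convert_from_path

with tempfile.TemporaryDirectory() as path:
    pages = convert_from_path(
        "/home/user/example.pdf",
        output_folder=path,
        fmt="jpeg",
    )
    # Each entry is a Pillow Image object, loaded on demand from the folder.
    for i, page in enumerate(pages):
        page.save(f"page_{i}.jpg", "JPEG")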
https://pdf2image.readthedocs.io/en/latest/overview.html
CC-MAIN-2022-05
refinedweb
209
59.5
[PM] Remove the old 'PassManager.h' header file at the top level of LLVM's include tree and the use of using declarations to hide the 'legacy' namespace for the old pass manager. This undoes the primary modules-hostile change I made to keep out-of-tree targets building. I sent an email inquiring about whether this would be reasonable to do at this phase and people seemed fine with it, so making it a reality. This should allow us to start bootstrapping with modules to a certain extent along with making it easier to mix and match headers in general. The updates to any code for users of LLVM are very mechanical. Switch from including "llvm/PassManager.h" to "llvm/IR/LegacyPassManager.h". Qualify the types which now produce compile errors with "legacy::". The most common ones are "PassManager", "PassManagerBase", and "FunctionPassManager".
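For out-of-tree users the update looks roughly like the sketch below; the function and the surrounding pass setup are placeholders, and only the include and the legacy:: qualification reflect the actual change:

// Before:
//   #include "llvm/PassManager.h"
//   PassManager PM;
//   FunctionPassManager FPM(&M);
//
// After:
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"

void runPasses(llvm::Module &M) {
  llvm::legacy::PassManager PM;               // was: PassManager
  llvm::legacy::FunctionPassManager FPM(&M);  // was: FunctionPassManager
  // ... add passes exactly as before ...
  FPM.doInitialization();
  PM.run(M);
}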
https://reviews.llvm.org/rL229094
CC-MAIN-2018-05
refinedweb
143
57.98
Day 143 — Getting Started With AWS for Unity Hey and welcome! In this one we’re going to look into getting set up with AWS (Amazon Web Services) and getting it to work with Unity. We’re doing this so that we can make use of the cloud services offered by Amazon for use in our app that we’re building. If you’re wanting to gloss over this article you can also follow the documentation provided for Amazon for this: Set Up the AWS Mobile SDK for Unity To get started with the AWS Mobile SDK for Unity, you can set up the SDK and start building a new project, or you can… docs.aws.amazon.com To get things started you will first need to create yourself an AWS account at the following link, they will ask for your card details and take 1$(USD) for authorization but on the last step of the registration you can choose the free option and you won’t need to pay for anything for this article. Cloud Computing Services - Amazon Web Services (AWS) Whether you're looking for compute power, database storage, content delivery, or other functionality, AWS has the… aws.amazon.com Once you’ve finished all that it should direct you over to your management console for AWS which will come back to later, for now you can click on the following link to download a zip folder of some AWS packages for Unity: Double click or drag into the Unity project your AWSSDK.S3 package and get that imported. When that’s done you’ll have a AWSSDK, Examples and a Plugins folder, I recommend checking out the Examples folder for the time being as it has some code showing how to get and post objects to S3 bucket. Speaking of code, let’s go ahead and create a new script called AWSManager and create an empty game object to attach to it with the same name. Next add in this code to your script: using Amazon;public class AWSManager : MonoBehaviour { private void Awake() { UnityInitializer.AttachToGameObject(this.gameObject); } } This bit of code gets the AWS Mobile SDK usable with our Unity project! Last bit we need to do is obtain an Identity Pool ID which will allow us to access the AWS services without having to use our personal credentials in the project. To get started with this you will need to head over to the following link: On this page go ahead and click on the Manage Identity Pools options and give this pool the name of “Service Adjustment App”. Next you can check the box to enable access to unauthenticated identities and then click on the create pool option which will lead you to a page about IAM roles which is fine to allow as is. With that done our project has now been set up with AWS and we’re now ready to get started with building the app!
https://connorgamedev.medium.com/day-143-getting-started-with-aws-for-unity-fe96ddf6981?source=post_internal_links---------3----------------------------
CC-MAIN-2022-27
refinedweb
494
58.45
MyMASS 2004 1.001
Category: Business & Finance
File size: 3.21MB
Platform: Win98, ME, NT 4.x, 2000, XP
License: Shareware
Price: $29.00
Downloads: 125
Date added: 2004-10-22
Publisher: IntelliRel
Related software: Iamanywhere (access your data anywhere); Ant Access Viewer (view, edit, and export data from a Microsoft Access database); Tray DB (a system-tray program for accessing data in Microsoft Access); an iSQL/Enterprise Manager-style tool for Microsoft Access 2000 with the Microsoft Data Engine; a tool to import data from MS Excel and MS Access to DB2; an ActiveX DLL for TCP/IP client-server data access from OLE DB providers; Designer (design complex MS Access databases through a plain-language question-and-answer process); Data Juggler (data integration software that automates repetitive web and data tasks).
http://wareseeker.com/Business-Finance/mymass-2004-1.001.zip/377252
CC-MAIN-2016-07
refinedweb
380
52.66
weiqi he
Posts created from this profile:
How to list all members and properties of a class with their full information? Now I am using the 'File Structure' window. It is great! With its help, it is possible to see modifiers such as public, protected, private, and override through different icons. However, it cannot indicate modifiers such as stat...
4.5 or 5.1.1 better for VS2008? Considering performance. I'm now using 4.5.1289 in VS2008. Its features are enough for me, but its performance is not perfect. I want to know if 5.1.1 is better for VS2008, just considering performance. If 5.1.1 is faster...
Refactor: a class can be moved to an outer scope but CANNOT be moved to an inner scope. As a result, this refactoring cannot be reversed. Does anybody have the same problem? Here, "move to outer scope" means: FROM: public class Class3 { public class Class3Inner { ...
How to refactor the visibility of an abstract member? I want to change the visibility of an abstract member from public to protected. All overrides should be changed too. How can I do this with ReSharper? Thanks.
https://resharper-support.jetbrains.com/hc/en-us/profiles/2135525115-weiqi-he
CC-MAIN-2020-24
refinedweb
236
70.09
Improve RelNode validation by: - adding a context to the RelNode.isValid method; - enabling the Join.isValid method (it was previously renamed to isValid_, thus disabled). The context to isValid will allow the validator to deduce what correlation variables are available (namely, those set by a RelNode between this one and the root). The context is optional; if null, the isValid method does the best it can. RexInputRef s in the condition of a null-generating side of an outer join currently may be nullable when they should not be; the row type in the namespace is currently made nullable when actually only particular uses of the namespace should be nullable. We introduce class ScopeChild to hold the ordinal, row type and nullability of a use of a namespace.
https://issues.apache.org/jira/browse/CALCITE-1555
CC-MAIN-2020-16
refinedweb
128
53.31
Nov 01, 2010 06:59 PM|Eric_H|LINK Hello, I was hoping to get some help in regards to controlling formatting through the use of metadata classes. I have a metadata class i have created and tied to a partial class off of an entity framework object with the intent of controlling a single field's display. (Making a date look like a short date) So far nothing has altered it from showing as a regular long date. The Entity is: NewsItem -Id (int) -PostDate (DateTime) -DescriptionText (string) These are my attempts at getting it to format when viewed in a list: [MetadataType(typeof(NewsItemMetadata))] public partial class NewsItem { } public class NewsItemMetadata { //[DataType(DataType.Time)] //[DisplayFormatAttribute(ApplyFormatInEditMode = true, DataFormatString = "{0:d}")] //[DataTypeAttribute(DataType.Date)] // Display date data field in the short format 11/12/08. [DataType(DataType.Date)] [DisplayFormat(ApplyFormatInEditMode = true, DataFormatString = "{0:d}")] public object PostDate { get; set; } } Any ideas as to why this won't take? Thanks for any/all help. All-Star 17916 Points MVP Nov 02, 2010 04:40 AM|sjnaughton|LINK Hi Eric, this is usually a namespace issue, how I usually prove this is by adding a bogus property to the metadata class, if the metadata classes are in the correct namespace the app with throw an error if not in the correct namespace then no erro will be thrown. Give it a try, it can save a lot of hair pulling. Dynamic Data 4 Nov 02, 2010 01:23 PM|Eric_H|LINK Hey guys, Thanks for the help, my app is only 1 project in size, and both classes are inside of the projectname.Models namespace/folder. However, i created a fake property (public object MumboJumbo) and it built. I don't know why the namespaces wouldn't resolve to the same thing. =/ 4 replies Last post Nov 02, 2010 01:32 PM by Eric_H
https://forums.asp.net/t/1619050.aspx?Displayformat+not+being+applied
CC-MAIN-2020-50
refinedweb
312
59.64
How can I migrate this code to Odoo? (_get_image function)
How can I migrate this code to version 8? How can I get the "ids" variable?
def _get_image(self, cr, uid, ids, context=None):
    result = dict.fromkeys(ids, False)
    for obj in self.browse(cr, uid, ids, context=context):
        result[obj.id] = tools.image_get_resized_images(obj.image, avoid_resize_medium=True)
    return result
def _set_image(self, cr, uid, id, name, value, context=None):
    return self.write(cr, uid, [id], {'image': tools.image_resize_image_big(value)}, context=context)
_columns = {
    'image': fields.binary("Image", help="This field holds the image used as image for the category, limited to 1024x1024px."),
    'image_medium': fields.function(_get_image, fnct_inv=_set_image, string="Medium-sized image", type="binary", multi="_get_image",
        store={'product.template': (lambda self, cr, uid, ids, c={}: ids, ['image'], 10)},
        help="Medium-sized image of the category. It is automatically resized as a 128x128px image, with aspect ratio preserved. Use this field in form views or some kanban views."),
}
Thanks.
Hi Chesucr, you can migrate your code as shown below. You just need to change your functions like this and your code will be migrated to v8:
@api.multi
def _get_image(self, name, args):
    return dict((p.id, tools.image_get_resized_images(p.image)) for p in self)
@api.one
def _set_image(self, name, value, args):
    return self.write({'image': tools.image_resize_image_big(value)})
I hope you will get your result. Note: please refer to the Python file at odoo => openerp => addons => base => res => res_partner.py and look at the class res_partner.
Hmm, I think you copied the code from there, but that doesn't work. Thanks anyway. Moreover, with @api.one and @api.multi you only need 'self' as an argument. With the function _set_image I get this error: RuntimeError: maximum recursion depth exceeded. I will go on with my research.
As I wrote in the note, I have seen this in that file. I have not copied it from there.
OK thanks, I owe you one ;) I tried the functions with this field and it worked:
image = fields.Binary("Second product image", compute="_get_image", multi="_get_image", inverse="_set_image")
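For completeness, one way the whole block could look in the new (v8) API is sketched below. This is only an illustration pieced together from the thread: the @api.depends decorator, the store=True flag, and the 'image_medium' key returned by tools.image_get_resized_images are my assumptions, not details confirmed in the answers above.

from openerp import models, fields, api, tools

class ProductTemplate(models.Model):
    _inherit = 'product.template'

    image = fields.Binary("Image",
        help="This field holds the image used as image for the category, limited to 1024x1024px.")
    image_medium = fields.Binary("Medium-sized image",
        compute='_get_image', inverse='_set_image', store=True)

    @api.one
    @api.depends('image')
    def _get_image(self):
        # Assumption: image_get_resized_images returns a dict keyed by
        # 'image_medium' (and 'image_small'), as in the standard addons.
        self.image_medium = tools.image_get_resized_images(self.image)['image_medium']

    @api.one
    def _set_image(self):
        # Mirrors the old fnct_inv: write the big image from the medium one.
        self.image = tools.image_resize_image_big(self.image_medium)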
https://www.odoo.com/forum/help-1/question/how-can-i-migrate-this-code-to-odoo-get-image-function-81465
CC-MAIN-2018-17
refinedweb
400
61.83
After the array, perhaps the most important data structure is the stack. A stack structure restricts dramatically how elements are inserted, retrieved, and removed: The most recently inserted element in the stack is the only one that can be retrieved or removed. (Thus, if you wish to retrieve an element inserted long ago, you must first remove all the elements that were inserted after the desired one.) This strategy of removal and retrieval is sometimes called, ``last in, first out.'' work force. Many computer algorithms work best with stacks --- stacks are used for An example of 2. is the ``undo'' button on most text editors, which lets a person undo a typing error, or the ``back'' button on a web browser, which lets a user backtrack to a previous web page. Another example is a searching algorithm, which searches a maze and keeps a history of its moves in a stack. If the algorithm makes a false (bad) move, the move can be undone by retrieving the previous position from the stack. We begin with a famous example that uses a stack to remember partially completed computational tasks: Evaluating an arithmetic expression written in postfix notation (``Lukasiewicz notation''). Postfix notation is an parenthesis-free way of writing arithmetic expressions, where one places the operator symbol after the operator's two operands. For example, the addition of 3 to 2 is written 3 2 +, and the multiplication of the result by 4 is written 3 2 + 4 *. Remarkably, parentheses are never needed. An example like ((3 + 2) * 4) / (5 - 1)is written 3 2 + 4 * 5 1 - /To see why parentheses are unnecessary, let's manually compute the expression: 3 2 + 4 * 5 1 - / => 5 4 * 5 1 - / => 20 5 1 - / => 20 4 / => 5We see that an operator evaluates with the two operands that immediately precede it. This explains why the division operator is written last in the original expression, because the division is performed only after all the other subexpressions are evaluated. Postfix arithmetic is more than an interesting oddity --- it is the standard format for writing arithmetic expressions that must be executed by a CPU. Recall that the CPU's arithmetic-logic unit works with the CPU's registers to do arithmetic. A CPU cannot compute the result of the expression, ((3 + 2) * 4) / (5 - 1), but it can compute the result of 3 2 + 4 * 5 1 - /, because the operands and operators are now arranged in the correct order for loading numbers into registers and doing the operations. Here is an assembly code sequence that tells the CPU how to compute the postfix expression: loadconst R1 3 // load Register 1 with constant 3 loadconst R2 2 // load Register 2 with constant 2 add R2 R1 // add Register 1 to Register 2 loadconst R1 4 // etc. multiply R2 R1 loadconst R1 5 loadconst R3 1 subtract R1 R3 divide R2 R1The register names, R1, R2, R3, are a bit distracting --- notice the pattern hidden in the instructions (erase the register names): loadconst 3 loadconst 2 add loadconst 4 multiply loadconst 5 loadconst 1 subtract divideIt is exactly the postfix expression! Indeed, the simplified version of the assembly code is called stack code or byte code, and it is in fact the format of code embedded in the .class files constructed by the Java compiler. Because postfix format is ideal for computation with a CPU, the Java compiler not only checks the grammar of your Java program, it also translates the program into postfix format --- even the assignments, conditionals, and loops are reformatted into postfix format. If you write a program like this: ... 
x = x + 1; if ( x > 2 ) { y = 2 * ( x - 3 ); } ...the Java compiler produces the postfix-reformatted version: ... x 1 + =x ; x 2 > if 2 x 3 - * =y ; ...and then writes the byte code (stack code) for the postfix version into the program's .class file: ... load x loadconst 1 add storeinto x load x loadconst 2 greaterthan test_and_jump_if_false_to LabelA loadconst 2 load x loadconst 3 subtract multiply storeinto y LabelA: ... The example leaves us with two fundamental questions: Recall again that the postfix version of ((3 + 2) * 4) / (5 - 1) is 3 2 + 4 * 5 1 - /Figure 2 illustrates how we might use a stack to compute the result of this expression --- the stack holds the results of subexpressions that are awaiting further computation. In the Figure below, the stack is drawn as if it were a stack of dinner plates---it grows vertically. The arithmetic expression shrinks horizontally as it is read and computed. FIGURE 2: postfix expression evaluation with a stack data structure======== Stack Expression | | --- 3 2 + 4 * 5 1 - / (empty) | 3 | 2 + 4 * 5 1 - / --- | 2 | | 3 | + 4 * 5 1 - / --- | 5 | 4 * 5 1 - / --- | 4 | | 5 | * 5 1 - / --- | 20| 5 1 - / --- | 5 | | 20| 1 - / --- | 1 | | 5 | | 20| - / --- | 4 | | 20| / --- | 5 | (finished) --- ENDFIGURE===============================================================The symbols of the input expression are read, one by one; numerals are inserted onto the ``top'' of the stack; and operators retrieve the top two numerals from the stack, perform the operation, and insert the result onto the stack. This is simple and mechanical, and it is easy to write a computer algorithm that does the steps: begin with an empty stack and an input stream. while there is more input to read, do: read the next input symbol; if it's a numeral, then push it onto the stack; if it's an operator then pop two numerals from the stack; perform the operation on the numerals; push the result; end while; // the answer of the expression is waiting for you in the stack: pop the answer; Indeed, the algorithm sketched above forms the heart of the Java Virtual Machine, which ``reads'' and ``executes'' your Java program. Recall that the Java Virtual Machine (JVM) is itself a computer program, whose job is to read byte code and do the instructions. As we saw in the previous section, both Java statements as well as expressions are translated by the Java compiler into byte code. The JVM reads the byte code instructions, and uses a stack, just like the one we used in the arithmetic example, to compute the results of arithmetic expressions. The stack is sometimes called the temporary-value stack, because the subresults of arithmetic expressions are ``temporary'' and the final result is popped and stored into some variable's storage cell. The algorithm for the JVM looks something like this: begin with an empty stack and the byte code. while there is more byte code to read, do: read the next byte-code instruction; if it's loadconst n, then push n onto the stack; if it's load x, then look up x's value in storage and push it onto the stack; if it's an operator then pop two numerals from the stack; perform the operation on the numerals; push the result; if it's store x, then pop a numeral and store it in x's cell in storage; if it's test_and_jump_if_false_to LabelL then pop the stack and see if the value is false (0); if it is, reset the JVM's instruction counter to LabelL ... etc. ... end while;This algorithm is written in machine code, and it is read and executed by the CPU. 
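Here is the first of those two loops, the arithmetic postfix evaluator, written out as a small Java sketch; for brevity it uses java.util.Stack and assumes the expression's symbols are separated by single spaces (the chapter's own stack class, introduced below, would work the same way):

import java.util.Stack;

/** PostfixEvaluator is a sketch of the evaluation algorithm above. */
public class PostfixEvaluator {
    public static int evaluate(String expression) {
        Stack<Integer> stack = new Stack<Integer>();
        for (String symbol : expression.split(" ")) {
            if (symbol.equals("+") || symbol.equals("-")
                    || symbol.equals("*") || symbol.equals("/")) {
                int right = stack.pop();    // most recently pushed operand
                int left = stack.pop();
                int result;
                if (symbol.equals("+"))      { result = left + right; }
                else if (symbol.equals("-")) { result = left - right; }
                else if (symbol.equals("*")) { result = left * right; }
                else                         { result = left / right; }
                stack.push(result);
            } else {
                stack.push(Integer.parseInt(symbol));   // a numeral
            }
        }
        return stack.pop();    // the answer is the last value left
    }

    public static void main(String[] args) {
        System.out.println(evaluate("3 2 + 4 * 5 1 - /"));   // prints 5
    }
}

The JVM's loop is the same idea, extended with load, store, and jump instructions.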
So, the CPU executes the JVM, which reads and executes byte code. It all works well because of a stack! TABLE 3: specification of a stack======================================== ENDTABLE=============================================================The names, push, pop, and top are traditional; isEmpty is the stack's ``length'' operation. A stack can be implemented in various ways; we start with Figure 4, which uses an array, s, to collect the stack's elements. An extra variable, top, is used to remember which array element contains the most recently inserted object. FIGURE 4: array-based implementation of stack============================ /** Stack0 models a stack data structure */ public class Stack0 { private int INITIAL_SIZE = 5; private int top; // how many elements in the stack private Object[] s; // the stack // invariants: elements on stack are s[top-1] s[top-2] ... s[0] // top is always in range 0 .. s.length-1 /** Constructor Stack0 creates a stack. */ public Stack0() { s = new Object[INITIAL_SIZE]; top = 0; } /** push inserts a new element onto the stack * @param ob - the element to be added */ public void push(Object ob) { if ( top == s.length ) { // array is full---create a new one to hold more objects: Object[] temp = new Object[s.length * 2]; for ( int j = 0; j != top; j = j+1 ) { temp[j] = s[j]; } // copy elements into temp s = temp; // set s to hold address of temp } s[top] = ob; top = top + 1; } /** pop removes the most recently added element * @return the element removed from the stack * @exception RuntimeException if stack is empty */ public Object pop() { if ( top == 0 ) { throw new RuntimeException("Stack error: stack empty"); } top = top - 1; return s[top]; } /** top returns the identity of the most recently added element * @return the element * @exception RuntimeException if stack is empty */ public Object top() { if ( top == 0 ) { throw new RuntimeException("Stack error: stack empty"); } return s[top - 1]; } /** isEmpty states whether the stack has 0 elements. * @return whether the stack has no elements */ public boolean isEmpty() { return ( top == 0 ); } } ENDFIGURE============================================================ If we create a stack, operands, from Figure 4 to perform the computation in Figure 2, the eighth configuration in that Figure would look like this in computer storage: ---- Stack0 operands ==| a1 | ---- a1 : Stack0 -------------- --- | int INITIAL_SIZE ==| 5 | | --- --- | int top ==| 3 | | ------- | Object[] s ==| a2 | | ---- | ... a2 : Object[5] -------------- | 0 1 2 3 4 | ----------------------------- | | a7 | a8 | a9 | null | null | | ----------------------------- a7 : Integer a8: Integer a9: Integer --------------- -------------- -------------- | (holds 20) | (holds 5) | (holds 1)That is, array s holds the addresses of three Integer objects, and top remembers that the stack holds three objects. (Review the section, ``class Object and Wrappers,'' in Chapter 9 to learn why the integers, 20, 5, and 1, must be embedded into Integer objects before they are inserted into the stack.) The next step in Figure 2 removes two objects from the stack (using pop twice), does a subtraction, and inserts a 4 (using push). The resulting configuration looks like this: ---- Stack0 operands ==| a1 | ---- a1 : Stack0 -------------- --- | int INITIAL_SIZE ==| 5 | | --- --- | int top ==| 2 | | ------- | Object[] s ==| a2 | | ---- | ... 
a2 : Object[5] -------------- | 0 1 2 3 4 | ----------------------------- | | a7 | a10 | a9 | null | null | | ----------------------------- a7 : Integer a8 : Integer a9 : Integer a10 : Integer --------------- -------------- -------------- -------------- | (holds 20) | (holds 5) | (holds 1) | (holds 4)Because the value of top correctly marks the top of the stack, there is no need to erase the value in element 2 of the array---this value will never again be used and will be overwritten if a push is executed next. A stack is the key data structure for translating a program into postfix format (and then, into byte code). To keep it simple, think about how we might translate an infix arithmetic expression, like ((3 + 2) * 4) / (5 - 1) , into 3 2 + 4 * 5 1 - /. This time, the stack holds operator symbols (rather than the operands); the algorithm goes like this: begin with an empty stack and an input stream. while there is more input, do: read the next input symbol; if it's a numeral, then print it to the output filestream; if it's an operator, then push it onto the stack; if it's a '(', then discard it; if it's a ')', // marks the end of an expression! then pop an operator from the stack and print it end while;As an exercise, use the algorithm to translate ((3 + 2) * 4) / (5 - 1) . You can see from the algorithm that the parentheses (especially the right one) plays a critical role in directing stack pops and translation. This is not an accident --- stacks are used to translate so-called bracket languages, and both arithmetic and Java are examples of bracket languages. It is not an accident that Java makes you insert all those tedious { and } symbols and punctuation like ; and keywords like class and while. These are brackets that the Java compiler uses to disassemble a Java program and rebuild it in postfix form! As an exercise, you should try to modify the above algorithm so that it can translate a baby-Java language of arithmetic, assignments, and while-loops into postfix format. If you can do this, you are very close to writing your own Java compiler. Stacks are also used to remember the paths that one travels when one ``searches'' through a graph, network, or tree. Here is a simple example: Say that you must list all four-letter word permutations of the letters, 'a', 'b', 'c' and 'd'. To think of the solution systematically, you might draw a ``tree,'' whose paths show the choices for a word's first letter, second letter, and so on. A sketch of the tree appears in Figure 5. FIGURE 5: search tree for permutations of "abcd"=========================== "" (empty string) / / \ \ First letter: a b c d / | \ / | \ / | \ /|\ / | \ / | \ Second letter: b c d a c d ... ... / \ / \ / \ / \ / \ / \ / \ / \ / \ Third letter: c d b d b c ... ... ... | | | | | | (Last letter:) d c d b c b ... ... ... ENDFIGURE============================================================== Such a tree is sometimes called a search tree, and the paths through the tree are called the search space. (Think about an adventure game where you must open the doors, a, b, c, and d, and the order in which you open the doors affects the outcome of the game. The smartest way for a computer to play the game is to study all possible sequences of moves before making its first move. The computer would generate the search tree seen above.) To ``search'' the tree for all four-letter permutations, you follow the paths from the tree's top (its root) to its end points (it's leaves). 
At the top of the tree, you select one of the four paths to reach a word's first letter; say that you select the leftmost path, choosing a. This gives you three possible paths to the second letter, and so on. Of course, to generate all permuations, you must traverse all the paths of the tree. Traversal of all paths is simply done with a stack---as one path is traversed, the stack remembers the paths that must be explored later. Figure 6 illustrates the traversal process, where the stack is drawn on its side, its ``top'' positioned to the right. FIGURE 6: tree traversal that generates permutations===================== Stack contents Tree traveral steps -------------- ------------------------ "" Start with a stack that holds the empty string. (empty) Pop stack. The valued popped, "", represents the position at the top of the tree. Next, extend string, "", by a, b, c, d, and push the resulting four strings: "d" "c" "b" "a" Pop stack. Value "a" represents the a-position in "d" "c" "b" the tree. Extend "a" with b, c, d, and push: "d" "c" "b" "ad" "ac" "ab" Pop stack, extend "ab" by c and d, and push: "d" "c" "b" "ad" "ac" "abd" abc" Pop stack, and extend "abc" by d, and push: "d" "c" "b" "ad" "ac" "abd" abcd" Pop stack. The value popped, "abcd", is a completed string, so output it. "d" "c" "b" "ad" "ac" "abd" Pop stack, and extend "abd" by c, and push: "d" "c" "b" "ad" "ac" "abdc" Pop stack. The value popped, "abdc", is a completed string, so output it. "d" "c" "b" "ad" Pop stack, extend "ac" by b and d, and push: "d" "c" "b" "ad" "acd" "acb" etc. ENDFIGURE=============================================================The Figure shows that the search traverses the paths of the tree from left-to-right, completely to the end points (``leaves''). This form of traversal is called depth-first search because it descends to the ``depths'' of the tree as quickly as possible. A stack naturally supports depth-first search. Search trees like the one in Figure 5 are used to represent choices of possible moves in a computer game; a computer ``player'' of say, tic-tac-toe (noughts and crosses), can use such a tree to systematically explore all sequences of moves and calculate all possible outcomes. Stacks help the player remember which move sequences remain to be analyzed. Another well-known example of search, called the ``travelling salesman problem,'' finds the shortest path from a start city to a destination city on a road map; the paths between cities are summarized as a tree, and a stack helps a program calculate the total distance travelled in each path from the start city to the next city to the next, etc., to the destination. A bit of thought will convince you that the search tree itself need not be constructed when programming a solution to the travelling salesman problem: A table that lists adjacent cities and the distances between them will suffice for building the stack. (This is also true for the permutation example, where the letters in the string can be consulted in place of the search tree.) API for Stack Directory of array-implemented Stack API for postfix application Directory of postfix translation/evaluation application Directory of permutation-generator application
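For completeness, here is a small Python sketch of the depth-first traversal shown in Figure 6: an explicit stack of partial strings drives the search, exactly as in the table above. (Python is used only to keep the sketch short; the same idea works with a Stack0 holding String objects.)

def permutations(letters):
    results = []
    stack = [""]                      # start with a stack holding the empty string
    while stack:
        partial = stack.pop()         # pop the most recently pushed partial word
        if len(partial) == len(letters):
            results.append(partial)   # a completed string: output it
        else:
            # Extend the partial word by each unused letter and push the results.
            # Pushing in reverse order makes 'a' end up on top, matching Figure 6.
            for ch in reversed(letters):
                if ch not in partial:
                    stack.append(partial + ch)
    return results

print(permutations("abcd"))           # all 24 four-letter permutations, "abcd" first

As in Figure 6, the stack remembers the branches of the search tree that remain to be explored; no tree data structure is ever built.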
http://people.cis.ksu.edu/~schmidt/300s05/Lectures/Week3.html
CC-MAIN-2015-35
refinedweb
2,831
53.85
DispatcherTimer In WinForms, there's a control called the Timer, which can perform an action repeatedly within a given interval. WPF has this possibility as well, but instead of an invisible control, we have the DispatcherTimer control. It does pretty much the same thing, but instead of dropping it on your form, you create and use it exclusively from your Code-behind code. The DispatcherTimer class works by specifying an interval and then subscribing to the Tick event that will occur each time this interval is met. The DispatcherTimer is not started before you call the Start() method or set the IsEnabled property to true. Let's try a simple example where we use a DispatcherTimer to create a digital clock: <Window x: <Grid> <Label Name="lblTime" FontSize="48" HorizontalAlignment="Center" VerticalAlignment="Center" /> </Grid> </Window> using System; using System.Windows; using System.Windows.Threading; namespace WpfTutorialSamples.Misc { public partial class DispatcherTimerSample : Window { public DispatcherTimerSample() { InitializeComponent(); DispatcherTimer timer = new DispatcherTimer(); timer.Interval = TimeSpan.FromSeconds(1); timer.Tick += timer_Tick; timer.Start(); } void timer_Tick(object sender, EventArgs e) { lblTime.Content = DateTime.Now.ToLongTimeString(); } } } The XAML part is extremely simple - it's merely a centered label with a large font size, used to display the current time. Code-behind is where the magic happens in this example. In the constructor of the window, we create a DispatcherTimer instance. We set the Interval property to one second, subscribe to the Tick event and then we start the timer. In the Tick event, we simply update the label to show the current time. Of course, the DispatcherTimer can work at smaller or much bigger intervals. For instance, you might only want something to happen every 30 seconds or 5 minutes - just use the TimeSpan.From* methods, like FromSeconds or FromMinutes, or create a new TimeSpan instance that completely fits your needs. To show what the DispatcherTimer is capable of, let's try updating more frequently... A lot more frequently! using System; using System.Windows; using System.Windows.Threading; namespace WpfTutorialSamples.Misc { public partial class DispatcherTimerSample : Window { public DispatcherTimerSample() { InitializeComponent(); DispatcherTimer timer = new DispatcherTimer(); timer.Interval = TimeSpan.FromMilliseconds(1); timer.Tick += timer_Tick; timer.Start(); } void timer_Tick(object sender, EventArgs e) { lblTime.Content = DateTime.Now.ToString("HH:mm:ss.fff"); } } } As you can see, we now ask the DispatcherTimer to fire every millisecond! In the Tick event, we use a custom time format string to show the milliseconds in the label as well. Now you have something that could easily be used as a stopwatch - just add a couple of buttons to the Window and then have them call the Stop(), Start() and Restart() methods on the timer. Summary There are many situations where you would need something in your application to occur at a given interval, and using the DispatcherTimer, it's quite easy to accomplish. Just be aware that if you do something complicated in your Tick event, it shouldn't run too often, like in the last example where the timer ticks each millisecond - that will put a heavy strain on the computer running your application. Also be aware that the DispatcherTimer is not 100% precise in all situations. 
The tick operations are placed on the Dispatcher queue, so if the computer is under a lot of pressure, your operation might be delayed. The .NET framework promises that the Tick event will never occur too early, but it can't promise that it won't be slightly delayed. However, for most use cases, the DispatcherTimer is more than precise enough. If you need your timer to have a higher priority in the queue, you can set the DispatcherPriority by passing one of its values to the DispatcherTimer constructor. More information about it can be found in this MSDN article.
https://www.wpf-tutorial.com/sr/96/misc-/the-dispatchertimer/
CC-MAIN-2021-39
refinedweb
624
56.05
FilterReader

What is Filtering?

A normal reader/writer class can read or write whatever we ask of it. But suppose we have a special reading or writing requirement. For example: we want to read only those lines from a file that end with "." and skip the rest. For this kind of scenario, where the same fixed processing has to be repeated every time, we can create our own FilterReader that does the job for us, so that we do not have to do it manually again and again. FilterReader is an abstract class that has no abstract methods; we override the methods we need and put our own code in them. We are going to discuss another case here.

Aim

Write a program using FilterReader which removes tags from the file content, e.g. <tag> should be removed.

import java.io.*;

class NoTagReader extends FilterReader {
    boolean intag = false;

    public NoTagReader(Reader r) {
        super(r);
    }

    public int read(char[] buf, int from, int len) throws IOException {
        int numchars = 0;
        while (numchars == 0) {
            numchars = in.read(buf, from, len);
            if (numchars == -1) {
                return -1;
            }
            // Copy characters through, dropping anything between '<' and '>'.
            int last = from;
            for (int i = from; i < from + numchars; i++) {
                if (!intag) {
                    if (buf[i] == '<')
                        intag = true;
                    else
                        buf[last++] = buf[i];
                } else if (buf[i] == '>') {
                    intag = false;
                }
            }
            numchars = last - from; // characters remaining after the tags are removed
        }
        return numchars;
    }

    public int read() throws IOException {
        char[] buf = new char[1];
        int result = read(buf, 0, 1);
        if (result == -1)
            return -1;
        else
            return (int) buf[0];
    }
}

class FilterReaderDemo {
    public static void main(String[] args) {
        try {
            NoTagReader ntr = new NoTagReader(new FileReader("data.txt"));
            BufferedReader in = new BufferedReader(ntr);
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close(); // Close the stream.
        } catch (Exception e) {
            System.err.println(e);
        }
    }
}
http://ankit.co/tutorials/java-tutorials/input-output/filterreader
CC-MAIN-2021-31
refinedweb
296
66.44
Board index » python All times are UTC There is a final-final beta of the Win32 Extensions available. Major changes since the last beta are: makepy's OCX support has changes, Pythonwin's GUI demo now allow syou to select a demo, and most significantly, lots of COM problems on Win95 have been sorted out. If you grabbed the last beta, please grab this, (and make sure you run "guidemo.py" again :-) The final release will (really honestly) come in a week or so. Im probably gunna call that Pythonwin 1.5! Significant Changes: *** OCX Hosting *** The way makepy generates OCX support have changed. Old code will break very slightly - the way tyou construct the class has changed - no changes should be necessary to the actual guts of the code. This change divorces win32com from Pythonwin for OCX support. See the OCX Demos for more details. *** Win95 without DCOM *** This release should work fine on Win95. There were a few small problems with the old release on Win95. **Sources ** All new sources are released ** Installer ** The installer now calls python code to compile all installed .py files. This Python code also ensures the Uninstall will work correctly (ie, remove all .pyc files) There is now a combined installer. A single installer will install the win32 extensions, COM extensions, and Pythonwin. I still intend supporting the seperate installers, so you have the choice of how you want to install. ** Pythonwin ** Pythonwin supports MFC GUI Threads (ie, threads owning windows and with message loops). The GUI Demo shows multiple GUI threads running. win32traceutil has a brother that works in Pythonwin. pythonwin is now fully thread safe. It religously acquires and releases the Python interpreter lock before doing anything. This was a large change which touched almost every source file (fortunately in a somewhat mechanical way). A few minor bugs were also found on the way. More of win32api was also made thread-safe in the process. Changed the structure of the .py files to the new hierarchical namespace. There are now sub-modules pywin.mfc, pywin.framework, pywin.dialogs and pywin.tools. Until the naming conversion is complete (and to avoid breaking everyones code), as soon as the top-level "pywin" is imported, it injects some of the "old names" into the module namespace. However, in some situations code may break - the fix should simply be to change "import {somemodule}" to "from pywin.framework import {somemodule}" The demos are far far more useful. Most now run from the command line or from within Pythonwin. All the demos have been touched, and they all do something useful - even if just display a message box telling you they do nothing useful :-). guidemo.py now includes far more test scripts, including a few you have probably never seen. Thanks to the improved COM support (below) the demo script is capable of demoing the OCX controls if they are installed on your system, without any of the messy "makepy" etc steps required before. This means Pythonwin can now use any OCX installed on the PC without any special steps required by the end-user. ** COM ** Lots of new COM documentation in the HTML directory (and referenced by the readme) Inspired by Chris Tismer, property accesses for all makepy COM object should be about 3 times faster - almost up to method speed. Client side COM has a radically changed (and vastly improved) implementation and interface. No code should break, but there is now a somewhat new "recommended style". 
In a nutshell, makepy has changed to a UI based program, and the management of the generated .py files is somewhat automatic. It is more analagous with the way VB uses COM objects - you select the type libraries you wish to work with, and that is it - magically the generated support becomes available to Python. (if only the UI was as slick as VB's :) Even better, there is a programmatic interface to this, so your Python code can check, and if necessary build, the required support. All the Python program need know is the description or IID of the typelibrary. This leads to the somewhat bizarre situation for OCX controls, where at runtime Python code generates other Python code which contains its subclass! Bizarre, but incredibly neat! The installer package itself is at.*-*-*.com/ ~mhammond/win32all.exe or.*-*-*.com/ (2481215 bytes) All source zip files are also there (at laby now, and at starwship within 24 hours!) Please let me know of any problems... Mark. cheers - pirx -- Applied Biometrics GmbH : Have a break! Take a ride on Python's Kaiserin-Augusta-Allee 101 : *Starship* 10553 Berlin : PGP key -> we're tired of banana software - shipped green, ripens at home Hi Mark, Now that I was successful to write an ActiveX plugin for MSIE in Delphi, (and this was amazingly easy!) I found that I can use this easily in other products, like C++Builder or Delphi again. Unfortunately this is so easy that I can't understand where the real work is done ;-) Now I want to use the plugin from Python. (I know I could do that from within MSIE using AX scripting, but that's now what I need). Question: I can create an instance of my plugin, read and set its properties and so on. But how can I activate it? There seems to be some magic with ActiveX which I didn't understand yet. Or does PythonCom generally not support COM objects which appear visible in a window? How would I tell it where to appear? I think I need some container for an ActiveForm control. Any clues? I'm a newbe to Python. Am I missing one of the source archives? -Todd Fleming Um - no - here it is... (posted as it doesnt appear in my archives anywhere) Mark. I copied it into your starship mirror, too. > Oops - that file was useless. Try this... 1. Final beta of Win32 Extensions available. 2. One final, final thought 3. Final Call for Visual TriO beta 4. UCalc Fast Math Parser final beta 5. Linder SetupBuilder 3.0 (Final Beta 6) 6. LSPack for CW - OPEN BETA-3 ready for final testing 7. Final beta release of PTUI 8. JIPL: Java interface for Prolog(final beta) 9. Final beta release for the Tcl compiler 10. PIL Win32 binaries for Python 1.6 (final) 11. Final Call, ISQED 2001, NOTE 9/22 deadline extension 12. Release of NT TCL extensions for 8.0 Final
http://computer-programming-forum.com/56-python/8d44a9c8581eaf44.htm
CC-MAIN-2019-18
refinedweb
1,076
75.2
HTML5 Semantics - November 18th, 2011 - 69 Comments12.3,4:5 on the W3C website. Sexy New Semantics We all know about video6 and audio. And canvas is particularly popular at the moment because it allows for 3-D graphics using webGL7,8.”9” by Richard Ishida10,11”12 to see what authors were using as class names on divs and other elements. More recently, in 2008, Opera MAMA13 analyzed 3 million URLs to see the top class names14 and top IDs1516, so we won’t bloat this article by going in depth here. Warning: some were written by me.) The new semantics were built to degrade gracefully. For example, consider what the specification has to say1718 19 20 is that new features should degrade gracefully21. 201122“ and “JAWS, IE and Headings in HTML523.”) Other elements do have a visual effect. The details element24,25. Most of these are attributes of the input element, thereby ensuring graceful degradation to <input type=text> in older browsers. New elements include datalist26, output, progress and meter27.28 and Filament Group’s Responsive Images29, require JavaScript and tweaks to the server’s htaccess file. The worst solutions require old-fashioned techniques, such as browser-sniffing30, now rebranded as “device detection” but still the same old user-agent string-pattern matching, which is hilariously fragile31,3233,”34. It’s perhaps surprising that, even though geolocation3536. If I could choose, I’d prefer place, so I could wear a T-shirt with the slogan “I’ve got the time if you’ve got the place“.) HTML3 had a person element37, “used for names of people to allow these to be extracted automatically by indexing programs,” but it was never implemented. In HTML4, the cite element38 could be used to wrap names of people, but this has been removed in HTML5 — controversially (see “Incite a Riot39”40, multiple family names, and Thai nicknames to consider. (See Richard Ishida’s excellent article “Personal Names Around the World41” for more information and discussion.) The new data element, which replaces time42,43 than with other formats), HTML5 specifies microdata, a mechanism for adding common semantics via agreed-upon markup patterns. HTML5 Doctor has more information on HTML5 microdata44, and Opera 11.6045 supports the Microdata DOM API46.47,48, some people suggest that thinking about which element to use is a waste of time. “There are two types of developers: those who argue about div’s not being semantic and those who create epic shit” writes Thomas Fuchs49, as if the two activities were mutually exclusive. A better argument is that no software cares about or consumes semantics anyway, so why bother? This isn’t true (work is underway already to map assistive technologies to new semantics50),51.) About the Author Bruce53 evangelizes Open Web Standards for Opera54. He wrote the book Introducing HTML555 Jordan MooreNovember 18, 2011 9:02 am Oh, that imaginary picture tag is getting me all hot under the collar. I would love to see that become a reality. Haven’t had time to read this properly yet – instapapered for later though. WesNovember 18, 2011 9:36 am Honestly, the picture tag was a big turnoff for me. This triples the amount of work to simply embed an image, nevermind all the troubles that come with formatting that image. Margins change on smaller devices, and there is every size of handheld device so you need to make at least 3 versions of each image, likely more. Not ideal. Jordan MooreNovember 19, 2011 1:07 pm What’s the ideal alternative then? 
Serve a gigantic image that works from desktop down to mobile and cripples user data allowances? Current solutions involve either JS or htaccess hacks that aren’t ideal. I’d love to do this natively with the suggested fallback. This is how audio and video tags work, where the appropriate version is served for the end context, why not do this natively with images? It’s difficult to discuss an imaginary tag but I’d say you don’t always have to load 3 versions every time. Perhaps some areas of a layout will only require the standard img tag if they are small enough to begin with. Nathan GardnerMarch 9, 2012 12:24 pm I wrote a PHP script that dynamically resizes the image based on the user agent. So my img src is something like image.php?f=myimage.jpg This way I only have 1 high res version of each image, and the image.php script generates new ones on demand and caches them for future requests. Also, if you want to display the same image but at different sizes on the same page (great for thumbnails), you could pass a width or height to image.php to force a size. Patrick ArltNovember 21, 2011 11:28 pm Is it to late to submit that picture tag to get it in the spec, it’s just what we need to do responsive images. Every solution available still requires desktop browsers to download 2 images or uses and .htaccess and feels like a hack (and no one had ported them to nginx). Having a picture tag would allow authors who care enough to create and serve small images to mobile browsers would be able to do so, and everyone can still use the img tag. John FoliotNovember 18, 2011 9:19 am Bruce, Because pedantry is part of my make-up, I just want to point out a slightly incorrect statement you made with regard to RDFa. There is in fact a W3C specification which allows for the conformant use of RDFa with HTML5 – see: HTML+RDFa 1.1. Not everyone considers RDFa “too hard for authors to write”, only some folks do, and it should be pointed out that the latest version of Drupal (Content Management System) has RDFa support baked in under the hood. Small, under-visited sites such as Yahoo!/Flikr, Best-Buy and Whitehouse.gov also use RDFa today, so don’t be too quick to accept the WHATWG hype that RDFa is a lost cause – that is a biased and incomplete assessment of the real situation on the web. Outside of that, great recap. Semantics Matter. bruceNovember 21, 2011 11:01 am heh. Pednartry is great. So note that I said “Because RDFa was considered to be too hard for authors to write “. I don’t say that statement is true, or that I agree with it, but merely that’s the perception that led to microdata being specced. Peter EdwardsDecember 23, 2011 1:41 am We’re planning on using Drupal7 to produce RDFa in output for our worldwide sites. One thing we’re discussing internally is whether to contribute to schema.org for ontology around learning classes, courses, programmes of study. The use of HTML5 and semantic tags to enable accessibility for screen readers is a big plus. Also the ability to mark, say, Art photos from our Arts sites to make them discoverable by Google users is a win. amidudeNovember 18, 2011 10:01 am My current dilemma is one where my supervisor believes that using HTML5 will magically promote the website higher in search results. I even pointed out to him the words of John Mu from Google in the thread “Does semantic HTML5 matter to Google yet?”, and he still thinks using HTML5 should position the website better in search results. 
I’m having a hard time explaining to him that it takes much more than HTML5 to receive a better placement in search results. Any suggestions? RobNovember 18, 2011 11:09 am Well, unless you’re still making Flash sites, and have no organic SEO, then yeah.. switching to HTML5 will help. Otherwise, no. Evan 'OldWorld' SkuthorpeNovember 21, 2011 7:31 am A suggestion? Tell him he’s an idiot. Bogdan PopNovember 18, 2011 11:29 am So much semantics talks these days when everyone should focus on content and content quality…. Steve FentonNovember 18, 2011 12:10 pm This is a seriously great article. Semantics complement great content, so it is important to have both. For once, I don’t need to explain my point because the article has already done it for me. Amit GAugust 2, 2012 12:02 pm Well…. Seeing as Semantic was introduced to classify data properly and to focus on content rather than endless similar tags that wrap everything from header to footer. Google is tuning their search engine in a way to make SEO obsolete, content is what is going to matter. edgarNovember 18, 2011 12:22 pm I am currently reading the article… but i clicked on the link of the video(), its crazy MatNovember 18, 2011 12:30 pm Honnestly, cookie based adaptatives images are little less evil than UA sniffing. It’s way better to load all img async with ajax according to device width and pixel ratio, and provide a fallback. It’s more work, but way better than playing with cookies and htaccess. There are lots of big problems with cookies : - JavaScript on the top - racing conditions - if mobile first, users on browser will see crappy img on their first visit (the most important one) - if not, mobile on 3G will take minutes to load your site (and leave) Last but not least, it assumes smaller = slower, which is false (I know, defered loading is the same, but there’s no real solution to detect bandwidth cross-browser) What we need is not a new img tag (well, we need it, but it’s not the most important) : what we need is a bandwidth indicator in the user agent. Matt WilcoxNovember 23, 2011 6:24 am JS is not required to set the cookie in AI (although it’s recommended over the alternative ‘false image in CSS’ method). AI handles the no-cookie + mobile first setting (and cookie race condition) better than you think because of the fallback browser sniffing it does – which is not as nasty as you’re thinking as it only needs detect a desktop OS, and that’s simpler and more reliable than I’d expected. Take a look at the changelog if you want more details on what it’s doing and why :) AI’s not perfect, but it’s not as bad as you’re making the cookie-based technique out to be. We do need some ‘real’ solutions though, rather than work-arounds like AI. HybridixstudioNovember 18, 2011 12:39 pm I do belive that many designers think that having a button animated instead a flash button that will be semantic seo, well in my opinion that doesn’t matter there are many html5 sites that doesnt even have good seo content on their sites, they just look good with html5 and no flash but that doesnt going to put your site in the first places of google search [keyword]. SEO is a huge programming included in the website. Having a awsome design with animation in HTML5 doesn’t help you that much in SEO, it just going to be visible on mobile devices but nothing more. 
Of course you can design something awsome and have good semantic seo and good seo programming on the site but I’ve seen in many CSS Galleries that some designers say they do SEO with their sites but many of them are only 1 page with many picture and few text (but awsome animation “jquery”). Think about it having cool sliders (jquery) buttons or cool fonts will NOT make your site position on the first places in [search keyword] it just help your site can be visible in iPhone, iPad or Android. As Designers we must design and code our sites so they can be visible to all of our potential customers. [BTW I love HTML5 and FLASH SUX] Damian Rivas | hybridixstudio.com MorelDecember 11, 2011 3:01 am Stupid comment, crappy arguments and final spam at the end of the comment. Congratulations, you are an ignorant who believes to be wise. The next time, shut up and read a book; you need it. AtifNovember 18, 2011 1:30 pm I hate priyanka chopra :p bruceNovember 21, 2011 11:02 am you have a heart of stone. Go away and learn silverlight. Edward MeehanNovember 18, 2011 3:08 pm Have been building standard document style sites for years so the new HTML5 semantics are a breath of fresh air and help standardize markup. I do see the need for more tags to define application UI and functionality, would help clean the slate of heavy nested divs for the sake of visual elements. CSS3 of course will help with this, but the battle of the browsers on this front seems to never end. The new “data-” attribute is interesting to me, and since this attribute is custom I am sure we will start seeing trends happen especially in application development. CraigNovember 18, 2011 4:15 pm Great article! Read earlier this month that time, datetime, and pubdate were eliminated from the HTML5 spec recently. Not sure I’m excited about this, but that’s the way it is. Ian DevlinNovember 19, 2011 1:27 am The time element has returned to the HTML5 specification although the pubdate attribute is still under threat. AshoonerNovember 18, 2011 4:23 pm Structure != Semantics. From 1994: ‘lets nip this one in the bud before the masses get ahold of it.’ From this article i can see they weren’t successful. bruceNovember 21, 2011 11:03 am those filthy masses, eh? Everything was so much better before they got interested. kiziNovember 18, 2011 5:40 pm I think this article really useful to me! thank the writer Luke StevensNovember 18, 2011 9:52 pm Nice article Bruce, you do a great job of covering some of the new *functionality* in HTML5, but fwiw I think the new *semantics* are a separate issue, on which I’ll rant briefly below :) IMO conflating semantics with functionality misses the point of the previous articles, which were more focused on the structural elements. On the structural elements, you say: . ” This is, unfortunately, a myth, and I hope we can put an end to it. Please? :) When doing research for my own HTML5 book () I asked Hickson about the new structural elements, and he said he (and a few others) added them *prior* to any research. As far as I could tell from the WHATWG archives, Hickson drew up the new elements on a whiteboard in 2004, without any consultation or research. But the research backs him up though, right? Well, on the face of it, you’d think so (apart from the vast absence of any classes!). But then you look at what the spec actually says (and you’d know this better than most), and it’s vastly different to what web designers and other authors actually want. For example, how is sectioning a product of the research? 
It’s an old concept from 1991; yet it’s the foundation of the new structural elements. How is header and footer in the spec — which are intended for any section, not specifically the “overall” page section — at all similar to how authors actually use them? (Which is far more like ARIA banner and contentinfo landmarks.) Who wanted an “aside” element for both sidebars and pull-quotes? Who has been using “article” to denote comments, or forum posts, or widgets (!) as the spec suggests? This is the biggest problem with HTML5′s semantics: Hickson says they’re just to make styling easier, they’re just what everyone has been doing, they’re just what the research says. But then you look at the spec, and it’s nothing like what we’ve been doing at all! I’m not surprised it’s such a flustercuck of confusion — when you tell everyone it’s what they’re already doing, give elements names which intuitively *look* like it’s what we’ve been doing, and then write them up in the spec in pretty esoteric ways which don’t reflect reality at all, you end up with a mess. And that’s what we’ve got. Then when it comes to the supposed benefits of what search engines or AT etc is going to do with it… well, what are they going to do with it? Do they follow the incorrect and extremely messy real world usage, or do they follow the not-at-all-followed-by-authors spec? (I also noted in a WHATWG exchange with yourself Hickson said he doesn’t think UA’s will ever do anything with them, which is maybe for the better!) And I know from the WHATWG archives what a mess the whole lack of “main” or “content” element is regarding what authors actually want — I quoted your WHATWG comments in my book :) HTML5 semantics, in terms of structural elements, are dead on arrival. No search engine here and now asked for, or needs them as currently spec’d (they’ve defined what the actually want with Schema.org), no non-HTML-book-writing HTML authors understand them correctly (ask a HTML5-aware designer what their backwards compatible document outline looks like), and no one *will* be able to use them meaningfully as per the spec because the spec has become, as Hickson often says he wants to avoid, a word of fiction in terms of real world use of these elements. As for nav, it’s an accessibility disaster for IE8 and below with JS disabled (according to Yahoo 2010 research 1-2% of all traffic to their sites have JS disabled:). Those users don’t get the JS fix, styling blows up, and the page now has very broken navigation. So for a theoretical future benefit we do real harm now. I’ve found it quite perplexing that the web standards community has been happy to implicitly declare JavaScript mandatory for a significant subset of users — I don’t think this issue has gotten the attention it deserves. Fortunately the ARIA landmark seems a much safer & saner approach. Anyway, rant over, but it’s really bugged me to see the HTML5 elements (and the story behind them) taught in a way that isn’t reflected in the spec, or in their original creation. I enjoyed the rest of the article (especially those ideas about baking some responsive image stuff into HTML), and if the WHATWG and W3C can sustain their marriage of convenience for the foreseeable future and we actually get a HTML6/HTML.next/HTML-uh-it’s-still-versionless-but-updated-HTML then it will be interesting to see what makes it in :) bruceNovember 21, 2011 8:39 am Great comment Luke. 
I can’t answer regarding your assertion that “Hickson drew up the new elements on a whiteboard in 2004, without any consultation or research” as you reference a private email that I haven’t seen. But you say that “This is the biggest problem with HTML5′s semantics: Hickson says they’re just to make styling easier” seems contrary to this mailing list conversation from August 2004 () in which James Graham of Opera says “I think that explicit markup for document sections is good (although I would like to see more single-purpose elements such as header or footer to provide addiational semantics for UAs – the ability to seperate out sitewide elements from page-specific content is, in my opinion, particularly important)” and Hixie replies “Yeah, header and footer or similar elements are almost certainly going to be defined at some point, along with content (for the main body of the page), entry or post or article to refer to a unit of text bigger than a section but smaller than a page, sidebar to mean a, well, side bar, note> to mean a note… and so forth. Suggestions welcome. We’ll probably keep it to a minimum though. The idea is just to relieve the most common pseudo-semantic uses of div.” Seems to me that here, they’re talking about semantics and UAs rather than just styling. (If styling were the primary use-case, we’d certainly have some kind of content element, I’m sure). You can see the history of the aside element (as sidebar was eventually renamed) in a post by Lachy at “As for nav, it’s an accessibility disaster for IE8 and below with JS disabled” I agree that its a styling disaster for old IE without JS. The accessibility (in an AT sense) should be unaffected. Pragmatically, if a user is surfing the web with IE6 and JS off, his or her experience of the Web is pretty nasty, and about to get a whole lot worse, not because of unstyled nav but because of increased JS use on websites. (I”m not suggesting that this is laudable, just that it is the way we seem to be headed.) MichaelNovember 18, 2011 9:52 pm I was going to say something similar. Maybe not so offensively, but yeah! My frustration is: 1) new markup where existing markup will work fine. Figcaption is a perfect example. We already have a caption tag. If caption is within a fig it belongs to a fig and can be styled according to how you won’t figs captioned. There’s no need for both caption and figcaption. 2) re-purposing b/i/u… some things should just DIE! The problem is that we want to promote using this markup because people just won’t stop using it. The problem is that it will continue to be misused and have ZERO meaning for the most part. People will just continue to use it as the shortest tag they can use to tack on for some other purpose (e.g. making headings, corners, borders, placeholders, etc.,.) You’re putting the em-Phasis on the wrong syl-Lable. 3) confusion around div/section/article/aside. I’ve been to a dozen sites with articles about HTML5 and how to use it properly. I’ve been to conventions and heard very smart people in the design and development community speak. The one thing that seems consistent is a lot of people just don’t get when or how to use these elements correctly. Oh sure, everyone has an article about how to use them, but then weeks later they usually come back and say “my bad, here’s the real story…” or “well, it’s still a work in progress.” Yes, one can fall back to the good old div… trusty old divitis… or one can move forward into the section? article? 
no definitely section… no, this is on the right side so it must be an aside? Oh crap, no that’s layout, not semantics… where was I again? wouldn’t think so since people have been writing “div class=’section/article’” for years now. Or is it taking us back to our high school English class and the pain of diagramming sentences that causes us so much grief. That a lot of what this feels like some days, trying to figure out if you’re following the correct grammar rule and some grammar nazi is going to whack you with the big book of HTML5 at some point. I guess what I’m looking for from HTML5 is for it to simplify how I design and develop. I like section, I like new form controls, I like headers and footers (why no body/copy/content?). I know I can fall back to div, but that’s no good in the long run. I need to be able to say without any guesswork/wailing/gnashing of teeth, that [X] is the tag I use [HERE]. bruceNovember 21, 2011 8:51 am ” We already have a caption tag. If caption is within a fig it belongs to a fig and can be styled according to how you won’t figs captioned. There’s no need for both caption and figcaption.” Unfortunately, that’s not the case. It was looked at, but it cause problems in older browsers. If you used a figure inside a table, and the figure’s caption were marked up re-using the caption element, the browser would think that it is the caption for the table rather than the figure. There was a similar problem when the working group then tried to re-use legend for figures (and what is now summary in details). If you’ve ever tried to style a form legend in IE6, you’ll be glad they didn’t reuse it. *think* it’s newness. But I really don’t know. ” re-purposing b/i/u… some things should just DIE! ” well, die is a bit strong. But it does seem to me to be navel gazing, as I hope I indicated in the article. MichaelNovember 23, 2011 9:00 am It seems schizophrenic to say in one breath “we’re going to add all these tags that aren’t supported stylistically in older browsers” and then say “we’re not going to use this tag because it’s not supported stylistically in older browsers.” At what point do we say “IE6 is dead” and forget about the nuances for that platform and actually move FORWARD with a sane normalized standard that isn’t hacked together because some niggle in a copy of IE5.2 mac, IE6 or Netscape Gold that happens to still be floating around for 1.4% of the population. While the content should still display in those older browsers I don’t think we should put a burden on authors due to outdated implementations. Just do like some developers have been doing, like Andy Clarke, and give older browsers flat content with no CSS, or a serious CSS reset and a hint to upgrade to a newer browser. JonNovember 26, 2011 12:37 am Do what I do. If I detect an outdated browser, I forward the user to the following message: “You are using an outdated browser. This site requires the latest version of . Please go to to download the latest version of , you lazy incompetent twig.” :-) ElliotNovember 27, 2011 3:32 pm I like the last bit. 
Greg BabulaJanuary 17, 2012 8:04 pm Well said Michael, I think you pretty much nailed it with this part, I’ve been developing with this in mind since the new elements became available “div = generic container, section = section of content, article = syndicated content/feed.” DergenNovember 18, 2011 11:01 pm “It may surprise the cool kids in Silicon Valley to learn that a worldwide Web of people use languages other than English and even use different writing systems.” Do you really fucking think that? Is your command of English too limited to understand what a bunch of fatuous bullshit that is? Fredrik EkelundNovember 20, 2011 2:14 am That “removed elements and attributes”-link needs an http:// in its href attribute. It links back to the current article right now. bruceNovember 23, 2011 2:28 am thanks; fixed! JohnNovember 20, 2011 3:41 am Why the fu*k is everyone using unquoted attributes now? I’ve started to miss XHTML. bruceNovember 23, 2011 2:28 am It’s up to you. No style is preferred over the other. To quote, or not to quote: that is the question. Do as you like; the browsers don’t care (so neither does the validator) Jasper KennisNovember 20, 2011 4:11 pm Stop worrying about semantics and all that so much and start building great sites. All this talking just leads to talking. Getting tired of the HTML5 bubble. Playtime’s over, get to work. ElliotNovember 27, 2011 3:25 pm But it’s our jobs to care about doing it right… Soooo, yeah. David BallJanuary 30, 2012 3:53 am Like Bruce said in the article it’s important that we do this properly, I don’t see this as talking unnecessarily or messing about, (although some of it does seem a tad confusing and makes my head spin). It’s important that we mark content up properly, and as Elliot says – it’s our job to care about doing it right! DonNovember 20, 2011 6:21 pm There are also revisions to the structure, syntax, and semantics of HTML, some of which Lachlan Hunt covered in “A Preview of HTML 5.” … 4 The elements of HTML — HTML5 The semantics of the protocol used (e.g. HTTP) must be followed when fetching external resources. (For example, redirects will be followed and 404 responses … Don, bestbusinessbrands.blogspot.com/ FranciscNovember 21, 2011 6:30 am I want to make two points: 1. Embedding YouTube videos already serves HTML5 to non-Flash-supporting devices, so if you have a YouTube video, you don’t need the fallback on your site as it already does that. 2. “you’re closer to Flash than you are to the Web” – Flash IS part of the Web. It’s not a web standard, but it undeniably, part of the Web. A good article, thanks. bruceNovember 21, 2011 9:05 am 1) Use-case: I want to script my own player in capable browsers rather than use the default YouTube one. 2) Philosophically I’d say that Flash, like Silverlight, PDF are content that’s delivered through the web but they aren’t part of the web in the sense that they need browser plugins to render them and they don’t easily interact with other web technologies. Mark SimpsonNovember 21, 2011 6:32 am SCUMBAG SMASHINGMAG Adds comment voting icons. Doesn’t let you vote. MorelDecember 11, 2011 3:03 am You are the proof we do can vote. Stephen CostelloNovember 21, 2011 6:59 am Some great new features looking forward to using them on my new portfolio design over at stephencostello.com in the coming weeks. AnnNovember 21, 2011 11:48 am This is great. Thank you. PaulNovember 21, 2011 3:36 pm Hi Bruce I was wondering why your site looks like a highschool student’s site ? 
For quite awhile I try to learn the semantic site of HTML5 and I have found that all the famous 5 star developers keep preaching the principles they don’t practice. Do you yourself make use of the principle you teach others to do ? If the answer is NO, then why not ? bruceNovember 22, 2011 1:28 am Hi Paul Yup, you’re right. Feel free to disregard everything I write about aesthetics, proportion, composition, colour balance and whitespace. You’ll find I haven’t written anything about that, because I’m not a designer and have never claimed to be. (Which is why why my site looks crap. I am however revamping it over Xmas to be slightly less unpleasant.) As an aside, I generally find that the worst kind of pseudo-designers are those that disregard or discount the content because they dislike the colours. PaulNovember 22, 2011 7:08 am Hello again Bruce First of all, I apologize if my words are little to harsh, English is not my first language. I don’t want to disregard everything you and many others point too, I want to learn new stuff and at the same time see the real working examples that reflect the real life. Aesthetics aside, how many percents of HTML5 stuffs that you (and many others) teach are being used in their own works ? I feel like readers are being used as beta testers here. Look at this site (smashingmagazine) it doesn’t use ‘time’ element, there is no ‘article’ or ‘section’ – the divs are all over. How do you think readers should feel ? bruceNovember 22, 2011 9:45 am “I feel like readers are being used as beta testers here. Look at this site (smashingmagazine) it doesn’t use ‘time’ element, there is no ‘article’ or ‘section’ – the divs are all over.” Fair point. I don’t control Smashing Mag’s templates. Look at my site. Or any WordPress site using the 2011 theme. Reddit uses time, as does Github. HTML5Doctor, which I co-curate, uses all the new elements as apporpriate. matturDecember 18, 2011 11:24 am As a highschool student myself, I’d like to point out that is a really offensive thing to say about highschool students’ websites. (Admittedly, my site does look exactly like Bruce’s). adumpaulNovember 22, 2011 3:15 am So nice article.Really great. Aaron MartoneNovember 22, 2011 3:47 pm I’m trying to get myself into HTML5 as best as I can, and in all honesty, aside from it’s rather ambiguous nature (which makes it rather hard to get any solid information on), I’ve yet to find a resource that really puts best practices forward in a nice, succinct manner. Instead, there just seems to be an overwhelming amount of people creating things that give you more options, but end up confusing you because you have no grasp of the core mechanics. Do I use Modernizr? What about LESS? What are the hundreds of thousands of “but wait…” situations that crop up from all of this interplay with non-standardized technology? In the end, I just go home defeated and more confused. The perfectionist in me wants to put the best foot forward, but with everyone chiming up on Feature A and Feature B, the message itself isn’t coming through loud enough. FlorisNovember 22, 2011 8:19 am No double quotes on the tag elements? bruceNovember 22, 2011 9:47 am No need, unless they contain whitespace or an equals sign. Jayman PandyaNovember 22, 2011 11:46 am Well Bruce one thing that we share is the LOVE towards Priyanka. After your affair with Peepli Live go for Dostana to see the super gorgeous Priyanka. Coming back to HTML5. 
I have been tracking you and Molly a lot on this topic and also reading your book thot you co-authored with Remy Sharp. I really love the content that you guys create and I have learnt a lot about HTML5 and the new semantics from you guys. Even though you had to face some harsh comments keep up this awesome task that you have taken up to enlighten people like us. Enjoy Dostana. ;) Get in touch if you come over to Bombay some time :) adumpaulNovember 23, 2011 2:11 am That great post.Thank you for sharing. JonNovember 26, 2011 12:31 am What the fuck is wrong with you people!? If you are still designing and coding your sites to include the older versions of browsers, then you are a part of the PROBLEM and should be taken out back and beaten with the Flash Bible! This is the MAIN reason why HTML5 will take 20 years to evolve, because designers are soooo worried about what grandma and grandpa are going to think of their website when they visit it with IE3. If people are too stupid or unwilling to upgrade their browsers, then that is their loss and should get with the times! RaphaelDDLFebruary 10, 2012 6:13 am I do agree with you, but you must consider that it’s not about the grampa with IE3. It’s the crappy companies that don’t grant access to it’s workers to install anything. This means that YOU could find yourself in a ‘big’ company that still uses XP and just have only IE6 installed. So, even on work, you will browse stuff. And if the developer of the website you access see the monthly access rates for browsers (and you are adding % to IE6) and see that it still have a nice amount of people using it, it WILL code for older browser. Because visits can mean more income. And if the visitors can’t see the website or it looks too crappy, this means less money. And noone wants to loose money. But yeah, i’m a mobile dev and i already don’t give a sh!t to what people using Windows Mobile (6.5 and lower) or Blackberry (6.0 and lower) users see. Because there are more work to do than trying to code for those things that come with the OS and that call themselves ‘browsers’. Kees BotNovember 29, 2011 12:38 am I don’t care much about the media attribute on the source element of a picture. I’d rather see width, height and maybe also a size attribute tell the browser the dimensions of an image so that it can choose the right image based on the space it has available or the current bandwidth of the connection. (So you get a hires image on a retina screen if connected to WIFI, and lowres on a laptop with a 3G dongle.) I also wish browsers would use the width and height attributes of the old IMG element as the native width and height of the image until the image starts loading. (To keep not yet loaded images from collapsing if CSS width:100%; height:auto override the native width and height to make them shrink to the available space.) Gunnar BittersmannDecember 13, 2011 7:08 am A couple of days ago, my first attempt to comment has never shown up, so I give it another try. Yeah, those redefinitions of previosly presentational elements in HTML5 are questionable. The spec once read “The i element represents a span of text […] whose typical typographic presentation is italicized.” In English typography, it is. But you cannot write a spec for the world-wide web based on the habits of just a small part of the world, can you? Now the reference to italicized presentation has been dropped. There’s still a reference to “Western texts” though. 
b, i, s and u elements make sense with class attributes describing their contents.Which might raise the question: why not just spans with classes? In the section “When, Where, Who?”: “put the time in 24-hour format, terminated by a Z, along with any time-zone offset” is not correct. It can be either but not both. ‘Z’ is just an abbreviation for ‘+00:00′. In “2011-11-13T23:26.083Z-05.00 would be 23:26 pm and 83 milliseconds” the seconds ’00′ are missing, it has to be 2011-11-13T23:26:00.083-05:00. No ‘Z’; ‘:’ in the time-zone offset. “[…] in the time zone lying 5 hours before UTC” is confusing to me. Noon in New York (-05:00) is 5 hours _later_ than noon in London (UTC). Isn’t that the time zone lying 5 hours _behind_ UTC? bruceDecember 13, 2011 11:04 am Gunnar Thanks for your comment. Yup, I made a mistake leaving out the seconds out in that example. The format of the time element recently changed, so maybe that’s where the Z confusion came. I’m (pretty) sure it was right when I checked it. As I write this it’s 19:01 GMT and 14:01 in New York. Thus, New York is GMT -5.00 Marija ZaricFebruary 8, 2012 8:59 am I like html5 semantics. It has very neat and clean markup. It requires a lot of practice and it is great for responsive design. Start using now! MichaelMarch 5, 2012 7:28 pm The main difference between XML and earlier versions of HTML was this very issue of semantics: the writer of an XML document could choose any elements they wanted and, having “invented” those elements, knew exactly what they “meant”. To say it another way, HTML is a *formatting* language, while XML is a *semantic* language. With HTML5, we can see more and more the need for semantic documents. Back in 2006, I remember having a discussion about this with another web developer, and the question was raised: why not replace HTML with XML (which is fully style-able via CSS), where a user can *name* an element whatever they like and associate with that element whatever *meaning* they like. The main problems with the idea were that: 1. There is an implicit “meaning” in most HTML elements. For example, a browser knows to replace its window’s caption with the contents of the HTML <title> element. With XML it wouldn’t know this. As another example, how would a browser know that a custom XML <image> element should be replaced with an actual image, obtained from an HTTP “GET” request to the URL identified by the “src” attribute? With HTML, this is implied. 2. Search engines would have difficulty in determining the most relevant content. For example, currently search engines prioritise the textual content of <h> heading elements. With a custom heading element, it wouldn’t know that the contents are more indicative of the content than a <span> element. In an ideal world, I think all markup should be semantic, and I can think of two solutions to the above: 1. Have standard “xml:” namespaced elements. For example, a custom XML <image> tag could have an “xml:source” attribute (e.g., “”) and an “xml:type” attribute (corresponding to the MIME-type, e.g., “image/png”). Due to the “xml:” namespacing, the browser would know to embed the content (this “embedding” of content could also work with other media types, whether they be CSS files, video files, etc). Similarly, there could be an <xml:script> element with the implied meaning of containing content. 2. Have two types of XML documents – the first being the XML document containing your content, the second document “explaining” to search engines what each element “means”. 
Anyway, head trip. Adam ClarkMay 9, 2012 8:08 am I'm still suspicious of HTML5. Support is still very patchy and getting it to play nicely in some browsers requires a bit of work. I have this theory; HTML5 is like an unstable mental patient…it requires support to get it to behave properly in public. And if something requires extra effort to get it to cooperate, then would you fully trust it? I can understand the argument that we should embrace the future, but I'll be stepping into the future with caution for the short term… I want HTML5 to be my new best friend. We'll see…c'mon browsers, get supporting it fully!
http://www.smashingmagazine.com/2011/11/18/html5-semantics/
CC-MAIN-2014-23
refinedweb
7,054
70.02
I am trying to filter low quality reads in a bam file using Python's pysam. I have used the code from here. I have modified this code a little and the whole code is shown below, but the code is not giving any bam file:

import argparse, pysam, re, sys

def FilterReads(in_file, out_file):

    def read_ok(read):
        """
        read_ok - reject reads with a low quality (<30) base call
        read - a PySam AlignedRead object
        returns: True if the read is ok
        """
        if any([ord(c) - 33 < _BASE_QUAL_CUTOFF for c in list(read.qual)]):
            return False
        else:
            return True

    _BASE_QUAL_CUTOFF = 30

    bam_in = pysam.Samfile(in_file, 'rb')
    bam_out = pysam.Samfile(out_file, 'wb', template=bam_in)

    out_count = 0
    for read in bam_in.fetch():
        if read_ok(read):
            bam_out.write(read)
            out_count += 1
    print 'reads_written =', out_count

    bam_out.close()
    bam_in.close()

def GetArgs():
    """
    GetArgs - read the command line
    returns - an input bam file name and the output filtered bam file
    """

    def ParseArgs(parser):
        class Parser(argparse.ArgumentParser):
            def error(self, message):
                sys.stderr.write('error: %s\n' % message)
                self.print_help()
                sys.exit(2)

        parser = Parser(description='Calculate PhiX Context Specific Error Rates.')
        parser.add_argument('-b', '--bam_file', type=str, required=True,
                            help='Input Bam file.')
        parser.add_argument('-o', '--output_file', type=str, required=True,
                            help='Output Bam file.')
        return parser.parse_args()

    parser = argparse.ArgumentParser()
    args = ParseArgs(parser)
    return args.bam_file, args.output_file

def Main():
    bam_file, output_file = GetArgs()
    FilterReads(bam_file, output_file)

if __name__ == '__main__':
    Main()

I think you need to explain in more detail what you are trying to accomplish. Are you trying to reject any read with any base that has a quality score under 30? If so, I suggest you rethink your approach, because I can't imagine a scenario where that's a good idea.

Thanks Bushnell. Yes, I want to reject reads having quality score < 30. I want to do it using Python's pysam. If you have a better approach, please share it.

No, I don't have a better approach, because I think removing any read with any base under Q30 is a terrible idea and will lead to extreme sequence-specific coverage bias. I have written tools to remove reads with average quality (based on expected error rates from the quality scores) below a certain level, but that should be used conservatively to avoid sequence-specific bias. I cannot imagine a scenario where it would be a good idea to remove every read that has a single base below a certain quality score, so I won't suggest a way to do it unless you can explain why you want to do so. I suggest you quality-trim the reads and reject reads with length under a specified value prior to mapping. You can do so like this, with the BBMap package (assuming interleaved reads):

bbduk.sh in=reads.fq out=trimmed.fq qtrim=r trimq=12 minlen=125

You can set trimq to 30 if you want, but again, I cannot imagine a situation where that would be a good idea. For this command, minlen should be set to some number less than your read length; e.g. for 150bp reads, maybe set it to 100.

I am developing a variant caller for detecting heteroplasmy in NGS datasets which will take a bam file as input according to the workflow. As the link is not working, I have added an image; see the Galaxy Naive Variant Caller portion in that figure. For calculating heteroplasmy they filtered bases having <30 score.
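As an aside that is not part of the original thread: the "average quality" style of filtering that Bushnell describes (rejecting a read only when its mean base quality is low) is easy to sketch with pysam as well. The threshold, function name and file handling below are illustrative assumptions, and the sketch assumes the newer pysam API (AlignmentFile, query_qualities):

import pysam

MIN_MEAN_QUAL = 20  # illustrative threshold, not taken from the thread

def filter_by_mean_quality(in_file, out_file):
    # query_qualities are already offset-corrected integers, so no ord(c) - 33 needed
    bam_in = pysam.AlignmentFile(in_file, "rb")
    bam_out = pysam.AlignmentFile(out_file, "wb", template=bam_in)
    kept = 0
    for read in bam_in.fetch(until_eof=True):  # until_eof avoids needing an index
        quals = read.query_qualities
        if quals is not None and len(quals) > 0:
            if sum(quals) / float(len(quals)) >= MIN_MEAN_QUAL:
                bam_out.write(read)
                kept += 1
    bam_out.close()
    bam_in.close()
    return kept

Because a read with one poor base can still pass, this avoids the worst of the sequence-specific coverage bias discussed above.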
It might be a terrible idea in some cases, but when working with ancient DNA, which might have sequencing errors due to oxidative and hydrolytic damage as well as contamination, you might want to trim bases below a certain quality (usually <30). Anyway, I just write this in order to enrich the discussion; I don't know what kind of data ammarsabir is (was) working with. If in further data handling you have a VCF step, it might be easier to just keep all the data in the bam file, and then trim the reads you want using vcftools, like the following: there's no need to put any extension in the last file name, since the tool writes a default extension. Cheers.
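The vcftools command referred to above did not survive in this copy of the thread. A plausible invocation consistent with the description — quality filtering at the VCF stage, with the tool appending its own extension to the output name — might be the following; the file names and the threshold are assumptions:

vcftools --vcf input.vcf --minQ 30 --recode --out filtered

With --recode and --out, vcftools writes the result to filtered.recode.vcf, which matches the remark that no extension needs to be given for the last file name.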
https://www.biostars.org/p/226167/
CC-MAIN-2018-43
refinedweb
696
61.26
Android solves this with the beloved ROOM library, on Flutter though, you are stuck with the low-level SQFLite package... Not anymore! MOOR is a library allowing you to work with Flutter's SQLite database fluently and in pure Dart. Behind the scenes, it uses the SQFLite package. Oh, and if you're wondering, MOOR is just ROOM spelled backwards.

Preparing the project

Other dependencies include provider and flutter_slidable which are here purely for making the UI possible.

pubspec.yaml

...
dependencies:
  flutter:
    sdk: flutter
  moor_flutter: ^1.4.0
  # For the UI
  provider: ^3.0.0+1
  flutter_slidable: ^0.5.3
...
dev_dependencies:
  flutter_test:
    sdk: flutter
  moor_generator: ^1.4.0
  build_runner:
...

What we will build

Project-based approach to learning is the best approach. In this series, we are going to build a task list app. At the end of this part, you are going to have an app which can create new tasks, complete them & display them. All of this will happen persistently using Moor's fluent query syntax.

Creating a table

The biggest benefit of Moor is that you don't have to leave Dart in order to work with the database. This also applies to defining SQL tables. All you need is to create a class representing the table. Subsequently, columns are specified as get-only properties of the class. What's more, Moor takes this Table class and creates a data class out of it! Moor's data classes support value equality, simple deep copies, and even converting to & from JSON. All the Moor-related code will be inside a file moor_database.dart to keep it organized.

Location of the file

moor_database.dart

import 'package:moor/moor.dart';
import 'package:moor_flutter/moor_flutter.dart';

// Moor works by source gen. This file will hold all the generated code.
part 'moor_database.g.dart';

// The name of the database table is "tasks"
// By default, the name of the generated data class will be "Task" (without "s")
class Tasks extends Table {
  // autoIncrement automatically sets this to be the primary key
  IntColumn get id => integer().autoIncrement()();
  // If the length constraint is not fulfilled, the Task will not
  // be inserted into the database and an exception will be thrown.
  TextColumn get name => text().withLength(min: 1, max: 50)();
  // DateTime is not natively supported by SQLite
  // Moor converts it to & from UNIX seconds
  DateTimeColumn get dueDate => dateTime().nullable()();
  // Booleans are not supported as well, Moor converts them to integers
  // Simple default values are specified as Constants
  BoolColumn get completed => boolean().withDefault(Constant(false))();
}

More on defining tables

The code above is all we need for the app we're building. However, there are some additional things you might want to know - how to define custom primary keys and how to change the name of the generated data class.

moor_database.dart

// The default data class name "Task" would now be "SomeOtherNameIfYouWant"
@DataClassName('SomeOtherNameIfYouWant')
class Tasks extends Table {
  ...
  // Custom primary keys defined as a set of columns
  @override
  Set<Column> get primaryKey => {id, name};
}

The Database class

With the table definition done, we need to get the actual database running. Moor makes this simple. Create a class, annotate it and specify the location of the database file.
moor_database.dart

// This annotation tells the code generator which tables this DB works with
@UseMoor(tables: [Tasks])
// _$AppDatabase is the name of the generated class
class AppDatabase extends _$AppDatabase {
  AppDatabase()
      // Specify the location of the database file
      : super((FlutterQueryExecutor.inDatabaseFolder(
          path: 'db.sqlite',
          // Good for debugging - prints SQL in the console
          logStatements: true,
        )));

  // Bump this when changing tables and columns.
  // Migrations will be covered in the next part.
  @override
  int get schemaVersion => 1;
}

At this point, it's good if we finally generate the code. As usual, it's done through the build_runner command. We will use watch instead of build, so that we don't have to constantly rerun the command.

Queries

Moor supports all kinds of queries in the fluent syntax and it also allows you to write custom SQL. Most of the time though, you don't have to leave the comfort of Dart. In this part, we will take a look at the basic queries and leave the more advanced ones for the next part. Queries can be put into the AppDatabase class.

moor_database.dart

@UseMoor(tables: [Tasks])
class AppDatabase extends _$AppDatabase {
  ...
  // All tables have getters in the generated class - we can select the tasks table
  Future<List<Task>> getAllTasks() => select(tasks).get();

  // Moor supports Streams which emit elements when the watched data changes
  Stream<List<Task>> watchAllTasks() => select(tasks).watch();

  Future insertTask(Task task) => into(tasks).insert(task);

  // Updates a Task with a matching primary key
  Future updateTask(Task task) => update(tasks).replace(task);

  Future deleteTask(Task task) => delete(tasks).delete(task);
}

Making the UI

Once you have the database class fully set up, you can use it throughout the app however you please. It doesn't require any additional setup. One thing you should keep in mind is that the AppDatabase class should be a singleton. In this app, we will accomplish it with the Provider package.

main.dart

import 'package:flutter/material.dart';
import 'package:provider/provider.dart';

import 'data/moor_database.dart';
import 'ui/home_page.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Provider(
      // The single instance of AppDatabase
      builder: (_) => AppDatabase(),
      child: MaterialApp(
        title: 'Material App',
        home: HomePage(),
      ),
    );
  }
}

Since this is only a simple app, we won't use any fancy state management solution. Simple Stateful widgets will suffice. If you'd like to step up your state management game, I highly recommend the BLoC library. This is not a tutorial about Flutter's widgets, so the rest of the UI code will be here without any real explanation. Check out the video tutorial for more info.

home_page.dart

  ...
      ],
    ));
  }

  StreamBuilder<List<Task>> _buildTaskList(BuildContext context) {
    final database = Provider.of<AppDatabase>(context);
    return StreamBuilder(
      stream: database.watchAllTasks(),
      builder: (context, AsyncSnapshot<List<Task>> snapshot) {
        final tasks = snapshot.data ??
            List();
        return ListView.builder(
          itemCount: tasks.length,
          itemBuilder: (_, index) {
            final itemTask = tasks[index];
            return _buildListItem(itemTask, database);
          },
        );
      },
    );
  }

  Widget _buildListItem(Task itemTask, AppDatabase database) {
    return Slidable(
      actionPane: SlidableDrawerActionPane(),
      secondaryActions: <Widget>[
        IconSlideAction(
          caption: 'Delete',
          color: Colors.red,
          icon: Icons.delete,
          onTap: () => database.deleteTask(itemTask),
        )
      ],
      child: CheckboxListTile(
        title: Text(itemTask.name),
        subtitle: Text(itemTask.dueDate?.toString() ?? 'No date'),
        value: itemTask.completed,
        onChanged: (newValue) {
          database.updateTask(itemTask.copyWith(completed: newValue));
        },
      ),
    );
  }
}

The following code is for the bottom bar where new tasks can be inputted.

new_task_input_widget.dart

import 'package:flutter/material.dart';
  ...

  TextEditingController controller;

  @override
  void initState() {
    super.initState();
    controller = TextEditingController();
  }

  @override
  Widget build(BuildContext context) {
    return Container(
      padding: const EdgeInsets.all(8.0),
      child: Row(
        mainAxisSize: MainAxisSize.max,
        children: <Widget>[
          _buildTextField(context),
          _buildDateButton(context),
        ],
      ),
    );
  }

  Expanded _buildTextField(BuildContext context) {
    return Expanded(
      child: TextField(
        controller: controller,
        decoration: InputDecoration(hintText: 'Task Name'),
        onSubmitted: (inputName) {
          final database = Provider.of<AppDatabase>(context);
          final task = Task(
            name: inputName,
            dueDate: newTaskDate,
          );
          database.insertTask(task);
          resetValuesAfterSubmit();
        },
      ),
    );
  }

  ...
      controller.clear();
    });
  }
}

Conclusion

You've learned how to use Moor to fluently create tables and queries in your Flutter apps. There's a lot more that Moor can handle, so be sure to stick around for the future parts of this series.

Very good tutorial! Is there any way to use your own model? Cause i have a model that i use for an api service and i want to store that model to the db.

Custom model is not possible, you'll have to use the Moor data class to store the object in the database.

Hello @resocoder is it moor_flutter supports json data column and we can make select query on the json data?

Hi! Moor currently supports neither JSON data (unless you convert it to a String) nor querying JSON. You should check out the SEMBAST NoSQL local DB, if you're interested.

Yes, Thank you @resocoder.

Hi @resocoder! Very good tutorial! Is there any way to use your own model class? because i have a model that i use for an api service and i want to store that model to the db.

Hmm, I don't think that's possible. You have to use Moor's generated classes for the DB. You could, however, convert your existing model objects into Moor objects.

Thank you for the article. But I can't understand – For example, you have an SQLite database on a VPS. How to connect to that database via Moor? Or moor – only for local databases?

Yes, Moor is only a local DB.

The function update(tasks).replace(task), return a int or bool?

Hello, what if I want to delete all data in a table? Fixed. Here the query —> return delete(subCategories).go();

Your tutorials seem great to me! Can you teach about how to do with several tables with SQLite and Flutter?

It's not working for me, I m using latest 2.2.0 version of moor. eg Future insertTask(Insertable pin) => into(pins).insert(pin); it gives error Undefined name 'pins'. Try correcting the name to one that is defined, or defining the name. Pin is table class

Great article! I have a question, is there a way to backup my database using moor?
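One possible answer to the backup question above (this is not from the original post): since moor_flutter ultimately writes an ordinary SQLite file — db.sqlite in the platform database folder in this tutorial's setup — a backup can be as simple as copying that file while the database is not being written to. The helper name and destination path below are illustrative assumptions; getDatabasesPath comes from sqflite, which moor_flutter uses under the hood:

import 'dart:io';
import 'package:sqflite/sqflite.dart' show getDatabasesPath;

// Copies the SQLite file created by FlutterQueryExecutor.inDatabaseFolder.
Future<File> backupDatabase(String destinationPath) async {
  final dbFolder = await getDatabasesPath();
  final source = File('$dbFolder/db.sqlite');
  return source.copy(destinationPath);
}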
Hi, your tutorial is great, thanks, could you help me in the use of batch insert. I was reading but I Can’t achieve this. Thank you for the tutorials. Did moor work on windows desktop application ? because am trying to get this example work with a windows application, but there is no auto generated classes. (for example : _$AppDatabase) Hi Matt, thanks for the tutorial, but i am getting an error, once i run ‘ flutter packages pub run build_runner watch ‘ the file doesn’t get generated , i’ve tried couple of times but i cannot figure out what’s wrong. —— look at the codes import ‘package:moor_flutter/moor_flutter.dart’; part ‘moor_database.g.dart’; class Tasks extends Table { // Defining properties (Table columns) // by default the id id set as primary key when applying autoIncrement(), // but can be overriden using a setter IntColumn get id => integer().autoIncrement().call(); TextColumn get name => text().withLength(min: 1, max: 50)(); DateTimeColumn get dueDate => dateTime()(); BoolColumn get completed => boolean().withDefault(Constant(false))(); } // The database class @UseMoor(tables: [Tasks]) class AppDatabase extends _$AppDatabase { AppDatabase() : super(FlutterQueryExecutor.inDatabaseFolder(path: ‘db.sqlite’, logStatements: true)); } —– this is the error [SEVERE] moor_generator:moor_generator on lib/data/moor_database.dart: Error running MoorGenerator NoSuchMethodError: The getter ‘typeConverter’ was called on null. Receiver: null Tried calling: typeConverter Thanks for helping me out Same problem on my end. Have you figured it out?, ‘db.sqlite’)); return VmDatabase(file); }); } // this annotation tells moor to prepare a database class that uses both of the // tables we just defined. We’ll see how to use that database class in a moment. @UseMoor(tables: [Tasks]) class AppDatabase extends _$AppDatabase { AppDatabase() : super(_openConnection()); @override int get schemaVersion => 1; Future<List> getAllTasks() => select(tasks).get(); Stream<List> watchAllTasks() => select(tasks).watch(); Future insertTask(Task task) => into(tasks).insert(task); Future updateTask(Task task) => update(tasks).replace(task); Future deleteTask(Task task) => delete(tasks).delete(task); } Hi, Thanks for the tutorial . as following your tutorial…. the task is not been saved on textField submit, also giving no errors , so unable to figure out the actual problem . i fixed with 2 things: – final database = Provider.of(context); to final database = Provider.of(context, listen: false); – final task = Task( name: inputName, dueDate: newTaskDate, ); to final task = TasksCompanion( name: Value(inputName), dueDate: Value(newTaskDate), ); Maybe because of more recent version of Provider package and moor_ffi instead of moor_flutter package like it’s recommand in the docs. And in the moor_database.dart: Future insertTask(TasksCompanion task) => into(tasks).insert(task); Because we send partials data to the insert, so we use de Companion class I had the same issue, all i did was add the listen: value and it worked for me Your tutorials are life-changing, it makes me a batter app dev. thank you, my man. Thanks for another great tutorial for Flutter. I love them. My question is where would be place the database files using the clean architecture explained in the another tutorial? Thanks in advance Best regards Thanks a lot. I’m using latest moor(^3.0.2) and provider(^4.1.1), everything works perfect except using Provider.of(context). It fails to get the provider …… I changed to use context.read(), it works! 
Just sharing it in case others have similar problems.
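To make the fix described in the comments above concrete, here is a short sketch assuming the newer package versions those commenters mention (moor 3.x with companions, provider with the listen flag); the names match the tutorial's files, but the exact signatures in your project may differ:

// new_task_input_widget.dart – read the database without listening and
// insert a companion so that only the provided columns are set
final database = Provider.of<AppDatabase>(context, listen: false);
final task = TasksCompanion(
  name: Value(inputName),
  dueDate: Value(newTaskDate),
);
database.insertTask(task);

// moor_database.dart – the insert query then accepts the companion type
Future insertTask(TasksCompanion task) => into(tasks).insert(task);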
https://resocoder.com/2019/06/26/moor-room-for-flutter-tables-queries-fluent-sqlite-database/
CC-MAIN-2020-24
refinedweb
2,057
60.11
.5: Functions as Parameters About This Page Questions Answered: Could a function operate on other functions? How do I pass a function as a parameter to another function? Why is it great that I can do that? Topics: The main topic is higher-order functions, especially ones that take in functions as parameters. We’ll touch on a number of other topics as well, including multiple parameter lists, generating collections, and nesting collections within another. Several examples and practice problems involve image processing. What Will I Do? Read and program. There are numerous small assignments spread throughout. Rough Estimate of Workload:? Three and a half hours. Points Available: A50 + B25 + C15. Related Projects: HigherOrder (new). In one example, we’ll briefly revisit AuctionHouse1, too. Introduction We have established that programs feature: - data — such as numbers, text, and various other objects — which we can store in memory and which we process with: - operations — functions — that do something with data and may be attached to the data that they operate on (like methods are attached to objects in OOP). However, this distinction isn’t as clear-cut as it may have seemed so far. It turns out that functions are data, too: we can store a function in a variable, pass a function as a parameter to another function, or have a function return a function. We’ll soon get to why it might be useful to do that. But first, let’s look at a simple, concrete example. Passing a Function as a Parameter Our short-term plan is: - We’ll define two simple functions — nextand doubled— as examples of operations that take in a single integer and also return an integer. - We’ll define a function named twice, which lets us apply any such Int-to- Intfunction two times. So that this function knows what it should do two times, we pass that function as a parameter to twice. Let’s start at Step 1 and write a couple of perfectly mundane functions. Here’s one: def next(number: Int) = number + 1next: (number: Int)Int next(100)res0: Int = 101 nextis now a function that takes in an Int... Int. In other words, nextis a function of type Int => Int, where the Intto the left of the arrow is the parameter type and the Inton the right is the return type. doubled is also a function of type Int => Int: def doubled(original: Int) = 2 * originaldoubled: (original: Int)Int doubled(100)res1: Int = 200 Defining twice The twice function should receive an operation as a parameter and perform that operation two times. We’d like to be able to use it like this: twice(next, 1000)res2: Int = 1002 Notice: the first parameter of twice is a function! That’s how we indicate which function we want to apply twice. The second parameter is the target of the function applications; it’s a garden-variety integer. Applying next twice produced a number that’s two greater than the input. Doubling a number twice yields a quadruple: twice(doubled, 1000)res3: Int = 4000 Now to define twice: def twice(operation: Int => Int, target: Int) = operation(operation(target))twice: (operation: Int => Int, target: Int)Int Int => Int. That means that we can (and must) pass in a function that takes in a single integer and returns an integer. That is, we pass in a function such as nextor doubled. twicefirst calls the given function on the given integer. Then it calls the same function another time on the return value of the first function call. defto define any function named operation. operationis a regular parameter variable. 
The only remarkable thing about it is that it stores a reference to a function. The command operation(...)therefore calls whichever function was passed to twicethis time. twicerefers to a function, which... Int parameter and returns an `Int; - takes as its second parameter an integer; and - returns an integer. A snack for thought Compare: - You can pass a function as a parameter to another function. - A computer program can take another computer program as input. A compiler, for example, takes in a program and produces a different representation of it. A virtual machine takes in a program and runs it. - In mathematics, differentiation means taking in a function as “input” and producing a derivative function as “output”. Higher-Order Functions as High-Level Abstractions Any function with one or more parameters is an abstraction — a generalization — of all the concrete cases that result from calling the function on different parameter values. Our function doubled, for example, is an abstraction of all possible doublings of an Int. A function that takes another function as a parameter is an abstraction of abstractions. Our function twice, for example, is an abstraction of all possible scenarios where we twice perform an Int-to- Int operation such as doubled or Functions that receive functions as parameters and/or return functions are known as higher-order functions (korkeamman asteen funktio). In contrast, the ordinary functions you already know can be referred to as first-order functions (ensimmäisen asteen funktio). Some programming languages support only first-order functions but there are many languages that let us work with higher-order functions, too. Scala is one of the latter, as you’ve already seen. Additional terms You may hear programmers speak about “functions as first-class citizens” or “first-class functions”. This refers to precisely the idea that we’ve just introduced: you can store functions in variables and use functions as parameters and return functions from other functions just like you can do the same with, say, numbers. In other words, functions being first-class citizens means that there’s more than just first-order functions available in a language. Uses for Higher-Order Functions The twice function we just wrote isn’t too amazing. Higher-order functions may seem like a gimmick with little practical significance. That impression is badly mistaken, however. As we proceed with this and later chapters, you’ll find higher-order functions to be tremendously useful. The short list of examples below should give a some idea of what’s coming. - Scenario:" we want to be able to edit the pixels in an image in various ways, only some of which we know in advance. For instance, we want to be able to transform each pixel in a color photo into grayscale, or soften or brighten an image, or what not. We need a convenient way to say: “Perform this particular operation on every pixel in the image.” - Solution: We call a higher-order method that takes in the pixel-transforming operation as a parameter and performs it on each pixel. - Scenario: We have an object that represents a button in a GUI. We want to be able to say: “When that button is pressed, perform this operation.” - Solution: we call a higher-order method and pass in a function that will be invoked whenever the button is clicked. - Scenario: we need a method that sorts a list of objects — let’s say they are Personobjects. 
As part of the sorting algorithm, the method needs to compare two objects (at a time) so as to determine their correct order. We want to have manifold criteria for sorting; we could sort people by their name or by their year of birth, for example. Therefore, we want a convenient way to write: “Sort these objects; here’s how you should compare the objects this time.” - Solution: we call a higher-order sorting method and pass in a function that takes two objects, requests a particular piece of information from each one (e.g., their names), and uses that information to compare the objects. - Scenario: We have a collection of elements — let’s say each element is a measurement for a scientific study. We want to perform diverse operations on this collection, not all of which we know in advance. - Solution: we use a collection that has a flexible selection of higher-order methods. For instance, we can tell the collection to apply a particular function to each of its elements. Later in O1, you’ll see scenarios just like the ones outlined above. Example: Comparing Strings Our twice function takes in a function of type Int => Int. You can also write higher-order functions that operate on other kinds of functions, of course. As an example, consider string comparison. You can compare strings in different ways. For instance, the three functions below compare two strings by their lengths, by value of the contained numerical characters, and by the strings’ position according to the Unicode “alphabet”, respectively. def compareLengths(string1: String, string2: String) = string1.length - string2.length def compareIntContent(string1: String, string2: String) = string1.toInt - string2.toInt def compareChars(string1: String, string2: String) = string1.compareToIgnoreCase(string2) Let’s write a function areSorted that takes in three strings and reports whether or not they are in the right order. What “right order” means is left for areSorted’s caller to decide: as a fourth parameter, the caller passes in a function that compares a pair of strings according to some criterion. areSorted should work like this: areSorted("Java", "Scala", "Haskell", compareLengths)res4: Boolean = true areSorted("Haskell", "Java", "Scala", compareLengths)res5: Boolean = false areSorted("Java", "Scala", "Haskell", compareChars)res6: Boolean = false areSorted("Haskell", "Java", "Scala", compareChars)res7: Boolean = true areSorted("200", "123", "1000", compareIntContent)res8: Boolean = false areSorted("200", "123", "1000", compareLengths)res9: Boolean = true And here is an implementation for the function: def areSorted(first: String, second: String, third: String, compare: (String, String) => Int) = compare(first, second) <= 0 && compare(second, third) <= 0 compareparameter is “a function that takes two strings and returns an integer”. twice, we could have but did not need to write (Int) => Intas the parameter type.) areSorteduses the compareparameter twice to check whether the values are in order. Example: Searching a Collection Let’s return for a moment to class AuctionHouse from Chapter 5.3 and set ourselves these goals: AuctionHouseobjects should have a method that we can use to get a list of all the open auctions, that is, all the items that haven’t expired or been sold already. AuctionHouseobjects should have a method that we can use to get a list of all the items whose description contains a given word. - We should be able to similarly request other lists of items that match a criterion. 
We should be able to select any criterion we choose. One option would be to write separate methods in AuctionHouse for each specific need: findAllOpenItems, findAllMatchingKeyword, and so on. But that would mean that we should correctly anticipate all the ways in which someone might wish to use our class. A much more flexible solution is to write a generic method findAll that takes in a criterion as a parameter and returns a list of all the items that match the given criterion. We can represent the criterion as a function: class AuctionHouse { private val items = Buffer[EnglishAuction]() // ... other methods here ... def findAll(checkCriterion: EnglishAuction => Boolean) = { val found = Buffer[EnglishAuction]() for (currentItem <- this.items) { if (checkCriterion(currentItem)) { found += currentItem } } found.toVector } } findAlltakes a function as a parameter. That function 1) takes an auction as a parameter; 2) works out whether that auction meets a particular criterion; and 3) returns a Booleanto indicate whether or not the criterion was met. ifin combination with the function we got as a parameter. Now we can use our method: object FindAllTest extends App { def checkIfOpen(candidate: EnglishAuction) = candidate.isOpen def checkIfHandbag(candidate: EnglishAuction) = candidate.description.toLowerCase.contains("handbag") val house = new AuctionHouse("ReBay") house.addItem(new EnglishAuction("A glorious handbag", 100, 14)) house.addItem(new EnglishAuction("Collectible Easter Bunny China Thimble", 1, 10)) println(house.findAll(checkIfOpen)) // finds both auctions println(house.findAll(checkIfHandbag)) // finds only the first auction } In Chapter 6.2, you’ll see that Scala’s collection classes (such as Vector) have a variety of handy higher-order methods that you can use for things like findAll, and much more. Example: Transforming Pixel Colors The notion of transforming an image by applying an operation to each of its pixels already came up. Here’s an example of such an operation: def swapGreenAndBlue(original: Color) = Color(original.red, original.blue, original.green) The Pic class has a higher-order method named transformColors. With this method, we can easily apply this operation to every pixel of an image: val originalPic = Pic("defense.png")originalPic: Pic = defense.png val manipulatedPic = originalPic.transformColors(swapGreenAndBlue)manipulatedPic: Pic = defense.png (transformed) originalPic.leftOf(manipulatedPic).show() transformColorstakes in a function of type Color => Color; here we pass in swapGreenAndBlue. transformColorsapplies that function to each pixel and returns a new image with the resulting colors. Assignment: Color Filters A realistic grayscale filter An operation that is applied to the pixels of an image is often called a filter (suodin). The above program, for instance, implements a filter that swaps blue with green. Another example is a filter that turns a color image into a grayscale one. You can find the code for such a filter in Task1.scala within project HigherOrder. Open that file. Read the code, which resembles the other filter that we just wrote. You’ll also find a short task description; do what it asks you to. A+ presents the exercise submission form here. Creating Pictures with a Higher-Order Function Just like we transformed existing images by applying a function to each pixel, we can apply a function to generate a new image from scratch. 
There’s a tool for that: val size = 256size: Int = 256 def blueGradient(x: Int, y: Int) = Color(0, 0, x.toDouble / (size - 1) * Color.Max)blueGradient: (x: Int, y: Int)Color val pic1 = Pic.generate(size, size, blueGradient)pic1: Pic = generated pic pic1.show() Color.Maxequals the number of different values for each of the RGB components, which is 256 since the values are between 0 and 255.) Pic.generate(a method in Pic’s companion object; Chapter 5.1) to produce a new image. We pass in the desired image’s width and height and a function that will be invoked on each pixel of the new image to determine its color. showmethod displays the image shown on the right. Here’s another example of Pic.generate. In this example, the formula for selecting pixel colors is a bit more involved. def artwork(x: Int, y: Int) = if (x * x > y * 100) Red else if (x + y < 200) Black else if (y % 10 < 5) Blue else Whiteartwork: (x: Int, y: Int)Color Pic.generate(size, size * 2, artwork).show() Try it. Open Task4.scala and do the mini-assignment therein. A+ presents the exercise submission form here. Go ahead and try generating other images as well. Interlude: Functions with Multiple Parameter Lists Before we move on to the rest of the chapter, you should acquaint yourself with a particular feature of the Scala language. So far, we’ve written all the parameters of a function in a comma-separated list within a single pair of round brackets. In other words, these functions have had a single parameter list (parametriluettelo). Many functions do. You can also define a Scala function with multiple parameter lists: def myFunc(first: Int, second: Int)(third: Int, fourth: Int) = first * second + third * fourthmyFunc: (first: Int, second: Int)(third: Int, fourth: Int)Int myFunctwo parameter lists. In effect, we’ve grouped the function’s four parameters in two separate lists. Two pairs of brackets are also needed when we calling that function: myFunc(1, 2)(3, 4)res10: Int = 14 myFunc(1, 2, 3, 4)<console>:9: error: too many arguments for method myFunc: (first: Int, second: Int)(third: Int, fourth: Int)Int It’s occasionally convenient to use multiple parameter lists. We won’t really go into that here, though, and in O1 you won’t need to define functions with multiple parameter lists. However, you will at times need to call some of Scala’s library functions that require you to pass in parameters in multiple lists. Our next example features such a function. Further reading If you want to find out more about why multiple parameter lists make sense, you can start by reading up on currying on the internet. Warning: the sources you find may not be readily understandable based on what we’ve covered in O1 (because they use either a different programming language or features of Scala that we haven’t discussed). You may also wish to look into how multiple parameter lists interact with Scala’s type inference. Creating Collections with a Higher-Order Function The tabulate method Just like we could use Pic.generate to create pictures, we can use a function to generate collections of elements. To that end, Scala provides a method named tabulate. Recall the two simple functions from the top of the chapter: def next(number: Int) = number + 1next: (number: Int)Int def doubled(original: Int) = 2 * originaldoubled: (original: Int)Int Let’s create a vector of integers where each element equals twice its index: Vector.tabulate(10)(doubled)res11: Vector[Int] = Vector(0, 2, 4, 6, 8, 10, 12, 14, 16, 18) tabulatetakes two parameter lists. 
The first specifies the number of elements we want and the second supplies a function that is called on each index to generate the element for that index. tabulaterepeatedly calls the function it receives, passing in each index in turn. Here, doubledhas been called on each of the numbers from 0 to 9. Here’s a similar example with Buffer.tabulate(5)(next)res12: Buffer[Int] = ArrayBuffer(1, 2, 3, 4, 5) As you see, tabulate also works for creating buffers. More examples of tabulate tabulate uses its parameter function on the collection’s indices, which means that the parameter function must take in Ints. The function does not, however, have to return an Int: def parity(index: Int) = index % 2 == 0parity: (index: Int)Boolean val parities = Vector.tabulate(5)(parity)parities: Vector[Boolean] = Vector(true, false, true, false, true) println(parities.mkString("\t"))true false true false true paritychecks whether a given integer is even and returns a Boolean. Booleans. mkStringmethod is often useful for formatting output. Here, by way of example, we’ve used the tabulator character \tto separate the elements in the resulting string. And here is a vectorful of more-or-less ascending random numbers: import scala.util.Randomimport scala.util.Random def randomElement(upperLimit: Int) = Random.nextInt(upperLimit + 1)randomElement: (upperLimit: Int)Int println(Vector.tabulate(30)(randomElement).mkString(","))0,0,1,3,4,3,2,1,4,1,0,11,2,13,12,7,6,8,16,4,7,16,14,4,10,24,19,26,15,24 Practice on tabulate You’ll find a very similar program in Task5.scala. Read the instructions in the A+ presents the exercise submission form here. “Multidimensional” Collections Speaking of tabulate, it sounds like it makes “tables” of things. Why the name? Presumably, the reasoning behind the name is that tabulate is a nice way to create “multidimensional” collections. For instance, say we wish to represent this table of numbers in our program: (Readers who have studied mathematics may see this table as a matrix.) How could we represent this in Scala? Do we need a “two-dimensional vector” with indices for rows and columns separately, or what? To answer that, let’s first decide how we wish to determine the value in each cell of the table. For this toy example, we’ll use this rather arbitrary function: def dataAt(row: Int, column: Int) = row * 10 + column + 3dataAt: (row: Int, column: Int)Int “Multidimensionality” is just nesting val table = Vector.tabulate(2, 3)(dataAt)table: Vector[Vector[Int]] = Vector(Vector(3, 4, 5), Vector(13, 14, 15)) tabulate’s first parameter list: the height and width of the collection. We don’t actually need tabulate for constructing a two-dimensional collection. We can also construct one manually: val twoColumnsFourRows = Vector(Vector(1, 2), Vector(3, 4), Vector(5, 6), Vector(7, 8))twoColumnsFourRows: Vector[Vector[Int]] = Vector(Vector(1, 2), Vector(3, 4), Vector(5, 6), Vector(7, 8)) Frequently asked question: Which index is the row and which is the column? Answer: That depends entirely on how the programmer has nested the collections in the particular program. You can write a program where each inner vector represents a row and lists the elements in each column of that row; you can just as well write a program where each inner vector represents a column and lists the elements of each row in that column. In this chapter, we have happened to use the former style, but there is no hard-and-fast rule for this. 
In fact, it’s not necessary in the first place to use two separate indices and nested collections. We could represent a two-by-three table of numbers with just one single-dimensional vector of six elements, deciding that, say, the indices from 0 to 2 represent the first row and the indices from 3 to 5 the second row. Usually, it’s more convenient to nest collections, though. Depending on circumstances, how you choose to index a collection can have an impact on efficiency. O1’s follow-on courses will say more about that aspect. Looping over nested collections Since a “multidimensional” collection is just a bunch of single-dimensional collections nested inside an outer collection, there is nothing fundamentally new about using such a collection. You can process a nested collection just like you’ve processed other collections. A for loop works, for instance. Our next example first uses tabulate to produce a multiplication table: def multiply(row: Int, column: Int) = (row + 1) * (column + 1)multiply: (row: Int, column: Int)Int val vectorOfRows = Vector.tabulate(10, 10)(multiply)vectorOfRows: Vector[Vector[Int]] = Vector(Vector(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), Vector(2, 4, 6, ..., 20), , ..., Vector(10, 20, 30, ..., 100)) Suppose we now wish to print out this multiplication table row by row. To do that, we can loop over the outer vector. Each of the inner vectors that it contains represents a row in the table. for (numbersOnRow <- vectorOfRows) { println(numbersOnRow.mkString(" vectorOfRowsis a Vector[Vector[Int]], whose elements are vectors, too, ... numbersOnRowhas the type Vector[Int]. Merging Pictures with combine This optional section continues with our image-processing theme and presents additional examples of higher-order methods. Averaging two images Suppose we want to combine these two pictures: val pic1 = Pic("lostgarden/tree-tall.png")pic1: Pic = lostgarden/tree-tall.png val pic2 = Pic("lostgarden/girl-horn.png")pic2: Pic = lostgarden/girl-horn.png One way to combine images is to compute a pixel-by-pixel average of their color values at each position. That is, at each pair of coordinates, we get the color of both images and apply the following function to get the output color at that position: def naiveAverage(color1: Color, color2: Color) = Color((color1.red + color2.red) / 2, (color1.green + color2.green) / 2, (color1.blue + color2.blue) / 2) We also need a way to apply this operation to the target image. That’s easily done with the combine method that is available on Pic objects. This method combines two images using whichever function we pass to it: val combinedPic = pic1.combine(pic2, naiveAverage)combinedPic: Pic = combined pic combinedPic.show() Try the program for yourself; you can find it as Example7.scala. Try running it on other inputs as well, if you feel like it. An image as a stencil for another Another way to combine two images is to use one of them as a “stencil” or “silhouette” that “selects” a shape in the other image. The optional assignment in Task6.scala lets you do that. A+ presents the exercise submission form here. More combinations of images If you enjoyed the previous exercise, you may also want to experiment with this code: val photo = Pic("kid.png").scaleBy(1.3) val drawing = Pic("bird.png") def isBright(color: Color) = color.intensity > 60 def selectColor(c1: Color, c2: Color) = if (isBright(c2)) Black else c1 photo.combine(drawing, selectColor).show() intensitymethod essentially tells you how bright the color is. 
Pure Whitehas an intensity of 255, for example, and pure Blackan intensity of zero. What does the resulting image look like and why? What happens if you give intensity a threshold more or less than 60? Try 20 and 200, for instance. What happens if you swap c1 and c2 in the body of selectCode? Practice on Writing Higher-Order Functions In each of the three programming assignments that conclude this chapter, you’ll use higher-order functions to work on collections. These assignments differ from earlier ones in that now, you won’t just call higher-order functions but also implement them yourself. These programs, too, are in project HigherOrder. Assignment: repeatForEachElement In this assignment, you’ll implement a higher-order function that takes in a function and calls that function on each element in a vector of integers. In Task7.scala, you can find the beginnings of a function definition and a couple of use cases, but the function body is missing. Implement the function so that it works as described. Once you do that, the use cases at the end of Task7.scala will also work and produce the specified output. Instructions and hints: - The second parameter of repeatForEachElementis a function of type Int => Unit. Which is why you can pass in functions like printCubeand printIfPositivewhen you call repeatForEachElement. - You’ll probably want to use a forloop. A+ presents the exercise submission form here. Assignment: transformEachElement In this assignment, you’ll write an entire higher-order function. The function should transform the given buffer’s contents by replacing each existing element into a new one that’s determined by the given function. (This idea is similar to what transformColor did for images, above, except that you’ll now modify the existing buffer “in place” instead of generating a new collection.) For a detailed task description, see Task8.scala. A+ presents the exercise submission form here. Assignment: turnElementsIntoResult In this assignment, you’ll implement one more higher-order function as well as a couple of use cases for it. You can think of this assignment as two steps. As Step 1, define turnElementsIntoResult as instructed in Task9.scala. Hint: use a for loop and a gatherer that tracks the accumulating result. You may also want to take a look at the animation below. Once turnElementsIntoResult correctly produces a sum (which is the use case in the given code), proceed to Step 2. Follow the instructions to define positiveCount and productOfNonZeros and use those two functions in combination with turnElementsIntoResult. A+ presents the exercise submission form here. On Collections and Higher-Order Functions The three tasks above featured very generic higher-order functions that enable you to do a great many things with a collection: you can repeat an operation on each element, transform each element to another, or use the elements to compute a result. With higher-order functions such as these, you can operate on collections without having to write loops: you just pass in a function that says what do with each element, and the higher-order function takes care of repetition. Since functions such as these are so practical, they are also available as part of the Scala API. Very soon, in Chapter 6.2, you’ll see that Scala’s collections have an array of flexible higher-order methods that you’ll find extremely useful. Some of those methods bear a great resemblance to the three functions you just wrote. But I don’t want to def all those function names! 
Perhaps you find it hard to believe that it could be more practical to use higher-order functions than loops to work with collections. Perhaps you find it irritating to define all those parameter functions separately and to come up with contrived names for each one (such as printIfPositive). It’s true that it’s sometimes a pain to have to name each tiny parameter function. But we’ll salve that pain in the next chapter with anonymous functions. Summary of Key Points - Functions are data, too. You can store them in variables, pass them as parameters to other functions, and so forth. - A higher-order function is a function that operates on one or more other functions. Such functions can implement very generic and useful services: you can call a highly abstract higher-order function and pass in another function that specifies precisely what the higher-order function should do. - You can nest collections within another collection. This is one way of representing two-dimensional or multidimensional information. - Links to the glossary: higher-order function; parameter list; filter. grayscale assignment draws inspiration from a similarly themed assignment by Jessen Havill. The hidden-pics assignments are adaptations of a programming assignment published by Nick Parlante and originally conceived by David J. Malan. The two pictures in the image-averaging example are by Daniel Cook, who has published them under the Creative Commons Attribution 3.0 license. The painting from the color-swapping example is The Defense of the Sampo by Akseli Gallén-Kallela. nextsimply adds one to its parameter and returns the result.
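Returning to the repeatForEachElement assignment described earlier: for readers without access to the project files, a minimal sketch along the lines of the given hints (a for loop over the vector) could look like the following. The exact signature expected by Task7.scala may differ, and printCube here is just one of the example parameter functions mentioned in the text:

def repeatForEachElement(elements: Vector[Int], operation: Int => Unit) = {
  for (element <- elements) {
    operation(element)
  }
}

// For instance, printing the cube of each element:
def printCube(number: Int) = println(number * number * number)
repeatForEachElement(Vector(1, 2, 3), printCube)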
https://plus.cs.aalto.fi/o1/2018/w05/ch05/
CC-MAIN-2020-24
refinedweb
4,885
56.15
Welcome to Cisco Support Community. We would love to have your feedback. For an introduction to the new site, click here. And see here for current known issues. I'm trying to configure in a 877W a connection to a DSL. With the configuration below, I can ping from the router any Internet address but from any PC on the network I can't. Can somebody tell me what is wrong? no aaa new-model ! dot11 syslog no ip source-route ip cef no ip dhcp use vrf connected ip dhcp excluded-address 192.168.15.1 ip dhcp pool dpool1 import all network 192.168.15.0 255.255.255.0 default-router 192.168.15.1 dns-server xx.xx.xx.xx no ip domain lookup ip name-server xx.xx.xx.xx archive log config hidekeys interface ATM0 no ip address ip nat outside ip virtual-reassembly no atm ilmi-keepalive pvc 8/32 pppoe-client dial-pool-number 1 ! dsl operating-mode auto interface FastEthernet0 interface FastEthernet1 interface FastEthernet2 interface FastEthernet3 interface Vlan1 ip address 192.168.15.1 255.255.255.0 ip access-group 100 in ip nat inside interface Dialer0 ip address negotiated encapsulation ppp dialer pool 1 dialer-group 1 ppp authentication chap pap callin ppp chap hostname xxx ppp chap password 0 xxx ppp pap sent-username xxx password 0 xxx ip forward-protocol nd ip route 0.0.0.0 0.0.0.0 Dialer0 no ip http server no ip http secure-server ip nat inside source list 1 interface Dialer0 overload access-list 1 permit 0.0.0.0 255.255.255.0 access-list 1 permit 192.168.15.0 0.0.0.255 access-list 100 permit ip any any dialer-list 1 protocol ip permit dialer-list 2 protocol ip permit control-plane line con 0 no modem enable line aux 0 line vty 0 4 scheduler max-task-time 5000 end Can you remove "ip nat outside" from the ATM0 interface config and add "ip nat outside" under the Dialer0 interface and then retest. Jon Now I can ping Internet addresses from computers inside the network and I can also open connections to ftp servers, but I cannot open web sites through the IE. Any idea? I have made a few changes in the configuration, and now I've got new problems. With the conf below the Internet access through LAN goes real slow and the WLAN does not even appear at the computers availables WLANs list. Any suggestion? version 12.4 no service pad service timestamps debug datetime msec service timestamps log datetime msec no service password-encryption hostname cisco boot-start-marker boot-end-marker logging buffered 51200 logging console critical enable secret 5 xxx dot11 ssid cisco1 vlan 1 authentication open authentication key-management wpa wpa-psk ascii 0 xxx username xxx privilege 15 password 0 xxx bridge irb interface Dot11Radio0 no ip route-cache cef no ip route-cache encryption vlan 1 mode ciphers tkip broadcast-key vlan 1 change 45 ssid cisco1 speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0 power local cck 13 power local ofdm 13 power client 13 station-role root bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 spanning-disabled bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding interface Dot11Radio0.1 encapsulation dot1Q 1 native interface BVI1 ip http server ip http authentication local ip http secure-server access-list 1 remark SDM_ACL Category=18 bridge 1 protocol ieee bridge 1 route ip
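In configuration terms, the change suggested above amounts to moving the NAT outside designation from the ATM0 interface to the Dialer0 interface, roughly like this (interface names as in the posted config):

interface ATM0
 no ip nat outside
!
interface Dialer0
 ip nat outside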
https://supportforums.cisco.com/t5/lan-switching-and-routing/internal-routing-problem-877w/td-p/1297122
CC-MAIN-2017-51
refinedweb
623
50.87
std::shared_ptr::owner_before

Checks whether this shared_ptr precedes other in implementation-defined owner-based (as opposed to value-based) order. The order is such that two smart pointers compare equivalent only if they are both empty or if they both own the same object, even if the values of the pointers obtained by get() are different (e.g. because they point at different subobjects within the same object).

This ordering is used to make shared and weak pointers usable as keys in associative containers, typically through std::owner_less.

Parameters

Return value

true if *this precedes other, false otherwise. Common implementations compare the addresses of the control blocks.

Example

#include <iostream>
#include <memory>

struct Foo {
    int n1;
    int n2;
    Foo(int a, int b) : n1(a), n2(b) {}
};

int main() {
    auto p1 = std::make_shared<Foo>(1, 2);
    std::shared_ptr<int> p2(p1, &p1->n1);
    std::shared_ptr<int> p3(p1, &p1->n2);

    std::cout << std::boolalpha
              << "p2 < p3 " << (p2 < p3) << '\n'
              << "p3 < p2 " << (p3 < p2) << '\n'
              << "p2.owner_before(p3) " << p2.owner_before(p3) << '\n'
              << "p3.owner_before(p2) " << p3.owner_before(p2) << '\n';

    std::weak_ptr<int> w2(p2);
    std::weak_ptr<int> w3(p3);
    std::cout
//            << "w2 < w3 " << (w2 < w3) << '\n'  // won't compile
//            << "w3 < w2 " << (w3 < w2) << '\n'  // won't compile
              << "w2.owner_before(w3) " << w2.owner_before(w3) << '\n'
              << "w3.owner_before(w2) " << w3.owner_before(w2) << '\n';
}

Output:

p2 < p3 true
p3 < p2 false
p2.owner_before(p3) false
p3.owner_before(p2) false
w2.owner_before(w3) false
w3.owner_before(w2) false
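To illustrate the associative-container use mentioned above — this example is not part of the reference page — std::owner_less can be supplied as the comparator so that keys are ordered by owner rather than by pointer value; p2 and p3 from the example above would then collide as a single key because they share an owner:

#include <map>
#include <memory>
#include <string>

int main() {
    std::map<std::shared_ptr<int>, std::string,
             std::owner_less<std::shared_ptr<int>>> names;
    auto p1 = std::make_shared<int>(42);
    names[p1] = "first";
    // A second pointer sharing p1's control block maps to the same entry.
    std::shared_ptr<int> alias(p1, p1.get());
    names[alias] = "overwritten";
    return names.size() == 1 ? 0 : 1;
}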
http://en.cppreference.com/w/cpp/memory/shared_ptr/owner_before
CC-MAIN-2018-09
refinedweb
256
56.96
This is a simple question that goes like this: Is the code for variables declared (and instantiated) in libraries executed after or before setup() is called? And here's the longer version: So, I have this code with tact switches. Simple stuff, pinMode with INPUT_PULLUP. The way it works, the buttons are initialized (i.e. call pinMode) directly from code like this:

Buttons.cpp:

// The buttons class goes like this (heavily simplified to just show how it works):
#include "Buttons.h"

TButtons ButtonsHandler(PIN_B1, PIN_B2, PIN_B3, 3); //where PIN_B# is a pin number of course

TButtons::TButtons(byte ButtonPins[], byte ButtonCount){
  for (byte i = 0; i < ButtonCount; i++){
    pinMode(ButtonPins[i], INPUT_PULLUP);
  }
}

byte TButtons::GetButton(){
  for(byte i = 0; i < ButtonCount; i++)
    if (digitalRead(ButtonPins[i]) == LOW)
      return i;
  return 0;
}

Now here's the problem. In my contraption, I'd like to be able to enter a special mode that should happen before anything else. For that I decided to check if a button was pressed or not from startup. So I did this:

In main.ino:

#include "Buttons.h"

void setup(){
  if (ButtonsHandler.GetButton() == PIN_B1)
    GoSpecialMode();
  else
    DoTheRestOfTheStuff();
}

void loop(){
  // and the rest of things.
}

But it doesn't work, it ignores the button being pressed. My guess is that the pin is not yet "functional" when I try to digitalRead() it. If I do this:

void setup(){
  pinMode(Pin_B1, INPUT_PULLUP); //This should have been done already by TButtons' constructor at this time
  delay(50);
  if (ButtonsHandler.GetButton() == PIN_B1)
    GoSpecialMode()
}

it works perfectly. I did check that the object is being instantiated before setup() and it does and sets up everything as it should. So it got me thinking. What is going on? I'd imagine that setup() is called AFTER all the code from libraries is executed, so ButtonHandler should have been initialized by then. Which it does (I logged the order of execution in EEPROM and then checked).
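For context, and not part of the original post: on the classic AVR Arduino core, constructors of global objects run before main(), and main() calls the core's init() before setup(), so anything a constructor does to the hardware happens before timers, the ADC and delay()/millis() are set up — which is one plausible reason hardware poked from a constructor does not always behave as expected. The usual way to sidestep the question entirely is the begin() pattern used by the built-in libraries (Serial.begin(), Wire.begin()): keep the constructor trivial and configure the pins explicitly from setup(). A sketch with names adapted from the post (the member variables and the rest of the class are assumed):

// Buttons.cpp — constructor only stores data, no hardware access yet
TButtons::TButtons(byte buttonPins[], byte buttonCount)
  : ButtonPins(buttonPins), ButtonCount(buttonCount) {}

// Hardware configuration deferred until the core is initialized
void TButtons::begin(){
  for (byte i = 0; i < ButtonCount; i++){
    pinMode(ButtonPins[i], INPUT_PULLUP);
  }
}

// main.ino
void setup(){
  ButtonsHandler.begin();   // pull-ups enabled here, after init()
  delay(10);                // give the pull-ups a moment to settle
  if (ButtonsHandler.GetButton() == PIN_B1)
    GoSpecialMode();
  else
    DoTheRestOfTheStuff();
}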
https://forum.arduino.cc/t/setup-and-order-of-execution-of-code-outside-it/439351
CC-MAIN-2022-33
refinedweb
321
65.52
Annotation The article will familiarize developers of application software with tasks set before them On default by operating system in the article Windows is meant. By 64-bit systems x86-64 (AMD64) architecture is understood. By the development environment – Visual Studio 2005/2008. You may download demo sample which will be touched upon in the article from this address:. Introduction Parallel computing and large RAM size are now available not only for large firmware complexes meant for large-scale scientific computing, but are being used also for solving everyday tasks related to work, study, entertainment and computer games. The possibility of paralleling and large RAM size, on the one hand, make the development of resource-intensive applications easier, but on the other hand, demand more qualification and knowledge in the sphere of parallel programming from a programmer. Unfortunately, a lot of developers are far from possessing such qualification and knowledge. And this is not because they are bad developers but because they simply haven’t come across such tasks. This is of no surprise as creation of parallel systems of information processing has been until recently carried out mostly in scientific institutions while solving tasks of modeling and forecasting. Parallel computer complexes with large memory size were used also for solving applied tasks by enterprises, banks etc, but until recently specificity of development, testing and debugging of such systems by themselves. By resource-intensive software we mean program code which uses efficiently abilities of multiprocessor systems and large memory size (2GB and more). That’s why we’d like to bring some knowledge to developers who may find it useful while mastering modern parallel 64-bit systems in the nearest future. It will be fair to mention that problems related to parallel programming have been studied in detail long ago and described in many books, articles and study courses. That’s why this article will devote most attention to the sphere of organizational and practical issues of developing high-performance applications and to the use of 64-bit technologies. While talking about 64-bit systems we’ll consider that they use LLP64 data model (see table 1). It is this data model that is used in 64-bit versions of Windows operating system. But information given here may be as well useful while working with systems with a data model different from LLP64. Table 1. Data models and their use in different operating systems. Be brave to use parallelism and 64-bit technology Understanding the conservatism in the sphere of developing large program systems, we would like, though, to advise you to use those abilities which are provided by multicore 64-bit processors. It may become a large competitive advantage over similar systems and also become a good reason for news in advertisement companies. It is senseless to delay 64-bit technology and parallelism as their mastering is inevitable. You may ignore all-round passion for a new programming language or optimizing a program for MMX technology. But you cannot avoid increase of the size of processed data and slowing down of clock frequency rise. Let’s touch upon this statemzent picture 1). There is an article on this topic which is rather interesting: "The Free Lunch Is Over. A Fundamental Turn Toward Concurrency in Software" [1]. Picture 1. Rise of clock frequency and the number of transistors on a dice. 
During the last 30 years productivity has been determined by clock frequency, optimization of command execution and cache enlarging. In the next years it will be determined by the number of cores. The development of parallel programming means will become the main direction of programming technologies' development. Parallel programming will allow not only to solve the problem of the slowing down of clock frequency rise, but in general to come to the creation of scalable software which fully uses the hardware it runs on. A 64-bit address space, for its part, means an application does not have to abort because of memory shortage after several hours of work, and it becomes an opportunity to easily work with data arrays of several GB. Sometimes this results in an amazing rise of productivity by excluding access operations to the hard disk. If at least one of these advantages applies to your software, the move is worth the effort.

Provide yourself with good hardware

So, you decided to use parallelism and 64-bit technologies in your program developments. Perfect. So, let's at first touch upon some organizational questions.

If you have to face the development of complex programs processing large data sizes, a developer on an under-equipped machine can feel deeply all the troubles caused by the lack of RAM. Equip the machine so that those processes which took 10 minutes take less than 5 now. I want you once again to pay attention that the aim is not to occupy a programmer with a useful task in his leisure-time but to speed up all the processes in general. Installation of a second computer (dual-processor system) with the purpose that the programmer will switch over to other tasks while waiting is wrong at all. A programmer's labor is not that of a street cleaner, who can clear a bench from snow during a break while breaking ice. A programmer's labor needs concentration on the task and keeping in mind a lot of its elements. Don't try to switch over a programmer (this try will be useless), try to make it so that he could continue to solve the task he's working at as soon as possible. According to the article "Stresses of multitask work: how to fight them" [2], to go deeply into some other task or an interrupted one a person needs 25 minutes. If you don't provide continuity of the process, half the time will be wasted on this very switching over. It doesn't matter what it is – playing ping-pong or searching for an error in another program.

Don't spare money to buy some more GB of memory. This purchase will be repaid after several steps of debugging a program allocating a large memory size. Be aware that a lack of RAM causes swapping and can slow down the process of debugging from minutes to hours.

Don't spare money to provide the machine with a RAID subsystem. Not to be a theorist, here is an example from our own experience (table 2).

Configuration (pay attention to RAID) / Time of building of an average project using a large number of external libraries:
- AMD Athlon™ 64 X2 Dual Core Processor 3800+, 2 GB of RAM, 2 x 250 GB HDD SATA - RAID 0: 95 minutes
- AMD Athlon™ 64 X2 Dual Core Processor 4000+, 4 GB of RAM, 500 GB HDD SATA (No RAID): 140 minutes

Table 2. An example of how RAID influences the speed of an application's building.

Dear managers! Trust me that economizing on hardware is compensated by delays of programmers' work. Such companies as Microsoft provide the developers with the latest models of hardware not because of generosity and wastefulness. They do count their money and their example shouldn't be ignored. At this point the part of the article devoted to managers is over, and we would like to address again the creators of program solutions.
Do demand the equipment you consider necessary for you. Don't be shy; after all, your manager is likely just not to understand that it is profitable for everybody. You should enlighten him. Moreover, in case the plan isn't fulfilled it is you who will seem to be guilty. It is easier to get new machinery than to try to explain what you waste your time on. Imagine how your excuses will sound later. When projecting a new system, the first task, even before the study of theory, is to demand to buy all the necessary hardware and software in good time. Only after that you may begin to develop resource-intensive program solutions efficiently. It is impossible to write and check parallel programs without multicore processors. And it is impossible to write a system for processing large data sizes without the necessary RAM size.

Before we switch over to the next topic, we would like to share some ideas with you which will help you make your work more efficient. Tasks (for example, running large test packs) for which usual machines are not productive enough may be solved by using several special high-performance machines with remote access. Such an example of remote access is Remote Desktop or X-Win. Usually simultaneous test launches are carried out only by a few developers. And for a group of 5-7 developers, 2 dedicated high-performance machines are quite enough. It won't be the most convenient solution but it will be rather saving in comparison to providing every developer with such a workstation.

Causes why the debugger is not so attractive

Debugging resource-intensive applications with a traditional debugger costs a lot of effort and time. The debugger is badly applicable while working with large data sizes: in the debugging variant there occurs allocation of a larger memory size for control of going out of the arrays' limits, memory fill during allocation/deletion etc. This slows down the work even more. One can truly notice that a program may be debugged not necessarily at large working data sizes and one may manage with testing tasks. Unfortunately, this is not so. An unpleasant surprise consists in that while developing 64-bit systems you cannot be sure of the correct work of algorithms, testing them at small data sizes instead of working sizes of many GB. Here is another simple example demonstrating the problem of necessary testing at large data sizes.

#include <vector>
#include <fstream>
#include <iostream>

int main(int argc, char* argv[])
{
  std::ifstream file(argv[1], std::ios::binary);
  file.seekg(0, std::ios::end);
  const std::streamoff fileSize = file.tellg();
  file.seekg(0, std::ios::beg);
  std::vector<char> buffer;
  char c;
  for (int i = 0; i != fileSize; ++i)
  {
    file.get(c);
    if (c >= 'A' && c <= 'Z')
      buffer.push_back(c);
  }
  std::cout << "Array size=" << buffer.size() << std::endl;
  return 0;
}

This program reads the file and saves in the array all the symbols related to capital English letters. On a 32-bit system a file could not be larger than 2 GB, so the int counter was used correctly there, with this limit taken into consideration, and no errors occurred. On the 64-bit system we'd like to process files of larger size, as there is no longer a 2 GB limit on the array's size. Unfortunately, the program is written incorrectly from the point of view of the LLP64 data model (see table 1) used in the 64-bit Windows version. The loop contains an int type variable whose size is still 32 bits. If the file's size is 6 GB, the condition "i != fileSize" will never be fulfilled and an infinite loop will occur. This code is mentioned to show how difficult it is to use the debugger while searching for errors which occur only at a large memory size. On getting an eternal loop while processing the file on the 64-bit system you may take a file of 50 bytes for processing and watch how the function works under the debugger. But an error won't occur at such a data size, and to watch the processing of 6 billion elements under the debugger is impossible.
Of course, you should understand that this is only an example, that it can be debugged easily and that the cause of the loop may be found. Unfortunately, this often becomes practically impossible in complex systems because of the slow speed of the processing of large data sizes. To learn more about such unpleasant examples see the articles "Forgotten problems of 64-bit program development" [3] and "20 issues of porting C++ code on the 64-bit platform" [4].

2) Multi-threading

The method of several instruction threads executed simultaneously for speeding up the processing of large data sizes has been used for a long time and rather successfully in cluster systems and high-performance servers. But only with the appearance of multicore processors on the market is the possibility of parallel data processing being widely used by application software. And the urgency of parallel system development will only increase in the future.

Unfortunately, it is not simple to explain what is difficult about the debugging of parallel programs. Only on facing the task of searching for and correcting errors in parallel systems may one feel and understand the uselessness of such a tool as a debugger. But in general, all the problems may be reduced to the impossibility of reproducing many errors and to the way the debugging process influences the sequence of work of parallel algorithms. To learn more about the problems of debugging parallel systems you may read the following articles: "Program debugging technology for machines with mass parallelism" [5], "Multi-threaded Debugging Techniques" [6], "Detecting Potential Deadlocks" [7].

The difficulties described are solved by using specialized methods and tools. You may handle 64-bit code by using static analyzers working with the input program code and not demanding its launch. Such an example is the static code analyzer Viva64 [8]. To debug parallel systems you should pay attention to such tools as TotalView Debugger (TVD) [9]. TotalView is a debugger for the languages C, C++ and Fortran which works on Unix-compatible operating systems and Mac OS X. It allows to control execution threads, show data of one or all the threads, and can synchronize the threads through breakpoints. It also supports parallel programs using MPI and OpenMP. Another interesting application is the multi-threading analysis tool Intel® Threading Analysis Tools [10].

Use of a logging system

All the tools, both mentioned and remaining undiscussed, are surely useful and may be of great help while developing high-performance applications. But one shouldn't forget about such a time-proved methodology as the use of logging systems. Debugging by the logging method hasn't become less urgent for several decades and still remains a good tool, about which we'll speak in detail. The only change concerning logging systems is the growing demands towards them. Let's try to list the properties a modern logging system should possess for high-performance systems:
- The code providing logging of data in the debugging version must be absent in the output version of a software product. Firstly, this is related to the increase of performance and decrease of the software product's size. Secondly, it doesn't allow to use debugging information for cracking of an application and other illegal actions.
- The logging system's interfaces should be compact, so that they do not clutter the program code with the details of how the logging is carried out and realized.
- It should be possible to write data into the log not only in the debugging build. This allows to include the debugging information into Release-versions, which is important when carrying out debugging at a large data size.
The simplest implementation is a macro which expands to nothing when debugging is turned off; unfortunately, with such a macro one has to write double pairs of brackets, which is often forgotten. That's why let's bring some improvement: declare WriteLog as a function with a verbosity level which turns into a harmless stub when the debugging mode is turned off; then this code doesn't matter at all and you can safely use it in critical code sections.

enum E_LogVerbose {
  Main,
  Full
};

#ifdef DEBUG_MODE
  void WriteLog(E_LogVerbose, const char *strFormat, ...)
  {
    ...
  }
#else
  ...
#endif

WriteLog(Full, "Coordinate = (%d, %d)\n", x, y);

This is convenient in that you can decide whether to filter unimportant messages or not after the program's shutdown, by using a special utility. The disadvantage of this method is that all the information is shown, both important and unimportant, which may influence the productivity badly. That's why you may create several functions of the WriteLogMain, WriteLogFull type and so on, whose realization will depend upon the mode of the program's building.

We mentioned that the writing of the debugging information must not influence the speed of the algorithm's work too much. We can reach this by creating a system of gathering messages, the writing of which occurs in a thread executed simultaneously. The outline of this mechanism is shown on picture 2.

Picture 2. Logging system with lazy data write.

As you can see on the picture, the next data portion is written into an intermediate array with strings of fixed length. The fixed size of the array and its strings allows to exclude expensive operations of memory allocation. A separate thread takes care of the anticipatory writing of the information into the file. The described mechanism provides practically instant execution of the WriteLog function. If there are offloaded processor cores in the system, the writing into the file will be virtually transparent for the main program code.

The advantage of the described system is that it can function practically without changes while debugging a parallel program, when several threads are being written into the log simultaneously. You need just to add a process identifier so that you can know later from what threads the messages were received (see picture 3).

Picture 3. Logging system while debugging multithread applications.

The last improvement we'd like to offer is the organization of the log as a tree of nested levels; it is easiest to show this by an example. Program code:

class NewLevel {
public:
  NewLevel() { WriteLog("__BEGIN_LEVEL__\n"); }
  ~NewLevel() { WriteLog("__END_LEVEL__\n"); }
};

The use of right data types from the viewpoint of 64-bit technologies

The use of base data types appropriate to the hardware platform is an important point of creating correct and efficient code. To such types int, unsigned, long, unsigned long, ptrdiff_t, size_t and pointers can be referred. Unfortunately, there is practically no popular literature or articles which touch upon the problems of choosing types. And those sources which do, for example the "Software Optimization Guide for AMD64 Processors" [12], are seldom read by application programmers.

The urgency of the right choice of base types for processing data is determined by two important causes: the correct work of the code and its efficiency. Due to historical development, the base and most often used integer type in the C and C++ languages is int or unsigned int. It is accepted to consider the int type the most optimal, as its size coincides with the length of the processor's computer word. The computer word is a group of RAM bits taken by the processor at one call (or processed by it as a single group) and usually contains 16, 32 or 64 bits. The tradition of making the int type's size equal to the computer word was broken on 64-bit systems: int remains 32 bits wide in the data models LLP64 and LP64, which are used in the 64-bit Windows operating system and most Unix systems (Linux, Solaris, SGI Irix, HP UX 11).
It is a bad decision to leave the int type's size at 32 bits due to many reasons, but it is really a reasonable way to choose the lesser of two evils. First of all, it is related to the problems of providing backward compatibility. To learn more about the reasons for this choice you may read the blog "Why did the Win64 team choose the LLP64 model?" [13] and the article "64-Bit Programming Models: Why LP64?" [14].

For developers of 64-bit applications all said above is the reason to follow two new recommendations in the process of developing software.

Recommendation 1. Use ptrdiff_t and size_t types for the loop counter and the address arithmetic's counter instead of int and unsigned.

Recommendation 2. Use ptrdiff_t and size_t types for indexing in arrays instead of int and unsigned.

In other words you should use, whenever possible, data types whose size is 64 bits in a 64-bit system. Consequently you shouldn't use constructions like:

for (int i = 0; i != n; i++)
  array[i] = 0.0;

Yes, this is a canonical code example. Yes, it is included in many programs. Yes, with it learning the C and C++ languages begins. But it is recommended not to use it anymore. Use either an iterator or the data types ptrdiff_t and size_t. On 64-bit Windows the int and unsigned types are still only 32 bits wide; consequently they cannot be used instead of ptrdiff_t and size_t types.

Let's show by examples why we are so insistent in asking you to use the ptrdiff_t/size_t types instead of the usual int/unsigned types. We'll begin with an example illustrating the typical error of using the unsigned type for the loop counter in 64-bit code. We have already described a similar example before, but let's see it once again as this error is widespread:

size_t Count = BigValue;
for (unsigned Index = 0; Index != Count; ++Index)
{ ... }

This is typical code, variants of which can be met in many programs. If Count exceeds the range of the 32-bit unsigned type, the loop will never terminate; the correction is to use the size_t type instead of unsigned for the counter.

The next example shows the error of using the int type for indexing large arrays:

double *BigArray;
int Index = 0;
while (...)
  BigArray[Index++] = 3.14f;

This code doesn't seem suspicious to an application developer accustomed to the practice of using variables of int or unsigned types as arrays' indexes. Unfortunately, this code won't work on a 64-bit system if the size of the processed array BigArray becomes more than four billion items. In this case an overflow of the Index variable will occur and the result of the program's work will be incorrect (not the whole array will be filled). Again, the correction of the code is to use ptrdiff_t or size_t types for indexes.

As the last example we'd like to demonstrate the potential danger of the mixed use of 32-bit and 64-bit types, which you should avoid whenever possible. Unfortunately, few developers think about the consequences of inaccurate mixed arithmetic, and the next example is absolutely unexpected for many.

The use of ptrdiff_t and size_t also matters from the viewpoint of productivity. For demonstration we'll take a simple algorithm of calculating the minimal length of the path in a labyrinth. You may see the whole code of the program through this link. In this article we place only the text of the functions FindMinPath32 and FindMinPath64. The FindMinPath32 function is written in classic 32-bit style with the use of unsigned types. The FindMinPath64 function differs from it only in that all the unsigned types in it are replaced with size_t. As the execution times show, the use of the size_t type instead of unsigned allows the compiler to construct more efficient code, working 8% faster!
It is a simple and clear example of how the use of data whose size is not equal to the computer word's size decreases the algorithm's productivity. A simple replacement of the int and unsigned types with ptrdiff_t and size_t may give a great productivity increase. First of all this refers to the use of these data types for indexing arrays, address arithmetic and the organization of loops. We hope that having read all said above you will think about whether you should continue to write:

for (int i = 0; i != n; i++)
  array[i] = 0.0;

To automate the error search in 64-bit code, developers of Windows applications may take into consideration the static code analyzer Viva64 [8]. Firstly, its use will help to find most errors. Secondly, while developing programs under its control you will use 32-bit variables more seldom and avoid mixed arithmetic with 32-bit and 64-bit data types, which will at once increase the productivity of your code. For developers of Unix systems, such static analyzers as Gimpel Software PC-Lint [15] and Parasoft C++test [16] may be of interest.

Additional ways of increasing productivity of program systems

In the last part of this article we'd like to touch upon some more technologies which may be useful for you while developing resource-intensive program solutions.

Intrinsic functions

Intrinsic functions are special system-dependent functions which execute actions impossible to be executed on the level of C/C++ code or which execute these actions much more efficiently. As a matter of fact they allow to avoid using inline assembler, because it is often impossible or undesirable. Programs may use intrinsic functions for creating faster code due to the absence of overhead costs on the call of a usual function type. In this case, of course, the code's size will be a bit larger. In MSDN the list of functions is given which may be replaced with their intrinsic versions. For example, these functions are memcpy, strcmp etc.

In the Microsoft Visual C++ compiler there is a special option «/Oi» which allows to automatically replace calls of some functions with intrinsic analogs. Besides the automatic replacement of usual functions with intrinsic variants, we can explicitly use intrinsic functions in the code. This may be reasonable for the following reasons:
- Intrinsic code is updated together with the compilers, while assembler code has to be updated manually.
- The inline optimizer doesn't work with assembler code, that's why you need exterior linking of the module, while intrinsic code doesn't need this.
- Intrinsic code is easier to port than assembler code.
- The use of intrinsic functions in automatic mode (with the help of the compiler's key) allows to get some per cent of productivity increase for free, and the "manual" mode even more. That's why the use of intrinsic functions is justified.

To learn more about the use of intrinsic functions you may see the Visual C++ team's blog [21].

Data alignment

Data alignment doesn't influence the code productivity as greatly as it did 10 years ago. But sometimes you can get a little profit in this sphere too, saving some memory and productivity.

struct foo_original { int a; void *b; int c; };

This structure takes 12 bytes in 32-bit mode but in 64-bit mode it takes 24 bytes. In order to make this structure take the prescribed 16 bytes in 64-bit mode you should change the sequence order of the fields:

struct foo_new { void *b; int a; int c; };

In some cases it is useful to help the compiler explicitly by defining the alignment manually in order to increase productivity.
For example, data used in SSE instructions must be aligned on a 16-byte boundary. The sources "Porting and Optimizing Multimedia Codecs for AMD64 architecture on Microsoft Windows" [19] and "Porting and Optimizing Applications on 64-bit Windows for AMD64 Architecture" [20] offer a detailed review of these problems.

Files mapped into memory

With the appearance of 64-bit systems the technology of mapping files into memory became more attractive, because the amount of directly addressable data increased. On 32-bit architectures the address space is limited: only a region of the file can be mapped into the address space, and to access such a file by memory mapping, those regions have to be mapped into and out of the address space as needed. On 64-bit Windows you have a much larger address space, so you may map the whole file at once.

Keyword __restrict

One of the most serious problems for a compiler is aliasing. When the code reads and writes memory, it is often impossible at the step of compilation to determine whether more than one pointer is provided with access to this memory space, i.e. whether more than one pointer can be a "synonym" for one and the same memory space. That's why the compiler should be very careful working inside a loop in which memory is both read and written, while storing data in registers and not in memory. This insufficient use of registers may influence the performance greatly. The keyword __restrict is used to make it easier for the compiler to make a decision. It "tells" the compiler to use registers widely.

The keyword __restrict allows the compiler not to consider the marked pointers aliased, i.e. referring to one and the same memory area. In this case the compiler can provide more efficient optimization. Let's look at the example:

int * __restrict a;
int *b, *c;
for (int i = 0; i < 100; i++)
{
  *a += *b++ - *c++; // no aliases exist
}

In this code the compiler can safely keep the sum in the register related to variable "a", avoiding writing into memory. MSDN is a good source of information about the use of the __restrict keyword.

SSE instructions

Applications executed on 64-bit processors (independently of the mode) will work more efficiently if SSE instructions are used in them instead of MMX/3DNow. This is related to the capacity of the processed data. SSE/SSE2 instructions operate with 128-bit data, while MMX/3DNow only with 64-bit data. That's why it is better to rewrite the code which uses MMX/3DNow with an SSE orientation. We won't dwell upon SSE constructions in this article, offering the readers who may be interested to read the documentation written by the developers of the processor architectures.

Some particular rules of using language constructions

64-bit architecture gives new opportunities for optimizing the programming language on the level of separate operators. These are the methods (which have become traditional already) of "rewriting" pieces of a program for the compiler to optimize them better. Of course we cannot recommend these methods for mass use, but it may be useful to learn about them. In the first place of the whole list of these optimizations is manual unrolling of the loops. The essence of this method is clear from its name. Relaxing the floating-point model (the /fp:fast key for Visual C++) can also help, but not always. Another syntax optimization is the use of array notation instead of pointer notation.

Conclusion

Despite the fact that you'll have to face many difficulties while creating program systems which use the hardware abilities of modern computers efficiently, it is worthwhile. Parallel 64-bit systems provide new possibilities in developing real scalable solutions.
They allow to enlarge the abilities of modern data processing software tools, be it games, CAD systems or pattern recognition. We wish you luck in mastering new technologies!

References
1. Herb Sutter. The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software.
http://www.gamedev.net/page/resources/_/technical/general-programming/development-of-resource-intensive-applications-r2496
CC-MAIN-2016-30
refinedweb
4,734
52.29
Introducing Shiny for Python – R Shiny Now Available in Python Want to share your content on python-bloggers? click here. Shiny has been an R-exclusive framework from day one. Well, not anymore. At the recent Posit (RStudio) Conference, Posit’s CTO Joe Cheng announced Shiny for Python. Finally bringing the widely popular web framework to Python. As of August 2022, you really shouldn’t use Shiny for Python for mission-critical apps in production. The whole library is still in the very early stages, and a lot is expected to change in the future. Nevertheless, there’s no better time to prepare for the future and maybe (just maybe) even say adios to Dash and Streamlit. Did you know RStudio is rebranding to Posit? Learn why in our latest blog post. Table of contents: - What is Shiny and Shiny for Python (Py Shiny)? - How to Install and Configure Shiny in Python Ecosystem - Make Your First Shiny Dashboard in Python - Summary of Shiny for Python (Py Shiny) What is Shiny and Shiny for Python (Py Shiny)? In case you’re new to the topic, Shiny is a package that makes it easy to build interactive web applications and dashboards. It was previously limited to R programming language, but Posit PBC (formerly RStudio PBC), the creators of Shiny, announced Shiny for Python. The R/Python packages are available to download on CRAN/PyPi, so getting started boils down to a single shell command. Shiny applications can be deployed as standalone web pages or embedded into R Markdown documents (R only). Overall, the deployment procedure is simple and can be done for free to an extent. Deployment sounds like a nightmare? Here are 3 simplified ways to share Shiny apps. You can also extend Shiny apps with CSS themes, HTML widgets, and JavaScript actions, in case you ever run into the limitations of the core package itself. Long story short, there’s nothing you can’t do in Shiny. That claim has more weight on the R side of the story as of now, but Python support will only get better with time. Up next, let’s see how to install the library. How to Install and Configure Shiny in Python Ecosystem Installing Shiny in Python is as simple as installing any other Python package since it’s available on PyPi. Run the following command from the shell: pip install shiny In addition, you should also install the following packages if you want to follow along with the code: numpy: Fast and efficient math in Python pandas: Python library for handling data matplotlib: A standard visualization library pandas_datareader: Needed to fetch stock prices from the web (only for this article) jinja2: Needed to render Pandas DataFrames in Shiny apps (only for this article) Run the following command to install them all: pip install numpy pandas matplotlib pandas_datareader jinja2 Once that’s out of the way, you can use a shell command to set up a directory structure for the Shiny application. For example, the one below will create a folder demo_app with app.py inside it: shiny create demo_app Finally, you can run app.py with the following command: shiny run --reload demo_app/app.py Image 2 – Running a Shiny for Python application It will run the Shiny app locally on port 8000. Once opened in the browser, it looks like this: Image 3 – The default Shiny for Python application The app doesn’t look bad and shows you how easy it is to get started with Shiny in Python. The underlying source code can be found in app.py. 
Here's what it contains:

from shiny import App, render, ui

app_ui = ui.page_fluid(
    ui.h2("Hello Shiny!"),
    ui.input_slider("n", "N", 0, 100, 20),
    ui.output_text_verbatim("txt"),
)

def server(input, output, session):
    @output
    @render.text
    def txt():
        return f"n*2 is {input.n() * 2}"

app = App(app_ui, server)

If you have any experience in R Shiny, this Python script will look familiar. Sure, Python doesn't use $ to access methods and properties, and also uses function decorators for rendering. It's a different syntax you'll have to get used to, but it shouldn't feel like a whole new framework. Next, we'll code a Shiny dashboard from scratch to get familiar with the library.

Make Your First Shiny Dashboard in Python

We're going to make a simple stock monitoring app. It will allow you to select a stock and inspect its 30-day performance, both visually and through a table. The table will show more metrics, such as open and close price, volume, and so on. The chart will show only the adjusted close price per day. The data is fetched from Yahoo Finance through the pandas_datareader library. It will automatically download single stock data based on a selected stock name, and rerender the table and chart as soon as you make any change. We've also decided to include a bit of custom CSS, so you can see how easy it is to tweak the visuals. Here's the full code snippet for app.py:

import matplotlib.pyplot as plt
from datetime import datetime, timedelta
from shiny import App, render, ui, reactive
from pandas_datareader import data as pdr

plt.rcParams["axes.spines.top"] = False
plt.rcParams["axes.spines.right"] = False

# Configuration
time_end = datetime.now()
time_start = datetime.now() - timedelta(days=30)
tickers = {"AAPL": "Apple", "MSFT": "Microsoft", "GOOG": "Google", "AMZN": "Amazon"}

# App UI - One input to select a ticker and two outputs for chart and table
app_ui = ui.page_fluid(
    # Adjust the styles to center everything
    ui.tags.style("#container {display: flex; flex-direction: column; align-items: center;}"),
    # Main container div
    ui.tags.div(
        ui.h2("Historical Stock Prices"),
        ui.input_select(id="ticker", label="Ticker:", choices=tickers),
        ui.output_plot("viz"),
        ui.output_table("table_data"),
        id="container"
    )
)

# Server logic
def server(input, output, session):
    # Store data as a result of reactive calculation
    @reactive.Calc
    def data():
        df = pdr.get_data_yahoo(input.ticker(), time_start, time_end)
        return df.reset_index()

    # Chart logic
    @output
    @render.plot
    def viz():
        fig, ax = plt.subplots()
        ax.plot(data()["Date"], data()["Adj Close"])
        ax.scatter(data()["Date"], data()["Adj Close"])
        ax.set_title(f"{input.ticker()} historical prices")
        return fig

    # Table logic
    @output
    @render.table
    def table_data():
        return data()

# Connect everything
app = App(app_ui, server)

Once launched, here's what the dashboard looks like:

Image 4 – Stock monitoring Shiny dashboard

The app is relatively simple but shows you how to work with UI elements and render tables and charts. You can see how we stored fetched stock data as a reactive expression and used it when rendering both UI elements. That is a preferred way over fetching the data twice. You now know how to make basic Shiny apps in Python, so let's wrap things up next.
You may find some things buggy, so we don’t advise using it in production environments at the moment. It will get better and more feature-rich with time, so stay tuned to our blog for updates. P.S. – With Shiny for Python you can also run applications entirely in the browser using an experimental mode: Shinylive. We’ll explore this topic in future posts so be sure to subscribe to our Shiny Weekly newsletter. What are your thoughts on Shiny for Python? Is it something you’ve been waiting for or you’re happy to stick with R? Please let us know in the comment section below. Also, feel free to continue the discussion on Twitter – @appsilon. We’d love to hear your thoughts. Want to use R and Python together? Here are 2 packages you must try. The post Introducing Shiny for Python – R Shiny Now Available in Python appeared first on Appsilon | Enterprise R Shiny Dashboards. Want to share your content on python-bloggers? click here.
https://python-bloggers.com/2022/08/introducing-shiny-for-python-r-shiny-now-available-in-python/
CC-MAIN-2022-40
refinedweb
1,336
65.12
In previous tutorials, all the variables covered had been scalar. That means they hold one value each. But there are many occasions when being able to hold a list of values or even a two dimensional list (grid) or three dimensions can be very useful. This list of values is called an array and it holds the specified number of values all of the same type. If a single int occupies 4 bytes then an array of 10 ints is 40 bytes in size. And they are held together in one place in RAM. Some Rules With Arrays Arrays are declared like this: type variable [number of elements] syntax. For example 10 ints in a variable called primes are declared like this: int primes[10]; You can initialize in the declaration by putting the values inside {} and separating them with commas. int primes[10]={2,3,5,7,11,13,17,19,23,29}; You don’t have to initialize every value. If you only wanted to initialize the first 8, then this would do. The last two elements are zero. int primes[10]={2,3,5,7,11,13,17,19}; Finally you can make it read-only by putting a const on the front. const int primes[10]={2,3,5,7,11,13,17,19,23,29}; The compiler will generate a syntax error if you try to modify any element of the primes array. Here’s an example program. #include <stdio.h> const int primes[10]={2,3,5,7,11,13,17,19,23,29}; int main(int argc, char* argv[]) { int i; int numelements= sizeof(primes)/sizeof(int) ; for ( i=0; i<numelements; i++) { printf("Primes[%d] = %d\n\r",i,primes[i]) ; } return 0; } When run this outputs: Primes[0] = 2 Primes[1] = 3 Primes[2] = 5 Primes[3] = 7 Primes[4] = 11 Primes[5] = 13 Primes[6] = 17 Primes[7] = 19 Primes[8] = 23 Primes[9] = 29 I’ve introduced the for statement that lets you create loops and sizeof() function that returns the size of any variable or type. I will cover for loops in detail a future tutorial on loop statements but I’ll explain what it does below. Let’s look at some of this code to try and understand what it does: int numelements= sizeof(primes)/sizeof(int); This declares the int variable numelements and sets it to the sizeof(primes) which is 40 divided by sizeof(int) which is 4, so numelements is set to 40/4 = 10. I could do this explicitly that’s not always the best thing to do. Here’s what that would look like: int numelements = 10; But doing it the sizeof() way is usually better. If I change the number of elements in the array say to 20, I don’t have to remember to change it because the sizeof() calculations determines it correctly. Had it explicitly been declared numelements = 10 then I’d have to remember to change that as well. The for loop starts with 0 and counts up as long as i is less than numelemenets which means it goes u to nine. This is a very important thing to remember, when we count elements in arrays, they start at 0. Arrays start at 0 Most computer languages, but not all e.g. Visual Basic start arrays with the first element at index 0. I always remember this as the first element is zero distance from the start of the array. Visual Basic starts at 1. So in a ten element array the indices run from 0 to 9, not 1 to 10. For Loops This is a very convenient way to index through the elements of an array. It uses an index variable and has three parts all within brackets and separated by semi-colons ; for ( i=0; i<numelements; i++) { In the example, snipped above the three parts of the for loop are: - i=0 - i<numelements - i++ The first part is where the loop index is initialized. i=0. You can leave this blank if the loop variable has already been initialized. 
So this works as well: int i=0; for (;i<numelements; i++) { The second part is where the loop is checked to see if it has finished. Because the last element is at index 9 (not 10), this needs to check if i < numelements. If this is true the loop continues. Finally the third part is where the loop variable is incremented. It’s traditional to increment but you can call any statement or function here, so long as the loop variable is altered. If you don’t it will loop forever or until it crashes. Below is an infinite loop and you will have to wait forever for it to finish. Clearly this is not a good thing. for (;;) { ... } Any or all three parts can be empty as the infinite loop above shows. However we do have a way of breaking out of a loop early using the statement break. I’ll cover that in a future tutorial on looping. The Array Size is always Static The size must always be specified at compile time, so you can’t do this: int a= 50; int values[a]; Because arrays always have to be declared statically, you have to use things call pointers to have dynamic arrays, i.e. arrays that change size at runtime. When we get to pointers in a future tutorial, you’ll see how dynamic data structures can be created. You can use a #define but it’s still statically declared. Like this: #define TENCATS 10 int values[TENCATS]; Arrays have dimensions So far all the arrays I’ve described have been single dimension arrays, like houses in a street. But you can have two, three or many dimensions. The only limit is usually memory. I suspect some compilers possibly also limit you to 256 dimension arrays as it would be a logical limit. To declare a multi dimension array you put how many elements there are in each dimension, in square brackets. Imagine a set of cubes 2 high, three wide and four deep. You’d declare it as a three dimension array like this: int cubes[2][3][4]; That has 2 x 3 x 4 = 24 ints. Likewise you might declare a two-dimension chess board with 8 x 8 squares as int board[8][8]; What is a #define? It’s a way of giving a name to a piece of text. Before the C compiler does its magic, it runs a program called a pre-processor through your source code looking for things like #defines. Everywhere it finds one, it replaces it with the text, In the example above I created a #define called TENCATS which has the text 10. Note, I use uppercase for TENCATS. You don’t have to, it is a convention rather than a hard and fast rule, but it tells anyone reading the program that it is a #define. And the text that is represents is not put inside quotes or double quotes. When the compiler compiles int values[TENCATS] the pre-processor first changes its copy of the source in memory so every instance of TENCATS that it finds becomes becomes 10, and it actually compiles int values[10]. Why does that matter? Why not just use 10? Because we would typically access this array using a loop and we need to know how many elements there are. If you used 10 instead of a #define TENCATS and later needed to change it to 50, then you would have to go through your program and change every relevant 10 to 50. You might change a wrong 10 or maybe overlook one. This can introduce unnecessary bugs. If you use the #define then you only need to change the value once e.g. #define TENCATS 50. The pre-processor will then substitute 50 everywhere that it finds TENCATS. That’s it for arrays though we’ll revisit them when we look at pointers. In the next tutorial I’ll look at structs. Link to previous tutorial. Link to next tutorial.
https://learncgames.com/tutorials/tutorial-four-all-about-arrays-in-c/
CC-MAIN-2021-43
refinedweb
1,361
71.34
Breaking up not so hard to do

Separating from a class you don't control

If I were a design pattern, I think I would be the Adapter. Sure, I thought I could be a Command pattern, but I don't like people telling me when to do things. In Robert C. Martin's latest article The Adapter Pattern he starts with the following:

public class Button {
   private Light light;
   public Button(Light light) {
     this.light = light;
   }
   public void press() {
     light.turnOn();
   }
}

Uncle Bob asks: if you cannot modify Light, then how do you decouple the Button from the Light? In the code snippet the Button knows about the turnOn() method in the Light. He refers to his first solution as the object version of the Adapter. He introduces an interface named Switchable that the Button knows about. He then introduces a LightAdapter class that implements Switchable and contains a handle to a Light object. The second pass is the class form of adapter. As before, the Button depends on an object of type Switchable (an interface). This time, however, the LightClassAdapter implements Switchable and extends Light. After railing against the limitations of single inheritance, Bob shows how to implement object-form Adapters as anonymous inner classes. So once I realized it was too much work to be an Adapter myself, I was thinking I could be a Chain of Responsibility pattern.

In Projects and Communities, the JavaWorld article on the Chain of Responsibility discusses pitfalls and how to improve on the classic implementation. The Jini community announces the update to Jan Newmarch's Guide to Jini Technologies that now covers the Jini 2.0 release.

In Also in Java Today, Srini Penchicala has written a guide to Monitoring and Managing Tomcat Clusters Using JMX. His introduction says "the Tomcat 5 servlet container provides built-in support for monitoring server components using the Java Management Extensions (JMX) API. This article concentrates on the clustering and load balancing components, and provides an overview of configuring and using JMX MBeans in managing the elements in a J2EE cluster. You'll also see how to use the JMX API to automate various administration tasks in the Tomcat container."

At some point, usually early in your Java career, you have wrestled with classpath. The article IBM Cloudscape: Understanding Java class path takes Java newbies who are Cloudscape users through the fun of setting up this environment variable. Cloudscape is a small Java database.

Alex Toussaint reports JSR 94: Java Rule Engine API - Final in today's Weblogs. This was his first JSR and he reports that "On the one hand, you have great discussions around the technology itself and trying to figure out some of the best ways of doing certain things. You also get to work with very smart people who have different perspectives which helps open your horizons. All of this is cool. On the other hand though... you have to deal with the legal stuff like distribution licenses, export controls, and achieve compromises in order to move things forward to a final state."

You don't want to hassle with doing something about an exception so you put in an empty catch block and figure it's been taken care of. In today's Forums, moderator Ron Hitchens writes "Empty catch blocks are ticking time bombs. Do you ever silently swallow exceptions? How many times have you run across this in other peoples code? Are catch blocks that just print a stack trace and keep going any better? Do IDE tools that auto-generate empty try/catch blocks exacerbate this problem?"
Johnm writes on referring to objects by their interfaces, saying "Refactoring is a marvelous tool that we in the software business have but even refactoring is not a silver bullet. There are some categories of risk for which the pro-active mitigation costs are much smaller than the costs of dealing with the potential negative outcomes."

In today's java.net News Headlines: What's happening in the Java Community Process live chat - August 19; Fort Worth Java User's Group: Runtime Code Generation for Java.
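To make the shape of the thing concrete, here is a rough sketch of the object-form adapter described above; the method name on Switchable and the exact wiring are my own guesses rather than Uncle Bob's actual listing:

interface Switchable {
  void turnOn();
}

// Object form: the adapter holds and wraps the Light it is given
class LightAdapter implements Switchable {
  private final Light light;
  LightAdapter(Light light) {
    this.light = light;
  }
  public void turnOn() {
    light.turnOn();
  }
}

// The Button now depends only on the Switchable interface
class Button {
  private final Switchable switchable;
  Button(Switchable switchable) {
    this.switchable = switchable;
  }
  public void press() {
    switchable.turnOn();
  }
}

// Wiring it up: new Button(new LightAdapter(new Light()));

With that arrangement the Button compiles against Switchable alone and no longer knows that a Light exists. The class-form variant would instead have a LightClassAdapter extend Light and implement Switchable, trading the extra object for the one inheritance slot Java gives you.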
https://weblogs.java.net/blog/editors/archives/2004/08/breaking_up_not.html
CC-MAIN-2015-40
refinedweb
689
62.98
I have a dataset of B x T x C, where B is batches, T is timestep (uneven), and C is characters (uneven). I would like to use EmbeddingBag to get a mean-embedding of each timestep of characters. For example, let's say I have three datapoints in my batch:

- [[], [0, 4], [1, 1], [5]] - This has 4 time steps, and 0, 2, 2, 1, respectively, characters in each timestep.
- [[1], [2, 3]] - This has 2 time steps, and 1, 2, respectively, characters in each timestep.
- [[2, 4, 5], []] - This has 2 timesteps, and 3, 0, respectively, characters in each timestep.

So let's init that:

all_tensors = [[[], [0, 4], [1, 1], [5]], [[1], [2, 3]], [[2, 4, 5], []]]

And I know this is what I want my embedder to look like:

embedder = torch.nn.EmbeddingBag(num_embeddings=6, embedding_dim=2, mode='mean')

And…this is where I am stuck. Is there a good tutorial for how this problem should be approached?

Edit: Think I got a bit closer…

import numpy as np
import torch
import torch.nn.utils.rnn as rnn_utils

def pad_array(base_input):
    # pad every timestep's character list to length 5 with zeros
    for index1, datapoint in enumerate(base_input):
        base_input[index1] = torch.LongTensor(
            np.asarray([np.pad(a, (0, 5 - len(a)), 'constant', constant_values=0)
                        for a in datapoint]))
    return base_input

all_tensors = [[[], [0, 4], [1, 1], [5]], [[1], [2, 3]], [[2, 4, 5], []]]
paddedchar_tensors = pad_array(all_tensors)
paddedchar_tensors = rnn_utils.pad_sequence(paddedchar_tensors, batch_first=True)

This gives me paddedchar_tensors as:

tensor([[[0, 0, 0, 0, 0],
         [0, 4, 0, 0, 0],
         [1, 1, 0, 0, 0],
         [5, 0, 0, 0, 0]],

        [[1, 0, 0, 0, 0],
         [2, 3, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]],

        [[2, 4, 5, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]])

But once again, I am stuck, running this through the EmbeddingBag gives me this error:

ValueError: input has to be 1D or 2D Tensor, but got Tensor of dimension 3
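For what it's worth, the error at the end is just EmbeddingBag rejecting 3D input: it only accepts a 1D index tensor (with offsets) or a 2D batch of equal-length bags. A sketch that at least runs with the padded tensor above is to fold batch and time together, embed, and unfold again; note that this treats the zero padding as a real index-0 character inside every mean, so a mask or the 1D-plus-offsets form would be needed for an exact per-timestep mean:

B, T, L = paddedchar_tensors.shape            # (3, 4, 5) for the example above
flat = paddedchar_tensors.reshape(B * T, L)   # one bag per (datapoint, timestep)
out = embedder(flat)                          # shape (B*T, embedding_dim)
out = out.reshape(B, T, -1)                   # back to (B, T, embedding_dim)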
https://discuss.pytorch.org/t/how-to-use-embeddingbag-with-uneven-3d-data/97530
CC-MAIN-2022-40
refinedweb
315
58.86
Hi, I want to display an output in C for a limited amount of time and then make it disappear. Any ideas how to do it? Thanks

But here is a quick reference for it, hope it helps ^_^ ctime (time.h) - C++ Reference. I didn't see though anything concerning displaying the output for a time period and then taking it off. Maybe make a print statement with a bunch of carriage returns after the data you want shown disappears? Sorry, I have no idea, but hope that gets you started in the right direction ^__^

"output" is a very general term. Do you mean output to the console? Or to a gui window? What OS are you using?

Do you know any way to clear the screen? That is essential to your goal.

It depends on what you mean by "disappear" too. For example, by disappear, do you mean the whole content of the terminal screen being cleared? A simple way to achieve that is to output a bunch of newlines ('\n') until the previous output scrolls off the top of the screen. Or do you just mean that a specific line is overwritten somehow? There are no universal techniques. If you are using a particular API to produce the original output (for example, win32 console functions, or curses under unix) then you need to look up techniques that involve the same API. If you are just writing directly to stdout or stderr, you might get away with the following. A backspace character ('\b') moves the cursor one left, a space character (typically) will overwrite any character at the cursor location and move one right, a return ('\r') will move to the left margin without a newline. As long as you keep the cursor on the same line as the output you want to overwrite, and keep track of how many characters you have written to that line, combinations of backspace, return, and space characters can be used to selectively overwrite that output. All bets are off with this technique if you have outputted a newline, as there is no general way from vanilla C to move the cursor up one line. Note that this technique is also affected by configuration settings of the display itself (the display or, in your case, the ubuntu terminal window). It can also be upset if you mix up usage of stdout and stderr.

In sys/time.h there is a structure "timeval" with members "tv_sec" and "tv_usec". Use the gettimeofday() function to get the time value into the above variables. After this maybe you could track the value of time using a temporary variable and a loop, and use the idea grumpy gave to make the output disappear. Hope this helps. Gaurav
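A minimal sketch of the overwrite idea described above, assuming a POSIX system for sleep() (on Windows, Sleep() from windows.h would take its place):

#include <stdio.h>
#include <string.h>
#include <unistd.h>   /* sleep() - POSIX */

int main(void)
{
    const char *msg = "This message will vanish";

    /* print without a trailing newline so the cursor stays on the same line */
    printf("%s", msg);
    fflush(stdout);              /* make sure it actually appears */

    sleep(3);                    /* keep it visible for about 3 seconds */

    /* return to the left margin, blank the text with spaces, return again */
    printf("\r%*s\r", (int)strlen(msg), "");
    fflush(stdout);

    return 0;
}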
http://cboard.cprogramming.com/c-programming/145770-display-output-limited-time.html
CC-MAIN-2015-48
refinedweb
546
70.63
This is the first of a multi-part entry on the Bubble Chart from the Silverlight Toolkit. I became interested in this chart when I was working on my soon-to-be-released videos on the PieChart and the Column Charts, which share a common idiom and form.

<!-- pseudo chart layout -->
<charting:Chart x:Name="<name>" Margin="<margin>" Grid.Row="<row>" Grid.ColumnSpan="<columnSpan>">
   <charting:[PieSeries | ColumnSeries | BubbleSeries]
      IndependentValueBinding="{Binding <independent>}"
      DependentValueBinding="{Binding <dependent>}" />
</charting:Chart>

For a pie chart whose name is DrillDown, if you want your margin to be 1 and you want to place it in row 1 of the grid with a column span of 3, and if you have a data object you will bind to that has properties Letter and Count, your Xaml will look like this (note the substitutions for the values in angle brackets):

<charting:Chart x:Name="DrillDown" Margin="1" Grid.Row="1" Grid.ColumnSpan="3">
   <charting:PieSeries
      IndependentValueBinding="{Binding Letter}"
      DependentValueBinding="{Binding Count}" />
</charting:Chart>

Creating the Column Chart just substitutes a ColumnSeries for a PieSeries:

<charting:Chart x:Name="DrillDown" Margin="1" Grid.Row="1" Grid.ColumnSpan="3">
   <charting:ColumnSeries
      IndependentValueBinding="{Binding Letter}"
      DependentValueBinding="{Binding Count}" />
</charting:Chart>

Easy as, er, pie.

Who took my ItemSource?

The code behind is a great place to put the ItemsSource, which can be a collection of any object that has the two properties Letter and Count. (For the video I used the same set of words that I generate for the other controls, and created a dopey useful little class that counts how many words begin with each letter of the alphabet.) The key code in wiring this up looks like this:

PieSeries pieSlice = DrillDownChart.Series[0] as PieSeries;
pieSlice.ItemsSource = frequencyCounters;

where frequencyCounters is defined as

private List<FrequencyCounter> frequencyCounters;

and FrequencyCounter itself is defined (in part) as

public class FrequencyCounter
{
    public int Count { get; set; }
    public char Letter { get; set; }
    // ...
}

There is also a static method to do the counting, but we can ignore that for now. If you feed this collection to the pie chart, zap (xap?) instant analysis of how many words begin with each letter (essential for quantitative analysis of meaningless information). Swap in the column chart and the same information now shows the letters distributed in columns.

The Bubbles Chart

What caught my attention when I was building these examples, however, was the Bubbles chart. At first glance, it seemed to follow the same pattern:

<charting:Chart x:Name="DrillDown" Margin="1" Grid.Row="1" Grid.ColumnSpan="3">
   <charting:Chart.Series>
      <charting:BubbleSeries
         IndependentValueBinding="{Binding Letter}"
         DependentValueBinding="{Binding Count}"
         SizeValueBinding="{Binding Count}" />
   </charting:Chart.Series>
</charting:Chart>

But wait! Note that there are now three values! That third sets the size of the bubble, and in the examples shown on the Toolkit site it is set to the same binding as the dependent value, so that is what I've done tonight (it's late, I'm tired, and let's start easy). It makes for a very interesting chart, where the size of the bubble represents the size of the dependent value (in this case the count) and has a lot of appeal for those of us who like this kind of representation in cloud tagging.

But What About A Third Axis Of Information?

The obvious question, though, is whether this doesn't also offer the opportunity to represent a third piece of information. For example, might we not use the x axis to represent the Tag, and the Y axis to represent the number of articles on the tag, and the size of the bubble to represent the average length of the article? Or perhaps the size of the bubble could represent the average rating of articles on that tag. That latter approach would give the user a pretty good idea at a glance if a blog was writing about what people care about. Large bubbles high up in the chart would mean a very responsive blog (lots of articles on popular topics). One could begin to talk about blogs that float and blogs that sink. I will pursue this tomorrow.
This is exciting.
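A guess at what that three-axis markup might look like, assuming the BubbleSeries exposes the bubble size as its own binding (the SizeValueBinding name below is an assumption, and TagStat with Tag, ArticleCount and AverageRating is a made-up data object, not something from the Toolkit samples):

<charting:Chart x:Name="TagActivity" Margin="1">
   <charting:Chart.Series>
      <charting:BubbleSeries
         IndependentValueBinding="{Binding Tag}"
         DependentValueBinding="{Binding ArticleCount}"
         SizeValueBinding="{Binding AverageRating}" />
   </charting:Chart.Series>
</charting:Chart>

Big bubbles near the top of such a chart would be the at-a-glance signal described above: lots of articles on topics readers rate highly.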
https://jesseliberty.com/2008/12/17/bubble-chart/
CC-MAIN-2021-25
refinedweb
624
65.96
Isolated, Shallow, Integrated, and End to end testing. What is it and when do you use it? In Angular, we have a lot of different types of testing. I can imagine that you have a lot of questions when you see them. Better said, I had that question, so I went into a deep dive to make it all clear. At the end of this post, I hope this all makes sense for you as well.

Test tooling for Angular

In Angular, we use a few different tools to set up automated testing. If you have created the Angular project with the CLI and didn't say it should ignore the testing, then the testing capabilities of the CLI are set up for you. By default, the Angular CLI sets Jasmine as our testing framework and Karma as our test runner. But if you want to make use of Jest or another testing framework, you are free to do so.

Types of Testing in Angular

In Angular, we have 4 different main types of testing.
• Isolated unit testing
• Shallow unit testing
• Integration testing
• End to end testing

Isolated unit testing

A unit might contain business logic that needs to be tested in isolation. There are a few Angular units that can be tested in isolation.
• Pipe
• Service
• Class
• Component
• Directives

In isolation we should always mock our dependencies, otherwise it isn't isolation anymore.

import { FormComponent } from './my-form.component'

describe('NgqFormComponent', () => {
  let component: FormComponent
  let mockApiService

  beforeEach(() => {
    mockApiService = jasmine.createSpyObj([
      'logout',
      'init',
      'getApiJson',
      'getCurrentRoute',
      // add more here
    ])
    component = new FormComponent(mockApiService)
  })

  it('should be defined', () => {
    expect(component).toBeDefined()
  })
})

For example, the FormComponent has a dependency on the apiService. So we mock it with a createSpyObj from Jasmine, which makes a mock service with all the public methods that our real service has. Thanks to this method we can also test if a certain method is being called. When we need to overwrite a method, we can do that too. In isolated unit testing, we don't test the template parts of a component, only the logic behind it. In this test, we test that all the methods have the expected behavior.

Shallow unit testing

With shallow unit testing, we test a component with a template, but we ignore the rendering of child components by passing NO_ERRORS_SCHEMA into our configuration of the test module.

beforeEach(async(() => {
  TestBed.configureTestingModule({
    declarations: [
      FormComponent,
    ],
    imports: [
      BrowserModule,
    ],
    schemas: [NO_ERRORS_SCHEMA]
  })
  .compileComponents();
}));

This will make sure that we don't get any problems with errors from not loading child components.
Integration testing

With integration testing, we test how two or more components work together; the components involved are declared together in the testing module.

beforeEach(async(() => {
  const todo1 = new TODOItem('Buy milk', 'Remember to buy milk');
  todo1.completed = true;
  const todoList = [
    todo1,
    new TODOItem('Buy flowers', 'Remember to buy flowers'),
  ];
  TestBed.configureTestingModule({
    declarations: [
      AppComponent,
      NavbarComponent,
      TodoListComponent,
      TodoItemComponent,
      FooterComponent,
      AddTodoComponent,
      TodoListCompletedComponent,
    ],
    imports: [
      BrowserModule,
      NgbModule.forRoot(),
      FormsModule,
      appRouterModule
    ],
    providers: [
      { provide: APP_BASE_HREF, useValue: completedTodoPath },
      { provide: TodoListService, useValue: { todoList: todoList } }
    ]
  })
  .compileComponents();
}));

End to end testing

With end to end testing, we test pieces of the application in a working application. We can test the working combination of the frontend and backend. For this type of testing, we can use Selenium, Protractor, Cypress, or another alternative. End-to-end testing is also a good way to test the differences between browsers across multiple platforms.

Thanks

We learned which types of testing there are in Angular and when to use each one. For me, this all makes so much more sense right now. If you have questions about some of the testing possibilities in Angular, please let me know in the comments. I will do my best to help you further 👍 In the end, we all need some help from others. So don't be shy! Just ask 😉

Happy Coding 🚀
https://byrayray.dev/posts/2020-10-28-introduction-to-angular-testing
CC-MAIN-2021-17
refinedweb
659
56.76
From: Andy Little (andy_at_[hidden]) Date: 2006-09-23 05:34:52 . OK. Maybe I oughtta move my iterators out of namespace fusion too :-). Anyway its only quick test code at the moment. >> So far I have made 5 fusion style iterators. Next is the matrix minor >> iterator, >> which extracts minors from a matrix, as a prelude to extracting cofactors. > > Cool! BTW 'Minors' is section 2.7.1 in my "Quaternions and Rotation Sequences, A Primer with Applications to Orbits, Aerospace and Virtual Reality." by Jack B. Kuipers. IOW that is where I am up to in the book so far! Question is... will I get to the end of the book or the end of compiler resources first? :-) regards Andy Little
https://lists.boost.org/Archives/boost/2006/09/110608.php
CC-MAIN-2020-10
refinedweb
144
70.39
, I noticed ./examples/dash_control.py -dAgg from the latest cvs, is not working.. I think the problem may be in lines.set_dashes() If I change self._dashSeq = seq[1] to self._dashSeq = seq it seems to work. Also, what happened to matplotlib.path - it was being used by Cairo for draw_markers() Steve >>>>> "Steve" == Steve Chaplin <stevech1097@...> writes: Steve> John, I noticed ./examples/dash_control.py -dAgg from the Steve> latest cvs, is not working.. Steve> I think the problem may be in lines.set_dashes() If I Steve> change self._dashSeq = seq[1] to self._dashSeq = seq it Steve> seems to work. OK, great. Thanks. Steve> Also, what happened to matplotlib.path - it was being used Steve> by Cairo for draw_markers() Hmm. I thought I posted this already. I rewrote the path handling to use agg paths, which are a more complete implementation (eg supporting move_rel, line_rel and friends. I found my post as an emacs backup file, so will just paste it in here -- it provides a code snippent to convert an agg path to a list path we were using previously. To: Steve Chaplin <stevech1097@...> Cc: matplotlib-devel@... Subject: Re: [matplotlib-devel] refactoring the backend drawing methods From: John Hunter <jdhunter@...> Gcc: nnml:outgoing-mail References: <m3u0on14wc.fsf@...> <1110376248.3691.24.camel@...> X-Draft-From: ("nnml:mail-list.matplotlib-devel" 1256) --text follows this line-- >>>>> "Steve" == Steve Chaplin <stevech1097@...> writes: Steve> I've implemented draw_markers() for Cairo, and tested it Steve> using line- styles.py - it seems to be working OK. I did Steve> find that it caused draw_lines() to stop working and had to Steve> modify it to get it working again. Yes, sorry for failing to update you on this. Steve> I don't think 'fill' and 'fill_rgb' information should be Steve> encoded into the 'path', and would to prefer to have Steve> rendering separated into two independent steps: 1) call a Steve> function to parse 'path' and generate a path - the path is Steve> a general path (with no fill or colour info) that you can Steve> later use any way you wish. 2) set colour etc as desired Steve> and fill/stroke the path. Steve> The draw_markers() code I've written calls generate_path() Steve> before drawing each marker and it reads the fill value and Steve> the fill_rgb each time which it unnecessary since the Steve> values are the same for all the markers. Passing the Steve> fill_rgb as an extra argument to draw_markers() would be Steve> one way to 'fix' this. Done. I also wrapped agg's path storage and am using this rather than the list storage. You can get the old representation from the new rather easily, as illustrated here import matplotlib.agg as agg path = agg.path_storage() path.move_to(10,10) path.line_to(20,30) path.curve3_rel(100,200) path.end_poly() print [ path.vertex() for i in range(path.total_vertices())] Steve> Cairo (and probably Agg, PS, SVG) supports rel_move_to() Steve> and rel_line_to () - so you can define a path using Steve> relative rather than absolute coords, which can sometimes Steve> be useful. For example, instead of translate(x,y) Steve> generate_absolute_path(path) stroke() you can use Steve> move_to(x,y) generate_relative_path(path) stroke() and the Steve> path is stroked relative to x,y with no need to translate Steve> the coordinates. 
agg has move_rel and line_rel, etc, but I don't think it works the same way, because it computes the actual move_to, line_to, etc under the hood and stores these values, so I'm not sure it's possible to actually store a relative moveto in agg. Could be missing something, though. As far as I can see, everything gets converted to one of the 6 primitive commands (STOP, MOVETO, LINETO, CURVE3, CURVE4, ENDPOLY) under the hood. JDH
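For context, here is a rough sketch of the one-line change Steve describes in lines.set_dashes(); it is an illustration of the fix, not the actual matplotlib source, and the surrounding method body is assumed.
# Hypothetical sketch of the set_dashes() fix discussed in this thread; the
# real Line2D.set_dashes in matplotlib may differ in validation and detail.
def set_dashes(self, seq):
    """Set the dash sequence, e.g. (3, 1) for 3 points on, 1 point off."""
    # Storing seq[1] kept only a single number, which broke dash rendering
    # (see examples/dash_control.py with -dAgg); keep the whole sequence.
    self._dashSeq = seq   # was: self._dashSeq = seq[1]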
https://sourceforge.net/p/matplotlib/mailman/matplotlib-devel/thread/m3br9obza6.fsf@peds-pc311.bsd.uchicago.edu/
CC-MAIN-2017-17
refinedweb
670
65.83
SPOPS -- Simple Perl Object Persistence with Security # Define an object completely in a configuration file my $spops = { myobject => { class => 'MySPOPS::Object', isa => qw( SPOPS::DBI ), ... } }; # Process the configuration and initialize the class SPOPS::Initialize->process({ config => $spops }); # create the object my $object = MySPOPS::Object->new; # Set some parameters $object->{ $param1 } = $value1; $object->{ $param2 } = $value2; # Store the object in an inherited persistence mechanism eval { $object->save }; if ( $@ ) { print "Error trying to save object: $@\n", "Stack trace: ", $@->trace->as_string, "\n"; } SPOPS -- or Simple Perl Object Persistence with Security -- allows you to easily define how an object is composed and save, retrieve or remove it any time thereafter. It is intended for SQL databases (using the DBI), but you should be able to adapt it to use any storage mechanism for accomplishing these tasks. (An early version of this used GDBM, although it was not pretty.) The goals of this package are fairly simple: So this is a class from which you can derive several useful methods. You can also abstract yourself from a datasource and easily create new objects. The subclass is responsible for serializing the individual objects, or making them persistent via on-disk storage, usually in some sort of database. See "Object Oriented Perl" by Conway, Chapter 14 for much more information. The individual objects or the classes should not care how the objects are being stored, they should just know that when they call fetch() with a unique ID that the object magically appears. Similarly, all the object should know is that it calls save() on itself and can reappear at any later date with the proper invocation. This module is meant to be overridden by a class that will implement persistence for the SPOPS objects. This persistence can come by way of flat text files, LDAP directories, GDBM entries, DBI database tables -- whatever. The API should remain the same. Please see SPOPS::Manual::Intro and SPOPS::Manual::Object for more information and examples about how the objects work. The following includes methods within SPOPS and those that need to be defined by subclasses. In the discussion below, the following holds: Also see the "ERROR HANDLING" section below on how we use exceptions to indicate an error and where to get more detailed infromation. new( [ \%initialize_data ] ) Implemented by base class. This method creates a new SPOPS object. If you pass it key/value pairs the object will initialize itself with the data (see initialize() for notes on this). You can also implement initialize_custom() to perform your own custom processing at object initialization (see below). Note that you can use the key 'id' to substitute for the actual parameter name specifying an object ID. For instance: my $uid = $user->id; if ( eval { $user->remove } ) { my $new_user = MyUser->new( { id => $uid, fname = 'BillyBob' ... } ); ... } In this case, we do not need to know the name of the ID field used by the MyUser class. You can also pass in default values to use for the object in the 'default_values' key. We use a number of parameters from your object configuration. These are: If set to true, you will use the SPOPS::Tie::StrictField tie implementation, which ensures you only get/set properties that exist in the field listing. You can also pass a true value in for strict_field in the parameters and achieve the same result for this single object Hashref of column aliases to arrayrefs of fieldnames. 
If defined objects of this class will use "LAZY LOADING", and the different aliases you define can typically be used in a fetch(), fetch_group() or fetch_iterator() statement. (Whether they can be used depends on the SPOPS implementation.) Hashref of field alias to field name. This allows you to get/set properties using a different name than how the properties are stored. For instance, you might need to retrofit SPOPS to an existing table that contains news stories. Retrofitting is not a problem, but another wrinkle of your problem is that the news stories need to fit a certain interface and the property names of the interface do not match the fieldnames in the existing table. All you need to do is create a field map, defining the interface property names as the keys and the database field names as the values. Hashref of field names and default values for the fields when the object is initialized with new(). Normally the values of the hashref are the defaults to which you want to set the fields. However, there are two special cases of values: in debugging. To get around the synchronization issue, you can set this dynamically using various methods with SPOPS::ClassFactory. A simple implementation, SPOPS::Tool::DBI::FindDefaults, is shipped with SPOPS. As the very last step before the object is returned we call initialize_custom( \%initialize_data ). You can override this method and perform any processing you wish. The parameters from \%initialize_data will already be set in the object, and the 'changed' flag will be cleared for all parameters and the 'saved' flag cleared. Returns on success: a tied hashref object with any passed data already assigned. The 'changed' flag is set and the and 'saved' flags is cleared on the returned object. Returns on failure: undef. Examples: # Simplest form... my $data = MyClass->new(); # ...with initialization my $data = MyClass->new({ balance => 10532, account => '8917-918234' }); clone( \%params ) Returns a new object from the data of the first. You can override the original data with that in the \%params passed in. You can also clone an object into a new class by passing the new class name as the '_class' parameter -- of course, the interface must either be the same or there must be a 'field_map' to account for the differences. Note that the ID of the original object will not be copied; you can set it explicitly by setting 'id' or the name of the ID field in \%params. Examples: # Create a new user bozo my $bozo = $user_class->new; $bozo->{first_name} = 'Bozo'; $bozo->{last_name} = 'the Clown'; $bozo->{login_name} = 'bozosenior'; eval { $bozo->save }; if ( $@ ) { ... report error .... } # Clone bozo; first_name is 'Bozo' and last_name is 'the Clown', # as in the $bozo object, but login_name is 'bozojunior' my $bozo_jr = $bozo->clone({ login_name => 'bozojunior' }); eval { $bozo_jr->save }; if ( $@ ) { ... report error ... } # Copy all users from a DBI datastore into an LDAP datastore by # cloning from one and saving the clone to the other my $dbi_users = DBIUser->fetch_group(); foreach my $dbi_user ( @{ $dbi_users } ) { my $ldap_user = $dbi_user->clone({ _class => 'LDAPUser' }); $ldap_user->save; } initialize( \%initialize_data ) Implemented by base class; do your own customization using initialize_custom(). Cycle through the parameters inn \%initialize_data and set any fields necessary in the object. This allows you to construct the object with existing data. 
Note that the tied hash implementation optionally ensures (with the 'strict_field' configuration key set to true) that you cannot set infomration as a parameter unless it is in the field list for your class. For instance, passing the information: firt_name => 'Chris' should likely not set the data, since 'firt_name' is the misspelled version of the defined field 'first_name'. Note that we also set the 'loaded' property of all fields to true, so if you override this method you need to simply call: $self->set_all_loaded(); somewhere in the overridden method. initialize_custom( \%initialize_data ) Called as the last step of new() so you can perform customization as necessary. The default does nothing. Returns: nothing You should use the hash interface to get and set values in your object -- it is easier. However, SPOPS will also create an accessor/mutator/clearing-mutator for you on demand -- just call a method with the same name as one of your properties and two methods ('${fieldname}' and '${fieldname}_clear') will be created. Similar to other libraries in Perl (e.g., Class::Accessor) the accessor and mutator share a method, with the mutator only being used if you pass a defined value as the second argument: # Accessor my $value = $object->fieldname; # Mutator $object->fieldname( 'new value' ); # This won't do what you want (clear the field value)... $object->fieldname( undef ); # ... but this will $object->fieldname_clear; The return value of the mutator is the new value of the field which is the same value you passed in. Generic accessors ( get()) and mutators ( set()) are available but deprecated, probably to be removed before 1.0: You can modify how the accessors/mutators get generated by overriding the method: sub _internal_create_field_methods { my ( $self, $class, $field_name ) = @_; ... } This method must create two methods in the class namespace, '${fieldname}' and '${fieldname}_clear'. Since the value returned from AUTOLOAD depends on these methods being created, failure to create them will probably result in an infinite loop. get( $fieldname ) Returns the currently stored information within the object for $fieldname. my $value = $obj->get( 'username' ); print "Username is $value"; It might be easier to use the hashref interface to the same data, since you can inline it in a string: print "Username is $obj->{username}"; You may also use a shortcut of the parameter name as a method call for the first instance: my $value = $obj->username(); print "Username is $value"; set( $fieldname, $value ) Sets the value of $fieldname to $value. If value is empty, $fieldname is set to undef. $obj->set( 'username', 'ding-dong' ); Again, you can also use the hashref interface to do the same thing: $obj->{username} = 'ding-dong'; You can use the fieldname as a method to modify the field value here as well: $obj->username( 'ding-dong' ); Note that if you want to set the field to undef you will need to use the hashref interface: $obj->{username} = undef; id() Returns the ID for this object. Checks in its config variable for the ID field and looks at the data there. If nothing is currently stored, you will get nothing back. Note that we also create a subroutine in the namespace of the calling class so that future calls take place more quickly. fetch( $object_id, [ \%params ] ) Implemented by subclass. This method should be called from either a class or another object with a named parameter of 'id'. Returns on success: an SPOPS object. 
Returns on failure: undef; if the action failed (incorrect fieldname in the object specification, database not online, database user cannot select, etc.) a SPOPS::Exception object (or one of its subclasses) will be thrown to raise an error. The \%params parameter can contain a number of items -- all are optional. Parameters: For most SPOPS implementations, you can pass the data source (a DBI database handle, a GDBM tied hashref, etc.) into the routine. For DBI this variable is db, for LDAP it is ldap, but for other implementations it can be something else. You can use fetch() not just to retrieve data, but also to do the other checks it normally performs (security, caching, rulesets, etc.). If you already know the data to use, just pass it in using this hashref. The other checks will be done but not the actual data retrieval. (See the fetch_group routine in SPOPS::DBI for an example.) A true value skips security checks, false or default value keeps them. A true value skips any use of the cache, always hitting the data source. In addition, specific implementations may allow you to pass in other parameters. (For example, you can pass in 'field_alter' to the SPOPS::DBI implementation so you can format the returned data.) Example: my $id = 90192; my $data = eval { MyClass->fetch( $id ) }; # Read in a data file and retrieve all objects matching IDs my @object_list = (); while ( <DATA> ) { chomp; next if ( /\D/ ); my $obj = eval { ObjectClass->fetch( $_ ) }; if ( $@ ) { ... report error ... } else { push @object_list, $obj if ( $obj ) } } fetch_determine_limit() This method has been moved to SPOPS::Utility. save( [ \%params ] ) Implemented by subclass. This method should save the object state in whatever medium the module works with. Note that the method may need to distinguish whether the object has been previously saved or not -- whether to do an add versus an update. See the section "TRACKING CHANGES" for how to do this. The application should not care whether the object is new or pre-owned. Returns on success: the object itself. Returns on failure: undef, and a SPOPS::Exception object (or one of its subclasses) will be thrown to raise an error. Example: eval { $obj->save }; if ( $@ ) { warn "Save of ", ref $obj, " did not work properly -- $@"; } Since the method returns the object, you can also do chained method calls: eval { $obj->save()->separate_object_method() }; Parameters: For most SPOPS implementations, you can pass the data source (a DBI database handle, a GDBM tied hashref, etc.) into the routine. A true value forces this to be treated as a new record. A true value skips the security check. A true value skips any caching. A true value skips the call to 'log_action' remove() Implemented by subclass. Permanently removes the object, or if called from a class removes the object having an id matching the named parameter of 'id'. Returns: status code based on success (undef == failure). Parameters: For most SPOPS implementations, you can pass the data source (a DBI database handle, a GDBM tied hashref, etc.) into the routine. A true value skips the security check. A true value skips any caching. A true value skips the call to 'log_action' Examples: # First fetch then remove my $obj = MyClass->fetch( $id ); my $rv = $obj->remove(); Note that once you successfully call remove() on an object, the object will still exist as if you had just called new() and set the properties of the object. 
For instance: my $obj = MyClass->new(); $obj->{first_name} = 'Mario'; $obj->{last_name} = 'Lemieux'; if ( $obj->save ) { my $saved_id = $obj->{player_id}; $obj->remove; print "$obj->{first_name} $obj->{last_name}\n"; } Would print: Mario Lemieux But trying to fetch an object with $saved_id would result in an undefined object, since it is no longer in the datastore. object_description() Returns a hashref with metadata about a particular object. The keys of the hashref are: Class of this object ID of this object. (Also under 'oid' for compatibility.) Field used for the ID. Name of this general class of object (e.g., 'News') Title of this particular object (e.g., 'Man bites dog, film at 11') URL that will display this object. Note that the URL might not necessarily work due to security reasons. url_edit ($) URL that will display this object in editable form. Note that the URL might not necessarily work due to security reasons. You control what's used in the 'display' class configuration variable. In it you can have the keys 'url', which should be the basis for a URL to display the object and optionally 'url_edit', the basis for a URL to display the object in editable form. A query string with 'id_field=ID' will be appended to both, and if 'url_edit' is not specified we create it by adding a 'edit=1' to the 'url' query string. So with: display => { url => '/Foo/display/', url_edit => '/Foo/display_form', } The defaults put together by SPOPS by reading your configuration file might not be sufficiently dynamic for your object. In that case, just override the method and substitute your own. For instance, the following adds some sort of sales adjective to the beginning of every object title: package My::Object; sub object_description { my ( $self ) = @_; my $info = $self->SUPER::object_description(); $info->{title} = join( ' ', sales_adjective_of_the_day(), $info->{title} ); return $info; } And be sure to include this class in your 'code_class' configuration key. (See SPOPS::ClassFactory and SPOPS::Manual::CodeGeneration for more info.) as_string Represents the SPOPS object as a string fit for human consumption. The SPOPS method is extremely crude -- if you want things to look nicer, override it. as_html Represents the SPOPS object as a string fit for HTML (browser) consumption. The SPOPS method is double extremely crude, since it just wraps the results of as_string() (which itself is crude) in '<pre>' tags. is_loaded( $fieldname ) Returns true if $fieldname has been loaded from the datastore, false if not. set_loaded( $fieldname ) Flags $fieldname as being loaded. set_all_loaded() Flags all fieldnames (as returned by field_list()) as being loaded. is_checking_fields() Returns true if this class is doing field checking (setting 'strict_field' equal to a true value in the configuration), false if not. is_changed() Returns true if this object has been changed since being fetched or created, false if not. has_change() Set the flag telling this object it has been changed. clear_change() Clear the change flag in an object, telling it that it is unmodified. is_saved() Return true if this object has ever been saved, false if not. has_save() Set the saved flag in the object to true. clear_save() Clear out the saved flag in the object. Most of this information can be accessed through the CONFIG hashref, but we also need to create some hooks for subclasses to override if they wish. For instance, language-specific objects may need to be able to modify information based on the language abbreviation. 
We have simple methods here just returning the basic CONFIG information. no_cache() (bool) Returns a boolean based on whether this object can be cached or not. This does not mean that it will be cached, just whether the class allows its objects to be cached. field() (\%) Returns a hashref (which you can sort by the values if you wish) of fieldnames used by this class. field_list() (\@) Returns an arrayref of fieldnames used by this class. Subclasses can define their own where appropriate. These objects are tied together by just a few things: global_cache A caching object. Caching in SPOPS is not tested but should work -- see Caching below. Caching in SPOPS is not tested but should work. If you would like to brave the rapids, then call at the beginning of your application: SPOPS->set_global_use_cache(1); You will also need to make a caching object accessible to all of your SPOPS classes via a method global_cache(). Each class can turn off caching by setting a true value for the configuration variable no_cache or by passing in a true value for the parameter 'skip_cache' as passed to fetch, save, etc. The object returned by global_cache() should return an object which implements the methods get(), set(), clear(), and purge(). The method get() should return the property values for a particular object given a class and object ID: $cache->get({ class => 'SPOPS-class', object_id => 'id' }) The method set() should saves the property values for an object into the cache: $cache->set({ data => $spops_object }); The method clear() should clear from the cache the data for an object: $cache->clear({ data => $spops_object }); $cache->clear({ class => 'SPOPS-class', object_id => 'id' }); The method purge() should remove all items from the cache. This is a fairly simple interface which leaves implementation pretty much wide open. These have gone away (you were warned!) The previous (fragile, awkward) debugging system in SPOPS has been replaced with Log::Log4perl instead. Old calls to DEBUG, _w, and _wm will still work (for now) but they just use log4perl under the covers. Please see SPOPS::Manual::Configuration under LOGGING for information on how to configure it. There is an issue using these modules with Apache::StatINC along with the startup methodology that calls the class_initialize method of each class when a httpd child is first initialized. If you modify a module without stopping the webserver, the configuration variable in the class will not be initialized and you will inevitably get errors. We might be able to get around this by having most of the configuration information as static class lexicals. But anything that depends on any information from the CONFIG variable in request (which is generally passed into the class_initialize call for each SPOPS implementation) will get hosed. Method object_description() should be more robust In particular, the 'url' and 'url_edit' keys of object_description() should be more robust. Objects composed of many records An idea: Make this data item framework much like the one Brian Jepson discusses in Web Techniques: At least in terms of making each object unique (having an OID). 
Each object could then be simply a collection of table name plus ID name in the object table: CREATE TABLE objects ( oid int not null, table_name varchar(30) not null, id int not null, primary key( oid, table_name, id ) ) Then when you did: my $oid = 56712; my $user = User->fetch( $oid ); It would first get the object composition information: oid table id === ===== == 56712 user 1625 56712 user_prefs 8172 56712 user_history 9102 And create the User object with information from all three tables. Something to think about, anyway. None known. Copyright (c) 2001-2004 intes.net, inc; (c) 2003-2004 Chris Winters. All rights reserved. This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. SPOPSx::Ginsu - Generalized Inheritance Support for SPOPS + MySQL -- store inherited data in separate tables. Chris Winters <chris@cwinters.com> The following people have offered patches, advice, development funds, etc. to SPOPS, including help with permission issues with SPOPS::GDBM.
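Returning to the caching hooks described earlier, here is a minimal in-memory sketch of an object that satisfies the get()/set()/clear()/purge() interface expected from global_cache(); the class name and the way it builds cache keys are illustrative assumptions, and this class is not shipped with SPOPS.
package My::MemoryCache;    # hypothetical helper, not part of the SPOPS distribution
use strict;

sub new  { my ( $class ) = @_; return bless { store => {} }, $class }
sub _key { my ( $class, $id ) = @_; return join( '--', $class, $id ) }

# get({ class => ..., object_id => ... }) returns the cached property data or undef
sub get {
    my ( $self, $p ) = @_;
    return $self->{store}{ _key( $p->{class}, $p->{object_id} ) };
}

# set({ data => $spops_object }) stores a copy of the object's property values
sub set {
    my ( $self, $p ) = @_;
    my $obj = $p->{data};
    $self->{store}{ _key( ref $obj, $obj->id ) } = { %{ $obj } };
    return 1;
}

# clear({ data => $obj }) or clear({ class => ..., object_id => ... })
sub clear {
    my ( $self, $p ) = @_;
    my $key = $p->{data}
            ? _key( ref $p->{data}, $p->{data}->id )
            : _key( $p->{class}, $p->{object_id} );
    delete $self->{store}{ $key };
    return 1;
}

# purge() empties the entire cache
sub purge { my ( $self ) = @_; $self->{store} = {}; return 1 }

1;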
http://search.cpan.org/~cwinters/SPOPS/SPOPS.pm
CC-MAIN-2016-36
refinedweb
3,468
61.46
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project. Hi, On Sun, Sep 15, 2013 at 06:55:17PM +0200, Bernd Edlinger wrote: > Hello Richard, > > attached is my second attempt at fixing PR 57748. This time the > movmisalign path is completely removed and a similar bug in the read > handling of misaligned structures with a non-BLKmode is fixed > too. There are several new test cases for the different possible > failure modes. I think the third and fourth testcases are undefined as the description of zero-length arrays extension clearly says the whole thing only makes sense when used as the last field of the outermost-aggregate type. I have not really understood what the third testcase is supposed to test but I did not try too much. Instead of the fourth testcase, you can demonstrate the need for your change in expand_expr_real_1 by augmenting the original testcase a little like in attached pr57748-m1.c. The hunk in expand_expr_real_1 can prove problematic if at any point we need to pass some other modifier to the expansion of tem. I'll try to see if I can come up with a testcase tomorrow. But perhaps we never do (and can hope we never will) and then it would be sort of OKish (note that I cannot approve anything) even though it can pessimize unaligned access paths (by not using movmisalign_optab even when perfectly possible - which is always when there is no zero sized array). It really just shows how evil non-BLKmode structures with zero-sized arrays are and how they complicate things. The expansion of component_refs is reasonably built around the assumption that we'd expand the structure in its mode in the most efficient manner and then chuck the correct part out of it, but here we need to tell the expansion of the structure to hold itself back because we'll be looking outside of the structure (as specified by mode). I'm not sure to what extent the hunk adding tests for bitregion_start and bitregion_end being zero is connected to this issue. I do not see any of the testcases exercising that path. If it is indeed another problem, I think it should be submitted (and potentially committed) as a separate patch, preferably with a testcase. Having said all that, I think that removing the misalignp path from expand_assignment altogether is a good idea. I have verified that when the expander is now presented with basically the same thing that 4.7 choked on, expand_expr (..., EXPAND_WRITE) can cope with it (see attached file c.c) and doing that simplifies this complex code path. Thanks, Martin > > This patch was boot-strapped and regression tested on x86_64-unknown-linux-gnu > and i686-pc-linux-gnu. > > Additionally I generated eCos and an eCos-application (on ARMv5 using packed > structures) with an arm-eabi cross compiler, and looked for differences in the > disassembled code with and without this patch, but there were none. > > OK for trunk? > > Regards > Bernd. > 2013-09-15 Bernd Edlinger <bernd.edlinger@hotmail.de> > > PR middle-end/57748 > * expr.c (expand_assignment): Remove misalignp code path. > Check for bitregion in offset arithmetic. > (expand_expr_real_1): Use EXAND_MEMORY on base object. > > testsuite: > > PR middle-end/57748 > * gcc.dg/torture/pr57748-1.c: New test. > * gcc.dg/torture/pr57748-2.c: New test. > * gcc.dg/torture/pr57748-3.c: New test. > * gcc.dg/torture/pr57748-3a.c: New test. > * gcc.dg/torture/pr57748-4.c: New test. > * gcc.dg/torture/pr57748-4a.c: New test. 
> /* PR middle-end/57748 */ /* { dg-do run } */ #include <stdlib.h> extern void abort (void); typedef long long V __attribute__ ((vector_size (2 * sizeof (long long)), may_alias)); typedef struct S { V a; V b[0]; } P __attribute__((aligned (1))); struct __attribute__((packed)) T { char c; P s; }; void __attribute__((noinline, noclone)) check (struct T *t) { if (t->s.b[0][0] != 3 || t->s.b[0][1] != 4) abort (); } void __attribute__((noinline, noclone)) check_1 (P *p) { if (p->b[0][0] != 3 || p->b[0][1] != 4) abort (); } int __attribute__((noinline, noclone)) get_i (void) { return 0; } void __attribute__((noinline, noclone)) foo (P *p) { V a = { 3, 4 }; int i = get_i(); p->b[i] = a; } int main () { struct T *t = (struct T *) malloc (128); foo (&t->s); check (t); check_1 (&t->s); return 0; } #include <stdlib.h> extern void abort (void); typedef long long V __attribute__ ((vector_size (2 * sizeof (long long)), may_alias)); typedef struct S { V a; } P __attribute__((aligned (1))); struct __attribute__((packed)) T { char c; P s; }; void __attribute__((noinline, noclone)) foo (P *p) { V a = { 3, 4 }; p->a = a; } int main () { struct T *t = (struct T *) malloc (128); foo (&t->s); return 0; }
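To illustrate the point above about the zero-length array extension only making sense as the last field of the outermost aggregate, here is a small self-contained example of the conventional usage; it is not one of the PR 57748 testcases.
/* Conventional use of GCC's zero-length-array extension: the array is the
   final member of the outermost struct and extra space is malloc'ed for it. */
#include <stdlib.h>

struct line
{
  int length;
  char contents[0];   /* zero-length array as the last field */
};

int
main (void)
{
  int n = 16;
  struct line *l = malloc (sizeof (struct line) + n);
  if (!l)
    return 1;
  l->length = n;
  l->contents[0] = 'x';   /* elements live in the over-allocated tail */
  free (l);
  return 0;
}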
https://gcc.gnu.org/ml/gcc-patches/2013-09/msg01251.html
CC-MAIN-2019-47
refinedweb
786
62.98
We were instructed to make a program where the user will input the Planets' name and mass then the data will be shown in a list (optional:the list should be in alphabetical order). We need to make some sort of menu where it displays 3 choices: Add Planet and Mass, View Record, and exit. I got a help from a friend with some parts except the part when displaying the data. it only says "null". it should look like: Planets and Their Masses: Earth 1234 Jupiter 12345 Mars 1234 here's the code that needs editing...I've been searching on how to solve this and can't seem to find the solution, and I need to submit this 5 hours from now. Code java: import java.io.*; import java.util.Arrays; public class Planets { int a=0; int b=0; String[] planet = new String[8]; double[] mass = new double[8]; public static void main(String[]args)throws Exception { int choice; BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); do { System.out.println("[1] Add Planet/Mass"); System.out.println("[2] View Records"); System.out.println("[3] Exit"); System.out.print("Select your choice: "); choice = Integer.parseInt(br.readLine()); if (choice==1) { System.out.println("\n\n"); Planets i = new Planets(); i.Input(); } if (choice==2) { System.out.println("\n"); Planets v = new Planets(); v.View(); } } while (choice!=3); System.out.println("Thank You!"); } public void Input()throws Exception { BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); System.out.println("Enter planet name: "); planet[0] = br.readLine(); System.out.println("Enter mass: "); mass[0] = Double.parseDouble(br.readLine()); } void View() { System.out.println("Planets and their Masses: "); System.out.println("" +planet[0] +mass[0]); } }
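One possible fix — keeping a single Planets object alive across menu choices and tracking how many entries have been stored — is sketched below. This is a suggestion for the question above, not the original poster's final code, and names such as count, input and view are made up for the example.
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Planets {
    private String[] planet = new String[8];
    private double[] mass = new double[8];
    private int count = 0;                              // how many planets are stored

    public static void main(String[] args) throws Exception {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        Planets p = new Planets();                      // ONE object reused for every menu choice
        int choice;
        do {
            System.out.println("[1] Add Planet/Mass  [2] View Records  [3] Exit");
            System.out.print("Select your choice: ");
            choice = Integer.parseInt(br.readLine());
            if (choice == 1) p.input(br);
            if (choice == 2) p.view();
        } while (choice != 3);
        System.out.println("Thank You!");
    }

    private void input(BufferedReader br) throws Exception {
        System.out.println("Enter planet name: ");
        planet[count] = br.readLine();                  // store in the next free slot, not always [0]
        System.out.println("Enter mass: ");
        mass[count] = Double.parseDouble(br.readLine());
        count++;                                        // a full solution would also check count < planet.length
    }

    private void view() {
        System.out.println("Planets and their Masses: ");
        for (int i = 0; i < count; i++) {               // print every stored entry
            System.out.println(planet[i] + " " + mass[i]);
        }
    }
}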
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/15598-inputting-then-displaying-through-arrays-urgent-printingthethread.html
CC-MAIN-2014-35
refinedweb
285
53.98
Future and Callable in Java Tutorial with examples. Explaining Futures and Callable. Java 5 introduced the java.util.concurrent.Callable interface in the concurrency package; it is similar to the Runnable interface, but it can return a value and is able to throw a checked Exception. The Callable interface uses generics to define the type of object which is returned. If you submit a Callable object to an Executor, the framework returns an object of type java.util.concurrent.Future. This Future object can be used to check the status of a Callable and to retrieve the result from the Callable: the Future's generic get() method waits for the Callable to finish and then returns the result. On the Executor you can use the method submit to submit a Callable and get back a Future. Here is a simple example of a Callable task that sums the numbers from 0 to 100. We are using the Executor framework to execute 100 such tasks in parallel and use Futures to collect the results of the submitted tasks. package com.tutorialsdesk.threads.callable; import java.util.concurrent.Callable; public class MyCallable implements Callable<Long> { @Override public Long call() throws Exception { long sum = 0; for (long i = 0; i <= 100; i++) { sum += i; } return sum; } } package com.tutorialsdesk.threads.callable; import java.util.ArrayList; import java.util.List; import java.util.concurrent.Callable; import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Future; public class CallableFutures { private static final int NTHREDS = 10; public static void main(String[] args) { ExecutorService executor = Executors.newFixedThreadPool(NTHREDS); List<Future<Long>> futures = new ArrayList<Future<Long>>(); /* the rest of this method was truncated in the source; a typical completion submits the tasks and then sums the results */ for (int i = 0; i < 100; i++) { futures.add(executor.submit(new MyCallable())); } long sum = 0; for (Future<Long> future : futures) { try { sum += future.get(); } catch (InterruptedException e) { e.printStackTrace(); } catch (ExecutionException e) { e.printStackTrace(); } } System.out.println(sum); executor.shutdown(); } } Hope we were able to explain the importance of Future and Callable in Java; if you have any questions or suggestions please write to us using the contact form. Please share the tutorial on social media if you like it.
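As a quick addendum, the same work can also be expressed with ExecutorService.invokeAll(), which submits a whole collection of Callables and blocks until they have all completed; the helper method below is only a sketch built on the tutorial's MyCallable class.
// Sketch: run the 100 MyCallable tasks via invokeAll() and add up the results.
static long sumWithInvokeAll(ExecutorService executor) throws Exception {
    List<Callable<Long>> tasks = new ArrayList<Callable<Long>>();
    for (int i = 0; i < 100; i++) {
        tasks.add(new MyCallable());
    }
    long total = 0;
    for (Future<Long> f : executor.invokeAll(tasks)) {  // blocks until every task is done
        total += f.get();                               // the returned Futures are already completed
    }
    return total;
}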
http://www.tutorialsdesk.com/2014/10/future-and-callable-in-java-tutorial.html
CC-MAIN-2019-09
refinedweb
331
50.63
Hi I had installed fuse 2.9.7 and s3fs -fuse-1.80. followed the procedure stated in the cloud archiving document by creating a password file s3fs s3testdemo /tmp/test -o nocopyapi -o use_path_request_style -o nomultipart -o sigv2 -o url= -d -f -o curldbg -o f2 -o allow_other -o passwd_file=/etc/passwd-s3fs where s3testdemo is the HCP name space,/tmp/test is the mount dir on my Linux vm. However I am getting the below error, [CRT] s3fs.cpp:set_s3fs_log_level(253): change debug level from [CRT] to [INF] s3fs: unable to access MOUNTPOINT /tmp/test: Transport endpoint is not connected any thoughts/workaround on the issue would be appreciated. Couple of things to double check: 1. You need -o no_check_certificate when using HTTPS with self-signed certificate (which is what HCP would have by default out of the box) 2. The endpoint must be the FQDN of the HCP tenant, not namespace. 3. The HCP namespace must have S3 protocol enabled.
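Putting points 1 and 2 together, the mount command might end up looking roughly like the line below; the tenant host name is a placeholder for the actual HCP tenant FQDN, and the options are the ones already used in the question plus no_check_certificate.
# Placeholder tenant FQDN -- point url= at the tenant, not the namespace
s3fs s3testdemo /tmp/test -o passwd_file=/etc/passwd-s3fs \
     -o url=https://tenant1.hcp.example.com -o use_path_request_style \
     -o no_check_certificate -o sigv2 -o nomultipart -o allow_other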
https://community.hitachivantara.com/thread/10742-s3fs-transport-endpoint-not-connected
CC-MAIN-2018-34
refinedweb
163
63.39
Unit 1: The Unix System Table of Contents - 1. UNIX and You - 2. Unix Design Philosophy - 3. Standard Input, Standard Output, and Standard Error - 4. Reading and Writing to /dev/nulland other /dev's - 5. File Permissions and Ownership chmodand chown 1 UNIX and You 1.1 The 1000 Foot View of the UNIX system Why UNIX? - UNIX is an important OS in the history of computing - Two major OS'es variants, UNIX-based and Windows-Based - Used in a lot of back-end systems and personal computing - UNIX derivatives are open source and well known to the community and developed in the open where we can study and understand them. - The skills you learn on UNIX will easily translate to other OS platforms because all UNIX-based systems share standard characteristics. 2 Unix Design Philosophy The Unix Design Philosophy is best exemplified through a quote by Doug McIlroy, a key contributor to early Unix systems: This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface. All the Unix command line tools we've looked at so far meet this philosophy, and the tools we will program in the class will as well. 2.1 Write programs that does one thing and does it well If we look at the command line tools for processing files, we see that there is a separate tool for each task. For example, we do not have a tool called headortail that can take either the first or last lines of a file. We have separate tools for each of the tasks. While this might seem like extra work, it actually enables the user to be more precise about what he/she is doing, as well as be more expressive. It also improves the readability and comprehension of commands; the user doesn't have to read lots of command line arguments to figure out what's going on. 2.2 Write programs that work well together The command line tools we look at also inter-operate really well because they compliment each other. For example, consider some of the pipelines you wrote in lab and how you can use cut to get a field from structure data, then you can use grep to isolate some set of those fields, and finally you can use wc to count how many fields remain. 2.3 Write programs to handle text streams Finally, the ability to handle text streams is the basis of the pipeline and what enables small and simple Unix commands to be "glued" together to form more complex and interesting Unix operations. This philosophy leads to the development of well formed Unix command line tools that have the three properties: - They can take input from the terminal through a pipeline or by reading an input file provided as an argument - They write all their output back to the terminal such that it can be read as input by another command. - They do not write error information to standard output in a way that can interfere with a pipeline. This process of taking input from the terminal and writing output to the terminal is the notion of handling text streams through the pipeline. In this lecture, we will look at this process in more detail. 3 Standard Input, Standard Output, and Standard Error Every Unix program is provided with three standard file streams or standard file descriptors to read and write input from. 
- Standard Input ( stdinfile stream, file descriptor 0): The primary input stream for reading information printed to the terminal - Standard Output ( stdoutfile stream, file descriptor 1): The primary output stream for printing information and program output to the terminal - Standard Error ( stderrfile stream, file descriptor 2): The primary error stream for printing information to the terminal that resulted from an error in processing or should not be considered part of the program output. 3.1 Pipelines Pipelines are ways to connect the standard output of one program to the standard input of another program. This fits into the UNIX design philosophy well by allowing smaller programs to work well together, connecting computation via a text stream. We denote a pipeline using the the | (pipe symbol). Consider this simple C++ program that reads a string from a user and prints it in reverse: #include <iostream> using namespace std; int main(){ string s; cin >> s; //read from stdin for(int i=s.size()-1;i>=0;i--) cout << s[i]; //write to stdout cout << endl; } We can run this program directly on the command line like so: $ ./reverse Hello olleH But that requires us the user to type, but what if we could use another program to produce the output. There is a build in command called echo that will do just that: $ echo "Hello" Hello $ echo "Hello" | ./reverse olleH This time, we have piped the output of the echo command to the input of the reverse command, producing the same output as before. Now, what if we wanted to reverse our reserved output? We could also pipe that output again to the next program. $ echo "Hello" | ./reverse | ./reverse Hello This is the power of a pipeline, and if you imagined, there is nothing stopping us from doing more complicated stuff. 3.2 Sample Commands Using stdin and stdout As an example of using pipes to control standard input and output, lets look at the head and tail command, each either prints the first lines or last lines of a file respectively. For example, the file file1 in this example has 1000 lines, each labeled. We could just print the first 10 lines like so: $ head file1 file1 line 1 file1 line 2 file1 line 3 file1 line 4 file1 line 5 file1 line 6 file1 line 7 file1 line 8 file1 line 9 file1 line 10 A similar command for tail prints the last 10 lines $ tail file1 file1 line 991 file1 line 992 file1 line 993 file1 line 994 file1 line 995 file1 line 996 file1 line 997 file1 line 998 file1 line 999 file1 line 1000 We could connect these two commands together to print any consecutive lines of a file. For example, suppose we want to print lines 800-810: $ head -810 file1 | tail file1 line 801 file1 line 802 file1 line 803 file1 line 804 file1 line 805 file1 line 806 file1 line 807 file1 line 808 file1 line 809 file1 line 810 The -810 flag to head, says to print the first 810, then tail, reads from stdin and prints the last 10 of those lines, giving us the lines between 800 and 810. The cat command, which you should be familiar with already, prints each of the files specified as its arguments to the terminal. For example, consider the following two files: $ cat BeatArmy.txt Beat Army $ cat GoNavy.txt Go Navy $ cat GoNavy.txt BeatArmy.txt Go Navy Beat Army cat will also read from the stdin using a pipe with no arguments, but we can also use - by itself to say when in the sequence of files to read from stdin. For example. 
$ cat GoNavy.txt | cat BeatArmy.txt - BeatArmy.txt Beat Army Go Navy Beat Army So we can now hook up our previous command to surround the 800-810 file with "Go Navy" and "Beat Army" $ head -810 file1 | tail | cat GoNavy.txt - BeatArmy.txt Go Navy file1 line 801 file1 line 802 file1 line 803 file1 line 804 file1 line 805 file1 line 806 file1 line 807 file1 line 808 file1 line 809 file1 line 810 Beat Army 3.3 Pipes and stderr One challenge with a pipeline is that all the output of a program gets redirected to the input of the next program. What if there was a problem or error to report? Given the description of the standard file descriptors, we can better understand a pipelines with respect to the standard file descriptors. Head writes to stdout--. .---the stdout of head is the stdin of cat | | v v head -3 BAD_FILENAME | cat GoNavy.txt - BeatArmy.txt \_/ | A pipe just connects the stdout of one command to the stdin of another The pipe ( |) is a semantic construct for the shell to connect the standard output of one program to the standard input of another program, thus piping the output to input. The fact that input is connected to output in a pipeline actually necessitates stderr because if an error was to occur along the pipeline, you would not want that error to propagate as input to the next program in the pipeline especially when the pipeline can proceed despite the error. There needs to be a mechanism to report the error to the terminal outside the pipeline, and that mechanism is standard error. As an example, consider the the case where head is provided a bad file name. #> head -3 BAD_FILENAME| cat BeatArmy.txt - GoNavy.txt head: BAD_FILENAME: No such file or directory <--- Written to stderr not piped to cat Go Navy! Beat Army! Here, head has an error BAD_FILENAME doesn't exist, so head prints an error message to stderr and does not write anything to stdout, and thus, cat only prints the contents of the two files to stdout. If there was no stderr, then head could only report the error to stdout and thus it would interfere with the pipeline; head: BAD_FILENAME: No such file or directory is not part of the first 3 lines of any file. 3.4 Redirecting stdin, stdout, and stderr In addition to piping the standard file streams, you can also redirect them to a file on the filesystem. The redirect symbols is > and <. Consider a dummy command below: cmd < input_file > output_file 2> error_file This would mean that cmd (a fill in for a well formed Unix command) will read input from the file input_file, all output that would normally go to stdout is now written to output_file, and any error messages will be written to error_file. Note that 2 and the > together ( 2>) indicates to redirect file descriptor 2, which maps to stderr (see above). You can also use redirects interspersed in a pipeline like below. cmd < input_file | cmd 2> error_file | cmd > output_file However, you cannot mix two redirects for the same standard stream, like so: cat input_file > output_file | head This command will result in nothing being printed to the screen via head and all redirected to output_file. This is because the > and < redirects always take precedence over a pipe, and the last > or < in a sequence takes the most precedence. For example: cat input_file > out1 > out2 | head will write the contents of the input file to the out2 file and not to out1. Output redirects will automatically truncate the file being redirected to. That is, it will essentially erase the file and create a new one. 
There are situations where, instead, you want to append to the end of the file, such as cumulating log files. You can do such output redirects with >> symbols, double greater-then signs. For example, cat input_file > out cat input_file >> out will produce two copies of the input file concatenated together in the output file, out. 4 Reading and Writing to /dev/null and other /dev's There are times when you are building Unix commands that you want to redirect your output or error information to nowhere … you just want it to disappear. This is a common enough need that Unix has built in files that you can redirect to and from. Perhaps the best known is /dev/null. Note that this file exists in the /dev path which means it is not actually a file, but rather a device or service provided by the Unix Operating System. The null device's sole task in life is to turn things into null or zero them out. For example, consider the following pipeline with the BAD_FILENAME from before. #> head -3 BAD_FILENAME 2> /dev/null | cat BeatArmy.txt - GoNavy.txt Go Navy! Beat Army! Now, we are redirecting the error from head to /dev/null, and thus it goes nowhere and is lost. If you try and read from /dev/null, you get nothing, since the null device makes things disappear. The above command is equivalent to touch since head reads nothing and then writes nothing to file, creating an empty file. head /dev/null > file. You may think that this is a completely useless tool, but there are plenty of times where you need something to disappear – such as input or output or your ic221 homework – that is when you need /dev/null. 4.1 Other useful redirect dev's Unix also provides a number of device files for getting information: /dev/zero: Provide zero bytes. If you read from /dev/zeroyou only get zero. For example the following writes 20 zero bytes to a file: head -c 20 /dev/zero > zero-20-byte-file.dat /dev/urandom: Provides random bytes. If you read from /dev/urandomyou get a random byte. For example the following writes a random 20 bytes to a file: head -c 20 /dev/urandom > random-20-byte-file.dat 4.2 (Extra) A note on the /dev directory and the OS The files you find in /dev are not really files, but actually devices provided by the Operating System. A device generally connects to some input or output component of the OS. The three devices above ( null, zero, and urandom) are special functions of the OS to provide the user with a null space ( null), a consistent zero base ( zero) , and a source of random entropy ( urandom). If we take a closer look at the /dev directory you see that there is actually quite a lot going on here. 
alarm hidraw0 network_throughput ram9 tty13 tty35 tty57 ttyS2 vboxusb/ ashmem hidraw1 null random tty14 tty36 tty58 ttyS20 vcs autofs hpet oldmem rfkill tty15 tty37 tty59 ttyS21 vcs1 binder input/ parport0 rtc@ tty16 tty38 tty6 ttyS22 vcs2 block/ kmsg port rtc0 tty17 tty39 tty60 ttyS23 vcs3 bsg/ kvm ppp sda tty18 tty4 tty61 ttyS24 vcs4 btrfs-control lirc0 psaux sda1 tty19 tty40 tty62 ttyS25 vcs5 bus/ log= ptmx sda2 tty2 tty41 tty63 ttyS26 vcs6 cdrom@ loop0 pts/ sg0 tty20 tty42 tty7 ttyS27 vcsa cdrw@ loop1 ram0 sg1 tty21 tty43 tty8 ttyS28 vcsa1 char/ loop2 ram1 shm@ tty22 tty44 tty9 ttyS29 vcsa2 console loop3 ram10 snapshot tty23 tty45 ttyprintk ttyS3 vcsa3 core@ loop4 ram11 snd/ tty24 tty46 ttyS0 ttyS30 vcsa4 cpu/ loop5 ram12 sr0 tty25 tty47 ttyS1 ttyS31 vcsa5 cpu_dma_latency loop6 ram13 stderr@ tty26 tty48 ttyS10 ttyS4 vcsa6 disk/ loop7 ram14 stdin@ tty27 tty49 ttyS11 ttyS5 vga_arbiter dri/ loop-control ram15 stdout@ tty28 tty5 ttyS12 ttyS6 vhost-net dvd@ lp0 ram2 tpm0 tty29 tty50 ttyS13 ttyS7 watchdog dvdrw@ mapper/ ram3 tty tty3 tty51 ttyS14 ttyS8 watchdog0 ecryptfs mcelog ram4 tty0 tty30 tty52 ttyS15 ttyS9 zero fb0 mei ram5 tty1 tty31 tty53 ttyS16 uinput fd@ mem ram6 tty10 tty32 tty54 ttyS17 urandom full net/ ram7 tty11 tty33 tty55 ttyS18 vboxdrv fuse network_latency ram8 tty12 tty34 tty56 ttyS19 vboxnetctl You will learn more about /dev's in your OS class, but for now you should know that this is a way to connect the user-space with the kernel-space through the file system. It is incredibly powerful and useful, beyond just sending stuff to /dev/null. What each of the files are is the input to some OS process. For example, each of the tty information is a terminal that is open on the computer. The ram refer to what is currently in the computer's memory. The dvd and cdrom, that is the file that you write and read to when connecting with the cd/dvd-rom. And the items under disk, that a way to get to the disk drives. 5 File Permissions and Ownership chmod and chown Continuing our exploration of the UNIX file system and command line operations, we now turn our attention to the file ownership and permissions. One of the most important services that the OS provides is security oriented, ensuring that the right user access the right file in the right way. Lets first remind ourselves of the properties of a file that are returned by running ls -l: .- Directory? | .-------Permissions .- Directory Name | ___|___ .----- Owner | v/ \ V ,---- Group V drwxr-x--x 4 aviv scs 4096 Dec 17 15:14 ic221 -rw------- 1 aviv scs 400 Dec 19 2013 .ssh/id_rsa.pub ^ \__________/ ^ File Size -------------' | '- File Name in bytes | | Last Modified --------------' There are two important parts to this discussion: the owner/group and the permissions. The owner and the permissions are directly related to each other. Often permissions are assigned based on user status to the file, either being the owner or part of a group of users who have certain access to the file. 5.1 File Ownership and Groups The owner of a file is the user that is directly responsible for the file and has special status with respect to the file permission. Users can also be grouped together in group, a collection of users who posses the same permissions. A file also has a group designation to specify which permission should apply. You all are already aware of your username. You use it all the time, and it should be a part of your command prompt. 
To have UNIX tell you your username, use the command, who am i: aviv@saddleback: ~ $ who am i aviv pts/24 2014-12-29 10:44 (potbelly.academy.usna.edu) The first part of the output is the username, for me that is aviv, for you it will be your username. The rest of the information in the output refers to the terminal, the time the terminal was created, and from which host you are connected. We will learn about terminals later in the semester. (And yes, I name my computers after pigs.) You can determine which groups you are in using the groups command. aviv@saddleback: ~ $ groups scs sudo On this computer, I am in the scs group which is for computer science faculty members. I am also in the sudo group, which is for users who have super user access to the machine. Since saddleback is my personal work computer, I have sudo access. 5.2 The password and group file Groupings are defined in two places. The first is a file called /etc/passwd which manages all the users of the system. Here is my /etc/passwd entry: aviv@saddleback: ~ $ grep aviv /etc/passwd aviv:x:35001:10120:Adam Aviv {}:/home/scs/aviv:/bin/bash The first two parts of that file describe the userid and groupid, which are 35001 and 10120, respectively. These numbers are the actual group and user names, but Unix nicely converts these numbers into names for our convenience. The translation between userid and username is in the password file. The translation between groupid and group name is in the group file, /etc/group. Here is the SCS entry in the group file: aviv@saddleback: ~ $ grep scs /etc/group scs:*:10120:webadmin,www-data,lucas,slack There you can see that the users webadmin, www-data, lucas and slack are also in the SCS group. While my username is not listed directly, I am still in the scs group as defined by the entry in the password file. Take a moment to explore these files and the commands. See what groups you are in. 5.3 File Permissions We can now turn our attention to the permission string. A permission is simply a sequence of 9 bits broken into 3 octets of 3 bits each. An octet is a base 8 number that goes from 0 to 7, and 3 bits uniquely define an octet since all the numbers between 0 and 7 can be represented in 3 bits. Within an octet, there are three permission flags, read, write and execute. These are often referred to by their short hand, r, w, and x. The setting of a permission to on means that the bit is 1. Thus for a set of possible permission states, we can uniquely define it by an octal number rwx -> 1 1 1 -> 7 r-x -> 1 0 1 -> 5 --x -> 0 0 1 -> 1 rw- -> 1 1 0 -> 6 A full file permission consists of the octet set in order of user, group, and global permission. ,-Directory Bit | | ,--- Global Permission v / \ -rwxr-xr-x \_/\_/ | `--Group Permission | `-- User Permission These define the permission for the user of the file, what users in the same group of the file, and what everyone else can do. For a full permission, we can now define it as 3 octal numbers: -rwxrwxrwx -> 111 111 111 -> 7 7 7 -rwxrw-rw- -> 111 110 110 -> 7 6 6 -rwxr-xr-x -> 111 101 101 -> 7 5 5 To change a file permission, you use the chmod command and indicate the new permission through the octal. For example, in part5 directory, there is an executable file hello_world. Let's try and execute it. To do so, we insert a ./ in the front to tell the shell to execute the local file. > ./hello_world -bash: ./hello_world: Permission denied The shell returns with a permission denied. That's because the execute bit is not set. 
#> ls -l hello_world -rw------- 1 aviv scs 7856 Dec 23 13:51 hello_world Let's start by making the file just executable by the user, the permission 700. And now we can execute the file: #> chmod 700 hello_world #> ls -l hello_world -rwx------ 1 aviv scs 7856 Dec 23 13:51 hello_world #> ./hello_world Hellow World! This file can only be execute by the user, not by anyone else because the permissions for the group and the world are still 0. To add group and world permission to execute, we use the permission setting 711: #> chmod 711 hello_world #> ls -l hello_world -rwx--x--x 1 aviv scs 7856 Dec 23 13:51 hello_world At times using octets can be cumbersome, for example, when you want to set all the execute or read bits but don't want to calculate the octet. In those cases you can use shorthands. r, w, xshorthands for permission bit read, write and execute - The +indicates to add a permission, as in +xor +w - The -indicates to remove a permission, as in -xor -w u, g, ashorthand's for permission bit user, group, and global (or all) Then we can change the permission chmod +x file <-- set all the execute bits chmod a+r file <-- set the file world readable chmod -r file <-- unset all the read bits chmod gu+w file <-- set the group and user write bits to true Depending on the situations, both the octets and the shorthand's are preferred. 5.4 Changing File Ownership and Group The last piece of the puzzle is how do we change the ownership and group of a file. Two commands: chown user file/directory: change owner of the file/directory to the user chgrp group file.directory: change group of the file to the group Permission to change the owner of a file is reserved only for the super user for security reasons. However, changing the group of the file is reserved only for the owner. aviv@saddleback: demo $ ls -l total 16 -rwxr-x--- 1 aviv scs 9133 Dec 29 10:39 helloworld -rw-r----- 1 aviv scs 99 Dec 29 10:39 helloworld.cpp aviv@saddleback: demo $ chgrp mids helloworld aviv@saddleback: demo $ ls -l total 16 -rwxr-x--- 1 aviv mids 9133 Dec 29 10:39 helloworld -rw-r----- 1 aviv scs 99 Dec 29 10:39 helloworld.cpp Note now the hello world program is in the mids group. I can still execute it because I am the owner: aviv@saddleback: demo $ ./helloworld Hello World However if I were to change the owner, to say, pepin, we get the following error: aviv@saddleback: demo $ chown pepin helloworld chown: changing ownership of ‘helloworld’: Operation not permitted Consider why this might be. If any user can change the ownership of a file, then they could potentially upgrade or downgrade the permissions of files inadvertently, violating a security requirement. As such, only the super user, or the administrator, can change ownership settings.
https://www.usna.edu/Users/cs/aviv/classes/ic221/s17/units/01/unit.html
CC-MAIN-2018-22
refinedweb
4,050
65.35
Buzz kill Government places bee species on endangered list for first time, A2 TUESDAY, OCTOBER 4, 2016 75¢ newsstand Dillons site has online buying SERVICE DOG LOOKING FOR GENEROUS HEARTS VONACHEN CASE REQUEST DENIED The Kansas Supreme Court on Monday denied Reno County District Attorney Keith Schroeder’s request for a temporary stay or injunction that would bar plans for a new mental evaluation for Samuel Vonachen. Read more on A2 Q New feature at Marketplace in Hutch lauded as a time-saver. BY JOHN GREEN The Hutchinson News jgreen@hutchnews.com ClickList, Kroger’s online grocery shopping system, has come to Hutchinson. The Dillons Marketplace on East 30th Avenue launched the service Tuesday. “Kroger has been testing on-line ordering in select stores since November 2014 and the feedback from our customers across the country has been overwhelmingly positive,” state Dillons E-Commerce Manager Tony Salinas in a news release. “Senior citizens, parents with young children, and busy professionals all appreciate this new convenience.” See DILLONS / A7 ROYALS NOW IN FALL BACK AND ASSESS MODE Sandra J. Milburn/The Hutchinson News. Read more on B1 ClickList is a service now offered at the Dillons Marketplace store. Customers order online, then arrive at the delivery area, pay for their groceries and a Dillons employee will load them into their car. Photos by Lindsey Bauman/The Hutchinson News Service dog Indigo stands. This helpful pal needs aid now Justices will stop at Reno schools today Q Trinity cancels a Kan. high court judge’s visit ahead of HCC event. BY MARY CLARKIN The Hutchinson News mclarkin@hutchnews.com Trinity Catholic Junior-Senior High School canceled a scheduled appearance today of Kansas Supreme Court Associate Justice Dan Biles, after realizing Biles is on the Nov. 8 ballot. The Kansas Supreme Court will conduct a special session at 6:30 tonight at Hutchinson Community College’s Stringer Fine Arts Center, 600 E. 11th Ave. The event is open to the public, and the court will hear oral arguments in two cases – one involving a methamphetamine conviction and the second involving a dispute between businesses. When the top court visits a community, the justices typically talk to students in the area as part of STATE TAX REVENUES COME UP SHORT TOPEKA – Kansas collected nearly $45 million less in taxes in September than expected, providing a major blow to state finances and scrambling the budget. Read more on A2 GoFundMe account aims to aid ailing canine BY KATHY HANKS The Hutchinson News khanks@hutchnews.com For the past three years, Indigo has taken care of Bob Lucas – serving as his eyes, keeping him safe. Now it’s Lucas’ turn to care for the big yellow Labrador retriever. Six months ago, Lucas and his wife, Georgia, of Hutchinson, began noticing the guide dog was limping slightly. They had hoped it was something that would go away. Instead, in recent weeks it has gotten worse. Indigo’s veterinarian has recommended surgery for a damaged ACL around the knee of the back left leg. See DOG / A8 RUNAWAY WIN Hutchinson bulldozes Kansas Wesleyan JV 64-0 at Gowans Stadium Bob Lucas walks with Indigo outside their Hutchinson home on Monday. B1 See JUSTICES / A7 INTERCEPTED LETTER Service dog requiring surgery LOCAL 3-DAY OUTLOOK TODAY 79 52 Dear Indigo, May the account set up for your fiscal needs fetch a generous response. YEAR 145 NO. 
94 Wednesday TV LISTINGS CLASSIFIEDS LOTTERIES A5 B5 A2 COMICS OBITUARIES CROSSWORD B7 A7 B8 SPORTS SCOREBOARD OPINION B1 B2 A6 80 60 Follow us: Thursday Want more? See a full weather report on Page A8 76 45 A2 Tuesday, October 4, 2016 The Hutchinson News PAGE TWO CThings a ltoedontoday dar of Events Things to do Tomorrow 10:45 a.m. Babytime, Hutchinson Public Library, 901 N. Main. 11:30 a.m. Reno Retired Employees Luncheon, Sirloin Stockade, 1526 E. 17th. 5 p.m. South Hutchinson Farmers Market, Lionette Field, 101 W. Ave. C. 6:30 p.m. Author Rod Beemer: “Notorious Kansas Bank Heists,” Reno County Museum, 100 S. Walnut. Hutchinson Zoo’s always a site to put on your list From 10 a.m. to 4:45 p.m. today, explore the Hutchinson Zoo, 6 Emerson Loop E Carey Park. It’s a recreational and animal rehabilitation facility that promotes conservation education on behalf of wildlife. Remember library’s Toddler Time today At 9:15 a.m. or 10 a.m. today, bring your young ones to one of the Toddler Time sessions in the children’s activity room at the Hutchinson Public Library, 901 N. Main. 7:15 a.m. Ave A. “Walk to School Day”. Safely walk students to Ave. A Elementary School. 9:30 a.m. Preschool Storytime, Hutchinson Public Library, 901 N. Main. 10 a.m. Reno County Farmers’ Market, Second and Washington, Hutchinson. 10:45 a.m. Babytime, Hutchinson Public Library, 901 N. Main. 10 a.m. to 5 p.m. Take the children to the Kansas Kids Museum in the Hutchinson Mall food court, 1500 E. 11th. Travel to the Sedgwick County Zoo, 5555 W. Zoo Blvd. 1 to 5 p.m. See the art at the Birger Sandzen Memorial Gallery, 401 N. First, Lindsborg. Have an event you’d like to add? Submit it at hutchnews.com/calendar. Please submit events at least a week in advance. Justices deny Reno DA’s motion in Vonachen case BY THE NEWS STAFF The Kansas Supreme Court on Monday denied Reno County District Attorney Keith Schroeder’s request for a temporary stay or injunction that would bar plans for a new mental evaluation for Samuel Vonachen. Vonachen is the Hutchinson teen convicted in August in the deaths of his mother and sister as a result of a fire at the family home. He is awaiting sentencing. Schroeder lodged legal action before the Supreme Court against Reno County District Judge Trish Rose and public defender Sarah McKinnon in September. He contends Vonachen should be held in Reno County Correctional Facility – not Kan. collected $45M less than expected in Sept. BY JONATHAN SHORMAN The Topeka Capital-Journal TOPEKA – Kansas collected nearly $45 million less in taxes in September than expected, providing a major blow to state finances and scrambling the budget. State government now faces a shortfall greater than $60 million, just three months – or one quarter – into the fiscal year. Barring a dramatic turnaround, lawmakers and Gov. Sam Brownback will confront a bleak financial situation when the Legislature returns in January. Senate President Susan Wagle, R-Wichita, argued Brownback is required by law to make cuts himself because state finances are in the red. The governor’s office said the administration is not planning to make allotments. The figures are sure to provide ammunition to campaigns ahead of the November general election just weeks away. Democrats and moderate Republicans have made the budget and taxes a prominent issue. Month after month, with only a few exceptions, the revenue reports have disappointed. Sometimes the shortfall is small – $10.5 million in August. 
In other months, the numbers are staggering: May fell $76 million below expectations. On Tuesday, a working group assembled by Brownback to examine why estimates are so frequently off will present its findings and recommendations. Individual income tax collections were off by about $14 million, or 6 percent. The Kansas Department of Revenue cited weaker than expected quarterly payments related to capital gains and the stock market in explaining the figures. Corporate income taxes came in short by $17 million, or 21 percent. Retail sales tax collections fell short by $9 million, or about 5 percent. The approximately $45 million monthly shortfall represents a miss of about 8 percent. So far during the fiscal year, which began in July, Kansas has collected $69 million less in taxes than projected. “The significant contributors to less-than-expected September receipts were individual estimated payments related to capital gains and the stock market; a continued regional trend of low corporate tax receipts and sales tax receipts,” Revenue Secretary Nick Jordan said in a statement. “Withholding tax receipts, which are an indicator of jobs and income, continues to perform above the previous year.” Even before the September figures, Kansas faced a budget shortfall. The state began the fiscal year in July with an anticipated positive ending balance of only $5 million that was quickly wiped away by poor July and August tax collections. Though the state now faces a $60 million budget hole at least, continued monthly revenue shortfalls could realistically push the figure past $100 million. A twice-yearly revenue forecast will also be released in November, and that projection could send the figure even higher. “Sam Brownback’s continued refusal to truthfully acknowledge and address the failures of his economic policies not only threatens the future of our state, but insults our intelligence,” Senate Minority Leader Anthony Hensley, D-Topeka, said in a statement. “Kansas voters have the power to put an end to the ongoing Brownback budget crisis when they cast their ballots in November.” State budget director Shawn Sullivan asked agencies in August to provide budget scenarios featuring a 5 percent cut. Last week, he said Brownback’s budget proposal in January wouldn’t include across-the-board cuts, but acknowledged the current budget will have to be adjusted. Wagle, citing a state law, said Brownback needs to make cuts to eliminate the budget shortfall. Hensley has also previously said Brownback should act before the legislative session. “I served in the Kansas Senate when Kathleen Sebelius was governor and Mark Parkinson was governor and both of them cut across the board when we had a shortfall,” Wagle said. “Clearly, our statutes require that when there is a shortfall in revenues and when the budget is in the red, the governor is supposed to allot.” the Reno County Youth Services Detention Facility – pending sentencing. The Supreme Court denied Schroeder’s bid to block the mental examination. Schroeder had claimed the pre-sentence mental examination/evaluation report before sentencing was “needless.” Rose had ordered that Vonachen continue to be housed at the Reno County juvenile detention center, rather than being moved to the county jail, pending sentencing. Then, during a hearing on a subsequent state motion to have him moved to the jail, Rose ordered that Vonachen be sent to Larned State Hospital for a mental evaluation before he is sentenced in the case. 
Last week, Schroeder said in a filing with the Supreme Court, he had learned from a staff member at Reno County Youth Services Detention Facility that Vonachen said he had met with his attorney and learned he would not be transferred to Larned State Hospital for evaluation, but rather an evaluation would be completed while he remains in the juvenile facility in Hutchinson. Schroeder said he had received no motions, orders or notices pertaining to that development. He also asserted to the high court that the law requires a defendant convicted of a felony to be committed to a state security hospital or other suitable local mental health facility for the examination or evaluation. US bees added to endangered species list A yellow-faced bee is shown in Hawaii. Federal authorities added seven yellow-faced bee species, Hawaii’s only native bees, for protection under the Endangered Species Act on Friday, a first for any bees in the United States. BY AMY B. WANG The Washington Postish looking bees are more commonly known as “yellow-faced” or “masked” for their yellow-to-white facial markings. These species are responsible for pollinating some of Hawaii’s indigenous plant species, many of which are threatened themselves. Karl Magnacca, a Hawaiibased entomologist, told The Associated Press that efforts to have the bees federally protected took nearly a decade. “It’s good to see it finally come to fruition,” he told the AP, adding that yell0wfaced bees tend to favor the more dominant trees and shrubs in Hawaii, which helps “maintain the structure of the whole forest.” Magnacca did much of the initial research on the bees in support of the Xerces Society for Invertebrate Conservation, a nonprofit that aims to protect pollinators and other invertebrates. (The group says it takes its name from the Xerces Blue butterfly, “the first butterfly known to go extinct in North America as a result of human activities.”) For years, the Oregonbased group had pushed for yellow-faced bees to be recognized and protected. In 2009, the Xerces Society first submitted petitions to the U.S. Fish and Wildlife Service, and the group celebrated news of the federal agency’s ruling on Friday – even as representatives noted there could have been more done to protect the insects. The endangered species designation “is excellent news for these bees, but there is much work that John Kaia Associated Press needs to be done to ensure that Hawaii’s bees thrive,” Xerces Society spokesman Matthew Shepherd wrote in a statement on the group’s website. “[Yellow-faced bees] are often found in small patches of habitat hemmed in by agricultural land or developments. Unfortunately, the [Fish and Wildlife Service] has not designated any ‘critical habitat,’ areas of land of particular importance for the endangered bees.” According to the federal agency, yellow-faced bees have been threatened by non-native bees and other invasive animal species, as well as by human development. Though there is no evidence yet, researchers noted, too, that yellow-faced bees could be compromised by diseases transmitted by non-native insects. “The small number of remaining [yellow-faced bee] populations limits this species’ ability to adapt to environmental changes,” the agency wrote in its Sept. 30 final ruling. “The effects of climate change are likely to further exacerbate these threats.” Magnacca told the AP there are a lot more rare insects that deserve protection. 
“It may not Published daily and Sunday 300 West 2nd Hutchinson, KS 67504-0190 Monday’s numbers: Daily Pick Midday 2-5-9 Evening: 9-5-7 Kansas Cash: 10-12-18-22-31 Super Cashball: 7 Estimated jackpot: $150,000 2by2: Red: 13-24 White: 9-25 Outside Hutchinson 1(800)766-3311 fax (620) 662-4186 (USPS 254820) Accounting and human resources John D. Montgomery, Rex Christner, business & HR director editor and publisher Ext. 400, email: jmont@hutchnews.com Ext. 410, email: rchristner@harrisbusiness.com News department Newspaper production and comercial printing managing editor and news director Ext. 300, email: rsylvester@hutchnews.com Advertising sales and business marketing services Jeanny Sharp, marketing solutions director Ext. 200, email: jsharp@hutchnews.com Anita Stuckey, print marketing solutions manager Ext. 222, email: astuckey@hutchnews.com Kevin Rogg, digital marketing solutions manager Ext. 210, email: krogg@hutchnews.com Newspaper delivery and digital subscriber services Sara Bass, circulation marketing & operations director Ext. 100, email: sbass@hutchnews.com Subscription information Single copy: 75 cents daily Sunday: $2.00 To start a subscription call:. honeybees, since their work in pollinating crops makes them economically valuable to humans. However, in a research paper published in the journal Nature Communications last summer, scientists argued that wild bees may deserve just as much attention, even if fewer wild species are responsible for crop pollination. “There’s more than just economic reasons to protect nature and the species in it,” Taylor Ricketts, a co-author of the paper and director of the Gund Institute for Ecological Economics at the University of Vermont, told The Post last June. Wild bees are important to the larger ecosystem, likely integral to maintaining the habitat for other species that indirectly affect humans, Ricketts said. The designation of these seven bees as endangered species is a start, conservationists say. In the same ruling, the U.S. Fish and Wildlife Service also designated the band-rumped storm petrel, the orangeblack Hawaiian damselfly and the anchialine pool shrimp as endangered species. The designations take effect Oct. 31. Ron Sylvester, (620) 694-5700 LOTTERIES necessarily be appropriate to list them as endangered, but we have this huge diversity that we need to work on and protect here in Hawaii,” he said. “There’s a huge amount of work that needs to be done.” The designation is considered a victory for conservationists and echoes a broader effort, made in recent years, to recognize the contributions and importance of bees. That effort may be paying off: Last May, the White House released its National Strategy to Promote the Health of Honeybees and Other Pollinators in a bid to protect pollinators around the country. “I have to say that it is mighty darn lovely having the White House acknowledge the indigenous, unpaid and invisible workforce that somehow has managed to sustain all terrestrial life without health-care subsidies, or a single COLA, for that past 250 million years,” Sam Droege, a U.S. Geological Survey wildlife biologist and one of the country’s foremost experts on native bee identification, told The Washington Post last May. Much of the publicity so far has been focused on 694-5730 or 1 (800) 766-5730 Periodical-class postage paid at Hutchinson, KS 67504-0190. Subscription renewal policy Postmaster: Send address changes to: The Hutchinson News, P.O. Box 190, Hutchinson, KS 67504-0190. 
Suggested News+ home delivery (7-day) For your convenience, subscriptions are automatically renewed and delivery continues at the current full rate unless our office is notified otherwise. 3-month Hutchinson $78.00 $117.00 Plus tax Plus tax Gregg Beals, production director Ext. 700, email: gbeals@hutchnews.com Jarod Wannamaker, prepress manager Ext. 500, email: jwannamaker@hutchnews.com Information technology Nick Hemphill, IT manager Ext. 443, email: nhemphill@harrisbusiness.com Newspaper printing Mike Heim, press manager Ext. 820, email: mheim@hutchnews.com Newspaper packaging Jeremy Coen, packaging manager Ext. 701, email: jcoen@hutchnews.com. Subscriber Services 694-5730 1 (800) 766-5730 Sales & Service 694-5700 (Dept. 2) 1 (800) 766-5704 News 694-5700 (Dept. 3) 1 (800) 766-5740 The Hutchinson News Tuesday, October 4, 2016 A3 LOCAL PAID ADVERTISEMENT Burrton instructor garners a business teaching honor BY THE NEWS STAFF BURRTON – Burrton High School’s business/ computer teacher Kenna Teel was named Secondary Business Teacher of the Year in Kansas. The honor was announced Monday at the Kansas Business Education Association’s annual convention in Wichita. Teel lives in Newton and has taught over 20 years, including overseas for two years in Dubai, and at Kingman High School and for Johnson County and Pratt community colleges. She is a f ormer president of the Kanas Business Education Association and is in her fifth year at Burrton. At Burrton High, every freshman has to take computer applications and Teel teaches those classes, as well as electives. She teaches subjects that are constantly changing. “It’s pretty much different every year. I just kind of roll with what’s happening,” she said. “It’s a constant challenge because the business world changes constantly. It’s a constant learning process for me.” Teel doesn’t know who nominated her for the award. Nominees were required to submit a resume and letters of recommendation. The latter came from Courtesy photo Burrton High School teacher Kenna Teel, right, receives the Secondary Business Teacher of the Year honor from Kim Dhority, leader of the Kansas Business Education Association. The award for Teel was announced Monday at the association’s state convention in Wichita. administrators, colleagues and from a former student. There is no monetary prize with the award. Collision on US 50 sends 1 to hospital BY THE NEWS STAFF NEWTON – A 2014 Dodge passenger car ran a stop sign at U.S. 50 about three miles west of Newton, and the driver of a 2010 Freightliner, Jose A. Valdivia Navarro, 25, of Newton, was sent to Newton Medical Center with injuries, according to the Lawmakers fear KanCare backlog will grow again BY JONATHAN SHORMAN Topeka Capital-Journal Lawmakers fear a backlog of KanCare applications the state has spent months reducing will once again balloon. The bipartisan worry belies assurances by the Kansas Department of Health and Environment, which oversees the state’s privatized Medicaid program. Members of the Legislature’s KanCare Oversight Committee express concern the agency isn’t prepared. KDHE is on track to clear the backlog in October, according to a state audit released last month, after it told federal officials in June the number of unprocessed applications was far greater than previously disclosed. Internal auditing documents suggest the backlog undercounting may have continued undiscovered if not for inquiries from the federal government. 
Interview notes by auditing staff, obtained by The Topeka Capital-Journal through a records request, show one of the state’s KanCare contractors told auditors that questions from the federal Centers for Medicare and Medicaid Services led to the discovery of the low-ball figures. As the agency seeks to clear the last of the backlog, it faces questions over whether it can prevent the problem from reemerging. “I’m just worried that we’re going to be back in the same trouble come January and that really bothers me,” said Rep. Dan Hawkins, a Wichita Republican who chairs the KanCare Oversight Committee. The KanCare backlog first developed in 2015, spurred by problems with a new electronic eligibility system called KEES. By spring, KDHE indicated it was reducing the number of unprocessed applications. Kansas Highway Patrol. The accident occurred about 10:44 a.m. Monday. The driver of the car, Frank T. Haas, 55, of Hutchinson, wasn’t injured, the report said. Haas was driving north on Ridge Road approaching U.S. 50, while the Freightliner was traveling west on U.S. 50. The Freightliner struck the Dodge as the car entered the intersection without stopping. Both vehicles traveled northwest through the intersection, went through the guard rail and came to rest in the creek, the report said. Both drivers were wearing their seat belts. Man hurt in Barton rollover BY ASHLEY BOOKER The Hutchinson News abooker@hutchnews.com An Ellinwood man was hospitalized after rolling his vehicle in Barton County early Saturday morning. Barton County Sheriff ’s Office deputies were sent to an injury crash south of Ellinwood around 2:30 a.m. According to the Barton County Sheriff ’s Office, it appears that Chantz Clawson, 23, was traveling south on Southeast 105 Avenue in a 2009 Chevrolet Silverado 1500 pickup when he failed to negotiate a curve onto Southeast 20 Road. His pickup left the road, went into the south ditch, struck a field drive, went airborne and rolled about two and a half times. Clawson was sent to Great Bend Regional Hospital with injuries. He has since been released from the hospital. COVERING THE BETTER PART OF KANSAS PRESENTS A Saturday, Oct. 8th 7 a.m. - 1:30 p.m. Sunflower North Building Kansas State Fairgrounds 75 BOOTHS! $1 Admission Fee Children 6 & under get in free. For more information call 620-694-5704 A4 Tuesday, October 4, 2016 The Hutchinson News NEWS Interfaith Housing celebrates 25 years of service BY ASHLEY BOOKER The Hutchinson News abooker@hutchnews.com A group of around 250 people helped celebrate Interfaith Housing Services’ 25 years of service to not only individuals, but entire communities across Kansas. While it was a celebration of the nonprofit organization and those it’s influenced, speakers made sure to recognize the people who made it possible: the donors, volunteers, staff, and current and founding board members. Without them, and a mission guided through standing strong to Christian principles, it was noted that the organization wouldn’t have made such a long-term impact on the lives of others. Former Interfaith Housing Services President John Scott was the keynote speaker and said he’d been overwhelmed with recent proclamations and awards in his name, and countless cards and phone calls thanking him for his service. “It’s overwhelming,” he said, his voice wavering. “It’s very humbling for me. After 25 years all of this is happening, and 25 years ago all I wanted was just a job, but God had other plans. 
Instead of a job, He gave me a mission.” Scott said he felt he shouldn’t be recognized for all that Interfaith has done. Ashley Booker /The Hutchinson News Attendees enjoy a meal provided at the Interfaith Housing Services 25th Anniversary Celebration on Monday. “I’ve just been lucky,” he said. “I’ve been in a place where people have come together for a common vision, for a common cause, and done great things together.” While many things were shared about the organization’s past, Mike Smith, board chair and co-interim CEO, closed the gathering with a few words about the future. “Thank you again for remembering us and our work in the community,” he said to the crowd. “Together we will continue to make a difference by improving all of our lives by improving the lives of the people we serve.” In looking towards it future, Smith announced the expansion of what they call Santa Fe Two in Dodge City, which will continue low-income housing in Ford County. He said Interfaith will also continue to collaborate with the city of Hutchinson, Hutchinson Community Foundation and United Way of Reno County to improve and refurbish the SW Bricktown neighborhood. He also announced that Interfaith recently accepted a donation of the former St. Elizabeth Hospital. While the details aren’t yet finalized, Smith noted that in the long run, people in the room may have the opportunity to live in the room they were born. Scott said in 1991 no one could have imagined what IHS would become in 25 years, or the kind of impact it would have on not only this community, but the state as a whole. For that, he said he was grateful. “May God richly bless IHS and each one of you,” he said in closing. Pine Village care center’s Benefit Day event slated BY THE NEWS STAFF MOUNDRIDGE – Pine Village’s Benefit Day Auction & Dinner is set for Oct. 20 at the Wellness Center, 86 22nd Ave. The sausage dinner is from 5 to 6:30 p.m., with homemade pie for dessert, and the auction begins at 6 p.m. All proceeds will go toward a new kitchen for the nonprofit Continuing Care Retirement Community’s main dining hall. Auction items are currently on the Pine Village website,. Some of the highlights this year include a hand-crafted cedar chest, a compact meat smoker and a handful of themed baskets. Other items of interest include a Bill Snyder signed K-State football, KU basketball tickets, Royals tickets and a signed KU basketball. There also will be an assortment of paintings as well as various gift certificates. To donate an item for the auction or make a financial contribution, contact Julie Kern, director of marketing, at (620) 345-2901 or julie.kern@pinevillageks.org. BRIEFS Alternative active commute for students with items in the garage as well as smoke damage to the walls and ceiling. The release, distributed by Deputy Fire Chief Doug Hanen, states that “outside cooking appliances are meant to be used away from (a) structure and definitely not inside open structures, such as a garage or enclosed porch area.” The Reno County Health Department is inviting Avenue A Elementary School students to walk to school on Wednesday. Adult community volunteers will meet the children at Avenue A. Park, at the corner of Washington and Avenue A, at 7:15 a.m. Wednesday and walk safely to the elementary school, according to a RCHD release. The Walking School Bus Program is “to encourage our youth to be physically active” through providing a “safe walking option to actively commute to school.” It is being hosted in partnership with United Way of Reno County. 
Roads closing for bridge replacement Garage fire on Sunday costs $3,500 in damages Supreme Court declines Kansas serial killer case Hutchinson firefighters responded Sunday afternoon to a structure fire in a neighborhood after hot material fell out of a smoker. According to a news release, the fire started at about 1:26 p.m. in the garage of a house in the 500 block of Molly Mall after debris from the smoker dropped onto contents in the garage. “Several” people were at home when the fire occurred, but no injuries were reported. The fire took less than five minutes to contain, but in that time it caused an estimated $3,500 in damages, through both direct contact TOPEKA –. – From staff, wire reports A bridge replacement project will close 69th Avenue between Lerado Road and Andre Road to all through traffic beginning today, according to Reno County Public Works. The road will remain closed until further notice. PICK YOUR DAY IN OCTOBER 1 OFF $ either October 9, Sunday Buffet or October 25, Chicken Buffet. WE ARE OPEN: Mon - Fri: 7a.m. - 7p.m. • Saturday: 7a.m. - 2p.m. Closed Sundays (Except for Monthly Second Sunday Buffet) True Talent: A Journey Through Art History will run Tuesdays and Thursday from August 30th – October 20th. For more information, visit kansasnie.com under the “Teachers” tab. Questions may be directed to Jeremiah Thornton, 785.822.1470 or jthornton@harrisbusiness.com. Chapter 11: Compare and Contrast As they were about to leave the portal, Skye raised her hand awkwardly. “Mr. Thomas? I know that time is short, but can Wes and I talk to our teammate for a minute?” Mr. Thomas turned and saw that even though she was trying to hide it, Piper seemed a bit down. “No time!” Stick Monkey shook his head and gave Mr. Thomas a worried look. “There’s a little time.” Mr. Thomas knew Stick Monkey had reason to be concerned, but his students obviously needed to help each other. “Let me know when you are ready.” Piper started to argue. She did not want the attention directed at her. She did not want to dampen the fun everyone was having. “What’s wrong Piper?” Wes asked. “I have known you since we were in preschool, and when you are this quiet, I know something is up.” “I don’t feel like I’m that great of an artist Written and Illustrated by Mallory Goeke compared to the two of you,” Piper replied. “Wes, you draw these amazing monsters that look so cool.” Piper looked down at the ground sadly. “Skye, I know we just met, but you create amazing drawings too. All I can draw are cute things,” she said crossing her arms and trying to avoid eye contact with Wes. “I really like drawing cute and luffy animals, but I feel like I should be creating these amazing projects like you guys.” Skye and Wes put their hands on Piper’s shoulders reassuringly. “You are a member of The Talented Trio!” Wes smiled at her. “Trio means three, without you we wouldn’t have a team.” “Piper, if there’s anything that we have learned so far, it’s that you really do care about art!” Skye nudged her friend. “I think you are really talented!” “I didn’t mean to make fun of your work earlier,” Wes added. “I know you enjoy drawing cute animals and, if that makes you happy, then I’m glad that you’ve found your style.” Although their words did not take away all of the negative feelings Piper had about her work, she appreciated that they cared. A small smile spread across her face. “I suppose we should get going. I really do want to see what Mr. 
Thomas has to show us at our inal stop.” “Maybe the next artist we learn about will help us even more! Let’s found out okay?” Skye took her friend’s hand and they started to exit the portal with Wes not far behind. They soon caught up to Mr. Thomas and Stick Monkey standing in front of a blue house. “Woah! Skye giggled. “The artist who lives here must be amazing!” “Students, this is La Casa Azul, The Blue House, and this is where our next artist lived and created a vast majority of her work.” Mr. Thomas motioned for his students to follow him. “She overcame a lot of pain and heartache in her life and her paintings didn’t shy away from that hurt.” They stopped at a large window and Mr. Thomas let the kids look in. “Her name was Frida Kahlo.” Has there ever been a time when you have experienced doubt or insecurity and needed the help of your friends? Can you think of a time your words really helped a friend experiencing those same feelings? The Hutchinson News Tuesday, October 4, 2016 A5 ADVICE Perfect woman’s ideology isn’t ideal Dear Annie: I am a soonto Dear Annie Annie Lane other, e.g., yin and yang, night and day, peanut butter and jelly. But your pairing sounds more like Pop Rocks and soda – explosive and causing much bellyaching. My question for you is: Why the rush into dating? I suggest you put that on old until your divorce is finalized. Once you’ve turned the last page in that painful chapter of your life, you can attempt to start fresh. You are used to being in a relationship and may compulsively be seeking a woman to fill that role. Don’t be in such a rush to partner up that you settle for someone and find yourself wanting to excuse away major issues. Dear Annie: I’m a 14-year-old boy from New Jersey. I just started high school and am involved in clubs and on the junior varsity football team. I’m not a straight-A student, but I make pretty good grades, mostly B’s and some A’s. th ey barely even notice me. I sometimes feel as if I’m being punished for being the good kid. What can I do to make them take more notice of me? – Middle Child Dear Middle: It’s not easy being golden. You’re a great blessing in your parents’ lives, and there’s no doubt in my mind that they know it, but they’re focusing their attention on the ongoing crises with your elder brother. Tell them how you feel. It might not exactly be fair that you have to remind them you need attention, too, but it’s fortunate for your family to have someone as mature and patient as you on the team. Send your questions for Annie Lane to dearannie@ creators.com. To find out more about Annie Lane and read features by other Creators Syndicate columnists and cartoonists, visit the Creators Syndicate website at. com. Name brand or generic, bleach is bleach Dear Heloise: Is there any difference in strength of name-brand bleach and a generic brand? And does the strength lessen over time? – Joyce C., via email Dear Joyce: Bleach is bleach. It can be confusing, because today it’s more common to buy concentrated bleach. Concentrated bleach used to be 5.25 to 6 percent sodium hypochlorite solution, but now it is 8.25 percent. When reading the bottle, it really doesn’t make a difference whether it’s name brand or generic, because you should buy based on the percentage for your household needs, with the higher percentage better for sanitizing and disinfecting. You will need to familiarize yourself with how to dilute the bleach, depending on what you are cleaning, with adjustments being made for strength. 
Once opened, bleach will last several months without losing much strength. It Hints from Heloise Heloise does lose strength over time, but it still will be powerful enough to use. How fast it loses strength depends on where it is stored, temperature, etc., but it should be fine up to five months after opening. – Heloise Dear Heloise: I read a hint where a person put cardboard on the bottom of her fabric grocery bags. My mother and I use old car license plates. We put duct tape around the plate so it does not cut through the fabric. It’s a great way to recycle car plates. Friends who do not use fabric bags are happy to give us theirs. They are sturdy, washable and last forever. – Adele H., Plymouth, Ind. Dear Adele: This certainly would work; however, some states require you to return the plates so they cannot be used by another person. A lot of people keep these plates as a collector’s item, though, so just don’t lose them! – Heloise Dear Heloise: I like to use erasable pens when entering notes in my pocket calendar, when making shopping lists and especially when doing crossword puzzles. Since I am prone to mistakes, the eraser on the pen gets loaded with smeary ink and makes a mess of my work. I have found that a quickand-easy way to clean up the eraser is to lightly brush it with an emery board. The old ink is easily “filed” off, and the eraser looks and works like new. – Jane A., Beavercreek, Ohio Dear Heloise: When putting your water or any drink bottle in the car in the cup holder, in order for the condensation to not make a puddle, I utilize a small, TUESDAY EVENING 6 PM 6:30 tight sock. Just put your bottle in any sock, and lo and behold, you get no condensation anywhere. This way, too, you don’t have to throw away any old or tight socks. – Judy N., Lake Worth, Fla. Dear Heloise: I have an old pot outside that has some flowers coming up. I put an old, plastic flying disc under it, lip side up, to catch the runoff water. – Sharon W., Georgetown, Ky. Send a money-saving or timesaving hint to Heloise, P.O. Box 795000, San Antonio, TX 782795000, or you can fax it to 1-210-HELOISE or email it to Heloise@Heloise.com. wer your letter personally but will use the best hints received in my column. October 4, News Wheel Big Bang Big Bang ET Rules Fam. Guy Simpsons PBS NewsHour (N) PBS NewsHour (N) News Mod Fam News Inside Ed. The Voice (N) ‘PG’ Brooklyn New Girl The Flash ‘PG’ Å Bones ‘14’ Å Contenders -- 16 Doctors Opinion Dancing With Stars NCIS (N) ‘PG’ Vice Presidential Debate (N) (Live) Å Vice Presidential Debate (N) News No Tomorrow ‘PG’ News at 9 (:35) TMZ Bones ‘14’ Å Two Men Two Men Vice Presidential Debate (N) (Live) Å Vice Presidential Debate (N) (Live) Å Vice Presidential Debate (N) (Live) Å Vice Presidential Debate (N) (Live) Å News Hollywood Rules Broke Girl Roadtrip World News News J. Fallon Last Man How I Met Broke Girl World American J. Kimmel Colbert Cops ‘PG’ Cops ‘PG’ ›› Austin Powers in Goldmember (2002) How I Met How I Met How I Met How I Met Rosa de Guadalupe Despertar Contigo (N) Tres Veces Ana ‘14’ El color de la pasión Impacto Noticiero Hardball Matthews MSNBC Debate Prev Vice Presidential Debate (N) Live Post Live Post Debate (N) Debate Night Debate Night Vice Presidential Debate (N) Debate Night in America (N) On the Record With The O’Reilly Factor Vice Presidential Debate (N) O’Reilly The Kelly File (N) Chrisley Chrisley WWE SmackDown! (N) (Live) ‘PG’ Å Chrisley Chrisley Mod Fam Mod Fam Seinfeld Pre-Game MLB Baseball Baltimore Orioles at Toronto Blue Jays. 
(N) (Live) MLB Post. Arrow ‘14’ Å Arrow ‘14’ Å Arrow ‘14’ Å Arrow ‘14’ Å Arrow ‘14’ Å ›› The Equalizer (2014) Denzel Washington. Premiere. Å Atlanta Atlanta Atlanta Jack E:60 (N) NBA Preseason Basketball: Knicks at Rockets NBA Basketball To Be Announced WNBA Basketball Baseball Tonight (N) UFC UFC College Football Baylor at Iowa State. Best of WEC ‘14’ Black Ink: Chicago Black Ink: Chicago Love & Hip Hop Love & Hip Hop Basketball Wives LA (4:25) ››› 8 Mile 2016 Hip Hop Awards Show-stopping performances. ‘14’ Wild/Out Wild/Out Wild/Out Criminal Minds ‘14’ Criminal Minds ‘14’ Criminal Minds ‘14’ Criminal Minds ‘14’ Saving Hope ‘14’ Dance Moms ‘PG’ Dance Moms (N) ‘PG’ Dance Moms (N) ‘PG’ Å (:44) Dance Moms ‘PG’ Å Fixer Upper ‘G’ Å Fixer Upper ‘G’ Å Fixer Upper ‘G’ Å Hunters Hunt Intl Fixer Upper ‘G’ Å Chopped Junior ‘G’ Chopped Junior ‘G’ Chopped ‘G’ Chopped ‘G’ Star Chopped The Killing of JonBenet (:45) Married at First Sight ‘14’ (:01) Born This Way Cleveland Abduction Dungeon Cove Dungeon Cove Dungeon Cove Taking Fire (N) ‘14’ Dungeon Cove Say Yes Counting Counting On (N) ‘PG’ Sweet 15 Gypsy Wedding (:04) Counting On Stuck ›› My Babysitter’s a Vampire Best Fr. K.C. Vampire Bunk’d Best Fr. Liv-Mad. Thunder Thunder Nicky School Full H’se Full H’se Full H’se Full H’se Friends Friends (4:30) ›››› Forrest Gump ›› Jumanji (1995) Robin Williams, Bonnie Hunt. The 700 Club ‘G’ Griffith Griffith Andy Griffith Show Raymond Raymond Raymond Raymond King King Counting Cars ‘PG’ Cnt. Cars Cars Forged in Fire ‘PG’ Forged in Fire (:04) Forged in Fire (4:30) ›› Sinister › I, Frankenstein (2014) Aaron Eckhart. Aftermath (N) ‘14’ Cabin-Woods Jokers Jokers Jokers Jokers Jokers Jokers Ad. Ruins Do Better Do Better Jokers Last Man Last Man Last Man Last Man ›› Liar Liar (1997) Jim Carrey, Maura Tierney. Liar Liar The Profit ‘PG’ Shark Tank ‘PG’ Shark Tank ‘PG’ The Profit (N) ‘PG’ Shark Tank ‘PG’ Mary Pickford: Muse ›› Little Annie Rooney (1925) ‘NR’ Mothers of Men ‘NR’ Yours, Mine (5:30) ›› 2012 (2009, Action) John Cusack. Premiere. Å Halt and Catch Fire Halt and Catch Fire River Monsters ‘PG’ River Monsters ‘PG’ North America ‘PG’ North America ‘PG’ River Monsters ‘PG’ (3:55) ›› Notorious 2016 Hip Hop Awards Show-stopping performances. ‘14’ (:36) 2016 Hip Hop Awards ‘14’ Futurama Futurama Tosh.0 Tosh.0 Tosh.0 Tosh.0 Tosh.0 (N) Drunk Daily At Mid. E! News (N) ‘PG’ Rob & Chyna ‘14’ Rob & Chyna ‘14’ Rob & Chyna ‘14’ E! News (N) ‘PG’ Below Deck ‘14’ Below Deck ‘14’ Below Deck (N) ‘14’ Below Deck ‘14’ Happens OC Bizarre Foods Delicious Delicious Delicious Delicious Bizarre Foods Bizarre Foods We Bare Gumball Regular Steven King/Hill Cleveland American Burgers Fam. Guy Fam. Guy The Waltons ‘G’ Bonanza ‘G’ Walker, Tex. Ranger Walker, Tex. Ranger Medicine Woman Holy Mass Mother Angelica Live News Rosary Threshold of Hope Cate Women of HBO MAX SHOW STZENC 426 407 432 510 (5:45) Westworld ››› Cast Away (2000) Tom Hanks. ‘PG-13’ Å Westworld ‘MA’ Deepwa (5:15) Mad Max: Fury Road ‘R’ (:20) ›› True Story (2015) ‘R’ ›› Focus (2015) Will Smith. Clearing Shameless ‘MA’ 60 Minutes Sports (N) Inside the NFL ‘PG’ FSU FSU Inside the NFL ‘PG’ (5:00) ››› Cars ››› About a Boy (2002) Å (:45) ››› Bull Durham (1988) ‘R’ Å Boiler Rm PREMIUM CHANNELS AAT KANSAS COSMOSNOW SHOWING PHERE CAREY DIGITAL DOME THEATER PLEASE NOTE: Prices and times are subject to change. Call 620.662.2305 ext.347 for advance tickets - credit card required. 
Extreme Weather: (COMING 10/24/16) Robots: Daily: See cosmo.org for showtimes National Parks Adventure: Daily: See cosmo.org for showtimes Pete’s Dragon: Fri - Sun: 7 p.m. Bryce Dallas Howard, Robert Redford, Oakes Fegley The adventures of an orphaned boy named Pete and his best friend Elliot, who just so happens to be a dragon. NOW SHOWING at B&B THEATRES DEEPWATER HORIZON (UR) 3:30 - 4:00 - 6:30 - 6:50; MASTERMINDS (2016) (PG-13) 5:30 - 7:50; MISS PEREGRINE'S HOME FOR PECULIAR CHILDREN (PG-13) 3:30 - 6:30 - 7:15 3D 4:15; STORKS (PG) 3:30 - 6:40; THE MAGNIFICENT SEVEN (2016) (PG-13) 4:00 - 7:00; SULLY (R) 4:15 - 7:00; THE HUTCHINSON FOX THEATRE NEW MOVIES WILL BE ADDED AS OPEN WEEKENDS ARE AVAILABLE! *ALL MOVIES SUBJECT TO CHANGE FOR COMPLETE TV AND MOVIE LISTINGS GO TO HUTCHNEWS.COM/TV Today’s Birthday (10/04/16). Take leadership for a personal passion this year. Strengthen communication channels. New social pursuits this spring lead to energized health and vitality. Change directions for fun, family and romance this autumn, before friends inspire you to act for a shared cause. Pull and grow together. Aries (March 21-April 19) -- Today is an 8 -- Handle financial matters, especially regarding family funds. A new responsibility presents itself, leading to an intensely creative moment. Use your skills and experience. Romance blossoms through communication. Taurus (April 20-May 20) -- Today is a 7 -- Collaborate with your partner to strengthen foundational infrastructure to handle a new assignment. Stick to tried-and-true techniques. Practice makes perfect, and hones for efficiency. Refine and edit. Gemini (May 21-June 20) -- Today is an 8 -- Saving money may be easier than earning it. Conserve resources without suffering. A little discipline goes a long way. Get lost in your work. Pamper yourself afterward. Cancer (June 21-July 22) -- Today is a 7 -- Listen with your heart. Be careful and thorough to advance. Play games and sports with your crew. Work out strategies. Discover a new view with unimagined beauty. Leo (July 23-Aug. 22) -- Today is a 7 -- Enjoy home and family. Take time for another’s problems, and listen for solutions. No bending the rules. Hold others to them, too. Work out a plan together. Virgo (Aug. 23-Sept. 22) -- Today is an 8 -- Communications heat up. Keep a cool head and stay on message. Friends help you make a long-distance connection. Get support from someone with more experience. Gather information. Libra (Sept. 23-Oct. 22) -- Today is a 9 -- Bring home the bacon. Stick to the schedule! Your team is hot; watch the ball and pass when appropriate. There’s money to be made, and it takes coordination Scorpio (Oct. 23-Nov. 21) -- Today is a 9 -- Groom your personal style and branding. Add something new. Make a good impression with someone you care about. Keep your promises. Pay down debt. Gain strength. Sagittarius (Nov. 22-Dec. 21) -- Today is a 6 -- Take private time to get organized and make plans. Review and revise. Get peacefully productive. You’re especially sensitive and intuitive. Slow down and consider all the angles. Capricorn (Dec. 22-Jan. 19) -- Today is an 8 -- Confer with allies. Committees are especially effective. Private meetings get practical results. Teach each other. Put sweat equity into a shared project. Celebrate what gets accomplished. Aquarius (Jan. 20-Feb. 18) -- Today is an 8 -- Compete for more responsibilities. Keep your focus, and winning is a distinct possibility. Listen to a mentor or teacher. Prepare for the test. Review your notes. 
Pisces (Feb. 19-March 20) -- Today is a 7 -- Indulge your curiosity. A loved one needs more attention; take them on an adventure and try something new. Investigate options and choose together. Explore and discover. Prince probe focuses on doctors, black market BY MICHAEL TARM AND AMY FORLITI Associated Press MINNEAPOLIS – More. Prince had a reputation for clean living, and some friends said they never saw any sign of drug use.. Prince performs at the Billboard Music Awards at the MGM Grand Garden Arena in Las Vegas on May 19, 2013. Chris Pizzello/ Associated Press Tuesday, October 4, 2016 GOREN BRIDGE WITH BOB JONES ©2016 Tribune Content Agency, LLC POOR BIDDING REWARDED Both vulnerable, South deals. NORTH ♠A532 ♥J8 ♦984 ♣AK96 WEST ♠8 ♥ A 10 6 3 ♦J765 ♣8532 South had a chance. The opening lead went to the 10 and queen. As the cards lie, the jack of clubs is falling and South has four club tricks, but declarer didn’t know that. He ran off all of his trumps, leaving this position with one trump remaining: EAST ♠96 ♥K942 ♦ Q 10 3 2 ♣ J 10 7 SOUTH ♠ K Q J 10 7 4 ♥Q75 ♦ AK ♣Q4 The bidding: SOUTH WEST NORTH EAST 1♠ Pass 3NT* Pass 4NT Pass 5♥ Pass 6♠ All pass *12-14 points, four-card spade support Opening lead: Five of ♣ It’s a good idea to never bid Blackwood with a side suit that has two fast losers, and this deal is an example of why. South should have contented himself with a fourdiamond cue bid and then passed four spades when his partner couldn’t cue bid four hearts. The defense could have taken the first two heart tricks, but it was reasonable for West to lead a club and NORTH ♠ Void ♥J ♦984 ♣AK9 WEST ♠ Void ♥ A 10 ♦J7 ♣?xx EAST ♠ Void ♥K9 ♦ Q 10 3 ♣?x SOUTH ♠7 ♥Q75 ♦ AK ♣4 On the last trump, West shed a diamond, dummy a heart, and East the nine of hearts. Had East discarded the king of hearts, instead, this column might have had a different hero. South exited with a low heart to East’s king and East exited with a diamond. South won and cashed his other diamond, and West could not defend the position. West had to keep the ace of hearts so he discarded a low club. It no longer mattered who held the jack of clubs. South had four club tricks and his slam. A6 Tuesday, October 4, 2016 The Hutchinson News OPINION EDITORIAL We can’t ignore trade issue in presidential race T his presidential election is creating one of the fiercest political battles against American trade in recent history, and that’s bad news for Kansas farmers. The recent presidential debate should have brought a collective chill across Kansas. Donald Trump lambasted the North American Free Trade Agreement, or NAFTA, as “the single worst trade agreement ever approved in this country.” This came the same week that wheat traders from Mexico visited Great Bend to see the sources of the 2.4 million metric tons of wheat that country imported from the U.S. last year. Mexico is second only to Japan in American wheat exports. Kansas also has played host to visitors from Cuba, who want to lift embargoes and open agricultural trade. Both Trump and Democratic opponent Hillary Clinton said they would fight the TransPacific Partnership, or TPP, a trade agreement favored by Kansas farmers. The TPP would help farmers sell beef and grain to Pacific Rim countries, lowering tariffs and cutting down on unethical global labor practices. Clinton appeared to do an about-face on the TPP, which she strongly supported as Secretary of State. The agreement looked headed for passage by Congress and signed by President Obama before the end of his term. 
But the recent anti-trade rhetoric has looked to undo years of work – to the detriment of Kansas agriculture. Trump seeks to undo decades of Republican support for free trade. Clinton seems to surrender years of support in order to win over supporters of Bernie Sanders, who also spouted anti-trade positions. Clinton offered a caveat at last week’s debate. “We are 5 percent of the world’s population; we have to trade with the other 95 percent,” she said. “And we need to have smart, fair trade deals.” Trump made no such concession. Meanwhile, Trump’s signature platform is the costly wall on our borders to block Mexico – that country buying all of that wheat. Trump also vowed to push for an economic policy that gives the richest people big tax breaks, which he promises will give them incentive to create jobs. We’ve seen how well that experiment works in Kansas. It doesn’t. Polls show Trump with a hefty lead among Kansans – ahead of Clinton by double digits in some counts. With only six electoral votes, Kansas isn’t likely to sway an election one way or another. Since this state hasn’t gone with a Democrat since Lyndon Johnson in 1964, Kansas is already being called by many observers as a victory for Trump. It shouldn’t be a given. Kansas voters need to think hard about who they are supporting for president and listen closely to the issues in the coming month before Election Day, including global trade. The decision shouldn’t be about being Republican or Democrat, liberal or conservative. Both candidates have shown disregard for what those parties have stood for in the past. Here in farm country, we should be asking which candidate is best for Kansas and its agricultural economy. Don’t let party labels cause us to vote against our own best interests. ‘Fool me once … ’ Those following the presidential race may have noticed a reprise of the all-too-familiar old refrain calling for the nation to venture down economist Arthur Laffer’s road to prosperity yet again. It seems the path to the economic salvation of America being touted is eerily familiar to the “roadmap” Kansas has been following. All that is needed, it goes, are tax cuts sufficient enough to spur business growth and, like magic, a much-needed boost to the economy will follow. Six years into just such a radical curtailment of income tax revenues, our state is facing the possibility of a financial meltdown. The projected “shot of adrenalin” that it was hoped would stimulate the economy failed to materialize, and the consequences of this experiment in “trickle-down economics” are becoming painfully clear. Lest there be any confusion as to a shared vision, in his quest to take up residency in the Oval Office, candidate Donald Trump has called on both our Gov. Sam Brownback and Secretary of State Kris Kobach to join his team of advisers. There is no doubting the importance of the outcome in the presidential election to the nation. There is also little doubt as to who will prevail in winning the majority of votes for the office in the Sunflower State. Taking into consideration past Kathie Moore Email: klm news45@ gmail.com results, the flood of polling data and the consensus of pundits, the final outcome of that contest will be determined by a smattering of toss-up states. So it would appear the die is cast: No surprises, no doubts. Then an old cautionary proverb came to mind. 
As with most bits of folk wisdom passed down for generations, the origin is unclear, but the sentiment is forthright and rings as true today as it has through the years. “Fool me once, shame on you. Fool me twice, shame on me.” So is there just that faint possibility that, having seen the light, the voters of Kansas might shock the nation, sending the urgent warning, “Turn back! There’s danger ahead”? OK, that’s probably not going to happen. But to anyone tempted to sit out this election, I suggest you look back to the primary election in August. Voters who were determined to have a say, to bring about change, had a huge unanticipated impact. Challenged incumbents on both the state and federal level were summarily given notice that their services would no longer be required. As reflected in many opinion polls, the public has become disheartened and hungry for change. The tens of thousands of votes needed to move Kansas from the red to the blue column in the presidential contest are a long shot indeed, but on the state and local level a mere handful have been known to bring about surprising results. There is much at stake for the future of Kansas. Finding the way back to stability will present a challenge for those we charge with making critical choices. Recent decisions in the ongoing battle over Kansas’ onerous voter registration laws have cleared the way for thousands of Kansans previously denied the right to freely participate in the process. Those who have not yet registered have until Oct. 18 to complete their registration. Requests for advance voting by mail can be made starting Oct. 19, with on-site advance voting scheduled to begin Oct. 20. The official Election Day will take place on Tuesday, Nov. 8. Questions should be directed to the county clerk in the new annex at the southeast corner of First and Adams. Wouldn’t it be foolish to miss this opportunity to make a difference? Kathie Moore, rural Hutchinson, is a freelance artist retired from the U.S. Postal Service. COLUMNIST WESTERN FRONT Pick Schlickau again for Reno Schlickau in November. SANDRA COLEMAN Haven We love the enthusiasm of youth but know the value of experience. James Schlickau has both. If you have had the opportunity to visit with him one-on-one about fiscal issues, you will know the deep, quiet confidence he inspires. Although he has actively promoted economic growth policies, he always demonstrates sensitivity to the taxpayer burden. In dealing with zoning ordinances, James was highly receptive to the people directly affected, protecting the agricultural community from impractical urban encroachment. He uses discretion in working with administrators and is down-to-earth and sensible when working with the public. Decisive, ethical and highly organized, James provides valuable efficiency and continuity to the board. He ran no negative ads against his opponent, a testament to his professionalism. Re-elect Commissioner James Vote for Terrell on Nov. 8 I’m writing in support of Patsy Terrell for the state legislature. Patsy will work for strong public schools for our children, advocate for a tax policy that is fair and responsible for all, fight for transparency in state government, invest in our infrastructure including a renewed commitment to our roads and highways, and allow our judges to interpret the law without prejudice. The people of Reno County and the state spoke loudly with their vote last August for candidates who will part ways with the governor’s agenda. 
Electing Patsy to the legislature will put a wonderful exclamation point on the movement for a Kansas that works for all of us, not just a few. Please support the anti-Brownback candidate on Nov. 8 and vote for Patsy Terrell. STEVE SNOOK Hutchinson Editorial Board JOHN D. MONTGOMERY EDITOR-PUBLISHER JASON PROBST OPINION/ WEEKEND EDITOR COMMUNITY COLUMNIST RON SYLVESTER MANAGING EDITOR JEANNY SHARP MARKETING SOLUTIONS DIRECTOR KELTON BROOKS REPORTER VP debate should include issues of faith In every election cycle since Jimmy Carter introduced “born again” into the political lexicon, a politician’s faith has been an object of curiosity and contention. tonight’s debate between the two candidates for vice president. Gov. Mike Pence, R-Ind., is a pro-life evangelical Christian who believes in traditional marriage. Sen. Tim Kaine, D-Va., is a Roman Catholic who takes the opposite view. Most evangelicals believe “All Scripture is Godbreathed and is useful for teaching, rebuking, correcting and training in righteousness ...” (2 Timothy 3:16). Many Roman Catholics accept the authority of Scripture, but also place tradition Cal Thomas Email: tcaedi tors@tribpub. com tonight. Cal Thomas is a columnist for the Tribune Content Agency. The Hutchinson News Tuesday, October 4, 2016 A7 OBITUARIES DIRECTORY Lavon M. Nance Carl Bott Lee Orosco Linda Neal RENO COUNTY MONTEZUMA – Lavon May Nance passed away Oct. 2, 2016. She was born Aug. 26, 1938, in Scott City, to Ted and Alma Rapier. Lavon grew up on a farm south of NaNCe Marienthal with three sisters and two brothers. She graduated from Leoti High School in 1956. She married Deane Nance May 26, 1956, in Clayton, New Mexico. Lavon and Deane made their home in Montezuma where they raised their family. Lavon took great pride in maintaining her beautiful yard and home. She was an avid antique collector but was especially fond of costume jewelry, roosters, and Raggedy Ann and Andy memorabilia. She was a member of the United Methodist Church of Montezuma and was recently attending the Gospel Mennonite Church also of Montezuma. Lavon is survived by: daughter, Jan Nance of Hutchinson; son, Kyle Nance and fiance Ramona Buckner of Montezuma; daughter, Jodi Holmes (Todd) of Montezuma; and grandson, Grant Holmes of Lawrence. In addition, Lavon was surrounded by a large group of friends she considered family, including Jared and Mel Isaac and family, Jackie Boyd, Doug and Sharon Classen and many others. Lavon was preceded in death by her husband Deane; both parents; and a brother Eugene. Graveside Services will be held at 10:30 a.m. Thursday, Oct. 6, 2016, at Evans Cemetery, south of Montezuma. Visitation will be held from noon to 8 p.m. Wednesday, Oct. 5, 2016, at Swaim Funeral Home, Montezuma. Memorials are suggested to the Shriners Children’s Hospital, in care of the funeral home. Thoughts and memories may be shared in the online guest book at. DIGHTON – Carl K. Bott, 89, died Sept. 16, 2016, in Lawrence. Memorial Service will be 11:00 a.m. Saturday Oct. 8, 2016, at the United Methodist Church in Dighton. Memorials are to the United Methodist Church. Complete obituary information will be found at the Garnand Funeral Home website. ULYSSES – Lee Orosco, 47, died Oct. 2, 2016. Survived by: wife, Angela (Pister) Orosco; children, Danielle, Otto and Bailey. Funeral 2 p.m. Thursday at Oasis Church, 712 E Hampton Rd., Ulysses. Visitation 2 p.m. to 8 p.m. Wednesday at Garnand Funeral Home. Complete obituary information on Garnand Funeral Home website. 
Linda Neal

DODGE CITY – Linda Lou Neal, 69, died Oct. 1, 2016, in Dodge City. She was born on Nov. 15, 1946. No services will be held at this time. There is no visitation as cremation has taken place. Memorials to Alzheimer’s awareness, in care of Swaim Funeral Home, 1901 Sixth Ave, Dodge City, KS 67801.

DIRECTORY

RENO COUNTY: Keith Crawford Sr., Hutchinson.

AROUND THE STATE: Donovan Bachman, Hesston; Larry Lee Bergman, Hillsboro; Carl Bott, Dighton; Christine O. ‘Chris’ Keeler, Great Bend; Mildred Laubach, Syracuse; Barbara Manka, Larned; Milton R. Miller, Hesston; Lavon M. Nance, Montezuma; Linda Neal, Dodge City; Lee Orosco, Ulysses; Al Orrison, Dodge City.

Keith Crawford Sr.

Keith Crawford Sr. went to be with the Lord on Sept. 27, 2016, in Hutchinson. He was born July 27, 1944, in Muskogee, Okla. Keith worked as a machinist until he retired. He married his beloved wife, Gloria Crawford, who preceded him in death. He was also preceded in death by a daughter, Tammi Crawford; seven siblings; and a granddaughter. He is survived by three sons, Keith Crawford Jr. (Leticia) of Colorado Springs, Colo., Eric Crawford (Melissa) of Las Vegas, Nev., and Danny Crawford of Hutchinson; siblings, Luther Crawford, Mildred Harness, and Annette Crawford, all of Hutchinson, Ann Robinson of Great Bend; 13 grandchildren; and a great-grandson. The Crawford family would like to thank all of the friends and family for your support and a special thanks to Hospice Homecare of Reno. We would also like to thank the Second Missionary Baptist Church, where Keith was a member. To leave a condolence for Keith’s family please visit.

Barbara Manka

LARNED – Barbara J. Manka, 75, died Oct. 2, 2016. Born Dec. 5, 1940, daughter of John F. and Edna Firestone Arnold. Survivors: sons, James, Robert and Greg; daughter, Susan Vondracek. Memorial Service 10:30 a.m. Friday at Beckwith Mortuary Chapel, with David Arnold presiding. Cremation has taken place. Visit Beckwith Mortuary website for full information.

Donovan Bachman

HESSTON – Donovan Bachman died Oct. 3, 2016. Visitation 4 to 8 p.m., family present 6 to 8 p.m., Tuesday, Oct. 4, at Miller-Ott Funeral Home. Graveside service 12:30 p.m. Wednesday at Restlawn Cemetery, Newton. Memorial service 2 p.m. Wednesday at First Mennonite Church, Newton. Memorials: Schowalter Villa Good Samaritan Fund or Mennonite Central Committee, care of Miller-Ott Funeral Home, Hesston.

Mildred Laubach

SYRACUSE – Mildred “Mickey” (Tatum) Laubach, 90, died Oct. 1, 2016, at the Homestead Health in Garden City. Born Sept. 6, 1926, in Taylor, Miss., daughter of Nathan Brooks and Grace Corinne Tatum. Graveside, Thursday, Oct. 6, 2016, 1:00 A.M. (MDT), Syracuse Cemetery, Syracuse. Visitation, Wednesday, Oct. 5, 2016, 3:00 P.M. to 7:00 P.M. (MDT) at Fellers Funeral Home. Memorials local charity church choice.

Larry Lee Bergman

HILLSBORO – Larry Lee Bergman, 75, died Sept. 30, 2016. He was born June 21, 1941, son of Alvin and Elrena (Redger) Bergman. His wife, Avis Bergman, survives him. Celebration of Life 10 a.m. Saturday at First Baptist Church, Durham. Visitation 6 to 8 p.m. Friday at Jost Funeral Home, Hillsboro. Memorials to American Heart Association.

Milton R. Miller

HESSTON – Milton Miller died Oct. 2, 2016, at Schowalter Villa. Visitation 5:30 to 7:30 p.m., Wednesday, at Schowalter Villa Chapel. Graveside service 9 a.m., Thursday, at Eastlawn Cemetery. Memorial service 10 a.m., Thursday at Schowalter Villa Chapel. Memorials to Mennonite Disaster Service in care of Miller-Ott Funeral Home, Hesston.
Al Orrison DODGE CITY – Al Orrison, 96, died Oct. 1, 2016 at Dodge City. Funeral 10 a.m. Wednesday at Ziegler Funeral Chapel, Dodge City. Burial Kansas Veterans Cemetery, Ft. Dodge. Visitation noon to 8 p.m. Tuesday at Ziegler Funeral Chapel. Memorials: VFW Post 1714 or Ford County Humane Society care of Ziegler Funeral Chapel, 1901 N. 14th, Dodge City, Kansas 67801. Christine O. ‘Chris’ Keeler GREAT BEND – Christine O. “Chris” Keeler, 78, died Sunday, Oct. 2, 2016. She was born July 2, 1938, to Donovan and Mary (Risse) Neeland. Survivors include: husband, Charlie; and four daughters, Sherri, Donna, Lori, and Lisa. Services will be held 10:30 a.m. Wednesday, Oct. 5, at St. Rose Church, Great Bend. Bryant Funeral Home. Leave a message of sympathy at hutchareaobituaries.com Dillons • From Page A1 To use the service, customers order online at. Shoppers can enter orders via computers or smartphones with internet access. At the website, the customer creates a shopping list, selects a pickup time and then places the order. Online shoppers can choose from any of the 40,000 items available instore, including fresh meat and produce. Specially trained Dillons ClickList employees fill the order and store it in temperature-appropriate zones until the customer picks it up in a designated parking space on the east side of the store. While the website is now live for Hutchinson, order pickups will not begin until Thursday. The company remodeled storage space within the store to house the services, including designated refrigerators and freezers, and technology to support the program, said Sheila Lowrie, Kroger community and public relations manager. Lowrie declined to say how much Kroger invested in bringing the service to Hutchinson. “We also hired dedicated ClickList associates who received extensive training, not only on the tech side, but how to shop for the freshest produce,” Lowrie said. “Customers told us, when someone else is shopping for their groceries, they were most concerned about produce and dairy freshness. That is why the extra training. What we’ve heard from customers is our ClickList associates pick better produce than they typically would buy when shopping.” “One example that comes to mind is that most people don’t know how to select a ripe pineapple,” she said. “With the training we give our associates, they have that knowledge. If you’re buying an avocado, depending on whether you’re slicing it or making guacamole, they can select the right one.” When placing an order, the customer must select a one-hour window on the designated date for pickup – for example, 5 to 6 p.m. Thursday. Then, on the day of pickup, they should Justices • From Page A1 their educational outreach effort. Biles and Reno County District Judge Joe McCarville were slated to speak at an assembly at Trinity, with students from nearby Central Christian School coming over for the event. Biles and McCarville will visit Central Christian instead of Trinity. Other members of the high court and local judges are fanning out to speak to students at some other schools in Reno County today. The court did not publicly announce the list of high schools in advance, for security reasons. Hutchinson USD 308 is a plaintiff in a school finance lawsuit pending before the Supreme Court. The justices are not speaking on the USD 308 campus. At Trinity, Principal Joe Hammersmith said he set up the visit but realized afterward that Biles was up for retention this year. 
He notified the court Friday of the decision to opt out of a judicial visit, according to a spokeswoman for the court. “The decision that was made was a local decision,” said Amy Pavlacka, director of communications for the Catholic Diocese of Wichita. She also said it was the practice of diocesan Sandra J. Milburn/The Hutchinson News ClickList is now being offered to customers at the Dillons Marketplace store. The area where people go to pick up the groceries is on the northeast side of the grocery store, near the pharmacy drivethrough. MORE INFORMATION A few other things for customers to know: Coupons electronically linked to a customer’s loyalty card will automatically redeem to reduce the cost of the order. Paper coupons will be deducted at time of pickup. Online prices reflect the price in-store on the day an order is placed. Some prices may change between placement of the order and pick up. The receipt will show the price charged. Customers should bring any concern about a specific price to the associate’s attention. Pharmacy prescriptions are not included in the program. When you place your order online, you can indicate whether you would like to allow substitutions, in case an item ordered is out of stock. For items not available, store associates will offer a substitution to the customer, which the customer may accept or decline. If the out-of-stock item is available in a larger quantity, the order will be upgraded to the larger item, or, if the same brand and item is available in a different package (for example, boxed sugar instead of bagged sugar), that item will be substituted. If the brand is not available, the same type of item from a different brand may be substituted. O O O O arrive at the store any time during that window. Customers arriving at the store for pickup should then call a number posted at several designed ClickList parking spaces outside the store to notify store staff they are waiting. “If someone doesn’t have a cellphone they can make arrangements inside the store, though our teams are observant and usually aware when a customer pulls into the lane,” Lowrie said. After the customer pays for the groceries while remaining in their car – payment must be either by debit or credit card; Dillons does not accept government assistance, cash or checks for ClickList – the worker will load up the groceries in the car. There is a $4.95 service charge on every ClickList order, although, as an introductory offer, Dillons will waive the service charge on each customer’s first three orders. If not picked up at the scheduled time, store employees will give the customer a reminder call, Lowrie said. If pickup cannot then be rescheduled rather promptly, the store will restock the groceries in the store and the customer will have to redo the order. The store will not charge a restocking fee on the cancelled order, Lowrie said. There is no limit on the TIPS FOR COURT: COME EARLY, TRAVEL LIGHT BY MARY CLARKIN The Hutchinson News mclarkin@hutchnews.com Those planning to watch the Kansas Supreme Court in action Tuesday night at Hutchinson Community College’s Stringer Fine Arts Center, 600 E. 11th Ave., are advised to arrive early and leave big bags and electronics at home. There will be security screening at Stringer. The court will start its session at 6:30 p.m., but the audience is advised to arrive before 6 p.m. Court staff gave these tips for those planning to witness the first-ever Supreme Court proceedings in Hutchinson. 
To get through security screening as quickly as possible: Do not bring large bags, large purses, backpacks, computer cases or briefcases. Do not bring knives, pepper spray, firearms or weapons. Do not bring electronic devices like laptop computers, handheld games, personal digital assistants or tablets. If you have to carry a cellphone, it must be turned off or its ringer silenced, and it must be stored out of sight. Do not bring food or drink. Court and college staff will not be responsible for property left outside the auditorium. Also, showing up early is the best way to guarantee a seat in Stringer’s B.J. Warner Recital Hall, which has seating for 422 people. When the Supreme Court had an evening session in Hays, nearly 700 people attended. After the session concludes, the justices will greet the public in an informal reception at Stringer. • • • • ministries not to appear to be endorsing any candidate during an election season. Five of the seven members of the state’s Supreme Court are up for retention this year: Chief Justice Lawton Nuss and Associate Justices Biles, Carol Beier, Marla Luckert and Caleb Stegall. There is a campaign underway urging voters to vote “no” on the retention of all but Stegall. A majority of the seven-member court, including Biles, was appointed by former size of an order, large or small, though the same service charge will attach. Orders placed before midnight are available for pickup the next day, again, at a time chosen by the customer. “One thing that’s really handy is you can start an order and then continue to add to it,” Lowrie said. A customer can fill the order, log out, and then return later in the day to change it, or even the next day if the order is not scheduled for pickup until the following day. Curbside pickup is available seven days a week, between 8 a.m. and 9 p.m. “We’re hearing from our customers they love it for the time saving and convenience,” Lowrie said. “We’re seeing customers who are not only busy parents juggling schedules for kids, but we’re seeing more seniors use it, so they don’t have to walk through the store.” “Maybe it’s someone shopping for an elderly mother on a weekly basis,” she said. “They can order online, pick up the groceries and spend more time with the elderly parents versus spending that time shopping. More business owners are using it, to buy supplies for breakrooms or potluck. They can shop online and send an employee to pick the order up, rather than pay them to shop.” Besides Hutchinson, the service is now available at Dillons Marketplace stores in Wichita, Andover and Derby, as well as in Topeka, Lawrence and at Baker’s in Omaha, Nebraska. Gov. Kathleen Sebelius, a Democrat. Nuss and Luckert were appointed by former Gov. Bill Graves, a Republican. Stegall, the most recent appointee, was named to the court by Gov. Sam Brownback. Kansans for Life wants all but Stegall ousted before the high court considers dismemberment abortion legislation. Also, all four justices under fire this year voted to overturn the death sentences for convicted murderers Reginald and Jonathan Carr, based on procedural factors. The U.S. Supreme Court subsequently ruled the Kansas Supreme Court was wrong in its ruling. The two members of the Kansas Supreme Court who are not on the ballot this fall – Lee Johnson and Eric Rosen – are Sebelius appointees who survived a vote-no campaign in 2014. The margin was close. Johnson and Rosen each received less than 53 percent of the vote. 
A8 Tuesday, October 4, 2016 TODAY The Hutchinson News WEDNESDAY THURSDAY FRIDAY SATURDAY 79/52 80/60 76/45 63/41 69/48 Chance of showers and storms Sunny Chance of storms Chance of storms Sunny KANSAS Today: Showers and thunderstorms likely, mainly after 2 p.m. Mostly cloudy. Tonight: Showers and thunderstorms likely, mainly before 8 p.m. Wednesday: Sunny. COLORADO Today: Mostly sunny. Tonight: Partly cloudy, with a low around 39. Wednesday: Mostly sunny, with a high near 68. Winds could gust as high as 16 mph. Denver Kansas 67 80 Salina Kansas City 70 Dodge City Colorado 77 79 Hutchinson 80 St. Louis Pittsburg Yesterday Hi Lo Prc Atlanta 85 61 Baltimore 76 61 Boston 68 55 Charlotte, N.C. 83 60 Chicago 67 59 Cincinnati 74 52 Cleveland 71 52 Dallas-Fort Worth 87 63 Denver 81 56 Detroit 66 13 Honolulu 85 74 Houston 87 63 Las Vegas 76 60 Los Angeles 76 59 Mpls-St. Paul 76 53 New Orleans 88 68 New York City 72 60 Orlando 88 75 .13 Philadelphia 76 62 Phoenix 86 67 Pittsburgh 70 50 .04 St Louis 76 64 San Diego 75 64 San Francisco 65 54 Seattle 60 49 Washington, D.C. 78 64 Kansas temperatures Chanute Coffeyville Concordia Dodge City Elkhart Emporia Garden City Goodland Hi Lo Prec. 79 85 82 87 88 81 90 87 57 53 55 60 57 54 60 55 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Oklahoma City 83 Oklahoma Great Bend Hays Hill City Hutchinson Lawrence Liberal Manhattan Med. Lodge MISSOURI Today: Partly sunny. Tonight: Showers and thunderstorms. Low around 62. Wednesday: Mostly sunny. High near 80. Chance of precipitation is 60 percent. NA 0.03 T 0.00 0.00 0.00 0.00 0.00 National forecast Forecast highs for Tuesday, Oct. 4 Olathe Parsons Pratt Russell Salina Topeka Wichita Winfield Sunny Pt. Cloudy Fronts Cold -0s 0s 10s 20s 30s 40s High: 98 at Rio Grande Village, Texas Low: 21 at Burns, Ore. and Truckee, Calif. m - indicates missing information. Hutchinson almanac Showers Rain 50s 60s T-storms Pressure Warm Stationary 70s Cloudy 80s Low High 90s 100s 110s Hi Lo Prec. 78 84 79 82 84 80 83 82 56 52 57 61 60 55 60 60 0.00 0.00 0.00 0.00 T 0.00 0.00 0.00 Flurries Snow Ice Hutchinson precipitation SUNSET TONIGHT: 7:11 p.m. Daily rainfall (Yesterday 6:30 p.m.) 0.00” Normal daily rainfall 0.08” Rainfall month to date 0.00” Normal for the month 0.26” Year to date 33.52” Normal for the year 24.36” Record high for this date 97 IN 1954 Lo Prec. NA NA 75 59 84 61 83 59 81 53 88 61 82 56 82 58 Tomorrow Hi Lo Otlk 83 64 Clr 68 55 Cldy 64 49 Clr 75 59 PCldy 78 61 Cldy 80 61 Cldy 77 57 Cldy 91 71 Cldy 67 38 PCldy 77 58 Cldy 88 71 Cldy 90 74 Cldy 81 59 Clr 79 62 Clr 68 56 PCldy 88 73 PCldy 69 53 PCldy 88 74 Rain 71 57 PCldy 88 63 Clr 73 53 Clr 82 64 Cldy 74 65 PCldy 69 54 Clr 62 51 Rain 69 58 Cldy National temperature extremes Yesterday as of 6:30 p.m. Hi Otlk PCldy Cldy Cldy PCldy Cldy PCldy Clr Cldy PCldy PCldy Cldy Cldy Clr Clr Cldy PCldy Cldy Rain Cldy Clr Clr PCldy PCldy PCldy Rain PCldy -10s 82 Missouri OKLAHOMA Today: Partly sunny, with gusts as high as 33 mph. Tonight: Mostly cloudy, with a low around 65. Wednesday: Sunny, with a high near 88. Gusts as high as 23 mph. Today Hi Lo 84 65 71 55 61 55 80 62 73 57 78 55 75 54 88 68 65 43 72 56 86 73 88 68 81 57 77 61 73 58 88 67 68 58 88 74 71 57 85 62 75 51 81 60 73 64 66 57 61 50 73 59 Record low for this date 33 IN 2014 Moon phases SUNRISE TOMORROW: 7:31 a.m. First Full Last New Oct. 8 Oct. 15 Oct. 22 Oct. 30 This photo was taken by Tami Zitterkopf, Hutchinson. Submit your photo at hutchnews.com. Note: Totals provided by the National Weather Service. 
NWS adjusts precipitation data regularly, meaning some totals can change significantly from day to day. N. Carolina governor declares emergency ahead of hurricane THE ASSOCIATED PRESS.. Lindsey Bauman/The Hutchinson News Service dog Indigo sits. Dog • From Page A1 The local vet referred Indigo to Kansas State University’s School of Veterinary Medicine. They specialize in caring for dogs, like Indigo, trained through the Kansas Specialty Dog Service. They are estimating the surgery, plus travel expenses and a four-day stay in Manhattan, will cost $5,000. Lucas relies on disability payments and doesn’t have the money for the surgery. A friend has set up an account at www. gofundme.com/2rlbt44. Lucas, who is legally blind, has grown to trust Indigo to guide him where he needs to go.The two bonded immediately and Indigo has given the outgoing man the freedom he lost because of debilitating macular degeneration. Over the years, there have been daily walks in Carey Park. Lucas and his wife, Georgia, still marvel about the time Indigo saved him from a flooded path. Or the time when a bicyclist was coming upon them just as Lucas began to walk in front of it. Indigo jerked loose and stood in front of Lucas to stop him from moving. “I never saw him like that before,” Lucas said. The two have become inseparable. “I go to the bathroom and he comes right in,” Lucas said. Now Indigo needs help and Lucas is appreciative of the people who have already donated $510 through GoFundMe. They plan to use that money to set up an appointment at the vet school and have an initial visit. They will have to wait until they have the rest of the money to have the surgery. Indigo will be in Manhattan for four days and Lucas doesn’t want to leave his side. There will be a six- to eight-week recovery period back in Hutchinson. “Yesterday he was limping real bad,” Lucas said, as his faithful companion sat by his feet with eyes that appeared filled with pain. For now, Lucas tries to ease the pain by offering the dog anti-inflammatory pills and avoiding long walks. KANSAS LEGISLATIVE DEBATE WEDNESDAY, OCT. 5 6 p.m. - 7:30 p.m. Hutchinson High School Career and Technical Education Academy 800 15th Circle Register To Win 2 FREE Tickets To Hutch Brewfest* Doors open at 5:30 p.m. Questions may be sent to debate@hutchnews.com Deadline - Friday, Sept. 30 Cosponsored by: • The Hutchinson News • Hutchinson/Reno County Chamber of Commerce DISTRICT 102 *Must be 21 and over to enter Go to: Rep. Jan Pauls Republican Hutchinson Patsy Terrell Democrat Hutchinson DISTRICT 104 Rep. Steve Becker Republican Buhler Betty Taylor Democrat Hutchinson B SPORTS THE HUTCHINSON NEWS TUESDAY, OCTOBER 4, 2016 After 2 Series trips, KC has early offseason 10 for10 After a scramble to finish the regular season, the playoffs are finally on deck. With 10 teams left, here are 10 questions to carry fans through October: 1 CAN THE CUBS DO IT? Joe Maddon’s crew romped to 103 wins, and now the North Side of Chicago is set up for either the biggest party of all time or the most brutal disappointment ever. Nothing in between for Royals spent the first day pondering how a season so full of promise fell so short QUESTIONS FOR PLAYOFF TEAMS BY DAVE SKRETTA AP Sports Writer. 2 KANSAS CITY, Mo. – For two decades, finishing at .500 would have given the Royals reason to celebrate. Two trips to the World Series changes perspective.. “We expected to perform much better WILL KERSHAW COME UP ACES? 
Dodgers See PLAYOFFS / B3 BLUE DRAGONS MOW OVER JV RIVALS AT See ROYALS / B3 Late-season grind poses a volleyball test for team BY KELTON BROOKS The Hutchinson News kbrooks@hutchnews.com Someone connect a heart monitor to the Hutchinson Community College volleyball team. With 10 matches in 11 days against at least five nationally ranked teams and needing to win out in the Jayhawk Conference NOTES to be crowned conference champions, the Blue Dragons’ hearts will either ache or swell with joy. “Everyone we face now becomes critical,” said Hutchinson coach Patrick Hall. “We have to win out in the conference. Our fate is in our hands. I know it’s cliche to say, but we have to take it one match at a time.” “We have to take every team seriously. We can’t just look forward to playing nationally ranked teams. We need to come back to conference games and be just as focused.” The Blue Dragons own the tiebreak against Seward County and have another match against Colby in the last game of the season. The Blue Dragons outlasted Colby in a 3-2 match win to start the season. Hutchinson is currently on a twomatch win streak after dropping its previous four to stiff competition. Despite a 3-0 loss to Indian Hills, the Blue Dragons stayed with the highly ranked team all match long, losing 25-21, 25-19 and 25-18. Playing top-level competition has been beneficial for Hutchinson. The young team has played against bigger teams, allowing it to prepare for a different style of play than it would within the Jayhawk. But Hall believes the team has much more work to do, stating, “The learning process can be very complex.” “Sometimes that application falters, and that has an effect on a young team, and that manifests itself as bad chemistry when, in actuality, players are not performing to their capability,” Hall said. “When you have a lot of youth, they don’t know how to handle ups and downs like an experienced team HCC Photos by Travis Morisse/The Hutchinson News Hutchinson Community College’s Morgan Wheeler rushes for yardage as Kansas Wesleyan’s Corey Scoma attemps a tackle Monday at Gowans Stadium. Wesleyan’s walloped BY KELTON BROOKS roster and Kansas Wesleyan JV has a generous three. Kansas Wesleyan was an 0-2 NAIA team heading into the game A David and Goliath reenagainst the previous ranked actment was Blue Dragons. not going to All the tools HCC 64 for a possible happen Monday night at Gowans KWU JV 0 Goliath takeStadium. down. Even after Nah. playing two games in three A 38-point first quarter and games. a 64-0 shutout put an end to However, the teams did have that tale. The Saturday game a few attributes similar to the against Iowa Central ended storied warriors. Hutchinson with a final score of 38-0. Community College has eight players over 300 pounds on its See HCC / B8 The Hutchinson News kbrooks@hutchnews.com HCC’s Guy Victoria sacks Cody Springsguth in the first quarter. See NOTES / B8 Chiefs head into week off reeking from 43-14 drubbing from Pittsburgh Steelers BY DAVE SKRETTA AP Sports Writer KANSAS CITY, Mo. – In the aftermath of a lopsided loss to the Pittsburgh Steelers, in primetime no less, Chiefs quarterback Alex Smith acknowledged that “you own this right now, you wear it and it stinks.” The Chiefs will smell for quite a while. Their embarrassing 43-14 loss on Sunday night left them 2-2 through their first four games, and with more questions than answers heading into a week off. 
Their offense was inept, Don Wright/Associated Press their defense was fileted by Ben Pittsburgh Steelers defensive end Cameron Roethlisberger and Co., and even Heyward (97) sacks Kansas City Chiefs their usually solid special teams quarterback Alex Smith (11) Sunday in Pittsburgh. were a stumbling mess. “You own this right now, you wear it, and it stinks.” Alex Smith, Chiefs quarterback “You’d love to go into it on a better note,” Smith said of the bye, “but who knows? Whether we like it or not, it���s here. We have to use it. Get healthy, regroup and bounce back from this.” Here are the grimy details from the Chiefs’ worst loss under Andy Reid: They lost two turnovers, both of which turned into touchdowns. One was coughed up by suddenly fumble-prone Spencer Ware and the other was an interception on a screen pass flawed from the start. Smith was sacked four times • • by a Steelers defense that had just one through the first three games. Dustin Colquitt shanked a punt that gave the Steelers great field position, and Tyreek Hill’s punt return touchdown was wiped out by an illegal blocking penalty by Demetrius Harris. • • suspension. “You learn a lot about yourself when you get your butt kicked,” Chiefs linebacker Derrick Johnson said. “The best thing about it is it counts as one game. It does not put a damper on the season, and it does not set us into panic mode. Do we need to figure it out? Yes we need to figure it out.” The onus on that falls on Reid’s shoulders. Long recognized as an offensive mastermind, Reid has stumbled through most of the first four weeks. The Chiefs needed a frantic comeback to beat the Chargers See CHIEFS / B3 B2 Tuesday, October 4, 2016 TV-RADIO-FYI TELEVISION MLB BASEBALL 7 p.m. TBS — AL Wild Card Game, Baltimore at Toronto NBA BASKETBALL 7 p.m. ESPN — Preseason, New York at Houston 9:30 p.m. ESPN — Preseason, L.A. Clippers at Golden State NHL HOCKEY 6 p.m. NBCSN — Preseason, Carolina vs. Buffalo, at Marquette, Mich. SOCCER 7:55 a.m. FS2 — Women, FIFA U-17 World Cup, United States vs. Ghana, at Amman, Jordan 11 a.m. FS2 — Women, FIFA U-17 World Cup, Brazil vs. North Korea, at Zarqa, Jordan 3 p.m. FS2 — Women, FIFA U-17 World Cup, Nigeria vs. England, at Zarqa, Jordan (same-day tape) 5 p.m. FS2 — Women, FIFA U-17 World Cup, Paraguay vs. Japan, at Amman, Jordan (same-day tape) WNBA BASKETBALL 7 p.m. ESPN2 — Playoffs, Semiinals (best-of-5 series), Game 4, Los Angeles at Chicago CHANNEL FINDER Network Cox DirecTV Dish U-Verse ABC 10 10 10 10 beIN 292 620 392 662 BTN 273-75 610 392 650 CBS 12 12 12 12 CBSSN 260 221 158 643 ESPN 32 206 140 602 ESPN2 33 209 144 606 ESPNClassic 246 614 NA 603 ESPNNews 245 207 142 604 ESPNU 244 208 141 605 FS1 60 219 150 652 FSKC 34 671 418 750 Fox 4 24 24 24 KSCW 5 33 5 5 Longhorn 285 677 407 611 NBC 3 3 3 3 NBCSN 78 220 159 640 PAC-12 247 NA 406 759 SEC 276-77 611 408 607 FYI High school golf Hutchinson at Derby, 8:15 a.m. Buhler at Hesston, 10 a.m. High school soccer Hutchinson at Maize, 6:30 p.m. Mulvane at Buhler, 6:30 p.m. High school volleyball Buhler at Circle, 5 p.m. Hutchinson at Salina South, 5 p.m. Nickerson at Hillsboro, 5 p.m. Trinity Catholic at Bennington, 5 p.m. Fairield, Cunningham at Central Christian, 6:15 p.m. Pretty Prairie at Attica, 6:15 p.m. Junior college golf Hutchinson at Iowa Western AUTO RACING NASCAR SPRINT CUP-CITIZEN SOLDIER 400 RESULTS Sunday At Dover International Speedway Dover, Del. Lap length: 1 mile (Start position in parentheses) 1. (2) Martin Truex Jr, Toyota, 400 laps, 0 rating, 45 points. 2. 
(3) Kyle Busch, Toyota, 400, 0, 40. 3. (9) Chase Elliott, Chevrolet, 400, 0, 38. 4. (1) Brad Keselowski, Ford, 400, 0, 38. 5. (4) Matt Kenseth, Toyota, 400, 0, 36. 6. (5) Joey Logano, Ford, 400, 0, 35. 7. (8) Jimmie Johnson, Chevrolet, 399, 0, 35. 8. (14) Austin Dillon, Chevrolet, 399, 0, 33. 9. (7) Denny Hamlin, Toyota, 399, 0, 32. 10. (18) Jeff Gordon, Chevrolet, 399, 0, 32. 11. (22) Ricky Stenhouse Jr, Ford, 399, 0, 30. 12. (17) Kasey Kahne, Chevrolet, 399, 0, 29. 13. (15) Tony Stewart, Chevrolet, 399, 0, 28. 14. (10) Carl Edwards, Toyota, 399, 0, 27. 15. (11) Kurt Busch, Chevrolet, 398, 0, 26. 16. (27) Aric Almirola, Ford, 398, 0, 25. 17. (19) Ryan Newman, Chevrolet, 397, 0, 24. 18. (25) Greg Bifle, Ford, 396, 0, 24. 19. (21) AJ Allmendinger, Chevrolet, 396, 0, 22. 20. (23) Trevor Bayne, Ford, 395, 0, 21. 21. (34) Brian Scott, Ford, 395, 0, 20. 22. (26) Paul Menard, Chevrolet, 395, 0, 19. 23. (16) Chris Buescher, Ford, 394, 0, 18. 24. (28) Clint Bowyer, Chevrolet, 394, 0, 17. 25. (12) Kyle Larson, Chevrolet, 394, 0, 16. 26. (29) Casey Mears, Chevrolet, 394, 0, 15. 27. (35) Matt DiBenedetto, Toyota, 393, 0, 14. 28. (24) Danica Patrick, Chevrolet, 393, 0, 13. 29. (31) Landon Cassill, Ford, 392, 0, 12. 30. (32) David Ragan, Toyota, 392, 0, 11. 31. (33) Regan Smith, Chevrolet, 391, 0, 10. 32. (30) Ty Dillon, Chevrolet, 390, 0, 0. 33. (37) Michael Annett, Chevrolet, 387, 0, 8. 34. (36) Timmy Hill, Ford, 386, 0, 0. 35. (39) Reed Sorenson, Chevrolet, 385, 0, 6. 36. (38) Jeffrey Earnhardt, Ford, 384, 0, 5. 37. (6) Kevin Harvick, Chevrolet, 354, 0, 4. 38. (20) Ryan Blaney, Ford, 281, 0, 3. 39. (40) Josh Wise, Chevrolet, engine, 196, 0, 2. 40. (13) Jamie McMurray, Chevrolet, engine, 192, 0, 1. Race Statistics Average Speed of Race Winner: 130.967 mph. Time of Race: 0 hours, 0 minutes, 0 seconds. Margin of Victory: seconds. Caution Flags: 4 for 22 laps. Lead Changes: 14 among 6 drivers. Lap Leaders: B.Keselowski 1-5; M.Truex 6-20; Ky.Busch 21-35; M.Truex 36; Ky.Busch 37-105; M.Truex 106; Ky.Busch 107-124; M.Truex 125-188; J.Johnson 189-277; B.Keselowski 278-279; G.Bifle 280-286; M.Truex 287-365; J.Johnson 366; J.Gordon 367-373; M.Truex 374-400 Leaders Summary (Driver, Times Led, Laps Led): M.Truex, 6 times for 181 laps; Ky.Busch, 3 times for 99 laps; J.Johnson, 2 times for 88 laps; G.Bifle, 1 time for 6 laps; J.Gordon, 1 time for 6 laps; B.Keselowski, 2 times for 5 laps. Wins: Ky.Busch, 4; B.Keselowski, 4; D.Hamlin, 3; K.Harvick, 3; M.Truex, 3; C.Edwards, 2; J.Johnson, 2; M.Kenseth, 2; C.Buescher, 1; Ku.Busch, 1; K.Larson, 1; J.Logano, 1; T.Stewart, 1. Top 16 in Points: 1. M.Truex, 3000; 2. K.Harvick, 3000; 3. Ky.Busch, 3000; 4. M.Kenseth, 3000; 5. J.Logano, 3000; 6. C.Elliott, 3000; 7. B.Keselowski, 3000. 8. Ku.Busch, 3000; 9. D.Hamlin, 3000; 10. C.Edwards, 3000; 11. J.Johnson, 3000; 12. A.Dillon, 3000; 13. T.Stewart, 2074; 14. K.Larson, 2073; 15. J.McMurray, 2053; 16. C.Buescher, 2045. NASCAR Driver Rating Formula A maximum of 150 points can be attained in a race. The formula combines the following categories: Wins, Finishes, Top-15 Finishes, Average Running Position While on Lead Lap, Average Speed Under Green, Fastest Lap, Led Most Laps, Lead-Lap Finish. 
BASEBALL American League East W L Pct GB x-Boston 93 69 .574 — y-Toronto 89 73 .549 4 y-Baltimore 89 73 .549 4 New York 84 78 .519 9 Tampa Bay 68 94 .420 25 Central W L Pct GB x-Cleveland 94 67 .584 — Detroit 86 75 .534 8 Kansas City 81 81 .500 13½ Chicago 78 84 .481 16½ Minnesota 59 103 .364 35½ West W L Pct GB x-Texas 95 67 .586 — Seattle 86 76 .531 9 Houston 84 78 .519 11 Los Angeles 74 88 .457 21 Oakland 69 93 .426 26 x-clinched division y-clinched wild card Saturday’s Games N.Y. Yankees 7, Baltimore 3 Cleveland 6, Kansas City 3 Atlanta 5, Detroit 3 Minnesota 6, Chicago White Sox 0 Toronto 4, Boston 3 Tampa Bay 4, Texas 1 Houston 3, L.A. Angels 0 Oakland 9, Seattle 8, 10 innings Sunday’s Games Baltimore 5, N.Y. Yankees 2 L.A. Angels 8, Houston 1 Tampa Bay 6, Texas 4, 10 innings Toronto 2, Boston 1 Atlanta 1, Detroit 0 Minnesota 6, Chicago White Sox 3 Oakland 3, Seattle 2 Cleveland 3, Kansas City 2 TODAY’S MAJOR LEAGUE LEADERS AMERICAN LEAGUE BATTING–Altuve, Houston, .338; Betts, Boston, .318; Pedroia, Boston, .318; Cabrera, Detroit, .316; Trout, Los Angeles, .315; Ortiz, Boston, .315; Ramirez, Cleveland, .312; Martinez, Detroit, .307; Escobar, Los Angeles, .304; Andrus, Texas, .302. RUNS–Trout, Los Angeles, 123; Donaldson, Toronto, 122; Betts, Boston, 122; Kinsler, Detroit, 117; Springer, Houston, 116; Bogaerts, Boston, 115; The Hutchinson News SCOREBOARD Altuve, Houston, 108; Cano, Seattle, 107; Desmond, Texas, 107; Pedroia, Boston, 105; Machado, Baltimore, 105. RBI–Encarnacion, Toronto, 127; Ortiz, Boston, 127; Pujols, Los Angeles, 119; Betts, Boston, 113; Ramirez, Boston, 111; Cabrera, Detroit, 108; Trumbo, Baltimore, 108; Cruz, Seattle, 105; Beltre, Texas, 104; Hosmer, Kansas City, 104. HITS–Altuve, Houston, 216; Betts, Boston, 214; Pedroia, Boston, 201; Cano, Seattle, 195; Bogaerts, Boston, 192; Cabrera, Detroit, 188; Machado, Baltimore, 188; Abreu, Chicago, 183; Lindor, Cleveland, 182; Desmond, Texas, 178; Kinsler, Detroit, 178. DOUBLES–Ortiz, Boston, 48; Ramirez, Cleveland, 46; Cabrera, Chicago, 42; Betts, Boston, 42; Altuve, Houston, 42; Longoria, Tampa Bay, 41; Kipnis, Cleveland, 41; Machado, Baltimore, 40; Schoop, Baltimore, 38; Correa, Houston, 36; Pedroia, Boston, 36; Dickerson, Tampa Bay, 36; Seager, Seattle, 36. TRIPLES–Eaton, Chicago, 9; Dyson, Kansas City, 8; Bradley Jr., Boston, 7; Andrus, Texas, 7; Buxton, Minnesota, 6; Bourn, Baltimore, 6; Miller, Tampa Bay, 6; Escobar, Kansas City, 6; Anderson, Chicago, 6; Gardner, New York, 6. HOME RUNS–Trumbo, Baltimore, 47; Cruz, Seattle, 43; Encarnacion, Toronto, 42; Davis, Oakland, 42; Dozier, Minnesota, 42; Frazier, Chicago, 40; Cano, Seattle, 39; Ortiz, Boston, 38; Cabrera, Detroit, 38; Davis, Baltimore, 38. STOLEN BASES–Davis, Cleveland, 43; Altuve, Houston, 30; Trout, Los Angeles, 30; Dyson, Kansas City, 30; Upton Jr., Toronto, 27; Betts, Boston, 26; Andrus, Texas, 24; Martin, Seattle, 24; Ramirez, Cleveland, 22; Kiermaier, Tampa Bay, 21; Desmond, Texas, 21. PITCHING–Porcello, Boston, 22-4; Happ, Toronto, 20-4; Kluber, Cleveland, 18-9; Price, Boston, 17-9; Sale, Chicago, 17-10; Iwakuma, Seattle, 16-12; Tillman, Baltimore, 16-6; Verlander, Detroit, 16-9; Sanchez, Toronto, 15-2; Hamels, Texas, 15-5. ERA–Sanchez, Toronto, 3.00; Verlander, Detroit, 3.04; Tanaka, New York, 3.07; Kluber, Cleveland, 3.14; Porcello, Boston, 3.15; Happ, Toronto, 3.18; Quintana, Chicago, 3.20; Hamels, Texas, 3.32; Pomeranz, Boston, 3.32; Sale, Chicago, 3.34. 
STRIKEOUTS–Verlander, Detroit, 254; Archer, Tampa Bay, 233; Sale, Chicago, 233; Price, Boston, 228; Kluber, Cleveland, 227; Pineda, New York, 207; Hamels, Texas, 200; Porcello, Boston, 189; Duffy, Kansas City, 188; Pomeranz, Boston, 186. SAVES–Britton, Baltimore, 47; Rodriguez, Detroit, 44; Dyson, Texas, 38; Robertson, Chicago, 37; Colome, Tampa Bay, 37; Osuna, Toronto, 36; Allen, Cleveland, 32; Kimbrel, Boston, 31; Madson, Oakland, 30; Davis, Kansas City, 27; Jeffress, Texas, 27. BASEBALL’S TOP TEN AMERICAN LEAGUE G AB R H Pct. Altuve Hou 161 640 108 216 .338 Betts Bos 158 672 122 214 .318 Pedroia Bos 154 633 105 201 .318 MiCabrera Det 158 595 92 188 .316 Trout LAA 159 549 123 173 .315 Ortiz Bos 151 537 79 169 .315 JoRamirez Cle 152 565 84 176 .312 JMartinez Det 120 460 69 141 .307 YEscobar LAA 132 517 68 157 .304 Andrus Tex 147 506 75 153 .302 Home Runs Trumbo, Baltimore, 47; NCruz, Seattle, 43; KDavis, Oakland, 42; BDozier, Minnesota, 42; Encarnacion, Toronto, 42; Frazier, Chicago, 40; Cano, Seattle, 39; MiCabrera, Detroit, 38; Ortiz, Boston, 38; CDavis, Baltimore, 38. Runs Batted In Ortiz, Boston, 127; Encarnacion, Toronto, 127; Pujols, Los Angeles, 119; Betts, Boston, 113; HRamirez, Boston, 111; MiCabrera, Detroit, 108; Trumbo, Baltimore, 108; NCruz, Seattle, 105; Beltre, Texas, 104; Hosmer, Kansas City, 104. Pitching Porcello, Boston, 22-4; Happ, Toronto, 20-4; Kluber, Cleveland, 18-9; Price, Boston, 17-9; Sale, Chicago, 17-10; Tillman, Baltimore, 16-6; Verlander, Detroit, 16-9; Iwakuma, Seattle, 16-12; AaSanchez, Toronto, 15-2; Hamels, Texas, 15-5. National League East W L Pct GB x-Washington 95 67 .586 — y-New York 87 75 .537 8 Miami 79 82 .491 15½ Philadelphia 71 91 .438 24 Atlanta 68 93 .422 26½ Central W L Pct GB x-Chicago 103 58 .640 — St. Louis 86 76 .531 17½ Pittsburgh 78 83 .484 25 Milwaukee 73 89 .451 30½ Cincinnati 68 94 .420 35½ West W L Pct GB x-Los Angeles 91 71 .562 — y-San Francisco 87 75 .537 4 Colorado 75 87 .463 16 Arizona 69 93 .426 22 San Diego 68 94 .420 23 x-clinched division y-clinched wild card Saturday’s Games N.Y. Mets 5, Philadelphia 3 St. Louis 4, Pittsburgh 3 San Francisco 3, L.A. Dodgers 0 Washington 2, Miami 1 Cincinnati 7, Chicago Cubs 4 Atlanta 5, Detroit 3 Arizona 9, San Diego 5 Milwaukee 4, Colorado 3, 10 innings Sunday’s Games Washington 10, Miami 7 Philadelphia 5, N.Y. Mets 2 San Francisco 7, L.A. Dodgers 1 Arizona 3, San Diego 2 Atlanta 1, Detroit 0 Chicago Cubs 7, Cincinnati 4 Milwaukee 6, Colorado 4, 10 innings St. Louis 10, Pittsburgh 4 TODAY’S MAJOR LEAGUE LEADERS NATIONAL LEAGUE BATTING–LeMahieu, Colorado, .348; Murphy, Washington, .347; Votto, Cincinnati, .326; Blackmon, Colorado, .324; Segura, Arizona, .319; Marte, Pittsburgh, .311; Seager, Los Angeles, .308; Molina, St. Louis, .307; Ramos, Washington, .307; Prado, Miami, .305; Braun, Milwaukee, .305. RUNS–Bryant, Chicago, 121; Arenado, Colorado, 116; Blackmon, Colorado, 111; Goldschmidt, Arizona, 106; Seager, Los Angeles, 105; LeMahieu, Colorado, 104; Segura, Arizona, 102; Freeman, Atlanta, 102; Votto, Cincinnati, 101; Myers, San Diego, 99. RBI–Arenado, Colorado, 133; Rizzo, Chicago, 109; Kemp, Atlanta, 108; Murphy, Washington, 104; Duvall, Cincinnati, 103; Bryant, Chicago, 102; Gonzalez, Colorado, 100; Bruce, New York, 99; Yelich, Miami, 98; Votto, Cincinnati, 97. 
HITS–Segura, Arizona, 203; Seager, Los Angeles, 193; LeMahieu, Colorado, 192; Blackmon, Colorado, 187; Murphy, Washington, 184; Prado, Miami, 183; Arenado, Colorado, 182; Votto, Cincinnati, 181; Freeman, Atlanta, 178; Bryant, Chicago, 176. DOUBLES–Murphy, Washington, 47; Rizzo, Chicago, 43; Freeman, Atlanta, 43; Gonzalez, Colorado, 42; Segura, Arizona, 41; Belt, San Francisco, 41; Seager, Los Angeles, 40; Kemp, Atlanta, 39; Yelich, Miami, 38; Rendon, Washington, 38; Molina, St. Louis, 38; Markakis, Atlanta, 38; Villar, Milwaukee, 38. TRIPLES–Crawford, San Francisco, 11; Owings, Arizona, 11; Hernandez, Philadelphia, 11; Lamb, Arizona, 9; Turner, Washington, 8; LeMahieu, Colorado, 8; Belt, San Francisco, 8; Segura, Arizona, 7; Panik, San Francisco, 7; Wong, St. Louis, 7; Revere, Washington, 7; Fowler, Chicago, 7; Inciarte, Atlanta, 7; Harrison, Pittsburgh, 7; Bourjos, Philadelphia, 7. HOME RUNS–Arenado, Colorado, 41; Carter, Milwaukee, 41; Bryant, Chicago, 39; Kemp, Atlanta, 35; Freeman, Atlanta, 34; Bruce, New York, 33; Duvall, Cincinnati, 33; Rizzo, Chicago, 32; Tomas, Arizona, 31; Cespedes, New York, 31. STOLEN BASES–Villar, Milwaukee, 62; Hamilton, Cincinnati, 58; Marte, Pittsburgh, 47; Nunez, San Francisco, 40; Perez, Milwaukee, 34; Turner, Washington, 33; Segura, Arizona, 33; Goldschmidt, Arizona, 32; Gordon, Miami, 30; Jankowski, San Diego, 30. PITCHING–Scherzer, Washington, 20-7; Lester, Chicago, 19-5; Arrieta, Chicago, 18-8; Cueto, San Francisco, 18-5; Roark, Washington, 16-10; Hendricks, Chicago, 16-8; Martinez, St. Louis, 16-9; Maeda, Los Angeles, 16-11; Fernandez, Miami, 16-8; Hammel, Chicago, 15-10. ERA–Hendricks, Chicago, 2.13; Lester, Chicago, 2.44; Syndergaard, New York, 2.60; Bumgarner, San Francisco, 2.74; Cueto, San Francisco, 2.79; Roark, Washington, 2.83; Fernandez, Miami, 2.86; Scherzer, Washington, 2.96; Martinez, St. Louis, 3.04; Arrieta, Chicago, 3.10. STRIKEOUTS–Scherzer, Washington, 284; Fernandez, Miami, 253; Bumgarner, San Francisco, 251; Ray, Arizona, 218; Syndergaard, New York, 218; Cueto, San Francisco, 198; Lester, Chicago, 197; Arrieta, Chicago, 190; Gray, Colorado, 185; Strasburg, Washington, 183. SAVES–Familia, New York, 51; Jansen, Los Angeles, 47; Melancon, Washington, 47; Ramos, Miami, 40; Gomez, Philadelphia, 37; Chapman, Chicago, 36; Casilla, San Francisco, 31; Rodney, Miami, 25; Johnson, Atlanta, 20; Papelbon, Washington, 19; Oh, St. Louis, 19. BASEBALL’S TOP TEN NATIONAL LEAGUE G AB R H Pct. LeMahieu Col 146 552 104 192 .348 DMurphy Was 142 531 88 184 .347 Votto Cin 158 556 101 181 .326 Blackmon Col 143 578 111 187 .324 Segura Ari 153 637 102 203 .319 SMarte Pit 129 489 71 152 .311 Seager LAD 157 627 105 193 .308 Molina StL 147 534 56 164 .307 WRamos Was 131 482 58 148 .307 Braun Mil 135 511 80 156 .305 GAME DAY October 16 at Raiders 3:05 p.m. TV: CBS October 23 at Saints 3:05 p.m. TV: CBS October 30 at Colts 12:00 p.m. TV: CBS October 16 at Salt Lake 4:00 p.m. TV: TBD October 19 vs. Central FC 7:00 p.m. TV: TBD October 23 vs San Jose 3:00 p.m. TV: TBD October 8 vs. TCU 11 a.m. TV: ESPNU October 15 at Baylor TBA TV: TBA October 22 vs. Oklahoma State TBA TV: TBA October 8 vs Texas Tech 6 p.m. TV: TBA October 15 at Oklahoma TBA TV: TBA Home Runs Arenado, Colorado, 41; Carter, Milwaukee, 41; Bryant, Chicago, 39; FFreeman, Atlanta, 34; Duvall, Cincinnati, 33; Rizzo, Chicago, 32; Cespedes, New York, 31; Tomas, Arizona, 31; 3 tied at 30. 
Runs Batted In Arenado, Colorado, 133; Rizzo, Chicago, 109; DMurphy, Washington, 104; Duvall, Cincinnati, 103; Bryant, Chicago, 102; CGonzalez, Colorado, 100; Yelich, Miami, 98; Votto, Cincinnati, 97; Goldschmidt, Arizona, 95; ARussell, Chicago, 95. Pitching Scherzer, Washington, 20-7; Lester, Chicago, 19-5; Cueto, San Francisco, 18-5; Arrieta, Chicago, 18-8; Hendricks, Chicago, 16-8; Fernandez, Miami, 16-8; CMartinez, St. Louis, 16-9; Roark, Washington, 16-10; Maeda, Los Angeles, 16-11; Strasburg, Washington, 15-4. Interleague 2016 POSTSEASON BASEBALL GLANCE WILD CARD Tuesday, Oct. 4: Baltimore (Tillman 16-6) at Toronto (Stroman 9-10), 7:08 p.m. (TBS) Wednesday, Oct. 5: San Francisco (Bumgarner 15-9) at New York (Syndergaard 14-9), 7:09 p.m. (ESPN) DIVISION SERIES (Best-of-5; x-if necessary) AMERICAN LEAGUE Texas vs. Baltimore-Toronto winner Thursday, Oct. 6: Baltimore-Toronto winner at Texas, 3:38 p.m. (TBS) Friday, Oct. 7: Baltimore-Toronto winner at Texas, 12:08 p.m. (TBS) Sunday, Oct. 9: Texas at Baltimore-Toronto winner, 6:38 p.m. (TBS) x-Monday, Oct. 10: Texas at Baltimore-Toronto winner, TBA (TBS) x-Wednesday, Oct. 12: Baltimore-Toronto winner at Texas, TBA (TBS) Cleveland vs. Boston Thursday, Oct. 6: Boston (Porcello 22-4) at Cleveland (Bauer 12-8), 7:08 p.m. (TBS) Friday, Oct. 7: Boston (Price 17-9) at Cleveland (Kluber 18-9), 3:38 p.m. (TBS) Sunday, Oct. 9: Cleveland (Tomlin 13-9) at Boston, 3:08 p.m. (TBS) x-Monday, Oct. 10: Cleveland at Boston, TBA (TBS) x-Wednesday, Oct. 12: Boston at Cleveland, TBA (TBS) NATIONAL LEAGUE Chicago vs. San Francisco-New York winner Friday, Oct. 7: San Francisco-New York winner at Chicago, 8:15 p.m. (FS1) Saturday, Oct. 8: San Francisco-New York winner at Chicago, 7:08 p.m. (MLB) Monday, Oct. 10: Chicago at San Francisco-New York winner, TBA (FS1 or MLB) x-Tuesday, Oct. 11: Chicago at San Francisco-New York winner, TBA (FS1) x-Thursday, Oct. 13: San Francisco-New York winner at Chicago, TBA (FS1) Washington vs. Los Angeles Friday, Oct. 7: Los Angeles (Kershaw 12-4) at Washington (Scherzer 20-7), 4:38 p.m. (FS1) Saturday, Oct. 8: Los Angeles (Hill 12-5) at Washington, 3:08 p.m. (FS1) Monday, Oct. 10: Washington at Los Angeles (Maeda 16-10), TBA (FS1 or MLB) x-Tuesday, Oct. 11: Washington at Los Angeles, TBA (FS1) x-Thursday, Oct. 13: Los Angeles at Washington, TBA (FS1) LEAGUE CHAMPIONSHIP SERIES (Best-of-7; x-if necessary) AMERICAN LEAGUE Friday, Oct. 14: Game 1 (TBS) Saturday, Oct. 15: Game 2 (TBS) Monday, Oct. 17: Game 3 (TBS) Tuesday, Oct. 18: Game 4 (TBS) x-Wednesday, Oct. 19: Game 5 (TBS) x-Friday, Oct. 21: Game 6 (TBS) x-Saturday, Oct. 22: Game 7 (TBS) NATIONAL LEAGUE Saturday, Oct. 15: Game 1 (Fox or FS1) Sunday, Oct. 16: Game 2 (Fox or FS1) Tuesday, Oct. 18: Game 3 (Fox or FS1) Wednesday, Oct. 19: Game 4 (Fox or FS1) x-Thursday, Oct. 20: Game 5 (Fox or FS1) x-Saturday, Oct. 22: Game 6 (Fox or FS1) x-Sunday, Oct. 23: Game 7 (Fox or FS1) WORLD SERIES (Best-of-7; x-if necessary) All games televised by Fox Tuesday, Oct. 25: NL at AL Wednesday, Oct. 26: NL at AL Friday, Oct. 28: AL at NL Saturday, Oct. 29: AL at NL x-Sunday, Oct. 30: AL at NL x-Tuesday, Nov. 1: NL at AL x-Wednesday, Nov. 2: NL at AL CROSS COUNTRY High School STERLING CROSS COUNTRY INVITATIONAL Boys Team results: Circle 86 Individual medalists: 1. Joshua Reed, Salina South 17:00.1; 2. Tim Kemboi, El Dorado 17:12.3; Girls Team results: Circle 49 Individual medalists: 1. Celia Biel, Trinity Catholic 20:01.6; 2. 
Erin Topham, Berean 20:40.2; FOOTBALL NFL AMERICAN CONFERENCE East W L T Pct PF New England 3 1 0 .750 81 Buffalo 2 2 0 .500 87 N.Y. Jets 1 3 0 .250 79 Miami 1 3 0 .250 71 South W L T Pct PF Houston 3 1 0 .750 69 Jacksonville 1 3 0 .250 84 Indianapolis 1 3 0 .250 108 Tennessee 1 3 0 .250 62 North W L T Pct PF Pittsburgh 3 1 0 .750 108 Baltimore 3 1 0 .750 84 Cincinnati 2 2 0 .500 78 Cleveland 0 4 0 .000 74 West W L T Pct PF Denver 4 0 0 1.000 111 Oakland 3 1 0 .750 108 Kansas City 2 2 0 .500 83 San Diego 1 3 0 .250 121 NATIONAL CONFERENCE East W L T Pct PF Philadelphia 3 0 0 1.000 92 Dallas 3 1 0 .750 101 N.Y. Giants 2 1 0 .667 63 Washington 2 2 0 .500 99 South W L T Pct PF Atlanta 3 1 0 .750 152 Tampa Bay 1 3 0 .250 77 Carolina 1 3 0 .250 109 New Orleans 1 3 0 .250 114 North W L T Pct PF Minnesota 3 0 0 1.000 64 Green Bay 2 1 0 .667 75 Chicago 1 3 0 .250 62 Detroit 1 3 0 .250 95 West W L T Pct PF Los Angeles 3 1 0 .750 63 Seattle 3 1 0 .750 79 San Francisco 1 3 0 .250 90 Arizona 1 3 0 .250 92 Thursday’s Games Cincinnati 22, Miami 7 Sunday’s Games Jacksonville 30, Indianapolis 27 PA 61 68 105 89 PA 73 111 125 84 PA 80 72 82 115 PA 64 106 92 108 PA 27 77 61 112 PA 124 128 118 130 PA 40 67 97 102 PA 76 54 107 80 October 22 vs Texas TBA TV: TBA Buffalo 16, New England 0 Chicago 17, Detroit 14 Seattle 27, N.Y. Jets 17 Washington 31, Cleveland 20 Houston 27, Tennessee 20 Atlanta 48, Carolina 33 Oakland 28, Baltimore 27 Dallas 24, San Francisco 17 Los Angeles 17, Arizona 13 New Orleans 35, San Diego 34 Denver 27, Tampa Bay 7 Pittsburgh 43, Kansas City 14 Monday’s Games N.Y. Giants at Minnesota, 7:30 p.m. Thursday, Oct. 6 Arizona at San Francisco, 7:25 p.m. Sunday, Oct. 9 N.Y. Jets at Pittsburgh, 12 p.m. New England at Cleveland, 12 p.m. Tennessee at Miami, 12 p.m. Houston at Minnesota, 12 p.m. Washington at Baltimore, 12 p.m. Chicago at Indianapolis, 12 p.m. Philadelphia at Detroit, 12 p.m. Atlanta at Denver, 3:05 p.m. Cincinnati at Dallas, 3:25 p.m. San Diego at Oakland, 3:25 p.m. Buffalo at Los Angeles, 3:25 p.m. N.Y. Giants at Green Bay, 7:30 p.m. Monday, Oct. 10 Tampa Bay at Carolina, 7:30 p.m. WEEK 4 TOTAL YARDAGE AMERICAN FOOTBALL CONFERENCE OFFENSE Yard Rush Pass Oakland 1569 507 1062 Pittsburgh 1498 449 1049 Cincinnati 1487 323 1164 Cleveland 1485 597 888 N.Y. Jets 1443 450 993 San Diego 1443 380 1063 Indianapolis 1397 352 1045 Tennessee 1392 508 884 New England 1385 542 843 Baltimore 1385 377 1008 Denver 1369 423 946 Kansas City 1354 361 993 Houston 1338 450 888 Miami 1319 311 1008 Jacksonville 1283 301 982 Buffalo 1228 493 735 DEFENSE Yard Rush Pass Baltimore 1024 320 704 Denver 1133 455 678 Houston 1151 501 650 Jacksonville 1218 423 795 Cincinnati 1291 390 901 Tennessee 1403 440 963 N.Y. Jets 1421 281 1140 Buffalo 1426 384 1042 New England 1463 405 1058 Kansas City 1480 518 962 San Diego 1486 328 1158 Cleveland 1512 473 1039 Indianapolis 1531 423 1108 Pittsburgh 1579 313 1266 Miami 1607 519 1088 Oakland 1840 538 1302 NATIONAL FOOTBALL CONFERENCE OFFENSE Yard Rush Pass Atlanta 1915 498 1417 Dallas 1583 596 987 Carolina 1546 487 1059 New Orleans 1544 327 1217 Arizona 1528 403 1125 Washington 1520 372 1148 Detroit 1504 369 1135 Seattle 1430 372 1058 Tampa Bay 1364 330 1034 Chicago 1340 329 1011 N.Y. Giants 1190 297 893 San Francisco 1171 456 715 Philadelphia 1109 358 751 Los Angeles 1076 307 769 Green Bay 881 301 580 Minnesota 796 153 643 DEFENSE Yard Rush Pass Philadelphia 823 213 610 Minnesota 885 252 633 N.Y. 
Giants 1019 232 787 Green Bay 1050 128 922 Seattle 1056 321 735 Arizona 1254 440 814 Chicago 1334 494 840 Carolina 1391 361 1030 Tampa Bay 1417 383 1034 Dallas 1433 379 1054 Los Angeles 1518 418 1100 Detroit 1545 458 1087 San Francisco 1560 562 998 Washington 1654 532 1122 Atlanta 1677 409 1268 New Orleans 1691 486 1205 AVERAGE PER GAME AMERICAN FOOTBALL CONFERENCE OFFENSE Yards Rush Pass Oakland 392.2 126.8 265.5 Pittsburgh 374.5 112.2 262.2 Cincinnati 371.8 80.8 291.0 Cleveland 371.2 149.2 222.0 N.Y. Jets 360.8 112.5 248.2 San Diego 360.8 95.0 265.8 Indianapolis 349.2 88.0 261.2 Tennessee 348.0 127.0 221.0 New England 346.2 135.5 210.8 Baltimore 346.2 94.2 252.0 Denver 342.2 105.8 236.5 Kansas City 338.5 90.2 248.2 Houston 334.5 112.5 222.0 Miami 329.8 77.8 252.0 Jacksonville 320.8 75.2 245.5 Buffalo 307.0 123.2 183.8 DEFENSE Yards Rush Pass Baltimore 256.0 80.0 176.0 Denver 283.2 113.8 169.5 Houston 287.8 125.2 162.5 Jacksonville 304.5 105.8 198.8 Cincinnati 322.8 97.5 225.2 Tennessee 350.8 110.0 240.8 N.Y. Jets 355.2 70.2 285.0 Buffalo 356.5 96.0 260.5 New England 365.8 101.2 264.5 Kansas City 370.0 129.5 240.5 San Diego 371.5 82.0 289.5 Cleveland 378.0 118.2 259.8 Indianapolis 382.8 105.8 277.0 Pittsburgh 394.8 78.2 316.5 Miami 401.8 129.8 272.0 Oakland 460.0 134.5 325.5 NATIONAL FOOTBALL CONFERENCE OFFENSE Yards Rush Pass Atlanta 478.8 124.5 354.2 N.Y. Giants 396.7 99.0 297.7 Dallas 395.8 149.0 246.8 Carolina 386.5 121.8 264.8 New Orleans 386.0 81.8 304.2 Arizona 382.0 100.8 281.2 Washington 380.0 93.0 287.0 Detroit 376.0 92.2 283.8 Philadelphia 369.7 119.3 250.3 Seattle 357.5 93.0 264.5 Tampa Bay 341.0 82.5 258.5 Chicago 335.0 82.2 252.8 Green Bay 293.7 100.3 193.3 San Francisco 292.8 114.0 178.8 Los Angeles 269.0 76.8 192.2 Minnesota 265.3 51.0 214.3 DEFENSE Yards Rush Pass Seattle 264.0 80.2 183.8 Philadelphia 274.3 71.0 203.3 Minnesota 295.0 84.0 211.0 Arizona 313.5 110.0 203.5 Chicago 333.5 123.5 210.0 N.Y. Giants 339.7 77.3 262.3 Carolina 347.8 90.2 257.5 Green Bay 350.0 42.7 307.3 Tampa Bay 354.2 95.8 258.5 Dallas 358.2 94.8 263.5 Los Angeles 379.5 104.5 275.0 Detroit San Francisco Washington Atlanta New Orleans 386.2 390.0 413.5 419.2 422.8 114.5 140.5 133.0 102.2 121.5 271.8 249.5 280.5 317.0 301.2 College THE AP TOP 25 The Top 25 teams in The Associated Press college football poll, with irst-place votes in parentheses, records through Oct. 2, total points based on 25 points for a irst-place vote through one point for a 25th-place vote, and previous ranking: Record Pts P NR 18. Florida 4-1 391 23 19. Boise St. 4-0 385 24 20. Oklahoma 2-2 324 NR 21. Colorado 4-1 276 NR 22. West Virginia 4-0 240 NR 23. Florida St. 3-2 230 12 24. Utah 4-1 86 18 25. Virginia Tech 3-1 85 NR. COLLEGE FOOTBALL SCHEDULE (Subject to change) Wednesday, Oct. 5 SOUTHWEST Georgia Southern (3-1) at Arkansas St. (0-4), 7 p.m. Thursday, Oct. 6 SOUTH Norfolk St. (0-4) at NC A&T (3-1), 6:30 p.m. W. Kentucky (3-2) at Louisiana Tech (2-3), 7 p.m. Temple (3-2) at Memphis (3-1), 7 p.m. Friday, Oct. 7 EAST Clemson (5-0) at Boston College (3-2), 6:30 p.m. SOUTH Tulane (3-2) at UCF (3-2), 7 p.m. SOUTHWEST SMU (2-3) at Tulsa (3-1), 7 p.m. FAR WEST Boise St. (4-0) at New Mexico (2-2), 8 p.m. Saturday, Oct. 8 EAST Cincinnati (3-2) at UConn (2-3), 10:30 a.m. Maryland (4-0) at Penn St. (3-2), Noon Rhode Island (1-4) at Villanova (4-1), Noon Stetson (2-2) at Brown (1-2), 11:30 a.m. Colgate (1-3) at Lehigh (3-2), 11:30 a.m. Georgia Tech (3-2) at Pittsburgh (3-2), 11:30 a.m. Lafayette (1-4) at Fordham (2-2), 12 p.m. 
Princeton (2-1) at Georgetown (3-1), 12 p.m. Cornell (3-0) at Harvard (3-0), 12 p.m. Bucknell (1-3) at Holy Cross (2-3), 12 p.m. CCSU (1-3) at Penn (1-2), 12 p.m. Dartmouth (2-1) at Yale (0-3), 12 p.m. Houston (5-0) at Navy (3-1), 2 p.m. St. Francis (Pa.) (2-3) at Robert Morris (1-4), 2 p.m. Richmond (3-2) at Albany (NY) (4-0), 2:30 p.m. Kent St. (1-4) at Buffalo (1-3), 2:30 p.m. Maine (1-3) at Delaware (2-2), 2:30 p.m. Columbia (0-3) at Wagner (3-1), 5 p.m. Stony Brook (2-2) at Towson (2-2), 6 p.m. Michigan (5-0) at Rutgers (2-3), 7 p.m. SOUTH Albany St. (Ga.) (2-3) at Charleston Southern (3-2), 10:45 a.m. LSU (3-2) at Florida (4-1), Noon Auburn (3-2) at Mississippi St. (2-2), Noon Notre Dame (2-3) at NC State (3-1), Noon East Carolina (2-3) at South Florida (4-1), Noon Samford (3-1) at Furman (0-5), 12 p.m. Monmouth (NJ) (3-2) at Howard (1-4), 12 p.m. Missouri S&T (3-2) at Kennesaw St. (3-1), 12 p.m. Bethune-Cookman (0-4) at SC State (1-3), 12:30 p.m. ETSU (2-2) at VMI (2-2), 12:30 p.m. Hampton (1-2) at Delaware St. (0-4), 1 p.m. Tennessee Tech (2-3) at Jacksonville St. (3-1), 1 p.m. North Greenville (3-2) at The Citadel (4-0), 1 p.m. Alcorn St. (1-3) at Alabama A&M (1-4), 2 p.m. Austin Peay (0-4) at UT Martin (2-3), 2 p.m. Army (3-1) at Duke (2-3), 2:30 p.m. New Hampshire (3-2) at Elon (2-3), 2:30 p.m. Charlotte (1-4) at FAU (1-4), 2:30 p.m. Texas St. (2-2) at Georgia St. (0-4), 2:30 p.m. William & Mary (2-3) at James Madison (4-1), 2:30 p.m. Stephen F. Austin (3-2) at Nicholls (1-3), 2:30 p.m. Virginia Tech (3-1) at North Carolina (4-1), 2:30 p.m. Mercer (2-2) at Chattanooga (5-0), 3 p.m. Vanderbilt (2-3) at Kentucky (2-3), 3 p.m. Florida A&M (2-3) at NC Central (3-2), 3 p.m. SE Missouri (2-3) at E. Kentucky (1-3), 5 p.m. Presbyterian (1-3) at Gardner-Webb (2-3), 5 p.m. UMass (1-4) at Old Dominion (2-2), 5 p.m. Campbell (3-2) at Jacksonville (2-2), 6 p.m. Idaho (2-3) at Louisiana-Monroe (1-3), 6 p.m. Kentucky Wesleyan (1-4) at Northwestern St. (0-4), 6 p.m. McNeese St. (3-2) at SE Louisiana (1-3), 6 p.m. Morgan St. (2-2) at Savannah St. (1-3), 6 p.m. Wofford (3-2) at W. Carolina (1-3), 6 p.m. Syracuse (2-3) at Wake Forest (4-1), 6 p.m. Georgia (3-2) at South Carolina (2-3), 6:30 p.m. Florida St. (3-2) at Miami (4-0), 7 p.m. MIDWEST TCU (3-2) at Kansas (1-3), Noon Iowa (3-2) at Minnesota (3-1), Noon Marist (1-3) at Butler (2-3), 12 p.m. Morehead St. (2-3) at Dayton (3-2), 12 p.m. Bowling Green (1-4) at Ohio (3-2), 1 p.m. Drake (2-3) at Valparaiso (2-3), 1 p.m. Miami (Ohio) (0-5) at Akron (3-2), 2 p.m. Toledo (3-1) at E. Michigan (4-1), 2 p.m. Youngstown St. (3-1) at Illinois St. (2-3), 2 p.m. N. Dakota St. (4-0) at Missouri St. (3-1), 2 p.m. N. Iowa (2-2) at South Dakota (1-3), 2 p.m. Ball St. (3-2) at Cent. Michigan (3-2), 2:30 p.m. Purdue (2-2) at Illinois (1-3), 2:30 p.m. BYU (2-3) at Michigan St. (2-2), 2:30 p.m. Indiana (3-1) at Ohio St. (4-0), 2:30 p.m. Indiana St. (3-2) at W. Illinois (3-1), 3 p.m. N. Illinois (1-4) at W. Michigan (5-0), 5:30 p.m. Tennessee St. (4-0) at E. Illinois (3-2), 6 p.m. Sam Houston St. (4-0) at Incarnate Word (1-4), 6 p.m. Texas Tech (3-1) at Kansas St. (2-2), 6 p.m. S. Dakota St. (2-2) at S. Illinois (2-2), 6 p.m. SOUTHWEST Oklahoma (2-2) vs. Texas (2-2) at Dallas, Noon Southern Miss. (4-1) at UTSA (1-3), Noon Alabama St. (1-4) at Prairie View (3-2), 2 p.m. Iowa St. (1-4) at Oklahoma St. (3-2), 2:30 p.m. Tennessee (5-0) at Texas A&M (5-0), 2:30 p.m. Lamar (1-3) at Abilene Christian (0-5), 6 p.m. Alabama (5-0) at Arkansas (4-1), 6 p.m. 
Marshall (1-3) at North Texas (2-3), 6 p.m. FIU (1-4) at UTEP (1-4), 7 p.m. FAR WEST Air Force (4-0) at Wyoming (3-2), 2:30 p.m. Davidson (2-3) at San Diego (3-1), 3 p.m. Colorado (4-1) at Southern Cal (2-3), 3 p.m. N. Colorado (3-1) at E. Washington (4-1), 3:05 p.m. MVSU (0-5) at Montana (3-1), 3:30 p.m. Hawaii (2-3) at San Jose St. (1-4), 3:30 p.m. Fresno St. (1-4) at Nevada (2-3), 6 p.m. N. Arizona (1-4) at Montana St. (2-3), 6:10 p.m. Washington (5-0) at Oregon (2-3), 6:30 p.m. UC Davis (1-4) at S. Utah (2-2), 7 p.m. Portland St. (2-3) at Weber St. (2-2), 7 p.m. California (3-2) at Oregon St. (1-3), 8 p.m. North Dakota (3-2) at Sacramento St. (1-4), 8 p.m. Utah St. (2-3) at Colorado St. (2-3), 9 p.m. Arizona (2-3) at Utah (4-1), 9 p.m. UCLA (3-2) at Arizona St. (4-1), 9:30 p.m. UNLV (2-3) at San Diego St. (3-1), 9:30 p.m. Washington St. (2-2) at Stanford (3-1), 9:30 p.m. High School CHAPARRAL 70, DOUGLASS 12 Douglass 0 0 12 0 -- 12 Chaparral 21 19 14 16 -- 70 C – Jacob Jenkins 46 pass from Andrew Clark (Escobar kick) C – Jenkins 2 run (Escobar kick) C – Parker Patterson 11 pass from Jake Burke (Escobar kick) C – Jenkins 4 run (Escobar kick) C – Jenkins 5 run (kick fail) C – Jenkins 13 run (kick fail) D – Caleb Eck 28 pass from Hunter Chadick (run fail) D – Chadick 2 run (pat fail) C – Patterson 58 pass from Clark (kick fail) C – Talon Borghoff 23 run (Burke pat good) C – Burke 14 run (Quinton Pfaff pass from Burke) C – Dalton Hurt recover fumble (Tanner Asper pat good) GOLF PGA TOUR CHAMPIONS STATISTICS Through Sept. 25 Charles Schwab Cup Money List 1, Bernhard Langer, (18), $2,512,659. 2, Miguel Angel Jimenez, (9), $1,44 1,237. 3, Joe Durant, (20), $1,357,037. 4, Woody Austin, (19), $1,22 1,814. 5, Colin Montgomerie, (17), $1,216,318. 6, Gene Sauers, (17), $1,18 1,045. 7, Kevin Sutherland, (18), $1,110,397. 8, Scott McCarron, (19), $1,100,935. 9, Duffy Waldorf, (21), $1,02 1,254. 10, David Frost, (21), $954,224. Scoring Average (Actual) 1, Bernhard Langer, 68.43. 2, Kevin Sutherland, 69.66. 3, Joe Durant, 69.81. 4, Jeff Maggert, 70.10. 5, Scott McCarron, 70.14. 6 (tie), Jeff Sluman and Duffy Waldorf, 70.23. 8, Colin Montgomerie, 70.25. 9, Tom Lehman, 70.26. 10, Bart Bryant, 70.30. Driving Distance 1, John Daly, 305.3. 2, Doug Garwood, 298.4. 3, John Huston, 297.2. 4, Brandt Jobe, 295.9. 5, Wes Short, Jr., 291.8. 6 (tie), Kenny Perry and Kevin Sutherland, 291.1. 8, Scott McCarron, 291.0. 9, Grant Waite, 289.6. 10, Scott Parel, 288.4. Driving Accuracy Percentage 1, Jeff Hart, 82.08%. 2, Fred Funk, 80.49%. 3, Joe Durant, 77.60%. 4, Paul Goydos, 76.79%. 5, Bernhard Langer, 76.75%. 6, Rod Spittle, 76.42%. 7, Jose Coceres, 76.03%. 8, Joey Sindelar, 75.74%. 9, Tom Byrum, 75.34%. 10, Olin Browne, 75.00%. Greens in Regulation Percentage 1, Bernhard Langer, 78.01%. 2, Kenny Perry, 76.25%. 3, Scott Dunlap, 74.37%. 4, Joe Durant, 74.32%. 5, Tom Lehman, 74.07%. 6, Kevin Sutherland, 73.50%. 7, Jeff Maggert, 73.09%. 8, Doug Garwood, 72.55%. 9, Tom Byrum, 72.33%. 10, Gene Sauers, 72.10%. Total Driving 1, Joe Durant, 26. 2, Bernhard Langer, 29. 3, Jeff Sluman, 40. 4, Scott Dunlap, 42. 5 (tie), Jim Carter, Paul Goydos, Wes Short, Jr. and Rod Spittle, 50. 9, Lee Janzen, 51. 1 Tied With Bart Bryant, 52. Putting Average 1, Miguel Angel Martin, 1.717. 2 (tie), Olin Browne and Bernhard Langer, 1.724. 4 (tie), Jeff Sluman and Colin Montgomerie, 1.750. 6, Jeff Maggert, 1.751. 7, Tom Pernice Jr., 1.752. 8, Kirk Triplett, 1.753. 9, Duffy Waldorf, 1.754. 1 Tied With Steve Lowery, 1.755. 
Birdie Average 1, Bernhard Langer, 4.63. 2, Kenny Perry, 4.18. 3, Scott Dunlap, 4.14. 4, Jeff Maggert, 4.10. 5, Duffy Waldorf, 4.09. 6 (tie), Jeff Sluman and Kirk Triplett, 4.08. 8, Joe Durant, 4.07. 9 (tie), Stephen Ames and Colin Montgomerie, 4.00. Eagles (Holes per) 1, Scott McCarron, 73.3. 2, Wes Short, Jr., 85.8. 3, John Huston, 90.0. 4, Tom Lehman, 106.0. 5, Bernhard Langer, 112.0. 6, Scott Parel, 117.0. 7, Grant Waite, 126.0. 8, Jeff Maggert, 132.8. 9, John Daly, 140.4. 10, Mike Grob, 141.0. Sand Save Percentage 1, Scott Verplank, 62.96%. 2, Loren Roberts, 60.26%. 3, Miguel Angel Martin, 58.97%. 4, Jeff Hart, 56.34%. 5, Esteban Toledo, 53.85%. 6, Glen Day, 52.70%. 7 , Ian Woosnam, 52.27%. 8, Bernhard Langer, 52.17%. 9, Mike Goodes, 52.00%. 10, Tom Byrum, 50.68%. All-Around Ranking 1, Bernhard Langer, 47. 2, Jeff Maggert, 105. 3, Joe Durant, 147. 4, Scott McCarron, 167. 5, Jeff Sluman, 172. 6, Tom Byrum, 179. 7 (tie), Scott Dunlap and Kevin Sutherland, 181. 9, Duffy Waldorf, 194. 10, Wes Short, Jr., 201. LPGA TOUR STATISTICS Through Oct. 2 Scoring 1, Lydia Ko, 69.320. 2, In Gee Chun, 69.525. 3, Ariya Jutanugarn, 69.865. 4, Sei Young Kim, 69.961. 5, Amy Yang, 69.985. 6, Ha Na Jang, 70.000. 7, Brooke M. Henderson, 70.103. 8, So Yeon Ryu, 70.147. 9, Haru Nomura, 70.198. 10, Lexi Thompson, 70.290. Driving Distance 1, Joanna Klatten, 280.500. 2, Lexi Thompson, 278.902. 3, Sadena Parks, 274.860. 4, MaudeAimee Leblanc, 271.867. 5, Benyapa Niphatsophon, 271.077. 6, Carlota Ciganda, 270.754. 7, Sei Young Kim, 270.686. 8, Brittany Lincicome, 269.057. 9, Cydney Clanton, 267.449. 10, Paula Reto, 267.354. Greens in Regulation 1, Anna Nordqvist, 77.8%. 2, Ha Na Jang, 77.7%. 3, Lexi Thompson, 77.7%. 4, So Yeon Ryu, 76.9%. 5, Stacy Lewis, 74.5%. 6, Gerina Piller, 74.1%. 7, Shanshan Feng, 74.0%. 8, Joanna Klatten, 73.5%. 9, Jessica Korda, 73.3%. 10, Carlota Ciganda, 73.2%. Putts per GIR 1, Lydia Ko, 1.721. 2, In Gee Chun, 1.740. 3, Sei Young Kim, 1.746. 4, Ariya Jutanugarn, 1.753. 5, Haru Nomura, 1.757. 6, Hyo Joo Kim, 1.757. 7, Minjee Lee, 1.763. 8, Mi Jun Hur, 1.767. 9, Mirim Lee, 1.769. 10, Juli Inkster, 1.773. Birdies 1, Ariya Jutanugarn, 381. 1, Brooke M. Henderson, 377. 3, Haru Nomura, 343. 4, Sei Young Kim, 341. 5, Minjee Lee, 321. 6, Chella Choi, 314. 7 (tie), Hyo Joo Kim and Stacy Lewis, 307. 9, Lydia Ko, 301. 10, Anna Nordqvist, 294. Eagles 1, Lexi Thompson, 12. 2 (tie), Sei Young Kim and Minjee Lee. 11. 4 (tie), Ha Na Jang and Mi Hyang Lee, 10. 6 (tie), Catriona Matthew and MaudeAimee Leblanc, 9. 8 (tie), Mi Jun Hur and Moriya Jutanugarn, 8. 10. Eight tied with 7. Sand Save Percentage 1, Jenny Shin, 65.82%. 2, In-Kyung Kim, 58.33%. 3, Brittany Lincicome, 58.24%. 4, Mika Miyazato, 57.50%. 5, Karrie Webb, 56.92%. 6, Ashleigh Simon, 56.90%. 7, So Yeon Ryu, 56.84%. 8, Laetitia Beck, 56.72%. 9, Lydia Ko, 56.32%. 10, Charley Hull, 56.04%. Rounds Under Par 1, Lydia Ko, 80.00%. 2, Ha Na Jang, 75.00%. 3, In Gee Chun, 70.49%. 4, Amy Yang, 70.15%. 5, Brooke M. Henderson, 70.10%. 6, Ariya Jutanugarn, 68.54%. 7, Sei Young Kim, 65.79%. 8, Stacy Lewis, 64.63%. 9, Shanshan Feng, 64.52%. 10, Haru Nomura, 63.95%. SOCCER MLS EASTERN W L T Pts GF New York 14 9 9 51 56 New York City FC 14 9 9 51 57 Toronto FC 13 9 10 49 46 Montreal 11 10 11 44 47 D.C. 
United 10 9 13 43 48 Philadelphia 11 12 9 42 52 New England 10 13 9 39 40 Columbus 8 12 11 35 45 Orlando City 7 11 14 35 49 Chicago 6 16 9 27 36
WESTERN W L T Pts GF FC Dallas 16 8 8 56 48 Colorado 13 5 12 51 33 Los Angeles 11 6 15 48 53 Real Salt Lake 12 11 9 45 43 Seattle 13 13 5 44 41 Sporting Kansas City 12 13 7 43 40 Portland 11 13 8 41 46 San Jose 8 10 13 37 31 Vancouver 9 15 8 35 41 Houston 7 12 11 32 36
NOTE: Three points for victory, one point for tie.
Goals against (Eastern Conference, in standings order): New York 42, New York City FC 53, Toronto FC 35, Montreal 48, D.C. United 42, Philadelphia 51, New England 52, Columbus 49, Orlando City 58, Chicago 52. Goals against (Western Conference, in standings order): FC Dallas 39, Colorado 27, Los Angeles 39, Real Salt Lake 44, Seattle 40, Sporting Kansas City 41, Portland 49, San Jose 36, Vancouver 51, Houston 40.
Wednesday’s Games D.C. United 3, Columbus 0 Montreal 3, San Jose 1 Orlando City 0, Toronto FC 0, tie Seattle 1, Chicago 0
Friday’s Games New York City FC 2, Houston 0
Saturday’s Games New York 3, Philadelphia 2 Columbus 3, Chicago 0 D.C. United 2, Toronto FC 1 New England 3, Sporting Kansas City 1 Colorado 1, Portland 0 FC Dallas 1, Los Angeles 0 San Jose 2, Real Salt Lake 1
Sunday, October 2 Montreal 1, Orlando City 0 Seattle 2, Vancouver 1
Saturday, October 8 Colorado at Houston, 7:30 p.m.
NWSL W L T Pts GF x-Portland 12 3 5 41 35 x-Washington 12 5 3 39 30 x-Chicago 9 5 6 33 24 x-Western New York 9 6 5 32 40 Seattle 8 6 6 30 29 FC Kansas City 7 8 5 26 18 Sky Blue FC 7 8 5 26 24 Houston 6 10 4 22 29 Orlando 6 13 1 19 20 Boston 3 15 2 11 14 x-Clinched playoff berth
NOTE: Three points for victory, one point for tie.
Goals against (in standings order): Portland 19, Washington 21, Chicago 20, Western New York 26, Seattle 21, FC Kansas City 20, Sky Blue FC 30, Houston 29, Orlando 30, Boston 47.
Saturday, Sept. 24 FC Kansas City 2, Orlando 1 Western New York 4, Boston 0 Chicago 3, Washington 1
Sunday, Sept. 25 Portland 3, Sky Blue FC 1 Seattle 3, Houston 2 End regular season
PLAYOFFS Semifinals Friday, Sept. 30 Washington 2, Chicago 1, ET Sunday, Oct. 2 Western New York 4, Portland 3, ET
Championship Sunday, Oct. 9 At Houston Washington vs. Western New York, 4 p.m.
TENNIS ATP RAKUTEN JAPAN OPEN RESULTS Monday At Ariake Colosseum Tokyo Purse: $1.5 million (ATP-500) Surface: Hard-Outdoor
Singles First Round Joao Sousa, Portugal, def. Martin Klizan, Slovakia, 4-6, 6-3, 6-3, 6-3. Marin Cilic (4), Croatia, def. Benoit Paire, France, 6-0, 4-6, 6-3. Fernando Verdasco, Spain, def. Go Soeda, Japan, 6-7 (2), 6-3, 6-3. Kei Nishikori (1), Japan, def. Nicolas Almagro, Spain, 4-6, 6-2, 6-2.
Doubles First Round Juan Sebastian Cabal and Robert Farah, Colombia, def. Taro Daniel and Yasutaka Uchiyama, Japan, 6-4, 6-2.
ATP-WTA CHINA OPEN RESULTS Monday Olympic Park at the China National Tennis Center Beijing Purse: $4.16 million (ATP-500); $5.4 million (WTA-Premier) Surface: Hard-Outdoor
Singles Men First Round David Ferrer (5), Spain, def. Pablo Cuevas, Uruguay, 6-4, 7-6 (3). Roberto Bautista Agut (7), Spain, def. John Millman, Australia, 6-4, 3-6, 6-3. Kyle Edmund, Britain, def. Guillermo Garcia-Lopez, Spain, 6-3, 6-2. Lucas Pouille (6), France, def. Lu Yen-hsun, Taiwan, 6-1, 6-2.
Women First Round Angelique Kerber (1), Germany, def. Katerina Siniakova, Czech Republic, 6-4, 6-4. Peng Shuai, China, def. Venus Williams (6), United States, 7-5, 6-1. Karolina Pliskova (5), Czech Republic, def. Lucie Safarova, Czech Republic, 6-7 (7), 6-1, 6-4. Timea Bacsinszky (12), Switzerland, def. Lara Arruabarrena, Spain, 4-6, 6-4, 6-1. Caroline Garcia, France, def. Julia Goerges, Germany, 4-6, 6-3, 6-4. Elina Svitolina (16), Ukraine, def. Tatjana Maria, Germany, 6-2, 6-2. Daria Gavrilova, Australia, def. Christina McHale, United States, 6-4, 6-4. Daria Kasatkina, Russia, def. Louisa Chirico, United States, 6-2, 6-1. Barbora Strycova, Czech Republic, def. Laura Siegemund, Germany, 5-7, 7-5, 7-5.
Second Round Garbine Muguruza (2), Spain, def.
Yulia Putintseva, Kazakhstan, 6-2, 7-6 (5). Yaroslava Shvedova, Kazakhstan, def. Belinda Bencic, Switzerland, 6-4, 7-6 (4). Madison Keys (8), United States, def. Kristina Mladenovic, France, 7-5, 6-4. Agnieszka Radwanska (3), Poland, def. Ekaterina Makarova, Russia, 6-3, 6-4. Doubles Men First Round Gong Mao-xin and Zhang Ze, China, def. Steve Johnson and Sam Querrey, United States, 3-6, 6-3, 10-4. Pablo Carreno Busta and Rafael Nadal, Spain, def. Rohan Bopanna, India, and Daniel Nestor (3), Canada, 7-6 (3), 6-4. Paolo Lorenzi, Italy, and Guido Pella, Argentina, def. Treat Huey, Philippines and Max Mirnyi (4), Belarus, 3-6, 6-1, 10-4. Jack Sock, United States, and Bernard Tomic, Australia, def. Andre Begemann, Germany, and Leander Paes, India, 3-6, 7-5, 10-7. Women First Round Vania King, United States, and Monica Niculescu, Romania, def. Xu Yifan and Zheng Saisai, China, 7-5, 6-2. Martina Hingis, Switzerland, and Coco Vandeweghe (7), United States, def. Maria Irigoyen, Argentina, and Tatjana Maria, Germany, 6-1, 6-1. Timea Babos, Hungary, and Yaroslava Shvedova (8), Kazakhstan, def. Olga Savchuk, Ukraine, and Wang Yafan, China, 6-3, 6-1. VOLLEYBALL High School BURRTON INVITATIONAL Pool A (High school gym) Inman def. Fairield 20-25, 25-11, 25-20; Berean Academy def. Burrton 25-14, 25-22; Inman def. Burrton 25-12, 25-15; Berean Academy def. Fairield 25-17, 25-19; Berean Academy def. Inman 20-25, 25-23, 25-22; Fairield def. Burrton 25-22, 24-26, 25-23 Pool B (Middle school gym) Cunningham def. Skyline 25-11, 25-9; Smoky Valley def. Sterling 25-9, 25-17; Cunningham def. Sterling 25-16, 25-22; Smoky Valley def. Skyline 25-9, 25-10; Smoky Valley def. Cunningham 25-17, 25-18; Sterling def. Skyline 25-5, 22-25, 25-19 Championship bracket Smoky Valley def. Inman 25-22, 25-12; Berean Academy def. Cunningham 25-17, 25-15, 19-25; Smoky Valley def. Berean Academy 25-11, 25-12. ETC. TRANSACTIONS BASEBALL American League CHICAGO WHITE SOX — Named Rick Renteria manager. MINNESOTA TWINS — Named Derek Falvey executive vice president and chief baseball oficer. National League ARIZONA DIAMONDBACKS — Fired general manager Dave Stewart and manager Chip Hale. COLORADO ROCKIES — Announced the resignation of manager Walt Weiss. PHILADELPHIA PHILLIES — Fired hitting coach Steve Henderson. Can-Am League QUEBEC CAPITALES — Exercised the 2017 contract options for RHP Sam Brunner, RHP Karl Gelinas, RHP Reinaldo Lopez, LHP Jordan Mills, RHP Mark Smyth, C Maxx Tissenbaum, INF Yurisbel Gracial, INF Jonathan Malo, OF Yeicok Calderon, OF Tanner Nivins, OF Asif Shah, RHP Shaun Ellis, RHP Ryan Leach, LHP Sheldon McDonald, RHP Jasvir Rakkar, C Adam Ehrich, INF Lachlan Fontaine, INF Jordan Lennerton, INF William Salas, OF Maruc Knecht and OF Roel Santos. SUSSEX COUNTY MINERS — Released LHP Darren Fischer. Exercised the 2017 contract option on LHP Francisco Rodriguez. BASKETBALL National Basketball Association ATLANTA HAWKS — Signed G Josh Magette. CLEVELAND CAVALIERS — Signed G Toney Douglas. Waived F-C Eric Moreland. FOOTBALL National Football League ARIZONA CARDINALS — Signed LB Cap Capi to the practice squad. CLEVELAND BROWNS — Waived LB Armonty Bryant. DETROIT LIONS — Signed RB Mike James to the practice squad. Released RB George Winn from the practice squad. GREEN BAY PACKERS — Signed FB Joe Kerridge to the practice squad. NEW ENGLAND PATRIOTS — Released TE Clay Harbor. TENNESSEE TITANS — Fired special teams coordinator Bobby April. Named Steve Hoffman special teams coordinator. 
Signed NT Antwaun Woods to the practice squad. Released WR Jordan Leslie from the practice squad. HOCKEY NHL STANDINGS NHL — Suspended Chicago D Niklas Hjalmarsson for the remainder of the preseason and one regular-season game for charging St. Louis F Ty Rattie during an Oct. 1 preseason game. ANAHEIM DUCKS — Assigned LW Max Jones to London (OHL). Released LW David Booth and RW David Jones from their professional tryout agreements. Released LW Antoine Laganiere from his professional tryout agreement and assigned him to San Diego (AHL). CALGARY FLAMES — Assigned D Ryan Culkin, RW Matt Frattin, G Jon Gillies, C Mark Jankowski, LW Morgan Klimchuk, D Oliver Kylington and RW Emile Poirier to Stockton (AHL). Released D Mikhail Grigorev and D Colby Robak from their training camp. CHICAGO BLACKHAWKS — Assigned Fs Luke Johnson, Tanner Kero and Martin Lundberg; D Erik Gustafsson, Robin Norell and Ville Pokka, and G Lars Johansson to Rockford (AHL). Released Fs Chris DeSousa and Jake Dowell from their tryout agreements. DALLAS STARS — Assigned D Ludwig Bystrom, G Philippe Desrosiers, D Nick Ebert, F Brendan Ranford and F Branden Troock to Texas (AHL). Released F Brandon DeFazio from his professional tryout agreement and D Brandon Anselmini, G Landon Bow and F Michael McMurtry from their amateur tryout agreements. DETROIT RED WINGS — Assigned LW to Val-d’Or Foreurs (QMJHL). Released RW Colin Campbell from his professional tryout and sent him to Grand Rapids (AHL). LOS ANGELES KINGS — Assigned F Michael Amadio, F Justin Auger, F Patrick Bjorkstrand, D Erik Cernak, D Alex Lintuniemi, F Joel Lowry and D Damir Sharipzianov to Ontario (AHL). Assigned F Sean Backman, F Paul Bissonnette, G Jack Flinn, F Justin Gutierrez, F T.J. Hensick, F Sam Herr, F Lucas Lessio and F Brett Sutter to Ontario (AHL) training camp. LAS VEGAS — Named Kerry Bubolz team president. NEW JERSEY DEVILS — Recalled D Karl Stollery, D Vojtech Mozik, F Blake Coleman and F Blake Pietila from Albany (AHL). Signed D Colby Sissons to a three-year, entry-level contract. ECHL READING ROYALS — Signed F Kris Newbury to a tryout agreement. Released G Nick Niedert from his tryout agreement. Signed G Nick Niedert. Suspended F Joe Rehkamp. The Hutchinson News Tuesday, October 4, 2016 B3 SPORTS Ryder Cup win begs question: Does US still need task force? BY TIM DAHLBERG.” AP Sports Writer CHASKA, Minn. – Mickelson Playoffs • From Page B1. 3 ONE MORE BIG SWING FOR BIG PAPI? David Ortiz had a Royals • From Page B1 Chiefs • From Page B1 in their opener, then failed to score a touchdown in a loss at Houston, before scoring just one offensive TD against the Jets. On Sunday night, the Chiefs didn’t reach the end zone until the fourth quarter. David J. Phillip/Associated Press U.S. captain Davis Love III is surrounded by his players as they pose for a picture during the closing ceremony of the Ryder Cup golf tournament Sunday at Hazeltine National Golf Club in Chaska, Minn. dream season in his final year – huge numbers, an MVP candidate, a worstto. 4 Even more damning about the performance? It came a week after the Eagles, led by former Chiefs offensive coordinator Doug Pederson, blitzed past Pittsburgh in a 34-3 rout. “This one we didn’t sustain. We just weren’t able to sustain drives,” Reid said Monday. “We had a couple opportunities in the red zone we didn’t take advantage of. I have to look in the mirror on Andrew Miller (Indians), Mark Melancon (Nationals) and Jay Bruce (Mets). We’ll soon see which trade-deadline guys deliver. 5. 9 Mark Trumbo.” 10 8.”. 
and he was one of the only power bats in the lineup. Along with bringing him back, the Royals will be looking for another impact bat in the outfield from a relatively week free-agent class. Speaking of money The Royals will have to decide whether to exercise a $10 million option on All-Star closer Wade Davis and a $6.5 million option on light-hitting shortstop Alcides Escobar. Davis is a relative easy one. Despite dealing.” that one.”.” Reid said the Chiefs would spend Monday digesting the loss, then cut his players loose for the remainder of the bye week. The NFL has rules about how much they can be in the building during a week off. Still, that doesn’t mean the loss won’t linger with them. “We’re 2-2, so it’s not the end of the world, right? Even though it feels that way,” Reid said. “We have a week off to step back and analyze and try to fix some of the issues.” Henrik duo. Watson used his experience to help golfers like Brandt Snedeker stay sharp in big moments. “He’s the reason why I got my point today,” Snedeker said. “He was in my ear all day.” Public Notices As taxpayers and citizens, we have a right to know about decisions and activities of our government. Public notices are legally required publications of certain important government records and of court proceedings and notifications. To view these notices online go to hutchnews.com/classifieds/ community/announcements/ IN THE DISTRICT COURT OF RENO COUNTY, KANSAS MICHAEL ) Case No. VLAEMINCK, ) 15-cv-187 Plaintiff, ) v. ) CAROL CHIEN ) VLAEMINCK and ) ANITA CHOU, ) Defendants. ) NOTICE OF SUIT TO: CAROL CHIEN VLAEMINCK and all other persons who are or who may be concerned: You are hereby notified that a Petition has been filed in the District Court of Reno County, Kansas by Michael Vlaeminck, praying for damages from fraud by silence, constructive fraud, breach of fiduciary duty, and conversion and you are hereby required to plead to said Petition on or before the 31st day of October, 2016 in said Court at Hutchinson, Reno County Kansas. Should you fail therein, judgment and decree will be entered in due course upon said Petition. Respectfully Submitted, HINKLE LAW FIRM LLC By: /s/ Travis M. Pfannenstiel Matthew K. Holcomb, SC No. 23140 Travis M. Pfannenstiel, SC No. 26354 301 N. Main Street, Suite 2000 Wichita, KS 67202-4820 Tel: (316) 267-2000 Fax: (316) 264-1556 Email: mholcomb@hinklaw.com Email: tpfannenstiel@hinklaw.com Attorneys for Michael Vlaeminck 604122 IN THE DISTRICT COURT OF RENO COUNTY, KANSAS CIVIL DEPARTMENT Bayview Loan Servicing, LLC, a Delaware Limited Liability Company Plaintiff, vs. Daniel R. Wilson, Tamara Sue Decker, Jane Doe, John Doe, and Kansas Department of Revenue, et al., Defendants ) ) ) ) ) ) ) ) ) ) ) ) ) Case No. 16CV242 Court No. 2 Title to Real Estate Involved Pursuant to K.S.A. §60 Reno County, Kansas by Bayview Loan Servicing, LLC, a Delaware Limited Liability Company, praying for foreclosure of certain real property legally described as follows: THE FOLLOWING DESCRIBED LOTS, TRACTS OR PARCELS OF LAND, LYING, BEING AND SITUATE IN THE COUNTY OF RENO AND STATE OF KANSAS, TO WIT: TRACT FIVE AND NORTH 23 FEET OF LOT SIX, WEST URBAN ACRES, A SUBDIVISION IN THE SOUTHWEST 1/4 OF SECTION ELEVEN, TOWNSHIP 23 SOUTH, RANGE SIX WEST OF THE 6TH P.M., RENO COUNTY, KANSAS. 
Tax ID No.: 1-11712 Commonly known as 429 Urban Dr, Hutchinson, KS 67501 (“the Property”) MS177123 for a judgment against defendants and any other interested parties and, unless otherwise served by personal or mail service of summons, the time in which you have to plead to the Petition for Foreclosure in the District Court of Reno County Kansas will expire on October 31, 2016. If you fail to plead, judgment and decree will be entered in due course upon the request of plaintiff. MILLSAP & SINGER, LLC By: /s/ Chad R. Doornink #23536 cdoornink@msfirm.com 8900 Indian Creek Parkway, Suite 180 Overland Park, KS 66210 (913) 339-9132 (913) 339-9045 (fax) By: 604101
PROTESTS
Kansas volleyball takes a knee before national anthem
BY JESSE NEWELL
LAWRENCE – The Kansas volleyball team united to make a statement about social injustice on Saturday afternoon, kneeling together before the national anthem in its home match against Baylor. The players locked elbows and arms while the public-address announcer asked fans to take a moment to reflect on how they could help create “a more just, respectful and inclusive nation.” After the gesture, KU’s players rose to stand for the national anthem. Middle blocker Kayla Cheadle said the team started having discussions about doing something following recent news reports about the death of Terence Crutcher, an unarmed black man who was killed by Tulsa police officers on Sept. 16. “We decided that we wouldn’t do the anthem, because we want to have respect for what’s happening and respect for our country,” Cheadle said. “So we felt like having it before the anthem was more respectful and still sent a positive message.” The volleyball team is the first KU program to kneel as a form of bringing social awareness. KU coach Ray Bechard described his team as “diverse as any volleyball team in the country.”
KU volleyball coach Ray Bechard explains team’s decision to kneel before national anthem
Kansas coach Ray Bechard said his team decided to kneel before national anthem after many discussions and deciding it would become more involved in community efforts. Bechard spoke after his team’s 3-1 victory over Baylor on Oct. 1, 2016. “We come from all different backgrounds, but we still know what it’s like to treat each other the right way, be compassionate for each other, be tolerant of views,” Bechard said. “The team thought, ‘What a great message to send.’ ” Bechard reiterated his players didn’t want to disrespect the anthem with their actions. “They all said, ‘Hey, we love our country. We love our flag,’” Bechard said. “‘But is there some way we can challenge everybody in the gym today maybe just to be a little better person when it comes to the decisions we make about other people and how we treat each other?’” The Jayhawks will be increasing their work in the community as well. Bechard said each player would have an individual plan for volunteering, whether that was assisting with the Boys and Girls Club or preparing meals for the homeless. “Not only did we want the awareness in our gym,” Bechard said, “but we want some action as the season goes on.” Cheadle says the team is still considering kneeling before its other matches while committing to the extra charity outings. “That could be a very positive influence, and bring a lot of positive energy through Lawrence, as well,” Cheadle said. Tribune News Service
“(W)e want to have respect for what’s happening and respect for our country.
So we felt like having it before the anthem was more respectful and still sent a positive message.” – Kayla Cheadle, KU middle blocker
ECU discussing band members’ silence
THE ASSOCIATED PRESS
GREENVILLE, N.C. – East Carolina athletic director Jeff Compher says his department is working with the university and the school of music after several band members kneeled in protest while others performed the national anthem before last week’s game against Central Florida. Compher said in a statement Monday that there have been “ongoing conversations” between the music and athletic departments and the university and says he’s “confident that there will be a positive resolution” in the future. Fans booed the protestors after the anthem was completed, and Chancellor Cecil Staton quickly issued a statement affirming the band members’ right to express themselves. Coach Scottie Montgomery, who was in the locker room with his team during the anthem, says he was made aware of the situation and trusts the university to handle it properly.
Injuries show need for backup QBs
BY JOHN RABY AP Sports Writer
(Photo caption: Kansas State coach Bill Snyder, front left, greets West Virginia coach Dana Holgorsen after a 2012 game in Morgantown, W.Va. The two teams met again Saturday in Morgantown.)
CHARLESTON, W.Va. – Injuries to starting quarterbacks are creeping up in the Big 12, highlighting the need for backups to be prepared for significant playing time. Texas Tech coach Kliff Kingsbury said Monday that the status of FBS total offense leader Patrick Mahomes is day-to-day. Mahomes left last week’s game against Kansas after going down hard on his right (throwing) shoulder and he didn’t return. If Mahomes isn’t ready, junior Nic Shimonek would get his first career start Saturday for Texas Tech (3-1, 1-0 Big 12) at Kansas State (2-2, 0-1). Oklahoma’s Baker Mayfield tweaked his right ankle when he was sacked by TCU’s Josh Carraway and fumbled in the second quarter Saturday. Mayfield’s ankle was heavily taped at halftime and he returned to the field. Coach Bob Stoops said Mayfield is “a little bit sore but he’ll be ready to go” when the 20th-ranked Sooners (2-2, 1-0) play Texas (2-2, 0-1) in their annual rivalry game Saturday in Dallas. The injuries are a wake-up call not only for the need to have reserves ready, but for opposing teams to prepare to face them. Just in case, Oklahoma has freshman Austin Kendall, who threw two touchdown passes against Louisiana-Monroe earlier this season. And Shimonek completed 15 of 21 passes with four TDs in the 55-19 win Saturday over Kansas State. He also got extensive work in Texas Tech’s season opener against Stephen
“Obviously Texas Tech is doing it the right way because that youngster came in and played so very well.” Teams like Texas, Iowa State and Kansas use their backup quarterbacks regularly. For other teams that don’t, “you’ve got to keep them sharp, give them reps, continue to remind them that they’re one play away,” said West Virginia coach Dana Holgorsen. West Virginia got a scare in its season opener when Skyler Howard hurt his ribs against Missouri and sat out a few series. Two backups had turnovers during brief stints, and Holgorsen said that was an eye opener for them. “Our backup quarterbacks weren’t ready to go in there,” Holgorsen said. Since that game, “I thought they’ve been more engaged and practiced better to the point to where if that happens again, they’re going to be more ready to play.” Other news from the Big 12 coaches’ teleconference: –No. 13 Baylor (5-0, 2-0) hasn’t allowed a point in the fourth quarter all season, and interim coach Jim Grobe attributes that in part to conditioning and becoming familiar with the opponents’ offense. But after a nail-biting 45-42 win over Iowa State, Grobe said his team needs to correct its mistakes during a bye week, particularly on defense. “We’ve got to get back to being a better fundamental football team,” Grobe said. –TCU coach Gary Patterson said he doesn’t believe the winner of the Big 12 race will come out unscathed in league play. Baylor, No. 22 West Virginia, Texas Tech and Oklahoma are the remaining teams that haven’t lost in Big 12 games. “There’s a chance, because where everybody has to go to, that the winner of this league may have two losses. It could be one,” Patterson said. “For me there’s a lot of parity in it.” –After Texas got three extra points blocked at Oklahoma State, one of which was returned for two points, Longhorns coach Charlie Strong promised that the problem will be fixed. “We didn’t do a good job of reacting to it,” he said. Iowa State walk-on leads nation in punt returns BY LUKE MEREDITH AP Sports Writer AMES, Iowa –, ‘Man. It’d be so sweet playing at Jack Trice (Stadium) at night,’” Ryen said. “After watching that game I was like, ‘You know what? Screw it. I’m going to go try to live the dream.’” The Cyclones (1-4, 0-2 Big 12) are thrilled he did. Ryen, a junior wide receiver, enters this weekend’s game at Oklahoma State (3-2, 1-1 Big 12) as the nation’s leader in punt return average. He is averaging 22.3 yards on seven returns. Four of them have gone at least 25 yards, with a long of 55. Not bad for a kid who received hardly any attention from Division I schools in high school. Ryen grew up in tiny Ida Grove, Iowa, as a track star, winning the 100 and 200 meter races at the state meet as a senior. Despite also scoring 19 touchdowns that year, Ryen’s only football offers were from FCS and Division II schools. Like many kids looking for a way to help pay for college, Ryen accepted a track scholarship. But Ryen left Northern Iowa after just one season to pursue a football career at Iowa State – despite the fact that there was no promise he’d ever see the field. Iowa State was happy to take a chance on Ryen as a walk-on, redshirting him in 2014. The following season, Ryen flashed his speed with an eye-opening performance in the spring game. Ryen was arguably the best player on the field that day. “He’s had that (walk-on) mindset ever since,” running back Mike Warren said. 
“He’s going to come to work every day and bring his best to the table.” By the fall of 2015, Ryen had so impressed the coaching staff that they gave him a scholarship after just two games. Ryen didn’t put up monster numbers last year, catching 18 passes for 191 yards and a touchdown and rushing for 71 yards. But Ryen was one of just eight players nationally who scored on a run, a pass and a punt return. This season, new coach Matt Campbell and the Cyclones have expanded Ryen’s role. He has helped spark a team that has scored 86 points in its last two games. Ryen was a force against Baylor last week, catching five passes for a team-high 75 yards as Iowa State nearly pulled off the upset. “He works really hard. He wants to get better. I don’t know if he has plans to play at the next level or not, but he works like he wants to,” quarterback Joel Lanning said. The work ethic and a fearlessness on the field – similar to what Ryen showed in chasing his Division I dream – have made him one of the nation’s top return men. “You can’t be scared when you’re back there,” Ryen said. “I have no fear of catching a punt. I always think of the outcome like, ‘If I can a big return here, it can change the game.’” The Hutchinson News Tuesday, October 4, 2016 B5 CLASSIFIED BUS DRIVERS Trinity Catholic Jr/Sr High School is in need of drivers to transport students to and from various athletic events. A current CDL with a S (school bus) and P (passenger) endorsements is required. Interested applicants may pick up an application at 1400 E. 17th or call 620-662-5800 for more information. Editing All ads are subject to the approval of the Hutchinson News, which reserves the right to edit, reject or properly classify any ad. Please check your ad. Please read your ad on the first day. The News accepts responsibility for the first incorrect insertion and then only the extent of a corrected insertion or refund of the price paid. 620-694-5704 or outside Hutchinson 1-800-766-5704 Benefits include: Competitive pay rate, set schedule, one week paid vacation, free meals, closed on Sunday. Apply online at www. CarriageCrossingRestaurant. com or in person at Carriage Crossing. E.O.E. Federal Equal Employment Opportunity Laws: Prohibit employment discrimination based on age, race, color, religion, gender (including gender identity), sexual orientation, pregnancy, or national origin. Also employment discrimination against qualified individuals with disabilities. OPEN ROUTES AVAILABLE The Hutchinson News Garden City 7 days a week, early morning hours, responsible for finding your own sub when needed, needs to have reliable transportation. Contact Kim at kcline@ hutchnews.com 719-691-9199 for more information Great Bend 7 days a week, early morning hours, responsible for finding your own sub when needed, needs to have reliable transportation. Contact Mary at mfistler@ hutchnews.com 620-694-5700 ext. 121 for more information P & G DRYWALL Wanted - Experienced Drywall Finisher/ Some Hanging. Drivers License Required. 620-728-9031 Looking for the perfect employee? The Hutchinson News and Job Network will job listings. hutchareajobs.com PIANO ACCOMPANIST Trinity Catholic is currently looking to fill the position of Piano Accompanist for the JH & HS Choir classes. This position is open immediately. Interested persons may contact Joe Hammersmith (662-5800; jhammersmith @trinity-hutch.com) for more information, or may come to the school office for an application. 
The Center for Counseling and Consultation a community mental health services provider is seeking an Executive Director. The Executive Director will be directly responsible to the Board of Directors for the overall management of The Center to include staff management, office management, financial and statistical reporting, program development, program implementation, program evaluation and community relations. Must have an advanced degree, at least 10 years of executive-level management experience, strong communication skills and substantive experience managing relationships with boards, commissions, advisory councils and government agencies. Interested candidates who meet the above qualifications may apply by emailing cover letter, resume and salary requirements to tsowder@syndeohro.com. VP of Human Resources EOE The Center for Counseling in Great Bend, KS is seeking to hire a full-time licensed substance abuse counselor (LCAC). Competitive salary and excellent benefits (State KS Health Plan, KPERS, health club, and liberal paid time off). Duties will include providing individual and group counseling and conducting evaluations. For more information or to apply online visit or you can send a resume to Gail @ gails@thecentergb.org. Sandstone Heights Nursing Home in Little River is seeking a full time CNA for evening shift, 2 pm to 10 pm . Applicant must be dependable, hardworking, and caring individual. Apply in person or call Kelli at 620-897-6266. preferred. Good DMV/MVR & CSA required. Regional runs, home 1-2 times a week. Dry van freight. Good pay & miles, .40 cpm & bonuses. Call Rick @ New Image Trucking 620-474-9563 FULL AND PT DRIVERS NEEDED TO PULL HOPPER TRAILERS. Fall Parade of Homes October 15th & 16th 11am-4pm Tour Homes from the Area’s Finest Builders Hutchinson: •2801 Morris Rd. •2909/07 Dickens Drive •2900 East 49th •2700 Timber Lane •4308 West Red Tail Road •4305 East Red Tail Road Free Admission Apartments - Unfurn. STUDIO, 1 BEDROOMS $400 TO $450 YOU PAY ELECTRIC 401 E AVE A, HUTCH 620-200-2311 Unique properties for every budget. 1 & 2 bedroom apartments, billm@thecentergb.org. EOE Brookdale Hutchinson Is Now Hiring! Nursing Part-time C.N.A. - Overnight 10pm-6am Full-Time C.M.A. Second Shift Activities Full-Time Personal Banker II Commerce Bank is currently accepting applications for a full-time Personal Banker II position. Qualified applicants should have previous experience as a Personal Banker/ Teller, Financial Services Representative or similar position. We offer competitive wages, tuition assistance, along with other outstanding benefits. Please apply online at Part-time Weekend (only) Activity Assistant - must have experience in a Senior Living setting Join our Facebook page today to see how much fun we are having! BrookdaleHutchinson/ 2 BEDROOMS 4-PLEX, WASHER/ DRYER HOOKUPS, WATER/TRASH PAID 620-665-0371 604 & 608 Madison, 1 bedrooms, central heat, stove & refrigerator included, NO PETS, $250/200 620-960-2126 NO TEXTING 908 E 17TH, APT C1, Dining Competitive Wages! Must be 18 to apply! Apartments - Unfurn. Fair Housing Act Sale and Rental of Housing: No one may take any of the following actions based on race, color, national origin, religion, gender, familial status or handicap. SEE ALL OF TOMORROW’S OPEN HOUSES TODAY. **3311 Sycamore Rd., 3 bedroom, 1 bath, 2 car detached & 2 car attached garage. $850. **1527 Orchard, 3 bedroom, 1 bath, $650. **326 E 12th, 1 Bedroom Apartment, $450. 620-727-5777 --55 Halsey Dr, 3 bedrooms, 1 bath, $675. 
--1006 W 18th, 3 bedrooms, 1 bath, $600. 620-664-6898 or 663-7676 or 708-0397 WATER /TRASH PAID, 620-200-7785 OR 474-0277 Country living, 5 acres, 997 Westridge, 3 bedroom, 1 bath, 2 car garage, $710/710, 215-397-7583 All new 1 & 2 bedrooms for rent, $375 & up, some all bills paid, clean, 716 E 4th, 208 E B, coin laundries, 662-8176 HUTCHINSON & SOUTH HUTCHINSON UNITS AVAILABLE!! $450/450, 2 BEDROOM, Full-Time Server 11am to 7pm 4 days per week including every weekend Part-Time Server 6am to 2pm every other weekend with additional hours as needed Real Estate Apply online at brookdale. careers.com or in person at 2416 Brentwood St. Hutchinson, KS. Place your next ad online at ROYAL APARTMENTS One half month free rent with 12 month lease. One and two bedrooms available. Remodeled, Clean, New Appliances, Spacious. LEASE-DEPOSITNO PETS Pool, Storm Shelter Balcony. 326 East 1st, Suite D 669-5008, For After Hours669-7777 or 669-7070 Miscellaneous For Sale SILAS IS BUYING AND HAULING RUNNING OR NOT AUTOS, TRUCKS, AND TRACTORS IN ANY CONDITION. BEST PRICES PAID!! 620-665-4040 SILAS IS BUYING AND HAULING RUNNING OR NOT AUTOS,TRUCKS, AND TRACTORS IN ANY CONDITION. BEST PRICES PAID!! 620-665-4040 duplexes & houses. No pets. See our properties at Top drivers earned over 70k. Must be at least 25 years or contact us at old with 3 years experience. 620-663-3341 Benefits include home every Food Service/Restaurants weekend, dedicated lanes, insurance, retirement, Duplexes vacation pay, monthly and yearly bonuses. Apply in person at 1301 Landon, 2 bedroom, Sun Valley Inc. central heat/air, garage, 2201 S Lorraine washer/dryer hook-ups, Hutchinson, Kansas $550/550. 620-474-0745 Carriage Crossing Restaurant Part Time Truck Driver is seeking positive people Large 2 bedroom Central Prairie Co-op with great personalities to in Old Farm Estates, Real Estate in Sterling KS represent us well. 1 year lease, NO PETS, is hiring a part time truck Servers - Previous full$825/month, 620-474-1801. driver at our Adams Corner service experience preferred, 1201 Forrest, 1 bedroom, Location west of Hutchinson, but we will train. 1 car detached garage, KS. Must have a current $2.35 per hour plus tips. remodeled interior, new Houses-Unfurnished Class A CDL with driving Annual averages is around siding, Ready for New experience. Job duties will $13 per hour. Owner. Possible renter. include delivering of dry and Full or Part-time evening **10218 Paganica Plaza, 32K, owner carry or $500 liquid fertilizers & tendering of position includes dinner 3 bedroom, 2 baths, $900. rent. Call 620-960-1442 field application equipment. Must have Saturday **807 A Old Farm Estates, Job is agricultural related and availability. 631 E 4th, 3 bedroom, 2 bedroom, 2 bath, hours will vary on weather Benefits include: storage building, deck, central full basement, $875. and seasonal needs of the Competitive pay rate, set heat/air. All new inside & out. **1215A N Monroe co-op. Contact Bryan or schedule, one week paid $48,500. 620-960-2053 2 bedrooms, 1 bath, $575 Shanon @ 620-422-3221 vacation, free meals 620-200-4729 or email resume to closed on Sunday. or 719-529-0505 bsieren@cpcoop.us Apply online at www. ****Upscale New Home**** CarriageCrossingRestaurant. 2 bedroom, full basement, com or in person at 2 car garage. For rent with Construction Laborers Wanted Carriage Crossing. option to buy. 719-529-0505 E.O.E. 
1201 Forrest, 1 bedroom, Apartments - Furn EXPERIENCED CONCRETE 1 car detached garage, FINISHERS fresh remodel, $500/500, Office/Administration IN HUTCHINSON, KS. Call 620-960-1442 ALL RENTAL or real estate CALL TJ’S CONSTRUCTION property advertisements in 1612 W 4TH, HOUSE A, 620-200-1749 Seeking detail oriented this newspaper are subject to 2 BEDROOM, CENTRAL person with strong The Federal Housing Act of HEAT/AIR, $425/425. computer and phone 1968, as amended, 620-474-0745 Call Classified Sales 4 Results skills. Full time position which makes it illegal to 224 W 13th, 3 bedroom, for billing, accounts advertise any ‘’preference, central heat/air, new carpet/ receivable and general limitation, or discrimination paint, washer/dryer hook-ups. office duties. Paid holiday/ based on race, color, $575/575. 620-694-0397 Medical vacation, health/dental, religion, gender or national 401k. origin, or an intention to make 304 W 6th, Hutch: Please send resume to any discrimination.’’ 3 bedroom, 1 bath, Adult Case Manager Western Supply Company This newspaper will not NO Pets/Smoking, $600/600. Center for Counseling PO BOX 16786 knowingly accept any (215) 397-7583 & Consultation in Great Hutchinson,Ks 67504 advertising which is in Bend has openings in An Employee Owned 305 W 25th, Hutchinson violation of the law. our Community Support Company. Recently remodeled 3 bed, Amendments, effective Program as an Adult 1 bath, plus bonus room. March 12, 1989, added Case Manager providing Energy efficient. No smoking ‘handicap’ and ‘familial’ services to adult or pets. $800/800. Available status to discrimination Financial Services SPMI clients. immediately. 316-250-3557 categories. For additional details please visit our website: & complete an Application for Employment. Send Application along with a current resume to Autos •One bedroom & Studio Apts, •2 bedroom Apts & Duplexes No Pets or Smoking One year lease sandhill properties.biz 620-662-0691 Office Space NEW OFFICE AND WAREHOUSE **111-W-2ND 782 SQ. FT. $350.00 MONTH **319 S MAIN OFFICE & RETAIL 400.00 MONTH CALL 620-921-5586 Heavy Equipment CATERPILLAR-1996 D3CXL Hystat dozer, 6 way blade, 80-90% undercarriage, 1256 hours, excellent condition, CATERPILLAR-1987 model 916 wheel loader, 7100 hours, good shape, tires fair 620-546-4606 Motorcycles/Go-Carts ATVs Antiques & Collectibles Church Service in Historic Escue Chapel Sunday, October 9, 9am Santa Fe Trail Center’s Tired Iron Show, Non-denominational and open to the public 2 miles west of Larned on K156 620-285-2054 GARAGEE SALES ON THE GO 1 # Site S ite ffo for or ads! ddss! 2010 HD Heritage Classic ,8,600 miles. Willie G. chrome, true duals Vance and Hines long shots. Bat wing fairing with radio. Rear trunk with lights. Asking $10,500. ph. 316 207 8859 2004 Winnebago Journey, Model 36G, 47K, roof hail damage, extremely clean interior, good rubber. $45,000 OBO. 620-623-4261 E AG 5’ x 10’, ATV’s, 16 ‘ Utility, 18’ Car hauler, 20’ 7K Car hauler, 25’ 7K and 25’ Dually Tandem & Enclosed. FTS Trailer Sales 124 N. Main, South Hutch 620-474-1001 C AT O b brought h to you by b GARAGESALES.HUTCHNEWS.COM GARAGESALES.HUTCH Food and Produce Public Auctions Michigan Apples Variety of choices. Frozen Fruit Available. ORDER BY OCT. 12 Ropps 620-669-9603 Consignors Auction, Sunday, Oct. 9, 1pm. 816 S. Main. Already consigned, furniture, Furniture & Appliances musical instruments, glass, China, primitives, tools, records and more. To consign REFRIGERATORS; you items call for details GAS & ELECTRIC 620-960-6637. 
RANGES; WASHER & DRYERS; FREEZERS; 1212 W. 4TH. 663-3195 WE BUY GOOD USED FURNITURE. ONE PIECE OR A HOUSE FULL . CALL LARRY @ Pets 620-200-4354 WILLEMS APPLIANCE SERVICE SALE ON GOOD RECONDITIONED APPLIANCES, WITH WARRANTY. OR LET US REPAIR YOUR BROKEN ONE. 620-663-8382 PRAIRIE FIRE POINTERS 3 month old English Pointer pups FDSB registered Sire & Dam are professional guide dogs Lawn & Garden Supplies Call 620-615-1606 or seefirepointers.com CALL DARREN THE TREE & STUMP GUY Miscellaneous For Sale Tree Trimming/Tree Removal/Hedge & Shrub Trimming/ Clean-up, Skid Steer Work/ Pasture Clearing. Call For Reasonable Rates FREE ESTIMATES 620-727-5777 Sporting Equipment Classified Dept. Monday thru Friday 8:00am to 5:00pm CLOSED Saturday & Sunday Carpentry & Remodeling PENNER REMODELING Kitchens, Baths, Roofing, Decks & General Remodeling Since 1979. Arlan Penner 620-664-7990 or 620-662-6957 7 Carpentry & Remodeling SPANGLER CUSTOM BUILDING & REMODELING Help with all your projects. FREE Estimates. Ken Spangler, 620-663-7890 Cleaning, Commercial Home Let us help you turn your trash to treasure with an ad in the Merchandise for Sale category. Call 620-694-5704 for more details. Concrete Services FOLK’S CONCRETE It’s not too late to get your concrete work done! •Free Estimates• •Over 30 Years Experience• 620-200-7155 Home Improvement Tree Removal/Trimming/ Moving CALL DARREN THE TREE & STUMP GUY Tree Trimming/Tree Removal/Hedge & Shrub Trimming/ Clean-up, Skid Steer Work/ Pasture Clearing. Reasonable Rates FREE ESTIMATES 620-727-5777 HOME CLEAN HOME Thorough, Dependable & Affordable. Cleaning is my passion & my clients passion is my cleaning!!! 620-931-7033 Free Estimates Call Marcus 620-727-1267 •Roofing •Concrete Work •Additions & Garages •Siding •Painting •We Finish Basements. Licensed & Insured, 20 year experience Call 620-960-8250 Painting & Papering FOLK’S PAINTING *Interior Work* *Free Estimates* *Over 30 Years Experience* 620-200-7155 Painting & Papering Jim’s Painting Service Interior/Exterior Free estimates Residential/ Commercial Over 30 years of Experience 620-694-9107 Roger’s Painting Painting, Plastering, Texturing, Paperhanging &/or Paper Removal, Sanding & Refinishing Floors, Parking Lot Striping, Pressure Washing Tuesday through Saturday’s Deadline for Classified ads, 3:30pm the day before. Autos 1998 gray Buick LaSabre custom, 4 door, excellent condition 68k, $3300, 620-669-8635 BUYING CARS & TRUCKS RUNNING OR NOT 620-664-1159 PUBLIC NOTICE EXTENSION COUNCIL ELECTION RENO COUNTY EXTENSION COUNCIL TO: The Voters of Reno County, State of Kansas, Election at Large PUBLIC NOTICE is hereby given in accordance with K.S.A. 2-611, as amended, State of Kansas, that on the date at the time and place mentioned below, the citizens of voting age of Reno County shall meet for the purpose of electing twelve members, three members for Agriculture Pursuits, three members for Home Economics Work, three members for 4-H Club and Youth Work, and three members for Community Development Initiatives, as Representatives to the Reno County Extension Council. Reno County – Thursday October 20, 2016 8AM-6PM, Reno County Extension Office, 2 W 10th Ave, South Hutchinson, KS Consideration shall be given to the Extension Program for Reno County. /s/ Carl Cohen Chair, Executive Board featuring O Trailers PREMIER OFFICE SPACE FOR LEASE 2,600 sq. ft. - multiple rooms. Can be divided. Parking available. FIRST NATIONAL CENTER 1 N Main 620-694-2233 Call these local businesses for your service needs. 
Legal Notices RVs & Campers LE SA Servers - Previous fullservice experience preferred, but we will train. $2.35 per hour plus tips. Annual averages is around $13 per hour. Full or Part-time evening position includes dinner Must have Saturday availability. The Hutchinson News Drivers wanted CDL-A 2 years OTR experience & hazmat endorsement Homes & Lots L Carriage Crossing Restaurant is seeking positive people with great personalities to represent us well. OPEN ROUTES AVAILABLE Medical R Employment Opportunities Drivers Wanted TUESDAY, OCTOBER 4, 2016 GAR Employment Opportunities SCAN ME THE HUTCHINSON NEWS Sunday’s and Mondays Deadline for Classified ads, 3:30pm, Friday Call 1-800-766-5704 or 620-694-5704 to place your ad. Farm Equipment Draft Horse Harnessing & Plowing Demonstration, Santa Fe Trail Center’s Tired Iron Show Sunday Oct 9, 12:30pm two miles west of Larned on K156 620-285-2054 SILAS IS BUYING AND HAULING RUNNING OR NOT AUTOS, TRUCKS, AND TRACTORS IN ANY CONDITION. BEST PRICES PAID!! 620-665-4040 Farm Supplies/Seed Fertilizer Certified Everest; SY Monument; SY Flint AP503 BL 2, Seed treatment available, Jacques Farms, Inc 620-960-3270 620-727-1093 620-694-9563 CERTIFIED SEED WHEAT Denali, Everest, Fuller, Larned, Sy Flint, Sy Monument, Sy Southwind, TAM 111, Jackpot, LCS Pistol, AP503CL2, Double stop CLT. Seed treatment available. SEEMAN FARMS Larned, KS 620-285-5288 620-285-1357 seefrms@ghta.net Certified SY Monument, LCS Mint, Gallagher, Everest, WB 4458, LCS Chrome. Howard Behnke, 620-562-7783, Lyons, KS CERTIFIED: DUSTER, DOUBLESTOP, EVEREST, IBA, JAGGER, KANMARK. JAMES HARRIS, LANGDON, 620-596-2363 Farmers Wants & Services Wells Fencing (formerly Harley’s Fencing) PROVIDING BARBED WIRE, RESIDENTIAL, AND COMMERCIAL FENCE, FENCING MATERIALS & SUPPLIES. 620-899-4410 PRAIRIE FIRE Fencing POINTERS 3 month old English Pointer pups FDSB registered Sire & Dam are WE BUILD professional guide dogs Call 620-615-1606 PASTURE FENCE. or seefirepointers.com YODER FENCE 620-465-2493 Searching for a New Job or Career? find your match 620-474-6588 SUPERIOR PAINTING SERVING HUTCH. FREE ESTIMATES. WOOD REPAIR. CALL TODAY! 620-802-1441 Get your ad included call 620-694-5704 TODAY To Place An Ad in the Service Directory Call: 620-694-5704 or Toll-Free 1-800-766-5704 B6 Tuesday, October 4, 2016 The Hutchinson News Business THE MARKET IN REVIEW q q DOW 18,253.85 -54.30 q NASDAQ 5,300.87 -11.13 STOCKS OF LOCAL INTEREST Name Div AGCO .52 AT&T Inc 1.92 AbbottLab 1.04 Alcoa .12 Altria 2.44f Anadarko .20 ArchDan 1.20 Ashland 1.56 BP PLC 2.40a BkofAm .30f BarrickG .08 +135.6 BerkHa A ... Cal-Maine 2.49e Caterpillar 3.08 CntryLink 2.16 Chevron 4.28 Citigroup .64f CocaCola 1.40 ColgPalm 1.56 CmcBMO .90b ConAgra 1.00 ConocoPhil 1.00 Costco 1.80 Deere 2.40 DevonE .24 DomRescs 2.80 DukeEngy 3.42f DukeRlty .72 Eaton 2.28f EqtyRsd 2.16 ExxonMbl 3.00 FordM .60a GenElec .92 GtPlainEn 1.05 HarleyD 1.40f HeclaM .01e +198.9 JohnJn 3.20 Kroger s .48 Lowes 1.40 McDnlds 3.76f Last 49.29 40.77 42.55 10.12 62.85 63.42 42.90 114.65 35.47 15.63 17.39 YTD Chg %Chg -.03 +8.6 +.16 +18.5 +.26 -5.3 -.02 +2.5 -.38 +8.0 +.06 +30.5 +.73 +17.0 -1.30 +11.6 +.31 +13.5 -.02 -7.1 -.33 Yld 1.1 4.7 2.4 1.2 3.9 .3 2.8 1.4 6.8 1.9 .5 PE 17 16 24 31 21 dd 23 19 dd 13 40 ... 
6.5 3.5 7.8 4.2 1.4 3.3 2.1 1.8 2.1 2.3 1.2 2.8 .5 3.8 4.3 2.7 3.5 3.4 3.4 5.0 3.1 3.9 2.7 .2 14 215700 13 38.27 25 88.28 12 27.52 dd 102.45 12 47.03 25 42.03 26 73.62 18 48.83 27 47.53 dd 43.43 28 151.01 17 85.35 dd 44.20 22 73.28 18 79.09 36 26.82 16 65.87 23 63.09 35 87.05 6 12.10 28 29.64 16 27.25 14 51.00 35 5.65 -520 -.27 -.49 +.09 -.47 -.20 -.29 -.52 -.43 +.42 -.04 -1.50 ... +.09 -.99 -.95 -.51 +.16 -1.24 -.23 +.03 +.02 -.04 -1.59 -.05 +9.0 -17.4 +29.9 +9.4 +13.9 -9.1 -2.2 +10.5 +14.8 +12.7 -7.0 -6.5 +11.9 +38.1 +8.3 +10.8 +27.6 +26.6 -13.4 +11.7 -14.1 -4.8 -.2 +12.4 2.7 1.6 1.9 3.3 19 13 20 22 +.68 -.40 -.02 -.72 +15.7 -30.0 -5.1 -3.0 118.81 29.28 72.19 114.64 Other Copper (lb) Aluminum (lb) Platinum (oz) Lead (ton) Zinc, HG (lb) $1313.30 $1315.88 $1309.00 off 9.20 off 15.03 off 4.30 $18.950 $19.050 $18.795 off 0.390 off 0.600 off 0.344 Last $2.1835 $0.7520 $1003.70 $2105.00 $1.0782 Pvs. Day $2.2020 $0.7497 $1028.60 $1935.50 $1.0677 COFFEE Open Last 86.39 62.52 57.42 24.89 38.10 36.09 72.82 51.11 YTD Chg %Chg -.01 +12.3 +.11 +18.4 -.18 +3.5 +.43 -9.8 -1.19+111.8 +.35 +9.6 -.10 +7.7 -.28 Yld 2.0 2.9 2.7 4.4 .3 1.1 4.2 6.2 PE 24 18 26 20 36 dd dd 38 2.8 3.6 2.5 3.4 2.7 ... ... 2.9 3.8 2.2 .2 2.5 1.5 .8 2.3 2.9 4.5 4.5 2.8 3.8 3.5 2.7 2.5 2.6 3.4 ... .5 23 108.25 -.52 +8.3 15 33.68 -.19 +4.3 23 121.24 +.41 +18.4 8 81.43 -.22 ... 11 66.23 +.28 +16.5 17 3500.00+60.00 +20.9 dd 11.37 -.09 -44.7 21 50.87 -1.96 +24.5 40 42.52 -.23 +77.6 24 69.66 -.52 +27.1 15 39.88 +.13 -5.1 22 175.04 -1.19 +16.2 17 24.73 +.51 -8.9 20 75.10 +.43 +40.8 19 97.35 -.18 +24.5 19 109.18 -.18 +13.5 8 52.90 -.10 -25.2 14 51.88 -.10 +12.2 15 72.01 -.11 +17.5 20 38.25 -.73 +10.6 11 43.83 -.45 -19.4 26 56.72 -.03 +33.7 12 162.07 -.09 +10.3 cc 30.63 -.10 +19.2 19 40.58 -.56 +13.0 dd 43.13 +.03 +29.7 dd 4.29 -.02 BONDS AND BILLS METALS Gold Handy & Harman NY Engelhard NY Merc spot Silver Handy & Harman NY Engelhard NY Merc spot Name Div Medtrnic 1.72 Merck 1.84 Microsoft 1.56f Mosaic 1.10 NewmtM .10 NobleEngy .40 OcciPet 3.04f ONEOK 3.16f +107.3 PepsiCo 3.01 Pizer 1.20 Praxair 3.00 Prudentl 2.80 Ryder 1.76f SbdCp 3.00 SearsHldgs ... SonocoP 1.48f SpectraEn 1.62 TexInst 1.52 Textron .08 3M Co 4.44 21stCFoxA .36f Tyson .60f UnionPac 2.20 UPS B 3.12 ValeroE 2.40 VerizonCm 2.31f WalMart 2.00f WeinRlt 1.46 WellsFargo 1.52 WestarEn 1.52 Whrlpl 4.00f WmsCos .80m XcelEngy 1.36 Yahoo ... Yamana g .02m +130.6 High Low Settle Chg. COFFEE C 37,500 lbs.- cents per lb. (ICE) Dec 16 151.10 151.70 147.15 147.55 -4.00 Mar 17 154.15 154.95 150.55 150.90 -4.00 May 17 156.35 156.75 152.50 152.80 -3.90 Jul 17 158.00 158.50 154.20 154.55 -3.90 Sep 17 159.25 159.25 155.80 156.15 -3.90 Dec 17 160.00 160.00 158.25 158.25 -3.85 Est. sales 24,714. Fri’s sales 28,313 Fri’s open int. 187,185, +1,311 Open High Low Settle Chg. US TREASURY BONDS $100,000 prin- pts & 32nds of 100 pct (CBOT) Dec 16 168-15 168-25 167-26 168-02 - 03 Mar 17 167-04 167-04 166-19 166-21 - 03 Jun 17 165-25 - 03 Est. sales 335,990. Fri’s sales 329,705 Fri’s open int. 566,006, -9,716 10 YR. TREASURY $100,000 prin-pts & 32nds & a half 32nd (CBOT) Dec 16 131-064131-074 130-29 130-304 - 056 Mar 17 130-134 - 056 Est. sales 1,516,290. Fri’s sales 1,671,124 Fri’s open int. 2,878,180, -55,270 CURRENCIES Country (Currency) 1 US $ Buys: Pvs. 
Day Australia (Dollar) Britain (Pound) Canada (Dollar) China (Yuan) Euro (Euro) Hong Kong (Dollar) Japan (Yen) Mexico (Peso) Russia (Ruble) Switzerlnd (Franc) 1.3032 .7778 1.3113 6.6709 .8916 7.7558 101.57 19.3219 62.3873 .9730 1.3054 .7704 1.3118 6.6711 .8899 7.7559 101.41 19.3879 62.7908 .9706 S&P 500 2,161.20 -7.07 p 10-YR T-NOTE 1.63% +.03 AGRICULTURE LIVESTOCK Open High Low Settle Chg. WHEAT 5,000 bu minimum- cents per bushel (CBOT) Dec 16 399.75 401.75 391.50 395.50 -6.50 Mar 17 422.50 423.75 413.75 417.25 -7.50 May 17 436.75 436.75 426.75 430.50 -7.50 Jul 17 445.50 446.50 436.75 441.25 -6.25 Sep 17 458.25 458.25 452.75 455.75 -6 Dec 17 474.50 476.25 471.75 476.25 -4.25 Est. sales 191,668. Fri’s sales 125,071 Fri’s open int. 466,339, -177 CORN 5,000 bu minimum- cents per bushel (CBOT) Dec 16 336.25 347.75 335.25 346 +9.25 Mar 17 346 357.50 345 355.75 +9.25 May 17 353 364.25 351.75 362.75 +9.25 Jul 17 359.75 370.75 358.50 369 +8.75 Sep 17 366.75 377 365.25 375.50 +8.50 Dec 17 375.75 386 374.50 384.25 +7.75 Est. sales 608,650. Fri’s sales 343,497 Fri’s open int. 1,324,250, +8,961 OATS 5,000 bu minimum- cents per bushel (CBOT) Dec 16 178.75 183.75 178.50 183 +4.75 Mar 17 189 189.75 187.50 188.75 +1.75 May 17 195.25 195.25 193.25 194 +2.75 Jul 17 199.25 +.75 Sep 17 200.25 +.75 Dec 17 203.75 +.75 Est. sales 1,710. Fri’s sales 689 Fri’s open int. 10,520, -106 SOYBEANS 5,000 bu minimum- cents per bushel (CBOT) Nov 16 952.50 975 946.50 973 +19 Jan 17 957.75 979.75 952.25 978 +18.75 Mar 17 964.75 986 958.50 984.50 +19 May 17 970.25 991.75 965.50 990.25 +18.50 995 Jul 17 976.25 996.75 971 +17.75 Aug 17 975.75 995 971.50 993.75 +17.75 Sep 17 966 981.25 959 981.25 +18 Nov 17 953 973.25 949 971.50 +18.25 Est. sales 376,216. Fri’s sales 244,922 Fri’s open int. 644,106, -3,448 SOYBEAN OIL 60,000 lbs- cents per lb (CBOT) Oct 16 33.10 33.10 32.55 33.05 -.19 Dec 16 33.31 33.35 32.73 33.24 -.20 Jan 17 33.40 33.58 32.96 33.48 -.19 Mar 17 33.85 33.85 33.22 33.73 -.18 May 17 33.88 34.01 33.37 33.91 -.16 Jul 17 34.01 34.16 33.54 34.08 -.14 Est. sales 163,856. Fri’s sales 115,528 Fri’s open int. 412,065, +5,611 SOYBEAN MEAL 100 tons- dollars per ton (CBOT) Oct 16 297.90 306.80 297.70 305.90 +8.00 Dec 16 299.50 309.80 299.20 308.40 +8.80 Jan 17 300.20 310.50 299.90 309.50 +9.30 Mar 17 302.40 312.00 302.00 310.90 +8.80 May 17 306.20 313.10 303.80 312.30 +8.40 Jul 17 308.00 314.50 306.50 313.80 +8.20 Est. sales 167,894. Fri’s sales 92,785 Fri’s open int. 359,768, -1,902 COTTON 2 50,000 lbs.- cents per lb. (ICE) Oct 16 68.73 +.44 Dec 16 67.94 68.66 67.20 68.52 +.44 Mar 17 68.41 69.14 67.74 69.01 +.47 May 17 68.75 69.57 68.24 69.44 +.46 Jul 17 69.08 69.65 68.35 69.53 +.47 Oct 17 68.96 +.48 Est. sales 15,994. Fri’s sales 19,421 Fri’s open int. 247,884, -2,290 Open High Low Settle Chg. CATTLE 40,000 lbs.- cents per lb. (CME) Oct 16 98.52 99.70 97.35 98.92 +.02 Dec 16 100.25 101.05 98.90 99.97 -.15 Feb 17 100.70 101.65 100.00 100.37 -.23 Apr 17 100.50 101.42 99.82 100.25 -.05 Jun 17 94.60 94.87 93.32 93.92 -.03 Aug 17 93.45 94.07 92.30 93.47 +.47 Oct 17 95.00 95.95 93.77 95.30 +.80 Est. sales 69,246. Fri’s sales 70,302 Fri’s open int. 265,286, +4,142 FEEDER CATTLE 50,000 lbs.- cents per lb. (CME) Oct 16 122.65 126.12 122.15 123.97 +.82 Nov 16 119.25 121.57 118.10 119.27 -.38 Jan 17 116.42 118.70 115.02 115.80 -1.02 Mar 17 115.72 117.55 113.87 114.67 -.93 Apr 17 115.50 117.17 113.92 114.72 -.65 May 17 114.80 116.60 113.60 114.30 -.55 Aug 17 116.45 118.82 115.77 116.77 -.33 Sep 17 117.67 -.33 Est. sales 15,193. 
Fri’s sales 15,807 Fri’s open int. 43,519, -544 HOGS-Lean 40,000 lbs.- cents per lb. (CME) Oct 16 49.27 49.80 48.70 48.92 -.10 Dec 16 44.30 45.00 43.37 44.17 +.20 Feb 17 49.05 51.00 48.80 50.65 +1.75 Apr 17 55.92 58.75 55.50 58.45 +2.73 May 17 64.00 66.50 64.00 66.50 +3.50 Jun 17 66.95 70.65 66.65 70.45 +3.38 Jul 17 66.70 70.60 66.57 70.42 +3.42 Aug 17 66.32 70.17 66.32 69.97 +3.25 Oct 17 57.35 60.80 57.35 60.77 +3.27 Dec 17 57.35 57.50 57.35 57.50 +2.65 Est. sales 65,114. Fri’s sales 43,051 Fri’s open int. 223,717, +795 FUELS NATURAL GAS 10,000 mm btu’s, $ per mm btu (NYMX) Nov 16 2.905 2.937 2.866 2.923 +.017 Dec 16 3.137 3.175 3.112 3.170 +.038 Jan 17 3.268 3.309 3.082 3.306 +.038 Est. sales 299,717. Fri’s sales 367,312 Fri’s open int. 1,095,240, +13,489 LIGHT SWEET CRUDE 1,000 bbl.- dollars per bbl. (NYMX) Nov 16 48.04 49.02 47.78 48.81 +.57 Dec 16 48.64 49.60 48.35 49.40 +.58 Jan 17 49.16 50.21 48.93 50.01 +.61 Est. sales 809,669. Fri’s sales 855,915 Fri’s open int. 1,884,275, +3,146 HEATING OIL 42,000 gal, cents per gal (NYMX) Nov 16 153.57 156.12 152.62 155.32 +1.49 Dec 16 154.55 157.06 153.65 156.38 +1.55 Jan 17 155.85 158.45 155.01 157.76 +1.61 Est. sales 120,978. Fri’s sales 109,085 Fri’s open int. 387,900, -1,687 ETHANOL 29,000 U.S. gallons-dollars per gallon (CBOT) Oct 16 1.563 1.580 1.545 1.580 +.010 Nov 16 1.498 1.523 1.493 1.519 +.004 Dec 16 1.449 1.455 1.449 1.455 -.016 Est. sales 273. Fri’s sales 454 Fri’s open int. 4,626, +120 p p 30-YR T-BOND 2.34% +.02 CRUDE OIL $48.81 +.57 LOCAL GRAIN, MARKETS Daily grain price fluctuations (courtesy of ADM Grain, Hutchinson) Date Wheat Corn 09/26 3.07 2.89 09/27 3.12 2.91 09/28 3.14 2.89 09/30 3.08 2.96 10/03 3.00 3.01 Garden City Co-op 10/03 2.67 2.96 Dodge City Co-op 10/03 3.67 2.96 Irsik/Doll Hutchinson 10/03 3.00 3.06 Plains 10/03 2.81 3.06 Leoti 10/03 2.72 2.91 Hays Midland Marketing 10/03 2.62 2.79 Kansas Ethanol (Lyons) 10/03 NA 3.07 Local Markets HUTCHINSON: (Courtesy of Cargill Grain) Soybeans 8.70 8.77 8.70 8.79 9.03 Milo Soybeans – 9.03 bu. Corn – 3.01 bu. 2.69 bu. New Crop Wheat – 3.69 bu. New Crop Milo - 2.86 bu. 2.71 bu. New Crop Soybean – 9.03 bu. New Crop Corn –3.01 bu. 2.69 bu. HUTCHINSON: (Courtesy ofADM Grain Co.) Wheat – $3.00 bu. 2.76 bu. Milo - $2.86 bu. 2.86 bu. Soybeans – $8.98 bu. 8.73 2.36 bu. 8.83 2.51 bu. NA 2.81bu. 8.78 2.61 bu. 8.73 2.51 bu. 8.63 2.46 bu. NA 2.83 bu. Wheat – 3.00 bu. Milo – 2.86 bu. Corn - $3.06 bu. New Crop Wheat – $3.74 bu. New Crop Milo - $2.86 bu. New Crop Soybean – $8.98 bu. New Crop Corn – $3.06 bu. CHICAGO BOARD OF TRADE Open High Low Settle Chg. WINTER WHEAT 5,000 bu minimum- cents per bushel (CBOT) Dec 16 414 414.75 403 407 -8.50 Mar 17 429 431.25 420 423.50 -8.50 May 17 439 441 431 434 -8.50 Jul 17 449.25 451 441 444.50 -8 Sep 17 460 460.50 457 459.50 -7.50 Dec 17 480.50 481.25 478 481.25 -6.50 Mar 18 496.75 -6 May 18 504.25 505.75 504.25 505.75 -1.25 Est. sales 43,370. Fri’s sales 47,710 Fri’s open int. 
234,561, +2,172 DAILY INDICES 52-Week High Low Index Chg 18,668.44 15,450.56 Dow Jones Industrials 18,253.85 8,358.20 6,403.31 Dow Jones Transportation 8,099.38 723.83 547.22 Dow Jones Utilities 659.28 10,903.86 8,937.99 NYSE Composite 10,690.77 5,342.88 4,209.76 Nasdaq Composite 5,300.87 2,193.81 1,810.10 S&P 500 2,161.20 1,581.53 1,215.14 S&P Midcap 1,542.08 22,785.41 18,462.43 Wilshire 5000 22,503.48 1,263.46 943.09 Russell 2000 1,245.78 -54.30 +20.59 -8.85 -30.97 -11.13 -7.07 -10.18 -73.19 -5.86 YTD 52-Wk %Chg %Chg %Chg -.30 +.25 -1.32 -.29 -.21 -.33 -.66 -.32 -.47 +4.76 +7.87 +14.10 +5.40 +5.86 +5.74 +10.26 +6.31 +9.68 +8.81 +.55 +12.63 +5.14 +10.87 +8.76 +9.12 +7.94 +9.12 MARKET SUMMARY MOST ACTIVE ($1 OR MORE) Name Vol (000) BkofAm 68,220 SiriusXM 45,399 WellsFargo37,921 Twitter 36,420 ChesEng 30,718 AMD 29,850 Nutanix n 27,807 Last Chg 15.63 -.02 4.19 +.02 43.83 -.45 24.00 +.95 6.40 +.13 6.95 +.04 44.46 +7.46 LOSERS ($2 OR MORE) GAINERS ($2 OR MORE) Name Last NovaLfstyl 5.08 FiveStar 2.66 VirnetX 4.10 Itus Cp hrs 4.90 Winnbgo 29.15 Nutanix n 44.46 DynavaxT 12.48 Chg +1.45 +.75 +1.04 +1.01 +5.58 +7.46 +1.99 %Chg +39.9 +39.3 +34.0 +26.0 +23.7 +20.2 +19.0 Name Vol (000) Last RealG rs rs 2.32 -1.49 VanNR pfC 2.72 -.73 VanNR pfA 2.75 -.67 Shineco n 9.13 -2.06 VanNR pfB 2.78 -.57 ChinaHGS 2.00 -.36 TASER 24.29 -4.32 Chg -39.1 -21.2 -19.6 -18.4 -17.0 -15.3 -15.1 Stock Footnotes: lf - Late filing with SEC. n - Stock was a new issue in the last year. pf - Preferred stock issue. rs - Stock has undergone a reverse stock split of at least 50% within the past year. s - Stock has split by at least 20 percent within the last year. wi - Trades will be settled when the stock is issued. vj Company in bankruptcy or receivership, or being reorganized under the bankruptcy law. Appears in front of the name. Dividend Footnotes: b - Annual rate plus stock. e - Amount declared or paid in last 12 months. f - Current annual rate, which was increased by most recent dividend announcement. PE Footnotes: q - Stock is a closed-end fund - no P/E ratio shown. cc - P/E exceeds 99. dd - Loss in last 12 months. Source: The Associated Press. Sales figures are unofficial. Bass Pro to acquire rival Cabela’s for $4.5B BY JOSH FUNK AP Business Writer OMAHA, Neb. –. The deal combines two companies known for their giant destination superstores. It also creates uncertainty about jobs in Cabela’s home state of Nebraska. The combined companies plan to keep some operations in Sidney and Lincoln, Nebraska, but it’s not immediately clear how many jobs might be lost. Cabela’s employs about 2,000 people in the western Nebraska town of Sidney, which has about 7,000 residents. State Sen. Ken Schilz, who represents the area, said the deal is concerning because of the duplication between the two companies’ headquarters that will be eliminated. “We’ll just have to wait and see what Bass Pro does. I’m sure most folks in Sidney are pretty nervous this morning,” Schilz said. Activist investment firm Elliott Management began pushing for significant changes at Cabela’s last fall. Elliott owns 7.4 percent of Cabela’s shares and holds options to buy Mel Evans/Associated Press A large crowd of people line up as they wait for the grand opening of Bass Pro Shops Outpost on April 15, 2015, in Atlantic City, N.J. 
“The story of each of these companies could only have happened in America, made possible by our uniquely American free enterprise system.” Johnny Morris, Bass Pro founder and CEO another 3.8 percent..” Bass Pro founder and CEO Johnny Morris said he hopes to continue growing the Cabela’s brand alongside his privately-held Springfield, Missouri, based chain. “The story of each of these companies could only have happened in America, made possible by our uniquely American free enterprise system,” Morris said. “We have enormous admiration for Cabela’s, its founders and outfitters, and its loyal base of customers.” Capital One will take over running Cabela’s credit card unit as part of the deal, which is backed by $1.8 billion in financing from Goldman Sachs and another $600 million from private equity fund Pamplona Capital. Cabela’s was founded in 1961 when Dick Cabela started selling fishing flies through the mail from his kitchen table with his wife, Mary, and brother, Jim. It now has 85 retail stores primarily in the western U.S. and Canada. Bass Pro got its start in 1971 when Morris began selling high-quality fishing tackle in his dad’s liquor store in Springfield, Missouri. Morris developed a following in the Ozarks region – its lakes and rich streams a haven for anglers – created the Bass Pro Shop Catalog in 1974 and opened the first of his now 99 stores in Springfield seven years later.. Richard Drew/Associated Press The American flag flies above the Wall Street entrance to the New York Stock Exchange. Stocks fall with drops in real estate, utilities BY BERNARD CONDON AP Business Writer NEW YORK –.” The Hutchinson News Tuesday, October 4, 2016 B7 COMICS Zits Beetle Bailey Dilbert Garield Hi and Lois Tundra Red Rover Blondie Non Sequitur Baby Blues Pickles Dustin THE AWARD-WINNING PRINT & ONLINE FAMILY FEATURE Puzzle answers, games, opinion polls and much more at: Hey kids, look for Kid Scoop featuring puzzles answers, games, opinion polls and much more in Sunday’s Comics section of The Hutchinson News. Rubes THAT SCRAMBLED WORD GAME by David L. Hoyt and Jeff Knurek TYLUR CNERDH Unscramble these four Jumbles, one letter to each square, to form four ordinary words. MAREYD Now arrange the circled letters to form the surprise answer, as suggested by the above cartoon. Ans: Yesterday’s - Lockhorns (Answers tomorrow) Jumbles: COUGH DRANK WINERY SNAPPY Answer: The mallards were ready to cross the road, now that they had their — DUCKS IN A ROW B8 Tuesday, October 4, 2016 The Hutchinson News SPORTS Photos by Travis Morisse/The Hutchinson News Hutchinson Community College’s Otis Williams outruns Kansas Wesleyan’s Alfred Villalobos for a touchdown in the first quarter Monday at Gowans Stadium. HCC • From Page B1 The first two times Hutchinson touched the ball, the Blue Dragons scored: first a breakaway punt return by Adrian Cross and then a 40-yard run by Otis Williams on the first offensive play for Hutchinson. Simply, Hutchinson was special Monday night. The Blue Dragons racked up three special teams touchdowns. Cross added his second of the game in 65-yard fashion to start the second quarter and V’onte Williams-McRoy crashed the party with a 53-yard punt return of his own. Cross’ second return broke a school record for the most returns for a touchdown in a game. “It felt good,” Cross said. “Coach’s been telling me all week you’re going to get one in the end zone, so I had to take one there. I was working on my vision. 
I looked back to the other side, saw a free lane, and it was off to the races.” The Blue Dragons led 45-0 after Cross’ second return. Sophomore Morgan Wheeler received the first carry of his career in the second quarter. Wheeler broke a 14-yard run on his second carry that caused the Blue Dragon sideline to erupt. Wheeler spun off a tackle and bounced off another before he was tripped up. “I was nervous, honestly, but it was fun,” Wheeler said after the game. “It was great. Everyone has been waiting on that run since we put it in, in the second week. I hit the spin and thought I was going to go for it, but I was brought down. I was pumped. Everyone was excited.” HCC’s Adrian Cross returns a punt for a touchdown against Kansas Wesleyan in the first quarter. “Coach’s been telling me all week you’re going to get one in the end zone, so I had to take one there.” Adrian Cross Wheeler finished the game with two carries for 18 yards. Sam Corona led the Blue Dragons on the ground with 15 carries for 115 yards and two touchdowns. THE QUICK HIT KEY STAT: Luke Niemeyer’s 46-yard field goal in the first quarter was the second longest made field goal in school history (47). TURNING POINT: The pregame warm-up. The size differential between the two teams was mind-boggling. The Blue Dragon defensive line lived in the KWU backfield. Jezel Parra is tripped up by Kansas Wesleyan’s Carlos Mendoza in the first quarter. Notes • From Page B1 would. We’re still working through that, but it’s improving every week. All the players are definitely confident in their game, which is critical, so we need to keep building on that.” Hutchinson returns home Wednesday for a conference match against Pratt. The Blue Dragons will enter the Missouri State West Plains tournament beginning Friday. Soccer close to clinching The Blue Dragons are on the brink of clinching the Jayhawk West championship, but Garden City is making things as difficult as possible. Hutchinson is 5-0 (15 points) in conference play, with three games remaining. Garden City is right behind at 4-1 (12 points). Hutchinson plays Wednesday at winless Pratt, while Garden City welcomes third-place Barton (3-2) on Wednesday. A Hutch win and Barton win clinches at least a share of the conference title. Should Garden City win, however, the wait for Hutch to clinch could take a while. Hutchinson’s ensuing three games are non-conference games – two against Hesston and one against Northwest Kansas Tech. The Blue Dragons play Pratt at home on Oct. 18 and play at Garden City on Oct. 21. In Region 6 West play, Hesston is tied with Hutchinson for first with 18 points each. – Brad Hallier Autumn golf The Blue Dragons golf team finished sixth overall after wrapping up its second fall tournament on Sept. 27 at the Missouri Southern Fall Invitational at Shangri-La Country Club. Sophomore Wil Arnold shot a 2-under-par 70 in the third round and finished tied for 13th. Jack Lanham tied for 26th, shooting 8-over 224. Matt Percy and Doug Rios-Ceballos tied for 29th, both shooting 10-over 226. Cole Gritton shot 1-over 73 in the final round, which was second-best PLAYERS OF THE GAME: How about special teams? Adrian Cross and V’onte Williams-McRoy combined for three punt returns for touchdowns. HE SAID IT: “It’s good to get clicking and I feel like we gained some confidence these two weeks. 
We knew what these teams were that we just beat, but at the same time, I think it’s timely for our team and where we are right now.” – Hutchinson coach Rion Rhoades, on HCC’s offense the past tow games NEXT: The Dragons (4-2, 1-2) will have a much-needed break after playing two games in three days. Ellsworth Community College comes to Gowans Stadium Oct. 15. HUTCHINSON 64 KANSAS WESLEYAN JV 0 Hutchinson 38 13 0 13 — 64 Kansas Wesleyan 0 0 0 0 — 0 First quarter HC—Adrian Cross 48 punt return (kick good), 13:55 HC—Otis Williams 40 run (kick good),12:30 HC—Cam Jones 32 pass to Gary Cross (kick good), 9:15 HC—Luke Niemeyer 46 kick, 7:52 HC—Chaz Capps 1 run (kick good), 2:48 HC—V’Onte Williams-McRoy 53 punt return (kick good), 1:01 Second quarter HC—Adrian Cross 65 punt return (kick good), 13:23 HC—Tre Grifin 25 run (kick failed), 10:52 Fourth quarter HC— Sam Corona 7 run (kick good),11:53 HC—Corona 25 run (kick ailed), 4:25 First downs 17 1 Rushes-yards 32-255; -31 Passing yards 73 18 Comp-Att-Int 4-6-0 8-14 Fumbles-lost1-1 3-2 Punts-Avg 0-0 9-28.2 Penalties 12-136 5-56 INDIVIDUAL STATISTICS RUSHING—Hutchinson, Sam Corona 15-115-2, Otis Williams 6-66-1; Tre Grifin 3-31-1, Morgan Wheeler 2-18, Chaz Capps 2-13-1, Garret Haskins 2-7, Jezel Parra 1-5; Kansas Wesleyan JV, Jaylan Alexander 4-9, Brett Boyles 4-4, Calvin Ainsworth 1- -2, Spensirr Howard 1- -3, Cedric Whitaker 1- -5, Cody Springsguth 5- -34. PASSING— Hutchinson, Garret Haskins 2-3-0 12, Cam Jones 2-2-0 1-61, Chaz Capps 0-1. RECEIVING—Hutchinson, Gary Cross 1-36-1, Jezel Parra 1-25, Jordyn Steinike 1-8, Sam Corona 1-4. in the round for the Blue Dragons. Gritton tied for 36th. Sophomore Mac McNish tied for 40th, shooting 14over 230. Blue Dragons football back on track After getting off the schneid against Iowa Central, the victory against Kansas Wesleyan JV was a given. The Blue Dragons’ 38 points against Iowa Central had been their highest scoring output of the season. The last time Hutchinson had scored 30-plus was in the first game of the season against Coffeyville (34). Hutchinson rushed for a season-high 417 yards, with Otis Williams and Tre Griffin rushing for over 120 yards (189, 126, respectively). Hutchinson did the same Monday evening, dismantling Kansas Wesleyan in a 64-0 shutout. Hutchinson scored 38 points in the first quarter.
https://issuu.com/wskell/docs/20161004
CC-MAIN-2017-04
refinedweb
34,804
76.32
Parallel Python programming is a mode of operation in which different tasks are executed simultaneously on multiple processors of the same computer. Parallel processing is done to reduce the overall processing time. The multiprocessing module in Python runs independent parallel tasks in separate subprocesses, which lets you make use of all the processors in your machine; each process runs in its own separate memory space. By the end of this tutorial, you will know:

- How to structure the code and understand the syntax to enable parallel processing using multiprocessing
- How to implement synchronous and asynchronous parallel processing
- How to parallelize a Pandas DataFrame
- How to solve different use cases with the multiprocessing.Pool() interface

Also Read: Python execute shell command: How to run them

How many parallel processes can you run?

The maximum number of processes you can run at a time is limited by the number of processors (cores) in your computer. If you don't know how many processors your computer has, you can check it with the cpu_count() function in multiprocessing:

import multiprocessing as mp
print("Number of processors: ", mp.cpu_count())

What are asynchronous and synchronous execution?

There are two types of execution in parallel processing: synchronous and asynchronous.

Synchronous execution: the processes are completed in the same order in which they were started. This is achieved by locking the main program until the respective processes have finished executing.

Asynchronous execution does not involve locking. As a result, the order of the results can get mixed up, but the work usually finishes faster.

The multiprocessing module provides two main objects to execute functions in parallel:

1: Pool class
   a: Synchronous execution: Pool.map() and Pool.starmap(), Pool.apply()
   b: Asynchronous execution: Pool.map_async() and Pool.starmap_async(), Pool.apply_async()
2: Process class

Let's talk about a typical problem and implement parallelization using the techniques above. In this blog we will focus on the Pool class, because it is the most convenient to use and covers the most common practical applications.

Question: count how many numbers in each row lie within a given range

In this problem you are given a 2D matrix, and for each row you have to count how many numbers lie within a given range. The code below prepares the data.

import numpy as np
from time import time

# Prepare data
np.random.seed(100)  # seed the generator (np.random.RandomState(100) on its own has no effect)
arr = np.random.randint(0, 10, size=[200000, 5])
data = arr.tolist()
data[:5]

Here is a solution without parallelization. Before parallelizing anything, let's see how long the plain version takes: we call the function howmany_within_range() on every row to count how many numbers fall within the range and return the count.

# Solution Without Parallelization
def howmany_within_range(row, minimum, maximum):
    """Returns how many numbers lie within `maximum` and `minimum` in a given `row`"""
    count = 0
    for n in row:
        if minimum <= n <= maximum:
            count = count + 1
    return count

results = []
for row in data:
    results.append(howmany_within_range(row, minimum=4, maximum=8))

print(results[:10])
#> [3, 1, 4, 4, 4, 2, 1, 1, 3, 3]

How can you parallelize any function?
The simplest way to parallelize a function is to run it multiple times, in parallel, on different processors. To do this, you initialize a Pool with n processes and pass the function you want to parallelize to one of the Pool's parallelization methods.

multiprocessing.Pool() provides the apply(), map() and starmap() methods to run any function in parallel. Both map and apply take the function to be parallelized as their main argument. The difference is that map() accepts only a single iterable as argument, whereas apply() also takes an args parameter holding the arguments that are passed to the 'function-to-be-parallelized'. From this description you can see that map() is better suited to simpler iterables, and it also does the job faster. We will get to starmap() once we have seen how to parallelize the howmany_within_range() function with apply() and map().

Parallelize using Pool.apply()

The program below parallelizes the howmany_within_range() function using multiprocessing.Pool().

# Parallelizing using Pool.apply()
import multiprocessing as mp

# Step 1: Init multiprocessing.Pool()
pool = mp.Pool(mp.cpu_count())

# Step 2: `pool.apply` the `howmany_within_range()`
results = [pool.apply(howmany_within_range, args=(row, 4, 8)) for row in data]

# Step 3: Don't forget to close
pool.close()

print(results[:10])
#> [3, 1, 4, 4, 4, 2, 1, 1, 3, 3]

How to parallelize using Pool.map()

As already mentioned, Pool.map() accepts only one iterable as argument. You can work around that by modifying howmany_within_range() so that minimum and maximum have default values. This gives a new howmany_within_range_rowonly() function that accepts only a row as input. It is not the nicest example of map(), but it shows how map differs from apply.

# Parallelizing using Pool.map()
import multiprocessing as mp

# Redefine, with only 1 mandatory argument.
def howmany_within_range_rowonly(row, minimum=4, maximum=8):
    count = 0
    for n in row:
        if minimum <= n <= maximum:
            count = count + 1
    return count

pool = mp.Pool(mp.cpu_count())
results = pool.map(howmany_within_range_rowonly, [row for row in data])
pool.close()

print(results[:10])
#> [3, 1, 4, 4, 4, 2, 1, 1, 3, 3]

How to parallelize using Pool.starmap()

In the previous example we had to redefine howmany_within_range() so that a couple of its parameters take default values. How can you avoid doing this? Just like Pool.map(), Pool.starmap() accepts only one iterable as argument, but with one basic difference: each element of that iterable is itself an iterable of arguments. In other words, Pool.starmap() is a version of Pool.map() that accepts arguments.

# Parallelizing with Pool.starmap()
import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())
results = pool.starmap(howmany_within_range, [(row, 4, 8) for row in data])
pool.close()

print(results[:10])
#> [3, 1, 4, 4, 4, 2, 1, 1, 3, 3]

Asynchronous parallel Python programming

apply_async(), map_async() and starmap_async() are the asynchronous equivalents of the methods above. They let you execute the processes asynchronously: the next piece of work can start as soon as a worker is free, and the results are collected as they finish, without regard to the order in which the calls were started.
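Of these, map_async() and starmap_async() are mentioned but not shown anywhere else in this post; apply_async() is covered in detail in the next section. Below is a minimal sketch of Pool.starmap_async(), reusing the howmany_within_range() function and the data prepared earlier. Unlike the apply_async() approach that follows, the single AsyncResult returned by starmap_async() already preserves the input order, so no manual sorting is needed.

# Asynchronous run with Pool.starmap_async() (illustrative sketch, not from the original post)
import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())

# starmap_async() returns immediately with an AsyncResult object
async_result = pool.starmap_async(howmany_within_range, [(row, 4, 8) for row in data])

# ... the main program could do other work here while the workers run ...

# get() blocks until all workers are done; the results keep the input order
results = async_result.get()
pool.close()
pool.join()

print(results[:10])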
How to parallelize with Pool.apply_async()

apply_async() is very similar to apply(), except that you need to provide a callback function that tells the library where the computed result should be stored. A catch is that the workers finish in no particular order, so the results can come back out of order. A workaround is to redefine a new howmany_within_range2() that accepts and returns the iteration number (i) as well, and then sort the final results.

# Parallel processing with Pool.apply_async()
import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())
results = []

# Step 1: Redefine, to accept `i`, the iteration number
def howmany_within_range2(i, row, minimum, maximum):
    """Returns how many numbers lie within `maximum` and `minimum` in a given `row`"""
    count = 0
    for n in row:
        if minimum <= n <= maximum:
            count = count + 1
    return (i, count)

# Step 2: Define callback function to collect the output in `results`
def collect_result(result):
    global results
    results.append(result)

# Step 3: Use loop to parallelize
for i, row in enumerate(data):
    pool.apply_async(howmany_within_range2, args=(i, row, 4, 8), callback=collect_result)

# Step 4: Close Pool and let all the processes complete
pool.close()
pool.join()  # postpones the execution of next line of code until all processes in the queue are done.

# Step 5: Sort results [OPTIONAL]
results.sort(key=lambda x: x[0])
results_final = [r for i, r in results]

print(results_final[:10])
#> [3, 1, 4, 4, 4, 2, 1, 1, 3, 3]

It is also possible to use apply_async() without a callback function. If you don't provide a callback, you get back a list of pool.ApplyResult objects, one per call, each holding the output value of that process. You then call the get() method on each of them to retrieve the final results.

# Parallel processing with Pool.apply_async() without callback function
import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())
results = []

# call apply_async() without callback
result_objects = [pool.apply_async(howmany_within_range2, args=(i, row, 4, 8)) for i, row in enumerate(data)]

# result_objects is a list of pool.ApplyResult objects
results = [r.get()[1] for r in result_objects]

pool.close()
pool.join()

print(results[:10])
#> [3, 1, 4, 4, 4, 2, 1, 1, 3, 3]

Conclusion:

In this blog we have shown several ways to implement parallel Python programming with multiprocessing. The techniques pay off most on larger machines with many processors; that is where the benefits of parallel processing really show. A short postscript below sketches the Pandas DataFrame case that the introduction mentioned. Hope you find this information useful. Thank you for the read.
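Postscript: the introduction promises a note on parallelizing a Pandas DataFrame, which the post never gets to. Below is a minimal sketch of one common pattern, added here for illustration (the helper function and the chunking choice are not from the original post): split the DataFrame into one chunk per process and let Pool.map() work on the chunks, then concatenate the partial results.

# Sketch: parallelizing row-wise work over a Pandas DataFrame (illustrative addition)
import multiprocessing as mp
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 10, size=[200000, 5]))

def count_in_range_per_chunk(chunk, minimum=4, maximum=8):
    # apply the row-wise count to one chunk of the DataFrame
    return chunk.apply(lambda row: sum(minimum <= n <= maximum for n in row), axis=1)

if __name__ == '__main__':
    n_proc = mp.cpu_count()
    chunks = np.array_split(df, n_proc)        # one chunk per process
    with mp.Pool(n_proc) as pool:
        partial_results = pool.map(count_in_range_per_chunk, chunks)
    result = pd.concat(partial_results)        # same row order as df
    print(result.head(10))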
https://hackanons.com/2021/06/parallel-python-programming-basic-information.html
CC-MAIN-2021-25
refinedweb
1,536
58.58
Since in web2py table definitions are done in the models, and models are executed on every request, tables are sometimes defined even when you do not need them. I spent a few hours looking for a solution in order to minimize the code in the models and to work easily when there are dependencies, in this case between table definitions. This solution is inspired by Movuca and its presentation.

In this small app you can see how the models have been defined in a class inside the modules folder, so that only the tables you need for each request are loaded or defined.

The example below will define the tables "comments" and "post". Both are defined because there is a dependency between them.

from model import DataBase
DataBase(db=self.db, auth=self.auth, request=self.request, tables=['t_comments'])

The example below will only define the table "docs":

from model import DataBase
DataBase(db=self.db, auth=self.auth, request=self.request, tables=['t_docs'])

If you do not specify any table, the module will define all of them by default. This is useful for the first request of the app, or if you want to populate the entire database.

DataBase(db=self.db, auth=self.auth, request=self.request)

Example of the app where you can test a few options: when loading the "t_docs" table content, you can see in the terminal that only "t_docs" is loaded; the "t_comments" and "t_post" tables are ignored for this request.

0 buezi 5 years ago Hi Jose, That sounds really nice! Question, how does it differ from the lazy table loading option in the core framework? Does it replace it or does it work in parallel with it? best regards Patrick replies (1)
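The slice shows how the DataBase class is called, but not what it looks like inside. Below is a minimal sketch of what such a class in modules/model.py could contain. It is illustrative only: the table names, fields and the dependency map are assumptions, the Field import path varies between web2py versions, and the original app's code may differ.

# modules/model.py -- illustrative sketch, not the original app's code
from gluon import Field  # import path may vary by web2py version

class DataBase(object):
    # map each table to the tables it depends on
    DEPENDS = {'t_comments': ['t_post'], 't_post': [], 't_docs': []}

    def __init__(self, db, auth, request, tables=None):
        self.db = db
        self.auth = auth
        self.request = request
        # no list given: define everything (useful on the first request)
        names = tables if tables else list(self.DEPENDS)
        for name in names:
            self.define(name)

    def define(self, name):
        if name in self.db.tables:           # already defined in this request
            return
        for dep in self.DEPENDS.get(name, []):
            self.define(dep)                 # define dependencies first
        getattr(self, 'define_' + name)()

    def define_t_post(self):
        self.db.define_table('t_post', Field('title'), Field('body', 'text'))

    def define_t_comments(self):
        self.db.define_table('t_comments',
                             Field('post', 'reference t_post'),
                             Field('body', 'text'))

    def define_t_docs(self):
        self.db.define_table('t_docs', Field('name'), Field('doc', 'upload'))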
http://www.web2pyslices.com/slice/show/2033/define-tables-in-modules
CC-MAIN-2021-10
refinedweb
283
65.32
Hi everybody, I'm having a problem with my compiler and it goes like this: I've included the glib.h library like this: #include </usr/include/glib-2.0/glib.h> Now, the compiler seems to find glib.h, and seems to identify the type GList which is defined in GLIB. So far so good. BUT, when I try to use a g_list function (e.g. g_list_append), I get an error message that it cannot find the definition of the function. Now, the way glib.h works is that there are numerous inclusions in it referring to different g*.h files that reside under a directory called "glib" which, like glib.h, resides in directory "glib-2.0". The point is that the compiler doesn't seem to find any of the included files g*.h under directory "glib". The error messages that I'm getting are: /home/knoppix/workspace/glib.h:30:26: error: glib/galloca.h: No such file or directory /home/knoppix/workspace/glib.h:31:25: error: glib/garray.h: No such file or directory /home/knoppix/workspace/glib.h:32:30: error: glib/gasyncqueue.h: No such file or directory ... ... /home/knoppix/workspace/glib.h:49:24: error: glib/glist.h: No such file or directory /home/knoppix/workspace/glib.h:50:26: error: glib/gmacros.h: No such file or directory /home/knoppix/workspace/glib.h:51:24: error: glib/gmain.h: No such file or directory ... and it goes on and on... Does anyone know why it's doing this and how I can solve it? Thanks!
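For anyone hitting the same errors: glib.h includes its sub-headers (glib/galloca.h and friends) relative to the compiler's include search path, so giving an absolute path to glib.h is not enough. The usual fix is to include <glib.h> normally and let pkg-config add the GLib include directories (and libraries) to the compile command. A minimal sketch, assuming gcc and the glib-2.0 development package are installed:

/* hello_glib.c -- include glib.h the normal way and let -I flags find it */
#include <glib.h>

int main(void)
{
    GList *list = NULL;

    /* build a one-element list and print its length */
    list = g_list_append(list, "hello");
    g_print("%u item(s)\n", g_list_length(list));
    g_list_free(list);
    return 0;
}

Compile and link with:

gcc hello_glib.c -o hello_glib $(pkg-config --cflags --libs glib-2.0)

pkg-config expands to the -I flags for both /usr/include/glib-2.0 and the directory holding glibconfig.h, plus -lglib-2.0, which also takes care of the "cannot find g_list_append" link error.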
http://cboard.cprogramming.com/c-programming/90236-problem-glib-printable-thread.html
CC-MAIN-2014-15
refinedweb
265
64.37
I wrote this prefs document because I could not find effective prefs documentation for the variety of technical problems I encountered. It needs to be moved from (which is getting out of the mozilla-related content business). It also needs to be reviewed by an actual prefs engineer for accuracy. Can someone declare an ideal resting place in the mozilla-org tree, so if I get some time and do the cleanup my self, it will have a home? iirc, chris has been working on documenting the preferences. she might be good reference here. a gazillion developers touch the prefs --i'm cc'ing a couple here (brian and seth) who might have some comments... -->cwozniak. >What is a preference? Preferences are not JavaScript variables. Preferences are stored in a (more or less) human readable form which is parsed by the JavaScript parser. Some day we would like to remove this dependency because JavaScript is overkill for this need (and opens us up to security issues.) >Prefernce load order: That should be PreferEnce ;) Again preferences are not JavaScript variables, but they will be resolved in favor of the last entry read... unless locking (AutoConfig) is involved. Let's not go there just now... I don't really understand item 3 "Use default, hard-coded value for prefs". Assuming that a value exists in *any* preference file, it will override any value set in the code. >Because prefs.js is loaded last... changes to a profile should made to the prefs.js. prefs.js is a generated file. Users should not be messing with it unless they are absolutely certain of what they are doing. "user.js" is actually the last file loaded. This file is only read in and never written out, and is where the user should be installing their own personal preferences (the default homepage is a common one). This way user can create a new profile, drag their "user.js" file into it, and immediately be in a familiar environment. The other advantage is that you can have comments or commented out preferences in user.js and they won't be purged the next time the file is written out. >Modifying preference files: Again, users shouldn't really be mucking about in prefs.js. >Systems administrators can modify <mozilla>/defaults/prefs/ System Administrators would generally use AutoConfig to do this sort of thing (through CCK). Hacking individual installations is tedious. >Care should be taken in modifying values in the "default/prefs" files... If you (and here you means a developer) changes a value in "default/prefs", that becomes the new default. If that value is set to the same value that is in prefs.js, then yes, the value will be removed from prefs.js the next time it is saved. If, however, you change the value from "0" to "3", and prefs.js has a user value of "1", the user value will remain in effect because it is *not* the same as the default value. Hi I'm pulling together the Preference Reference that will help NCADM users do customizations that they cannot do using the NCADM tool (rtm around 8/30). The NCADM tool is a wizard that includes an Advanced Preferences page. Most of the customizations that a majority of our customers would want to do can be accomplished either via the wizard or the Advanced Preference Editor. CCK is a scaled back version of NCADM (or NCADM is an enhanced version of CCK). CCK is available for download; NCADM will be available for sale. Also, there are appendixes in the NCADM guide (still in process, almost ready for review) that deal with the preference architecture and remote administration. 
That said, it would be good at some point to sync up the commercial doc with the open source doc so that we are sending a uniform message. Let's put our heads together. Chris I am not sure this document doesnt have anything we already have in our current preferences documentation. However if you guys/gals could get together with Christine and put out an overall document that would be great. If you disagree please let me know. keyser: Where's the current documentation? I would have never written this if someone had given me the info I wanted, I had to figure all this out myself and use it to help answer a variety of networking questions I was being asked. Believe me, I got a day job, and plenty of bugs to go with it. *** Bug 178685 has been marked as a duplicate of this bug. *** Is this bug still valid? We already have btw, the packetgram.com url does not work; reporter, can you post the doc as an attachment? and is this bug a request for Help file content, or a request for mozilla.org content? The packetgram URL does work for me. I did not understand the relationship of user.js and prefs.js until now. I'll make a draft that has the updated changes, and attach it here, then delete the file from the packetgram system. I think this bug is still valid. The prefs documentation mentioned () was nigh impossible for me to find. I searched the help system for prefs.js and nothing came up. Searching mozilla brought me here, which brought me to it. Plus, it's quite *nix-specific. (e.g. Customizing Mozilla) (And I still can't find the pref for having MozillaMail check all IMAP folders for new messages. Scanning about:config for it now..) Matthew: take a look at the document in the URL field, and tell me if it is missing anything you wanted to know. brian: I know you are probably gone, but I finally sat down today and made a lot of the helpful corrections you provided. The URL is still the same. I need to re-read your comments again, do a final re-write+spellcheck, then I'll be punting this document into mozilla. I'm going to remove the file and pref specific info from this file, and post it here. I've reopened the dupe for the per-file, per pref info. Done. cvs -z9 commit -m "added "A Brief Gude to Preferences"" index.html briefprefs.html (in directory C:\HOME\mozilla-org\html\catalog\end-user\customizing\) Checking in index.html; /cvsroot/mozilla-org/html/catalog/end-user/customizing/index.html,v <-- index.html new revision: 1.6; previous revision: 1.5 done RCS file: /cvsroot/mozilla-org/html/catalog/end-user/customizing/briefprefs.html,v done Checking in briefprefs.html; /cvsroot/mozilla-org/html/catalog/end-user/customizing/briefprefs.html,v <-- briefprefs.html initial revision: 1.1 This is only about the preferences system, not the actual prefs themselves, right? Adding "generic" to the summary, assuming I am right. I thought this bug was about documenting the meaning of the individual prefs. Not really. That's the funny thing, I wrote the doc, I wrote the bug, I put the URL in the bug. Nobody reads the doc. There used to be some pref-specific comments, but that was because the document lived on packetgram, which was a network-troubleshooting web site that I run. Whatever. Ben, if you can review this doc, esp checking to see if I got bnesse's feedback correct, I'll take ownership of this bug, and then mark it fixed. If this document just does not do what it should, comment away. 
Created attachment 121869 [details] briefprefs.html clarify things a little bit and add some details on the preference system. to-do: - example of how to change preferences by code at run-time - naming conventions (bug 58816) - pref-by-pref reference Ben, are you the module owner of pref lib? Created attachment 121908 [details] briefprefs.html (+ pref design spec) Nope. I wrote that document because I couldn't live without it, and needed it as the basis of: If you sensed the focused "get in, say some stuff, and get out" style of this document, now you know why. My current job is to test just networking features. You've done a great job of improving the documentation. I have not reviewed the changes in detail, but I liked what I saw. Unfortunately, I won't be able to do a through reading anytime soon. I also am not the ideal person to review a document like this. If you think this is ready to go, check it in, and we'll take changes from people interactively. I've found that it is very hard to get people to review changes before you make them. A comment about the section "Naming conventions" in attachement 121908. It uses several "capability.policy.default.foo" as example on how to name preferences. In fact, those preferences are named as required by the object names in class info and the method names in IDL. Of course, those have a naming convention, but this is not a preference naming convention. This suggests that caps prefs really have a option on how to name the preference, which doesn't exist. Created attachment 121964 [details] briefprefs.html Axel, those preferences exist before any naming rule exist What I am trying to do is to look at how existing preferences are named, and then infer from them a naming scheme that is compatible with current ones. The scheme needs to be useful but not accurate as far as the history of naming is concerned. in this version I have added many more detail on how preferences are handled and many contextual links to LXR. I've also added some code examples. Created attachment 122086 [details] 600+ preference names, descriptions and valid options This may prove helpful in creating comprehensive documentation for Mozilla preferences. As a further explanation to comment #22: I currently run a project over at preferential.mozdev.org which aims to: (1) provide a consistent GUI interface to all Mozilla prefs (this is now subwhat superceded by about:config) and, more importantly, (2) document all Mozilla preferences I don't want to duplicate effort here, so if there's any way I can contribute to the Preferences documentation effort, I'd love to help. I have attached my project's source preferences file (which I later convert into two RDF files using a Perl script). It doesn't document all preferences, but has so far recorded about 600 preferences and their options. I'm happy to massage this into a suitable format for someone if there is interest. Daniel: this document is great! Lots of things I've wanted to know, but never would have been able to cover myself. a couple comments: "arises" could be "arising" "In Netscape product" should be "In Netscape products" "On application exit, all user-set value" should be "values" (See more information) has the HREF extending over the last ")" The paragraph that is struck out about developers changing prefs is good info, but I think it should be moved into the same area you had your sample code, "Accessing preferences programmatically" I've read up to the namespace section, I'll try to digest the rest when I can. 
some changes checked in I've rewritten the doc for users & administrators. reference to developers removed (will be moved to a separate doc). The doc will be temporary (should have been in /docs instead of catalog); I'll find a more appropriate place once I finished my other docs on user profile. Created attachment 125156 [details] preference reference for developers no idea why I bothered it, but here we go, relieving myself of this doc Ben - in 'What is a preference?' I believe you have a type in spelling of prefs.js as perfs.js - only a small thing but a potential trap for a newbie. We have delayed this long enough -> P1 I just realised that an arbitariliy named file put on /usr/lib/mozilla-1.2.1/defaults/pref will be read, too. That's for mozilla-1.2.1, Red Hat release. If this is standard behaviour, it ought to be documented. Also, I'm trying to include default mail account settings in my site config, but can't figure out how. The problem is setting mail.server.<server name>.userName This is of course user specific, but the value is always identical to the Unix/Linux login name, so it ought to be possible to set up the value in all.js or similar. Oh, and mail.identity.<id>.useremail is needed as well, but that can be derived directly from the above info. maybe I could use the autoConfig method or .cfg file along with getenv(), but I haven't been able to get those methods to work yet (and getenv() is not available in from .js files.) Richard: I made a quick scan and fixed one "perf" mispelling. Thanks for the feedback! -> ownership to me. I've updated the document also to remove Netscape references. I think this document (mostly due to Daniel), is really in good shape now, so we should start talking to other people that have less updated prefs docs, and have them link to us. Here's a quick list I found: (no link to us). FYI, there's a new preference being added in bug 86193. Some comments: Firstly the guide doesn't explain how to set the default homepage. I've tried setting: pref("browser.startup.homepage", ""); but firefox just dies if I put this in unix.js Also, the guide suggests that sysadmins set things in all.js. This doesn't work for most settings (at least on Linux). To set fonts, network proxies and paper sizes, I had to use unix.js. > Firstly the guide doesn't explain how to set the default homepage. that's bug 178685 > Also, the guide suggests that sysadmins set things in all.js. This doesn't > work for most settings (at least on Linux). To set fonts, network proxies and > paper sizes, I had to use unix.js. The doc says the platform-specific file (e.g unix.js) is loaded after all.js . As Mozilla code changes constantly, we don't want to document what prefs are loaded in platform-specific file. I'm not spending a lot of time w/ firefox, so if people can figure out more of what goes on there, lets start a new document or make changes to this one. The document in the URL is outdated. First, the directory structure for prefs has changed. Also, There's no information about how to lock a preference in the UI, which can be critical for admins, and I haven't found a right way to do this, just ignore the changes but the preference is reachable, and unknown value. Please work a little more on this, even with basic examples (I've tried changing the cache disk size, if you want to take it as example), together with an additional file, in order to separate from original .js files. The proper way to change preferences in a profile is via about:config these days. 
As for the other items you mentioned, I'm not reading every single prefs bugs. Is there a bug number you can provide, or can you be more specific? For example, I don't know what you mean about the directory structure... (In reply to comment #37) > The proper way to change preferences in a profile is via about:config these days. This contrasts with what's told in the url, since they're supposed to be guidelines for corporation admins to provide specific default settings for their corporations, avoiding spending a lot of time doing repeatedly and by hand the same setting once and again. > As for the other items you mentioned, I'm not reading every single prefs bugs. > Is there a bug number you can provide, or can you be more specific? For example, > I don't know what you mean about the directory structure... Take a look at Mozilla 1.7.x directory structure, which is different from previous versions, and the .js preferences files aren't in the same paths as before. Way back when the very first version of Netscape was released it was very messy for an administrator to set site defaults, and it has got harder and harder as the years have gone by. The document is good and describes a nice simple scheme, but I'm not convinced it works any more (a previous comment hinted about new directory structures). I have created a file defaults/pref/all.js on a Windows installation of Firefox 1.0, but it seems to be ignored both for new and existing users. Have I done something silly or is this other people's experience? This stuff is important! If you want people to get a good impression of Mozilla you need to help the guy who administers a large site make life as easy as possible for the thousands of users under his or her control. > The document > > is good and describes a nice simple scheme, but I'm not convinced it works any > more (a previous comment hinted about new directory structures). I have created > a file defaults/pref/all.js on a Windows installation of Firefox 1.0 Hmmm. I think it might just be a matter of updating firefox.js instead of all.js. I fully agree with your comments, though. Actually, this may not be a documentation issue at all; perhaps the problem is not that the mechanisms are undocumented, but that they are unnecessarily complex. I mean, why do we have to mess around with byteshift etc.? And why do you have to set the config file name in the first place? Why doesn't "general.config.filename" default to a file that is normally not there, and may thus be added by a system admin without having to update file from the distribution? Or even better, how about a site config *directory* where all files are read as part of the init sequence? (In reply to comment #40) > Hmmm. I think it might just be a matter of updating firefox.js instead of all.js. > Sadly this doesn't seem to work either. I quite agree with your comments: the problem is that the configuration mechanism is (a) too complex (b) doesn't conform with the documentation (i.e. is BUGGY!) (c) keeps on changing from release to release. I have been installing netscape and its successors on multi-user Unix systems ever since Netscape first came out, and there has never been a release that didn't force me to write a wrapper script to apply my site configurations. My actual configuration lines have hardly changed at all, apart from when the preference files switched to Javascript syntax, but every release I have wasted hours fathoming out where to put them. 
I've now hit a dead end trying to install it on a Windows system, because my Windows scripting skills are pretty non-existent: if there isn't a simple interface then I'm stuck. I'm still here, and can do some updating, although I don't use firefox much (I use a lot of camino and mozilla). The best way to keep this document updated is to reference bugs that describe prefs system changes. I rarely get updates from the developers, so I depend on contributors to point me in the right direction. Some feedback 1) there're also [mozilla app directory]/greprefs/*.js which are common among all gecko applications [except Minimo, there's a bug on that, iirc], and which also used to set default values. 2) in firefox extensions can have their own default preferences set in [extension dir]/defaults/preferences/*.js, where [extension dir] is ususally [profile]/extensions/[extension GUID], but may also be in [app dir]/extensions/[GUID] for global installations, I think. 3) "A preferences file is a simple JavaScript file" is quite confusing. It isn't Javascript file anymore. It used to be, that's where .js extension and some of syntax come from. But it's not JS, it is parsed not by js but by another (simpler) parser. 4) A link to (unless something better is put on moz.org) and a notice about about:config <> would be nice. 5) nit: "In the profile directory are two user pref files: prefs.js and user.js. prefs.js is automatically generated ". A new line after the first sentence would make the reading easier, I think. (ie. "user.js\n prefs.js") The dot separating two sentences is not visible enough (as there are too many dots nearby :) 6) "None <a href="#filew-def-special">platform-specific</a> .js". The target anchor does not exist in the document. 7) "Usually when the user specifically commits a preference change via user interface such as the Preferences dialog, the application saves the change by overwriting prefs.js". Suggesting <em>Ususually</em>. For example, changing it from about:config doesn't seem to rewrite prefs.js on current trunk firefox build. Clicking ok in options dialog does rewrite it though. In fact the rewrite only happens *only* when nsIPrefService::savePrefFile(null) is called. Afaik, many/most extensions don't do that. 8) "Note: <b>Note</b> preference names are case-sensitive." 9) "If you have Mozilla 1.4". Moz 1.4 or later or Firefox says "feedback and comments here", so here is mine I can't find out what fcc_folder_picker_mode means. I've come across it in my thunderbird installation, while trying to debug something else. But I don't know what it means, and there does not seem to be a cannonical reference to all these preferences anywhere in the documenation. I'd like to see a full, maintained, list of what they all are and mean. Does anyone know where it is? I've recently had time to re-read this document, in the context of a new-found curiosity about Camino... I've made some gramatical and link cleanup changes, and also moved about:config to the top, in refrence to how to make changes. I think this servers the most common audience. Here's my personal todo list: users.js - should emphasis the advantage of being able to use comments (per #3) I've reviewed all the comments up to #36, about the directory info being out of date. I'll look into that now. Since the original document, I've learned enough c++ to read some of the pref loading code. Changes: 1- added updated discussion about greprefs, application prefs. TODO: extensions? 
2- added updated list of application pref files, based on examining released versions on my Mac. 3- added further emphasis on about:config, by explaining localization features. 4- re-writing file changing sections to simply say how changing files affects the behavior. If you are a sys admin, a hacker, doing a distro, coding, etc, you get to figure out what files to hack yourself. This increases the learning curve for people who are just trying to hack *a* preference, and people hacking pref files. (Also opens door for people interested in a specific app to write app-specific prefs docs....) 5- add discussion of hidden, default-less prefs. I should update the file in the next few days... editing offline right now. Please add a link to this document to the config file documentation, which seems to be at Also, it would be nice if the documentation spoke about what config files are automatically overwritten during an automatic upgrade. I have had problems with customizing the all.js file, and then having all those changes lost the next time there was an upgrade. Using a file like AAALocal_prefs.js might work better. If this bug is assigned to nobody@ then it shouldn't have ASSIGNED status. app_dir is not defined anywhere in the document and is used once. See also (probably outdated) (and what it resends to) ("this page is not complete") (In reply to dustwolfy@gmail.com from comment #49) > app_dir is not defined anywhere in the document and is used once. IIUC that value is not a preference. See also Automatically closing all bugs that have not been updated in a while. Please reopen if this is still important to you and has not yet been corrected. I believe that this request is not INVALID: the problem is well defined, and still exists today. OTOH fixing it might be a lot of low-priority trouble, so WONTFIX might perhaps be an acceptable resolution — one which should, however, be set (or not) by an owner or peer of the module in question, i.e. not by lowly triager me. Reopening for review by Sheppy. This is important and should stay open and be fixed, preferably by a volunteer. FWIW, there exists now a "Config Descriptions" extension for Firefox and SeaMonkey (but not Thunderbird, I don't know why) which fishes the comments in the pref source files and adds those comments as an additional column in about:config. Of course it can't say anything for prefs which are undocumented in the source, but otherwise IMHO it is a must-have for the power user or developer. It might be of some help to whoever (if anyone) decides to work on this bug. As filed, this bug is about "the pref system" not individual prefs. That is already documented, both in nsIPrefBranch.idl and on MDN. For docs on specific prefs, we should have another bug (and in many cases we shouldn't document the prefs at all). Why I am here: * * "Feedback and comments to bug 158384" Areas affected: * "The administrator may edit the all.js† default pref file (install_directory/defaults/prefs/all.js)." * "The administrator may add an all-companyname.js preference file (install_directory/defaults/prefs/all-companyname.js)." Issues: * The "install_directory/defaults/prefs/" directory mentioned does not exist by default. "install_directory/defaults/pref/" does. Is the documentation correct? 
* If you copy a prefs.js from a user profile to install_directory/defaults/prefs/all-company.js or install_directory/defaults/pref/all-company.js then delete the user profile and start Thunderbird, you would expect the user preferences to be restored, but they are not. Workaround: * The only way I could get the preferences to be set by default is by putting prefs.js in "app_dir/defaults/profile/". This is not covered in this article. I've updated the docs as appropriate. Despite the instruction, it's considered bad form to comment in really old bugs such as this one. I already covered that in the "Why I am here" section in my previous comment, so perhaps that needs updating in the article too.
https://bugzilla.mozilla.org/show_bug.cgi?id=158384
CC-MAIN-2016-40
refinedweb
4,469
65.83
4. Installation. The code was developed using .NET Framework 4.6.2 and Visual Studio 2019. To use the library, add a reference to PdfFileWriter.dll to your project and add a "using PdfFileWriter" statement to your source files. Version 1.26.0 enhancements: support for PDF XMP metadata and support for the QR Code ECI Assignment number.

The PDF File Writer C# class library supports a rich set of PDF document features, including printing support via PrintDocument and drawing of System.Windows.Media.PathGeometry (PathGeometry) shapes.

Creating a PDF is a six-step process:

1. Create a document object (PdfDocument).
2. Create resource objects such as fonts and images (PdfFont, PdfImage).
3. Create a page object (PdfPage).
4. Create a contents object (PdfContents).
5. Add text and graphics to the contents object.
6. Create the PDF file by calling the CreateFile method of PdfDocument.

Step 5 is where most of your programming effort will be spent. Adding contents is achieved by calling the methods of the PdfContents class to render graphics and text. The contents class has a rich set (about 100) of methods for adding text and graphics to your document.

PdfDocument implements the IDisposable interface to release unmanaged resources. The CreateFile method calls Document.Dispose() after the PDF file is created. However, to ensure the release of resources you should wrap the PdfDocument creation and the final CreateFile with either a using statement or a try/catch block. The TestPdfFileWriter demo program attached to this article shows this pattern.

As stated before, the PdfFileWriter C# class library shields you from the complexities of the PDF file structure. However, a good understanding of the PDF format is always an advantage. The Adobe PDF file specification document is available from the Adobe website: "PDF Reference, Sixth Edition, Adobe Portable Document Format Version 1.7 November 2006". It is an intimidating 1310-page document. I would strongly recommend reading Chapter 4 (Graphics) and sections 5.2 and 5.3 of the Text chapter (Chapter 5).

The native PDF unit of measure is the point; there are 72 points in one inch. The PDF File Writer allows you to select your own unit of measure. All method arguments representing position, width or height must be in your unit of measure. There are two exceptions: font size and resolution. Font size is always in points. Resolution is always in pixels per inch. The PDF File Writer converts all input arguments to points. All internal measurement values and calculations are done with double precision. At the final step, when the PDF file is created, the values are converted to text strings. The conversion precision is six digits. The conversion formula used is:

// Value is Double
if(Math.Abs(Value) < 0.0001) Value = 0.0;
String Result = ((Single) Value).ToString();

PDF readers such as Adobe Acrobat expect real numbers with a fraction to use a period as the decimal separator. Some regions of the world use other decimal separators, such as the comma. Since version 1.1, the PDF File Writer library uses the period as decimal separator regardless of the regional setting of your computer.

The PDF File Writer library supports most of the fonts installed on your computer. The only exception is device fonts. The supported fonts follow the OpenType font specifications. More information is available at Microsoft Typography - OpenType Specification. The text to be drawn is stored in a String made of Unicode characters. The library will accept any character (0 to 65535) except control codes 0 to 31 and 128 to 159. Every character is translated into a glyph. The glyphs are drawn on the page left to right, in the same order as they are stored in the string. Most font files support only a subset of all possible Unicode characters. In other words, you must select a font that supports the language of your project or the symbols you are trying to display. If the input String contains unsupported glyphs, the PDF reader will display the "undefined glyph"; normally it is a small rectangle.
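To make the six steps and the using-statement advice concrete, here is a skeletal C# sketch. It is illustrative only: the constructor and method overloads shown (PdfDocument, PdfFont.CreatePdfFont, PdfPage, PdfContents, DrawText) are assumptions and should be checked against the PdfFileWriter documentation for the version you use.

// Skeleton of the six-step process (illustrative; argument lists are assumptions)
using System.Drawing;
using PdfFileWriter;

public static class PdfSample
{
    public static void CreateSampleFile()
    {
        // Step 1: create the document object; "using" guarantees Dispose() even on exceptions
        using(PdfDocument Document = new PdfDocument(PaperType.Letter, false, UnitOfMeasure.Inch, "Sample.pdf"))
        {
            // Step 2: create resources (one font in this sketch)
            PdfFont ArialNormal = PdfFont.CreatePdfFont(Document, "Arial", FontStyle.Regular);

            // Step 3: add a page
            PdfPage Page = new PdfPage(Document);

            // Step 4: add a contents object to the page
            PdfContents Contents = new PdfContents(Page);

            // Step 5: draw text and graphics (most of the work happens here)
            Contents.DrawText(ArialNormal, 14.0, 1.0, 9.0, "Hello, PDF File Writer");

            // Step 6: write the PDF file and release resources
            Document.CreateFile();
        }
    }
}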
The test program attached to this article has a "Font Families" button. If you click it, you can see all the fonts available on your computer and, within each font, all the available characters. If the language of your project is a left-to-right language, each character translates into one glyph, and the glyph is defined in the font, the result should be what you expect. If the result is not what you expect, here are some additional notes:

Unicode control characters. Unicode control characters are used to control the interpretation or display of text, but these characters themselves have no visual or spatial representation. The PDF File Writer does not identify these characters; the library assumes that every character is a display character, so they will be displayed as the undefined character.

Right-to-left languages. Normally the order of characters in a text string is the order a person would read them. Since the library draws left to right, the text would come out backwards. The ReverseString method reverses the character order. This solves the problem if the text is made only of right-to-left characters. If the text is a mix of right-to-left and left-to-right characters, numbers, and characters such as brackets ()[]<>{}, it will not produce the desired results. Another limitation is that the TextBox class cannot break long right-to-left text into lines.

Ligatures. In some languages a sequence of two or more characters is grouped together to display one glyph. Your software can identify these sequences and replace them with the proper glyph.

Dotted circle. If you look at the Glyph column of the Glyph Metrics screen you can see that some glyphs have a small dotted circle (e.g. character codes 2364 and 2367). These characters are part of a sequence of characters; the dotted circle itself is not displayed. If the advance width is zero and the bounding box is on the left side of the Y axis, the glyph will be drawn correctly, on top of the previous character. If the advance width is not zero, the glyph should be displayed before the previous character; your software can achieve that by reversing the two characters.

Displaying images in the PDF document is handled by the PdfImage class. This class is a PDF resource. The image source can be an image file, a Bitmap object, a black-and-white bool array, a QR code or a PDF417 barcode. If the image is simple, as in charts, and the number of colors is less than 256, the image can be saved as an indexed bitmap. Each color is then represented by one byte (or less) compared to 3 bytes, which can result in a very significant file size reduction.

Adding an image to your PDF file:

Create a PdfImage object.

// create PdfImage object
PdfImage MyImage = new PdfImage(Document);

Set optional parameters if required. All the parameters have default values.

// saved image format (default SaveImageAs.Jpeg)
// other choices are: IndexedImage, GrayImage, BWImage
MyImage.SaveAs = SaveImageAs.Jpeg;
// Crop rectangle is the image area to be cropped.
// The default is no crop (empty rectangle).

Load the image. There are 5 methods to load the image.

// image source can be a file, Bitmap,
// BW bool array, QRCode or Pdf417 barcode
MyImage.LoadImage(image_source);

Draw the image into the PDF document.

// draw the image
Contents.DrawImage(MyImage, PosX, PosY, Width, Height);

If you want the image to maintain the correct aspect ratio, use ImageSize or ImageSizePosition to calculate the width and height. If the ratio of width to height is not the same as the image's, the image will look stretched in one of the directions.
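Putting the fragments above together, a minimal end-to-end sketch might look as follows. The file name and coordinates are examples only, and positions and sizes are in the document's unit of measure.

// Load a JPEG file and draw it without distortion (illustrative sketch based on the fragments above)
PdfImage MyImage = new PdfImage(Document);
MyImage.SaveAs = SaveImageAs.Jpeg;           // default; shown for clarity
MyImage.LoadImage("chart.jpg");              // file name is an example

// fit the image into a 3 x 3 area while keeping the aspect ratio
SizeD NewSize = MyImage.ImageSize(3.0, 3.0);
Contents.DrawImage(MyImage, 1.0, 4.0, NewSize.Width, NewSize.Height);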
ImageSize ImageSizePosition // calculate the largest rectangle with the correct // aspect ratio SizeD MyImage.ImageSize(Double Width, Double Height); // calculate the largest rectangle with // correct aspect ratio that will fit in a given area and // position. It based on <code>ContentAlignment</code> enumeration. ImageSizePos ImageSizePosition(Double Width, Double Height, ContentAlignment Alignment);. The PDF File Writer library includes a base class Barcode. For each supported barcode one needs a derived class. The class library includes four derived classes: Barcode128, Barcode39, BarcodeInterleaved2of5 and BarcodeEAN13. The BarcodeEAN13 produces EAN-13 barcode if the input string is 13 digits and UPC-A if the input string is 12 digits. Input string with 13 digit and a leading zero is considered UPC-A. Barcode Barcode128 Barcode39 BarcodeInterleaved2of5 BarcodeEAN13 The DrawBarcode method has a number of overloads. You specify the position of the bottom left corner of the barcode, the width of the narrow bar, the height of the barcode and the derived barcode class. There are optional arguments: justification (left, center, right) color and font to display the text. Quiet zone around the barcode is your responsibility. Optional text is displayed below the barcode. If you select color other than black you should make sure the contrast to the background is significant. Usage examples are given. The PDF File Writer library provides support for QR Code. It is based on article QR Code Encoder and Decoder .NET(Framework, Standard, Core) barcode to your PDF document must follow the steps below. QREncoder PdfContent.DrawImage QR Code example // create QRCode barcode QREncoder QREncoder = new QREncoder(); // set error correction code (default is M) QREncoder.ErrorCorrection = ErrorCorrection.M; // set module size in pixels (default is 2) QREncoder.ModuleSize = 1; // set quiet zone in pixels (default is 8) QREncoder.QuietZone = 4; // ECI Assignment Value (default is -1 not used) // The ECI value is a number in the range of 0 to 999999. // or -1 if it is not used Encoder.ECIAssignValue = -1; // encode your text or byte array QREncoder.Encode(ArticleLink); // convert QRCode to PdfImage in black and white PdfImage BarcodeImage = new PdfImage(Document, QREncoder); // draw image (height is the same as width for QRCode) Contents.DrawImage(BarcodeImage, 6.0, 6.8, 1.2); For coding examples please review 3.7 Draw Barcodes, ArticleExample.cs and OtherExample.cs source code. PDF417 barcode support software is based on article PDF417 Barcode Encoder Class Library and Demo App. The PDF417 barcode documentation and specification can be found in the following websites. Wikipedia provides a good introduction to PDF417. Click here to access the page. The PDF417 standard can be purchased from the ISO organization at this website. An early version of the specifications can be downloaded from this website for free. I strongly recommend that you download this document if you want to fully understand the encoding options. The PDF417 barcode encodes array of bytes into an image of black and white bars. Encoding Unicode text requires converting Unicode characters into bytes. The decoder must do the reverse process to recover the text. The bytes are converted to codewords. This conversion process compresses the bytes into codewords. The encoder adds error correction codewords for error detection and recovery. 
Once the total number of data codewords and error correction codewords is known , the encoder divides the codewords into data rows and data columns. The final step is the creation of a black and white image. Adding PDF417 barcode to your PDF document must follow the steps below. Pdf417Encoder private void DrawPdf417Barcode() { // save graphics state Contents.SaveGraphicsState(); // create PDF417 barcode Pdf417Encoder Pdf417 = new Pdf417Encoder(); string ArticleLink = ""; // encode text Pdf417.Encode(ArticleLink); Pdf417.WidthToHeightRatio(2.5); // convert Pdf417 to black and white image PdfImage BarcodeImage = new PdfImage(Document, Pdf417); // draw image Contents.DrawImage(BarcodeImage, 1.1, 5.2, 2.5); // restore graphics sate Contents.RestoreGraphicsState(); return; } Create PDF417 barcode object. This object can be reused serially to produce multiple barcodes. // create PDF417 barcode Pdf417Encoder Pdf417 = new Pdf417Encoder(); Set optional parameters to control the encoding process. The PDF417 encoder encodes Input bytes into codewords. There are three types of codewords: byte, text and numeric. The program has an algorithm to divide the data input into these three types to compress the data. The default is Auto. However, you can restrict the encoding to only bytes or only text and bytes. Pdf417.EncodingControl = Pdf417EncodingControl.Auto; // or Pdf417.EncodingControl = Pdf417EncodingControl.ByteOnly; // or Pdf417.EncodingControl = Pdf417EncodingControl.TextAndByte; The PDF417 adds error correction codewords to detect errors and correct them. More error correction codewords improves the reliability of the barcode. However, it makes the barcode bigger. Error correction level allows you to control the quality of the barcode. The ErrorCorrectionLevel enumeration has two types of values. Fixed levels from 0 to 8. And levels that are recommended values based on the number of data codewords. The default value in ErrorCorrectionLevel.AutoNormal. For more details look at Table 6 and Table 7 in the PDF417 Specification. ErrorCorrectionLevel ErrorCorrectionLevel.AutoNormal Pdf417.ErrorCorrection = ErrorCorrectionLevel.Level_0; // up to Level_8 // or Pdf417.ErrorCorrection = ErrorCorrectionLevel.AutoNormal; // or Pdf417.ErrorCorrection = ErrorCorrectionLevel.AutoLow; // one less than normal // or Pdf417.ErrorCorrection = ErrorCorrectionLevel.AutoMedium; // one more than normal // or Pdf417.ErrorCorrection = ErrorCorrectionLevel.AutoHigh; // two more than normal The width in pixels of a narrow barcode bar. The default is 2. If this value is changed, the program makes sure that RowHeight is at least three times that value. And that QuiteZone is at least twice that value. RowHeight QuiteZone Pdf417.NarrowBarWidth = value; The height in pixels of one row. This value must be greater than or equal to 3 times the NarrowBarWidth value. The default is 6. NarrowBarWidth Pdf417.RowHeight = value; The width of the quiet zone all around the barcode. The quiet zone is white. This value must be greater than or equal to 2 times the NarrowBarWidth value. The default is 4. Pdf417.QuietZone = value; The default data columns value. The value must be in the range of 1 to 30. The default is 3. After the input data is encoded, the software sets the number of data columns to the default data columns and calculates the number of data rows. If the number of data rows exceeds the maximum allowed (90), the software sets the number of rows to the maximum allowed and recalculate the number of data columns. 
If the result is greater than the maximum columns allowed, an exception is thrown. Pdf417.DefaultDataColumns = value; Set Global Label ID Character Set to ISO 8859 standard. The n can be 1 to 9 or 13 or 15. If the string is null, the default of ISO-8859-1 is used. Language support is defined here. Pdf417.GlobalLabelIDCharacterSet = "ISO-8859-n"; Set Global Label ID User Defined value. The default is not used. I did not find any reference explaining the usage of this value. User defined value must be between 810900 and 811799. Pdf417.GlobalLabelIDUserDefined = UserDefinedValue; Set Global Label ID General Purpose value. The default is not used. I did not find any reference explaining the usage of this value. User defined value must be between 900 and 810899. Pdf417.GlobalLabelIDGeneralPurpose = UserDefinedValue; There are two encoding methods. One accepts text string as an input and the other one accepts byte array as an input. Pdf417.Encode(string StringData); // or Pdf417.Encode(byte[] BinaryData); The barcode was designed for binary data. Therefore, the first method above must encode the string from 16 bit characters to byte array. The encode method with string input has the following conversion logic. The Global Label ID Character Set property control the conversion. It is done in two steps. Step one string to UTF8 and step two UTF8 to ISO-8859-n. If you want to encode Hebrew you should set the character set to ISO-8859-8. public void Encode(string StringData) { // convert string to UTF8 byte array byte[] UtfBytes = Encoding.UTF8.GetBytes(StringData); // convert UTF8 byte array to ISO-8859-n byte array Encoding ISO = Encoding.GetEncoding(_GlobalLabelIDCharacterSet ?? "ISO-8859-1"); byte[] IsoBytes = Encoding.Convert(Encoding.UTF8, ISO, UtfBytes); // call the encode binary data method Encode(IsoBytes); Return; } After the data was encoded, you can check the layout of the barcode by examining these values: ImageWidth, ImageHeight, DataColumns or DataRows. If you want to readjust the width and the height of the barcode you can used one of the methods below. In addition, you can readjust the optional parameters: NarrowBarWidth, RowHeight or QuietZonevalues. ImageWidth ImageHeight DataColumns DataRows QuietZone This method will calculate the number of data rows and data columns to achieve a desired width to height ratio. The ratio includes the quiet zone. Check the return value for success. Bool Pdf417.WidthToHeightRatio(double Ratio); This method will calculate the number of data rows based on the desired number of data columns. Check the return value for success. Bool Pdf417.SetDataColumns(int Columns); This method will calculate the number of data columns based on the desired number of data rows. Check the return value for success. Bool Pdf417.SetDataRows(int Rows); In this step we create a PDF document image resource from the PDF417 barcode. PdfImage BarcodeImage = new PdfImage(Document, Pdf417); The last step is adding the barcode to the content of a PDF document’s page. // PosX and PosY are the page coordinates in user units. // Width is the width of the barcode in user units. // The height of the barcode is calculated to preserve aspect ratio. // Height = Width * BarcodeImage.ImageHeight / BarcodeImage.ImageWidth Contents.DrawImage(BarcodeImage, PosX, PosY, Width); The PDF text. Because AddWebLink requires coordinates relative to the bottom left corner of the page, the coordinates of your graphic object must be the same. In other words, do not use translation, scaling or rotation. 
If you do, you need to make sure that the two areas will coincide.. Bookmarks are described in the PDF specification (section 8.2.2 Document Outline) as follows: "A PDF Document may optionally display a document outline on the screen, allowing the user to navigate interactively from one part of the document to another. The outline consists of a tree-structured hierarchy of outline items (sometimes called bookmarks), which serve as a visual table of contents to display the document's structure to the user. The user can interactively open and close individual item by clicking them with the mouse." The OtherExample.cs source code has an example of bookmarks. At one location there is a hierarchy of three levels. You can see the result in OtherExample.pdf file. The first step in adding bookmarks to your application is: // set the program to display bookmarks // and get the bookmark root object PdfBookmark BookmarkRoot = Document.GetBookmarksRoot(); This step activates bookmarks in your document and returns the root node. Adding bookmarks is similar to adding controls to a windows form. The first level bookmarks are added to the root. Subsequent levels are added to existing bookmarks. At minimum you have to define a title, page, vertical position on the page and an open entries flag. Page is the PdfPage object of the page to go to. YPos is the vertical position relative to the bottom left corner of the page. Open entries flag is true if the lower level bookmarks are visible and false if the lower level are hidden. The first level is always visible by default. // hierarchy example PdfBookmark FirstLevel_1 = BookmarkRoot.AddBookmark("Chapter 1", Page, YPos, false); PdfBookmark SecondLevel_11 = FirstLevel_1.AddBookmark("Section 1.1", Page, YPos, false); PdfBookmark SecondLevel_12 = FirstLevel_1.AddBookmark("Section 1.2", Page, YPos, false); PdfBookmark ThirdLevel_121 = SecondLevel_12.AddBookmark("Section 1.2.1", Page, YPos, false); PdfBookmark ThirdLevel_122 = SecondLevel_12.AddBookmark("Section 1.2.2", Page, YPos, false); PdfBookmark SecondLevel_13 = FirstLevel_1.AddBookmark("Section 1.3", Page, YPos, false); PdfBookmark FirstLevel_2 = BookmarkRoot.AddBookmark("Chapter 2", Page, YPos, false); PdfBookmark SecondLevel_21 = FirstLevel_2.AddBookmark("Section 2.1", Page, YPos, false); PdfBookmark SecondLevel_22 = FirstLevel_2.AddBookmark("Section 2.2", Page, YPos, false); AddBookmark() method has four overloading variations: // ) PdfBookmark class exposes one more method GetChild. You can get any bookmark by calling GetChild with one or more integer arguments. Each argument is a zero base argument of the child position in the level. For example GetChild(2) is the third item of the first level. GetChild(2, 3) is the. PdfChart //. The ImageSize method returns the largest rectangle with correct aspect ratio that will fit in a given area. SizeD ImageSize(Double Width, Double Height); The ImageSizePosition method returns the largest rectangle with correct aspect ratio that will fit in a given area and position it based on ContentAlignment enumeration. ContentAlignment ImageSizePos ImageSizePosition(Double Width, Double Height, ContentAlignment Alignment); Print document support allows you to print a report in the same way as printing to a printer and producing a PDF document. The difference between this method of producing a PDF file and using PdfContents to produce a PDF file is the difference between raster graphics to vector graphics. Print document support creates one jpeg image per page. 
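To see why cropping matters, the bitmap sizes can be worked out directly. A letter-size page is 8.5 by 11 inches; at 300 pixels per inch and 3 bytes per pixel the full-page bitmap is 8.5 × 300 × 11 × 300 × 3 = 25,245,000 bytes, the 25.245 MB figure used below. Cropping to the 6.5 by 9 inch area inside one-inch margins gives 6.5 × 300 × 9 × 300 × 3 = 15,795,000 bytes, i.e. 15.795 MB, a reduction of about 37.4%.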
PrintExample.cs has an example of creating a three page document. PrintExample.cs Normally each page is a full image of the page. If your page is letter size and the resolution is 300 pixels per inch, each pixel is 3 bytes, the bit map of the page will be 25.245MB long. PrintPdfDocument has a method CropRect that can reduce the size of the bit map significantly. Assuming one inch margin is used, the active size of the bit map will be reduced to 15.795 MB. That is 37.4% reduction. PrintPdfDocument CropRect // main program // Create empty document Document = new PdfDocument(PaperType.Letter, false, UnitOfMeasure.Inch); // create PrintPdfDocument producing an image with 300 pixels per inch PdfImageControl ImageControl = new PdfImageControl(); ImageControl.Resolution = 300.0; PrintPdfDocument Print = new PrintPdfDocument(Document, ImageControl); // PrintPage in the delegate method PrintPageEventHandler // This method will print one page at a time to PrintDocument Print.PrintPage += PrintPage; // set margins in user units (Left, top, right, bottom) // note the margins order are per .net standard and not PDF standard Print.SetMargins(1.0, 1.0, 1.0, 1.0); // crop the page image result to reduce PDF file size // the crop rectangle is per .net standard. // The origin is top left. Print.CropRect = new RectangleF(0.95F, 0.95F, 6.6F, 9.1F); // initiate the printing process (calling the PrintPage method) // after the document is printed, add each page as an image to PDF file. Print.AddPagesToPdfDocument(); // dispose of the PrintDocument object Print.Dispose(); // create the PDF file Document.CreateFile(FileName); Example of PrintPage method // Print each page of the document to PrintDocument class // You can use standard PrintDocument.PrintPage(...) method. // NOTE: The graphics origin is top left and Y axis is pointing down. // In other words, this is not PdfContents printing. public void PrintPage(object sender, PrintPageEventArgs e) { // graphics object short cut Graphics G = e.Graphics; // Set everything to high quality G.SmoothingMode = SmoothingMode.HighQuality; G.InterpolationMode = InterpolationMode.HighQualityBicubic; G.PixelOffsetMode = PixelOffsetMode.HighQuality; G.CompositingQuality = CompositingQuality.HighQuality; // print area within margins Rectangle PrintArea = e.MarginBounds; // draw rectangle around print area G.DrawRectangle(Pens.DarkBlue, PrintArea); // line height Int32 LineHeight = DefaultFont.Height + 8; Rectangle TextRect = new Rectangle(PrintArea.X + 4, PrintArea.Y + 4, PrintArea.Width - 8, LineHeight); // display page bounds // DefaultFont is defined somewhere else String text = String.Format("Page Bounds: Left {0}, Top {1}, Right {2}, Bottom {3}", e.PageBounds.Left, e.PageBounds.Top, e.PageBounds.Right, e.PageBounds.Bottom); G.DrawString(text, DefaultFont, Brushes.Black, TextRect); TextRect.Y += LineHeight; // display print area text = String.Format("Page Margins: Left {0}, Top {1}, Right {2}, Bottom {3}", PrintArea.Left, PrintArea.Top, PrintArea.Right, PrintArea.Bottom); G.DrawString(text, DefaultFont, Brushes.Black, TextRect); TextRect.Y += LineHeight; // print some lines for(Int32 LineNo = 1; ; LineNo++) { text = String.Format("Page {0}, Line {1}", PageNo, LineNo); G.DrawString(text, DefaultFont, Brushes.Black, TextRect); TextRect.Y += LineHeight; if(TextRect.Bottom > PrintArea.Bottom) break; } // move on to next page PageNo++; e.HasMorePages = PageNo <= 3; return; } The data table classes allow you to display data tables in your PDF document. 
PdfTable is the main class controlling the display of one table. A table is made out of a header row and data rows. Each row is divided into cells. PdfTableCell controls the display of one header cell or one data cell. If header is used it will be displayed at the top of the table. Optionally it will be displayed at the top of each additional page. To display data in a cell, you load the data into the Value property of PdfTableCell. Data can be text string, basic numeric value, Boolean, Char, TextBox, image, QR code or barcode. Independently of data, you can load the cell with document link, web link, video, audio or embedded file. Clicking anywhere within the cell's area will cause the PDF reader to activate the document link, web link, video, audio or embedded file. The display of the data is controlled by PdfTableStyle class. PdfTable class contains a default cell style and a default header style. You can override the default styles with private styles within PdfTableCell. To display a table, you create a PdfTable object. Next you initialize the table, header cells, data cells and styles objects. Finally, you set a loop and load the cell values of one row and then draw this row. This loop continues until all data was displayed. Below you will find the necessary sequence of steps to produce a table. PdfTable PdfTableCell Value PdfTableCell PdfTableStyle When DrawRow method is called, the software calculates the required row height. Row height is the height of the tallest cell. The row will be drawn if there is sufficient space within the table. When the available space at the bottom is too small, a new page is called, and optional heading and the current row are displayed at the top of the table. If the required row height is so large that it will not fit in full empty table, an exception is raised. In order to accommodate long multi-line Strings or TextBoxes, the software can handle these cases in a flexible way. Multi-line String is converted by PdfTable into a TextBox. The PdfTableStyle class has a TextBoxPageBreakLines property. If this property is set to zero (default), the TextBox is treated as other data values. TextBox height must fit the page. If TextBoxPageBreakLines is set to a positive integer, the system will calculate cell's height as TextBox height or the height the first few lines as specified by TextBoxPageBreakLines. The system will draw the row with as many lines that fit the page. A new page will be created, and the rest of the lines will be drawn. In other words, the first block of lines of a long TextBox will be at least TextBoxPageBreakLines long. TableExample.cs source contains an example of long TextBox cells. DrawRow TextBoxes PdfTabl. Define table's area on the page. // table's area on the page Table.TableArea = new PdfRectangle(Left, Bottom, Right, Top); // first page starting vertical position Table.RowTopPosition = StartingTopPosition; The four arguments are the four sides of the table relative to bottom left corner and in user units. If on the first page the table-top position is not at the top of the page set RowTopPosition to the starting top position. On subsequent pages the table will always start at the top. If TableArea is not specified, the library will set it to default page size less one inch margin. RowTopPosition TableArea Divide the table width into columns. // divide table area width into columns StockTable.SetColumnWidth(Width1, Width2, Width3, ...); The number of arguments is the number of columns. 
The table width less total border lines will be divided in proportion to these arguments. Once the number of columns is set with SetColumnWidth method the library creates two PdfTableCell arrays. One array for header cells and one array for data cells. SetColumnWidth Rows and columns of the data table can be separated by border lines. Border lines properties are defined by PdfTableBorder and PdfTableBorderStyle. There are four horizontal border lines: TopBorder, BottomBorder, HeaderHorBorder between the header row and first data row and CellHorBorder between data rows. There are two sets of vertical border lines: HeaderVertBorder array for vertical border lines within the header row, and CellVertBorder array for vertical border lines between columns within the data part of the table. Arrays size is the number of columns plus one. Array element zero is the table's left border. Array element Columns is the table's right border. All other elements are lines separating columns. Each of these lines can be defined individually. There are methods to define all border lines at once or define each individual border line. PdfTableBorder PdfTableBorderStyle TopBorder BottomBorder HeaderHorBorder CellHorBorder HeaderVertBorder CellVertBorder Methods to define all border lines: // clear all border lines Table.Borders.ClearAllBorders(); // set all border lines to default values (no need to call) // All frame lines are one point (1/72") wide // All grid lines are 0.2 of one point wide // All borders are black Table.Borders.SetDefaultBorders(); // set all borders to same width and black color Table.Borders.SetAllBorders(Double Width); // set all borders to same width and a specified color Table.Borders.SetAllBorders(Double Width, Color LineColor); // set all borders to one width and all grid lines to another width all lines are black Table.Borders.SetAllBorders(Double FrameWidth, Double GridWidth); // set all borders to one width and color and all grid lines to another width and color Table.Borders.SetAllBorders(Double FrameWidth, Color FrameColor, Double GridWidth, Color GridColor); // set all frame borders to same width and black color and clear all grid lines Table.Borders.SetFrame(Double Width); // set all frame borders to same width and a specified color and clear all grid lines Table.Borders.SetFrame(Double Width, Color LineColor); Each horizontal border line can be cleared or set. The example is for top border line: // clear border Table.Borders.ClearTopBorder(); // set border with default color set to black // Zero width means one pixel of the output device. Table.Borders.SetTopBorder(Double LineWidth); // set border Table.Borders.SetTopBorder(Double LineWidth, Color LineColor); Each vertical border line can be cleared or set. The example is for cell's vertical border lines: // clear border Table.Borders.ClearCellVertBorder(Int32 Index); // set border with default color set to black Table.Borders.SetCellVertBorder(Int32 Index, Double LineWidth); // set border Table.Borders.SetCellVertBorder(Int32 Index, Double LineWidth, Color LineColor); Set other optional table properties. The values given in the example below are the defaults. // header on each page HeaderOnEachPage = true; // minimum row height MinRowHeight = 0.0; Table information is processed one row at a time. Each row is made of cells. One cell per column. The display of cell's information is controlled by PdfTableStyle class. There are about 20 style properties. For the complete list view the source code or the help file. 
Some of these styles are specific to the type of information to be displayed. Here is an example // make some changes to default header style Table.DefaultHeaderStyle.Alignment = ContentAlignment.BottomRight; // create private style for header first column Table.Header[0].Style = Table.HeaderStyle; Table.Header[0].Style.Alignment = ContentAlignment.MiddleLeft; // load header value Table.Header[0].Value = "Date"; // make some changes to default cell style Table.DefaultCellStyle.Alignment = ContentAlignment.MiddleRight; Table.DefaultCellStyle.Format = "#,##0.00"; // create private style for date column Table.Cell[0].Style = StockTable.CellStyle; Table.Cell[0].Style.Alignment = ContentAlignment.MiddleLeft; Table.Cell[0].Style.Format = null; After initialization is done it is time to display the data. The example below is from TableExample.cs. It is a table of stock prices. There are 6 columns. TableExample.cs // open stock daily price StreamReader Reader = new StreamReader("SP500.csv"); // ignore header Reader.ReadLine(); // read all daily prices for(;;) { String TextLine = Reader.ReadLine(); if(TextLine == null) break; String[] Fld = TextLine.Split(new Char[] {','}); Table.Cell[ColDate].Value = Fld[ColDate]; Table.Cell[ColOpen].Value = Double.Parse(Fld[ColOpen], NFI.PeriodDecSep); Table.Cell[ColHigh].Value = Double.Parse(Fld[ColHigh], NFI.PeriodDecSep); Table.Cell[ColLow].Value = Double.Parse(Fld[ColLow], NFI.PeriodDecSep); Table.Cell[ColClose].Value = Double.Parse(Fld[ColClose], NFI.PeriodDecSep); Table.Cell[ColVolume].Value = Int32 The PdfFileWriter supports embedding video files in the PDF document. Full examples of playing video files are given in the page 7 of OtherExample.cs. Adding a video file requires the use of three classes. First you need to embed the video file in the PDF document. OtherExample.cs Second you need to define how the video is to be played. The PdfDisplayMedia class has a number of methods to control the video display. Please refer to the class' source code and the documentation help file. For example: RepeatCount or ScaleMedia. If you want to play the video in a floating window you must use SetMediaWindow method. PdfDisplayMedia RepeatCount ScaleMedia SetMediaWindow Third you need to define the area on the PDF page that the user must click in order to activate the video. If you want to activate the video when the annotation area is visible, use ActivateActionWhenPageIsVisible. ActivateActionWhenPageIsVisible //); The PdfFileWriter supports embedding sound files in the PDF document. Full example of playing sound file is given in the page 7 of OtherExample.cs. Embedding sound files is essentially the same as video files. The only obvious difference is that there is nothing to display. //); The PdfFileWriter appends new pages to the end of the page list. If you want to move a page from its current position to a new position use the following method. // Source and destination index are zero based. // Source must be 0 to PageCount - 1. // Destination must be 0 to PageCount. // If destination index is PageCount, the page will be the last page // PageCount is a property of PdfDocument. Document.MovePage(Int32 SourceIndex, Int32 DestinationIndex); The PdfFileWriter creates PDF documents. The main class PdfDocument constructor gives you two choices to save the document. The first choice is to save the PDF file to a disk file. In this case you provide the constuctor with a file name. At the end of file creation, you call PdfDocument.CreateFile. 
This method writes the PDF to the file and closes the file. // create main class PdfDocument Document = new PdfDocument(PaperType.Letter, false, UnitOfMeasure.Inch, FileName); // terminate Document.CreateFile(); The second choice is a stream. You create a stream, either memory stream or a file stream, and you pass the stream as an argument to the PdfDocument constructor. After CreateFile method is executed, your stream contains the PDF document. Extract the document from the stream as appropriate to your application. You must close the stream in your application. // create memory stream MemoryStream PdfStream = new MemoryStream(); // create main class PdfDocument Document = new PdfDocument(PaperType.Letter, false, UnitOfMeasure.Inch, PdfStream); // terminate Document.CreateFile(); // save the memory stream to a file FileStream FS = new FileStream(FileName, FileMode.Create); PdfStream.WriteTo(FS); PdfStream.Close(); FS.Close(); The PDF document information dictionary is displayed by the PDF reader in the Description tab of the document properties. The information includes: Title, Author, Subject, Keywords, Created date and)"); When PdfInfo object is created, four additional fields are added to the dictionary. You can override all of them in your code. PdfInfo // set creation and modify dates DateTime LocalTime = DateTime.Now; Info.CreationDate(LocalTime); Info.ModDate(LocalTime); // set creator and producer Info.Creator("PdfFileWriter C# Class Library Version " + PdfDocument.RevisionNumber); Info.Producer("PdfFileWriter C# Class Library Version " + PdfDocument.RevisionNumber); As a document is being built, the PDF File Writer accumulates all the information required to create the PDF file. The information is kept in memory except for images and embedded files. Images and embedded files are automatically written to the output file when they are declared. For very large documents the memory used keeps growing. The library offers methods (CommitToPdfFile) to write contents information to the output file and invoke the garbage collector to free the unused memory. The GC.Collect method takes time to execute. If execution time is an issue, set the GCCollect argument once every few pages. In other words, the CommitToPdfFile must run for every page but the cleanup is done once every few pages. Once a commit was executed, no additional information can be added. PdfTable automatically starts a new page when the next row cannot fit at the bottom of the current page. The PdfTable class has two members CommitToPdfFile and CommitGCCollectFreq to control memory usage while a table is being build. The PdfChart class generates an image from the .NET Chart class. The DrawChart method of PdfContents will perform the commit. Alternatively, you can call CommitToPdfFile method of PdfChart. CommitToPdfFile GC.Collect GCCollect the image under it. PDF has two mechanisms to change this behavior opacity and blending. The graphic state dictionary has opacity value for stroking (pen) and non-stroking (brush) operations. The opacity value of fully opaque is 1.0 and fully transparent is 0.0. The opacity value corresponds to the alpha component of a color structure such that 1.0 is 255 alpha and 0.0 is 0 alpha. If the opacity value is 0.5 a new object painted on the page will be 50% transparent. To set opacity call the SetAlphaStroking method for lines or SetAlphaNonStroking method for shapes. Blending is a process of combining the color on the page with the color of the new item being painted. 
To set a blend mode call the SetBlendMode method of PdfContents. The argument is a BlendMode enumeration. For a full description please refer to section 7.2.4 Blend Mode of the PDF specifications document. For an example please refer to OtherExample.cs page 8. SetAlphaStroking SetAlphaNonStroking SetBlendMode PdfContents BlendMode Document links allow PDF document users to click on the link and jump to another part of the document. Adding document links is done in two parts. The destination is defined as a location marker. A location marker must have a unique name, a scope (LocalDest or NamedDest), and a document location (page and position). NamedDest scope can be used for either a document link or a named destination or both. The second part is the link location. The two parts can be defined in any order. They are tied together by the name. The name is case sensitive. Many links can point to the same location marker. LocalDest NamedDest NamedDest Named destinations are targets within the PDF document. They are defined with a location marker in the same way as document links. The scope has to be set to NamedDest. When a PDF reader such as Adobe Acrobat opens a PDF document it can open the document while displaying the target in the viewing window. To embed a location marker call the AddLocationMarker method of PdfPage. Note: the name is case sensitive. AddLocationMarker // Add location marker to the document (PdfPage method) public void AddLocationMarker ( string LocMarkerName, // unique destination name (case sensitive) LocMarkerScope Scope, // either LocalDest or NamedDest DestFit FitArg, // fit argument (see below) params double[] SideArg // 0, 1 or 4 side dimension argument (see below) ) To add a link location call the AddLinkAction method of PdfPage. public PdfAnnotation AddLinkAction ( string LocMarkerName, // location marker name PdfRectangle AnnotRect // rectangle area on the page to activate the jump ) For more information about named destinations please refer to the Adobe PDF file specification “PDF Reference, Sixth Edition, Adobe Portable Document Format Version 1.7 November 2006”, Table 8.2 on page 582. DestFit.Fit DestFit.FitH DestFit.FitV DestFit.FitR DestFit.FitB DestFit.FitBH DestFit.FitBV The PDF reader's calling parameters are defined in Parameters for Opening PDF Files by Adobe. If the PDF is opened on a desktop computer the calling line must be: "path\AcroRd32.exe" /A "nameddest=ChapterXX" "path\Document.pdf" If the PDF document is pointed to by a link in a web page, the destination is appended to the link: <a href="">Target description</a> Or: <a href="">Target description</a> The PDF File Writer library provides support for AES 128 and Standard 128 (RC4) encryption. For more information please refer to PDF Reference sixth edition (Version 1.7) section 3.5 Encryption. The PDF File Writer supports two types of encryption filters, AES-128 and Standard 128. Standard 128 is RC4 encryption. It is considered unsafe; do not use it for new projects. It does not support public key security to encode a recipient list. To encrypt your PDF document call one of the four SetEncryption methods defined in the PdfDocument class: SetEncryption Set Encryption with no arguments. The PDF File Writer library will encrypt the PDF document using AES-128 encryption. The PDF reader will open the document without requesting a password. Permission flags are set to allow all. Document.SetEncryption(); Set Encryption with one argument. The PDF File Writer library will encrypt the PDF document using AES-128 encryption.
The argument is permissions. Permission flags are defined below. You can or together more than one permission. The PDF reference manual has full description of permissions. The PDF reader will open the document without requesting a password. Document.SetEncryption(Permission Permissions); Set Encryption with two arguments. The PDF File Writer library will encrypt the PDF document using AES-128 encryption. The two arguments are user password and permissions. The PDF reader will open the document with user password. Permissions will be set as per argument. Document.SetEncryption(String UserPassword, Permission Permissions); Set Encryption with four arguments. The PDF File Writer library will encrypt the PDF document using either EncryptionType.Aes128 encryption or EncryptionType.Standard128 encryption. The four arguments are user password, owner password, permissions and encryption type. If user password is null, the default password will be taken. If owner password in null, the software will generate random number password. The Standard128 encryption is considered unsafe. It should not be used for new projects. A PDF reader such as Acrobat will accept either user or owner password. If owner password is used to open document, the PDF reader will open it with all permissions set to allow operation. Document.SetEncryption(String UserPassword, String OwnerPassword, Permission Permissions, EncryptionType Type); Permission flags are as follows: // Full description is given in // PDF Reference Version 1.7 Table 3.20 public enum Permission { None = 0, LowQalityPrint = 4, // bit 3 ModifyContents = 8, // bit 4 ExtractContents = 0x10, // bit 5 Annotation = 0x20, // bit 6 Interactive = 0x100, // bit 9 Accessibility = 0x200, // bit 10 AssembleDoc = 0x400, // bit 11 Print = 0x804, // bit 12 + bit 3 All = 0xf3c, // bits 3, 4, 5, 6, 9, 10, 11, 12 } The PDF reference document defines Sticky Notes or Text Annotation in Section 8.4 page 621. “A text annotation represents a “sticky note” attached to a point in the PDF document. When closed, the annotation appears as an icon; when open, it displays a pop-up window containing the text of the note in a font and size chosen by the viewer application. Text annotations do not scale and rotate with the page; they behave as if the NoZoom and NoRotate annotation flags (see Table 8.16 on page 608) were always set. Table 8.23 shows the annotation dictionary entries specific to this type of annotation.” Adding a sticky note to your document is very simple. You add one single line of code. The sticky note is added to a PdfPage object. It is not part of the page contents. The position of the sticky note is an absolute page location measured from the bottom left corner of the page to the top left corner of the sticky note icon. The text string is the content of the pop-up window. The stick note argument is one of enumeration items below. // sticky note text annotation Page.AddStickyNote(PageAbsPosX, PageAbsPosY, "My first sticky note", StickyNoteIcon.Note); //"); A number of the XMP input is a valid metafile. The XMP stream is not compressed or encrypted. This allows readers to get the metadata information with little programming. You should include the XMP matadata shortly after the PdfDocument is created and before any image is loaded. By doing so the metadata will be at the start of the file and it will be readable by simple text editors. // adding metadata new PdfMatadata(Document, FileName); // or new PdfMetadata(Document, ByteArray); displays the resulted PDF file. 
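Returning to the SetEncryption overloads described earlier in this section, a typical call combines a user password, an owner password and a set of OR-ed permission flags. The passwords below are placeholders; the method, the Permission flags and the EncryptionType values are the ones listed above.

// AES-128 encryption with separate user and owner passwords;
// readers may print and extract contents, but not modify the document
Document.SetEncryption("reader-password", "owner-password",
    Permission.Print | Permission.ExtractContents | Permission.Accessibility,
    EncryptionType.Aes128);

EncryptionType.Standard128 is also accepted by this overload but, as stated above, it is considered unsafe and should not be used for new projects.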
This method demonstrates the creation of a one page document with some text and graphics. After going through this example, you should have a good understanding of the process. The other example buttons produce a variety of PDF documents. In total, practically every feature of this library is demonstrated by these examples. The Debug check box, if checked, will create a PDF file that can be viewed with a text editor but cannot be loaded into a PDF reader. [Screenshot: TestPdfFileWriter.png, the demo program's main window] The Test method below demonstrates the six steps described in the introduction for creating a PDF file. The method will be executed when you press the “Article Example” button of the demo program. The following subsections describe each step in detail. // Create article's example test PDF document public void Test ( bool.0)"); //); // create QRCode barcode QREncoder QREncoder = new Q(ArticleLink); // convert QRCode to black and white image PdfImage BarcodeImage = new PdfImage(Document); BarcodeImage.LoadImage(QREncoder); // draw image (height is the same as width for QRCode) Contents.DrawImage(BarcodeImage, 6.0, 6.8, 1.2); // define a web link area coinciding with the qr code Page.AddWebLink(6.0, 6.8, 7.2, 8.0, ArticleLink); // restore graphics state Contents.RestoreGraphicsState(); return; } // Draw Barcode private void DrawPdf417Barcode() { // save graphics state Contents.SaveGraphicsState(); // Bitmap Metafile Image1 = new PdfImage(Document); Image1.Resolution = 96.0; Image1.ImageQuality = 50; Image1.LoadImage("TestImage.jpg"); //; } The DrawChart method is an example of defining a chart and drawing it to the PDF document. // Draw chart private void DrawChart() { // save graphics state Contents.SaveGraphicsState(); // create chart Chart PieChart = PdfChart.CreateChart(Document, 1.8, 1.5, 300.0); // create PdfChart object with Chart object PdfChart PiePdfChart = new PdfChart(Document, PieChart); PiePdfChart.SaveAs = SaveImageAs.IndexedImage; // FrameWidth = 0.015; const double GridWidth = 0.01; // column widths double ColWidthPrice = ArialNormal.TextWidth(FontSize, "9999.99") + 2.0 * MarginHor; double ColWidthQty = ArialNormal.TextWidth(FontSize, "Qty") + 2.0 * MarginHor; double ColWidthDesc = Right - Left - FrameWidth - 3 * GridWidth -(); return; } Integrating PdfFileWriter into your application requires the following steps. Install the attached PdfFileWriter.dll file in your development area. Start the Visual C# program and open your application. Go to the Solution Explorer, right click on References and select Add Reference. Select the Browse tab and navigate your file system to the location of the PdfFileWriter.dll. When your application is published, the PdfFileWriter.dll must be included with it. If you want access to the source code of the PdfFileWriter project, install the PdfFileWriter project in your development area. The PdfFileWriter.dll will be in the PdfFileWriter\bin\Release directory. PdfFileWriter\bin\Release Add the following statement to all source modules using this library. using PdfFileWriter; If you intend to use charting, you need to add a reference to: System.Windows.Forms.Visualization.
In each source module using Chart you need to add the corresponding using directive for the charting namespace. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) Table2.DefaultHeaderStyle.TextBoxLineBreakFactor = .8; Table2.DefaultHeaderStyle.MultiLineText = true; Table2.MinHeaderHeight.Equals(20.0); public Double DrawBarcode ( Double PosX, Double PosY, TextJustify Justify, Double BarWidth, Double BarHeight, Color BarColor, Barcode Barcode, PdfFont TextFont = null, Double FontSize = 0.0 ) Table.TableArea = new PdfRectangle(...); Table.Borders.ClearAllBorders(); // note: create a text box and set the Value field TextBox Box = BookList.Cell[1].CreateTextBox(); Box.AddText(TitleFont, 10.0, Color.DarkBlue, Fld[0]); Box.AddText(NormalFont, 8.0, Color.Black, ", Author(s): "); Box.AddText(AuthorFont, 9.0, Color.DarkRed, Fld[2]);
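For reference, the DrawBarcode signature shown above can be used along the following lines. This is a hedged sketch: the Barcode128 constructor argument, the TextJustify value and the layout numbers are assumptions, not values taken from the article.

// create a Code-128 barcode from a text string
Barcode128 Barcode = new Barcode128("PDF File Writer");

// bottom-left corner at (1.0, 2.0) user units, narrow bar width 0.012,
// barcode height 0.5, black bars, centered text drawn with ArialNormal at 8 points
Contents.DrawBarcode(1.0, 2.0, TextJustify.Center, 0.012, 0.5, Color.Black, Barcode, ArialNormal, 8.0);

The quiet zone around the barcode is not drawn by the library, so the chosen position should leave enough white space on both sides.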
https://www.codeproject.com/Articles/570682/PDF-File-Writer-Csharp-Class-Library-Version-1-26?fid=1829565&df=90&mpp=25&sort=Position&view=Normal&spc=Relaxed&prof=True&select=5279196&fr=447
CC-MAIN-2020-10
refinedweb
7,767
51.14
Frequently asked: Laravel Interview Questions and Answers Q be quickly loaded by the framework. This command php artisan config:cache typically used as part of our production deployment. The command should not be run during local development as configuration options will frequently need to be changed during the course of our application's development. Q2. What is HTTP kernels or Console kernels? When a request is sent to the Laravel application, it first bootstrap the application and creates an instance of the application (see previous questions). Next, the incoming request is sent to either HTTP kernel or console kernel depending on the type of the request. The HTTP kernel: it receives a Request and returns a Response. We can think of the kernel as being a big black box that represents our entire application. The other type of Kernel is Console Kernel which is used when we interact with our application from the command line. If we use artisan commands, or when a scheduled job is processed, or when a queued job is processed, all of these actions go through the Console Kernel. Q3. What is contextual binding? Sometimes we may have two classes that utilize the same interface, but we wish to inject different implementations into each class. For example, two controllers may depend on different implementations of the Illuminate\Contracts\Filesystem\Filesystem contract. Laravel provides a simple, fluent interface for defining this behavior: Q4. How to extend a binding in Laravel? The extend method allows the modification of resolved services. For example, when a service is resolved, we may run additional code to decorate or configure the service. The extend method accepts a closure, which should return the modified service, as its only argument. The closure receives the service being resolved and the container instance: Q5. What is the difference between register and boot methods in Service Provider? In the register method, we should only bind the service to the containers. We should not register any event listeners, any other piece of functionality. The boot method is called after all the services are registered. So we have access to all other services that have been registered by the framework. We can type-hint dependencies for our service provider's boot method. The service container will automatically inject any dependencies we need. Q6. What is Facade in Laravel? Laravel facades serve as “static proxies” to underlying classes in the service container, providing the benefit of a terse, expressive syntax while maintaining more testability and flexibility than traditional static methods. Laravel ships with many facades which provide access to almost all of Laravel’s features. All of Laravel’s facades are defined in the Illuminate\Support\Facades namespace. So, we can easily access a facade like so: Facades are easy to use and test and they allow us to use Laravel’s features without remembering long class names that must be injected or configured manually. However, the primary danger of facades is class “scope creep”. Since facades are so easy to use and do not require injection, it can be easy to let our classes continue to grow and use many facades in a single class. Using dependency injection, this potential is mitigated by the visual feedback a large constructor gives us that our class is growing too large. So, when using facades, we need to pay special attention to the size of our class so that its scope of responsibility stays narrow. 
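To illustrate Q3 and Q6 above, here are hedged sketches in the spirit of the Laravel documentation; the controller names PhotoController and VideoController and the cache key are placeholders, not names from this article.

A contextual binding that gives two classes different Filesystem implementations:

$this->app->when(PhotoController::class)
    ->needs(Filesystem::class)
    ->give(function () {
        return Storage::disk('local');
    });

$this->app->when(VideoController::class)
    ->needs(Filesystem::class)
    ->give(function () {
        return Storage::disk('s3');
    });

A facade call is simply a static-looking call on the facade class:

use Illuminate\Support\Facades\Cache;

Route::get('/cache', function () {
    return Cache::get('key');
});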
If our class is getting too large, we need to consider splitting it into multiple smaller classes. Q7. How to define a route in Laravel with simple closure? All Laravel routes are defined in our route files, which are located in the routes directory. These files are automatically loaded by our application’s App\Providers\RouteServiceProvider. The routes/web.php file defines routes that are for our web interface. These routes are assigned the web middleware group, which provides features like session state and CSRF protection. The routes in routes/api.php are stateless and are assigned the api middleware group. Below is an example of a simple route definition, which will be accessed by our application url followed by /greeting. Routes defined in the routes/api.php file are nested within a route group by the RouteServiceProvider. Within this group, the /api URI prefix is automatically applied so we do not need to manually apply it to every route in the file. We can modify the prefix and other route group options by modifying our RouteServiceProvider class. Q8. What is fallback route in Laravel? Using the Route::fallback method, we can define a route that will be executed when no other route matches the incoming request. Typically, unhandled requests will automatically render a "404" page via our application's exception handler. However, since we would typically define the fallback route within our routes/web.php file, all middleware in the web middleware group will apply to the route. We are free to add additional middleware to this route as needed: Q9. What is CSRF in Laravel? Cross-site request forgeries are a type of malicious exploit whereby unauthorized commands are performed on behalf of an authenticated user. Imagine our application has a /user/email route that accepts a POST request to change the authenticated user's email address. Most likely, this route expects an /user/email route and submits the malicious user's own email address. To prevent this vulnerability, we need to inspect every incoming PUT, PATCH, or DELETE request for a secret session value that the malicious application is unable to access. Q10. What is middleware in Laravel? Middleware provide a convenient mechanism for inspecting and filtering HTTP requests entering our application. For example, Laravel includes a middleware that verifies the user of our application is authenticated. If the user is not authenticated, the middleware will redirect the user to our application’s login screen. However, if the user is authenticated, the middleware will allow the request to proceed further into the application. Additional middleware can be written to perform a variety of tasks besides authentication. For example, a logging middleware might log all incoming requests to our application. There are several middleware included in the Laravel framework, including middleware for authentication and CSRF protection. All of these middleware are located in the app/Http/Middleware directory. Q11. How to attach a cookie to a response in Laravel? We can attach a cookie to an outgoing Illuminate\Http\Response instance using the cookie method. We should pass the name, value, and the number of minutes the cookie should be considered valid to this method: If we would like to ensure that a cookie is sent with the outgoing response but we do not yet have an instance of that response, we can use the Cookie facade to "queue" cookies for attachment to the response when it is sent. 
The queue method accepts the arguments needed to create a cookie instance. These cookies will be attached to the outgoing response before it is sent to the browser: Q12. What is Blade? Blade is the simple, yet powerful templating engine that is included with Laravel. Unlike some PHP templating engines, Blade does not restrict us from using plain PHP code in our templates. In fact, all Blade templates are compiled into plain PHP code and cached until they are modified, meaning Blade adds essentially zero overhead to our application. Blade template files use the .blade.php file extension and are typically stored in the resources/views directory. Blade views may be returned from routes or controllers using the global view helper. Data may be passed to the Blade view using the view helper's second argument: We can display the contents of the name variable like so: Blade’s {{ }} echo statements are automatically sent through PHP's htmlspecialchars function to prevent XSS attacks. Q13. What is a directive in Blade? Directives are sugar-added functions hiding complex or ugly code behind them. Blade includes lots of built-in directives and also allows us to define custom ones. The built-in ones are more than enough for small projects. But as we find ourselves repeating complex functionality in our code, it is a smell that we need to refactor to custom Blade directives. @if, @foreach, @once, and @include are some of the built-in directives. Q14. What is the @verbatim directive? If we need to display JavaScript variables in a large portion of our template, we can wrap the HTML in the @verbatim directive so that we do not have to prefix each Blade echo statement with an @ symbol: Q15. What is the @class directive? The @class directive conditionally compiles a CSS class string. The directive accepts an array of classes where the array key contains the class or classes we wish to add, while the value is a Boolean expression. If the array element has a numeric key, it will always be included in the rendered class list. To read more interview questions and answers, download our Android app from the Play Store. Our app contains 1400+ interview questions and answers with clear code examples from frontend to backend technologies.
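To illustrate the Blade features from Q12 and Q15 above, here are hedged sketches that follow the pattern of the Laravel documentation; the variable names and CSS class names are placeholders.

Echoing a variable passed to the view:

Hello, {{ $name }}.

Using the @class directive to build a class list conditionally:

@php
    $isActive = false;
    $hasError = true;
@endphp

<span @class([
    'p-4',
    'font-bold' => $isActive,
    'text-gray-500' => ! $isActive,
    'bg-red' => $hasError,
])></span>

Here 'p-4' has a numeric key, so it is always included in the rendered class list, while the other entries are included only when their Boolean expressions are true.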
https://vigowebs.medium.com/frequently-asked-laravel-interview-questions-and-answers-ae060561adc2?source=user_profile---------5----------------------------
CC-MAIN-2022-21
refinedweb
1,520
55.34
Re: Re: No joy using gcc 2.95.3 + MathLink v3r9 + Windows 2000 SP2 Hi John, Thanks for your help. Unfortunately, we just haven't been able to find a way to use gcc and/or g77 to create working MathLink programs under Windows. I tested your theory that perhaps argc and argv were not being passed to MLMain(), but it looks like they are (unless MLScanString works differently when called from a gcc compiled program instead of a cl compiled program). I used the following WinMain program with both gcc and cl, and only the cl compiled program creates a working MathLink program. Both, however, display the "Hello from WinMain" MessageBox when I Install[] my MathLink program, so it appears to be some incompatibility between gcc and MathLink.

#include <windows.h>   /* for HINSTANCE, APIENTRY, MessageBox */
#include "mathlink.h"

int APIENTRY WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                     LPSTR lpCmdLine, int nCmdShow)
{
    MessageBox(NULL, "Hello from WinMain!", NULL, MB_OK);

    /* buffer and argument vector filled in by MLScanString */
    char buff[512];
    char FAR * buff_start = buff;
    char FAR * argv[32];
    char FAR * FAR * argv_end = argv + 32;

    if( !MLInitializeIcon( hInstance, nCmdShow ) )
    {
        return 1;
    }

    /* split the command line into argv[] entries */
    MLScanString( argv, &argv_end, &lpCmdLine, &buff_start);

    return MLMain( argv_end - argv, argv);
}

Anyway, I wanted to thank you for your help, and to inform other MathLink users not to use gcc and/or g77 under Windows to create MathLink programs. Warmest regards, Matt -- Matthew D. Langston SLD, Stanford Linear Accelerator Center langston at SLAC.Stanford.EDU ----- Original Message ----- From: "John Fultz" <jfultz at wolfram.com> To: mathgroup at smc.vnet.net Subject: [mg31852] [mg31621] Re: [mg31600] No joy using gcc 2.95.3 + MathLink v3r9 + Windows 2000 SP2 > On Fri, 16 Nov 2001 06:38:04 -0500 (EST), Matthew D. Langston wrote: > > It sounds to me like the gcc version of your program isn't passing the > command-line arguments to MLMain() correctly. If I had to guess, I'd say > that gcc is using the main() startup function instead of WinMain(), and > you didn't write your main() function to correctly pass argc and argv into > MLMain(). > > Sincerely, > > John Fultz > jfultz at wolfram.com > User Interface Group > Wolfram Research, Inc.
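For comparison with the WinMain version above, the console-style entry point that John's reply describes, in which the runtime supplies argc and argv and they are passed straight to MLMain(), is roughly the following. This is a hedged sketch of the conventional MathLink template main, not code taken from the thread.

#include "mathlink.h"

int main(int argc, char *argv[])
{
    /* MLMain parses the link options from the command line,
       opens the link and runs the installed functions */
    return MLMain(argc, argv);
}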
http://forums.wolfram.com/mathgroup/archive/2001/Dec/msg00112.html
CC-MAIN-2014-52
refinedweb
348
62.58
Chatlog 2012-05-10 From Provenance WG Wiki See original RRSAgent log or preview nicely formatted version. Please justify/explain all edits to this page, in your "edit summary" text. 14:43:59 <RRSAgent> RRSAgent has joined #prov 14:43:59 <RRSAgent> logging to 14:44:01 <trackbot> RRSAgent, make logs world 14:44:01 <Zakim> Zakim has joined #prov 14:44:03 <trackbot> Zakim, this will be 14:44:03 <Zakim> I don't understand 'this will be', trackbot 14:44:04 <trackbot> Meeting: Provenance Working Group Teleconference 14:44:04 <trackbot> Date: 10 May 2012 14:44:04 <Luc> Zakim, this will be PROV 14:44:04 <Zakim> ok, Luc; I see SW_(PROV)11:00AM scheduled to start in 16 minutes 14:44:15 <Luc> Agenda: 14:44:23 <Luc> Chair: Luc Moreau 14:44:34 <Luc> Scribe: dgarijo 14:44:40 <Luc> rrsagent, make logs public 14:44:49 <Luc> Regrets: Curt Tilmes 14:44:58 <Luc> Topic: Admin 14:51:05 <Zakim> SW_(PROV)11:00AM has now started 14:51:26 <pgroth> Zakim, who is on the call? 14:51:26 <Zakim> On the phone I see no one 14:51:47 <pgroth> Zakim, who is on the call? 14:51:47 <Zakim> On the phone I see no one 14:51:50 <Zakim> SW_(PROV)11:00AM has ended 14:51:51 <Zakim> Attendees were 14:52:16 <Zakim> SW_(PROV)11:00AM has now started 14:52:30 <pgroth> Zakim, who is on the call? 14:52:30 <Zakim> On the phone I see no one 14:53:39 <GK_> GK_ has joined #prov 14:54:11 <GK> GK has joined #prov 14:54:49 <Paolo> Paolo has joined #prov 14:57:14 <MacTed> MacTed has joined #prov 14:59:56 <lebot> lebot has joined #prov 15:00:03 <smiles> smiles has joined #prov 15:00:08 <Luc> zakim, who is on the call? 15:00:08 <Zakim> On the phone I see no one 15:00:23 <lebot> Zakim, I am no one 15:00:23 <Zakim> I don't understand 'I am no one', lebot 15:00:31 <Luc> ;-) 15:00:31 <lebot> zakim, who is on the phone? 15:00:31 <Zakim> On the phone I see no one 15:00:34 <dgarijo> dgarijo has joined #prov 15:01:00 <khalidbelhajjame> khalidbelhajjame has joined #prov 15:01:05 <Luc> zakim, who is on the call? 15:01:05 <Zakim> On the phone I see no one 15:01:08 <zednik> zednik has joined #prov 15:01:14 <dgarijo> scribe:dgarijo 15:01:18 <stephenc> stephenc has joined #prov 15:01:53 <jun> jun has joined #prov 15:01:54 <jcheney> jcheney has joined #prov 15:02:06 <TomDN> TomDN has joined #prov 15:02:14 <MacTed> Zakim who, who's here? 15:02:19 <MacTed> Zakim, who's here? 15:02:27 <Luc> @sandro, zakim does not seem to know we are on the phone. Suggestion? 15:02:31 <Zakim> On the phone I see no one 15:02:31 <dgarijo> Zakim is silent today... 15:02:38 <MacTed> Zakim, code? 15:02:38 <Zakim> On IRC I see TomDN, jcheney, jun, stephenc, zednik, khalidbelhajjame, dgarijo, smiles, lebot, MacTed, Paolo, GK, GK_, Zakim, RRSAgent, pgroth, Luc, trackbot, stain, sandro 15:02:45 <Zakim> the conference code is 7768 (tel:+1.617.761.6200 sip:zakim@voip.w3.org), MacTed 15:03:38 <dgarijo> Luc: admin Issues, release of documents: PAQ, proposals, organization about connections and bundles 15:03:44 <Luc> 15:03:54 <Luc> proposed: to accept last week's minutes 15:03:57 <dgarijo> +1 15:03:58 <smiles> +1 15:04:02 <jcheney> +1 15:04:03 <TomDN> +1 15:04:11 <khalidbelhajjame> +0 (wasn't present) 15:04:14 <lebot> "Presentation on editorial changes to e PAQ" ? 15:04:22 <jun> +1 15:04:29 <Paolo> where are the minutes? 15:04:40 <GK> e PAQ I think means "the PAQ" 15:04:47 <SamCoppens> SamCoppens has joined #prov 15:04:49 <GK> +1 15:04:56 <Luc> Accepted: last week's minutes 15:04:57 <Paolo> +1 15:05:03 <lebot> +1 15:05:06 <Luc> zakim, who is on the call? 
15:05:06 <Zakim> On the phone I see no one 15:05:34 <dgarijo> Luc: review of actions 15:05:44 <dgarijo> ... 2 on Satya to announce the documents. 15:05:44 <satya> satya has joined #prov 15:05:58 <dgarijo> Luc: I believe it is done 15:05:59 <Zakim> Zakim has left #prov 15:06:02 <Zakim> Zakim has joined #prov 15:06:05 <dgarijo> Paul: yes it is complete 15:06:05 <sandro> zakim, this is prov 15:06:05 <Zakim> ok, sandro; that matches SW_(PROV)11:00AM 15:06:36 <pgroth> yes 15:06:36 <pgroth> he did it 15:06:37 <dgarijo> Luc: action on Sandro, will do that next week. Another one on Paolo (Data One), done 15:06:54 <jun> yes, we got it! 15:07:05 <pgroth> +q 15:07:10 <dgarijo> Luc: just a reminder for scribes 15:07:12 <Luc> q? 15:07:25 <dgarijo> Paul: comment on annoucements. 15:07:30 <pgroth> public-prov-wg 15:07:39 <pgroth> public-prov-comments 15:07:49 <jcheney> lots of echos 15:07:53 <dgarijo> ... We used public prov-wg as the mailing list, but it should be public-prov-comments. 15:08:42 <dgarijo> sandro: I'll see if I can set up something to fix that. 15:08:48 <GK> PAQ has public-prov-comments (though not included in this call) 15:09:16 <pgroth> ack pgroth 15:09:25 <Luc> topic: release of document <Luc>Summary: We reviewed the dissemination activities undertaken following last week's release. Stephan confirmed that survey stakeholders were sent the announcement message. 15:09:28 <dgarijo> Luc: release of documents. 15:09:34 <dgarijo> ... how dissemination is going? 15:09:45 <dgarijo> ... stephan Zednick did something, I believe. 15:10:03 <dgarijo> stephanZ: I send an email to the stake holders that had filled the survey 15:10:06 <dgarijo> Luc: thanks 15:10:09 <Luc> q? 15:10:17 <Luc> Topic: PAQ <Luc>Summary: The group approved the release of the PAQ document as a working draft. Paul will raise issues against the document in a week's time. 15:10:44 <dgarijo> pgroth: I got a few responses with people reviewing / trying to implement 15:10:51 <dgarijo> ... 3 responses 15:10:58 <dgarijo> Luc: I had 1 response too. 15:11:03 <dgarijo> Luc: PAQ 15:11:09 <GK_> 15:11:10 <Luc> [edit] 15:11:20 <Luc> Proposal: to release PAQ as a working draft 15:11:28 <Luc> q? 15:11:38 <dgarijo> ... release this version of the document as a working draft. Any comments/feedback? 15:11:46 <Luc> Proposal: to release PAQ as a working draft 15:11:54 <smiles> +! 15:11:55 <satya> +1 15:11:55 <TomDN> +1 15:11:56 <lebot> +1 15:11:59 <dgarijo> +1 15:12:00 <jcheney> +1 15:12:01 <zednik> +1 15:12:02 <khalidbelhajjame> +1 15:12:03 <Paolo> +1 15:12:08 <SamCoppens> +1 15:12:12 <GK> +1 15:12:14 <smiles> +1 15:12:21 <Luc> Accepted: to release PAQ as a working draft 15:12:44 <Luc> Topic: all documents <Luc>Summary: we are aiming to complete the next iteration by the end of May, with a release for internal review scheduled for June 1st. The editors indicated what they were working on, and for prov-o, prov-dm, prov-n, prov-primer, believe that they are on schedule for a June 1st release. The prov-constraints editors seek further feedback from the reviewers and the group. 15:12:45 <dgarijo> Luc: editors have now de green light to proceed and contact the web master 15:12:59 <pgroth> @gk I'm on vacation next week so won't do anything then 15:13:05 <dgarijo> ... on f2f2 we agreed on a time table 15:13:27 <dgarijo> ... we have plans to release new version of the docs for internal review for Jun 1st 15:13:52 <dgarijo> ... we (Paul and I) would like to know the plans form various editors in order to achieve this. 
15:13:53 <jun> s/Jun/June/ 15:13:55 <Luc> 15:14:06 <dgarijo> @Jun thanks. 15:14:45 <dgarijo> Luc: we are going through the issues in the DM, will be dealing with bundles (hopefully today) 15:15:36 <dgarijo> ... in terms of prov-n we are finilizing the syntax of identifiers + outstanding issues. We think we will achieve the deadline. What do other editors plan to do? 15:15:46 <GK> @paul - I'm unclear about details of the publication procedure - I can have a go at the export and pubrules checking, but if I get stuck I guess it's not crucial if we don;'t make it until after next week? 15:16:33 <lebot> q+ to acknowledge 1 June is okay. On the plate: completing issues in tracker, refining the examples for each term, "catching up" to DM-*, and overall editing for clarity (with more narrative on terms). 15:16:54 <dgarijo> jamesC: The constraints haven't been reviewd yet. I hope to hear from Tim and Graham (not necessarily right now) 15:17:09 <dgarijo> s/reviewd/reviewed 15:17:09 <lebot> q+ again to james - he's ready for another review? 15:17:39 <dgarijo> ... I reorganized the doc. 15:17:42 <pgroth> pgroth has joined #prov 15:17:52 <sandro> +Testing 15:18:00 <Luc> q? 15:18:00 <dgarijo> Luc: we have not received feedback from the other 2 reviewers. 15:18:46 <dgarijo> tlebo: I was waiting from James to say whether the doc war ready to be reviewed. 15:19:23 <dgarijo> jamesC: I'd like to know if previous issues have been fixed. 15:19:32 <Luc> Action on tlebot to review latest prov-constraints 15:19:32 <trackbot> Sorry, couldn't find user - on 15:19:39 <dgarijo> Luc: action on tim to review the doc 15:19:58 <Luc> Action tlebot to review latest prov-constraints 15:19:58 <trackbot> Sorry, couldn't find user - tlebot 15:20:01 <dgarijo> graham: I'll have a look too 15:20:20 <Luc> Action GK to review latest prov-constraints 15:20:20 <trackbot> Created ACTION-89 - Review latest prov-constraints [on Graham Klyne - due 2012-05-17]. 15:20:24 <dgarijo> ... my previous comments might have been overtaken by reorganization 15:20:33 <lebot> @luc, sorry, I slipped to @lebot today... 15:20:33 <stainPhone> stainPhone has joined #prov 15:20:46 <lebot> q? 15:20:47 <Luc> Action lebot to review latest prov-constraints 15:20:47 <trackbot> Created ACTION-90 - Review latest prov-constraints [on Timothy Lebo - due 2012-05-17]. 15:21:10 <dgarijo> Luc: prov-o document 15:21:41 <Luc> q? 15:21:43 <dgarijo> lebot: The plan for the next 3 weeks is to create examples for every term and clean the issues 15:21:48 <Luc> ack leb 15:21:48 <Zakim> lebot, you wanted to acknowledge 1 June is okay. On the plate: completing issues in tracker, refining the examples for each term, "catching up" to DM-*, and overall editing for 15:21:52 <Zakim> ... clarity (with more narrative on terms). 15:22:14 <dgarijo> smiles: alternative formats for the examples (prov-o and prov-n, xml) 15:22:41 <dgarijo> ... (this is for the primer) Ask Stian and Paolo to see if ???? 15:22:47 <Paolo> ok fine 15:23:16 <dgarijo> Luc: Graham and Paul, can you synchronize for the next release of the PAQ? 15:23:41 <smiles> @dgarijo We will ask Paolo and Stian to check the primer hasn't become out of date with respect to the DM and ontology respectively 15:23:52 <dgarijo> Paul: there are some issues about reorganization, I'll come back in a week 15:24:00 <dgarijo> @smiles, thanks 15:24:33 <dgarijo> Luc: have we got plans for releasing best practice documents? 15:24:48 <dgarijo> ... DC best practices. 
15:25:15 <stainPhone> stainPhone has joined #prov 15:25:17 <dgarijo> Paul: I'll ask offline. 15:26:06 <dgarijo> Dgarijo: I'll tell Kai about the deadline. 15:26:09 <Luc> q? 15:26:14 <Luc> ack again 15:26:14 <Zakim> again, you wanted to james - he's ready for another review? 15:26:22 <Luc> topic: WasQuotedFrom <Luc>Summary: the proposal to rename WasQuotedFrom to WasAQuoteFrom was not endorsed. The group is invited to continue discussion by email. 15:26:32 <Luc> 15:26:38 <Luc> Proposal: rename WasQuotedFrom into WasAQuoteFrom 15:26:52 <dgarijo> Luc: change wasQuotedFrom->wasAQuoteFrom? 15:27:03 <stainPhone> I'm struggling with Zakim passcode 15:27:11 <Luc> q? 15:27:17 <dgarijo> any comments? 15:27:22 <Luc> Proposal: rename WasQuotedFrom into WasAQuoteFrom 15:27:27 <dgarijo> +1 15:27:27 <smiles> +1 15:27:28 <khalidbelhajjame> +1 15:27:29 <TomDN> +1 15:27:32 <MacTed> +1 15:27:33 <jcheney> +1 15:27:34 <satya> +1 15:27:34 <SamCoppens> +1 15:27:39 <jun> -1 15:27:43 <lebot> -1 15:27:47 <Paolo> +1 15:27:49 <GK> 0 15:27:51 <zednik> 0 15:27:53 <jun> I never had trouble with this property name. so -1 from me 15:28:01 <sandro> 0 15:28:03 <lebot> if it was a quote, what is it now? 15:28:08 <stainPhone> +1 15:28:42 <stainPhone> Now it really is a quote, not a "quoted" 15:28:46 <pgroth> he has a point 15:29:09 <dgarijo> Jun: it was clear for me before 15:29:18 <dgarijo> ... not convinced by the new name 15:29:26 <dgarijo> +q 15:29:34 <stainPhone> Domain of wasQ should be a quote, right? 15:29:42 <zednik> quote can be noun or verb, quoted is clear verb 15:29:52 <lebot> danielG: it's not clear, which is quoted, and which is quoted from? (it flips) 15:29:58 <stainPhone> Q+ 15:30:12 <Luc> ack dga 15:30:19 <pgroth> q+ 15:30:20 <lebot> danielG: DM def is clear, but from just the name it is confusing 15:30:52 <jun> How is that different from wasDerivedFrom? 15:31:00 <dgarijo> stian: I got the same feeling as Daniel. And probelms with the direction too. 15:31:00 <Luc> ack st 15:31:12 <Luc> ack pg 15:31:25 <lebot> @jun, the other nice aspect of wasQuotedFrom was its parallel to wasDerivedFrom. 15:31:34 <dgarijo> paul: given that there is no consensus, this has to be talked more on the mailing list. 15:31:37 <lebot> +1 to taking it back to email (sorry that I missed it) 15:31:38 <khalidbelhajjame> Given Tim comment, then isAQuoteFrom may be a better candidate 15:31:58 <jun> @lebot, yes. applying the pattern for names is also important for an ontology 15:31:58 <stainPhone> I agree with Daniel, it is important that lhs of wasQ is a quote, not what was quoted or something that contains a quote 15:32:12 <dgarijo> Luc: agreed, the discussion should come back to the mailing list. 15:32:13 <MacTed> "is" forces to "was" because of previous decisions to use past tense for all predicates 15:32:18 <Luc> q? 15:32:25 <lebot> yes, @khalid, isAQuoteFrom would work (but violate our "past tense") 15:32:31 <Luc> topic: WasStartedByActivity <Luc>Summary: the proposal to drop WasStartedByActivity and to extend wasStartedBy with an optional starter activity was adopted. 15:32:43 <jun> @MacTed, provenance is meant to record history, imo 15:32:44 <dgarijo> @lebot: isAquoteFrom workf for me too... 15:32:55 <dgarijo> s/workf/works 15:33:15 <lebot> @dgarijo, yes, but how to deal with the tense inconsistency? 15:33:16 <satya> as Tim said, isAQuoteFrom is not "past" tense? 15:33:36 <Luc> 15:33:36 <pgroth> (also "a" in a predicate name is just wierd) 15:33:45 <stainPhone> Jun, could you make a "clear" example of the old wasQuotedFrom ? 
15:33:47 <dgarijo> @lebot: I know, but I prefer the concept to be clear. 15:34:06 <khalidbelhajjame> @MacTed, @Jun, @Satya, maybe this example shows that past tense is not suitable for everything 15:34:11 <Luc> PROPOSAL: drop wasStartedByActivity and revise wasStartedBy as per; revise wasEndedBy similarly 15:34:17 <Luc> q? 15:34:24 <dgarijo> +q 15:35:10 <jun> @all, I think we should take the discussion onto the mailing list. Afraid we are cluttering the chat 15:35:12 <lebot> danielG: concern is for prov-o and [] wasEstablsihedBy (?). Could do it in a single statement, must now use a qualified relationship to express it. 15:35:17 <Luc> ack dga 15:35:34 <stainPhone> I still think it is clearer than yet another relationship 15:35:40 <lebot> @macted, that's were we started months ago (to expand the range) 15:35:45 <lebot> s/were/where/ 15:35:50 <zednik> q+ 15:36:30 <dgarijo> lebot: expanding the range og wasStartedBy was where we were several months ago 15:36:32 <stainPhone> We already had this issue for activity start time only 15:37:06 <dgarijo> lebot: I'm in favour of this proposal precisely because of the indirection 15:37:31 <dgarijo> stephanZ: do we have a wasTriggerebBy relationship 15:37:34 <dgarijo> Luc: no 15:37:52 <satya> @Zednick - we had it in an earlier version (wasTriggeredBy) 15:38:06 <Luc> ack zedn 15:38:17 <dgarijo> ... the start of an activity has a trigger which is an entity and we are allowing the activity to be there as well 15:38:17 <jun> @Zednick, I thought the current wasStartedByActivity is close to wasTriggeredBy of OPM 15:38:27 <Luc> q? 15:38:31 <zednik> q- 15:38:36 <Luc> PROPOSAL: drop wasStartedByActivity and revise wasStartedBy as per; revise wasEndedBy similarly 15:38:43 <TomDN> +1 15:38:44 <lebot> +1 15:38:46 <stainPhone> +1 15:38:49 <MacTed> +1 15:38:51 <Paolo> +1 15:38:55 <jcheney> +1 15:38:56 <GK> 0 15:38:56 <SamCoppens> +1 15:38:58 <dgarijo> +0 (If everyone is ok I won't vote against it) 15:39:01 <satya> 0 15:39:05 <khalidbelhajjame> +1 for dropping wasActivity, +0 for revising wasStartBy 15:39:07 <jun> 0 15:39:08 <sandro> 0 15:39:20 <Luc> Accepted: drop wasStartedByActivity and revise wasStartedBy as per; revise wasEndedBy similarly 15:39:36 <Luc> Topic: Collections <luc>Summary: Paul expressed his concern about the length of the collections section in the prov-o document. He suggested moving this section out of the prov-o document into a new, separate document, focusing on collections. The scope of such a potential new document was discussed. On the one hand, it could be pulling collection-related material from all the prov-o, prov-n, prov-constraints, and prov-dm documents to demonstrate how to apply PROV to a new application/domain. On the other hand, it could be lighter weight, combining some primer-style introduction with the prov-o collection section. Paul also brought up Graham's suggestion of restructuring prov-dm (not prov-o) into two separate documents, core vs extension. It was noted that this organization was originally adopted in prov-dm, but was abandoned because it lacked justification. It was also noted that editors are concerned by the amount of time involved in any form of restructuring, and that the group cannot afford multiple of those changes without affecting the release schedule. The group agreed that it needs concrete proposals to make decisions. Paolo and Graham volunteered to produce table of contents of potential documents. It is anticipated that the group will make a decision on this reorganization next week. 
15:39:40 <dgarijo> Luc:collections 15:40:27 <dgarijo> pgroth: worried about the length of the section on collections. 15:40:39 <pgroth> 15:40:48 <dgarijo> ... I made a proposal last week that we should separate collections from PROV-O 15:41:15 <dgarijo> ... pull collections from prov-DM and prov-o and just put them in a separate document 15:41:19 <pgroth> 15:41:47 <dgarijo> ... Graham proposed to have a greater separation in the document. Breaking the model into core and extensions 15:42:47 <dgarijo> ... we already started with core and extensions, but in the end we put it all together. We would need to decide about this (break/not break ) 15:43:04 <dgarijo> ... do we break just the collection or the rest of the concepts too? 15:43:34 <Luc> q? 15:43:38 <dgarijo> ... any comments on this? 15:44:09 <pgroth> there's a lot of echo 15:44:18 <pgroth> Zakim, who is making noise? 15:44:29 <Zakim> pgroth, listening for 10 seconds I heard sound from the following: ??P17 (100%), ??P45 (19%), ??P50 (40%) 15:44:46 <pgroth> Zakim, mute ??P50 15:44:46 <Zakim> sorry, pgroth, I do not know which phone connection belongs to ??P50 15:45:12 <dgarijo> lebot: the dictionaries section stands out as an outlier. The proposal made by Paul would allow us to focus on the principal aspects of prov-o. I would be very happy to get rid of the dictionaries section 15:45:46 <dgarijo> Luc: do you want to separate the namespace as well. 15:46:43 <Luc> q? 15:46:44 <pgroth> q+ to say that namespacing issues should be separated from this discussion 15:46:54 <dgarijo> lebot: Prov-o is aimed to be expanded and specialized. It would make sense to have another namespace as well 15:47:04 <Luc> ack pgr 15:47:04 <Zakim> pgroth, you wanted to say that namespacing issues should be separated from this discussion 15:47:08 <smiles> I agree with everything Tim said 15:47:25 <pgroth> ack pgroth 15:47:34 <dgarijo> pgroth: namespace discussion should be separated from the discussion of the documents. There are advantages and disadvantages to both. 15:47:35 <Paolo> q? 15:47:37 <lebot> +1 pgroth, namespace is separate; it can be decided after the "split" to collections document. 15:47:58 <TomDN> q+ to ask which other concepts would be reorganized if we were to go for option 2? 15:48:01 <Paolo> I have already expressed my support for this proposal 15:48:01 <Luc> q? 15:48:24 <Luc> ack TomDN 15:48:24 <Zakim> TomDN, you wanted to ask which other concepts would be reorganized if we were to go for option 2? 15:48:32 <Paolo> q+ 15:48:37 <pgroth> @TomDN I have no idea 15:48:39 <dgarijo> tom: wondering if we were to go for the second option, which other terms would be removed for the core? 15:48:42 <lebot> @tomdn, ? 15:48:44 <pgroth> it would be a huge debate 15:49:00 <stainPhone> I have to go, but I would vote for extension doc. We can then see if wasQuoteOf belong there as well. 15:49:04 <lebot> most of those are specializations (e.g. Person sub Agent) 15:49:05 15:49:21 <dgarijo> Graham: almost everything but the current starting points. 15:50:26 <dgarijo> ... some of the discussion of the terms is difficult for non provenance experts to pick up. 15:50:36 <dgarijo> ... the basic structural properties are very clear 15:50:52 <dgarijo> ... the issue of core vs extensions came previously 15:51:10 <Luc> q? 15:51:37 <Luc> ack pao 15:52:31 <dgarijo> Paolo: about the structure of collections: if we separate collections, would them all be in the same monolithic thing? 15:52:38 <dgarijo> ... (dm+ontology+examples) 15:52:44 <Luc> q? 15:52:46 <dgarijo> ... 
or separated documents. 15:52:48 <pgroth> q+ 15:53:21 <dgarijo> pgroth: editorially, it's a lot of work. 15:53:53 <pgroth> ack pgroth 15:53:53 <lebot> -1 to major redo for each "section" - yipes! 15:53:59 <Luc> q+ 15:54:08 <dgarijo> ... I'm afraid that even with a major redo we won't address graham's omments 15:54:09 <pgroth> ack Luc 15:55:07 <dgarijo> luc: this notion of starting points doesn't necessarily map to all technologies. What is in starting points is really the binary relationships. 15:55:16 <SamCoppens1> SamCoppens1 has joined #prov 15:55:19 <dgarijo> ... in other technologies, this is not the case. 15:55:48 <TomDN_> TomDN_ has joined #prov 15:55:51 <dgarijo> .... I see this as a challenge 15:56:05 <dgarijo> ... how do we move on? 15:56:09 <TomDN_> @GK: I think it's a good idea, but it's a slippery slope if we don't clearly define what the "core" is. Like Paul said, it could lead to a huge discussion. 15:56:17 <dgarijo> paul: some consensus aboyt separating collections 15:56:34 <MacTed> I see a LOT of potential reward from the described re-org. but it would undeniably be a huge effort. 15:56:44 <lebot> @tomdn, the prov-o team has some experience is determining which constructs are in which partition. 15:56:49 <dgarijo> ... what do the group the think about the core proposal? 15:56:52 <GK> @TomDN - did you seem the text I pasted above? 15:56:59 <lebot> @tomdn, the owl file has annotations for those partitions. 15:57:26 <smiles> +q 15:57:27 <dgarijo> ... by next telecon it would be great to have concrete proposals so we can vote 15:57:33 <SamCoppens> SamCoppens has joined #prov 15:57:40 <TomDN_> TomDN_ has joined #prov 15:57:41 <Luc> ack smi 15:57:44 <GK> It's basically the three core concepts, plus the top-level properties that connect them in various ways. 15:57:55 <TomDN_> (sorry, IRC keeps timing out) 15:58:13 <TomDN_> +q 15:58:21 <dgarijo> ... I don't really get the problem. I can understand the collections out, but I'll wait for the proposal 15:58:23 <Luc> ack tom 15:58:46 <dgarijo> tom: maybe we should do this execrise with collections and then we get an idea of seeing how much work is that 15:58:48 <pgroth> this is a major major piece of work 15:59:17 <dgarijo> Luc: I'm not in favour of these experiments because it is a lot of editing 15:59:28 <dgarijo> ... I don't want to do that iteratively 15:59:32 <GK> q+ to say that I don't propose splitting the PROV-O 15:59:40 <Paolo> Collections are pervasive (except the primer) -- change impacts everything 15:59:44 <Luc> q? 15:59:45 <dgarijo> Tom: agreed, specially in this stage of the process. 15:59:46 <khalidbelhajjame> Instead of removing parts of the document, which I am reluctant to, I would prefer restructering 15:59:51 <lebot> @luc, we need to find some way to relax the weight that Collections puts on all of the documents. 16:00:06 <jcheney> q+ 16:00:06 <Paolo> q+ 16:00:29 <lebot> @gk, could you paraphrase waht you just said? 16:00:46 <pgroth> he said he only wants to only adjust the dm document 16:00:55 <dgarijo> Luc: you propose not to touch the ontology but to change the DM 16:01:13 <dgarijo> GK: yes 16:01:31 <Paolo> q+ to say that I wouldn't want readers to be forced into the ontology in order to understand a provenance model 16:01:41 <Luc> ack gk 16:01:41 <Zakim> GK, you wanted to say that I don't propose splitting the PROV-O 16:01:43 <dgarijo> Luc: so you are not seeing the DM document as a reference document. 
16:02:06 <pgroth> q+ to say this was already decided at F2F 16:02:09 <dgarijo> GK: I think it has a central role in the family of specification. It should be an introduction + reference for the structure. 16:02:25 <dgarijo> jamesC: are we going to commit to this change now? 16:02:38 <Luc> ack jch 16:02:42 <SamCoppens> SamCoppens has joined #prov 16:03:12 <dgarijo> ... I would be inclined to say: first create a document with all the collections and not delete the stuff from the current documents 16:03:17 <MacTed> I'm sorry to say, but it's important to -- past decisions aren't always correct. revisions happen. just because something was decided at F2F doesn't mean it will stick throughout. 16:03:41 <dgarijo> Luc: we will come with proposals next week for restructure the docs. 16:03:48 <Luc> q? 16:04:05 <satya> sorry have to leave 16:04:12 <khalidbelhajjame> Size should not be seen as an issue, if people want to read a short document, they can read the primer 16:04:42 <dgarijo> Paolo: there was a discussion on the face to face on whether the ontology should be an entry point or not. 16:04:49 <Luc> q? 16:04:55 <Luc> ack paol 16:04:55 <Zakim> Paolo, you wanted to say that I wouldn't want readers to be forced into the ontology in order to understand a provenance model 16:05:00 <dgarijo> ... the ontology is an encoding, not the reference for an entry point 16:05:11 <dgarijo> ... it is ONE of the possible encodings 16:05:53 <TomDN> readers can of course always skip the section on collections and still understand the rest of the DM :) 16:06:04 <dgarijo> paul: summary: primer is good. Provo would be improved if we removed collections. Prov DM should be reorganized (proposals the next week) 16:06:27 <dgarijo> luc: what is the next step. Is it to create concrete proposals ? 16:06:30 <dgarijo> paul: yes 16:06:55 <dgarijo> ... this is all about organization, not editorial per se. We need to keep the text that was written 16:07:14 <dgarijo> luc: who would write which proposal? 16:07:17 <GK> q+ 16:07:22 <pgroth> ack pgroth 16:07:22 <Zakim> pgroth, you wanted to say this was already decided at F2F 16:07:26 <Luc> ack gk 16:07:43 <dgarijo> graham: I guess this forces me to create one with core + extension of dm 16:07:48 <dgarijo> luc: please use wiki 16:07:51 <dgarijo> GK: sure 16:08:03 <dgarijo> luc: volunteers for a collection document? 16:08:50 <dgarijo> .... we may have several proposals on the table: (TIM) We use this as a mechanism to show how the model can be extended. 16:09:09 <Paolo> q+ 16:09:09 <dgarijo> ... another which is lightweight is to separate collections in another document. 16:09:32 <dgarijo> Paolo: I really had the first in mind. I can write an outline 16:10:23 <pgroth> so paolo will do it 16:10:28 <pgroth> :-) 16:10:28 <lebot> Like Paolo, I had the first in mind too. Take Collections from everything into a new document. 16:10:40 <lebot> I'll help Paolo :-0 16:10:58 <TomDN> if any of the proposals need help, id be happy to help as well 16:11:00 <Luc> topic: bundle <Luc>Summary: A draft text has been produced in response to issues raised about accounts and notes. This text will be incorporated in the editor's draft soon. The working group is invited to provide feedback. 16:11:02 <Paolo> great Tim, much appreciated 16:11:08 <Luc> 16:11:25 <pgroth> @TomDN maybe you want to discuss with GK 16:11:28 <dgarijo> Luc: in the last iteration we didn't work on accounts. 16:11:33 <TomDN> sure 16:11:37 <dgarijo> ... Tim and GK had comments on accounts 16:12:09 <dgarijo> ... 
if you follow that document you'll see an outline of what would go into DM for expressing provenance of provenance 16:12:24 <GK> I thought we had discussed keeping the term "Account", but just to denote a bundle of proveance statement? 16:12:33 <dgarijo> ... relation hadProvenanceIn inspired by PAQ 16:13:15 <Luc> q? 16:13:20 <dgarijo> ... I invite you to have a look at the document and start discussion on the mailing list 16:13:22 <pgroth> yes 16:13:23 <Paolo> ack 16:13:35 <lebot> bye! 16:13:38 <SamCoppens> SamCoppens has left #prov 16:13:38 <dgarijo> Luc: good bye 16:13:44 <khalidbelhajjame> bye 16:13:47 <Luc> rrsagent, set log public 16:13:51 <Luc> rrsagent, draft minutes 16:13:51 <RRSAgent> I have made the request to generate Luc 16:14:02 <Zakim> SW_(PROV)11:00AM has ended 16:14:04 <Zakim> Attendees were 16:14:05 <Luc> hi daniel, I will take care of the minutes, thanks! 16:14:10 <Luc> trackbot, end telcon 16:14:10 <trackbot> Sorry, Luc, I don't understand 'trackbot, end telcon '. Please refer to for help 16:14:17 <TomDN> bye, @GK: i'll contact you via email 16:14:28 <dgarijo> @Luc, Ok, good bye! 16:35:56 <MacTed> trackbot, end call 16:35:56 <trackbot> Sorry, MacTed, I don't understand 'trackbot, end call'. Please refer to for help 16:36:03 <MacTed> trackbot, end meeting 16:36:03 <trackbot> Zakim, list attendees 16:36:03 <Zakim> sorry, trackbot, I don't know what conference this is 16:36:11 <trackbot> RRSAgent, please draft minutes 16:36:11 <RRSAgent> I have made the request to generate trackbot 16:36:12 <trackbot> RRSAgent, bye 16:36:12 <RRSAgent> I see no action items # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000466
Dear community, I am aware that the topic is discussed in several places - nevertheless I didn't find an answer up to now (sorry if I missed it). My goal is simply to count the total street length of all streets within a certain city (the exact district) for a running project. Two things I can't solve:

I am not able to export ONLY a certain city district within its exact borders - like: The information is there ... in theory it would be enough to export only the data with a certain regional or community key. I tried it with - but even with a certain export shape formed I do not get the correct data. So I do not look for rectangle data - only the data within the city border.

I can't find a solution to calculate the street length. In QGIS there is, under vector/analytic-tools, "summary line length" - but this didn't get me any results and I am not sure if it counts only street length or every line. Then I found osm-length-2.pl but I couldn't get it to work (besides the fact that I do not have the right data). Sorry for this dummy question: I installed Perl - I have an OSM file - I run osm-length-2.pl, and how do I tell the script to use my OSM file? Within the script there is the info to use stdin - but I couldn't figure out HOW to use it and I am not a Perl programmer - sorry.

Appreciate any help - didn't think it would be so difficult :-O

Christoph

asked 22 Apr '19, 20:16 KCBCOM 41●1●1●3 accept rate: 0%

Have you tried this package? It is super easy. Install it:

    pip install osm-road-length

And run:

    import osm_road_length
    from shapely import wkt

    geometry = wkt.loads('POLYGON((-43.2958811591311 -22.853167273541693,-43.30961406928735 -23.035275736044728,-43.115980036084224 -23.02010939749927,-43.157178766552974 -22.832917893834313,-43.2958811591311 -22.853167273541693))')

    length = osm_road_length.get(geometry)

answered 30 Apr '20, 22:29 JOAO LUIZ 16●1 accept rate: 0%

Yet another possibility: you can run a query on Overpass API. This one produces a sample statistic for my home town, by value of highway. Length is in meters. This should also point you to the real issues: You should decide for yourself whether you want to count footways and tracks or not. Another source of uncertainty is dual carriageways; they would be counted twice because they are two ways in OpenStreetMap. Frontage roads, turn lanes and more may also affect the results. Finally, the request keeps ways on the boundary intact, thus slightly overestimating length values for that part of the problem. Nonetheless, for a best-effort guess the data is for sure good enough.

answered 26 Apr '19, 18:48 Roland Olbricht 6.6k●3●64●88 accept rate: 35%

Thanks for your answer Frederik! After trying your solution it still turned out a little bit complicated for me. So now I came up with the following workaround (QGIS 3.6.2): Steps 3+4 are fully described here:

Greetings - Christoph

answered 24 Apr '19, 20:30 edited 24 Apr '19, 20:34

For a precise result, you cannot hope to cut out "all streets within a city" from OSM before you compute the length, because streets can cross the city boundary and, if I understand you correctly, in this case you'd only want the length of the bit inside. One way of getting this done is:

QGIS will also allow you to filter out all roads and clip them at the polygon boundary but it is a more cumbersome and manual process.
answered 23 Apr '19, 09:27 Frederik Ramm ♦ 79.5k●90●699●1230 accept rate: 23%
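If you prefer to script the whole calculation rather than click through QGIS, here is a minimal sketch along the lines of the osm-road-length answer above. It assumes you already have the district boundary as a GeoJSON file; the file name district.geojson is only a placeholder, and the exact shape of the result returned by osm_road_length.get() is as documented by that package.

    import json

    import osm_road_length
    from shapely.geometry import shape

    # Load the district boundary polygon from a GeoJSON file (placeholder name).
    with open("district.geojson") as f:
        feature = json.load(f)["features"][0]

    boundary = shape(feature["geometry"])

    # The package fetches the OSM road network for the polygon and reports
    # road lengths; see its documentation for the result format.
    lengths = osm_road_length.get(boundary)
    print(lengths)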
Build a Cryptocurrency Algorithm Backtester

A cryptocurrency backtester. Sounds complicated? Well, they can be, but they can also be really simple. Like, under 100 lines of Python simple! That's what we're going to be exploring today.

Imagine you came up with a set of rules dictating when you should buy or sell a particular digital asset or stock — an investment strategy. Let's say that you did some research and found that digital assets go up in value when their average price over the past three days surpasses their average price of the last five days (simple moving averages strategy). Would you automatically trust that this strategy you came up with is totally correct and use it with your own money? I should hope not. Before you employ an investment strategy, you ought to test it.

A popular method of testing investment strategies to determine if they will work is seeing how they perform when given data from the past — backtesting. A backtester is any program that can feed historical data through the rules you came up with and manipulate a fake portfolio based on these rules so you can see how your strategy would have performed in the past.

For making our backtester, we will be using Python 2.7 and a few libraries (matplotlib, requests, json). We will design our crypto backtester as a terminal-based application. It will ask the user for some basic info such as what digital asset to measure, initial investment, and strategy, and the program will then gather some historical data and then run it through our backtester to produce a chart of our portfolio value over time.

Let's create a new file called backtester.py and import our modules. We will be using matplotlib to plot our graph and requests and json to fetch our data.

    import matplotlib.pyplot as plt
    import requests, json

Let's write our first function — our start() function. This function will be called at the start of our program and will ask the user for some data and then use that to determine what currency and strategy to use for the backtester. We need to get the raw_input for the following variables: the ticker, the initial investment, and the strategy selection.

Therefore, we'll first get the ticker from the user and fetch the data from the CryptoCompare API using the requests library (we are fetching minutely data (past 2000), but you may experiment with the API as you wish). After fetching the data, we'll pass the data, initial investment and strategy values into the moving_averages() function which we'll write next. If you wanted to add another strategy, you could simply add a selection for it (ex. if strategy == "2").

    def start():
        """
        Here we ask the user for some basic input, fetch our historical data
        and determine what strategy to use.
        """
        print "Starting Crypto Backtester V1"
        ticker = raw_input("Enter ticker: ").upper()
        # the CryptoCompare minute-history endpoint URL goes in the empty string below
        data_url = '' + ticker + '&tsym=USD&limit=2000&aggregate=1'
        response = requests.get(data_url)
        try:
            data = response.json()['Data']
        except:
            print "Sorry, this crypto isn't supported!"
            quit()
        historical_data = data
        print "Fetched historical data for crypto: " + ticker
        cash = raw_input("Enter initial investment: ")
        strategy = raw_input("Select (1) for the moving averages strategy: ")
        if strategy == "1":
            moving_averages(historical_data, ticker, cash)

Now, let's define the moving_averages function. We'll store the initial investment in the initial variable and convert both the initial and cash variables to integers. We then can define the crypto variable to have a value of 0 and define our x and y values as empty arrays.
Now, we start looping through the historical data (starting from index 5 just to be same with the averages). We can then calculate the three and five day averages by passing the data points as an array into the get_average function which we will define after. After we get the averages, we compare them to figure out whether we want to buy or sell the asset. If the three day average is greater than the five day average (short-term MA crosses long-term MA), it could indicate a trend of shifting up, and so it is a buy signal. If the five day average is greater than the three day average (long-term MA crosses short-term MA), it indicates a trend of shifting down, and so it is a sell signal.

If there is a "buy" signal, the asset is bought using half of the portfolio's available cash. The "buy" process simply subtracts the cash from our cash holdings and divides it by the current price of the currency to see how much of the asset should be added in the portfolio. If there is a "sell" signal, half of our asset holdings are sold (think, convert half of the number of crypto we have to cash).

At the end of each iteration, it calculates how much our portfolio is worth and appends an x (where we are in the list of minutely data points) and y value (the portfolio value) to our x_values and y_values. Lastly, we can call the plot_graph() function and determine our profit/loss.

    def moving_averages(historical_data, ticker, cash):
        """
        If the 3 day average price of ETH is above the 5 day average price, buy.
        If below, sell.
        """
        initial = int(cash)
        cash = int(cash)
        crypto = 0
        x_values = []
        y_values = []
        for place, data_set in enumerate(historical_data[5:-1]):
            three_day_average = get_average([historical_data[place-1], historical_data[place-2], historical_data[place-3]])
            five_day_average = get_average([historical_data[place-1], historical_data[place-2], historical_data[place-3], historical_data[place-4], historical_data[place-5]])
            if three_day_average > five_day_average:
                cash_used_to_buy = cash/2
                price = float(data_set["close"])
                number_of_crypto_we_just_bought = cash_used_to_buy/price
                crypto += number_of_crypto_we_just_bought
                cash -= cash_used_to_buy
                print "Just bought: " + str(number_of_crypto_we_just_bought) + " " + ticker
            if crypto > 1 and three_day_average < five_day_average:
                price = float(data_set["close"])
                number_of_crypto_being_sold = crypto/2
                new_cash = number_of_crypto_being_sold * price
                cash += new_cash
                crypto -= number_of_crypto_being_sold
                print "Just sold: " + str(number_of_crypto_being_sold) + " " + ticker
            portfolio_value = cash + (crypto * float(data_set["close"]))
            x_values.append(place)
            y_values.append(portfolio_value)
        print "Final portfolio value: " + str(portfolio_value)
        net = portfolio_value - initial
        if (net > 0):
            print "You profited: " + str(net)
        else:
            print("You lost: ") + str(net)
        plot_graph(x_values, y_values)

Before we finish, we need to define two more functions. Here's our get_average function:

    def get_average(averages_list):
        """
        Gets the average of some numbers
        """
        total = 0
        for data_set in averages_list:
            total += float(data_set["close"])
        return total/len(averages_list)

There isn't too much to explain here — it simply takes a list of inputs, gets the average and returns it. After we are finished backtesting, our backtest function calls the plot_graph() function:

    def plot_graph(x, y):
        """
        Plots our Graph
        """
        plt.plot(x, y)
        plt.xlabel("Minute")
        plt.ylabel("Portfolio Value")
        plt.show()

We have defined all of our functions.
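Before wiring everything up, here is an optional extra: the start() menu above leaves room for more strategies (the strategy == "2" branch mentioned earlier). As a minimal sketch, not part of the original tutorial, here is a hypothetical buy-and-hold benchmark that reuses plot_graph() and the same Python 2 conventions; the function name buy_and_hold is my own.

    def buy_and_hold(historical_data, ticker, cash):
        """
        Hypothetical benchmark strategy: put the whole initial investment
        into the asset at the first close price and simply hold it.
        """
        initial = int(cash)
        first_price = float(historical_data[0]["close"])
        crypto = initial / first_price
        x_values = []
        y_values = []
        for place, data_set in enumerate(historical_data):
            # portfolio value is just our holdings marked to the current close
            x_values.append(place)
            y_values.append(crypto * float(data_set["close"]))
        print "Final portfolio value: " + str(y_values[-1])
        plot_graph(x_values, y_values)

Wiring it in just means adding another branch in start(), for example: if strategy == "2": buy_and_hold(historical_data, ticker, cash).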
Now all we have to do is call the start function in the last line of our file:

    start()

Here you should see a graph of your portfolio's value over time. Here's one with Bitcoin and an initial investment of $10,000. (Yes, I lost money :D)

And there you have it: a simple digital asset backtester in under 100 lines of Python. Feel free to add more strategies or maybe even a GUI. It's all yours!

Owen is a high school senior and full stack developer. He currently works on Grand Street Technologies.
Users expect apps to be responsive and fast to load. An app with a slow startup time doesn’t meet this expectation, and can be disappointing to users. This sort of poor experience may cause a user to rate your app poorly on the Play store, or even abandon your app altogether. This document provides information to help you optimize your app’s launch time. It begins by explaining the internals of the launch process. Next, it discusses how to profile startup performance. Last, it describes some common startup-time issues, and gives some hints on how to address them. Launch Internals App launch can take place in one of three states, each affecting how long it takes for your app to become visible to the user: cold start, warm start, and lukewarm start. In a cold start, your app starts from scratch. In the other states, the system needs to bring the app from the background to the foreground. We recommend that you always optimize based on an assumption of a cold start. Doing so can improve the performance of warm and lukewarm starts, as well. To optimize your app for fast startup, it’s useful to understand what’s happening at the system and app levels, and how they interact, in each of these states. Cold start A cold start refers to an app’s starting from scratch: the system’s process has not, until this start, created the app’s process. Cold starts happen in cases such as your app’s being launched for the first time since the device booted, or since the system killed the app. This type of start presents the greatest challenge in terms of minimizing startup time, because the system and app have more work to do than in the other launch states. At the beginning of a cold start, the system has three tasks. These tasks are: - Displaying a blank starting window for the app immediately after launch. - Creating the app process. As soon as the system creates the app process, the app process is responsible for the next stages. These stages are: - Creating the app object. - Launching the main thread. - Creating the main activity. - Inflating views. - Laying out the screen. - Performing the initial draw. Once the app process has completed the first draw, the system process swaps out the currently displayed background window, replacing it with the main activity. At this point, the user can start using the app. Figure 1 shows how the system and app processes hand off work between each other. Figure 1. A visual representation of the important parts of a cold application launch. Performance issues can arise during creation of the app and creation of the activity. Application creation When your application launches, the blank starting window remains on the screen until the system finishes drawing the app for the first time. At that point, the system process swaps out the starting window for your app, allowing the user to start interacting with the app. If you’ve overloaded Application.oncreate() in your own app, the system invokes the onCreate() method on your app object. Afterwards, the app spawns the main thread, also known as the UI thread, and tasks it with creating your main activity. From this point, system- and app-level processes proceed in accordance with the app lifecycle stages. Activity creation After the app process creates your activity, the activity performs the following operations: - Initializes values. - Calls constructors. - Calls the callback method, such as Activity.onCreate(), appropriate to the current lifecycle state of the activity. 
Typically, the onCreate() method has the greatest impact on load time, because it performs the work with the highest overhead: loading and inflating views, and initializing the objects needed for the activity to run.

Warm start

A warm start has much lower overhead than a cold start: the app's process is still alive, and the system only has to bring your activity to the foreground, so the app can avoid repeating object initialization, layout inflation, and rendering. However, if some memory has been purged in response to memory trimming events, such as onTrimMemory(), then those objects will need to be recreated in response to the warm start event. A warm start displays the same on-screen behavior as a cold start scenario: The system process displays a blank screen until the app has finished rendering the activity.

Lukewarm start

A lukewarm start encompasses some subset of the operations that take place during a cold start; at the same time, it represents less overhead than a warm start. There are many potential states that could be considered lukewarm starts. For instance:

- The user backs out of your app, but then re-launches it. The process may have continued to run, but the app must recreate the activity from scratch via a call to onCreate().
- The system evicts your app from memory, and then the user re-launches it. The process and the Activity need to be restarted, but the task can benefit somewhat from the saved instance state bundle passed into onCreate().

Profiling Launch Performance

In order to properly diagnose start time performance, you can track metrics that show how long it takes your application to start.

Note: To reproduce the user experience, make sure you profile your app in non-debuggable mode. Debuggable mode enables debug features that result in a launch time atypical of what the user experiences.

Time to initial display

From Android 4.4 (API level 19), logcat includes an output line containing a value called Displayed. This value represents the amount of time elapsed between launching the process and finishing drawing the corresponding activity on the screen. The elapsed time encompasses the following sequence of events:

- Launch the process.
- Initialize the objects.
- Create and initialize the activity.
- Inflate the layout.
- Draw your application for the first time.

The reported log line looks similar to the following example:

    ActivityManager: Displayed com.android.myexample/.StartupTiming: +3s534ms

If you're tracking logcat output from the command line, or in a terminal, finding the elapsed time is straightforward. To find elapsed time in Android Studio, you must disable filters in your logcat view. Disabling the filters is necessary because the system server, not the app itself, serves this log. Once you've made the appropriate settings, you can easily search for the correct term to see the time. Figure 2 shows how to disable filters, and, in the second line of output from the bottom, an example of logcat output of the Displayed time.

Figure 2. Disabling filters, and finding the Displayed value in logcat.

The Displayed metric in the logcat output does not necessarily capture the amount of time until all resources are loaded and displayed: it leaves out resources that are not referenced in the layout file or that the app creates as part of object initialization. It excludes these resources because loading them is an inline process, and does not block the app's initial display.

Sometimes the Displayed line in the logcat output contains an additional field for total time. For example:

    ActivityManager: Displayed com.android.myexample/.StartupTiming: +3s534ms (total +1m22s643ms)

In this case, the first time measurement is only for the activity that was first drawn.
The total time measurement begins at the app process start, and could include another activity that was started first but did not display anything to the screen. The total time measurement is only shown when there is a difference between the single activity and total startup times.

You can also measure the time to initial display by running your app with the ADB Shell Activity Manager command. Here's an example:

    adb [-d|-e|-s <serialNumber>] shell am start -S -W com.example.app/.MainActivity -c android.intent.category.LAUNCHER -a android.intent.action.MAIN

The Displayed metric appears in the logcat output as before. Your terminal window should also display the following:

    Starting: Intent
    Activity: com.example.app/.MainActivity
    ThisTime: 2044
    TotalTime: 2044
    WaitTime: 2054
    Complete

The -c and -a arguments are optional and let you specify <category> and <action> for the intent.

Time to full display

You can use the reportFullyDrawn() method to measure the elapsed time between application launch and complete display of all resources and view hierarchies. This can be valuable in cases where an app performs lazy loading. In lazy loading, an app does not block the initial drawing of the window, but instead asynchronously loads resources and updates the view hierarchy.

If, due to lazy loading, an app's initial display does not include all resources, you might consider the completed loading and display of all resources and views as a separate metric: For example, your UI might be fully loaded, with some text drawn, but not yet display images that the app must fetch from the network. To address this concern, you can manually call reportFullyDrawn() to let the system know that your activity is finished with its lazy loading. When you use this method, the value that logcat displays is the time elapsed from the creation of the application object to the moment reportFullyDrawn() is called. Here's an example of the logcat output:

    system_process I/ActivityManager: Fully drawn {package}/.MainActivity: +1s54ms

The logcat output sometimes includes a total time, as discussed in Time to initial display.

If you learn that your display times are slower than you'd like, you can go on to try to identify the bottlenecks in the startup process.

Identifying bottlenecks

Two good ways to look for bottlenecks are Android Studio's Method Tracer tool and inline tracing. To learn about Method Tracer, see that tool's documentation. If you do not have access to the Method Tracer tool, or cannot start the tool at the correct time to gain log information, you can gain similar insight through inline tracing inside of your apps' and activities' onCreate() methods. To learn about inline tracing, see the reference documentation for the Trace functions, and for the Systrace tool.

Common Issues

This section discusses several issues that often affect apps' startup performance. These issues chiefly concern initializing app and activity objects, as well as the loading of screens.

Heavy app initialization

Launch performance can suffer when your code overrides the Application object, and executes heavy work or complex logic when initializing that object. Your app may waste time during startup if your Application subclasses perform initializations that don't need to be done yet. Some initializations may be completely unnecessary: for example, initializing state information for the main activity, when the app has actually started up in response to an intent. With an intent, the app uses only a subset of the previously initialized state data.
Other challenges during app initialization include garbage-collection events that are impactful or numerous, or disk I/O happening concurrently with initialization, further blocking the initialization process. Garbage collection is especially a consideration with the Dalvik runtime; the ART runtime performs garbage collection concurrently, minimizing that operation's impact.

Diagnosing the problem

You can use method tracing or inline tracing to try to diagnose the problem.

Method tracing

Running the Method Tracer tool reveals that the callApplicationOnCreate() method eventually calls your com.example.customApplication.onCreate method. If the tool shows that these methods are taking a long time to finish executing, you should explore further to see what work is occurring there.

Inline tracing

Use inline tracing to investigate likely culprits including:

- Your app's initial onCreate() function.
- Any global singleton objects your app initializes.
- Any disk I/O, deserialization, or tight loops that might be occurring during the bottleneck.

Solutions to the problem

Whether the problem lies with unnecessary initializations or disk I/O, the solution calls for lazy-initializing objects: initializing only those objects that are immediately needed. For example, rather than creating global static objects, move to a singleton pattern, where the app initializes objects only the first time it accesses them. Also, consider using a dependency injection framework like Dagger that creates objects and dependencies when they are injected for the first time.

Heavy activity initialization

Activity creation often entails a lot of high-overhead work. Often, there are opportunities to optimize this work to achieve performance improvements. Such common issues include:

- Inflating large or complex layouts.
- Blocking screen drawing on disk, or network I/O.
- Rasterizing VectorDrawable objects.
- Initialization of other subsystems of the activity.

Diagnosing the problem

In this case, as well, both method tracing and inline tracing can prove useful.

Method tracing

When running the Method Tracer tool, the particular areas to focus on are your app's Application subclass constructors and com.example.customApplication.onCreate() methods. If the tool shows that these methods are taking a long time to finish executing, you should explore further to see what work is occurring there.

Inline tracing

Use inline tracing to investigate likely culprits including:

- Your app's initial onCreate() function.
- Any global singleton objects it initializes.
- Any disk I/O, deserialization, or tight loops that might be occurring during the bottleneck.

Solutions to the problem

There are many potential bottlenecks, but two common problems and remedies are as follows:

- The larger your view hierarchy, the more time the app takes to inflate it. Flattening the hierarchy and avoiding inflation of parts of the UI that are not needed at launch both help here.
- Having all of your resource initialization happen up front also delays the first draw; where possible, move that work so the app can perform it lazily, after the initial layout is on screen.

A related measure is to set the activity's theme before calling super.onCreate(), so the window shown while the app starts already matches your UI:

    public class MyMainActivity extends AppCompatActivity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            // Make sure this is before calling super.onCreate
            setTheme(R.style.Theme_MyApp);
            super.onCreate(savedInstanceState);
            // ...
        }
    }
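Returning to the lazy-initialization advice under "Heavy app initialization", here is a minimal sketch of the singleton approach described there. The class and field names (AnalyticsHolder, AnalyticsClient) are illustrative placeholders, not part of the Android framework or its documentation.

    // Illustrative sketch: initialize an expensive client the first time it is
    // needed instead of in Application.onCreate().
    public class AnalyticsHolder {
        private static volatile AnalyticsClient sClient;

        public static AnalyticsClient get(Context context) {
            if (sClient == null) {
                synchronized (AnalyticsHolder.class) {
                    if (sClient == null) {
                        // Heavy work (disk I/O, parsing config) happens here,
                        // on first use, not at app launch.
                        sClient = AnalyticsClient.create(context.getApplicationContext());
                    }
                }
            }
            return sClient;
        }
    }

A dependency injection framework such as Dagger achieves the same effect declaratively, constructing objects only when they are first injected.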
Mesoderm - Schema class scaffold generator for DBIx::Class version 0.122290 use Mesoderm; use SQL::Translator; use DBI; my $dbh = DBI->connect($dsn, $user, $pass); my $sqlt = SQL::Translator->new(dbh => $dbh, from => 'DBI'); $sqlt->parse(undef); my $scaffold = Mesoderm->new( schema => $sqlt->schema, schema_class => 'My::Schema', ); $scaffold->produce(\*STDOUT); Mesoderm creates a scaffold of code for DBIx::Class using a schema object from SQL::Translator. At time of writing the version of SQL::Translator required is not available on CPAN and must be fetched directly from github. The result is a hierarchy of packages describes below. Moose is used so that any custom methods needed to be added to the result or resultset classes can be done by writing Moose::Role classes. This allows separation between generated code and written code. Mesoderm defines methods to map table names to class names, relationships and columns to accessor methods. It is also possible to have any table, relationship or column excluded from the generated model. If the defaults do not meet your needs, then it is trvial to subclass Mesoderm and provide overrides. Given a schema_class name of Schema and a schema containing a single table foo_bars the following packages would be created or searched for with the default settings. Top level schema class. The user needs to provide this themselves. See "Example Schema Class". The main generated package that will be a Moose::Role to be consumed into the top level schema class. See "The _scaffold Role" Although the model generated is a hierarchy of packages, it is expected that all generated code be in one file loaded as Schema::_scaffold. This file contains all the generated code and should never be modified. A subclass of DBIx::Class::Schema that will be used to register the generated classes. Schema::FooBar will be the result class for the table foo_bars During scaffolding Module::Pluggable will be used to search for Schema::Role::FooBar, which should be a Moose::Role class. If it exists then it will be consumed into Schema::FooBar. Schema::ResultSet::FooBar is the resultset class for the table foo_bars. During scaffolding Module::Pluggable will be used to search for Schema::ResultSet::Role::FooBar, which should be a Moose::Role class. If it exists then it will be consumed into Schema::ResultSet::FooBar. The _scaffold will define methods for each resultset. In our example above it will define a method foo_bar. It also has a method dbic which will return the DBIx::Class::Schema object. The minimum requirement for a schema class is that it providers a method connect_args. The result of calling this method will be passed to the connect method of DBIx::Class::Schema. package Schema; use Moose; with 'Schema::_scaffold'; sub connect_args { return @args_for_dbix_class_connect; } 1; Some other useful additions # delegate txn_* methods to the DBIx::Class object itself has '+dbic' => (handles => [qw(txn_do txn_scope_guard txn_begin txn_commit txn_rollback)]); # Fetch a DBI handle sub dbh { shift->dbic->storage->dbh; } With our example schema, searching of the foo_bars table would be done with my $schema = Schema->new; $schema->foo_bar->search({id => 27}); Required. A SQL::Translator::Object::Schema object that the scaffolding will be generated from. Required. Package name that the scaffold will be generated for. 
The actual package created will be a Moose::Role with the named schema_class plus ::_scaffold Name of method to generate that when called on any result row or result set will return the parent Mesoderm schema object. Defaults to schema Optional. Namespace used by default to prefix package names generated for DBIx::Class result classes. Defaults to schema_class Optional. Namespace used by default to prefix package names generated for DBIx::Class result set classes. Defaults to result_class_namespace plus ::ResultSet Optional. Namespace that will be searched for, during scaffolding, for roles to add to result classes. The generated code will include with statements for any role that is found during scaffolding. Defaults to result_class_namespace plus ::Role Optional. Namespace that will be searched for, during scaffolding, for roles to add to result set classes. The generated code will include with statements for any role that is found during scaffolding. Defaults to resultset_class_namespace plus ::Role Returns a list of DBIx::Class components to be loaded by the result class Returns a list of DBIx::Class components to be loaded by the result class Returns a list of Moose::Role classes to be comsumed into the result class Default is to join result_role_namespace with table_class_element, if the module can be found by Module::Pluggable Returns a list of Moose::Role classes to be comsumed into the result class. Default is to join resultset_role_namespace with table_class_element, if the module can be found by Module::Pluggable Returns a hash reference which will be serialized as the arguments passed to add_column Provides a hook to allow inserting objects to have default values set on columns if no value has been specified. It should return valid perl code that will be inserted into the generated code and will be evaluated in a scalar context Return a boolean to determine if the passed object should be excluded from the generated model. Default: 0 Returns name for a relationship. Default is to call the method based on the relationship type. Return relationship accessor name. Default is to call to_singlular or to_plural with the name for the foreign table. Which is called depends on the arity of the relationship Return the accessor name for the column. Default it to return the column name. Return name for the result class. Default is to join result_class_namespace with table_class_element Return name for the resultset class. Default is to join resultset_class_namespace with table_class_element Return moniker used to register result class with DBIx::Class::Schema. Default is to call to_singular with the lowercase table name Return package name element that will be prefixed with result_class_namespace, resultset_class_namespace, result_role_namespace and resultset_role_namespace to generate class names. Default takes the table_moniker and title-cases based on _ as a word separator Utility method to return singular form of $word. Default implementation uses "to_S" in Lingua::EN::Inflect::Number Utility method to return plural form of $word. Default implementation uses "to_PL" in Lingua::EN::Inflect::Number Create a relatonship which is the opposite of the given relationship. 
Return boolean to indicate if the table is a mapping table and many to many mapping relationships need to be created Generate code and write to filehandle Build a Mesoderm::Relationship object given a constraint Build a Mesoderm::Mapping object for a given relationship in a many to many mapping DBIx::Class, Moose, Moose::Role, SQL::Translator At time of writing the version required is not available on CPAN and needs to be fetched from github. Graham Barr <gbarr@cpan.org> This software is copyright (c) 2010 by Graham Barr. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
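As an illustration of the Moose::Role hooks described above, here is a minimal sketch of a result role that the scaffold would pick up via Module::Pluggable. The package name follows the Schema::Role::FooBar convention from the synopsis; the "name" column and the display_name helper are assumptions for the example, not part of Mesoderm itself:

package Schema::Role::FooBar;
use Moose::Role;

# Custom method that ends up in Schema::FooBar because the generated
# scaffold consumes this role when it is found during scaffolding.
# Assumes the foo_bars table has a "name" column.
sub display_name {
    my $self = shift;
    return uc $self->name;
}

1;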
http://search.cpan.org/~gbarr/Mesoderm-0.122290/lib/Mesoderm.pm
CC-MAIN-2014-49
refinedweb
1,144
53.81
On 06/24/2011 02:58 AM, Daniel Veillard wrote: > On Thu, Jun 23, 2011 at 10:26:03PM -0600, Eric Blake wrote: >> I'm not sure when Py_ssize_t was introduced; but Fedora 14 Python 2.7 >> has it, while RHEL 5 Python 2.4 lacks it. >> >> * python/typewrappers.h (Py_ssize_t): Define for older python. >> --- >> +/* Work around really old python. */ >> +#if PY_MAJOR_VERSION == 2 && PY_MINOR_VERSION < 7 >> +typedef ssize_t Py_ssize_t; >> +#endif >> + >> #define PyvirConnect_Get(v) (((v) == Py_None) ? NULL : \ >> (((PyvirConnect_Object *)(v))->obj)) >> > > I think the workaround is fine, if we ever hit a protability problem > due to this then we can refine, but that looks simple enough for me and > does the job > > ACK, Thanks; pushed. -- Eric Blake eblake redhat com +1-801-349-2682 Libvirt virtualization library Attachment: signature.asc Description: OpenPGP digital signature
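For reference (not part of the patch above): Py_ssize_t first appeared in Python 2.5 via PEP 353, and that PEP suggests a version-hex guard along these lines for code that must also build against older interpreters:

/* Sketch of the PEP 353 compatibility guard; the libvirt patch above
 * uses a major/minor version check and ssize_t instead, which also works. */
#include <Python.h>
#include <limits.h>

#if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN)
typedef int Py_ssize_t;
# define PY_SSIZE_T_MAX INT_MAX
# define PY_SSIZE_T_MIN INT_MIN
#endif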
https://www.redhat.com/archives/libvir-list/2011-June/msg01243.html
CC-MAIN-2015-14
refinedweb
131
66.94
Some folks don't realize that NetBSD provides an easy way to configure many packages. To see if a particular package has such options, while in the /usr/pkgsrc/<category>/<pkgname> directory type make show-options As an example, we'll use uim, an input method for Japanese. cd /usr/pkgsrc/inputmethod/uim make show-options This prints the options the package supports (for uim they include anthy, canna, gtk and qt) and which of them are enabled by default. If one only wants the default options, then a simple make install clean; make clean-depends will install them. However, I don't want the defaults. I do want anthy and gtk; however, I don't want canna and wish to add qt. The package's options variable is named by PKG_OPTIONS_VAR= PKG_OPTIONS.uim So, I will type make PKG_OPTIONS.uim="qt -canna" install clean; make clean-depends. This will install gtk, qt and anthy. If you don't want an option enabled by default, use a - in front of it, either quoted on the command line or without quotes in /etc/mk.conf
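To make the choice permanent instead of typing it on each build, the same options can go into /etc/mk.conf; a minimal sketch (using the same uim options as above) would be:

PKG_OPTIONS.uim=	qt -canna

With that in place, a plain make install clean; make clean-depends in inputmethod/uim picks the options up automatically.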
https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/pkgsrc/how_to_use_pkg_options_with_pkgsrc.mdwn?rev=1.1;content-type=text%2Fx-cvsweb-markup
CC-MAIN-2015-32
refinedweb
170
56.55
In the biblical Table of Nations, Javan is Japheth's fourth son (Gen 10:2, 4). Javan is the Hebrew name for Greece, derived from Iaonie, the land of the Greeks. Ezekiel's prophecy concerning Tyre, dating from 594 B.C., mentions Javan/Greece among the nations trading with Tyre (Ezek 27:13, 19). Joel's prophecy accusing Tyre, Sidon and the Philistines of selling captives from Judah into slavery to the Greeks (Joel 3:6) probably belongs to the time of Ezekiel's prophecy. Isaiah's prediction that Greece would be among the lands from which the exiles would return (Is 66:19) is of uncertain date but must precede the Hellenistic period. The term of Greek rule over Palestine is apparently reflected in Zechariah's prophecy "and raised up your sons, O Zion against your sons, O Greece" (Zech 9:13) which is attributed to the years before the Hasmonean uprising. Daniel's prophecy on the first prince of Greece refers to the time of that uprising (Dan 10:20). The Greek era in Palestine began in 333 B.C. when Alexander the Great defeated Darius III of Persia, opening up the way for the conquest of Syria and Egypt. After his death, his kingdom was divided among his generals and Palestine was controlled either by the Egyptian-based Ptolemids or the Syria-based Seleucids. Greek culture (Hellenism) now became predominant throughout the entire region and the Jewish struggle was largely directed against those of its features incompatible with Judaism. The climax came with the decrees of Antiochus IV (175-162 B.C.) against the observances of Judaism and his desecration of the Temple in Jerusalem. The Hasmonean rising, led by Judah the Maccabee, led to the reconquest of Jerusalem and the expulsion of the Syrian forces but Hellenistic influences were still to be found not only among the Gentile population but also among certain Jewish elements. In the NT, the term "Greeks" occurs most frequently in relation to the journeys of Paul: in each city he spoke to the Jews and the Greeks (meaning the non-Jewish population). To Paul, the characteristic of the Greeks was their pursuit of wisdom (I Cor 1:18-2:16). In the course of time, Paul came to the conclusion that there was no distinction between Jew and Greek when it came to salvation through belief in Christ (Rom 10:12). The NT was written in Greek, although parts were based on Aramaic originals which have not been preserved. Greek was the language of the Christian Church until the mid-2nd century. Concordance Dan 8:21; 10:20; 11:2. Joel 3:6. Zech 9:13. Mark 7:26. Luke 23:38. John 7:35; 12:20; 19:20. Acts 14:1; 16:1, 3; 17:4, 12; 18:4,17; 19:10, 17; 20:2, 21; 21:28, 37. Rom 1:14, 16; 2:9-10; 3:9; 10:12. I Cor 1:22-24; 10:32; 12:13. Gal 2:3; 3:28. Col 3:11. Rev 9:11
http://www.answers.com/topic/greece-greeks-grecians
crawl-002
refinedweb
510
70.73
Count the characters at the beginning of a string that aren't in a given character set #include <string.h> size_t strcspn( const char* str, const char* charset ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The strcspn() function finds the length of the initial segment of the string pointed to by str that consists entirely of characters not from the string pointed to by charset. The terminating NUL character isn't considered part of str. The length of the initial segment. #include <stdio.h> #include <string.h> #include <stdlib.h> int main( void ) { printf( "%d\n", strcspn( "abcbcadef", "cba" ) ); printf( "%d\n", strcspn( "xxxbcadef", "cba" ) ); printf( "%d\n", strcspn( "123456789", "cba" ) ); return EXIT_SUCCESS; } produces the output: 0 3 9
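The behavior is easy to picture as a loop that stops at the first character found in charset; a rough equivalent (illustrative only, not the actual QNX library source) is:

#include <string.h>

size_t my_strcspn( const char* str, const char* charset )
{
    size_t n = 0;

    /* Count leading characters of str that are NOT in charset. */
    for( ; str[n] != '\0'; n++ ) {
        if( strchr( charset, str[n] ) != NULL ) {
            break;  /* first character from charset ends the segment */
        }
    }

    return n;
}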
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/s/strcspn.html
CC-MAIN-2019-47
refinedweb
131
66.13
Whenever I quit my application I get a bad access error. The last 6 items from my stack trace are: #0 0x00008a3a in juce::Atomic<int>::operator-- at juce_Atomic.h:339 #1 0x00007080 in juce::StringHolder::release at juce_String.cpp:162 #2 0x0000735b in juce::StringHolder::release at juce_String.cpp:168 #3 0x0000374d in juce::String::~String at juce_String.cpp:252 #4 0x0001a3e3 in ChallengeComponent::~ChallengeComponent at ChallengeComponent.cpp:392 ChallengeComponent is my custom component. The program makes it all the way to the end of ChallengeComponent’s destructor, then goes into the juce::String, StringHolder, etc. The specific line in juce_Atomic that is causing the error is: #elif JUCE_ATOMICS_GCC return (Type) __sync_add_and_fetch (&value, -1); I have the latest JUCE, running on Mac OS 10.6.8, XCode 3.2.6. Any ideas?
https://forum.juce.com/t/atomcis-bad-access/9190
CC-MAIN-2018-34
refinedweb
133
60.61
Thread Diagnostics - Performance Tuning with the Concurrency Visualizer in Visual Studio 2010 By Hazim Shafi | March 2010 Multicore processors have become widely available, and single-threaded performance in new processors is likely to remain relatively flat. That means added pressure on software developers to improve application performance by taking better advantage of parallelism. Parallel programming is challenging for many reasons, but in this article I’d like to focus on the performance aspects of parallel applications. Multithreaded applications are not only prone to common sources of inefficiency in sequential implementations, such as inefficient algorithms, poor cache behavior, and excessive I/O, but they can also suffer from parallel performance bugs. Parallel performance and scalability may be limited by load imbalance, excessive synchronization overhead, inadvertent serialization, or thread migration. Understanding such performance bottlenecks used to require significant instrumentation and analysis by expert developers. Even for those elite programmers, performance tuning was a tedious and time-consuming process. This is about to change for the better. Visual Studio 2010 includes a new profiling tool—the Concurrency Visualizer—that should significantly reduce the burden of parallel performance analysis. Moreover, the Concurrency Visualizer can help developers analyze their sequential applications to discover opportunities for parallelism. In this article, I present an overview of the features of the Concurrency Visualizer in Visual Studio 2010, along with some practical usage guidance. CPU Utilization The Concurrency Visualizer comprises several visualization and reporting tools. There are three main views: CPU Utilization, Threads, and Cores. The CPU Utilization view, shown in Figure 1, is intended to be the starting point in Concurrency Visualizer. The x axis shows the time elapsed from the start of the trace until the end of application activity (or the end of the trace, whichever is earlier). The y axis shows the number of logical processor cores in the system. Figure 1 CPU Utilization View Before I describe the purpose of the view, it is important that you understand what a logical core is. A single CPU chip today can include multiple microprocessor circuits, referred to as physical cores. Each physical core may be capable of running multiple application threads simultaneously. This is often referred to as simultaneous multithreading (SMT); Intel calls it Hyper-Threading Technology. Each hardware-supported thread on an SMT-capable core presents itself as a logical core to the operating system. If you collect a trace on a quad-core system that does not support SMT, the y axis would show four logical cores. If each core in your quad-core system is capable of running two SMT threads, then the y axis would show eight logical cores. The point here is that the number of logical cores is a reflection of the number of threads that can simultaneously execute in your system, not the number of physical cores. Now, let’s get back to the view. There are four areas shown in the graph, as described in the legend. The green area depicts the average number of logical cores that the application being analyzed is using at any given time during the profiling run. The rest of the logical cores are either idle (shown in gray), used by the System process (shown in red), or used by other processes running on the system (shown in yellow). 
The blue vertical bars in this view correspond to an optional mechanism that allows users to instrument their code in order to correlate the visualizations in the tool with application constructs. I will explain how this can be done later in this article. The Zoom slider control at the top left allows you to zoom in on the view to get more details, and the graph control supports a horizontal scrollbar when zoomed. You can also zoom by clicking the left mouse button and dragging in the area graph itself. This view has three main purposes. First, if you are interested in parallelizing an application, you can look for areas of execution that either exhibit significant serial CPU-bound work, shown as lengthy green regions at the single-core level on the y axis, or regions where there isn’t much CPU utilization, where the green doesn’t show or is considerably less than 1 on average. Both of these circumstances might indicate an opportunity for parallelization. CPU-intensive work can be sped up by leveraging parallelism, and areas of unexpected low CPU utilization might imply blocking (perhaps due to I/O) where parallelism may be used by overlapping other useful work with such delays. Second, if you are trying to tune your parallel application, this view allows you to confirm the degree of parallelism that exists when your application is actually running. Hints of many common parallel performance bugs are usually apparent just by examining this graph. For example, you can observe load imbalances as stair-step patterns in the graph, or contention for synchronization objects as serial execution when parallelism is expected. Third, since your application lives in a system that may be executing many other applications that are competing for its resources, it is important to understand whether your application’s performance is affected by other apps. When interference is unexpected, it is usually a good idea to reduce it by disabling applications or services to improve the fidelity of data, because performance is usually an iterative process. Sometimes, interference is caused by other processes with which your application collaborates to deliver an experience. Either way, you will be able to use this view to discover whether such interference exists, and then identify the actual processes involved by using the Threads view, which I will discuss later. Another feature that can help reduce interference is using the profiler command-line tools to collect traces rather than doing so from within the Visual Studio IDE. Focus your attention on some window of execution that piques your interest, zoom in on it, and then switch to the Threads view for further analysis. You can always come back to this view to find the next region of interest and repeat the process. Threads The Threads view, shown in Figure 2, contains the bulk of the detailed analysis features and reports in the Concurrency Visualizer. This is where you’ll find information that explains behavior you identified in the CPU Utilization or Cores views. It is also where you can find data to link behavior to application source code when possible. There are three main components of this view: the timeline, the active legend and the reporting/details tab control. Like the CPU Utilization view, the Threads view shows time on the x axis. (When switching between views in Concurrency Visualizer, the range of time shown on the x axis is preserved.) However, the Threads view y axis contains two types of horizontal channels. 
The top channels are usually dedicated to physical disks on your system if they had activity in your application’s profile. There are two channels per disk, one each for reads and writes. These channels show disk accesses that are made by your application threads or by the System process threads. (It shows the System accesses because they can sometimes reflect work being done on behalf of your process, such as paging.) Every read or write is drawn as a rectangle. The length of the rectangle depicts the latency of the access, including queuing delays; therefore, multiple rectangles may overlap. To determine which files were accessed at a given point in time, select a rectangle by clicking the left mouse button. When you do that, the reports view below will switch to the Current Stack tab, which is the standard location for displaying data interactively with the timeline. Its contents will list the names of files that were either read or written, depending on the disk channel selected. I will return to I/O analysis later. One thing to be aware of is that not all file read and write operations performed by the application may be visible when they are expected to occur. This is because the operating system’s file system uses buffering, allowing some disk I/O operations to complete without accessing the physical disk device. The remaining channels in the timeline list all the threads that existed in your application during the profile collection period. For each thread, if the tool detected any activity during the profiler run, it will display the state of the thread throughout the trace until it is terminated. If a thread is running, which is depicted by the green Execution category, the Concurrency Visualizer shows you what the thread was doing by leveraging sample profile information. There are two ways to get at this data. One is by clicking on a green segment, in which case you’ll see the nearest (within +/- 1 ms) profile sample call stack in the Current Stack tab window. You can also generate a sample profile report for the visible time range to understand where most of the work was spent. If you click on the Execution label in the active legend, the report will show up in the Profile Report tab. The profile report has two features that may be used to reduce complexity. One is a noise reduction feature that, by default, removes call stacks responsible for 2 percent or less of the profile samples. This threshold can be changed by the user. Another feature, called Just My Code, can be used to reduce the number of stack frames due to system DLLs in the report, if that’s desirable. I’ll cover the reports in more detail later. Before going on, I’d like to point out a few more features for managing complexity in the reports and views. You will often encounter application scenarios consisting of many threads, some of which may not be doing anything useful in a given profiler run. Besides filtering reports based on the time range, the Concurrency Visualizer also allows you to filter by the threads that are active. If you’re interested in threads that do work, you can use the Sort By option to sort the threads by the percentage of time that they are in the Execution state. You can then select the group of threads that are not doing much useful work and hide them from the display either by right-clicking and selecting the Hide option from the context menu or by clicking the Hide button in the toolbar at the top of the view. 
You can sort by all thread state categories and can hide/unhide as you see fit. The effect of hiding threads is that their contributions to all the reports will be removed, in addition to hiding their channels from the timeline. All statistics and reports in the tool are kept up-to-date dynamically as filtering is performed on threads and time range. Blocking Categories Threads can block for many reasons. The Threads view attempts to identify the reason why a thread blocked by mapping each instance to a set of blocking categories. I say attempts because this categorization can sometimes be inaccurate, as I’ll explain in a moment, so it should be viewed as a rough guide. That said, the Threads view shows all thread delays and accurately depicts execution periods. You should focus your attention on categories responsible for significant delays in the view based on your understanding of the application’s behavior. In addition, the Threads view provides the call stack at which the thread stopped execution in the Current Stack tab if you click on a blocking event. By clicking on a stack frame in the Current Stack window, the user will be taken to the source code file (when available) and line number where the next function is called. This is an important productivity feature of the tool. Let’s take a look at the various blocking categories: Synchronization Almost all blocking operations can be attributed to an underlying synchronization mechanism in Windows. The Concurrency Visualizer attempts to map blocking events due to synchronization APIs such as EnterCriticalSection and WaitForSingleObject to this category, but sometimes other operations that result in synchronization internally may be mapped to this category—even though they might make more sense elsewhere. Therefore, this is often a very important blocking category to analyze during performance tuning, not just because synchronization overheads are important but also because it can reflect other important reasons for execution delays. Preemption This includes preemption due to quantum expiration when a thread’s share of time on its core expires. It also includes preemption due to OS scheduling rules, such as another process thread with a higher priority being ready to run. The Concurrency Visualizer also maps other sources of preemption here, such as interrupts and LPCs, which can result in interrupting a thread’s execution. At each such event, the user can get the process ID/name and thread ID that took over by hovering over a preemption region and examining the tooltip (or clicking on a yellow region and observing the Current Stack tab contents). This can be a valuable feature for understanding the root causes of yellow interference in the CPU Utilization view. Sleep This category is used to report thread blocking events as a result of an explicit request by the thread to sleep or yield its core voluntarily. Paging/Memory Management This category covers blocking events due to memory management, which includes any blocking operations started by the system’s memory manager as a response to an action by the application. Things like page faults, certain memory allocation contentions or blocking on certain resources would show up here. Page faults in particular are noteworthy because they can result in I/O. When you see a page fault blocking event, you should both examine the call stack and look for a corresponding I/O read event on the disk channel in case the page fault required I/O. 
A common source of such page faults is loading DLLs, memory-mapped I/O and normal virtual-memory paging by the kernel. You can identify whether this was a DLL load or paging by clicking on the corresponding I/O segment to get the filename involved. I/O This category includes events such as blocking on file reads and writes, certain network socket operations and registry accesses. A number of operations considered by some to be network-related may not show up here, but rather in the synchronization category. This is because many I/O operations use synchronization mechanisms to block and the Concurrency Visualizer may not be looking for those API signatures in this category. Just as with the memory/paging category, when you see an I/O blocking event that seems to be related to accessing your disk drives, you should find out if there’s a corresponding disk access in the disk channels. To make this easier, you can use the arrow buttons in the toolbar to move your threads closer to the disk channel. To do this, select a thread channel by clicking on its label on the left, then click on the appropriate toolbar button. UI Processing This is the only form of blocking that is usually desirable. It is the state of a thread that is pumping messages. If your UI thread spends most of its time in this state, this implies that your application is responsive. On the other hand, if the UI thread does excessive work or blocking for other reasons, from the application user’s perspective the UI will appear to hang. This category offers a great way to study the responsiveness of your application, and to tune it. Inter-Thread Dependencies One of the most valuable features of the Threads view is the ability to determine inter-thread synchronization dependencies. In Figure 2 I have selected a synchronization delay segment. The segment gets enlarged and its color is highlighted (in this case, it’s red). The Current Stack tab shows the call stack of the thread at that moment. By examining the call stack, you can determine the API that resulted in blocking the thread’s execution. Figure 2 Threads View Another visualization feature is a line that connects the blocking segment to an execution segment on a different thread. When this visualization is visible, it illustrates the thread that ended up unblocking the blocked thread. In addition, you can click on the Unblocking stack tab in this case to see what the unblocking thread was doing when it released the blocked thread. As an example, if the blocking thread was waiting on a Win32 critical section, you would see the signature of EnterCriticalSection on its blocking call stack. When it is unblocked, you should see the signature of LeaveCriticalSection in the call stack of the unblocking thread. This feature can be very valuable when analyzing complex application behavior. Reports The profile reports offer a simple way of identifying major contributors to the performance behavior of your application. Whether you are interested in execution overheads, blocking overheads or disk I/O, these reports allow you to focus on the most significant items that may be worth investigating. There are four types of reports in the Threads view: execution sampling profiles, blocking profiles, file operations and per-thread summaries. All the reports are accessed using the legend. For example, to get the execution profile report, click the execution legend entry. This produces a report in the Profile Report tab. The reports look similar to what is shown in Figure 3. 
Figure 3 A Typical Profile Report For an execution profile report, the Concurrency Visualizer analyzes all the call stacks collected when sampling your application’s execution (green segments) and collates them by identifying shared stack frames to assist the user in understanding the execution structure of the application. The tool also computes inclusive and exclusive costs for each frame. Inclusive samples account for all samples in a given execution path, including all paths below it. Exclusive samples correspond to the number of samples of call-graph stack-frame leaves. To get a blocking profile, you click on the blocking category of interest in the legend. The generated report is constructed like the execution profile report, but the inclusive and exclusive columns now correspond to blocking time attributed to the call stacks or frames in the report. Another column shows the number of instances of blocking attributed to that stack frame in the call tree. These reports offer a convenient way of prioritizing performance tuning efforts by identifying the parts of your application responsible for most delays. The preemption report is informational and usually does not offer any actionable data due to the nature of this category. All the reports allow you to jump to source code. You may do so by right-clicking on a stack frame of interest. The context menu that appears allows you to jump either to the function definition (the View Source option) or to the location in your application where that function was called (the View Call Sites option). If there were multiple callers, you will be presented with multiple options. This allows a seamless integration between the diagnostic data and the development process to tune your application’s behavior. The reports may also be exported for cross-profile comparisons. The File Operations report shown in Figure 4 includes a summary of all file read and write operations visible in the current time range. For every file, the Concurrency Visualizer lists the application thread that accessed it, the number of read and write operations, the total bytes read or written, and the total read or write latency. Besides showing file operations directly attributed to the application, the Concurrency Visualizer also shows those performed by the System process. These are shown, as mentioned earlier, because they might include file operations performed by the system on behalf of your application. Exporting the report allows cross-profile comparisons during tuning efforts. Figure 4 File Operations Report The Per Thread Summary report, shown in Figure 5, presents a bar graph for each thread. The bar is divided into the various thread state categories. This can be a useful tool to track your performance tuning progress. By exporting the graph data across various tuning iterations, you can document your progress and provide a means of comparing runs. The graph will not show all threads for applications that have too many threads to fit within the view. Figure 5 Per Thread Summary Report Cores Excessive context switches can have a detrimental effect on application performance, especially when threads migrate across cores or processor sockets when they resume execution. This is because a running thread loads instructions and data it needs (often referred to as the working set) into the cache hierarchy. When a thread resumes execution, especially on another core, it can suffer significant latency while its working set is reloaded from memory or other caches in the system. 
There are two common ways to reduce this overhead. A developer can either reduce the frequency of context switches by resolving the underlying causes, or he can leverage processor or core affinity. The former is almost always more desirable because using thread affinity can be the source of other performance issues and should only be used in special circumstances. The Cores view is a tool that aids in identifying excessive context switches or performance bugs introduced by thread affinity. As with the other views, the Cores view displays a timeline with time on the x axis. The logical cores in the system are shown on the y axis. Each thread in the application is allocated a color, and thread execution segments are drawn on the core channels. A legend and context switch statistics are shown in the bottom pane, as shown in Figure 6. Figure 6 Cores View The statistics help the user identify threads that have excessive context switches and those that incur excessive core migrations. The user can then use this view to focus her attention on areas of execution where the threads in question are interrupted, or jump back and forth across cores by following the visual color hints. Once a region that depicts the problem is identified, the user can zoom in on it and switch back to the Threads view to understand what triggered the context switches and fix them if possible (for example, by reducing contention for a critical section). Thread affinity bugs can also manifest themselves in some cases when two or more threads contend for a single core while other cores appear to be idle. Support for PPL, TPL and PLINQ The Concurrency Visualizer supports the parallel programming models shipping in Visual Studio 2010 aside from existing Windows native and managed programming models. Some of the new parallel constructs—parallel_for in the Parallel Pattern Library (PPL), Parallel.For in the Task Parallel Library (TPL) and PLINQ queries—include visualization aids in the performance tool that allow you to focus your attention on those regions of execution. PPL requires turning on tracing for this functionality to be enabled, as shown in this example: When tracing is enabled, the Threads and Cores views will depict the parallel_for execution region by drawing vertical markers at the beginning and end of its execution. The vertical bars are connected via horizontal bars at the top and bottom of the view. By hovering with the mouse over the horizontal bars, a tooltip showing the name of the construct is drawn, as shown in Figure 7. Figure 7 An Example parallel_for Visual Marker in Threads View TPL and PLINQ do not require manual enabling of tracing for the equivalent functionality in the Concurrency Visualizer. Collecting a Profile The Concurrency Visualizer supports both the application launch and attach methods for collecting a profile. The behavior is exactly the same as users of the Visual Studio Profiler are accustomed to. A new profiling session may be initiated through the Analyze menu option either by launching the Performance Wizard, shown in Figure 8, or via the Profiler | New Performance Session option. In both cases, the Concurrency Visualizer is activated by choosing the Concurrency profiling method and then selecting the “Visualize the behavior of a multithreaded application” option. Figure 8 The Performance Wizard Profiling Method Dialog The Visual Studio Profiler’s command-line tools allow you to collect Concurrency Visualizer traces and then analyze them using the IDE. 
This lets users who are interested in server scenarios where installing the IDE is impossible collect a trace with the least intrusion possible. You will notice that the Concurrency Visualizer does not have integrated support for profiling ASP.NET applications. However, it may be possible to attach to the host process (usually w3wp.exe) while running your ASP.NET application in order to analyze its performance. Since the Concurrency Visualizer uses Event Tracing for Windows (ETW), it requires administrative privileges to collect data. You can either launch the IDE as an administrator, or you will be prompted to do so when necessary. In the latter case, the IDE will be restarted with administrator rights. Linking Visualizations to Application Phases Another feature in the Concurrency Visualizer is an optional instrumentation library that allows developers to customize the views by drawing markers for application phases they care about. This can be extremely valuable to allow easier correlation between visualizations and application behavior. The instrumentation library is called the Scenario library and is available for download from the MSDN Code Gallery Web site at code.msdn.microsoft.com/scenario. Here's an example using a C++ application: #include "Scenario.h" int _tmain(int argc, _TCHAR* argv[]) { Scenario* myScenario = new Scenario(0, L"Scenario Example", (LONG) 0); myScenario->Begin(0, TEXT("Initialization")); // Initialization code goes here myScenario->End(0, TEXT("Initialization")); myScenario->Begin(0, TEXT("Work Phase")); // Main work phase goes here myScenario->End(0, TEXT("Work Phase")); exit(0); } The usage is pretty simple; you include the Scenario header file and link the correct library. Then you create one or more Scenario objects and mark the beginning and end of each phase by invoking the Begin and End methods, respectively. You also specify the name of each phase to these methods. The visualization is identical to that shown in Figure 7, except that the tooltip will display the custom phase name you specify in your code. In addition, the scenario markers are also visible in the CPU Utilization view, which is not the case for other markers. An equivalent managed implementation is also provided. A word of caution is in order here. Scenario markers should be used sparingly; otherwise, the visualizations can be completely obscured by them. In fact, to avoid this problem, the tool will significantly reduce or eliminate the number of markers displayed if it detects excessive usage. In such cases, you can zoom in to expose markers that have been elided in most views. Further, when nesting of Scenario markers takes place, only the innermost marker will be displayed. Resources and Errata The Concurrency Visualizer includes many features to help you understand its views and reports. The most interesting such feature is the Demystify button shown in the top-right corner of all views. By clicking Demystify, you get a special mouse pointer allowing you to click on any feature in view that you'd like help on. This is our way of providing context-sensitive help in the tool. In addition, there's a Tips tab with more help content, including a link to a gallery of visualization signatures for some common performance issues. As mentioned earlier, the tool leverages ETW. Some of the events required by the Concurrency Analyzer do not exist on Windows XP or Windows Server 2003, so the tool only supports Windows Vista, Windows Server 2008, Windows 7 and Windows Server 2008 R2.
Both 32-bit and 64-bit variants of these operating systems are supported. In addition, the tool supports both native C/C++ and .NET applications (excluding .NET 1.1 and earlier). If you are not running on a supported platform, you should explore another valuable concurrency tool in Visual Studio 2010, which is enabled by selecting the “Collect resource contention data” option. In certain cases, when there’s a significant amount of activity in a profiling scenario or when there is contention for I/O bandwidth from other applications, important trace events may be lost. This results in an error during trace analysis. There are two ways to handle this situation. First, you could try profiling again with a smaller number of active applications, which is a good methodology to follow in order to minimize interference while you are tuning your application. The command-line tools are an additional option in this case. Second, you can increase the number or size of ETW memory buffers. We provide documentation through a link in the output window to instructions on how to accomplish this. If you choose option two, please set the minimum total buffer size necessary to collect a good trace since these buffers will consume important kernel resources when in use. Any diagnostic tool is only as good as the data it provides back to the user. The Concurrency Visualizer can help you pinpoint the root causes of performance issues with references to source code, but in order to do so, it needs access to symbol files. You can add symbol servers and paths in the IDE using the Tools | Options | Debugging | Symbols dialog. Symbols for your current solution will be implicitly included, but you should enable the Microsoft public symbol server as well as any other paths that are specific to the application under study where important symbol files may be found. It’s also a good idea to enable a symbol cache because that will significantly reduce profile analysis time as the cache gets populated with symbol files that you need. Although ETW provides a low-overhead tracing mechanism, the traces collected by the Concurrency Visualizer can be large. Analyzing large traces can be very time-consuming and may result in performance overheads in the visualizations provided by the tool. Generally, profiles should be collected for durations not exceeding one to two minutes to minimize the chances of these issues affecting your experience. For most analysis scenarios, that duration is sufficient to identify the problem. The ability to attach to a running process is also an important feature in order to avoid collecting data before your application reaches the point of interest. There are multiple sources of information on the Concurrency Visualizer. Please visit the Visual Studio Profiler forum (social.msdn.microsoft.com/forums/en-us/vstsprofiler/threads) for community and development team answers. Further information is available from the team blog at blogs.msdn.com/visualizeparallel and my personal blog at blogs.msdn.com/hshafi. Please feel free to reach out to me or my team if you have any questions regarding our tool. We love hearing from people using the Concurrency Visualizer, and your input helps us improve the tool. Dr. Hazim Shafi is the parallel performance and correctness tools architect in the Parallel Computing Platform team at Microsoft. He has 15 years of experience in many aspects of parallel and distributed computing and performance analysis. He holds a B.S.E.E. from Santa Clara University, and M.S. and Ph.D. 
degrees from Rice University. Thanks to the following technical experts for reviewing this article: Drake Campbell, Bill Colburn, Sasha Dadiomov and James Rapp
https://msdn.microsoft.com/ee336027.aspx
CC-MAIN-2019-18
refinedweb
5,237
51.18
Hide Forgot Description of problem: A traffic listener pod is created and exposed via service type NodePort. Traffic is sent from client pod to the exposed nodeport on all Nodes IPs one by one. All other nodes shows UNREPLIED entry in conntrack table except the one client pod runs on (from where the traffic is sent). Client pod is just a ping pod utilized to send traffic. All nodes supposed to be proxying the exposed service due to type NodePort. $ oc get pods NAME READY STATUS RESTARTS AGE hello-pod 1/1 Running 0 22h <<<<Ping pod udp-rc-lcbst 1/1 Running 0 51m <<<<Traffic listener pod $ oc get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE udp-rc-lcbst NodePort 172.30.154.219 <none> 8080:31963/UDP 105m $ sudo podman run -rm --network host --privileged docker.io/aosqe/conntrack-tool conntrack -L | grep 31963 udp 17 5 src=172.31.130.146 dst=172.31.139.127 sport=34999 dport=31963 [UNREPLIED] src=172.31.139.127 dst=172.31.130.146 sport=31963 dport=34999 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1 udp 17 12 src=172.31.130.146 dst=172.31.159.254 sport=52167 dport=31963 [UNREPLIED] src=172.31.159.254 dst=172.31.130.146 sport=31963 dport=52167 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1 udp 17 20 src=172.31.130.146 dst=172.128.159.64 sport=37556 dport=31963 [UNREPLIED] src=172.128.159.64 dst=172.31.130.146 sport=31963 dport=37556 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1 udp 17 149 src=172.31.130.146 dst=172.31.130.146 sport=58178 dport=31963 src=10.129.2.23 dst=10.128.2.1 sport=8080 dport=58178 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1 sudo podman command is just running conntrack utility in a container and removes container post command execution Version-Release number of selected component (if applicable): 4.0.0-0.nightly-2019-04-05-165550 $ oc version --short Client Version: v4.0.22 Server Version: v1.13.4+ab11434 How reproducible: Always Steps to Reproduce: 1. Create traffic listener pod and a ping pod. See in addtional info below 2. oc expose pod <traffic_listener_pod> --type=NodePort --port=8080 --protocol=UDP 3. 
Send traffic via client pod to all node IPs and nodeport one by one Actual results: Not all nodes are responding to client but only that node on with client on Expected results: Expecting all nodes to reply to client as the service type is NodePort which is supposed to expose service on all nodes Additional info: traffic listener pod template ----------------------------- { "apiVersion": "v1", "kind": "List", "items": [ { "apiVersion": "v1", "kind": "ReplicationController", "metadata": { "labels": { "name": "udp-rc" }, "name": "udp-rc" }, "spec": { "replicas": 1, "template": { "metadata": { "labels": { "name": "udp-pods" } }, "spec": { "containers": [ { "command": [ "/usr/bin/ncat", "-u", "-l", "8080","--keep-open", "--exec", "/bin/cat"], "name": "udp-pod", "image": "aosqe/pod-for-ping" } ], "restartPolicy": "Always" } } } } ] } $ oc get svc -oyaml ----------------------- apiVersion: v1 items: - apiVersion: v1 kind: Service metadata: creationTimestamp: 2019-04-09T18:02:22Z labels: name: udp-pods name: udp-rc-lcbst namespace: test resourceVersion: "880960" selfLink: /api/v1/namespaces/test/services/udp-rc-lcbst uid: 9fddbc9b-5af1-11e9-82b2-02302f122dd4 spec: clusterIP: 172.30.154.219 externalTrafficPolicy: Cluster ports: - nodePort: 31963 port: 8080 protocol: UDP targetPort: 8080 selector: name: udp-pods sessionAffinity: None type: NodePort status: loadBalancer: {} kind: List metadata: resourceVersion: "" selfLink: "" Ok further experiments tells me that it might be due to node to node network connectivity absence in 4.x. I am not able to ping one node from another node or vice versa. Is it a restriction on CoreOS on 4.x? Please advise.. Yup, we need to open this range for UDP as well, I'll file a PR. Filed (In reply to Meng Bo from comment #2) >. iptables-save entries seems to be correct $ sudo iptables-save | grep 31326 -A KUBE-NODEPORTS -p udp -m comment --comment "test/udp-rc-ctsj7:" -m udp --dport 31326 -j KUBE-MARK-MASQ -A KUBE-NODEPORTS -p udp -m comment --comment "test/udp-rc-ctsj7:" -m udp --dport 31326 -j KUBE-SVC-J5HIX5PZU2ZRSTD5 While netstat doesn;t show the expected port range opened $ netstat -lnpu | grep "Proto\|31326" (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name udp6 0 0 :::31326 :::* - Will have to verify this on next good build. Not getting green build on 4.1 since 8 days. Thanks. Verified on 4.1.0-0.nightly-2019-04-18-170154. Port range 30000-32767 is now allowed for UDP for NodePort services. Test steps worked fine now as mentioned in.
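For reference, step 3 of the reproduction (sending UDP traffic from the client pod to the exposed nodeport on every node) can be scripted roughly as below. This is only a sketch: it assumes the client image ships ncat and reuses the hello-pod name and nodeport 31963 from the report.

for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
  echo "sending to $ip:31963"
  oc exec hello-pod -- sh -c "echo hello | ncat -u -w 2 $ip 31963"
done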
https://bugzilla.redhat.com/show_bug.cgi?id=1698210
CC-MAIN-2021-49
refinedweb
810
54.02
So let’s say you have two lists you want to compare to see if they hold the same items, but the items are not reference equal. Now, if you are comparing two lists that have unique values to compare, Union is perfect. List 1 { 1, 2, 3, 4, 5 } List 2 { 2, 1, 4, 3, 5 } As you can see, there are no repeated values in these two lists. Easy way to figure out if all the values are the same in the two lists: var query = (from first in firstList select first).Union(from second in secondList select second); Assert.IsTrue(query.Count() == firstList.Count()); Why does this work? Union combines the two lists, removing any duplicates. So if everything goes correctly, the count of the new list has to match the count of either of the old lists. After all, 5 pairs of duplicate items get reduced to a list of 5. Now, if there is anything different between the lists the count will get screwed. Why? Because even one difference will cause an extra item to show up in the list. List 1 { 1, 2, 3, 4, 5 } List 2 { 1, 2, 3, 4, 6 } Union { 1, 2, 3, 4, 5, 6 } And it will only get worse for every mismatch. Real worldish example: var query = (from user in userListFirst select user.UserID).Union(from secondUser in userListSecond select secondUser.UserID); Assert.IsTrue(query.Count() == userListFirst.Count()); Bonus points if you can figure out why this would fail at times. Actually, I already told you… UsInGS using System; using System.Linq;
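For completeness, here is a self-contained version of the same check using method syntax; the list contents mirror the examples above, while the class and variable names are mine rather than the original post's:

using System;
using System.Collections.Generic;
using System.Linq;

class UnionCheck
{
    static void Main()
    {
        var firstList = new List<int> { 1, 2, 3, 4, 5 };
        var secondList = new List<int> { 2, 1, 4, 3, 5 };

        // Union drops duplicates, so identical sets of unique values
        // leave the combined count equal to the original count.
        Console.WriteLine(firstList.Union(secondList).Count() == firstList.Count()); // True

        // A single mismatch adds an extra element and breaks the equality.
        var thirdList = new List<int> { 1, 2, 3, 4, 6 };
        Console.WriteLine(firstList.Union(thirdList).Count() == firstList.Count()); // False
    }
}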
https://byatool.com/uncategorized/union-to-find-if-two-lists-match/
CC-MAIN-2021-31
refinedweb
265
81.02
MMark CLIMMark CLI - Templates - Extensions - Contribution - License This is a command line application serving as an interface to the MMark markdown processor. mmark—command line interface to MMark markdown processor Usage: mmark [-v|--version] [-i|--ifile IFILE] [-o|--ofile OFILE] [-j|--json] [-t|--template FILE] [--ext-comment PREFIX] [--ext-font-awesome] [--ext-footnotes] [--ext-kbd] [--ext-link-target] [--ext-mathjax] [--ext-obfuscate-email CLASS] [--ext-punctuation] [--ext-skylighting] [--ext-toc RANGE] Command line interface to MMark markdown processor Available options: -h,--help Show this help text -v,--version Print version of the program -i,--ifile IFILE Read markdown source from this file (otherwise read from stdin) -o,--ofile OFILE Save rendered HTML document to this file (otherwise write to stdout) -j,--json Output parse errors and result in JSON format -t,--template FILE Use the template located at this path --ext-comment PREFIX Remove paragraphs that start with the given prefix --ext-font-awesome Enable support for inserting font awesome icons --ext-footnotes Enable support for footnotes --ext-kbd Enable support for wrapping things in kbd tags --ext-link-target Enable support for specifying link targets --ext-mathjax Enable support for MathJax formulas --ext-obfuscate-email CLASS Obfuscate email addresses assigning the specified class --ext-punctuation Enable punctuation prettifier --ext-skylighting Enable syntax highlighting of code snippets with Skylighting --ext-toc RANGE Enable generation of table of contents using the supplied range of headers to include, e.g. "1-6" or "2-4" TemplatesTemplates By using the --template argument, it's possible to create a standalone HTML page. The templating system we use is Mustache, as implemented by the stache library. The library conforms to the version 1.1.3 of the official Mustache specification, but does not implement lambdas (which is an optional feature in the specification) for simplicity and other technical reasons we won't touch here. If the markdown source file has a YAML section, its contents will be provided as context for rendering of the template. In addition to that, a new top-level value bound to the variable named output will be available. That variable contains the HTML rendition of the markdown document. It's best to interpolate it without HTML escaping, like so: {{& output }}. ExtensionsExtensions Here we list how to use the available extensions. The extensions come from the mmark-ext package. Comment paragraphComment paragraph - Option: --ext-comment PREFIX This extension removes paragraphs that start with the given PREFIX. For example: $ mmark --ext-comment REM First. REM Second. Third. ----------------------- Control-D <p>First.</p> <p>Third.</p> Font awesomeFont awesome - Option: --ext-font-awesome This allows to turn autolinks with fa scheme into font awesome icons: $ mmark --ext-font-awesome Here is the user icon: <fa:user>. A more interesting example: <fa:quote-left/3x/pull-left/border>. ----------------------- Control-D <p>Here is the user icon: <span class="fa fa-user"></span>.</p> <p>A more interesting example: <span class="fa fa-quote-left fa-3x fa-pull-left fa-border"></span>. </p> In general, all path components in URIs that go after the name of the icon will be prefixed with "fa-" and added as classes, so you can do a lot of fancy stuff, see.
FootnotesFootnotes - Option: --ext-footnotes The extension performs two transformations: - It turns links with URIs with the footnote scheme and a single path piece consisting of a number into links to footnote references. - It turns block quotes with the "footnotes" label (see the example below) into a footnote section. $ mmark --ext-footnotes Here goes some text [1](footnote:1). > footnotes 1. Here we have the footnote. ----------------------- Control-D <p>Here goes some text <a href="#fn1" id="fnref1"><sup>1</sup></a>.</p> <ol> <li id="fn1"> Here we have the footnote. <a href="#fnref1">↩</a></li> </ol> The extension is not fully safe though in the sense that we can't check that a footnote reference refers to an existing footnote and that footnotes have corresponding references, or that they are present in the document in the right order. Kbd tagsKbd tags - Option: --ext-kbd Introduce kbd tags into resulting HTML document by wrapping content in links with URL with kbd scheme. For example: $ mmark --ext-kbd To enable that mode press [Ctrl+A][kbd]. [kbd]: kbd: ----------------------- Control-D <p>To enable that mode press <kbd>Ctrl+A</kbd>.</p> The use of reference-style links seems more aesthetically pleasant to the author, but you can of course do something like this instead: To enable that mode press [Ctrl+A](kbd:). Link targetsLink targets - Option: --ext-link-target When title of a link starts with the word "_blank", "_self", "_parent", or "_top", it's stripped from title (as well as all whitespace after it) and added as the value of target attribute of the resulting link. For example: $ mmark --ext-link-target This [link](/url '_blank My title') opens in new tab. ----------------------- Control-D <p>This <a href="/url" title="My title" target="_blank">link</a> opens in new tab.</p> MathJaxMathJax - Option: --ext-mathjax The extension allows to transform inline code spans into MathJax inline spans and code blocks with the info string "mathjax" (case-sensitive) into MathJax display spans. Every line in such a code block will produce a separate display span, i.e. a separate line with a formula (which is probably what you want anyway). Inline code spans must start and end with the dollar sign $ to be recognized as MathJax markup: $ mmark --ext-mathjax Let's talk about `$A$` and `$B$`. ```mathjax A \xrightarrow{f} B ``` ----------------------- Control-D <p>Let's talk about <span class="math inline">\(A\)</span> and <span class="math inline">\(B\)</span>. </p> <p> <span class="math display">\[A \xrightarrow{f} B\]</span> </p> Obfuscate email - Option: --ext-obfuscate-email CLASS This extension makes email addresses in autolinks be rendered as something like this: [mark@arch ~]$ mmark --ext-obfuscate-email protected-email Send all your spam to <someone@example.org>, if you can! ----------------------- Control-D <p>Send all your spam to <a href="javascript:void(0)" class="protected-email" data-email="someone@example.org">Enable JavaScript to see this email</a>, if you can!
</p> You'll also need to include jQuery and this bit of JS code for the magic to work: $(document).ready(function () { $(".protected-email").each(function () { var item = $(this); var email = item.data('email'); item.attr('href', 'mailto:' + email); item.html(email); }); }); Punctuation prettifierPunctuation prettifier - Option: --ext-punctuation This makes MMark prettify punctuation (only affects plain text in inlines), the effect is the following: - Replace ...with ellipsis … - Replace ---with em-dash — - Replace --with en-dash – - Replace "with left double quote “when previous character was a space character, otherwise replace it with right double quote ” - Replace 'with left single quote ‘when previous character was a space character, otherwise replace it with right single quote ’aka apostrophe For example (not sure if this is the correct punctuation to use here, but it demonstrates the effect): [mark@arch ~]$ mmark --ext-punctuation Something---we don't know what, happened... ----------------------- Control-D <p>Something—we don’t know what, happened…</p> SkylightingSkylighting - Option: --ext-skylighting Use the skylighting package to render code blocks with info strings that result in a successful lookup from the syntax table that comes with the library. The resulting HTML will be rendered as described here. Example: [mark@arch ~]$ mmark --ext-skylighting Some Haskell: ```haskell main :: IO () main = return () ``` ----------------------- Control-D <p>Some Haskell:</p> <div class="source-code"><pre><code class="language-haskell"> <span class="ot">main ::</span><span> </span><span class="dt">IO</span><span> ()</span> <span>main </span><span class="fu">=</span><span> return ()</span> </code></pre></div> Table of contentsTable of contents - Option: --ext-toc RANGE Replace the code block with info string "toc" by table of contents assembled from headings with levels from N to M, where N-M is RANGE. For example: [mark@arch ~]$ mmark --ext-toc 2-4 # Story of my life ```toc ``` ## Charpter 1 Foo. ## Chapter 2 Bar. ### Something Baz. ----------------------- Control-D <h1 id="story-of-my-life">Story of my life</h1> <ul> <li> <a href="#charpter-1">Charpter 1</a> </li> <li> <a href="#chapter-2">Chapter 2</a> <ul> <li> <a href="#something">Something</a> </li> </ul> </li> </ul> <h2 id="charpter-1">Charpter 1</h2> <p>Foo.</p> <h2 id="chapter-2">Chapter 2</h2> <p>Bar.</p> <h3 id="something">Something</h3> <p>Baz.</p> ContributionContribution Issues, bugs, and questions may be reported in the GitHub issue tracker for this project. Pull requests are also welcome and will be reviewed quickly. LicenseLicense Distributed under BSD 3 clause license.
https://libraries.io/hackage/mmark-cli
CC-MAIN-2018-34
refinedweb
1,422
50.67
PHP 6 and What to Expect 101 Posted by ScuttleMonkey from the no-stopping-this-freight-train dept. An anonymous reader writes "Jero has a few interesting thoughts on what PHP 6 is driving towards and provides a nice overview of what has been keeping the PHP team busy lately. For more specifics, PHP.net also has the developers meeting minutes from last November available with a great recap of all the major issues on their platter." Re:the license (Score:1) Re:the license (Score:2) That was "Brokeback PHP" (Score:1) Re:the license (Score:4, Informative) Re:the license (Score:1) Re:the license (Score:1) Re:the license (Score:2) Re:the license (Score:1) Re:the license (Score:2) Article (Score:3, Informative) Since jero.net already seems to be /.ed... Taking a look at PHP 6 While most web hosts are still in the PHP 4 era, the PHP developers are already planning and working on PHP 6. Let's have a look at what's been keeping them busy. Unicode support When you're creating a website, you hardly have to think about the character encoding. You only have to decide how you tell the user agent what encoding you're using, but with a little help of Apache's .htaccess file [slashdot.org], you only have to make that decision once. However, if you're building an application, the character encoding might become a problem. That's where PHP's new Unicode support comes in handy. With its support, PHP can automatically encode and decode the input and output of the script making sure both the database and the user agent receive the encoding they need without the need of any extra functions for the encoding conversion. The big cleanup The developer (we're dealing with a stereotype developer here for simplicity's sake) is the one who's using it in his application, but sometimes the developer is not even aware he's using it. I'm, of course, talking about the register_globals [php.net], magic_quotes [php.net] and safe_mode [php.net] functions. These three functions are hell for every PHP programmer so I'm. Alternative PHP Cache Caching is a very good way to improve the performance of an application. That's why there was a large demand for a good opcode cache in the default distribution of PHP. And when there's a demand, there's probably also a person or a group to meet that demand. The result is APC [php.net]: Alternative PHP Cache. Of course, APC was already available a long time ago (01-07-2003), but the PHP developers have decided to include this extension in the core as the default caching framework. OO Functionality The improved OO model was probably the biggest improvement to PHP in version 5.0. PHP 6 tries to improve this even further by adding namespaces. If you're familiar with XML and would like to learn more about the possibilities of namespaces, I find this C++ tutorial [cplusplus.com] about namespaces quite useful. Changes to the extensions Re:Article (Score:1, Flamebait) No it's not, quit karma whoring Re:Article (Score:2) Yes it is, and that was a well-formatted repost. Site won't load for me... Mod Parent Up (Score:2) Sometimes I wonder if slashdot moderators don't go around and look at posts saying, "yeah... yeah, that one's a post other's would like to read... -1 Offtopic... heh heh... I just modded a good post bad... heh heh... I feel better about myself now..." *rolls eyes* Mod this post all the way down to hell if you wa Re:Mod Parent Up (Score:2) Do most users even need PHP 6? (Score:5, Interesting) Re:Do most users even need PHP 6?
Re:Do most users even need PHP 6? (Score:1) I, personally, will probably move to it fairly quickly because I can do it -- no one's going to be too bothered if my personal sites blow up -- because I make use of the new features introduced in PHP 5 today, and because I wish for the consistency that PHP lacks right now.
Re:Do most users even need PHP 6? (Score:4, Insightful) Try some other languages (Ruby, Python, CLisp/Scheme/Haskell/OCaml if you manage to get past the syntax), you'll see that PHP is lacking in many areas. Closures (even read-only, as in Python), functions as first-class objects, namespaces, modules, consistency across the standard library, properties, metaobjects, strong typing (not static, strong), infinite-length integers (these dummies want to add a 64-bit integer in PHP 6... whoa, so kewl eh), good iterators (not Java's, either Ruby-style or Python-style iteration), partial application (currying), pattern matching, ...
Re:Do most users even need PHP 6? (Score:4, Insightful) In my experience (working in large PHP-driven shops) the people writing PHP didn't necessarily have a comp-sci background. They don't care (and most don't even know) what a first-class object is, or why they would even want namespaces, strong typing or 64-bit integers. In fact, adding them to the language makes it inaccessible to them, so they'll just stick with PHP 4, which "works for me." I mean, in a fundamental sort of way, yes, PHP is broken, but in a usable sort of scripting-language way, PHP 4 isn't broken. And if it ain't broke, then don't try to fix it.
Namespace (Score:4, Insightful)
Re:Namespace (Score:5, Informative) Will this do?
Re:Namespace (Score:2) I still get segfaults with XSLT processing on every typo that's in the XSLT file, so I have no line numbers or even filenames to track down the error that causes it to segfault. Other than that, adding more proper OOP support in PHP 5 is a very welcome addition. Namespaces? Sure, they'd be good to use in complex applications, but proper OOP code can easily live without them. Some fast way for persistent data...
woohoo! (Score:2, Funny) Funny thing is, in the Digg discussion on PHP 6, most of the people were upset at how much rewriting they'd have to do without register globals or magic quotes! bwahahaha!
Lack of backwards compatibility (Score:5, Interesting) They should leave in backwards compatibility for the class-based OO model which pre-PHP5 versions use. Once they bring out PHP 6, PHP 5 will be the only version which runs new and legacy PHP scripts, so PHP 5 will clearly become the standard for a long time. I'm a big fan of PHP, but with so many apps (e.g. my university's timetabling app) still in PHP 3, all the rest in PHP 4, both becoming obsolete, changes to the API, even changes to what's allowed within the same version [phpbb.com], I'm starting to wonder if I should have focused on a more stable language like Python or Perl instead.
Re:Lack of backwards compatibility (Score:2) Ruby is not a framework, god damn it! Rails (or Ruby on Rails, RoR) is a framework built on top of Ruby; Ruby is a general-purpose, object-oriented, multi-paradigm programming language. And a very good one, too.
Re:Lack of backwards compatibility (Score:3, Informative) I'll note that phpBB 2.0.x was written for PHP3 and PHP4 (those were the only target versions of PHP about when 2.0.0 was released).
The fact that 2.0.x works with PHP5 is proof that there's enough backwards compatibility with PHP4. The bits that break are proof that there's not enough. Also, phpBB 3.0 is being written for PHP4 and PHP5, and it works fine under both.
Re:Lack of backwards compatibility (Score:1) One of the nasty aspects of web design is the speed of it. The next great standard, the next new browser, all these things impact your code negatively. Talk about an uphill battle! After many trial-and-error experiences, I've started to force myself to segregate different types of information as much as possible. Separate content from markup, markup from script, etc. Times change, and the code you're using will change with them. If you keep different portions of a page separated, it makes it easier to update.
[OT] Re:Lack of backwards compatibility (Score:1) "I'm a big fan of PHP" -- You must be, you named your application after it. Since I've never spoken to the developer of any other php* app, let me ask you something: why on earth did you name your software after the language you wrote it in? What happens if you later decide to port your app to Perl or Ruby? This is common for PHP devs for some reason, yet you never see it anywhere else. I heard Rasmus Lerdorf speak a few years ago and he expressed his puzzlement (and annoyance)...
Re:[OT] Re:Lack of backwards compatibility (Score:2) I can't imagine it being called anything else which makes you think "web-based implementation of Diplomacy" the moment you read the title.
It's not going to attract a bigger audience (Score:2, Insightful) They should finish working on PEAR and getting it properly documented, along with getting most of the repository packages out of "alpha" release. Then we won't have this stigma about it. It's good to see that they want to "fine tune" PHP and they are discussing the important programming syntax elements of the language, but like everyone else is saying...
Re:It's not going to attract a bigger audience (Score:1) E.g. how the error checking system, "configuration swiss army knife tools", DB initialisation classes and DB objects all tie into the PEAR base library; then to make it even nicer you have some really cool caching and session management classes which you can incorporate into them.
Re:It's not going to attract a bigger audience (Score:1) This already happened.
Re:It's not going to attract a bigger audience (Score:1) Actually .NET is not a programming language, it's a platform. Many languages run on the .NET platform, PHP [php-compiler.net] amongst others.
PEAR documentation is terrible... (Score:2)
How about "use strict;" directive (Score:1)
Re:How about "use strict;" directive (Score:1) E_STRICT is even better! BTW, "taint" checks would be useful too, and Ruby-style "Safe Levels".
Namespacing and Unicode (Score:2) Seriously though, apart from its popularity, is there any reason to choose PHP over the multitude of other existing solutions?
Re:Namespacing and Unicode (Score:1) I started designing an app framework for PHP5 (since it has rudimentary OO support) to eliminate wheel reinvention for even trivial requirements like data/type validation.
But then Ruby on Rails came out and it looked like exactly what I was trying to do, except on a better foundation and with someone else doing all the work ;) PHP4/5 seemed scrappy, but...
Re:Namespacing and Unicode (Score:2) If you actually know how to code, not really, no. On the plus side, it's extremely easy to get started (just create a .php file and start emitting a mess of PHP and HTML), which is good for two-page stuff or for beginners (not good as in "teaches you how to code", but good as in "well, at least it does something, and you don't even have to begin understanding how it does it"), on the...
Re:Namespacing and Unicode (Score:2) Reliable isn't cheap. Good isn't cheap either. Good web apps, innovative "web 2.0" stuff, aren't created by grunts. You need intelligent people, builders with a vision and abilities. There are smart people in the PHP trade, and...
GOTO? (Score:2, Insightful)
Re:GOTO? (Score:1) Perhaps you should try RTFA (yeah, yeah, this is Slashdot, blah blah...) From ng-goto [php.net]
Re:GOTO? (Score:2) The whole noise against goto is utterly pointless. If your functions are large enough that goto makes them unreadable, you should break them up anyway. And if you insist on keeping them intact, then using goto is likely to result in a lot cleaner code than monstrous multi-layered block structures...
Re:GOTO? (Score:1) Interestingly enough, I've seen more GOTOs in 'hardcore' C code (just look at the Linux kernel) than I ever saw in VB. So what's your excuse for hating VB?
PHP needs serious redesigns (Score:5, Insightful)
Re:PHP needs serious redesigns (Score:1) The lesson learned here is that computers should not try to act smart (magic quotes and register globals). I am smart enough to add_slashes() my input. Re:PHP needs serious redesigns (Score:2) You mean we could do something like this? Wouldn't that be neat? This is a Re:PHP needs serious redesigns (Score:1) Re:PHP needs serious redesigns (Score:2) Re:PHP needs serious redesigns (Score:1) This is the sum of your critique of the language!?! You don't like the name of some of the functions that are included with it?!? For $5 I'll sell you a patched PHP, just give me your list of preferred functions, and you can have (apparently) the perfectly designed language. I say Perl's design is fucked up. The different sigils are on different parts of the keyboard. @ isn't next to % and by god, $ c Web hosts should be offering more choice (Score:3, Interesting) Re:Web hosts should be offering more choice (Score:2) PSP web browser reading a PSP page with PSP images (Score:1) I have yet to see a major web host that offers Python Server Pages support Possibly because the abbreviation is already overloaded. As the developer of Luminesweeper [pineight.com], a clone of a PlayStation Portable flagship title, puts it: "Where I come from, PSP is still Corel Paint Shop Pro." Re:Web hosts should be offering more choice (Score:2, Interesting) Re:Web hosts should be offering more choice (Score:2) there are reasons (Score:2) I disagree: I think all three of those languages and runtimes have significant disadvantages compared to PHP for web applications. In fact, mod_python and mod_perl are old enough that if they were the best choice, they'd be more widely used. Alternative PHP Cache (Score:2) Re:Alternative PHP Cache (Score:2) Jeeez... (Score:1) One thing I'm really a better thing could be .... (Score:1) Re:a better thing could be .... (Score:2) PHP's biggest problem (Score:4, Informative) PHP5 came with exception handling like that found in most mature object oriented languages, but the problem is that most PHP functions do not use exceptions, they simply return false. This makes it difficult to use exception handling at all, because you have to mix the old way and the new way if you want to leverage PHP's huge library of functions. I think the solution would be to implement standard exception behavior for all of the old libraries and add a setting in php.ini to turn the behavior on or off. It's easy to write a PHP script that will fail without throwing an exception or returning a boolean value that can be handled. This makes PHP very difficult to use if you need your code to be very robust and solid. I've resorted to using classes and putting some code in the destructor to clean up if the script terminates unexpectedly, but this is ugly and should be something that one can handle by enclosing the error-prone logic in a try/catch block. Needless to say, this problem isn't always a major issue for websites, but if you're doing anything more complicated than simple db lookups and printing HTML, robustness matters and PHP's shortcomings really stand out. Partially due to this problem I recently switched a fairly large project to Ruby on Rails and have been EXTREMELY pleased with how fast development has progressed. I was able to reproduce 2 months of PHP development in rails in 2 weeks, learning curve included. Ruby is a joy to program with, way easier than PHP, C#, Python, etc. 
Re:PHP's biggest problem (Score:3, Informative) I agree that it would be a nice option to have, though in my experience the only language I've worked with on any regular basis where people actually use try/catch blocks instead of the return false way is in Java, which requires it. Re:PHP's biggest problem (Score:2) C# support exceptions very similar to Java's and they're not required, but they are available to create robust code when it is needed, unlike PHP. Ruby also has a similar mechanism with slightly different syntax; optional but highly useful. Argh, WAMP (Score:1) Want UTF-savvy string functions for PHP? (Score:2) From the article:
http://slashdot.org/story/06/03/14/0455221/php-6-and-what-to-expect
CC-MAIN-2015-22
refinedweb
3,240
61.06
Overview
You can create a named "bundle" of nodes and use that name prefixed with an @ sign (for example, @lights) to stand for the contents of the bundle in any parameter that accepts a list of nodes.
Tip: Bundles were originally created to help with light linking. However, the current recommended way to do automatic light linking is to use categories, which work through tagging rather than explicitly building lists of nodes. Bundles may still be useful in certain circumstances.
The bundle list shows all bundles in the current scene. Bundles are groups of nodes, potentially from different networks. Bundles let you refer to a group of nodes by name instead of listing them explicitly. Bundles are especially useful for light linking.
You can use a bundle anywhere Houdini expects a list of nodes by writing @bundle. For example, if you have a bundle named keylights, you can use @keylights to refer to it in a light mask.
Bundles will only show in choice menus when the filter is set appropriately. For example, for a choice menu that only accepts objects (such as Candidate Objects in a ROP), the filter in the bundle needs to be set to Geometry Only.
Normal (manual) bundles
You can set the contents of normal bundles by dragging nodes onto the bundle in the bundle list pane or by using the toolbar buttons (see below).
Toolbar
Creates a new bundle.
Creates a new smart bundle (see below).
Sets the contents of the displayed bundle to the current selection.
Adds the current selection to the displayed bundle.
Removes the current selection from the displayed bundle.
Turns the select flag on or off for all nodes in the bundle that have the flag.
Turns the display flag on or off for all nodes in the bundle that have the flag.
Turns the bypass flag on or off for all nodes in the bundle that have the flag.
Turns the template flag on or off for all nodes in the bundle that have the flag.
Turns the expose flag on or off for all nodes in the bundle that have the flag.
Selects the contents of the displayed bundle.
Adds the contents of the displayed bundle to the current selection.
Removes the contents of the displayed bundle from the current selection.
Smart bundles
Normally you define the contents of bundles by dragging nodes onto the bundle in the bundle list pane. However, you can also define smart bundles. These are like smart playlists in iTunes: they automatically include all nodes that match a pattern you define.
Smart bundle patterns
Use * to match any string.
Use ? to match any single character.
Use ^ to match only if the following pattern doesn't match.
Use % to match anything up to but not including the next /.
Use (, |, and ) to match any of the |-separated strings you specify.
Patterns that don't contain a slash will match any node where the pattern matches the node's name. Putting a star at the end of a pattern, e.g. /obj/model*, will match nodes under /obj/ whose names start with model, and any of their children, because * matches slashes (/) in the path. To match nodes whose names start with model but not their children, use the compound pattern: /obj/model* ^/obj/model*/*. The ^ in front of the second pattern means to not include nodes matching /obj/model*/*.
Example smart bundle patterns
/obj/% -- Match any nodes at the object level, but none of their children.
/obj/node(12|13|14)/child -- Match the nodes with any of the following paths: /obj/node12/child, /obj/node13/child, /obj/node14/child.
/obj/*/child -- Match any node called "child" that is a subchild of obj (i.e. * will match anything in between "/obj/" and "/child").
Adapting to categories
It's possible through some scripting to use the names of the bundles an object belongs to as category names. Try putting the following function in the Python session module (Windows ▸ Edit Python Source).

def bundleList():
    node = hou.pwd()
    bundles = []
    for b in hou.nodeBundles():
        if b.containsNode(node):
            bundles.append(b.name())
    return ' '.join(bundles)

Then put pythonexprs('hou.session.bundleList()') in the Categories parameter of geometry nodes. You should have the bundle names showing up as categories for your objects.
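Bundles can also be created and filled from Python rather than through the bundle list pane. A minimal sketch using standard HOM calls (hou.addNodeBundle() and NodeBundle.addNode()); the bundle name here is just an example:

import hou

# Create a normal (manual) bundle and add the currently selected nodes.
keylights = hou.addNodeBundle("keylights")
for node in hou.selectedNodes():
    keylights.addNode(node)

# Any parameter that accepts a node list can now use @keylights.
print(keylights.name(), [n.path() for n in keylights.nodes()])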
https://www.sidefx.com/docs/houdini/ref/panes/bundles.html
CC-MAIN-2021-17
refinedweb
711
65.62
I can't find why this is not working. Here's my code, and then the error.
Code Java:

import java.util.*;
import java.io.*;

public class CordreyLab6 {
    public static void main(String[] args) throws IOException {
        // get user input for file name
        Scanner console = new Scanner(System.in);  // for user input of file name
        System.out.println("Please enter a file name: ");
        String userFile = console.nextLine();
        // read file and pass to process method
        Scanner input = new Scanner(new File(userFile));
        processFile(input, console);
    } // end main

The error is:

Please enter a file name: 
Lab6Input.txt
Exception in thread "main" java.io.FileNotFoundException: Lab6Input.txt (The system cannot find the file specified)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(Unknown Source)
    at java.util.Scanner.<init>(Unknown Source)
    at CordreyLab6.CordreyLab6.main(CordreyLab6.java:17)

I've put copies of the text file in all the folders I think I could be looking in -- the class folder, the c:\root, the src folder, under the sink. I wish I knew what folders and files were created when Eclipse creates a Java program. Obviously there is quite a bit more going on than just the source code folder. I don't know where the program is expecting to find the file, so I'm just guessing on where to put it. I suspect there is a better way. Thank you so much for the help!
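[Editor's note: a quick diagnostic, not from the thread, that shows where the JVM actually resolves relative paths -- in Eclipse this is normally the project root, not the src or class folder.]

import java.io.File;

public class WhereIsMyFile {
    public static void main(String[] args) {
        // Relative paths are resolved against the JVM's working directory.
        System.out.println("Working directory: " + System.getProperty("user.dir"));

        File f = new File("Lab6Input.txt");
        System.out.println("Looking for: " + f.getAbsolutePath());
        System.out.println("Exists? " + f.exists());
    }
}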
http://www.javaprogrammingforums.com/%20file-i-o-other-i-o-streams/15625-cant-find-file-error-printingthethread.html
CC-MAIN-2015-22
refinedweb
238
68.67
Using Two-Dimensional Arrays
Great news! What used to be the old one-floor Java Motel has just been renovated! The new, five-floor Java Hotel features a free continental breakfast and, at absolutely no charge, a free newspaper delivered to your door every morning. That's a 50-cent value, absolutely free! Speaking of things that are continental, the designers of the new Java Hotel took care to number floors the way people do in France. The ground floor (in French, "le rez-de-chaussée") is the zero floor, the floor above that is the first floor, and so on. Figure B-1 shows the newly renovated hotel. Figure B-1: A big, high-rise hotel.
You can think of the hotel as an array with two indices -- a two-dimensional array. You declare the array this way:

int guests[][] = new int[5][10];

The guests array has five rows (numbered 0 to 4, inclusive) and ten columns (numbered 0 to 9, inclusive). To register two guests in Room 9 on the first floor, you write

guests[1][9] = 2;

TechnicalStuff: The people who do serious Java like to think of a two-dimensional array as an array of rows (that is, an array of ordinary one-dimensional arrays). With this thinking, the rows of the guests array (above) are denoted guests[0], guests[1], guests[2], guests[3], and guests[4]. For a picture of all this, refer to Figure B-1. A complete program that uses this guests array is shown in Listing B-1.

Listing B-1: An array of arrays

import static java.lang.System.out;
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class ShowGuests {
    public static void main(String args[]) throws FileNotFoundException {
        int guests[][] = new int[5][10];
        Scanner myScanner = new Scanner(new File("GuestList"));

        // Read a guest count for each of the 50 rooms.
        for (int floor = 0; floor < 5; floor++) {
            for (int roomNum = 0; roomNum < 10; roomNum++) {
                guests[floor][roomNum] = myScanner.nextInt();
            }
        }

        // Display the counts, top floor first.
        for (int floor = 4; floor >= 0; floor--) {
            out.print("floor " + floor + ":");
            for (int roomNum = 0; roomNum < 10; roomNum++) {
                out.print(" ");
                out.print(guests[floor][roomNum]);
            }
            out.println();
        }
        out.println();
        out.print("room: ");
        for (int roomNum = 0; roomNum < 10; roomNum++) {
            out.print(" ");
            out.print(roomNum);
        }
    }
}

Figure B-2 shows a run of the code from Listing B-1. The input file, GuestList, looks like the file in Listing 11-1, except that the file for this section's program has 50 lines in it. OnTheWeb: You can snare a 50-line GuestList file along with this document's code listings from the book's Web site. Figure B-2: Guest counts.
In Listing B-1, notice the primary way you handle a two-dimensional array -- by putting a for loop inside another for loop. For instance, when you read values into the array, you have a room number loop within a floor number loop. Because the roomNum loop is inside the floor loop, the roomNum variable changes faster than the floor variable. In other words, the program prints guest counts for all the rooms on a floor before marching on to the next floor. Remember: The outer loop's variable changes slower; the inner loop's variable changes faster.
In displaying the hotel's numbers, I could have chosen to start with floor 0 and go up to floor 4. But then the output would have looked like an upside-down hotel. In the program's output, you want the top floor's numbers to be displayed first. To make this work, I created a loop whose counter goes backwards:

for (int floor = 4; floor >= 0; floor--)

So notice that the loop's counter starts at 4, goes downward each step of the way, and keeps going down until the counter's value is equal to 0. This section does one better on the stuff from earlier sections.
If you can make a two-dimensional array and an array of objects, then why not join these ideas to make a two-dimensional array of objects? Technically, this ends up being an array of arrays of objects. How about that! First you define your two-dimensional array of Room objects. (The declaration of the Room class comes right from Listing 11-5.)

Room rooms[][] = new Room[5][10];

Next, you do that all-important step of constructing an object for each component in the array.

rooms[floor][roomNum] = new Room();

Then you read values into the array components' variables, write values, and so on. A complete program is shown in Listing B-2.

Listing B-2: A two-dimensional array of objects

import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;
import static java.lang.System.out;

public class ShowRooms {
    public static void main(String args[]) throws FileNotFoundException {
        Room rooms[][] = new Room[5][10];
        Scanner myScanner = new Scanner(new File("RoomList"));

        // Construct a Room for each array component, then fill it.
        for (int floor = 0; floor < 5; floor++) {
            for (int roomNum = 0; roomNum < 10; roomNum++) {
                rooms[floor][roomNum] = new Room();
                rooms[floor][roomNum].readRoom(myScanner);
            }
        }

        for (int floor = 4; floor >= 0; floor--) {
            out.println("floor " + floor + ":");
            for (int roomNum = 0; roomNum < 10; roomNum++) {
                out.print(" ");
                rooms[floor][roomNum].writeRoom();
            }
            out.println();
        }
    }
}

By the time you're done, the program that uses objects is actually simpler than the code that doesn't use objects. That's because, in writing the code with an array of objects, you're taking advantage of methods that are already written as part of the Room class, such as readRoom and writeRoom. A run of the code in Listing B-2 displays information about all 50 of the hotel's rooms. Instead of showing you all that stuff, Figure B-3 shows you the first several lines in the run. (You don't need to know about every room in the Java Hotel anyway.) The input to the code in Listing B-2, the RoomList file, looks just like the stuff in Listing ... The only difference is that the RoomList file for this section's code has 150 lines in it. OnTheWeb: You can snare a 150-line RoomList file along with this document's code listings from the book's Web site. Figure B-3: Starting a run of the code from Listing B-2.
With all the examples building up to Listing B-2, the code in the listing may be fairly uneventful. The only thing you need to notice is that the line rooms[floor][roomNum] = new Room(); is absolutely, indubitably, 100-percent required. When you accidentally leave off this line (not if you leave off this line, but when you leave off this line), you get a runtime error message saying java.lang.NullPointerException.
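To see why that construction line matters, here is a minimal sketch (mine, not the book's) that omits it:

public class MissingRooms {
    public static void main(String[] args) {
        String[][] labels = new String[2][2];   // the array exists, but every component is null

        // labels[0][0] is null because no object was ever constructed for it,
        // so calling a method on it throws java.lang.NullPointerException.
        System.out.println(labels[0][0].length());
    }
}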
http://docplayer.net/18042737-Using-two-dimensional-arrays.html
CC-MAIN-2018-51
refinedweb
3,282
51.89
Early versions of Java did not include the Collections framework; they only defined several classes and interfaces that provide methods for storing objects. When the Collections framework was added in J2SE 1.2, the original classes were re-engineered to support the collection interfaces. These classes are also known as legacy classes. All legacy classes and interfaces were redesigned in JDK 5 to support generics. In general, the legacy classes are supported because there is still some code that uses them.
The following are the legacy classes defined by the java.util package. There is only one legacy interface, called Enumeration.
NOTE: All the legacy classes are synchronized.

boolean hasMoreElements()
// Returns true while there are still more elements to extract, and
// false when all the elements have been enumerated.

Object nextElement()
// Returns the next object in the enumeration, i.e. each call to
// nextElement() obtains the next object in the enumeration. It throws
// NoSuchElementException when the enumeration is complete.

Vector() // Creates a default vector, which has an initial size of 10.
Vector(int size) // Creates a vector whose initial capacity is specified by size.
Vector(int size, int incr) // Creates a vector whose initial capacity is specified by size and whose increment is specified by incr. The increment specifies the number of elements to allocate each time the vector is resized for the addition of objects.
Vector(Collection c) // Creates a vector that contains the elements of collection c.

Vector defines several legacy methods. Let's see some important legacy methods defined by the Vector class.

import java.util.*;

public class Test {
    public static void main(String[] args) {
        Vector<Integer> ve = new Vector<Integer>();
        ve.add(10);
        ve.add(20);
        ve.add(30);
        ve.add(40);
        ve.add(50);
        ve.add(60);

        Enumeration<Integer> en = ve.elements();
        while (en.hasMoreElements()) {
            System.out.println(en.nextElement());
        }
    }
}

10
20
30
40
50
60

Hashtable() // The default constructor. The default size is 11.
Hashtable(int size) // Creates a hash table that has an initial size specified by size.
Hashtable(int size, float fillratio) // Creates a hash table whose initial size is specified by size and whose fill ratio is specified by fillratio.
Hashtable(Map<? extends K, ? extends V> m) // Creates a hash table that is initialized with the elements in m. The capacity of the hash table is set to twice the number of elements in m. The default load factor of 0.75 is used.

import java.util.*;

class HashTableDemo {
    public static void main(String args[]) {
        Hashtable<String, Integer> ht = new Hashtable<String, Integer>();
        ht.put("a", new Integer(100));
        ht.put("b", new Integer(200));
        ht.put("c", new Integer(300));
        ht.put("d", new Integer(400));

        Set<Map.Entry<String, Integer>> st = ht.entrySet();
        Iterator<Map.Entry<String, Integer>> itr = st.iterator();
        while (itr.hasNext()) {
            Map.Entry<String, Integer> m = itr.next();
            System.out.println(m.getKey() + " " + m.getValue());
        }
    }
}

a 100
b 200
c 300
d 400

Properties() // Creates a Properties object that has no default values.
Properties(Properties propdefault) // Creates an object that uses propdefault for its default values.
Note: In both cases, the property list is empty.

import java.util.*;

public class Test {
    public static void main(String[] args) {
        Properties pr = new Properties();
        pr.put("Java", "James Gosling");
        pr.put("C++", "Bjarne Stroustrup");
        pr.put("C", "Dennis Ritchie");
        pr.put("C#", "Microsoft Inc.");

        Set<?> creator = pr.keySet();
        for (Object ob : creator) {
            System.out.println(ob + " was created by " + pr.getProperty((String) ob));
        }
    }
}

Java was created by James Gosling
C++ was created by Bjarne Stroustrup
C was created by Dennis Ritchie
C# was created by Microsoft Inc.

Stack() // Creates an empty stack.

You can use the peek() method to return, but not remove, the top object. The empty() method returns true if nothing is on the stack. The search() method determines whether an object exists on the stack and returns the number of pops that are required to bring it to the top of the stack.

import java.util.*;

class StackDemo {
    public static void main(String args[]) {
        Stack<Integer> st = new Stack<Integer>();
        st.push(11);
        st.push(22);
        st.push(33);
        st.push(44);
        st.push(55);

        Enumeration<Integer> e1 = st.elements();
        while (e1.hasMoreElements())
            System.out.print(e1.nextElement() + " ");

        st.pop();
        st.pop();

        System.out.println("\nAfter popping out two elements");
        Enumeration<Integer> e2 = st.elements();
        while (e2.hasMoreElements())
            System.out.print(e2.nextElement() + " ");
    }
}

11 22 33 44 55
After popping out two elements
11 22 33
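The peek(), empty(), and search() methods described above are not exercised by the demo, so here is a small supplementary sketch (not part of the original tutorial):

import java.util.Stack;

class StackMethodsDemo {
    public static void main(String[] args) {
        Stack<Integer> st = new Stack<Integer>();
        st.push(11);
        st.push(22);
        st.push(33);

        System.out.println(st.peek());      // 33 -- top element, not removed
        System.out.println(st.empty());     // false -- the stack still has elements
        System.out.println(st.search(11));  // 3 -- pops needed to bring 11 to the top
        System.out.println(st.search(99));  // -1 -- not on the stack
    }
}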
https://www.studytonight.com/java/legacy-classes-and-interface.php
CC-MAIN-2020-40
refinedweb
719
51.14
I'm trying to crop an image using cv2.cuda, and I tried cuda_GpuMat.adjustROI()
Hi all, I'm trying to crop an image using cv2.cuda. I can do the crop with cv2.UMat:

cropped = cv2.UMat(p, [minX, maxX], [minY, maxY])

With cv2.cuda_GpuMat.adjustROI():

cropped = p.adjustROI(minX, maxX, minY, maxY)

My code:

import cv2
import numpy as np

# im = cv2.imread('Remap.png')
im = np.zeros((1024, 1024, 3), np.uint8)  # assumed placeholder; the original line was garbled
print(im.shape)
a = (im.shape[0] * 2, im.shape[1] * 2)

gpu = cv2.cuda_GpuMat()
gpu.upload(im)
b = cv2.cuda.resize(gpu, a)
print(b.size())

maxX = 500
maxY = 500
minX = 500
minY = 500
b.adjustROI(maxY, minY, minX, maxX)
print("Adjust ROI : ", b.size())

cropped = cv2.UMat(a, [minX, maxX], [minY, maxY])
print("UMat : ", cropped.size())

-- And the problem is?
-- Actually I didn't know how to do the crop, but now I have the answer. Thanks for the efforts.
-- You got it wrong: in b.adjustROI(maxY, minY, minX, maxX), the value 500 (maxX and maxY) is too far, depending on the size of the image. Also, your problem description doesn't show an error.
-- Don't use the multiply operation a = (im.shape[0]*2, im.shape[1]*2); you're setting 2048 x 2048. I'm trying to resize it and then crop it.
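[Editor's sketch, not from the thread: GPU-side cropping under the assumption that your OpenCV CUDA build exposes the GpuMat constructor taking another GpuMat plus an ROI rect (x, y, width, height); if it doesn't, the download/slice fallback at the end always works.]

import cv2
import numpy as np

img = np.zeros((1024, 1024, 3), np.uint8)
gpu = cv2.cuda_GpuMat()
gpu.upload(img)

x, y, w, h = 100, 100, 200, 200
# Assumed overload: a view into the same GPU memory, no copy.
roi = cv2.cuda_GpuMat(gpu, (x, y, w, h))
print(roi.size())  # expected (200, 200)

# Fallback: download to host, slice with NumPy, re-upload if needed.
crop = gpu.download()[y:y + h, x:x + w]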
https://answers.opencv.org/question/225476/im-trying-to-crop-image-using-cv2cuda-and-i-tried-cuda_gpumatadjustroi/
CC-MAIN-2020-10
refinedweb
223
73.44
Writing code for printing has a bad reputation. WPF's support for printing is a bit sketchy, but a basic task like printing out a multipage document actually isn't that hard, once you know what to do. It turns out the 'knowing what to do' bit isn't that easy, since it took a lot of googling and just plain experimentation to finally get it to work. In this post I'll describe a demo application that allows the user to create a number of separate pages, and then to add TextBoxes to each page. Text can be typed into each TextBox, and then the whole set of pages can be sent to the printer. Although there is a complicated way of producing printout using classes from Microsoft's XPS (XML Paper Specification) library, you don't actually need to use this method. The technique I'll cover here is considerably simpler. First, I'll describe the demo application. It consists of a simple interface, with an A4-sized Canvas on the left (for non-European readers, A4 is the standard paper size in the UK, and is 210 mm x 297 mm, or roughly 8.3 x 11.7 inches). On the right are a control that lets you navigate between pages, a button that inserts a new page at the current location, and a button that opens up the print dialog. You can right-click on the Canvas to insert a TextBox at the mouse point, and then type text into the TextBox. Although there are a few interesting techniques used in the program that deal with various non-printing tasks, I won't cover them here since I want to concentrate on the printing. (You can download a working project using the link at the bottom of this post.) In order to print a multipage document, you need to write your own version of the DocumentPaginator class (in the System.Windows.Documents namespace). This is an abstract class that contains several methods and properties that you'll need to override and fill in to produce printed output. Your overridden DocumentPaginator's main purpose is to provide the graphics required for each page that is to be printed. The technique I'll use here involves creating a Canvas onto which you do all your drawing, and then returning a DocumentPage (another class in System.Windows.Documents) that is created from the Canvas.
So, without further ado, here's my version of a DocumentPaginator:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;

namespace PrintingDemo
{
    class Paginator : DocumentPaginator
    {
        Document document;
        Canvas pageCanvas;

        public Paginator()
        {
            MainWindow mainWindow = (MainWindow)Application.Current.MainWindow;
            document = mainWindow.Document;
            pageCanvas = mainWindow.PageCanvas;
        }

        public override DocumentPage GetPage(int pageNumber)
        {
            Canvas printCanvas = new Canvas();
            MainWindow mainWindow = (MainWindow)Application.Current.MainWindow;
            mainWindow.Document.CurrentPageIndex = pageNumber;
            mainWindow.DrawPage(printCanvas);
            printCanvas.Measure(PageSize);
            printCanvas.Arrange(new Rect(new Point(), PageSize));
            printCanvas.UpdateLayout();
            return new DocumentPage(printCanvas);
        }

        public override bool IsPageCountValid
        {
            get { return true; }
        }

        public override int PageCount
        {
            get { return document.PageCount; }
        }

        public override System.Windows.Size PageSize
        {
            get { return new Size(pageCanvas.Width, pageCanvas.Height); }
            set { throw new NotImplementedException(); }
        }

        public override IDocumentPaginatorSource Source
        {
            get { return null; }
        }
    }
}

All the methods and properties (except the constructor) are overrides of the corresponding parts of the abstract DocumentPaginator base class. I don't claim to be an expert here, so I'm not 100% certain what all of them are used for, but some of them should be fairly obvious. First, I've added a couple of local objects to the Paginator class. The Document class is the one that stores the list of pages created by the user (that is, Document is a class I wrote for this demo; it's not part of the printing code). The Canvas is the usual WPF control, and pageCanvas is the Canvas on which the TextBoxes are drawn by the user on screen. It's needed here, since we want the dimensions of the printed output to be the same as those of the on-screen Canvas, which is set up to be A4-sized. Since the Canvas is declared in the MainWindow class (it's actually declared in the XAML, but that's irrelevant here really), we need to access it. I suppose this isn't exactly the best class design, but I'm trying to get to printing as quickly as possible, so it'll do. In any case, the Paginator constructor simply accesses the MainWindow to get these two objects. Now we can have a look at the overridden methods and properties. We'll look at the properties first, since the GetPage() method is the meat of the class and is where most of the work is done, so we'll get the easy stuff out of the way first. If (as in this demo) you know the number of pages in your document, you can just return this as the PageCount property. In this demo, the Document class contains a property called PageCount which gives the number of pages stored in it, so we can just return that. In more complex cases, such as a long text document, you may have to work out the number of pages by calculating how much text you can put on each page, allowing for headers, footers, margins and so forth. That could be an onerous task, so we won't look into it here. The PageSize property is just what it says, and again we just return the size of the Canvas used for on-screen drawing. If you want to specify this size from scratch, remember that graphics in WPF uses the device-independent pixel as its basic unit, and one pixel is exactly 1/96th of an inch.
As an example, the A4 canvas comes out to a size of 210 mm = 793.7 pixels wide and 297 mm = 1122.5 pixels high. The 'set' part of this property is required in the override, but again it seems never to be called, so I haven't implemented it. I have to be honest and say that I don't really know what the IsPageCountValid and Source properties do, but for this simple example, returning 'true' for the former and 'null' for the latter seems to work well, so we'll not try to fix something that seems not to be broken. Finally, we consider the GetPage() method. This method takes a zero-based page number as its argument, and its responsibility is to generate a DocumentPage that contains the graphics that is to be printed for that page number. Obviously the code you put in here depends on what you're trying to print, but there are a few things you always need to do, no matter what your graphics are. The approach I've taken here is to create a local Canvas on which the current page will be drawn. Since the on-screen interface of the program shows only one page at a time, there is a DrawPage() method in MainWindow that draws the current page using the data stored in the Document object. Document doesn't store any actual graphical controls; rather it stores a list of data objects that specify what text is to be placed in each TextBox, and where the TextBox should be drawn. The code works as follows: the same Canvas is used to draw each page, so its Children collection is first cleared to make room for the new page about to be drawn. We then retrieve the current Page from the Document (again, Page is a class I wrote for this demo). The Page class contains a list of TextArea (another class I wrote, which stores the text and location for each TextBox) objects. For each TextArea, we create a TextBox. I've defined a WPF Style in the XAML, and used data binding to connect the Text property of the TextBox to the text stored in the TextArea object. Again, all this stuff is peripheral to the printing, but see the source code if you're interested in the details. Each TextBox is then positioned on the Canvas and added as a child of that Canvas. It is this method that is used to produce the graphical representation of the data stored in a Page object, and it is worth noting that it is the same code that is used to produce both the on-screen graphics and the graphics that are sent to the printer. This means that WYSIWYG is pretty well guaranteed. Back in the GetPage() method in the Paginator code, you'll notice four lines after the call to DrawPage(). These lines are the essential bit for all GetPage() methods. You'll need to call Measure() first, and pass it the size of the Canvas. Then you'll need to call Arrange(), and pass it a Rect whose arguments are an upper left coordinate of (0,0) (which is produced by a Point object by default), and again the Canvas size. Finally, you need to call UpdateLayout(). If you leave out the calls to Measure() and Arrange(), the TextBoxes won't appear at all; if you omit UpdateLayout(), the TextBoxes will appear, but the text inside them won't. Finally, you create a DocumentPage object and pass it the Canvas. This completes the creation of the graphics for the printout. OK, so how do you do the actual printing? If all you want to do is print out all the pages in the document, this is quite easy. If you want to specify a range of pages from within the document, this is a bit trickier and we'll get to that later.
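The DrawPage() code block itself didn't survive in this copy of the post, so here is a minimal sketch consistent with the description above -- the Document, Page, and TextArea member names are assumptions, not the author's actual code:

// Sketch only: member names (CurrentPage, TextAreas, Text, X, Y) are guesses
// based on the prose, not the author's real implementation.
public void DrawPage(Canvas canvas)
{
    // The same Canvas is reused, so clear the previous page's children first.
    canvas.Children.Clear();

    Page page = Document.CurrentPage;
    foreach (TextArea area in page.TextAreas)
    {
        TextBox textBox = new TextBox();
        textBox.Style = (Style)FindResource("TextAreaStyle");  // style assumed defined in XAML

        // Bind the TextBox's Text to the data object, as the post describes.
        textBox.SetBinding(TextBox.TextProperty,
                           new Binding("Text") { Source = area });

        Canvas.SetLeft(textBox, area.X);
        Canvas.SetTop(textBox, area.Y);
        canvas.Children.Add(textBox);
    }
}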
To print all the pages, we’ll look at the code in the event handler for the Print button: private void printButton_Click(object sender, RoutedEventArgs e) { PrintDialog printDialog = new PrintDialog(); if (printDialog.ShowDialog() == true) { DocumentPaginator paginator = new Paginator(); printDialog.PrintDocument(paginator, "Print demo"); } } We create and display a PrintDialog. If the user presses ‘Print’ in the dialog, the dialog returns ‘true’, and we then simply create a Paginator object and pass it to the PrintDialog’s PrintDocument() method. And that’s it. One final note. If you’re worried about wasting reams of paper testing your printing code, you should be able to see the results on-screen without having to actually print anything. If you’ve installed .NET (which you have to do to use Visual Studio anyway), you should notice the Microsoft XPS Document Writer as one of the available printers in the PrintDialog. If you select that, your output will be saved to an XPS file, which you can then look at with the XPS Viewer (again, this should have been installed when you installed Visual Studio) which should open if you double-click on the XPS file. Another possibility, if you have Office 2010 installed, is that OneNote allows you to send printer output to it, so again you can see it without wasting any paper. Some other programs may have readers for printer output, so check what shows up in your printer list in the PrintDialog. Get the source code here. Trackbacks […] bar at the top. As an example, we’ll examine the code we used for the printing demo in the last post, and in the process add a feature to it that illustrates some of the properties of context […]
https://programming-pages.com/2012/06/12/printing-in-wpf/
CC-MAIN-2018-26
refinedweb
1,910
61.06
Engineer Interview Questions
2,742 engineer interview questions shared by candidates
Top Interview Questions

Network Engineer at Facebook was asked... 19 Nov 2017
Whole datacenter design to deliver services based on users' locations.
8 Answers
- Can you please let me know what is asked in the coding interview (during your 3rd interview)?
- The coding interview was about solving small problems: sorting data, parsing network device output. Programming languages could be chosen freely (I chose Go and Python).
- Can you please let me know what behavioral questions are asked? Are there any design questions related to networking?
- I don't really remember the behavioral questions, but I do remember questions like "what was a project you think you've done well?" and "what was a project you think you've done wrong?". I think the behavior evaluation was mostly done during the lunch you eat with someone from the team you'll work in if you get in. Design questions about networking were mostly about BGP and datacenter fabrics (which is something I was not good at; I work in a network operator environment). It was quite interesting, to be honest, but quite hard for me. My technical interviewers were looking for answers where you would have to use both networking and system/coding solutions. The questions could never be answered only with networking skills (like how to load-balance traffic 50/50 between two upstream providers).
- What was the networking phone screening focused on?
- Mostly protocols such as BGP. One of the questions was "What is your favorite protocol?". This is not a simple question; I actually needed to explain why, and had to go into a lot of detail about the protocol that I mentioned. So choose wisely if you are asked this question.
- Can you please let me know the details of the coding interview: 1. how many questions? 2. do we need to solve them, or is the logic enough? 3. the main areas of focus?
- I have my first round scheduled in the first week of June. Can you please help me out with the topics I need to cover?

Software Engineer at Amazon was asked... 28 Dec 2010
You have an array with n elements. How would you do a circular shift of k positions? Time and space complexity?
6 Answers
- Make a circular linked list, and move the head pointer k positions to do k shifts. It's O(n) time complexity. Space is constant (circular linked list).
- Well, space isn't constant, because you took an array and then copied it somehow to a linked list. Remember, you were given an array? If I understand the question correctly, they're asking to do a circular shift of some range of values, like the first k values in an array of length n? So if you wanted to shift right:

temp = array[k]
for index = k down to 1:
    array[index] = array[index - 1]
array[0] = temp

this would be O(k)? I mean, it would take k steps, but maybe it's somehow still O(n).
- Oh, sorry, I misunderstood. Not k values -- move everything k positions. Praveen Chettypally's answer works, but the space complexity would be O(n), since there is a full copy of the list. The simplest would probably be to make another array and copy in, starting at the (n-k)th element, going to the end, then starting at the beginning. A second array would probably be a better option than a completely different data structure. What if it has to be done in place?
Is there an O(n) solution?

Alright:

    shiftArray( theArray, M ):
        size = len( theArray )
        assert( size > M )
        reverseArray( theArray, 0, size - 1 )
        reverseArray( theArray, 0, M - 1 )
        reverseArray( theArray, M, size - 1 )

O(n) with no extra storage. Wish I could have thought of that one myself...

I believe this does the trick too:

I tried the above function, shiftArray, and it looks like it is not working. shiftItemsFromList class:

    class shiftItemsFromList { ... }

Part of the main function:

    System.out.println("Circle Shift N size array for M possitions:");
    char[] array = {'a', 'b', 'c', 'd', 'e', 'f'};
    shiftItemsFromList sh = new shiftItemsFromList();
    String s = sh.shiftArray(array, 2);
    System.out.println("Print the Circle Shift N size array: " + s);
    System.out.println("DONE");

OUTPUT:

    Circle Shift N size array for M possitions:
    Print the Circle Shift N size array: cbadaf
    DONE

Graduate Software Engineer at Ericsson-Worldwide was asked (30 Mar 2018): I showed them my IoT project, which works with the MQTT protocol. They asked what other protocols are available for IoT.

4 Answers:

I answered them that I have no idea. (lolz)

BTW, what roughly is the content of the 1:1 basics coding exercise?

If you ask ROUGHLY, it's like a list of incomplete small programs you need to complete/solve. Like an in-class CA. The purpose of this is to understand how you solve problems (logical thinking stuff).

I am not from a coding background and my knowledge is limited to functions. I hardly know anything about classes and objects and other OOP concepts. So are the questions more inclined towards OOP concepts, or can they be solved without OOP?

Senior Software Engineer at Tripadvisor was asked (27 Apr 2015): Given a range of numbers, return 6 different numbers randomly. In O(n).

3 Answers:

Suppose the range is from 0 to 100. 1. Create and initialize an array of 101 elements, filled from 0 to 100. 2. Set max to be the last element in the array, i.e. max = 100. 3. Get a random number between 0 and max: r = rand(0, max). 4. Replace array[r] with array[max] and decrease max by 1 (max = max - 1). 5. Repeat from step 3.

As this question is usually posed, there is an additional constraint: the numbers are presented in a stream, and you do not have enough memory to store them all.

    public static void main(String[] args) {
        int max = 100;
        Integer[] nums = new Integer[max];
        for (int i = 0; i < nums.length; i++) {
            nums[i] = i;
        }
        Collections.shuffle(Arrays.asList(nums));
        for (int i = 0; i < 6; i++) {
            System.out.println(nums[i]);
        }
    }

Software Development Engineer at Amazon was asked (23 Feb 2011): Difference between "hashing a string" and "encrypting a string". Then: is it possible to find two elements for which the hash is the same?

3 Answers:

You can't "go back" from a hash. You can go back from encryption if you know the secret (say a password, private key, whatever). Second question: yes, but then you have a problem.

1. Encryption uses a secret key while hashing does not require any key. Moreover, a hash is one-way, but encryption can be reversed by a decryption operation. 2. Yes, that's called a hash collision, which, although a low-probability occurrence, does exist.

Decryption of an encrypted string is possible. But we cannot say the same thing for hashing, because hashing is a one-way operation. Q2: Low probability, but it is possible using brute force.

Software Engineer, Site Reliability Engineering at Google was asked
(12 Jul 2014): Enumerate the following from 1 to 4, with 1 the fastest to execute and 4 the slowest: read CPU register, disk seek, context switch, read from main memory.

3 Answers:

read cpu register - 1, context switch - 2, read from main memory - 3, disk seek - 4

context switch - 3

1 CPU register, 2 Memory, 3 Context switching, 4 Disk

Senior Software Engineer at LinkedIn was asked (19 May 2015): Find a number in a sorted array and then find the number in an unsorted array. They will unsort the array on their own.

4 Answers:

Oh guys, this was so easy. I was able to provide six or seven ways and provided complexity analysis as well, which he was happy with... but I don't know what the expectation was... I think I just wasted 2 weeks of my precious job hunting time...

I don't get it. Why did they not move forward?

I know that this is too easy, and probably this thread was closed, but I'd be very thankful if anyone could validate the code below for this question.

    @Test
    public void test() {
        String[] sortedArray = {"a","b","c","d","e","1"};
        String[] unsortedArray = {"a","2","c","3","1","4"};
        int number = 0;
        for(int i=0;i

Software Development Engineer I Intern at Amazon was asked (3 Oct 2015): Given an array of integers [1,2,3,4] and target t = 5, come up with a solution that will print out all the unique pairs in the array that sum to t.

3 Answers:

Found this very hard, but the interviewer gave subtle hints, and I eventually came up with the idea of using a nested for loop. But I was not completely correct, as I had set the inner counter to j = 0 instead of j = i+1.

    array a = new array();
    foreach(int i in a) {
        if (i == t) {
            console.writeline(i.toString());
        }
    }
    // c# code, i am not sure if this is what they wanted

    public class CalculateTuple {
        /** @param args */
        public static void main(final String[] args) {
            final int arr[] = { 1, 2, 3, 4 };
            final int sum = 5;
            solution(arr, sum, false);
        }
        /**
         * find possible tuple for provided sum
         * @param input input array
         * @param expectedSum expected sum value for the tuple
         * @param isCommutative true: (1,2) != (2,1) will give you both tuples;
         *                      false: (1,2) == (2,1) are the same and give you only (1,2)
         */
        public static void solution(final int input[], final int expectedSum, final boolean isCommutative) {
            final Map indexedArray = new HashMap();
            final Map uniquePair = new HashMap();
            final int length = input.length;
            for (int i = 0; i < length; i++) {
                indexedArray.put(input[i], i);
            }
            int count = 0;
            for (int i = 0; i < length; i++) {
                final Integer integer = indexedArray.get(expectedSum - input[i]);
                if (integer != null && integer != i) {
                    if (isCommutative || (null == uniquePair.get("(" + i + "," + integer + ")")
                            && null == uniquePair.get("(" + integer + "," + i + ")"))) {
                        System.out.println("(" + i + "," + integer + ")");
                        uniquePair.put("(" + i + "," + integer + ")", "(" + i + "," + integer + ")");
                        count++;
                    }
                }
            }
            System.out.println("Found " + count);
        }
    }

ICT Student Engineer at IBM was asked (8 Apr 2016): There were no questions like "What would you do if...". It was more of a free talk.

3 Answers:

I went through my CV, explaining in detail my experience, the projects (college, professional) I worked on, the challenges faced while working on those projects, and how I approached and dealt with them. A bit about college experience and projects and the final project.
I also gave some information about myself, what kind of person I am, what I like doing in my free time, etc., just to let them know that I am a "real" person and have interests outside of work and work-related topics.

How long did it take you to get an offer letter (after the call from the recruiter confirming your selection)?

Data Center Facilities Technician (Engineer) at Google was asked (22 Jun 2016): As I have signed a confidentiality agreement I can't share interview questions. But they were purely technical interviews and very interesting. You have to be a strong professional in your occupation to get an offer.

3 Answers:

I answered most of the questions, but that was not enough to get an offer. I know which questions I gave wrong answers to, because I checked afterwards and improved my knowledge base. It is a very fair interview process.

Can you share what abilities (e.g. HVAC, electrical...) are needed in this position? I mean, they listed a lot of qualifications, but what is most important?

I attended a Google data centre interview recently. The questions were purely application oriented and they test your expertise in the field. I don't want to reveal the questions, but everything is behavioral based: when you are in this situation, how do you handle it. They asked me nearly 15 questions: 8 on AHU problems faced daily by an HVAC engineer, 3 on chillers, 1 on cooling towers, 2 on fire escape and pre-actuated pipe, and 1 on water treatment. All these questions were purely experience based, and they cannot be answered by anyone other than an expert who has opened up the chiller and AHU unit. Not even one question on basics; all questions were higher end.
https://www.glassdoor.ie/Interview/engineer-interview-questions-SRCH_KO0,8.htm
CC-MAIN-2020-45
refinedweb
2,054
66.03
Copyright © 2010 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply. August 2010 Seventh Public Working Draft of "Voice Extensible Markup Language (VoiceXML) 3.0". The main differences from the previous draft are described in Appendix F Major changes since the last Working Draft. A diff-marked version of this document is also available for comparison purposes. This document is very much a work in progress. Many sections are incomplete, only stubbed out, or missing entirely. To get early feedback, the group focused on defining enough functionality, modules, and profiles to demonstrate the general framework. To complete the specification, the group expects to introduce additional functionality (for example speaker identification and verification, external eventing) and describe the existing functionality at the level of detail given for the Prompt and Field modules. We explicitly request feedback on the framework, particularly any concerns about its implementability or suitability for expected applications. By late 2010 the group expects all key capabilities to be present in the specification, with details worked out by early 2011. Applications written as 2.1 documents can be used under a 3.0 processor using the 2.1 profile. As an example, the Implementation Report tests for 2.1 (which includes the IR tests for 2.0) will be supported on a 3.0 processor. Exceptions will be clarifications and changes needed to improve interoperability. This document is a W3C Working Draft. It has been produced as part of the Voice Browser Activity. The authors of this document are participants in the Voice Browser Working Group. For more information see the Voice Browser FAQ. The Working Group expects to advance this Working Draft to Recommendation status. Comments are welcome on www-voice@w3.org (archive). See W3C mailing list and archive usage guidelines. 
In this document, the key words "must", "must not", "required", "shall", "shall not", "should", "should not", "recommended", "may", and "optional" are to be interpreted as described in [RFC2119] and indicate required levels for compliant VoiceXML 3.0 implementations. Terms used in this specification are defined in Appendix C Glossary of Terms.

How does one build a successor to VoiceXML 2.0/2.1? Requests for improvements to VoiceXML fell into two main categories: extensibility and new functionality. To accommodate both, the Voice Browser Working Group restructured the language into independent modules and profiles, each with a detailed semantic description. One of the benefits of detailed semantic descriptions is improved portability within VoiceXML. Two vendors may implement the same functionality differently; however, the functionality must be consistent with the semantic meanings described in this document, so that application authors are isolated from the different implementations. This increases portability among platforms that support the same syntax. Note that there are many other factors that affect portability and are outside the scope of this document (e.g. speech recognition capabilities, telephony).

The remainder of this document is structured as follows:

3 Data Flow Presentation (DFP) Framework presents the Data-Flow-Presentation Framework, its importance for the development of VoiceXML 3.0, and how VoiceXML 3.0 fits into the model.

4 Core Concepts explains the core concepts underlying the new structure for VoiceXML, including resources, resource controllers, the relationship between syntax and semantics, DOM eventing, modules and profiles.

5 Resources presents the resources defined for the language. These provide the key presentation-related functionality in the language.

6 Modules presents the modules defined for the language. Each module consists of a syntax piece (with its user-visible events), a semantics piece (with its behind-the-scenes events) and a description of how the two are connected.

7 Profiles presents two profiles. The first, the VoiceXML 2.1 profile, shows how a language similar to VoiceXML 2.1 can be created using the structure and functionality of VoiceXML 3.0.
The second, the Basic profile, leaves out higher-level flow control constructs such as <form> and the associated Form Interpretation Algorithm.

The Appendices provide useful references and a glossary of terms used in the specification.

For everyone: please first read 3 Data Flow Presentation (DFP) Framework. The data-flow-presentation distinction applies not only to VoiceXML 3.0, but to many of W3C's specifications. Understanding VoiceXML's role as a presentation language is crucial context for understanding the rest of the specification.

For application authors: we recommend that you begin with syntax and only gradually explore details of the semantics as you need to understand behavioral specifics.

For VoiceXML platform developers: we recommend that you begin with the functionality and framework and only focus on syntax later.

Unlike VoiceXML 2.0/2.1, the focus in VoiceXML 3.0 is almost exclusively on the user interface portions of the language. By choice, very little work has gone into the development of data storage and manipulation or control flow capabilities. In short, VoiceXML 3.0 has been designed from the ground up as a *presentation* language, according to the definition presented in the Data Flow Presentation ([DFP]) Framework. Although VoiceXML 3.0 is a presentation language, it also contains within it all 3 levels of the DFP framework (Figure 6).

Figure 6: DFP Architecture

The Data Flow Presentation (DFP) Framework is an instance of the Model-View-Controller paradigm, where computation and control flow are kept distinct from application data and from the way in which the application communicates with the outside world. This partitioning of an application allows any one layer to be replaced independently of the other two. In addition, it is possible to simultaneously make use of more than one Data (Model) language, Flow (Controller), and/or Presentation (View) language.

The Data layer. Within VoiceXML 3.0 the Data layer is realized through a pluggable data language and a data access or manipulation language. Access to and use of the data is aligned with options available in SCXML for simpler interaction with the Flow layer (see the next section). This specification defines two specific data languages, XML and ECMAScript, and two data access and manipulation languages, E4X/DOM and XPath. Others may be defined by implementers.

The Flow layer of VoiceXML 3.0 is responsible for all application control flow, including business logic, dialog management, and anything else that is not strictly data or presentation. VoiceXML 3.0 provides primitives that contain the control flow needed to implement them, but all combinations between and among the elements at the syntax level are done via calls to external control flow processors. Two that are likely to be used with VoiceXML are CCXML and SCXML. Note that flow control components written outside of VoiceXML may be communicating not only with a VoiceXML processor but with an HTML browser, a video game controller, or any of a variety of other input and output components.

The Presentation layer of VoiceXML 3.0 is responsible for all interaction with the outside world, i.e., human beings and external software components. VoiceXML 3.0 *is* the Presentation layer. Designed originally for human-computer interaction, VoiceXML "presents" a dialog by accepting audio and DTMF input and producing audio and video output.
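To make the division of labor concrete, here is a minimal, purely illustrative sketch (not normative markup from this specification) of an SCXML Flow layer handing control to a VoiceXML Presentation document. The invoke target type "vxml3", the state names, and the file name are hypothetical placeholders.

<scxml xmlns="http://www.w3.org/2005/07/scxml" initialstate="CollectCity">
  <state id="CollectCity">
    <!-- Presentation layer: a VoiceXML document conducts the actual audio/DTMF dialog -->
    <invoke targettype="vxml3" src="collect-city.vxml"/>
    <!-- Flow layer: when the dialog finishes, decide what happens next -->
    <transition event="done.invoke" target="LookupWeather"/>
  </state>
  <state id="LookupWeather">
    <!-- Data layer: the dialog result is read from the shared data model and used here -->
  </state>
</scxml>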
It is important to note that this model places no requirement on a VoiceXML interpreter to implement its behavior exactly as described in the model. Rather, the requirement is that the behavior must be the same as if it were implemented as described; an implementation is permitted to use optimizations or a different architecture behind its interpretation of the markup.

The event model for VoiceXML 3.0 builds upon the DOM Level 3 Events [DOM3Events] specification. DOM Level 3 Events offers a robust set of interfaces for managing the listener registration, dispatching, propagation, and handling of events, as well as a description of how events flow through an XML tree. The DOM 3.0 event model offers VoiceXML developers a rich set of interfaces that allow them to easily add behavior to their applications. In addition, conforming to the standard DOM event model enables authors to integrate their voice applications into next-generation multimodal or multi-namespaced frameworks such as MMI and CDF with minimal effort. Note that the VXML 2.0 style events are supported through a new DOM event named 'vxmlevent'; if this vxmlevent is uncanceled, then the default action is to run the VXML 2.0 event handling.

Within the VoiceXML 3.0 semantic model, the DOM Level 3 Events APIs are available to all Resource Controllers that have markup elements associated with them. Indeed, this section covers the eventing APIs as available to VoiceXML 3.0 markup elements. The following section describes how the semantic model ties in with the DOM eventing model. All VoiceXML 3.0 markup elements implement interfaces that support the following:

The VoiceXML 3.0 Event interface extends the DOM Level 3 Event interface to support voice-specific event information. In particular, the VoiceXML 3.0 Event interface supports a count integer that stores the number of times a resource emits a particular event type. The semantic model manages the count field by incrementing its value and resetting it as described in the section that follows.

VoiceXML 3.0 markup elements implement the DOM Level 3 EventTarget interface. This interface allows registration and removal of event listeners as well as dispatching of events.

The VoiceXML 3.0 markup elements implement the DOM Level 3 EventListener interface. This interface allows the activation of handlers associated with a particular event. When a listener is activated, the event handler execution is done in the semantic model as described in the section that follows.

The DOM Level 3 Event specification supports the notion of partial ordering using the event listener group; all events within a group are ordered. As such, in VoiceXML 3.0, event listeners are registered as they are encountered in the document. Furthermore, all event listeners registered on an element belong to the same default group. Both of these provisions ensure that event handlers will execute in document order.

An event listener is triggered if: Once an event listener is triggered, the execution is handled by the semantic model as described in the section below. Event propagation blocks until it is notified by the semantic model to proceed.

VoiceXML 3.0 document initialization takes place over two phases: "DOM Processing" and "Preparation for Execution". Both of these phases assume the required resources have already been created. Any errors in the initialization of the document or the creation of these resources MUST be thrown in the calling context.
If that context was a VoiceXML document, then this MUST be an error.badfetch.

Note that while these phases are ordered, and the steps within each phase are ordered, this is only a logical ordering. Implementations are allowed to use a different ordering as long as they behave as if they were following the specified ordering.

The first step in initializing a VoiceXML 3.0 document (root document or child) is generating the Level-3 DOM. This task involves both checking the document for well-formed XML and full schema and syntax validation to ensure proper tag/attribute relationships. Once complete, the interpreter invokes the semantic constructor for the root <vxml> node in the DOM. In this context, the term "semantic constructor" represents whatever mechanism is used to create the Resource Controllers for a given node. No particular implementation is implied or required. The root <vxml> node constructor is responsible for invoking the constructors for all nodes in the document that have them. When it does this, it will call the semantic constructor routine, passing it ...

Note that the initial construction process creates the RCs but does not necessarily fully configure them. Further initialization, including in particular the creation of variables and variable scopes, will happen only when the RCs are activated at runtime (e.g. by visiting a Form). However, at this point the list of children for each element (and thus each RC) is known. For each RC this list of children will be populated into the appropriate place in the RC data model before semantic initialization of the RC.

Once the RCs are constructed, they are independent of the DOM, except for the interactions specified below. However, while they are running the RCs often make use of what appears to be syntactic information. For example, the concept of 'next item' relies heavily on document order, while <goto> can take a specific syntactic label as its target. We provide for this by assuming that RCs can maintain a shadow copy of relevant syntactic information, where "shadow copy" is intended to allow a variety of implementations. In particular, platforms may make an actual copy of the information or may maintain pointers back into the DOM.

The construction process may create multiple RCs for a given node. In that case, one of the RCs will be marked as the primary RC. It is the one that will be invoked when the flow of control reaches that (shadow) node.

If the document being initialized is a child of a root document, then the root document of that child must fully complete its initialization before the child can be prepared. In other words, the root document must both process its DOM and prepare for execution before child initialization proceeds.

Once in the preparation phase, static properties (i.e. those NOT a function of ECMAScript) are available for lookup. Although this isn't an explicit step, it is mentioned here as this is the first opportunity for their retrieval. Note that even if documentmaxage/documentmaxstale properties were to be specified in the child document, they would not be available for retrieval when downloading the root document. Rather, these values would be taken from the system defaults or context. For example, consider the case of a first call into a system which lands on a child document called A. The default values for documentmaxage/documentmaxstale would be used when fetching both the child A and the root document of that child, called A-root.
Should A transition to child document B, which references root B-root, the <property> values of documentmaxage/documentmaxstale in A would be used to fetch B. However, the implicit fetch of B-root would use the system defaults for documentmaxage/documentmaxstale.

With the ability to read <property> values comes the first opportunity to act on any prefetching directives supplied by the application. Prefetching is an optional step, and could be postponed temporarily or indefinitely. The only requirement on a conformant processor is that prefetching cannot take place before this step.

Next, document-level variables and scripts are initialized in document order. Note that conformant processors MUST NOT locally handle any semantic errors generated during this step. Such errors MUST be thrown to the calling document or context (e.g. error.badfetch), because the present document is not yet fully initialized and thus cannot reliably handle errors locally.

The final step in preparation is for the controller to select the first <form> to execute. If either the local controller is malformed or the optional URI fragment points to a non-existent <form>, an error MUST be generated in the calling document or context (e.g. error.badfetch). A conformant processor MUST NOT handle this locally.

After initialization, the semantic control flow does a <goto> to the initial Resource Controller. Once an RC is running, it invokes Resources and other RCs by sending them events. The DOM is not involved in this process. At various points in the processing, however, an RC may decide to raise an author-visible event. It does this by creating an event targeted at a specific DOM node and sending it back to the DOM. When the DOM receives the event, it performs the standard bubble/capture cycle with the target specified in the event. In the course of the bubble/capture cycle, various event handlers may fire. Their execution is a semantic action and occurs back in the semantic 'side' of the environment. The DOM sends messages back to the appropriate semantic objects to cause this to happen. Note that this means that the DOM must store some sort of link to the appropriate RCs. The event handlers may update the data model, execute script, or raise other DOM events. When the handler finishes processing on the semantic side, it sends a notification back to the DOM so that it can resume the bubble/capture phase. (N.B. this notification is NOT a DOM event.) When the DOM finishes the bubble/capture processing of the event, it sends a notification back to the RC that raised the event so that it can continue processing.

A subdialog has a completely separate context from the invoking application. Thus it has a separate DOM and a separate set of RCs. However, it shares the same set of Resources, since they are global. When a subdialog is entered, the Datamodel Resource will have to create a new scope for the subdialog and hide the calling document's scopes. When the subdialog is exited, the Datamodel Resource will destroy the subdialog scope(s) and restore the calling document's scope(s).

To handle event propagation from the leaf application to the application root document, we create a Document Manager to handle all communication between the documents. This means that the DOMs of the two documents remain separate. When an event is not handled in the leaf document, the Document Manager will propagate it to the application root, where it will be targeted at the <vxml> node.
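For illustration only, here is a minimal sketch of the leaf/root arrangement just described, written in the VoiceXML 2.1-style syntax that the legacy profile (7.1) carries forward. The file names, prompt text, and the particular event shown are hypothetical.

<!-- app-root.vxml: the application root catches events the leaf leaves unhandled -->
<vxml version="2.1">
  <catch event="error.badfetch">
    <prompt>Sorry, that document could not be fetched.</prompt>
    <exit/>
  </catch>
</vxml>

<!-- leaf.vxml: an unhandled error.badfetch raised here is propagated by the Document Manager
     to the root document's <vxml> node, where the <catch> above fires -->
<vxml version="2.1" application="app-root.vxml">
  <property name="documentmaxage" value="60"/>
  <form id="main">
    <block>
      <prompt>Welcome.</prompt>
      <goto next="missing.vxml"/>
    </block>
  </form>
</vxml>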
Requests to fetch properties or to activate grammars will be handled by the Document Manager in a similar fashion. To handle platform- and/or language-level defaults, we will create a "super-root" document above the application root. The Document Manager will pass it events and requests that are not handled in the root document. If the root and super-root documents do not handle an event, the Document Manager will ensure that the event is thrown away. There seem to be four kinds of interactions between RCs and the DOM at runtime:

This section describes semantic models for common VoiceXML resources. Resources have a life cycle of creation and destruction. Specific resources may specify detailed requirements on these phases. All resources must be created prior to their use by a VoiceXML interpreter. Each resource is defined in terms of a state model and the events which it processes within defined states. Events may be divided into those which are defined by the resource itself and events defined by other conceptual entities which the resource receives or sends within these states. These conceptual entities include resource controllers and a 'device' which provides an implementation of the services defined by the resource. The semantic model is specified in both UML state chart diagrams and SCXML representations. In case of ambiguity, the SCXML representation takes precedence over the UML diagrams. Note that SCXML is used here to define the states and events for resources, and this definitional usage should not be confused with the use of SCXML to specify application flow (see 3.2 Flow). Furthermore, these resource events are conceptual, not DOM events: they are used to define relationships with other conceptual entities and are not exposed at the markup level. The following resources are defined: data model (5.1 Datamodel Resource), prompt queue (5.2 Prompt Queue Resource), recognition -- DTMF, ASR, and SIV (5.3 Recognition Resources), connection (5.4 Connection Resource), and timer (5.5 Timer Resource).

The datamodel is a repository for both user- and system-defined data and properties. To simplify variable lookup, we define the datamodel with a synchronous function-call API, rather than an asynchronous one based on events. The data model API does not assume any particular underlying representation of the data or any specific access language, thus allowing implementations to plug in different concrete data model languages. There is a single global data model that is created when the system is first initialized. Access to data is controlled by means of scopes, which are stored in a stack. Data is always accessed within a particular scope, which may be specified by name but defaults to being the top scope in the stack. At initialization time, a single scope named "Global" is created. Thereafter scopes are explicitly created and destroyed by the data model's clients.

Here is a UML representation of the prompt queue. This state machine assumes that "queue" and "play" are separate commands and that a separate "play" will always be issued to trigger the play. When the "play" is issued, the system plays any queued prompts, up to and including the first fetch audio in the queue. Then it halts, even if there are additional prompts or fetch audio in the queue, and waits for another "play" command. The prompt structure assumed here is fairly abstract. It consists of a specification of the audio along with optional parameters controlling playback (for example, speed or volume).
The audio may be presented in-line, as SSML or some other markup language, or as a pointer to a file or streaming audio source. Logically, URLs are dereferenced at the time the prompt is queued, but implementations are not required to fetch the actual media until the prompt in question is sent to the player device. Note that the player device is assumed to be able to handle both recorded prompts and TTS, and to be able to interpret SSML. Platforms are free to optimize their implementations as long as they conform to the state machine specified here. In particular, platforms may prefetch audio or begin TTS processing in the background before the prompt is sent to the player device. For applications that make use of VCR controls (speed up, skip forward, etc.), actual performance may depend on whether the platform has implemented such optimizations. For example, a request to skip forward on a platform that does not prefetch prompts may result in a long delay. Such performance issues are outside the scope of this specification. This diagram assumes that SSML mark information is delivered in the Player.Done event, and that the player returns a Player.Done event when it is sent a 'halt' event (otherwise mark information would get lost on barge-in and hangup, etc). Note that the "FetchAudio" state is shown stubbed out for reasons of space, and is expanded in a separate diagram below the main one. Figure X: Prompt Queue Model Figure Y: Fetch audio Model <?xml version="1.0" encoding="UTF-8"?> <scxml initialstate="Created"> <datamodel> <data name="queue"/> <data name="markName"/> <data name="markTime"/> <data name="bargeInType"/> </datamodel> <state id="Created"> <initial id="Idle"/> <transition event="QueuePrompt"> <insert pos="after" loc = "datamodel/data[@name='queue']/prompt" val="_eventData/prompt"/> </transition> <transition event="QueueFetchAudio"> <foreach var="node" nodeset="datamodel/data[@name='queue']/prompt"> <if cond="$node[@fetchAudio='true']"> <delete loc="$node"/> <else> <assign loc="$node[@bargeInType]" val="unbargeable"/> </else> </if> </foreach> <insert pos="after" name="datamodel/data[@name='queue']/prompt" val="_eventData/audio"/> </transition> <transition event="setParameter"> <send target="player" event="setParameter" namelist="_eventData.paramName, _eventData.newValue"/> </transition> <transition event="Cancel" target="Idle"> <send target="player" event="halt"/> <send event="PlayDone" namelist="/datamodel/data[@name='markName'].text(), /datamodel/data[@name='markTime'].text()"/> <delete loc="datamodel/data[@name='queue']/prompt"/> </transition> <transition event="CancelFetchAudio"> <foreach var="node" nodeset="datamodel/data[@name='queue']/prompt"> <if cond="$node[@fetchAudio='true']"> <delete loc="$node"/> </if> </foreach> </transition> <state id="Idle"> <onentry> <assign loc="/datamodel/data[@name='markName']" val=""/> <assign loc="/datamodel/data[@name='markTime']" val="-1"/> <assign loc="/datamodel/data[@name='bargeInType']" val=""/> </onentry> <transition event="Play" cond="/datamodel/data[@name='queue']/prompt[1][@fetchAudio] eq 'false'" target="PlayingPrompt"/> <transition event="Play" cond="/datamodel/data[@name='[queue']/prompt[1][@fetchAudio] eq 'true'" target="FetchAudio"/> </state> <state id="PlayingPrompt"> <datamodel> <data name="currentPrompt"/> </datamodel> <onentry> <assign loc="/datamodel/data[@name='currentPrompt']/prompt" val="/datamodel/data[@name='queue']/prompt[1])"/> <delete loc="/datamodel/data[@name='queue']/prompt[1]"/> <if 
cond="/datamodel/data[@name='currentPrompt']/prompt[@bargeInType] != /datamodel/data[@name='bargeInType']"> <send event="BargeInChange" namelist="/datamodel/ data[@name='currentPrompt']/prompt[@bargeInType]"/> <assign loc="/datamodel/data[@name='bargeInType']" expr="/ datamodel/data[@name='currentPrompt']/prompt[@bargeInType]"/> </if> </onentry> <invoke targettype="player" srcexpr="/datamodel/ data[@name='currentPrompt']/prompt"/> <finalize> <if cond="_eventData/MarkTime neq '-1'"> <assign name="/datamodel/data[@name='markName']/" val="_eventData/markName.text()"/> <assign name="/datamodel/data[@name='markTime']/" val="_eventData/markTime.text()"/> </if> </finalize> <transition event="player.Done" cond="/datamodel/data[@name='queue']/prompt[last()] le '1'" target="Idle"> <send event="PlayDone" namelist="/datamodel/data[@name='markName'].text(), /datamodel/data[@name='markTime'].text()"/> </transition> <transition event="player.Done" cond="/datamodel/data[@name='queue'/prompt[1][@fetchAudio] neq 'true'" target="PlayingPrompt"/> <transition event="player.Done" cond="/datamodel/data[@name='queue']/prompt[1][@fetchAudio] eq 'true'" target="FetchAudio"/> </state> <!-- end PlayingPrompt --> <state id="FetchAudio"> <initial id="WaitFetchAudio"/> <transition event="player.Done" target="FetchAudioFinal"/> <state id="WaitFetchAudio"> <onentry> <send target="self" event="fetchAudioDelay" delay="/datamodel/data[@name='queue']/prompts[1][@fetchaudiodelay]"/> </onentry> <transition event="fetchAudioDelay" next="StartFetchAudio"/> <transition event="cancelFetchAudio" next="FetchAudioFinal"/> </state> <state id="StartFetchAudio"> <datamodel> <data name="fetchAudio"/> </datamodel> <onentry> <assign loc="/datamodel/data[@name='fetchAudio']" expr="/datamodel/data[@name='queue']/prompts[1]"/> <delete loc="/datamodel/data[@name='queue']/prompts[1]"/> <send target="self" event="fetchAudioMin" delay="/datamodel/data[@name='fetchAudio'][@fetchaudiominimum]"/> <send target="player" event="Play" namelist="/datamodel/data[@name='fetchAudio']"/> <if cond="/datamodel/data[@name='bargeInType'].text() ne 'fetchAudio'"> <send event="BargeInChange" namelist="fetchAudio"/> </if> </onentry> <transition event="CancelFetchAudio" target="WaitFetchMinimum"/> <transition event="fetchAudioMin" target="WaitFetchCancel"/> </state> <state id="WaitFetchMinimum"> <transition event="fetchAudioMin" target="FetchAudioFinal"> <send target="player" event="halt"/> </transition> </state> <state id="WaitFetchCancel"> <transition event="CancelFetchAudio" target="FetchAudioFinal"> <send target="player" event="halt"/> </transition> </state> <state id="FetchAudioFinal" final="true" /> <!-- could put cleanup handling here --> </state> <!-- end FetchAudio --> </state> <!-- end Created --> </scxml> The prompt queue resource can be controlled by means of the following events: The prompt queue resource returns the following events to its invoker: Issue (): Do we need 'fetchAudio' as a distinct bargein type? Resolution: None recorded. The prompt queue receives the following events from the underlying player: and sends the following events to the underlying device: Three types of recognition resources are defined: DTMF recognition for recognition of DTMF input, ASR recognition for recognition of speech input, and SIV for speaker identification and verification. Each recognition resource is associated with a device which implements their respective recognition services. Each device represents one or more actual recognizer instances. 
In case of a device implemented with multiple recognizers - for example two different speech recognition engines - it is the responsibility of the interpreter implementation to ensure that they adhere to the semantic model defined in this section. DTMF and ASR recognition resources and SIV resources are semantically similar. They share the same state and eventing model as well as recognition processing, timing and result handling. However, the resources differ in the following respects: Otherwise, these resources share the same semantic model. If a resource controller activates both DTMF and ASR recognition resources, then that resource controller is responsible for managing the resources so that only a single recognition result is produced per recognition cycle. If a resource controller activates ASR and SIV resources, it may produce multiple results timed to provide the results within the same cycle or independently. The recognition resource works as follows: in its created state, grammars (or a voice model) are added to the resource and subsequently prepared on the device. Recognition with these grammars (or voice model) can be activated and suspended, and recognition results are returned. When the recognition resource is ready to recognize (at least one active grammar and/or voice model), one or more recognition cycles may occur in sequence. Thus a recognition resource may enter multiple recognition cycles (as required for 'hotword' recognition), while requiring that a device, even if it has multiple instantiations, only produces one set of recognition results per recognition cycle. The recognition resource is defined in terms of a data model and state model. The data model is composed of the following elements: The state model is composed of states corresponding to functional state: idle, preparing grammars / preparing voice model, ready to recognize, recognizing, suspended recognition and waiting for results. In the idle state, the resource awaits events from resource controllers to activate grammars or a voice model for recognition on the device. The data model - activeGrammars or activeVoiceModel, properties, controller and mode - is (re-)initialized upon entry to this state: activeGrammars and activeVoiceModel are cleared, properties and controllers are set to null. If the resource receives an 'addGrammar' event, a new item is added to activeGrammars using grammar, properties and listener data in the event payload. If the resource receives a 'prepare' event, it updates its data model with event data: 'properties' with the properties event data and 'controller' is updated with the controller event data. Subsequent event notifications and responses are sent to the resource controller identified as the 'controller'. The recognition resource then moves into the preparing grammars (or preparing voice model) state. In the preparing grammars state, the resource behavior depends on whether activeGrammars is empty or not. If activeGrammars is empty (i.e. no active grammars are defined for this recognition resource), the resource sends the controller a 'notPrepared' event and returns to the idle state. If activeGrammar is non-empty, the resource sends a 'prepare' event to the device. The event payload includes 'grammars' and 'properties' parameters. The 'grammars' value is an ordered list where each list item is a grammar's content and its properties extracted from activeGrammars. The order of grammars in the 'grammars' parameter must follow the order in the activeGrammar data model. 
If the device sends a 'prepared' event, the resource sends a 'prepared' event to the controller and transitions into the ready to recognize state. In the preparing voice models state, the resource behavior depends on whether activeVoiceModel is empty or not. If activeVoiceModel is empty (i.e. voice model is not defined for this resource), the resource sends the controller a 'notPrepared' event and returns to the idle state. If activeVoiceModel is non-empty, the resource sends a 'prepare' event to the device. The event payload includes 'voicemodel' and 'properties' parameters. The 'voicemodel' value is a URI to the voicemodel, and its properties are extracted from activeVoiceModel. If the device sends a 'prepared' event, the resource sends a 'prepared' event to the controller and transitions into the ready to recognize state. When the recognition resource is in a ready to recognize state, it may receive a 'stop' event. In this case, the resource sends a 'stop' event to the device, and returns to the idle state. If the resource receives a 'listen' event, it sends a 'listen' event to the device and moves into the recognizing state. When the resource is in a recognizing state, it can toggle between this state and a suspended recognizing state. If the resource receives a 'suspend' event, then it moves into the suspended recognizing state and sends the device a 'suspend' event which causes the device to suspend recognition and delete any buffered input. No input is buffered while the device is in a suspended state. If the resource then receives a 'listen' event, it moves back into the recognizing state. When in the recognizing state, the resource may receive an 'inputStarted' event from the device, indicating that user input has been detected. The resource then moves into a waiting for results state. The device may send an 'error' event (for example, if maximum time has been exceeded) causing it to return to the idle state and send the controller an 'error' event. Alternatively, the device may send a 'recoResults' event, which contains a results parameter, a data structure representing recognition results. In the case of DTMF or ASR, the results can be in VoiceXML 2.0 or EMMA format. For SIV, the results must be in EMMA format. The structure may contain zero or more recognition results. Each result must specify the grammar (or voicemodel) associated with the recognition (using the same grammar/voicemodel name as used in the payload of the 'prepare' event), its recognition confidence and its input mode. The resource sends its controller a 'recoResults' event with event data containing the device's results parameter together with a listener parameter whose value is the listener associated with the grammar of the first result with the highest confidence (if there are no results, then the listener parameter is not defined). The resource then returns to the ready to recognize state, awaiting either a 'stop' event to terminate recognition or a 'listen' event to start another recognition cycle using the same active grammars and recognition properties. A recognition resource is defined by the events it receives: and the events it sends: The resource receives from the recognition device the following events: and sends to the recognition device the following events: The state model for an ASR recognition resource are shown in Figure. The timer resource is a resource that tracks timers for various resource controllers. A timer can be set to send a timeout event at some future time. 
Timers which have been set may also be canceled. A timer resource is defined by the events it receives: and the events it sends: It is possible to receive both a timerExpired event and then a cancelSuccess event, as the events may have crossed paths. The resource receives from the timer device the following events: and sends to the timer device the following events:

In VoiceXML 3.0, the language is partitioned into independent modules which can be combined in various ways. In addition to the modules defined in this section, it is also possible for third parties to define their own modules (see Section XXX). Each module is assigned a schema, which defines its syntax, plus one or more Resource Controllers (RCs), which define its semantics, plus a "constructor" that knows how to create them from the syntactic representation at initialization time. Only DOM nodes that have schemas and constructors (and hence RCs) assigned to them can be modules in VoiceXML 3.0. However, we may choose to define constructors and RCs for nodes that are not modules. Nodes that do not have constructors and RCs ultimately depend on some module for their interpretation. (Those modules are usually ancestor nodes, but we do not require this.) There can be multiple modules associated with the same VoiceXML element. They may set properties differently, add different child elements, etc. In many cases, some of the modules will be extensions of the others, but we don't require this. Note there is not necessarily a one-to-one relationship between semantic RCs and syntactic markup elements. It may take several RCs to implement the functionality of a single markup element.

This module describes the syntactic and semantic features of a <grammar> element which defines grammars used in ASR and DTMF recognition. Grammars defined via this module are used by other modules. The attributes and content model of <grammar> are specified in 6.1.1 Syntax. Its semantics are specified in 6.1.2 Semantics. [See XXX for schema definitions]. The content model of <grammar> consists of exactly one of:

The grammar RC is the primary RC for the <grammar> element. The grammar RC is defined in terms of a data model and state model. The data model is composed of the following parameters:

The grammar RC first initializes its state. In the Ready state, when the grammar RC receives an 'execute' event it transitions to the Executing state. In the Executing state, if the child RC is an External Grammar, the grammar RC sends an 'execute' event to the child RC and waits for it to complete. Then, the grammar RC sends an AddGrammar event to the DTMF Recognizer Resource if mode="dtmf" or to the ASR Recognizer Resource if mode="voice", with the following as event data: the child RC, the fetchhint, language, charset, and encoding parameter values, and the controller RC (e.g., link, field, or form) as the handler for recognition results. Finally, the grammar RC sends the controller an 'executed' event and transitions to the Ready state.

The Grammar RC is defined to receive the following events: and the events it sends: The external events sent and received by the Grammar RC are those defined in this table: The events in this table may be raised during initialization and execution of the <grammar> element. Note that additional errors may occur when the grammar is fetched or added by the ASR or DTMF resource. Please check there for details.

This module describes the syntactic and semantic features of inline SRGS grammars used in ASR and DTMF recognition.
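As a concrete, non-normative illustration of the <grammar> element (6.1), its inline SRGS content (6.2), and an external grammar child (6.3), consider the following sketch. The exact nesting is an assumption based on the content model described above, the attributes shown are the familiar VoiceXML 2.1/SRGS ones, and the grammar file name is a hypothetical placeholder.

<grammar mode="voice">
  <!-- inline content: a stand-alone SRGS XML grammar, minus the XML prolog, in the SRGS namespace -->
  <grammar xmlns="http://www.w3.org/2001/06/grammar" version="1.0" xml:lang="en-US" root="city">
    <rule id="city" scope="public">
      <one-of>
        <item>Boston</item>
        <item>Denver</item>
        <item>Seattle</item>
      </one-of>
    </rule>
  </grammar>
</grammar>

<!-- the same grammar referenced externally via the <externalgrammar> child described in 6.3 -->
<grammar mode="voice">
  <externalgrammar src="city.grxml"/>
</grammar>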
The attributes and content model of Inline SRGS grammars are specified in 6.2.1 Syntax. Their semantics are specified in 6.2.2 Semantics. [See XXX for schema definitions]. The syntax of the Inline SRGS Grammar Module is precisely all of the XML markup for a legal stand-alone XML form grammar as described in SRGS ([SRGS]), minus the XML Prolog. Note that both elements and attributes must be in the SRGS namespace (http://www.w3.org/2001/06/grammar).

The Inline SRGS grammar RC is defined in terms of a data model and state model. The data model is composed of the following parameters: The grammar RC's state model consists of the following states: Idle, Initializing, and Ready. Unlike most of the other modules, this module is primarily a data model for storing a grammar. The module itself has no execution semantics. While in the Idle state, the RC may receive an 'initialize' event, whose 'controller' event data is used to update the data model. The RC then transitions into the Initializing state. In the Initializing state, the syntactic contents of the grammar are saved into the grammar parameter. The RC sends the controller an 'initialized' event and transitions to the Ready state.

The Inline SRGS Grammar RC is defined to receive the following events: and the events it sends: The Inline SRGS Grammar Module does not send or receive any external events. No module-specific events are raised during initialization of an Inline SRGS Grammar. Note that a validity failure of the inline SRGS content would be detected at document parse time.

This module describes the syntactic and semantic features of an <externalgrammar> element which defines external grammars used in ASR and DTMF recognition. The attributes and content model of <externalgrammar> are specified in 6.3.1 Syntax. Its semantics are specified in 6.3.2 Semantics. [See XXX for schema definitions]. The <externalgrammar> element has the attributes specified in Table 23. See 6.3.1.2 Content Model for restrictions on the occurrence of the src and srcexpr attributes. The <externalgrammar> element has the following co-occurrence constraints:

The External Grammar RC is defined in terms of a data model and state model. The data model is composed of the following parameters: After initialization, the External Grammar RC sends the controller an 'initialized' event and transitions to the Ready state. In the Ready state, when the External Grammar RC receives an 'execute' event it transitions to the Executing state. In the Executing state, if the srcexpr variable is set it is evaluated against the data model as a data model expression, and the value is placed into the src variable; if srcexpr cannot be evaluated, an error.semantic event is thrown. Otherwise, the RC sends an 'executed' event to the controller RC and transitions into the Ready state.

The External Grammar RC is defined to receive the following events: and the events it sends: The events that may be raised during initialization and execution of the <externalgrammar> element are those defined in Table 27 below.

This module defines the syntactic and semantic features of a <prompt> element which controls media output. The content model of this element is empty: content is defined in other modules which extend this element's content model (for example 6.5 Builtin SSML Module, 6.6 Media Module and 6.7 Parseq Module). The attributes and content model of <prompt> are specified in 6.4.1 Syntax.
Its semantics are specified in 6.4.2 Semantics, including how the final prompt content is determined and how the prompt is queued for playback using the PromptQueue Resource (5.2 Prompt Queue Resource). [See XXX for schema definitions].

The prompt RC is the primary RC for the <prompt> element. The prompt RC is defined in terms of a data model and state model. The data model is composed of the following parameters: The prompt RC's state model consists of the following states: Idle, Initializing, Ready, FormReady, and Executing. The initial state is the Idle state.

While in the Idle state, the prompt RC may receive an 'initialize' event, whose controller event data is used to update the data model. The prompt RC then transitions into the Initializing state. In the Initializing state, the prompt RC initializes its children: this is modeled as a separate RC (see XXX). The children may return an error for initialization. If a child sends an error, then the prompt RC returns an error. When all children are initialized, the prompt RC sends the controller an 'initialized' event and transitions to the Ready state.

In the Ready state, the prompt RC can receive a 'checkStatus' event to check whether this prompt is eligible for execution or not. The value of the cond parameter in its data model is checked against the data model resource: the status is true if the value of the cond parameter evaluates to true. The status, together with its count data, is sent in a 'checkedStatus' event to the controller RC. The controller RC then determines if the prompt is selected for execution ([vxml20: 4.1.6], see PromptSelectionRC, Section XXX). When the prompt RC receives an 'execute' event, it transitions to the Executing state.

In the Executing state, the prompt RC sends an 'evaluate' event to its children. Each child returns either an error, or content (which may include parameters) for playback. If a child sends an error, then the prompt RC returns an error. Once evaluation is complete, the RC sends a queuePrompt event to the Prompt Queue Resource with the <prompt> parameters (bargein, bargeintype, timeout) and with event data consisting of the list of content returned by its children. The prompt RC then sends the controller an 'executed' event and transitions to the Ready state.

The Prompt RC is defined to receive the following events: and the events it sends: The events defined in Tables 31 and 32 may be raised during initialization and execution of the <prompt> element.

This module describes the syntactic and semantic features of SSML elements built into VoiceXML. This module is designed to extend the content model of the <prompt> element defined in 6.4 Prompt Module. The attributes and content model of SSML elements are specified in 6.5.1 Syntax. Its semantics are specified in 6.5.2 Semantics, including how elements are evaluated to yield final content for playback. [See XXX for schema definitions]. This module defines an SSML ([SSML]) Conforming Speech Synthesis Markup Language Fragment where: Exactly one of the "src" or "expr" attributes must be specified; otherwise, an error.badfetch event is thrown. When the RC receives an 'evaluate' event, its children are evaluated in order to return an SSML Conforming Stand-Alone Speech Synthesis Markup Language Document which can be processed by a Conforming Speech Synthesis Markup Language Processor.
Evaluation comprises of: In this example <prompt> <foreach item="item" array="array"> <audio expr="item.audio"><value expr="item.tts"/></audio> <break time="300ms"/> </foreach> </prompt> evaluation returns a sequence of content for each item in <foreach> with <audio> and <value> elements. Assume that the array consists of 2 items where each item.audio evaluates to 'one.wav' and 'two.wav' respectively, and each item.tts evaluates to 'one' and 'two' respectively. Evaluation of <foreach> is equivalent to the following <prompt> <audio expr="'one.wav'"><value expr="'one'"/></audio> <break time="300ms"/> <audio expr="'two.wav'"><value expr="'two'"/></audio> <break time="300ms"/> </prompt> further evaluation of the <audio> and <value> elements result in <prompt> <audio src="one.wav">one</audio> <break time="300ms"/> <audio src="two.wav">two</audio> <break time="300ms"/> </prompt> and finally the prompt content is converted into a stand-alone SSML document (assuming the <prompt>'s xml:lang attribute evaluates to 'en'): <speak version="1.0" xml: <audio src="one.wav">one</audio> <break time="300ms"/> <audio src="two.wav">two</audio> <break time="300ms"/> </speak> This content is queued and played using the PromptQueue: each audio URI, or fallback content, is played, followed by a 300 millisecond break. The media module defines the syntax and semantics of <media> element. The module is designed to extend the content model of <prompt> in the prompt module (6.4 Prompt Module). The <media> element can be seen as an enhanced and generalized version of the VoiceXML <audio> element. It is enhanced in that it provides additional attributes describing the type of media, conditional selection, as well as control over playback . It is a generalization of the <audio> element in that it permits media other than audio to be played; for example, media formats which contains audio and video tracks. [See XXX for schema definitions]. The <media> element has the attributes specified in Table 34. See occurrence constraints for restrictions on occurrence of src and srcexpr attributes. Calculations of rendered durations and interaction with other timing properties follow SMIL 2.1 Computing the active duration where Note that not all SMIL 2.1 Timing features are supported. The <media> element content model consists of: The <media> has the following co-occurrence constraints: Note that the type attribute does not affect inline content. The handling of inline XML content is in accordance to the namespace of the root element (such as SSML <speak>, SMIL <smil>, and so forth). CDATA, or mixed content with VoiceXML <foreach> or <value> elements must be treated as an SSML Fragment and evaluated as described in 6.6.2 Semantics. Developers should be aware that there may be performance implications when using <media> depending on which attributes are specified, the media itself, its transport and processing. Since operations like trimming, soundLevel and speed modifications are applied to media, this requires that the SSML processor begins generating output audio before these operations are applied. If the clipBegin attribute is specified, this may required SSML generation of audio prior to clipBegin, depending on the implementation. This may lead to a gap between execution of the <media> element and start of playback. 
If the media is fetched with the HTTP protocol and the clipBegin attribute is specified, then, unless the resource is cached locally, the part of the media resource before clipBegin will still be fetched from the origin server. This may result in a gap between the execution of the <media> element and playback actually beginning. Note also that if <media> uses the RTSP protocol, and the VoiceXML platform supports this protocol, then the clipBegin attribute value may be mapped to the RTSP Range header field, thereby reducing the gap between element execution and the onset of playback. When a media RC receives an evaluate event, the following operations are performed: The resulting media resource is returned together with resolved media operation properties (clipBegin, clipEnd, soundLevel, speed, outputmodes). Playback of external audio media resource. <media type="audio/x-wav" src=""/> Application of media operations to audio resource. The soundLevel increases the volume by approximately 50% and the speed is reduced to 50%. <media type="audio/x-wav" soundLevel="+6.0dB" speed="50%" src=""/> Playback of 3GPP media resource. <media type="video/3gpp" src=""/> Playback of 3GPP media resource with the speed doubled and playback ending after 5 seconds. <media type="video/3gpp" clipEnd="5s" speed="200%" src=""/> Playback of external SSML document. <media type="application/ssml+xml" src=""/> Inline CDATA content with a <value> element <media> Ich bin ein Berliner, said <value expr="speaker"/> </media> which is syntactically equivalent to <media> <speak version="1.0" xmlns=""> Ich bin ein Berliner, said <value expr="speaker"/> </speak> </media> Inline SSML content to which gain and clipping operations are applied. <media soundLevel="+4.0dB" clipBegin="4s"> <speak version="1.0" xmlns=""> Ich bin ein Berliner. </speak> </media> Inline SSML with audio media fallback. <media soundLevel="+4.0dB" clipBegin="4s"> <speak version="1.0" xmlns=""> Ich bin ein Berliner. </speak> <media type="audio/x-wav" src="ichbineinberliner.wav"/> </media> This module defines the syntax and semantics of the <par> and <seq> elements. The <par> element specifies playback of media in parallel, while <seq> specifies playback in sequence. The module is designed to extend the content model of the <prompt> element (6.4 Prompt Module). This module is dependent upon the media module (6.6 Media Module). With connections which support multiple media streams, it is possible to play back multiple media types simultaneously. For media container formats like 3GPP, audio and video media can be generated simultaneously from the same media resource. There are established use cases for simultaneous playback of multiple media which are specified in separate resources: The intention is to provide support for basic use cases where audio or TTS output from one resource can be complemented with output from another resource as permitted by the connection and platform capabilities. The <par> element is derived from the SMIL <par> element, a time container for parallel output of media resources. Media elements (or containers) within a <par> element are played back in parallel. The <par> element has the attributes specified in Table 35. The content model of <par> consists of: The <seq> element is derived from the SMIL <seq> element, a time container for sequential output of media resources. Media elements within a <seq> element are played back in sequence. No attributes are defined for <seq>.
The content model of <seq> consists of: This module requires a PromptQueue resource which support playback of parallel and sequential media. The following defines its playback completion, termination and error handling. Completion of playback of the <par> element is determined according to the value of its endsync attribute. For instance, assume a <par> element containing <media> (or <seq>) elements A and B, and that B finishes before A. If endsync has the value first, then completion is reported upon B's completion. If endsync has the value last, then completion is reported upon A's completion. Completion of playback of the <seq> element occurs when the last <media> is complete. If the <par> element playback is terminated, then playback of its <media> and <seq> children is terminated. Likewise, if the <seq> element playback is terminated, then playback of its (active) <media> elements is terminated. If mark information is provided by <media> elements (for example with SSML), then, the mark information associated with last element played in sequence or parallel is exposed as described in XXX. Error handling policy is inherited from the element in which <par> and <seq> element are children. For instance if the policy is to ignore errors, then the following applies: If the policy is to terminate playback and report the error, then the any error causes immediate termination of any playback and the error is reported. If execution of the <par> and <seq> elements requires media capabilities which are not supported by the platform or the connection, or there is an error fetching or playing any <media> element within <par> or <seq>, then error handling follows the defined policy. video avatar with audio commentary. Note the use of the outputmodes attributes of <media> to ensure that only video is played. <par> <media type="audio/x-wav" src="commentary.wav"/> <media type="video/3gpp" src="avatar.3gp" outputmodes="video"/> </par> video avatar with a sequence of audio and TTS commentary. <par> <seq> <media type="audio/x-wav" src="intro.wav"/> <media type="application/ssml+xml" src="commentary.ssml"/> </seq> <media type="video/3gpp" src="avatar.3gp" outputmodes="video"/> </par> This module describes the syntactic and semantic features of the <foreach> element. This module is designed to extend the content model of an element in another module. For example, SSML elements in the 6.5 Builtin SSML Module, the <prompt> element defined in 6.4 Prompt Module, etc. The attributes and content model of the element are specified in 6.8.1 Syntax. Its semantics are specified in 6.8.2 Semantics. [See XXX for schema definitions]. The <foreach> element has the attributes specified in Table 36.. Undefined array items are ignored. VoiceXML 3.0 does not provide break functionality to interrupt a <foreach>. When the RC receives an evaluate event, the RC loops through the array to produce an evaluated content for each item in the array. The vxml21 profile defines the content model for the <foreach> element so that it may appear in as part Builtin SSML content, it may contain only those elements valid within <enumerate> (i.e. the same elements allowed within <prompt> less <meta>, <metadata>, and <lexicon>); this allows for sophisticated concatenation of prompts. In this example using Builtin SSML, each item in the array has an audio property with a URI value, and a tts property with SSML content. 
The element loops through the array, playing the audio URI or the SSML content as fallback, with a 300 millisecond break between each iteration. <prompt> <foreach item="item" array="array"> <audio expr="item.audio"><value expr="item.tts"/></audio> <break time="300ms"/> </foreach> </prompt> In the mediaserver profile, <foreach> may occurs within <prompt> elements and has the content model of 0 or more <media> elements. Play each media resource in the array. <foreach item="item" array="array"> <media type="audio/x-wav" src="item.audio"/> </foreach> Play each media resource in the array. <foreach item="item" array="array"> <media type="audio/x-wav" src="item.wav"> <media type="application/ssml+xml"> <speak version="1.0" xmlns=""> <value expr="item.tts"/> <break time=300ms"/> </speak> </media> </media> </foreach> Forms are the key component of VoiceXML documents. A form contains: The Form RC is the primary RC for the <form> element. The Form RC interacts with resource controllers of other modules so as to provide the behavior of VoiceXML 2.1/2.0 <form> tag. Input and control form items are modeled as resource controllers: for the example, the <field> RC (6.10.2.1 Field RC) of the Field Module. The behavior of the Form RC follows the VoiceXML FIA, although some aspects of this are not modeled directly in this RC: external transition handling is not part of the form RC; input items used separate RCs to manage coordination between media resources, while recognition results can be received directly by form, field or other RCs. [This initial version does not address all aspects of FIA behavior; for example, event handling, error handling and external transitions are not covered.] The form RC is defined in terms of a data model and state model. The data model is composed of the following parameters: The form RC's state model consists of the following states: Idle, Initializing, Ready, SelectingItem, PreparingItem, PreparingFormGrammars, PreparingOtherGrammars, Executing, Active, ProcessingFormResult, Evaluating and Exit. In the Idle state, the form RC can receive an 'initialize' event whose 'controller' event data is used to update the data model. The RC then transitions into Initiating state. In the Initializing state, the RC creates a dialog scope in the Datamodel Resource and then initializes its children: this is modeled as a separate RC. When all children are initialized, the RC sends an 'initialized' event to its controller and transitions to the Ready state. In the Ready state, the form RC sets its active status to false. It can receive one of two events: 'prepareGrammars' or ‘execute’. ‘prepareGrammars’ indicates that another form is active, but this form's form-level grammars may be activated; an 'execute' event indicates that this form is active. If the RC receives a 'prepareGrammars' event, it transitions to the PreparingFormGrammars state. If the RC receives an 'execute' event, it sets its active data to true and transitions to the 'SelectingItem' state. In the SelectingItem state, the RC determines which form item to select as the active item. This is defined by a FormItemSelection RC which iterates over the children sending each a 'checkStatus' event. If a child returns a true status (indicating that it ready for execution)), the activeItem is set to this child RC and the RC transitions to the PreparingItem state. If no child returns this status, then the RC is complete and transitions the Exit State. 
In the PreparingItem state, the activeItem is sent a 'prepare' event causing it to prepare itself; for example, the field RC prepares its prompts and grammars for execution. When the activeItem returns a 'prepared' event, the event data indicates whether the item is modal or not. If the item is modal, then the form RC transitions to the Executing state. If the item is not modal (other grammars can be activated), then the form RC transitions to the PreparingFormGrammars state. In the PreparingFormGrammars state, the RC prepares form-level grammars. This is defined by a separate RC which iterates through and executes grammar children. When this is complete, the RC transitions to the Active state if the form is not active (active data), and transitions to the PreparingOtherGrammars if the form is active. In the PreparingOtherGrammars states, the RC sends a 'prepareGrammars' event to its controller RC (which in turn sends the event to appropriate form, document and application level RCs with grammars). When its receives a 'prepared' from its controller, the RC transitions to the Executing state. In the Executing state, the form RC sends an 'execute' event to the active form item. If the form item is a field, then this will causes prompts to be played and recognition to take place. The RC then transitions to the Active state awaiting a result. In the Active state, the RC re-initializes the justFilled data to a new array and waits for a recognition results (as active or non-active form), or for a signal from its selected form item that it has received the recognition result. Recognition results are divided into two types: form item level results, received and processed by the form item; and form level results which are received by the form RC which caused the grammar to be added. If a 'recoResult' event is received by the form RC, the RC transitions into the ProcessingFormResult state. If the active form item receives the recognition result (and locally updated itself), then the form RC receives a 'formItemResult' event, adds the active item to the justFilled array, and transitions into the Evaluating state. In the ProcessingFormResult state, the recognition result is processed by iterating through the form item children, obtaining their name and slotname, and then attempting to match the slotname to the results. If the match is successful, the name variable in the data model result is updated with the value from the recognition result and the child is added to the justFilled data array. When this process is complete, the form RC transitions to the Evaluating state. In the Evaluating state, the form RC then iterates through its children and if a child is a member of the 'JustFilled' array, it sends a 'evaluate' event to the form item RC causing the appropriate filled RCs to be executed. If the child is a filled RC, then it is executed if appropriate. When evaluation is complete, the form RC transitions to the 'selectformitem' state so that the next form item can be selected for execution. The following table shows the events sent and received by the form RC to resources and other RCs which define the events. <> The semantics of field elements are defined using the following resource controllers: Field (6.10.2.1 Field RC), PlayandRecognize (6.10.2.2 PlayandRecognize RC), ... The Field Resource Controller is the primary RC for the field element. The field RC is defined in terms of a data model and state model. 
The data model is composed of the following parameters: The field RC's state model consists of the following states: Idle, Initializing, Ready, Preparing, Prepared, Executing and Evaluating. While in the Idle state, the RC may receive an 'initialize' event, whose 'controller' event data is used to update the data model. The RC then transitions into Initiating state. In the Initializing state, the RC creates a variable in the Datamodel Resource: the variable name corresponds to the name in the RC's data model, and the variable value is set to the value of the RC's data model expr, if this is defined. The field RC then initializes its children: this is modeled as a separate RC (see XXX). When all children are initialized, the RC transitions to the Ready state. In the Ready state, the field RC can receive an 'checkStatus' event to check whether it can be executed or not. The value of name and cond in its data model are checked: the status is true if the name is undefined and the value of cond evaluates to true. The status is returned in a 'checkedStatus' event sent back to the controller RC. If the RC receives a 'prepare' event, it updates includePrompts in its data model using the event data, and transitions to the Preparing state. In the Preparing state, the field prepares its prompts and grammars. Prompts are prepared only if the includePrompts data is true; otherwise, prompts within the field are not prepared (e.g. field prompts aren't queued following a <reprompt>). Preparation of prompts is modeled as a separate RC (see XXX), as is preparation of grammars (see YYY). These RCs are summarized below. Prompts are prepared by iterating through the children array. In the iteration, each prompt RC child is sent a 'checkStatus' event. If the prompt child returns true (its cond parameter evaluates to true), then it is added to a 'correct count' list together with its count. Once the iteration is complete, the RC determines the highest count on the 'correct count' list: the highest count among those on the list less than or equal to the current count value. All child on the 'correct count' list whose count is not the highest count are removed. The RC then iterates through the 'correct count' list and sends an 'execute' event to each prompt RC, causing it to be queued on the PromptQueue Resource. Grammars are prepared by recursing through the children array and sending each grammar RC child an 'execute' event. The grammar RC then, if appropriate, sends an 'addGrammar' event to the DTMF or ASR Recognizer Resource where the grammar itself, its properties and the field RC is sent as the handler for recognition results. When prompts and grammars have been prepared, the prompt counter is incremented and the field RC sends a 'prepared' event to its controller with event data indicating its modal status and then transition into the Prepared state. In the Prepared state, the field RC may receive an 'execute' event from its controller. The RC sends an 'execute' event to the PlayAndRecognize RC (6.10.2.2 PlayandRecognize RC), causing any queued prompts to be played and recognition to be initiated. In the event data, the controller is set to this RC, and other data is derived from data model properties. The RC transitions to the Executing state. In the Executing state, the PlayAndRecognize RC must send recoResults (or error events: noinput, nomatch, error.semantic) to the field RC. If the field RC receives the recoResults, then it updates its name variable in the Datamodel Resource. 
The field RC then sends a 'fieldResult' event to its controller indicating that a field result has been received and processed. If the recoResult is received by the field RC's controller, then the field receives an 'evaluate' event which causes it to transition to the Evaluating state. In the Evaluating state, the field RC iterates through its children executing each filled RC: this is modeled by a separate RC (see XXX). When evaluation is complete, the RC sends a 'evaluated' event to its controller and transitions to the Ready state. The Field RC is defined to receive the following events: and the events it sends: Table 44> The PlayandRecognize RC coordinates media input with Recognizer resources and media output with the PromptQueue Resource. The following use cases are covered: The PlayandRecognize RC coordinates media input with recognition resources and media output with the PromptQueue Resource on behalf of a form item. This RC activates prompt queue playback, activates recognition resources, manages bargein behavior and handles results from recognition resources. The RC is defined in terms of a data model and a state model. The data model is composed of the following parameters: The RC model consists of the following states: idle, prepare recognition resources, start playing, playing prompts with bargein, playing prompts without bargein, recognizing with a timer, waiting for input, waiting for speech result and update results. The complexity of this model is partially a consequence of supporting the relationship between hotword bargein and recognition result processing. While in the idle state, the RC may receive an 'execute' event, whose event data is used to update the data model. The event information includes: controller, inputmodes, inputtimeout, dtmfProps, asrProps and maxnbest. The RC transition to the prepare recognition resources state. In the prepare recognition resources, the RC sends 'prepare' events to the ASR and DTMF recognition resource. Both events specify this RC as the controller parameter, while the properties parameter differs. In this state, the RC can received 'prepared' or 'notPrepared' events from either recognition resources. If neither resource returns a 'prepared' event, then activeGrammars is false (i.e. no active DTMF or speech grammar) and the RC sends an 'error.semantic' event to the controller and exits. If at least one resource returns a 'prepared' event, then the RC moves into the start playing state. The start playing state begins by sending the PromptQueue resource a 'play' event. The PromptQueue responds with a 'playDone' event if there are no prompt in the prompt queue; as a result, this RC moves into the start recognizing with timer state. If there is at least one prompts in the queue, the PromptQueue sends this RC a 'playStarted' event whose data contains the bargein and bargeintype values for the first prompt, and the input timeout value for the last prompt in the queue. The data model is updated with this information. Interaction with the recognizer during prompt playback is determined by the data model's bargein value. If bargein is true, then this RC transitions to the playing with bargein state. If bargein is false, the RC transitions to the playing without bargein state. In the playing without bargein state, recognition is suspended if it has been previously activated (recoActive parameter of the data model tracks this). 
Suspending recognition is conditional on the value of 'inputmodes' data parameter; if 'dtmf' is in inputmodes, then DTMF recognition is suspended; if 'voice' is in inputmodes, the ASR recognition is suspended. In this state, the PromptQueue can report to this RC changes in bargein and bargeintype as prompts are played: a 'bargeintypeChange' event with the values 'hotword' or 'speech' cause the data model parameter 'bargein' to the set to 'true' and the 'bargeintype' parameter to be updated with event data value. If the PromptQueue resource sends a 'playDone' event, then the data model markname and marktime parameters are updated and the RC transitions to the start recognizing with timer state. In the playing with bargein state, recognition is activated if it has not been previously activated (determined by recoActive parameter in the data model). Activating recognition is conditional on the value of 'inputmodes' data parameter; if 'dtmf' is in inputmodes, then DTMF recognition is activated; if 'voice' is in inputmodes, then ASR recognition is activated. In this state, the PromptQueue can report changes in bargein and bargeintype as prompts are played: a 'bargeintypeChange' event where the event data value is not 'unbargeable' causes the data model 'bargeintype' parameter to be updated with the event data ('hotword' or 'speech'); while a 'bargeintypeChange' where the event data value is 'unbargeable' causes the data model 'bargein' parameter to set to false and the RC transitions to the playing without bargein state. If the PromptQueue resources sends a 'playDone' event, then the data model markname and marktime parameters are updated and the RC transitions to the start recognizing with timer state. sends the PromptQueue a 'halt' event, and transitions to the update results state. If negative, the RC sends a 'listen' event to the recognition resource which sent the 'recoResults' event. In the start recognizing with timer state, an input timer is activated for the value of the inputtimeout data parameter and, if the recognition is not already active (determined by the recoActive data parameter). Recognition activation is conditional on the value of 'inputmodes' data parameter; if 'dtmf' is in inputmodes, then DTMF recognition is activated; if 'voice' is in inputmodes, the ASR recognition is activated. The RC then transitions into the waiting for input state. In the waiting for input state, the RC waits for user input. If it receives a 'timerExpired' event, then the RC sends a 'stop' event to all recognition resources, sends a 'noinput' event to its controller and exits. cancels the timer, and transitions to the update results state. If negative, the RC sends a 'listen' event to the recognition resource which sent the 'recoResults' event. In the waiting for speech result state, the RC waits for a 'recoResult' event whose data is used to update the recoResult data parameter and to set the recoListener data parameter if the recognition result is positive. The RC then transitions to the update results state. In the update results state, the RC sends 'assign' events to the data model resource, so that the lastresult object in application scope is updated with recognition results as well as markname and marktime information. If the recoListener data parameter is defined, then the RC sends a 'recoResult' event to the recognition listener RC; otherwise, it sends 'nomatch' event to its controller. The RC then exits. 
The PlayandRecognize RC is defined to receive the following events: and the events it sends: The events in Table 47 are sent by the PlayandRecognize RC to resources which define the events. The events in Table 48 are those received by the PlayandRecognize RC. Run time controls (rtcs) are represented by voice or dtmf grammars that are always active, even when the interpreter is not waiting for user input (e.g., when transitioning between documents). When the grammar representing the rtc is matched, the action specified by the rtc is taken. When an rtc grammar completes recognition, it is immediately restarted whether it matched the input or not. Other grammars, including standard recognition grammars, may be active at the same time as an rtc. Possible values for 'action' and 'params' are as follows: <cancelrtc> can be used to cancel an rtc defined by the <rtc> element. Note that rtcs are identified by their grammars. Both <rtc> and <cancelrtc> are scoped to the nearest enclosing control element (<item>, <form>, ...). If not within a control element, they are scoped to the document they are in. A given rtc may be defined multiple times within an application. At any point during execution, the most narrowly scoped <rtc> or <cancelrtc> element will be in effect. If an active rtc is turned off by a <cancelrtc> tag, it will be reactivated when the interpreter leaves the scope of the <cancelrtc> tag (unless it comes within the scope of another <cancelrtc> tag). For example, if an <rtc> tag is scoped to a <form> and a <cancelrtc> tag is scoped to a field within the <form>, the rtc will be active while the form is executing, except while the field in question is executing. Application authors may thus use the <rtc> and <cancelrtc> tags along with the scoping rules to exercise fine-grained control over the activity of rtcs. Logically, rtc grammars behave as a prefilter on the speech stream, replacing any input they match with silence. Since rtc grammars operate upstream of normal recognition grammars, the recognition grammars never see input that matched an rtc grammar. Thus input that matches an rtc grammar will not trigger barge-in, since barge-in is triggered only by input matching a normal recognition grammar. Since rtc grammars apply upstream of recognition grammars, all rtc grammars have, in effect, a higher priority than any recognition grammar. Thus the priority attribute on an rtc grammar affects its priority only relative to other rtc grammars. When processing speech or type-ahead input, rtcs again apply upstream of normal speech grammars. Remember also that rtcs may be active even when the system is not in the wait state, so they may match the input while it is being entered and other grammars are not active. The platform executes the 'volume', 'speed' and 'skip' actions immediately, as soon as the rtc grammar matches. The platform also executes the 'cancel' and 'goto' actions automatically, but only once the interpreter is in an event-processing state. Thus the platform may complete the processing of non-interruptible tasks before it processes the 'cancel' or 'goto' actions. Tables 84, 89, and 92 show the events sent by the record RC to various resources. This module defines the syntactic and semantic features of a <controller> element. The above transition controllers influence the selection of the first form item resource controller to execute, and subsequent ones through the session. The attributes and content model of <controller> are specified in 6.22.1 Syntax. Its semantics are specified in 6.22.2 Semantics.
Transition controllers serve as dialog managers for VoiceXML forms and documents and are described syntactically using the <controller> element. A VoiceXML 3.0 <form> element or <vxml> element may have at most one <controller> child, which allows mixed content. [See XXX for schema definitions]. The <controller> element has the attributes specified in Table 95. At most one of src or srcexpr may be specified. The controller RC is the primary RC for the <controller> element. EXAMPLE 1: If the transition controller is described using an XML type or vocabulary, the description is a direct child of the <controller> element. <v3:vxml version="3.0" ...> <v3:controller> <scxml:scxml version="1.0" ...> <!-- Transition controller as a state chart --> </scxml:scxml> </v3:controller> <!-- Remainder of VoiceXML 3.0 document --> </v3:vxml> EXAMPLE 2: If the transition controller is described using a non-XML type or vocabulary, the description is the text content of the <controller> element. A CDATA section may be used if needed. <v3:vxml version="3.0" ...> <v3:controller> <![CDATA[ // Some text-based transition controller description ]]> </v3:controller> <!-- Remainder of VoiceXML 3.0 document --> </v3:vxml> EXAMPLE 3: [Support for this convenience syntax not yet decided -- here's the tentative text: If the transition controller is described using SCXML, then a convenience syntax of placing the <scxml> root element as a direct child of the <vxml> or <form> element is supported without the need of a <controller> wrapper. Thereby, the following two variants are equivalent:] Variant 1: <v3:vxml version="3.0" ...> <v3:controller> <scxml:scxml version="1.0" ...> <!-- Transition controller as a state chart --> </scxml:scxml> </v3:controller> <!-- Remainder of VoiceXML 3.0 document --> </v3:vxml> Variant 2: <v3:vxml version="3.0" ...> <scxml:scxml version="1.0" ...> <!-- Transition controller as a state chart --> </scxml:scxml> <!-- Remainder of VoiceXML 3.0 document --> </v3:vxml> A VoiceXML 2.1 profile is included demonstrating how profiles are defined in VoiceXML 3.0. Using existing elements from the [VOICEXML21] specification is helpful as the semantics of these elements are already well defined and well understood. Thus changes in how they are presented are a result of the module and profile style of VoiceXML 3.0 and of making more explicit and formal the precise detailed semantics. The profile can best be described in the following 3 sections: This section describes the convenience syntax that can be used to define some of the functionality of VoiceXML 2.1. A VoiceXML interpreter context needs to fetch VoiceXML documents, and other resources, such as media files, grammars, scripts, and XML data. Properties are used to set values that affect platform behavior, such as the recognition process, timeouts, caching policy, etc. The following types of properties are defined: speech recognition (8.2.1 Speech Recognition Properties), DTMF recognition (8.2.2 DTMF Recognition Properties), prompt and collect (8.2.3 Prompt and Collect Properties), media (8.2.4 Media Properties), fetching (8.2.5 Fetch Properties) and miscellaneous (8.2.6 Miscellaneous Properties) properties. The following generic speech recognition properties are defined. The following properties are defined to apply to the fundamental platform prompt and collect cycle. The following properties pertain to the fetching of new documents and resources. Note that maxage and maxstale properties may have no default value - see 8.1.2 Caching. The following properties can be used in conjunction with 8.2.2 DTMF Recognition Properties to tailor the user experience.
The effects of these are shown in the following timing diagrams. The timeout parameter determines when the <noinput> event is thrown because the user has failed to enter any DTMF. Once the first DTMF has been entered, this parameter has no further effect. Figures 21, 22, and 23: Timing diagrams for interdigittimeout and the other DTMF timing properties. The following properties can be used in conjunction with 8.2.3 Prompt and Collect Properties and 8.2.1 Speech Recognition Properties to tailor the user experience. The effects of these are shown in the following timing diagrams. In the example below, the timeout parameter determines when the noinput event is thrown because the user has failed to speak. Figure 24: Timing diagram for timeout when no speech provided. In the next example, the user provided an utterance that was recognized by the speech grammar. After a silence period of completetimeout has elapsed, the recognized value is returned. Figure 25: Timing diagram for completetimeout with speech grammar recognized. Figure 26: Timing diagram for incompletetimeout with speech grammar unrecognized. Several VoiceXML parameter values follow the conventions used in the W3C's Cascading Style Sheet Recommendation [CSS2]. Integers are specified in decimal notation only. Integers may be preceded by a "-" or "+" to indicate the sign. An integer consists of one or more digits "0" to "9". Real numbers are specified in decimal notation only. Real numbers may be preceded by a "-" or "+" to indicate the sign. A real number may be an integer, or it may be zero or more digits followed by a dot (.) followed by one or more digits. This version of VoiceXML was written with the participation of members of the W3C Voice Browser Working Group. The work of the following members has significantly facilitated the development of this specification: The W3C Voice Browser Working Group would like to thank the W3C team, especially Kazuyuki Ashimura and Matt Womer, for their invaluable administrative and technical support.
http://www.w3.org/TR/2010/WD-voicexml30-20100831/
CC-MAIN-2013-20
refinedweb
14,179
57.47
Question: I have to leave in a DataTable only records with dates currently not present in the database. So I read all existing dates using the stored procedure (is it correct?): SELECT DISTINCT CAST(S.[date] AS DATE) -- original date is DATETIME2(0) FROM ... WHERE ... and load it to a DataTable: var tableDate = new DataTable(); new SqlDataAdapter(command).Fill(tableDate); How to remove now from another table all unnecessary rows? I think LINQ could help but I'm not sure how.. Solution:1 I'm looking at your answer, which you say works, and you just want to know how to do it in a "single LINQ query." Keep in mind that these queries all have deferred execution, so the following two queries are functionally equivalent: var q = from d in dates select d.Field<DateTime>("date"); return (from r in records where !q.Contains(r.Field<DateTime>("date")) select r).CopyToDataTable(); And: return (from r in records where !dates .Select(d => d.Field<DateTime>("date")) .Contains(r.Field<DateTime>("date")) select r).CopyToDataTable(); The second version is a lot harder to read, but nevertheless, it is "one query." Having said this, none of these examples really seem to match your question title, which suggests that you are trying to remove duplicate rows. If that is indeed what you are trying to do, here is a method that will do that: static DataTable RemoveDuplicates(DataTable dt) { return (from row in dt.Rows.OfType<DataRow>() group row by row.Field<string>("date") into g select g .OrderBy(r => r.Field<int>("ID")) .First()).CopyToDataTable(); } If you don't care about which duplicates removed then you can just remove the OrderBy line. You can test this as follows: static void Main(string[] args) { using (DataTable original = CreateSampleTable()) using (DataTable filtered = RemoveDuplicates(original)) { DumpTable(filtered); } Console.ReadKey(); } static DataTable CreateSampleTable() { DataTable dt = new DataTable(); dt.Columns.Add("ID", typeof(int)); dt.Columns.Add("Code", typeof(string)); dt.Columns.Add("Name", typeof(string)); dt.Rows.Add(1, "123", "Alice"); dt.Rows.Add(2, "456", "Bob"); dt.Rows.Add(3, "456", "Chris"); dt.Rows.Add(4, "789", "Dave"); dt.Rows.Add(5, "123", "Elen"); dt.Rows.Add(6, "123", "Frank"); return dt; } static void DumpTable(DataTable dt) { foreach (DataRow row in dt.Rows) { Console.WriteLine("{0},{1},{2}", row.Field<int>("ID"), row.Field<string>("Code"), row.Field<string>("Name")); } } (just replace "date" with "Code" in the RemoveDuplicates method for this example) Hopefully one of these answers your question. Otherwise I think you're going to have to be more clear with your requirements. Solution:2 You could use Except() return records.Except(dates); UPDATED: If your DataTable has typed fields, then it should be like the following: var excluded = arbDates.Rows.OfType<System.Data.DataRow>().Select(a => a[0]) .Except(excDates.Rows.OfType<System.Data.DataRow>().Select(e => e[0])); otherwise you could cast it: var excluded = arbDates.Rows.OfType<System.Data.DataRow>() .Select(a => Convert.ToDateTime(a[0].ToString())) .Except( excDates.Rows.OfType<System.Data.DataRow>() .Select(e => Convert.ToDateTime(e[0].ToString()))); Solution:3 Your SQL statement looks fine. As I understand it, you're casting to get the default time value starting at midnight. Therefore the dates in the other table being compared must also match that format in order to compare the dates with neutral times. 
If they aren't, you can still use the code I have below but you must add the .Date property anywhere that the tableResult row's field is referenced. Also, I have used Field<DateTime>(0), but depending on your query and based on your earlier example you may need to use Field<DateTime>("date"). There's no need for a custom comparer. To merge your LINQ queries into a single query you could simply use the let keyword and carry the intermediate result through the query and reference it. Give this a try: var tableDate = new DataTable(); new SqlDataAdapter(command).Fill(tableDate); // this is the other table that has other dates, so populate as needed var tableResult = new DataTable(); var newTable = (from row in tableResult.AsEnumerable() let uniqueRows = tableResult.AsEnumerable().Select(r => r.Field<DateTime>(0)) .Except(tableDate.AsEnumerable().Select(r => r.Field<DateTime>(0))) where uniqueRows.Contains(row.Field<DateTime>(0)) select row).CopyToDataTable(); In dot notation the query would be: var newTable = tableResult.AsEnumerable() .Select(row => new { Row = row, UniqueRows = tableResult.AsEnumerable() .Select(r => r.Field<DateTime>(0)) .Except(tableDate.AsEnumerable().Select(r => r.Field<DateTime>(0))) }) .Where(item => item.UniqueRows.Contains(item.Row.Field<DateTime>(0))) .Select(item => item.Row) .CopyToDataTable(); Instead of tableResult.AsEnumerable() you could use tableResult.Rows.Cast<DataRow>() or tableResult.Rows.OfType<DataRow>(). The results are the same between all these approaches. If you want to remove duplicates from the existing table (rather than copy it to a new table), you could remove the items returned by the Intersect method from the table: var commonDates = tableDate.AsEnumerable().Select(row => row.Field<DateTime>(0)) .Intersect(tableResult.AsEnumerable().Select(row => row.Field<DateTime>(0))); for (int index = tableResult.Rows.Count - 1; index >= 0; index--) { if (commonDates.Contains(tableResult.Rows[index].Field<DateTime>(0))) { tableResult.Rows.RemoveAt(index); } } Solution:4 As I understand the problem, you are trying to de-dup data coming from some import. You may not need to do this using LINQ. Although the post title suggests LINQ, you later question whether LINQ might be the best solution and, given what we know, I think you could do this using a single Insert statement. First, I'd suggest bulk copying the data into a temporary location in the db (if you are not already doing this) like so: Create Table BulkCopyData ( Id int not null identity(1,1) , Date DateTime2 not null , ... ) One of the advantages of bulk copying into a temporary location is that you can add indexes and such to speed up the cleaning process. To de-dup the data, you could then run a query like so: Insert DestinationData(...) Select ... From BulkCopyData As BCD Where Id = ( Select Min(BCD2.[Id]) From BulkCopyData As BCD2 Where Cast(BCD2.[Date] As Date) = Cast(BCD.[Date] As Date) ) Or Insert DestinationData(...) Select ... From BulkCopyData As BCD Where Id = ( Select Min(BCD2.[Id]) From BulkCopyData As BCD2 Where DateDiff(d, BCD.[Date], BCD2.[Date]) = 0 ) This will pull the first date it finds (the one with the lowest Id). This is obviously somewhat arbitrary, but to get more refined we'd need to know more about the data structure and requirements.
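For completeness, here is a minimal end-to-end sketch that ties the earlier LINQ pieces together: load the dates already stored in the database (via the stored procedure from the question), then keep only the rows of the incoming table whose dates are not present yet. The connection string, the stored procedure name (dbo.GetExistingDates) and the "date" column name are placeholders I made up for illustration, so adjust them to your schema; this only demonstrates the Except/Contains approach shown above, not the only possible implementation.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Linq;

// Requires references to System.Core and System.Data.DataSetExtensions
// (AsEnumerable, Field<T> and CopyToDataTable live in the latter).
static class DateFilter
{
    public static DataTable FilterOutExistingDates(DataTable newRows, string connectionString)
    {
        // 1. Load the dates that already exist in the database.
        var existingDates = new DataTable();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.GetExistingDates", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            new SqlDataAdapter(command).Fill(existingDates); // Fill opens and closes the connection itself
        }
        var existing = existingDates.AsEnumerable()
                                    .Select(r => r.Field<DateTime>(0)); // already midnight thanks to CAST(... AS DATE)

        // 2. Keep only the rows whose date (time portion stripped) is not in the database yet.
        var wanted = newRows.AsEnumerable()
                            .Where(r => !existing.Contains(r.Field<DateTime>("date").Date));

        // CopyToDataTable throws on an empty sequence, so fall back to an empty clone of the schema.
        return wanted.Any() ? wanted.CopyToDataTable() : newRows.Clone();
    }
}

Materializing existing into a HashSet<DateTime> (or calling ToList()) before the Where loop would avoid re-enumerating it for every row, which starts to matter once the tables grow.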
http://www.toontricks.com/2019/02/tutorial-remove-duplicates-by-field.html
CC-MAIN-2019-09
refinedweb
1,104
50.84
Archives! How is this for browser stats? While the group of people that use CampusFish is small, I was astounded to see that only 20% of my visitors use Internet Explorer. Wow. Granted, I know several of the users are Mac guys (they blog about Mac stuff quite a bit), and several others are Firefox users. I assume the other 20% are the people I don't know. ;) Little victories in programming with test-driven development Those of you bored enough to read my blog regularly probably know that I coach 17-year-old girls in junior Olympic volleyball. In teaching my kids to improve upon their skills, I frequently tell them not to worry so much about the big picture when I want them to concentrate on the smaller things.. You write all of these tests, maybe hundreds, and start to write code that passes the tests. There's no way in hell you'll pass them all the first time you run the tests (if you do, you're doing it wrong). But every test you pass is a little victory toward the bigger goal. If you can concentrate on said victories, I think you can get a lot more enjoyment out of even the most mundane programming tasks. That's why I like TDD. Revisiting my own blog system on CampusFish Today I reached the milestone I was hoping for in the revisions for CampusFish, my little blogging project. After a year, it has only made enough money to cover the SSL certificate, the domain name and some of the bank fees, but it's worth it because I use it. It's where I drop my F-bombs and frustrations on the world and talk about stuff that no one here would likely care about. When I first launched the site, it was kind of limited because I was so geeked up about using the POP Forums class library to power it all. It still does most of the heavy lifting, but there are about a half-dozen or so data access methods now that do the rest. It's a lot more simple than .Text in terms of the code base, but it basically does the same thing. There are some additional features like the photo galleries, a recent comments list for members, private messages, friends lists, profile photos, etc. Even prior to the revisions, it was apparently compelling enough for the small group of users, because they're very active with it. Some are having good times with the custom style sheet functionality. The fun code exercise in this case was doing the trackback mechanism. It's pretty straight forward once you get your arms around the protocol. Granted, users can choose to disallow public comments entirely, so I don't know how much use that will get. I've gotta come up with some more interesting style sheets for the users, but that will come in good time. HTML philosophy: <br /> vs <p></p> I decided that perhaps I should rewrite my text parsing engine for POP Forums from scratch instead of trying to band-aid it over and over. So with a clean slate, I have a few decisions to make. I've noticed that other forums don't get into parsing paragraph tags at all. Instead they use line breaks for everything. What do you think, is this acceptable? If my understanding of XHTML validation is correct, it's OK as long as it's nested within some kind of block element, like <div>. It's certainly a lot easier to parse line breaks instead of properly closed <p> tags, that's for sure. What's your take? I don't get religious about these things the way some people do, so I'm easily influenced. If we had the Internet in college I was chatting last night with a high school kid that frequents two of my sites. 
Smart kid, has a site about a certain amusement park, fortunate enough to have his own PowerBook. We were talking about advertising revenue. He uses one of the same ad firms I do, and he does OK even with limited traffic. It made me think... what if the Internet was as mainstream today as it was when I was in college (fall '91 to spring '95)? I remember busting my ass on crappy work-study jobs, along with my radio gig, just to pay the rent my senior year. I was lucky to clear $300 for a month working 80+ hours. Not a lot of beer money, or money to buy other essential items like CD's and a replacement VCR when my hand-me-down died. I also wouldn't have had to settle for my ancient IBM PS/2 Model 25 with no hard drive (though ironically it was my first computer used to touch the Net). Today, nearly every kid in college has a computer, a laptop even, with a wired dorm. There isn't a doubt in my mind that if I were in college today, I'd have some site and I'd clear a grand a month to seriously party. I might have even studied now and then. And the effects aren't limited just to income. My former volleyball kids, now in college, are always connected and online. They're there in my buddy list 24/7. There's a totally different social culture aided by the Internet. I don't know if that would've resulted in fewer lonely nights or just a different means to receive a booty call, but it would be different, regardless. Maybe the weirdest thing is just that life hasn't changed much in ten years now that we have a mainstream Internet. On the other hand, everything has changed. It's a very strange dichotomy. Naive about IE? Give me a break. I say: "I've never understood how Microsoft has profited from IE's dominance." Charles says: "This is a very naive view. There is a certain base level of standards compliance that all browsers implement. Beyond that, Microsoft has added siginificant functional enhancements to IE which allow it to do much more than browers such as Netscape or Firefox." He goes on to say it has more to do with intranets than the Internet. Either way, I have to respectfully say that he's full of crap. That sounds like a quote from the MS PR handbook. Anyone using Firefox right now that is missing out due to the lack of "significant functional enhancements" in IE? Anyone? *crickets chirp* That's about what I figured. Yeah, I'm sure you can find some exceptions, but give me a break. Heck, even in Corporate America I see no IE-dependence. In fact, I get mini-throw-up every time I start a new gig and find that a company is still hanging on to Lotus Notes databases, Domino Web servers and such. I'm as much of a Microsoft cheerleader than anyone. MS products have changed my life and I wrote a book about them. But I haven't seen anyone give any compelling evidence that IE allowed them to earn actual money. Yeah, they killed Netscape by pushing out IE, but so what? Netscape was a company with the most riciulous business plan ever conceived (if there really was one at all), and the product sucked and got worse every release. The hardcore Internet dorks like me started with Netscape, and eventually moved to IE because Navigator sucked. That's what kills me about the last six or seven years about this saga. There are really two issues that everyone intermingles into this demonization of Microsoft. The first is that Microsoft used its monopoly to squash competition. Seriously, what competition did Netscape offer? 
I'm not saying it's right, but to suggest that Netscape was ever going to be a bona fide profitable business is a fantasy. The second issue is that proprietary IE features would cause Microsoft to own the Web. (Ironically, it should be noted that Netscape's early versions had "extensions" to HTML that did the very same thing.) Yet here we are talking about the relative explosion in market share by Firefox. Huh. A lot of good that desktop monopoly did Microsoft, eh? The inability of IE to evolve Wow, have you read this story from The New York Times (via News.com)? The author just slams the guy from Microsoft, and quite frankly, he kind of deserves it for some of the stupid things he said. Granted, I'll offer that they might have been taken out of context, but that last analogy isn't very good. My personal feeling is, and has been since I first saw the Web with Mosaic 1.0, that the browser is largely inconsequential in terms of any company's business. If I were to start a new company today, a company that builds Web browsers would not be among my considerations. I've never understood how Microsoft has profited from IE's dominance, or how Netscape back in the day made a buck when you could download the browser for free. Neither company has scored any extra revenue from me, any more than Mozilla has by me using Firefox. The only thing at stake is to say, "ours has more users." That's such a dotcom business plan. Now of course the Microsoft haters (you know, the tools and morons that refer to the company as "M$," because that dollar sign means capitalism is bad or something) are going to say that they're trying to extend their desktop dominance to the Web. Really? How? Has IE's dominance prevented you from using the Web? There was this long-standing theory that as applications more commonly became Web-based that the browser would be the gateway to those apps, and somehow Microsoft's browser would control it all. That was a stupid theory because it assumes that the Web itself could only be viewed by IE. If you want to bitch about IE, then by all means complain about the legitimate problems like security and the worst CSS rendering of any browser. Those are things that irritate the crap out of me, and they're the reason I don't use IE anymore. Despite this, Microsoft is not being harmed by my decision (as a .NET developer, they're obviously getting my money in other ways). In fact, I start to wonder why Microsoft continues to build a browser at all. The one they have doesn't work as it should, there's no sequel in sight, and with XP SP2, there isn't a single reason you need it (Windows Update works on its own, without the browser itself). Grumpy blogging It occurred to me that I've made a lot of posts lately indicating that something "sucks" or "blows" or is "terrible" or something similarly negative. It seems I blog a lot when I have something to complain about. I think this is what happens when you spend too much time in front of the LCD glow. I'm actually very happy, and enjoying life. It's just that in this profession, given my area of "expertise" (stop laughing), there isn't much to talk about right now. I did my fair share of Visual Studio and ASP.NET v2 cheerleading last summer while writing my book. Actually, there it is... I think I figured it out by talking through it. Since I can't use Whidbey in production, I need to use VS 2003 and ASP.NET v1.1 so I can pay the bills. Indeed, that's enough to make anyone grumpy. 
Aside from mangling the crap out of my HTML, VS 2003 gets pissed and won't open a Web project if the web.config for it has some other IHttpHandlerFactory taking requests. You get the drive doesn't map to site error nonsense. Honestly, who thought that rooting your Web apps in IIS was a good idea? I'll never understand that. But alas, it won't be beta forever, and this insanely long testing period will result in a nearly perfect product, right? Strong Bad does radio, and it's hilarious! Oh, and radio still sucks. This is to see who really reads my blog... In his latest e-mail, Strong Bad takes on the stereotypes of radio. I laughed so hard at this I nearly pee'd my pants. Seriously. If you know anything about my resume, you know that I double majored in radio/TV and journalism in college, and I worked professionally in radio for about two years. I'm not sure if it will be as funny to you without that radio experience, but for someone that loved the medium and loathes what it has become, it's freakin' comedy gold. And speaking of radio, it sure sucks. You can trace the death of good radio back to the days when Congress was into deregulation for the sake of deregulation. When the FCC lifted ownership restrictions on radio, therefore handing the scarce resource of FM bandwidth over to huge media companies, they killed every last chance that radio had to be personal and local. The shit on the air now is programmed from New York, for New York tastes, is pre-recorded, has no show component to it, and the formats absolutely blow. Despite all this, radio revenue has never been higher. Why? Because small local companies can't get their hands on a frequency to challenge Clear Channel and Infinity. It's a joke. Rich text editing still blows Wow do I hate dealing with rich text editing. The funny thing is, way back when POP Forums was a product I actually sold, I think I may have been the first to use some very basic bold/italic functionality in a forum. Now there are some nice controls out there, free even, but trying to get them to work as you'd like in both IE and Mozilla/Firefox is hopeless. The latest version of FreeTextBox has one problem: By default it renders bold and italics with span/style tags/attributes. That's bad because what I need for parsing is <b>/<strong> or <i>/<em> tags. To Firefox's credit, it's smart enough to combine them into the same tag, but again, not really what I need. I did find this little gem buried in the Mozilla documentation and tried to work it into a derived class: public class FTB : FreeTextBox { protected override void OnPreRender(EventArgs e) { base.OnPreRender (e); this.Page.RegisterStartupScript("cssfix", "<script language=\"javascript\">if (navigator.userAgent.toLowerCase().indexOf(\"gecko\") != -1) document.getElementById(\"" + this.UniqueID.Replace(":", "_").Remove(0, 1) + "_designEditor\").contentDocument .execCommand(\"useCSS\",false,null);</script>"); } } Unfortunately, while the relative client-side script works great when I plug it into a static HTML representation of an editor (the useCSS command), it works great, but causes some kind of component error in Firefox's Javascript engine when I try to use it in a live FreeTextBox. My next attempt was to try and upgrade my own little control, however ugly it might be, to work in Firefox. Works great, except for the part about copying the HTML from the iframe to the hidden text field. 
In my version, I use the iframe's onblur event to copy, so if you hit anything else on the page, it'll copy it over before a form submit (by postback or otherwise). Firefox doesn't seem to listen for onblur from an iframe, so that doesn't work. Despite a lot of searching through the FreeTextBox script, I can't see how it does the copy. So here I am, back at zero. There really should be a good Flash-based editor, though that of course would cause you to lose text if you accidentally moved back or forward. I've seen a few out there, but they rely on Flash's built-in functionality, which, believe it or not, throws in more junk than IE ever did. Bought a new Intellimouse Explorer I bought the first Intellimouse Explorer back in... uh, well, actually I don't know when it came out. It actually crapped out on my in the first year, but Microsoft sent me a replacement. I've had that one ever since. It has been at least four years, maybe as many as six. In the past few months, it started cutting in and out on me, and it wasn't a short in the cable. If I'd cross from the far end of one screen to the opposite end of the other (I use a pair of LCD's), Windows would make the disconnect then connect noise and I'd lose the cursor somewhere. I could almost deal with that if it wasn't for the noises! :) Alas, I decided it was finally time to retire it. It had been good to me. The Microsoft logo had long since been worn off and there are actual grooves in the plastic from my fingers. I replaced it with the new v4.0. Why not? The last one lasted so long. I got the wired version since I hate changing batteries (as my wife does this regularly on hers). The new version is roughly the same shape, but lighter. The only thing I don't get is why they made the forward and back buttons smaller. Then again, I don't know how many times I've accidentally hit them when grabbing the mouse on the old one. My Natural Keyboard Pro is still working. It looks disgusting, but it works. I hope it continues to hold on, because I haven't found any other keyboards that have the same tactile feedback I like. Call for text parsing help As much as I'd like to think that I can continue to improve POP Forums on my own, I can't. I need some help. At the root of my problem is the text parsing class. In a nutshell, this thing is supposed to turn the HTML of a rich text editor into "forum code," and turn forum code into valid HTML for display in a forum thread. It mostly does this pretty well, but there are issues related to parsing e-mail and URL's correctly, namely if they appear in tags already. I've uploaded the class and the NUnit tests here. There are basically just a few tests that don't pass in the ComplexTests method. If anyone would like to take a stab at fixing, please, be my guest and I'll be eternally grateful. I realize the code isn't what it should be, and that starting over is probably a better idea, but you're looking at six generations of band-aided code. Rewriting it entirely is something I just haven't really had time to do. EDIT: Yes... the class won't compile because I left out the rest of the project. If you want to give it a go, comment out the section that calls the Emoticon class and the censoring functionality. There may be some tests that test emoticon parsing as well, so you'll have to ditch those. Sorry... it would've been too much to try and get it all together, including the config files and database. 
:) Wiki == horrible documentation I'm sure it won't make me more popular by saying it, but I think the Wiki craze among developers is nothing to get excited about. Yeah, it's neat that you can implement such a system, but it seems to breed useless content. For example, I noticed that FreeTextBox released a new version, so I thought I'd check it out. I downloaded it, but went back to it in a test project on a remote server, where I did not have the original zip (and therefore, not the help files or code samples). I thought, hey, no problem, I'll just check the docs on the site. What a waste of time that turned out to be. I went to the installation page looking to see what the @Register directive was (seeing as how I had no idea what the proper namespace was). Nope, not there. After looking around some more, I eventually landed on a page with nothing on it at all, and no navigation to get me to something useful. I'm not a hater. From what I can tell, this version of the control is extra cool, and the price is right. And yes, I'm sure someone wants to comment that I should have had the stuff in the zip file with me, but I didn't. I don't think it's that ridiculous to expect that you'd actually find meaningful documentation for a product, free or not, on the site from which it came from. I've yet to see any Wiki evolve into something useful. The concept has been around for a long time, and for awhile you'd think that blogging .NET developers saw it as something that would change the world. But here's the thing... Having run sites that encouraged the contribution of content from anyone on the planet since 1998 or so, I can tell you from experience that this kind of Utopian everyone-can-edit idea won't ever work. You can't even trust people to behave in a discussion forum or in blog comments, and you want to have a site anyone can edit content on? Without some kind of moderation, it's useless, and if moderation is to be practical, it has to be of structured data. So tell me why I'm so uninformed. Still unclear about using RewritePath for a default page I read Scott W.'s article on URL rewriting in .Text, and it's pretty straight forward. What I'm still not getting is how you can handle a default page request without having to wildcard map requests in IIS. For example, in this very blog, you can request "/Jeff" or "/Jeff/" and get my blog. I assume that's because IIS and ASP.NET are assuming this is a request for "/Jeff/default.aspx," but perhaps I'm not seeing something right. I've been looking at the .Text code and it's not entirely obvious to me. Anyone wanna help a guy out? The code monkey's racing mind I didn't sleep well at all last night due to a nasty stomach ache, which I think I can attribute to the popcorn butter they use at the local Cinemark. I feel like crap every time I eat it. (Blade Trinity, by the way, was awesome. Jessica Biel: Action star. Who knew?) So tonight I thought I'd go to bed early since I would obviously be tired. Yeah, after an hour staring out the window I gave that up. My mind started racing, thinking about some of the projects I have in the pipe. Some of it will lead to revenue, hopefully in the near future, some of it will not. Of course, the more fun stuff isn't revenue generating. My wife has to get up early for school, so to spare her of the tossing and turning, I came down stairs with the iPod (a bit of Venus Hum) and the laptop to surf for some articles relating to some of the things I have to do. 
I figure it's the only way I'm going to get this crap out of my head so I can sleep. At first I was a little annoyed by this, but putting it in perspective, I'm glad I'm getting excited about writing code again. The book really took its toll (though I'd still do it again). Now that I have other things like a new J.O. volleyball team to coach, I think I'm balancing out some more. There's so much I want to accomplish. FireFox doesn't refresh right Has anyone else noticed that FireFox doesn't always refresh as it should? I'm talking about the meta tag refresh. There are a couple of sites I visit that use these tags to refresh after a login, as does my Trillian "Check Hotmail" link. In these cases, you have to view the source of the page and paste it in to make the refresh happen. I've been seeing this since the beta days, so I'm surprised it's still a problem. Support scripts don't help customers, people do For reasons no one can explain, iTunes asks me to authorize my music about every other time I try to play songs. I have no idea why. I've got my tracks on no more than three machines, and I get five. So I fired off a support request to Apple, which after three go-arounds resulted in little more than an explanation that I could only authorize five machines and that further support could only be achieved via a fee-based call. In each case it was clear that these were copy-pastes, not an effort to try and diagnose the problem. Using support scripts like this, handled by support drones making minimum wage, might appear good for business in that it keeps costs down, but at what cost? How many customers will just say "F' it" and move on? Probably not many when it comes to Apple stuff, but it's still not a good front for developing further business. People is illiterite duspite teknologie Isn't this the truth: What corporate America can't build: A sentence It's staggering to me that we have this technology that has become a vital part of life in less than ten short years, yet people communicate more poorly than ever. I've seen it everywhere. I get e-mail from recruiters all of the time that look like they were pecked out by a 14-year-old crack addict with hands too unsteady to type the right letters. In my big corporate jobs, I didn't see it from other code monkeys that often, but from outside departments (help desks, HR, etc.). If you've been to any online forum that covers something you're interested in, you've seen the worst of it. If you like video games, you're really screwed, because I can't remember the last time I saw a coherent video game forum. I have a strict grammar and spelling policy for my sites. It really pisses off some people, but they leave, and that's fine. I just refuse to allow my little corner of the Internet to be over run with what we like to call "brain-dead AOLer speak." Reviewing copy edits is the most tedious thing I've ever done I'm in the process of reviewing the copy edits made to my book before it finally goes off to production. I can honestly say that I've never done anything more tedious. Granted, the editors didn't make a ton of changes (I guess that degree in journalism counts for something after all), but it's enough that you have to read very, very carefully. Looking at these Word documents with all of the changes tracked and comments hurts the eyes. Last time I did something like this was in college circa 1993, when we didn't have Word and you still had to paste together the newspaper. (We didn't have instant messaging or the Web either. 
How the hell did we survive?) Aside from looking at the final proofs, this is essentially the end of the project for me. My wife Stephanie keeps yelling at me for blowing it all off as something anyone can do, but I never wake up and think, "Holy crap, I wrote a book and it was published!" I can be an arrogant bastard about a lot of things, but for some reason I tend to understate my professional accomplishments. I couldn't tell you why. I've got a lot of little projects to start, finish, or think about, but I've also got that question in my mind about whether or not I should write another book. From proposal to publication it will take about 15 months, so if I want to take a stab at supplementing my income in a serious way, I can't wait forever to do it again. Any seasoned authors have advice? A tale of two Web applications, one good, one bad It didn't take me long after playing with the trial for SmarterMail to see that it was a really good Web application (and it's even a .NET app) and server product. The navigation is ridiculously clean, and honestly you could probably use it as your mail client, and never ever use a desktop client again. Best of all, it's catching far more spam than IMail ever did. I guess after using IMail for six years, I didn't realize how much it sucked (and went relatively unchanged). On the other hand, I decided to take advantage of an Overture promo ($100 credit) to try and generate a little traffic for my volleyball site. I've used this service on and off all the way back to the days when it was GoTo.com, and honestly I'm astounded by the way it generally isn't well designed. Aside from being slow, the UI is pretty bad and the navigation isn't logical. There are several pages where you try to update something and there's no explanation as to why it didn't save. Oh, and naturally the promo credit wasn't actually applied until I complained. Google's AdWords, by comparison, isn't perfect, but it's quick and straight-forward. I guess when I stop to think about it, there aren't very many really good Web applications that I encounter. When I did Weight Watchers last year to shed a couple of pounds, that one was pretty good. Bank One is pretty good too. (Is it coincidence that these are .NET apps?) Any other examples that come to mind of really good online applications? I'm really curious to know if anyone has extensive experience with SalesForce.com what they think about it. That should've been my millions... Any reviews on SmarterTools' SmarterMail? IPswitch just sent me a reminder asking if I wanted to renew my service contract for IMail, and truth be told, I'm not really that satisfied with it. The Web interface isn't great, it's expensive, and frankly the spam filtering isn't as good as I suspect it could be. Launched a new site today: VolleyBuzz.com I launched my volleyball site today, VolleyBuzz.com. This one is probably not much of a commercial venture because I'm not sure how big the audience is. Still, as I found when writing my ASP.NET book, writing about things makes you think more critically about them, and I hope to apply that same discipline to coaching volleyball. News.com really gets it wrong at times Flat-panel TVs can't topple tubes--just yet There sure are some problems with this article. First it says that, "LCDs are great as desktop PC monitors because they don't have to refresh pictures rapidly." This implies that TV's must refresh faster, which is not even remotely true. My computer LCD's here run at 72 Hz. 
Even the fastest HD standards top out at 60 Hz (or frames per second). The article also implies that the quality isn't as good, which I also tend to disagree with. I'll give that LCD's don't do black as well as CRT's do, but in terms of overall sharpness of picture, especially a digital picture, it's like night and day. Google: The system is down, yo. I really hate Internet Explorer So I'm working up this alternate style sheet on this project I'm working on. Looks absolutely beautiful in Firefox, and it's totally predictable. Pop it into IE, and naturally it's a total mess. But that's not even half the problem. The other thing is that it doesn't even render half the stuff it should, until you scroll it on and off the screen. Text won't appear, but if you scroll it off, then back on, or select it with the mouse, suddenly it appears. What the hell is that? If Microsoft is in no hurry to fix IE, I hope that Firefox continues to gain market share.
http://weblogs.asp.net/jeff/archive/2004/12
CC-MAIN-2015-11
refinedweb
5,483
71.75
Mutation Testing with Mutant As Rubyists we are no strangers to testing. Unit testing is not just best practice, it is dogma. Code is considered broken until we have tests to prove otherwise. But not all test suites are created equal. Writing good tests is an art, and bad tests can be worse than no tests, slowing development down without increasing the confidence we have in our code. So who tests the tests? The question may seem frivolous, but the answer is simple, Mutant does! Mutant is a mutation tester. To appreciate what it does, let’s imagine doing its job by hand. Appoint a person in your team to be the saboteur, their job is to pick a piece of fully tested code and deliberately introduce defects. If this can be done without raising an alarm, in other words, without causing a test to fail, then a test is missing. It’s then up to the original author to add a test case to detect the sabotage. Do this long enough and it will become very difficult to find code that can still be tampered with freely. Contrast this with traditional “line coverage” tools. Does 100% line coverage mean the code is impervious to sabotage? Certainly not! In fact, it’s possible to write tests that execute every single line of code without making a single useful assertion about them. The fun our saboteur will have! Mutant automates this process, it changes your code in many small ways, creating hordes of mutants. If this freak code causes a test to fail, the mutant is considered killed. Only if, at the end of the line, not a single mutant is left alive have you achieved 100% mutation coverage. We’ll explain Mutant with an example from the real world, demonstrating both the workings and the workflow. Our running example will be a tool that takes a local HTML file as its input, and bundles all local and remote assets together in a directory, so the document can be viewed afterwards without a network connection. Here’s how to use it: AssetPackager.new('foo/bar.html').write_to('baz') The result is a file baz.html, and a directory baz_assets containing all stylesheets, scripts and images. When encountering a reference like <link rel="stylesheet" src="" /> it will download the stylesheet, give it a unique file name based on its contents: <link rel="stylesheet" src="baz_assets/48d6215903dff56238e52e8891380c8f.css" /> I only have space to reproduce the interesting bits here. The full revision history can be found on Github. As a first step, we’ll write a method that can handle the different types of URI’s we want to handle. HTTP and HTTPS URI’s need to be retrieved as such, relative URI’s as well as URI’s using the file:// scheme will be searched for on the local file system. This is the implementation: module AssetPackager class Processor attr_reader :cwd # @param cwd [Pathname] The working directory for resolving relative paths def initialize(cwd) @cwd = cwd end def retrieve_asset(uri) uri = URI(uri) case when %w[http https].include?(uri.scheme) || uri.scheme.nil? && uri.host Net::HTTP.get(uri) when uri.scheme.nil? || uri.scheme == 'file' File.read(cwd.join(uri.path)) end end end end And the first version of our tests. For the local URI’s, we’ll point to a fixture file. For the remote URI’s, we’ll mock out the call to Net::HTTP.get. 
describe AssetPackager::Processor do let(:cwd) { AssetPackager::ROOT } let(:processor) { AssetPackager::Processor.new(cwd) } describe '#retrieve_asset' do subject(:asset) { processor.retrieve_asset(uri) } shared_examples 'local files' do |uri| it 'should load the file from the local file system' do expect(processor.retrieve_asset(uri)).to eq 'section { color: blue; }' end end shared_examples 'remote URIs' do |uri| it 'should retrieve the file through Net::HTTP' do expect(Net::HTTP).to receive(:get).with(URI(uri)).and_return('abc') expect(processor.retrieve_asset(uri)).to eq 'abc' end end fixture_pathname = AssetPackager::ROOT.join 'spec/fixtures/section.css' include_examples 'local files', fixture_pathname.to_s include_examples 'local files', "{fixture_pathname}" include_examples 'remote URIs', '' include_examples 'remote URIs', '' end end According to rpsec all is green and good, and we’re certainly covering all lines of retrieve_asset. Let’s see what Mutant has to say. mutant -I lib -r asset_packager --use rspec 'AssetPackager*' That’s a mouthful. First, tell Mutant how to load our code under test using the same -I, --include and -r, --require flags that Ruby itself uses. Then specify which “strategy” to use to “kill” mutants. Currently only the RSpec strategy is implemented, which makes for easy picking. Finally, hand Mutant one or more “patterns”. In this case, tell it to do its magic on the complete AssetPackager namespace (notice the *). We could also pass it the name of a single class, module, class method ( Foo::Bar.the_method), or instance method ( Foo::Bar#an_instance_method). Based on the pattern, Mutant will search for subjects to drag off to the lab and have their genes rearranged. Mutant can currently handle instance and class methods. Meta-programming constructs like attr_accessor or class level DSL’s are not supported, although there is talk of handling specific DSL’s through plug-ins. AssetPackager::Processor#initialize ........ (08/08) 100% - 0.45s AssetPackager::Processor#retrieve_asset ...................F...........F................. (47/49) 95% - 3.49s evil:AssetPackager::Processor#retrieve_asset @@ -1,10 +1,10 @@ def retrieve_asset(uri) uri = URI(uri) case - when ["http", "https"].include?(uri.scheme) || (uri.scheme.nil? && uri.host) + when ["http", "https"].include?(uri.scheme) Net::HTTP.get(uri) when uri.scheme.nil? || (uri.scheme == "file") File.read(cwd.join(uri.path)) end end evil:AssetPackager::Processor#retrieve_asset @@ -1,10 +1,10 @@ def retrieve_asset(uri) uri = URI(uri) case when ["http", "https"].include?(uri.scheme) || (uri.scheme.nil? && uri.host) Net::HTTP.get(uri) when uri.scheme.nil? || (uri.scheme == "file") - File.read(cwd.join(uri.path)) + File.read(uri.path) end end (47/49) 95% - 3.49s Subjects: 2 Mutations: 57 Kills: 55 Alive: 2 Overhead: 29.31% Coverage: 96.49% Expected: 100.00% Having a closer look at Mutant’s output, it found two subjects to operate on, #initialize, and #retrieve_asset. For each, the output looks a lot like any old test runner, with green dots and red F’s indicating success or failure. In this case, though, a character doesn’t correspond with a single succeeding or failing test, but with a complete run of the test suite, exercised against a mutated version of the subject. Our constructor is a simple enough method, but Mutant still managed to find 8 ways to change it. This includes omitting the argument list, or assigning nil instead of a value. However none of these freak versions made it past our defenses. 
The same can’t be said of #retrieve_asset. There 49 mutants were created, and at the end of the run two are left alive! This means we have behavior in our code unspecified by our tests, let’s fix that before the mutants come back to haunt us with production incidents. To make life easier, also stick the Mutant invocation in a Rakefile, and tell Mutant to fail when mutation coverage is below 100%. This way we can run rake mutant from our CI to make sure everything stays fully covered. desc 'Run mutation tests on the full AssetPackager namespace' task :mutant do result = Mutant::CLI.run(%w[-Ilib -rasset_packager --use rspec --score 100 AssetPackager*]) fail unless result == Mutant::CLI::EXIT_SUCCESS end Now to dissect the mutants that are left alive. For each altered version of the code that made it past our defenses mutant gives us an easy to read diff. - when ["http", "https"].include?(uri.scheme) || (uri.scheme.nil? && uri.host) + when ["http", "https"].include?(uri.scheme) Here our sabotaging mutation tester deleted the second half of the conditional, which is supposed to recognize URIs of the form //example.com/foo/bar. This was indeed a case we forgot to cover in our tests, but that’s easy to fix. include_examples 'remote URIs', '//foo.bar/baz' The second diff initially leaves us a bit stumped though. - File.read(cwd.join(uri.path)) + File.read(uri.path) We need to be able to resolve both absolute ( /foo/bar/style.css) and relative ( assets/stuff.js) local files. For relative paths, we look them up starting from the “current working directory” or cwd, a Pathname instance. For absolute paths, join will simply pass through the absolute path. This code should cover both cases, and we cover both in our tests, but according to mutant removing the call to cwd.join doesn’t make a difference. The test for the relative path isn’t working properly. On closer inspection, the path used in our test as the “working directory” is the same location from which we run the tests. In the mutated version, File.read gets the relative path, and resolves it for us. To make sure our path resolution works as expected we need to change the test to work off a different directory. describe 'with a relative path' do let(:cwd) { super().join('spec/fixtures') } let(:uri) { fixture_pathname.relative_path_from(cwd).to_s } include_examples 'local files' end It is possible that a test-first, watch-the-test-fail style of development would have caught this error. But through the life of a bigger project, some things are bound to be missed. Especially after refactoring, you’ll encounter lots of live mutants indicating untested behavior. By going back to full mutation coverage, you will also find any defects that slipped in while refactoring. Mutation testing isn’t new, in fact it’s been around since the seventies, and an experimental gem called Heckle was the first to bring mutation testing to Ruby. Heckle had some significant shortcomings, however. It never supported all possible Ruby syntax, and the latest release dates from 2009, making newer Ruby versions completely off limits. This led Markus Schirp, part of the ROM team (formerly: DataMapper), to start working on Mutant. An ambitious effort to write a robust, production-ready mutation tester. Mutant is still pre-1.0, but is already used with success on various open source and commercial projects. It’s no small feat to get a tool like Mutant right. 
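As a practical footnote (the packaging details here are an assumption on my part, since the article only shows the CLI invocation), wiring Mutant into a project typically also means adding the core gem and its RSpec integration to the test group of your Gemfile:

# Gemfile (sketch): gem names assumed from how Mutant was packaged at the time,
# version constraints omitted
group :test do
  gem 'mutant'
  gem 'mutant-rspec'
end

After bundling, the same run becomes bundle exec mutant -I lib -r asset_packager --use rspec 'AssetPackager*', or the rake task shown above.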
A problem seen in the early days was that by altering the syntax tree, Mutant could generate code that isn’t syntactically valid Ruby, such as the following: def foo(a = 1, b, c = 2) # second optional argument deleted These problems seem to all have been solved now. Under the hood, Mutant is powered by the excellent Parser and Unparser gems, which have been validated against Rubyspec, the Rails code base, and more. Mutant is currently available for MRI and Rubinius. JRuby support is planned, but stalled on the fact that JRuby does not support the fork system call. Support for Ruby 2.1.0 is unstable. If your Ruby version supports it, you need to get Mutant into your workflow. It may save your app’s life.
https://www.sitepoint.com/mutation-testing-mutant/
CC-MAIN-2018-26
refinedweb
1,835
58.08
Today releases. If you are writing a production application then you should continue to use EF6.x. Because of the fundamental changes in EF7 we do not recommend attempting to port an EF6.x application to EF7 at this stage. We will provide guidance on when this is recommended and how to do it closer to final release. EF6.x will continue to be a supported release for some time. Getting started with Beta 8 We have made a modest start on documentation for EF7, you can view the current documentation at. Supported platforms You can use Beta 8 in the following types of applications. - ASP.NET 5 applications that target either full .NET or the new .NET Core. EF7 is included in new ASP.NET 5 applications that are created using the “Web Site” project template. - Full .NET applications (Console, WPF, WinForms, and ASP.NET 4) that target .NET 4.5 or later. We only recommend this for trying out EF7 in sample applications. - Mac and Linux applications targeting Mono 4.2 or later. - Universal Windows Platform (UWP) is supported for local development but it can not be used in an application that is deployed to the app store. This is because EF7 is not yet compatible with .NET Native, which is a hard requirement for applications deployed to the app store. You can track our work to support .NET Native on our GitHub project. Supported databases The following database providers are available on NuGet.org and support Beta 8. See our providers page for more information and links to getting started. - EntityFramework.Sql. What’s implemented in Beta 8 Beta 8 has mostly been about improving the features already implemented in previous betas to make them more usable and stable. - are we working on now? The following features are currently being implemented - Cascade delete support - Table-Per-Hierarchy inheritance pattern - .NET Native support Aside from the in-flight features listed above, our efforts from now until our initial release will be on cross-cutting quality concerns. - Bug fixing - Performance tuning - API reviews - Documentation - etc. What about CodePlex. Join the conversationAdd Comment Hi, I do not know it is possible now. It is just a simple question: Does EF 7 mapping engine enable us to map/materialize SP results to pure CLR objects? When I last saw it, EF was just able to map to an entity type or a flattened type. For instance, if I have a UserItem class (it is not a DbContext entity type) with a MailAddress property 'Email', how should I name the column in my SP ('Email.Address') to be mapped to a MailAddress type directly. Yes, I know almost? all of the possibly workarounds: extra type for materialization of simple fields (+GC orverhead +extra mapping), DataReader (nightmare), IObjectContextAdapter (hack)). Anything else? 🙂 I suppose most developers (as we) have an existing DB. Please let us generate the code from DB. A missing feature that will help us modelling Domain Driven Design architectures is to allow to map private setters. The alternatives for this are a waste of time. NHibernate supports this from several years ago. I think its time to take it in account on EF7. Plus, how can I map an arbitrary type of data to a CLR object? For instance, if I have a XML, JSON or even a simple int field, how can it be directly mapped to a RGBColor type using Fluent API? Property(u => u.Color).HasColumnType("int", typeof(RGBColorConverter)); Why does not HasColumnType have a type converter parameter? 
I understand this operation can be expensive especially if you load a list of entities, but it can be avoided with LINQ projection or an own SP. From the looks, this is yet another attempt for Microsoft trying to solve their ORM problem. You should really get NHibernate as an example. People who designed NHibernate really grasped POCO concept very well. @Laci, "Reverse engineering a model from an existing database" is listed under the "What’s implemented in Beta 8" section. This is the feature you're looking for. Batching has got to be one my favorite improvements in EF7. Can't wait to try it out! Keep up the good work. 🙂 We're using POS Plain Old SQL in our new applications and slowly removing ORM from our older systems to reduce ongoing costs, reduce points of failure, reduce need to rely on libraries/tools likely to be unsupported, killed or undocumented in the next 5 years. 2-3 guys with a gaggle of random blog posts is not product support – refer to PRISM being moved from MS patterns and practices to an outside couple developers. EF sped up development during the first 6 months of a 2 year development cycle, then was level for the next 1 year, and then starting costing us significant amounts of time and money over the next 3 years. It was determined near the end of the applications 7 year lifespan that ORM became less used over time with more and more queries going directly against SQL Server instead of through a ORM. Will "work an cascade delete support" include support for On Delete Set Null? Always puzzled me why this is absent in EF. @Ron So your alternative to an ORM is hardcoding SQL strings and then what? Reading the untyped results into objects yourself? Doing this is just doing ORM by hand, and it invariably leads to hard-to-catch type bugs when the programmer thinks the database will return one type but returns a slightly different type (via nullability, precision, etc). My question is do you expect to have tooling in VS to reverse engineer from database by RTM? This is the most important aspect for us. Really looking forward to trying the surrogate key support and bulk operation support out. @DabblerNL yes, the functionality is already implemented in current nightly builds. You will have to explicitly specify it in the model using .OnDelete(DeleteBehavior.SetNull). Is support for NoSQL DBs (like at least your DocumentDB, or maybe MongoDB) going to appear in the nearest future? Not sure why an oracle provider is not included right out of the gate??? It is one of the most widely used databases on earth, if you guys really want to mix it up in the linux world then have an oracle provider ready in the beta stage would have been one of my goals. Maybe I should write it myself??? Let me research how difficult it is (probably extremely hard as it seems to be always be the last provider written and is typically written by some 3rd part or oracle themselves) and see if it is reasonable for me to spend some time on this. Can I use EF7 with TypeScript? @daywalker EF7 doesn’t support nested types (a.k.a complex types) yet. When you run raw SQL in EF7 it does go thru that mapping pipeline though (rather than just simple column/property name matching like EF6 did), so it will work once complex type support is implemented. @Laci – to add to what @bricelam said, the Scaffold-DbContext command in Package Manager Console is what you are after. We will have visual tooling by the time we RTM. @Alberto León – Private property setters work with EF6.x and EF7 (I think they also worked in EF5). 
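For anyone wondering what that opt-in looks like in practice, here is a rough sketch. The entity and navigation names are invented, and the relationship-builder method names shifted between the pre-release builds, so treat it as illustrative rather than exact Beta 8 syntax:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // When a Blog is deleted, set the foreign key on its dependent Posts to null
    // instead of cascading the delete.
    modelBuilder.Entity<Post>()
        .HasOne(p => p.Blog)
        .WithMany(b => b.Posts)
        .OnDelete(DeleteBehavior.SetNull);
}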
@daywalker – You can’t map arbitrary data types yet, though we have built out the architecture with this scenario in mind so that we can enable it later (it’s tracked by github.com/…/242). @Ron – I’m not going to try and talk you into using an O/RM… if you have a data access strategy that is working well for you then that is great. We provide EF for folks that want an O/RM… but it’s just one option out there and it’s definitely not for everyone. @Andriy – Yes, we prototyped a couple of non-relational providers to make sure the core architecture worked for them. Our focus for the next few months is on shipping a stable product with a limited set of providers, but we absolutely still want to build out some non-relational providers. @Derek – we are working with a number of provider writers on getting the EF7 providers up and running. Some of those are open source and/or happy for us to share details of their plans – others prefer to keep things under wraps. I can’t comment on specific providers other than the Npgsql and SQL Compact providers who are being very public about their development. @Louis van Geldrop – No, EF7 can only be used in .NET applications. somehow I cannot find a way to execute a sql command. I cannot find any function that can do that under DBContext.Database or DbSet<T>. Is this still possible in EF7? OK. Found the answer to my own question. Once I added using Microsoft.Data.Entity, I found those functions. We are evaluating EF with a code first usage of >300 Entities in one dbcontext: Startup time is > 45 secs with EF6. This is incredible slow! Then I saw a pull request on codeplex: "#1876 #2631 CodeFirst Startup Perf " I tested it and this reduced startup time to about 4-5 secs which would be ok. I would like to know now if that pull request will be accepted by your team and what would be the timeframe. Would we speak about 2016Q1, Q2 or even later. Any chance you will implement EntityDataSource for EF7? Telerik uses this for their grid control and it works great. It has built-in paging, filtering, sorting, etc. I have a bad feeling that Telerik is going to lag with coming up with an alternative solution and I will be stuck with EF6. For a long time at least, Telerik still had a long of examples using LINQ to SQL. How does EF7 handle creating queries dynamically? I don't know if EF7 is fundamentally different in this regard. I just really want to upgrade to EF7. Query performance in EF6 isn't very good. As far as I know, you have to use Include() which results in huge sets of data coming back. I think allowing things to be lazy loaded would be better. I've been doing similar queries in Java with JPA and EclipseLink and found that that's what it does. The queries seem faster because it's only displaying a page of data. The main query is simpler. I think it also has a second-level cache which may be what's making the difference. I saw that EF7 has a new way of handling this. So, I really want to take advantage of that. Also, startup time in EF6 is pretty abysmal. I have a model with about 800 entities. It takes 50 seconds to initialize on the server. 20 seconds on my desktop. I also want to use the batching. EF6 is terrible for batch loads of any kind. I ended up creating a T4 template for CRUD methods and roll my own. Mine uses prepared statements, something EF 6 doesn't do. However, mine doesn't use batching. So, I'm hoping that EF7 will surpass what I've done. I really hope that support for EntityDataSource isn't dropped. 
Microsoft stated that it's being dropped, but, I hope they reconsider this. @Rowan Miller: Could you provide us a sample how to map private fields (collections) in EF? Guess I am waiting to move to 7.1 or later. No TPT inheritance is a deal breaker with our existing databases being accessed by EF 6.1.3. TPH may be faster, but there is no way our DBA is going to change the existing database schemas to suit 1 or 2 applications. Hi Rowan, looking great but please do not forget database views! github.com/…/827 Reasons are found in issue so I hope you understand why this is super important for us. Can you say when this is coming? Thanks! How would I get started if I want to write an EF7 provider for another database, say Firebird? I’m doing some test with asp.net 5 beta8 and Sqlite, however sqlite is only running at runtime 1.0.0-beta8 clr x86. Perspectiva have any of Sqlite work with x64 architecture. I'm currently testing EF7 Beta8, and noticed some unusual behavior. Note that I'm using the EntityFramework7.Npgsql provider for postgresql. I don't know if the issue is the provider, the EF7 core, or my code, but anyways, the issue is that I used to have a column in one of my entities, but refactored some stuff and it's no longer part of the entity, and also no longer part of the database, but the "model" that gets built at runtime somehow has the old column. and it's complaining that the database doesn't. So, my question is how do you go about debugging these types of things. Just wanted to note that I figured out my issue and it was my code. I was refactoring some code and left a property reference to another entity in one of my entities. The "convention" took the name, added "Id" and made it a property in the "model" created at runtime. One of the things I did with EF6 was disable the conventions (using some reflection based code calling Remove on any IConvention class). I prefer to have all my mappings explicitly defined. This may seem silly to some, but that's just the way I like it — to keep from having to add "NotMapped" to all entity properties that aren't going to be mapped — I prefer to keep the entities from having attributes all over the place). Anyways — hope you add a simple "optionsBuilder.UseConventions(false)" that can be added in OnConfiguring. Cheers What is the timetable for connecting Azure to EF7? Specifically I am looking for Azure Tables support. Keep it up and I am looking forward to the release. Ditto for me @DougR. I have a mobile app in mind that will be a companion to a desktop app I built using EF. I'd like the mobile app to use EF7 and Azure Tables for a remote back end capability. It seems that the Azure provider capability is being put on a back burner with no clear timetable. Maybe Microsoft can comment on this. Thanks. Oops. Above name should have been TTRoadHog NOT TTRoadHor! I noticed that MapToStoredProcedures is missing. Is this present in another form or is the stored procedure support not implemented yet? @ Vincent – Quite a few things in EF7 are extension methods… as you found, you just need the Microsoft.Data.Entity namespace to get them. @ Heinz Ernst – Yes we are planning to take that pull request in EF6.2. See the “What about EF6.x” section of the post for more details about that release. @Jon – Honestly, I don’t have a definite answer on EDSC. At this stage we are not working on it, but if we see enough folks ask for it then we will reconsider. Sorry, I know that’s not the response you want – but just being straight with you. 
@Igor Mystit – You need to have a property for EF to be able to use a collection, but it doesn’t have to have a setter (or even be public). So you can have a public getter-only property that is always initialized (EF will only try to set it if the collection is null). Alternatively, you can have a private property and then public Add/Remove methods. Having a straight up private field with Add/Remove methods is not supported yet. @Gareth Suarez – That makes sense. Our guidance is to only use EF7 if you are ok with the limitations. EF6.x is still a perfectly valid version to be using and requiring TPT would be one of the good reasons to stay there. BTW there was initially some discussion about whether we would ever implement TPT in EF7, but we’ve had enough feedback to make it clear that it does need to be implemented (and thus we will). @Roger – I don’t have a timeframe on first-class view support. You can tell EF that they are tables, and things will just work. But, you do have to be able to specify a set of properties for EF to use as they key, and you can’t prevent EF from attempting to INSERT/UPDATE/DELETE even if the view is not updateable. BTW view support would make a great contribution if someone else gets to it before us – though it probably won’t be trivial to implement properly. @Warren Postma – I know we connected over email, but just posting the answer here in case it helps someone else… The best place to start is by looking at the existing relational providers. We have a relational package which has a base implementation for relational database providers, which you can extend/customize as needed. We will have some high-level docs soon but we haven’t done anything yet because the code was changing so rapidly that anything we wrote was out of date immediately. We publish our rolling CI build to which is what most folks are building against. It’s usually around 24-48hrs from a change being pushed to our repo to when it shows up in a package on that feed. You can also keep an eye on pending changes that we expect to affect providers by watching the ‘providers-beware’ tag – github.com/…/pulls. @Marcos Paulo – SQLite should now work on x64 with the latest release. Feel free to open an issue if you are still seeing a problem github.com/…/issues. @ JeffGillin – probably best to open an issue on our repo and be sure to include code that shows us how to reproduce it. Then we can work out what is going wrong. github.com/…/issues @DougR & @TTRoadHog – No definite timeline on ATS support. We won’t be bringing the provider back until after the initial 7.0.0 RTM, but I don’t have an exact date that we’ll start work on it again (it’s high priority though). @Bernhard – Stored procedure mapping for INSERT/UPDATE/DELETE is not implemented yet. This would be one of the most common reasons to stick with EF6.x for the time being. You can track the item here github.com/…/245. Rowan, how would I proceed if I wanted to use Identity framework, but didn't want to use EntityFramework? What would I need to implement for e.g. just allowing me to validate a login? @Bernhard – Probably best to follow up on the Identity project github.com/…/Identity. RoleStore and UserStore look like the main ones you need to implement. Nuget package still not working:-(. (version 7.0.0-rc2-16447) I have VS2015 on a Win10 box I am following the GettingStarted guide to the letter and using .Net 4.5.1. 
(though I would prefer 4.6) I create a c# console app and then issue: Install-Package EntityFramework.MicrosoftSqlServer –Pre followed by: Install-Package EntityFramework.Commands –Pre After Nuget installs a CRAZY (25+) number of references I do a build and get the following build error. Error Multiple assemblies with equivalent identity have been imported: '…..ProjectsEF7packagesSystem.Threading.4.0.10-beta-22605libnet45System.Threading.dll' and 'C:Program Files (x86)Reference AssembliesMicrosoftFramework.NETFrameworkv4.5.1FacadesSystem.Threading.dll'. Remove one of the duplicate references If I remove the reference I to the System.Threading.dll I can build the project but then issuing: Add-Migration MyFirstMigration results in Could not load file or assembly 'System.Threading, Version=4.0.10.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified. So it fails it its added, fails if its removed!! Also alot of these references are using up offer 50% of explorers 256 path length which is very bad news. Does this filename realy need to be this long? "Microsoft.Extensions.DependencyInjection.Abstractions" Paul @Paul – I just tried on a Win10/VS2015 machine and did not see the same issue. What version of NuGet Package Manager do you see under Tools -> Extensions & Updates? I have 3.3.0.158 and am wondering if you have something earlier?
https://blogs.msdn.microsoft.com/adonet/2015/10/15/ef7-beta-8-available/?replytocom=78471
CC-MAIN-2018-22
refinedweb
3,411
65.83
derelict-fi 1.0.0

A dynamic binding to the FreeImage library. To use this package, run the following command in your project's root directory:

Manual usage

Put the following dependency into your project's dependencies section:

DerelictFI

A dynamic binding to the FreeImage library, version 3.15, for the D Programming Language. For information on how to build DerelictFI and link it with your programs, please see the post Using Derelict at The One With D. For information on how to load the FreeImage library via DerelictFI, see the page DerelictUtil for Users at the DerelictUtil Wiki. In the meantime, here's some sample code.

import derelict.freeimage.freeimage;

void main() {
    // Load the FreeImage library.
    DerelictFI.load();

    // Now FreeImage functions can be called.
    ...
}

- Registered by Mike Parker
- 1.0.0 released
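The dependency snippet itself appears to have been lost from this copy of the page. For a DUB-managed project it would presumably be an entry along these lines in dub.json (the exact version constraint is a guess):

{
    "dependencies": {
        "derelict-fi": "~>1.0.0"
    }
}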
https://code.dlang.org/packages/derelict-fi/1.0.0
CC-MAIN-2021-43
refinedweb
133
57.87
MotivationEdit You want to test to see if a text matches a specific pattern of characters You want to replace patterns of text with other patterns. You have text with repeating patterns and you would like to break the text up into discrete items. MethodEdit To deal with the above three problems, XQuery has the following functions: matches($input, $regex)- returns a true if the input contains a regular expression replace($input, $regex, $string)- replaces an input string that matches a regular expression with a new string tokenize($input, $regex)- returns a sequence of items matching a regular expression Through these functions we have access to the powerful syntax of regular expressions. Summary of Regular ExpressionsEdit Regular expressions ("regex") are a field unto itself. If you wish to derive full benefit from this way of describing strings with patterns, you should consult a separate introduction. Priscilla Walmsley's XQuery (Chapter 18) has a clear summary of the functionality offered. - fn:matches($input, $regex, $flags) takes a string and a regular expression as input. If the regular expression matches any part of the string, the function returns true. If it does not match, it returns false. Enclose the string with anchors (^ at the beginning and $ at the end), if you only want the function to return true when the pattern matches the entire string. Note that this is different than the XML Schema patterns where ^ and $ are implied. - fn:replace($input, $regex, $string, $flags) takes a string, a regular expression, and a replacement string as input. It returns a new string that is the string with all matches of the pattern in the input string replaced with the replacement string. You can use $1 to $99 to re-insert groups of characters captured with parentheses into the replacement string. - fn:tokenize($input, $regex, $flags) returns an array of strings that consists of all the substrings in the input string between all the matches of the pattern. The array will not contain the matches themselves. In regular expressions, most characters represent themselves, so you are not obliged to use the special regex syntax in order to utilise these three functions. In regular expressions, a dot (.) represents all characters except newlines. Immediately following a character or an expression such as a dot, one can add a quantifier which tells how many times the character should be repeated: "*" for "0, 1 or many times" "?" for "0 or 1 times," and "+" for "1 or many times." The combination "*?" replaces the shortest substring that matches the pattern. NB: this only scratches the surface of the subject of regular expressions! The three functions all accept optional flag parameters to set matching modes. The following four flags are available: - i makes the regex match case insensitive. - s enables "single-line mode" or "dot-all" mode. In this mode, the dot matches every character, including newlines, so the string is treated as a single line. - m enables "multi-line mode". In this mode, the anchors "^" and "$" match before and after newlines in the string as well in addition to applying to the string as a whole. - x enables "free-spacing mode". In this mode, whitespace in regex pattern is ignored. This is mainly used when one has divided a complicated regex over several lines, but do not intend the newlines to be matched. If one do not use a flag, one can just leave the slot empty or write "". 
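The examples below exercise the "i" and "x" flags; the following short sketch (added here for illustration) shows the effect of "s" and "m" on a string containing a newline:

let $input := concat("line one", "&#10;", "line two")
return (
  matches($input, "one.line") = false(),      (: by default the dot does not match a newline :)
  matches($input, "one.line", "s") = true(),  (: "s" turns on dot-all mode, so the dot matches the newline :)
  matches($input, "^line two$", "m") = true() (: "m" lets ^ and $ match at line boundaries as well :)
)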
Examples of matches()Edit let $input := 'Hello World' return (matches($input, 'Hello') = true(), matches($input, 'Hi') = false(), matches($input, 'H.*') = true(), matches($input, 'H.*o W.*d') = true(), matches($input, 'Hel+o? W.+d') = true(), matches($input, 'Hel?o+') = false(), matches($input, 'hello', "i") = true(), matches($input, 'he l lo', "ix") = true() , matches($input, '^Hello$') = false(), matches($input, '^Hello') = true() ) Examples of tokenize()Edit (let $input := 'red,orange,yellow,green,blue' return deep-equal( tokenize($input, ',') , ('red','orange','yellow','green','blue')) , let $input := 'red, orange, yellow, green,blue' return deep-equal(tokenize($input, ',\s*') , ('red','orange','yellow','green','blue')) , let $input := 'red , orange , yellow , green , blue' return not(deep-equal(tokenize($input, ',\s*') , ('red','orange','yellow','green','blue'))) , let $input := 'red , orange , yellow , green , blue' return deep-equal(tokenize($input, '\s*,\s*') , ('red','orange','yellow','green','blue')) ) In the second example, "\s" represents one whitespace character and thus matches the newline before "orange" and the tab character before "yellow". It is quantified with "*" so the pattern removes whitespace after the comma, but not before it. To remove all whitespace, use the pattern '\s*,\s*'. Examples of replace()Edit ( let $input := 'red,orange,yellow,green,blue' return ( replace($input, ',', '-') = 'red-orange-yellow-green-blue' ) , let $input := 'Hello World' return ( replace($input, 'o', 'O') = "HellO WOrld" , replace($input, '.', 'X') = "XXXXXXXXXXX" , replace($input, 'H.*?o', 'Bye') = "Bye World" ) , let $input := 'HellO WOrld' return ( replace($input, 'o', 'O', "i") = "HellO WOrld" ) , let $input := 'Chapter 1 … Chapter 2 …' return ( replace($input, "Chapter (\d)", "Section $1.0") = "Section 1.0 … Section 2.0 …") ) In the last example, "\d" represents any digit; the parenthesis around "\d" binds the variable "$1" to whatever digit it matches; in the replacement string, this variable is replaced by the matched digit. Larger examplesEdit - XQuery/Incremental Search of the Chemical Elements Uses Ajax and a regular expression to search for a chemical element ReferencesEdit The Regular Expression Library has more than 2,600 sample regular expressions: Regular Expression Library This page has a very useful summary of the regular expression patterns: Regular Expression Cheat Sheet This page describes how to use Regular Expressions within XQuery and XPath: XQuery and XPath Regular Expressions
http://en.m.wikibooks.org/wiki/XQuery/Regular_Expressions
CC-MAIN-2015-11
refinedweb
930
53.31
Red Hat Bugzilla – Bug 210313 Man page mmap(2) incorrectly states hint alignment requirement Last modified: 2007-11-30 17:11:45 EST From Bugzilla Helper: User-Agent: Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.8.0.7) Gecko/20060915 Firefox/1.5.0.7 Description of problem: The man page for mmap(2) states the following concerning the address hint passed to mmap(): The address start must be a multiple of the page size. and: EINVAL We don't like start or length or offset. (E.g., they are too large, or not aligned on a PAGESIZE boundary.) In fact, if "start" is unaligned, it is rounded up to the next PAGESIZE boundary by mmap(2). No error is returned unless MAP_FIXED is specified. Instead, the man page should state that if "start" is an unaligned address hint, it will be rounded up to the next PAGESIZE boundary. Version-Release number of selected component (if applicable): man-pages-2.21-1 How reproducible: Always Steps to Reproduce: 1. Save the following program as m.c: #include <sys/mman.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <stdio.h> #include <stdlib.h> int main() { void *ptr; int fd; if ((fd = open("m.c", O_RDWR)) < 0) { perror("m.c"); exit(1); } if ((ptr = mmap((void *)1, 0x500, PROT_READ, MAP_SHARED, fd, 0)) == MAP_FAILED) { perror("mmap"); (void) close(fd); exit (1); } printf("mmap #1 succeeded, ptr @ %p = 0x%x\n", ptr, *(int *)ptr); (void) close(fd); exit(0); } 2. Compile m.c (gcc m.c) 3. Run the program and notice it does not fail with EINVAL. Actual Results: # ./a.out mmap #1 succeeded, ptr @ 0x1000 = 0x636e6923 Expected Results: According to the man page, this should have failed with the same results given if MAP_FIXED is specified: mmap: Invalid argument Additional info: Fixed in man-pages-2.41-2.fc7.
https://bugzilla.redhat.com/show_bug.cgi?id=210313
CC-MAIN-2017-17
refinedweb
316
68.97
Notes from Well House Consultants These notes are written by Well House Consultants and distributed under their Open Training Notes License. If a copy of this license is not supplied at the end of these notes, please visit for details.. You are NOT allowed to charge (directly or indirectly) for the copying or distribu- tion of these notes, nor are you allowed to charge for presentations making any use of them. Maintaining State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Programming techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Sessions in Servlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17! server s/w request reply Servletrunner running servlet.*; outgoing.setContentType("text/html"); doGet is the servlet method that's called when you follow a link to a servlet using an "a href=" type link, or if you follow a link from an image map, or if you complete a form using "method=get". Note that doGet is not a static method; you may have multiple instances of the object available if appropriate, and even a simple servlet like this can run multithreaded. outgoing.setContentType("text/html"); If you've written Web pages before, they probably have names ending in ".htm" or ".html", which is a signal to the Web server that these pages are written in HyperText Markup Language. The Web server will pass on this information (as part of a header) to the browser, which in turn knows how to interpret all the various tags. Other data types are also available, for example ".txt" files will be taken as plain text by the Web server; it also knows what to do with files like ".gif", ".jpg" and a host of others. It's not so easy with a servlet. Although most output pages will be written in HTML, some won't be. Servlets could produce plain text, images (that's a natural for Java) and even sound files, and the web server needs to tell the browser. It does so by setting the content type, even if you feel that it's obvious, from the content itself. Note that the content type must be set before any content is written. You need an output stream on which to write your output and this line of code gets you an output stream which is connected to your HttpServletResponse object. You can then use your PrintWriter object to reply to the client. out.println("<html><head><title>Hello!</title></head>"); out.println("<body bgcolor=\"white\"> href=" links (to which we can add our own data after a ? character if we wish!). It's also the default for forms and it's used by image maps. But we need an alternative too. The POST method can be used as an alternative to the GET method to handle input from a form; the 1k limit is swept away, and the "posted" data does not get echoed as part of the URL so it means that passwords and hidden fields can be a little more secure. Posted requests are handled by a servlet method called doPost (called in the same was as doGet). The posted data can be accessed through the same high level methods as we studied under doGet, or at the lower level you can: BufferedReader inpost = incoming.getReader(); and then read in raw text, or ServletInputStream instr = incoming.getInputStream(); if you're expecting binary data. General advice is to use the POST method in preference to the GET method for forms, and you should normally use the higher level access methods. There will be exceptions, for example, you may be using a servlet to upload files from a client. 
I myself upload images to my server in this way rather than through FTP. Although rarely used, HTTP and servlets also support PUT and DELETE methods, allowing data to be put onto the server, and allowing server files to be deleted. Just from this short description, you'll understand why they're not often provided for the general public to use on the World Wide Web! 2.4 The life of a servlet The simple servlets that we've looked at so far have simply run, but you may have noticed during practicals that the first use of each servlet seemed slower than subse- quent uses, or that the servletrunner log contains differing information. When a servlet is first called, the server loads it and runs the servlet's init method. The running of this method will be completed before any of the service methods are started. Once a servlet has been initialised, service methods such as doGet and doPost are run as required. When all service methods have been completed, and/or after a certain timeout, your servlet may be destroyed (using the destroy method). Initialisation The init method in the base class may provide all you need for a simple servlet, but you'll wish to extend it for more complex servlets. You might want to establish connections to databases (and throw errors if they're not available), populate hash tables (if your servlet provides a translation facility), etc. You do still need to call the init method in the base class, so a typical init method may look like: 1. ftp 192.168.200.190 log in as trainee, password abc123 2. cd /usr/local/tomcat/webapps 3. cd octj/WEB-INF/classes 4. get Tempconv.java 5. Quit 6. Edit and save as [yourname.java] repeat 1 2 3 7. put [yourname.java] 8. quit 9. telnet 192.168.200.190 (trainee, abc123) 10. repeat 2 and 3 11. javac [yournmae.java] 12. cd .. 13. modify the file web.xml to refelect the new servlet 14. Restart the web app via the tomcat manager 15. exit 16 browse via[yourname] If you are not familiar with Tomcat configuration, the tutor will assist you with steps 12 to 14. 2.5 Maintaining State When you visit your local supermarket, you don't have to wait for their previous client to leave the store before you can enter. And you’ll want to have the same flex- ibility with your servlets. Imagine that I'm using a servlet to order my groceries. I'll be completing a number of forms and calling on the server to "doGet" or "doPost" from time-to-time. But most of the time, I'll be filling in the next form, selecting whether I want Cheddar or Cheshire, walking over to the bathroom to see how much soap we have in the cupboard, etc. When I do come back, I want to carry on with my own order, even though I'm starting a fresh run of (probably) doPost . I certainly don't want to rake over the order of another more recent arrival in the supermarket, who's organising a party and who has ordered 48 cans of beer and loads of nibbles! (Nor does he want my soap and cheese.) We need sessions – or "shopping carts" – and we need to keep track of who is using which. There are a number of methods of doing this. Session Objects An HttpSession object is part of the HttpServletRequest that's passed in to the doGet or doPost methods. The getSession method returns you the actual session object: (the boolean parameter being set to true tells the servletrunner to create a new session if one doesn't exist). Once created, sessions have unique IDs which can be used for user tracking. 
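The code line this paragraph refers to (after "returns you the actual session object:") was lost in extraction. A minimal sketch of such a call inside doGet or doPost, assuming the request parameter is named request as in the later Barman/Landlord fragments; the variable names are illustrative:

HttpSession session = request.getSession(true);  // create a new session if one doesn't already exist
String id = session.getId();                     // each session has a unique ID, usable for user tracking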
Within a session object, you'll typically want to manage an object of type ShoppingCart; you can do so by setting up a shopping cart and associating it with the session object. This snippet of code opens up a session (creating a new one if necessary), and assigns a shopping cart to that session (once again, creating it if neces- sary). Data is then added to that shopping cart from a parameter on the form. Once you've finished with a session (for example, after the user has visited the checkout in your store), you'll want to clear out the session. You can use the invalidate method to do this. Some servers automatically invalidate sessions after a certain time, others may need to be explicitly cleared. Session objects provide a useful and convenient way of tracking users, but they won't always work for you in the manner that we've shown above. Internally, the "Cookie" facility of browsers is used, and some users will not accept cookies, or are running mature browsers that don't support cookies; you need to use an alternative method for sites that are required to support such users. Rewriting URLs If cookies aren't available, you may use URL rewriting. This technique (not supported by ServletRunner) requires you to encode your session ID within links to subsequent pages. If the user fills in a form or follows a link based on the re-written URL, the servlet recognises the session ID and uses it to retrieve the appropriate HttpSession object. Hidden fields If you have a series of forms following on from one another, this one will work no matter what the browser or the server, and whether or not the user is accepting cookies. Personally, I use it all the time for some quite major Web applications, with users visiting from Internet Explorer, Netscape, AOL, Compuserve and others. The scheme is similar to URL re-writing ... As a user enters the site (calls up the servlet for the first time), he can be clearly identified because the form he's completed does not include a field called (say) "missionid". This is the trigger for a SessionID and shopping cart to be created for him, and might also be the trigger to validate our user, his password, etc, before letting him proceed further into the site. All subsequent pages sent out to the user during his session include a hidden field, named "missionid", and a unique reference to the SessionID (use the toString method) and allow us to identify who the user is when he comes back to us. Whilst it is convenient for us to use the session facility provided in servlets to provide this capability, for more secure sites we've provided our own facility which includes a "channel hopping" type capability. The contents of the missionid field change between each and every page which blocks meaningful backtracking and disa- bles anyone walking up to the machine later from entering the system through intermediate cached pages. 2.6 Programming techniques This subsection isn't servlet specific – it describes techniques and tips that you'll want to use whether you're writing Java Servlets or scripts in Perl to run via CGI. Webifying output You're generating response pages from within your program, and those response pages are in HTML. Your code may look like: 1 run a "dir" command on DOS or an "ls -l" on Unix or Linux return say; } { if (count == null) { count = new Integer(0); out.print("<h1>Welcome. 
Please enter your name</h1>");
    out.print("<form action=/nan/servlet/Barman>");
    out.print("<input name=whoyouare>");
    out.print("</form>");
} else {
    String wanted = request.getParameter("whoyouare");
    if (wanted != null) {
        session.setAttribute("who",wanted);
    } else {
        wanted = (String)session.getAttribute("who");
    }

:::::::::::::: Landlord.java ::::::::::::::

import java.io.*;
import java.text.*;
import java.util.*;
import javax.servlet.*;
import javax.servlet.http.*;

/**
 * Simple Session Tracking
 * Also reports on all users
 */

if (count == null) {
    count = new Integer(0);
    out.print("<h1>Welcome. Please enter your name</h1>");
    out.print("<form action=/nan/servlet/Landlord>");
    out.print("<input name=whoyouare>");
    out.print("</form>");
    People.addElement(session);
} else {
    String wanted = request.getParameter("whoyouare");
    if (wanted != null) {
        session.setAttribute("who",wanted);
    } else {
        wanted = (String)session.getAttribute("who");
    }

Exercise

License
These notes are distributed under the Well House Consultants Open Training Notes License. Basically, if you distribute it and use it for free, we'll let you have it for free. If you charge for its distribution or use, we'll charge. License Ends.
https://www.scribd.com/document/37219156/Java-Servlets-Intermediate
CC-MAIN-2019-35
refinedweb
1,981
62.27
RTL-SDR. Decryption is not covered in this tutorial. First, you will need to find out at what frequencies you have GSM signals in your area. For most of the world, the primary GSM band is 900 MHz, in the USA it starts from 850 MHz. If you have an E4000 RTL-SDR, you may also find GSM signals in the 1800 MHz band for most of the world, and 1900 MHz band for the USA. Open up SDRSharp, and scan around the 900 MHz (or 850 MHz) band for a signal that looks like the waterfall image below. This is a non-hopping GSM downlink signal. Using NFM, it will sound something like the example audio provided below. Note down the strongest GSM frequencies you can find. The rest of the tutorial is performed in Linux and I will assume that you have basic Linux skills in using the terminal. For this tutorial I used Kali Linux in a VMWare session. You can download the VMWare image here, and the free VMWare player from here. Note that virtual box is reported not to work well with the RTL-SDR, as its USB bandwidth capabilities are poor, so VMWare player should be used. Update: Note that the latest version of Kali Linux comes with GNU Radio pre-installed, which should allow you to skip right to the Install Airprobe stage. Open up Kali Linux in your VMWare player and login. The default username is root, and the password is toor. Install GNU Radio You will need to install GNU Radio first in order to get RTL-SDR to work. An excellent video tutorial showing how to install GNU Radio in Kali Linux can be found in this video shown below. Note that I had to run apt-get update in terminal first, before running the build script, as I got 404 not found errors otherwise. You can also use March Leech’s install script to install the latest version of GNU Radio on any Linux OS. Installation instructions can be found here. I recommend installing from source to get the latest version. Update: The new version 3.7 GNU Radio is not compatible with AirProbe. You will need to install GNU Radio 3.6. However, neeo from the comments section of this post has created a patch which makes AirProbe compatible with GNU Radio 3.7. To run it, place the patch file in your airprobe folder and then run patch -p1 < zmiana3.patch. Install Airprobe Airprobe is the tool that will decode the GSM signal. I used multiple tutorials to get airprobe to install. First from this University of Freiberg tutorial, I used their instructions to ensure that the needed dependencies that airprobe requires were installed. Install Basic Dependencies sudo apt-get –y install git-core autoconf automake libtool g++ python-dev swig libpcap0.8-dev Update: Thanks to shyam jos from the comments section who has let us know that some extra dependencies are required when using the new Kali Linux (1.0.5) for airprobe to compile. If you’ve skipped installing GNURadio because you’re using the new Kali 1.0.5 with SDR tools preinstalled, use the following command to install the extra required dependencies. sudo apt-get install gnuradio gnuradio-dev cmake git libboost-all-dev libusb-1.0-0 libusb-1.0-0-dev libfftw3-dev swig python-numpy Install libosmocore git clone git://git.osmocom.org/libosmocore.git cd libosmocore autoreconf –i ./configure make sudo make install sudo ldconfig Clone Airprobe Now, I discovered that the airprobe git repository used in the University tutorial (berlin.ccc.de) was out of date, and would not compile. From this reddit thread I discovered a more up to date airprobe git repository that does compile. Clone airprobe using the following git command. 
git clone git://git.gnumonks.org/airprobe.git

Now install gsmdecode and gsm-receiver.

Install gsmdecode

cd airprobe/gsmdecode
./bootstrap
./configure
make

Install gsm-receiver

cd airprobe/gsm-receiver
./bootstrap
./configure
make

Testing Airprobe

Now, cd into the airprobe/gsm-receiver/src/python directory. First we will test Airprobe on a sample GSM cfile. Get the sample cfile, which I found from this tutorial, by typing into terminal:

cd airprobe/gsm-receiver/src/python
wget

Note: The tutorial and cfile link is sometimes dead. I have mirrored the cfile on megaupload at this link. Place the cfile in the airprobe/gsm-receiver/src/python folder.

Now open Wireshark by typing wireshark into a second terminal window. Wireshark is already installed in Kali Linux, but may not be in other Linux distributions. Since Airprobe dumps data to a UDP port, we must set Wireshark to listen to this. Under Start in Wireshark, first set the capture interface to lo (loopback), and then press Start. Then in the filter box, type in gsmtap. This will ensure only Airprobe GSM data is displayed.

Back in the first terminal, which is in the python directory, type in

./go.sh capture_941.8M_112.cfile

If everything installed correctly, you should now be able to see the sample GSM data in Wireshark.

Receive a Live Channel

To decode a live channel using RTL-SDR, type in terminal

./gsm_receive_rtl.py -s 1e6

A new window will pop up. Tune to a known non-hopping GSM channel that you found earlier using SDRSharp by entering the Center Frequency. Then, click in the middle of the GSM channel in the Wideband Spectrum window. Within a few seconds some GSM data should begin to show constantly in Wireshark. Type ./gsm_receive_rtl.py -h for information on more options. The -s flag is used here to set the sample rate to 1.0 MSPS, which seems to work much better than the default of 1.8 MSPS, as there should be only one GSM peak in the wideband spectrum window.

Capturing a cfile with the RTL-SDR (Added: 13/06/13)

I wasn't able to find a way to use Airprobe to capture my own cfile. I did find a way to capture one using ./rtl_sdr and GNU Radio however. First save an rtl_sdr .bin data file using the following command, where -s is the sample rate, -f is the GSM signal frequency and -g is the gain setting (rtl_sdr is stored in 'gnuradio-src/rtl-sdr/src'):

./rtl_sdr /tmp/rtl_sdr_capture.bin -s 1.0e6 -f 936.6e6 -g 44.5

Next, download this GNU Radio Companion (GRC) flow graph (scroll all the way down for the link), which will convert the rtl_sdr .bin file into a .cfile. Set the file source to the capture.bin file, and set the file output to a file called capture.cfile, which should be located in the 'airprobe/gsm-receiver/src/python' folder. Also, make sure that 'Repeat' in the File Source block is set to 'No'. Now execute the GRC flow graph by clicking on the icon that looks like grey cogs. This will create the capture.cfile. The flow graph will not stop by itself when it's done, so once the file has been written press the red X icon in GRC to stop it running.

The capture.cfile can now be used in Airprobe. However, to use this cfile, I found that I had to use ./gsm_receive.py rather than ./go.sh, as a custom decimation rate is required. I'm not sure why, but a decimation rate of 64 worked for me, which is set with the -d flag.

./gsm_receive.py -I rtl_sdr_capture.cfile -d 64

Going Further

I have not been able to decode encrypted GSM data myself, but if you are interested in researching this further, here are some useful links.
Disclaimer: Only decrypt signals you are legally allowed to (such as from your own cell phone) to avoid breaching privacy. A Guide by Security Research Labs GSM Decoding Tutorial by the University of Norwegian Science and Technology A5 Wiki A good lecture on this topic is shown below. Is anybody else getting this error: I cant install airprobe/gsm-reciever. When I try to “make” it gives me this error: g++: error: ./gsm.cc: No such file or directory g++: fatal error: no input files compilation terminated. make[4]: *** [_gsm_la-gsm.lo] Error 1 make[4]: Leaving directory `/root/airprobe/gsm-receiver/src/lib’ I can’t find this gsm.cc file anywhere?! I got the same error. did u already managed to compile it? I fond another airprobe src, compiling fine…. but there are some other issues. I had this issue to, and the “no module named _GSM” Turns out the error was caused by using an out of date/old version of Airprobe, that wasnt compiling correctly. Since gnumonks was gone I had to find another on that would compile correctly – I got it working with this one: Github I cloned the same repo on github but still I am getting the same error “ImportError: No module named _gsm” . Please let me know if something else needs to be done. hi,please help me,thanks. root@kali:~/airprobe/gsm-receiver/src/python# ./gsm_receive_rtl.py -s 1e6 Traceback (most recent call last): File “./gsm_receive_rtl.py”, line 22, in import osmosdr ImportError: No module named osmosdr Hi, I followed all the steps and it works nicely until I click in the wideband spectrum window. It just doesn’t do anything, it doesn’t show anything on wirehsark either. I’m using a HackRF, what could be the problem? Also tried in Kali 1.0.8 vm and get: linux; GNU C++ version 4.7.2; Boost_104900; UHD_003.005.003-0-unknown Traceback (most recent call last): File “./gsm_receive_rtl.py”, line 27, in import gsm File “../lib/gsm.py”, line 26, in _gsm = swig_import_helper() File “../lib/gsm.py”, line 18, in swig_import_helper import _gsm ImportError: libosmocore.so.5: cannot open shared object file: No such file or directory Anybody any ideas how to make this work in Kali 1.0.8? Answered my own problem. Run the following if using Kali 1.0.8 before the airproble download and setup: sudo ln -s /usr/local/include/gruel/swig/gruel_common.i /usr/local/include/gnuradio/swig/ && ldconfig seems to be working on my VM now hi to day i installed kali linux 1.0.8 with gnuradio preinstalled i follow the totrial how to install airprobe and apply the patch zmiana.patch all thing work fine but when i apply the test of airprobe i go this message: Traceback (most recent call last): File “./gsm_receive.py”, line 11, in import gsm File “../lib/gsm.py”, line 26, in _gsm = swig_import_helper() File “../lib/gsm.py”, line 18, in swig_import_helper import _gsm ImportError: No module named _gsm please help me i have 5 days try !!IMPORTANT!! hey, my name is hans and ive got a simple question (im a newbie in this section): Is it possible to detect the count of smartphones near me with gsm analyzazion? and if not, could u imagine some way to do this? i know its not that easy, but ive several months to do this – i just need to know its possible regards, hans Hi Hans, what area are you in? hi, i’m made cfile with a terratec e4000 usb card, but unfortunately i cant find a way how to decode this. when i write “./go.sh /tmp/capture-rtl-sdr.cfile 64 1S” everything looks fine in console, but in wireshark have nothing. 
Instead of when write “./go.sh /tmp/capture-rtl-sdr.cfile 64 0C” then wireshark show traffic but not system information 5 or 6 so im uploaded my cfile, and if somebody can try and eventually find where im in wrong, i will appreciate Hi I had been using gsm_receive_rtl.py with version 1 of zmiana patch, and it worked OK. However, I couln’t make go.sh work with any capture file, like capture_941.8M_112.cfile or vf_call6_a725_d174_g5_Kc1EF00BAB3BAC7002.cfile. Now I read neeo comment about a new patch version and I applyed it, but I got same rerults: gsm_receive_rtl.py working OK but file decoding not working. Neeo, what options should I use to try with vf_call6_a725_d174_g5_Kc1EF00BAB3BAC7002.cfile, which is the file should work, isn’t it? Thanks! you need to change clock_rate in python code to 100e6, and use decim = 174 for vf_call6_a725_d174_g5_Kc1EF00BAB3BAC7002.cfile Then I removed “-I” and get: configure.ac:16: required file `./config.guess’ not found configure.ac:16: `automake –add-missing’ can install `config.guess’ configure.ac:16: required file `./config.sub’ not found configure.ac:16: `automake –add-missing’ can install `config.sub’ configure.ac:5: required file `./install-sh’ not found configure.ac:5: `automake –add-missing’ can install `install-sh’ configure.ac:16: required file `./ltmain.sh’ not found configure.ac:5: required file `./missing’ not found configure.ac:5: `automake –add-missing’ can install `missing’ src/Makefile.am: required file `./depcomp’ not found src/Makefile.am: `automake –add-missing’ can install `depcomp’ autoreconf: automake failed with exit status: 1 I get this: autoreconf: ‘configure.ac’ or ‘configure.in’ is required after “autoreconf –i” hey all , excuse me because of reapiting this question ! when i run the ./gsm_receive_rtl.py i take this error : inux; whould you please tell me exactly how could i solve this problem ? —————————– and , another question is that when i run patchs , it asks me a File name and i give the file name but it asks for ignoring them , —————————————————- please tell me how to do patchs ! soooo sooorryy and tnxxxx a lot for ans. ——————————————— I have installed this on Kali 1.0.6 in VirtualBox, however when I run ./gsm_receive_rtl.py -s 1e6 after detecting the RTLSDR I have an error thrown; Traceback (most recent call last): File “/usr/lib/python2.7/dist-packages/gnuradio/wxgui/plotter/plotter_base.py”, line 203, in _on_paint for fcn in self._draw_fcns: fcn[1]() File “/usr/lib/python2.7/dist-packages/gnuradio/wxgui/plotter/plotter_base.py”, line 63, in draw GL.glCallList(self._grid_compiled_list_id) File “/usr/lib/python2.7/dist-packages/OpenGL/error.py”, line 208, in glCheckError baseOperation = baseOperation, OpenGL.error.GLError: GLError( err = 1280, description = ‘invalid enumerant’, baseOperation = glCallList, cArguments = (1L,) ) Thanks in advance for any help. hi, i’ve updated the patch for 3.7 a little bit – link – now gsm_receive_rtl.py works as well (can be used to live capture) as noticed by Storyman, the go.sh doesn’t work for example capture file mentioned in article – maybe the file needs some other clock_rate (it wasn’t my testing target in the first place). I was able however to decode srlabs file correctly (with clockrate 100e6) and with 64e6 (default) I’m able to decode files captured with my rtl-sdr. Thanks for the update, and the extra info. I was able to replicate your result! In the process of messing around with it, I uncovered a problem, too. 
I noticed that when I clicked the coarse tune window, it was behaving oddly. I tracked the bug down to this: When gr moved from 3.6 to 3.7, gr::filter::freq_xlating_fir_filter_XXX changed to require the negative of the old value. that is, an offset of -200000 in gr3.6 should be +200000 in gr3.7. The fix — change this line: self.offset = -x to self.offset = x However, that got me thinking about what else that sign change could be messing up. Sure enough… there is a tuner correction function built in there, where the gsm receiver function sends back a frequency correction to the top_block. So I performed the following minor surgery to gsm_receive.py: class tuner(gr.feval_dd): def __init__(self, top_block): gr.feval_dd.__init__(self) self.top_block = top_block def eval(self, freq_offset): self.top_block.set_center_frequency(freq_offset) return freq_offset becomes: class tuner(gr.feval_dd): def __init__(self, top_block): gr.feval_dd.__init__(self) self.top_block = top_block def eval(self, freq_offset): self.top_block.set_center_frequency(0 - freq_offset) return 0 - freq_offset Aaaaand just like that — capture_941.8M_112.cfile decodes properly under gr3.7 now Oh, just wanted to say, there’s probably a cleaner approach to fixing these errors. There may be a central point where we can just do a sign change and fix them all or something. I haven’t really investigated any further yet. I was just so happy to get the example cfile to read, finally, that I just rushed here to say how you’re absolutely right Storyman – the clearly states that the change of the sign is needed (but I did abs() – so that’s my mistake). new version: (I did the change in a different location – but it works as well). Well, i went through all the comments on this page. It does appear from the comments that airprobe only works on kali-linux. Is that so? As i m trying to install airprobe on relatively older version of ubuntu i.e. ubuntu 10.04. So is that worth-less to do so? No it should work on any Linux not just Kali. People just use Kali because airprobe can be very hard to install and Kali somewhat simplified it by having the GNU Radio prerequisite preinstalled. Also forgot to mention, as per SopaXorzTaker, that one should do make in /src/python/lib and copy gsm.py into /src/python Worth noting are these patches for gnuradio 3.7: Forgot to mention neither patches are mine, first is by scateu and second is (c) 2014 SopaXorzTaker Christopher, I’ve applied both patches, and the programs run, but they don’t produce valid output like they do for me under gr3.6. Have you (or anyone, really) actually gotten to a 100% usable state with gr3.7? Even testing against the capture_941.8M_112.cfile file produces a stream of “sch.c:260 ERR: conv_decode 11″ under gr3.7, doing the same test in the same manner as under gr3.6 (which worked perfectly). Has ANYONE overcome this problem yet? And if so, are you able to share any hints as to how? Thanks! Hi Guys… I have tried to install the Kali 1.0.6 and then GNURadio 3.7. I have read about the incompatibility with airprobe and I also applied a patch and all worked ok. When I run the with caputer*.cfile it fails like this: root@kali:~/airprobe/gsm-receiver/src/python# ./go.sh capture_941.8M_112.cfile 112 0b Using Volk machine: avx_64_mmx 10 sch.c:260 ERR: conv_decode 11 sch.c:260 ERR: conv_decode 12 sch.c:260 ERR: conv_decode 11 sch.c:260 ERR: conv_decode 11 sch.c:260 ERR: conv_decode 10 …. And nothing shows up on Wireshark. 
Worst if I try to run: root@kali:~/airprobe/gsm-receiver/src/python# ./gsm_receive_rtl.py -f 939.363M -c 0B Traceback (most recent call last): File “./gsm_receive_rtl.py”, line 16, in from gnuradio import gr, gru, eng_notation, blks2, optfir ImportError: cannot import name blks2 I get this python error. Seems like there is no patch applied to the IMPORT function of python related to GNURadio 3.7 Any idea? Problem with python too ;( OFFTOPIC: go away scriptkiddies! I solved the problem by installing Kali 1.0.6 where GNURadio 3.6.5 is pre-installed. Then downloaded and compiled airprobe. I have also installed osmocombb RTLSDR libraries to make Kalibrate working. By running the live capture using gsm-receive i raised the gain to 52 and et voilà … 20 seconds later GSM dataflow showing up on Wireshark. My advise is not to install GNURadio 3.7 and keep on working with pre installed version on GNURadio on Kali Linux 1.0.6 Now fixed and working on Ubuntu | | | | \ / V for step 1 i.e. identifying the exact GSM frequency, one can use kal its self to determine the GSM frequency (instead of of via SDR# or gqrx) as long as you know the GSM band (quite easy) e.g. kal -s 900 (scan GSM band 900 for all GSM signals) output will be something along the lines of chan: 1 (908.3MHz – 21.3243kHz) power:xxxxx.xx chan: 2 (909.5MHz – 22.1231kHz) power:xxxxx.xx chan: 3 (907.2MHz – 20.3223kHz) power:xxxxx.xx choose a channel which shows a high power value (i.e. good reception) translate the corresponding frequency to hz e.g. assuming channel 3 has the highest power value of the received channels 907.2Mhz would translate to 907 200 000hz modify your frequency in the gsm_receive_rtl.py to the corresponding frequency e.g. gsm_receive_rtl.py -s 1e6 -f 907200000 Not sure if anyone else had issues running the apt-get install commands, but I did. I ended up installing Ubuntu’s software center and was able to search for the various packages through there. When I tried installing packages through the command line more than half said they did not exist (?) Just thought I’d share this tip in case anyone has the same issue. I used Kali Linux. How to execute .patch file? cd thedirectorycontainingthesource patch -p1 < mypatch.patch If that doesn't work try with -p0 instead of -p1. hello when i try to compile airprobe to decode GSM signals with gnuradio radio i follow the steps, my problem is when I compile the gsm-receiver with the command make, comethe have installed Kali Linux 1.06 new but dont work airprobe why can someone help me please? the error for comiling Airprobe i have found the problem the path rt-sdr thre must be compiled with ./bootstrap and ….. make and airprobe gsm decode are going Hello! When I am trying to use 1e6 on the sample rate, I can’t change the frequency or time/fne tune to the right frequency. The wideband spectrum waves is moving very slow also the channel apectrum waves. How can i fix it? Thanks! You need more CPU power. I had the same issue when I used a Vmware virtual machine, adding one more CPU core in the config solved this problem for me. Real-time sampling takes a lot of CPU power. Oh.. I’m trying to run it on atom processor. That’s bad. I guess I can’t use other saple rate. Cause I can tune when I use the default sample rate. Thank you! Very interesting tutorial! Is it possible to see when a User End-device is opening and closing PDP-sessions for the GPRS? 
Hello , i have install gnuradio-3.6.5.1 and airprobe , okey its fine working i have see data my terminal and decode data in my wireshark window but I do not hear any sound . i dont know , fWhat should I do to hear sound , i must should install VMWare player or not ? Please help me ,thank you and best regards . no, it should act like that. also, how old are you? Well, despite I could install airprobe with gnuradio 3.7 using the patch, I still couldn’t decode any example file (tried with capture_941.8M_112.cfile and vf_call6_a725_d174_g5_Kc1EF00BAB3BAC7002). I get this: ./go.sh capture_941.8M_112.cfile 64 0b Using Volk machine: ssse3_32_orc Key: ’0000000000000000′ Configuration: ’0. And nothing appears in wireshark. If I use other decimation ratios, for example 112: ./go.sh capture_941.8M_112.cfile 112 0b Using Volk machine: ssse3_32 11 … Any ideas? Thanks! Hi, I’m having a problem very similar to OI. When I run: ./go.sh capture_941.8M_112.cfile I get: Traceback (most recent call last): File “./gsm_receive.py”, line 15, in import gsm File “../lib/gsm.py”, line 26, in _gsm = swig_import_helper() File “../lib/gsm.py”, line 18, in swig_import_helper import _gsm ImportError: ../lib/.libs/_gsm.so: undefined symbol: _Z14gr_fast_atan2fff I’ve seen the comment from Andy, but my libfftw3-dev package is in its most recent version. Any ideas? Thanks! Sorry, I hadn’t noticed that my problem could be related with the gnuradio version. I tryed with the neeo patch, and now it seems to work. Thanks! I’ve made a patch to make gsm-receiver (from gnumonks airprobe) compatible with gnuradio >= 3.7. it is a little bit hacky im some places, but it works for me you can get it here: sorry, link didn’t show up: i’ve also created a new version of grc file, that can be loaded in gnuradio-companion (grc) 3.7 Could you please provide the patch in a way that does not require an EXE file to download? You could create a fork of the code on github.com for example (or e-mail the patch to me so I can host it, my email is linked from my homepage). No need to use their executable downloader… just click the filename at the top of the page and it will download normally with the browser. Nice one neeo, but how did you get past the error concerning gnuradio-core, since it was removed in 3.7 you must have solved this problem as well This happens when you try to run the ./configure script. Errors like this: checking for GNURADIO_CORE... configure: error: Package requirements (gnuradio-core >= 3) were not met: No package 'gnuradio-core' found Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables GNURADIO_CORE_CFLAGS and GNURADIO_CORE_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. 
And suddenly it worked, after running bootstrap again When I install gsm-receiver of airprobe,the error occurred.How to fix this: ======================================== In file included from GSMCommon.h:34:0, from GSMCommon.cpp:23: ./Timeval.h: In function ‘void msleep(long int)': ./Timeval.h:32:49: error: ‘usleep’ was not declared in this scope In file included from GSMCommon.cpp:23:0: GSMCommon.h: In function ‘void GSM::sleepFrames(unsigned int)': GSMCommon.h:62:36: error: ‘usleep’ was not declared in this scope GSMCommon.h: In function ‘void GSM::sleepFrame()': GSMCommon.h:66:29: error: ‘usleep’ was not declared in this scope make[5]: *** [GSMCommon.lo] error 1 make[5]: Leaving directory `/root/airprobe/gsm-receiver/src/lib/decoder/openbtsstuff’ make[4]: *** [all-recursive] error 1 make[4]: Leaving directory `/root/airprobe/gsm-receiver/src/lib/decoder’ ============================================= Has anyone used Kraken? I have it installed on my machine with tables and I’m not sure how to point or configure Kraken or find_kc toward the tables on the HD. I’m a rather new Linux user. I get an error i don.t understand. im using latest version of debian :/ ./gsm_receive_rtl.py linux; GNU C++ version 4.7.2; Boost_104900; UHD_003.006.002-1-g8f0f045c gr-osmosdr v0.0.2-42-g86ecf305 (0.0.3git) gnuradio 3.6.5.1 built-in source types: file fcd rtl rtl_tcp uhd hackrf bladerf netsdr Using device #0 Realtek RTL2838UHIDIR SN: 00000001 Found Rafael Micro R820T tuner sample rate: 1800000 >>> gr_fir_ccc: using SSE >>> gr_fir_ccf: using SSE Key: ‘ad6a3ec2b442e400′ Configuration: ‘0B’ Configuration TS: 0 configure_receiver Using Volk machine: sse4_2_64_orc The program ‘python’ received an X Window System error. This probably reflects a bug in the program. The error was ‘BadWindow (invalid Window parameter)’. (Details: serial 629 error_code 3 request_code 137.) Hey all, For those of you in the states, have any of you guys had any luck with this? Our possible ranges leave only 1 of the 4 bands usable if using the RTL SDR seeing as the max range is ~1700 (GSM for the states for AT&T and T-Mobile are within 850, 1700, 1900, and 2100 I believe). Therefore, I have only been able to attempt 850mhz band, but with no such luck. I am currently using a simple TV Antenna. Given the comments for this article, even the stock antenna that comes with the RTL SDR can pick this up. Any thoughts as to what I may be doing wrong? I think that once I find a non-hopping signal, I will be set. In the meantime, I can only find MOTORBO signals within this range. Thoughts? Thank you so much for the tutorial! As soon as I finished reading it, I went out and bought the Terratec E4000. Unfortunately, I am having the same troubles as some of the others. After I installed Airprobe, I got this error message: root@XXXX:~/sdr/airprobe/gsm-receiver/src/python# ./go.sh capture_941.8M_112.cfile Traceback (most recent call last): File “./gsm_receive.py”, line 3, in from gnuradio import gr, gru, blks2 ImportError: cannot import name blks2 I even tried removing the GNURadio that comes with Kali, and instead installed it in the fashion described in the video-tutorial in your post. But nothing seems to work. I tried googling the problem, and have now spent several days trying to figure it out – unfortunately without any luck. I hope someone can help me with this problem. All the best, //Dennis Hi, I have installed the gnuradio 3.7. 
But when I tried to install gsm-receiver after step “./configure”, I got a error like this “Package requirements (gnuradio-core >= 3) were not met”. I googled the problem. It seems the new version gnuradio is not compatible with the airprobe. Do you have any ideal to fix it? Many Thanks Great tutorial…the clearest yet! I did have to download many dependencies on my fresh install of Kali in order to install gsm-receiver but now it installed correctly. When I try to run gsm_receive_rtl.py I get the following errors: linux; any idea what this is? Attached rtl2832-cfile.grc does not work in modern version of gnuradio. Trying in v3.7 gives a lot of errors. I know that asking for a port maybe asking too much. Could at least a picture of the schematic be posted? This is Ajay here, When I use ./go.sh with the downloaded cfile, everything is fine. When I make my own cfile using usrp+gnuradio+airprobe ./gsm_scan.py -pe -re -d174 -c643 I get the cfile but the decode does not happen using ./go.sh ?? Can anyone help me with how to capture a valid cfile using USRP+GNURADIO ? I have been trying for a long time, pls help. Install Kali and simple run a script as root from /root folder: apt-get -y install git-core autoconf automake libtool g++ python-dev swig libpcap0.8-dev apt-get install gnuradio gnuradio-dev cmake git libboost-all-dev libusb-1.0-0 libusb-1.0-0-dev libfftw3-dev swig python-numpy cd ~/sdr git clone git://git.osmocom.org/libosmocore.git cd libosmocore autoreconf -i ./configure make sudo make install sudo ldconfig cd ~/sdr git clone git://git.gnumonks.org/airprobe.git cd airprobe/gsmdecode ./bootstrap ./configure make cd ~/sdr cd airprobe/gsm-receiver ./bootstrap ./configure make cd ~/sdr how change ip in wireshark to 10.0.0.0/16 LAMER! Hi, I’m a Noob here. Running ./go.sh capture_941.8M_112.cfile 112 1S on the cfile mentioned in the tutorial shows SI 5 & 6 frames. However, I’ve been unsuccessful in getting similar data off a live transmission and was hoping someone here could point me in the right direction. My beacon is on ARFCN 22 and here’s what I’ve done so far: 1) ./gsm_receive_rtl.py -f 939.363M -c 0B I see BCCH data with 2 different kinds of Immediate Assignments in Wireshark. Here’s a brief excerpt ——– SDCCH/8 + SACCH/C8 or CBCH (SDCCH/8), Subchannel 4 Timeslot: 2 Hopping channel: No Single channel : ARFCN 22 ——– Spare bits (ignored by receiver) Timeslot: 4 Hopping channel: Yes Hopping channel: MAIO 6 Hopping channel: HSN 38 ——– 2) Since the Immediate Assignments to TS2 were frequent, I was hoping that monitoring TS2 on ARFCN 22 would show pre-encryption SI 5 and SI 6 frames. I ran the following command: ./gsm_receive_rtl.py -f 939.363M -c 2S I do not see any output at all in Wireshark while I do see encrypted frames on the gsm_receive window. I tried config 2C and setting the sampling rate to 1MHz but I still cannot see anything in Wireshark. What am I missing ? 
Needed to force the key to 0 to get it to work ./gsm_receive_rtl.py -f 939.363M -c 2S -k “00 00 00 00 00 00 00 00″ Hi there, Just posted about decrypting the data captured on my blog, thought it might be interesting for you too finaly i am able to run it in new Kali linux (version 1.0.5), For those who getting error when compiling/make “gsm-receiver” ,this is beacuse of the missing dependencies with gnuradio installed in kali run this command to fix it : sudo apt-get install gnuradio gnuradio-dev cmake git libboost-all-dev libusb-1.0-0 libusb-1.0-0-dev libfftw3-dev swig python-numpy then try compile airprobe FYI: tried this tutorial in ubuntu 13.04 but failed, worked fine in Kali linux (version 1.0.5) Thanks for this, I havn’t had a chance to try airprobe on the new Kali yet, so this will save some time. correction, airprobe is not pre-installed in kali Thanks for the correction, not sure why I thought that. I am trying to compile airprobe to decode GSM signals with gnuradio radio and wireshark following the steps, the problem is when I compile the gsm-receiver with the command make, the think that the problem comes from some kind of version incompatibility of python but I’m not sure, can someone help me please? Lots of thanks!!! Hi! I’m newby at this. Please, help. After execute a gsm_receive.py I have error: root@kali:~/airprobe/gsm-receiver/src/python# ./gsm_receive.py Traceback (most recent call last): File “./gsm_receive.py”, line 12, in import gsm File “../lib/gsm.py”, line 26, in _gsm = swig_import_helper() File “../lib/gsm.py”, line 18, in swig_import_helper import _gsm ImportError: ../lib/.libs/_gsm.so: undefined symbol: _ZTI8gr_block Sorry I don’t know what could be wrong here, maybe someone else can help? I encountered the same error on Kali Linux. The reason is, that the shared object (_gsm.o) doesn’t get correctly linked against gnuradio-core.so, because pkg-config fails during the build. It fails, because gnuradio-core depends on the package “fftw3f” which is installed in binary form, because otherwise gnuradio woulndn’t work, but the -dev package is mising. Long story short: Install the missing package (apt-get install libfftw3-dev) and rebuild the gsm-receiver. Then it works. It doesn’t work… (I use kali 1.0.5) Hey, thanks for the excellent article. So I’ve gotten up to the point of actually trying to do a live capture with wireshark, but for some reason, when I run gsm_receive_rtl.py, I get an error where each parse of a packet should be. It looks like this: sch.c:260 ERR: conv_decode 12 The number seems to vary between 9 and 12. Any idea how to fix this? Thanks! Gabe Did you set the -s flag to make the bandwidth 1MHz? I get this error too sometimes, usually it’s because the GSM peak isn’t perfectly centered, or I haven’t clicked on the peak center perfectly. Also poor reception might cause it. In one of Domi’s comments below he says that he used kalibrate to get a clock offset figure which allowed him to tune to the signal much more accurately to get around that error, you might want to try that too. Great tutorial, I have several questions though: 1) By using kalibrate I can correctly get 90%+ of all gsm downlink traffic for 20 seconds or so in wireshark, then I get a parity bit error for 10 seconds followed by around 15 seconds of ERR: conv_decode 11 and lastly a bunch of 0’s, any idea what can cause this? I am guessing either my antennae gets offset or I get offset on my packages. 
2) I can see uplink traffic with SDR# but when I try to sniff it with airprobe I get absolutely nothing in wireshark, not even any error messages. Any ideas? Thanks for any help you can give. I plan on trying to run uplink and downlink sniffing at the same time and will let you know my results. (using 2 dongles) Hi Joe, I think I can answer you since I have been down the same road. 1. I think you need to wait for the dongle to warm up (as admin said), and keep re-kalibrating it. It is actually quite random, sometimes I get the full traffic even when I use the exact value coming from arfcncalc, sometimes I need to calibrate. I think this is because my error (28-30kHz) is still in the width of a GSM channel (200 kHz). The parity errors could be ignored it means the traffic you tried to de-modulate and decode is encrypted. The ERR_CONV messages mean that you are not well calibrated, sometimes if you wait they disappear as the dongle gets in tact. The 0s mean that you are so off from the frequency that airprobe couldn’t even find anything that looks like GSM so it just prints it the bits it finds. 2. There is no uplink support at all in airprobe. There was a little demonstration at one of the conferences but the code was never released. You can find some gitHUB repos claiming their airprobe is down and uplink compatible, but they don’t work. According to a comment in the code “uplink can’t be decoded the way currently gsm-receive works”. Everybidy switched to osmocomBB therefore no more code is written for SDRs. I asked Dieter Spaar who presented uplink sniffing but he said the code is private and dirty so he will never release it. I was also thinking about doing uplink and downlink simultaniously but it appears that for some reason you need to sync the two dongles for good results, so I decided to put this aside as it is a lot more complicated than I thought. Good luck, Domi Thanks for the info Domi, I hope it will save some people some time. Does airprobe work on ubuntu or it is only for kali linux? Which version of ubuntu will be most suitable for airprobe? As i m using ubuntu 10.04.4 Hello, Did you try it for uplink traffic as well..? Fahad As far as I know, it isn’t possible to monitor uplink traffic at the moment. Someone correct me if i’m wrong. EDIT: In this video at 32 minutes in they show a demo of uplink traffic monitoring, but I think you need to monitor down downlink and uplink at the same time, which only the USRP can do. Maybe it is possible with two RTLs though… I haven’t tried it yet, but it should be possible – uplink is just a different frequency, but uses the same kind of data-structure as far as I know, so it shuld be possible to demodulate and analyze it using the same tools. It is totally possible, just need some computing power to be able to work with both sticks. The program arfncalc can give you the uplink frequency as well as the downlink. I will look into this stuff in the coming days and will post some results to my blog. Nice blog, you seem knowledgeable about GSM. I’ll keep an eye on your work. Hi, I have one issue that kind of bothers me: I tune my rtl-sdr to the right frequency – I use arfcn-calc and an old Nokia 3310 in network monitor mode so I know what is the the phone’s tower’s ARFCN so I know the frequency – but I don’t always get data, most of the time I get sch.c:260 ERR: conv_decode 11 and similar messages. After that I decided to do a little calibration with kalibrate-rtl. 
It showed me an average of +24 kHz offset, so I subtracted around 24 000 from the frequency arfcncalc told me and now I am tresting this setup. It seems that it still starts with the ERR-messages, but after some seconds it actually starts to output GSM-data as expected. Now my question is: since I am very new to radios and SDR especially is what I did with calibrating and changing the frequency manually correct (at least in theory)? Should I try to move closer to the tower? My phone shows around -59 dBi signal. Thank you! Hi, yes what you did is correct, usually you’d use the PPM offset value, but gsm_receive_rtl.py doesn’t seem to have that option. Remember the dongle takes time to warm up and stabilize, and during that time the frequency offset can change, so make sure you run Kalibrate after the dongle has been running for a few minutes. Also, if the signal isn’t perfectly centered you can tune around with the mouse by clicking on the GSM peak middle. I get those errors sometimes too and i’m not sure why, but it could be signal strength related. Hi, great article, thank you for posting it. What kind of antenna did you use for this? thanks! Hi, thanks. I used a roof mounted J-Pole. But GSM signals are usually quite strong so even the stock rtl-sdr antenna should pick up GSM decently assuming you have a GSM cell tower near you. Oh, great! I already ordered an RTL-SDR from eBay, so I am just waiting for the mailman to bring it. I am really interested about decrypting actual data, found this video which I think could be applied to RTL-SDR, what do you think? Hi, yes the video is applicable, the USRP and RTL-SDR should be pretty much interchangeable. Nice tutorial. I could capture control data without any problem. But how to capture encrypted content ? It should be possible to capture encrypted data even without decrypting. Cant find much info except USRP. I don’t know much about the encryption stuff, but are you talking about capturing a cfile? I wasn’t able to find a way to get airprobe to do it with the rtl-sdr. But it should be possible using GNURadio. Great instruction! Thanks! But I have a question. I trying to get burst data for kraken (magic 114 bits). I use osmocombb + motorola C123. I’m able to see receiving data in wireshark. But how to convert this captured data into necessary format? Thanks in advance! To be honest I haven’t looked into the encryption side of things yet. Your best bet for help is probably on the srlabs A51 mailing list. Some people on IRC might also be able to help. Ask around on the freenode server, channel ##rtlsdr. I’ve been trying to hunt down a GSM frequency to try this out. I can’t seem to find one though. I browsed 900Mhz-1000Mhz, nothing that looked like data. Any tips in using the FCC website for looking it up? I imagine there is a better way than me browsing around randomly. Keep up these great tutorials! Sorry, I almost forgot that the USA uses slightly different frequencies. Try searching from 850 MHz. There’s a good worldwide list of the bands used here. This is also useful for finding exact frequencies.. If you don’t know your own cells ARFCN number, look here in the table for the range of valid values for your GSM band. Thanks, I’ll check those out. I also found this while I was searching: It allows you to locate and track LTE basestations. May be cool for your next article. Nice LTE scanner link. I can’t really use it yet as there are no LTE signals in my country until next year. 
There is a test signal around, but I have no idea what part of the spectrum it is in! EDIT: Just realized there are LTE signals around, but they’re all in the 1.8 GHz region. hey.. i have gnuradio 3.6.5 installed on ubuntu 12.04..i m trying to install airprobe.. everything works fine according to this tutorial till the point i try to make gsm receiver… i got the following error /usr/bin/ld: i386 architecture of input file `decoder/.libs/libdecoder.a(GSM660Tables.o)’ is incompatible with i386:x86-64 output collect2: ld returned 1 exit status make[4]: *** [_gsm.la] Error 1 make[4]: Leaving directory `/home/a/airprobe/gsm-receiver/src/lib’ make[3]: *** [all-recursive] Error 1 make[3]: Leaving directory `/home/a/airprobe/gsm-receiver/src/lib’ make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/home/a/airprobe/gsm-receiver/src’ make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/a/airprobe/gsm-receiver’ make: *** [all] Error 2 i m not sure about what this error is.. when i try to run gsm_receive.py file it again give an error which is probably due to the incomplete installation Traceback (most recent call last): File “./gsm_receive.py”, line 12, in import gsm File “../lib/gsm.py”, line 26, in _gsm = swig_import_helper() File “../lib/gsm.py”, line 18, in swig_import_helper import _gsm ImportError: No module named _gsm is there anybody who can help me with this problem..??? thanx in advance, regards ali
http://www.rtl-sdr.com/rtl-sdr-tutorial-analyzing-gsm-with-airprobe-and-wireshark/
CC-MAIN-2015-22
refinedweb
7,568
67.15
Opened 8 years ago
Closed 6 years ago
#7597 closed defect (duplicate)
[Patch] "Environment not found" when clicking on the vote icon of an issue with comments

Description

Hi, I'm getting the error message "Environment not found" when clicking on the voting icon of an issue that has had comments appended. However, if it is a new issue with no comments, it works fine. On further examination, the plug-in seems to be redirecting me to a page whose URL is, for example, "/ticket/9", where the ticket's actual address is "/project1/ticket/9". Note that I have my base URL set in the trac.ini file as base_url =

To fix the problem for my installation, I examined the code from the Five Star Vote plugin and saw that there was a difference in the returning URL on the last line of the function

def process_request(self, req):

This plug-in uses

req.redirect(resource)

whereas the Five Star Vote plugin uses

req.redirect(req.get_header('Referer'))

After making this small change to the last line of the function process_request, everything seems to be working as expected.

Thanks for the report and fix. I will take a look now and likely incorporate into the codebase.
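For reference, a rough sketch of the change described above, written in the shape of a Trac request handler — the surrounding method body is hypothetical; only the two redirect calls come from the ticket itself:

def process_request(self, req):
    # ... vote handling as implemented by the plugin ...
    # Before (loses the /project1 prefix in a multi-project setup):
    #     req.redirect(resource)
    # After (mirrors the Five Star Vote plugin and returns to the referring page):
    req.redirect(req.get_header('Referer'))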
https://trac-hacks.org/ticket/7597
CC-MAIN-2018-34
refinedweb
213
57.81
Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication

How To: Call a Web Service Using SSL from ASP.NET 1.1

J.D. Meier, Alex Mackman, Michael Dunner, and Srinath Vasireddy
Microsoft Corporation

Published: November 2002
Last Revised: January 2006

Applies to:
- ASP.NET 1.1
- Internet Information Services (IIS) 5.0 and 5.1
- Microsoft® Windows Server 2000

See the "patterns & practices Security Guidance for Applications Index" for links to additional security resources. See the Landing Page for a starting point and complete overview of Building Secure ASP.NET Applications.

Summary: Secure Sockets Layer (SSL) encryption can be used to guarantee the integrity and confidentiality of the messages passed to and from a Web service. This How To shows you how to use SSL with Web services. (7 printed pages)

Contents
Summary of Steps
Additional Resources

You can configure a Web service to require Secure Sockets Layer (SSL) to protect sensitive data sent between the client and the service. SSL provides:
- Message integrity. This ensures that messages are not modified while in transit.
- Message confidentiality. This ensures that messages remain private while in transit.

This How To describes how to configure a Web service to require SSL and how to call the Web service from an ASP.NET client application by using the HTTPS protocol.

Summary of Steps

This article includes the following steps.

Step 1. Install Server Certificates on the Web Server

For information about installing Web server certificates on a Web server, see How To: Set Up SSL on a Web Server.

Step 2. Create a Simple Web Service

To create a simple Web service on the Web service host computer:
- Start Visual Studio .NET and create a new C# ASP.NET Web Service application called SecureMath.
- Rename service1.asmx as math.asmx.
- Open math.asmx.cs and rename the Service1 class as math.
- Add the following Web method to the math class.

  [WebMethod]
  public long Add(long operand1, long operand2)
  {
    return (operand1 + operand2);
  }

- To create the Web service, click Build Solution on the Build menu.

Step 3. Configure the Web Service Virtual Directory to Require SSL

- On the Web service host computer, start IIS.
- Navigate to the SecureMath virtual directory.
- Right-click SecureMath, and then click Properties.
- Click the Directory Security tab.
- Under Secure communications, click Edit. If Edit is unavailable, it is likely that a Web server certificate is not installed.
- Select the Require secure channel (SSL) check box.
- Click OK, and then OK again.
- In the Inheritance Overrides dialog box, click Select All, and then click OK to close the SecureMath properties dialog box. This applies the new security settings to all subdirectories in the virtual directory root.

Step 4. Test the Web Service Using a Browser

This procedure ensures that the Web server certificate is valid and has been issued by a Certification Authority (CA) that is trusted by the client computer.

To call the Web service using SSL from Internet Explorer:
- Start Internet Explorer on the client computer and browse (using HTTPS) to the Web service. For example: The Web service test page should be displayed by the browser.
- If the Web service test page is displayed successfully, close Internet Explorer and go to Step 6, "Develop a Web Application to Call the Web Service."
- If the Security Alert dialog box, as illustrated in Figure 1, is displayed, click View Certificate to see the identity of the issuing CA for the Web server certificate.
You must install the CA's certificate on the client computer. This is described in Step 5, "Install the Certificate Authority's Certificate on the Client Computer."
- Close Internet Explorer.

Figure 1. Security Alert dialog box

Step 5. Install the Certificate Authority's Certificate on the Client Computer

- Start Internet Explorer and browse to http://hostname/certsrv, where hostname is the name of the computer on which the Microsoft Certificate Services installation that issued the server certificate is located.
- Click Retrieve the CA certificate or certificate revocation list, and then click Next.
- Click Install this CA certification path.
- In the Root Certificate Store dialog box, click Yes.
- Browse to the Web service using HTTPS. For example:
- Repeat Steps 1 and 2, click Download CA certificate, and then save it to a file on your local computer.
- Now perform the remaining steps, if you have the CA's .cer certificate file.
- On the taskbar, click Start, and then click Run.
- Type mmc, and then click OK.
- On the Console menu, click Add/Remove Snap-in.
- Click Add.
- Select Certificates, and then click Add.
- Select Computer account, and then click Next.
- Select Local Computer: (the computer this console is running on), and then click Finish.
- Click Close, and then OK.
- Expand Certificates (Local Computer) in the left pane of the MMC snap-in.
- Expand Trusted Root Certification Authorities.
- Right-click Certificates, point to All Tasks, and then click Import.
- Click Next to move past the Welcome dialog box of the Certificate Import Wizard.
- Enter the path and filename of the CA's .cer file.
- Click Next.
- Select Place all certificates in the following store, and then click Browse.
- Select Show physical stores.
- Expand Trusted Root Certification Authorities within the list, and then select Local Computer.
- Click OK, click Next, and then click Finish.
- Click OK to close the confirmation message box.
- Refresh the view of the Certificates folder within the MMC snap-in and confirm that the CA's certificate is listed.
- Close the MMC snap-in.

Step 6. Develop a Web Application to Call the Web Service

This procedure creates a simple ASP.NET Web application. You will use this ASP.NET Web application as the client application to call the Web service.

To create a simple ASP.NET Web application:
- On the Web service client computer, create a new C# ASP.NET Web application called SecureMathClient.
- Add a Web reference (by using HTTPS) to the Web service:
  - Right-click the References node within Solution Explorer, and then click Add Web Reference.
  - In the Add Web Reference dialog box, enter the URL of your Web service. Make sure you use an HTTPS URL.
  - Note: If you have already set a Web reference to a Web service without using HTTPS, you can manually edit the generated proxy class file and change the line of code that sets the Url property from an HTTP URL to an HTTPS URL.
  - Click Add Reference.
- Open WebForm1.aspx.cs and add the following using statement beneath the existing using statements.

  using SecureMathClient.WebReference1;

- View WebForm1.aspx in Designer mode and create a form like the one illustrated in Figure 2 using the following IDs:
  - operand1
  - operand2
  - result
  - add

Figure 2. WebForm1.aspx form

- Double-click the Add button to create a button-click event handler.
- Add the following code to the event handler.
private void add_Click(object sender, System.EventArgs e)
{
  math mathService = new math();
  int addResult = (int) mathService.Add( Int32.Parse(operand1.Text),
                                         Int32.Parse(operand2.Text));
  result.Text = addResult.ToString();
}

- On the Build menu, click Build Solution.
- Run the application. Enter two numbers to add, and then click the Add button. The Web application will call the Web service using SSL.

Additional Resources
- How To: Set Up SSL on a Web Server
- How To: Call a Web Service Using Client Certificates from ASP.NET 1.1
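As a side note to Step 6: if a Web reference was originally added over plain HTTP, the generated proxy's Url property can also be switched to the HTTPS endpoint at run time instead of editing the generated file. A minimal sketch, reusing the math proxy class from the example above; the host name is a placeholder for your own Web server:

  math mathService = new math();
  // Point the proxy at the SSL endpoint rather than the default HTTP URL.
  mathService.Url = "https://yourserver/securemath/math.asmx";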
https://msdn.microsoft.com/en-us/library/aa302409.aspx
CC-MAIN-2015-48
refinedweb
1,210
59.3