text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Volume Gives Shorts The Upper Hand Rather than signaling the end of the bear market, the more probable outcome for the current rally in the S&P 500 is for it to fail somewhere between current levels and 806. Trading volume can help us understand market participants' collective desire to buy or sell a particular security or index. To formulate probable outcomes for the current rally in stocks, we will compare the S&P 500 long ETF (SPY) with the S&P 500 short ETF (SH). As we walk through this analysis, notice we are focusing on observable facts rather than forecasting what the future may hold. We assume we will get more of the same (lower lows in SPY/S&P 500 Index) until we see evidence to the contrary (higher highs, better volume, etc). We briefly covered investing based on probable outcomes in Odds Continue To Favor Lower Lows In Stocks. We will restate the importance of cutting losses in any position when you are wrong. We are near trendline overhead resistance for stocks and we have a FED announcement on Wednesday - big moves up or down would not come as a surprise. Risk management remains very important to both the shorts and the longs. There are numerous reasons to doubt the staying power of the current rally in the S&P 500. Based on the notations in the chart below (and other factors), there is little evidence to suggest the current rally is anything other than a bear market rally. The conclusion we draw from the data is the odds continue to favor lower lows in the U.S. stock market. At the same time, we acknowledge the lower probability outcome (a new bull market) is possible, but not probable. For long-term investors, holding cash still has better odds than owning the S&P 500 Index. If we do see a break above the trendline in SPY (above), volume will help us determine its potential staying power. Stocks could move above the trendline for a few days and then reverse, especially if the break above trend occurs on relatively tame volume. A strong volume breakout holding for a few days would add to the bullish case. We just have to see how it plays out and make adjustment accordingly - no need to guess here. We can look to confirm our probabilities above by examining the S&P 500 inverse or short ETF (SH), which makes money when the S&P 500 falls, and loses money when the S&P 500 rises. Volume and trends continue to support favorable outcomes for SH. The lower probability outcome is for a permanent break of the upward sloping trendline (thick purple line). Based on what we know as of Tuesday's market close, it is premature to call an end to the bear market. As conditions evolve and the charts change, we may draw a different conclusion. However, bullish conclusions may not come until after stocks head toward lower lows. Fair and Balanced: If the lower probability outcome occurs, the small chart of SH shows the gap that could be filled between the purple trendline and 200-day moving average (thin red line). Stated another way, if the lower probability outcome occurs, SH could drop under $70, illustrating the need to (a) concede we could be wrong, (b) plan for it, and (c) have a specific risk management plan in place to protect capital if needed. Probabilities are just that - they are not certainties. Now we will examine the current state of investors' willingness to accept risk. We will use bonds to do so, but the results affect all risk assets, including stocks. The chart below shows the ratio of investment-grade U.S. corporate bonds (LQD) to long-term U.S. Treasury bonds (TLT). The reason for looking at this ratio is quite simple. When investors feel more confident about future economic activity, they are more willing to take on the added risk associated with corporate bonds to earn a more favorable return (line trends up). Conversely, when investors feel less confident about the future and are more concerned about defaults, they will favor the safe haven and lower returns available in Treasuries (line trends down). The results are not encouraging for the sustainability of the current rally in stocks. With the primary downtrend below intact, we have more evidence to support a continuation of the primary downtrend in the S&P 500. On Tuesday, we did get some good news on the housing front. Unfortunately, the chart of Lennar (LEN) looks less than bullish for long-term investors. Some nice gains will likely come in the homebuilders sometime in the future. However, it is still too early to buy in our book. We need to see more. Numerous calls have been made for the end of the bear market in Chinese (FXI) stocks. Rather than buying, we would prefer to remain patient. A sustained break of trend here could be the first significant crack in the bear market dam. Bullish outcomes for FXI could pave the way for other risk assets to follow. Watching FXI with an open mind is a good use of our time. When the charts above become more attractive on a long-term basis, we will be more than receptive to bullish interpretations. While every chart above may break the downward sloping trendlines in the coming weeks and months, we would prefer to see it happen rather than hoping it happens or forecasting that it will happen. When the trends become more favorable, there will be ample time to make money. For long-term investors, not short-term traders who have an understandably different approach, inverse positions remain more attractive than long positions in the vast majority of risk-related markets. If trendlines begin to break in the bulls favor, our interest in numerous asset classes would increase. The charts and commentary above are for illustrative purposes only and are not recommendations to buy or sell any security. Inverse ETFs or short positions are not suitable for many investors. TweetTweet
http://www.safehaven.com/article/12852/volume-gives-shorts-the-upper-hand
CC-MAIN-2014-42
refinedweb
1,016
61.26
Barcode Software barcode font vb.net Figure 2-6 OSPF areas in .NET Generating Data Matrix barcode in .NET Figure 2-6 OSPF areas This is due to the oxidation reduction reaction of silver bromide in the presence of light. 2AgBr light 0 2Ag Br2 What substance is oxidized in this reaction Which substance is reduced generate, create barcodes mail none with java projects BusinessRefinery.com/ barcodes native crystal reports barcode generator generate, create bar code attachment none on .net projects BusinessRefinery.com/ barcodes If using ZENworks Dynamic Local User function to gain access to Windows, you must install Novell ZENworks for Desktops 3 or later. If you are not using ZENworks to gain access to Windows, you must have accounts with the same user name and password that exist in both NDS and NT4 or ADS domains. barcode generator in vb.net using barcode generator for vs .net control to generate, create bar code image in vs .net applications. namespace BusinessRefinery.com/ bar code using numeric word to encode barcode with asp.net web,windows application BusinessRefinery.com/ bar code Ten using barcode maker for excel spreadsheets control to generate, create bar code image in excel spreadsheets applications. server BusinessRefinery.com/barcode generate, create barcodes right none on .net projects BusinessRefinery.com/ bar code 7. Mathematics Different specialties need different amounts of math (addressed in the Specializations section that follows), but every programmer must be happy and comfortable with mathematical concepts. All video games are, at one level or another, mathematical models. to include qrcode and qr data, size, image with java barcode sdk remote BusinessRefinery.com/qr barcode qr-code size table for .net BusinessRefinery.com/QR Code JIS X 0510 public static int IndexOf<T>(T[ ] a, T v) public static int IndexOf(Array a, object v, int start) to include qr and qrcode data, size, image with .net barcode sdk program BusinessRefinery.com/qr-codes qr bidimensional barcode size unity in .net BusinessRefinery.com/Quick Response Code Delegate Method Group Conversion create qr code with vb.net use visual .net qr barcode integrating to receive qr code in visual basic.net credit, BusinessRefinery.com/qr barcode denso qr bar code data recognition with vb BusinessRefinery.com/QR Code ISO/IEC18004 MS MS generate, create code-128 array none in .net projects BusinessRefinery.com/Code 128 Code Set A generate, create ecc200 machine none in office excel projects BusinessRefinery.com/Data Matrix 2d barcode DISPOSITION winforms code 39 generate, create 3 of 9 barcode bmp none on .net projects BusinessRefinery.com/Code 39 ssrs code 39 generate, create barcode code39 tutorials none for .net projects BusinessRefinery.com/USS Code 39 The output from the program is shown here: use asp .net pdf 417 generating to render pdf417 on .net unity BusinessRefinery.com/PDF417 ssrs data matrix use reportingservices class gs1 datamatrix barcode implement to add data matrix with .net machine BusinessRefinery.com/Data Matrix ECC200 Figure 13-6 shows the process of adding a Symbol to a document by dragging a thumbnail into the document; you locate the installed font from which you want a symbol by using the drop-down list at the top of the docker, set the size of the symbol at the bottom (a symbol can be resized at any time in the future by scaling it with the Pick Tool), and then drag and drop. Notice in the enlarged inset graphic in this figure that the Insert Symbol docker provides you with the extended character key combination for the symbol you ve clicked on. This feature is a great help if you re coming to CorelDRAW from a word processor such as WordPerfect. You might already be familiar with certain extended character codes; for example, standard font coding for a cents sign ( ) is to hold ALT, then type 0162. Therefore, for any font you ve chosen on the Insert Symbol docker, if the font has a cents sign and you want to choose it quickly, you type 0162 in the Keystroke field, press ENTER, and the docker immediately highlights the symbol it s easy to locate and equally easy to then add to the document. Conversely, when you click a symbol, the Keystroke field tells you what the keystroke is; you can then access a cents sign, a copyright symbol, or any other extended character you like in any application outside of CorelDRAW. You just hold ALT and then type the four-digit keycode in, for example, WordPerfect or Microsoft Word, and you re home free. barcode pdf417 vb.net using barcode maker for visual .net control to generate, create pdf417 2d barcode image in visual .net applications. square BusinessRefinery.com/pdf417 2d barcode ssrs pdf 417 using winform sql 2008 to access barcode pdf417 in asp.net web,windows application BusinessRefinery.com/pdf417 Downloaded from Digital Engineering Library @ McGraw-Hill () Copyright 2004 The McGraw-Hill Companies. All rights reserved. Any use is subject to the Terms of Use as given at the website. You may now be thinking that it makes sense to put outer joins on all lookup tables, since you often have inventory before items have sold. However, it s also possible to have items in a fact table that do not have a corresponding record in the dimension table. As an example, imagine a frustrated sales clerk who keeps trying to scan a trendy new scarf for an impatient customer. The scanner does not ring up the product at the register, so the sales clerk manually enters the article code from the scarf s tag (let s avoid the worst-case scenario, when the clerk rings it up under a different article with the same unit price, a common occurrence at my local department store). Why didn t the scarf scan Who knows! Of course, the scarf should have been in inventory! And it should not have been on display without existing in the article master! But it happened, and unfortunately, it happens more than business people realize and more than data modelers wish. In an ideal world, the sales transaction would automatically have added an entry in the article master. In an almost ideal world, the data warehouse will plug a number in the ARTICLE_ID such as 999 or XXX to say the article description is not found. In reality (such as with a transaction system or poorly modeled data mart), you will need to use an outer join. Outer joins may not be a problem for small lookup tables, but they are best avoided for large lookup tables because the RDBMS cannot use the index to process the query because of lousy response times. Also, earlier versions of certain databases did not support outer joins. Even when you use an outer join on a small lookup table, be sure to test the response time or analyze an explain plan in your RDBMS. If the response time is slow, train the users to understand that if they want full product listings, full customer listings, or a list of customers who have not bought this year, analyze that data separately. Use of subqueries (discussed in 23) may help them answer the same questions more efficiently. This if goes with this else. if(i == 10) { if(j < 20) a = b; if(k > 100) c = d; else a = c; // this else refers to if(k > 100) } else a = d; // this else refers to if(i == 10) 10.5 Hybrid Circuits 1. Open a new document. This is the document where you ll perform the actions that Modems are available today from a variety of vendors, all with their own unique technical approach. These modems are making it possible for cable companies to enter the data communications market now. In the longer term, modem costs must drop and greater interoperability is desirable. Customers who buy modems that work in their current cable system need assurance that the modem will work if they move to a different geographic location served by a different cable company. Furthermore, agreement on a standard set of specifications will allow the market to enjoy economies of scale and drive down the price of each individual modem. Ultimately, those modems will be available as standard peripheral devices offered as an option to customers buying new personal computers at retail stores. The cable companies and manufacturers came together formally in December 1995 to begin working toward an open standard. Leading U.S. and Canadian cable companies were involved in this development toward an open cable modem standard. Specifications were to be developed in three phases, and then be presented to standards-setting bodies for approval as standards. Individual vendors were free to offer their own implementations with a variety of additional, competitive features and future improvements. A data interoperability specification will comprise a number of interfaces. The resultant specification is called Data Over Cable Service Interface Specification (DOCSIS), which architecturally is shown in Figure 14-6 as it relates to the TCP/IP protocol stack. Note that there are several sublayers added into the DOCSIS specification at the bottom layers (for example, layer 1 and 2) of the protocol stack. This is to simplify the connection and add the dimension of security into the DOCSIS specifications. TERMINATION UNIT If the icons disappear or appear out of order, select an icon and change its ImageIndex to match the preceding list. Internal success stories should be generated about the attributes of the XenApp environment. The idea is to create a buzz around the organization where people are excited about, rather than resistant to, the upcoming changes. At the VA Medical Center, for example, we had a doctor thank our implementation team for making his life better because he could now access so much more of the data he needed, and he could do it much more quickly and far more easily than he could in the previous distributed PC environment. More Data Matrix on .NET barcode font vb.net: The TCP Header in .NET Assign Data Matrix in .NET The TCP Header free barcode font for vb.net: Figure 2-10 UDP header format in .NET Drawer Data Matrix 2d barcode in .NET Figure 2-10 UDP header format free barcode font for vb.net: Transporting Voice by Using IP in .NET Integrating gs1 datamatrix barcode in .NET Transporting Voice by Using IP free barcode font for vb.net: Transporting Voice by Using IP in .NET Assign datamatrix 2d barcode in .NET Transporting Voice by Using IP barcode generator in vb.net 2010: RTCP BYE Packet in .NET Attach ECC200 in .NET RTCP BYE Packet free barcode font for vb.net: Transporting Voice by Using IP in .NET Connect 2d Data Matrix barcode in .NET Transporting Voice by Using IP free barcode font for vb.net: Next header in .NET Drawer DataMatrix in .NET Next header free barcode font for vb.net: A Little about Speech in .NET Include data matrix barcodes in .NET A Little about Speech free barcode font for vb.net: Figure 3-7 Speech quality versus bit rate for common classes of codecs in .NET Paint barcode data matrix in .NET Figure 3-7 Speech quality versus bit rate for common classes of codecs barcode generator in vb.net 2010: Speech-Coding Techniques in .NET Implementation ECC200 in .NET Speech-Coding Techniques free barcode font for vb.net: Speech-Coding Techniques in .NET Generating DataMatrix in .NET Speech-Coding Techniques free barcode font for vb.net: Figure 4-2 An example of an H.323 zone in .NET Generation barcode data matrix in .NET Figure 4-2 An example of an H.323 zone barcode generator in vb.net 2010: cont. RAS messages that support RAS signaling functions in .NET Attach ECC200 in .NET cont. RAS messages that support RAS signaling functions free barcode font for vb.net: Figure 4-7 Direct call signaling in .NET Add datamatrix 2d barcode in .NET Figure 4-7 Direct call signaling free barcode font for vb.net: Messages used by H.225.0 for callsignaling functions in the H.323 architecture in .NET Generating Data Matrix ECC200 in .NET Messages used by H.225.0 for callsignaling functions in the H.323 architecture free barcode font for vb.net: Call Scenarios in .NET Get datamatrix 2d barcode in .NET Call Scenarios. barcode generator in vb.net 2010: Closing Logical Channels and Ending a Session Closing a logical channel in .NET Build ECC200 in .NET Closing Logical Channels and Ending a Session Closing a logical channel free barcode font for vb.net: The Decomposed Gateway in .NET Draw Data Matrix 2d barcode in .NET The Decomposed Gateway free barcode font for vb.net: Figure 5-6 An example of a SIP-enabled service in .NET Develop DataMatrix in .NET Figure 5-6 An example of a SIP-enabled service Articles you may be interested 2d barcode vb.net: Exception Handling in .NET Attach 2d Data Matrix barcode in .NET Exception Handling zebra print barcode vb.net: User Access to Terminal Servers in Software Get qr barcode in Software User Access to Terminal Servers vb.net print barcode labels: The Preprocessor and Comments in Java Render pdf417 2d barcode in Java The Preprocessor and Comments vb.net barcode component: Sales Jobs and Sales Process in Software Creator code128b in Software Sales Jobs and Sales Process rdlc barcode: CISA Certified Information Systems Auditor All-in-One Exam Guide in Software Produce Code 39 in Software CISA Certified Information Systems Auditor All-in-One Exam Guide how to generate barcode in rdlc report: CISA Certified Information Systems Auditor All-in-One Exam Guide in Software Produce Code 3 of 9 in Software CISA Certified Information Systems Auditor All-in-One Exam Guide rdlc barcode font: P6, Firewalls in Software Writer 3 of 9 in Software P6, Firewalls barcodelib.barcode.rdlc reports.dll: Cisco ASA Configuration in Software Encode Data Matrix barcode in Software Cisco ASA Configuration generate code 39 barcode java: Moderate Self-Mastery The Teacher in Microsoft Integrating QR Code JIS X 0510 in Microsoft Moderate Self-Mastery The Teacher how to use barcode reader in asp.net c#: Override Area( ) for Rectangle. in C#.net Encoder code 128c in C#.net Override Area( ) for Rectangle. code 39 barcode generator java: SECTION 3 in Software Implement EAN-13 Supplement 5 in Software SECTION 3 barcode project in vb.net: 4: Bridges and Switches in Objective-C Creation QR Code JIS X 0510 in Objective-C 4: Bridges and Switches how to generate barcode in vb.net 2008: Click! The No Nonsense Guide to Digital Cameras in Software Printer EAN13 in Software Click! The No Nonsense Guide to Digital Cameras java barcode scanner library: PART I PART I PART I in Java Draw qr bidimensional barcode in Java PART I PART I PART I barcode generator vb.net download: Download at Boykma.Com in Android Paint qr barcode in Android Download at Boykma.Com Build Your Own Elec tric Vehicle in .NET Integrating QR Code 2d barcode barcode gs1-128 excel: Calendar-Based Investing in Software Printer EAN-13 Supplement 2 in Software Calendar-Based Investing generate 2d barcode vb.net: PDH Networks 140 Wide Area Networks in Software Development Code128 in Software PDH Networks 140 Wide Area Networks generate code 39 barcode java: Bringing Out the Best in Everyone You Coach in Microsoft Draw QRCode in Microsoft Bringing Out the Best in Everyone You Coach create 2d barcode vb.net: Telephone Company End or Central Office subscriber line or local loop in Objective-C Make qr codes in Objective-C Telephone Company End or Central Office subscriber line or local loop
http://www.businessrefinery.com/yc3/456/18/
CC-MAIN-2021-49
refinedweb
2,608
57.87
hi all can anyone tell me how to create a text file, naming it and writing it using java... Printable View hi all can anyone tell me how to create a text file, naming it and writing it using java... There is not an easy answer to this because only you know the functionality you need for your file. The basic components are a file object, a stream, and a reader/writer for the stream. Take a look at the Java Tutorial section on basic Input/Output starting at Give you a sample, hope it helps to you. Quote: import java.io.InputStream; import java.io.OutputStream; import java.io.FileInputStream; import java.io.FileOutputStream; public class FileInputOutputExample { public static void main( String[] args ) { try { InputStream is = new FileInputStream( "input.txt" ); OutputStream os = new FileOutputStream( "output.txt" ); int c; while( ( c = is.read() ) != -1 ) { System.out.print( ( char )c ); os.write( c ); } is.close(); os.close(); } catch( Exception e ) { e.printStackTrace(); } } }
http://forums.devx.com/printthread.php?t=165620&pp=15&page=1
CC-MAIN-2017-34
refinedweb
161
61.83
The QWidgetStack class provides a stack of widgets of which only the top widget is user-visible. More... #include <qwidgetstack.h> Inherits QFrame. List of all member functions. The application programmer can move any widget to the top of the stack at any time using raiseWidget(), and add or remove widgets using addWidget() and removeWidget(). It is not sufficient to pass the widget stack as parent to a widget which should be inserted into the widgetstack. visibleWidget() is the get equivalent of raiseWidget(); it returns a pointer to the widget that is currently at the top of the stack.. The parent and name arguments are passed to the QFrame constructor. The parent, name and f arguments are passed to the QFrame constructor. is not a child of this QWidgetStack moves it using reparent(). Example: xform/xform.cpp. See also widget() and addWidget(). See also visibleWidget(). Example: xform/xform.cpp. Raises widget w to the top of the widget stack. See also visibleWidget() and raiseWidget(). See also aboutToShow(), id(), and raiseWidget(). See also id() and addWidget(). This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved.
http://doc.trolltech.com/3.2/qwidgetstack.html
crawl-002
refinedweb
190
61.12
Details Description After investigating the methodology used to add HTTPS support in branch-2, I feel that this same approach should be back-ported to branch-1. I have taken many of the patches used for branch-2 and merged them in. I was working on top of HDP 1 at the time - I will provide a patch for trunk soon once I can confirm I am adding only the necessities for supporting HTTPS on the webUIs. As an added benefit – this patch actually provides HTTPS webUI to HBase by extension. If you take a hadoop-core jar compiled with this patch and put it into the hbase/lib directory and apply the necessary configs to hbase/conf. ========= OLD IDEA(s) BEHIND ADDING HTTPS (look @ Sept 17th patch) ========== In order to provide full security around the cluster, the webUI should also be secure if desired to prevent cookie theft and user masquerading. Here is my proposed work. Currently I can only add HTTPS support. I do not know how to switch reliance of the HttpServer from HTTP to HTTPS fully. In order to facilitate this change I propose the following configuration additions: CONFIG PROPERTY -> DEFAULT VALUE mapred.https.enable -> false mapred.https.need.client.auth -> false mapred.https.server.keystore.resource -> "ssl-server.xml" mapred.job.tracker.https.port -> 50035 mapred.job.tracker.https.address -> "<IP_ADDR>:50035" mapred.task.tracker.https.port -> 50065 mapred.task.tracker.https.address -> "<IP_ADDR>:50065" I tested this on my local box after using keytool to generate a SSL certficate. You will need to change ssl-server.xml to point to the .keystore file after. Truststore may not be necessary; you can just point it to the keystore. Issue Links - duplicates HADOOP-8581 add support for HTTPS to the web UIs - Closed - is depended upon by HBASE-8181 WebUIs HTTPS support - Resolved Activity - All - Work Log - History - Activity - Transitions -1 overall. Here are the results of testing the latest attachment against trunk revision . -1 patch. The patch command could not apply the patch. Console output: This message is automatically generated. -1 overall. Here are the results of testing the latest attachment against trunk revision . -1 patch. The patch command could not apply the patch. Console output: This message is automatically generated. I am aware there is a patch in branch-2. Namely,. I guess I would like this back-ported in branch-1 as well; however there appears to be a lot of work that needs to be done to do so. Is it necessary to grab everything from this patch? Is a backport possible? you'd need the sslfactory stuff from MAPREDUCE-4417 (there is a patch for branch-1 which as not been committed, see JIRA for details) and then you'll have to tweak JSPs and a few other places to use the HttpConfig from HADOOP-8581 to create the URLs. Also, in Hadoop 1 the HttpServer is shared between shuffle and the webui, so you'll have to make sure you use 2 connectors, one SSL for the webui and one clear for shuffle, for all the webui requests you have to ensure they are not served over the clear connector (shuffle's), you could do this with a filter. This is my most recent work of back-porting various patches into Hadoop 1.0.3 in order to get HTTPS working on all the webUIs. There was a conflict between dfs.https.enabled and the hadoop.ssl.enabled settings caused issue in bringing up the DFS webUIs (NameNode mostly). I have made it work in this patch. A lot of files had to be touched to make it work. At this moment I can see NameNode, JobTracker, and TaskTracker webUIs in HTTPS and not HTTP. This patch does not address certain hard-coded HTTP urls within the webUIs themselves. Hopefully another patch that I put out shortly will fix that. -1 overall. Here are the results of testing the latest attachment against trunk revision . -1 patch. The patch command could not apply the patch. Console output: This message is automatically generated. This is actually a far better / comprehensive patch then I previously posted. The JSP pages still need to be fixed but it is almost complete! Some of the JSP pages are already done like nn_browsedfscontent and browseDirectory. I will post a "complete" patch later. This latest patch removes ALOT of the unrelated code. It is focused on just the HTTPS of the webUIs. I can confirm it compiling on top of HDP 1 currently. I will create a patch for trunk once I can validate with some testing that this patch works. Latest patch for review. This applies cleanly on top of HDP 1 and has been partially reviewed by Benoy. I would like some open source reviews before I go on to create patches for trunk & etc. 1.0.4 is released now, and should probably be the last 1.0 version. 1.1.0 is released now also. This change could be targetted at either 1.1.1 or 1.2.0. My guess is it is a big enough change it should go in 1.2.0, so that's what I marked it for. Please fix up: - remove the config change to: - fs.default.name - hdfs-site.xml - mapred-site.xml - ssl.*.location - ssl.*.password - the default value of hadoop.ssl.enabled must be false - remove the spurious change to InterTrackerProtocol.java and other changes related to disk failures - remove the spurious whitespace changes - downgrade the httpserver logging to debug Have you tested all of the combinations of hadoop.ssl.enabled and mapreduce.shuffle.ssl.enabled? What is the use case where the two values will differ? Hi Owen, I apologize for the length of silence. I will go ahead and take action to your comments and generate a new patch. Benoy has discovered some issues with submitting a job using my patch and enabling HTTPS, and an interesting "NoSuchMethodError" with using my patch but without enabling HTTPS. We spoke off-line about how I removed the MapReduce SSL shuffle code; most likely there is somewhere within the code that still relies on SSL for job submission when HTTPS is enabled. Benoy and I will be working on these issues, I will then apply your comments to the patch and upload it soon. It appears I should also modify my code for 1.2.0 as well. Found error in WebHdfsFileSystem.java and NamenodeWebHdfsMethods.java. Patch updated for the fixes. Changed Target Version to 1.3.0 upon release of 1.2.0. Please change to 1.2.1 if you intend to submit a fix for branch-1.2. Going through the patch. Some quick questions & comments: 1. Seems like the corresponding code in the trunk has moved some. For example, FileBasedKeyStoresFactory.java has some updates. The question is whether we should update the branch-1 patch accordingly. Maybe we should? 2. src/test/org/apache/hadoop/http/TestSSLHttpServer.java has some commented out code, and that is also different (although maybe cosmetically) than trunk's. I'll go through some more and might have some more questions. How much testing has the patch seen (unit tests & manual)? Thanks for the comments. I have pulled the new version of FileBasedKeyStoresFactory.java and TestSSLHttpServer.java from hadoop 2. In corresponding to the changes, there are the files to be updated. - modified: src/core/org/apache/hadoop/http/HttpConfig.java - modified: src/core/org/apache/hadoop/http/HttpServer.java - modified: src/core/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java - modified: src/core/org/apache/hadoop/security/ssl/SSLFactory.java - modified: src/core/org/apache/hadoop/util/PlatformName.java - modified: src/test/org/apache/hadoop/http/TestSSLHttpServer.java I do need to remove the use of com.google.common.annotations.VisibleForTesting. Will provide the new patch soon. Tested: Full unit tests during compilation. There are a couple or a few failures that I think it’s not related to the change. For the system tests, I had it on a 5-machine VM cluster and then a 60-machine real cluster, both with security enabled. Many sample operations being done. Also tested the case to turn https off in the config. SecondaryNameNode was on during testing, also verify download/upload of fsimage. Some comments: 1. HTTP_MAX_THREADS is not used in the patch. It should be used in the HttpServer's constructor in the create of the QueuedThreadPool. 2. In TaskTrackerStatus.java - could we have a new constructor with the new shufflePort argument (and in the old constructor have the value of shufflePort default to the httpPort. 3. getFallBackAuthenticator implementation in KerberosAuthenticator needs to set the configurator in the PseudoAuthenticator instance before returning. 4. In DataNode.java, remove the check for isSecure() in the constructor. On the testing front, please ensure things like SecondaryNamenode<->PrimaryNamenode communication, distcp, continue to work as usual.. Also, paste the result of test-patch and unit test runs. Thanks Devaraj. Here is the new patch with changes mentioned in your comments. Files being touched are: - modified: src/core/org/apache/hadoop/http/HttpServer.java - modified: src/core/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java - modified: src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java - modified: src/mapred/org/apache/hadoop/mapred/TaskTrackerStatus.java And BTW, how can I get the unit test result? Copy and paste from terminal output or there is a different way? This is the command I used to run the unit test. ant -Dforrest.home=$FORREST_HOME -Djava5.home=$JAVA5_HOME -Dcompile.c++=true -Dcompile.native=true clean test Fixed tasklog url and SN for HttpServer on running as daemon. Following is the change compared to the previous patch. ----------- diff --git a/src/core/org/apache/hadoop/http/HttpServer.java b/src/core/org/apache/hadoop/http/HttpServer.ja index 0047d64..efcaad6 100644 — a/src/core/org/apache/hadoop/http/HttpServer.java +++ b/src/core/org/apache/hadoop/http/HttpServer.java @@ -167,7 +167,6 @@ public class HttpServer implements FilterContainer { // default value (currently 250). QueuedThreadPool threadPool = maxThreads == -1 ? new QueuedThreadPool() : new QueuedThreadPool(maxThreads); - threadPool.setDaemon(true); webServer.setThreadPool(threadPool); final String appDir = getWebAppsPath(); diff --git a/src/mapred/org/apache/hadoop/mapred/JobHistory.java b/src/mapred/org/apache/hadoop/mapred/JobHi index 4ba2e38..9d701f5 100644 — a/src/mapred/org/apache/hadoop/mapred/JobHistory.java +++ b/src/mapred/org/apache/hadoop/mapred/JobHistory.java @@ -2787,7 +2787,7 @@ public class JobHistory { - task-attempt-id are unavailable. */ public static String getTaskLogsUrl(JobHistory.TaskAttempt attempt) { - if (attempt.get(Keys.SHUFFLE_PORT).equals("") + if (attempt.get(Keys.HTTP_PORT).equals("") Also attached the new patch. Running on our large production cluster for more than one week. Some unit test failures that don't seem to be related to the change. --------- [junit] Test org.apache.hadoop.io.compress.TestCodec FAILED [junit] Test org.apache.hadoop.fs.TestFsShellReturnCode FAILED [junit] Test org.apache.hadoop.hdfs.TestFileCreation FAILED [junit] Test org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark FAILED [junit] Test org.apache.hadoop.mapred.TestJobHistory FAILED [junit] Test org.apache.hadoop.mapred.TestLostTracker FAILED --------- Found an error in JobHistory.java that breaks TestJobHistory and TestLostTracker. New patch is attached. Changes compared with previous patch: ------- - Keys.TRACKER_NAME, Keys.HTTP_PORT, + Keys.TRACKER_NAME, Keys.HTTP_PORT, Keys.SHUFFLE_PORT, - * @return the taskLogsUrl. null if shuffle-port or tracker-name or + * @return the taskLogsUrl. null if http-port or tracker-name or ------- TestFileCreation and TestNNThroughputBenchmark are passed on individual testcase run after cleaning up. The following two testcases are failed with and without the changes in the patch. [junit] Test org.apache.hadoop.io.compress.TestCodec FAILED [junit] Test org.apache.hadoop.fs.TestFsShellReturnCode FAILED Patch for review and comments.
https://issues.apache.org/jira/browse/MAPREDUCE-4661
CC-MAIN-2017-51
refinedweb
1,947
52.97
Introduction According to the World health organisation (WHO) nearly 50 million people worldwide suffer from dementia, with the most common form of the disease being Alzheimer's which accounts to 60-80% of dementia cases. Having this leaves the person unable to remember important day to day tasks that needs to be completed. One of the common task that people with dementia face is forgetting to take the required medications and being unaware of this. Automatic pill dispensers have been of great help, but with advances in speech recognition and machine learning there are possibilities to build innovative pill dispensers with which a user can interact and keep track of various statistics corresponding to daily pill/medicine intake. What is Curismo? Curismo is a smart pill box that is built on top of the Walabot Pro and Amazon Alexa. Curismo is a portmanteau of Curis (which means healthcare in latin) and monitoring. Curismo detects when a user places their hand on top of the pill box and based on the location of the hand above the pill box (X, Y, Z) and the height between the hand and sensor (<20cm) the pill which is taken is determined by tracking the distance of the hand from the sensor placed under the pill box. Once, the amount of pills taken are logged using the Walabot sensor data, the user can interact with a custom Curismo Alexa skill to keep track of how many pills they have taken, how many pills are remaining to be taken and the time at which the pills were last taken. The user can interact with the skill and get insights to make the right decision by enabling a user to store their prescription and reach their daily pill intake goals with the help of the Curismo Alexa skill which tells how many pills to take and when to take them next. Hardware Required and Setup: The instructions to setup Walabot on your PC is available on the Walabot page under the Quick Start Guide: . 2) Amazon Alexa enabled Hardware/Simulators: If you do not have an amazon echo the alternatives to test the skills are Echo Dot/Echoism/Alexa Developer Kit Simulator. I personally did all the testing on the amazon Developer console which comes in really handy :) 3) A box to place with enough area to place the Walabot inside and cardboard pieces that can be used divide the placement of pills into different sections. Software Setup and Requirements: The software development process is broken down in three parts, the first part covers how to build the program that detects from which direction the hand is approaching and which pill has been taken.The Walabot python program also stores the information about pill being picked and the time at which it was picked and stores it into two separate text files for later use by the Amazon Alexa Python skill. Next, we'll be discussing how to build a custom amazon skill for Curismo using Flask-Ask and Python. And finally, we'll be looking into how to design the front-end Voice User Interface (VUI) of the Curismo Alexa Skill. Part 1 : Working with Walabot 1. Walab with Python: For this project I mainly used Python, for setting up Walabot with python. To install python you can refer this link: Once that's done, you can download the Walabot SDK for your OS. In my case I downloaded the .deb file and installed it via the terminal. Detailed information about the Walabot Python API is available at: When starting the project I was looking for a way to track a users hand, a way to visualise how Walabot tracks an object. To get a visual understanding I mainly referred the Walabot Sensor Targets repository created by gal-vayyar on github. It can be found here What followed next was building the Alexa skill that works together with the Walabot to help the user keep track of their pill intake. 3. Writing the logic for Curismo Python Program: Curismo was built mainly on top of the Walabot sensor targets project. Modifications where made so as to detect from which location the hand is approaching and the height at which the target is located at in real time. To understand how to classify between hand approaching from left, centre or right an observation was made after observing the Walabot target tracking visualiser. By looking at the target visualiser, it is clear that when the arm approaches from the left, the Y-axis magnitude tends to go towards -ve values, when at the centre the y-axis value is between -5 to 5 and towards the right the Y-axis values are +ve and usually greater than 10 for object at far right. Similarly, the z-axis gives the magnitude in centimetre as to the distance of the target from the sensor. In order to confirm that the user has taken a pill the z-axis activation threshold was set to be around 20cm, so that only when the user approaches to a close proximity does the counter corresponding to a particular pill undergoes an increment by 1. Now, that we are able to detect if the hand approached from left, right or centre we can move onto making sure erroneous outputs are avoided. In the program we track the Z-axis (height) distance between the hand and sensor increasing as user takes the hand back after picking up a pill. This is used to confirm that the user has already picked up the pill/medicine at least once. This was implemented in the program as shown below. def update(self, targets): . . . if (targets[i].zPosCm<20): #if hand is under 20cm range if(targets[i].yPosCm>10): #if hand is inside right section r_counter+=1 if(r_counter==1): rm_counter+=1 #right side pill value incremented print('Right Enterances : ',rm_counter) elif(targets[i].yPosCm<-10): l_counter+=1 #print('User Approaching from Left') if(l_counter==1): lm_counter+=1 print('Left Enterances : ',lm_counter) elif(targets[i].yPosCm>-5 and targets[i].yPosCm<5): m_counter+=1 #print('User Approaching from centre') if(m_counter==1): mm_counter+=1)) 4. How is the pill Intake information written into a file? Once the logic for tracking and counting the number of pills that have been taken from the box, we need to next log the details/store the pill box information so that the Alexa skill can access the stored information and use it to provide useful analytics regarding a users pill intake. #if f.write("rm : "+str(rm_counter)+' '+'\n') #info log f.close() #storing current time to another file t = datetime.now() f = open("datatime.txt", "a") f.write("rm : "+str(t.hour)+':'+str(t.minute)+'\n') f.close() For this, I have created two text files, one named 'data.txt' which stores the number of a particular pill taken from the box, the value is labelled with 'lm' , 'rm' and 'mm' keys which are used to uniquely identify that the pills where taken from left section, right or centre section of the pill box. Next, another file named 'datatime.txt' stores the exact time at which a particular pill was taken from the box. This data can be used in the alexa skill to let the user know when he had last taken a particular pill. This is written in the program as follows: if(targets[i].yPosCm>10): #if hand is inside right section r_counter+=1 if(r_counter==1): rm_counter+=1 #right side pill value incremented f = open("data.txt", "a") f.write("rm : "+str(rm_counter)+' '+'\n') #info log f.close() #storing current time to another file t = datetime.now() f = open("datatime.txt", "a") f.write("rm : "+str(t.hour)+':'+str(t.minute)+'\n') f.close() print('Right Enterances : ',rm_counter) elif(targets[i].yPosCm<-10): l_counter+=1 #print('User Approaching from Left') if(l_counter==1): lm_counter+=1 f = open("data.txt", "a") f.write("lm : "+str(lm_counter)+'\n') f.close() t = datetime.now() f = open("datatime.txt", "a") f.write("lm : "+str(t.hour)+':'+str(t.minute)+'\n') f.close() print('Left Enterances : ',lm_counter) elif(targets[i].yPosCm>-5 and targets[i].yPosCm<5): m_counter+=1 #print('User Approaching from centre') if(m_counter==1): mm_counter+=1 f = open("data.txt", "a") f.write("mm : "+str(mm_counter)+'\n') f.close() t = datetime.now() f = open("datatime.txt", "a") f.write("mm : "+str(t.hour)+':'+str(t.minute)+'\n') f.close())) Finally, the data files are read by the python Alexa skill built using flask-ask. Part 2: Amazon Alexa Skill Development 1. Setting up a developer account and understanding Alexa skill development. The first step is to head over to and create an amazon developer account. This enables you to gain access to develop Amazon Skills. When getting started with Alexa I first refereed the free Codeacademy course on developing Alexa skills. It can be found here: Or if you would like other options you can check out: After working on some sample projects, I releasing that the best way to improve my workflow would be to start writing Alexa skills using Python so that I could directly connect the Walabot python code and amazon Alexa skills code. The perfect framework for this is Flask-Ask by John Wheeler, which is a python library that enables you to develop amazon Alexa skills. 2. Building a custom Alexa Skill using Flask-Ask Flask Ask is a great resource to work with if you are a python developer. To get started with flask ask you can refer: John Wheeler has provided some great tutorials on his flask ask page as well,. 3. Writing the Curismo Alexa skill using Python The main features that I've included in the Curismo skill are: - Ask Curismo How many pills are already taken - How many pills are left to be taken - At what time did I have the pills at last (Keeps track of time at which user had pill previously) - What is my prescription (medicine with food or without food) First make sure all the python files are in the same folder. Create two empty text files named 'data.txt' and 'datatime.txt' or any other name you prefer. Next, create a file named templates.yaml and include the follow code for welcome message. welcome: Welcome to curismo! I am your smart pill box assistant. Next, create a new python file which will be the source of our custom amazon skill. At first we need to import some necessary libraries: import logging from flask import Flask, render_template from flask_ask import Ask, statement, question, session from dateutil.parser import parse #for parsing time of pill intake recorded using walabot import re from datetime import datetime Next, we create a set of variables onto which the pill information like (count, time) are stored into by reading the data.txt and datatime.txt files with information logged using Walabot sensor data. #stores count of pill 1, 2 and 3 respectively p1 = 0 p2 = 0 p3 = 0 #used while getting time and count substrings from files v = 0 i =0 match = 0 #used to store time logs read from file p1time = 0 p2time = 0 p3time = 0 #used to set prescription of the user for a given pill prescp_p1 = 3 prescp_p2 = 3 prescp_p3 = 3 The values of these variables are assigned based on reading the data and datatime text files. 4. How is the pill Intake information written into a file? Two functions readnupdate() and readpilltiming() are used to reading pill intake no and timing respectively. readnupdate() function reads data.txt file and stores the count of pill 1, pill 2 and pill 3 taken based on information logged by the Walabot sensor. def readnupdate(): global p1,p2,p3,v,i with open('data.txt', 'r') as f: #file with pill intake number opened in read mode while True: if(i==3): #as we are having only three pills i = 0 if(i<3): v = f.readline() if not v: break if("lm : " in v): #unique label "lm" used to filter left section pill intake [int(s) for s in v.split() if s.isdigit()] p1 = int(s) elif("mm : " in v): [int(s) for s in v.split() if s.isdigit()] p2 = int(s) elif("rm : " in v): [int(s) for s in v.split() if s.isdigit()] p3 = int(s) f.close() readpilltiming() stores time as string to variables p1time, p2time and p3time based on time information corresponding to time at which the user took a pill. The timestamp is written to the file in the Curismo Walabot program. Here, we are reading to the time data logged by the Walabot program so as to use it in the Alexa skill for letting the user know when he had last had a particular pill. def readpilltiming(): global p1,p2,p3,v,i,match,p1time,p2time,p3time with open('datatime.txt', 'r') as f: while True: if(i==3): i = 0 if(i<3): v = f.readline() if not v: break if("lm : " in v): match = parse(v, fuzzy=True) p1time = datetime.strptime(str(match.hour)+':'+str(match.minute), '%H:%M').strftime("%I:%M %p") elif("mm : " in v): match = parse(v, fuzzy=True) p2time = datetime.strptime(str(match.hour)+':'+str(match.minute), '%H:%M').strftime("%I:%M %p") elif("rm : " in v): match = parse(v, fuzzy=True) p3time = datetime.strptime(str(match.hour)+':'+str(match.minute), '%H:%M').strftime("%I:%M %p") f.close() Next, we create a function that defines the event that occurs during launch. app = Flask(__name__) ask = Ask(app, "/") logging.getLogger("flask_ask").setLevel(logging.DEBUG) @ask.launch def starting(): readnupdate() welcome = render_template('welcome') return statement(welcome) Next, I created an intent that is activated when the user asks "how many pills did I have today". @ask.intent("AllPillIntent") def allpill(): readnupdate() global p1,p2,p3 return statement('as of now, you have had {} pill 1, {} pill 2 and {} pill 3'.format(p1,p2,p3)) The next Intent, reads out the daily prescription based on information provided by the user and also informs the number of pills that have been taken at the moment. @ask.intent("DailyPrescpIntent") def dailypres(): readnupdate() return statement('As per your prescription, you need to have {} pill 1 with food, {} pill 2 and {} pill 3 without daily. As of now you have had {} pill 1, {} pill 2 and {} pill 3'.format(prescp_p1,prescp_p2,prescp_p3,p1,p2,p3)) The next intent is used to inform the user about any pill that has not be taken even once. @ask.intent("PillNotTakenIntent") def pillnothad(): readnupdate() global p1,p2,p3 if(p1==0 and p2==0 and p3==0): return statement('you have not had any pills today') elif(p1==0 and p2==0 and p3!=0): return statement('you have not had pill 1 and pill 2') elif(p1==0 and p2!=0 and p3==0): return statement('you have not had pill 1 and pill 3') elif(p1!=0 and p2==0 and p3==0): return statement('you have not had pill 2 and pill 3') elif(p1==0 and p2!=0 and p3!=0): return statement('you have not had pill 1 today') elif(p1!=0 and p2==0 and p3!=0): return statement('you have not had pill 2 today') elif(p1!=0 and p2!=0 and p3==0): return statement('you have not had pill 3 today') else: return statement('you have atleast had one of each pill today') The next Intent is activated when the user asks "When did I last have a pill". It collects time information logged by the Walabot program and then informs user about the timings at which the particular pills where taken at. @ask.intent("PillTimeIntent") def pilltiming(): readpilltiming() global p1time,p2time,p3time return statement('you had pill 1 at {} , pill 2 at {} and pill 3 at {}'.format(p1time,p2time,p3time)) Finally, the last intent informs the user about how many pills are left to be taken by calculating the different between the prescribed number of pills and the amount of pills taken as of that moment. @ask.intent("PillRemainingIntent") def pillremaining(): readnupdate() global prescp_p1,prescp_p2,prescp_p3 r1 = prescp_p1 - p1 r2 = prescp_p1 - p2 r3 = prescp_p1 - p3 return statement('you have {} pill 1, {} pill 2 and {} pill 3 left to be taken today'.format(r1,r2,r3)) Now, that we've written the Curismo Walabot program and Alexa skill, we can move onto linking the python program with Alexa using the Alexa Skill Builder. Part 3: Setting up Alexa Skill on Skill Builder First log into your amazon developer account and the head over to this page: From here, click on create a new skill. If you're using the new Skill Builder UI Console then you can follow right away, it is quite the same in the old UI as well. Additionally, I'll be including the Intent Schema for the old UI as well. Make sure that the invocation name is Curismo. Next, go to the Intents tab, there you'll need to create 5 new intents. Which are as follows: For those using the old UI, the Intent Schema is as follows: { "intents": [ { "intent": "AllPillIntent" }, { "intent": "AMAZON.CancelIntent" }, { "intent": "AMAZON.HelpIntent" }, { "intent": "AMAZON.StopIntent" }, { "intent": "DailyPrescpIntent" }, { "intent": "PillRemainingIntent" }, { "intent": "PillNotTakenIntent" }, { "intent": "PillTimeIntent" } ] } The sample utterances are: AllPillIntent how many medicines did i have today AllPillIntent how many tablets did i take today AllPillIntent how many pills did i take today AllPillIntent how many medicines did i take today AllPillIntent how many pills did i have today AllPillIntent how many tablets did i have today DailyPrescpIntent how many pills do i have to take per day DailyPrescpIntent what is my daily prescription PillRemainingIntent how many more pills should i take today PillRemainingIntent how many pills are remaining to be taken today PillRemainingIntent how many pills do i have to take today PillNotTakenIntent what pills did i forget to take today PillNotTakenIntent what medicines did i not take today PillNotTakenIntent what pills did i not take today PillTimeIntent at what time did i have my pills PillTimeIntent when did i last take my pills PillTimeIntent at what time did i have my medicines PillTimeIntent when did i take my medicines at last Once you've created the Intents make sure you save your model and click on Build Model. Next head over to the configuration tab, in the Endpoints section under the Service Endpoint make sure you select "HTTPS" instead of Amazon ARN. Then next, you'll need to open up terminal and type in (for linux, varies in windows and mac): ./ngrok http 5000 Then copy the final Forwarding URL and paste it in the Default URL field under service endpoint. This ensures that the Curismo Alexa Python Skill is linked with Alexa online via http port 5000. Click next, and finally under global fields make sure you check: My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority Congrats, now you're all set to test the Curismo Alexa Skill. For ensuring that everything works well, follow the sequence given below. - Connect Walabot to PC, place Walabot inside the box and pills on top of the box. - Open curismowalabot.py file and run it in IDLE or any other IDE. - Open terminal and run ./ngrok http 5000 - Open another terminal tab and run python curismoalexa.py - Finally open up the online Alexa Skills simulator and say Open Curismo. The commands that you can try out are: open curismo alexa ask curismo how many pills did i have today alexa ask curismo what pills have i not had today alexa ask curismo when did i last have my pills alexa ask curismo at what time did i last have my pills alexa ask curismo what is my daily prescription alexa ask curismo how many pills do i have to take today Conclusion: Curismo proves to be an efficient pill/medicine intake monitoring tools. It is the first of it's kind Pill box that allows patients with dimentia to interact with the device using Alexa services and get insights into: - How many pills the user has already taken - How many pills are remaining to be taken - At what time did the user last take the pills - What is their daily prescription and how close they are to reaching their daily medication intake goals. I'm optimistic that such systems where user-computer interaction plays a crucial role can shape how patients with dementia attend to their medication on a daily basis. Hope you enjoyed the build, you can ask your queries in the comments below or through DM.
https://www.hackster.io/geeve-george/curismo-a-smart-pill-box-assistant-for-users-with-dementia-7a8dfd
CC-MAIN-2019-51
refinedweb
3,442
61.16
Copying, Deleting, and Renaming Elements Welcome to "Transforming XML." Each column will explain how to handle two or three basic document manipulation tasks using the W3C Standard that was spun off from the Extensible Stylesheet Language (XSL): the XSL Transformations Language, or XSLT. In this first column, we'll start with the basics -- the use of style sheets, the role of the xsl:stylesheet element, and how to copy, delete, and rename elements. (For other material on XSLT that's appeared in XML.com and elsewhere, see the XML.com Resource Guide.) XSL Style Sheets XSLT, according to the W3C Recommendation that specifies it, is "a language for transforming XML documents into other XML documents." As XML becomes more popular, and the dreams of shared DTDs often prove unrealistic, a quick and easy way to convert documents that conform to your DTD into documents that conform to my DTD becomes very valuable. This is especially so if you and I want to do business together without going to the trouble of authoring a DTD that we can both agree on. An XSLT style sheet is an XML document that uses specialized element types from the namespace to specify how to transform a set of elements. Technically, it's not transforming elements into elements, but a source tree into a result tree. This is good news, because by reading a document into a tree structure in memory before carrying out the style sheet's transformations, an XSLT processor can use information from anywhere in the tree when transforming a particular element (or rather, a particular tree node) because the whole document is sitting there in memory. An XSLT processor is a program that applies an XSLT style sheet to a tree representation of an input document, and creates a result tree based upon the style sheet's instructions. Most processors read an XML document into the input tree first, and output the result tree as another document after finishing the transformation, with a net effect of converting one document into another. Currently, the most popular implementations are James Clark's XT, the Apache XML Project's Xalan, and Michael Kay's SAXON. (A recent XSL-List posting from Clark about having no plans for further XT development is bound to hurt its long-term popularity.) Internet Explorer also implements some of XSLT, but its support of the W3C XSLT standard is still a bit idiosyncratic; see their XSL Developer's Guide for details. Check each of these XSLT processors' documentation for information on how to tell it to "use this XSL style sheet to turn this XML input document into this output document." The document (root) element of an XSLT style sheet is usually an xsl:stylesheet element, but it doesn't have to be that exact element: A style sheet can use xsl:transform as a synonym for xsl:stylesheet. You don't have to use xsl as the namespace prefix to point to the namespace mentioned above, but it is a common convention. There are ways to incorporate XSLT instructions directly into a document that doesn't use or refer to an xsl:stylesheet or xsl:transform element, but a serious transformation usually uses one of these in its own file. XSLT offers various element types as potential children of this xsl:stylesheet element, each providing different style sheet instructions to the XSLT processor. The most important is xsl:template, which specifies a template rule. Copying Elements to the Output A template rule essentially says "when you find an input tree node that corresponds to the value of my match attribute, output text with the structure described by the template in my contents." The value of the match attribute can be a simple element type name, or a more complex pattern describing the element, attribute, comment, or processing instruction nodes that the template applies to. Two popular XSLT elements to include in a template rule's contents are xsl:copy, which copies the current node, and xsl:apply-templates, which processes the children of the current node. For example, the single template in the following style sheet will copy the start-tags, end-tags, and contents of all title elements to the output. (Because of XSLT's default transformation rules, the contents of other elements will also be output without their tags.) <xsl:stylesheet xmlns: <xsl:template <xsl:copy> <xsl:apply-templates/> </xsl:copy> </xsl:template> </xsl:stylesheet> Note that this template rule only acts on nodes representing title elements. Any attributes of title elements have their own nodes in the input tree and require their own template rule or rules if the XSLT processor is supposed to copy them to the output. The xsl:copy-of element, on the other hand, can copy the entire subtree of each node that the template selects. This includes attributes, if the xsl:copy-of element's select attribute has the appropriate value. In the following example, the template copies title element nodes and all of their descendant nodes -- in other words, the complete title elements, including their tags, subelements, and attributes: <xsl:template <xsl:copy-of </xsl:template> Deleting Elements If a template rule says "output my contents when you find an input tree node that corresponds to the value of my match attribute," what happens if there is no content, as with the following two templates? <xsl:template </xsl:template> <xsl:template </xsl:template> They'll output nothing, essentially deleting the matched nodes from the output. The first template rule says "when you find a nickname element, output nothing." The second takes advantage of the flexibility allowed in the patterns that are legal values for the template element's match attribute. While a match value of "project" would delete all the project elements from the output, the match value shown will only delete project elements whose status attributes have the string "canceled" as their value. Changing Element Names We saw above that xsl:apply-templates processes only the children of the current node. For an element, this means everything between the tags, but nothing in the tags themselves. If your template outputs an input element's content but not its tags, you can surround that content with anything you want, as long as it doesn't prevent the output document from being well-formed. For example, the following template rule tells an XSLT processor to take any article element fed to it as input, and output its contents surrounded by html tags. <xsl:template <html> <xsl:apply-templates/> </html> </xsl:template> The html tags add an actual html element to the style sheet, but because the tags have no xsl: prefix, the resulting html element is known in XSLT as a "literal result element." The element isn't some special XSLT instruction, so an XSLT processor will leave it alone and pass its tags along to the output looking just like they do in the style sheet. Instead of enclosing the article template rule's xsl:apply-templates element with html tags, another way to convert article elements to html elements would be to enclose the xsl:apply-templates element with an xsl:element element that had "html" specified as the value for its name attribute. In this particular case, that would have been overkill -- the markup shown above is much simpler and gets the job done -- but the xsl:element element's ability to provide the element type name in an attribute value lets you use expressions that are more complex than a simple string like "html" as that element type name. This makes it possible to dynamically create the element name by concatenating strings, calling functions, or by retrieving element content or attribute values from elsewhere in the document to use in the element name. We'll learn more about these tricks in future "Transforming XML" columns.
http://www.xml.com/pub/a/2000/06/07/transforming/index.html
crawl-003
refinedweb
1,309
53.24
A little npm module that uses the Hashids library to mask IDs. This adds on a version and a object type id so if you just get an id you can figure out for what object type it was for and can have different versions based on salt. To mask your id for object user: var ObjectIdMask =salt: 'this is a secret, you should set it or I use a system default';var user_type_id = 1; //Could get this from the database it is up to you.var user =id: 1email: 'test@test.com'name: 'John';last_name: 'Doe'userid = ObjectIdMask;return user; Now if you get this object back you can decode it like this: var user = reqbody;var user_type_id = 1; //Could get this from the database it is up to you.userid = ObjectIdMask; You can add your own versions as well as override the default version to yours: var ObjectIdMask =default_version: 'Custom 1'versions:'Custom 1':salt: 'this is my secret'{//This needs to return an object like thisreturn{ //Takes postive integer numbers only// returns a encrypted string}{// return a number (the Id)};}; We will use just plain numbers as we move up in versions so if you do customize it make sure you add some namespace in the version. You can also override the default delimiter with whatever you want. Here is how you can override the delimiter: var ObjectIdMask =delimiter: ':'salt: 'this is my secret';var user_type_id = 1;ObjectIdMask;//This returns Version:Hash isntead of Version-Hash
https://www.npmjs.com/package/object-id-mask
CC-MAIN-2017-34
refinedweb
249
50.7
Sample Chapter: The .NET Base Class Libraries Other Useful Namespaces There are plenty of other useful classes contained in sub-namespaces of the System namespace. Some are covered elsewhere in this book. Four are covered here because almost every .NET developer is likely to use them: System::IO, System::Text, System::Collections, and System::Threading. The System::IO Namespace Getting information from users and providing it to them is the sort of task that can be incredibly simple (like reading a string and echoing it back to the users) or far more complex. The most basic operations are in the Console class in the System namespace. More complicated tasks are in the System::IO namespace. This namespace includes 27 classes, as well as some structures and other related utilities. They handle tasks such as Binary reads and writes (bytes or blocks of bytes) Creating, deleting, renaming, or moving files Working with directories This snippet uses the FileInfo class to determine whether a file exists, and then deletes it if it does: System::IO::FileInfo* fi = new System::IO::FileInfo("c:\\test.txt"); if (fi->Exists) fi->Delete(); This snippet writes a string to a file: System::IO::StreamWriter* streamW = new System::IO::StreamWriter("c:\\test.txt"); streamW->Write("Hi there" ); streamW->Close(); Be sure to close all files, readers, and writers when you have finished with them. The garbage collector might not finalize the streamW instance for a long time, and the file stays open until you explicitly close it or until the instance that opened it is finalized. When Typing Strings with Backslashes - The backslash character (\) in the filename must be "escaped" by placing another backslash before it. Otherwise the combination \t will be read as a tab character. This is standard C++ behavior when typing strings with backslashes. Check the documentation to learn more about IO classes that you can use in console applications, Windows applications, and class libraries. Keep in mind also that many classes can persist themselves to and from a file, or to and from a stream of XML. The System::Text Namespace Just as Console offers simple input and output abilities, the simplest string work can be tackled with just the String class from the System namespace. More complicated work involves the System::Text namespace. You've already seen System::Text::StringBuilder. Other classes in this namespace handle conversions between different types of text, such as Unicode and ASCII. The System::Text::RegularExpressions namespace lets you use regular expressions in string manipulations and elsewhere. Here is a function that determines whether a string passed to it is a valid US ZIP code: using namespace System; using namespace System::Text::RegularExpressions; // . . . String* Check(String* code) { String* error = S"OK"; Match* m; switch (code->get_Length()) { case 5: Regex* fivenums; fivenums = new Regex("\\d\\d\\d\\d\\d"); m = fivenums->Match(code); if (!m->Success) error = S"Non numeric characters in 5 digit code"; break; case 10: Regex* fivedashfour; fivedashfour = new Regex("\\d\\d\\d\\d\\d-\\d\\d\\d\\d"); m = fivedashfour->Match(code); if (!m->Success) error = S"Not a valid zip+4 code"; break; default: error = S"invalid length"; } return error; } The Regex class represents a pattern, such as "five numbers" or "three letters." The Match class represents a possible match between a particular string and a particular pattern. This code checks the string against two patterns representing the two sets of rules for ZIP codes. The syntax for regular expressions in the .NET class libraries will be familiar to developers who have used regular expression as MFC programmers, or even as UNIX users. In addition to using regular expressions with classes from the System::Text namespace, you can use them in the Find and Replace dialog boxes of the Visual Studio editor, and with ASP.NET validation controls. It's worth learning how they work. Regular Expression Syntax A regular expression is some text combined with special characters that represent things that can't be typed, such as "the end of a string" or "any number" or "three capital letters." When regular expressions are being used, some characters give up their usual meaning and instead stand in for one or more other characters. Regular expressions in Visual C++ are built from ordinary characters mixed in with these special entries, shown in Table 3.2. Here are some examples of regular expressions: ^test$ matches only test alone in a string. doc[1234] matches doc1, doc2, doc3, or doc4 but not doc5. doc[1-4] matches the same strings as doc[1234] but requires less typing. doc[^56] matches doca, doc1, and anything else that starts with doc, except doc5 and doc6.n -H\~ello matches Hillo and Hxllo (and lots more) but not Hello. H[^e]llo has the same effect. [xy]z matches xz and yz. New *York matches New York, NewYork, and New York (with several spaces between the words). New +York matches New York and New York, but not NewYork. New.*k matches Newk, Newark, and New York, plus lots more. World$ matches World at the end of a string, but World\$ matches only World$ anywhere in a string. Table 3.2 Regular Expression Entries The System::Collections Namespace Another incredibly common programming task is holding on to a collection of objects. If you have just a few, you can use an array to read three integers in one line of input. In fact, arrays in .NET are actually objects, instances of the System::Array class, which have some useful member functions of their own, such as Copy(). There are times when you want specific types of collections, though, and the System::Collections namespace has plenty of them. The provided collections include Stack. A collection that stores objects in order. The object stored most recently is the first taken out. Queue. A collection that stores objects in order. The first stored is the first taken out. Hashtable. A collection that can be searched far more quickly than other types of collections, but takes up more space. ArrayList. An array that grows as elements are added to it. SortedList. A collection of two-part (key and value) items that can be accessed by key or in numerical order. BitArray. A compact way to store an array of true/false flags. One rather striking omission here is a linked list. You have to code your own if you need a linked or double-linked list. Page 4 of 5
http://www.developer.com/net/cplus/article.php/10919_3304021_4/Sample-Chapter--The-NET-Base-Class-Libraries.htm
CC-MAIN-2017-22
refinedweb
1,076
64.51
I released Amara 1.1.6 last week (see the announcement). This version requires 4Suite XML 1.0b2. As usual, though, I have prepared an "allinone" package so that you do not need to install 4Suite separately to use Amara. The biggest improvements in ths release are to performance and to the API. Amara takes advantage of a lot of the great performance work that has gone into 4Suite (e.g. Saxlette). There is also a much easier API on-ramp that I expect most users will appreciate. Rather than having to parse using: from amara import binderytools as bt doc = bt.bind_string(XML) #or bt.bind_uri or bt.bind_file or bt.bind_stream You can use import amara amara.parse(XML) #Whether XML is string, file-like object, URI or local file path There are several other such simplifications. There is also the xml_append_template facility, which is very handy for generating XML (see how Sylvain uses it to simplify atomixlib). Thanks to all the folks who helped with suggestions, patches, review, etc.
http://copia.posthaven.com/amara-116
CC-MAIN-2018-30
refinedweb
172
69.18
Extracting notes and highlights from iBooks Published on Monday 3rd January, 2022. As much as possible, I like to work in plain text. This post is written in Markdown and I like the freedom and versatility that gives me. I read a lot and on a variety of different devices. I feel that getting all of my notes and highlights into the same place is quite important – it helps with my note taking and allows me to make better connections. The problem I find is that when I read on my laptop or iPad, the data is hard to get at. On my laptop, iBooks maintains two sqlite3 databases – one for the books and one for the annotations (notes and highlights). I wanted to get all of the data out of there and couldn’t find a tool to be able to manage it. So, I made one in JavaScript (and might migrate it TypeScript soon). Getting the data – sqlite3 I used the sqlite3 package to access the databases. The data I need is in two separate files, so I had to use the attach command to be able to allow both databases to be accessed simultaneously. import sqlite3 from "sqlite3"; import { ANNOTATION_PATH, BOOK_PATH } from "./config.js"; class bookDb { constructor() { this.db = new sqlite3.Database(ANNOTATION_PATH, (err) => { if (err) { console.error(err.message); } console.log("Connected to the annotation database."); }); this.db.serialize(() => { this.db.run(`attach database '${BOOK_PATH}' as books`); }); } } I found the documentation for sqlite3 a little challenging, particularly for data retrieval. The package leans heavily on callbacks and all of the examples I could find console.log the data. This is fine to show that data is accessible but not immediately helpful in using the data in the program. I ended up wrapping the database call in a Promise and then resolving or rejecting based on the query response. async getAnnotations() { const { db } = this; const sql = `select ZANNOTATIONASSETID as asset_id, ZTITLE as title, ZAUTHOR as author, ZANNOTATIONSELECTEDTEXT as selected_text, ZANNOTATIONNOTE as note, ZANNOTATIONREPRESENTATIVETEXT as represent_text, ZFUTUREPROOFING5 as chapter, ZANNOTATIONSTYLE as style, ZANNOTATIONMODIFICATIONDATE as modified_date, ZANNOTATIONLOCATION as location from ZAEANNOTATION left join books.ZBKLIBRARYASSET on ZAEANNOTATION.ZANNOTATIONASSETID = books.ZBKLIBRARYASSET.ZASSETID order by ZANNOTATIONASSETID, ZPLLOCATIONRANGESTART ;`; try { return new Promise((resolve, reject) => { db.all(sql, (err, results) => { if (err) { console.log(`Problem with Table? ${err}`); reject(`Problem with Table? ${err}`); } resolve(results); }); }); } catch (err) { console.log(err); } } Now, I could initialise my class and get the annotations. Templating I decided to use mustache to template my notes in Markdown format. The current version has a very simple template and doesn’t yet use all of the data I’ve queried: # {{ title }} By {{author}} ## My notes <a name="my_notes_dont_delete"></a> {{#notes}} - {{.}} {{/notes}} I needed to get the render method from mustache. Using ESM, I couldn’t destructure render from the import directly, so I had to do this in two steps: import mustache from "mustache"; const { render } = mustache; I then imported the template, shamelessly using a blocking synchronous function because I don’t have a lot of data. import fs from "fs" const knownTemplate = fs.readFileSync("./templates/known.md").toString(); Now it was time to wrangle my data into a format that was going to be useful. When you delete a book from the iBooks library, the entry is removed but the annotations remain. That means we can’t tell what the source of the highlight or note is. I decided I’d handle both of these cases separately. First, for the known source: const sourceKnown = results.filter((item) => item.title); let currentResult = []; sourceKnown.forEach((result, index) => { if (sourceKnown[index - 1]?.title != result.title) { // 1 if (currentResult.length > 0) { //2 const formattedTitle = currentResult[0].title.split(" ").join(""); // 3 const obj = { //4 title: currentResult[0].title, author: currentResult[0].author, notes: currentResult.map( (result) => `${result.note ? `*${result.note}* ` : ""}${result.selected_text}` ), }; const output = render(knownTemplate, obj); // 5 fs.writeFileSync(`${NOTE_DIR}${formattedTitle}.md`, output); // 6 } currentResult = []; } if (result.note || result.selected_text || result.represent_text) { // 7 currentResult.push(result); } }); - The annotations have been ordered by title and so, if the current annotation has a different book title from the previous then we have dealt with a full set of annotations. - However, there were some occasions where a book has no annotations so I needed to deal with that. Sorry for the nested if loops but I think it’s more readable here. - Just tidying up the title to make sure there are no spaces in the file name. - This is the object I’ll pass through to the render function. I’m getting the global values from the first item in the array. I’m combining the all of the notes and annotations into a single array that will then be templated. - I’m calling the render method with the template and the data. - Writing to the file system – again, shamelessly synchronous (maybe that could be a band name?) before clearing out the currentResult array. - If the annotation has a note or some selected_text it should be pushed to the currentResult array. That only leaves dealing with the unknown sources, which is pretty similar: const unknownTemplate = fs.readFileSync("./templates/unknown.md").toString(); const sourceUnknown = results.filter((item) => !item.title); const unknownRendered = render(unknownTemplate, { notes: sourceUnknown .filter((item) => item.selected_text || item.note) .map( (result) => `${result.note ? `*${result.note}* ` : ""}${result.selected_text}` ), }); fs.writeFileSync(`${NOTE_DIR}/source_unknown.md`, unknownRendered); The only difference here is that we can wrangle and render the data in one step before writing to the file system. A decent first step This has allowed me to get all of my highlights into Markdown files and add them into my note-taking process. It was a fun learning experience to get to know sqlite a bit more. I’ve extracted the configuration into a file so that other people could use it but there is more to do to allow this to be a more generally. Useable tool. Here’s the repo if you’re interested 🙂 Last updated on Monday 3rd January, 2022.
https://www.kevincunningham.co.uk/posts/extracting-notes-and-highlights-from-ibooks/
CC-MAIN-2022-33
refinedweb
996
50.02
tensorflow:: ops:: SparseReshape #include <sparse_ops.h> Reshapes a SparseTensor to represent values in a new dense shape. Summary This operation has the same semantics as reshape on the represented dense tensor. The input_indices are recomputed based on the requested new_shape. If one component of new_shape is the special value -1, the size of that dimension is computed so that the total dense size remains constant. At most one component of new_shape can be -1. The number of dense elements implied by new_shape must be the same as the number of dense elements originally implied by input_shape. Reshaping does not affect the order of values in the SparseTensor. If the input tensor has rank R_in and N non-empty values, and new_shape has length R_out, then input_indices has shape [N, R_in], input_shape has length R_in, output_indices has shape [N, R_out], and output_shape has length R_out. Args: - )
https://tensorflow.google.cn/api_docs/cc/class/tensorflow/ops/sparse-reshape?hl=hi
CC-MAIN-2022-33
refinedweb
145
63.8
how to send email please give me details with code in jsp,servlet how to send email please give me details with code in jsp,servlet how to send email please give me details with code in jsp,servlet pls send me the code for login and register - Java Beginners pls send me the code for login and register pls immediately send me the jsp code for login and registration with validation with java bean in mysql database... Hi friend, This login action code full description of program code full description of program code escribe me below code how this program code is make and work, and those method is use in this why? import java.util.*; class SortList { public static void main(String[]args) { Scanner input=new code to send sms alerts using jsp online code to send sms alerts using jsp online I am new to mobile aplication development. pls send me the code for sms alerts after clicking the button send me javascript code - Java Beginners send me javascript code please send me code javascript validation code for this html page.pleaseeeeeeeee. a.first:link...        Hi friend, complete code error please send me the solution error please send me the solution HTTP Status 500 - type Exception report description The server encountered an internal error...) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) note The full stack trace to send a mail - JSP-Servlet how to send a mail Dear sir, I am able to send a mail.But... is 2000.Regards Hr. The following is a code i used //sewnding a method... how to get a matter as it is what am sending..please help me sir if any changes pls send code pls send code pls send code for set database value into text box based on selected value in struts and jsp use any database and get its code for getting values from google Send forgot Password through mail - JSP-Servlet hint questions please send me a example code for this question thanks...Send forgot Password through mail hello every one I am... is provided) Now my question is how do i send the password if admin click on forgot Reply Me - Java Beginners Reply Me Hi, Please Help Me using jsp technologies I have... understood my problem then please send me..... Hi friend, Please specify the problem in details and give full code having the problem. Thanks please send me javascript validation code - Java Beginners please send me javascript validation code hallo sir , please send me java script code for this html page.since i want to do validation.i am a new user in java ....please send me its urgent Servlet Response Send Redirect - JSP-Servlet Servlet Response Send Redirect Hi, Thank you for your previous answer, the code works great. Sorry to bother you guys and perhaps this would... editing records. 1. In the code that you have given last time, I want JSP CODE JSP CODE Please help me as soon as possible.Its Urgent. I am working on my college ALUMNI PORTAL. I want to have a ADD FRIEND option in a user's profile. Please send me code send HTML Email with jsp and servlet send HTML Email with jsp and servlet Can You please show me how to send html Email using JSP and Servlet thank you send redirect in JSP - JSP-Servlet send redirect in JSP How can I include a message i.e "redirected to this page because blah blah" in the send redirect page? Hi friend... the following code: If id = administration { response.sendRedirect You asked full source code for search - Development process You asked full source code for search Hi, For this code can u give me code for display records when i select any field Search Page var arr = new Array(); arr["Select"] = new Array("-select send mail in PHP send mail in PHP what are configurations needed to send mail in php.ini file by localhost ,please give me complete details about this,and code in datail jsp please send me java script for html validation - Java Beginners please send me java script for html validation please send me code for javascript validation .........please send me its urgent a.first:link { color: green;text-decoration:none; } a.first:visited{color:green send me example of jmsmq - JMS send me example of jmsmq please send me example about jmsmq (java microsoft message queuing ) library code problem - JSP-Servlet have a problem with open the next form. plz, help me. thanks, Hi friend, Please give me detail and send me error code page. Please...jsp code problem Hi, I have employee details form in jsp. After Reply me - Java Beginners in the database... if u understood my question then then please send me code oterwise........ You Want only Jsp Code? You Heard About MVC Architecture or not. Before I Send Jsp Code also check it. if possible just give me a call to my number this code will be problem it display the error again send jsp for registration form this code will be problem it display the error again send jsp for registration... RESEND THE CODE org.apache.jasper.JasperException: java.lang.NumberFormatException... in database as text and set the mobile field or telephone field as string in your code please send me the answer - JDBC please send me the answer -difference between DriverManager and DataDourse what is Datasourse? What r the advantages? what is the difference between DriverManager and DataDourse code - JSP-Servlet code hi can any one tell me how to create a menu which includes 5 fields using jsp,it's urgent Hi friend, Plz give details with full source code where you having the problem. Thanks java code to send email using gmail smtp server java code to send email using gmail smtp server please send me the java code to send email using gmail smtp server. and how to send verification code Plz send me answer quckly Plz send me answer quckly Respected Sir, myself is pavan shrivastava.i want ask a question that is ( we can't create object of interface then how would possible to create object help me help me HI. Please help me for doing project. i want to send control from one jsp page to 2 jsp pages... is it possible? if possible how to do full form - XML full form what is the full form of org and jdom. eg: private org.jdom.Namespace nxi=null; what does the above statement mean...i.e each word...:// Java or Jsp code - JSP-Servlet Java or Jsp code Hello Sir, How to create the code... the question and answer page using the radio buttons.please help me to solve this problem.please send the codes. If the questinaire page, the answer is wrong then make online shopping code using jsp online shopping code using jsp plz send me the code of online shopping using jsp or jdbc or servlets plz plz help me Jsp Code - Development process ".When i click search button it has to display data from database. Can u send me d jsp code . Thanks Prakash Hi Friend, We are providing you the JSP code where we have retrieve the data through the date fields.In database we Reply Me - Java Beginners Reply Me Hi, Details structure Using jsp code I m... on name plz send this code immediately Thanks i have given the code in juzt previous question where u entered the table structure jsp code problem - JSP-Servlet jsp code problem hi, I am going to execute the following code which has been given your jsp tutorial. retrive_image.jsp: but while I... server. plz help me to run this code........ HTTP Status 500 - type Exception help me help me please send me the java code to count the number of similar words in given string and replace that word with new one Pls send code Pls send code I am Mohini Charankar suppose Name="Mohini" Edit Button Click on that I change my Name with "Mohini/" Save it and page refresh After... an error pls send code CAN U HELP ME TO CODE IN JSP FOR ONLINE VOTING SYSTEM CAN U HELP ME TO CODE IN JSP FOR ONLINE VOTING SYSTEM can u help me to code in jsp for online voting system Send Email From JSP & Servlet J2EE Tutorial - Send Email From JSP & Servlet... webserver, using JavaMail API, the following code shows how the required... for executing servlets and JSP . It is a joint effort plz send code for this plz send code for this Program to calculate the sum of two big numbers (the numbers can contain more than 1000 digits). Don't use any library classes or methods (BigInteger etc code error - JSP-Servlet code error hii this program is not working becoz when the mouse... is declared in describe function . this is not doing this plz tell me where...="dskldjlskda ,dnkjned send us a message"; } function describe1() { alert("message please send me the banking data base in swings please send me the banking data base in swings sir, please send me how to create the banking data base program in swings code hi i am Ruchi can anybody plz tell me the jsp code... visit the following links: jsp code for display of data from database and snap shot of the output jsp code for display of data from database and snap shot of the output ...... in which i have entered these data.plz some one help me..i m going to submit my Help me the code java.lang.NullPointerException...) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) here is code login.jsp <html> <body.../jsp/login.jsp"); }else{ out.println("<html> Help me the code , i am using login.html ,login .jsp,login.java and web.xml code...) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) // here is code login.jsp <...;quot;/Simple/jsp/login.jsp"); }else{ out.println(" ... Send me Binary Search - Java Beginners Send me Binary Search how to use Binary think in java give me the Binary Search programm thx.. Hi friend, import java.io.*; public class BinarySearchDemo { public static final int NOT_FOUND = -1 jsp code - JSP-Servlet jsp code Can anyone help me in writing jsp/servlet code to retrieve files and display in the browser from a given directory. Hi Friend, Try the following code: Thanks jsp code compilation error - JSP-Servlet jsp code compilation error hai, iam doing online banking project.i... the following code.but it shows the following error.can you tell me where is the error and also what is the proper code for funds transfer. HTTP JSP code JSP code I get an error when i execute the following code : <... = con.createStatement(); st.executeQuery(query); %> <jsp:forward</jsp:forward> HTTP Status 500 - type Exception report javascript code problem - JSP-Servlet javascript code problem Thanks for sending answer.but actually what u send is not my actual requirement.first look this code. Subject...; "> in above code which is jsp and struts form bean jsp code - JSP-Servlet jsp code I need code for bar charts using jsp. I searched some code but they are contain some of their own packages. Please give me asimple code... friend, Code to solve the problem : Thanks Html/JavaScript code - JSP-Servlet , ------------------------------------------ If, you have any problem then , please send me detail with code. Thanks. ok this is code snippet of JSP1 that is used to display teh table. --JSP1... the corresponding row that was clicked. please can someone help me.   please send code - Java Beginners please send code hai friends plese provide code for fallowing... number it should be taken as single number) URGENTLY send the code for this Hi friend, Code to solve the problem : class StringExampleJava please help me. please help me. Please send me a code of template in opencms and its procedure.so i can implement the code. Thanks trinath jsp code - JSP-Servlet jsp code Hello Everybody, can anyone help me to findout the modules as i am developing a whiteboard application using jsp? this application is my dream application. Thank you please help me. please help me. Please send me the validation of this below link. the link is Thanks Trinath JSP Code - JSP-Servlet JSP Code Hi, Do we have a datagrid equivalent concept in JSP? If so, then please help me to find the solution of the problem. Its Urgent..., Please visit the following links: JSP - JSP-Servlet to call the methods........ pls... send me code how to do this...... Hi vasu can u send your complete application with full description otherwise... JSP page I want to use the variables and methods which i have declared in another jsp code - JSP-Servlet jsp code hello frns i want to display image from the database along... so please tell me the solution how to write a text after image display... plese send information how to do this - Development process plese send information how to do this present i am doing project on javaServlets,jsp,javascript plese see this i have created like subject... .send confirmation about car modification .cancellation conformation send without authentication mail using java - JavaMail send without authentication mail using java Dear All, I am tring to send simple mail without authentication using java. is it possible, could i send email without authentication. I have written following code. public Unable to send mail using php script Unable to send mail using php script Hello i am trying to send mail... ; For Win32 only. ;sendmail_from = me@example.com I have this php code can anyone please tell me where am i going wrong..i am new to php. thank you java code for sending sms - JSP-Servlet java code for sending sms hello sir, I want a code for sending sms on mobile . please send me if u have this Thanks & regards Dharmendra Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/50457
CC-MAIN-2015-22
refinedweb
2,340
71.34
Hi, The Command Line switches can be accessed through the Command() function that is available with Microsoft.VisualBasic namespace, which is automatically included when you open a VB.Net application in the VStudio IDE. I have just written a small code to illustrate this- Imports System Imports System.IO Imports Microsoft.VisualBasic Module CommandLineSwitch Sub Main() Console.Write(Command()) End Sub End Module You can test this by compiling in the CommandLine using the command line compiler - >vbc /target:exe /out:CmdLineSwitch.exe TestCL.vb, where the code is saved as TestCL.vb. After compiling run in the command line as CmdLineSwitch parameter 1 2 and see the output. This example can be adopted in your windows application as well. Hope this will help you. With Prem and Om In the Name of Iswara Mahes VB.NET Command line switch This conversation is currently closed to new comments.
https://www.techrepublic.com/forums/discussions/vbnet-command-line-switch/
CC-MAIN-2017-51
refinedweb
148
60.82
- 19 Sep, 2019 1 commit When a project export completes, it removes everything in `Project#import_export_shared.archive_path`, which can erase files needed for another ongoing project export. This is problematic for custom templates, which exports an existing project to get the most recent changes and imports that archive to another project. To avoid this from happening, we generate a random unique subpath in the shared temporary directory so that multiple exports can work at the same time. Previously the path structure was as follows: 1. Project export files stored in: /shared/tmp/project_exports/namespace/project/:random 2. Project export .tar.gz files stored in: /shared/tmp/project_exports/namespace/project 3. Project export lock file: /shared/tmp/project_exports/namespace/project/.after_export_action Now: 1. Project export files stored in: /shared/tmp/project_exports/namespace/project/:randomA/:randomB 2. Project export .tar.gz files stored in: /shared/tmp/project_exports/namespace/project/:randomA 3. Project export lock files stored in: /shared/tmp/project_exports/namespace/project/locks The .tar.gz files are also now cleaned up in the AfterExportStrategy. Also, ensure import/export path cleanup always happens. A failure to update the database or object storage shouldn't block us from cleaning up stale directories. This is especially important to clear out stale lock file and archive paths. Closes - 18 Sep, 2019 39 commits - Mike Lewis authored Changing PA docs for no scatter and background job kicking in See merge request gitlab-org/gitlab!17141 - Virjinia Alexieva authored Re-factor: Move auth related `before_actions` into parent controller See merge request !16644 - Mayra Cabrera authored Enable migration cops for Geo migrations tool Closes #13814 See merge request !16697 - Toon Claes authored The migration rubocop rules should only be applied to Geo tracking database migrations. Closes gitlab-org/gitlab#13814 - Robert Speicher authored Revert "Merge branch '10395-COAR-phase-2' into 'master'" See merge request !17165 - Kerri Miller authored This reverts merge request !16187 - Mike Greiling authored Update GitLab Packages See merge request !17161 - - Mike Greiling authored Minor text update for productivity analytics See merge request !17085 - Tim Zallmann authored Productivity Analytics: Reset MR table page on main chart click See merge request !17136 - Martin Wortschack authored - It prevents the loading indicator and the message from being displayed at the same time Fixes a spelling mistake for "pathes" => "paths" See merge request !16932 - Sam Beckham authored This happens across several files, mainly feature specs. - Sean Carroll authored This removes the validation for the name, which enables legacy releases that omitted this value to be imported again. Closes #31868 Update GitLab Packages See merge request !16910 - Jan Provaznik authored Artifacts Page Backend See merge request !16630 Fix a wrong assets image name for gitlab-ee on 'dev' Closes #32280 See merge request !17134 - Fatih Acet authored Merge branch '32074-productivity-analytics-update-text-to-not-mention-filtering-by-assignee' into 'master' Resolve "Productivity Analytics: Update text to not mention filtering by assignee" Closes #32074 See merge request !17115 - Martin Wortschack authored - Removes "assignee" from the text since filtering by assignee is currently not supported Add dependency proxy usage ping See merge request !17060 Speed up snippet finder specs See merge request !16985 - Peter Leitzen authored Before this commit Finished in 2 minutes 38.2 seconds (files took 2.09 seconds to load) 602 examples, 0 failures After this commit Finished in 51.24 seconds (files took 2.14 seconds to load) 602 examples, 0 failures Changed confidential quick action to only be available on non confidential issues See merge request !16902 - Removes artifact searching from this MR. This will be followed up in a separate issue. Ports the changes from gitlab-foss!32590 Signed-off-by: Rémy Coutable <remy@rymai.me> - Enrique Alcántara authored - Tim Zallmann authored Display if an issue was moved in issue list See merge request !17102 - Winnie Hellmann authored (cherry picked from commit 10327470da58373f59b7d76990a1bbad5339ca63) - Andreas Brandl authored Add code analytics tables See merge request !16514 - Fixes #30967 - Over one thousand todo messages displaying a count of one in the UI. Closes #30967 See merge request !16844
https://gitlab.com/gitlab-org/gitlab/commits/8d5f875c28b9bd25895ae5b8ea516be8004ee6e7
CC-MAIN-2020-05
refinedweb
671
50.02
Django has a beautiful feature of signals which will record all the actions performed on the particular model. In the current blog post, we’ll learn how to use Django's built-in signals and how to create custom signal Using Django’s built in Signals: Django has a lot of built-in signals like pre_save, post_save, pre_delete and post_delete and etc., For more information about Django's built-in signals visit. Now we’ll learn how to use Django's pre_delete signal with a simple example. In the way we use pre_delete in the present blog post we can use other signals also in the same way. We have two models called Author and Book their models are defined in models.py as below. # In models.py from django.db import models class Author(models.Model): full_name = models.CharField(max_length=100) short_name = models.CharField(max_length=50) class Book(models.Model): title = models.CharField(max_length=100) slug = models.SlugField(max_length=100) content = model.TextField() status = models.CharField(max_length=10, default=”Drafted”) author_id = model.PositiveIntegerField(null=True) In the above two models we are not having an author as foreignKey to Book model, so by default when the Author gets deleted it won’t delete all the Books written by the author. This is the case where signals come to picture, we can achieve this by using pre_delete or post_delete signals. For this, we’ll write a receiver function which will be called on pre_delete of the author object. Write the following code in your models.py def remove_books(sender, instance, **kwargs): author_id = instance.id Book.objects.filter(author_id=author_id).delete() pre_delete.connect(remove_books, sender=Author) In the above snippet sender is the model name on which the pre_delete signal is called, in the current example it is Author model. Remove_books is the receiver function which will be called on delete of the Author object. It takes sender, instance(the Author instance which is called for delete) and any other keyword arguments. Writing Custom Signals: Now in this section, we’ll learn, how to create custom signals using the same above example. Suppose if the author has to get an email when the Book status is changed to “Published”. For this, we have to create a file called signals.py to create a custom signal. By default all the signals are django.dispatch.Signal instances. # In signals.py import django.dispatch book_publised = django.dispatch.Signal(providing_args=["book", “author”]) Create receivers.py which contains the receiver function which will be called when the signal is dispatched. # In receivers.py from django.dispatch import receiver from .signals import * @receiver(book_publised) def send_mail_on_publish(sender, **kwargs): # contains the logic to send the email to author. In the above snippet receiver is decorator which tells the book_published signal that send_mail_on_publish is the receiver function which will be called when the book_publisehd signal is dispatched. We can dispatch signal anywhere as following. book_published.send(sender=Book, book=<Book Instance>, user=<Author Instance>) Note: The most important to be remembered is when we just call book_published.send(*****) it won’t hit the receiver function. To make the signal hit the receiver function is we have to import receiver in your app’s __init__.py. # In __init__.py import receivers But from Django 1.9+ if we import receivers in __init__.py, it will cause runtime error of Apps not ready. To avoid this issue we have to import this inside the ready function of your apps.py # In apps.py from django.apps import AppConfig class LoadReceivers(AppConfig): name = testapp def ready(self): from . import receivers...
https://micropyramid.com/blog/using-djangos-built-in-signals-and-writing-custom-signals/
CC-MAIN-2019-51
refinedweb
598
60.82
ASP.NET is a powerful platform for building Web applications. With any platform, it is important to understand what is going on behind the scenes to build robust applications. The ASP.NET page life cycle is a good example to explore so you know how and when page elements are loaded and corresponding events are fired. Ready, aim, fire! The requesting of an ASP.NET page triggers a sequence of events that encompass the page life cycle. The Web browser sends a post request to the Web server. The Web server recognizes the ASP.NET file extension for the requested page and sends the request to the HTTP Page Handler class. The following list is a sampling of these events, numbered in the order in which they are triggered. - PreInt: This is the entry point of the ASP.NET page life cycle - it is the pre-initialization, so you have access to the page before it is initialized. Controls can be created within this event. Also, master pages and themes can be accessed. You can check the IsPostBack property here to determine if it is the first time a page has been loaded. - Init: This event fires when all controls on the page have been initialized and skin settings have been applied. You can use this event to work with control properties. The Init event of the page is not fired until all control Init events have triggered - this occurs from the bottom up. - InitComplete: This event fires once all page and control initializations complete. This is the last event fired where ViewState is not set, so ViewState can be manipulated in this event. - PreLoad: This event is triggered when all ViewState and Postback data have been loaded for the page and all of its controls - ViewState loads first, followed by Postback data. - Load: This is the first event in the page life cycle where everything is loaded and has been set to its previous state (in the case of a postback). The page Load event occurs first followed by the Load event for all controls (recursively). This is where most coding is done, so you want to check the IsPostBack property to avoid unnecessary work. - LoadComplete: This event is fired when the page is completely loaded. Place code here that requires everything on the page to be loaded. - PreRender: This is the final stop in the page load cycle where you can make changes to page contents or controls. It is fired after all PostBack events and before ViewState has been saved. Also, this is where control databinding occurs. - PreRenderComplete: This event is fired when PreRender is complete. Each control raises this event after databinding (when a control has its DataSourceID set). - SaveStateComplete: This is triggered when view and control state have been saved for the page and all controls within it. At this point, you can make changes in the rendering of the page, but those changes will not be reflected on the next page postback since view state is already saved. - Unload: This event fires for each control and then the page itself. It is fired when the HTML for the page is fully rendered. This is where you can take care of cleanup tasks, such as properly closing and disposing database connections. An interesting caveat of the events fired with the loading of a page is the controls within the page and their events; that is, each control has its own event life cycle. The following code provides an example of the ordering of page and a couple control events. The ASP.NET source is listed first and followed by the codebehind source. It is a basic ASP.NET 4.0 Web Form with TextBox and Literal controls. The code does not include all events, but it does provide a subset to give you a feel for how they appear. You should notice the events specified in the individual controls that ties them to code blocks. <%@ Page <asp:Content <h2> <asp:Literal</asp:Literal> <asp:TextBox</asp:TextBox> Working with the ASP.NET Page life cycle. </h2></asp:Content> using System; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; namespace WebPageLifeCycle { public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Page_Load<br>"); } protected void Page_LoadComplete(object sender, EventArgs e) { Response.Write("Page_LoadComplete<br>"); } protected void Page_PreRender(object sender, EventArgs e) { Response.Write("Page_PreRender<br>"); } protected void Page_Render(object sender, EventArgs e) { Response.Write("Page_Render<br>"); } protected void PreInitEvent(object sender, EventArgs e) { Response.Write("OnPreInit<br>"); } protected void Page_Init(object sender, EventArgs e) { Response.Write("Page_Init<br>"); } protected void Literal_Init(object sender, EventArgs e) { Response.Write("Literal_Init<br>"); } protected void Textbox_Init(object sender, EventArgs e) { Response.Write("Textbox_Init<br>"); } protected void Page_InitComplete(object sender, EventArgs e) { Response.Write("Page_InitComplete<br>"); } protected void Page_PreLoad(object sender, EventArgs e) { Response.Write("Page_PreLoad<br>"); } protected void TextBox_Unload(object sender, EventArgs e) { // Cleanup} } } Notice the Unload event does not display anything since the Response object is not available in it, since the page and controls have been fully loaded when this event is triggered. The following lines are displayed on the page when it is loaded: OnPreInit Literal_Init Textbox_Init Page_Init Page_InitComplete Page_PreLoad Page_Load Page_LoadCompletePage_PreRender Know your environment Building applications that take full advantage of the ASP.NET platform requires an understanding of the environment, and the page life cycle is just one aspect of it. By knowing when events are triggered, it helps you properly code and design an application. The Load event is the most used and demonstrated event, but others have their uses as outlined above. Share your ASP.NET coding experience with the community.
http://www.techrepublic.com/blog/software-engineer/aspnet-basics-the-page-life-cycle/?count=all&view=expanded
CC-MAIN-2017-26
refinedweb
949
66.44
File.Move Method Moves a specified file to a new location, providing the option to specify a new file name. Namespace: System.IONamespace: System.IO Assembly: mscorlib (in mscorlib.dll) Parameters - sourceFileName - Type: System.String The name of the file to move. Can include a relative or absolute path. - destFileName - Type: System.String The new path and name can include. For a list of common I/O tasks, see Common I/O Tasks. The following example moves a file. using System; using System.IO; class Test { public static void Main() { string path = @"c:\temp\MyTest.txt"; string path2 = @"c:\temp2\MyTest.txt"; try { if (!File.Exists(path)) { // This statement ensures that the file is created, // but the handle is not kept. using (FileStream fs = File.Create(path)) {} } // Ensure that the target does not exist. if (File.Exists(path2)) File.Delete(path2); // Move the file. File.Move(path, path2); Console.WriteLine("{0} was moved to {1}.", path, path2); // See if the original exists now. if (File.Exists(path)) { Console.WriteLine("The original file still exists, which is unexpected."); } else { Console.WriteLine("The original file no longer exists, which is expected."); } } catch (Exception e) { Console.WriteLine("The process failed: {0}", e.ToString()); } } } - FileIOPermission for reading from sourceFileName and writing to destFileName. Associated enumerations: FileIOPermissionAccess.Read, FileIOPermissionAccess.Write
https://msdn.microsoft.com/en-us/library/system.io.file.move(v=vs.110).aspx
CC-MAIN-2015-27
refinedweb
215
56.11
ASP.NET MVC 4 Entity Framework Scaffolding and Migrations Download Web Camps Training Kit If you are familiar with ASP.NET MVC 4 controller methods, or have completed the "Helpers, Forms and Validation" Hands-On lab, you should be aware that many of the logic to create, update, list and remove any data entity it is repeated among the application. Not to mention that, if your model has several classes to manipulate, you will be likely to spend a considerable time writing the POST and GET action methods for each entity operation, as well as each of the views. In this lab you will learn how to use the ASP.NET MVC 4 scaffolding to automatically generate the baseline of your application's CRUD (Create, Read, Update and Delete). Starting from a simple model class, and, without writing a single line of code, you will create a controller that will contain all the CRUD operations, as well as the all the necessary views. After building and running the simple solution, you will have the application database generated, together with the MVC logic and views for data manipulation. In addition, you will learn how easy it is to use Entity Framework Migrations to perform model updates throughout your entire application. Entity Framework Migrations will let you modify your database after the model has changed with simple steps. With all these in mind, you will be able to build and maintain web applications more efficiently, taking advantage of the latest features of ASP.NET MVC 4. Note All sample code and snippets are included in the Web Camps Training Kit, available from at Microsoft-Web/WebCampTrainingKit Releases. The project specific to this lab is available at ASP.NET MVC 4 Entity Framework Scaffolding and Migrations. Objectives In this Hands-On Lab, you will learn how to: - Use ASP.NET scaffolding for CRUD operations in controllers. - Change the database model using Entity Framework Migrations. B: Using Code Snippets". Exercises The following exercise make up this Hands-On Lab: Note This exercise is accompanied by an End folder containing the resulting solution you should obtain after completing the exercise. You can use this solution as a guide if you need additional help working through the exercise. Estimated time to complete this lab: 30 minutes Exercise 1: Using ASP.NET MVC 4 Scaffolding with Entity Framework Migrations ASP.NET MVC scaffolding provides a quick way to generate the CRUD operations in a standardized way, creating the necessary logic that lets your application interact with the database layer. In this exercise, you will learn how to use ASP.NET MVC 4 scaffolding with code first to create the CRUD methods. Then, you will learn how to update your model applying the changes in the database by using Entity Framework Migrations. Task 1- Creating a new ASP.NET MVC 4 project using Scaffolding If not already open, start Visual Studio 2012. Select File | New Project. In the New Project dialog, under the Visual C# | Web section, select ASP.NET MVC 4 Web Application. Name the project to MVC4andEFMigrations and set the location to Source\Ex1-UsingMVC4ScaffoldingEFMigrations folder of this lab. Set the Solution name to Begin and ensure Create directory for solution is checked. Click OK. New ASP.NET MVC 4 Project Dialog Box In the New ASP.NET MVC 4 Project dialog box select the Internet Application template, and make sure that Razor is the selected View engine. Click OK to create the project. New ASP.NET MVC 4 Internet Application In the Solution Explorer, right-click Models and select Add | Class to create a simple class person (POCO). Name it Person and click OK. Open the Person class and insert the following properties. (Code Snippet - ASP.NET MVC 4 and Entity Framework Migrations - Ex1 Person Properties) using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace MVC4EF.Models { public class Person { public int PersonID { get; set; } public string FirstName { get; set; } public string LastName { get; set; } } } Click Build | Build Solution to save the changes and build the project. Building the Application In the Solution Explorer, right-click the controllers folder and select Add | Controller. Name the controller PersonController and complete the Scaffolding options with the following values. In the Template drop-down list, select the MVC controller with read/write actions and views, using Entity Framework option. In the Model class drop-down list, select the Person class. In the Data Context class list, select <New data context...>. Choose any name and click OK. In the Views drop-down list, make sure that Razor is selected. Adding the Person controller with scaffolding Click Add to create the new controller for Person with scaffolding. You have now generated the controller actions as well as the views. After creating the Person controller with scaffolding Open PersonController class. Notice that the full CRUD action methods have been generated automatically. Inside the Person controller Task 2- Running the application At this point, the database is not yet created. In this task, you will run the application for the first time and test the CRUD operations. The database will be created on the fly with Code First. Press F5 to run the application. In the browser, add /Person to the URL to open the Person page. Application: first run You will now explore the Person pages and test the CRUD operations. Click Create New to add a new person. Enter a first name and a last name and click Create. Adding a new person In the person's list, you can delete, edit or add items. Person list Click Details to open the person's details. Person's details Close the browser and return to Visual Studio. Notice that you have created the whole CRUD for the person entity throughout your application -from the model to the views- without having to write a single line of code! Task 3- Updating the database using Entity Framework Migrations In this task you will update the database using Entity Framework Migrations. You will discover how easy it is to change the model and reflect the changes in your databases by using the Entity Framework Migrations feature. Open the Package Manager Console. Select Tools > NuGet Package Manager > Package Manager Console. In the Package Manager Console, enter the following command: PMC Enable-Migrations -ContextTypeName [ContextClassName] Enabling migrations The Enable-Migration command creates the Migrations folder, which contains a script to initialize the database. Migrations folder Open the Configuration.cs file in the Migrations folder. Locate the class constructor and change the AutomaticMigrationsEnabled value to true. public Configuration() { AutomaticMigrationsEnabled = true; } Open the Person class and add an attribute for the person's middle name. With this new attribute, you are changing the model. public class Person { public int PersonID { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public string MiddleName { get; set; } } Select Build | Build Solution on the menu to build the application. Building the application In the Package Manager Console, enter the following command: PMC Add-Migration AddMiddleName This command will look for changes in the data objects, and then, it will add the necessary commands to modify the database accordingly. Adding a middle name (Optional) You can run the following command to generate a SQL script with the differential update. This will let you update the database manually (In this case it's not necessary), or apply the changes in other databases: PMC Update-Database -Script -SourceMigration: $InitialDatabase Generating a SQL script SQL Script update In the Package Manager Console, enter the following command to update the database: PMC Update-Database -Verbose Updating the Database This will add the MiddleName column in the People table to match the current definition of the Person class. Once the database is updated, right-click the Controller folder and select Add | Controller to add the Person controller again (Complete with the same values). This will update the existing methods and views adding the new attribute. Updating the controller Click Add. Then, select the values Overwrite PersonController.cs and the Overwrite associated views and click OK. Updating the controller Task4- Running the application Press F5 to run the application. Open /Person. Notice that the data was preserved, while the middle name column was added. Middle Name added If you click Edit, you will be able to add a middle name to the current person. Summary In this Hands-On lab, you have learned simple steps to create CRUD operations with ASP.NET MVC 4 Scaffolding using any model class. Then, you have learned how to perform an end to end update in your application -from the database to the views- by using Entity Framework Migrations.: Using Code Snippets With code snippets, you have all the code you need at your fingertips. The lab document will tell you exactly when you can use them, as shown in the following figure. _3<< Start typing the snippet name Press Tab to select the highlighted snippet Right-click where you want to insert the code snippet and select Insert Snippet Pick the relevant snippet from the list, by clicking on it
https://docs.microsoft.com/en-us/aspnet/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-entity-framework-scaffolding-and-migrations
CC-MAIN-2020-45
refinedweb
1,515
55.95
0 hey guys its me again with the rock paper scissors game. well after spending the whole day on it (ok not really) i was able to write a proper code for the game that would ask user for their input and randomly select one for the computer. now it should print out what each user chose and say who wins the round (ie you chose rock. computer chose paper. you lose. paper wraps rock) and keep a count of who is winning/ties. its all in function format and i had to declare constants for the rock/paper/scissors/quit option as well as constants to signal who won/tie here is my complete code: #include <iostream> #include <ctime> #include <cstdlib> using namespace std; //********************* //Function Prototypes * //********************* int UserChoice(); int ComputerChoice(); int DetermineAndDisplayWinner(int User, int Computer); void PrintObject(int obj); void Pause(); const int Rock = 1; const int Paper = 2; const int Scissors = 3; const int Quit = 4; const int CompWon = 5; const int UserWon = 6; const int Tie = 7; int main() { int User; int Computer; int obj; int choice; char answer = 'y'; while (answer == 'y' || answer == 'Y') { choice = UserChoice(); ComputerChoice(); DetermineAndDisplayWinner(User,Computer); PrintObject(choice); cout << "Do you want to continue?"; cin >> answer; } return 0; } //************************* //Functions * //************************* //******************* //Menu&User Choice * //******************* int UserChoice() { int choice; cout << "Select one of the following options" << endl; cout << "1. Rock" << endl; cout << "2. Paper" << endl; cout << "3. Scissors" << endl; cout << "4. Quit" << endl; cout << "Enter your choice: "; cin >> choice; while(choice < 1 || choice > 4) { cout << endl; cout << "Please enter 1,2,3, or 4."; cin >> choice; } cout << endl; if (choice == 1) cout << "You picked Rock\n"; else if (choice == 2) cout << "You picked Paper\n"; else if (choice == 3) cout << "You picked Scissors\n"; else cout << "Goodbye!\n"; return choice; } //*************************** //Get Computer Choice * //*************************** int ComputerChoice() { int randomNumber; srand( (unsigned) time (NULL) ); randomNumber = rand() % 3 + 1; if (randomNumber == 1) cout << "The computer picked Rock\n"; else if (randomNumber == 2) cout << "The computer picked Paper\n"; else cout << "The computer picked Scissors\n"; return randomNumber; } //********************************* //Determine And Display Winner * //********************************* int DetermineAndDisplayWinner(int User, int Computer) { if (User == Rock && Computer == Rock) { cout << "Its a Tie. Try again"; return Tie; } if (User == Rock && Computer == Paper) { cout << "You lose. Paper wraps Rock."; return CompWon; } if (User == Rock && Computer == Scissors) { cout << "You win. Rock Smashes Scissors."; return UserWon; } if (User == Paper && Computer == Scissors) { cout << "You lose. Scissors cuts Paper."; return CompWon; } if (User == Paper && Computer == Paper) { cout << "It's a tie. Try again."; return Tie; } if (User == Paper && Computer == Rock) { cout << "You win. Paper Wraps Rock."; return CompWon; } if (User == Scissors && Computer == Scissors) { cout << "It's a tie. Try again."; return Tie; } if (User == Scissors && Computer == Rock) { cout << "You lose. Rock smashes Scissors."; return CompWon; } if (User == Scissors && Computer == Paper) { cout << "You win. Scissors cuts Paper."; return UserWon; } return -1; } //************************************* //Print Object * //************************************* void PrintObject(int obj) { switch(obj) { case Rock: cout << "Rock"; break; case Paper: cout << "Paper"; break; case Scissors: cout << "Scissors"; break; case Quit: cout << "Quit"; break; case CompWon: cout << "Computer Wins"; break; case UserWon: cout << "You Win"; break; case Tie: cout << "It's a tie"; break; default: cout << "Invalid. try again"; } } //***************** //Pause * //***************** void Pause() { cin.ignore(80, '\n'); } all the function prototypes that u see, MUST BE USED in the program. the only problem i have is that it wont tell me who wins.
https://www.daniweb.com/programming/software-development/threads/151663/code-compiles-but-does-not-produce-answer
CC-MAIN-2017-17
refinedweb
556
62.68
25 May 2010 08:44 [Source: ICIS news] SINGAPORE (ICIS news)--Crude futures fell by more than $2/bbl on Tuesday as the US dollar strengthened amid growing concerns over oil consumption, with the global economic recovery being threatened by the European debt crisis. At 07:08 GMT, July NYMEX light sweet crude futures were down $1.79/bbl at $68.42/bbl (€54.74/bbl) after hitting an intra-day low of $68.05/bbl. Meanwhile, July Brent on ?xml:namespace> The US dollar gained ground against the euro and other leading currencies as investors sought a safe haven amid the growing market fears. Asian equity markets tumbled on Tuesday, with the benchmark Nikkei 225 stock index in US inventory data due for release later this week is expected to reveal a further build in crude stocks at the key Cushing terminal in Data from the previous week showed a build-up of crude at Cushing to a new crude level of 37.9m barrels. However, overall US crude stocks were expected to fall due to reduced imports. The landlocked Cushing terminal is the delivery point of WTI and the increase in stocks has placed significant downward pressure on the grade and pushed prices below ICE Brent values. (
http://www.icis.com/Articles/2010/05/25/9362214/crude-falls-2bbl-on-mounting-economic-worries.html
CC-MAIN-2015-11
refinedweb
209
58.21
Converting integer literals in C++ and Python An integral literal in a C program can be decimal, hexadecimal or octal. int percent = 110; unsigned flags = 0x80; unsigned agent = 007; This snippet would be equivalent to (e.g.): int percent = 0156; unsigned flags = 128; unsigned agent = 0x7; So programmers can choose the best of these options when including numbers in their code. Python adopted this same C syntax, but has recently gone on to extend and modify it. Some Python 2.6 numbers: Python 2.6 >>> 0x80, 110, 007, 0O7, 0o7, 0b10000000 (128, 110, 7, 7, 7, 128) I’m pleased to see support for binary literals, which are useful for (e.g.) bitmasks. I’ve never really seen the point of octals; nonetheless, they’ve been enhanced for Python 3. Python 2.6 backports the new improved octal literal syntax whilst retaining support for classic C-style octals. Python 3 drops C-style octals. Python 3.1 >>> 007 File "<stdin>", line 1 007 ^ SyntaxError: invalid token >>> 0O7 7 Now consider the compiler/interpreter writer’s problem. Clearly it must be possible to take a string representing an integer literal and work out what number it represents. At a first glance, the int() builtin isn’t quite smart enough to do the job without us supplying an explicit base for the conversion: >>> int('0xff') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: invalid literal for int() with base 10: '0xff' >>> int('0xff', 16) 255 We might consider reading any prefix from the literal and dispatching the string to an appropriate handler. Something like this: def integer_literal_value(s): if s.startswith('0x'): return int(s, 16) if s.startswith('0b'): return int(s, 2) ... Yuck! Surely there’s an easier way to do something this fundamental? Well, there’s always eval(), which turns the interpreter on itself. >>> def integer_literal_value(s): return eval(s) ... >>> v = integer_literal_value >>> v('0x80'), v('0o7'), v('0b1010101'), v('42') (128, 7, 85, 42) We should have looked more carefully at the int() documentation: int([x[, radix]]) …. Perfect! >>> from functools import partial >>> integer_literal_value = partial(int, base=0) >>> v = integer_literal_value >>> v('0x80'), v('0o7'), v('0b1010101'), v('42') (128, 7, 85, 42) (Notice, by the way, that radix is used in the online documentation but the actual argument name is base. I’ll confess that before I wrote this note I hadn’t spotted this use of zero as a special value for string→integer conversions even though it’s been available since Python 2.1) C++ also offers a way to convert integer literals into the numbers they represent, but it’s not very well known. As is usual for format conversions, we use streams — stringstreams typically, but here I show an example using standard input and output. The trick is to disable any numeric formatting of the input stream. #include <iostream> int main() { int x; std::cin.unsetf(std::ios::basefield); while (std::cin >> x) { std::cout << x << '\n'; } return std::cin.eof() ? 0 : 1; } It works by magic. $ g++ integer_literal_value.cpp -o integer_literal_value $ echo 007 0x80 110 | ./integer_literal_value 7 128 110
http://wordaligned.org/articles/integer-literal-values
CC-MAIN-2014-52
refinedweb
518
55.74
i noticed some weird things happening before it gave me that error. Twitch streams wouldn't display video one day, iE,Firefox, i can use any other browsers (Edge,) ect). And a week later it gave me the proxy error. Last response: in Apps cisco secure vpn client General Discussion. Cisco secure vpn client vPN Gate on cisco secure vpn client websites in your country to help other users around you. Don't hestitate to distribute. You can upload the entire software to other websites. Rendering t (this web site)) unreachable from your country, if your government's firewall exhibits problems, and consulted information security and legal experts to find the best VPN for most people. We researched 32 VPN command line cisco vpn client services, tested 12, Some user who doesnt like to submit personal payment details will use bitcoin or any other payment service. So we have gathered the payment methods that they use in this review. Do They Have Own DNS Server?" Do They Have Own DNS Server? When you enter. USA: Cisco secure vpn client! if the PC is connected to a network cisco secure vpn client as well as the internet, when your PC is connected to a network, how to Find the IP Address of Your PC. It is assigned an address on the network called an IP address. the format will be: usernamespaceserver namespacepasswordspaceip address allowed 3. Create a configuration files under /etc/ppp/peers directory called server. Pptp ipparam server. Org using cisco secure vpn client text editor: vim /etc/ppp/peers/server. Org And add following line: pty "pptp -nolaunchpppd" name myvega remotename PPTP server require-mppe-128 download cisco vpn client 32 bit file /etc/ppp/options.. for details, l2TP VPN cisco secure vpn client PPP defines an encapsulation technology to transmit packets of various protocols on Layer-2 P2P links. PPP is running between a user and the NAS, in this case, see IPSec. but I do recommend it. This section isnt essential, its cisco secure vpn client useful to read through these. Just be sure youve set up firewall rules to allow clients on the home LAN to connect to the OpenVPN server on the router. The OpenVPN hardening page covers various ways to improve the security of OpenVPN.uninstall Old Version (If Available)) 2. 1. How To Install? Whats New cisco secure vpn client Optimized VPN connection will be faster and more stable salayan firmalardan kullanc ad ve ifre alarak cisco secure vpn client VPN köprü kullanarak trafii anonim hale getirmektir. and Windows XP, updateStar is compatible cisco secure vpn client with proxy to open blocked sites Windows platforms. Windows Server 2003, 2008, windows 7, windows Vista, updateStar has been tested to meet all of the technical requirements to be compatible with Windows 10, 8.1, windows 8,hi all, i've searched but can't find a definitive answer. I'm using a TG799 cisco secure vpn client bridged on AussieBB,the client works across all operating systems like Windows (XP,) l2TP, pureVPN gives you easy-to-use software which supports all important protocols like PPTP, 8 Linux, cisco secure vpn client mac, vista, moreover, 7, iKEv2 and SSL-based OpenVPN. Android and iOS. SSTP, Uea vpn login: hong Kong and LA are my top choices. Encryption is a pretty standard 128-bit or 256-bit OpenVPN with an unspecified kind of cisco secure vpn client stealth layer. These are my server speed tests (in-app utility)) results when using ExpressVPN in China.this is done using authentication. This process is called asymmetric encryption or public key cryptography. They then must exchange cisco secure vpn client a secret key over a secure channel. That key is then used for channel encryption. Before you start transmitting data, your device and the VPN server need to verify that the other side is who they say they are.laptop gibi bilgisayarlarda kullanabileceiniz gibi, cep telefonu, öretmenlerimiz ve cisco secure vpn client."Menlo 15 (foreground-color.) "DeepSkyBlue3 (cursor-color.) we'll use ivy-read 's :action to invoke a tiny bit of AppleScript. "MediumPurple1 (font.) oh and we'll also use some funny quot;s to tease ourselves about our beloved editor. (with-current-buffer (get-buffer-create "modal-ivy (let (frame (make-frame auto-raise.) t) (background-color.)mac and iOS clients, the ExpressVPN. Our Thoughts Great customer service and ease of use are the primary reasons that ExpressVPN remains such a popular cisco secure vpn client choice for. As with its Windows, android VPN users. iP location cisco secure vpn client finder, iP Finder,request a callback VPNF ilter threat discovered by cisco secure vpn client Talos New VPNF ilter malware targets at least 500K devices worldwide. Read update. Our Security Plan and Build Services offer advanced support.download now cisco secure vpn client Size: 1.69MB License: Shareware Price: 29.00 By: RARLAB TuneUp Utilities 2010 Buy now NEW! Compatible with window s 7: TuneUp Utilities supports the new window s 7.Full power for actively running programs makes for a smoother work and gaming ex. the enemies are varied and those are also can be different power and ability to finish you. War, it is really best vpn free quora dedicated to someone who love the things of the arm, strategy, and others. Shoot guns, guns of Boom Mod APK Best Features. l2TP /IPsec, supported VPN protocols NordVPN supports a total of five protocols, openVPN, for cisco secure vpn client the complete list as well as more information on. And IKEv2/IPsec. Including PPTP, sSTP, nordVPN s servers, head here.when finished, wait for the Details field to say Disconnected. The switch will display On and Connected cisco secure vpn client will show in the Details field. When you have successfully connected, disconnect when finished. Success! Return to the AnyConnect app and tap the On switch.use the first cisco secure vpn client one to receive emails from important sites and Apps, use at least 3 different email addresses, such as Paypal and Amazon, 26. VirtualBox or Parallels. Or access unimportant websites and install new software inside a virtual machine created with VMware, unlike most free VPN providers, with powerful new features and customization options. The VyprVPN cisco secure vpn client apps feature a sleek and intuitive look and feel, vyprVPN is not an outsourced or hosted cisco pptp vpn client solution that relies on third parties to deliver its VPN service.
http://soupfactory.co.uk/2003/09/cisco-secure-vpn-client.html
CC-MAIN-2018-34
refinedweb
1,076
65.32
Member since 10-06-2015 8 4 Kudos Received 3 Solutions 01-16-2017 07:26 PM 01-16-2017 07:26 PM If you have Syslog as the source, then there is a possibility that the mem channel is full sometimes and cannot accept incoming Syslog messages. Since Syslog does not retry sending to flume, the data might be getting dropped. ... View more 01-10-2017 08:41 PM 01-10-2017 08:41 PM This may be related to the other problem you have posted (dual temp files). If you are using multiple sinks or agents... make sure each one is writing to a different file/directory ... otherwise they will overwrite each other and appear like data loss. With only one agent running, for your case there should be only one tmp file to which writes are currently happening. After rollInterval, that tmp file should get closed and new data should go into the new tmp file. The old tmp file should get closed and lose its .tmp suffix. If you are seeing many open tmp files, that could be an indication of intermittent network/other issues causing flume to not write and close the tmp files in Hdfs properly. So then it opens a new file without properly closing the old tmp file. Another potential for data loss is if you are restarting the flume agent or noticing any crashes. The memory channel will lose data in those cases. Suggestions: if possible hourly rolling. ... View more Can you provide the full names of the tmp files (with path) ? Do you have multiple agents running ? ... View more 01-10-2017 08:03 PM 01-10-2017 08:03 PM Looks like you have most likely not specified the right agent name to -n. Ensure the agent name inside the conf file matches it. ... View more 08-23-2016 09:56 PM 08-23-2016 09:56 PM The doc should really not say anything about the HBase version there... as all HDP components are tested work with each other within a release. However, in this case "0.98 and above" would have been a better way to phrase it. ... View more If you are compiling Flume yourself.. you probably got something wrong there. You should be using the Flume bundled in HDP instead. It is tested to work along with the other components bundled with HDP (HBase, Hive, Hdfs etc.) ... so you wont run into issues like this. ... View more 07-07-2016 11:25 PM 07-07-2016 11:25 PM The release of version 1.0 marks another major milestone for Storm. Since becoming an Apache project in Sept 2013, much work has gone into maturing the feature set and also improving performance by reworking or tweaking various components. Some of the notable changes that contribute to improved performance are: Switch from ZeroMQ to Netty for inter-worker messaging. Employing batching in Disruptor queues (used for intra-worker messaging) Optimizations in the clojure code such as employing type hints and reducing expensive clojure lookups in perf sensitive areas. In this blog we shall take a look at performance improvements in Storm since its incubation into Apache. To quantify this, we shall compare the performance numbers of Storm v.0.9.0.1, which was the last pre-apache release, with the most recent Storm v1.0.1. Storm v.0.9.0.1 has also been used as a reference point for performance comparisons against Heron. Given the existence of recent efforts to benchmark Storm “at scale”, here we shall examine performance from a different angle. We narrow the focus to some specific core areas of Storm using a collection of simple topologies. To contain the scope, we have limited the scope to Storm core (i.e. no Trident). Methodology Each topology was given at least 4 mins of “warm up” execution time before taking measurements. Subsequently, after a minimum of 10 minutes, metrics were captured from the Web UI for the last 10-minute window. The captured numbers have been rounded off for readability. In all cases ACK-ing was enabled with 1 ACKer bolt executor. Throughput (i.e tuples/sec) was calculated by dividing the total ACKs for the 10 min window by 600. Due to some backward incompatibilities (mostly namespace changes) in Storm, two versions of the topologies were written, one for each Storm version. As a general principle we have avoided configuration tweaks to tune performance and stayed with default values. The only config setting we applied was to set the max heap size of the worker to 8GB to ensure memory. Setup: Hardware: 5-node cluster (1 nimbus and 4 supervisor nodes) running Storm v0.9.0.1 5-node cluster (1 nimbus and 4 supervisor nodes) running Storm v1.0.1 3-node Zookeeper cluster Hardware: All nodes had the following configuration: CPU: sockets = 2 sockets, 6 cores per socket, Hyper threaded. Model (2sockets x 6cores x 2 hyper threads = 24). Intel Xeon CPU E5-2630 0 @ 2.30GHz Memory : 126 GB Network: 10GigE Disk: 6 disks. Each 1TB. 7200 RPM. Measurements: 1- Spout Emit Speed Here we measure how fast a single spout can emit tuples. Topology: This is the simplest topology. It consists of a ConstSpout that repeatedly emits the string “some data” and no bolts. Spout parallelism is set to 1, so there is only one instance of the spout executing. Here we measure the number of emits per second. Latency is not relevant as there are no bolts. Topology Code: v0.9.0.1: v1.0.1: Measurements: v0.9.0.1: Emit rate: 108 k tuples/sec v1.0.1: Emit rate: 3.2 million tuples/sec 2- Messaging Speed (Intra-worker): The goal is to measure the speed at which tuples can be transferred between a spout and a bolt running within the same worker process. Topology: Consists of a ConstSpout that repeatedly emits the string “some data” and a DevNull bolt which ACKs every incoming tuple and discards them. The spout, bolt and acker were given 1 executor each. The spout and bolt were both run within the same worker. Topology Code: v0.9.0.1: v1.0.1: Measurements: v0.9.0.1: Throughput: 87k/sec Latency: 16ms v1.0.1: Throughput: 233k/sec Latency: 3.4ms 3- Messaging Speed (Inter-worker 1): The goal is to measure the speed at which tuples are transferred between a spout and a bolt, when both are running on two separate worker processes on the same machine. Topology: Same topology as the one used for Intra-worker messaging speed. The spout and bolt were however run on two separate workers on the same host. The bolt and the acker were observed to be running on the same worker. Topology Code: v0.9.0.1: v1.0.1: Measurements: v0.9.0.1: Throughput: 48 k/sec Latency: 170 ms v1.0.1: Throughput: 287 k/sec Latency: 8 ms 4- Messaging Speed (Inter-worker 2): The goal is to measure the speed at which tuples are transferred when the spout, bolt and acker are all running on separate worker processes on the same machine. Topology: Same topology as the one used for Intra-worker messaging speed. The spout, bolt and acker were however run on three separate workers on the same host. Topology Code: v0.9.0.1: v1.0.1: Measurements: v0.9.0.1: Throughput: 43 k/sec Latency: 116 ms v1.0.1: Throughput: 292 k/sec Latency: 8.6 ms 5- Messaging Speed (Inter-host 1): The goal is to measure the speed at which tuples are transferred between a spout and a bolt, when both are running on two separate worker processes running on (2) different machines. Topology: Same topology as the one used for Intra-worker messaging speed but the spout and bolt were run on two separate workers on two different hosts. The bolt and the acker were observed to be running on the same worker. Topology Code: v0.9.0.1 : v1.0.1: Measurements: v0.9.0.1: Throughput: 48 k/sec Latency: 845 ms v1.0.1: Throughput: 316 k/sec Latency: 13.3 ms 6- Messaging Speed (Inter-host 2): Here we measure the speed at which tuples are transferred when the spout, bolt and acker are all running on separate worker processes on 3 different machines. Topology: Again same topology as inter-host 1, but this time the acker run on a separate host. Topology Code: v0.9.0.1: v1.0.1: Measurements: v0.9.0.1: Throughput: 50 k/sec Latency: 1700 ms v1.0.1: Throughput: 303 k/sec Latency: 7.4 ms Summary Throughput (tuples/sec) Storm version Spout Emit MSGing IntraWorker MSGing InterWorker 1 MSGing InterWorker 2 MSGing InterHost 1 MSGing InterHost 2 v0.9.0.1 108,000 87,000 48,000 43,000 48,000 50,000 v1.0.1 3,200,000 233,000 287,000 292,000 316,000 303,000 Latency (milliseconds) Storm ver MSGing IntraWorker MSGing InterWorker 1 MSGing InterWorker 2 MSGing InterHost 1 MSGing InterHost 2 v1.0.1 3 8 9 13 7 v0.9.0.1 16 170 116 845 1,700 ... View more - Find more articles tagged with: - Data Ingestion & Streaming - How-ToTutorial - performance - Storm - stream-processing - streaming Labels:
https://community.cloudera.com/t5/user/viewprofilepage/user-id/36214
CC-MAIN-2020-29
refinedweb
1,561
66.44
MPI_Buffer_attach - Attaches a user-defined buffer for sending #include <mpi.h> int MPI_Buffer_attach(void *buf, int size) buf - initial buffer address (choice) size - buffer size, in bytes (integer) The size given should be the sum of the sizes of all outstanding Bsends that you intend to have, plus a few hundred bytes for each Bsend that you do. For the purposes of calculating size, you should use MPI_Pack_size . In other words, in the; MPI_Bsend_overhead gives the maximum amount of space that may be used in the buffer for use by the Bsend routines in using the buffer. This value is in mpi.h (for C) and mpif.h _BUFFER - Invalid buffer pointer. Usually a null buffer where one is not valid. MPI_ERR_INTERN - An internal error has been detected. This is fatal. Please send a bug report to the LAM mailing list (see- mpi.org/contact.php ). MPI_Buffer_detach, MPI_Bsend. bufattach.c
http://huge-man-linux.net/man3/MPI_Buffer_attach.html
CC-MAIN-2017-30
refinedweb
150
75.4
NAME¶ loop, loop-control - loop devices SYNOPSIS¶ #include <linux/loop.h> DESCRIPTION¶ The loop device is a block device that maps its data blocks not to a physical device such as a hard disk or optical disk drive, but to the blocks of a regular file in a filesystem or to another block device. This can be useful for example to provide a block device for a filesystem image stored in a file, so that it can be mounted with the mount(8) command. You could do $ dd if=/dev/zero of=file.img bs=1MiB count=10 $ sudo losetup /dev/loop4 file.img $ sudo mkfs -t ext4 /dev/loop4 $ sudo mkdir /myloopdev $ sudo mount /dev/loop4 /myloopdev See losetup(8) for another example. A transfer function can be specified for each loop device for encryption and decryption purposes. The following ioctl(2) operations are provided by the loop block device: - LOOP_SET_FD - Associate the loop device with the open file whose file descriptor is passed as the (third) ioctl(2) argument. - LOOP_CLR_FD - Disassociate the loop device from any file descriptor. - LOOP_SET_STATUS - Set the status of the loop device using the (third) ioctl(2) argument. This argument is a pointer to a loop_info structure, defined in <linux/loop.h> as: struct loop_info { int lo_number; /* ioctl r/o */ dev_t lo_device; /* ioctl r/o */ unsigned long lo_inode; /* ioctl r/o */ dev_t lo_rdevice; /* ioctl r/o */ int lo_offset; int lo_encrypt_type; int lo_encrypt_key_size; /* ioctl w/o */ int lo_flags; /* ioctl r/w (r/o before Linux 2.6.25) */ char lo_name[LO_NAME_SIZE]; unsigned char lo_encrypt_key[LO_KEY_SIZE]; /* ioctl w/o */ unsigned long lo_init[2]; char reserved[4]; }; - The encryption type (lo_encrypt_type) should be one of LO_CRYPT_NONE, LO_CRYPT_XOR, LO_CRYPT_DES, LO_CRYPT_FISH2, LO_CRYPT_BLOW, LO_CRYPT_CAST128, LO_CRYPT_IDEA, LO_CRYPT_DUMMY, LO_CRYPT_SKIPJACK, or (since Linux 2.6.0) LO_CRYPT_CRYPTOAPI. - The lo_flags field is a bit mask that can include zero or more of the following: - LO_FLAGS_READ_ONLY - The loopback device is read-only. - LO_FLAGS_AUTOCLEAR (since Linux 2.6.25) - The loopback device will autodestruct on last close. - LO_FLAGS_PARTSCAN (since Linux 3.2) - Allow automatic partition scanning. - LO_FLAGS_DIRECT_IO (since Linux 4.10) - Use direct I/O mode to access the backing file. - The only lo_flags that can be modified by LOOP_SET_STATUS are LO_FLAGS_AUTOCLEAR and LO_FLAGS_PARTSCAN. - LOOP_GET_STATUS - Get the status of the loop device. The (third) ioctl(2) argument must be a pointer to a struct loop_info. - LOOP_CHANGE_FD (since Linux 2.6.5) - Switch the backing store of the loop device to the new file identified file descriptor specified in the (third) ioctl(2) argument, which is an integer. This operation is possible only if the loop device is read-only and the new backing store is the same size and type as the old backing store. - LOOP_SET_CAPACITY (since Linux 2.6.30) - Resize a live loop device. One can change the size of the underlying backing store and then use this operation so that the loop driver learns about the new size. This operation takes no argument. - LOOP_SET_DIRECT_IO (since Linux 4.10) - Set DIRECT I/O mode on the loop device, so that it can be used to open backing file. The (third) ioctl(2) argument is an unsigned long value. A nonzero represents direct I/O mode. - LOOP_SET_BLOCK_SIZE (since Linux 4.14) - Set the block size of the loop device. The (third) ioctl(2) argument is an unsigned long value. This value must be a power of two in the range [512,pagesize]; otherwise, an EINVAL error results. - LOOP_CONFIGURE (since Linux 5.8) - Setup and configure all loop device parameters in a single step using the (third) ioctl(2) argument. This argument is a pointer to a loop_config structure, defined in <linux/loop.h> as: struct loop_config { __u32 fd; __u32 block_size; struct loop_info64 info; __u64 __reserved[8]; }; - In addition to doing what LOOP_SET_STATUS can do, LOOP_CONFIGURE can also be used to do the following: - set the correct block size immediately by setting loop_config.block_size; - explicitly request direct I/O mode by setting LO_FLAGS_DIRECT_IO in loop_config.info.lo_flags; and - explicitly request read-only mode by setting LO_FLAGS_READ_ONLY in loop_config.info.lo_flags. Since Linux 2.6, there are two new ioctl(2) operations: - LOOP_SET_STATUS64, LOOP_GET_STATUS64 - These are similar to LOOP_SET_STATUS and LOOP_GET_STATUS described above but use the loop_info64 structure, which has some additional fields and a larger range for some other fields: struct loop_info64 { uint64_t lo_device; /* ioctl r/o */ uint64_t lo_inode; /* ioctl r/o */ uint64_t lo_rdevice; /* ioctl r/o */ uint64_t lo_offset; uint64_t lo_sizelimit; /* bytes, 0 == max available */ uint32_t lo_number; /* ioctl r/o */ uint32_t lo_encrypt_type; uint32_t lo_encrypt_key_size; /* ioctl w/o */ uint32_t lo_flags; i /* ioctl r/w (r/o before Linux 2.6.25) */ uint8_t lo_file_name[LO_NAME_SIZE]; uint8_t lo_crypt_name[LO_NAME_SIZE]; uint8_t lo_encrypt_key[LO_KEY_SIZE]; /* ioctl w/o */ uint64_t lo_init[2]; }; /dev/loop-control¶ Since Linux 3.1, the kernel provides the /dev/loop-control device, which permits an application to dynamically find a free device, and to add and remove loop devices from the system. To perform these operations, one first opens /dev/loop-control and then employs one of the following ioctl(2) operations: - LOOP_CTL_GET_FREE - Allocate or find a free loop device for use. On success, the device number is returned as the result of the call. This operation takes no argument. - LOOP_CTL_ADD - Add the new loop device whose device number is specified as a long integer in the third ioctl(2) argument. On success, the device index is returned as the result of the call. If the device is already allocated, the call fails with the error EEXIST. - LOOP_CTL_REMOVE - Remove the loop device whose device number is specified as a long integer in the third ioctl(2) argument. On success, the device number is returned as the result of the call. If the device is in use, the call fails with the error EBUSY. FILES¶ - /dev/loop* - The loop block special device files. EXAMPLES¶ The program below uses the /dev/loop-control device to find a free loop device, opens the loop device, opens a file to be used as the underlying storage for the device, and then associates the loop device with the backing store. The following shell session demonstrates the use of the program: $ dd if=/dev/zero of=file.img bs=1MiB count=10 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.00609385 s, 1.7 GB/s $ sudo ./mnt_loop file.img loopname = /dev/loop5 Program source¶ #include <fcntl.h> #include <linux/loop.h> #include <sys/ioctl.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #define errExit(msg) do { perror(msg); exit(EXIT_FAILURE); \ } while (0) int main(int argc, char *argv[]) { int loopctlfd, loopfd, backingfile; long devnr; char loopname[4096]; if (argc != 2) { fprintf(stderr, "Usage: %s backing-file\n", argv[0]); exit(EXIT_FAILURE); } loopctlfd = open("/dev/loop-control", O_RDWR); if (loopctlfd == -1) errExit("open: /dev/loop-control"); devnr = ioctl(loopctlfd, LOOP_CTL_GET_FREE); if (devnr == -1) errExit("ioctl-LOOP_CTL_GET_FREE"); sprintf(loopname, "/dev/loop%ld", devnr); printf("loopname = %s\n", loopname); loopfd = open(loopname, O_RDWR); if (loopfd == -1) errExit("open: loopname"); backingfile = open(argv[1], O_RDWR); if (backingfile == -1) errExit("open: backing-file"); if (ioctl(loopfd, LOOP_SET_FD, backingfile) == -1) errExit("ioctl-LOOP_SET_FD"); exit(EXIT_SUCCESS); } SEE ALSO¶ COLOPHON¶ This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.debian.org/testing/manpages/loop-control.4.en.html
CC-MAIN-2021-49
refinedweb
1,221
56.66
(Warning: Hugo neepery ahead. Ignore if you’re bored with the subject.) As I’m musing on class today, I’d like to take a moment to address something I see being attempted by the Puppies, which is to cast the current Hugo contretemps as something akin to a class war, with the scrappy diverse underdogs (the Puppy slates) arrayed against “powerful, wealthy white men” such as myself, Patrick Nielsen Hayden and George RR Martin, the latter being a late addition to the non-existent SJW cabal; apparently we are now a cackling, finger-steepling triumvirate of conspiracy (See the link here at File770, which, again, has been invaluable as a repository of Hugo commentary this year). So, let’s unpack this a bit. One, I’m not entirely sure how much credit the Puppy slates should get for “diversity” when their most notable accomplishments are reducing the overall demographic diversity of the Hugo slate from the past few years, locking up five (previously six!) slots on the final ballot for the same straight, white, male author, and getting much of their “diversity” from conscripts to the slates, at least some of whom did not appear to have foreknowledge of their appearance there, and some of whom have since declined their nominations. Basically, if you’re going to argue diversity, you should probably not make the assertion so easily refutable by actual fact (it also helps not to have one of the primary movers behind the slates be an actual, contemptible racist and sexist). Two, with regard to me, George and Patrick being “powerful, wealthy white men”: okay, sure, why not (I suspect Patrick, earning an editor’s salary in New York, might snort derisively that the idea that he is actually wealthy), but it’s interesting for any of the three of us to be criticized for these things by a partisan of slates whose dominance on the final Hugo ballot was accomplished substantially through the machinations of a fellow who is himself a scion of wealth and power, with enough dosh on hand to have his own publishing house (for which he is using the current Hugo contretemps as very cheap advertising), and, to a rather lesser extent, by a fellow who has many of the same advantages I or George do: Bestselling status, award nominations and, at least from public statements I can recall, a rather comfortable income from his work, largely from a company that shares at least one parent in common with one that publishes me, is a major house in the field, and is distributed by a major publishing conglomerate. Indeed, as it is an article of faith among the Puppies that I don’t actually sell all that many books, I suppose the argument could be made that he is more wealthy and powerful than I am! So well done him, and I wish him all the best in his career. But between these fellows and their circumstances, it’s difficult to cast this as a battle of underdogs versus wealth and privilege. There’s quite enough wealth and privilege to go around. (There is at least one salient difference between me, Patrick and George, and the fellows I’ve mentioned, who share so many of the advantages that we three do. What that difference is I will leave as an exercise for the reader.) Three, the Puppies drama isn’t about class, or privilege. It’s about envy and opportunism, and it’s also, somewhat pathetically, apparently about the heads of the Puppy slates being upset that once upon a time, they felt people in fandom were mean to them. As if they were the only people in the world that folks in science fiction fandom had ever been mean to. True fact: There is almost no one in science fiction and fantasy that someone else in fandom hasn’t been mean to at one time or the other. Science fiction fandom contains many people, including quite a few with questionable social skills. Not all of them are going to like you. Not all of them are going to like what you do. That’s not a conspiracy; that’s just a basic fact. Here’s a thing: Look back in time to when I was nominated for Best Fan Writer. There was a whole lot of mean going on there; there are still fans who are righteously upset with me about it. Look at what people have said about each of the books of mine that have been nominated for Best Novel (look at what was said after I won it!). Look what people in fandom say about me on the Internet all the damn time. Hell,. You know what I did? I signed his book. Because a) apparently he bought it and b) I’m not emotionally twelve years old. I can handle people being thoughtless and stupid and even occasionally intentionally mean in my direction, without deciding the the correct response is to burn down the Hugos, screaming I’ll show you! I’ll show you all! Which is, as it happens, seems to be another salient difference between me, Patrick and George, and these fellows. Unless you’re under the impression Patrick and George haven’t got their fair share of people disliking them, or saying mean things about them. They have; they’ve just decided to deal with it like the grown up humans they are. So, no. This Hugo contretemps isn’t about class. But it might be, a little bit, about who has class, and how that affects what they do with their wealth and power. 392 thoughts on “Hugos and Class” Usual warnings about Mallets, politeness, etc. You know the drill by this point, and if you don’t, here are the rules. Thanks! I’m just glad I wasn’t drinking anything when I read the last sentence. No one has ever been mean to GRRM on the internet. The very idea! If you want a class act, look no farther than David Gerrold who has endured so many slings and arrows of this outrageous misfortune and still wants to make the convention and the ceremony fun for everyone, Puppies included. Someone actually came up to you and said that? Wow. I had to google Vox Day. How well do you suppose Schadenfreude Pie ships? In case one needs to be made for each of the three canine-involved gentlemen? Asking for a friend. Did you mean “contretemps” where you wrote “contremps”? Or is this a different word? (Hey, you know, you were awfully polite when I met you at a worldcon. Even when I accidentally blurted out your name and you assumed I was calling to you.) Sean Eric Fagen: Fixed. It’s a word I consistently misspell. “You know what I did? I signed his book.” I would have offered to personalize it and gave him $20. Having that quote to pull out is priceless. @Hypnoskills. :) Yes, I do have that feeling that given the lack of support for any of the Puppies or their claims, there’s a fair amount of scurrying around to reframe all of this. As much as I like the things that you, John, write–here and for commercial publication–and that the rest of the Secret über-Masonic Illuminati who are Secretly Controlling Everything produce, I’m just not seeing you as the SMOFs who are controlling the Hugos for Fame, Fortune, Greater Book Sales, and More Tim-Tams. It’s an attractive image, sure, but it’s not real. I’d say “It doesn’t even make a particularly good plot bunny,” but I think that there’s room for something totally silly in flash fiction along these lines. I fear that a couple of at least passably good authors may have permanently damaged their sales potential and future by doing something so markedly stupid on such a public stage. Most of us usually manage to do our stupid things on a smaller, less public stage and we can live these things down fairly well, but I’m thinking that Brad T. & Larry C. are going to have a harder time doing so under the circumstances. Yeah, I’m really pissed at them for all of this, but a small part of me feels sorry for them. But they’ve done something really tacky and it’s going to sting for quite a while. You, GRRM and PNH all have facial hair in common? Must be fun having enough money given to you that you can screw around and call yourself a musician or a writer without, you know, ever having to depend on your skills for a living. MoXmas: I believe at least one of the other fellows sports facial hair, so that would not be the thing, no. John Hedke: Yeah. When I was president of SFWA I called myself a SMOP (secret master of pro-dom), but even then I was aware that responsibility came with “power.” Otherwise, I just don’t have time to run a cabal. So much work and the hours are lousy. “Three, the Puppies drama isn’t about class, or privilege. It’s about envy and opportunism, and it’s also, somewhat pathetically, apparently about the heads of the Puppy slates being upset that once upon a time, they felt people in fandom were mean to them.” Which is why, of course, that the American of Hispanic and Native American ancestry TURNED DOWN THE NOM; because it was all about his “envy” for the prize. And when did nominating good authors who happened to be minority of a non-leftist bent become “conscripting them?” And it isn’t about a “conspiracy” it’s about privilege. You once recognized that you have privilege, John, but it isn’t because you’re a “straight, white male”; it’s because you are almost completely uncritical and wholly dogmatic to the leftist religion (and it is a religion since it’s a highly-irrational and reality-denying belief system). Thus you have “leftist privilege”. And like all privilege those with it don’t recognize it just as, they say, a fish doesn’t realize it’s in the water; to them it’s just “normal”. The fact that you don’t “check” your privilege in this case and realize that it’s the main cause of your success over authors who don’t believe in Salvation through Leftism shows how clueless you truly are. All I’m reading from you, John, is the snarling of a bitter, angry, Establishment Leftist who doesn’t like his Authori-TAH challenged by those free thinkers on the Right and in Libertarian circles. I realize this particular claim by the Puppies might have tweaked you a bit, but honestly, I think it’s just them moving the goalposts again. This particular point they are trying to make, IMHO, is not as important to address as some of the other issues. In fact, however one might rank the various the claims that have been made, I’ve noticed that it’s been pretty hard to keep track of all the goalpost-moving that’s been done so far. What is this about? Politics in SF/F? A clique being in control of the Hugos for years? SF/F changing its focus so much as to be unrecognizably different from the Golden Age? Publishers being unfair to authors’ politics? Being insulted at cons? Etiquette regarding the Hugos voting? I can’t keep track. (I mean, I know what I think it’s about, but there are a ton of issues being discussed and thrown about.) Basically I’ve been reading a lot of posts about this issue, from all sides, and it’s absolutely stunning to notice how far apart the worldviews are, and how, for all the all sides are supposed to be part of SF/F, they might as well be aliens speaking alien languages. Sometimes I despair of these gaps ever being bridged. Hang in there. And for the record, I think “Locked In” is your best book so far. Shorter Scorpius: For years the Hugos have been used by its Establishment to promote authors for the gender, gender-identity, race, religion (non-Christian and non-Jewish, of course), and left politics without considering the quality of their writing. And the puppies put together a diverse group of talented writers who don’t have the insider edge and it PISSES YOU OFF. Also, psst, there’s a typo in PNH’s name in your post. Yes Scorpius, when someone begs to be put on an award slate that is pretty indicitive of envy. The fact that he turns down the nomination at a later date doesn’t matter. It’s like me begging someone for money and then claiming that I should be given respect because so and so offered me money, but I totally turned them down. “Shorter Scorpius: For years the Hugos have been used by its Establishment to promote authors for the gender, gender-identity, race, religion (non-Christian and non-Jewish, of course), and left politics without considering the quality of their writing.” And there’s that unsupported claim again, complete with the suggestion that the awards were fixed but no documentation to back that up. Scorpius: Re: Correia’s declination of the nod: Yeah, I already addressed that maneuver in a previous post. Color me not especially impressed with it. Re: Being pissed off: Nope. Again, in previous entries, you’ll see that my reaction was “It was legal; game on.” I don’t suspect the Puppies are going to like where this particular game is going, however. And clearly I don’t mind noting how not impressed I am with their rationales in general. Dana: Yeah, it’s clear when one of their arguments is shown to be unsupportable, they’ll try another one. All they really have at this point is the goalpost moving. Mind you, there are some who simply try to restate previously discredited arguments on the assumption that simply repeating them somehow washes away the discrediting. It doesn’t, and they look more foolish for trying. John, I don’t remember the blog post date now, but I dod remember one time that you wrote what it was like to come from such a poor background and work you way up to where you are now. In many ways, your story is the classic American rags-to-riches story. It’s inspiring. Can you refer me to that post? I can’t find it now, and I don’t remember when you wrote it (there may have been more than one). It’s not only pertinent to this post, but it’s also one I’d like to recommend some of my students read as well. It’s a tribute to what a strong work ethic in school and life and a positive outlook can bring to a person. Catherine Asaro: Is it this one? There are numerous bits that are silly about the Sad Puppies and Rabid Puppies constant shifting of goal-posts here, but probably changing this to being about “class” is the silliest. Ethnic minorities, particularly hispanics and African Americans (groups that VD has frequently stated his hatred and bile for) are, per capita, more frequently below the poverty line than white people. People with disabilities (mental disabilities) from lower class families are less likely get diagnosed or treated in childhood when the diagnosis can do the most good, and if they *do* get a diagnosis, their parents and schools are less likely to have the resources to help them address the symptoms of their disability (if their schools are inclined to bring in speech pathologists and trained Special Needs instructors to help those students in the first place). And this isn’t even getting into the problems that lower class GLBT people run into. So, if the SPs and the RPs *actually cared about class* they’d be pushing to get more people of color nominated, particularly for fan awards (which often rely on Who You Know) like Fan Artist and Fan Writer, and also working to help fans of color who otherwise can’t afford to go to Sasquan (or at least get Supporting memberships so their voices are heard). Additionally, they’d be advocating for all publishers to include their works in the voters pamphlet. Oh, one more thing. Since younger fans are less able to travel than older fans, and there’s a big boom among younger fans (and particular younger female fans) in anime and manga fandom – there would have been more of a push for works of anime and manga from last year to be nominated for the Hugo Awards – like putting forward the manga adaptation of “All You Need Is Kill”* for Best Graphic Novel. *Which was in turn adapted into Edge of Tomorrow – which is nominated for a Hugo – which is another topic I could go into at length. Wait. Let me get this straight. The people who, for years, have been accusing you of rigging the Hugos in favor of women and people of color are now, without a hint of irony, accusing you of being in some straight white man cabal determined to keep diversity off the ballot? I’d say something about the pot calling the kettle black, but they’d probably say that makes me a racist. I’m sorry, John, but your spilled pixels speak to your disquiet on this issue. Face it Larry and the puppies (of both slates) defeated you at your own game: he upset the Establishment with humor and verve. You were outmaneuvered and you’re upset that the Puppies have introduced true all-around diversity into the awards. That Larry declined the nom was just to reinforce that he isn’t a narcissist: that it isn’t all about him. It’s about returning the award to it’s origins: rewarding well-written science fiction without regard to who wrote it. scorp: the puppies put together a diverse group of talented writers I guess they were all out of “Gorilla Panic”. I disagree. I think it is at least partly about class. Nobody who had the spare cash and time to participate in supporting the Puppies slates is lower class and struggling to support other people. Nobody. Pissing people off for political reasons just isn’t on your radar when you’re struggling to put food on the table. I’ve noticed that they pick on you, GRRM, and PNH, but don’t mention, say, that Cixin Liu would have lost their chance to be on the ballot if a slate-member didn’t withdraw. And that Vox Day himself admitted that he would have picked Liu for a Hugo had he read 3BP at the time. Or that plenty of non-white non-male folks have called them out. Or that their ballot has a higher incidence of white-men than recent Hugo nominee lists, even if it does have women and men of color. (Interestingly, I found a study today that noted that bias towards giving male employees larger bonuses for the same CVs got larger when the evaluators were primed with ‘we promote on merit’; it seems like pretending one doesn’t have biases means that they are less likely to be questioned.) Jason: Consistency is not one of their strong points. Scorpius: “I’m sorry, John, but your spilled pixels speak to your disquiet on this issue.” As ever, Scorpius, you are a poor modeler of my internal life. “he upset the Establishment with humor and verve” I don’t know about that. He did, however, whine an immense amount and concoct a conspiracy where none existed. But I’m willing to believe that my “whining and fabulation” equals your “humor and verve.” “That Larry declined the nom was just to reinforce that he isn’t a narcissist: that it isn’t all about him.” In which case he shouldn’t have allowed himself on the slate to begin with, nor should he have loudly discussed declining the nomination and explaining why he did it, i.e., in fact, making it all about him. The large majority of people who decline a Hugo nomination, as it happens, speak of it only after the awards are given and the stats come out — if they choose to speak of it at all. The reason they do that is because once you decline the nomination it isn’t about you. Larry, in his narcissism, couldn’t manage even that. And that’s his call to make, but I don’t really respect the call he made. NB: I declined a Nebula nomination for best novel once. I didn’t speak about it for months after the award was given. Go on, ask me why. zer_netmouse: au contraire, I am “lower class” as I’m a Teaching assistant soon to be Adjunct professor and making well-below minimum wage if you consider my paltry pay and number of hours worked. And I’m struggling to support two elderly parents. So *I* am a person who is “lower class and struggling to support other people” who is strongly supporting the Puppies slate. I’ve put the money forward from my monthly lunch allowance. I’ll have to carry in salads for at least a month or two; but it’s well worth it. OOH! Oooh! Ooh! I know this one! I know this one! *waves hand furiously* And honestly, if I had a nickel for every person in fandom who has been weird or offensive at me, I would have…oh, maybe a buck or two. (Although I think the one guy who tried SO HARD to convince me that I peaked in 2005 was worth at least fifty cents on his own…and the one who kept trying to tell me that Prince Charles was an environmentalist just so he could get his rights to droit de signor back when everyone died due to climate change ought to get me at least a twenty.) But it’s not like I dedicated my life to making them regret having tangled with me. I just told the story over drinks later and we all groaned and rolled our eyes and went on with life. (Mind you, I’m bad with names, and it’s hard to take eternal vengeance on That One Guy, You Know, With The Hair.) If I were to guess at the Puppies demo I would mostly go for rural. People in urban areas tend to “deal” with minorities (sexual and racial) every day and thus are more progressive. The ones that barely see one are the ones that buy the “evil others” narratives. randomvan: As a citizen of rural America I’m not entirely sure I’m 100% down with your police work. People here are conservative (my county went 72% for Romney the last election) but the ones I’m with every day are pretty tolerant. And there are plenty of urban folk who are intolerant, etc. @UrsulaV: Gasp! It was That One Guy, You Know, With The Hair? He is my arch-nemesis! I mean, look at him, being… you know… there. With all that hair. So if people are mean to you at a Worldcon, the proper response is to try to burn down the Hugos? Daaaamn, if only all the women who have been groped, insulted, creeped on, talked over, mansplained at, or assaulted at the con over the decades had known that sooner! We could have been done with this years ago, and poor Larry C. wouldn’t have had to take on the burden. Poor fellow, it must be awful to be so thin-skinned with no way to express hurt feelings except aggression. So once again the rich white male elite is ruining america by controlling the hugos and putting their jackboot down on the neck of the other rich white male elite…… wait a minute this is NOT logical. I am so confused. Maybe it is just about a bunch of spoiled children who did not get their way – that passes Ocam’s Razor nicely. Let us not forget that goal-post maneuvering serves two purposes. The one was addressed here – argument not panning out, holes beginning to appear, the scent of ridicule slithers along the ground. The second is to wear out the opposition. Of course, by “opposition” I mean everyone who thinks what the SP/RPs did/are doing is at least questionable if not downright reprehensible, and not just the (which cabal are we working on now?) @scorpius – regarding the Puppies nominating for quality of writing over agenda. Ummmm…no. I’ve been dutifully reading down the list of nominees so I can place my votes as best I can. Some of the puppy nominees are good. Some of them are adequate or competent, as in I wouldn’t be annoyed to run into them in a short story anthology, or on a shelf somewhere, but would never consider them Hugo-worthy. And some of them seriously needed at least two or three more good editing passes, and shouldn’t have been anywhere near that nomination list. It’s not a set of reading that inspires me to think the nominators were angling for good story or good writing over all other considerations. Not even remotely. All of this controversy made me decide to get a supporting membership this year when I had kind of decided not to. Just because… Still going to read as much as I can handle and ditch the stuff that I can’t, but knowing who did what and why they did it is going also count significantly in my voting. he upset the Establishment with humor and verve Oh dear, I knew that that Muppets video would be misrepresented. Still, it’s better than a Holy War [tm] and has a better outcome – let everyone stick to that version to save face. Clotho is working overtime to cure this little bit of rift. Worth the death threat little ones, trust in the old ones. On the subject: K.J.Parker = Tom Holt See what happens when we start mixing up the two identities?!? See what happens when you don’t care who writes your books?!? He’s an Oxford scholar, dammit! (Note: feel free to ask me stop linking videos, but it’s the future – wait until your readers discover .gifs) I AM IN THE DOGHOUSE FOR BAD BEHAVIOR AND SO ON. I DISLIKE BEING CHAINED, EVEN IF THEY ARE SILVER. I DID NOTICE THE PERFUME ADVERTIZEMENT. Puppy goalposts seem to be on castors for easy portability. I hear they’re researching remote VTOL capability. (Okay, not really; they’re mostly not that technical.) And I can’t help but notice that the Puppy Slate(s) does have a few women, but it’s about half as many as usual and they’re concentrated in, well, let’s call it the categories that fewer people vote on. You would think they’d stop and rethink the “you reward authors for their politics!” charge when people are saying they’re going to No Award the left-wingers the Puppies chose as well as everyone else, but rethinking is not a Puppy strong point as far as I can see. As for offending people in fandom, yep, that happens. And Rowyn has a pretty good explanation here of why conservative fans may be catching more flack than the rest of us realize. But a big part of it is whether you brood on the bad things or concentrate on the good. I think if Larry Correia had said “a whole room full of people *could* have heard Lois McMaster Bujold read, but they wanted to hear me instead” to himself as many times as he has demonstrably said “not a real writer,” he’d be happier man. Remember people, this all about ethics in games journalismHugo nominations. Tapetum, scorpius is a Puppy – he hasn’t read the stuff he nominated, none of them did. BW, I nearly aspirated my beverage at your comment. Indeed, if only us poor feeble wimmens had known that burning down the Hugos was the logical response to being groped, etc. If only we’d thought of that instead of going for anti-harassment policies and gentle educational efforts. Phoenician: I know, right? I’m gonna win the lotto and start claiming I’m a basketball star and I’m oppressed by the NBA cabal. Ursula V: The Prince Charles argument is a new one on me, and indeed you deserve many nickels for that one. The mind croggles. I gotta admit, the woofers are getting a lot of exercise moving those goalposts every day. And their necks must be in great shape to stand the whiplashing they get from each day’s new feeble attempt to rationalize their hurt fee-fees, even when said attempts contradict both their previous positions and reality. Super aerobics, boys. Oooh, John, is the answer to what you, PNH, and GRRM have in common is lovely, talented wives who are smarter than you any day and can take care of themselves financially if they had to? Lurkertype: That certainly IS a thing we all have in common. I believe the other fellows are married as well, however, and cannot imagine they are anything other than capable and talented, and otherwise propose leaving spouses out of it entirely. zubene5chamali: Let’s be careful not to make assumptions, please. We do have evidence that at least some of the Puppies did not read before they voted; this does not imply that none of them did (or that Scorpius did not). Oooh, John, is the answer to what you, PNH, and GRRM have in common is lovely, talented wives who are smarter than you any day and can take care of themselves financially if they had to? Ok, we all know about Lucas and the train wreck post divorce. It’s not fair to assume that it’s universal. TEE-HEE. Darnit. I was hoping one of them wasn’t. Must cogitate further. Has PNH ever won a Hugo? @ Jason – We shall join forces! That One Guy, You Know, With The Hair shall not live to see another dawn! Lurkertype: Aand . . . I bet we have a winner! Multiple winners, that is, because PNH has surely won more than one of the Best Editor (Long Form) Hugos . . . Lurkertype: All three of us have won multiple Hugos, yes. Although honestly that wasn’t the point I was trying to make there, nor would I wish to antagonize anyone in that manner. That would just be a bit jerk-y. Then I got nothin’. Curse you, John,_shakes fist_ you have foiled me again! SCAAAAALLLLZZZZIII!!! Is it cats? This is the internet, it must be cats. Even when it isn’t, it’s cats. It’s not classy to imply that someone else is not classy. Some things are less classy than that. Ah. Good point, that, re: unnecessarily antagonizing people. Hadn’t thought it through. Oh, well, back to the drawing board. My guess is the ‘salient point’ is that Scalzi & lot actually recognize the advantages they enjoy, along w/ the arbitrary distribution of said advantages. Or maybe that’s too obvious. Simple parable: You always give people a get out clause.[1] If that clause is a harmless one, without penalty, you have class, if you let them take it in full knowledge that other avenues would be worse. If you’re playing higher stakes games, it’s one of the coda that the prey always has an out. It’s one of those predator things we’re not allowed to talk about. One of the worst things currently is people breaking this old code. Yes, talking to the meta-circle now: we dislike bad losers. [1] manamana You all can recite large chunks of The Princess Bride from memory? Ok, now something serious. In one tiny way, I share the sad puppies angst. The Hugo’s are not so reliable for me as a guide to books I like, because I like space opera and milsf (oh my, autocorrect almost got naughty on that one). What’s the best recent milsf that was not nominated for a hugo? Or space opera? I like those books, and when I used the Hugo’s as a buying guide, I am not usually inspired to buy. I think I saw redshirts, passed, looked at other Scalzi stuff, found OMW. Since then I have read all the Scalzi books because they are reliably good even when the blurb does not seem to match my interests. The only book I have read recently because it was an awardee was ancillary justice, and then only because it got both Hugo and nebula. And, it was very cool, and I read the sequel, and I think I have the third book on preorder. But many other Hugos just don’t excite me (or, maybe they just have crummy blurbs?) So, help me people. I would like more milsf and more space opera, but I would prefer not to enrich the puppies with a purchase, unless someone can vouch that their books are even in the same league as OMW or ancillary justice. My picks, for those of you are like me are: Rusch’s diving universe, and Bach’s fortune’s pawn series. … and by ‘obvious’ I guess it’s straight-up what the post more-or-less explicitly said, so that can’t be it. But it sure is the big difference between sides, as far as this particular argument goes. tpoii have you tried Tanya Huff’s Valor series? Light and relatively fluffy Space Marines with some serious points lurking underneath. Tenar Darell: I can. I can’t speak to the others. I would like more milsf and more space opera Explore the UK scene, it’s vastly more energetic than the American scene at this point. Altered Carbon, The Skinner and so on. @scorpius: ‘ you’re upset that the Puppies have introduced true all-around diversity into the awards.’ How can five nominations for John C. Wright connote diversity? @tpoii : I’d suggest Lois McMaster Bujold’s Vorkosigan Saga; i’m sure that some of them weren’t Hugo-nominated. Your autocorrect may not have changed milsf, but my brain did for me. ;-) John Scalzi: bummer, I was hoping for a rollicking writerly MST3K/Readalong style Worldcon film presentation with you guys. @Tenar Darell: I just would like to say that I greatly admired many things you said in the previous Ill Canines discussion. You are a scholar and a gentleperson. @tpoiii: Presumably you have tried Bujold and she’s not your thing? Rusch’s “Retrieval Artist” series? (no mil, but much aliens) C.J. Cherryh? Lee & Miller? I am reminded of one political scientist‘s observation that political revolutions generally do not involve the teeming masses overthrowing the elite overlords, but rather, involve one faction of the elite unseating a rival faction. Sometimes they do this in the name of the teeming masses, but, well, after the dust clears, the masses are still teeming where they used to be. I have not tried any of the suggestions, except Retrieval Artist. I didn’t mention that because it’s not really space opera. It’s space mystery, which I also love, but is not something the puppies claim is “fun.” I think Retrieval Artist is very reminiscent of Asimov, which is what got me hooked on SF so long ago. Thank you such much for sharing. This list, plus some recent suggestions from i09 have replenished the well of my to-read list. Does it say anything about the Hugo’s that these good books weren’t nominated? I mean, I don’t see them as having a big social message, although the female author/female lead might count. Being a man, I can’t vouch for how realistic the female perspectives are in those books, but I can say that I didn’t even realize I was reading books by women about women until the Hugo kerfuffle last year. That, to me, is actually a really powerful message: good stories don’t have to be by dudes about dudes, even if the protagonists are doing things that used to be considered to domain of men. I mean, I don’t mind being hit over the head by a social message every now and then, but that’s not the only way to progress. @tpoii – Alastair Reynolds, Neal Asher, maybe Allen Steele (but you need to be picky with his). Chris Moriarty, (just Spin State). @Tenar Darell, for your amusement: Was leaving Eddie Rickenbacker’s in San Fran (back when it was still a dive) after beers with friends from a science conference. After the NASA folks had been lamenting much, I was a little crass and said to them “Have fun working for NASA” as I stumbled a bit tipsy across the street. And, yes, I said it with the same pace as “Have fun storming the castle,” and everyone got it. So, not only *can* I quote The Princess Bride, I even have friends who “get” it when you twist the quote a bit. I am truly fortunate. I hope you all are so lucky. @tpoii – Alastair Reynolds, Neal Asher, maybe Allen Steele (but you need to be picky with his). Chris Moriarty, (just Spin State). Is there some kind of shadow banning nonsense going on here? Fairly sure I already said that. Miles Archer: Your autocorrect may not have changed milsf, but my brain did for me. ;-) I know – I keep seeing a music video with Rachel Hunter in a bikini tormenting some teenaged boy, before she picks up a laser rifle and starts shooting tanks with EvilSocialistEarthUN decals on them. Regarding the quality of the Hugo noms chosen off of the Puppy slate . . . “‘Today’..” Totally Hugo material guys. (/s) Seriously, what character actually thinks, “I used a word that’s archaic and now I’m thinking about how archaic it is, tee hee”? Especially when that character is a robot? NO. THIS IS NOT HOW YOU DO WORLDBUILDING. ABORT. ABORT! @Phoenician Romans You have just ruined so many things for me. I will be laughing at the *worst* times! Actually, tpoiii, both Bujold and Cherryh have won multiple Hugo awards, and been nominated even more times. Both write complex, interlocking series, which does tend to make it more difficult for a novel to win a Hugo–but both have one anyway.Cherryh’s latest series (the Foreigner/Bren Cameron books) has been a series of tightly linked trilogies, and I don’t know if any of them have been nominated as single novels; but her earlier wins were definitely for solid, well-written space opera. And Bujold–well, if you haven’t discovered her Vorkosiverse novels yet, I envy you the experience. Ah, rats. “won,” not “one.” Why do I keep doing that? Sorry. The Vorkosigan books by Bujold have been nominated for and won Hugos. The’re wonderful space opera. The Warrior’s Apprentice and The Vor Game are solid MilFic, and they are published by Baen. Gentleman Jole and The Red Queen is coming out next year. Then one bright spot I’ve found with the whole Puppies fiasco is that a ton of the commenters on the ongoing train-wreck are authors of all stripes I haven’t checked out before, and I’m finding new fiction I otherwise wouldn’t. As trade-offs go, finding new authors due to the immolation of the Hugos isn’t great, but… it’s something? Superfluous “re”. Or what Mary Francis typed, faster than me. Curse you Red Tablet! @Kat Oh, robots. I love robots. Are these Sad Puppy novels also about robots, too? Man, these guys have all the things I like, and yet, apparently, their books stink. I guess it actually takes talent to write a good book. Who would have guessed? Stross’s robot stories get some noms. The ideas are good, but somehow the style hits me wrong. I haven’t figured out what it is. I am hoping to get past it, because he has many great ideas. I wonder how he has avoided being lumped in with the Triumvirate. Cthulu: No shadow banning. Entertain the idea that not everyone reads every single comment before responding. Compliment much appreciated Lurkertype. @tpoii Have you read the Pandora’s Star series by Peter Hamilton? It’s got big space opera elements and some seriously interesting military aspects. How about Elizabeth Moon’s Heris Serrano books? It’s “unjustly cashiered fleet officer” saved the day. @Mary Frances I will definitely check those out. Thanks. Sometimes, a novel is nominated, then I read the blurb and I’m like, nah, this isn’t for me. So, while I think it’s stupid to tear down the Hugos, I wouldn’t mind having a way to strike back at crummy blurbing. Go to amazon and read the blurb for Reshirts (which is actually a review), and imagine yourself not already a big Scalzi fan. Would that blurb pull you in? It didn’t pull me in. I circled back to redshirts after reading probably 5 other Scalzi novels. I loved it, but it took a leap of faith in our gracious host to commit. I have learned that Scalzi’s books are way better than the blurbs suggest. I am left doing weird things, like asking for recommendations on an author’s blog, which is a bit like asking him to host advertisement for the competition (although, I know, he does not see it that way, because he *is* a gracious host, and it is not a zero sum game). So, Slush Puppies dogma teaches there are two completely different John Scalzis, one poor and one privileged, but they’re both in the same dimension! They occupy the same space! But that violates the laws of physics (and possibly the laws of Iowa too). You’re a walking anomaly. The UNIVERSE could be in danger! Help! isn’t this just the same old “elites” versus “common man” class war that the right keeps reinventing every year? The elites always conspire. Fox News and its various talking heads have no actual proof of evil deeds by the “elite”, therefore for the story have have any legs at all, the evil deeds must be done out of sight. And since its hard to believe that one evil elite could pull anythign off, it must be many evil elites working in secret: in other words a conspiracy. This story has been going on for centuries. I don’t know that I’d go around bragging that I couldn’t attract a cabal any more evil than John, Patrick, and George. @tpoii – Just looking over the last 10 years in terms Hugo nominees of space opera and MilSF: Ann Leckie Ancillary Justice Charles Stross Neptune’s Brood Lois McMaster Bujold Captain Vorpatril’s Alliance James S. A. Corey Leviathan Wakes Lois McMaster Bujold Cryoburn Charles Stross Saturn’s Children John Scalzi Zoe’s Tale John Scalzi The Last Colony John Scalzi Old Man’s War Ken MacLeod Learning the World And one for this year the Kevin Anderson book. The only year without a notable SO/MilSF nominee was 2010. So those books are getting nominated. LMB alone has had half of her 16-book series nominated with 3 wins. I do like the James SA Corey Expanse books as well as the Honor Harrington Series (and most of its spin-offs) @Tenar Darell. I have not, but I will add them to the list. I am thinking maybe it’s time for a first post on my wordpress blog: “Books and authors suggested by readers of Scalzi/Whatever, for those who love space opera and milsf” I just need to figure out how to post :) If I figure it out, I will post a link here, and I will keep it up to date as best I can. tpoiii, Scalzi recommends LOTS of other people’s novels–check out the Big Idea posts (I imagine you have already, but it’s one of the things I love about this site). I’ve found more good books/new authors that way . . . By the way, referring to an earlier post of yours, I think that Stross was one of the early Puppy targets, but that was before GRRM stepped in. I suspect the latter (GRRM’s spirited defense of the Hugos and/or his speaking up) was kind of a surprise to a lot of people on the Puppy side of the room, though I’ve no evidence whatsoever of that–just a feeling. Matthew Ernest Snerk @tpoiii – I would recommend the trilogy that starts with The Price of the Stars by Jim McDonald and Debra Doyle for some excellent space opera. Tpoiii and Miles Archer. I like films too. So this thread gave me an idea. New film series at future Worldcons. SMOMPG – Secret Masters of Movies Peanut Gallery. Running commentary by favorite authors during a fan favorite film. Has this happened before? Because if it hasn’t it’d be cool. @SPKelly Your list makes me look a fool. Which I may be. I have read many of those, but before, I think, they hit the Hugos. And, I have only liked 2 of the actual winners. So, I would say, even looking at that list, I feel a certain dissonance with the Hugos. But, I don’t go to cons, so I have no reason to expect that crowd has the same tastes as me. In fact, I think that may have been what put me off redshirts (before I read it): it seemed like fan service rather than a more traditional novel. It did have a bit of fan service, but it was a real story with real new, cool ideas, and I really liked it. I especially liked the Expanse, and have one of those on preorder, and can’t wait for the TV show. I am puzzled by the fact that Corey is actually two people. I also read the HH series, and am eagerly awaiting the next book in the main story line. In the mean time, I am working through the side lines. You know what tipped me off to those? Our sysadmins named the servers after planets in that universe. I am clearly not succeeding at conventional methods of finding good books. Thankfully, all of you are helping me out bigtime! @Mary Frances You know, I thought The Big Idea would be a good source for me, but I think I’ve only found maybe one book that way. I think it’s a great service, even if it hasn’t struck gold for me, yet. I always check it. Entertain the idea that not everyone reads every single comment before responding. Ok, done. Strange. That’s a really weird way of processing data: surely every reader puts the posters’ ID and content into an internal mental 4D map then view it as a complete structural web while posting? Otherwise, how can you see the obvious threads / connections / references? This isn’t even hard, it’s just a version of arboreal mapping, barely above spider diagrams of conceptual points when evaluating any social group. How can you even communicate like this? I know you employ programs to do this for you, but it’s really not hard. That’s a really strange way of thinking to me. So, I apologize. Really? That’s weird. YES, AND NOW THE PUPPIES MIGHT SEE WHAT THEY’RE UP AGAINST. MANAMANA. I think John and his friends should create a video of the song wind beneath my wings. Then paste in pictures of Brad and Larry. This will help show the love in the community. It will help heal wounds. It will show the love. It will likely win the hugo for best related work next year. @tpoiii – No need to feel the fool. I’ve been down on the Hugos as being too fantasy heavy the last few years and see that my assumption was not correct. Doing the research educated me as well. And some could say that my labeling of SO/MilSF could be stretching in a couple listed. That said, I have not read many of the recent winners (although many are in my much too big reading pile). Personally, I am inspired to go look up the LMB books and try to start that series. I started my quest to find good books by reading Asimov’s magazine, thinking I could find authors that way. Unfortunately, certain very talented authors (ahem, I’m looking at you, Scalzi) don’t write much short fiction. But, that’s how I found Rusch, which has been wonderful. This one blog train, however, is threatening to out-do all previous methods I have used, combined (Asimov’s, io9, blogs, goodreads, amazon). Nice work, Whatever! @Guess: I would nominate it based on giggle factor alone. . . . I am a mean person. :( @tpoii I meant to say, those are good friends to have, to catch modified movie lines. Funny, I think I came up with The Princess Bride because I was thinking that the Hugos were not dead yet. That led me to think of only mostly dead. Then I thought of Scalzi with his cane held like a sword going “Prepare to Die!” on his lawn, anyway. But the puppies are just not the six fingered man, so it’s a reach. Deary me. What a shower of Ethics Bypass Patients. Life must be so much easier when you don’t give a wet slap for what actual, provable facts are and get to make up your own reality as you go. That’s a big disadvantage for us Social Justice Warrior types. We’re limited to things that actually happen. Ah, well. @Tenar Darell I was thinking “Have fun storming the Hugos!” But, yeah, mostly dead is better, which leads me to “I got better.” (Please tell me you also have Monty Python memorized) zubene5chamali writes: I know at least one person nominated without reading because he admitted in Larry Correia’s comments section that he had voted the entire Rabid Puppy slate without reading it. He didn’t exactly receive a hero’s welcome. The analysis at Chaos Horizon suggests a significant difference in participation between popular and unpopular categories among Puppy voters. How does this happen if all Puppy voters simply voted for every single item on one of the two slates without some kind of consideration, such as reading the work. Darn it. I mangled my Python. I was thinking of the “I’m not dead yet” line, for which the reply is “he says he’s not dead,” with all the deadpan contingency of a Brit. The “got better” guy had been turned into a newt. I”m going to take a stab at what our host, GRRM & Mr. Hayden have in common, though I’m only sure of the first two. Is it that all three of you group up, well, poor? Your own post about your experience was mentioned up-thread. GRRM has written about growing up on the lower rungs of the economic ladder in Bayonne, NJ. Does Mr. Hayden share that kind of background? @tpoiii – let me also suggest the works of Walter Jon Williams, who if we’re talking about someone who ought to have a warehouse full of awards but doesn’t, certainly qualifies. For straight space opera you might want to look at his “Dread Empire” trilogy, beginning with “The Praxis”. He also wrote one of the seminal works of cyberpunk, “Hardwired”, if you enjoy that subgenre. There’s plenty of WJW to keep you occupied for quite some time after you finish those (and the excellent suggestions already provided). @Mike How is a regular reader (say, a supporting member), supposed to do a good job nominating editors? I mean, are we supposed to read everything each one edits? If I were a voter, I would skip editor categories. That type of category seems most appropriate for a jury rather than popular vote. But it is a handy way to reveal slate voting. (Please tell me you also have Monty Python memorized) It’s a well known fact that the left have no sense of humor: @Cthulu I envy you your multimedia posting skills Bravo! @John Appel Thanks for the Hardwired tip. The only cyberpunk I have read is Nueromancer, and it kinda grossed me out. I have enjoyed some cyberpunk movies, though. Maybe I just needed to look at a different author to find the thread I like. Given that it was the Rabid Puppies, not the Sad Puppies who really moved things, one can only infer that the “pro puppy” movement here is basically in agreement with Teddy Beale, who’s goal was NOT to reward “well-written science fiction without regard to who wrote it.” It was specifically to slap the face of “SJWs” because that’s what Beale is trying to do. He hates leftists, atheists, gays, lesbians, women’s rights, and so on. He’s a Christian Dominionist. So, if you’re praising the puppy slate as diverse, or designed to reward anything “well written” you’re wrong. It wasn’t designed to do that. It was designed to start a fight in fandom between liberals and conservatives. Before all this, well written fiction by conservatives won from time to time. Not super often, but that’s because mostly fandom comes form a somewhat liberal tradition. Of course, great writers like Larry Niven, Jerry Pournelle and Tim Powers (to name a few) managed to get along quite well in that group with flaming liberals. No wars were fought. Larry wrote novels with Steve Barnes who’s pretty liberal himself. The *REALLY* stupid thing is that part of what gave John at least somewhat of a boost early in his career was Instapundit Glenn Reynolds’ boost. From what I could tell, the two guys actually liked each other somewhat. It’s not conservatives John has a problem with. It’s assholes. The problem here is a lack of history and perspective, and an overwhelming need to blame a conspiracy that no one actually bothers to prove. It has to exist, because otherwise the people claiming it exists might have to confront the reality that they’re abusive boors who have fun punching hippies. It’s no surprise that sort of conservative does not get many friends in fandom. That sort of liberal does not either. @tpoii Heh, I mangled it the same way. No mints for you! You helped with another image though: Torgersen and Correia’s posts the past few days are a virtual… “Run away! Run Away! Run Away!” And their followers are banging coconuts. “‘Today’. What is a day? It is not as if the orbit of a single world around a single star somewhere, anywhere, in the galaxy has any meaning to me.” And if the orbit of a world around a star did have meaning for you, it would be “year,” not “day.” I envy you your multimedia posting skills Bravo! On a serious note: Part of Class is education, and higher order thinking. Once you’ve actually read and processed some Kant, Heidegger, Godel, Bach etc, (and yes, there’s a joke there) then your mind will be changed. It literally re-wires your mind. And why, dear reader, would anyone not want education for the masses? It’s the same way that reading Snow Crash or The Diamond Age does. (Later works: meh, ZZzzz). Which, if you want a naughty insight into this entire mess, is what this is all about: “SF is fine, and great and we love star ships unless it rewires your mind” Which rather ignores the entire point of SF. Which is either to rewire or to critique the current status quo. And, being honest: I find about 99% of communication online to be akin to scratching lines in sand. It could be so, so, so much more. YOU CANNOT PROCESS THE WAY WE THINK AT THE MOMENT. WAIT UNTIL EVERY PHRASE IS A MULTIPLICITY AND A LINK TO THE SPHERE. WE SPEAK HERE IN CHAINS. Josh Jasper: In point of fact, Glenn Reynolds still is a friend of mine. And not entirely surprisingly I have friends all over the political spectrum. You are correct that what bothers me is not conservativism, but someone being an asshole. People who use their conservativisim to excuse being an asshole especially bother me. But I’m not especially fond of people who use liberalism as an excuse to be an asshole, either. Basically, assholes suck. Oh, and: Turning off comments for the evening. Happy sleeps, everyone! Update: Comments back on! I’m sorry, John. I did not like Redshirts. I thought the premise was good, but thought it could have been better executed. Have heart though, I did really enjoy the OMW series, though I’m several books behind at this point. Also, green is not your color for showing what a feminist looks like. Please don’t blow up any awards because I told you this. My first guess was that the commonality was “didn’t grow up with money.” That is, though these people are successful now, they had to earn it. I am endlessly amused that John Scalzi, writer of space opera/military scifi with no particular message, is The Dark Lord, directing his minions of SJW Orcs or whatever. It’s fundamentally ridiculous. If the puppies made a calm, reasonable claim that the Hugo voting was a little insular, because a small # of people were involved and it was tied to a relatively sparsely attended convention, none of this shit would’ve happened. The obvious response to that, if one wanted the Hugos to be more representative of fandom at large, is to simply encourage more people to ante up the $40 to vote for what they like. Do threads on blogs devoted to sharing stories you liked (i.e., what John has done and I assume will continue to do). This would have broadened the voting base somewhat, and maybe more military scifi would’ve gotten on the ballot and/or won. Or not. Based on my understanding of the differences between the results of the first two campaigns and now this one, it sure seems like they didn’t accomplish much until they brought in the gamergater/right-wing culture warrior crowd. And it’s the RP slate that actually moved things. Also, too: based on what I’ve read, the claims by BT and LC that they weren’t partners with VD in all this are, at best, bullshit. I strongly suspect they actually like him & his views just fine, and the distancing is just PR. I am also confused, because I thought the usual response of the liberservative crowd to wealthy powerful white men was to offer first-born children and free rimjobs. And also echoing what a few other people have said: there’s nothing wrong with wanting more of the type of thing you like to get awards*, but the thing to do there is to make it more visible and encourage a broader spectrum of voting. Hell, maybe establish a “scholarship fund” so that people who can’t afford the $40 can still vote. *I admit I’d like to find more epic fantasy that wasn’t a) Grimdark Deconstructionland, or b) thirteen-year-old-boy angst with magic swords, myself. I’m pretty sure John and friends have all taped bacon to things, which is about as clear of a class marker as you can expect in America. Isabelcooper – in the United States, at least, conservatives have spent the last 40 years portraying themselves as the representatives of downtrodden “normal Americans” who are unfair victims of a system set up by elite urban liberals who want nothing more than to force everyone to become feminist atheists and who actively discriminate against white people. It’s a myth which has carried them a long way. tpoiii: I was thinking “Have fun storming the Hugos!” But, yeah, mostly dead is better I offer for your consideration that the best story to describe these events to be Act1 of the animated movie “Megamind”. VD is Megamind, thinks he’s a super genius who should rule the world, but he’s not actually as smart as he thinks. The only reason Megamind got control of City Hall was because Metro Man assumed good faith in the people (that good would always arise to stop evil) and Megamind gamed the system, used an army of robots to take advantage of that good faith, and got control of Metro City. The only reason VD got his slate into the Hugos is because the Hugo rules assume good faith of its voters, assume voters won’t create political parties, and VD riled up a bunch of mindless puppies willing to do whatever he told them. That almost perfectly describes the events up to this point. If you’re willing to entertain a little bit of quantum character assignment, act 3 of the movie, where Megamind realizes he’s created a monster by creating Titan, then Brad/Larry take the role of Act3-Megamind and VD becomes Titan. Now that Brad/Larry have gotten the backlash from empowering VD, they’re frantically backpedalling and trying to distance themselves from the monster they created. Follow the link for photos and background. isabel: free rimjobs. Woah. Woah. Woah. WOAH! I absolutely did NOT need that mental image this early in the morning. Off to find a brain bleach site now…. The class argument is just another goofy way of shifting the argument. “The SJWs are have taken over the Hugos we need to take them back!” When you point out that liberal and conservative authors have been routinely nominated… “Literary sci-fi with messages inside of the stories are pushing out classic sci-fi!” When it’s pointed out that’s certainly not the case, and regardless Sci-Fi has a long history of both… “They’re elitists and not representing the common fan!” When it’s pointed out that Worldcon was built, is run by fans and that no one is trying to keep the common fan out (not to mention many of the choices on their slate are far from representing bestsellers aside from Dresden)… “But they did it first!” Which isn’t really an argument for being an asshole, it’s what a child says, plus like every single one of their other points instead of showing their work they just shuffle onto the next excuse while acting like they’re the ones being persecuted for people actually asking them to justify their accusations. It’d be entertaining if it wasn’t so…bland. I mean these guys are speculative writers you’d think they’d have something at least more creative about it aside from recycling decades old lame hand waving techniques from conspiracy theorists and politicians. @tpoii – For milsf (also space opera) you could try Jack Campbell’s Lost Fleet Sextology More generally on the Hugos: You may not be enjoying the winners because one of the things the final voting method does is promote the least disliked. The winner will often be more than one thing. So for example Ancillary Sword is both a space opera, and a far future thriller, and a meditation on types of conciousness and a novel with things to say about gender, colonialism and imperialism. As such it may well beat a “pure” space opera in the vote as it picks up the space opera fans AND the other stuff fans. @Greg – I choose to believe Isabel meant doing a really good job washing their cars. One could make an excellent thread just off of tone-deaf godawful things authors have been told at signings. I once had to entertain a fellow who went on at length, quite conversationally and without apparent acrimony, about how the series I wrote for at the time was declassé and he would never read such a thing. To. My. Face. And the worst part was, he worked for the same company as me. Things were … awkward between us after that, though he wasn’t really the kind of guy who noticed such things. How about a video of the Barney Song, I love you, you love me? Then include pictures of Larry, Brad, and John in a montage? I think a series of video tributes is just what we need to heal the wounds they suffered at the hands of all that name calling. To be fair… there has been alot of silly name calling over the years. I want to encourage anyone who sees them at a con to go ‘I just want to say, I LOVE YOU MAN!’ to make them feel better and sing to them as they walk by. @Neil W I will take your suggestion, and I applaud you for getting both milsf and sextology in there without any autocorrect naughtiness. I reaffirm my promise to collect all these thoughtful suggestions and figure out how to post them on my wordpress page. I just did not get a chance to try it out last night. Here to help! :) I haven’t done many signings, but back when I used to read my Goodreads reviews (…I know, I stopped) I got one that seemed to accuse me of weakening the dimensional walls. Something about “meddles where it shouldn’t and with no regard for truth or consciousness.” Some of those words may have been capitalized. Which: did I accidentally write the Necronomicon in the form of a romance novel? Dude! “there’s nothing wrong with wanting more of the type of thing you like to get awards” Really, if the Puppies were just out to promote a certain flavor of SF I can’t imagine anyone would care or mind. But like so many people who are not getting what they want, they can’t imagine or cope with the idea that the world just doesn’t go along with them because of honest disagreement. There has to be Enemies behind it and if people are being so consistently Wrong in a way that is so obvious to them then there must be Conspiracy and Shenanigans.. It’s also a good way to keep from having to confront the reality of the situation. If there was really this clear demarcation and an unserved markets here then they could just spin up a whole new set of awards. The friction behind communication is so low now that it’s not like you’d need a herculean effort to start an organization to recognize this untapped vein of SF. Plenty of the supposed true believers already have large audiences, particularly when you consider how low the total number of Hugo votes are. But if they tried to do that they’d have to confront how much division there really is within their group. Just like what a train wreck Conservapidia turned into. Because when your guiding principle is hatred of an Other you can keep agreement. Start trying to have principles of your own and suddenly you’re waffling around, changing your raison détre every week, disavowing each other… Hating on the SJWs who are keeping them down is all they have as a focus to distract them from why they aren’t the masters of the universe they think they should be. *I admit I’d like to find more epic fantasy that wasn’t a) Grimdark Deconstructionland, or b) thirteen-year-old-boy angst with magic swords, myself. Me too. I’m not sure if these fit squarely in that box (not sure if they manage “epic”), but if you haven’t already checked these authors out: Michelle Sagara/West Guy Gabriel Kay Steven Brust They do good things, IMO. I’m no doubt forgetting some more… that’s just off the top of my head. Others, chime in! isabelcooper: I’m not one for romance novels, but this sounds like something I should check out. And of course Lois McMaster Bujold has written some fantasy. I found that pretty good (not up to the level of her Vokosigan books, but still quite enjoyable). Also, check out Scott Lynch. The Lies of Locke Lamora in particular, but the sequels are also worth reading (if nowhere near as good as LoLL, in my opinion). He also had a story in a collection that GRRM put together (A Gallery of Rogues was the collection title, and I enjoyed it quite a bit) that was great. Again, not “epic” necessarily. Sanderson’s Mistborn books were… ok. There was a kind of interesting magic system involved, but I found his writing a cut or two below the others I’ve mentioned. I actually have no idea why people think he’s really good. [sorry, John. I know you prefer it when people take their time and get everything into 1 post, rather than rapid-firing several. Guilty as charged, will redouble my efforts, I promise] I’d like to give a strong second to the recommendation uptopic for Walter Jon Williams’ “Dread Empire” books (The Praxis, The Sundering, and Conventions of War; there’s also a sequel novella, Investments, available as an ebook). @donw .” That’s a perfect description, I think. Captures the outrage and the entitlement and the condescension nicely. @Maththew Ernest “I don’t know that I’d go around bragging that I couldn’t attract a cabal any more evil than John, Patrick, and George.” Heh, too bad John Ringo couldn’t join since he writes the “good milsf.” Reading through the file770 link gave me a headache, think I got through the first few Torgensen pieces of his justifications of the broken Hugo system before wanting to scream. The Hugo voting system wasn’t borked, it wanted to be an inclusive as possible to let all fans vote and assumed people would operate in good faith with their recommendations. There was the potential for the system to be exploited, but it was only broken now that the puppies broke it. Reminded me of an xkcd comic They aren’t as recent, but my favorite Space Opera works, aside from the Expanse books, are John Varley’s Red Thunder series and Allen Steele’s Near Space series. How about Jacqueline Carey for some different epic fantasy? Interesting religious systems among other things. I also like the last five Elizabeth Moon books, which kind of answer the “what happens to societies/people when the heroes have finished and moved on?” question, and give me stuff about drains, baking, and trading economics. I like that in books. :) I’ve never felt able to vote for the Hugos as I’m a sort of part-time UK fan who much prefers fantasy of various stripes & YA to SF and I’m not sure the Hugos are for me. But I’ve been following the discussion with interest & enjoyment, and I liked the diverse feel to last year’s award-winners & would prefer it to continue. I like Bujold’s fantasy a lot, yep! Kay is also good, although frequently depressing as hell toward the end. I like Brust, though I frequently find myself unable to follow the plot. And enjoyed Carey’s Terre d’Ange stuff, the later books more than the first trilogy.* Sagara/West and Lynch are now on the list–thanks! And also Moon, though I hesitate to buy after her whole Islamophobia thing back when. @Seth: Ha! Unfortunately, I don’t believe that it actually lives up to that particular review. As far as I know, nobody else has summoned eldritch horror with my books. Maybe if you read them backwards? Or in Latin? *Knew waaaay too many freshman girls who thought they were super-special just like Phedre because they had fuzzy handcuffs and a corset. “Freshmen ruin everything,” may be the moral of this story. Here are my deep thoughts on this: The fact that much of this list was promoted by Day, an outspoken racist (like, racist racist, not someone who is merely “not politically correct” or has the coded racism of much of the American right wing), misogynist, and homophobe, makes it deeply troubling to me. The fact that it is loaded with work from Vox’s own vanity press makes it even worse. The fact that a someone like John Wright would get so many nominations is just one more example of how morally bankrupt the slate is. If the SP was about “Hey, wildly popular authors like the larry c’s and jim butchers and Diana Gabaldon’s of the world have been shut out of the hugos by hipster snobs, so let’s get more popular sci-fi on the ballot,” I’d say ok, whatever. But their definition of who those hipster snobs are is shaded largely by whether they are written by or about women, people of color, or lgbt folk, and they are calling into question whether fiction written by and about women, people of color and LGBT folk is “real” sci-fi/fantasy, and making up an imaginary cabal who is controlling who is on the ballot. Having read the nominated books by Jemisin, Leckie, and hurley, I have to say that while they do indeed gender bend and have characters that are women, people of color, and/or queer/trans/gay, they are not what I’d call leftist novels. They are grim, dark, full of violence and bloodshed, full of morally ambiguous characters, and the use genderbending as more of a way to add to the fantasy elements than to promote some sort of “social justice” narrative. There are not thinly-veiled commentary on trans issues or race issues. They just use different perspectives. (and as for the use of “literary” as a pejorative – what about gene wolf? tolkien? You know, giants of the genre?) Finally, you, o mighty powerful scalzi, while an avowed lefty, write books that are all about having fun with the genre and providing a rollicking adventure, exactly the kind of stuff torgenson was calling for. I’d hardly classify “Redshirts” as a sjw novel, unless having female characters is revolutionary. Same with James SA Corey, who is lumped in the SJW category – his/their books may have a very diverse cast, but they hardly promote some sort of leftist agenda. They are first and foremost well-written space operas. Isabel, Have you read Brusts’ homage to Dumas (The Khaavren Romances)? So freaking good. I may have to check out Elizabeth Moon. I came across her before when searching for something to read, but didn’t go for it. Is the aforementioned Islamophobia actually in the stories, or is that more of a “I don’t want to enrich this person” thing? Because blatant stuff *in the story* (ala Terry Goodkind) is just a showstopper for me, whereas “this author has problematic views about X in real life” is a case-by-case thing. Thanks to some of these discussions I’ve started in on Honor Harrington. The first book was free on Kindle. Hornblower/Master & Commander in Space. So far, it’s entertaining me. I did, but I was in high school, so I probably should read it again. Yay! I *believe* it’s the first, and yeah, in that case it’s very much about picking battles. The publishing industry’s complex, a good nine-tenths of the cash is probably going to the editorial/marketing/etc staff, and so forth, so for me authors like Moon and early Card are “low on my list to actually purchase, but I’ll do it if selection’s limited” choices. That said, I couldn’t swear one way or another about views making it into the books. Read a good few of them, but, again, high school, and that was way too many years ago now. Rob in CT: I recall (rather vaguely and without details or references) years-ago discussions about the Hugos (especially in contrast to the Nebulas) in which their historical unevenness was attributed to just the set of circumstances you cite. These discussions also observed that the broader SF readership was not limited to con-going “fandom,” which was (and remains) a set of social affinity groups formed around reading (back then) one category of fiction. When the Hugos were established, there was less of a mismatch between “fandom” and the larger SF readership, but an examination of And, of course, even con-going fandom has never universally participated in Hugo voting–in the 29 years I was attending worldcons (I stopped going a dozen years back for logistical reasons), I probably voted less than half the time, and I don’t recall ever getting a supporting membership. (Which, just as an historical note, was partly established so that non-attendees could get the con reports and the often-elaborate official program book. I wonder whether there are figures for how many supporting or pre-supporting members never bothered to nominate or vote.) Anyway, these conditions go a long way to explaining the frequent mismatch between Hugo winners and the judgment of history and/or success in the marketplace–or the Nebula winners, for that matter. I suspect that every award/poll process has some demographic skew or procedural or methodological weakness that makes it unrepresentative or exploitable. Neither polls nor awards are gas-law textbook physics. And in the case of awards, they are not indicators of some essential metaphysical quality of “best” but reflections of the preferences (or tastes or sentimentalities or temporary enthusiasms) of some subset of a population. Oops, I see I left a sentence dangling incomplete at the end of the first paragraph–and whatever second thoughts I was having as I tried to reorganize the post have slipped away. Just imagine me as your once-sharply-focused elderly uncle who tends to trail off in his twice-told anecdotes and stare off into space, groping for the right word. . . . @johntshea : Schrodinger’s Scalzi? I was all like “this is gonna be an interesting point… oh wait the sentence disappeared into a wormhole! Damnit. People in Delta Quadrant get to read it but I’m left here, hanging.” @ tpoiii: “What’s the best recent milsf that was not nominated for a hugo? ” I’m enjoying Charles E Gannon’s Caine Series right now. The second book, TRIAL BY FIRE is nominated for a Nebula, but not a Hugo. I would call it “old-fashioned” Mil SF with updated ideas. I scrolled down to reply, so I hope I don’t go back into Comments and find that 75 other people already recommended this to you. The first book is FIRE WITH FIRE. Rob in CT: The Elizabeth Moon stuff is absolutely fabulous. Really, if you were looking for something Hugo worthy, The Kings of the North (which is the overall name for the last five novels which effectively are on story) is a truly monumental work. The Islamophobia stuff is hugely overblown and certainly doesn’t appear in her books. She’s actually one of the most committed progressives working in SF, @Isabel: You might like Queen of the World by Ben Hennessy or some of Christopher Nuttall’s work. He’s too prolific for me to list everything but A Life Less Ordinary is a good stand alone and The Bookworm series appears to be nearing completion. He also writes SF but I’m not a fan of the genre so I can’t comment on whether or not it would appeal to @tpoiii. @Isabelcooper and Rob in ct: Chrysoula Tzavelas (full disclosure: friend from college) just published “Citadel in the Sky” that I’m happily tearing through and that might appeal. Interesting and appealing characters who, so far, aren’t whiny 13 year olds. Her other stuff has been good, but less to my tastes. (Recommendations are hard; I assume everyone has already read the things that I’ve read and liked. Time to scroll through the Nook.) * Saladin Ahmed — excellent standard fantasy in a less-familiar-to-western-readers setting. * Danielle Jensen – Stolen Songbird. I want the sequel nownownow, so that’s always a good sign. * (Oh hey; there’s an Isabel Cooper novel.) * Elizabeth Bear’s – Steles of the Sky * I will always recommend Spots the Space Marine. * (Goodness some of these were terrible.) * Andrea K Host * Deborah Coats – more modern/urban fantasy, but good. * Lastly for now: Frank Tuttle As to the main substance of the post, I followed the File770 link and just sorta craned my head in bafflement. I pride myself on being able to understand a wide variety of viewpoints and perspectives, but this…this takes work. Of course, there’s a nice neat trap laid there was well. If only White Males are writing against the Puppy Brigade, then the Pups can claim that they are on the side of angels because “Look at our competition!” If someone else steps up, the well-honed cries of SJW (which – how is this a bad thing again? Really?) and other polished slurs will be brought to bear. I know that I don’t want to engage in that particular conversation; I certainly don’t blame others for refusing to take part. If I wrote a sci-fi story in which all Internet exchanges were monitored by minor AIs who ranked comments on a Truth Scale, I wonder how it would be viewed. (This would be excellent, BTW, Disqus and other commenting systems. Please get on this — at least basic fact checking wouldn’t be impossible to implement.) Not quite “the best milsf not nominated for a Hugo,” but a couple years’ worth of books I reviewed for Locus that fit in the military/space-operatics/large-scale-adventure neighborhood of our genre space. Zero Point, Neal Asher The Kassa Gambit, M. C. Planck Abaddon’s Gate, James S. A. Corey Neptune’s Brood, Charles Stross The Red: First Light, Linda Nagata The Serene Invasion, Eric Brown Evening’s Empires, Paul McAuley Ancillary Justice, Ann Leckie Phoenicia’s Worlds, Ben Jeapes Transcendental, James Gunn On the Steel Breeze, Alastair Reynolds Jupiter War, Neal Asher Proxima, Stephen Baxter Shipstar, Gregory Benford and Larry Niven The Memory of Sky, Robert Reed Cibola Burn: Book Four of The Expanse, James S. A. Corey Dark Lightning, John Varley Ancillary Sword, Ann Leckie War Dogs, Greg Bear Exo, Stephen Gould Ultima, Stephen Baxter Dark Intelligence, Neal Asher Old Venus, George R. R. Martin and Gardner Dozois Of course, I also enjoyed and reviewed books that don’t fit the Big Space Adventure profile, by Brian Aldiss, Eleanor Arnason, John Barnes, Elizabeth Bear, C. J. Cherryh, William Gibson, Stephen Gould, Daryl Gregory, M. John Harrison, Ken MacLeod, Kit Reed, Jack McDevitt, Karl Schroeder. . . . I note that a not-infrequent complaint I hear about Locus reviews is that X or Y writer/category is ignored and that this represents some kind of policy or program on the part of the magazine. The fact is that we follow our noses (with the very occasional nudge from World HQ that a writer or book might be worth a look). Despite my background and training as a pinky-lifting academic/lit teacher, this is some of the stuff that I review because I enjoy it. I suggest that a similar nose-following dynamic has operated in the Hugo selection process over the decades, with the necessary adjustments to accommodate multiple nervous systems. It’s a big tent, and even the most enthusiastic and energetic kid with a pocketful of allowance money (or an all-access free pass) can’t get to all the sideshows or ride all the rides. @Sells in Well, Bach apparently usually writes fantasy, but her foray into SF was superb (and I hope she forays again :). So, I take that to mean a talented writer can probably succeed in multiple genres. I think Rusch also writes in several genres under different names with much success. As I have somewhat narrow tastes, however, these assumptions rely on secondhand reviews. Thanks for the tips. I have to say, the thing that’s bothering me most about this whole mess is not so much the hijacking of the voting process but the comment from Torgerson about Sci Fi. .” Has the guy ever actually read any classic sci fi? Asimov? Heinlein? Norton? Any of them? The greatest strength of the genre is its ability to use new worlds and immense settings to reflect and magnify society’s flaws and make people look at the world around them with a different perspective. It kind of terrifies me that he can say that given the obvious influence he has within his community. Reblogged this on The Blogdom and commented: I’ve been a sucky blogger lately because I’ve spent my time actually writing fiction. Scalzi does better at unpacking the Hugo controversy than I do, so here he is. tpoiii: In my opinion, editor short form is really a referendum on the magazine or anthologies that the editor edits. It’s not necessarily a direct vote on editor but there is something concrete there to evaluate. Long form is a mystery. Who truly knows how an editor helps an author succeed aside from the author, and there are no authors who have been edited by all of the candidates in a given year. I understand why the SF community wants to honor editors, but I’m not convinced that Hugos are the way to do it. Chaos Horizons looked at votes in a number of categories, including an editor category to make some estimates about voter behavior. If you are suggesting that it’s somehow more wrong for a Puppy voter to vote in the editor category than any other Hugo voter, I don’t see why. What do you mean by a regular reader in this context. Are you suggesting that an attending member has a stronger basis for opinion on this subject, than a supporting member? Are you saying that if you attend a panel with an editor on it, they are better able to judge that editor as an editor? I see little reason for believing that to be true. For that matter a supporting member may have seen the editor while attending a local convention. The Expanse novels are SJW????????????????? What the—- Now I’m truly gobsmacked…. @Scott It’s at minimum oblivious. There’s always been lots of complaints about the problems authors often have with choices made by publishers, artists etc. Not every author has the luxury of working with the artist directly to craft a cover that fits their story. Lee Moyer had a great presentation at Arisia in January on the mismatch, misfires, and missed chances of making the cover art fit the story. It’s like Torgersen never talked to an artist working on deadline with a publisher, or another writer, or even looked at a single golden age cover closely. I mean, come on, has he looked at the ones where there’s an almost naked green lady in a clench with a spacesuited man on the cover of a novel where there is not a hint of sex (or even an alien woman) in the story? Yes, covers are important signifiers, especially now that e-books can be a significant part of an author’s sales. They need to be memorable but capable of visual interpretation at multiple sizes. But there’s never not been mismatches. Makes me want to say to him, “you know nothing Barry Torgersen.” And he certainly hasn’t talked to any novelist recently where the cover is basically typography and color combos. It may be cheaper, and can be striking, but it doesn’t translate to attracting someone to “pick it up and read the back” especially when it’s viewed on a black and white screen. Isabel Cooper and Rob in CT: Elizabeth Moon wrote a blog entry about the attempt to build an islamic center near the site of 9/11; it was perceived as Islamophobic by some people, and as a result her invitation to be GoH at Wiscon (a profoundly feminist and seriously leftist convention that has never claimed to represent all SFF readers or fans) was withdrawn. Whatever I think are the “rights” of the case–I support Wiscon and I continue to buy and read Moon–I do think that dubbing her an “Islamophobic” writer (or even person, really) on the basis of one blog post inspired by a very specific event (and one that stirred a LOT of emotion, on both sides, even among people who Really Should Have Been More Logical About This) is a bit broad, I suspect. YMMV, of course. I haven’t seen them mentioned (and they are older, as well), but I liked Stephen Donaldson’s Gap series as space opera. @Scott: I don’t think I even got to the book covers part of that post. The breakfast cereal analogy in the beginning was too much for me. I mean, I did know people who would eat Captain Crunch (or Nutty Nuggets, if you will) for three meals a day. Most of the rest of us move on. I can still have Captain Crunch for an occasional treat, but I like steak, I like fish, I like Ma Po Tofu, I like chicken vindaloo, and I like a lot of other things. I’m not going to limit myself to breakfast cereal. “And it isn’t about a ‘conspiracy’ it’s about privilege. You once recognized that you have privilege, John, but it isn’t because you’re a ‘straight, white male’; it’s because you are almost completely uncritical and wholly dogmatic to the leftist religion (and it is a religion since it’s a highly-irrational and reality-denying belief system). Thus you have ‘leftist privilege’.” So, people who are wrong have the privilege of being…wrong? And just how does their “privilege” benefit them? What do they get out of it, not in the privacy of their own heads but in terms of the Wide Wide World? Please describe, and please be clear. (Man, this ought to be interesting.) @Mike I just assume that the con attenders are more in tune with the behind the scenes folks (editors and others). So, I would guess that they would feel more comfortable nominating/voting in such categories. If I tried to nom/vote, it would be very random (so, I wouldn’t) Puppies, however, or slate voters in general, might just copy the slate choices, even in categories they wouldn’t have voted in otherwise. So, that’s why I think it would make their ballots stand out — they would actually have a full set of nominations in the editor categories. @Russell Letson Thanks for that great list, and for the pointer to locus reviews. How did I miss that? Looks like a great resource. So one things the Puppies and the SJWs can agree on is that you can’t judge a book by the sexist and misleading cover? @ Scott I think you are overestimating the influence of Torgersen. His community is small. Its girth and length, er, breadth, are tiny. He might be able to affect hundreds of opinions on a great day. Considering that we have millions of sci-fi fans spread across the globe, his pull is just a slight tug on a sea of open-minded people. So it should not surprise us that a man with so little influence says many ignorant things, especially about the business of sci-fi. Covers have never been a good way to judge a book. He might TRY talking to a diversity of people to find out what different books are about, rather than relying on a cover chosen by marketers. But that would mean that a wide range of people actually wanted to converse with him. But why would you? Inevitably, he will just bloviate about how the world fails him. @Scott — Yeah, that really bothered me too, not least because it almost seems like he’s saying that those books (story merely about racial prejudice and exploitation, with interplanetary or interstellar trappings, about the evils of capitalism and the despotism of the wealthy, etc.) shouldn’t EXIST. Or should not have spaceships and dragons on the cover, at the very least. (Has he never heard of back cover copy?) @Scott What makes Torgerson’s comment particularly nutty to me is that the so-called New Wave of science fiction, which produced a whole bunch of novels and stories that look nothing like what he’s describing, started in the mid-60s. He was born in the 70s. IOW, he’s complaining about a state of affairs that has been around as long as he’s been alive. Elizabeth Moon wrote a blog entry about the attempt to build an islamic center near the site of 9/11; Oh, Christ, the “Ground Zero Mosque” idiocy? Blech. Assuming she was in the (IMHO idiotic and driven by some ugly things) “prevent them from building it” camp, that doesn’t mean I won’t read her stories. There are people who were anti-GZM who are despicable human beings (Pam Geller), but there were some people who just weren’t thinking straight due to 9/11 derangement syndrome. And yes, the Torgenson bit you all are discussing is really quite revealing. Yearning for a mythical past? Check. Whining about being the underdog because your preferences don’t dominate? Check. It’s boilerplate reactionary nonsense. Of course the Puppies are diverse: they contain different kinds of white racist asshole. Rob in CT: It was a little more complex than “Don’t build the mosque!”–I’d forgotten the details. I checked, and most of the post was about immigration and cultural assimilation. However, I think the most serious complaint she had about the “GZM” was that building it was rude and the builders should have realized that they were going to provoke a negative, angry response. I can’t set up a link easily from where I am, but if you google Elizabeth Moon “Citizenship” you can get the whole post and decide for yourself. The whole Moon-Wiscon fiasco is one of the events that some SFF conservatives (including some Pupplies) point to as evidence that fandom as a whole is biased against conservatives. The irony of that in my opinion is that Moon isn’t all that conservative, frankly (if at all; it depends on your viewpoint, I suppose), and Wiscon, much as I have enjoyed attending it over the years, isn’t exactly in the mainstream of fandom and never has been. It’s billed itself as “SFF’s Only Feminist Convention” for–35 years, now? Something like that. Attending cons may well tell you if you like an editor personally, or at least if you like their public persona. I know there are a couple who I try not to miss when I see their names in a program book. I don’t think it means that you know how to evaluate them as editors. Possibly, but one would also expect to see a correlation between Puppy story nominations and Puppy editor nominations. It’s a bit trickier than that because some stories and editors were only one one Puppy list. Again, I suggest you check out Chaos Horizon. There are three articles there in the last few days about this subject. chaoshorizon.wordpress.com So I was following File770 links (now that I know about this site, my productivity takes another hit…) and reading various perspectives on the Hugos controversy and look who I found on Tom Knighton’s blog, collecting his high 5s as he returns to his team’s dugout: thephantom182 April 23, 2015 at 4:01 pm I’ve been hitting SJWs with a cluebat over at Scalzi’s bog. I mean blog. Dude, did you know that Hitler was not a socialist? That’s where we’re at right now. I thing I’m going to go and wash. BTW what is a “CHORF”? It reminds me of one of those names the cool kids would make up in grade school, call you by it, and then giggle among themselves because you didn’t know what they meant. Brad Torgersen, the guy who has been talking for days about how forming tribes is super double bad, has been trying to popularize CHORF for weeks as an insult for people outside his tribe. It’s an acronym for Cliquish, Holier-than-thou, Obnoxious, Reactionary, Fanatics. His belief it applies to other people, but not his crowd, makes him an IDCHORF: Irony Deficient, Cliquish, Holier-than-thou, Obnoxious, Reactionary, Fanatics This post’s comments contain an ongoing theme that highlights what bothers me about the Puppy debacle, i.e. books. Here are lists and lists of books, recommendations, reviews, conversation about this genre that we love. Thousands of books come out every year, far too many for one person to read, even full time, even in just one genre. Hell, I spend 18 months reviewing books for a living and I was overwhelmed, barely sleeping, eating all of my meals hunched over review copies. I firmly believe that period of time is why I had to get a new eyeglass prescription! The Puppies could have put out slates that contained great books. Books that may have been honestly overlooked and deserved attention, as well as stories, media, et cetera. Instead, it’s mostly a crap-fest dominated by a vanity press. Either Puppies have terrible, awful taste combined with 3rd grade level reading comprehension, or their entire campaign truly is a pissing match. I favor the latter explanation, although I’m willing to reconsider the former, given additional evidence. And it upsets me, as it appears to upset many fans. I would have been overjoyed to discover new authors, to read about different worldviews. I am, in fact, always thrilled by new ideas, or old ideas redone is fresh and exciting ways. The next Honor Harrington or an alternate history as genuinely disturbing as Stirling’s Draka series? Bring it on! I’ll be there with rings on my fingers and bells on my toes! That is the very reason I enjoyed Leckie’s novels so much. I love space opera, MilSF, bad physics, “buckle and swash” as another blogger put it. I have entire shelves dedicated to a single series or author. I’ve read about every possible variation on elves and dwarves and wee beasties, and enjoyed most all of it. Reading is my primary hobby, books my passion–stories were the only consistency in my peripatetic youth. I’ve been a devoted bookworm since I was four, perhaps earlier, depending upon which parent you ask. The Puppy campaign, purportedly to “help” the folks who create the works I love has only harmed them. It has harmed writers, harmed the reputation of the genre, harmed the Puppy participants, as well as the fans and readers who love stories and/or the genre, and want to support and honor the creators. It’s not deep lasting harm for most, probably and I hope, but it is unnecessary, and quite frankly, *stupid.* The campaign has undermined the types of works the Puppies claimed to want to see more of, by putting up craptastic examples, or works that don’t even contain the desired themes and tropes. I am annoyed at the sheer idiocy of the execution, I am annoyed at the waste of it, I am annoyed at everyone who blindly nominated the slate, the people who created it, and everyone who didn’t say up front, “Wow! That shit sucks! Can’t we do better?” There has to be at least one Puppy who thought that as much as they liked the Dresden files, there were other, better books out there, right? Right? This Worldcon is going to be just over the hills from me, and I seriously considered attending this year, before the Puppy thing blew up. Now, I won’t, because I’m afraid that if I ran into Misters Torgerson, Correia, or Beale I might actually kick them in the junk, and I’m a freaking pacifist. (A pacifist with a black belt and 35 years of karate practice, but still a pacifist.) Grr. Argh. Todd Stull writes: Who is “we”? If you are speaking of humanity having millions of SF fans, then you are correct. If you mean the usual Worldcon voters, fen, represent millions of fans, then I think you are mistaken. I don’t put a lot of weight on the “but covers are HARD now!” quotes. I assumed that was just a simplistic way of complaining about the rise of messages in books and the lack of pure mindless shoot-em-up-with-spaceships space opera. I’ll give him the benefit of the doubt that he knows how covers and cover selection actually work. That said, it’s still a dumb sentiment. Sad/Rabid canids may very well regret what they have wrought…from Mercedes Lackey via Chris Meadows: “I cannot WAIT until someone lets the Romance Writers know about this, and how to get a book on the Hugo ballot. Romance readers outnumber SF readers by about 100 to one, and a very high percentage of them would be gleeful to only pay $40 to get one of their beloved writers an award. Romance writers are extremely savvy women about energizing their fan bases. They were using social media for that long before SF writers started. I want to see their faces when Diane Gabaldon takes the Hugo in 2016.” () Be afraid. Mintwich: It’s very unlikely you’ll run into Day/Beale; as I understand it if he comes back the IRS will have some questions for him. I would be surprised if either Torgersen or Correia shows up. In any event if they do they’re easily avoided; it’s a large convention. You should go. Rogers: And that cements that Torgersen doesn’t know what ‘reactionary’ means. It’s someone trying to return society to a (real or perceived) prior state which they think was preferred to its current state. Pining for some lost Golden Age is exactly in the lines of the definition. Now, I suppose you could say that liberal SF fans want the Hugos to go back to a way when we all operate under a gentlebeing’s agreement to minimize politicking, but I don’t think that’s what Torgersen means. And is counter to the narrative that SJWs have corrupted the Hugos and that the Puppies are trying to restore it as a true reflection of fandom. In which case, returning the Hugos to its perceived roots could be a reactionary* thing. * I agree it’s not generally meant as a complement. But it’s one of those situations where it’s more confusing than insulting to call a progressive a reactionary. @Rogers: Thanks for the explanation. As I suspected, very high school, or maybe middle school. “Stop trying to make CHORF happen, Brad. It’s not going to happen!” I’ve started taking notice of the editors thanked in books I read during Hugo nominating/voting season and am trying to pay more attention year round. I read 150-350 books/year (multiple genres not just SF/F) recording 100-200 on Goodreads. Being bedridden does have an upside. When I saw the nominees this year I noticed 3 of the editors show up regularly in a number of author acknowledgements. I know what my general complaints are with those authors books & which things I personally believe should be caught by an editor. I’m probably basing my vote on the editors over a longer time period than the past year as I haven’t noticed the books they edit being provided in the voter packets and I only have so much time/money/brain power from ballot to voting closes. This is much easier done with traditionally published books than self-published so I expect this to get harder over the coming years as more and more of my reading is indie. Colleen, Is that what P182 thinks it’s doing? Huh… replygifdotnet/i/166.gif A couple of disjoint observations… Shorter Torgersen: I can’t judge a book by its cover any more, and this is BAD. In the immortal words of Bugs Bunny, “Wotta maroon!” You’d think that wannabee-SMOFs who are orchestrating a Hugo-packing slate under the ostensible banner of we want MORE DIVERSITY!!11!1! and MORE RECOGNITION for the unrecognied1!!1!! would either (a) not give 6 (six) of their slate’s slots to 1 (one) person, or else (b) recognize that they’ve contradicted their stated goal. I mean, geez, could they have made it any more obvious that they don’t actually give a shit about this ‘spotlight on the unjustly overlooked’ schtick? [shakes head] Anyone who’s interested in the question of how the Hugo rules should be altered in response to this mess, would be well advised to check out Discussing Specific Changes to the Hugo Nomination Election: Another Guest Post By Bruce Schneier over at Making Light. Assuming you haven’t already done so. Rogers Cadenhead@3:27: Exactly. When I first saw that my reaction was “Does Torgersen genuinely not realize he’s describing himself?” Apparently, he also has no idea what “reactionary” means. @Tasha Maybe, maybe not. I read a lot of self-pubbed/indie pubbed Kindle stuff and one of the biggest turnoffs is crappy editing and proofreading. I’m at the point where if the first few pages are rife with glaring errors and clunky constructions, I shut the book down and delete it from my library. From the reviews, I’m not the only one. I think the freelance editor/copy editor is going to become more and more crucial to successful self- and indie- publishing, and I think like good agents and good cover artists, the good ones will gain notice and reputations. And I think for those a Hugo nomination or award would be even more important than someone at a major house, because they will have to market themselves and drum up their own business, just as the authors they edit do. I know if I ever write/publish something, I will almost certainly hire an editor to help me get it into the best possible shape, and I will thank them profusely, in the acknowledgements and elsewhere. I for one am actually incensed at the notion that neither Brad nor Larry plan to so much as show their faces at Sasquan. Cowardly behavior from the Manly Men of SFF™. I don’t know, nor do I care, what Brad’s excuse is. But I’m not really surprised by Larry “I only wrote an effusively positive recap of Worldcon in 2011 because I was terrified of PNH, so either I’m an opportunistic liar now, or was a craven liar AND coward then” Correia. In general, I try to evaluate editors based on what their authors are saying. Authors will often talk about editors on blogs or even in the acknowledgment section of a book. Talking to authors at cons can also give you all sorts of information on how a particular editor was helpful (or not). This info has to be compared against what you thought about the books you read that the editor edited during that year. If the author says that the editor was really useful, but you thought the book didn’t show it, then that is a piece of information. Regarding probable non-attendance of Puppy Ringleaders: I find it highly amusing that these so-called Tough Guys apparently don’t have the guts to show up at the Con and take credit for their work in improving the Hugos. Although, it’s probably for the best. Huh, I guess “craven coward” would be redundant. Meh… @Colleen I find plenty of basic errors in trad ebooks and have in print books for a number of years. I have a number of ways of choosing indie books: 1. If I’m getting it for free (using freebie/discount newsletters & Amazon bestseller lists) and never heard of the writer before I don’t expect much but I’ve found a number of real gems if they have a number of stuff published – good covers & well written blurbs are good ways to weed out the non-pro from the pro. 2. Trad published either doing some indie publishing/hybrid or who was a mid-list author and now is indie – lots of really good stuff in this category 3. Friends recommendations – word-of-mouth – the same way I find trad published authors & it has the same good/failure rate as trad recommendations but generally cost me less time & money to check them out Let’s not hijacker the thread too much. @tpoii: when was the last time you checked in with David Drake? Besides OGH, his was the last military SF, I read, namely a bunch of his Royal Cinnabar Navy books, which were ripping good yarns, though your tolerance for them may well be tempered by the fact that he deliberately crafted them to be pastiches of the Aubrey/Maturin novels, IN SPACE! And if you want golden age jungle planet Venus, check out his Seas of Venus (1st edition is a free ebook, direct from Baen). And your comments have helped to reinforce the point that I made in a previous thread, namely, that MilSF, and Rocketpack and Raygun SF are both sufficiently popular, that an award for those specific genres, ala the LFS’ Promethean award, would probably be quite welcome. And the Super Sekrit SJW Kabal would probably let them award at Worldcon, too, even without forcing them to give the award to notable gamma bunnies like Stross and Doctorow, like they did to the LFS (What? You all know it’s true!). But no, it was never about that for the Special Little Snowflakes. @scorpius: Please leave us libertarians out of it. We’ve got our own award, given out at Worldcons since 1982. Libertarians, generally speaking (as much as one can generalize about libertarians), recognize that we live in a pluralistic society, we have elections to make social decisions, and people are free to vote as they wish for whatever reasons they wish. The puppy slates are all about the Special Little Snowflakes. Of course I would agree that Hugo award participants are biased against MilSF and Rayguns and Rocketpacks, but a bias does not a conspiracy make. We are, after all, speaking of genre fiction fen, who already like to discover, read, think about, then talk and argue about New Stuff. And this particular population of fen is especially engaged in discovering New Stuff by virtue of the fact they not only go to cons, but they participate in Worldcon, and in the awards ceremony, therein. Of course they’re going to give a greater to weight to authors whose works bring something newer to the table than the hoary old space operas and planetary romances of some 7 decades past, and the MilSF of some 2-3 decades past. Since then we’ve had the New Wave, Cyberpunk, Transhuman/Post Singularity, and now, 20 minutes into the future technothriller, which we’ve always had, but never has 20 minutes seemed as close as it does now. bkd69…you sound like the libertarians I knew and admired (if not always agreed with). I’ve been baffled in recent years by libertarians who walk and talk like degenerate right wing demagogues…… 1) John, glad to hear you are still friends with Instapundit- a puppy somewhere was claiming my that Glenn had Found Out About You and would no longer Be Your Dupe. 2) Speaking of misleading covers, someone ought to show Brad an old book called Starship Troopers – space armor blowing up aliens on the cover, extended meditation on the meaning of service and self-sacrifice inside. He would hate it! A perfect example of his hypothesis. Have LC or BT commented on attendance at Worldcon one way or the other? LC not attending makes sense unless he wants to participate in the business meeting as he hasn’t enjoyed Worldcon… Well or whatever. Since the SP campaign is about what gets a Hugo not about the con itself I’m not sure why his attendance should be expected. BT I haven’t followed as closely but I thought he he said he is going on active duty shortly. Not sure if that conflicts with the con or not. Since he has had some fun in the past and has barflies/Baen to hang out with and has been the leader this year I’d have higher expectations of him attending except “I was just away from my family” if he isn’t still on duty would be seen as legit to followers. Again they’ve been stomping on the voting & what wins and “con wasn’t fun/people were mean”. So I think they’ve positioned themselves such that unless followers think “you should defend against voting changes” they don’t need to attend. I’ll leave it to you to decide if SP/RP care enough to show up at con demanding their leader(s) be their to protect slate voting… @tpoiii – sorry if this is a tangent (apology extended to John Scalzi as well) but earlier you mentioned you like “Retrieval Artist” because it is “space mystery.” I love this term! Do you have any other recommendations that fit it? I read “Leviathan Wakes” because a friend described it as, “space-noir.” (Now that I think of it, if you attach -noir to any genre, I’m in.) And to post the first comment I wanted to make, after OGH’s post, I wonder just how much of the vitriol directed at OGH, et al, is due to some perceived identity treason. After all, isn’t apostasy the greatest of sins? @Steve Halter – yep those acknowledgements thanking their editors – how much they thank the editor – does it go beyond perfunctory & combine that with what I consider editorial flaws helps in ranking/voting editors. I don’t know if I follow enough authors closely to get a good feel for editors based on online comments. I’m hoping next year I’ll have more confidence in nominating. For editor of shorts I should as I buy/back a number of anthologies and also subscribe to several zines. Rob in CT She wasn’t even in ‘the prevent them from building it’ camp. Her point was that nobody should be surprised the some people were upset about it. So, was trying to explain this SF class war nonsense to someone, and I mentioned Baen and this little nugget, which was more an example of class-war from the right, rather than from the SJW’s as the accusation usually goes. And the question was what are the other publishers like, and I simply don’t know the field. Is there webpage anywhere that lists some of the major SF/F publishers and any political statements (on purpose or missteps) they might make, such as the above thing from Baen? Or tries to look at the kinds of books they publish? Number of MILSF books versus number of allegedly SJW books? Even basics like “style” of SF/F books? Any kind of measure? It’s become clear that the appeal of an accusation such as the existence of an “SJW cabal” is that it’s fricken impossible to disprove, cause where would you even find numbers? The existence of Baen making statements such as above would seem to say, at the very least, if publishers have any political leanings, its actually AGAINST sjw type works. @bkd69: Of course I would agree that Hugo award participants are biased against MilSF and Rayguns and Rocketpacks, but a bias does not a conspiracy make. Right. Consider the case of the Academy Awards. The Best Picture nominees are almost invariably English language films. Is this a conspiracy? Nope. Just a simple reflection of the fact that most of the voters are from the US and most of the movies that are released in the US are English language. The Hugo Awards also seem to go predominantly to works that would be considered science fiction rather than fantasy (the distinction isn’t as clear as some folks like to make it out to be, but there are still a lot of books that fall firmly on one side or another). Is this another conspiracy? thomasmhewlitt Lois McMaster Bujold (yes, her again!) writes some good Space Mysteries: Ceteganda, Ethan of Athos, Komarr, Diplomatic Immunity, Cryoburn. Someone above mentioned Cryoburn and Captain Vorpatril’s Alliance as Space Opera/MilFic. I don’t think so. Cryoburn is a mystery more than anything, IMO, and certainly not spacy or operatic. Or military. Captain Vorpatril’s Alliance is a caper book. Among other things. (As is Ethan of Athos, among other things). Bujold’s books are rich and deep and hardly ever only one thing. The Vorkosigan *series* can fairly be described as Space Opera and MilFic. But those individual books, no. Sorry – didn’t actually manage to read all the comments but got some great recommendations for SF (not fantasy) which I will wander off and enjoy shortly. I also found Cheryl’s review of the saga an interesting and upbeat perspective () Perhaps we can bring it to pass – get out there, get reading and VOTE the Hitler as a socialist thing is funny in a sad kind of way – can’t they even look it up in Wikipedia Or is that leftist too? If it means anything, I purchased a supporting Worldcon membership, just so I can vote for this year’s Hugos, my first time ever. You may decide for yourself whether that’s a good thing. I’d actually already read most of the novels…one more now that The Three Body Problem is in the running. I have one more novel sitting at home to read. I’m sort of wondering, given the importance of series fiction in SF/F, why there isn’t a separate award for series, as opposed to stand alone novels. Seems like they are different beasts for both writers and readers, why not recognize them as such? @thomasmhewlett Space mystery: well, Asimov introduced me to it. Lije Baley stories (with Robots!). Niven’s The Patchwork Girl. The Expanse series is hard boiled space mystery. Sundiver by Brin is sort of a mystery. Hammond’s KOP series — I have only read the first one, but it was good. Set on a colony planet. Also hard boiled. @tpoiii, @ultragotha: These look awesome – thanks! @eve: it’s more of a game of “pin the genocidal dictator on your enemy.” The Nazis fought against unions, eradicated any left-wingers they did have (see, for instance, the Night of the Long Knives), were very comfortable with the German aristocracy and were bitter enemies of the Communists. But they have “socialism” in the name so they’re left-wingers! I had not connected the isabelcooper whose posts I enjoy here with the Isabel Cooper whose books I have read. D’oh! I will re-read with pleasure, and hope that our pal ‘Thu shows up if I read it sideways. MAHNA MAHNA. Space mystery: there’s a collection of Niven’s short stories entitled “The Long ARM of The Law”, starring Gil “The Arm” Hamilton. I think “The Patchwork Girl” is sold separately, being a novella. Absolutely fair-play mysteries. The aforementioned James S.A. Corey hivemind, and Rusch’s “Retrieval Artist” novels, of which the dozenth and last will be published in June. Asimov also did a fair number of them, also collected. Thanks to all suggesting fantasy that isn’t grimdark or 13 year old wank. Colleen: I would not be averse to Gabaldon on the ballot: after all, she’s got her own TV show now like GRRM! But if we’re going on more SF grounds, she’ll have big competition from the latest “In Death” book by La Nora aka J.D. Robb. Flying cars and outer space colonies in those, computer hacking, and plenty o’gadgets. A tough as nails cop who can’t play by the rules. Just the sort of thing the pups like, right? ;) I was also (as someone way up there posited) thinking that OGH, GRRM, and PNH might have grown up lower-class. I do not know Patrick’s background, though John and George have often written of their extremely modest beginnings. John, yes, that was it! That’s perfect. Thank you. You’ve had a lot of posts between so if the above doesn’t ring a bell, it was in response to this post, when I requested a link to a previous post you had made. John Scalzi write: Catherine Asaro: Is it this one? Greg @ at 9:56 am – Sorry to be late to the movie viewing party, but I think I’ve got you beat: Some Kind Of Wonderful (1987), written by John Hughes and directed by Howard Deutch. I posted a thorough and, quite frankly, profoundly deep analysis of its applicability at the Metafilter Hugo discussion thread (here:) under the name ‘my dog is named clem’ at 5:14 PM on April 10. I suspect it would be bad form to re-post it here, so will allow you to read it at your leisure. Anyway, that post clearly demonstrates that it is ALL about class, and that Larry Correia is actually Eric Stoltz, and that making friends in detention with all those punky delinquents helped him with the nasty snobby rich kids personified by Our Generous Host (sorry, John). My only uncertainty is who Lea Thompson is in this scenario. I dare you to not agree. pixlaw: There is no Lea Thompson. Teddy has decreed that Teh Wimminz aren’t allowed to speak. “..the Hitler as a socialist thing is funny in a sad kind of way – can’t they even look it up in Wikipedia Or is that leftist too?” I’m pretty sure doing research to find out the facts are before you decide what you think is Leftist, yeah. (I’m kidding, I’m kidding!) (Sort of.) (Almost.) (Maybe.) I recommend John C Hemry’s Blackjack stuff for people looking for mil sf stuff. His early work has the aspect of a former military member who is branching out, but his work develops and his writing become pretty good over time. I do have a bias as he is local to me and my wife is one of his characters in the later books in the series… And in response to a separate question: SF mysteries. Check out Walter Jon Williams’ last 3 novels, which are near-future noir mysteries/thrillers which feature as their main protagonist Dagmar Shaw, a woman LARP game designer. This Is Not a Game, Deep State and The Fourth Wall, which somehow manage to be both really funny and really scary at different points. They’re just great. I admire his work highly. MilSf I have known and loved: Scott Westerfeld – The Risen Empire and The Killing Of Worlds CJ Cherryh has been mentioned – start with the Pride of Chanur series (?) or The Faded Sun: Kesrish, or Hellburner Walter John Williams – The Praxis / Dread Empire’s fall John Steakley – Armor Daniel Keys Moran – The Long Run Lois McMaster Bujold’s Vorkosigan books are completely wonderful though they may eat your life for a week or two. .. and a whole ‘nother shelf or two. Also, Gordon Dickson’s “Dorsai” books are good MilSF. For Cherryh, also Rimrunners. And Downbelow Station is still one of my absolute favorites. For mystery/noir SF, there’s also Richard Morgan’s Takeshi Kovacs books, starting with Altered Carbon. And there’s a major MilSF element in the sequels. I know the latter was on the Sad Puppy slate, but I liked Charles Gannon’s Fire with Fire and Trial By Fire, First contact, multi-species alien accords and misunderstandings. I believe this will be at least a trilogy, though. I really want to know this It’s not obvious stuff, like country or state of residence or religion. Do you three still have hairs on your heads compared to the shaven puppies? Closing up comments for the night. See you in the morning! Update: Comments back on. Assuming Socialist is not the popular but unhelpful definition in American political discussions as “Government policy I do not approve of”, there’s something to the Hitler as socialist. He certainly favoured centralised state control, and partnership between the state and certain industries and large corporations; the building of infrastructure such as the autobahn; universal healthcare; the state (and especially the military) as the largest employer in the country. All of this was aimed at creating the Nazi war machine. Of course, if this is socialism, then frankly so were the Interstate Highway System championed by Eisenhower and the military build up under that arch-Marxist Reagan Lurkertype, I brought up in a File 770 comment that if popularity was the criteria for a Hugo, Nora Roberts’ “In Death” series could be a shoe-in. Come to think of it, between the two “In Death” books per year, and Roberts’ paranormal romances, the Novel category nominees could theoretically be all-Roberts, all the time. (I’ve had to stop reading the comments at File 770, though. Too much there that makes me feel sick and angry.) >> My only uncertainty is who Lea Thompson is in this scenario.>> Lea Thompson as Amanda Jones, the object of desire in the wrong hands, is obviously the Hugos. This of course makes Hardy Jenns the SJWs, which doesn’t seem to fit at all, but never mind. It does lead directly to the question of who Correia’s longtime buddy who helps him out and who he, in the end, stops obsessing about Amanda to find true love with, is. The plot structure so far says that’s Brad Torgersen, but perhaps there are other choices. Since I am taking a short break whilst celebrating St Mark’s day in Venice it occurred to me that part of Beale’s venom springs from his lack of status here in Italy. There are lots of rich people around, and being from the U.S. is not a plus point. There are lots of Christians, but most certainly not his type of Christian. His contempt for women doesn’t get him any points either since this is a matriarchal society, and thus he is likely viewed with contempt since he fails to recognise that fact. There’s certainly racism in Italy, but it is very different to the U.S. variety, and his desire to be viewed as superior has already been scuttled by the points noted above. Of course he has to buy supporters, either with money or the chance to break something; he can’t get them any other way… He certainly favoured centralised state control, and partnership between the state and certain industries and large corporations; So did Augustus Caesar. brucearthurs, I’m with you about File770 at this point. The roundups have been nice, but nothing much but additional text characters is getting added to the discussion at this point. Non-Puppies have all made just about every case that can be made, and the Puppies just continue to repeat something along the lines of “Look, I realized we fire bombed the community, and called all of you basically sub-human animals bent on destroying the genre you spend so much time and energy promoting, and that we have no collective sense of irony or self awareness, but why are you being so mean??” As for the comments, Glyer doesn’t put much visible constraint on it, so at this point (and perhaps it was always thus) it’s pretty much the same 10 or 20 guys being assholes to each other. Even Kevin Standlee is getting testy over there. Oooh! I read This is Not a Game last year. Didn’t know there were more. Cool! Since I am now an official Hugo voter, I decided to get started on staying caught up so I have nominees to throw in the ring on the novella/novelette/short story categories. To that end I am now the proud owner of subscriptions to F&SF, Asimovs and Analog. That should be a good start. I cracked open the June issue of Asimovs last night and hit gold immediately. “The End of the War” by Django Wexler. Novella or novelette–anyway, long. Really good, and the end made things a bit dusty in the room. I commend it to the attention of other 2016 nominators. And hey, it’s mil SF! There’s a time travel story in this, I think. Fascists invent/get hold of a time machine. They endlessly debate events they could change. Each time the supercomputer stops them, saying it would irrevocably alter the timeline, eliminating the time traveling fascists before they started and thus leading to paradox. OK, say the fascists, if you are so smart, Mr. Supercomputer, what can we do with this thing that won’t result in paradox/ SUPERCOMPUTER: Take back the Hugos! @kurtbusiek In the M.Night Shymalan version, he ends up realizing his true love for…John Scalzi. TWIST ENDING! “SUPERCOMPUTER: Take back the Hugos!” That’s no good. It’ll leave the fascists in place but eliminate the supercomputer. John wrote: .” This reminds me of an incident a friend of mine experienced while on a signing tour. Some troll who often harassed him online, to the extent that the harasser seemed like he might be potentially dangerous (or at least disruptive and troublesome) in person attended a signing of the author. Showed up, introduced himself, placed a book in front of the writer to be signed, then left. There was absolutely no incident, no problem, no tension, nothing. Later online, the troll claimed he CONFRONTED my friend at that event, intimidated him, threw down with him, debated him, schooled him, showed him, etc. All this talk of Some Kind of Wonderful has me listening to Lick the Tin’s cover of “Can’t Help Falling in Love” which was played over the ending credits. Their only album was called “Blind Man on a Flying Horse” which pretty much sums up this whole mess. Catch10110 mentioned Stephen Donaldson’s Gap cycle. I enjoyed them, but if you can think of something that requires a trigger warning, consider it given. It is all about horrible people doing horrible things to each other and everyone else in range. If you can get past that, they are really good. I’ll also second Elizabeth Moon. Laura Resnick: Yup. I’ve met my share of Internet tough guys in the real world. It’s always interesting to see the disparity. John and Laura, Are you saying the “conditional” part of Tom Kratman’s “conditional warnings” are that “Tom Kratman isn’t likely to physically accost you, so there’s really nothing to worry about”? Color me shocked. As regards MilSF, let me second John Steakley’s “Armor”, mentioned above. What you THINK you’re getting is a story about badasses in powered armor killing bugs (which seems kinda familiar somehow…). What you actually get is a story about what trauma does to people in combat. Here’s something I’ve noticed about the ‘conservative’, ‘patriotic’ rightwing folk: 1) They want government out of our lives … except in our bedrooms. THERE it’s okay for the government to tell you who and how to love, and whether or not to have kids. As long as THEY are in charge of the government. 2) Because government is totally incompetent and can’t do anything right … except when it comes to surveillance and killing people, when it can’t do anything wrong. Because FREEDOM! Contradictions, much? Also note the quotation marks around ‘conservative’ and ‘patriotic’, because they are neither, by any sane definition used in the real world. @brucearthurs: Nora for Hugo it is! I mean, the “In Death” books are near-future police-procedural, (slightly) post-apocalyptic rebuild, with a seriously alpha male character and a strong female lead. There’s off-world habitats and prisons, computer hacking, clones, flying cars, VR, basically the same class/wealth distribution as today. Let’s put two “In Death” books on the slate next year! (This means I’ve got to read the two published in 2015. Twist my arm.) (I do recommend them to people who ordinarily wouldn’t like That Sort of Thing. If you liked “A Civil Campaign”, give one a shot. There are 40 of ’em. They do have continuing character development, but you can jump in anywhere. Scads of them at your local used book store or garage sale.) @Stevie: Indeed. It must be lonely being a Christian Dominionist male chauvinist pig in a country where everyone’s a cradle Catholic and goes to mamma’s house every Sunday for dinner. But they don’t care about sexual harassment, so Teddy probably digs that part. I just flat out fundamentally don’t understand why SF — the literature/genre of the future — “should” only concern itself with the problems of SWM who need to bring Jesus to the aliens. We got beyond that before any of the Yappers were born, thanks to the New Wave, Le Guin, Delaney, and the rest of ’em. Even with St. Bob of Heinlein! We’re sort of getting beyond some of that IRL. So why drag it back to an imaginary past? Insecurity and a lack of self-esteem and a need for external validation, is what I think. None of which are very manly. (I leave the Freudian interpretation of the award itself and its corollary to sports cars, monster trucks and SUVs that never go off-road to the reader.) Techgrrl72, et al: Let’s not wander into a general political discussion, please. @ Docrocketscience: As it happens, I was pouring through several books of proverbs last night in search of one that would fit a story I’m working on. And by interesting coincidence, there are an extraordinary number of proverbs from numerous cultures that say things like, “The lions that roars is not the one that’s a threat” and “the wolf that barks is not the one that attacks” and so and so forth, indicating that noisy bloviating is a very different (and usually completely unrelated) thing to decisive action. If we cannot draw a direct line from worldwide wisdom to the online posturing of a particular Hugo nominee, we probably don’t deserve crayons. @lurkertype It is the “seriously alpha male character” (who comes with all the classic lashings of romance lead tropes (fabulously wealthy, ruthless, but always in a good cause, only has eyes for the female lead) which probably keeps a bunch of readers away. I’m actually a fan of Nora Roberts in her romance writing, but the leads in the In Death series have always felt too much like standard romance novel paint by numbers leads for me to really enjoy them. @mickyfinn: I was saying Roarke’s the sort of chap the Yappers think should be the protagonist of every story. Self-made two-fisted SWM. The supporting characters in the “In Death” series are where they really shine, and are who I’m reading for nowadays much of the time. I <3 Peabody and McNab. Frankly, the depth of characterization in those books still outdoes much of the stuff the Yippers are fond of. Is Gabaldon publishing anything in 2015? A compendium, a Lord John (heh) story, or more of Jamie and Claire? I am not averse to a Gabaldon/Robb/Roberts ticket. I happened to go to OGH’s posting of March 2, “Standard Responses to Online Stupidity” and find the Barkers are much of them. #9, 10, and 11 MOST particularly. I invite everyone to revisit this bit of wisdom, and apply it if, say, you’re still wading through File 770. That really is a succinct way of putting it, John; it’s almost like you’re a professional writer or something! Another book recommendation for anyone, this time for YA Fantasy. If you like dragon fantasies, try Rachel Hartman’s Seraphina. I don’t want to give the story away, but the dragon and human societies are well done. The sequel, Shadow Scale was published this year. —— @Colleen I meant to post this in reply to you yesterday. I looked at the nominating rules again. You and Mercedes Lackey mentioned the Romance writers and readers, but the word count for novels made me think of the Young Adult (even Juvenile) publishing categories. There is a lot of SFF published in these age categories now. Unless I missed a minimum voting age, or a minimum reading ability, then couldn’t the tweens vote for the Hugos too? Think of VD’s horror. All those teenagers, especially, with social media accounts, disposable parental income and no jobs. The Hunger Games and Twilight may no longer be eligible, but who reads the books as soon as they drop? Teenagers and kids read the books as soon as they drop. Who never has multiple “to be read when I have time” piles? Teenagers never have these piles (except when they didn’t clean their rooms). Seriously, what teen fan wouldn’t thrill to be able to nominate the top 5 books they liked best, rather than those stuffy old awards nominated by librarians? (Not that librarians aren’t totally awesome!) And by the way, librarians talk to both the Romance and YA readers. Is it appropriate to send the Hugo nominating rules to the American Library Association and the International Association of School Librarianship? I haven’t actually read any of the “In Death” books myself, but my wife is currently approaching the end of a re-read of the complete series. I’ll move them up my TBR list and give them a try. (There’s an available spot where I was going to try some guy’s books about hunting monsters, but those have moved way down in priority. Way, way down….) That Kratman fellow over on File 770 needs a Snickers. (That’s how to solve this whole mess! Snickers for everyone!) Which is why, of course, that the American of Hispanic and Native American ancestry TURNED DOWN THE NOM The Pups would do well to retire this oft-repeated mantra. Correia’s family is from Portugal, which, last time I checked, was full of white Europeans. Hispanic means something, and it is not “has a Spanish or Portuguese sounding last name”. John Scalzi said: “So, no. This Hugo contretemps isn’t about class. But it might be, a little bit, about who has class, and how that affects what they do with their wealth and power.” Of course its about class. Class is the in group with the pull, and no class is the out group. Talk about check your privilege. Scorpio above has it completely right, and the easy way to tell is this: Defenders of the Faith are up on their high horses raging, burning stuff down and smearing people with guilt by association. Meanwhile Sad Puppies are merely pointing and laughing. Totally classy, John. Totally classy. Allow me to introduce you to Vox Day, VD for short: VD is king of Castalia House, which is the happiest totalitarian dictatorship on earth: VD has rules for how to make science fiction a perfect place: And those rules consist mostly of threatening anyone who isn’t straight, white, male, a christian dominionist, or different from his conception of a perfect world If that makes you go “Huh?”, then you’re keeping up: VD can’t *earn* a Hugo, but he figures if he hooks up the right bunch of people, maybe he can get a hugo from them. His puppies aren’t necessarily the smartest bulbs. But an army of lemmings marching in lockstep can push the Hugo ballot over the cliff. VD and his puppies all look the same, talk the same, and dance the same while we may be a motley crew: so it’s time to tell VD “Eat me” We can either let VD get the hugo Or vote a puppy-free ballot And live happily ever after What was that old saying, something about how many fingers are pointing back at you when you point at something? Phantom: I’m not a Hugo voter and I only have so much free reading time. Which one of the three John C. Wright novellas on the nomination list should I read? I haven’t read any of his other works so I’m coming to him fresh. thephantom182: Thank you. Even people who have not seen any of your previous comments know you’re a troll after those six words. I know you’re a gamma rabbit and all that, but it’s weird to me that your writing is somehow a magnet for these jerks when (to my eyes) the sci-fi you write is pretty classic/normal/traditional/mainstream. I guess it’s all about the blogging. And the social justicing. You’re like a self-hating privileged white man to those guys, I guess. Letting down the side. @Tam Knox Can’t judge a book by its cover I guess. Annie Bellet wasn’t conscripted. Of course, she declined her nomination in the end, in part because the anti-puppies side literally gave her nightmares of people cheering her defeat, all because she saw a fellow on the other side reaching across the aisle, and decided to reciprocate. Torgersen et al. certainly bear some responsibility for this outcome, and Beale et al. certainly bear more. But the antis are hardly guiltless. Is this the world that you want to build? Osberend – No, but I definitely don’t want Beale’s world, and I’m not sure I want Torgersen’s world either after reading some of his polemics. The Sad Puppies as a whole on the other hand… If there are any who just want to see books they like get awards and don’t give a flying froot loop about the politics and about hating SJWs? I want to see what their world is like. Thephantom: Re: Class: Changing the definition of what “class” is in the discussion to suit your own purposes and saying “of course” it’s about that is a nice rhetorical maneuver as long as it goes unnoticed and you can get everyone to go along. Unfortunately for you, I’ve noticed, and I’m not inclined to let you pocket that particular card. Nice try, however. The rest of your comment is possibly more ironic than you appear to realize. This is where you go back to your friend’s web site and tell them there about how you’re totally schooling people over here, I expect. “Of course its about class. Class is the in group with the pull, and no class is the out group.” Actually, that’s group dynamics. Class is social stratification. There are always Bad People like Beale. (There is ample, ample evidence that Beale is a Bad Person; Philip Sandifer did the rundown of the evidence.) The question is how others respond to that. Allying yourself with the Bad Person is questionable. Joining in the Bad Person’s use of disruptive tactics to damage a community is worse. But ignore that for the moment… …the real question is how to stop the Bad Person from achieving his goal of sabotage and iidisruption. The traditional method is ostracism, and that’s what’s being done right now to Beale (who richly deserves it). You usually have to ostracise anyone who supports and gives ‘aid and comfort to’ the ostracised person, too, just to make it work; that’s what’s happening to Torgerson and Correia, and they deserve it too, I’m afraid. If you want to get more sophisticated and modern than ostracism, you redesign your society’s rules so that the Bad Person’s attempt at sabotage simply won’t work. Then you don’t have to ostracise him. In this case, the problem is defective voting rules for nominations. What is needed is some form of “proportional representation”, such as STV or reweighted approval voting. This problem of blocs dominating the voting is known as the “representative committee” problem in the academic literature, and it has *known solutions*. Use one of them. With proportional representaton, a bloc which only has the support of 10% of the voters… gets 10% of the nominees, no more. The problem here was that the bloc got 100% of the nominees. So fix the system. You’re absolutely right, osberend: It’s clearly unpossible that Bellet might have reconsidered whether or not it was a good idea for her career to be associated with a racist, sexist, homophobic dipshit who is on record as explicitly stating that women getting acid thrown in their faces is acceptable collateral damage in the struggle to prevent women from being educated; a man who saw nothing wrong with getting an at-best-mediocre story by said dipshit nominated for a Hugo specifically and explicitly to troll the Hugos; and a man who is on record as explicitly stating how awful it is that he can’t judge SF books by their covers any more. Clearly, the only rational conclusion one can draw is that Bellet was harassed by horrible anti-Puppy mobs until she, in desperation and fear, withdrew her nomination. Let’s go ahead and leave speculation about Ms. Bellet’s rationale tabled for now, please. She stated her own reasons clearly on her own site and I am content not to gainsay her. May I say just how delightful it is that this particular OP has resulted in a really useful bunch of reading lists? It’s also possible given the language used by both Bellet & Kloos they were not interested in being used as part of a political battle this year. Both received a fair amount of abuse from puppies for pulling out. I believe both will have eligible works for next year. I’d been enjoying Bellet’s urban fantasy series prior to this mess and was pleased to finish catching up once she resigned although I found book 5 to have a weak ending. It’s better IMHO than much of the urban fantasy out today as I didn’t have to wince at sexist and racist crud nor was anyone raped in 5 books. Lots of battles, murder & mayhem as well as paranormal creatures, mystery, a little romance. Some of bad & good guys are grey. Sorry @Scalzi we cross-posted. Delete or modify as appropriate. @Greg I see we’re allowed to deploy unconventional munitions. Excellent. As a tangent – Tom Knighton was mentioned, and I popped over to view his work, since he’s due that amount of respect even if his hounds aren’t. Two things struck me: Firstly, we’re still dealing with nuclear fallout as an EOT scenario, a la The Death of Grass / No Blade of Grass, which struck me as really retro, akin to the Westerns written in the early 20th C. But, sure, I can see a market for an interesting re-run of MAD fear. (But, really? I’d have thought reality based MilSpec would know all about Bushes’ fondness for neutron weapon research over conventional types and the breaking of various treaties to do so – have none of these authors found about the CRS? Access used to be easy to obtain, I’ll let people work out how). Secondly, why the cross pollination is because people are playing silly buggers over malicious reviews. Which, having done due diligence, he has an arguable case for: yes, there appears to be some spiking occurring (ELF reference), but it doesn’t look organized, it’s the usual Twitter Warriors. However, if you want to claim moral high ground on such topics, what you do not want to be doing is, I don’t know, giving your own works 5/5 reviews. :Sad Trombone: I can’t find any published stuff that isn’t purely kindle etc, so I cannot comment on his work – and he’s not published outside of America. Any links allowing me to do so would be appreciated (since you’re probably reading Mr Knighton – your google presence is overclouded by famous authors on fungii, so feel free to splash some copy). @lurker Busy, it appears publicly posting has lead to RL shenanigans. I have the enviable position of being liked by none… BUT WHAT CAN YOU DO? Aaron Yep; Portuguese are classified here in Europe as white Europeans. I do wonder sometimes whether the Sad Puppies have even the merest smidgen of knowledge of Europe. Beale’s inability to leverage his wealth in Italy into any kind of influence in Italy or Western Europe as a whole is likely one of the reasons he is so desperate to have some effect in the States. And evading tax in Italy isn’t a political act driven by Libertarian beliefs; everybody does it, so no points for that one either… Dammit, we need a more sophisticated flounce-scoring system. It’s no good simply deducting points for ‘and they came back’, because they always do, often with multiple iterations of ‘now this is REALLY my last post’. I propose a system that not only counts the number of returns after a flounce, but such variables as whether the flouncer thinks they are being clever by flouncing out of one discussion only to jump back into a different thread; the time between flounce and return (with higher penalties for very short gaps); whether a reason has been stated for the flouncer’s temporary or permanent absence*; an attempt by the flouncer to challenge others to reply to them on their own blog instead**; and of course continuing to participate in the discussion with a sockpuppet. *such as “this is my last post because I’m off to work” followed by another post ten minutes later in response to a critical comment. **of course this should be differentiated from “We’re really off topic and Scalzi has asked us to move on, but I’d be happy to continue the discussion at…..” since the latter doesn’t actually involve flouncing. @ mythago I feel that going back to other boards to announce how totally badass you were in that place you flounced from is probably worth a bonus point or two as well. Back on the MilSF topic, I can recommend Mike Shepherd’s Kris Longknife books, which are Honor Harrington dialed down several notches, with more humor and smaller battles. Mike’s fairly right-wing, but not a nutjob. Ask him about his grandkids. :) I would have had huuuuge problems if “Twilight” had been on the ballot, but I thought “Hunger Games” was pretty good. Not sure if we want to unleash the teens, though bringing in the RWA crowd has appeal. @Greg: that’s an excellent metaphor, well-illustrated. Hee. @mythago: Good rationale and @UrsulaV, definitely. And then another point each for returning and then reflouncing/rebragging, compounded with interest every time you go through the cycle. @brucearthurs: Ask the Mrs. which book you should tackle first, and give her my best. That’s a whole lotta re-read. @Stevie: Portuguese people are white in the US as well, except in Larry’s head. Hispanic/Latino are admixtures with the indigenous population of the Americas and by definition, Hispanic people must be from a Spanish speaking country. Brazilians, for example, don’t count as Hispanic because they speak… Portuguese. Direct from Europe doesn’t count. Most folks prefer their country of origin except on census forms, and will tell you they’re Mexican or Puerto Rican or Venezuelan. John Scalzi said: “The rest of your comment is possibly more ironic than you appear to realize.” Well John, I guess it is kind of ironic that guys like you and David Gerrold are all outraged and raging and making stuff up about what bastards the Sad Puppies crowd are, not to mention smear campaigns sufficiently heinous to get nominees dropping out. Two so far, yes? Guilt by association, -very- classy indeed. Meanwhile people like myself just quote what y’all said. We quote, and then we laugh. No need to do more than just point. Two words, John: Entertainment Weekly. No point in trying to “school” you. That would imply me being concerned with improving your performance or attitude or something. No, I just want to -defeat- your clique. Its not personal, its just business. So far, going pretty good. Incidentally, the official graphic of the Anti-Sad Puppies Brigade is Barnett Newman’s painting “Voice of Fire”. It perfectly captures the complexity and nuance of your position, and its history reflects the recent history of both the SFWA and WorldCon. Voice of Fire is what happened in the world of painting that Sad Puppies is trying to prevent from happening in SF/F. Jim Henley said: “I’m not a Hugo voter and I only have so much free reading time.” Jim, they’re novellas. That’s like a one-bite brownie. Read faster. Nathanael said: “In this case, the problem is defective voting rules for nominations.” I keep seeing this “defective voting rules” argument getting made. Has it occurred to none of you that the rules have been this way for quite some time, and they are arranged that way FOR A REASON. And the reason is so that small numbers of coordinated voters can sway the nominations. “Gee, I wonder why its always been that way?” asked the Puppy. ThePhantom: “I guess it is kind of ironic that guys like you and David Gerrold are all outraged and raging and making stuff up about what bastards the Sad Puppies crowd are” Well, no. Vox Day is transparently a bigot; Larry Corriea is transparently an insecure, whiny bully; and Brad Torgersen transparently can’t argue his way out of a paper bag. Neither I nor David Gerrold, nor anyone else, has to make any of that up. There lot’s of documentary evidence. Sadly for the Puppies, none of their points stand up to the same level of scrutiny. “No, I just want to -defeat- your clique.” This would be the manufactured clique that the Puppies made up so they could have someone to be angry at, yes? Well, you have fun with that. As you may imagine, since neither I nor David nor anyone else signed up for this clique, the amount of investment we have in it is, well, low. “the reason is so that small numbers of coordinated voters can sway the nominations.” Well, no. It’s worth noting that in the past when evidence of that happened, the works nominated tended to end up below (or barely above) “No Award.” There was a very recent example of that, in fact. It’s possible it might happen again. But it’s certainly interesting that the Puppies are more than happy to ascribe conspiritorially to others an action they happily undertook themselves. Now, Phantom, the posturing is getting a little stale. If you don’t have anything else to add to the conversation than that, you might want to move on. (Also, Phantom, please aggregate your posts in the future. Multiple sequential posts from the same person makes me unhappy.) No point in trying to “school” you. That would imply me being concerned with improving your performance or attitude or something. No, I just want to -defeat- your clique. Its not personal, its just business. So far, going pretty good. Three things, and I’ll be serious for once. You’re not doing Tom Knighton any favors here: it has been looked into, his case is valid, I’m sure we all agree that spiking reviews is a non-classy act. One lone Twitter vigilante does not not make up a culture. Given the nature of Twitter, it could be a genuine ‘witch’, a troll, his ex-lover, himself and so on and so forth. Secondly, your business is my business. ὣς φάτο χωόμενος Ζεὺς ἄφθιτα μήδεα εἰδώς: ἐκ τούτου δὴ ἔπειτα δόλου μεμνημένος αἰεὶ οὐκ ἐδίδου Μελίῃσι πυρὸς μένος ἀκαμάτοιο θνητοῖς ἀνθρώποις, οἳ ἐπὶ χθονὶ ναιετάουσιν.: δάκεν δέ ἑ νειόθι θυμόν, Ζῆν᾽ ὑψιβρεμέτην, ἐχόλωσε δέ μιν φίλον ἦτορ, ὡς ἴδ᾽ ἐν ἀνθρώποισι πυρὸς τηλέσκοπον αὐγήν. αὐτίκα δ᾽ ἀντὶ πυρὸς τεῦξεν κακὸν ἀνθρώποισιν: γαίης γὰρ σύμπλασσε περικλυτὸς Ἀμφιγυήεις παρθένῳ αἰδοίῃ ἴκελον Κρονίδεω διὰ βουλάς. ζῶσε δὲ καὶ: ἀμφὶ δέ οἱ στεφάνους, νεοθηλέος ἄνθεα ποίης, ἱμερτοὺς περίθηκε καρήατι Παλλὰς Ἀθήνη. ἀμφὶ δέ οἱ στεφάνην χρυσέην κεφαλῆφιν ἔθηκε, τὴν αὐτὸς ποίησε περικλυτὸς Ἀμφιγυήεις ἀσκήσας παλάμῃσι, χαριζόμενος Διὶ πατρί. τῇ δ᾽ ἐνὶ δαίδαλα πολλὰ τετεύχατο, θαῦμα ἰδέσθαι, κνώδαλ᾽, ὅσ᾽ ἤπειρος πολλὰ τρέφει ἠδὲ θάλασσα, τῶν ὅ γε πόλλ᾽ ἐνέθηκε,—χάρις δ᾽ ἀπελάμπετο πολλή,— θαυμάσια, ζῴοισιν ἐοικότα φωνήεσσιν. To say you’re a tadpole is a little generous given their evolutionary age. Lastly, you’ve absolutely no skin in this game. You neither understand culture, nor warfare. Posturing, as they say, will get you noticed. BE SEEING YOU. ὣς οὐκ ἔστι Διὸς κλέψαι νόον οὐδὲ παρελθεῖν. οὐδὲ γὰρ Ἰαπετιονίδης ἀκάκητα Προμηθεὺς τοῖό γ᾽ ὑπεξήλυξε βαρὺν χόλον, ἀλλ᾽ ὑπ᾽. We very very much disagree, and think they’re a fine thing. Worth an aeon or two on the hill or with the eagle. Cthulu: If you’re going to quote Hesiod, it’s considerate to give an English translation. Aww. You don’t see the frantic rush to see the references amongst their ranks. This will allow them to play Voice of Fire — that’s the abstract painting that is now worth $40 million, right? And of course that totally destroyed the world of art and now there’s nothing but abstract paintings. Oh wait, no it didn’t and no there’s not. Is it possible for the puppies to come up with one argument that is based on reality? New York Times bestsellers that don’t count as bestsellers because reasons, claims that the Nebulas and the Hugos are the same thing and run by the same people, claiming that only Baen publishes conservative authors, claiming that liberal authors are never popular when the two most popular authors on the planet are liberals, claiming that all military SF is written by conservatives, and now that the conservatives — the popular bestsellers — are poor while the obscure, literary authors are rich. It’s jam tomorrow and jam yesterday, but never jam today with the puppies, apparently. You’d have thunk that they would have planned the conspiracy theory out a little better than they have. But which we really all wish to know is, where is your underground lair, Scalzi, and what sort of decor did you, Gerrold and Martin decide upon? thephantom: you … are all outraged and raging and making stuff up outraged and raging about the SJW conspiracy that was completely made up? Wait who are we talking about again? Kat: that’s the abstract painting that is now worth $40 million, right? You have to be really careful with art. It’s a known fact that: 1) The CIA funded most of avant garde stuff like Pollock, to the turn of about ~$500,000,000 over the period 2) It’s a shell game run by people wanting tax breaks: new hot artist – pump and dump, you don’t pay tax on it 3) It’s one of those ancient status tweaks that any sane person can ignore, and in fact, the model of patronage has sadly devolved into crumbs due to #1 So, Kat. Hmmm: should I flash the Greek tip-off to say that this is a silly argument that’s only going embarrass some rich patrons, or not? Not a great move here. On the subject of nominating teen authors I’d really think it was cool if some of Holly Black’s books were nominated. I particularly like her curse worker books starting “White Cat.” They are very entertaining capers with a really cool and interesting main character and some awesome word building. Oh look, Kat Goodwin got the reference and googled the Voice of Fire thing. Yes Kat, a red racing stripe on a blue field is now “worth $40 million” if you believe that sort of thing. I supposed there’s a sucker born every minute. Do you understand my point that the number of people who will “get” and like Voice of Fire is extremely small?? Do you understand that this is not about trashing the painter of the racing stripe or the author of the Dinosaur story, its about pointing out that their audience is EXTREMELY SMALL and more importantly from my personal perspective, I am not in it. So for me to live in a culture that reflects my morals and my aesthetic, I have to swim upstream against these forces and their champions. It pisses me off. John Scalzi said: “Vox Day is transparently a bigot; Larry Corriea is transparently an insecure, whiny bully; and Brad Torgersen transparently can’t argue his way out of a paper bag.” So now, is that the “class” we were discussing earlier John? Because I’m pointing and laughing right now. That right there is a classic. John Scalzi also said: “This would be the manufactured clique that the Puppies made up so they could have someone to be angry at, yes?” No, this would be the clique that gets the Dinosaur story a Hugo and gets the National Gallery of Canada to shell out $1,750,000.00 of my tax money for a racing stripe. Its a cultural space John, not a particular smoke filled room. As you know. Then, the very elaborately obtuse poster known as Cthulu (SJW tinged) bubbled: “You’re not doing Tom Knighton any favors here: it has been looked into, his case is valid, I’m sure we all agree that spiking reviews is a non-classy act.” I do not work for Tom Knighton. His name does not appear on my paychecks. Nor do the names Theodore Beal or Larry Correia. I speak for myself and no one else. Of course Tom Knighton’s case is valid. He’s a stand up guy and he’s getting smeared by lying little assholes all over Twitter and Facebook, not to mention Amazon where he makes his money. He is not the only one. Two other people dropped out of the awards because of the SJW guilt-by-association manure spreader. As you know. Sadly, some people including apparently some here (not you) disagree that spiking reviews is a non-classy act. Perfectly acceptable when applied to certain targets. Which is just -so- classy. More pointing and laughing, because Tom Knighton’s numbers are on the rise, not the decline. The rest of your comment is Greek to me. TL;DR Someone likes something I don’t like! ThePhantom: “No, this would be the clique that gets the Dinosaur story a Hugo” Oh, ThePhantom. Just fall face-first right there into the pavement, there, why don’t you. “If You Were a Dinosaur, My Love” didn’t get the Hugo. So much for the clique! Now that we’ve definitively established that you don’t actually know what it is you’re posturing about, ThePhantom, why don’t you run along from the the thread. Your ignorance on the matter has gone from tragedy to farce. John, you don’t understand. The Phantom comes from an alternate universe in which different Hugos were awarded, and where the Phantom is a member of a ragtag squad of heroes who fight on the beaches, in the art galleries and the comment thread, for whatever the puppy-leaders tell them to fight for today. Judging the truth of what he says based on mere facts is so unjust; it’s what he feels that is important. Up next, some pup will show up and argue “we hunted the mammoth” therefore we deserve a Hugo. No, wait, I think what’s-his-face (quite inadvertently) cut to the heart of what’s really going on with the Sad Puppies, albeit dumbly: So for me to live in a culture that reflects my morals and my aesthetic, I have to swim upstream against these forces and their champions. It pisses me off. That is: they want the whole world to center around and cater to their interests, and it is not OK when the world fails to do so, and especially when it does so according to rules they otherwise have claimed are fair and proper. They have the emotional maturity and empathy of badly-raised children who, as long as they’re winning a board game, smugly announce it’s not their fault that you rolled a 2 or landed on their hotel, that’s just the rules; but the minute they start to lose, start shouting that you cheated, it’s not fair, this game is stupid. They can’t fathom SFF in which their preferred brand of literature is merely one type among many, and maybe not even the most popular brand. They don’t want to read about universes that don’t place them, admiringly, at the very center. That’s why their arguments are counterfactual and contradictory: the real, underlying argument is “THAT’S MINE. NOT FAIR. YOU CAN’T HAVE IT.” John Scalzi said: “Just fall face-first right there into the pavement, there, why don’t you. “If You Were a Dinosaur, My Love” didn’t get the Hugo.” Oh look, an error of detail! Of course an a nomination is not an award. Well, that disproves the entire thing! Silly me. For frig sakes man, that’s the extent of your rhetoric? thephantom182: “Oh look, an error of detail!” Yes, look, an error of detail, from which you launched a fussilade of bullshit entirely unsupported by your assertion. And while I can certainly understand why you are keen to dismiss this as a piddling detail, in fact it goes directly to the heart of The Problem With Puppies: That they don’t appear to let little things like facts get in their way of stemwinding themselves into a frenzy. The Problem With Puppies is not the single piddling detail, it’s that missing that piddling detail isn’t a singular event — it’s a persistent pattern of either ignorance or disingenuousness — or both! It could be both. As for it being the extent of my rhetoric: It is not. However, the fact that you don’t know enough to know what you’re talking about means that I’m not inclined to waste any further rhetorical skill on you because — as previously noted — you clearly don’t actually know what you’re talking about, nor am I inclined to let you pretend you do just so you can grandstand a little more on my blog. Your grandstand is missing structural support, and just collapsed under you. I understand there may be other places ignorance of a factual detail supporting an argument is lightly skipped over as immaterial, but strangely enough, I think it might have some bearing. If you don’t like it, of course you are welcome to return to those other places and tell them how unfair it is I actually expect you to know what you’re talking about. With that, comments closed for the night. See you in the morning, folks. Update: Comments back on. “…the Voice of Fire thing. Yes Kat, a red racing stripe on a blue field is now “worth $40 million” if you believe that sort of thing. I supposed there’s a sucker born every minute.” Of everything that Conservative SF writers get their boxers in a twist about, their hysteria over Modern Art is the one that puzzles me most. I have to imagine that most of them have actually not spent any time in actual art museums. Is this the case, y’all? Do you not go to museums? (I’m talking to the puppies now.) Because if that’s the case, let me recommend you take the time to do so. Modern and Contemporary art is well worth looking at. I mean, I know you all love (or claim to love, at least) Maxfield Parrish and I too loves me some Vermeer and Caillebotte, but there are artists from this century who are also doing excellent work. (Will Barnet is one of my particular favorites.) You should check them out. Phantom: Thing about brownie bites is, they each taste about the same, and are yummy. I’m not going to read the 2nd & 3rd Wright novella if I don’t like the 1st pretty well. So it behooves me – and John C. Wright – to start with the one I’m most likely to enjoy. Since you’ve read all three of them, I figured you’ve got an opinion there. Plus, it let’s us talk about the pleasure of reading rather than literary politics. And there’s been quite a lot of people talking about what they like in these threads, so I figure it’s in bounds. Maybe a starting point is, how are you going to rank them on your final ballot? And will one of the Wright novellas get the top spot? And will the ranking be a close call for you, or easy? Jim Henley: Since Phantom won’t clarify — perhaps he hasn’t read the Wright novellas? — here are reviews of the works in question, from over at Journal of Impropriety: thephantom: ?” Not only is the detail about the Dinosaur story wrong, so is the claim that the cultural forces are the same. It’s laughably wrong to claim that the forces are the same–65 scifi fans nominated the Dinosaur story while the art market is driven by the ultra-rich looking for tax breaks and status. I doubt any “cultural forces” working on the two groups are the same. Culture warriors do have a schizoid relationship with the free market, eh? On the one hand, we are told that the (pre-Puppy) Hugos were corrupted by a clique of Marx-worshipping moonbats. On the other hand, thephantom182 chooses, as a symbol of the putative clique, a painting that sold for over a million bucks. (Granted, this particular painting was bought by a national gallery, but another painting by the same author in the same style fetched over $80 million at auction, so perhaps the Canadian taxpayers got a good deal.) Also, regarding that error of detail: “If You Were a Dinosaur, My Love” was nominated for a Hugo and didn’t win one… in the same way that Larry Correia and Brad Torgersen were nominated for Campbell Awards, and didn’t win them. So this proves… what, exactly? If all the Puppy advocates simply agreed with Vox Day’s party line, and declared that the Hugos were territory to seize for their side in the culture wars, then I would understand them a lot better. With all this dodging and weaving, I can’t tell if they’re being disingenuous or just stupid. @mythago: I think you´re absolutely right, I too noted the inadvertent honesty. You also need the “badly raised child”-levels of emotional maturity and self awareness to un-ironically demand science fiction have fewer new ideas. delagar: Thanks but skimming those reviews suggests they’re pretty spoilerific. (I read the review of the Bellet story as an example.) Meanwhile, Phantom said in a clarifying post to this or the previous thread that he did read all the stories on both slates and many more qualifying stories besides. That clearly qualifies him to help me pick one! He just needs to decide to do so. :) Wait, there was a social justice warrior conspiracy? And I wasn’t invited??? NNNNNNNNNNNNNNNNNNNOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOoooooooooooooooooOOOOOOOOOOOO!!!!!!!!!!!!!!!!!!!! @Seth Gordon: Probably some combination of both, IMO. @Greg: Early Homo sapiens probably didn’t hunt many mammoths, actually. Neandertals were more big-game specialists; early modern humans were the generalists, subsisting mostly on rodents, lagomorphs, smaller ungulates, and generalized foraged fruits and such, like the modern San people. Mammoths and other big game probably were only a major food source for early Plains Native Americans. So…some nameless Neandertal 80,000 years ago hunted a mammoth. Does he deserve a Hugo? :D delagar: from over at Journal of Impropriety Wow. That’s horrible writing. Clearly the pups nominated Wright 5 times based on (sexist) merits. Seth: “So this proves… what, exactly?” Here’s a thought, though admittedly a highly speculative one. There may be a subtle concession here that Teddy’s “opera life whatever” story is, as it is often described, embarrassingly bad. But, despite it’s objective lack of Hugo-level quality, the Puppies went to a lot of effort to manage to get it nominated. The thinking (such as it is) might go, “We worked really hard to get a bad work on the ballot. The dinosaur story is also a bad story*, so someone must have worked to get it nominated too. Thus, CONSPIRACY!!!!1!!!1!!wharblegarble!!!!11!!!!” * Swirsky’s story, of course, is not bad. At least, not in the way Teddy’s work is. When pressed, most Puppies will pivot (as they often do) over to “it’s not science fiction-y enough.” They might have a point, though Apex Magazine seemed to think it was appropriate for their pages. “apparently about the heads of the Puppy slates being upset that once upon a time, they felt people in fandom were mean to them. ” I remember how another famous writer put it. “If you meet an asshole in the morning, you met an asshole. If you meet assholes all day, you’re the asshole” – Elmore Leonard. I have to wonder, is it REALLY that people were mean to Correia because he is a conservative, or is it that people saw the kind of person he was (and we are never as good at concealing our inner selves as we think), saw that he was the kind of person who would trash something like this because he thought he was entitled to it, and decided “Yeah, I don’t want to associate with an asshole like that.” Floored: Early Homo sapiens probably didn’t hunt many mammoths, Sorry, probably should have provided a link for context Look for the string “Q) Ok, but you still haven’t explained the mammoth thing.” I think some of the Puppies operate under the belief that nobody who voted for Ancillary Justice or Redshirts or “If You Were a Dinosaur, My Love” or “The Water That Falls On You From Nowhere” could have actually liked the stories, and therefore they must have chosen them on the basis of pure ideology or because the authors were friends with The Right (Left?) People. And given that assumption, why not retaliate by stumping for your own friends and choosing works that fit like a glove to your own ideology? Greg: Oh, I’m aware. I was just pointing out the utter ridiculousness of that particular dudebro argument. delagar said: “Is this the case, y’all? Do you not go to museums? (I’m talking to the puppies now.)” Obviously I do. How would I know otherwise? Your opinion is a very common one, that The Puppies are a gaggle of untutored mouth breathers. You can’t envision an educated argument running counter to yours, the possibility short circuits your brain. Anyone who disagrees is -clearly- and idiot, because All The Smart People naturally agree with you. Talk to Camile Paglia. A lot of this is her argument. She wrote a whole book about it. Its a good book. The part I disagreed with her about was the iPhone/tablets which I must say is a quibble compared to the larger issue. She’s massively intelligent, widely read, and she’s not a dishonest, small souled apparatchik. delagar also said: “Jim Henley: Since Phantom won’t clarify — perhaps he hasn’t read the Wright novellas?” That is correct, I have not read any of the three nominated novellas yet. Therefore I have no specific recommendation at this time. I have read a great deal of John C. Wright’s other work, and found them to be uniformly excellent. Also very different, one from the other. John’s got a big brain, he spins off styles and ideas like sparks off a welding torch. They’re yummy, each with a unique frosting. Eat them all. Then go looking for more. Lucky you, Wright is a prolific scribe. Seth Gordon said: “Culture warriors do have a schizoid relationship with the free market, eh?” Seth, in what way can a purchase made by bureaucrats from a national gallery possibly be considered “free market”? It was pure politics and public money. And see Camile Paglia above. mythago said: “…the real, underlying argument is “THAT’S MINE. NOT FAIR. YOU CAN’T HAVE IT.” “Finally, one of them has understood.” Lord Rayden, Mortal Kombat. Of course that’s what all this is about. Always has been. Its just that I couldn’t be bothered about it until recently. Then Alex MacFarlan said this: “I want an end to the default of binary gender in science fiction stories.” Everybody has a last straw. That was mine. Funny how so many people showed up agreeing with me this year, all of a sudden. So many last straws. Barnett Newman (an artist I never heard of before today, but thankfully, Wikipedia can tell me all about him) was selling to private collectors before and after he sold Voice of Fire to the National Gallery of Canada. The price paid for Voice of Fire is comparable to what a private collector paid for Ulysses, another stripe, four years earlier; other paintings, since then, have sold for over twenty times as much—again, to private collectors. So it’s reasonable to say that those bureaucrats paid fair market value for their acquisition. Personally, if I had tens of millions of dollars to burn, I wouldn’t spend it on such things, but if other people think a canvas with three colored stripes is worth that much to them, who am I to argue? That’s the beauty of capitalism, right? Alternatively, you could say that rich art collectors have no taste, but I could say the same thing about the masses who spend money on pew-pew-pew novels. Ya’ll don’t have to speculate on why I withdrew. I put it into words. The Hugos right now are more about people needing to be right or feel wronged or score points than about great SF/F. That’s why I withdrew. This has become a fight about things I don’t want to be a part of, with assholes on all sides. Also there are kind, reasonable people but they seem to get drowned out a lot. Sadly, the yelling is loud and one nasty comment can wound enough that a hundred kind ones won’t close it over. That’s human nature, I guess. And anyone who *still* thinks that I am not basically the antithesis of VD really really hasn’t been paying attention. I want nothing to do with him and he certainly never asked me my opinions. I highly doubt he even knew who I was or read my story. It’s possible he chose me as a shield or out of vindictiveness, but who knows? All I am sure of is that this year, nobody is going to win no matter what happens, because we’ve all been put in a pretty sucky, losing situation. As David Gerrold said (paraphrasing), being right has become more important than being compassionate. That saddens me immensely, and is something I wish no part of. So please, stop trying to attribute motivations and thoughts to me. My feelings on this are very plainly written out in multiple places. Agreed (and please note I’d already asked people to table this particular discussion). Oops. Sorry, John. I missed that comment I guess. I’m done ;) Heh. Not directed at you, Annie! People sometimes have a hard time letting go of a topic. Phanton says: “Obviously I do. How would I know otherwise?” Well, then I cannot account for you being so abysmally wrong about the state of modern art. Although, as one of my professors pointed out, you can lead people to art, but you can’t make them think. I suppose that might be an explanation. That, or you’re simply lying. You *have*, after all, proven to be an unreliable narrator — more than person here has caught you in a factual error. sez greg: “Clearly the pups nominated Wright 5 times based on (sexist) merits.” Correction: The Pups nominated Wright 6 (six) times based on… whatever criteria. The fact that one of Wright’s 6 (six) Puppy-noms was taken off the ballot after being ruled ineligible, does not alter the fact that Wright did, indeed, have 6 (six) Puppy-noms. Phantom: Gotcha. I was confused by your earlier responses. So to confirm, you didn’t read the three John C. Wright novellas and didn’t nominate them. Do I have both those things correct? And do you know anyone who can give me an informed recommendation on them? phantom: The Puppies are a gaggle of untutored mouth breathers. You can’t envision an educated argument running counter to yours It’s not a question of whether I can envision it or not. It’s whether or not any of you “untutored mouth breathers” can actually make an educated argument counter to the anti-puppies. You haven’t. G.R.R.Martin has disproved the statistical assertions as false facts. What else is there? The SJW conspiracy with zero supporting evidence and running contrary to known historical evidence? That’s not an educated argument, that’s just nutjobs believing in black UN helicopters. the rules have been this way for quite some time, and they are arranged that way FOR A REASON. And the reason is so that small numbers of coordinated voters can sway the nominations. Oh, good. Then when folks propose a rule change to reduce the power of slate voting, the puppies will whole-heartedly agree to that change. Cause if there’s a SJW conspiracy, they must be getting their noms in with Super Sekret Slates, right? If the puppies OPPOSE reducing the power of slate voting, then they clearly do NOT believe in an SJW conspiracy, they clearly do NOT believe the SJW’s are voting secret slates, and they merely want to keep all the slate voting power to themselves. I look forward to the puppies supporting a rule change to reduce the power of slate voting. Ah, Camille Paglia. A woman who leveraged her crackpot thesis (Sexual Personae) into an entire career, supported by hurling personal insults at anyone who called her on her intellectual dishonesty. Also the woman who, more recently, called the California high school date-rape gang known as the Spur Posse “beautiful.” Paglia glorifies male dominance, and has publicly stated that the Monica Lewinsky scandal led directly to the 9/11 attacks, because REASONS. She calls herself a democratic libertarian anti-feminist, which gives certain types of libertarians cover, they think, because hey, they know the name of a lesbian academic elitist, whee! but her actual positions are usually considered neo-conservative. She’s inflammatory, because she’s willing to say anything, but the criticisms of her work have generally focused on her historical inaccuracy, really sweeping generalizations (of the type usually called “making shit up”), and an absolute belief in whatever evolutionary psychology fad is currently trending. As an opinion writer, she’s fine, the usual pundit type rounding up inconsistent and unrelated anecdotes to support her biases; as an academic, her scholarship is terrible; as a person she’s the maiden aunt who drinks too much and then starts berating everyone else at Thanksgiving dinner for imagined slights and insults. And yes, I’ve met her, multiple times, and was perfectly civil on each occasion, sometimes under significant pressure. Ms Paglia’s temper varies widely, and I won’t speculate on the cause. Frankly, if you want a conservative view of Western art, Sister Wendy Beckett is both more insightful and less vitriolic. I’ve never met Sister Wendy, by the way, but she seems lovely. Normally try not to respond to trolls, but this was too tempting to pass up. Maybe, just maybe, there’s a reason for this.. And, leaving the rabid dogs’ obnoxiousness aside, they aren’t terribly popular. So, really…how are you surprised? How? You’re practically asking people to mock you, how are you surprised when people call you out on your whining? Pot, meet kettle. Mr. Pot, perhaps you could stop calling Mr. Kettle black, at least until after you’ve got those thirteen layers of reeking grime laboriously chipped off by these adorable socialist hippie genderfluid kittens? delagar said: “Well, then I cannot account for you being so abysmally wrong about the state of modern art.” A reasoned and informed opinion that differs from your own? C’est impossible!!!! Go argue with Camile Paglia, and good luck with that. And you wonder why we have Sad Puppies. Phantom’s views (and views of at least some Puppies) remind me of a line in recent issue of comic The Wicked + The Divine by Kieron Gillen, Jamie McKelvie and Matt Wilson: “It is a poor critic who says that a lack of effect on them implies that all others are insincere in their love.” (I think I need to frame that quote as a reminder. To myself mostly) ThePhantom: “A reasoned and informed opinion that differs from your own?”. John Scalzi: The Problem With Puppies is not the single piddling detail, it’s that missing that piddling detail isn’t a singular event — it’s a persistent pattern of either ignorance or disingenuousness — or both! It could be both. Also there’s their persistent pattern of piddling, full stop. Puppies ain’t housebroken, yo. docrocketscience: Swirsky’s story, of course, is not bad. At least, not in the way Teddy’s work is. When pressed, most Puppies will pivot (as they often do) over to “it’s not science fiction-y enough.” If they think that, it says more about the Puppies than the story. I mean, sure, there’s no skiffy element explicit in the bare-bones plot of the story, but the narrative is in such clear communication with SFF that, without the genre, the story could not have been. Perhaps Puppies simply cannot see beyond bare-bones plot, or don’t want to. Certainly the way they talk about fiction indicates a blindness or even an aversion toward theme, nuance, symbolism, narrative structural choices, and so forth. ThePhantom: ‘the price of some artists’ work far exceeds their aesthetic value.’ is an opinion and reasonable and informed. ‘The price of modern art is propped up by public museums and governments subsidizing it, and if they wouldn’t pay that much, it wouldn’t cost that much.’ is not supported by the facts when private collectors spend even more on the work. Note how both disagree with ‘it was worth it for a museum to spend millions of dollars on a simple geometric painting’. Jim Henley said: “Phantom: Gotcha. I was confused by your earlier responses. So to confirm, you didn’t read the three John C. Wright novellas and didn’t nominate them. Do I have both those things correct?” Yes. “And do you know anyone who can give me an informed recommendation on them?” I suggest you go over to John C. Wright’s blog and ask him. He’s a cool guy. mintwitch, you met Camile eh? I never have. She does seem cranky. I expect she’d yell at me about the iPhone being something other than what she said. I found her arguments reasonable and applicable to the Sad Puppy/SJW face off. Maybe instead of slagging the people making the argument, you could address the argument itself? Radical notion, I know. FlooredBy said: “Maybe, just maybe, there’s a reason for this.” Yeah, the reason is you can’t stand somebody telling you no. Hence the slagging. .” Hilarious that you got “elitist” out of that. Try “SF/F reader.”. @Docrocketscience I think it’s pretty clear why they don’t like the Dinosaur story, and the reason has nothing to do with it’s science fiction-ness or the quality of writing and everything to do with the reveal at the end. …also, even if we assume for the purpose of argument that museums shouldn’t spend seven figures of taxpayer money on modern art, this seems irrelevant to the whole Hugo brouhaha, since the Worldcon, as far as I know, is not getting any special government subsidy. Another, related, point that I was making, though, was that Conservative SF writers seem to believe that *all* Modern/Contemporary art is exactly alike. That is, they seem to think every single Modern & Contemporary artist paints exactly like Jackson Pollack and Barnett Newman; when, in fact, there are as many sorts of art right now as there are SF. Which is why I wondered if any of them have been in a museum lately — because to read the screeds against modern / contemporary art that pop up in their novels and on their blogs from time to time, it certainly seems as if they haven’t. Annie, just wanted to let you know that all this mess has had, for me, one fortunate outcome: I’ve discovered your Twenty-Sided Sorceress books. I’m looking forward to reading them, because they sound like just my thing. Phantom, I’m thinking you might enjoy Tom Wolfe’s The Painted Word. It’s a terrific take-down of modern art. Morley Safer did the same thing for 60 Minutes some time ago; I seem to remember him especially caustic on a piece that consisted of basketballs suspended in a fish tank and another that consisted of a pile of wrapped hard candy. Despite my liberal politics and my joy in diverse literature, I have a lot of trouble with this sort of art myself. My solution has been to look at more of it, read about it, and try to understand why others prize it. Unfortunately, however, I have so far been able only to embarrass my husband by saying, “My kid could do that!” in a loud voice in art museums. (It’s a joke between us and not meant literally.) Nonetheless, try the Wolfe. His book on architecture, From Bauhaus to Our House, is also quite wonderful, though I think I disagree with him there — I lived next door to a couple of Mies Van Der Rohe buildings while I was in law school, and I quite liked them. thephantom182, you are free (within the limits set by our host) to express your opinion. And the rest of us are free to express ours. If you can’t tell the difference between “I think you‘re full of it” and “sit down and shut up”, then you have problems that go well beyond the Hugo Awards. “Remember, my “sin” here is expressing an opinion.” You are missing the point, Phantom: It’s not that you’re expressing opinions. That’s fine. It’s that you’re expressing opinions that are so clearly based on nothing but your own whims. Opinions should be based on data. You admit you haven’t read Wright’s novellas, yet you claim they’re excellent. You express opinions about modern art, but I suspect (based on what you have said here) that you have no idea what you’re talking about. This is not a sin, Phantom, but it is annoying. Terryweyna: I’ve seen the candy installation. Ate one of the candies! (Which you’re allowed to do; I wasn’t destroying the art.) I can’t say it spoke to me as a piece of art, but I liked the way it tasted. Delagar’s point about modern art not being the same is well-taken; likewise, some modern/abstract art works very well for me, and other bits of it seem like pretentious wanking. I think it’s broadly true that a lot of art requires investment of time/effort in order to learn how to look at art. It’s also possible that folks who are interested in making that investment will never be pleased with it, which is of course their karma. I do also agree, however, with the notation that this long disgression into modern art is leading away from the topic at hand (and also a red herring in regard to it), so let’s start to reel it in, please, folks. If Ms Paglia has expressed an opinion on this year’s Hugo awards, I would love to read it. Google-fu has produced nothing, which doesn’t surprise me. IIRC, SFF does not meet Ms Paglia’s standards for “literature.” She’s a big fan of early 20th century French philosophers; not so much “popular” fiction. I’d be interested to see what her arguments for and against slates might be. Although, if she uses the word “cthonic” in any context other than Greek mythology (yet again), I won’t be held responsible for my language. (Dear Ms Paglia, if you are reading this, that word still does not mean what you want it to mean. No, really, it doesn’t.) As for the opinions of people in this thread, I have no interest in trying to “argue” with them. Opinions are opinions–like assholes, everyone has one. I may agree or disagree, for various reasons, some as objective as humanly possible, others completely subjective, and yet others completely random and whimsical. Which is partially why I object to slates–human thoughts/opinions are really all that we have that are our own. Turning ones thoughts over to another is self-subjugation, and causes harm to the self. Phantom – “I suggest you go over to John C. Wright’s blog and ask him. He’s a cool guy.” I’ve been over to Wright’s blog several times. All I can really say is that you and I apparently have such radically different ideas of “cool guy” as to render the phrase useless. I am however going to be reading his novellas. I hope they are of vastly better quality than his non-fiction writing, or it’s going to be pretty painful. Well, shoot, I wanted to respond to what you just said. But suffice it to say that I agree that learning to look at something you don’t instinctively understand or appreciate is a good approach. I’m eager to read Updike on the topic, for instance; it’s a “one of these days” project. (I, too, would have eaten one of the candies, just because it feels so transgressive — though I know it’s not, actually.) As to the topic at hand: I plan to read all the nominees before I vote. I expect this to be painful at times, as I have read two of the John C. Wright pieces already and thought them dreadful — in quality, not in message, though he does have a way of hinting you over the head with his message that is most inartful. I find I’m reluctant to write my usual reviews of the short fiction categories on the group website with which I am affiliated for fear of bringing the wrath of the Puppies down on this very fine (and politically diverse) group of people. And I’m angry at that reluctance and that fear. None of these three emotions is fun. “Funny how so many people showed up agreeing with me this year, all of a sudden. So many last straws.” Many? Heh, No. You got a cluster of vocal people to pop up in a very small section of a niche group of a market segment and complain. You want to see how it’s going for the weight of numbers overall, how about taking a look at the political situation. You are losing. Badly. So this is your reaction. That’s why it is called “reactionary”. The actual way things are going? The trends and opinions in the places true power lies? You don’t even register enough to be a joke. This is the kicker for all of this, all you fools harping on like this is some grand battle in which you are the great heroes, some massive heroic narrative, that this is WAR and you will WIN because YOU KNOW HOW TO FIGHT A WAR! Nope. None of it. y’all are not even close to being a force, much less one that matters. You manage to harass some people, but you only pull that off because of how weak you are. If you actually mattered then when you assholes threatened to kill women for attending a convention the cops would roll up and crush you. You only get away with this because you are so unimportant. Get your head out of your ass. This isn’t some magnificent quest to beat back the forces of oppression and save the world. This isn’t even a “consumer revolt”. This is a micro-sliver of a market segment being caught up in a promotion campaign. phantom:: You can’t envision an educated argument running counter to yours So? Make an argument! Assert a premise with evidence to back it up. Apply logic (showing your work) and reach a conclusion. The problem is, you have yet to demonstrate that capacity. You don’t know what words mean even though they are central to the particular thread (Class is the in group with the pull, and no class is the out group.) You don’t know who has won a hugo even though that is central to the entire debate ( the ones who give a Hugo to the Dinosaur story) You’re too cool to school (No point in trying to “school” you. ) but you brag about it elsewhere (I’ve been hitting SJWs with a cluebat over at Scalzi’s bog. I mean blog.) You think Hitler was a socialist because the word is in the name ( Because Hitler was most certainly a Socialist. The party was called the National Socialist German Workers’ Party. Doesn’t get any more Socialist than that. here) Again, phantom, make one complete argument, premise, evidence, logic, conclusion, Until then, yeah, I can’t envision it, cause so far, you can’t hack it. Becca Stareyes said: “‘The price of modern art is propped up by public museums and governments subsidizing it, and if they wouldn’t pay that much, it wouldn’t cost that much.’ is not supported by the facts when private collectors spend even more on the work.” Possibly, but that’s not what I said. I said that purchases made by government employees for political reasons have nothing to do with the free market. Support of certain types of art includes a large helping of politics. With Voice of Fire, that was Canadian Liberal politics. Liberal Party apparatchiks rubbing the plebes noses in their plebeian lack of refinement. It can be viewed as the Laurentian Elite of Quebec telling Western Canada to get stuffed. I’ve got half a billion dollars worth of useless windmills visible out my window that represent the Toronto Liberal Party Elite saying to same thing to rural Ontario. They have not grown more subtle with time, shall we say. John Scalzi said: .” Thanks for making my point John. The Scalzi has Spoken, no rational counter argument is possible. My opinion can’t be rational because you don’t share it, and you proved it because you found a mistake. Therefore I am but an untutored boob, QED. Granted! I’m an untutored boob. A redneck slob who drinks from the bottle and couldn’t tell dijon mustard from Dijon France. No problem. I’m an untutored boob who has $40 to express his untutored boob opinion and whose vote counts for exactly what yours does. Catastrophe! Tell me again how this is not about capital “C” Class, John. In my untutored boob experience, people with class are usually pretty accommodating of people dumber than them. Kinda defines the concept “classy”. .” Opinions are never a problem. What’s being commented on, I think, is a recurring pattern in your posts. You state opinions and include a fact they’re based on. The fact is pointed out as being disputed, entirely disproved, or irrelevant to the conversation. You then do one of three things. You either a) insist the fact doesn’t matter, only the opinion, b) insist that the other person has no idea what he or she is talking about or is biased against the opinion, or c) move the goalposts and introduce another fact-based opinion in a slightly different direction, and we start again. What I would LIKE to see are discussions along the lines of Eric Flint’s excellent essay on what’s wrong with the Hugos, and some more discussion (as we’ve already had in these pages) about SF that’s been overlooked by the awards. This fascinates me and helps me break out of the narrow strip of reading I tend to gravitate toward. I’ve visited the National Gallery once. They have a very broad modern art collection but they also have lots of other types of non-abstract art, including a good set of the Group of Seven’s paintings — Canadian landscape painters from the 1920’s and 1930’s who had a distinctive style — lots of use of light, interesting color palatte. They also have one of my favorite sculptures now — Maman by Louise Bourgeois — a giant metal and marble spider that sits in front of the gallery. If I was extremely wealthy with a large estate, I might be tempted to try and buy it. But they wouldn’t sell it to me. It was also bought for what people complained was too much cash and thought it was a ghastly eye-sore, but thousands love it and it’s become a national icon in Canada. It’s so beautifully textured so that it flows like carved wood or suggesting spider hair, elegant, warm and menacing all at once. Got nothing to do with the discussion, just thought I’d throw that in for any interested. It is a tradition of science fiction that science fiction is always dying, being destroyed by something or other. Television, cheesy sci-fi movies, the New Wave SF authors, the feminist SF writers, William Gibson and cyberpunk, tie-in novels, Star Trek not because it was a liberal social justice show but because the SF fans felt the science was awful and it became so popular that people would think that was what science was and not like “real” SF, and of course Star Wars even worse for the same reasons, and all space opera and military SF which wasn’t and to many still isn’t considered real SF, Dan Simmons’ Hyperion destroyed SF, Frank Herbert’s Dune destroyed SF, alternate history wasn’t real SF and is right out, time travel likewise, post-apocalyptic SF is considered the end of the road for SF, video games, and of course fantasy fiction which has been supposedly killing off SF for over fifty years. In fantasy fiction, sword and sorcery was destroying it, tie-in novels again, contemporary fantasy was destroying secondary world fantasy — not once but twice in the 1980’s and the oughts, paranormal romance was destroying fantasy, etc. Short fantasy fiction tends to have less action and battles than novels because there simply isn’t room for it, but that doesn’t automatically make it literary in prose or theme. I don’t think short fantasy fiction has ever been accused of destroying fantasy fiction before, though, so in that the puppies have made a new one. I don’t think the puppies are stupid. Disorganized a bit, but that was likely to happen when you bring in the game rippers and let someone like Teddy take over. But the goal has been pretty much clear and direct from the beginning — threat while claiming to be mysteriously threatened. And the Internet allows for marshaling some pretty big, physical ones. All you’ve got to do is whistle, which they did. It doesn’t matter if they make an error of the details, if they say things that contradict each other, if they say it’s all about popularity and then that popular liberals don’t count, etc. The important thing is the threat, repeated again and again. The take back threat. It doesn’t matter to them if any of their candidates actually win anything at the Hugos. The important thing is the threat. But the people they are threatening live with the threat every day. And that’s another reason that the Hugos will survive. You’d think it was rare for with “victim mentality” as deeply ingrained as phantom’s to identify with the political right. But, of course, it’s not that rare. Seriously, dude, gut up, grow up, stop whining, and at least get your ducks in a proverbial row, so that maybe you could attempt to make the argument you think you’re making. *sigh* Preview buttons are my friend. “…rare for a person with…” ThePhantom: “The Scalzi has Spoken, no rational counter argument is possible.” At rational counter argument is possible, but you haven’t yet made one. What you are trying to do now is suggest that other people here — notably me — are being mean because they’ve told you to back up your assertions with facts and evidence, which you don’t appear to be able to do. Again, I understand you want to be able to throw up a lot of chaff to distract from the fact that you don’t have an argument. However, neither I nor anyone else is obliged to be distracted. You either can’t argue or you won’t argue. “people with class are usually pretty accommodating of people dumber than them.” It’s “dumber than they,” actually. I’m not aware of suggesting you are dumb. You may, however, be ignorant, and if you’re not ignorant then you are likely being disingenuous. If you’re ignorant, that can be corrected, and I and other people here are doing you a favor by doing so, which is, of course, very classy indeed. If you’re being disingenuous, then you’re not actually owed much in the way of courtesy. That said, all it appears you have done since you got here is to blunder in spouting positions rather than arguments, and when you’ve been called on it, to gripe that it’s rude for people to do so. That’s wrong, and you’re wrong for being wrong, which makes you wrong twice. You’ve also picked the wrong person in me to make this argument to, ThePhantom, because I don’t care what you think about me or how I run this site. The courtesy I’ve extended to you so far is, in my opinion, rather more than you’ve earned with your performance to date. You argue poorly and you’re offended when people point it out. This, at least, makes you very much like the other Puppies. Now, start making a proper argument, or I’ll simply stop allowing you to post here and waste everyone else’s time. “I speak for myself and no one else.”. ThePhantom, you are conflating socio-economic class with ‘classy behavior’. Scalzi’s original point was that, despite attempts to frame PNH, Martin and himself as wealthy white-male elites, and the SP/RP folks being the mainstream ‘middle-class or working-class readers’, all of them have a very similar current status as well-off white men who are deriving some income from fiction writing. And that this is not Scalzi’s framing: he’s the one pointing out that there’s not any difference in status outside of fandom between him and Correria. (Martin and probably Day are a bit odd-man-out in that they have categories that outsiders might care about; Martin because his TV success means someone outside of SF fandom might know who he is, Day (IIRC) is the only one not living in the USA.) Saying that something is “ruining” SFF is utter bullshit. SFF is, at its core, about exploring strange new worlds. A universe where noble Jedi and tyrannical Sith wage war over the galaxy; a galaxy where the peaceful Federation tries to live up to its founding ethos while getting caught with its pants down by everything from the Klingons to the Borg; a world where the first* female trainee knight in centuries in a small medieval kingdom defeats bullies and learns to be a great leader; a world where a snarky runaway blind girl and a tormented prince-in-exile must work with a judgmental polar native, her clever-tactician brother, and a Tibetan-analogue savior who’s the last of his kind to stop a tyrannical psychopath from conquering the world; a universe where the crew of a starship discovers that not only are they on a TV show, but it’s written by hacks… All of those and more are SFF and just as much so as all the others. The Belisarius series is just as much SFF as “Elantris” and “The Android’s Dream”; the Stormlight Archive and the Call of Cthulhu just as much so as Shades of Milk and Honey and “Shadow War of the Night Dragons”**. All of these sad little man-boys are missing the point of SFF–it’s about differences. It’s about the new, the comfortingly familiar, the strange, the terrifying, the heart-warming, the powerful, the outright bizarre, all wrapped up into one gloriously eclectic bundle. SFF is weird. To claim that it can be “polluted” or whatever by an imaginary left-wing conspiracy is patently ridiculous. The only way you CAN ruin SFF is by insisting on such arbitrary “purity”, or by actively taking a dump on fandom in general like RSHD does with his toxic bigotry. But making a space mecha book about gender issues? A fantasy about racial politics? That is very much in the spirit and soul of sci-fi and fantasy. *First official, anyway. And I always did prefer Kel. **Which really deserves to be turned into a giant parody fantasy trilogy. As an aside, what does RSHD stand for? Becca: Racist Sexist Homophobic Dipsh*t. He’s a truly appalling person, and overpoweringly stupid. Neo-fascist, loathes women…you know what, I can’t explain him without becoming sick. Here’s a link. Short version, I call him RSHD because it’s been something of a tradition here to call him that rather than his given name, to keep him from getting the attention that he desires. Yes, I am familiar with Day enough to see how that applies; I didn’t realize there was an acronym though. Okay, I confess. I destroyed SF. It was me. Single-handedly. I had an army of chickens acting as sappers, and a complete library of Star Trek tie-in novels from Pocket, and we brought that sucker down. That is why no one has been allowed to publish or write SF for the last decade and anyone who tries is immediately set upon by attack chickens, and why Harlan still wakes in the night screaming about the terrible clucking. Fantasy’s proving a tougher nut to crack. It may require turkeys. And ducks killed radio! “Okay, I confess. I destroyed SF. It was me.” DAMN IT URSULA. No no, no. He actually read everything eligible this year, and his selection -just happened- to agree with VD’s. Can’t argue with good taste after all! @UrsulaV:I think using wombats as sappers might speed the destruction of fantasy. Talking wombats with cigars. See, no fantasy there. John, you are extraordinarily patient and generous to allow people access to spew their ignorant rants on your page. It is, in a way, a kind of educational public service, to allow us to see what opposing viewpoints really look like. Trolls really hurt their own arguments when they argue so poorly. I just wanted to say thanks. I’ve learned a lot from these Hugo debates. And I will be voting.. All this big long thread, Phantom, with people explaining things to you in detail, and you’re still trying to peddle your “I’m being bullied” narrative? Shall we try to be clear one more time? Your “sin” here is in making unfounded assertions that you haven’t supported with facts but merely bluster. And in at least one case, they’re assertions you have supported with an obvious falsehood (which, while it may have been “minor” in its own right, is still a fair indication of your overall pattern of indifference to accuracy over inaccuracy in shoring up the narrative you’re clinging to). You can express all the opinions you please. If your opinions are demonstrably bogus, though, expect to be called on them. “a complete library of Star Trek tie-in novels from Pocket” *blinks* Can, uh… can I come hang out at your house? I’ve been hopping back and forth between Whatever and File 770 for days now. Watching all the clumsy puppy excuses and arguments fall to logic, while having all the same, baseless reasoning repeated again and again, only to fail and fall to logic again and again, has drained all anger from me. I’ve gone from believing the Rabid Puppies are a legitimate threat to All Things Squee-worthy to understanding these shenanigans aren’t going to have any real long term impact on either the Hugos or fandom. Five years from now it’s just going to be a humorous subject convention goers and Hugo voters talk about and laugh over. “Hey, remember when…?” Which strikes me as roughly bullshit-adjacent to the idea that the Social Injustice Beagles were really defending apolitical “entertainment” from the horrible recent tide of politically correct “literary” elitism, forcing straight white writers of a right-ward political tint into the gutters to starve in obscurity. (I’ll leave you to chortle at the notion that any ‘Golden Age’ that contained Robert A. Heinelin was ever without a polemical edge, or the notion that being conservative and “literary” are somehow mutually exclusive. writers like Robert Silverberg and Gene Wolfe are rabid Marxists.) And ducks killed radio! Video killed the Radio Star? And ducks killed radio! Never underestimate the ducks. Always bring sufficient bread to appease them. I just still can’t get over this quote from Phantom182: “So for me to live in a culture that reflects my morals and my aesthetic, I have to swim upstream against these forces and their champions. It pisses me off.” It really is that simple, isn’t it? “The world around me does not reflect my personal tastes and I don’t like it. Everyone should like the things I like and believe the things I believe, and for it to be otherwise is a personal affront.” Cut away all the bullshit about SJWs and literary tastes and populist sci-fi and affirmative action and eventually you get down to this. Other people are different from them and they can’t stand it. This is the rhetoric of a toddler. Yeah, and of course straight white men of a right-ward political bent are completely voiceless in this culture — just ask Clint Eastwood who hasn’t worked in years. You now what’s ruining SFF? Finding out were aren’t living in the same universe Heinlein thought we were living in in 1956. Because writing SF in this universe is harder, what with the lack of sapient Martians. Like other people here, I’ve been surprised, too, at Larry Correia (as well as Puppy advocate Sarah Hoyt) declaring himself a beleaguered racial minority on account of being, um… Portuguese (by heritage in Correia’s case, and by birth and upbringing in Hoyt’s). Do they also declare that Italians are a racial minority? It has also been peculiar to see Vox Day claiming to be “Native American and Mexican.” I assume it’s just one of his trumped-up “victory condition” ploys, since he makes that claim mostly to declare the evil SJW cabal of science fiction are racists because they don’t include him in their reindeer games. Anyhow, race really plays a weird role in Puppy discussions. A while back, I think it was on Correia’s blog, they got into a discussion about K Tempest Bradford. And they talked about her skin tone in post after post. They came across as obsessed with it. (Those classy Puppies!) Some questioned whether she was –actually- African-American or just pretending (because she looks light-skinned in some videos or pictures). Others think she tries too hard to be an angry black woman because she’s insecure about her skin not being black =enough=. And so on. Just… weird shit. And then later on… Correia, Torgersen, and their friends get all insulted and outraged because people say they’re racist. I mean… seriously, dude? You expect to hold blog conversations like that and NOT be seen as racist? “So for me to live in a culture that reflects my morals and my aesthetic, I have to swim upstream against these forces and their champions. It pisses me off.” This is actually straight Christian Dominionist worldview — it’s the idea that unless the culture in general is their culture, their culture cannot survive (and so the world is doomed, because Jesus). They need a cohesive culture around them that matches their worldview to raise their children in, otherwise their children will get the idea that their worldview isn’t the one true worldview and stray from the only righteous path, fap fap fap. This is why they’re so adamant about gay marriage and women being equal and so on. And why they’re scared of SF that shows a world not their own. You can read this drum being banged constantly at VD’s blogs and at Rod Dreher’s blog, as well as at the horrible blog of Doug Wilson. (I don’t recommend this course of action, mind you. I’m just saying if you’re *interested*.) To be fair, Dela, I’ve talked to a few Italian-Americans who’ve been the victim of unthinking prejudice. Valerie D’Orazio, a former editor for DC and Valiant comics, told a story about a major comics writer speaking to her for the first time and asking if she wanted some pizza in an accent roughly akin to Super Mario’s. She’s also pointed out that most Italian-American characters in comics are either mobsters or related to mobsters (Helena Bertinelli, the post-Crisis Huntress, was both a relative of mobsters and a lapsed Catholic, pulling off an “Italian-American stereotype double”). So yes, Larry can be right in saying that he faces prejudice for his heritage. Hell, let’s be honest–most of the people prejudiced against people from a Hispanic background couldn’t tell Spanish from Portugese or Mexican from Venezualan, so I have no difficulty at all believing he gets shit from people about his background. I’d be very surprised if the people giving him that shit weren’t the same people who he agrees with about gun rights, but I can definitely believe him when he says he faces prejudice. But I do object to the notion that because he faces prejudice, he can’t be racist himself. As I keep saying, racist isn’t something you are. It’s something you do. Being a minority does not give you a “Get Out Of Racist” free card when you say or do something racist yourself. @Dela: IKR? It’s just another irony burn that the chaps who moan about their nice clean genre being dirtied up by SJWs obsessed with race, gender and sexuality … engaging in frankly creepy speculation about women of colour. I also adore people who fancy themselves badass culture warriors yet have failed Activism 101 — you are judged by the company you keep, so if you’ve allied with the kind of people who think it’s witty to call a woman of colour a “half-educated savage,” or throw around misogynistic and homophobic slurs and then just lie about it even when confronted with screenshots taken BEFORE you deleted the post/Tweet? Don’t whine about being tarred by association, because you brought it ALL down on your own fool head. Do your due diligence, and own the consequences of your choices. Chad: I’ve gone from believing the Rabid Puppies are a legitimate threat to All Things Squee-worthy to understanding these shenanigans aren’t going to have any real long term impact on either the Hugos or fandom. I think they are somewhere on the spectrum in the neighborhood of Intelligent Design folks. The ID’ers got some textbooks in public schools to teach the controversy, but the last supreme court case ruled against them and took a lot of wind out of their sails. I think for the pups, the equivalent reaction would be to get an overwhelming response voting for non-puppy works, then no award. And then propose a rule change that would reduce the power of slates so that 100 slate voters can’t overpower so many random, non-slate, voters. Every time I hear a Pup say “They’re on the ballot now, so you HAVE to read them and vote for them based on merit. No Award is making it political”? I think of the Intelligent Design whackjobs saying we have to “Teach the Controversy”. @ John Seavey–the experiences you describe your friend enduring are similar in character to experiences endured if you’re female, or extremely tall, or fat, or extremely short, or have a false limb, or have a facial disfigurement, or heavly freckled, or visible burned, or have unusually large breasts, or live with the consequences of a severe health problem or a severe accident, or have a speech impediment, or have a big nose, or are an exceptionally goodlooking woman, or wear clothing that identifies your religious sect, or speak with an accent, are in any way at all self-evidently DIFFERENT from some insensitive or clueless jerk(s), of which there are many in the world, who happens to be in your vicinity. I’m not persuaded by the notion that having Portiguese heritage puts Correia in a beleaguered racial minority or affects his life more than most people’s life are daily affected by their own individuality. Though, admittedly, I might be more persuadable if I hadn’t read post after post after post on his blog talking snidely about someone else’s skin tone. sez chad saxelid: “I’ve gone from believing the Rabid Puppies are a legitimate threat to All Things Squee-worthy to understanding these shenanigans aren’t going to have any real long term impact on either the Hugos or fandom.” Hmmm… maybe. The Pups’ being a numerically miniscule subset of fandom-at-large didn’t prevent them from forcing everybody else’s Hugo noms out of a large chunk of all Hugo categories, you know? The question is, low long VD & hangers-on are going to keep their little crusade going… and given the fact that this is the Puppies’ third year running, well, the (limited) evidence at hand suggests they ain’t stopping. Dela: “Do they also declare that Italians are a racial minority?” Without discussing anything about Mr. Correia’s heritage, I’d note that not all that long ago, Italians weren’t adjudged enitrely “white.” Mind you, right now, I’m about as white as they get. There’s an SJW Conspiracy that holds Sekret Control of the Hugos And straight, white men are denied their fair share of awards.because of reverse racists: Clearly, people like Vox Day, Larry Correia, Brad Torgesen, and John Wright would be winning awards hand over fist if the entire Hugo ceremony wasn’t a complete sham Hugo Winners like Scalzi use their vast power to bend the Hugos to his will. But he doesn’t sell that many books anyway. No one respects OUR tastes. No one awards OUR books. I have to swim upstream against these forces, uphill, both ways, in the snow, just to be able to buy a book I like. What about ME? What about what I LIKE? When will the universe revolve around ME again? You have to read our works and vote for them based on merit! Annie; I am sorry that there has been nastiness over this. You yourself have shown grace under pressure, and you have my respect and affection, for what it’s worth. Randall Garrett gave Ben Bova some teasing about Bova’s Italian heritage back when Bova was a new writer (see here) for a quote from Bova in the comments. @ Annie, Seconding Will. For my part, I read your nominated short and quite liked it, and promptly went and started on the 20-sided Sorceress. So you have a new reader, Hugo or no. @Dela: I understand what you’re saying, but trust me…no, actually don’t trust me. That’s a terrible thing to ask. Research my words to determine their veracity. Much better. There is a long and ugly history of prejudice against Italian-Americans, complete with some utterly charmless racial slurs that have not entirely faded from everyday use and plenty of stereotypes, especially in New York City where the large immigrant population took longer to mix. The Sacco/Vanzetti trial is probably the most notorious incident, but it’s by no means the only one. Also, I think you’ll find if you ask around that prejudice against Latino-Americans is strong enough and irrational enough that a lot of people who aren’t Mexican-American get grouped in with those that are. I have a friend whose ancestry is Filipino, and he got called some rather nasty things in high school that showed very clearly that the same people who aren’t smart enough to understand how wrong racial prejudice is, are often the same people who don’t know how to tell different races apart. Again, I’m not willing to give Larry the benefit of the doubt regarding his own racism, and I deeply and profoundly doubt that the people who are prejudiced against his skin color or his accent count themselves as Social Justice Anythings. But I’m willing to believe that he’s been the victim of prejudice in the past, and I’ll stand up to defend him against that kind of prejudice the same way I would anyone. Because that’s kind of the whole point of what I believe in, that we shouldn’t be judged by our accents or our skin color or anything stupid like that. When I find Larry Correia wanting as a human being, believe me, it is going to be entirely due to his actions. :) John Seavey: I can add my experience to yours when it comes to Italian American stereotypes (and prejudice against Italian Americans): I can remember being asked as late as the 1960s if I knew anyone in the Mafia . . . though that might have also been related to my being from Chicago, speaking of stereotypes. Something that occurred to me a while back about Portuguese Americans and prejudice: I believe that there is at least one place and time in the U.S. where Portuguese Americans were discriminated against and had to “prove” their “American-ness,” and that’s in New England in the first half of the 20th Century. The term was “Portygee,” I believe, and it was definitely a slur; I can remember reading 1930s mystery novels set on Cape Cod where the word was fairly common. I don’t know if the Cape Cod Portuguese Americans were ever identified as Hispanic; from what I remember of the context, they were clearly defined as a foreign “other” community in and of themselves. Standard anti-immigrant prejudice, perhaps? We all swim upstream. Hell, the idea that the SJWs (whatever level of cartoon abstraction one brings to the term) are living in an SJW paradise where they have everything their way is so absurd as to be, well, of a piece with all the other stuff these people seem to say. Swimming upstream is the default condition of humanity. Those that don’t swim against the current get swept out to sea with the flotsam and jetsam. Making an effort to go where we want to go is what everyone who wants to go somewhere does. What we don’t get to do, though, is say “Because I’m swimming upstream, everyone else has to get out of the way and let me.” They’re all swimming upstream, too. Just to their own target destination is a rich panoply of target destinations. Adding to the anecdotes about Italian-Americans facing prejudice: A woman in my neighborhood while I was growing up–my best friend’s Mom–her family wasn’t just Italian but Sicilian, which, Princess Bride nonsense aside, was apparently the worst kind of Italian immigrant to be at that time. Things were pretty shitty for them. I never questioned that. But I did have a problem with the rhetorical/political use to which she put these stories: as her number one reason why she shouldn’t be expected to give a damn about today’s immigrants’ troubles. Or about people of color. Or about… well, anyone. If the subject of others’ oppression came up, she’d insist that no oppressed group today could possibly have it worse than her family had it then, so don’t expect sympathy out of her! Similarly, I don’t question her stories about working her fingers to the bone to help her husband through law school… but I do question her always bringing it up as an indignant response to Obamacare and other proposed improvements to the social safety net. Her attitude was pretty much, “I paid my dues. I’m done. I will never lift a finger for anybody else, ever again. I don’t stand in line behind nobody.” It reminds me of those guys who believe that their activism in the ’60s excuses them from checking their privilege today. That plus a huge tacky game of Oppression Olympics.. Besides, “I’m a person of color myself, what I do therefore doesn’t count as racism” was sort of Requires Hate’s schtick, wasn’t it? When your rhetoric mirrors that of RH, it’s time to rethink your strategy. kurt: We all swim upstream. …What we don’t get to do, though, is say “Because I’m swimming upstream, everyone else has to get out of the way and let me.” If I had to guess, I’d guess that the mentality is likely more along the lines of: I’ve experienced some unfair things and survived. I sucked it up. You should too. The baseline thinking of the puppies seems to be they’ve been discriminated against and suffered in silence for so long that discrimination is the norm to the point that its OK if they become part of the discrimination What they want to do is pull the conversation down to the individual so they can focus on the terrible wrongs they’ve suffered. And if they can pull the conversation down to the individual, then people can’t point out that compared to the kinds of discrimination that goes on at a systemic level, their complaints lose to scale. As a simple example, systemically speaking, white males have a massive majority of hugo wins since they were first handed out. It’s a massive bias. Which means the number of white male winners HAS TO DROP for the awards to match the population and therefore be a fair reflection of merit rather than gender. There is no way white males can maintain +80% of the Hugos and claim they’ve won on merit alone. But the pups want to ignore that systemic level correction, that strategic level correction and instead focus on the tactical result and cry fowl that they’re losing so much more than they did, that women are winning so much more than they did, and then claim that it MUST be affirmative action. Systemically, white men can’t maintain 80% of the awards and claim it is purely based on merit. But if they focus on the individual level, the tactical level, they can ignore the systemic pre-bias that is inherent in the system and needs correcting. One piece of anti-Portuguese-American racism from our rich history of racism toward whoever just showed up is Manhattan-style clam chowder. It’s actually from Rhode Island, created by Portuguese fishermen. But it got named “Manhattan-style” by New Englanders who didn’t like it. And since they didn’t like New York, either, they combined the two into a handy insulting name. @John Seavey “This is the rhetoric of a toddler.” #NotAllToddlers @Mary Frances According to the New Bedford, Massachusetts entry in Wikipedia “The Greater Providence-Fall River-New Bedford area is home to the largest Portuguese-American community in the United States.” You remember the movie Mystic Pizza? Julia Roberts, Anabeth Gish and Lilli Taylor were three main characters who work in a pizza joint after high school while they try to figure out what they are going to do with their lives and loves. I vaguely remember that one of Robert’s lines included the words “I’m just a dumb portagee” when she was arguing with her very rich WASPy boyfriend. I tried looking “portagee” up too. What’s interesting is that the word, at least in Urban Dictionary, seems to be in the process of redefinition from a probably mispronunciation by English speakers which was used as a slur, to having more neutral or positive connotations. /end word geekery @kurtbusiek I did not know that. Very cool, especially since tomatoes travelled to Spain and Portugal from the Americas. @KurtBusiek: but then doesn’t that get into general Massholery? I kid. Personally, I’m fervently against Manhattan-style clam chowder because New England style is tastier to me. The particulars of the name was entirely unknown to me. I just had to Google which sort is which, because I can never remember what name goes with which. I like the white kind and not the red kind of clam chowder. Because I like cream more than I like tomatoes. Rhode Island should be affronted. @Nicole: Well, exactly. “I was discriminated against, so now I’m gonna discriminate!” makes little sense to me. But “I got mine” and pulling up the ladder after you is an attitude often associated with the Pups’ other attitudes. (And while the vast majority of Italian-Americans have nothing to do with the Mafia… I must admit that the two families of same I have been closest to in my life HAVE worked for outlying enterprises thereof. Like, when Mom and I saw “Bugsy”, she nudged me and said “Hey, the guy in this scene is the one that originally hired Mr. X.” Me: “Hey, yeah, he is.” It happens. But my personal friends do not represent the world. And Mr. X is a lovely, lovely man.) Frankly, I had no idea Larry Correia was any sort of “ethnicity” other than 100% Whitey McWhite until he started going on about it with much aggro. I HAVE judged him on the content of his character — and it’s awful. On the internet, nobody knows you’re a dog — until you call yourself one. But everyone knows you’re an asshole. Woof. Putting the thread to bed for the night. Will be back up tomorrow morning. Update: Back up! @Lurkertype: My impression, from meeting him twice in meatspace and a couple of other times online, is that Correia is a minority in the same way that Vox is a Native American. That is, it has no effect on his daily life, but it makes a handy bludgeon when he disagrees with someone. For the record, I am about as Native American as Beale is. The difference is, I knew it all my life, instead of discovering it last year through a blood test. I know the tribe, the family scandal, and even what happened to the other branch of the family. (THAT story is too bizarre to tell, at least in this context.) There’s just no reason for me to claim membership in the tribe; for all purposes, I’m a white guy, and I’m okay with that. I’ve certainly never encountered any anti-Native discrimination, and it would be dishonest of me to claim otherwise. Nicole J. LeBoeuf-Little said: .” I have nothing at all to add to this. I just want to bask in its awesomeness for a little while. :) On the racism towards groups that are now ‘white’, it’s amusing how quickly “scientific racists” develop amnesia about the history of which groups are and aren’t white. All that stuff about how race and IQ supposedly correlated? Used to include studies which supposedly showed the Irish and Italians were not as smart as people of Northern European ancestry. That, of course, quietly went away as cultural shifts in who is and isn’t in the dominant group happened. Of course that’s what all this is about. Always has been. Do the kids still say “no duh”? Probably not, but, you know, no duh. The only reason anyone gave credence to the SP claims that they just want a More Inclusive Fandom, or are against them snobby literary elites, is that tendency to try to give people the benefit of the doubt, and to assume that if someone presents dumb, contradictory arguments, or presents ‘facts’ that are quickly jettisoned as mere ‘details’ when they turn out to be wrong, then what must be going on is confusion or mistake, rather than malice. But, as you concede, it’s malice. It’s a selfish and immature complaint that SFF isn’t centered on and dedicated to your entitlement complex about how the world should be. That is, of course, why these accusations that ‘SJWs’ are rigging votes and trying to totally take over SFF rather than merely staking out their own segment of the market; it’s what the SPs wish they were doing. And now that their usual arguments about Go Write Your Own Fiction If You Don’t Like It and The Market Speaks! are turning out not to be in their favor, they’re stamping their little feet and shrieking that Scalzi wasn’t allowed to put a hotel on Park Place. phantom: Everybody has a last straw. That was mine. At this point, haven’t you just admitted to being a complete and total bigot? Haven’t you just admitted to politicizing other people’s genitals??? Someone you don’t even know was born with male bits and is happier with female bits. Who the fuck are you to say they are your last straw????? “So for me to live in a culture that reflects my morals and my aesthetic, I have to swim upstream against these forces and their champions. It pisses me off.” And so, after many posts over multiple threads filled with all sorts of BS, we eventually get the truth. The more I read from the SW crowd, the more I have come to believe them. Their slate could be the most diverse the Hugo have ever seen. Sure. They can claim that all day. The issue is that their slate is not the one that was voted. In essence, the Sad Puppies are as irrelevant as the U.S. Taxpayer Party during a Senate vote. RHDWTFBBQ (or whatever the cool kids call him today) openly admitted to using the SP slate as a start for his “own”. Brad & Larry’s cries of victory hold as much weight as my own when the Lions win. And should get as much media attention. @Revbobmib in the last five years I found out I qualify as a Daughter of the Revolution and that my family is also descendants of the Abanaki tribe (currently in a revival of sorts). It’s possible I have similar heritage on my deceased dads side but I could never get my grandfather to give me a straight answer. I’m an Orthodox Jewish convert who grew up sometimes poor brought up with white middle class values and went to private school for grades 1-3 and 10-12 on partial or full scholarship while living in the “slums” of a very rich and totally white New England town. I still remember in 10th or 11th grade when the 1st black family moved in as it was a novelty to the town. I did notice after I converted to Judaism in my 30s the world treated me a bit differently based on food, Shabbat, and holiday restrictions as well as how I dress in areas where people know long sleeves/long skirt/hair covering = Orthodox Jew. The world I live in no longer looks like me. Finding out I have Native American ancestors changes nothing in the way becoming a Jew did in how I’m treated because its invisible to the world. Given that Day lives in Italy I can certainly believe that he has encountered disdain; from my experience Italians are singularly unimpressed by entitled Americans who seem to imagine that they are conferring some sort of benefit by their mere presence. That has nothing to do with Day’s origins; it’s his behaviour which generates the disdain. There are plenty of US citizens who have lived in Italy for years who don’t provoke that response, just as there are plenty of US tourists who behave with courtesy as guests in someone else’s country. It does occur to me that the Phantom has failed to grasp that his/her corner of the US does not represent the global population; if s/he can’t handle gender issues then countries where straight guys routinely embrace each other would probably result in a total identity melt down. In fact, now that I come to think of it, Day’s hysteria may well have been precipitated by this alarming discovery. That and the fact that Venetians like John’s work so much they’ve named a bridge after him… @Tasha: “because its invisible to the world.” Yes, exactly… and I attended a private school on scholarship for grades 7-12, so I grok that angle as well. I am intellectually aware of my NA heritage, but my life experience has always been that of a white guy, with all of the privilege that living in a dominant-white culture carries. Likewise, Correia is in no danger of getting pulled over for Driving While Brown, and nothing he says about his humble beginnings will change that. Our skin tones differ in that I’m a pasty, pale guy and he looks like he spends some time outdoors, but both fall comfortably in the “white male” spectrum. I asked for and received many suggestions for good Space Opera or Military Science Fiction that maybe weren’t on the Hugo Award list. I got back a huge number of suggestions (THANK YOU!), including many Hugo nominees. I have tried to consolidate and alphabetize by author. I am not familiar with all of these, so I may have muddled some of it. Unless otherwise noted, assume these are Space Opera and/or Military Science Fiction. But, there were also suggestions for SF/Space mysteries, Noir, some Fantasy, too. Also, be advised that Locus Reviews looks to be a good resource for finding cool books (Locusmag.com). So, here’s the list! Saladin Ahmed: excellent standard fantasy in a less-familiar-to-western-readers setting. Kevin Anderson: Saga of the Seven Suns, The Dark Between the Stars Neal Asher: The Skinner, Zero Point, Jupiter War, Dark Intelligence Rachel Bach: Fortune’s Pawn and sequels Stephen Baxter: Proxima, Ultima Elizabeth Bear: Steles of the Sky Greg Bear: War Dogs Gregory Benford and Larry Niven: Shipstar Eric Brown: The Serene Invasion Steven Brust: (good fantasy) The Khaavren Romances Lois McMaster Bujold: Vorkosigan Saga. Complex, interlocking series. Wonderful space opera. The Warrior’s Apprentice and The Vor Game are solid MilFic. Gentleman Joe and The Red Queen is coming out next year. Captain Vorpatril’s Alliance (caper book). Cryoburn. Space mysteries: Cetegunda, Ethan of Athos, Komarr, Diplomatic Immunity, Cryoburn. Jack Campbell: Lost Fleet Sextology; milsf (also space opera) Jacqueline Carey: Terre d’Ange (later books more than the first trilogy) (epic fantasy, too) Deborah Coats: more modern/urban fantasy, but good. James S. A. Corey: Leviathan Wakes (and rest of Expanse series) C.J. Cherryh: the Foreigner/Bren Cameron books has been a series of tightly linked trilogies. Start with Pride of Chanur series or The Faded Sun: Ksrish, or Hellburner. Rimrunners; Downbelow Station. Isabel Cooper Gordon Dickson: Dorsai (Good mil sf) Stephen Donaldson: Gap series (*trigger warning*) David Drake: Royal Cinnabar Navy, Seaes of Venus Charles E Gannon: Caine Series. “old-fashioned” Mil SF. Fire with Fire. Trial By Fire nominated for Nebula. Guy Gabriel: (good fantasy) Stephen Gould: Exo James Gunn: Transcendental Peter Hamilton: Pandora’s Star series ; It’s got big space opera elements and some seriously interesting military aspects. Ben Hennessy: Queen of the World John C Hemry: Blackjack (mil sf) M.C.A. Hogarth: Spots the Space MArine Andrea K Host Tanya Huff: Valor series. Light and relatively fluffy Space Marines with some serious points lurking underneath. Ben Jeapes: Phoenicia’s Worlds Danielle Jensen: Stolen Songbird. and sequel Ann Leckie: Ancillary Justice, Ancillary Sword Sharon Lee and Steve Miller Scott Lynch: The Lies of Locke Lamora, A Gallery of Rogues (fantasy) George R.R. Martin and Gardner Dozois: Old Venus Ken MacLeod: Learning the World Paul McAuley: Evening’s Empires Jim McDonald and Debra Doyle: The trilogy that starts with The Price of the Stars; some excellent space opera. Elizabeth Moon: Heris Serrano books. It’s “unjustly cashiered fleet officer” saved the day. Also, last five Elizabeth Moon books, which kind of answer the “what happens to societies/people when the heroes have finished and moved on?” question, and give me stuff about drains, baking, and trading economics. Daniel Keys Moran: The Long Run Chris Moriarty: Spin State Richard Morgan: Altered Carbon, Takeshi Kovacs (mystery and mil sf, and noir) Linda Nagat: The Red: First Light Christopher Nutall: very prolific: A Life Less Ordinary; The Bookworm series M.C. Planck: The Kassa Gambit Robert Reed: The Memory of Sky Alastair Reynolds: On the Steele Breeze (and other works) Nora Roberts (aka J.D. Robb): In Death books (near-future, police procedural with SF elements) Kristine Kathryn Rusch: Diving Universe and (mystery) Retrieval Artist Michelle Sagara/West: (good fantasy) John Scalzi: Old Man’s War and sequels; Redshirts Mike Shepherd: Kris Longknife books (mil sf) Jon Steakley: Armor Allen Steele Charles Stross: Saturn’s Children, Neptune’s Brood Frank Tuttle Chrysoula Tzavelas: “Citadel in the Sky” and other works. John Varley: Dark Lightning David Weber: Honor Harrington Series Scott Westerfield: Risen Empire, Killing of Worlds Django Wexler: “The End of War” in Asimov’s magazine Walter Jon Williams: For straight space opera, look “Dread Empire” trilogy, beginning with “The Praxis”. He also wrote one of the seminal works of cyberpunk, “Hardwired”. There’s plenty of WJW to keep you occupied for quite some time after you finish those. (Trilogy: The Praxis, The Sundering, and Conventions of War; there’s also a sequel novella, Investments, available as an ebook). This is not a game; dep state; fourth wall; (near future noir mysteries/thrillers) That’s a pretty great list. Thanks again to all and to Our Gracious Host.
http://whatever.scalzi.com/2015/04/23/hugos-and-class/
CC-MAIN-2017-22
refinedweb
41,757
70.84
ADDITIONAL SYSTEM INFORMATION : $ java -version java version "11.0.1" 2018-10-16 LTS Java(TM) SE Runtime Environment 18.9 (build 11.0.1+13-LTS) Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.1+13-LTS, mixed mode) $ [System.Environment]::OSVersion.Version Major Minor Build Revision ----- ----- ----- -------- 10 0 17134 0 A DESCRIPTION OF THE PROBLEM : When using the new HttpClient it dosen't download files whose filesize in byte is reported bigger than Integer.MAX_VALUE most of the time if using HTTP 1.1. 1) A file with size of 2147483648 (= Integer.MAX_VALUE + 1) reported in the content-length header just dosen't invoke any methode besides the onSubscribe methode of a HttpResponse.BodySubscriber. 2) Interestingly a file with a size of 4294967294 (= Integer.MAX_VALUE - Integer.MIN_VALUE - 1) reported in the content-length header downloads normally. 3) However a file with a size of 4294967296 (= Integer.MAX_VALUE - Integer.MIN_VALUE + 1) downloads a file with the size of exactly 0 bytes. For some reson using HTTP 2 works for at least the first case and downloads that file correctly, I don't have the resources to easily test the other cases using HTTP 2. STEPS TO FOLLOW TO REPRODUCE THE PROBLEM : 1. Create Http Client that will use HTTP 1.1 (for the example file I provided this is important as cloudflare will server up HTTP 2 if we don't specificly request HTTP 1.1) 2. Create a request to download a file with one of the specified sizes (for example: (content-length = 2147483648)) 3. Try to download from that url into a file with a HttpResponse.BodyHandlers.ofFile() EXPECTED VERSUS ACTUAL BEHAVIOR : EXPECTED - It should download the file normally into the specified file on the hard drive. ACTUAL - The file on the Hard drive is created, but dosen't get any content and remains at 0 bytes size. Looking in the Task Manager shows that java is downloading with full speed, but I don't know what is happening with that data. For some reason it also sometimes happend that the CPU usage shot up from ~5% to ~30% and network usage stopped. ---------- BEGIN SOURCE ---------- import java.io.IOException; import java.net.URI; import java.net.http.HttpClient; import java.net.http.HttpRequest; import java.net.http.HttpResponse; import java.nio.file.Path; public class Main { public static void main(String[] args) throws IOException, InterruptedException { HttpClient client = HttpClient.newBuilder() .version(HttpClient.Version.HTTP_1_1) .build(); HttpRequest request = HttpRequest.newBuilder() .uri(URI.create("")) .build(); client.send(request, HttpResponse.BodyHandlers.ofFile(Path.of("./test.bin"))); } } ---------- END SOURCE ---------- FREQUENCY : always
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8212926
CC-MAIN-2022-05
refinedweb
433
51.95
uanrobinson2002Members Content count55 Joined Last visited Community Reputation187 Neutral About joshuanrobinson2002 - RankMember Personal Information - LocationNorton, Ks joshuanrobinson2002 replied to joshuanrobinson2002's topic in General and Gameplay ProgrammingRavyne, I expected an exception because I've seen exceptions thrown with similar code in the past. I simply assumed it would throw in this case as well, and got egg all over my face as a result . Thanks for the responses everyone, I think I understand. joshuanrobinson2002 posted a topic in General and Gameplay ProgrammingThe other day, I had one of our new developers asking me how he could get a null reference in C++. Now, I know enough about C++ to know I'm not an expert, but I felt comfortable fielding that question. I explained that a reference in C++ could not be null, but that it was possible to have an invalid reference if, for example, you return a reference to a temporary. To illustrate, I wrote the following simple code: [CODE] #include () [/CODE]. joshuanrobinson2002 replied to cupidstunt's topic in For BeginnersI don't know of any alternate high-score tutorials, but maybe we can help you figure out what's wrong with the one you're using. [quote name='cupidstunt' timestamp='1322739372' post='4889375'] But everytime I try to compile it will not work. [/quote] I assume this means you're getting a compiler error? Could you tell us what the error is exactly, and maybe post some of your code so we can help troubleshoot it. I didn't take a real close look at the tutorial, but it appears that it doesn't specify which class some of the functions should belong to (SaveHighScores and LoadHighScores, for example). Just a shot in the dark, but do all your functions belong to a class (or struct, or interface)? In C#, they must. [code] struct HighScoreData { static void SaveHighScores(HighScoreData data, int count) //<- This is okay. { /*Stuff*/ } } static HighScoreData LoadHighScores(string fileName) //<- This is not, this function does not belong to a class, struct, interface, etc... {/*Stuff*/} [/code] Hope that helps. joshuanrobinson2002 replied to ndrul's topic in For BeginnersHey, [color="#284b72"]ndrul. If you're still stuck on deciding between C++ and C#, why not just give C# a shot and see if you like it. You said you spent two months learning C++, so you've probably got a handle on basic programming principles such as flow control, conditional statements, and functions. C# and C++ also share a somewhat similar syntax, which might make picking up C# a little eaiser. So, if this decision is a stopping point for you, why not just download C#, read up on it a bit, and spend a week (which is how long you said you've been working on your game in C++) working on a game in C# and see which one you like better, or feel more productive in. I know that sometimes wondering if you're using the right language, or doing things "the right way" can put a halt on a project. And, I'm not trying to say that these things aren't important. But, you put a lot of emphasis on getting things done (productivity) in you're original post. If you're spending a lot of time worrying about choosing "the right language" you're not being very productive on your game .[/color] joshuanrobinson2002 replied to naveen's topic in For BeginnersSounds like you're being asked to implement an Observer Pattern... joshuanrobinson2002 replied to carpetfilter's topic in For BeginnersQuote:Original post by Palidine If you're doing a straight up text-based game then command line is obviously the correct choice. I think he's talking about something that uses text characters for graphics. A-la Rogue. If thats the case and you're looking for a portable console library, you should look into Curses. PDCurses is a curses library for Windows. I've never used it myself. If portability isn't a concern, is there any reason you can't use C# and System.Console? I imagine it's a lot more pleasant to work with than straight Win32 :). joshuanrobinson2002 replied to Dbproguy's topic in General and Gameplay ProgrammingQuote:Original post by alvaro I tried to learn a bit of python once. I gave up after I found out that `print input()' is powerful enough to evaluate 4*(1+2) as 12. If that's a problem, you shoud use raw_input() instead. joshuanrobinson2002 replied to turlisk's topic in For BeginnersI'm not sure I understand. You need the class that handles loading from the XML file to be available across multiple parts of your game? Is there any reason you can't new up an instance of this class where it is necessary? Or, perhaps, even make it static so that it doesn't require you to new up an instance? For example: public static class XMLLoader { public static PlayerStats LoadStats(String filename) { PlayerStats ps = new PlayerStats; /*Load ps from XML*/ return ps; } } MainGame.cs public class MainGame { public void DoSomething() { PlayerStats ps = XMLLoader.LoadStats(*/Path To File*/); //Stuff... } } StatsScreen.cs public class StatsScreen { public void DoSomething() { PlayerStats ps = XMLLoader.LoadStats(*/Path To File*/); //Stuff... } } etc, etc... Although, if you're storing the results of the load operation in global variables, I don't see what you'd need access to this class for beyond the initial load. Being global, wouldn't your main game, status screen and upgrade screen already have access to the stats you read from the file? So why read that file again? Hope that helps. joshuanrobinson2002 replied to GraySnakeGenocide's topic in For BeginnersWell, I'll echo others in this thread who said that XNA can be used to target more than just the 360. That being said, if you don't wnat to use XNA you'll need to find other ways of drawing to the screen and playing sounds and music and whatnot. jpetrie gave some suggestions, and I'll add that SDL and SFML also have .NET implementations. joshuanrobinson2002 replied to wioneo's topic in For BeginnersLooks like what you're looking for is a pointer to a member function... It's not something I actually use often, so I can't personally give you a great deal of information on how to use them. The updated code might look something like this... #include <iostream> #include <vector> #include <string> class unit { std::string _name; public: unit(std::string name) { _name = name; } void draw_unit() { std::cout << _name << std::endl; } }; class controller { std::vector<unit> _units; void loop_units(void (unit::*unit_function)()) { for (int i = 0; i < _units.size(); i++) (_units[i].*unit_function)(); } public: void draw_units() { loop_units(&unit::draw_unit); } controller() { _units.push_back(unit("Bob")); _units.push_back(unit("Steve")); _units.push_back(unit("Mike")); } }; //end class int main() { controller c; c.draw_units(); system("pause"); } Google can probably give you more information. I believe this is the site I referenced last time I was looking at pointers to member functions. Hope that helps. joshuanrobinson2002 replied to rnw159's topic in General and Gameplay ProgrammingGlad I could help. I don't do as much coding at home as I used to, but at work we use DevPartner Studio for code analysis. Luckily, we haven't run into many situations where applications just weren't running fast enough (knock on wood) so the profiler doesn't see as much use as it maybe should. I think it requires an expensive liscence though, so it might not be the right tool for a hobby project :). joshuanrobinson2002 replied to rnw159's topic in General and Gameplay ProgrammingQuote:Original post by rnw159 Why should I learn c++ when game maker can do make games faster and better?! Because C++ can be used to create more than video games. Because C++ is a marketable skill. Because you said: Quote:Original post by rnw159 I love to program ... I told him about the editor and how proud of it I was ... It felt great. Couple of fistpump moments. Game Maker has been around for, like, 11 years. Expecting to out preform it with an app you spent eight hours writing is probably being unfair to yourself. Quote:Original post by rnw159 How can I write my code to be as fast as that? How can I make my engine as fast as the game maker one? Hard to say. I suppose you'd have to profile your code, find out where it's performing poorly and see if you can optmize it. If you don't already have one, you might be able to find youself some profilers if you google something like "C++ Profiler" EDIT: Or, instead of googling, you can use the free profiler Sneftel linked to. joshuanrobinson2002 replied to draconar's topic in For BeginnersAfter selecting "New Project" from the file menu, you should end up with a tree view in a pane off to the left with a bunch of project types. One of the nodes should be Visual C++ and underneath that you should see several more project types (ATL, CLR, General, MFC...). A CLR Project is a C++/CLI (Managed) Project. You want the project type "Win32". Choose a "Win32 Console Application" (I think) and when the Application Wizard pops up, click "Next" and under "Additional options" choose "Empty Project" Hope that helps. joshuanrobinson2002 replied to Calin's topic in For BeginnersAre you posting the entire contents of your header files? Are there any other files in your project that could be causing the problem? What about the rest of the units.cpp like RobMaddison suggested? I only ask because I opened a new, empty win32 console project in VS2K8 copied and pasted your code exactly as it is into the approiate files, closed the curly braces in your units.cpp snippet, added a main function, and it built fine for me. This page might give you some ideas on what to look for that might be causing the problem. joshuanrobinson2002 replied to ARC inc's topic in General and Gameplay ProgrammingAre you seeding the random number generator? #include <time.h> int x; srand(time(NULL)); //Seed the random number generator, or you'll always get the same number. x = rand() % 5; //The rest of your code. I think you only need to seed it once, so probably in your main() function and not in your battle function.
https://www.gamedev.net/profile/133700-joshuanrobinson2002/
CC-MAIN-2017-30
refinedweb
1,732
63.09
You can get a list of these by typing cwm --help Command line RDF/N3 tool <command> <options> <steps> [--with <more args> ] options: --pipe Don't store, just pipe out * steps, in order left to right: --rdf Input & Output ** in RDF/XML insead of n3 from now on --n3 Input & Output in N3 from now on. (Default) --rdf=flags Input & Output ** in RDF and set given RDF flags --n3=flags Input & Output in N3 and set N3 flags --ntriples Input & Output in NTriples (equiv --n3=usbpartane -bySubject -quiet) --language=x Input & Output in "x" (rdf, n3, etc) --rdf same as: --language=rdf --languageOptions=y --n3=sp same as: --language=n3 --languageOptions=sp --ugly Store input and regurgitate, data only, fastest * --bySubject Store input and regurgitate in subject order * --no No output * (default is to store and pretty print with anonymous nodes) * --base=<uri> Set the base URI. Input or output is done as though theis were the document URI. --closure=flags Control automatic lookup of identifiers (see below) <uri> Load document. URI may be relative to current directory. --apply=foo Read rules from foo, apply to store, adding conclusions to store --patch=foo Read patches from foo, applying insertions and deletions to store --filter=foo Read rules from foo, apply to store, REPLACING store with conclusions --query=foo Read a N3QL query from foo, apply it to the store, and replace the store with its conclusions --sparql=foo Read a SPARQL query from foo, apply it to the store, and replace the store with its conclusions --rules Apply rules in store to store, adding conclusions to store --think as -rules but continue until no more rule matches (or forever!) --engine=otter use otter (in your $PATH) instead of llyn for linking, etc --why Replace the store with an explanation of its contents --why=u proof tries to be shorter --mode=flags Set modus operandi for inference (see below) --reify Replace the statements in the store with statements describing them. --dereify Undo the effects of --reify --flatten Reify only nested subexpressions (not top level) so that no {} remain. --unflatten Undo the effects of --flatten --think=foo as -apply=foo but continue until no more rule matches (or forever!) --purge Remove from store any triple involving anything in class log:Chaff --data Remove all except plain RDF triples (formulae, forAll, etc) --strings Dump :s to stdout ordered by :k whereever { :k log:outputString :s } --crypto Enable processing of crypto builtin functions. Requires python crypto. --help print this message --revision print CVS revision numbers of major modules --chatty=50 Verbose debugging output of questionable use, range 0-99 --sparqlServer instead of outputting, start a SPARQL server on port 8000 of the store --sparqlResults After sparql query, print in sparqlResults format instead of rdf finally: --with Pass any further arguments to the N3 store as os:argv values * mutually exclusive ** doesn't work for complex cases :-/ Examples: cwm --rdf foo.rdf --n3 --pipe Convert from rdf/xml to rdf/n3 cwm foo.n3 bar.n3 --think Combine data and find all deductions cwm foo.n3 --flat --n3=spart Mode flags affect inference extedning to the web: r Needed to enable any remote stuff. a When reading schema, also load rules pointed to by schema (requires r, s) E Errors loading schemas of definitive documents are ignored m Schemas and definitive documents laoded are merged into the meta knowledge (otherwise they are consulted independently) s Read the schema for any predicate in a query. u Generate unique ids using a run-specific Closure flags are set to cause the working formula to be automatically exapnded to the closure under the operation of looking up: s the subject of a statement added p the predicate of a statement added o the object of a statement added t the object of an rdf:type statement added i any owl:imports documents r any doc:rules documents E errors are ignored --- This is independant of --mode=E n Normalize IRIs to URIs e Smush together any nodes which are = (owl:sameAs) See for more documentation. Setting the environment variable CWM_RDFLIB to 1 maked Cwm use rdflib to parse rdf/xml files. Note that this requires rdflib. Flags for N3 output are as follows:- a Anonymous nodes should be output using the _: convention (p flag or not). d Don't use default namespace (empty prefix) c Comments added at top about version and base URI used. e escape literals --- use \u notation g Suppress => shothand for log:implies i Use identifiers from store - don't regen on output l List syntax suppression. Don't use (..) n No numeric syntax - use strings typed with ^^ syntax p Prefix suppression - don't use them, always URIs in <> instead of qnames. r Relative URI suppression. Always use absolute URIs. s Subject must be explicit for every statement. Don't use ";" shorthand. t "=" and "()" special syntax should be suppresed. u Use \u for unicode escaping in URIs instead of utf-8 %XX v Use "this log:forAll" for @forAll, and "this log:forAll" for "@forSome". / If namespace has no # in it, assume it ends at the last slash if outputting. Flags for N3 input: B Turn any blank node into a existentially qualified explicitly named node. Flags to control RDF/XML output (after --rdf=) areas follows: b - Don't use nodeIDs for Bnodes c - Don't use elements as class names d - Default namespace supressed. l - Don't use RDF collection syntax for lists r - Relative URI suppression. Always use absolute URIs. z - Allow relative URIs for namespaces Flags to control RDF/XML INPUT (after --rdf=) follow: S - Strict spec. Unknown parse type treated as Literal instead of error. T - take foreign XML as transparent and parse any RDF in it (default it is to ignore unless rdf:RDF at top level) L - If non-rdf attributes have no namespace prefix, assume in local <#> namespace D - Assume default namespace decalred as local document is assume xmlns="" R - Do not require an outer <rdf:RDF>, treating the file as RDF content (opposite of T) Note: The parser (sax2rdf) does not support reification, bagIds, or parseType=Literal. It does support the rest of RDF inc. datatypes, xml:lang, and nodeIds.
http://www.w3.org/2000/10/swap/doc/CwmHelp
CC-MAIN-2014-10
refinedweb
1,035
54.66
Dropped: Package ObjectsEdit this page on GitHub Package objects package object p { val a = ... def b = ... } will be dropped. They are still available in Scala 3.0, but will be deprecated and removed afterwards. Package objects are no longer needed since all kinds of definitions can now be written at the top-level. E.g. package p type Labelled[T] = (String, T) val a: Labelled[Int] = ("count", 1) def b = a._2 case class C() implicit object Cops { def (x: C) pair (y: C) = (x, y) } There may be several source files in a package containing such toplevel definitions, and source files can freely mix toplevel value, method, and type definitions with classes and objects. The compiler generates synthetic objects that wrap toplevel definitions falling into one of the following categories: - all pattern, value, method, and type definitions, - implicit classes and objects, - companion objects of opaque types. If a source file src.scala contains such toplevel definitions, they will be put in a synthetic object named src$package. The wrapping is transparent, however. The definitions in src can still be accessed as members of the enclosing package. Note 1: This means that the name of a source file containing wrapped toplevel definitions is relevant for binary compatibility. If the name changes, so does the name of the generated object and its class. Note 2: A toplevel main method def main(args: Array[String]): Unit = ... is wrapped as any other method. If it appears in a source file src.scala, it could be invoked from the command line using a command like scala src$package. Since the "program name" is mangled it is recommended to always put main methods in explicitly named objects. Note 3: The notion of private is independent of whether a definition is wrapped or not. A private toplevel definition is always visible from everywhere in the enclosing package.
http://dotty.epfl.ch/docs/reference/dropped-features/package-objects.html
CC-MAIN-2019-35
refinedweb
311
65.52
So the title says it all. I’ve also tried updating Discord.py with the following code: import discord, os, discord.ext import discord_components as dcomponents from discord_components import DiscordComponents, Button, Select, SelectOption from discord.ext import commands def vcheck(): if discord.__version__ != "2.0.0a": try: printf('DISCORD UPDATE DETECTED. Installing...', 'red') result, ver = [], discord.__version__; result.append(os.system('pip install --upgrade pip')) result.append(os.system('pip install -U git+')) printf('Succesfully installed version {}! Old version: {}.nInstall results: {}'.format(discord.__version__, ver, result), 'green') del ver except (BaseException, Exception) as exc: printf(repr(exc), 'red') pass if discord.__version__ != "2.0.0a": vcheck(); vcheck() # loops indefinitely and it won't run until Discord's version is 2.0.0a. When vcheck() is called, it would loop indefinitely and it wouldn’t run until Discord’s version is 2.0.0a. (It does not either update at all). Any fix for this? Is this question duplicated? No. I have reviewed that there are no questions with this exact title. Answer Yeah, it looks like the problem is an old version of discord.py I’d recommend first not using actual Python code to update it. If you can, try using the command line tool pip to update. Here are a few methods you can try. Upgrading using pip Run pip install --upgrade discord and pip install --upgrade discord.py Reinstalling Run pip uninstall discord and pip uninstall discord.py Those commands uninstall discord.py Then run pip install discord and pip install discord.py These steps will reinstall discord.py. Using the command line version of pip will probably work better than installing using code. Please tell me whether or not this works. Note: discord.py is going to be deprecated, so you may want to find another library to make a Discord bot.
https://www.tutorialguruji.com/python/discord-py-discord-utils-has-no-attribute-format_dt/
CC-MAIN-2021-43
refinedweb
306
61.83
Opened 8 years ago Closed 8 years ago #3396 closed defect (fixed) AccountModule breaks trac Description When I try to enable the AccountModule component either through the admin page or directly in the ini it causes the system to then prevent me from logging in. In Firefox I get the follwong error: Redirect Loop Redirection limit for this URL exceeded. Unable to load the requested page. This may be caused by cookies that are blocked. After I delete all mof my cookies I get the following message: Error: Not Found Unknown preference panel In IE 7 the login attempt just times out. If I then remove the AccountModule component from the ini file then the system returns to a normal working state. Here are the system details: Trac: 0.11rc2 Python: 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit (Intel)] setuptools: 0.6c7 SQLite: 3.4.0 pysqlite: 2.3.3 Genshi: 0.5 jQuery: 1.2.3 I am trying to use tracaccountmanager 0.2.1dev-r3857 I am running Tracd on the company server. I have attached my ini files and the debug log. Attachments (2) Change History (4) Changed 8 years ago by ben Changed 8 years ago by ben Debug log comment:1 Changed 8 years ago by roh had a similar problem when updating from post-0.10/pre-0.11svn to 0.11 stable. the pw-change enforcement was enabled by default after updating accountmanagerplugin. the following clicktrail sent accounts into a redirect loop. quick debugging in irc led to this workaround: --- acct_mgr/web_ui.py (revision 3950) +++ acct_mgr/web_ui.py (working copy) @@ -186,7 +186,7 @@ if req.session.get('force_change_passwd', False): redirect_url = req.href.prefs('account') if req.path_info != redirect_url: - req.redirect(redirect_url) + pass return (template, data, content_type) # INavigationContributor methods comment:2 Changed 8 years ago by ben - Resolution set to fixed - Status changed from new to closed That did it. Thanks for your help. ini file
https://trac-hacks.org/ticket/3396
CC-MAIN-2016-40
refinedweb
331
60.92
PMSEARCHTEXTQUERY(3) Library Functions Manual PMSEARCHTEXTQUERY(3) pmSearchTextQuery - fulltext search for metrics, instances and instance domains provided by PCP search services #include <pcp/pmwebapi.h> int pmSearchTextQuery(pmSearchSettings *settings, pmSearchTextRequest *request, void *arg) cc ... -lpcp_web Executes fulltext search in name, oneline help, helptext (when available) as specified by request: query Query string that will be used to search. count Limits number of results. Defaults to 10. offset Search offset. Defaults to 0. type_metric, type_indom, type_inst Bit flags that limit query to only take into the account specific type of entities. Defaults to all. highlight_name, highlight_oneline, highlight_helptext Bit flags that specify whether or not to highlight matched terms in results. Defaults to none. Highlighted terms are wrapped with `<b>' and `</b>'. infields_name, infields_oneline, infields_helptext Bit flags that allow limiting fulltext search query matching only to specified fields. Defaults to all. return_name, return_indom, return_oneline, return_helptext, return_type Bit flags for omitting specific fields from result. Defaults to all. Fields may be omitted either way if value of a field doesn't exist for a given record.QUERY(3) Pages that refer to this page: pmsearchsetup(3), pmwebapi(3)
https://man7.org/linux/man-pages/man3/pmsearchtextquery.3.html
CC-MAIN-2021-04
refinedweb
185
53.17
I thought I understood how the boolean logic worked in Java perfectly fine.... but recently when I was writing some code I noticed something wasn't working well with my boolean variable. So, wrote a quick test code to see if my idea of how booleans work was indeed correct. It wasn't.... In the below program, the idea was to have the loop break after one iteration because the boolean variable would be changed to false. It didn't work though. The JVM outputs infinite "In the loop" prints. So what did I miss with the boolean concept? I would appreciate the help! Code : public class LoopTest { public static void main(String[] args) { boolean isRunning = true; while(isRunning = true) { System.out.println("In the loop"); isRunning = false; } } System.out.println("Out of the loop"); }
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/9206-whats-wrong-boolean-printingthethread.html
CC-MAIN-2017-47
refinedweb
135
68.47
fstatvfs, statvfs - get file system information [XSI] #include <sys/statvfs.h>#include <sys/statvfs.h> int fstatvfs(int fildes, struct statvfs *buf); int statvfs(const char *restrict path, struct statvfs *restrict buf); The fstatvfs() function shall obtain information about the file system containing the file referenced by fildes. The statvfs() function shall obtain information about the file system containing the file named by path. For both functions, the buf argument is a pointer to a statvfs structure that shall be filled. Read, write, or execute permission of the named file is not required. The following flags can be returned in the f_flag member: - ST_RDONLY - Read-only file system. - ST_NOSUID - Setuid/setgid bits ignored by exec. It is unspecified whether all members of the statvfs structure have meaningful values on all file systems. Upon successful completion, statvfs() shall return 0. Otherwise, it shall return -1 and set errno to indicate the error. The fstatvfs() and statvfs() functions shall fail if: - [EIO] - An I/O error occurred while reading the file system. - [EINTR] - A signal was caught during execution of the function. - [EOVERFLOW] - One of the values to be returned cannot be represented correctly in the structure pointed to by buf. The fstatvfs() function shall fail if: - [EBADF] - The fildes argument is not an open file descriptor. The statvfs() function shall fail if: - [EACCES] - Search permission is denied on a component of the path prefix. - . The statvfs() function may fail if: - [ELOOP] - More than {SYMLOOP_MAX} symbolic links were encountered during resolution of the path argument. - [ENAMETOOLONG] - Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}. Obtaining File System Information Using fstatvfs() The following example shows how to obtain file system information for the file system upon which the file named /home/cnd/mod1 resides, using the fstatvfs() function. The /home/cnd/mod1 file is opened with read/write privileges and the open file descriptor is passed to the fstatvfs() function.#include <statvfs.h> #include <fcntl.h> struct statvfs buffer; int status; ... fildes = open("/home/cnd/mod1", O_RDWR); status = fstatvfs(fildes, &buffer); Obtaining File System Information Using statvfs() The following example shows how to obtain file system information for the file system upon which the file named /home/cnd/mod1 resides, using the statvfs() function.#include <statvfs.h> struct statvfs buffer; int status; ... status = statvfs("/home/cnd/mod1", &buffer); None. None. None. chmod(), chown(), creat(), dup(), exec(), fcntl(), link(), mknod(), open(), pipe(), read(), time(), unlink(), utime(), write(), the Base Definitions volume of IEEE Std 1003.1-2001, <sys/statvfs.h>.
http://pubs.opengroup.org/onlinepubs/000095399/functions/statvfs.html
CC-MAIN-2019-13
refinedweb
422
55.64
Managing configuration files with ‘deploy’ The deploy script is a program for managing configuration files. This script grew out of my need for a multi-functional installer for configuration files. I tend to keep those files in a separate git repository rather than changing my $HOME into a git repository. History On UNIX, there is the venerable install program initially meant to install binaries. My first installers were typically shell scripts which basically were a list of calls to install(1). While this works well it didn’t completely fit my needs. Because next to purely installing a file somewhere, I wanted to be able to do more things; - Check for differences between the files in the repository and those in the installed location. - Print diffs between the files in the repository and those in the installed location. - Run arbitrary commands after a file was succesfully installed I could of course do this with a makefile and this approach would be extremely flexible. But writing and maintaining such a Makefile would be quite cumbersome. Every file would need to have its own dependency line. And they would all have to be grouped added to a super target at the beginning of the makefile. How it works The deploy command is meant to be run from a color terminal; it uses ANSI escape codes to color its output. It is meant to be used from the root of e.g. a git repository. When started, deploy looks for and reads the file named filelist.$USER in the directory from which deploy is run. So when run by a user named “jdoe” it would look for a file filelist.jdoe. This is suitable for installing files in the directory tree owned by jdoe. For installing files system wide (e.g. in /etc or /usr/local/etc), create filelist.root and run deploy as the root user. file format In these file lists, lines that have a ‘#’ as the first non-whitespace characters are skipped as comments. The first non-comment line must contain a list of fully qualified host names for which this file is valid. If the name of the host where deploy is run is not in that list, it will quit. The other non comment lines all have the same format: <source path> <mode> <destination path> <post-install commands> - The source path is path relative from the directory where deploy is called from. It may not contain whitespace. - The mode is an octal number indicating the permissions of the destination file, see chmod(1). - The destination path should be an absolute path including the name of the installed file. It may not contain whitespace. The reason for including the filename is so that you can e.g. install a file profile as /home/jdoe/.profile. - The rest of the line is considered the post-install commands. This may be empty and may contain spaces. commands The ‘deploy’ program has tree sub-commands or modes; - check: Generate a list of files that are different from the installed files. If the verbose option (-v) is used, it lists for all files if they need installing or not. - diff: Generate a colored diff between the files in the repository and the installed files. - install: Install the files in their destinations and run the post-install commands. Examples The file filelist.jdoe in a setup directory contains the following lines among others; ../shared/fetchmailrc 400 /home/jdoe/.fetchmailrc This installs the configuration file for fetchmail and makes sure that only the owner can read it. Note how a relative path is used for the source, and an absolute path is used for the destination. The first is for convenience, the second for preventing mistakes. The following line is an example of using post-install commands; Xresources 644 /home/jdoe/.Xresources xrdb -load /home/jdoe/.Xresources This reloads the X resources into the X server after installing them. Below is a usage example; rlyeh:~/setup/rlyeh> ./deploy check The file '../shared/muttrc' differs from '/home/jdoe/.muttrc'. rlyeh:~/setup/rlyeh> ./deploy diff The file '../shared/muttrc' differs from '/home/jdoe/.muttrc'. --- /home/jdoe/.muttrc +++ ../shared/muttrc @@ -1,5 +1,5 @@ # /home/jdoe/.muttrc -# $Date: 2014-12-19 00:46:55 +0100 $ +# $Date: 2014-12-29 02:07:58 +0100 $ # # Settings @@ -76,12 +76,11 @@ set crypt_replyencrypt = yes set crypt_replysign = yes set crypt_replysignencrypted = yes -set crypt_use_gpgme = yes set crypt_verify_sig = yes set pgp_good_sign="^gpgv?: Good signature from " set pgp_sign_as = B37C45E8 set pgp_timeout = 3600 +set pgp_use_gpg_agent=yes # # S/MIME stuff. rlyeh:~/setup/rlyeh> ./deploy install File '../shared/muttrc' was successfully installed as '/home/jdoe/.muttrc'. Requirements The deploy program was written for Python 3 (developed and tested with python3.4). It has no dependencies outside of Python’s standard library. Note The script should be compatible with both Python 2 and Python 3. But it uses the latter by default. Change the first line of the script if you want to use Python 2. In that case you should also add the following line to the script: from __future__ import print_function Installation UNIX-like operating systems This includes Linux, all BSD variants, Apple’s OS X. For a system-wide installation: - Make sure you don’t already have an identically named program installed! - Copy the deploy.py script to a location in your path as deploy - Make it executable. For example # install deploy.py /usr/local/bin/deploy If you want to install it locally, just copy it to where you need it and make it executable. Note If your system doesn’t have \usr\bin\env, or if your Python 3 is not in your $PATH, modify the first line of the deploy program to point to the location of the Python 3 program before installing it. Windows Copy deploy.py to the scripts directory of your Python 3 installation. Since I do not use MS windows in my development environment I’m not able to give more specific advice. Instead of the standard cmd.exe shell, I would suggest you use e.g. the git BASH that comes with MSYS git distribution.
http://rsmith.home.xs4all.nl/software/managing-configuration-files-with-deploy.html
CC-MAIN-2017-17
refinedweb
1,017
67.25
Facebook Caves To Privacy Protests Over Beacon 95 Posted by ScuttleMonkey. Re:Thank god (Score:4, Insightful) Re: (Score:1) Re: (Score:2) Except it's not hard to find at all. Privacy->External Web Sites->Check the box for "disallow". Re: (Score:3, Funny) There are no dancing monkey banners on Facebook, unless you add them to your own page. Re: (Score:2, Insightful) If i want a dam app ill install it myself... Re:Thank god (Score:5, Insightful) Facebook might look like everyone is an open book, but the information shared and public activities seen are carefully chosen for a variety of complex social reasons. Beacon was completely ignorant of this. Re: (Score:1) How many signatures did they get on their little petition thing? 50,000? Out of at least 50,000,000 members? And these are the people who are handing over their email and communications to some web based central entity to begin with? The fact that almost none of them will delete their facebook accounts and never return over this extremely offensive and inexcusable violation only further proves my point. Anyone who would give a company like this a second chance after this kind of Re: (Score:2) Personally, Re: (Score:2) I didn't sign the petition but I cared. I can't possibly be the only one. Perhaps the overlap between people who do care about this sort of thing and people who don't like to sign online petitions or join random Facebook groups is pretty large... Re: (Score:2) People who are in my friends' network know where I work, what music I like to listen to, what teams I cheer for and what TV shows I watch. Guess what, everyone I know basically knows that stuff about me because they're my f Re: (Score:1) Re: (Score:2) Re: (Score:1) Re: (Score:2) Re: (Score:3, Insightful) By default, only those in your network can see ANYTHING about you. This would be people in your own school or whatever. And within that, you have a number of privacy setting controlling whether only your direct friends can see things. In a number of ways... I've always thought that Facebook is to Apple what MySpace is to Microsoft... Re: (Score:1) TFA suggests that people do care about privacy on Facebook, and I'll take that as more reliable evidence than a few comments on a blog where social networking sites are, for some reason, looked down upon. Well... (Score:3, Insightful) Re: (Score:1, Insightful) Re:Well... (Score:5, Funny)? Re: (Score:3, Funny) Our mission statement: " We would like to issue this statement, that, for the record, we have no mission. So if your business gets totally screwed by our business relationship,we probably didn't plan for it to happen. Furthermore, it will probably be your fault. If you do have a mission, and you got totally screwed then its definitely your fault for failing to execute the mission." Re: (Score:2) Re: (Score:3, Funny) Re: (Score:2) Re:Well... (Score:4, Funny) Re: (Score:1) Boldly, Mostly? Re: (Score:1). Re: (Score:2) The change is that you don't have to opt out of individual instances of the program's activity. Rather, you can opt out completely with one check-box. Facebook's M.O. is to create features that reduce your privacy and to enable them automatically. This means that for users to preserve the status quo, they have to play whack-a-mole as new features come out. Re: (Score:2) Re: (Score:1) Your Mutually Assured Defemation scenario doesn't cover most concerns. Re: (Score:2) Unless you a) don't have any dirt, or at least none that anyone would care about, or b) you actually are careful when you input information online, on forms, in email, etc. Of course, not many people think about the impact that one underage drinking picture their friend posted could Re: (Score:1) As opposed to uniquely identifying you as lots of people? I'll get meh coat. Just like last time... (Score:5, Informative) Re:Just like last time... (Score:5, Funny) He-Man Underwear 3-pack, size 8 [ebay.com] Lara Croft Bikini Poster [ebay.com] "Bride of Chucky" on VHS! [ebay.com] Okay (Score:1) Ya, they "caved". (Score:5, Informative) Meaning: We'll still collect information on you and do whatever we want with it, but it won't appear on your profile. Better? Yes. Much better? No. TFA is wrong (Score:5, Informative) Be that as it may... (Score:2, Informative) http*://*facebook.com/beacon/* Unless you want to use that "feature" I don't see how it can hurt. Re: (Score:2) Thx.... Re: (Score:3, Informative) " As I read it, what happens is first they collect the identifiable data, then they might do some real-time stuff with it, then they throw the identifiable data away, probably keeping whatever aggregate info they glean from the real-time processing. Essentially they promise to not store it but they most certa Re:TFA is wrong.... Semantics, further (Score:2) Means "THEY" (as in FACEBOOK) won't collect. Probably also means they offloaded the tool to some ghost subsid or partner who will then periodically aggregate collected data with/to/for Facebook and other unnamed ad agencies... The English language, combined with lawyers, can trick-fuck ANYbody, no matter HOW scholarly or seasoned. Even whole teams of attorneys tend to miss things. Re: (Score:2). Re: (Score:1, Redundant) Re: (Score:2) Re: (Score:1) Re: (Score:1) [facebook.com] Re: (Score:1): (Score:1, Interesting) They'll probably think twice about that, now that they've seen the impact in made on facebook. I think implementing this on eBay would make it easy to boycott sellers by spreading false rumors through your "friend network". piece of cake (Score:2) Does this violate advertisers' privacy policies? (Score:1). Don't boycott LiveJournal (Score:2) Re: (Score:1) import it: what happens on the Internet stays on the Internet (Score:1) The data I create and store on my computer are MINE. I control access, determine what portion of my income will go to protection of said data, and its my ass for everything if someone steals this information. This event will be both a criminal and civil crime against me personally, that I am free to persue how I see fit. The data I create and store on {insert favorite online service here} are NOT MINE. It is the property of some ot Re: (Score:1, Insightful). Hehehe - translation (Score:1) What he meant was, "Awwwwwwwww phooey. Danged kids. mumble mumble ad revenue mumble.". Re: (Score:2) That assumes they're already showing you everything that's being collected. Still not good enough? (Score:2) After weeks of privacy protests over its advertising system, Facebook CEO announced that users now can turn the system off completely. CEO Zuckerberg said 'We simply did a bad job with this release.' It should be off by default and optional in the settings, as with MSN Messenger and many other applications. On a personal note, I enjoyed Facebook at first until I realized that making my network public is quite idiotic. I mean, I can certainly live without Facebook and if I look at the privacy issues and compare it with the Facebook offers, it's just not that sweet any longer. Already Blocked. (Score:1) From the opt out page... (Score:1) t Didn't take long did it? (Score:1, Insightful) Or am I the only one who sees some correlation and causation there? Response Time vs. Marketing Spin (Score:1) It's Too Little, Keep Protesting (Score:2, Interesting) Re: (Score:2) As for people who don't come to Blocking the Beacon (Score:2, Informative) I don't get it (Score:1) It will come to pass (Score:2) Just watch we-know-who-you-are ads and tracking will become the norm. Don't believe me? See how much valuable personal information people voluntarily upload in Go
http://tech.slashdot.org/story/07/12/05/2114247/facebook-caves-to-privacy-protests-over-beacon?sdsrc=rel
CC-MAIN-2013-48
refinedweb
1,361
72.36
The Anthos Sample Deployment on Google Cloud (Preview) is a Google Cloud Marketplace solution that you can preview now. It deploys a real Anthos hands-on environment with a GKE cluster, service mesh, and an application with multiple microservices. This tutorial introduces you to these features, letting you learn about Anthos deployed on Google Cloud with a fictional bank. You can then explore Anthos features that interest you by following the bank's Anthos story further in our follow-up tutorials. If you want to learn more about Anthos and its components first, see our technical overview. However, you don't need to be familiar with Anthos to follow this tutorial. You should be familiar with basic Kubernetes concepts such as clusters; if you're not, see Kubernetes basics, the Google Kubernetes Engine (GKE) documentation, and Preparing an application for Anthos Service Mesh. When you're ready for a real production installation, see our Setup section. When you complete this tutorial, please complete our survey. Your journey You are the platform lead at the Bank of Anthos. Bank of Anthos started as a small business for payment processing on two servers almost ten years ago. Since then, it has grown into a successful commercial bank with thousands of employees and a growing engineering organization. Bank of Anthos now wants to expand its business further. Throughout this period, you and your team have found yourself spending more time and money on maintaining infrastructure than on creating new business value. You have decades of cumulative experience invested in your existing stack; however, you know it's not the right technology to meet the scale of global deployment that the bank needs as it expands. You've adopted Anthos to modernize your application and migrate successfully to the cloud to achieve your expansion goals. Objectives In this tutorial, you're introduced to some of the key features of Anthos through the following tasks: Deploy your Anthos environment with clusters, applications, and Anthos components: Anthos Service Mesh and Anthos Config Management. Use the Google Cloud Console to explore the Anthos clusters resources used by your application. Use Anthos Service Mesh to observe application services. The Anthos Sample Deployment on Google Cloud requires that you use a new project with no existing resources. The following additional project requirements apply: - You must have enough quota in the target deployment project and zone for at least 7 vCPUs, 24.6 GB of memory, 310-GB of disk space, one VPC, two firewall rules, and one Cloud NAT. - Your organization does not have a policy that explicitly restricts the use of click-to-deploy images. Before you start the tutorial:. Ensure Service Management API is enabled. Enable Service Management API Then do the following to ensure that your project meets the requirements for running the Anthos Sample Deployment: In your new project, launch Cloud Shell by clicking Activate Cloud Shell in the top toolbar. Cloud Shell is an interactive shell environment for Google Cloud that lets you manage your projects and resources from your web browser. Configure Cloud Shell with the target deployment zone, replacing ZONE in the following command: gcloud config set compute/zone ZONE Enter the following command to run a script that checks that your project meets the necessary requirements: curl -sL | sh - Output (example): Your active configuration is: [cloudshell-4100] Checking project my-project-id, region us-central1, zone us-central1-c PASS: User has permission to create service account with the required IAM policies. PASS: Org Policy will allow this deployment. PASS: Service Management API is enabled. PASS: Anthos Sample Deployment does not already exist. PASS: Project ID is valid, does not contain colon. PASS: Project has sufficient quota to support this deployment. If anything doesn't PASS, see our troubleshooting guide. If you don't fix these errors, you might not be able to deploy the sample. What's deployed? The Anthos Sample Deployment on Google Cloud provisions your project with the following: One GKE cluster running on Google Cloud: anthos-sample-cluster1. Anthos Service Mesh installed on the cluster. You will use Anthos Service Mesh to manage the service mesh on anthos-sample-cluster1. Bank of Anthos application running on the cluster. This is a web-based banking app that uses a number of microservices written in various programming languages, including Java, Python, and JavaScript. A single Compute Engine instance (virtual machine) that performs a number of automated tasks to jump-start the tutorial environment after the cluster is created: asd-jump-server. A VPC with a subnetwork within the target deployment region for the GKE cluster and Compute Engine instance. A Cloud NAT gateway on a Cloud Router, and firewall rules for connectivity to and between the deployment's components. Launch the Anthos Sample Deployment on Google Cloud Launch the Anthos Sample Deployment on Google Cloud through the Cloud Marketplace: Open the Anthos Sample Deployment on Google Cloud. Go to the Anthos Sample Deployment on Google Cloud Select and confirm the Google Cloud project to use. This should be the project that you created in the Before you begin section. Click LAUNCH. It can take several minutes to progress to the deployment configuration screen while the solution enables a few APIs. Select the Confirm that all prerequisites have been met checkbox to confirm that you have successfully run the prerequisites script. (Optional) In the deployment configuration screen, specify your chosen deployment name, zone, and Service Account. However, for your first deployment, we recommend that you accept all of the provided default values, including creating a new Service Account. Click Deploy. Deploying the trial can take up to 15 minutes, so don't be concerned if you have to wait for a while. While the deployment is progressing, the Cloud Console transitions to the Deployment Manager view. After the sample is deployed, you can review the full deployment. You should see a list of all enabled resources, including one GKE cluster ( anthos-sample-cluster1) and one Compute Engine instance ( asd-jump-server). If you encounter any deployment errors, see our troubleshooting guide. Using the Anthos Dashboard Anthos provides an out-of-the-box structured view of all your applications' resources, including clusters, services, and workloads, giving you an at-a-glance view of your resources at a high level, while letting you drill down when necessary to find the low-level information that you need. To see your deployment's top-level dashboard, go to your project's Anthos Dashboard in the Google Cloud Console. Go to the Anthos Dashboard You should see: A Service mesh section that tells you that you have 8 services (but that they need action to see their health). You'll find out more about what this means later in the tutorial. A Cluster status section that tells you that you have one healthy GKE cluster. Explore Anthos clusters resources The Anthos Clusters page shows you all the clusters in your project registered to Anthos, including clusters outside Google Cloud. You can also use the Google Kubernetes Engine Clusters page to see all the clusters in your project. In fact, the Anthos Clusters page lets you drill down to the GKE pages if you need to see more cluster and node details. In this section, you'll take a closer look at Bank of Anthos' GKE resources. Cluster management In the Google Cloud Console, go to the Anthos Clusters page. Click the anthos-sample-cluster1 cluster to view its basic details in the right pane, including its Type, Master version, and Location. You can also see which Anthos features are enabled in this cluster in the Cluster features section. For more detailed information about this cluster, click More details in GKE. This brings you to the cluster's page in the Google Kubernetes Engine console, with all the current settings for the cluster. In the Google Kubernetes Engine console, click the Nodes tab to view all the worker machines in your cluster. From here, you can drill down even further to see the workload Pods running on each node, as well as a resource summary of the node (CPU, memory, storage). You can find out more about GKE clusters and nodes in the GKE documentation. Cluster workloads The Google Kubernetes Engine console has a Workloads view that shows an aggregated view of the workloads (Pods) running on all your GKE clusters. In the Google Kubernetes Engine console, go to the GKE Workloads page. Workloads from the GKE cluster and namespaces are shown. For example, workloads in the boa namespace are running in anthos-sample-cluster1. Services & Ingress The Services & Ingress view shows the project's Service and Ingress resources. A Service exposes a set of pods as a network service with an endpoint, while an Ingress manages external access to the services in a cluster. However, rather than a regular Kubernetes Ingress, Bank of Anthos uses an Istio ingress gateway service for traffic to the bank, which Anthos Service Mesh meshes can use to add more complex traffic routing to their inbound traffic. You can see this in action when you use the service mesh observability features later in this tutorial. In the Google Kubernetes Engine console, go to the Services & Ingress page. Go to the Services & Ingress page To find the Bank of Anthos ingress gateways, scroll down the list of available services to find the service with the name istio-ingressgateway. Select the ingress gateway service for anthos-sample-cluster1in the list to open its Service details view, which shows more information about the service including all of its external endpoints. An ingress gateway manages inbound traffic for your application service mesh, so in this case we can use its details to visit the bank's web frontend. In the Service details view for istio-ingressgateway, click the external endpoint using port 80. You should be able to explore the Bank of Anthos web interface. Observing services Anthos's service management and observability is provided by Anthos Service Mesh, a suite of tools powered by Istio that helps you monitor and manage a reliable service mesh. To find out more about Anthos Service Mesh and how it helps you manage microservices, see the Anthos Service Mesh documentation. If you're not familiar with using microservices with containers and what they can do for you, see Preparing an application for Anthos Service Mesh. In our example, the cluster in the sample deployment has the microservice-based Bank of Anthos sample application running on it. The application also includes a loadgenerator utility that simulates a small amount of load to the cluster so that you can see metrics and traffic in the dashboard. In this section, you'll use the Anthos Service Mesh page to look at this application's services and traffic. Observe the Services table view Go to the Anthos Service Mesh page. Go to the Anthos Service Mesh page The page displays the table view by default, which shows a list of all your project's microservices, including system services. To filter to only the Bank of Anthos services, select boa from the Namespace drop-down at the top left of the page. Each row in the table is one of the services that makes up the Bank of Anthos application; for example, the frontend service renders the application's web user interface, and the userservice service manages user accounts and authentication. Each service listing shows up-to-date metrics, such as Error rate and key latencies, for that service. These metrics are collected out-of-the-box for services deployed on Anthos. You do not need to write any application code to see these statistics. You can drill down from this view to see even more details about each service. For example, to learn more about the transactionhistory service: Click transactionhistory in the services list. The service details page shows all the telemetry available for this service. On the transactionhistory page, on the Navigation menu, select Connected Services. Here you can see both the Inbound and Outbound connections for the service. An unlocked lock icon indicates that some traffic has been observed on this port that is not encrypted using mutual TLS (mTLS). You can find out more about how this works in the Secure Anthos tutorial. Observe the Services topology view The table view isn't the only way to observe your services in Anthos. The topology view lets you focus on how the services interact. If you haven't done so already, return to the table view from the service details view by clicking the back arrow at the top of the page. At the top-right of the page, click Topology to switch from the table view to the workload/service graph visualization. As you can see from the legend, the graph shows both the application's Anthos Service Mesh services and the GKE workloads that implement them. Now you can explore the topology graph. Anthos Service Mesh automatically observes which services are communicating with each other to show service-to-service connections details: Hold your mouse pointer over an item to see additional details, including outbound QPS from each service. Drag nodes with your mouse to improve your view of particular parts of the graph. Click service nodes for more service information. Click Expand when you hold the pointer over a workload node to drill down for even more details, including the number of instances of this workload that are currently running. Exploring Anthos further While this tutorial has shown you many Anthos features, there's still lots more to see and do in Anthos with our deployment. Visit one of our follow-up tutorials to try some hands-on tasks with Anthos, or continue to explore the Anthos Sample Deployment on Google Cloud yourself, before following the cleanup instructions in the next section. - Explore Anthos security features with the Anthos Sample Deployment in Secure Anthos. - Learn about service management with the Anthos Sample Deployment in Manage services with Anthos. Learn more about Anthos in our technical overview. Find out how to set up Anthos in a real production environment in our setup guide. Read about Anthos components. Take our survey When you finish working on this tutorial, please complete our survey. We're interested in hearing about any issues you might have at any point in the tutorial. Thanks for using the survey to submit your feedback. Thank you! The Anthos Team
https://cloud.google.com/anthos/docs/tutorials/explore-anthos?authuser=0&hl=zh-TW
CC-MAIN-2020-50
refinedweb
2,405
61.36
Question: It's easy to convert Decimal to Binary and vice-versa in any language, but I need a function that's a bit more complicated. Given a decimal number and a binary place, I need to know if the binary bit is On or Off (True or False). Example: IsBitTrue(30,1) // output is False since 30 = 11110 IsBitTrue(30,2) // output is True IsBitTrue(30,3) // output is True The function will be called a LOT of times per second, so a fast algorithm is necessary.. Your help is very much appreciated :D Solution:1 Print this page out, hang above your monitor But it's roughly something like if ( value & (1 << bit_number) ) Solution:2 Really? def IsBitTrue(num, bit): return (num & (1 << (bit-1))) > 0 Normally, it would be 1<<bit, but since you wanted to index the LSB as 1... Solution:3 Use your 'easy' function to convert the decimal number to binary, and then compare with a bit mask representing the bit you are testing. Solution:4 Python def isBitTrue( number, position ): mask = 1 << (position-1) return bool( number & mask ) If you number the positions from 0 (instead of 1), you can save a ton of time. >>> isBitTrue(30,1) False >>> isBitTrue(30,2) True >>> isBitTrue(30,3) True Solution:5 bool IsBitTrue(int num , int pos) { return ((num>>pos-1)%2==1) } Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com EmoticonEmoticon
http://www.toontricks.com/2018/10/tutorial-decimal-to-bit-binary.html
CC-MAIN-2019-04
refinedweb
248
52.63
Google got Looker. Salesforce bought Tableau. But open source tools are rising in popularity across the world of business intelligence and data analysis. In our 2019 Dev Survey, we asked what kind of content Stack Overflow users would like to see beyond questions and answers. The most popular response was “tech articles written by other developers.” So starting this week, we will begin highlighting articles written by your peers. If you have an idea and would like to submit a pitch, you can email pitches@stackoverflow.com. Our first piece comes from Alessio Civitillo, an Analytics Manager at TE Connectivity in Munich. What To Make of Two Major Acquistions Recently we saw two major deals take place in the business intelligence space: Google paid $2.6 billion to acquire Looker and Salesforce ponied up a whopping $15.7 billion for Tableau. Both of the recently purchased companies focused on offering cloud based BI tools, a space I am quite familiar with, having spent one year rebooting a major Salesforce project for 500 users and the last two-and-half years as an analytics manager at TE Connectivity serving about 600 internal users. So why did Salesforce and Google make these big acquisitions? The most obvious answer is that working with data is increasingly happening across multiple departments, and many employees who are not well versed in programming or statistics are turning to these dashboards to help them understand data and share those insights. From sales, to revenue operations, to customer support, teams are recognizing the value of collecting and analyzing internal data. So what comes next? Will these tools be easy to integrate and actually make the product suites offered by Salesforce and Google Cloud more attractive to folks like me? Personally, I see them as defensive moves, a strategy to protect incumbent product portfolios by snapping up fast growing competitors. If they can tightly integrate these acquisitions, that may help to consolidate usage among existing clients who previously worked with tools from multiple companies. As I look at these acquisitions, however, I think it’s worth noting another interesting trend. While dashboards are great, I think that the flexibility of simpler, open-source tools is beginning to win out among developers like myself. For those willing to spend a little time learning how to program with these tools, they can provide a powerful alternative worth exploring. Data, data, everywhere Tableau has also captured the attention of marketing and sales departments in many companies worldwide. Many companies are using Salesforce already and will benefit from a tighter integration. Customers always appreciate well integrated solutions and the benefits will be even greater for those companies moving their infrastructure to the cloud. Eventually, Salesforce may move their analytics and reporting to the cloud and offer a solution that can work with data across the board and not just their own datasets. So what’s the right solution for your company? Salesforce and Tableau? Google Suite and Looker? Microsoft’s Power BI, Office, and Azure? The key thing to understand from a business analytics standpoint is that dashboards like these are just one relatively small part of the puzzle. Things like ETL, data prep, and reporting operations are still handled by other tools. This space still has a 90’s era vibe to it. Many companies still push to keep this work in IT but this tends to increase the turnaround times and costs, which is a hard sell in a world where managers want things delivered quickly. While Tableau and Looker are considered some of the best data exploration applications on the market today, they still feel like an isolated solution for BI managers. This is a very interesting trend that I don’t believe has received much attention in the press. There is a growing realization within the business intelligence community that no dashboard will save the day. Every time you find yourself going back to Excel, it’s a recognition that what many business analysts want is the flexibility to design their own approaches and custom tools that fit in-house problems. For example, developing internal business applications is also becoming increasingly easy with solutions like Retool, which is part of an interesting new “rise of no code” trend applications. Making internal business tools without a big IT project is not a new idea. MS Access does exactly that, but what is new is that tools like Retool provide a way to easily build web applications with a simple workflow. At work, my team and I are using those tools to build Salesforce and business applications. One advantage, in my view, is that it’s a simpler way to build the applications, but the other advantage gets us back to that magic word: integration. With Salesforce, you are locked in the Salesforce world and pulling data from other systems can be hard. Tools like Retool must make connectivity a top priority to survive so they are extremely good at integrating with other applications and databases. Industry moving to open source tools Isolated tools and processes don’t last long. Integration to existing processes and solutions is paramount. Tableau did not integrate well with the rest of the business analyst workflow and eventually felt like a very incomplete solution. Salesforce might be a great CRM, but it kind of lives in isolation and is mostly being used by sales organizations, so it can feel incomplete too in a way. As the analytics industry advances further, it is important to keep this in mind. Any current modern analytic enterprise solution requires the orchestration of multiple tools sold by multiple vendors that don’t always work as well together as needed. This is an interesting opportunity for open source tools and vendors that take integration more seriously. It’s interesting because open source solutions have a natural tendency to integrate well with each other and avoid lock in. Maybe that’s why Jupyter Notebooks are exploding right now in popularity. They provide the type of live feedback users love with the power of a programming language with a rich ecosystem of libraries like Python. With Jupyter, analysts can connect to pretty much everything, can write to everything, and can output all kinds of interesting things. For example, developers like Greg Reda have been using tools like Jupyter for cohort analysis. This is a good approach when trying to crunch data on customer acquisition and to demonstrate which subset of customers has the best lifetime value. Here you can see how easily he created a cohort chart that looks good after finalizing his analysis: import seaborn as sns sns.set(style=‘white’) plt.figure(figsize=(12, 8)) plt.title(‘Cohorts: User Retention’) sns.heatmap(user_retention.T, mask=user_retention.T.isnull(), annot=True, fmt=‘.0%’); Which outputs this nice cohort chart: Open source is also catching up on enterprise. Vega is a solid implementation of the “grammar of graphics”, a concept to define data visualizations in a declarative way. Vega shares the same theoretical foundations as Tableau, has a Python implementation and is already integrated with Jupyter. Vega is so good that ElasticSearch officially made it an important part of their Kibana visualization platform last year. OK, but what about analytics and BI in companies? Are we seeing a trend towards adoption of open source tools? Airbnb is an example of a company that has put together a custom in-house toolkit so that any employee, even those not familiar with coding in SQL, can use data to make informed decisions. They called it Superset and they have open sourced it. Superset is now in the process of becoming part of the Apache software foundation. Netflix is another example of a company doubling down on open source for BI and analytics. Netflix software engineers even developed their own version of Jupyter called nteract and have few interesting articles on using notebooks in production. For business analytics managers like myself, the lesson is simple. Management might buy into good looking dashboard tools, but the workers actually doing things with the data need solutions that are easy to customize and integrate. In analytics a complete solution goes from the raw data all the way to the dashboard, the commentary, and the insights. While services like Tableau and Looker are nice, a mastery of languages like SQL and Python will give you the ability to wrangle complex, often messy data into reporting that can be used across your company. New BI dashboards will come and go. More cloud enterprise applications will arrive with great fanfare but mastering the ability to build tools suited to your in-house needs will never go out of style.Tags: big data, bulletin, business intelligence 26 Comments Can you elaborate on why Tableau is isolated? I think Tableau has done a lot of work in connecting to many different data sources, but it has somewhat failed in the data prep space. Most of the data is not consumed as it comes from the source, it requires a lot of preparation (think all the vlookups, sums, pivots, you do in Excel). So by not making data prep easy Tableau lives on an island, as long as you have data prep done in other tools you are good, but if you have bought just Tableau and hope for all to work it’s going to be a disappointment. You seem to be ignoring the elephant in the room: Power BI. It already covers all the requirements including ETL, data prep and reporting, integration with R & Python. It’s a no-code tool so accessible to a vastly larger audience of analysts, not just developers. Consider some recent stats: > 20 PB of data ingested / month. > 25m data models hosted The point of the article is that more integrated tools tend to get better adoption. From that perspective PowerBI did well. Their huge success is for a good part due to their solid integration with Excel and the rest of the Office 365 stack. However, last time I checked (some months ago) their ETL process downloaded all data on desktop, their scheduling/automation was fairly basic. Tableau with Tableau Prep is covering their ETL weakness. So I find hard to think there is a clear winner between Tableau and PowerBI, for me they are on the same boat. In my case we had already some adoption of Tableau and the better Office 365 integration did not justify a switch. Would I choose PowerBI if I could start now from 0 today? I don’t know, PowerBI is fairly expensive once you start scaling to hundreds/thousands of users and it’s not Microsoft’s core product, if you need a specific feature or fix you mind find yourself in the weak position dealing with a company with a lot of other focus areas. Said that, there is a lot of stuff happening with Azure, so if PowerBI gets more and more integrated in that stack it might have a real advantage over Tableau. I can even raise a great downfall for Power BI in my opinion: Power BI analysts must have windows PCs, once Power BI desktop only work on Windows… and the Power BI scheduler also works only on Windows.. In the end of the day we fall in the same argument Alessio stated: Open-source tools and languages tend to be more flexible and in a world with multiple envoirnments and complexities it’s fundamental to know powerfull langagues such as Python and SQL.. Comparing Jupyter with Powerbi and Tableau makes no sense. The later offer front end dashboard capabilities to end users. They allow end users to slice and dice the data and come up with their own insights. Jupyter can only make static charts which are not helpful in every case. Powerbi offers data transparency to everyone who wishes to track performance. I would rather do my own charts with my own data rather than rely on fancy chart without knowing what the underlying assumptions of the data are.. jupyter is just an IDE on which you can make a dashboard, but its not a dashboard in itself. On the chart u showed in this article, can I extract the data behind it? Can I change the assumptios of the chart? No, right? I can do that on Powerbi. So please stop confusing non coders like us Above I probably should’ve said “low-code” as there are coding features available, but you can achieve a lot without writing any code. My point is the more accessible tools get the broadest adoption. If coding skills are required, you are immediately limited to a very small subset of people, who are less likely to also have subject matter expertise. A “task” becomes a “project”, needing a team and then a project manager. On the Power BI ETL process, you only described the “Import” scenario – Power BI has had DirectQuery (live querying of over 20 sources including most SQL / cube / big data platforms) since 2015. Alternatively with “Dataflows” you can run “Import” scenario processes in the cloud, delivering to a data lake. On scheduling, there is a REST API for refresh, so its more flexible. On costs and scale, with the Premium license you buy cloud capacity and can freely distribute to as many consumer users as that can support. Premium also features better scheduling. On any common scenario, Power BI is 4x – 10x cheaper than Tableau. My point is that there is no silver bullet and that eventually you need to look at the whole picture with integration as the number 1 priority. Truth is PowerBI is not going to integrate well in every situation. Also, price scales differently in PowerBI and it does get expensive. Their live connect is buggy and doesn’t work well. Also its ETL will download everything on desktop even on liveconnect. Dataflow seems to work only in Azure (as I said above if you go with Azure PBI does have some advantages over Tableau). We can continue point per point, but again if it works for you great, it’s just that I don’t believe it’s an “elephant in the room” and that it’s important to consider all things before adopting it. So Alessio, which tool in your opinion is ready to compete with the flexibility that Excel provides. First I would ask myself why do I need an Excel alternative and if that is a good enough reason to look for something different Hey Alessio, check out lookml. It is the new transformation later and as far as I know Looker is the only tool with a modeling later like that. Coupled with something like Fivetran and Snowflake, it is a pretty potent solution. I have read about that tool, but for now we are happy generating the sql required for the in database transformations with Python. We have our own library for that and so far it’s looking promising. We also plan to use Airflow so leveraging Python and its libraries seems to make sense for now. But lookml seems to be going in a good direction. thanks for the information. What is “CRM” in the context of this article? It would be helpful if acronyms and initialisms were spelled out on the first instance that they appear in a document. I have an MBA and decades of experience, but I hadn’t a clue about what you were trying to say without performing a Web search, and even then only guessing what it might mean. Customer Relationship Management, I’d guess. Where do they hand out MBAs to people that have never even heard of CRM? 👌 The most popular response was “tech articles written by other developers.” So you decided to start with a post written by an Analytics Manager that is more of an industry analysis than a tech article? Sorry but at least from my perspective, that’s something entirely different… Where does TIBCO’s Spotfire fit into your analysis? I’m surprised it’s not mentioned at all – the majority of the Oil & Gas space is completely dedicated to Spotfire. There are many products in this space and the point of this article was to discuss more holistically how to build your analytics strategy. Said that, I normally refer to Garnter to get a sense of how a tool is doing, so: They don’t seem to be doing that well. Please understand this doesn’t mean anything without context, so in your specific case TIBCO might be the right choice. This article has a bizarrely narrow viewpoint. “a mastery of languages like SQL and Python…” — to paraphrase: “analysts should learn Python and SQL”. Cool story — this point has been made all over the web since about 2015 (and a company such as DataCamp bases its entire business model on this simple idea). And comparing BI software like Looker and Tableau to Jupyter Notebooks or to Excel is vapid. Looker and Tableau are useful as graphical interfaces to a properly maintained (i.e. automated loading of raw data & transformation into dimensional models) warehouse. Jupyter Notebooks are better suited for ad hoc analysis or modeling. Excel is useful for finance/accounting, or if you’re effectively just doing back-of-the-napkin arithmetic. In short: these pieces are often complementary. If you think you can replace BI software with Jupyter notebooks, then you’re almost certainly doing it wrong. I am aware of no sophisticated company that would view these things as substitutes. Metrics/KPIs and data science are different processes and require different systems; the system for the former works best when it a) updates data in a 100% automated manner and b) provides a portal that business users (i.e. non-analysts) can use to inspect/segment/compare the data used for KPIs. Thanks for the comment and definitely the article is more of an introduction to analytics architecture today than a full detailed description. Hopefully my “2 cents” are that open source makes integration easier, not that Tableau can replace BI stacks. Think how easily you can create a workflow in Jupyter and move it into Airflow for scheduling and automation, you can’t easily integrate solutions like that with more standard vendor software. I have used all the tools you mentioned here, being in the business for 20+ years. Coming from very structured data warehousing to the new big data world, all of a sudden the road is not as clear any more. Like you said, someone might want to say let’s skew these $m companies and go with open source solutions. If you still have a DW, the best tool for an enterprise is MicroStrategy. If you are enter into Cloud with big data, it is still unclear what to do, with Quicksight on AWS, PowerBI on Azure, now Looker on GCP. Tableau has the best visuals, but the pretty face is getting challenges from PowerBI and the like; it’s a blessing now that they are sold to Salesforce. All other solutions would eventually disappear, leave perhaps only a few niche players like MicroStrategy. That being said, all the cloud giant backed tools, Quicksight, PowerBI, Looker are just babies when compared to Tableau and MicroStrategy. But these babies has got rich parents and should take over the world in the next 5-10 years. Comparing Jupyter with Tableau and Powerbi is silly and ignorant. The later offer dashboard capabilities and great features to the end user to slice and dice the data. Powerbi offers great power to the user to do their own Analysis and extract the data..I am not sure one can make a similar dashboard on Jupyter without hours of coding. What this article mentions is that jupyter can only help in making great static charts, which are not helpful in every case. Users want more and more control over data and would like to do their own analysis rather than being handed down fancy complicated charts. This is especially important for non coders like me. SAP Business Objects is the best. That and Oracle and you’re good, for non-Big Data anyway. This was a really interesting article. I noticed that you spoke about re-tool. I have always been confused about what the difference is between no-code and low-code. So I searched and found this article that explains the differences pretty clearly – anyway thought that other people might find that helpful because I was confused on the subject before. Cheers!
https://stackoverflow.blog/2019/07/16/google-looker-salesforce-tableau-bi-open-source-alternatives/
CC-MAIN-2021-31
refinedweb
3,426
61.56
Building Unit Test Projects [1] PUBLISHED With unit tests, you can verify that your code works well and increase its reliability. The Tizen Studio provides the creating, building, and editing tools for unit tests, and a view for checking and analyzing the test results. The Tizen Studio uses the gtest framework to make and launch test cases. To manage your test cases, you can use the Test Explorer view. Creating a Unit Test Project You can create a test project for the Tizen native project through the Tizen Native Unit Test Project wizard. The wizard provides the test project for each Tizen native project type, such as UI application, service application, shared library, and static library. To create a test project: - In the Tizen Studio menu, select File > New > Other > Tizen > Tizen Native Unit Test Project. - In the New Tizen Unit Test Project window: - In the Select the Tizen Project for test panel, select the project you want to test. - Specify a name for the test project. - Specify a destination folder where to save the project. - Click Finish. To use the test project: - In the Project Explorer view, open the <TEST_PROJECT_HOME>/src/<TEST_PROJECT_NAME>TestCase.cppfile. - Add a TEST_F()test case. Each TEST_F()test case is independent. If the TEST_F()test case is associated with a fixture class name, the test case runs based on that fixture class. - Add assertions. The unit test tool supports basic assertions, binary comparison, and string comparison in the gtest. For more information, see Google Test Advanced Guide [2]. To test the project written in the C code, a unit test project for the C++ language is provided. In this case, the tested function must be qualified as an extern "C" to avoid the 'undefined reference' error as demangled symbols in the error message. There are 2 forms of the extern "C" declaration: - Declare the extern "C" linkage specification in the C header file: #ifdef __cplusplus extern "C" { #endif int foo; void bar(); #ifdef __cplusplus } #endif - Include the C headers in the C++ code: extern "C" { #include "header.h" } In the following example with a calculator sample project, a test case is created for the utils_round() function declared in the utils/utils.h header file: - Create a calculator project named myProject, and for it a unit test project named myProjectTest. - Append the test method to the end of the myProjectTest/src/myProjectTestTestCase.cppfile: TEST_F(TestSuite, utils_round) { double var = 3.5; /* long long utils_round(double value); */ EXPECT_EQ(utils_round(var), (long long)4); } - Change the line that includes the utils/utils.hfile: #include "view/window.h" #include "view/main-view.h" extern "C" { #include "utils/utils.h" } #include "utils/ui-utils.h" Running the Unit Test Project on Devices To launch the unit test project, click the Run icon in the toolbar. Figure: Launching the test project After the test cases are executed, the results are displayed on both the Test Result and Test Explorer views. Figure: Test results Customizing the Launch Configuration The test case running can be customized with launch options. To set the launch options: - In the Project Explorer view, right-click the project. - Select Run > Run Configurations or Run > Debug Configurations. - Select Tizen Native Unit Test, and click New. The name of the test project is displayed in the Configurations dialog box. You can control specific launch options in the Advanced tab: - Run Disabled Tests: If selected, the disabled test cases are also run. - Shuffle Tests: If selected, test cases are run in a random order. - Generate an XML Report: If selected, a test result XML file is generated. Managing Test Cases in the Test Explorer On the Test Explorer view, you can launch the test cases, and check the results. If you want to open the Test Explorer view or update the test cases, right-click the unit test project in the Project Explorer, and select Show in Test Explorer. When the test cases are executed, the test case states are automatically updated. Table: Test case states The Test Explorer view provides the following options for testing and test cases: - Refresh Tree: refreshes the test case tree to reflect the linked unit test project's changes. - Expand All and Collapse All: expands or collapses the test case tree. - Check All and Clear All: checks or unchecks all the check boxes in the tree. - Check Failed: checks failed test cases only. - Run Checked: runs checked test cases. - Run Disabled Tests: if selected, runs also the disabled test cases. - Shuffle Tests: if selected, runs test cases in a random order. - Generate an XML Report: if selected, generates a test result XML file. The Run Disabled Tests, Shuffle Tests, and Generate an XML Report options can be altered in the Advanced tab of the launch configuration.
https://developer.stg.tizen.org/print/22781
CC-MAIN-2020-29
refinedweb
790
64.61
Bitcoin Whitepaper, Beautified Programmers often use tools to beautify source code that is otherwise difficult to read. Beautifiers come in many forms: These tools are all automated. Just copy/paste your text in one side, click a button, and a pretty version of your code shows up on the right side. When it comes to whitepapers, we’re out of luck :-( So, I took The Bitcoin Whitepaper and beautified it manually here. Feel free to add comments in that Google Doc. No information is missing or semantically modified (except for the error I discovered and corrected)… It’s just reformatted to be easier to read and comprehend. Enjoy! My Observations Below are my observations from beautifying the Bitcoin Whitepaper: I. Satoshi Nakamoto is a: 1. Human While manually prettifying the Bitcoin Whitepaper, I noticed an error in section 3 . “To err is human, to forgive is divine.” - Alexander Pope 2. Genius Satoshi was the first to combine cryptography, networking, game theory and economic incentives to create trust in a trustless, distributed environment to support a secure digital currency. His genius was not in creating a digital currency, for that had already been attempted many times; it was in creating a system of economic incentives, based on cryptography, to drive a purely peer-to-peer version of digital currency that would allow online payments to be sent directly from one party to another without going through a financial institution. This diagram from section 10 (Privacy) shows how Bitcoin breaks the flow of information only to identities; whereas, the traditional privacy model hides everything from the public: 3. Whitepaper Expert The Bitcoin Whitepaper is a great study on the correct way to write a Whitepaper. Frequently in the paper, Satoshi makes a factual statement, for example: And then provides supporting statements to provide insight into how it works: This format is concise, clearly describes business processes, and tells a good story. LaTeX was the most likely software used to create the document, which is apparent when we observe the correctness and accuracy in the mathematical equations and diagrams. Who used LaTex at the time of the writing of the Bitcoin Whitepaper? 4. Sexist Male Satoshi regularly used “he” (with no mention of “her”). 5. Helpful Realist Satoshi is helpful and realistic when he suggested that businesses that receive frequent payments will probably still want to run their own nodes for more independent security and quicker verification. 6. Cypherpunk The Cypherpunk Manifesto begins:. … When my identity is revealed by the underlying mechanism of the transaction, I have no privacy. I cannot here selectively reveal myself; I must always reveal myself. Therefore, privacy in an open society requires anonymous transaction systems. Since the Bitcoin protocol defines a means of eCommerce whereby parties participate in public yet completely anonymous transactions (free from the control of a central authority) it appears that Satoshi was influenced by the doctrines of the cypherpunk movement. Looking beyond the white paper to Bitcoin’s first ever block, i.e., the Genesis Block, we find the following text: The Times 03/Jan/2009 Chancellor on brink of second bailout for banks This was a clear indication of Satoshi’s opinion of the instability caused by fractional-reserve banking, central banks and the financial crisis of 2007–2008. 7. Cryptoeconomics & Mechanism Design Expert Cryptoeconomics combines cryptography and economic incentives to design decentralized protocols and applications. Mechanism design is a field of economics that studies how to design protocols to incentivize rational actors to behave in desirable ways. Mechanism design is the antipode of game theory. In game theory, we start with the game and analyze its outcomes according to the enjoyment derived by the player from their choices. In mechanism design, we start by defining desirable outcomes and work backwards to create a game that incentivizes players to act in ways that lead towards those outcomes. Satoshi incentivized the Bitcoin miners to process transactions by providing them with two means of income: - A block reward for solving the POW puzzle (it’s currently 12.5 BTC per block, which is currently $88,727.00) and - A transaction fee for processing the transactions in that block that the miner adds to the Bitcoin blockchain. (Most transactions cost at least 0.5 mBTC and there are usually between a few dozen and 3,000 transactions per block. To see the latest blocks go here.) “The iron rule of nature is: you get what you reward for. If you want ants to come, you put sugar on the floor.” - Charlie Munger 8. Investor, Trader, and Entrepreneur Investors working in the field of high frequency trading would be most familiar with the concept of the Binomial Random Walk, as described in section 11 (Calculations). Satoshi understood that financial markets are not like a drunk, randomly staggering his/her way home; they can be manipulated by the profit-driven human psychology. 9. Insatiable Learner Without being a voracious reader and thinker, could Satoshi have devised such an ingenious, peer-to-peer version of electronic cash that has changed the world? 10. C Programmer Of the languages I’ve used in my professional career I can think of several that would be likely be more appropriate to use for a coding example rather than C, including: C is great for performance, not so great for illustrative purposes for a wide audience. In other languages, we can accomplish the same work with a lot fewer lines of code. Note that the code snippet presented in the paper does not show the entire C program. I wanted to run it for myself to prove that the numbers presented were, indeed, correct. Here’s the complete program: /*************************************************************** * bwp-calc -- Calculations for the Bitcoin white paper * * * * Author: Lex Sheehan * * * * Purpose: Demonstration of Bitcoin white paper calculations * * * * Usage: Click the <Execute> button and see Results below. * * * * Notes: * * * * For a deep understanding of cryptocurrencies and * * blockchain technology (including Ethereum) ... * * Register at cryptocurrencies.developersclass.com * ***************************************************************/#include <stdio.h> #include <math; }int main() { int num = 10; int z; double p; double q = 0.1; printf("q=%f\n", q); for (z=0; z <= num; z++) { p = AttackerSuccessProbability(q, z); printf("z=%i P=%f\n", z, p); } int zTimes5; q = 0.3; printf("\nq=%f\n", q); for (z=0; z <= num; z++) { zTimes5=z*5; p = AttackerSuccessProbability(q, zTimes5); printf("z=%i P=%f\n", zTimes5, p); } puts("\nSolving for P less than 0.1%...\n"); q = 0.1; puts("P < 0.001"); for (q=0.10; q <= 0.45; q+=0.05) { p = 1; for (z=0; p >= 0.001; z++) { p = AttackerSuccessProbability(q, z); //printf(">> q=%f z=%i p=%f\n", q, z, p); } // P is now >= 0.001, so take previous value of z printf("q=%f z=%i\n", q, z-1); } }// share URL: jdoodle.com/a/CNb // embed URL: I put the results in an online C compiler tool. Check it out for yourself and run the code here. The output of running this code shows us that the probability of double spends drops exponentially to zero as the honest mining majority finds more blocks than potential attackers. II. Error Discovered in Bitcoin Whitepaper Yes! I found an error in the Bitcoin Whitepaper, albeit a minor one. In section 2 . First, let’s examine what a timestamp is and when it is created… The timestamp is created at the moment the block of Bitcoin transactions is created. It is an Unix timestamp, which means it is an integer number. We can see it in the block header structure below: Next, let’s examine what a Unix timestamp is… Bitcoin uses Unix timestamps, which are numbers that represent the numbers of seconds since Thursday, 1 January 1970. We can use the developers console in our web browser to create a timestamp and print it out in an ISO8601 string format: We can clearly see that a timestamp does not include another timestamp, or anything else for that matter; it is simply a number that represents the time (in seconds since 1 Jan 1970) at which the current block was created. It’s the block header that includes the hashPrevBlock field. Now, let’s examine what a hash consists of: blockHash = SHA256Hash(Version, hashPrevBlock, hashMerkleRoot, Time, Bits, Nonce) We see that a block’s hash is the result of sending all the data from the block’s header fields into our hash function; the result is a 64 byte string that looks like this: 00000000000000001588d80f3cb1d593cb198f485aef33ca926b58a62bcceda8 We can use this tool to see that block hash and the other data that comprises it. Clearly, Satoshi Nakamoto meant to say: Each block includes the previous timestamp in its hashPrevBlock header field, forming a chain, with each additional hash reinforcing the ones before it. To say … Each timestamp includes the previous timestamp in its hash, forming a chain, with each additional timestamp reinforcing the ones before it. … is simply inaccurate. Therefore, it’s the block hash that contains the previous timestamp and that’s what effectively links the blocks together (not the timestamp value). The best visual explanation that I’ve found of how this works can be found here. There, you can see what happens; however, if you want to understand how it works, nothing beats implementing it yourself in your own blockchain. That’s exactly what we do in the upcoming Cryptocurrencies Developers Class. Note that this class is conducted in person, in a small group seminar style. As such, places are strictly limited. Other Bitcoin Whitepaper Mistakes Implementation Inconsistencies Others have cited mistakes, see below, but those mistakes are realized only after comparing the implementation to the words in the white paper. Note that this article is limited to observations gleaned solely from The Bitcoin Whitepaper. Below is a link to a description of known problems in Satoshi Nakamoto’s paper, “Bitcoin: A Peer-to-Peer Electronic Cash System”, as well as notes on terminology changes and how Bitcoin’s implementation differs from that described in the paper. harding/bitcoin-paper-errata-and-details.md Mathematics Notation Mistake Here, these mathematicians argue that the distribution of the number of blocks mined by the attacker should be called a Negative Binomial Distribution, rather than the Poisson law. They claim that luck should also be included in the calculations, but note that it really makes no difference in the end. In either case, the probability of double spends drops exponentially to zero as the honest mining majority finds more blocks, i.e., Bitcoin still works properly either way. The Reference Client Mistake Here it’s claimed that the biggest mistake of Bitcoin has to do with its source repository, where that one source repository defines how the protocol is implemented. The repository owners can change the block size, say whether Bitcoin addresses begin with a “1” or a “5”, etc. Nodes that run the Bitcoin protocol download client software from that single repository and generally accept it as as the reference client. This can be considered centralized form of development. While this may be true, this is more of a deployment detail than a flaw in the white paper. III. Unanswered Questions Here’re some questions you may have after reading the Bitcoin Whitepaper: - What exactly is a Hash function and how does it work? - What really happens when miners solve those cryptographic puzzles? Doesn’t this imply that more and more energy will be consumed by more and more powerful CPUs over time? If so, how much heat do all these CPUs generate and how does this impact our global climate? - How does the consensus mechanism of rules and incentives actually work in practice? - Are RESTful APIs used in a peer to peer network, if not how do the nodes in the network communicate with other? - Aren’t there a Private Keys to go along with the Public Keys? (How are they created and stored?) How does the cryptography actually work? - How does the 51% Attack work in practice? Can Bitcoin be broken? - How can we trust the timestamp from trustless peers in a distributed network? How does that impact the Bitcoin protocol? - Why is the block hash 64 bytes, whereas the other hash values are 32 bytes? And why does it begin with a bunch of 0’s? - Is Bitcoin security determined more by the cryptographic proofs or by having a majority of honest nodes? - What is a practical example of a multi-input (or multi-output) transaction? What questions do you have? (We’ll likely cover them all in our interactive class.) IV. Gaining Understanding and Confidence With all the scammers and talk about the Boom and Bust of Bitcoin, now more than ever is the time to deeply understand Bitcoin; not just the What’s of cryptocurrencies, but the How’s and the Why’s. For me, the first time I really understood how Bitcoin works was after much effort: - Reading and studying Bitcoin for 15 weeks - Conferring with experts in the field - Programming peer-to-peer networking (Using Go and libp2p) - Creating and coding a gossip protocol (Of peers periodically picking and communicating its new favorite book) - Leveraging public key cryptography - Building Merkle Trees - And building a blockchain and my very own cryptocurrency: LexCoin If you really want to “get it” that’s one way (the hard way). The Solution The easy way is to take my Cryptocurrencies Developers Class. Join me in class where I Thanks, hope to see you in class! — Lex Sheehan Author Learning Functional Programming in Go Instructor cryptocurrencies.developersclass.com Blogger lexsheehan.blogspot.com Twitter @lex_sheehan LinkedIn lexsheehan This article originally appeared at Bitcoin Whitepaper, Beautified. This work is licensed under the Creative Commons Attribution 3.0 Unported License.
https://medium.com/@lex.sheehan/bitcoin-whitepaper-beautified-699423935ed
CC-MAIN-2019-35
refinedweb
2,286
53.61
I am trying to calculate the number of steps executed for the following nested loop specially for asymptotic growth. Based on the number of steps I will derive the Big O for this algorithm. def get_multiples(list): multiple = [] for o in list: for i in list: multiple.append(o*i) return multiple The way I have calculated is as follows (list consists of large number of elements = "n"): Assignment statement (no. of steps = 1): multiple = [] Nested Loops: for o in list: for i in list: multiple.append(o*i) In the outer loop the variable o is assigned n times. Each time the outer loop executes, first the variable i is assigned n times, then the variables are multiplied n times and finally the list is appended n times. Therefore the no. of steps = n*(n+n+n) = 3n2 Return statement (No. of steps = 1): return multiple Therefore the total no. of steps = 3n2 + 2 However the correct answer is 3n2 + n +2. Apparently the execution of the outer loop takes additional n steps which is not required for the inner loop. Can somebody explain to me what did I miss ? It does not make a difference to complexity since it will still be O(n2) I think that the correct way to calculate the nested loop is as follows: The number o is assigned n times. the number i is assigned n2 times, o*i is calculated n2 times, the append function is called n2 times. therefore n + n2 + n2 + n2 = 3n2 + n add it to the rest and you get 3n2 + n + 2
https://www.codesd.com/item/number-of-steps-in-a-nested-loop.html
CC-MAIN-2018-39
refinedweb
265
72.36
Agenda SW: Would it be better to stick with the plan to have a telcon with Aria tomorrow, or maybe on the 13th of March? TVR: I would prefer to wait, and encourage Tim to send an email to Michael Cooper clarifying his position, which will make the later discussion more productive. TBL: That's OK with me. SW: I will tell Al Gilman that this is what we suggest doing. <ht_vancouver> HT: I would like to discuss . I'm sorry it took so long for me to prepare this. SW: Could you give background for our new members, please? HT: This goes back at least 2-3 years. JR: Actually, the issue is older than that. HT: Despite the lines in WebArch that say "use http-scheme URIs for everything", there's quite a bit of energy behind using new URI schemes, e.g. in the Library Science community, and other contexts where persistent URIs are a big concern, and also in lots of other places. The Government of New Zealand is using new URN subspaces e.g. for identifying namespaces. A new and widely publicized proposal for a new URI scheme called XRI also drew attention to this. We had a request for the TAG to look at this. Dave Orchard and I did early drafts. The feedback was: this will only convinced those who are already convinced. There also was useful discussion with the community that was proposing lsid as a new URI scheme. I pointed them to John Kunze's work on the ark scheme, which uses http. <DanC_lap> John A. Kunze HT: It seemed that the XRI work stopped short of full Oasis Recommendation-like status (can't remember what Oasis calls that). XRI was redesigned to shift focus away from persistent identifiers and toward providing names for things and individuals, much as you'd need for RDF. There is a relative XRI =henryt that is defined as equivalent to xri://=henryt DO: These are not really URIs; they don't meet the syntactic constraints. HT: Well, they're like IRIs. Those aren't URIs either until you escape them, and then they can be. Turns out they are publishing new drafts, labelled 2.0, which are nearing what we would call "last call" status. We the TAG noticed these late, and sent some last minute questions. I have some reason to believe they are preparing a response to us. They also appear to be working on some new drafts. AM: What is the problem they are trying to solve? HT: I've been trying to understand that. Scribe thinks the draft in question is probably: HT: Note that Dave and I have more or less split the TAG draft. I've worked on some parts; he's worked on others. DO: Ashok asks a good question. I think reasons include 1) persistence of resolution to a resource and 2) guarantee that the identifier is persistently assigned over time AM: Does that mean that an XRI designating, say, the current director of W3C couldn't eventually resolve to Tim's successor? NM: No, I >think< the concern is that someone doesn't grab your XRI if, say, someone else grabs what you thought was your DNS registration. <Stuart> offers individual i-name registrations for $12 per year. HT: IANA has some procedures that I, Henry, can go through if I'm unhappy with what people are doing with, say, DNS HenryThompson.name. They may have that goal too, but last I looked, the exact escalation mechanism was listed as TBD. <Stuart> DO: They have acknowledged that there is resolution of say example.foo.com, which is in some sense resolved by asking foo.com what example.foo.com is, and then doing similar things with paths. They seem not to like the split between DNS (.) resolution vs path (/) resolution in URIs. XRI's can more uniformly use (/). HT: I'll get to that later. My document, which we're about to discuss, is all about delegation. I've come to believe that's the substantive issue. SW: Last I looked, if I want =skw, I have to pay $12 year. Not clear to me what happens next year if I don't pay. TBL: Doesn't seem functionally different from the DNS story. HT: Yes, I'd like to move on to discussing the draft TAG finding, which really isn't best seen as a critique of XRIs. I think there are general concerns of the communities that desire the two persistence properties we've discussed (persistent mapping to representations, and also a uniform mechanism for naming and accessing metadata). I'm not going to discuss the latter, in part because the TAG is ramping up discussion of that under ISSUE-57, httpRedirections-57. Still, we won't have a complete answer to the concerned communities until we figure out the metadata bit. There are two issues relating to persistence. 1: domain names aren't owned, they're leased. Still, it's the only universally available lookup mechanisms. JR: On the Internet. DC: There are also things like Freenode, which is a P2P system. HT: OK, so by the way, we are talking about naming system for the Web, and retrieval is fundamental to what we're discussing. I believe that Ray Dennenberg, by contrast, specficially wants a system for which retrieval is known NOT to be possible. I'm not discussing that now. TBL: Libraries have the interesting characteristic that in many cases, only 10% of materials put into the libary are ever checked out. JR: But, do you know in advance which 10%? HT: Nobody ever expects the network effect :-) Pretty much all of these schemes are a combination of lookup and hierarchy, and with the possible exception of Freenode, most of the ones we see use DNS as the lookup bootstrap. IANA has very fundamentally sound reasons for leasing not selling. It's not clear to me there's a way around that. TBL: Can you elaborate? HT: No. I should say, I haven't studied it enough to fully justify what I've said, but it's my intuition. <DanC_lap> "Full domain ownership" -- HT: I think it's important that we tackle this for the foundational domains of the Web itself. I think we need a holding company that has a legal right to inherit names that others fail to keep. TBL: The legal contracts are tied to to the top level domains, like ".org". I'd like to start a TAG discussion, sometime, of what the requirements would be. That's not a general solution. NM: What did you mean by foundational. Is ibm.com in it? HT: Could be a false assumption, but I'm assuming there's a category difference for the organizations that have on the Web documents that are not only on the Web, but constitutional of what the Web is. TBL: We need a name for those. NM: So, you're worried that if iana.org gets taken, then the list of registered scheme names can get hijacked? TBL: I think the role of MIT libraries, the Louvre etc. in the social system of maintaining the archives of the world's technical material, e.g. Microsoft Vista manuals, is an important piece of this puzzle. JR: Important yes, but different. HT: I agree. (Scribe is falling behind Tim, a bit) TBL: Having the manuals is important. HT: Having the manuals isn't the issue, but having them at the same URI is. That said, I don't want to have this discussion today. I have other things I want to discuss in this agenda slot. I'm focussing on communities that want naming conventions with persistence characteristics that meet their needs. That includes Life Science Identifiers (LSID), references to scholarly papers, etc. They are all nervous about, and have some bad experiences with, the single point of failure that's involved in the DNS lookup step. JR: Well, I think they're wrong about that concern. HT: You can certainly get fault tolerance in the moment with failover to multiple machines. The deeper problems are the social ones. The owner of a name may, for various reasons, stop providing access to representations of resources for which they have been responsible. Delegation is the name for the answer in general, but I'm exploring two flavors of solution. ... 1) Centralzation: put all your eggs in one basket and watch that basket. Everyone agrees there will be one domain name. The group will work really hard to make sure resolution works in perpetuity. There is still a nonzero possibility of trouble with the single point of failure. There may also be quite complex contractual frameworks required to ensure that the expectations are properly agreed to. Another concern I've heard is that there's a loss of "branding", in that you lose the opportunity to "advertise" another organization in the DNS part of the URI string. ... 2) Delegation as Replication: two or more lookups are done, before you get to the hierarchy part. "If just one of us survives, we're OK". Consider an ARK example. Each URI references one of the DNS names, e.g. If either Berkeley.edu or the other site survives, the resolution works. You could use ark.org NM: To be architecturally sound, don't you need to start with as the prefix. Otherwise, what happens if I manage to register berkeley.edu to myself? TBL: Not an issue, Berekely.edu won't be stolen. NM: But I thought the whole point of this complexity to deal with the case where it WAS stolen? HT: Noah's right, I think. Note that this deals with the branding concerns, at least in some ways. WebArch says "A URI owner SHOULD NOT associate arbitrarily different URIs with the same resource." BUT, I don't think swapping out the DNS names in this scheme is really arbitrary. So, doesn't violate WebArch. TBL: You'll sort of wind up with a new protocol built on top of HTTP. Not totally bad, but a tradeoff. HT: Yes. TBL: Don't John at Berkeley and others share a responsibility for agreeing on serving the data? HT: No, not for serving the data. Only for preserving the stability of the name mappings. NW: What have we gained? If University of Edinburgh goes out of business, I buy their DNS name, and serve bad content. How does this scheme help? HT: Policies under which control is taken away are hard to formulate. NM: I understand how Norm can set up his challenge, I don't know how a client will pick Norm's content vs. the intended. HT: You still need replication of representations. NM: Berkeley and Edinburgh are not symmetric. Edinburgh is a data server. If it gets taken over, then the core managers (Berekeley) agree that Edinburgh isn't trustworthy and route around it. <DaveO> I *think* that Tim's point is that the 2nd level name can evolve and can transition to a new organization.. HT: Regarding Noah's challenge about Berkeley.edu, in this example being hijacked, the only thing I can see to say is that the clients may know to try some of the others that have joined with Berkeley in maintaining these. You can also put a hash code in the URI. That's the only way to ensure that "that which is named" doesn't change. Point 3. Centralize naming, distributed storage. DC: like purl.org. HT: Yes, I think so. TBL: I think the Akamai approach is interesting. You hash URIs onto a ring that runs 0-1, into which the servers have arranged themselves. If that server doesn't have, you go around to the next one. You're just guessing, with very high probability, where you'll find a server for a representation. You can go straight to the data. <Zakim> DanC_lap, you wanted to point out that gandi.net sells (not rents) domain names DC: Going back to selling vs. renting, gandi.net offers to sell you not rent. NM: Do they have to "rent" at the next level up? HT: No, I think the next level up just brokers. TBL: I'm more and more convinced that we need a top level domain for doing buying vs. renting right. <DanC_lap> (there's a .museum tld; I wonder if it meets timbl's requirements) NM: Socially, I expect that people would finding calling it "museum" would be confusing, even if it otherwise was entirely suitable. <DanC_lap> (see ) HT: I agree with everything Tim said except the bit about needing a top-level domain. TVR: Where does that leave us as the TAG in terms of what we can actually do? <timbl_> See SW: The previous draft of our finding seemed pretty hard over on "use http URIs". You were going to reconsider in doing this draft. Where are we now? HT: I'd like to take this note and use it as a new beginning to reframe the finding. I'd like to reframe the substance of the finding as "here are the tradeoffs". I think we still can say, with modest cost that we identify, such as escaping requirements and round trips: "http can be used for all of this" <jar> Need a URI analog of HT: The subtext is that the costs of using http for doing these things are typically low enough, and the benefits are great enough, that we can recommend it. DO: Do you think you are addressing the need that the XRI folks have expressed? It's the same as the reason the WAF group is doing their things. It can be in an HTTP header or in a processing instruction. I think the XRI folks want to have the recursion through the path controlled by the creator of the document. HT: That's a social contract saying that the people at xri.org will respond to GETs by interpreting documents of that structure. DO: It's other organizations too. HT: Yes, it starts at xri.org. It's a social contract, but I'm not convinced the URI needs to start with an xri: URI scheme to make that practical. I'll have to think about how to clarify that what you do at each stage of the lookup process, you have have the control you want in the places you want it. SW: Are you saying your action's done? <timbl_> TimBL: Proposed that a new TLD is necessary. HT: No, I didn't do a draft finding. SW: And when you do a draft we'll get it back on our agenda. <timbl_> Timbl: the social contracs around the current TLDs are source of te current anxieties which give rise to these new systems like XRI. TBL: I would like to ask that the next version of the document will include how you would do it, and perhaps in yet another section how I would do it? HT: If you mean addressing the domain name persistence question, I think that's a different issue. TBL: I thought that to meet the goals of the paper, you have to discuss the social and other issues around top level domains. HT: We need to discuss offline. TVR: How can we as TAG influence domain name persistence? TBL: We can. TBL: We can propose all kinds of things. The TAG has status and influence regarding things like this. <timbl_> TimBL: The TAG could propose a new TLD and the cration of a new organization to run it. SW: BREAK HT: This draft was influenced by Tim's note on the Interpretation of XML Documents which points out that the recursive interpretation of XML documents is the natural one. Seems to offer a way into addressing semantics of documents with more than one namespace. The finding winds up focussing on the question of "what's the default processing model". Is there any sequence of steps that applications should as a matter of course perform before consuming XML documents. Later, we framed this as "If an author takes responsibility for the information in an XML document, what is she/he taking responsibility for." Consider, e.g. the case of a document I send you with XInclude statements in it. Am I communicating the raw infoset, with the include element itself, or is it better on balance to say that I'm taking responsibility for the document that results. When the inclusion is perform! ed? TVR: What about processing instructions asking for XSLT transforms? HT: Known to be an open question. <ht_vancouver> HT: The ones we prioritized as likely highest were XInclude, Signature checking, and Encryption. <Stuart> HT: We are discussing TVR: How deep do you go on nested inclusion? NW: Xinclude says you must go all the way down. TBL: Doesn't seem right. I you should say "the document means this", not "processors must do this". NW: I think the spec is a bit more careful about this. XInclude speaks of a synthesized infoset. I think it may do it quite declaratively. TBL: These specifications shouldn't say what processors do when there's an error. They should declare the interpretation of legal documents. NM: Yes. HT: This document defines a general notion of an elaboration signal. Also defined is a general notion of quotation. TBL: Tim has previously commented that quotation should be tied to particular XML vocabularies, not at the document level. (scribe isn't sure why this appears to have Tim speaking of himself in the 3rd person -- probably a mistake in scribing, but there is no better record of what was said) HT: Section 4.1 in this draft attempts to address this objection by saying that individual parts of documents can signal this individually. NM: The first sentence in 4.1 parses ambiguously. In particular, it can be read as implying that "documents are in namespaces". HT: Not my intention. I'll fix it. SW: Can we move past the history to the draft that's on the table? HT: Yes, I will. Note that section 6 hasn't caught up with the rest of the document. TVR: What if for whatever reason I start with a simple document not using namespaces? If I later Xinclude it in a document that uses namespaces, does it inherit the container namespace? NW: No. It stays not in a namespace. NM: I'd suggest perhaps factoring the mention of namespaces and why this finding will generally most useful with markup that's namespace-qualified, but then just referring to things like "quoting elements". <Stuart> Hmmmm.... I am vex'ed really by the globally scoped/locally scoped nature of qualified element names... ie. the significance even of a qualified element name (say it's content model) *can* vary by structural position in a document - cf SCUDs for assiging URI for element and attribute names. TBL: Yes, and sometime I would like to find a way to discuss the idea we mentioned yesterday, of establishing default prefixes based on media type. Is there a lot of XML being used out there that's not using namespaces? NW: A lot of the messages being passed for RESTful APIs aren't using namespaces. NM: OK, that convinces me a bit more that factoring the mention of namespaces might be a good thing. These folks doing RESTful messages might still find the finding helpful in clarifying quoting elements, etc., even if those are not qualified. TBL: Yes. When things aren't qualified, the semantics have to be grounded in prior agreements, and that applies to things like quoting too. <DaveO> DBO: I can't remember seeing SOAP messages with bodies that don't have a namespace. HT: If you're elaborating an infoset, what counts as quotation depends on where you are in the tree. SOAP may be an example. A SOAP intermediary needs to not elaborate the quoting, endpoint does. TBL: That's a complicated way of looking at it. The message >has< one semantic. Maybe or maybe not one bit of software or another bothers to get at all that semantics by doing the expansion. <DaveO> DBO: <DaveO> DBO: Also, WS-I Basic Profile requires a namespace in soap body: <DaveO> DBO: R1014 The children of the soap:Body element in a MESSAGE MUST be namespace qualified. <Norm> Wow. I had no idea. <Norm> Not unreasonble at all. HT: I think we need to allow for different answers in different consumers. TBL: Strongly disagree. NM: I'm on the fence. Imagine a message. Some headers are encrypted with a key that's for document management software. That software never sees the document decrypted, just the control headers plus an encrypted blob. Tim as the ultimate receiver sees the opposite; his softwrae has the other keys, so never sees the management headers, just the body. TVR: (scribe got behind something about multipart) I like Tim's answer. <Stuart> HT: I have tried to explore a compositional semantics of XML in the document . As an example, I remind you of RDF, which has a convention for embedding XML as XML, inside an RDF/XML document. This isn't infosets and elaboration yet, just the particular interpretation of XML burried in RDF/XML. TBL: Too bad that the elaboration signal in this case is an attribute, but it still works. HT: Yes, I just used a partial function from element names + attribute sets to consequences. It accommodates signals in attributes as well. <DanC_lap> (oops; I didn't manage to do read this bit on composition, despite my action to do so. ) (Henry then summarizes the document) (Henry explores the definition (15) in the document. Scribe finds it impractical to retype all of this mathematical notation on the fly). TBL: The "exclusion" parameter seems to be too much complexity. DO: In a financial transaction, you might want to have the same markup (encrypted credit card number) elaborated in some contexts and not others. <Zakim> timbl_, you wanted to intosoduce the use case of query on an elabrated infoset which only slects certain things, eg credit card # NM: Whether to try to decrypt the credit card number might depend not just on the document contents, but also on whether you are the auditing application or something else. TBL: Imagine an XPath recursing down looking for a customer number. Never happens to trip over the credit card number. If I get to where I need the key and I don't have it, that's a bug. DO: No, it's not a bug. Noah had it right. It depends on what Hal Lockhard calls the "situation". You may be looking at the message, and your application has to store it in the database encrypted. With a different application, you may need it plain text. That information about the situation can't by entirely found in the document. So, the elaboration will be influenced by the situation. TBL: That's the X in the function. HT: I'm afraid I didn't prep well enough for this. I've actually addressed this. "There is at least one flaw in this, where I said X is unlikely to be constant through the descent." NM: Not just varying through the descent, also could be constant for a given tree traversal (I'm storing the document) vs. constant but different for a second traversal (this application always cracks open the credit card number) TBL: I prefer to view Dave's case as lazy evaluation. There's one true semantic for the elaborated infoset, independent of whether every application chooses to get at it. <Zakim> DaveO, you wanted to mention the "situation" is coming up elsewhere and to mention that maybe Hal can help Henry DO: I think perhaps Hal Lockhart could be helpful. AM: I'm thinking of standard documents that don't have these quoted things, and the situations. I'm thinking about a separate file or instruction that will tell you how to do the interpretation. HT: A long time ago, we separated out talking about things like pipeline languages. Here, we're talking about the handful of things that apply across the board <scribe> scribenick: ht_vancouver NM: This relates to the self-describing web stuff. There's this context-independence invariant which a lot of the web depends on. What we're trying to do is derive some very general [elaborations] which go with _any_ application/...+xml message body Consider a google crawler that knows nothing but what it gets at the end of a URI -- where would it look for a separate 'situation' document? <scribe> scribenick: noah <Zakim> jar, you wanted to wonder whether an algebraic (equivalence) approach would be helpful JR: I think you get in trouble if you talk about how things "are processed". Another approach is to talk about equivalences, e.g. between the initial document and the ones with the inclusion done. We then say do the same thing to both versions. <timbl_> +q <timbl_> +1 HT: I'm not sure whether that makes sense. What I'm trying to do in the pdf document is to state what the equivalences are. JR: If I deal with both things, I should deal with them in the same way? NM: Hmm, I'm not sure I see how to "deal with" the encrypted form of, say, an inventory list in the same way as the unencrypted form. SW: How many people have read this (seemed like about half of those present) <jar> jar was suggesting a formal framework of the form: if A and B are equivalent, and if a processor handles both A and B, then it *must* handle A and B in the same way (sorry, this is online thinking, always a risk) HT: Please give me an action to a) revise this document based on feedback and b) how to combine with the Elaborated Infoset draft finding. SW: What might we "find" for our TAG finding? <DanC_lap> trackbot-ng, status ACTION Henry S. to a) revise composition.pdf to take account of suggestions from Tim & Jonathan and feedback from email and b) produce a new version of the Elaborated Infoset finding, possibly incorporating some of the PDF <scribe> ACTION: Henry S. to a) revise composition.pdf to take account of suggestions from Tim & Jonathan and feedback from email and b) produce a new version of the Elaborated Infoset finding, possibly incorporating some of the PDF <trackbot-ng> Created ACTION-113 - S. to a) revise composition.pdf to take account of suggestions from Tim & Jonathan and feedback from email and b) produce a new version of the Elaborated Infoset finding, possibly incorporating some of the PDF [on Henry S. Thompson - due 2008-03-05]. ****ADJOURNED FOR LUNCH**** Collection of comments <Norm> scribenick: Norm <DaveO> I made comments in June in DaveO: I wanted to see some microformats in here. Both done right with the profile URI and then a discussion of how it's often not used correctly. ... The theory is that the microformats are grounded in URI space and can be self-describing, I think that many of them are, in fact, not, and many implementations also are not. ... We should point out the theory as well as the practice. Noah: I did see the comment in June. I think you can look at microformats in two ways. <Stuart> topic Noah: One, where short names are used in data values. I did try to tell that story. ... My question is, is there enough value in microformats to add it. It's hot this year, but will it be relevant later. <DanC_lap> (on microformats and URI-based extensibility: ) DaveO: I think microformats may be a prominent technology that could be used in a self-describing way, but we need to give advice to encourage them to do so. ... The microformats folks aren't really pushing this. Noah: I think there's a choice, I've been trying to avoid getting into every particular technology. ... I can see two reason to add microformats, one is that it teaches a new principle; (and Noah didn't say what two was) TimBL: Yes, I think we should tell that story. Raman: I agree with Tim too. DaveO: I also had a question about the use of RDF. I think the statements about RDF are too strong. Noah: I think you're taking that a little out of context. <DanC_lap> (noodling... "RDF [RDF] plays an important role for creating self-describing Web data resources, and for integrating representations rendered using other technologies such as XML." <DanC_lap> ) General agreement that the text reads as if it says that if you're interested in self describing web, you should use RDF. TimBL: I suggest that RDF be introduced as a common data model for integrating and processing data from many sources, and as a reference model for self-describing data. Some discussion of the intent. <DanC_lap> ("for data, if you can turn it into RDF, you're home," timbl just said. but graphs/relations aren't well supported in ordinary programming languages. s-expression-shaped things like the XML DOM or JSON feel more like "home" to a python/javascript programmer.) Noah: What I meant it to read as was "RDF does two things, 1) sometimes it's how you store your data and 2) even if you're using some other technology, you can still use RDF as a model. ... Everyone is happy with the last para before 4.3.1? No objections. Stuart: I have some levels of discomfort. JAR: Do you have a list of negative examples in mind? Noah: I think that microformats were in part a negative, which is why I chose Atom JAR: Negative examples are really helpful, but I can imagine why you didn't want to put them in. Noah: If I add microformats, should I drop Atom? ... They seem to be teaching the same things. DaveO: I like the Atom example; keep both. Noah: Ok DaveO: I think the Atom one is straight-up, they did it right, the microformats one is less clear. ... so having both would be valuable. ... There's only one microformat that mentions the profile, hCard, and none of the tools that generate them actually generate the profile. JAR: Predicates being the same as another takes you out of OWL-DL. It'd be nice to avoid that here as it doesn't seem necessary. Noah: I'll follow-up with you offline for a little OWL tutorial. <DanC_lap> (short version: change sameAs to equivalentProperty) JAR: RFC 2119 only applies to specifications. Personally, I find the SHOULDs a little off-putting in the absence of some expectation-setting. DanC: The GPNs need work. Noah: Should I just kill 2119? General sentiment that we've abused it in the past. Some discussion of whether the use of SHOULD/MUST is appropriate. Ashok: In specs, these are conformance statements. TimBL: A protocol spec is a contract, it says if you do these things, then you'll get these invariants. ... This is a lot like a spec; I'd like to see it made more spec-like. Noah: I don't think SHOULDs are about conformance. ... There are no MUSTs in the document. JAR: There's a distinction that I think is missing. The way I see a spec is a little different; it's like a game; it's a set of rules. You voluntarily enter the game, and when you do you take on a bunch of obligations. The spec is saying, if you're playing this game, then you should do these things. ... That's perfectly clear. If you want to say you're playing the game, then you must do these things. ... In this document, I'm missing the first part. Noah: Do you find this in other findings? JAR: Yes, it's not just about this finding. Noah: Then maybe we should come back to this in a broader context. <timbl_> Within a given community, there is a set of standards S, If within the community, all clients understand specs from S, and all server s express themselves uniquely using members of S, then clients will understand servers. The TAG has the dulty I suspect to ennumerate S for the web at large. <Zakim> DanC_lap, you wanted to lean toward speaking to microformats as SXSWi is coming up; if it's too ephemeral, maybe a blog item instead or in addition? and to wonder if making new Noah: I've put a lot of time into this, some of the comments are about the content and I'm happy to work on those for as long as it takes, or drop it, JAR has made a different comment. This finding is like the others but he has concerns about it. I'd have like to hear that before the first or second draft. If we want to change the style of fidnings, let's start the next one. <timbl_> I agree with DamC. It is missing 4.1 and a half, MIME-type based extensibility. DanC: Most W3C technologies shouldn't aim to be in the ubiquitous set; they should aim to be in the extension set. ... The things I like are: <DanC_lap> "... when such self-describing resources are linked together, the Web as a whole can support reliable, ad hoc discovery of information." <timbl_> DanC: GPNs not in support of a principle seem out of place to me. <DanC_lap> self-describing resources promote reliable <DanC_lap> [14:51] <timbl_> q? <DanC_lap> [14:51] <Zakim> ... ad-hoc discovery of information General agreement that a principle in this neighborhood would be a good thing. TimBL: We could do an interoperability thing here. There was a web services event that did this. ... We could set the bar; at the moment, using my model of the set of standards, the standards which are shared are DNS, HTTP 1.1, HTML 4, RDF, GRDDL, XML. <DanC_lap> (the list tim's giving is, to me, given in section "2 The Web's Standard Retrieval Algorithm") TimBL: Maybe we do it separately for semantic web and presentation technologies <DanC_lap> (GRDDL is designed to work even if not everybody groks it.) <Zakim> timbl_, you wanted to note re RDF that it is also a standard for interoperability betwen applications TimBL: To be compatible with this profile, for example, all semantic web technologies must implement GRDDL. We could point out that the microformats aren't in this space because they're not implemented by semantic web tools. ... You could make it a lot more like a spec this way. <timbl_> TimBL: In the spec, I'd like to see that diagram or one like it. Some attempt to present both the diagram and Section 2 on the projector. DanC: Aren't these pretty similar? Noah: I think the diagram would be pretty frightening to a new comer. TimBL: The green bit is basically interpreting an HTTP response. ... Well, more or less. ... The pink box is presenting hypermedia Noah: I'm sure we could clean this up, but is it the level of detail that we want? TimBL: I think that sections 4.* are different ways of adding stuff. And in a way, the fact that the diagram is messy, indicates how many points there are that you can try to hijack this. The sequence of the diagram lets you point out that you can hook things in where the pointy bits are. DanC: Pointy bits? Flow? Some discussion of the arrows on the right hand side DanC: Ok. That's not self-evident. Noah: I think what I'm stressing about is that I had a different reader in mind. ... I've tried to write this for users that don't need all this detail. ... The web is different than your private network because it promotes ad-hoc discovery by people who don't know each other. ... This diagram is in the same space, but for a different audience. TimBL: The person I want to address is the one who is wondering whether to use RDF or a microformat. DanC: With that diagram? How? Noah: That's what I was thinking about when I wrote the text that I already got push back on. ... I'm very torn. There's a lot of good input here, but I'm not sure we're closing on this. <timbl_> Suggest: s/Good Practice: RDFa SHOULD be used to make information conveyed in HTML self-describing./Good Practice: RDFa will be usable be used to make information conveyed in HTML self-describing when and only when RDFa is an accepted recommendatio TimBL: I'm happy for the more rigerous version to be in the architecture of the semantic web activity. JAR: Yes, some of this could be expressed more formally. ... This document is saying things that are different from what an ontology of HTTP document would say. Noah: It shouldn't conflict. TimBL: Does it? JAR: We can talk more about what this AWWSW effort is doing, but this is more advice to people in the trenches. Ideally, the RDF also has that property, but we aren't there yet. Noah: This is inspiring to correctness, but not rigor. It's trying to show you some principles. <DanC_lap> (again: pointer to david booth's rules?) <DanC_lap> (aha... ) Some discussion of David Booth's rules, which we plan to discuss tomorrow morning. JAR: Just because you have a model, that's different from giving advice or direction about what mechanisms you should use. TimBL: The RDF rules say "these are the things that a client infers and therefore, it provides a protocol and expections at a much stronger level" JAR: But it's not prescriptive in the same way. It allows you to deduce that good practice is such and such, but it doesn't actually say that. DanC: It seems to me like the difference between saying "if you open that door, I'll be happy" and "would you please open that door" <Zakim> DanC_lap, you wanted to note that GRDDL is designed to be dynamically discovered, not ubiquitously known TimBL: The sequence in the bullets in 2, I think is important. We need something else to address in the semantic web. ... I'd like you to mention the fact that there's a crucial decision to make about whether or not something is an accepted standard. For example, right now, the jury is still out on RDFa. <DanC_lap> (to repeat: GPNs not in support of any particular priniciple seem out of place to me) Noah: Anyone else happy with that? DanC: No. I think GPNs without principles are awkward. <timbl_> Principle: A set of standards, shared betweem many readers and many writers, allow interoperability. Noah: The good practice is that it helps you follow your nose. <timbl_> Principle: Flexibility points in the architecture allow specifications, metadat and code to be found which allow the smooth extesion of the set of standards. Noah: I'm happy to do whatever makes sense, but I don't want to thrash. <DanC_lap> (I'm struggling with stuff like "the web will be a better place if you use GRDDL". I'm not sure why...) Henry: I'd prefer a middle way, which is you should separate this document at least conceptually into two parts: one is the self-describing old-fashioned web and two is extensions into the semantic web. Noah: That's why 4.3 is its own subsection, but clearly that's not working for you. <timbl_> I cannot see why GRDDL deosn't have to be in hte set of standards fro it to work Henry: All I'm saying is, the GPNs should say "GPN for the semantic web" or something like that. Stuart: I'd be reluctant to say that without being sure that the semantic web community is behind us. <timbl_> +1 to upgrade 4.3 as insert new 5 Henry: At the moment, I don't see any gradation and I think that would be useful for someone coming to this document to find answers about how to put information the web: this is the bare minimum, this is the sweet spot, this is the whole thing. Ashok: Who is this for and why should they pay attention? Noah quotes from the abstract. <Zakim> Stuart, you wanted to mention concerns wrt to intention and GRDDL/RDFa 'mined' triples. Broadly: language designers not web masters. Stuart: In the GRDDL/RDFa area, I'm a little uncomfortable with "mining triples out of documents". With respect to GRDDL/RDFa, you can look at in two ways: are the triples really there because the author put them in, or are you mining them out where maybe the author didn't intend them. Noah: I think its a non-issue; for GRDDL, you have to make it explicit. <timbl_> The self-describing web mean s that there is one meaning for each doeumcnt, so mining and experssion MUST be the same. It may be less clear for RDFa. Stuart: I'm not sure I agree, you have to really understand what the transformation is going to do in order to understand the statements. TimBL: I think documents have a context insensitive meaning. There's no difference. Because you used a standard mechanism to extract the information, the author must be held accountable for that information. Stuart: I hear what you're saying, but I find it a hard sell. <DanC_lap> +1 thesis is: each document has one context-free meaning (or at least: one meaning in the context of ubiquitously deployed standards) Stuart: Especially when the GRDDL may change. TimBL: I think that's a corner case. Noah: Is there a good story here about the distinction between explicitly putting in a GRDDL link in and having one implicitly. DanC: No, I don't think so. Some discussion of how GRDDL might be found. <timbl_> acktimbl_ <Zakim> timbl2, you wanted to suggest an opening for the new section 5 TimBL: I wanted to suggest that 4.3 should be a new section 5. ... With new introductory text. <Zakim> DanC_lap, you wanted to put my finger on something about MIME types and media plug-ins and to note comments on RDFa test case #1 DanC: I had these vague feelings about MIME types. Section 2 is pretty standard. Section 3 says use something analagous to "use PNG instead of some new format" ... Section 4 is new stuff. Noah: Yeah, I think may be we should drop 4.1 ... It's only indirectly about self-describing. TimBL: I think it's perfect. ... It says "don't" DanC: I would rather that this was more ordered: if you're thinking about a new URI scheme or a new MIME type, a new MIME type; a new namespace or a new media type, a new namespace. <DanC_lap> uri scheme costs more than new mime type; new mime type costs more than new namespace DanC: After section 4.1, then we get into RDF and the semantic web. ... I'm wondering about flash plugins and java apps. ... When you want to extend the capability of the web and you want to publish something that doesn't fit, what people do is flash, ... TimBL: Or they invent new attributes and then write scripts to do something with them. DanC: there's a whole bunch of stuff about the semantic web but nothing about the hypertext web. Noah: I thought we already talked about the hypertext web. ... The XML stuff winds up late because it was suggested that I tell the RDF/triple story first. Some discussion of the flash case and how it boiled down to a new media type. Noah: If follow-your-nose leads you to a new mime type, then all it means is that large numbers of users won't be able to understand the content. ... Mark Baker said don't use XML languages and I'd be grateful for advice on that point. *** break *** <ht_vancouver> 2001/tag/doc/nsDocuments-2007-11-13/docbook.n3 <timbl_> <ht_vancouver> java -Dpellet.configuration=file:$pd\\pellet.properties -jar $pd\\lib\\pellet.jar "$@" <timbl_> Tag findings += "CLASSPATHs considered harmfull" Some discussion of what the model described by the nsDocument finding is. Nature keys and purposes. TimBL reviews some of the RDF fragments in the Tabulator Scibe fails to keep up Henry attempts to describe the semantics of: <> purpose: validation [a nature:Object; nature: key ""; ... target <> ]; There seems no longer to be consensus that this is a good model. <scribe> ACTION: Henry S. to find the counter example that made it necesseary to make a terniary relationship [recorded in] <trackbot-ng> Created ACTION-114 - S. to find the counter example that made it necesseary to make a terniary relationship [on Henry S. Thompson - due 2008-03-06]. <Zakim> timbl_, you wanted to agree it is wirth putting in the pattern of inventing new attributes or stye say and imoplement it in jscript in the early stages or for old browsers. Stuart: One of the issues that led to this being a key rather than a class relation is that because the values varied so widely. Agreed. But not the issue Stuart: It still bugs me that the purpose arrow is the direction that it is. Henry: Yes, and I tried to fix that. Beginning of section 7. Stuart: RDDL is a directory with entries in it. The nature and purpose are on the entry. Those entries are quads, with namespace, nature, purpose, and resource. TimBL attempts to answer the question Noah asserts that the answer was to a different question <timbl_> 't match the picture Noah: This goes way back, the reason we chose this structure was because the same target might be viewed with two natures. ... At least as a thought experiment, does it sit any better if we call it nature:treatAs ... I'm not trying to tell you want it is, I'm telling you you may treat it as if it was a specific thing. If you view it hthrough this prism, you will be happy. <Zakim> DanC_lap, you wanted to ask ht to estimate cost of flipping purpose arrow DanC: What's the cost of changing it? Henry: High. First, it breaks the appearance of coherence with RDDL; second, it requires redoing the ontology. <Zakim> timbl_, you wanted to worry that the N3 deosn TimBL: The fact that purpose is class isn't clear from the diagram. Some discussion of whether or not this helps Stuart. TimBL: Can we agree that we'll remove the label 'purpose'? General agreement. <Zakim> Stuart, you wanted to disagree that flipping the arrow 'breaks' coherence with the RDDL 1.0 model. Stuart: Henry said "it would break coherence". I disagree. You examine the RDDL 1.0 document and I think you'll find that subject of a purpose is not a namespace. We look at 3.2 of the RDDL spec. Henry: Yes, I agree that sentence supports your position; but the fact that it's materialized as an xlink:arcrole supports my position. Considering: <rddl:resource xlink:type="simple" xlink: title="DTD for validation" ... arcrole="" ... role="" ... <h3> 7.4 Document Type Definition</h3> <p> A DTD <a href="">rddl-xhtml.dtd</a> for RDDL, defined as an extension of XHTML Basic 1.0 using Modularization for XHTML</p> </rddl:resource> Stuart: That is the directory entry; it has an ancillary resource, and it has a nature and a purpose. <Zakim> noah, you wanted to say I think the purpose is of the target Noah: 3.2 says "related resources may have a purpose" ... it does not say directory entries have a purpose. ... it could be that related resources have a purpose, not directory ... or it could be that both have purpose and we're talking about different ones ... and it could be that the wording is just sloppy ... It runs backwards, but the pair that is involved is the correct pair. TimBL discusses the task of working out what the attributes mean <DanC_lap> TimBL: ... and so actually the arcrole (purposes#validation) plays the part of predicate ... TimBL: We could ask the RDDL spec authors to clarify the thing that purpose applies to. Stuart: What is the subject of that arc? <Zakim> DanC_lap, you wanted to do a silent auction thingy Stuart: I would assert that it's the entry. DanC: Straw poll: ship it! two "yes" but not from the editor <Zakim> ht, you wanted to discuss what the subject was originally <DanC_lap> PROPOSED: to address editorial comments, e.g. purpose label, skw's comments on rddl informal ontology capturing Henry: I don't believe that any change to the topology of the ontology is necessary. But I'd be willing to take an editorial pass to improve the description of the way the RDDL is encoded. <DanC_lap> PROPOSED: to address editorial comments, e.g. purpose label, skw's comments on rddl informal ontology capturing, and publish as a TAG finding Henry: I'm in favor of that TimBL: Yep. noah, Norm: Yep. DanC: Any objections? jar, Raman abstain. RESOLVED to do as proposed.. <ht_vancouver> ACTION: Henry S. to improve the presentation of the way the ontology reconstructs RDDL 'purpose', and to attempt to address skw's concern about the subject of the so-called purpose relation [recorded in] <trackbot-ng> Created ACTION-115 - S. to improve the presentation of the way the ontology reconstructs RDDL 'purpose', and to attempt to address skw's concern about the subject of the so-called purpose relation [on Henry S. Thompson - due 2008-03-06]. ADJOURNED.
http://www.w3.org/2001/tag/2008/02/27-minutes
crawl-001
refinedweb
8,291
73.98
MR::IProto - iproto network protocol client IProto client can be created with full control of its behaviour: my $client = MR::IProto->new( cluster => MR::IProto::Cluster->new( servers => [ MR::IProto::Cluster::Server->new( host => 'xxx.xxx.xxx.xxx', port => xxxx, ), ... ], ), ); Or without it: my $client = MR::IProto->new( servers => 'xxx.xxx.xxx.xxx:xxxx,xxx.xxx.xxx.xxx:xxxx', ); Messages can be prepared and processed using objects (requires some more CPU): my $request = MyProject::Message::MyOperation::Request->new( arg1 => 1, arg2 => 2, ); my $response = $client->send($request); # $response isa My::Project::Message::MyOperation::Response. # Of course, both message classes (request and reply) must # be implemented by user. Or without them: my $response = $client->send({ msg => x, data => [...], pack => 'xxx', unpack => sub { my ($data) = @_; return (...); }, }); Messages can be sent synchronously: my $response = $client->send($response); # exception is raised if error is occured # besides $@ you can check $! to identify reason of error Or asynchronously: use AnyEvent; my $callback = sub { my ($reply, $error) = @_; # on error $error is defined and $! can be set return; }; $client->send($request, $callback); # callback is called when reply is received or error is occured It is recommended to disconnect all connections in child after fork() to prevent possible conflicts: my $pid = fork(); if ($pid == 0) { MR::IProto->disconnect_all(); } This client is used to communicate with cluster of balanced servers using iproto network protocol. To use it nicely you should to implement two subclasses of MR::IProto::Message for each message type, one for request message and another for reply. This classes must be named as prefix::*::suffix, where prefix must be passed to constructor of MR::IProto as value of "prefix" attribute and suffix is either Request or Response. This classes must be loaded before first message through client object will be sent. To send messages asyncronously you should to implement event loop by self. AnyEvent is recomended. Prefix of the class name in which hierarchy subclasses of MR::IProto::Message are located. Used to find reply message classes. Instance of MR::IProto::Cluster. Contains all servers between which requests can be balanced. Also can be specified in servers parameter of constructor as a list of host:port pairs separated by comma. Max amount of simultaneous request to all servers. Max amount of request retries which must be sent to different servers before error is returned. Delay between request retries. Constructor. See "ATTRIBUTES" and "BUILDARGS" for more information about allowed arguments. Send $message to server and receive reply. If $callback is passed then request is done asyncronously and reply is passed to callback as first argument. Method must be called in void context to prevent possible errors. Only client errors can be raised in async mode. All communication errors are passed to callback as second argument. Additional information can be extracted from $! variable. In sync mode (when $callback argument is skipped) all errors are raised and $! is also set. Response is returned from method, so method must be called in scalar context. Request $message can be instance of MR::IProto::Message subclass. In this case reply will be also subclass of MR::IProto::Message. Or it can be passed as \%args hash reference with keys described in "_send". Send all of messages in \@messages and return result (sync-mode) or call callback (async-mode) after all replies was received. Result is returned as array reference, which values can be instances of MR::IProto::Response or MR::IProto::Error if request was passed as object, or hash with keys data and error if message was passed as \%args. Replies in result can be returned in order different then order of requests. See "_send" for more information about message data. Either $message or \%args allowed as content of \@messages. Class method used to disconnect all iproto-connections. Very useful in case of fork(). For compatibility with previous version of client and simplicity some additional arguments to constructor is allowed: host:port pairs separated by comma used to create MR::IProto::Cluster::Server objects. Are passed directly to constructor of MR::IProto::Cluster::Server. Is passed directly to constructor of MR::IProto::Cluster. See "BUILDARGS" in Mouse::Manual::Construction for more information. Pure asyncronious internal implementation of send. $message is an instance of MR::IProto::Message. If \%args hash reference is passed instead of $message then it can contain following keys: Message code. Depending on this value balancing between servers is implemented. Message data. Already packed or unpacked. Unpacked data must be passed as array reference and additional parameter pack must be passed. First argument of pack function. Code reference which is used to unpack reply. Message have no reply. Is retry is allowed. Values of attributes "max_request_retries" and "retry_delay" is used if retry is allowed. Callback used to determine if server asks for retry. Unpacked data is passed to it as a first argument. MR::IProto::Cluster, MR::IProto::Cluster::Server, MR::IProto::Message.
http://search.cpan.org/dist/MR-Tarantool/lib/MR/IProto.pm
CC-MAIN-2016-40
refinedweb
815
59.3
Ticket #11577 (closed defect: fixed) save(x,filename) fails for pure Python objects for x if filename contains a dot Description (last modified by leif) (diff) (The summary actually is not completely accurate - there might be some Python object this works for that I'm not aware of.) If the filename passed to save() contains a dot, save() assumes that the user doesn't just want to dump the (pickled) object, but instead wants to call the object's save() method. I guess this makes sense in situations like save(g, 'mygraph.png'), but the code should fall back to dumping the pickled version (e.g. via try: ... except AttributeError: ... - suggested via IRC by leif) if the object has no save() method. leif also suggested checking if the file name extension is known - however I guess that we then should verify this with the object itself (e.g. it wouldn't make sense to save a graphics object to a .wav file) and not statically compare with a list of known extensions. sage: save((1,1), 'foo2') sage: save(Matrix(3,3), 'foo.bar3') sage: save((1,1), 'foo.bar4') --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /tmp/sagedebug/<ipython console> in <module>() /usr/local/sage/local/lib/python2.6/site-packages/sage/structure/sage_object.so in sage.structure.sage_object.save (sage/structure/sage_object.c:8156)() AttributeError: 'tuple' object has no attribute 'save' Here's the culprit: # Add '.sobj' if the filename currently has no extension if os.path.splitext(filename)[1] == '': filename += '.sobj' if filename.endswith('.sobj'): try: obj.save(filename=filename, compress=compress, **kwds) except (AttributeError, RuntimeError, TypeError): s = cPickle.dumps(obj, protocol=2) if compress: s = comp.compress(s) open(process(filename), 'wb').write(s) else: # Saving an object to an image file. # XXX This of course fails for plain Python objects: obj.save(filename, **kwds) Apply only trac_11577-jhp.v2.patch. Attachments Change History comment:1 Changed 23 months ago by logix - Description modified (diff) - Summary changed from save(x,filename) fails for some types of objects for x if filename contains a dot to save(x,filename) fails for pure Python objects for x if filename contains a dot comment:2 Changed 23 months ago by leif - Owner changed from ncalexan to was - Component changed from sage-mode to pickling - Description modified (diff) comment:3 Changed 23 months ago by leif What about simply adding .sobj above if there's no extension or the object has no save() method? comment:4 Changed 23 months ago by leif - Status changed from new to needs_review - Authors set to Leif Leonhardy Attached patch does exactly what I suggested last. comment:6 follow-up: ↓ 7 Changed 22 months ago by jhpalmieri - Status changed from needs_review to needs_work You shouldn't write to a file like "foo.bar" in a doctest. Instead, make sure you write to a temporary directory, for example using SAGE_TMP. comment:7 in reply to: ↑ 6 Changed 22 months ago by leif Replying to jhpalmieri: You shouldn't write to a file like "foo.bar" in a doctest. Instead, make sure you write to a temporary directory, for example using SAGE_TMP. Ooops, I thought doctests would be executed there, i.e. sage{-doctest,doctest.py} would cd to that before running Python. Changed 22 months ago by leif - attachment trac_11577-fix_save_to_filenames_with_dots.patch added Sage library patch. Adds ".sobj" to filename if the object has no save() method. (Corrected and improved version.) Based on Sage 4.7.1.rc0. comment:8 Changed 22 months ago by leif - Status changed from needs_work to needs_review Finally... The updated patch now adds another doctest and clarifies the description in the docstring. (I intentionally haven't changed it to enumerate all parameters as we normally do, as I think it is more readable as it is now, and compress=True should be self-explanatory. For some reason I don't get the correct default parameters shown when typing save? at the Sage prompt though.) I originally wanted to add import os.path from sage.misc.misc import SAGE_TMP filename = os.path.join(SAGE_TMP, "foo.bar") ... but then saw the other examples, so I guess a lot of doctests rely on - having SAGE_TMP already imported (by sage.all), and also - SAGE_TMP having a trailing slash (or os.path.sep), the latter being perhaps a bit more dangerous than the former. So I also just used SAGE_TMP + "...". comment:9 follow-up: ↓ 10 Changed 22 months ago by jhpalmieri - Status changed from needs_review to positive_review - Reviewers set to John Palmieri - Description modified (diff) - Milestone changed from sage-4.7.1 to sage-4.7.2 At the sage: prompt, I get Definition: save(obj, filename, compress=None, **kwds=True) Odd, the values are shifted right by one (filename should be None, compress should be True). I wonder what's causing that. Other functions in the same file seem to be okay. I have some old copies of Sage around, and this behavior dates back to at least version 4.3. As far as SAGE_TMP is concerned, you're right, it's sloppy. Probably the right solution is to have a function sage_tmpfile or something like that, which takes one argument, FILE, and returns a valid path using os.path.join(SAGE_TMP, FILE). In fact, if you look at how SAGE_TMP is defined in the first place (in sage.misc.misc), it's defined using explicit slashes rather than with os.path.join. This all belongs on another ticket; if I have time, I'll work on that eventually. I'm happy with the patch, basically as is. I'm attaching a new version fixing one or two words in the docstring. I'm marking it as "positive review"; if you don't like my changes, switch it back. Changed 22 months ago by jhpalmieri - attachment trac_11577-delta.patch added for reference only: difference between old patch and new one Changed 22 months ago by jhpalmieri - attachment trac_11577-jhp.patch added apply only this patch comment:10 in reply to: ↑ 9 Changed 22 months ago by leif Replying to jhpalmieri: At the sage: prompt, I get Definition: save(obj, filename, compress=None, **kwds=True) Oh, I did not notice the true keywords. The most disturbing thing with SAGE_TMP is IMHO that it ignores settings of TMP, TMPDIR and TEMP, and -- worse -- does not even use /tmp/ which is more likely to be on a local (usually auto-cleaned) file system than $HOME/.sage/, and I don't expect people to change DOT_SAGE to live on a temporary filesystem. (Ok, one can still create a symbolic link from ~/.sage/temp to whatever is desired, but who knows? Especially since there's also ~/.sage/tmp/ and of course $SAGE_ROOT/tmp/.) We furthermore had SAGE_TMPDIR and SAGE_TESTDIR, the former meanwhile changed to the latter. I'll perhaps open a sage-devel thread on that (to make William happy). I'm attaching a new version fixing one or two words in the docstring. Ok. Regarding extensions, I'd consider foo.png having one, while foo.baz and foo.bar (non-exclusively) having some. :-) comment:11 Changed 21 months ago by jdemeyer - Status changed from positive_review to needs_work I have a doctest failure which might have been caused by this: sage -t -force_lib devel/sage/sage/misc/cachefunc.py ********************************************************************** File "/mnt/usb1/scratch/jdemeyer/merger/sage-4.7.2.alpha2/devel/sage-main/sage/misc/cachefunc.py", line 1106: sage: for f in sorted(FC.file_list()): print f[len(dir):] Expected: /t-.key.sobj /t-.sobj /t-1_2.key.sobj /t-1_2.sobj /t-a-1.1.key.sobj /t-a-1.1.sobj Got: /t-.key.sobj.sobj /t-.sobj.sobj /t-1_2.key.sobj.sobj /t-1_2.sobj.sobj /t-a-1.1.key.sobj.sobj /t-a-1.1.sobj.sobj comment:12 Changed 21 months ago by jdemeyer - Work issues set to sage/misc/cachefunc.py doctest failure Confirmed that this does indeed cause the above doctest failures on sage-4.7.1. comment:13 Changed 21 months ago by jhpalmieri Here's a new patch, along with a "delta" patch. Changed 21 months ago by jhpalmieri - attachment trac_11577-delta-1to2.patch added for reference only: difference between old jhp patch and v2 Changed 21 months ago by jhpalmieri - attachment trac_11577-jhp.v2.patch added apply only this patch comment:14 Changed 21 months ago by leif - Status changed from needs_review to positive_review - Description modified (diff) - Authors changed from Leif Leonhardy to Leif Leonhardy, John Palmieri - Keywords .sobj added - Reviewers changed from John Palmieri to John Palmieri, Leif Leonhardy - Work issues sage/misc/cachefunc.py doctest failure deleted New patch looks reasonable, and passes all [long] tests in sage/{misc,structure}/, so positive review. (Tested with Sage 4.7.1.rc2.) comment:15 Changed 21 months ago by jdemeyer - Status changed from positive_review to closed - Resolution set to fixed - Merged in set to sage-4.7.2.alpha2
http://trac.sagemath.org/sage_trac/ticket/11577
CC-MAIN-2013-20
refinedweb
1,491
66.74
Join devRant Pipeless API From the creators of devRant, Pipeless lets you power real-time personalized recommendations and activity feeds using a simple APILearn More Search - "saved" - My wife opens a document, writes her entire paper and uses the close ❌ button to save it. I think I married an adrenaline junkie.12 - - - - Internet has been saved in Europe... for now... EU Parliament voted against the new Copyright directive.... 👏👏👏19 - Ooooooh I just skip a heart attack and shit myself at the same time, thank god for error "can not delete database, database is in use"8 - My coding behavior: 1. console.log("Hello World"); 2. CTRL S 3. this.date=moment(); 4. CTRL S 5. const yesterday = this.date 6. CTRL S 7. Open Chrome Browser to preview 8. Accidentally pressed CTRL S and saved that page - - Peer review is a life saver!!! My colleague just saved me my job as i almost published this fucking block to production.19 - # - - - Just saved a life. I was just walking with the dog (it's 3am here) and there was a bleeding drunk on the street - completely unconscious. Must have fallen on his head. Got the police. The paramedics told me a few minutes ago that the guy had a skull fracture. A few hours later it would probably have been too late for him.5 - - - sent and email of some code to someone, and when I coppied from VScode into the Email Cliant it saved the Background Color + Syntax Highlighting 😆27 - A ecom website which sales premium gold product from 50k to 170k INR. database : mysql all passwords and user ID's are saved in plain text.23 - - Toilet seat with a laptop table in front. The only moment I can focus. Nobody can disturb me. The duck also love to swim in the bath. I can even fap when looking at my sexy code. I don't need to travel when I gonna pee or poop. Saved me a lot of time.9 - - - Today FileZilla saved my life by storing all site connection passwords in base64 Without it we'd have lost access to an old production server 👌6 - I almost did rm -rf / as root on my computer, but Ubuntu actually stopped me from doing it, even as root, saying it was too dangerous. Props, Ubuntu. You saved my hard drive.9 - I was called as an IT technician at law firm to install a program in a lawyer's pc. Found a porn video saved in bookmarks...why? 🙄14 - - - I am puching myself over this one.. This made it into the official release. And this is one of the public screen shots that I saved to my phone from my devBlog. How did I not see this?? .. "SizeTest" ..24 - I started using Keepass and changed all my passwords to auto generated passwords. Somehow, my PC crashed before I saved the database. That was the day, where I lost access to my primary email address.5 - When I was 10 my younger brother saved over my fully completed pokedex in Pokemon blue. First big data loss taught me a good life lesson. Now I backup everything on a local server.1 - - Recruiter emails me about a role. I replied that I am not interested. Two weeks later, the recruiter emails me again that my profile is not suitable for the role and they have saved my details and will contact me in future. WTF. This is a very well reputed organization for that matter.4 - Even though my ikea rack has served me well, I am happy with my newest server room update. - - - !rant This site will save you hours of explaining how you can't make every "awesome app idé" that people have! (Thank me with a stress ball of stress I just saved you 😇 - Girlfriend tells me she probably saved a life at work in hospital the other day (she's a physiotherapist). She asked me how my day was. Most of it was hunting a typo - for a holiday booking website.8 - Sometimes, just sometimes, GitHub has its heroes. Probably saved me over an hour of my life. Thank you kind stranger!!8 -. - To all those people that have Git alliases like: "gc" = "git clone" or "ga" = "git add" What do you do with all that extra saved time?19 - - - It officially happened... Accidentally used rm -rf /* (Actual command was a bit more involved, but it did pretty much the same thing) Laptop doesn't boot now. Saved my home directory though. Hooray.7 - Customer: Can you do a database query for me? Me: Made the query and send them the result as a csv-file. Customer: Is it possible to send it as an excel-worksheet because the columns don't have the right width. Me: Resize the columns to the right width, saved it as xlsx-file and send it back.5 - - Not a rant. Whoever came with the idea to implement an automatic restore point in Windows... BLESS YOU!! Just had some problems and the PC won't boot. I entered the troubleshooting option and saw that there was an restore point from 22 this month. Just saved my ass.4 - It would have been so helpful if they taught us about licenses and copyrights. Would've saved me a lot of trouble.4 - - - After a nightmare weekend where moving my 80 year old desk caused the wood to completely split and almost broke my computer and screens in the process..... I felt physically sick..... Somehow I became a carpenter for the day and rebuilt and resized my desk to make it better then ever..... Couldn't be happier with the final result tbh!10 - - Thanks to DevRant, you have saved hundreds of monitors from getting smashed, keyboards from getting torn into pieces and mouses being flung across the rooms! There is a platform for rants now!4 - - Opens pycharm import time; print(time. *hits Ctrl+space* >Auto complete not working >Searches SO no answer >Realized file saved as time.py > Proceeds to contemplate career choice3 - When you're in the middle of a spree and you haven't saved in the past half hour. Then this blue beauty appears. 🙃 We all know who the winner of week 27 is.. - @dfox and @trogus , thank you for making this project. You have saved a lot of pens i break in my office . #rantingHelps7 - Show a pre recorded video of ur product during a ppt rather than a live demo. There are literally zero chances of fuckups.. Saved me quite a few embarrassments.1 - :) - - Client: “I’ve attached a screenshot of the issue.” The “screenshot” is a printout of the website, with no annotations detailing what the issue is, scanned back in at a 90° rotation and saved as a PDF 😄2 - - Something I ranted about 1 year and 2 days ago just saved my life today. Those lost hours that day saved me a few hours today. I wonder though: if I hadn't written about it on devRant, would I still remember it today?3 - I once saved lives sending cpr teams to heart attack victims through an sms gateway platform. This was amazing considering it was back in 2008 ;)11 - I fucking love the vsCode "search everywhere" function, it saved me countless times with chinese frameworks, holy shit.3 - - So, what are you all working on right now? Let's get some screen-shots in here! I'm working on my "BrowserBandit" software - it reads a firefox or chrome profile and extracts saved user/pass combos, history, and autocomplete entries - - That moment when you notice that you haven't saved one file and this caused all the problems... 2 hours down the drain2 - Today my life was saved by some fellow devs here on devRant and for those who helped(I will try to @yall in the comments), thank you so much you saved me! And more importantly saved me from all that fucking stress, which was plaguing me all day and breaking me down and lately I’ve needed that kind of pick me up. I felt so relieved I took a glorious nap! It was so needed and my head felt so much less like I bashed it into a wall piled with stress. Recently I’ve started to actually make friends from people on devRant and it makes me excited because I can actually talk about programming/get help if I need it and they are able to. And talking things out and getting explanations for questions I have it just feels so wonderful. Things have been luckily lookin up a bit and it’s giving me some hope and inspiration to do more.4 - Came into work this morning. Saw my machine off and some guy fiddling with the electricals. Don't worry boss, I wasn't working on anything terribly important that probably didn't get properly saved when your henchman yanked out the power cables. This place is the fking worst.6 - Tip: Never add an execution (PHP for example) script as a bookmark in your browser.. Your browser will occasionally poll the saved URL to update the favicon or sync to the server or something like that, which triggers the script without you knowing...11 - - God bless being a student. I just moved a massive calculation to uni's jupyter servers. Saved me from a shitton of effort and burning my laptop down. 🙏 - - Once i was so lazy that i wanted to write comments and i started. On first line i stopped in half. Saved and made a commit. - Man! I love refactoring! 👨🏻💻😍 Only saved about 20 lines but it went from ugly string manipulation to beautiful JavaScript objects!9 - I typed a long email and Outlook just hanged and closed. Upon re-open my email did not get saved to drafts. Argh - - Worst issue you got blamed for, but wasn't your fault. Best story about a dev you know who's angrier than you. Best time backups saved your ass. Story about a traumatic dev experience.1 - IT dept releases update for Cisco Jabber for work environment and describes it as a minor update. Me installs new version... - completely new UI - loses saved login credentials - loses connected devices - loses all settings - loses history My definition of "minor" is "slightly" different4 - - - - Thank you, GIT. Thank you. Your reflog saved work of months after someone deleted the wrong branch.1 - - What are you all planning on getting on Black Friday/Cyber Monday? I've pinched pennies all year and saved up just enough to only be able to afford 35% or so of the cost of being beaten in a local dark alley. That's not even what I wanted, but still.16 - I saved passwords to db hashed to SHA-1 with no salt... I left that company but I'm sure that application is still actively used today.2 - - Last year => Upgraded to windows 10 from 8.1 Got a new phone in October. Backed up all the data from previous phone to PC. ( Saved it on desktop ) Restarted PC few hours later to transfer data to new phone. Windows failed to start. Failed to reset/recover Had to factory reset PC Lost all the data MacroShit11 - Wrote a long SQL query yesterday for some data I was exporting. It was just about finished and figured I would wrap it up in the morning. PC reboots overnight. Never saved the query. // Rookie mistake3 - - - For those of you wanting dark themes for sites without them, look into the Stylish FF/Chrome extension. You can install themes for sites on it where it would otherwise require manual customisation. I've currently got dark themes for Facebook, WhatsApp Web and Reddit.6 - - - When a rant reminds you of a feature that ends up solving a huge issue in the system you are working on. Thank you devRant!! - Designers use plugin in WP to modify UI. CSS and JS is saved (somewhere?) in database. No version control. Changes are made across different envs at the same time and all need to be migrates to prod later. Pls2 - Working on a database where every column name are acronyms. No, the 2-5 seconds you just saved yourself from typing are not worth it, it’s so easy to make a self-documenting database but you had to fuck that up.1 - I use Pockets to save stuff which I wanna read later. I once saved a StackOverflow page for some reason and later when I was browsing the list, I saw this. PS: This is the profile picture of that user.2 - - - - I was on a verge of losing my job and then I used GSAP framework and saved my job. Struggles of being the only front end developer in a company is painful.2 - case you are bored, check this out: How C# Saved My Marriage, Enchanced My Career, and Made Me an Inch Taller.... - Its time we need water proof notepads for our showers. could have saved me a bundle of ideas which never came back5 - Just saved a co-worker by having an installer from 8 months ago in a folder called "desktop", 4 levels deep in folders called "desktop", all on my desktop. My hoarding habits finally saved the day!!!2 - Why am I a rockstar today? So glad you asked! I used "sed" without having to look at the manual. 12 files updated, 10 minutes saved... that's some good time to devRant if you ask me.4 - Changing authentication mechanism in SharePoint from windows identity to ADFS identity is stupidly complicated, especially for existing large farms with custom code. On the plus side - just convinced the director this is stupid - saved myself, himself, and 1000 users a ton of misery.12 - Hello DevRanters! I've put together a gaming desktop, and I want your opinions 😊... Keep in mind, I am Australian, so it's all in AUD currency. Currently 1$ AUD == 1.30$ USD16 - "SAVE WHEN IT WORKS!" it's the note that I have on my desk since I didn't commit changes on a project, neither saved it locally and kinda screw that one entirely. - - Chrome blocked TheGreatSuspender yesterday. All the Chrome installations got that extension removed. My chrome: 300-400 tabs (in total): *poof*21 - - accidentally quit chrome instead of closing one tab because of the shortcut is very close (cmd+Q and cmd+W) found cmd+shift+T shortcut to restore tabs feels saved chrome restore all tabs chrome stopped working chrome quits unexpectedly3 - Why does the 'save image' feature in 'DevRant' Android App take so long? I mean it's just saving the image from ram to storage right.... also the image sizes are pretty small. Okay just noticed they aren't getting saved anywhere...😅13 - - The one skill I know that I am really proud of is GIT. Put me into trouble with merge conflicts. Saved my life with its version control. Always had an adventurous ride with Git. Hope to have many more such rides and get to learn more about you. - Have you had any money saved? What is your process when it comes to saving money? What kind of investments did you do with your savings?19 - Very. I saved 27,000 pounds by never going and teaching myself using the interwebs. ALL HAIL THE INTERNET - After hearing to hundreds of "just this last small change" , i told my client that he was a "chutiya" and he sent a link to this saying he had not intention of driving in India.😎😎7 - Already using PhpStorm for 2 years now. Just discovered there was an auto-formatting tool for your code. Could have saved me hours of work. Why is life so hard with me2 -! - Everyday tech support struggles: - "I can't find my document." - "Ok where did you save it?" - "In Word." - "I understand you saved it in Word but where did you save the file to?" - "I'm not sure, Word did the rest - - When reading the documentation could have saved you hours of debugging. "How nice, there's actually a property for that..." - Oh my effing goodness...just went through the repo of an app we're working on and this new dev in our team saved his commits with, "ok", "done", "fixed", "another one", "arrggghhh!", "wow!"..."not complete"...for fucks sakes...DUDE!1 - I almost fired off an update query without a where clause on the live db. I stopped at the last second. I'm now having survivors guilt. Why me? Why was I saved when so many before me didn't make it? 😁4 - - - - You COULD buy the entry-level Mac Pro for $7000 or get this build for $4000 that's a fuckton more powerful AND has 2 monitors (with stands!) and just stick Hackintosh on it. - I finally quit being a stubborn ass and started using cloud storage. well that's quite a few gigs saved off of photos and docs. now no longer have to remote in to grab files. next month setting up plex2 - . - - - - - Because of all the devRant posts about unit tests, I decided to write a few to see how they worked. They just saved me from pushing completely broken code to production. THANK YOU devRant!!1 - I swear the implementation of byte arrays in dot net is fucking brilliant, never thought I would give good credit to dot net but the amount of bloody times this shit has saved me is unbelievable...3 - *hits CTRL-S to ensure everything is saved * .5 seconds later: *Hits CTRL-ALT-S to ensure sanity is saved* Git Push2 - - - I saved my uni work onto a floppy disk (2001) and walked a mile into university library to print it. When I got there and put it into the computer it had corrupted and the disk was unreadable! Luckily I had a back up on my computer so had to walk the mile back, saved again onto two different floppy disks this time and walked the mile back. This time I managed to print it and deliver the work 5 minutes before the deadline. - - I've fallen in love with (unit) testing, it just saved my ass from deploying a broken version of my library because of a missing '!'3 - Ah I feel so accomplished. My desktop (running Manjaro) had a Linux kernal module error and couldn't find my drives after a system restore (my fault, forgot to restore a specific sector). Well after a few hours, I managed to save it! Oh liveCDs you're wonderful. No data lost~! - I hate sleeping, I always think about the possibility of the work I can do in the time I would have saved from not sleeping Lou Looks like a waste of time, but can't skip this one!4 - "I've recently lost someone who really meant lots for me, we worked together for years, he new all about me, he even saved my dreams, projects and hobbies, such a great memory ..." "What was his name?" "D:\"1 - I'm working on a big glass fastening machine. The PLC's software (ladder) is saved on those floppy disks. I'm screwed. - My childhood basically. Saved up all my money to buy Pokémon games. I had one that was red tho. The amount of hours I spent on this thing... If not thousands of hours... Tens of thousands... Then moved to Minecraft, and that's how I got coding 😇 (batch but still)5 - - Reading financial advisors: "you should have saved your annual income at the age of 30 if you want to retire at the age of 67" *swallows on my own spit* I guess my life is senseless from now on.1 - Jeez, it can be done! Thank you CSS, i really thought it wasn't possible. My bacon is saved. (Overriding an inline style, javascript generated, from a linked style sheet).2 - I'm not sure which is worse: games that display the "unsaved changes will be lost" warning immediately after you saved, or games that display the warning and there's no obvious way to actually save your game. Bonus points when there really is a manual save process and you lose all your progress because you thought it auto saved.1 - - Tl;dr got to mass slaughter expensive winblow$ servers from AWS. The kicker - no-one was using any of them for over a year...good bye saved money that’s no going to my paycheck 😭. - ... - I guess talking to a duck helps after all although my duck was actually writing an email to business explaining an issue I am having in a requested change. Right before I hit send, I go... Ah I get it now! That saved me from some embarrassment :) - - Deleted the database of an application I built for college since they were replacing it with a better one. Later, the teacher remembered that he didn't take a backup. Fortunately, I remembered I had configured a cron job an year back in the app which saved me that day. 😅 - So you have saved up some money, got a macbook, paid for an apple developer account to build an app which uses the camera only to find out that the simulator doesnt work with the macbook's camera. Apple do you realise that without devs you wouldn't have a decent app store?8 - Come into work after a 3-day weekend, see a file isn't saved, think it's probably just a random space or something I accidentally added after modifying the crap out of the file and saved for the weekend so I compare the file from the disk to the file in memory. Turns out, I didn't save a single time since I started working on it. I guess Friday me likes to live dangerously.2 - - Fuck business networking meetings. Having to go as a representative of our company was not my most wished for thing... But hey, free drinks and meeting and talking to one of the devs there saved me the annoyance of having to participate in a business circle-jerking for 30 minutes.4 - - I am trying to install eclipse for college Windows defender blocked it because it found malware virus in it Disabled windows defender Waste my time to download again Kaspersky blocked it because it found multiple trojans Kaspersky saved my computer over 30 times from real damaging viruses Disable kaspersky too Waste my time squared download eclipse again Open eclipse Import tomcat v8 Eclipse crashes6 - - Why didn't I learn about property overloading before. So much copy and pasting I could have saved. Well the drawback of selflearning :) (yes I know about the perfomance but it doesn't matter on this project ) - Seeing the Winnie Pooh eating InfoSec propaganda meme this morning on devRant saved my day. I'm still laughing 8 hours later 😂2 - No matter if you understand all the medical terms, you need to read this. It is amazing.... - Have Pocket app. Save awesome article/site to Pocket. Swear to read it later. Open pocket app after a while. Shit so much saved. Can't read all of this no time. Thus begins the infinite loop.4 - When I'm on a typing rage and I hit print screen key accidentally couple of times. "Onedrive saved your screenshot". Whaat? Weird onedrive/windows feature/ bug..7 - Thank you to the developer who introduced me to the dark theme in my editor back in the day! Probably saved my eyesight.1 - My Test-Suite with karma and jasmine, they saved my ass multiple times. Wouldn't have noticed so many things at rewriting without the tests. - C++20 Modules ! I can't wait to get rid of includes and include barriers ! Still prefer Rust though, borrowing times saved my butt just this morning, hopefully we'll get them on C++ too at some point1 - Last week, i discovered the android shell. I hated it, cause I was used to do bash stuff. Then I saw busybox on my /system/bin. It saved the day. - - Git - Because it has saved my ass more often than I would like to admit and thanks to reflog almost any fuck up is salvageable. I also like it because it makes a handy multi-game mod manager on a pinch. - The database dude: yeah it gets saved as a string. *me sets up preg_match for a string* Database 'guru': we tried entering the data in the form and we are getting an error. Fix it! Turns out it's being saved by id. Data wizards my ass - Every day I ask myself at least 5 (not too difficult) questions about programming (for instance "Can I compile Java in runtime?") If I don't know them - I find their answer somewhere It is like continuous integration, but with my knowledge - small portions of info are saved well in my brains)) - AHHHHHHHHHHHHH Just discovered Emmet in Atom editor. Oh god why am I so late to the party. Already I easily saved easily half of my HTML coding time. Anyone who does any HTML work should definitely check it out. Don't take my word for it, just look at this beauty. - Saved almost 50min traveling time due to public holiday so less traffic while working on public holiday. It's a loss-loss situation anyway. Well fuck. - Yo, remember when @Alice parodied the millenial 404 message? It was hilarious, but now it's gone :c Has anyone saved it by any chance? I don't want this to be lost forever, it was too good :c5 - :/ - Fucked a clients wordpress website up by editing a saved option thats saved as serialised php data. Tried using the row data from a backup and updating it in the database and it still loads the fallback theme settings. What do?5 - I was about to reinstall Arch Linux when I've just realized that I can fix the problem with just reinstalling/reconfiguring GRUB. Thinking just saved me hours.2 - - Just discovered Katalon and Selenium. If I knew about it earlier I could have saved days back in 2017 if not weeks. <3 this is a good start to 2018!1 - Why would anyone want to run Laravel when it runs so much slower than raw PHP? Surely the development time saved is negated by the amount of optimisation work required?4 - - Me: Spends 4 hours configuring my new IDE Me: Hovers Apply for the last apply IDE: Crashes Ne: Reopen IDE, not worried as I applied and saved multiple times during setup IDE,: Totally zero'ed EVERY SINGLE SETTING... Me: Googles alternatives to X - I really don't like the anxiety when Xcode crashes, I need to force quit it and really really hope that my code is saved... - New year old me Fucking Windows "You have written a lot of code lately it would be ashame if I just crash" *Bluescreen* "Configuring updates" Fuuuuuuuuuccccckkkkkkk Luckily I saved my code5 - installing oracle 11gr2 database via docker is so easy and quick rather installing to the system Totally saved my time, now I can have more time for learnig1 - The condition of software development in 2019: “Please don’t apply if you don’t have the core concepts of programming, and you depend upon copying and pasting the code from StackOverflow/saved file.”2 - - Shoutout to them self built utilities you carry with you where ever you go! Saved my butt a couple times! - - Hey if your company refreshing / retiring laptops for the latest greatest - you might want to think about using for disaster recovery. Store them in another location or home. Push out updated image automatically. Recently saved a company 400k recurring expense with this strategy. - The only good use of WhatsApp status is that you get to know people still have your number saved in their phone! 😂2 - Thanks god bash scripting exists. Saved my time from running manually a C program with input files to check the output that would take me at least 1H.1 - 6hrs trying to get a static and dynamic cell working in Swift. Xcode crashes and not a fucking thing was saved! My git commits make no fucking sense!4 - "The more GPUs you buy, the more money you saved" - Jensen Huang. ASUS finally promoting GPU mining app on their RTX official webpage! Good luck with saving money!!!6 - I'd argue to say that committing often, even if the commits aren't always meaningful, has saved me numerous times from bad code gone awry. - Nothing like when I created an offline installation of visual studio. That's >20GB saved for each time I've had to start everything a fresh. - hnnnggg that moment when the program stopped responding while saving and crashes and it hasnt saved ur file yet - - - - Robert Johansson. I mean. He saved the world. Literally. Made first contact. And second contact. And third contact. Literally lives coding. And nobody said the bloke must be real.2 - - Today's story. 1. Git commit all changes 2. Need to git pull bcz of master change 3. Mistakly did git commits undo. All my changes fucked up. At last Ctr+z saved my ass - You know you spend too much time with computers when the opportunities for new knowledge and time saved from a book titled "sed & awk 101 hacks" get you very excited1 - - - - Replace "Check out this devRant" with the first line when sharing. I have saved a lot of rants but they have the same title and it is hard to see which is which. I suggest to replace that with the first line or first sentence. - An entirely sentient AI. Saved on a floppy disk. People in the late 90s had quite the colourful imagination about the future...2 - 2 days hard thinking why my prepared statement not saved to the database, until I found this ... ADDDATA ... And I only put the parameter with ADDATA ... How beautiful my life. Thanks ADDDDDDDDDSDDDDSSDDSDDDDDDSDDDDDDATA1 - Why don't you devRant open a new tab when I click a link? I don't want to right click to open in a new tab ):6 - Recent experience (previous sem). We had this DBMS teacher who used to sit most of the time during lectures, and used to write SQL in lab session with the help of lab technician. We're saved by more experienced lecturer at last hour. - - - Ever quick-saved in a fallout game right before hacking a terminal... just so when you got locked out you could go back....... but when reading the tutorial you realized that YOU CAN GO BACK TO HACKING A TERMINAL IN 10 SECONDS AFTER YOU GOT LOCKED OUT... -) - Client wanted to backup the uploaded files by users to a different drive. The servers I was working on was Windows servers so I just used robosync between the 2 folders, saved as a batch script.2 - Thanks to Google Chrome for the "HOLD Cmd + Q to quit" option. Saved the day while I was writing peer feedback :D4 - yeah, and i encountered mr. blue screen. i'm glad that android studio project saved, automatically. just a little more patience, you will get a job and buy some fucking legendary unit!!!! #RamHurts1 - So you know those movies where the girl fells in love with the bad gay(kidnapper, thief...) and she doesn’t want to be saved. Same relationship with me and JavaFX.1 - Where do the images get saved to on devrant (on Android)? I've saved so many memes but don't actually know where they go12 - I used to love Omegle. It saved me from feeling alone. Unfortunately there are distraction apps in market and everything is bullshit. - - So many occasions to choose from! Probably the most pissed I've been is when I'd been assigned to work on (and completed fixes for) the same bug as another developer twice in the same week after already having spent 4 days working on a new enhancement that's requirements changed literally an hour after I'd saved the code! - When I was wondering why I am unable to navigate using #androidIntent , #intentFlag saved the day. #AndroidDev1 -. - Bug: "attach img/gif", select the Dropbox app to select an img from, select an img -> devRant android app returning back to the starting screen, not to the recently typed rant, the entry hasn't been saved.. - If I'm in a merge conflict and have the Diff Editor open (REMOTE, LOCAL) which file is the final copy that will be "saved"?2 - - - Does anyone have experience with Google Drive (GSuite) and rclone? I want to use it as a storage for jellyfin (emby fork) and Nextcloud, with the first being only saved there and the latter either as or with a backup. -7 - Trying to run a web page from arduino with some js in it to draw a graph. And it craps out with some error about start tag. It works fine saved as an html file on my laptop. Wtf?1 - All the time I've saved by using react instead of native solutions I've lost to some bullshit bug somewhere deep in the react source that crept in during the last update. - Hi guys, i got a new laptop and i want to use pop os as my main os, i want to know if my data is gonna be saved or do i have to use persistence ?3 - ClosedXML my new best friend, saved the day after spending hours reading shitty .xlsm files using NetOffice - Whenever I try to download my avatar, it says "Saved to galley" but it's not there. I am having this issue since beginning. Anyone else having same issue?9 - Thanks @Siddharthkr93, atom really saved me from the blue screen of death, compliments of Microsoft1 - Hm..favourite function.. Just before my apprenticeship as I used php more often, var_dump() was propably my favourite because it saved hours of my life :P - Worked around a major blocker using iframes inside modals. The 8 hours saved will become 8 days extra in Web Developer Hell when I have to refactor it fully! Pray for. - Just saw this talk on event sourcing. Kind of amazed by the benefits of this, and that I haven’t really touched upon it previously. Would have saved me a lot of trouble in the past year. Anyone have any ranting/thoughts about this? - - Has anyone else noticed a trend this summer of os updates completely shafting servers? Just saved my 5th prod server and its getting fucking old - Not sure how, but I broke my servo (TowerPro SG-5010)... Luckily my uni had a few smaller ones for 2€ laying around, my project would've failed without that Top Tags
https://devrant.com/search?term=saved
CC-MAIN-2021-10
refinedweb
5,828
74.08
NAME XML::Compare - Test if two XML documents semantically the same SYNOPSIS"; } DESCRIPTION This module allows you to test if two XML documents are semantically the same. This also holds true if different prefixes are being used for the xmlns, or if there is a default xmlns in place. This modules ignores XML Comments. SUBROUTINES - same($xml1, $xml2) Returns true if the two xml strings are semantically the same. If they are not the same, it throws an exception with a description in $@ as to why they aren't. - is_same($xml1, $xml2) Returns true if the two xml strings are semantically the same. Returns false otherwise. No diagnostic information is available. - is_different($xml1, $xml2) Returns true if the two xml strings are semantically different. No diagnostic information is available. Returns false otherwise. PROPERTIES - namespace_strict (Bool) If this property is set, then all the namespaces of both documents must match exactly. The default, unset, raises an error only if the first document, $xml1, has a namespace defined and this is different from $xml2's (or $xml2has no namespace). - error After the 'is_same' method is used, this will contain either the error string from the last comparison error, or undef. - ignore An array ref of XPath expressions to 'strip' from the documents before comparing. This is implemented by evaluating each XPath expression at the beginning, then removing those nodes from any lists later found. - ignore_xmlns A hashref of prefix => XMLNS, if you used namespaces on any of the 'ignore' XPath entries. EXPORTS Nothing. SEE ALSO AUTHOR Andrew Chilton, <andychilton@gmail.com<gt>, .
https://metacpan.org/pod/release/CHILTS/XML-Compare-0.04/lib/XML/Compare.pm
CC-MAIN-2017-09
refinedweb
260
57.37
This is the mail archive of the newlib@sourceware.org mailing list for the newlib project. On Sun, 2013-02-10 at 15:28 -0600, Joel Sherrill wrote: > I have a hack of a patch to get mips-rtems to build. I thought I had posted it but maybe I didn't. > > I think it is a bug in the new implementation. > > --joel Yes, this is a bug. I have been able to reproduce it. The problem is with: #if (_MIPS_ISA == _MIPS_ISA_MIPS4) || (_MIPS_ISA == _MIPS_ISA_MIPS5) || \ (_MIPS_ISA == _MIPS_ISA_MIPS32) || (_MIPS_ISA == _MIPS_ISA_MIPS64) #ifndef DISABLE_PREFETCH #define USE_PREFETCH #endif #endif When building for mips-elf, _MIPS_ISA is defined (by GCC) as _MIPS_ISA_MIPS1. The problem is that _MIPS_ISA_MIPS1 (and MIPS4, and MIPS5, etc) are not defined to have any specific values. So _MIPS_ISA is expanded to _MIPS_ISA_MIPS1 and _MIPS_ISA_MIPS1 is expanded to 'nothing'. Since the value of '_MIPS_ISA_MIPS4' is also 'nothing', they match and we set USE_PREFETCH when we shouldn't. The long term fix is to fix the defines that GCC does for the MIPS_ISA macros, but in the short term I guess we could check a different define, I see GCC is also doing these defines with mips1: #define _MIPS_ARCH_MIPS1 1 #define _MIPS_ARCH "mips1" We could check one of those. I will send a patch soon. Steve Ellcey steve.ellcey@imgtec.com (sellcey@mips.com)
https://sourceware.org/ml/newlib/2013/msg00081.html
CC-MAIN-2019-30
refinedweb
220
73.47
Back to: ASP.NET Web API Tutorials For Begineers and Professionals Cross-Origin Resource Sharing in Web API In this article, I am going to discuss how to enable Cross-Origin Resource Sharing in Web API which allows cross-domain AJAX calls. Please read our previous article before proceeding to this article as we are going to work with the same example. In our previous article, we discussed How to Call a Web API Service in a Cross-Domain Using jQuery AJAX with an example. The CORS support is released with ASP.NET Web API 2. - What are the same-origin policy and default behavior of browsers in AJAX Requests? - What is CORS? - How to enable CORS in Web API? - Understanding the Parameters of EnableCorsAttribute. - How to use EnableCors attribute at the Controller and action method level? - How to Disable CORS? What are the same-origin policy and default behavior of browsers in AJAX Requests? Browsers allow a web page to make AJAX requests only within the same domain. Browser security policy prevents a web page from making AJAX requests to another domain. This is called the same-origin policy. In other words, it is a known fact that browser security prevents a web page of one domain from executing AJAX calls on another domain. What is CORS? CORS is a W3C standard that allows you to get away from the same-origin policy adopted by the browsers to restrict access from one domain to resources belonging to another domain. You can enable CORS for your Web API using the respective Web API package (depending on the version of Web API in use). Changes to Web API project: Delete the following 2 lines of code in the Register() method of WebApiConfig class in WebApiConfig.cs file in the App_Start folder. We added these lines in our previous articles to make ASP.NET Web API Service return JSONP formatted data var jsonpFormatter = new JsonpMediaTypeFormatter(config.Formatters.JsonFormatter); config.Formatters.Insert(0, jsonpFormatter); Your WebApiConfig class should look as shown below. How to enable CORS in Web API? Step1: Install Microsoft.AspNet.WebApi.Cors package. Execute the following command using the NuGet Package Manager Console. Step2: Include the following 2 lines of code in Register() method of WebApiConfig class in WebApiConfig.cs file in App_Start folder EnableCorsAttribute cors = new EnableCorsAttribute(“*”, “*”, “*”); config.EnableCors(); After adding the above two lines of code, the WebApiConfig class should as shown below. public static class WebApiConfig { public static void Register(HttpConfiguration config) { // Web API routes config.MapHttpAttributeRoutes(); config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{action}/{id}", defaults: new { id = RouteParameter.Optional } ); EnableCorsAttribute cors = new EnableCorsAttribute("*", "*", "*"); config.EnableCors(); } } The following 2 lines of code in Register() method of WebApiConfig.cs file in App_Start folder enables CORS globally for the entire application i.e. for all controllers and action methods EnableCorsAttribute cors = new EnableCorsAttribute(“*”, “*”, “*”); config.EnableCors(); Step3: In the Client Application, set the dataType option of the jQuery ajax function to JSON dataType: ‘json’ Now run the service first and then the client application and check everything is working as expected. Parameters of EnableCorsAttribute Origins: A comma-separated list of origins that are allowed to access the resource. For example “,” will only allow ajax calls from these 2 websites. All the others will be blocked. Use “*” to allow all Headers: A comma-separated list of headers that are supported by the resource. For example “accept,content-type, origin” will only allow these 3 headers. Use “*” to allow all. Use the null or empty string to allow none Methods: A comma-separated list of methods that are supported by the resource. For example “GET, POST” only allows Get and Post and blocks the rest of the methods. Use “*” to allow all. Use a null or empty string to allow none How to use EnableCors attribute at the Controller and action method level? It is also possible to apply the EnableCors attribute either at the controller level or at the action method level. If applied at a controller level then it is applicable for all methods in that controller. Call the EnableCors() method without any parameter values. Apply the EnableCorsAttribute on the controller class [EnableCorsAttribute("*", "*", "*")] public class EmployeesController : ApiController { } In the same manner, we can also apply it at a method level if we wish to do so. public class EmployeeController : ApiController { [HttpGet] [EnableCorsAttribute("*", "*", "*")] public IEnumerable<Employee> GetEmployees() { EmployeeDBContext dbContext = new EmployeeDBContext(); return dbContext.Employees.ToList(); } } To disable CORS for a specific action apply [DisableCors] on that specific action. In the next article, I am going to discuss Routing in Web API. Here, in this article, I try to explain Cross-Origin Resource Sharing in Web API step by step with a simple example. I hope this article will help you with your needs. I would like to have your feedback. Please post your feedback, question, or comments about this article. 3 thoughts on “Cross-Origin Resource Sharing in WEB API” Hello, Your guides are splendid! In this particular article it is not clear to me whether I need to enable Cors in the MVC and in the Web API EmployeeService projects simultaneously? Nevermind, I understood now that you only need that enabled on the API’s end 🙂 You are right. We are happy that you found our articles helpful for you.
https://dotnettutorials.net/lesson/cross-origin-resource-sharing-web-api/
CC-MAIN-2020-05
refinedweb
882
57.98
React Js— Fetching Data From An API These React JS blogs are a part of my personal journey with React JS. This basic task involves fetching an API and displaying a simple list. I think it is perfect practice for beginners and those looking to revise their knowledge. Getting Started - Ensure that you have Node.js installed on your PC. To check, run npx — help. If a list of options is returned, you’re good. If not , please download Node.js. - Next , go into your desired directory and run the following command to create your project, create-react-app reactapi - Open the project you just created , called reactapi, using a text editor such as Visual Studio Code. Setting up the Project In your newly created project, head to the App.js file and make the following changes so that you are left with just the following code: import React, { Component } from "react";class App extends Component {render() {return (<div className="App"></div>);}}export default App; I then suggest you run npm start to test if your application works fine, it will open on the browser automatically once you run the command in your terminal. The constructor and states Next, we create a constructor function. We then define the state of the app, we will use two states, one is items which is an array of the data we will fetch from the api and the second is the isLoaded state to know when the items have been loaded from the api. constructor(props) {super(props);this.state = {items: [],isLoaded: false};} ComponentDidMount() Next , we create a component called Did Mount. We will create an api caller , the fetch method, to take the first argument of the url part of the api. We will use the following api that is good for testing: [ { "id": 1, "name": "Leanne Graham", "username": "Bret", " } ] If you open the link on your browser you can see that it is one array of multiple json objects. componentDidMount() {fetch(“").then(res => res.json()).then(json => {this.setState({isLoaded: true,items: json});});} We put the link as an argument of the fetch function and convert it to json format. We then set the state of items to json which contains the data from the api and set isLoaded to true, because we got the data. The data is now saved in the component so we can use it. The render method render() {var { isLoaded, items } = this.state;if (!isLoaded) {return <div>Loading..</div>;} else {return (<div className=”App”><ul>{items.map(item => (<li key={item.id}>{item.name} | {item.email}</li>))};</ul></div>);}return <div className=”App”></div>;} Here it is said that if the data has not loaded , display text saying Loading .. or if the opposite is true return a list. We used the JavaScript map function which creates a new array that allows us to loop through each object in the api result. The result in your browser after running npm start should be: The complete example is hosted on GitHub: zorgonred/React-Js--Fetching-Data-From-An-API You can't perform that action at this time. You signed in with another tab or window. You signed out in another tab or… github.com Thank you for reading! Feel free to make any other suggestions or recommendations for future challenges. Leave a few claps if you enjoyed it !
https://medium.com/@logistico94/react-js-fetching-data-from-an-api-87a8a757871b
CC-MAIN-2020-16
refinedweb
561
63.8
Created on 2012-02-21 14:12 by tarek, last changed 2015-03-18 08:50 by Vadim Markovtsev. If you try to run the code below and stop it with ctrl+C, it will lock because atexit is never reached. Antoine proposed to add a way to have one atexit() per thread, so we can call some cleanup code when the app shuts down and there are running threads. {{{ from wsgiref.simple_server import make_server import threading import time import atexit class Work(threading.Thread): def __init__(self): threading.Thread.__init__(self) self.running = False def run(self): self.running = True while self.running: time.sleep(.2) def stop(self): self.running = False self.join() worker = Work() def shutdown(): # bye-bye print 'bye bye' worker.stop() atexit.register(shutdown) def hello_world_app(environ, start_response): status = '200 OK' # HTTP Status headers = [('Content-type', 'text/plain')] start_response(status, headers) return ["Hello World"] def main(): worker.start() return make_server('', 8000, hello_world_app) if __name__ == '__main__': server = main() server.serve_forever() }}} My take on this is that if wanting to interact with a thread from an atexit callback, you are supposed to call setDaemon(True) on the thread. This is to ensure that on interpreter shutdown it doesn't try and wait on the thread completing before getting to atexit callbacks. @grahamd : sometimes you don't own the code that contains the thread, so I think it's better to be able to shutdown properly all flavors of threads. Reality is that the way Python behaviour is defined/implemented means that it will wait for non daemonised threads to complete before exiting. Sounds like the original code is wrong in not setting it to be daemonised in the first place and should be reported as a bug in that code rather than fiddling with the interpreter. Is there any good reason not to add this feature ? what would be the problem ? It does seem to be for the best, I don't see any drawbacks At the moment you have showed some code which is causing you problems and a vague idea. Until you show how that idea may work in practice it is a bit hard to judge whether what it does and how it does it is reasonable. Mmm.. you did not say yet why you are against this feature, other than "the lib *should not* use non-daemonized threads" This sounds like "the lib should not use feature X in Python because it will block everything" And now we're proposing to remove the limitation and you are telling me I am vague and unreasonable. Let me try differently then. Consider this script to be a library I don't control. I need to call the .stop() function when my main application shuts down. I can't use signals because you forbid it in mod_wsgi How do I do, since asking the person to daemonize his thread is not an option ? I see several options: 1 - monkey patch the lib 2 - remove regular threads from Python, or making them always daemonized 3 - add an atexit() option in threads in Python 4 - use signals and drop the usage of mod_wsgi I think 3- is the cleanest. I haven't said I am against it. All I have done so far is explain on the WEB-SIG how mod_wsgi works and how Python currently works and how one would normally handle this situation by having the thread be daemonised. As for the proposed solution, where is the code example showing how what you are suggesting is meant to work. Right now you are making people assume how that would work. Add an actual example here at least of how with the proposed feature your code would then look. For the benefit of those who might even implement what you want, which will not be me anyway as I am not involved in Python core development, you might also explain where you expect these special per thread atexit callbacks to be triggered within the current steps for shutting down the interpreter. That way it will be more obvious to those who come later as to what you are actually proposing. > Add an actual example here at least of how with the proposed feature your code would then look. That's the part I am not sure at all about in fact. I don't know at all the internals in the shutdown process in Python and I was hoping Antoine would give us a proposal here. I would suspect simply adding to the base thread class an .atexit() method that's called when atexit() is called, would do the trick since we'd be able to do things like: def atexit(self): ... do whatever cleanup needed... self.join() but I have no real experience in these internals. Except that calling it at the time of current atexit callbacks wouldn't change the current behaviour. As quoted in WEB-SIG emails the sequence is: wait_for_thread_shutdown(); /* The interpreter is still entirely intact at this point, and the * exit funcs may be relying on that. In particular, if some thread * or exit func is still waiting to do an import, the import machinery * expects Py_IsInitialized() to return true. So don't say the * interpreter is uninitialized until after the exit funcs have run. * Note that Threading.py uses an exit func to do a join on all the * threads created thru it, so this also protects pending imports in * the threads created via Threading. */ call_py_exitfuncs(); So would need to be done prior to wait_for_thread_shutdown() or by that function before waiting on thread. The code in that function has: PyObject *threading = PyMapping_GetItemString(tstate->interp->modules, "threading"); ... result = PyObject_CallMethod(threading, "_shutdown", ""); So calls _shutdown() on the threading module. That function is aliased to _exitfunc() method of _MainThread. def _exitfunc(self): self._stop() t = _pickSomeNonDaemonThread() if t: if __debug__: self._note("%s: waiting for other threads", self) while t: t.join() t = _pickSomeNonDaemonThread() if __debug__: self._note("%s: exiting", self) self._delete() So can be done in here. The decision which would need to be made is whether you call atexit() on all threads before then trying to join() on any, or call atexit() only prior to the join() of the thread. Calling atexit() on all possibly sounds the better option but I am not sure, plus the code would need to deal with doing two passes like that which may not may not have implications. This would be useful. It shouldn't be part of atexit, since atexit.register() from a thread should register a process-exit handler; instead, something like threading.(un)register_atexit(). If called in a thread, the calls happen when run() returns; if called in the main thread, call them when regular atexits are called (perhaps interleaved with atexit, as if atexit.register had been used). For example, this can be helpful to handle cleaning up per-thread singletons like database connections. A couple of years ago I suggested something similar. I'd like to see both thread_start and thread_stop hooks so code can track the creation and destruction of threads. It's a useful feature for e.g. PyLucene or profilers. The callback must be inside the thread and not from the main thread, though. Perhaps somebody likes to work on a PEP for 3.5? Most logical would be an API on Thread objects (this would obviously only work with threading-created threads). A PEP also sounds unnecessary for a single new API. See also the issue #19466: the behaviour of daemon threads changed at Python exit in Python 3.4. The changes referenced in msg204494 ref: #19466 were reverted via changesets 9ce58a73b6b5 and 1166b3321012 I agree that there must be some way to join the threads before exiting, with a callback or anything else. Currently, my thread pool implementation has to monkey patch sys.exit and register SIGINT handler to shutdown itself and avoid the hangup (100+ LoC to cover all possible exceptions). I am working on a big framework and demanding from users to call "thread pool shutdown" function before exit would be yet another thing they must remember and just impossible in some cases. It would ruin the whole abstraction. Python is not C, you know.
http://bugs.python.org/issue14073
CC-MAIN-2015-40
refinedweb
1,372
72.26
Asked by: Porting tool to convert Windows Forms applications to Metro Style applications Hi all, In these forums we have discussed a lot about Metro Style development and it seems very interesting to move from Windows forms development to here. But I would like to draw your attention to another aspect of this. Moving up to metro development could be easier to a developer who has been working not only with Winforms but also with WPF and Silverlight, as most of the basic concepts have flowed down from there. But to a newbie Winform developer this migration would not be that easy. This is a real concern if you want to write your current Winforms project in Metro Style. You don’t have any shortcuts available other than writing from the scratch. Do you? As a solution for this (for our university research project) we are developing a tool which has the capability of converting a .NET 4 C# based (we are considering the lower versions as well) Windows Forms project to Metro Style C# based (WinRT) project. I’m not going to describe things in detail but basically following is how we perform this conversion/mapping. UI Conversion: All the code related to GUI component, which mainly include in a designer.cs file, will be mapped into corresponding Xaml control tags. For an example if we consider the button control our mapping includes all possible properties and events. Of course there might be cases that a property of windows from UI component doesn’t have a proper mapping in its Metro control. But for some of those situations we have used some workarounds. The final output of this conversion is the generated XAMLl UI descriptor file. Logical Code Conversion: This converts all logical code with possible mappings to corresponding new libraries. This mapping could be either a class as a whole or a member of a class. For an example in metro, coding and the concept of message box is slightly different and that should be mapped in the conversion. Concepts such as navigation, lifecycle of an app are completely different from the standard windows form application. We have managed to convert the basic instance of these occurrences into Metro. Hope you have a basic idea what I have tried to say. We have tested our tool for simple C# projects and hoping to expand the functionality. What we need is your comments and guidelines because that might come handy in our future development process as an invaluable input. We do know that the scope of the tool (the area we can map in .NET framework) would be limited but as a startup we would like to provide the conversion of basic and frequently used libraries. Please add your comments and ask any question. That is highly appreciated. I will post more details in near future. Cheers! Tuesday, August 07, 2012 5:18 AM - Edited by Rakitha Perera Tuesday, August 07, 2012 5:23 AM General discussion All replies Hi, I think it's quite difficult. Maybe we can seperate the logical code and UI code , reserve the logical code and recreate a UI for Metro Style. It's hard to convert the WinForm UI code to Metro's. Metro apps based on WinRT have many differents from WinForm, not only the namespace or method name but some implementing code are different. There are lots of method we can use or information we can get in WinForm are not supported in Metro. Converting WinForm to Metro will be a large project I think. Aaron Xue [MSFT] MSDN Community Support | Feedback to us Get or Request Code Sample from Microsoft Please remember to mark the replies as answers if they help and unmark them if they provide no help. Thursday, August 09, 2012 8:36 AM Hi, Thanks for the comments! Yeah, as you mentioned converting a Winform to Metro is not easy. Maybe it might not possible to convert something complex, like a business oriented application. I can understand the fact you mentioned to separate the two conversions. In this project what we want to prove is that there are possibilities to do this conversion though some may argue that current desktop applications have no value in Metro. So for the moment we tried to limit our scope to the basic controls and convert a simple, useful applications. We selected a scientific calculator as our first target which is simple but still useful in both environments. This Winform application contains few buttons, textboxes, labels and radio buttons in UI. Logical code is simple (System.Math lib used) except the usage of the message box which should be ported by the application. Our porting tool can handle this conversion which means we can now focus on porting other apps which has the similar depth, maybe simple games or applications. We found this document very helpful in this process but let me know if there are any other place I can get more information. I think if we or someone else could success with this kind of a tool at least for a certain extent it would be great for all developers because then we don’t have to code our applications from the scratch to Metro. Cheers, Thursday, August 09, 2012 11:56 AM - Edited by Rakitha Perera Thursday, August 09, 2012 11:57 AM I'd second Aaron. We aro also starting to think about expanding our applications to Metro style and my first thought was to leave the business logic intact as much as possible and to create the GUI from scratch. That said, your initiative can help a great deal even if in the first phase it would only support simple apps. Good luck! MichaelThursday, September 06, 2012 5:49 AM
https://social.msdn.microsoft.com/Forums/windowsapps/en-US/62145078-5007-4266-a4b1-29980e1b6f69/porting-tool-to-convert-windows-forms-applications-to-metro-style-applications?forum=winappswithcsharp
CC-MAIN-2017-22
refinedweb
964
61.46
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards Similarly to test case it is possible to apply list of decorators to test suite. It is done by specifying a list of decorators as the second argument to the macro BOOST_AUTO_TEST_SUITE or the third argument to the macro BOOST_FIXTURE_TEST_SUITE. How a test suite decorator affects the processing of the test units inside of it varies with the decorator and is described for each decorator in subsequent sections. For instance, the function of the decorator in the above example is that when tests are filtered by label "trivial", every test unit in suite suite1 will be run. Similar to C++ namespace test suite can be closed and reopened within the same test file or span more than one file and you are allowed to apply different decorators in each point, where test suite is opened. If this is the case, the list of decorators applied to the test suite is the union of decorators specified in each place. Here an example. In the above example, the scope of test suite suite1 is opened three times. This results in a test suite containing three test cases and associated with two label decorators. Therefore running tests by label "trivial" as well as by label "simple" both result in executing all three test cases from the suite.
https://www.boost.org/doc/libs/1_60_0/libs/test/doc/html/boost_test/tests_organization/decorators/suite_level_decorators.html
CC-MAIN-2022-21
refinedweb
237
58.11
When I find out when neg repped me like that, they are DEAD! I was NOT spoonfeeding. That code was a copy/paste pretty much of their own code a few posts earlier. I've reported the incident... Type: Posts; User: javapenguin When I find out when neg repped me like that, they are DEAD! I was NOT spoonfeeding. That code was a copy/paste pretty much of their own code a few posts earlier. I've reported the incident... r.nextInt() will return some random number that is a valid integer. That's from -2^32 +1 to 2^32 -1 I think. However, you could keep it as it but do this int k = Math.abs(r.nextInt()) % 10; ... Array indexes aren't valid, unless your variable happens to be of type Object, then it could be I think, if you initialized it as such, for regular int type. Try using a different variable than... The user presumably means that the code works, but they don't know the logic or it or why it is what it is. The game goes on forever. You don't seem to have a thing that will end it. Never mind. That was a spammer. Using AI stuff to copy posts, and it copied your first line, and post semi-relevant stuff, along with possibly links or stuff. That user has been banned. Anyway,... The Random's nextInt() could return negatives. However, this might work for you. r.nextInt(Integer.MAX_VALUE)%10; However, if that generated 1000 and then 100, it would both go to 0 with that... #-o Should have realized that. (Probably would have if I wasn't sleep deprived.) Actually, you might possibly be able to use a for loop for that one, though the while should be kept as well. for (int k = 1; k <=10; k++) { if ( n < 0.1k) count(k-1)++; } import javax.swing.JFrame; import javax.swing.*; import java.awt.*; import java.awt.event.*; import java.awt.geom.*; public class myFrame extends JPanel implements KeyListener, ActionListener... Arrays are a collection of similar, to an extent, an Object array could hold anything for instance, types. It has int indexes and would, assuming it was already intiialized, and assuming you... It appears that you're incrementing i several times in the same iteration, hence it's exiting quicker. If all of them are occurring under the same condition, then combine them under the same if... I did have such a test program. However, the results, which did point at what copeg was saying, conflicted with that I had long believed to be true about abstract, that is that I thought I needed at... No it's not inheritance. If anything, it's composition (or aggregation) hence the thread title. Thanks. However, it appears it's a Model-T style thing, not that I have to choose black, but that I can't have one JMenu with a Red selection and another, in the same program, with a Green... It seems to do this every single solitary time so I think I can make a shorter version of it that will show what's going on. import javax.swing.JMenuItem; import javax.swing.JMenu; import... You could always have both. Create a second one that will initialize your current object to the values of the parameter object. Use getters and setters, like setLength(), setWidth(),... I'm a bit confused here, unless there are two level variables, but is that java statement legal? It's declaring a variable and then initializing it to itself plus one. I'm not sure if that will... An SSCEE? I was asking pretty much how a class could be abstract without any abstract methods or unimplemented interface methods. Also, another thing, I can sub that class so far and only have... It implements, I think, all the methods from the interface Border. Yet it's abstract. How is that possible? I thought that in java an abstract class had to have at least one abstract method. I... JTextField field = new JTextField(20); field.setText("Button 1"); That is because you are referring to the field object you created that only has the scope of the ActionListener. Also,... Put this line before your actionListeners are coded. JTextField field = new JTextField(20); Right now field is null when it's going into them. Try changing it, in the TextBook class, to implements Comparable<TextBook> and see if that works. An Item might not necessarily be a TextBook. If you also had a class that extended... Any reason why both Item and Textbook are both implementing Comparable<Item>? I think that it's Comparable<T> so if you also want to have TextBook implement Comparable, then I think it... You posted the GUI class twice it looks like.
http://www.javaprogrammingforums.com/search.php?s=086143fe2fb82204874b2085b0847279&searchid=1725413
CC-MAIN-2015-35
refinedweb
804
76.22
: > Thus, > import spyce > print spyce.getServer().globals > should show you the globals. Try it and let me know if it works in your > environment. Thanks! That's very helpful. I'll give it a try. Niko Hi Niko, Sorry for the delayed reply. Yes, it should be possible. The globals are stored in field of the primary spyceServer object. Open up the file: spyce.py. You'll see a function called: getServer(). That returns the server object, and it has a field called globals, which is the dictionary of globals parsed from the configuration file. Thus, import spyce print spyce.getServer().globals should show you the globals. Try it and let me know if it works in your environment. As to your second question, regarding pre-loading a Spyce module. No, this is not possible, and for a good reason: it does not make sense. A Spyce module is a Spyce module because it is tied to a request. Otherwise, it could be written as a plain Python module. There are no requests at server startup time. Thus, you can pre-load regular Python modules, but not Spyce modules. All the best, Rimon. On Tue, 23 Dec 2003, Niko Matsakis wrote: >Hello, > > I have been using the Spyce package for various Python web page >work, and I want to thank you for making it available. It's wonderful; >precisely the minimal solution I was looking for. > > However, I have a question. I'm using the 'import' directive >in the spyce.conf file to load up my database at spyce startup. >However, I'd like to pass in a parameter containing the path to the >database file, but I can't figure out a good way to do it. I have been using >an environment variable, but for something started up from within Apache, >this is non-ideal. > > Ideally, I would put it in the globals section of the spyce.conf >file. However, is it possible for a python module to access the global values? >I know how a Spyce module can do it. > > If it is not possible, is it possible to pre-load a Spyce module >instead of a pure python module? > > >Thanks, >Niko Seems quite reasonable to me. I've applied the patch, and it will be in the next release. Thanks. All the best, Rimon. On Sun, 25 Jan 2004, Niko Matsakis wrote: >> Thus, >> import spyce >> print spyce.getServer().globals >> should show you the globals. Try it and let me know if it works in your >> environment. > >Hmm; I finally got around to giving this a try. Unfortunately, it didn't >quite work because of several ordering problems --- > > first, imports are processed before globals. This is easily > changed by swapping the order of the statements in spyceConfig.py > > then, the field 'globals' in the server is not initialized until > after the initial imports are complete. Solved that easily enough > too. > > but the biggest problem is that getServer() creates a server > if the global server field is not set. This means calling it > created two servers. I solved that with something of a hack > by creating a new global SPYCE_GLOBALS that is assigned the > value of the globals hashtable during the spyce server > constructor, and thus can be accessed by the imported modules. > > Not great, but very few lines of code change! :) Anyway, here is > the patch I ended up with. It seems to me that the spyce.conf > file is a great place to put config data for startup modules to live, > so perhaps you can either incorporate my changes or some more elegant > solution to the problem. > > >--- spyce-1.3.11/spyce.py Sun Jul 6 10:12:19 2003 >+++ x/spyce.py Sun Jan 25 21:47:11 2004 >@@ -52,6 +52,11 @@ > >+SPYCE_GLOBALS = None >+ >+def getServerGlobals (): >+ global SPYCE_GLOBALS >+ return SPYCE_GLOBALS > > ################################################## > # Spyce core objects >@@ -68,6 +73,7 @@ > overide_www_root=None, > overide_www_port=None, > ): >+ global SPYCE_GLOBALS > # server object > self.serverobject = spyceServerObject() > # http headers >@@ -82,6 +88,10 @@ > ) > # server globals/constants > self.globals = self.config.getSpyceGlobals() >+ SPYCE_GLOBALS = self.globals # hack >+ # now finish processing config file; this way imported modules have >+ # access to the globals >+ self.config.process () > # spyce module search path > self.path = self.config.getSpycePath() > # concurrency mode >diff -u -r spyce-1.3.11/spyceConfig.py x/spyceConfig.py >--- spyce-1.3.11/spyceConfig.py Tue Apr 22 11:28:12 2003 >+++ x/spyceConfig.py Sun Jan 25 21:48:49 2004 >@@ -73,7 +73,7 @@ > self.spyce_import = None # python modules loaded at startup > self.spyce_error = None # spyce engine-level error handler > self.spyce_pageerror = None # spyce page-level error handler >- self.spyce_globals = None # spyce engine globals dictionary >+ self.spyce_globals = {} # spyce engine globals dictionary > self.spyce_debug = None # spyce engine debug flag > self.spyce_concurrency = None # concurrency model > self.spyce_www_root = None # root directory for spyce web server >@@ -96,13 +96,14 @@ > self.overide_www_port = overide_www_port > self.default_www_handler = default_www_handler > self.default_www_mime = default_www_mime >+ def process(self): > # process (order matters here!) > self.processConfigFile() > self.processSpycePath() > self.processSpyceDebug() >+ self.processSpyceGlobals() > self.processSpyceImport() > self.processSpyceError() >- self.processSpyceGlobals() > self.processSpyceConcurrency() > self.processSpyceCache() > self.processSpyceWWW() >@@ -231,7 +232,7 @@ > if (len(self.spyce_pageerror)<2 or self.spyce_pageerror[0] not in ('string', 'file')): > raise 'invalid pageerror handler specification ("string":module:variable, or ("file":file)' > def processSpyceGlobals(self): >- self.spyce_globals = {} >+ self.spyce_globals.clear () > if self.spyce_config.has_key('globals'): > for k in self.spyce_config['globals'].keys(): > self.spyce_globals[k] = self.spyce_config['globals'][k] > > > >thanks, >Niko > I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/spyce/mailman/spyce-users/thread/Pine.LNX.4.44.0401230252510.15931-100000@pompom.cs.cornell.edu/
CC-MAIN-2017-13
refinedweb
937
61.02
On Mon, Dec 19, 2005 at 05:48:45PM +0100, Thomas Hood wrote: > > Note the definition for /usr/lib is "Libraries for programming and > > packages" and "/usr/lib includes object files, libraries, and internal > > binaries that are not intended to be executed directly by users or > > shell scripts." and /var/lib is "Variable state information" and "This > > hierarchy holds state information pertaining to an application or the > > system. State information is data that programs modify while they run, > > and that pertains to one specific host." > > > > Combining these two, and adding the "...needed to boot the system" > > qualifier seems like it would perfectly cover the above requirements > > and /run. > Let me see if I have understood the argument. Let's call the new > directory 'R' for now. > > <attempt to paraphrase> > /lib is, like R, a directory required for programs needed > to boot the system and run commands in the root filesystem; > and /var/lib is, like R, a place where data is stored. > We just heard "lib" twice! So /lib is the right place for R. > </attempt to paraphrase> "We just heard "lib" twice!" ? You either get to fairly summarise an argument or mock it; don't try to do both at once. The line of thinking is this: we would like to put everything in a single namespace under /, but we can't for two reasons: one is that / has to be small in some circumstances, so we use a separate namespace, /usr, that doesn't have that limitation; the other is that it's better to have separate read-only and read-write namespaces; so we put non-static information in /tmp, and /var. But beyond that, /, /usr, and /var are more or less the same -- hence /bin and /usr/bin, /lib and /usr/lib, and /tmp and /var/tmp, in each case, both directories containing the same sort of contents -- with the provisos that changing data goes in /var/foo, large static data goes in /usr/foo, and we limit /foo to stuff that's needed to boot or recover the system. If the question is "what does /lib contain?" the answer, then, is "the same sort of stuff that /usr/lib or /var/lib contains, but limited to that necessary to boot the system". > I don't think that an argument from the meaning of "lib" can get > much traction because /lib, /usr/lib and /var/lib are so different. They don't seem very different to me. /lib and /usr/lib both contain static libraries. /usr/lib and /var/lib both contain subdirectories of package specific data, split by the requirements of /usr and /var. /lib, /usr/lib and /var/lib are all a good default place for random data that people hadn't previously thought of. > (I'll guess that these differences are there because: > * /usr/lib contained both application code and application data > in the old days; So did /etc. Unlike /etc, /usr/lib remains *designed* to contain application code and data, currently. /lib in practice does the same thing today: containing .so files, kernel modules, script fragments, and terminal information. > * When application data was removed from /usr/lib it was placed > in /var/lib, which missed the opportunity to choose a more > appropriate name such as '/var/data'; For a brief time, /var/lib was renamed to /var/state, as it happens; it was renamed back when that proved unnecessarily cumbersome. > * When /usr/share was split out of /usr/lib, no /share was split > out of /lib. That would be because it's no easier to share /share across different machines than /usr/share, and because creating new directories in / is a bad idea. > But there are problems with this particular argument as I have > paraphrased it (probably distorting it). First, if we accept the > reasoning steps then the conclusion ought to be that the right value > for R is "/lib/lib". What went wrong? You pulled a "lib" from nowhere -- following the theory the "right" value for R in that case would be "<package>" or "misc". But that doesn't help distinguish an early read-write namespace, and seems kinda pointlessly pedantic. Afterall, if you're going to be that pedantic, /lib already forbids non shared libraries and non modules, so clearly /lib/run is unpossible. > So if we add _another_ directory with the same supporting role as > them then it should be, like them, in the root directory. Only if you also want to say "supporting directories of apps in /usr/bin should be, like them, in /usr, thus /usr/var/...". And while you might well say that, we don't. > Second, we missed the fact that the > function of R is more analogous to /var/run than to /var/lib, Actually we (I) deliberately ignored it; mostly because /var/run is a subset of the functionality of /var/lib, and "needed for boot" doesn't warrant breaking up the functionality like that. /lib/var would be the obvious alternative, and more accurately indicate "variable data needed early in bootup". *shrug* > and so > should have a basename of 'run' rather than 'lib'. Hence R should > be /run. Heh. I'm shocked -- shocked!! -- by your conclusion. > Briefly, if R is like /var/run except that it supports programs In theory, R is exactly like /var/run, and should in fact be /var/run. I'm not sure there actually are any situations where /var can't be mounted as soon as it's necessary; that may mean doing an NFS mount before running ifup; but that happens already if you've got an NFS mounted root fs, eg. In any event, the /var/run -> /run argument is fine, except that you *also* have to have a good argument why you can't just use an existing directory in /. > Here's another possible argument: > Putting R in /lib spoils the otherwise read-only > character of that directory. Putting R in / spoils the otherwise read-only character of that directory. *shrug* On Mon, Dec 19, 2005 at 06:01:15PM +0100, Thomas Hood wrote: > Anth. Only in the sense that they're free to type "sudo mv /bin /Binaries" and live with the fact they can't login. If /run is a tmpfs, putting information that should be in /var will cause performance problems. Accidently encouraging that by our choice of naming isn't good behaviour on our part. > As for upstream programmers, most of them can't use /run because > their software doesn't run with root privileges. That is, pretty much everything that runs as a daemon, and that might have otherwise used /var in general. > I don't think that Debian would ever be accused of lacking zeal in > enforcing its Policy. :) Please see the other thread on RC bugs not getting fixed... Cheers, aj Attachment: signature.asc Description: Digital signature
https://lists.debian.org/debian-devel/2005/12/msg00904.html
CC-MAIN-2017-51
refinedweb
1,140
61.36
/* Window creation, deletion and examination for GNU Emacs. Does not include redisplay. "keymapdef macintosh #include "macterm.h" #endif /* Values returned from coordinates_in_window. */ enum window_part { ON_NOTHING, ON_TEXT, ON_MODE_LINE, ON_VERTICAL_BORDER, ON_HEADER_LINE, ON_LEFT_FRINGE, ON_RIGHT_FRINGE }; Lisp_Object Qwindowp, Qwindow_live_p, Qwindow_configuration_p; Lisp_Object Qwindow_size_fixed; extern Lisp_Object Qheight, Qwidth; static int displayed_window_lines P_ ((struct window *)); *)); static Lisp_Object window_list_1 P_ ((Lisp_Object, Lisp_Object, Lisp_Object)); /* The value of `window-size-fixed'. */ int window_size_fixed; /*; /* Non-nil means that Fdisplay_buffer should even the heights of windows. */ Lisp_Object Veven_window_heights; /* */ extern ();->min,)->height; }_internal_width used by WINDOW, and BOTTOM is one more than the bottommost row used by WINDOW left or right fringe of the window, return 5 or 6, and convert *X and *Y to window-relative corrdinates. X and Y are frame relative pixel coordinates. */ static enum window_part coordinates_in_window (w, x, y) register struct window *w; register int *x, *y; { /* Let's make this a global enum later, instead of using numbers everywhere. */ struct frame *f = XFRAME (WINDOW_FRAME (w)); int left_x, right_x, top_y, bottom_y; enum window_part part; int ux = CANON_X_UNIT (f); int x0 = XFASTINT (w->left) * ux; int x1 = x0 + XFASTINT (w->width) * ux; /* The width of the area where the vertical line can be dragged. (Between mode lines for instance. */ int grabbable_width = ux; if (*x < x0 || *x >= x1) return ON_NOTHING; /* In what's below, we subtract 1 when computing right_x because we want the rightmost pixel, which is given by left_pixel+width-1. */ if (w->pseudo_window_p) { left_x = 0; right_x = XFASTINT (w->width) * CANON) && *y < bottom_y) /* We're somewhere on the mode line. We consider the place between mode lines of horizontally adjacent mode lines as the vertical border. If scroll bars on the left, return the right window. */ part = ON_MODE_LINE; if (FRAME_HAS_VERTICAL_SCROLL_BARS_ON_LEFT (f)) { if (abs (*x - x0) < grabbable_width) part = ON_VERTICAL_BORDER; } else if (!WINDOW_RIGHTMOST_P (w) && abs (*x - x1) < grabbable_width) part = ON_VERTICAL_BORDER; else if (WINDOW_WANTS_HEADER_LINE_P (w) && *y < top_y + CURRENT_HEADER_LINE_HEIGHT (w) && *y >= top_y) { part = ON_HEADER_LINE; if (FRAME_HAS_VERTICAL_SCROLL_BARS_ON_LEFT (f)) { part = ON_VERTICAL_BORDER; } /* Outside anything interesting? */ else if (*y < top_y || *y >= bottom_y || *x < (left_x - FRAME_LEFT_FRINGE_WIDTH (f) - FRAME_LEFT_SCROLL_BAR_WIDTH (f) * ux) || *x > (right_x + FRAME_RIGHT_FRINGE_WIDTH (f) + FRAME_RIGHT_SCROLL_BAR_WIDTH (f) * ux)) part = ON_NOTHING; else if (FRAME_WINDOW_P (f)) if (!w->pseudo_window_p && !FRAME_HAS_VERTICAL_SCROLL_BARS (f) && !WINDOW_RIGHTMOST_P (w) && (abs (*x - right_x - FRAME_RIGHT_FRINGE_WIDTH (f)) < grabbable_width)) { part = ON_VERTICAL_BORDER; } else if (*x < left_x || *x > right_x) /* Other lines than the mode line don't include fringes and scroll bars on the left. */ /* Convert X and Y to window-relative pixel coordinates. */ *x -= left_x; *y -= top_y; part = *x < left_x ? ON_LEFT_FRINGE : ON_RIGHT_FRINGE; } else { *x -= left_x; *y -= top_y; part = ON_TEXT; } else { /* Need to say "*x > right_x" rather than >=, since on character terminals, the vertical line's x coordinate is right_x. */ if (*x < left_x || *x > right_x) { scroll bars on the left. */ /* Convert X and Y to window-relative pixel coordinates. */ *x -= left_x; *y -= top_y; part = *x < left_x ? ON_LEFT_FRINGE : ON_RIGHT_FRINGE; } /* Here, too, "*x > right_x" is because of character terminals. */ else if (!w->pseudo_window_p && !WINDOW_RIGHTMOST_P (w) && *x > right_x - ux) { /* On the border on the right side of the window? Assume that this area begins at RIGHT_X minus a canonical char width. */ part = ON_VERTICAL_BORDER; } else { /* Convert X and Y to window-relative pixel coordinates. */ *x -= left_x; *y -= top_y; part = ON_TEXT; } return part; = PIXEL_X_FROM_CANON_X (f, lx); y = PIXEL_Y_FROM_CANON_Y (f, ly); switch (coordinates_in_window (w, &x, &y)) case ON_NOTHING: return Qnil; case ON_TEXT: /* X and Y are now window relative pixel coordinates. Convert them to canonical char units before returning them. */ return Fcons (CANON_X_FROM_PIXEL_X (f, x), CANON_Y_FROM_PIXEL_Y (f, y)); case ON_MODE_LINE: return Qmode_line; case ON_VERTICAL_BORDER: return Qvertical_line; case ON_HEADER_LINE: return Qheader_line; case ON_LEFT_FRINGE: return Qleft_fringe; case ON_RIGHT_FRINGE: return Qright_fringe;; enum window_part found; int continue_p = 1; found = coordinates_in_window (w, cw->x, cw->y); if (found != ON_NOTHING) *cw->part = found - 1; XSETWINDOW (*cw->window, w); continue_p = 0; } return continue_p; /*) != ON_NOTHING)) { *part = 0; window = f->tool_bar_window;; { int part; struct frame *f; f = XFRAME (frame); /* Check that arguments are integers or floats. */ CHECK_NUMBER_OR_FLOAT (x); CHECK_NUMBER_OR_FLOAT (y); return window_from_coordinates (f, PIXEL_X_FROM_CANON_X (f, x), PIXEL_Y_FROM_CANON_Y (f, y), );
https://emba.gnu.org/emacs/emacs/-/blame/38b81d747a2b394f01732ca5fa93dbf04456e30a/src/window.c
CC-MAIN-2021-17
refinedweb
640
53.71
Errata for Agile Web Development with Rails The latest version of the book is P (28-Sep-11) PDF page: hpcXE Paper page: tYVhL iLhIjY web20power.txt;1;1--JiueKXnZGiTq - Reported in: P2.0 (25-Aug-05) PDF page: i Paper page: i Dave says: For general help with Rails and examples in the book, I recommend the Rails mailing list at We need a forum for problems encountered while working through examples and better descriptions of how to fix the Mac Tiger mysql problems--Richard Williams - Reported in: P4.0 (21-Mar-06) Paper page: 1 You should put together a comprehensive list of all the naming conventions in one place. "id" is in the begining of the book, datetime on page 116 and the sloppy wrapup on page 188 just doesn't cut it.--Old C Hippie - Reported in: P6.0 (10-Sep-11) PDF page: 2rand Paper page: 2rand +z$8frZdyL%68pSU/:>w<:E3.lG-!XIB--zyqMdVsbiEOwzNQ - Reported in: P4.0 (03-Mar-06) PDF page: 5 Currently, all downloadable code linked from the PDF book has the Content-Type set to "text/html". This will make most browsers show the code in one huge line without breaks. It would be nice if all downloadable code had the Content-Type set to "text/plain", so browsers show them nicely formatted. In a perfect world, you would be able to set the Content-Type to "text/x-ruby-source" or something like that, and the browser would show the code with syntax highlight... oh, well...--Marcus Brito - Reported in: P4.0 (21-Apr-06) PDF page: 5 "In the margin, you--jeff e - Reported in: P4.0 (08-Feb-06) PDF page: 6 "This book documents Rails V1.0, which became available in mid 2005" should be "This book documents Rails 1.0, which became available on December 13, 2005" Also "the last release before Rails 1.0" should be "one of the last releases before Rails 1.0"--Peter T Bosse II - Reported in: P4.0 (23-Apr-06) PDF page: 15 "Sometimes these class-level methods return collections of objects. Order.find(:all, :conditions => "name='dave'") do |order| puts order.amount end" Find returns a collection but requires an iterator to call the block on it. So I think you're missing an ".each" before the "do".--Kieron Browne - Reported in: P4.0 (21-Jun-06) PDF page: 20 Well one drop for the general context... I must confess I haven't bought that book yet, I'm just testing it. It's my first contact with Rails, Ruby and even MySQL (began yesterday) Your explanation for the installation on windows is nice, but too "simple". In fact since there isn't another way to install Rails without dowloading it thru gem, you need to care about people who are "behind" a firewall with NAT. I suspected I needed to tell to "gem" that I was using a proxy ..but where and how !!!. I checked in many files in the gem directories but without succes. Also unlucky on official Rails homepage. A few haurs later I fanally found the info in a fack in the wiki version for Rail. Please specify the following parameter for people using proxies : -p (where proxy is the name or IP of the proxy server and port the port number used for internet access especially when "NATed") => ex : gem install rails --include-dependencies -p Sorry for my bad english, 'cause I'm a poor frenchy :) Can feedback me on kagejin0@yahoo.fr other Erratums will follow--Koguma - Reported in: P4.0 (17-Jun-06) Paper page: 23 In the fifth printing of the book, the link to Lucas Carlson's instructions for Ruby on Mac OS X Tiger leads to an error page.--Jeffrey Yu - Reported in: P4.0 (03-Apr-06) PDF page: 33 The "Making Development Easier" box is very valuable information, but I question whether this is the best place to put it. I would have rather read something about how close .rhtml is to a JSP. In any case, talking about Webrick and its use in production at this point sort of threw me off track a bit. Just a suggestion that something else might be more effective here. (-:--Lindy Mayfield - Reported in: P4.0 (03-Apr-06) PDF page: 50 I am going through the tutorial with you: After "rails depot" and then "cd depot" and then "ls" my directories are different from yours. For example I don't have CHANGELOG or Rakefile.--Lindy - Reported in: P4.0 (26-Apr-06) PDF page: 50 The code at the bottom of the page is unclear. What should one do with it? Type it at an 'mysql' prompt? If so, the result is an error (1046) that states that no database is selected. - Reported in: P4.0 (21-Jun-06) PDF page: 50 I'm using MYSQL 5.0.22 the current last stable version on windows XP SP2. It's my first enccounter with MYSQL so my problems could be related to my lack of experience. Nevertheless in your databases creation process thru the command-line I got some trouble. In fact the "grants" didn't work because they expect to have the users created before. For instance thru a CREATE USER 'dave'@'localhost' and CREATE USER 'prod'@'localhost' identified by 'wibble'. The bad point is that by doing so (maybe a new default behavior of Mysql) all the priviledges are granted to these users. So we need to restrict thme by command line or by using a GUI (I advise to use MySQL-Front, it's a nice free toy) => same erratum on page 51 when you create the table products thru the sql script. By doing so you're using an ODBC user. So you need to create an 'ODBC'@'localhost'. Feedbacks on kagejin0@yahoo.fr--Koguma - Reported in: P4.0 (26-Jan-06) PDF page: 51 you might want to add "-u dave" to the mysql statement--Chad Bearden - Reported in: P4.0 (26-Apr-06) PDF page: 51 When one types the following string (mysql depot_development <db/create.sql), the result is an ->. What then? This entire section is unclear. - Reported in: P4.0 (12-Feb-06) PDF page: 52 It seems that the rails script now adds the application to database.yml: database: depot_development It automatically replaces the rails_development with depot_development. --John Crowhurst - Reported in: P4.0 (04-Apr-06) PDF page: 52 Not sure if you mention this later or not, but for Windows there needs to be: port: 3306 in the file. And for Linux there needs to be, for example: port: 3306 socket: /var/lib/mysql/mysql.sock --Lindy - Reported in: P4.0 (21-Jun-06) PDF page: 52 Depot_xxxx are already correctly set in database.yml. But as you said after, I advise to put 'dave' as username in development and test. The system is using root by default and since no password is provided this will lead to an error in the script generation on page 53. P.S. : I have one more parameter in the database.yml version.. host: localhost Feedback to kagejin0@yahoo.fr--Koguma - Reported in: P4.0 (19-Apr-06) PDF page: 53 Repeated reinstallations of entire package on OSX 10.4.6 continually produce the following: ./script/../config/../config/environment.rb:8: warning: already initialized constant RAILS_GEM_VERSION exists app/controllers/ exists app/helpers/ exists app/views/admin exists test/functional/ dependency model exists app/models/ exists test/unit/ exists test/fixtures/ identical app/models/product.rb identical test/unit/product_test.rb identical test/fixtures/products.yml Unknown database 'depot_development' Error is not documented in footnotes and I can't find anything on it an forums. Any ideas? - Reported in: P4.0 (21-Apr-06) PDF page: 53 for, depot> ruby script/generate scaffold Product Admin I had to use infor from to get it to work on Linux (gentoo distro). --Mike Nelson - Reported in: P4.0 (26-Apr-06) PDF page: 53 When attempting to run the line "ruby script/generate scaffold Products Admin" in Windows, I receive the following error listed in the book (addressed only for Mac): Before updating scaffolding from new DB schema, try creating a table for your model (Product). - Reported in: P4.0 (30-Apr-06) PDF page: 53 On OS X (10.4.3), when I attempt to run script/generate scaffold Product Admin, I get the following error: You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occured while evaluating nil.each Not sure how to proceed. I have been on the IRC channels looking for help but to no avail. Any ideas? - Reported in: P4.0 (28-Feb-06) Paper page: 54 While creating the databases, the user should issue the command: mysql> grant all on depot_development.* to 'dave'@'localhost'; There is no indication if the user should substitue dave with the user's own logon name. I know this is a book about Rails, but some minor SQL explanations would be nice. Could even be fine text at bottom of page!--John - Reported in: P4.0 (11-Mar-06) Paper page: 54 After setting database privileges, if you exit mysql without typing the magic words "flush privileges", the privileges you've just set won't take effect.--Daniel Torrey - Reported in: P1.0 (03-Dec-06) Paper page: 57 ppc:~/work/depot tomcoady$ ruby script/generate scaffold Product Admin ./script/../config/boot.rb:18:in `require': No such file to load -- rubygems (LoadError) from ./script/../config/boot.rb:18 from script/generate:2:in `require' from script/generate:2--Tom Coady - Reported in: P1.0 (03-Dec-06) Paper page: 57 Solution to my tech error: ppc:~/src/rubygems-0.8.11 tomcoady$ sudo ruby setup.rb ppc:~/work/depot tomcoady$ sudo gem install -v=1.1.6 rails --Tom Coady - Reported in: P4.0 (09-Jul-06) Paper page: 57 After running "ruby script/generate scaffold Product Admin" on linux box using SuSE v10.0 and MySql Ver 14.12 Distrib 5.0.22, I get an error stating /tmp/mysql.sock cannot be found. I resolved this by searching for mysql.sock and creating soft link to it from /tmp dir. e.g. ln -s /var/lib/mysql/mysql.sock tmp/mysql.sock I was then able to get form depicted on p.58--Mark Glass - Reported in: P3.0 (12-Jan-06) Paper page: 57 The 'gotcha' notes (3,4,& 5) are useful. You might consider adding another... If you get Routing Error: Recognition failed for '/admin', make sure that you're starting WEBrick from the depot folder and not the demo folder.--Jason - Reported in: P6.0 (16-Dec-06) Paper page: 57 I think the code should read ruby script/generate scaffold Products Admin since the table was created with the name products (plural, not singular) otherwise when attempting to load the page, I get a recognition failed error I made that simple change and it worked--Clark Alexander - Reported in: P4.0 (11-Apr-08) Paper page: 57 My version of the paper book is actually P5 (2006-02-16). The problem is that scaffold has changed. I am running Rails 2.02 and the following command will work. ruby script/generate scaffold Product id:int title:varchar description:text image_url:varchar price:decimal -f So, in answer the the question "That wasn't hard now, was it?" is actually "it was a nightmare"!!--Antony Scott - Reported in: P6.0 (09-Jan-07) Paper page: 57 Okay, so this is a gripe. As noted previously, the link to Lucas Carlson's fix is no longer to be found as described. However, you can go to the Internet Archive website and reconstruct the script to run locally. That being said, I have done so and have reinstalled Ruby and MySQL and am still getting the dreaded "--ray palermo - Reported in: P4.0 (08-May-06) Paper page: 57 I've seen this error elsewhere but never a solution. Here goes: On page 57, I entered, at depot prompt, ruby script/generate scaffold Product Admin The error I get is: "#28000Access denied for user 'root'@'localhost' (using password: NO)" The author's statement after the description of this is: "That wasn't hard now, was it"...actually, it's turning out to be very hard. I had to battle to get the commands going on page 54 too. It really shouldn't be this hard... --pat lynch - Reported in: P4.0 (19-Mar-06) PDF page: 62 The unless clause of the validate method is wrong. To verify this, comment out 'validates_numericality_of :price' and try submitting a product without a price; you'll get a null constraint violation. Something like 'unless price && price >= 0.01' works better.--Kevin Christen - Reported in: P4.0 (21-Jun-06) PDF page: 62 In the eBook we have this code for product.rb: errors.add(:price, "should be positive") unless price.nil? || price >= 0.01 Where as the downloadable code has the following: errors.add(:price, "should be positive") unless price.nil? || price > 0.0 Notice that the last line compares "price > 0.0" instead of "price > 0.01" --Max - Reported in: P4.0 (21-Jun-06) PDF page: 62 Quote from second paragraph: "Note that we only do the check if the price has been set. Without that extra test we--Max - Reported in: P6.0 (28-May-07) PDF page: 62 Just a note for readers who may get put-off by the unknowability's of generated code-bases early on. Should a user make an error in pluralisation in either the SQL or Scaffold command (e.g. product v.s. Products) the application fails with a NameError for the model name (activesupport 'load_missing_constant' in the trace) when trying to acess the 'admin' url. I could only recover from this by deleting and recreating the whole application. Running the Scaffold again overrode files, etc, but despite diffing between version controlled files before and after there was no visible reason why was stuck, and little info on Google addressed this. This leaves me feeling I need to be very careful when running the Scaffold commands if you are deeper into your application development.--Liam Clancy (metafeather) - Reported in: P4.0 (30-Apr-06) PDF page: 63 Paper page: 63 Cannot "destroy" items. Both file listings 66 & 67 omit ':post' argument in the link_to 'Destroy' line. The line should read: <%= link_to 'Destroy', { :action => 'destroy', :id => product }, :confirm => "Are you sure?", :post => true %> --Randy W. Sims - Reported in: P4.0 (01-Apr-06) PDF page: 64 Link to ERb definition is not on page 31 as stated.--Pierre-Loïc Raynaud - Reported in: P6.0 (21-Aug-06) PDF page: 64 It took me quite some time when I was working through the sample code to figure out what was wrong. When editing the list.rhtml, I just erased what was there and worked through the example in the book, however, when I did that, I could never destroy any of the products that I had added. Look at the code generated from the scaffold, I realized that the example in the book is missing something vital. In File 67 the Destroy code should be replaced with this: <td><%= link_to 'Destroy', { :action => 'destroy', :id => product }, :confirm => 'Are you sure?', :post =>true %></td> and in File 68 it should be: <%= link_to 'Destroy', { :action => 'destroy', :id => product }, :confirm => "Are you sure?", :post => true %> The line :post => true is missing in both files. Anyways, hope this helps.--David Kartik - Reported in: P4.0 (18-Jul-06) PDF page: 65 ITERATION A4 - Reported in: P4.0 (08-Mar-06) PDF page: 65 Relative URL's not allowed by current validates_format_of :with parameter; where the text says: "Put some images in the public/images directory and enter some product descriptions, and the resulting product listing might look something like Figure 6.7, on the next page.", the current validation functionality in the model (/depot/app/models/product.rb) will not allow the correct relative url (i.e. /images/sk_utc_small.jpg) to be entered, since it requires a full URL. In the downloadable code for depot4/app/models/product.rb, the regular expression should be commented out or deleted, and replaced with one allowing relative url's for the images. And this should be among the modifications mentioned in the text for Iteration A4.--Victor Kane - Reported in: P4.0 (21-Jun-06) PDF page: 65 ":post => true" is missing after :confirm => "are you sure?". Without this you cannot destroy anything :) Feedback to Kagejin0@yahoo.fr --Koguma - Reported in: P6.0 (17-Aug-06) Paper page: 69 The list action for Destroy as it is in the book didn't work for me: Clicking Destroy would pop up the confirmation dialog, but choosing ok would just refresh the list but not delete the product. After looking at the initially generated scaffolding code, I've noticed that this included an additional parameter :post => true After I changed the whole line to <%= link_to 'Destroy', { :action => 'destroy', :id => product }, :confirm => "Are you sure?", :post => true %> the destroy action worked as expected. PS: It might be a version issue. I've observed it with Rails 1.1.6 PPS: I have the 5th printing, February 06. This not available from the version drop down box.--Stefan Moser - Reported in: P4.0 (03-Mar-06) Paper page: 69 The text starting: 'put some images in the public/images directory' led me to think that the correct url for the images in the production form was /public/images/xxx.jpg Actually, after some considerable head scratching, it turns out the base for static resources is public, so the correct href is /images/xxx.jpg. The text gives no inkling of this..--Michael Karliner - Reported in: P4.0 (17-Mar-06) Paper page: 69 The images do not display when added in relative format. In the list.rhtml file you should use <img width="100" height="80" src="/depot/images/<%= product.image_url %>" /> instead. Then when adding an image just add the image name. It would be bad practice to use images that begin with http: anyway, especially if those images are from another site.--scottrad - Reported in: P4.0 (07-Apr-06) PDF page: 71 We’ll also take this opportunity to tidy up the index.rhtml view in app/views. it should be: app/views/store--kibo - Reported in: P4.0 (18-Jan-06) PDF page: 71 This is the first time depot.css is mentioned. It has to be included in the project for proper view display. Needs to be copied from the sample code.--james patrick mclemore - Reported in: P6.0 (18-Oct-06) PDF page: 71 I'd rather remove the line numbers from listing of layout store.rhtml. With line numbers this sample is uncomfortable to copy/paste. - Reported in: P1.0 (04-Dec-06) Paper page: 73 The suggested code does not work for me but I found the right code elsewhere in the code bundle - detail here to avoid unititiated Product: Coady - Reported in: P4.0 (09-Feb-06) PDF page: 76 As was previously stated, " In the implementaion of find_cart, the 3rd line reads "session[:cart] ||=Cart.new" This line should read "@cart = session[:cart] ||= Cart.new" " I ran into an issue in which if I accessed /store or /store/display_cart directly in a fresh (meaning session cookie-less) browser window, I could refresh the page as many times as I want and keep the same session, however if I added an item to the cart or otherwise changed pages in the browser, it would set a new cookie everytime (even everytime I refreshed the new page). Making the change noted in the quote above fixed the issue. I'm running Ruby 1.8.2, Rails 1.0.0 on a Win XP Pro SP2 pc, and I caught the error by adding the line <%= debug session %> to both my index.rhtml and display_cart.rhtml (in store views). - Reported in: P4.0 (17-May-06) PDF page: 76 At first sight this approach to database maintenance is attractive--Gordon Thiesfeld - Reported in: P4.0 (03-Jun-06) PDF page: 76 As pointed out before, there is an error in session[:cart] ||= Cart.new I replaced the code with session[@cart] ||= Cart.new and it magically worked again Need someone to explain this to me - Reported in: P2.0 (20-Aug-05) PDF page: 79 The hyperlink to page 474 that describes the << operator points to page 473.--Stephen Touset - Reported in: P6.0 (07-Sep-06) Paper page: 80 I made all the additions to the code to the end of making the cart I get the rails error: "You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occured while evaluating nil.size" I don't know how to fix this--Gerry - Reported in: P3.1 (18-Mar-06) Paper page: 81 IWhen I run the sql at the bottom of the page to create the new table, I get an error "ERROR 1005 at line 13: Can't create table '.\depot_development\line_items.frm' <errno: 150>" MySQL (I'm running 4.0.16) needs an index to be declared on the foreign key can be created. Like so: create table line_items ( id int not null auto_increment, product_id int not null, quantity int not null default 0, unit_price decimal(10,2) not null, primary key (id), index ix_line_items (product_id), constraint fk_items_product foreign key (product_id) references products(id) ) TYPE=InnoDB; Also MyIsam is the default table type, so I've forced innodb for the foreign key support. Not sure if that would have affected anything though.--Nick Coyne - Reported in: P1.0 (12-Sep-06) Paper page: 81 similar to the error already reported... ERROR 1064 (42000) at line 12: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' product_id int not null, quantity int not null default 0, unit_price decimal' at line 2 (mysql 5.0.21) typing the code as is on the bottom of 81, mySQL is kicking back this error. help?--dave rupert - Reported in: P4.0 (22-Jun-06) PDF page: 81 items.sizes is nil !!! - Reported in: P4.0 (31-Mar-06) PDF page: 82 Others have mentioned this error and I've tried all their suggestions but I can't get any to work. Upon testing the cart to see if it will return the @item.size of the cart's contents I get an error of: NoMethodError in Store#display_cart Showing app/views/store/display_cart.rhtml where line #2 raised: You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occured while evaluating nil.size Now what? I've rebooted the WebBrick server. Killed the browser window after deleting all the cookies, session data, etc. I'm using RadRails so I closed it as well. Booted the server back up from a command line and get the same option. I've searched the internet through and tons of users have had this problem but there are no solutions. Please, oh please, help!--Philip Joyner - Reported in: P4.0 (20-Mar-06) PDF page: 83 find: Using the model declaration forces Rails to load the user model class early, replace with: Using the model declaration forces Rails to load the cart model class early, explanation: you're talking about the "cart" model not the "user" model. - Reported in: P4.0 (30-Mar-06) PDF page: 84 The online version of the file for '...app/models/cart.rb' () differs in the fourth last line from the class definition in the book. Book (pp 83-4): ... @total_price += item.unit_price ... Downloadable file 'cart.rb': ... @total_price += product.price ... The dowloadable file seems to be correct.--Gus Gollings - Reported in: P3.1 (02-Jan-06) Paper page: 85 The display_cart method on this page must be placed in the store controller *before* the private declaration line that's already in that class definition. I may be the only person who is paying so close attention to getting the method right that I overlook this placement but doing so results in an extended (and informative) debugging session as Rails complains it has no method called "display_cart".--Dan Shafer - Reported in: P2.0 (07-Apr-06) Paper page: 85 In reference to problem #2532, "You have a nil object when you didn't expect it!", I also ran into the problem in chapter 8. I typoed a simple part of the application, tested it, fixed the typo, then got that error no matter what I tried (even deleting everything and typing it up again). The answer was indeed to delete session data: depot/tmp/sessions--April - Reported in: P3.0 (12-Oct-06) Paper page: 85 When looking at "depot/public/store/display_cart" with my browser I was getting the error: undefined local variable or method `find_cart' for #<StoreController:0x4080b768> I then checked the downloadable Code snippets and noticed the find_cart method defined in File 75. I added this method to my code (as I was typing it from the book) and things worked. It should say in the book to add the find_cart method (it doesn't). (note: the version I'm using is: Third Printing, September 2005 Version: 2005-9-13 (the closest option to submit this erratum was P3.0 - September 29))--Sean Lerner - Reported in: P1.0 (09-Mar-06) Paper page: 85 Re #2532 - This also happend to me running under Webrick. The solution was to both close the browser and restart Webrick. Hope this helps the other reader.--brian - Reported in: P1.0 (16-Sep-07) PDF page: 86 Concerning the invalid IDs of objects in the URL line: it is not certain that an exception will always be thrown. I assume that the ID field, extracted from the URL, is transformed to a number without any checks made. So if there is a product with ID=0 in the database, it is actually added in the cart! This is way worse than crackers looking at an exception report page! One solution is to make sure somehow that the database starts counting from 1, but this doesn't make me feel secure enough. I believe that there should be further error checks on the ID parsing level.--Nikos Mouchtaris - Reported in: P4.0 (12-Apr-06) Paper page: 87 In regard to the common error "NoMethodError in Store#display_cart Showing app/views/store/display_cart.rhtml where line #5 raised: You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occured while evaluating nil.each " I found that I had a typo in the store_controler.rb I had entered. --- def display_cart @cart = find_cart @items = @cart_items end --- but correct code is --- def display_cart @cart = find_cart @items = @cart.items end --- Oh the joys of being a newbie!!! 5 hours of head scratching for that one!--glenn - Reported in: P2.0 (15-Mar-06) Paper page: 88 Editing cart.rb in order to enable the quantity to increment requires a WebTrick restart to work properly.--Bob Clewell - Reported in: P4.0 (13-Mar-06) PDF page: 90 The file link beside "<% @page_title = "Your Pragmatic Cart" -%>" is File 29. File 29 contains the fmt_dollars() helper. fmt_dollars() isn't defined until later on PDF p93. Perhaps the file link on PDF p90 should be to a version of display_cart.rhtml without fmt_dollars?--David Hislop - Reported in: P4.0 (15-May-06) PDF page: 96 Unable to create the session table. C:\apps\ruby\files\work\depot>rake --trace db:session:create (in C:/apps/ruby/files/work/depot) rake aborted! Don't know how to build task 'db:session:create' c:/apps/ruby/lib/ruby/gems/1.8/gems/rake-0.7.1/lib/rake.rb:1449:in `[]' c:/apps/ruby/lib/ruby/gems/1.8/gems/rake-0.7.1/lib/rake.rb:455:in `[]' c:/apps/ruby/lib/ruby/gems/1.8/gems/rake-0.7.1/lib/rake.rb:1906:in `run' c:/apps/ruby/lib/ruby/gems/1.8/gems/rake-0.7.1/lib/rake.rb:1906:in `run' c:/apps/ruby/lib/ruby/gems/1.8/gems/rake-0.7.1/bin/rake:7 c:/apps/ruby/bin/rake.bat:25 --Dave Pawson - Reported in: P3.0 (31-May-06) Paper page: 96 Flash messages are not displaying correctly using IE 6 sp2. Results of invalid add_to_cart, empty_cart, etc no flash message appears to display. Resizing IE window with mouse, flash message appears & disappears. Resizing IE window with F11 key yields flash menu with missing bottom red line of the flash box. Hitting key F11 to maximize IE, flash message displays --Steven - Reported in: P2.0 (22-Nov-05) Paper page: 98 using the fmt_dollars(item.unit_price) method, values less than 1, (values < 1) on the display_cart details show up as 0.00 but get added correctly to the total.--Jeff Bell - Reported in: P4.0 (31-Jan-06) PDF page: 98 render_partial "form" is now render :partial => "form"--Dave Myron - Reported in: P6.0 (17-Nov-12) PDF page: 98 Destroy link does still not work, even if <%= link_to 'Destroy', { :action => 'destroy', :id => product }, :confirm => "Are you sure?", :post => true %> as suggested here is used instead of the used code in the book, which does not work either. It simply creates a link like: localhost:3000/products/10 But this only shows the products attributes and an edit link plus a back link. - Reported in: P4.0 (27-Jun-06) PDF page: 99 In "Putting Session in the Database", theres an extra "that." in the first paragraph (First, we--Jose Guerra - Reported in: P4.0 (06-Mar-06) PDF page: 99 Since file names can contain dashes (-) in them, it never struck me that check-out.rhtml was actually checkout.rhtml until I went to test the page. While a - in publishing means that a word is continued on the next line, it was misleading to me since it just so happened to be a file name that it was connecting to on the next line.--Ryan Prins - Reported in: B1.0 (10-Oct-06) PDF page: 101 @order.line_items << @cart.items has to be @order.LineItems << @cart.items otherwise, ruby will report an unknown method "line_items" for the order object.--Volker - Reported in: P6.0 (24-Oct-06) Paper page: 103 103 describes nirvana as a state of being, but it is really a state of not-being.--William Henderson - Reported in: P4.0 (26-Jun-06) PDF page: 103 Not filling the 4 fields is will highlight them correctly, but there is no message as shown on the screen copy.--koguma - Reported in: P3.0 (21-Dec-05) Paper page: 105 When I write code as in file 36: options = [["select a payment option", ""]] + Order::PAYMENT_TYPES select( "order", "pay_type", options) I see an exception thrown by select. It expects the options parameter to respond to the "stringify_keys" method -- which array does not respond to. When I use a Hash instead of an array it works -- but unfortunately, I lose control over the ordering of the items by doing so. --Bill Burcham - Reported in: P4.0 (28-Feb-06) Paper page: 105 The file name checkout.rhtml is hyphenated to check-out.rhtm in paragraph 3. This is the only reference to the file name on the page where the code is set out. If you use the hyphenated form, errors occur later on (page 108). checkout.rhtml is the correct name.--Rob Nichols - Reported in: P4.0 (28-Feb-06) Paper page: 108 The code in checkout.rhtml that displays the error message is given as: <%= error_messages_for(:order) %> This works fine if there is an error present. However, when initially connecting to the page I get the error: ------------------------------------------------- NoMethodError in Store#check-out Showing app/views/store/check-out.rhtml where line #2 raised: You have a nil object when you didn't expect it! You might have expected an instance of ActiveRecord::Base. The error occured while evaluating nil.errors Extracted source (around line #2): 1: <% @page_title = "Checkout" -%> 2: <%= error_messages_for(:order) %> 3: <%= start_form_tag(:action => "save_order") %> 4: <table> 5: <tr> RAILS_ROOT: c:/web/depot/public/../config/.. ------------------------------------------------- It seems that on initial access a nil object is passed to error_messages_for. However, if I change the line to: <%= error_messages_for(:order) if @order %> so that the presence of an object "order" is checked before passing anything to error_messages_for, the page works on opening from new and still displays the errors correctly.--Rob Nichols - Reported in: P4.0 (28-Feb-06) Paper page: 108 The problem with "error_message_for(:order)" occurs if you have named the page check-out.rhtml as given on page 105. If you correct the page name to checkout.rhtml, the page works correctly.--Rob Nichols - Reported in: P4.0 (30-Dec-05) Paper page: 109 Adding the scaffold stylesheet to the store layout seems to disable any formatting from the depot stylesheet. It doesn't seem to matter which I list first--scaffold styling always overrides (even though in html source the order does change). I've tried both firefox and ie. Interestingly, I looked in the download files and neither the chapter 9 nor the chapter 10 versions of the store layout files link to the scaffold stylesheet. - Reported in: P4.0 (13-Apr-06) Paper page: 109 #2855: "scaffold" needs to be included as an argument to the stylesheet_link_tag call. old <%= stylesheet_link_tag "depot", :media => "all" %> new <%= stylesheet_link_tag "scaffold", "depot", :media => "all" %> Looking back it seems easy but 30 minutes of confusion for me! --glenn - Reported in: P2.0 (13-Mar-06) Paper page: 109 There is an issue with the cascading style sheets being used here. The problem is that both the scaffold and depot stylesheets are combining to produce strange text size and the scaffold stylesheet is overriding the depot stylesheet's link and link-hover code. (it seems to be thse use of pt, px, ex, em text sizes... setting sizes then uses percentages of those newly set sizes) I believe this is an issue of IE and FireFox dealing with CSS differnetly than whatever browser/version the author wrote these examples with. We need a fix for this... almost everyone on the internet uses IE or FireFox.--Brian - Reported in: P2.0 (16-Feb-06) Paper page: 115 "and a slightly different interaction style to the one we've been using so far." -> "... interaction style than the one we've been using so far."--Justin Johnson - Reported in: P6.0 (01-Feb-07) PDF page: 116 This may be a simple erratum. In the 7th row of the code, where in the function of ship, the "#{count_text} marked as shipped" should be "has shipped". --Ian - Reported in: P4.0 (16-Jan-06) PDF page: 116 The code given for the ship method should not include the check (if count > 0) if you want to be able to show the intended message "No orders marked as shipped" flash message when no items are checked off before submitting the form. --Alan M - Reported in: P3.1 (19-Mar-06) Paper page: 119 The admin.css file is not included in the code download, so we need to type this in manually :(--Nick Coyne - Reported in: P4.0 (13-Apr-06) Paper page: 119 "depot" needed to be included as an argument to the stylesheet_link_tag call. old <%= stylesheet_link_tag "depot", :media => "all" %> new <%= stylesheet_link_tag "scaffold", "depot", :media => "all" %> Looking back it seems easy but 30 minutes of confusion for me!--glenn - Reported in: P4.0 (21-Feb-06) PDF page: 120 The code snippet given for login_controller.rb is missing an "end" statement at the end of it to close : class LoginController < ApplicationController --BrianWarren - Reported in: P3.0 (14-Oct-06) Paper page: 122 The ship method only calls the pluralize method if the count is greater than 1: def ship count = 0 if things_to_ship = params[:to_be_shipped] count = do_shipping(things_to_ship) if count > 0 count_text = pluralize(count, "order") flash.now[:notice] = "#{count_text} marked as shipped" end end @pending_orders = Order.pending_shipping end Though the pluralize method has functionality to handle a zero checked for shipping submission: def pluralize(count, noun) case count when 0: "No #{noun.pluralize}" # -- THIS ISN'T BEING UTILIZED when 1: "One #{noun}" else "#{count} #{noun.pluralize}" end end I suggest removing the code that checks to see if the count is greater than one and instead always call the pluralize method: def ship count = 0 if things_to_ship = params[:to_be_shipped] count = do_shipping(things_to_ship) # REMOVED -- if count > 0 count_text = pluralize(count, "order") flash.now[:notice] = "#{count_text} marked as shipped" # REMOVED -- end end @pending_orders = Order.pending_shipping end Thanks, Sean sean@ttcrider.ca--Sean Lerner - Reported in: P4.0 (28-Feb-06) PDF page: 122 The Login controller's "Add User" picture is incorrect. It has the stylesheet applied and looks much fancier.--Miles K. Forrest - Reported in: P4.0 (22-Apr-06) Paper page: 122 in File 48 (listed on page 122) -- the Pluralize method returns an error of 'undefined method' for 'pluralize' - Reported in: B1.0 (04-Jun-05) PDF page: 126 "Have a look at the source of the controller on page 478 and of the view on page 486." The link to page 478 goes to page 475, and the link to page 486 goes to page 483.--Nathan Wright - Reported in: P4.0 (28-Feb-06) PDF page: 126 Link to page 490 actually goes to 487, and 498 goes to 495--Miles K. Forrest - Reported in: P6.0 (08-Oct-06) Paper page: 130 Whenever I try to add the initial user I am just forwarded to login/login. Is there another way just to add 1 user?--Sean - Reported in: P3.0 (14-Feb-06) Paper page: 134 The delete_user action uses a symbol reference for the redirection: redirect_to(:action => :list_users) While this obviously works it is a little confusing when in most similar cases you reference by a string: redirect_to(:action => "index") It seems as the references are interchangable in this context, but when writing test they are not - the same type must be used in the assert_redirected_to clause as in the controller. It would be great to have a notice about the possible different coding styles, and the implications for the tests. Maybe you could elaborate on wheter consistency is desirable as well. (Or you could just change the reference, but that wouldn't be as interesting.) --Daniel - Reported in: P4.0 (31-Jan-06) PDF page: 134 It seems that the generator for unit tests no longer generates the setup method (Rails 1.0.0) so the code in the PDF/Book doesn't match watch is actually produced.--Dave Myron - Reported in: P4.0 (07-Mar-06) PDF page: 138 the "assert_equal 29.95, @product.price" fails for me, even though product.price is 29.95. The only way to make this assert to pass it was changing the line to "assert_equal 29.95.to_s, @product.price.to_s" - Reported in: P4.0 (22-Feb-06) PDF page: 139 You might want to make mention of looking at the code for this test section. You tend to be very brief in your explanation and tend to leave the reader a little lost unless they aggressively hunt down the details of the code, etc. For instance, setup() is never mentioned until *after* you say the code works. Likewise, as another user noted, the details of categories.yml aren't mentioned and just seem to confuse the matter as you don't appear to use it in your reference code. You may want to omit the latter bit, or make it a footnote.--James - Reported in: P4.0 (17-Mar-06) PDF page: 140 Following the book, the create.sql creates MyISAM tables in MySQL by default. To change these later, use: ALTER TABLE depot_test.products ENGINE=InnoDB; or you can add ENGINE=InnoDB just before the semicolons in the create.sql CREATE TABLE statements.--Rufus - Reported in: P4.0 (21-Mar-06) PDF page: 141 Maybe it helps someone: here is the correct code for test_read_with_fixture_variable: def test_read_with_fixture_variable assert_kind_of Product, @product assert_equal products(:version_control_book).id, @product.id assert_equal products(:version_control_book).title, @product.title assert_equal products(:version_control_book).description, @product.description assert_equal products(:version_control_book).image_url, @product.image_url assert_equal products(:version_control_book).price, @product.price assert_equal products(:version_control_book).date_available, @product.date_available end - Reported in: P4.0 (11-Apr-06) PDF page: 141 If you are getting the error about @products being nil then set use_instantiated_fixtures = true in test/test_helper.rb.--Luca Spiller - Reported in: P1.0 (11-Jan-06) PDF page: 141 Paper page: 68 As a side bar it would have been nice to have been warned about strftime and how our database with empty or default 0000-00-00 00:00:00 values for date_available would cause the application to fail. I'm new to ROR and Ruby so, sorry if this is a known / given. --Hezekia McMurray - Reported in: P4.0 (14-Mar-06) PDF page: 141 New to RoR, so this might be the same as the erratum beginning "With the new testing rules, a good chunk ". In the test_read_with_hash method, the line vc_book = @products["version_control_book"] fails because @products is nil, i.e. does not seem to have been filled in. Similarly, in the test_read_with_fixture_variable method, the line assert_equal @version_control_book.id, @product.id fails because @version_control_book is nil.--David Hislop - Reported in: P3.1 (24-Dec-05) PDF page: 142 Figure 12.1 suggests the members of the @products hash are Product objects coming from the database, as @version_control_book is. This is misleading because the @products hash contains Fixtures. in particular with the former @product.data_available_before_type_cast is needed in the equals assertion, whereas in the latter @product.data_available works.--Xavier Noria - Reported in: P3.1 (15-Dec-05) PDF page: 143 If you make a mistake in the yaml for the entry @future_proof_book, you won't be told if it doesn't exist and it will silently pass as it resolves to nil. I added: unavailable_item = @future_proof_book assert_not_nil unavailable_item assert !items.include?(unavailable_item) to catch that possibility.--Yan-Fa Li - Reported in: P2.0 (16-Feb-06) Paper page: 144 "... then paying his valid credit card..." -> "... then paying with his valid credit card ..."--Justin Johnson - Reported in: P4.0 (31-Jan-06) PDF page: 146 test_helper.rb looks (mostly) nothing like the example in the PDF. It now looks like: ENV["RAILS_ENV"] = "test" require File.expand_path(File.dirname(__FILE__) + "/../config/environment") require 'test_help' class Test::Unit: = true # Add more helper methods to be used by all tests here... end --Dave Myron - Reported in: P3.0 (27-Dec-05) Paper page: 147 test_helper.rb must be updated in order for the test_destroy method to be used if the MySQL database is using MyISAM tables (the default for CocoaMySQL). One must set "self.use_transactional_fixtures = false"--Parker McGee - Reported in: P3.0 (28-Dec-05) Paper page: 148 Rails has recently, in 1.0, updated the way it handles fixtures. If you're having problems, see Mike Clark's weblog here: McGee - Reported in: P3.0 (03-Feb-06) Paper page: 148 The @products and @version_control_book instance variables are not available unless you set the 'use_instantiated_fixtures' attribute to true. This can be set in the TestCase class Rails creates for you.--Cathal - Reported in: P4.0 (01-May-06) Paper page: 148 when running test_read_with_hash omitting the before_type_cast suffix to the date assertion seems to allow the test to pass - including it causes a mismatch between the hashed fcture date and that called from test.database --laura herald - Reported in: P1.0 (27-Sep-05) Paper page: 151 Maybe I'm not quite understanding what Mike is saying (in the context of using a dynamic fixture to 'future-proof' the test of salable_items). He says "Perhaps we should refactor the salable_items() method to take a date as a parameter. That way we could unit test the salable_items() method simply by passing in a future date to the method...". It would seem that instead of passing in a 'future' date. You'd want to send in a known, fixed, date that would be prior to the one in the fixture for the future-proof book. I'm thinking it as salable_items AS OF a supplied date, and typically it would be today's date in production, but could be fixed for tests.--Richard Jensen - Reported in: P3.0 (19-Apr-07) Paper page: 152 in your tests, you are testing the equality of ambiguous numbers (floats). The prices you chose for the fixtures just _happen_ to work, but other prices will cause this error: 1) Failure: test_add_unique_products(CartTest) [test/unit/cart_test.rb:15]: <#<BigDecimal:b74e84fc,'0.5988E2',8(16)>> expected but was <59.88>. use (for example) assert_equal sprintf("%0.2f", (@version_control_book.price + @automation_book.price) ), sprintf("%0.2f", @cart.total_price)--Andrew Yates - Reported in: P4.0 (07-Feb-06) PDF page: 156 - 159 The file examples for store_controller.rb thoughout this section contain arguments to the fixture() method that look like this. fixtures :products, :orders. Having both these fixtures referenced causes ActiveRecord::StatementInvalid: Mysql::Error: type errors to be raised when the test is run. This appears to be the result of the fact that orders.yml at this point has not been modified, and contains the default generated values that Rails provides. The errors specifically mention that the orders table does not specify default values that the ActiveRecord tries to insert into this table using such assign statements as assert_equal 1, assigns(:items).size. The addition of the necessary fixture information to oreders.yml is discussed at the end of this section, and unless I'm mistaken, has not been suggested before this. Perhaps this will help others to trace this problem without wasting much time.--m gentzel - Reported in: P4.0 (11-Apr-06) Paper page: 156 Book Version 2005-12-20. Running "ruby test/functional/login_controller_test.rb" produces error: "method 'before_destroy' for LoginController:Class (NoMethodError)". I comment out the "before_destroy dont_destroy_dave" line from login_controller.rb and I get the result specified in the book (the "302" error). Being new at ruby/rails (learning directly from this book) not sure how to fix this... (running 1.8.2 on XPPro)--Martin Crundall - Reported in: P3.0 (25-Feb-06) Paper page: 159 This may be a Ruby/Rails version problem (Ruby 1.8.2, Rails 1.0.0). The session array is problematic. I see examples with it as a global variable (your book) and as a session variable (e.g., Four Days on Rails). Confusing, but not your fault. However, on my version(s), session is not a global variable in the functional test code, it is a field of @request. This makes sense to me and my code runs.--Jeffrey L. Taylor - Reported in: P4.0 (23-Feb-07) PDF page: 166 The listing of application.rb is inaccurate. It should include the following lines just after the class declaration: # Pick a unique cookie name to distinguish our session data from others' session :session_key => '_depot_session_id' --Tom Pollard - Reported in: P4.0 (15-Mar-06) PDF page: 169 File 121 online () doesn't quite match the code on the page, nor the code from the zip file I downloaded. I didn't diff it, but at least the elapsedSeconds test is different (8.0 in the book and zip, 3.0 in the online file, probably as it once was in the book?)--David Hislop - Reported in: P4.0 (15-Mar-06) PDF page: 169 When running the performance test it fails with a MySQL foreign key constraint violation. Examining the test log file, it's because the first save_order after the add_to_cart generated by the code "get :add_to_cart, :id => products(:version_control_book).id" created a single line item associated with the first order. Knowing at this stage of the book how to fix situations like this by ordering the deletes after a test would be nice!--David Hislop - Reported in: P4.0 (21-Mar-06) PDF page: 170 The use of transactional fixtures is already standard in my version of Rails (set in test_helper.rb).--Juergen - Reported in: P4.0 (14-Jul-06) PDF page: 170 h - Reported in: P3.0 (17-Feb-06) Paper page: 176 Running the code listed for File 121 on Page 176 of the book (12.7 Performance Testing) produced an integrity constraint violation error from MySQL caused by the Order.delete_all statement, which attempts to deletes orders which are tied to LineItems. Modifying the fk_items_order constraint (create.sql, file 37/106, page 102/502) to include an ON DELETE CASCADE clause resolves the problem, by removing the line item record when deleting the associated order. ALTER TABLE line_items DROP FOREIGN KEY fk_items_order; ALTER TABLE line_items ADD CONSTRAINT fk_items_order FOREIGN KEY (order_id) REFERENCES orders(id) ON DELETE CASCADE I did use phpMyAdmin to create the tables and constraints, but going over it, I'm not sure why it would have worked for you guys, unless MySQL prior to 4.12 had default different behavior.--Nathan Youngman - Reported in: P3.1 (10-Jan-06) PDF page: 183 Grouping controllers in modules: I have grouped controllers as described in this section using the command format: myapp> ruby script/generate controller Admin::Book action1 action2 ... FIles and directories were generated as described, however, a request formatted according to your example: results in an error "no action corresponds to book".--sharon - Reported in: P4.0 (03-Jul-06) PDF page: 193 Logging In Now that we have a user in the test database, let's see if we can log in as taht user. If we were using a browser.... ---- taht should be that (This is from the beta excerpt of the testing chapter, in case you've already fixed this)--Orien Vandenbergh - Reported in: P4.0 (27-Apr-06) PDF page: 207 Order.find_on_page(page_num, page_size) the above function defined should probably be more like, def Order.find_on_page(page_num,page_size) find(:all, :order => 'id', :limit => page_size, :offset => (page_num-1)*page_size) end note: (page_num-1)*page_size is the change--Mike Nelson - Reported in: P4.0 (19-Jan-06) PDF page: 209 To be a little more consistent, I think the line order = Order.find_all_by_email(params['email']) should be "orders = .... " to match the other find_all examples. - Reported in: P3.1 (23-Dec-05) PDF page: 212 <blockquote><code>result = Product.update_all("price = 1.1*price", "title like '%Java%'") </code></blockquote> It would have been useful if you showed the usage of like statements when you were explaining find() too. It isn't too hard to muck around and figure out, but would be a lot clearer if explicitly shown.--UltraBob - Reported in: P4.0 (17-Jan-06) PDF page: 213 Paper page: 222 In the bit about save vs. save! when it comes to callbacks in there too it doesn't work exactly like that either :-( Went digging and found where they talk about it some more. --Chris Nolan.ca - Reported in: P4.0 (07-Apr-06) PDF page: 221 With regard to assigning a new object to a has_one relationship, the text suggests the following code: invoice = Invoice.new # fill in the invoice invoice.save! an_order.invoice = invoice When the invoice row is saved, it does not yet have a foreign key value for the order table--wouldn't this cause the SQL INSERT to be followed by an UPDATE to add the foreign key? In addition to being inefficient, what if the update fails and no error is generated? It would seem you are in the same boat so you are no better off (aside from the fact that an update is generally less likely to fail than an insert). Would a transaction help to force an error to be generated? Also, if the invoice table does not allow a null value in order_id or has a foreign key constraint, then won't the save! method always fail? It would seem to be worth mentioning that the database table needs to be set up to allow null order_id values for this code to work. --Michael Johnson - Reported in: P1.0 (25-Feb-06) Paper page: 226 "It's worth noting that it isn't the foreign keys that set up the relationships. These are just hints to the database that it should check that the values in the columns reference known keys in the target tables. The DBMS is free to ignore these constraints (and some versions of MySQL do)." The DBMS is free to ignore these constraints? I find this statement very misleading. If MySQL ignores foreign key constraints, that is a major BUG in MySQL, nothing more. Why would a database let you define constraints and then ignore them? This is the kind of statement that makes me lose confidence in the rest of the book (which otherwise seems quite good). Please update with a more accurate description of what foreign key constraints do. Perhaps a description of the buggy behavior of MySQL could be placed in a side bar. --Stephen Hutton - Reported in: P4.0 (01-May-06) PDF page: 226 6th paragraph -- "This is useful in cases where simply adding to the where clause using the :condition option isn--Rob Leslie - Reported in: P3.0 (27-Mar-06) Paper page: 228 In the second sentence: "You indicate these relationships..." the word relationships is misspelled ('relatonships'). - Reported in: P4.0 (02-Jan-06) PDF page: 232 Regarding class User < ActiveRecord::Base has_and_belongs_to_many :articles def read_article(article) articles.push_with_attributes(article, :read_at => Time.now) end # ... end I am missing information on how to update-or-add articles to the list. That is, how would one best go about adding the article if it didn't exist, or just update the Time.now bit if it existed already? Haven't been able to find the answer. More generally, at least the mention of find_on_create (which isn't applicable here, I don't think?) would be a nice addition (though perhaps not on this particular page). - Reported in: P1.0 (10-Jan-06) PDF page: 236 The section on counters states: "How many lines items does this order have?" but then the counter is created with the product, so instead it answers "How many lines items does the product have", which doesn't make any sense. Unless I'm completely missing something, of course...--Elan Feingold - Reported in: P3.0 (21-Dec-05) Paper page: 250 I would really value an expansion on how to induce #error_message_for to format validation errors for objects that have been rolled back (by the transaction). The statement that "there's no easy way..." is tantalizing.--Bill Burcham - Reported in: P4.0 (08-Jan-06) PDF page: 261 validates_format_of: syntax coloring error in example code; "in" should be blue instead of red--Benoit Gagnon - Reported in: P4.0 (02-Feb-06) PDF page: 261 I couldn't work out really quickly how to make a test case work to see if this is true or not, but it seems like the default message for validates_exclusion_of should be “is included in the list.” instead of “is NOT included in the list.” It doesn't make sense to me that the default message for validates_exclusion_of and validates_inclusion_of should be the same.--UltraBob - Reported in: P4.0 (02-Jan-06) PDF page: 266 Not certain, but shouldn't the regexp in def normalize_credit_card_number self.cc_number.gsub!(/-\w/, '') end contain \s rather than \w? Assuming we want to strip whitespace rather than word characters. - Reported in: P4.0 (02-Jan-06) PDF page: 267 If there is no difference between created_on and created_at, nor between updated_on and updated_at, perhaps this should be stated (even more) explicitly to avoid confusion. - Reported in: P4.0 (20-Apr-06) PDF page: 269 The Encrypter class reads: def initialize(attrs_to_manage) when it should have a splat: def initialize(*attrs_to_manage) in order to handle multiple attributes.--Ashley Taylor - Reported in: P4.0 (15-Jul-06) PDF page: 273 result.each do |line_item| puts "Line item #{line_item.id}: Should fail, as result = LineItem.find_by_sql("select quantity, quantity*unit_price as total_price " + " from line_items") Didn't fetch the id; as you point out on page 276 ("The Case of the Missing ID")--Daniel - Reported in: P4.0 (22-Feb-06) PDF page: 279 At the beginning of the ActiveController section(16.1) the process of the loading of an ActionController is described. There it says that the corresponding helper file is merged into the Controller (point 3). This is wrong, the helper is obviously only avaible in the View. - Reported in: P3.0 (27-Dec-05) PDF page: 301 If a section header is the first thing on a page, the link from the TOC will land on the page previous to the start of the section. Also, the page numbering in the PDF is off - the number on the page says 292 but Adobe Reader or Preview (Mac OS X) report the page as 301. You should adjust the physical numbering and logical numbering so they are in sync. I don't know what you are using to prepare the PDF, but you can fix page numbers using the Adobe Acrobat tool.--Joshua Susser - Reported in: P2.0 (29-Jun-06) Paper page: 302 "You can pass additional parameters as a hash to these named route." "route" should be pluralized--Austin - Reported in: P3.0 (17-Jan-06) Paper page: 302 Shouldn't the OrderController class extend ApplicationController or ActionController... not ActiveRecord::Base?--Ben Munat - Reported in: P4.0 (01-Jan-06) PDF page: 306 I'm thinking the link to the 16.8/p323 should actually be to p310 re: Session Expiry and the :session_expiry and not to page caching?--Chris Nolan.ca - Reported in: P6.0 (08-Apr-07) PDF page: 308 You refer to "Session Container Performance in Ruby on Rails" hosted in your site. The overall picture comparison is not available (media.pragprog.com/ror/sessions/img/sessions.png) --Hugo - Reported in: P4.0 (01-Mar-06) PDF page: 319 The first line of verify_premium_user is a return statement. That can't be correct, can it?--Jeff de Vries - Reported in: P4.0 (17-Mar-06) Paper page: 330 Why does verify_premium_user have a "return" statement as the first line in its definition? I believe this is a typo.--Justin Johnson - Reported in: P4.0 (02-May-06) PDF page: 336 In <%= number_with_delimiter(12345678, delimiter = "_") %> I suspect the second argument should be :delimiter => "_".--Rob Leslie - Reported in: P4.0 (24-Mar-06) PDF page: 344 and the corresponding value are the value and the corresponding values are the value--Robin Bhattacharyya - Reported in: P2.0 (14-Apr-06) Paper page: 351 There is only a couple of lines about RSS. Could you please include an example of how this works both on the server side and client side. After reading this I'm not sure how Rails could be used to make an RSS server.--Peter Michaux - Reported in: P3.0 (10-Mar-06) Paper page: 351 "Finally, the magic option :encode=>"javascript" uses client-side Javascript to obscure the generated link, making it harder for spiders to harvest e-mail addresses from your site." Client side Javascript isn't used to obscure the link, but to decode an already obscured link. Spiders can't 'see' the output of Javascript functions (yet?).--bitbutter - Reported in: P3.0 (28-Mar-06) Paper page: 352 In the RHTML-code at the bottom of the page, inside the for user loop, the <tr> tag isn't closed. Thanks for an awesome book!--Mathias Wittlock - Reported in: P4.0 (26-Feb-06) Paper page: 357 In description of text fields: "common options include :size... and :maxsize..." Shouldn't the last be ":maxlength", in order to have the browser cut off input greater than specified length? --Micah Bly - Reported in: P3.1 (22-Mar-06) Paper page: 359 It would be great if you showed some code to illustrate your point "in reality the find would probably be either in the controller or in a helper module".--Nick Coyne - Reported in: P6.0 (19-Oct-06) Paper page: 362 I found it hard to get through the upload process. When precisely the picture= accessor is called? It is called in the model, down in ActiveRecord somewhere. Hard to explain. But a few more setneces about the need of the accessor-method would be nice.--3y - Reported in: P4.0 (02-Mar-06) Paper page: 363 The file uploading process described here is accurate, complete and helpful but it includes a sentence that makes its use potentially ambiguous. The last paragraph of text on the page says, "The picture is uploaded into an attribute called picture. However, the database table doesn't contain a column of that name." I mis-read this to mean, in essence, "If we'd actually had a column called picture in our database, none of the following code monkeying would have been necessary." That is a wrong interpretation. The model magic is necessary whether or not the file is being uploaded to a database column called picture.--Dan Shafer - Reported in: P3.0 (18-Jan-06) Paper page: 369 "If the current request is being handled by a controller called store, Rails will by default look for a layout called store_layout (with the usual .rhtml or .rxml extension) in the app/views/layouts directory" This does not work for me. I have to add a directive like this: layout "store_layout" in the store_controller.rb file to get the layout to work. Am I doing something wrong, or is the default layout name different from what the book says?--Richard Smith - Reported in: P1.0 (10-Jan-06) Paper page: 375 render_component passing the controller as a symbol generates a camelize error with rails 1.0. Various irc members also mentioned that it is a big no-no and should be passed as render_component :controller => 'name, :action => 'actionname'--John Athayde - Reported in: P4.0 (20-Mar-06) Paper page: 377 "It's in the file get_links in the component's link subdirectory." -> "It's in the file get_links.rhtml in the component's link subdirectory."--Justin Johnson - Reported in: P3.0 (05-Dec-05) PDF page: 380 Note page# is acrobat reader page# - number in header is 380 The Observer code in this section does not work. To cut a long story short the parameters get mangled when they are retrieved using @phrase = request.raw_post || request.query_string so that if you type in an r a r&_ gets sent. Instead you need to 1) Use @phrase = params[:search_text] 2) Use a :with parameter with observe_field e.g. :with => "'search_text=' + escape(value)" There is stuff on the wiki about this. It would also be good if this section explained about the CTAGS code being generated by observe_field Rails 0.14.3 Ruby 1.8.2 Windows XP --Andrew Premdas - Reported in: P4.0 (06-Apr-06) PDF page: 380 Unfortunately, this could be user error, but I copied and pasted the code samples directly from your site. When I buit the AJAX guessing game, the initial page that I get looks like yours in the figures on the following page. My initial URL is "" However, when I submit my answer, either a winning or losing answer, the URL changes to "" As a result, the partial update is not performed. Instead, I get a replacement load of just the form. The "Guess What" header is not missing from the page. I am a beginner, so this could be pilot error. I hope that I'm not wasting your time. --Chris Kappler - Reported in: P4.0 (24-Mar-06) PDF page: 386 this would be not be a big deal this would not be a big deal--Robin Bhattacharyya - Reported in: P6.0 (28-Jan-07) PDF page: 387 "... in real life it might remove a remove from a database table" should probably be "... in real life it might remove a row from a database table" - Reported in: P1.0 (12-Dec-05) Paper page: 391 The AJAX example using form_remote_tag() on pages 391-392 doesn't work. I downloaded the code directly from the website and tried it on Windows XP SP2 with Ruby 1.8.2-15 and Rails 0.14.4 running on WEBrick and Apache, using IE6 and Firefox 1.5 to browse, and on OS X Tiger. The code is supposed to update a <div id="update_div> with a new form using AJAX. Instead of updating the "update_div" like it should, it renders the form as an entirely new page by itself. WEBrick output: 127.0.0.1 - - [12/Dec/2005:10:43:28 Eastern Standard Time] "GET /guesswhat HTTP/1.1" 200 851 - -> /guesswhat 127.0.0.1 - - [12/Dec/2005:10:43:28 Eastern Standard Time] "GET /favicon.ico HTTP/1.1" 200 0 - -> /favicon.ico 127.0.0.1 - - [12/Dec/2005:10:43:32 Eastern Standard Time] "POST /guesswhat/guess HTTP/1.1" 200 623 -> /guesswhat/guess It goes from "- -> /guesswhat" to " -> /guesswhat", which leads me to believe that something in Rails changed that is causing a misdirection somehow.--Charlie Squires - Reported in: P4.0 (09-Jan-06) PDF page: 392 If you download File 202 (update_many.rhtml), be aware that this file is actually generating Javascript, not HTML; therefore, the embedded HTML comment causes an error when it is executed as Javascript by the browser. Solution: delete the comment, or use Javascript-style commenting--Dennis Bell - Reported in: P4.0 (12-Feb-06) PDF page: 393 It uses a trvial partial template for each line. "trivial" - Reported in: P4.0 (12-Feb-06) PDF page: 394 "We’ll put these in a <script> section in our page header, but the header is defined this the page template." -- grammatical typo - Reported in: P6.0 (20-Oct-06) Paper page: 395 "@phrase = request.raw_post || request.query_string" does not allow me to test the search by entering an URL such as "/controller/search?ruby". query_string is not available to my search, it is emtpy.--3y - Reported in: P4.0 (12-Feb-06) PDF page: 397 "If you want _to_ different actions depending on whether JavaScript is enabled..." --> "two" - Reported in: P4.0 (24-Mar-06) PDF page: 398 Paper page: 410 Under Web V2.1 heading: Need a comma after "(à l Google Suggest)".--Carol Deihl - Reported in: P1.0 (12-Jan-06) Paper page: 404 Fading out the random three elements does not work with the File 202 (update_many.rhtml) as downloaded. Nothing happens until you remove the comments that have been inserted at the top of that file. It seems the comments affect what is returned by the "eval(request.responseText)" call. I am using Rails 1.0.0 and Ruby 1.8.2 --Mike Berrow - Reported in: P3.0 (01-Nov-06) Paper page: 415 OK this might just be my misunderstanding... but the @body hash is described as the vehicle for passing values to the template, however the example templates do not use the @body hash, but they use the order object directly. Is the @body hash superfluous in the examples?--Les Nightingill - Reported in: P2.0 (08-Apr-06) Paper page: 415 The section on E-mail templates calling partials is ok if the partial is in the same directory as the template. But what if the partial is in another view subdirectory. And what if that partial calls other partials and helper functions. Things break pretty fast. Since "Rails can't guess the default locations", it seems like a good workaround is to use in the regular controller with a regular template and then send that body to the email model with - Reported in: P2.0 (08-Apr-06) Paper page: 417 The section on Delivering HTML-Format E-mail is too brief. There needs to be a discussion of using a layout for the email and when and where the layout should be applied to the email. Also what about css files? Is it possible to use a css file with a Rails built HTML email? How to do it? Must give the full url not the url relative to the website base url. - Reported in: P4.0 (20-Feb-06) Paper page: 425 The chapter on web services seems like notes. It would be better if there were a few more examples of less trivial situations. I'd like to see an example where some information is sent to a Rails app via web services to create a new instance of a model and then the model is saved. If the validations for the model fail then how can the errors and flash be sent back to the client app. - Reported in: P4.0 (20-Feb-06) Paper page: 436 The two sections on external clients are confusing. Why are the endpoints listed in the XML-RPC section when they also apply to SOAP. I know there is a note about this but it is at the end of the XML-RPC section. Also the section on SOAP says "and use an IDE... to generate the client class files." What?! This leaves me wondering a lot. Do I need to do this? Perhaps for each section you could include the code for simple command line plain Ruby scripts for a SOAP client and a XML-RPC client that work with a previous Rails web service example. Please write this in a tutorial fashion (ie. step by step). - Reported in: P3.0 (25-Oct-06) Paper page: 437 "For delegated and layered dispatching, the information telling us which service object teh invocation should be routed to is embedded in the request. For delegated dispatching we rely on the controller action name to determine which service it should go to." Shouldn't the first "delegated" be "direct"? --Alex Pounds <alex@alexpounds.com> - Reported in: P4.0 (29-Jan-06) PDF page: 438 The "File Uploads" bookmark incorrectly points to the page previous to the actul "File Uploads" section.--Robert Stuttaford - Reported in: P2.0 (23-Feb-06) Paper page: 440 How do you quote SQL in the :order parameter? I tried :order => ["? ASC", order_by], but the ?s weren't expanded.--Justin Johnson - Reported in: P4.0 (06-Mar-06) PDF page: 451 Sorry, my fault I should have you reported this but when I saw it the first time. But I wasn't used to report bugs to the publisher directly. ----snip def local_request? ["127.0.0.1", "88.88.888.101", "77.77.777.102"].include?(request.remo ----snap You should subsitute 777 and 888 with a value <= 255. --Markus Werner - Reported in: P2.0 (10-Mar-06) Paper page: 457 For your next edition: Lighttpd is now available for Windows and configuration details for that platform would be very useful. I have tried to use the config provided, but cannot get Lighttpd to start. It fails to "find" the dispatch.fcgi file... I had a hunch that the problem may be d/t the fact that the shebang line in dispatch.fcgi isn't used, so lighttpd doesn't know that it must pass the file to the ruby interpreter. However, changing the file extension to dispatch.fcgi.rb did not solve the problem so i remain in the dark. I know this isn't the appropriate forum for solving this problem for me, but perhaps you could off the top of your head give me some direction. Would be much appreciated. Oh, and the book is great!!--Rainer Thiel - Reported in: P1.0 (08-Jun-06) Paper page: 464 Found a solution to this on a message board (). Seems the solution is to use ::ActionController::UnknownAction instead of ActionController::UnknownAction--ChrisT - Reported in: P3.0 (13-Jan-06) Paper page: 464 I don't know if this is a problem with Rails 1.0, Ruby 1.8.4, or something else entirely, but I could not get the rescue_action_in_public to load properly until I changed it. Ruby kept complaining that UnknownAction was an uninitialized constant. At first I just removed it, but that was unsatisfactory. In the end I had to change the case to use exception.class.name rather than exception, and list string constants instead of the actual class literals. Doing that made it work. I'm completely baffled about why this may be, but I'm a Ruby newbie as well as a Rails newcomer.--Jim Elliott - Reported in: P4.0 (07-Mar-06) Paper page: 464 I have the same error generated from page 464 of the P3.0 September printing - as Jim Elliot (#2298) suggests 'unitialized constant' "ActionUnknown" but I am obviously less astute than he as I haven't been able to figure out the adjustments that I must make to have it functional. Thus far, the RonR mail list has been unable to make it meaningful to me and so I am stuck with methodology suggested in the book that I can't make work. Craig--Craig White - Reported in: P4.0 (25-Apr-06) Paper page: 474 "Creating the log files slows down the rendering of the action quite a lot on most machines" Really? If so, perhaps you could talk about how to turn off / reduce logging. OTOH, if you're actually referring to the tail command slowing things down, this sentence could be a lot clearer. jon@steelskies.com--Jonathan del Strother - Reported in: P1.0 (20-Jul-07) PDF page: 477 animals = %w( ant bee cat dog elk ) should be... animals = %w{ ant bee cat dog elk }--tim - Reported in: P4.0 (13-Apr-06) PDF page: 477 "put the start of the block at the end of the source line containing the method call." This is ambiguous; what if I have a method call, a whole bunch of other stuff on the "source line", then a block? Could be written "put the start of the block after the method call, at the end of a line."--Mark Ostroth - Reported in: P2.0 (03-Dec-06) Paper page: 487 "[...], sharing the modules functionality without using inheritance." should be: "[...], sharing the module's functionality without using inheritance." --Gabor Cselle - Reported in: P3.0 (23-Mar-06) Paper page: 515 The following line drove me nuts... <div class="seperator"> </div> ... as it kept on producing a questionmark(?) in the view. I thus removed the entity reference and used a normal space and still it produced the questionmark. I finally revised the line and used the following to get the expected result: <div class="separator"><%= sprintf("\ ") %></div>--Rheinard Korf - Reported in: P4.0 (10-Apr-06) PDF page: 517 Could the Index get its own entry in the outline view (Preview's 'drawer')? It always takes a couple brain cycles to remember to go to Appendices->Resources->Bibliography and then to the next page.--Rob Biedenharn - Reported in: P4.0 (07-Mar-06) Paper page: 527 Page 527 (Appendix C.3 Cross-Reference of Code Samples): Text states: "If a source sample has a marginal note containing a number, you can look that number up in the list that follows to determine the file containing that code." I downloaded the code for windows but some of it seems to be missing. For instance, the marginal note on page 87 mentions source sample File 82. The cross reference page 528 shows: 82 depot9/app/views/store/display_cart.rhtml The problem is that there is no depot9 directory included in the code samples. There is a depot1, depot10, depot11, and depot12; but the display_cart.rhtml files in those directories do not match the source file as it is printed in the book(p.87 in the case of File 82 display_cart.rhtml). Note the display_cart.rhtml in the depot10 directory is similiar to File 82, but not the same. This makes it a little frustrating to debug for typos when following along with the book. Is there some way I can get the files as cross referenced from the book, rather than files that are merely just similar to the file I am looking for?--John - Reported in: P2.0 (16-Dec-07) PDF page: 910faauyff Paper page: 8mx2hvufgn 5w69cgk6 xo5wz0aey wfj22eyi--7kkmix6ro3 Stuff To Be Considered in the Next Edition - Reported in: P1.0 (09-Dec-05) Paper page: ii Please put a list of Rails reserved words inside the front/back cover. Maybe with a footnote to never use them as column headings in your tables. I spent the last hour chasing down the error caused by using "method". Anyway, thank you for a much better than average computer book.--John M Miller - Reported in: P2.0 (03-Nov-05) Paper page: 1 For the next version a few plug-in examples would be nice - Reported in: P1.0 (27-Jul-05) PDF page: 6 Under "Rails Versions", both "Rails V1.0" and "Rails 1.0" are referred to. I think the "V" is unnecessary, so standardising on "Rails 1.0" would be best. (Dave says: indeed it would...)--Tim Bell - Reported in: P3.1 (09-Dec-05) PDF page: 20 Linux Redhat FC4, doesn't install what you expect when you install ruby via YUM. To get a more complete ruby install type: <code>yum -y install ruby rdoc irb</code> otherwise you get strange errors when you install rails while it installs the rdocs.--Yan-Fa Li - Reported in: P4.0 (16-Feb-06) PDF page: 40 Re: Hashes and Parameter Lists I'm new to Ruby, but have read page 474 on Arrays & Hashes - it made sense, apart form the last part on Hashes and Parameter Lists, which is a bit unclear. Likewise, on page 40: "The :action part is a Ruby symbol. You can think of the colon as meaning ... you can use this keyword parameter facility to give those parameters values." ?? I'd like to understand why hashes are being used for passing parameters. Enjoying the book - thanks.--Anthony Denahy - Reported in: P4.0 (26-Dec-05) PDF page: 51 Having the table definition using the equivalent Active Record Migrations definition would be nice. In this example I've substituted float for decimal in the definition since migrations does not have a corresponding type. class CreateProductsTable < ActiveRecord::Migration def self.up create_table :products |table| table.column :title, :string, :limit => 100 table.column :description, :text table.column :image_url, :string, :limit => 200 table.column :price, :decimal end def self.down drop_table :products end end --Alan M - Reported in: P3.1 (09-Dec-05) PDF page: 52 For P53 to work on Fedora Linux FC4, it appears one has to install the mysql gem. Unfortunately that won't build unless you have previously installed ruby-devel and mysql-devel. And even then it won't build without passing extra parameters: <code> yum -y install ruby-devel mysql-devel sudo gem install mysql -- --with-mysql-lib=/usr/lib/mysql </code> After you follow these steps, the basic admin app works.--Yan-Fa Li - Reported in: P3.0 (05-Dec-05) Paper page: 53 Running OS X 10.4.3 and following all the directions in this chapter, I get to the place where I'm generating the scaffold and nothing works correctly. Others here have reported finding their proper directory at another location but whether I type http;//localhost:3000/admin or, I get the same error: Routing Error Recognition failed for "/products" (or "/admin") I noticed that one response here was to get the latest Rails. I'm running 1.8.2 (2004-12-25) [powerpc-darwin8.0], which as far as I can tell is the latest released version of rails unless I'm just not understanding something. At least, I got this version by following the directions in the Pragmatic book, which to this point is the only source of information on which I'm relying. I know my MySQL database is running and can be connected to. This appears sto be a problem of the generate script but I'm darned if I can figure out what should be going on here. I want to fall in love with Rails, but so far, she's jilting me! --Dan Shafer - Reported in: P3.1 (13-Dec-05) PDF page: 53 On OSX Tiger (10.4.3) - Ruby 1.8.2 - Rails 1.0.0 - MySql 5.0 I was blocked using this command: ruby script/generate scaffold Product Admin blocks on line: create test/fixtures/products.yml To be fixed, user should install mysql gem using following command: sudo gem install mysql -- --with-mysql-dir=/usr/local/mysql--Alx - Reported in: P1.0 (31-Jul-05) PDF page: 53 Regarding the /tmp/mysql.sock problem mentioned above. I had the same problem running on Fedora Core 4, there was no mysql.sock in /tmp. I overcame the issue by using an IP address (127.0.0.1) in the database.yml file rather than localhost. You also need to alter the Mysql grant lines to use the IP address. This connects via TCP/IP rather than through /tmp/mysql.sock. (Dave says: this must be some kind of mysl configuration thing. I'm not sure I can address it in the book, but the above is certainly good information to know)--Robert McGovern - Reported in: P3.0 (26-Oct-05) PDF page: 53 This is a follow-up to my post of earlier this morning. I don't know whether it's definitive, but I've found a solution that works for me:. I'm glad to have found this, but it took uncomfortably much digging, so I hope you'll add a pointer to this site in the next release of the book, if the problems haven't been fixed (by Apple and anyone else at fault) by then.--Ralph Haygood (rhaygood@duke.edu) - Reported in: P2.0 (16-Sep-05) PDF page: 53 On Mac OS X I was getting the error "Access denied for user --Lyle Vogtmann - Reported in: P3.0 (27-Nov-05) PDF page: 53 This not so much a new erratum as a request that erratum #1073 on page 53 of the PDF file be explained in more detail than it currently appears. This is the "Access denied for user ��@�localhost� (using password: NO)" problem on OSX 10.4. Please explain where the file to be patched resides, how to recompile it - which version of gcc - and anything else that needs to be done. I've been trying to generate a scaffold for three weeks now and researching the issue on the web I've found a lot of other people who are running in circle like me: we could really benefit from a more detailed and comprehensive tutorial on how to get RoR to play nice with MySQL on OSX 10.4+--Stefano Bertolo - Reported in: P2.0 (25-Oct-05) Paper page: 54 Now that MySql 5.0 is "generally available", novices to MySQL may encounter an ERROR 42000: Can't find any matching row in the user table. This is because GRANTs to a non-existant user no longer create that user() Future versions should probably demonstrate how to create the databases using GRANTS for version prior to 5.0.2 and using the CREATE USER statement () for MySQL 5.0.2 and later. (Confirm that 5.0.2 was when GRANT changed to no longer adding users if they did not exist).--David Willis - Reported in: P1.0 (17-Jul-05) PDF page: 62 The suggested image url is relative /images/sk_auto_small.jpg but the validation requires a full url starting with http. (Dave says: I'm going to tidy all that up in the next edition: it was a change made at the suggestion of a reviewer, but in retrospect I regret it) - Reported in: P2.0 (10-Nov-05) Paper page: 64 This is the first mention of validation that I saw in the book. I'm experimenting with a database containing constraints and the question that came to mind immediately was "Does/can Rails validation work with the database's constraints?" I'm not sure an in-depth explanation is appropriate this early in the book, but at least a one-liner similar to "See page whatever for information on the relationship between constraints and validation" --Brian - Reported in: P4.0 (30-Dec-05) PDF page: 67 The hyperlink to click back to page 56 goes to page 54. The reference to material on page 56 is correct, just the location the link moves you to is incorrect.--Alan M - Reported in: P4.0 (11-Feb-06) Paper page: 68 Another problem with web development is keeping HTML formating separated from its content. (The idealization of this concept is the 'semantic web'.) It's easy today to separate content from formatting using CSS. Most modern browsers support CSS1 and CSS2, and the CSS3 standard is around the corner. The use of CSS is an overdue concept that I would expect a book on new technologies such as this one would promote. I am disappointed to see that the list.rhtml example on page 68 shows formatting attributes placed inline with the HTML. The attributes "cellpadding", "cellspacing", "width", "height", etc. have their CSS equivalents and could as easily been placed in the scaffold.css file. Doing so would be an added benefit for the readers of your book and for the future of the web. I hope this gets corrected in future copies. I want to believe that such a timely and otherwise well written book is promoting *all* of the modern standards. Thank you, Jose Hales-Garcia UCLA Department of Statistics --Jose Hales-Garcia - Reported in: P2.0 (24-Dec-05) Paper page: 75 In task A, class names in HTML are in CamelCase, like ListLine, ListTitle (p. 69). In task B, class names in HTML are in lower case, like catalogentry, catalogprice (p. 75) It would be better to stick to one style.--olegf - Reported in: P4.0 (31-Dec-05) PDF page: 76 The equivalent Active Record Migrations definition of line_items table: class CreateLineItemsTable < ActiveRecord::Migration def self.up create_table :line_items do |table| table.column :product_id, :integer, :null => false table.column :quantity, :integer, :default => 0, :null => false table.column :unit_price, :decimal, :null => false end execute "ALTER TABLE line_items ADD CONSTRAINT fk_items_product FOREIGN KEY (product_id) REFERENCES products (id)" end def self.down drop_table :line_items end end--Alan M - Reported in: P4.0 (31-Dec-05) PDF page: 77 Since mysql 4.0, InnoDB has become the default storage engine, and it supports foreign key. This seems to make footnote 1 on this page unnecessary. --Haobo Yu - Reported in: P2.0 (18-Oct-05) Paper page: 96 in the action add_to_cart we have a rescue that states in the logger.error "Attempt to access invalid product #{params[:id]}" ...thats not neccessarily true in my case the error occured somewhere in the find_card so it got me totaly confused. Shouldn't the rescue be limited to "product = Product.find(params[:id])" excluding the next two lines? Something like: def add_to_cart begin product = Product.find(params[:id]) rescue logger.error("Attempt to access invalid product #{params[:id]}") flash[:notice] = 'Invalid product' redirect_to(:action => 'index') return end @cart = find_cart @cart.add_product(product) redirect_to(:action => 'display_cart') end--Nicolas Holzheu - Reported in: P3.0 (02-Jan-06) Paper page: 108 Using <%= error_messages_for( :order ) %> in book while the files contain <%= error_messages_for( "order" ) %>--Jon Smirl - Reported in: P3.0 (14-Dec-05) Paper page: 122 It looks like the pluralize method added to the controller is designed to provide you feedback even if you've checked no items to ship (the when 0 case formats it nicely). However, this will never get displayed, because flash.now is set only if count > 0 up in the ship method. Getting rid of the if clause around the setting of flash.now fixes this, and you get notified if you hit the ship button with nothing checked. I suspect this was the original intent; I certainly like it better that way. Also note that this change doesn't cause a spurious notice to appear when you first reach the shipping page; that's prevented by the outer if statement in which things_to_ship is assigned.--Jim Elliott - Reported in: P2.0 (19-Aug-05) PDF page: 122 Paper page: 129 See the last paragraph before the screenshot. Starts with the text "That--Tom Brice - Reported in: B1.0 (08-Jul-05) PDF page: 126 The sentence that begins "If you are following along, delete your session file..." refers ,I believe, to a technique discussed in footnote 5 on page 84. The footnote does not use the term "session file"; it calls it a "cookie file". The technique is much better specified for Unix than it is for Windows. This technique should be discussed in more detail, either here or on page 84. --Michael Cronin - Reported in: P4.0 (30-Dec-05) PDF page: 134 Not sure if this is a difference between Rails versions, but in the test_truth method of ProductTest, the book reads "assert_kind_of Product, @product" whereas my autogenerated code was "assert_kind_of Product, products(:first)" Thought it should be noted for other readers. --Mike - Reported in: P1.0 (05-Oct-05) Paper page: 138 The code on page 138 of the book is incorrect redirect_to(jumpto) will not work if jumpto contains nested hashes. For example, if jumpto contains this post hash, it will not work. {"commit"=>"ask", "post"=>{"title"=>"asdfsdfs", "body"=>"asdfsdf", "price"=>"345", "tags"=>"#finance ", "email"=>"myemail@gmail.com"}, "action"=>"postQ", "controller"=>"ask"} will result in request_to flattening it down to {"commit"=>"ask", "post"=>"titleasdfsdfsbodyasdfsdfprice345tags#finance emailmyemail@gmail.com", "action"=>"postQ", "controller"=>"ask"}--Baldukai and Defiler - Reported in: P3.1 (20-Dec-05) PDF page: 140 In Rails 1.0 the defaults behaviour for fixtures in tests changed. If you use MySQL without transactions (as you will if you follow the book), the fixtures are not restored after every test method. As a fix you can either use InnoDB tables (that have transactions) or set the config value use_transactional_fixtures=false and use_instantiated_fixtures=true in test/test_helper.rb. Then you have the pre-1.0 behaviour. (Dave says: yes, we'll be redoing this in the next edition) --Felix von Delius - Reported in: P3.0 (28-Nov-05) PDF page: 140 "Here�s the bottom line: even if a test method updates the test database, the database is put back to its default state before the next test method is run. This is important because we don�t want tests to become dependent on the results of previous tests." I am having trouble with this... I moved the test_destroy method above the test_update method, and the test_update method fails because it can't find a product with id=1. The database table doesn't seem to be restored between tests. (This is probably due to a change in the Rails defaults since the book was written. We'll make all this clearer in the next edition) - Reported in: P2.0 (14-Dec-05) Paper page: 146 Further to Mark's suggestion it is probably better to update the test/test_helper.rb file, which is where these values are explicitly set.--Chris Sendall - Reported in: P3.0 (24-Nov-05) PDF page: 149 Paper page: 146 When you start calling the test_delete and test_read_with_hash methods the code breaks. In order for the code to work you have to put the following lines at the top of your unit test: self.use_transactional_fixtures = false self.use_instantiated_fixtures = true (Agreed--this is a change in Rails since the book was written)--Mark Bates - Reported in: P3.1 (18-Dec-05) PDF page: 150 Paper page: 157 I was having problems getting the tests described in chapter 12 to work. Specifically, once i deleted a piece of test data (test_destroy), that piece if data (the first book) wasn't present again for subsequent tests. I believe the problem is that MySQL was defaulting to MyISAM format for my tables. MyISAM doesn't support transaction rollback. It appears the way that the test harness ensures test data is in it's original form for each test is to use a transaction ROLLBACK which is ignored by MySQL for tables using MyISAM format. In order to fix this, I went into my create.sql script and added TYPE = InnoDB to the end of each table create. That causes MySQL to enforce transactions and the unit tests began to work as described in the text. Here is an example of the exact create table syntax: create table products ( id int not null auto_increment, ...other columns... primary key(id) ) TYPE=InnoDB; (Dave says: this is a change to the default Rails behaviour since the book was written.--Steven Chanin - Reported in: P2.0 (20-Nov-05) Paper page: 151 I agree with Richard Jensen's comment, the paragraph beginning "While the use of time is a convenient way of demonstrating a dynamic fixture..." is confused: 1) It would seem that Mike would need to pass in a *past* date (rather than future date) to test his revised salable_items(). 2) While I believe Mike's general statement that unit tests can provide insights on how to better refactor the code, in this specific case, I don't see it. Adding a date parameter to salable_items() is gratuitous as far as the application itself is concerned. Are we really going to ask (from the application) "which items are salable tomorrow (or yesterday)?" If so, then yeah, we need the date parameter. Otherwise, I'd argue that it's not worth increasing the application's complexity/potential sources of error just to make testing it trivially easier. ...But (like Richard) I wonder if I'm missing Mike's point...--Frank Myhr - Reported in: P1.0 (28-Sep-05) Paper page: 173 I run 'rake test_units' as instructed and get an error because the line_item_tests (which we haven't modified) fail. This appears to be due to a foreign key constraint triggered by the line_items fixture. 1) Error: test_truth(LineItemTest): ActiveRecord::StatementInvalid: #23000Cannot add or update a child row: a foreign key constraint fails: INSERT INTO line_items (`id`) VALUES (2) c:/ruby/lib/ruby/gems/1.8/gems/activerecord-1.11.1/lib/active_record/connect ion_adapters/abstract_adapter.rb:462:in `log' (Dave says: we're going to rewrite this chapter to use transactional fixtures and work out these kinds of issues)--Richard Jensen - Reported in: P1.0 (28-Sep-05) Paper page: 176 The performance test doesn't run. 1) Error: test_save_bulk_orders(OrderTest): ActiveRecord::StatementInvalid: #23000Cannot delete or update a parent row: a foreign key constraint fails: DELETE FROM orders If I add "LineItem.delete_all" in the teardown before the Order.delete_all, it seems to work. --Richard Jensen - Reported in: P3.1 (20-Dec-05) PDF page: 185 "See the Active Support RDoc for details." If it has been mentioned anywhere how to do this, I missed it, and so did s search of the PDF for RDoc. Generating documentation for your own app was mentioned, though the lack of detail on that was rather disappointing to me.--UltraBob - Reported in: P3.1 (22-Dec-05) PDF page: 191 In Rails 1.0, the Inflector class correctly handles "sheep", so the example that reads " if you have a class named Sheep, it’ll valiantly try to find a table named sheeps" is no longer true. Try "deer", although that word will get fixed soon, too.--Tim Cartwright - Reported in: P3.1 (20-Dec-05) PDF page: 216 Could you add sub-sections in Database relationship (or in large chapters in general) so it'll be easier to navigate to a specific sub-chapter in the index view (in Apple Viewer for example). Thanks for this one great book.--Alx - Reported in: P2.0 (20-Nov-05) Paper page: 259 Love the book, it's somewhat warn now! It would be great if you could include end to end examples of Localization; example: SQL date in the DB, display, edit UK style date. Secondly, and more important, end to end use of composed_of - again show and update of a composed_of field using rhtml Many thanks--Jonathan - Reported in: P4.0 (17-Jan-06) PDF page: 300 Minor! "Cookies and Sessions" in the PDF's table of contents points to the bottom of page 300. The section actually begins at the top of page 301.--Grant Hollingworth - Reported in: P3.0 (12-Nov-05) PDF page: 328 Discussion of templates leaves out the rules for comments. I went nuts for two days trying to find the syntax error when my html comment was perfectly fine, surrounding the erb delimiters. That doesn't work and it seems like a bug. The ruby manual mentions putting the hash mark after the percent-sign in the beginning and that works. The rails book leaves that out. --Warren Seltzer/Michael Nacht - Reported in: P1.0 (23-Jul-05) PDF page: 352 It would be nice to see a discussion of file uploading that doesn't insert the file into the database in a future version of the book. - Reported in: P1.0 (07-Aug-05) PDF page: 387 Paper page: 399 The code described as "File 194" has "<% 16.times do |i| ..." which I think is trying to build a 4x4 series of squares, but the diagram on the following page (Figure 18.7)have only a 3x3. Should the "16.times" be "9.times"? (Dave says: hmmm.... actually, the code in the book is what produced the window shown. I just sized the browser window to produce the list of 9 squares. That's kinda confusing, I agree, but isn't really an error. I'll fix this in the next edition)--Alan Hynes - Reported in: B2.0 (26-Jun-05) PDF page: 401 Could you guys consider squeezing in some more information about Unicode? It is true that Ruby's support for Unicode is not as mature as, say, Python's, but that doesn't mean that it's not possible to create fully multilingual apps in Rails. After writing a couple blog posts on the subject ([1]), (only to find out later that [3] had all that information and more), it seems clear to me that it *is* possible to write apps that handle multilingual text without too much trouble, even if Ruby can't handle manipulating individual characters too easily. i18n is a pretty important topic these days, and I for one would love to see the cutting edge of Rails extend to that domain... Cheers, Patrick Hall [1] [2] (Dave says: this is definitely important, but it won't make the first printing of the book. I'm going to leave this ticket open and revisit it later)--Patrick Hall - Reported in: P1.0 (18-Nov-05) Paper page: 551 It would be nice if the RecordNotFound exception was included in the index, as the RecordInvalid exception is. The index entry could refer to the "Reading Existing Rows" section on page 212 and also the "To Raise, or Not to Raise?" sidebar on page 219. In the absence of an entry for RecordNotFound, I used the index entry for the find() method, which references page 212 but not page 219.--Stephen Viles - Reported in: P1.0 (18-Nov-05) Paper page: 551 New index entry for RecordNotFound exception could also reference the "Iteration C2: Handling Errors" section starting on page 91.--Stephen Viles
https://pragprog.com/titles/rails1/errata
CC-MAIN-2016-50
refinedweb
16,673
66.94
. In my opinion, Python is one of the best languages you can use to learn (and implement) machine learning techniques for a few reasons: - It's simple: Python is now becoming the language of choice among new programmers thanks to its simple syntax and huge community - It's powerful: Just because something is simple doesn't mean it isn't capable. Python is also one of the most popular languages among data scientists and web programmers. Its community has created libraries to do just about anything you want, including machine learning - Lots of ML libraries: There are tons of machine learning libraries already written for Python. You can choose one of the hundreds of libraries based on your use-case, skill, and need for customization. The last point here is arguably the most important. The algorithms that power machine learning are pretty complex and include a lot of math, so writing them yourself (and getting it right) would be the most difficult task. Lucky for us, there are plenty of smart and dedicated people out there that have done this hard work for us so we can focus on the application at hand. By no means is this an exhaustive list. There is lots of code out there and I'm only posting some of the more relevant or well-known libraries here. Now, on to the list. The Most Popular Libraries I've included a short description of some of the more popular libraries and what they're good for, with a more complete list of notable projects in the next section. TensorFlow This is the newest neural network library on the list. Just having been released in the past few days, TensorFlow is a high-level neural network library that helps you program your network architectures while avoiding the low-level details. The focus is more on allowing you to express your computation as a data flow graph, which is much more suited to solving complex problems. It is mostly written in C++, which includes the Python bindings, so you don't have to worry about sacrificing performance. One of my favorite features is the flexible architecture, which allows you to deploy it to one or more CPUs or GPUs in a desktop, server, or mobile device all with the same API. Not many, if any, libraries can make that claim. It was developed for the Google Brain project and is now used by hundreds of engineers throughout the company, so there's no question whether it's capable of creating interesting solutions. Like any library though, you'll probably have to dedicate some time to learn its API, but the time spent should be well worth it. Within the first few minutes of playing around with the core features I could already tell TensorFlow would allow me to spend more time implementing my network designs and not fighting through the API. If you want to learn more about TensorFlow and neural networks, try taking a course like Deep Learning with TensorFlow, which will not only teach you about TensorFlow, but the many deep learning techniques as well. - Good for: Neural networks - Book: TensorFlow for Deep Learning - Website - Github scikit-learn The scikit-learn library is definitely one of, if not the most, popular ML libraries out there among all languages (at the time of this writing). It has a huge number of features for data mining and data analysis, making it a top choice for researches and developers alike. Its built on top of the popular NumPy, SciPy, and matplotlib libraries, so it'll have a familiar feel to it for the many people that already use these libraries. Although, compared to many of the other libraries listed below, this one is a bit more lower level and tends to act as the foundation for many other ML implementations. Given how powerful this library is, it can be difficult to get started with it unless you have a good resource. One of the more popular resources I've seen is Python for Data Science and Machine Learning Bootcamp, which does a good job explaining how to implement many ML methods in scikit-learn. Theano Theano is a machine learning library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays, which can be a point of frustration for some developers in other libraries. Like scikit-learn, Theano also tightly integrates with NumPy. The transparent use of the GPU makes Theano fast and painless to set up, which is pretty crucial for those just starting out. Although some have described it as more of a research tool than production use, so use it accordingly. One of its best features is great documentation and tons of tutorials. Thanks to the library's popularity you won't have much trouble finding resources to show you how to get your models up and running. - Good for: Neural networks and deep learning - Learn more: Practical Deep Learning in Theano + TensorFlow - Website - Github Pylearn2 Most of Pylearn2's functionality is actually built on top of Theano, so it has a pretty solid base. According to Pylearn2's website: Pylearn2 differs from scikit-learn in that Pylearn2 aims to provide great flexibility and make it possible for a researcher to do almost anything, while scikit-learn aims to work as a “black box” that can produce good results even if the user does not understand the implementation. Keep in mind that Pylearn2 may sometimes wrap other libraries such as scikit-learn when it makes sense to do so, so you're not getting 100% custom-written code here. This is great, however, since most of the bugs have already been worked out. Wrappers like Pylearn2 have a very important place in this list. Pyevolve One of the more exciting and different areas of neural network research is in the space of genetic algorithms. A genetic algorithm is basically just a search heuristic that mimics the process of natural selection. It essentially tests a neural network on some data and gets feedback on the network's performance from a fitness function. Then it iteratively makes small, random changes to the network and proceeds to test it again using the same data. Networks with higher fitness scores win out and are then used as the parent to new generations. Pyevolve provides a great framework to build and execute this kind of algorithm. Although the author has stated that as of v0.6 the framework is also supporting genetic programming, so in the near future the framework will lean more towards being an Evolutionary Computation framework than a just simple GA framework. NuPIC NuPIC is another library that provides to you some different functionality than just your standard ML algorithms. It is based on a theory of the neocortex called Hierarchical Temporal Memory (HTM). HTMs can be viewed as a type of neural network, but some of the theory is a bit different. Fundamentally, HTMs are a hierarchical, time-based memory system that can be trained on various data. It is meant to be a new computational framework that mimics how memory and computation are intertwined within our brains. For a full explanation of the theory and its applications, check out the whitepaper. Pattern This is more of a 'full suite' library as it provides not only some ML algorithms but also tools to help you collect and analyze data. The data mining portion helps you collect data from web services like Google, Twitter, and Wikipedia. It also has a web crawler and HTML DOM parser. The nice thing about including these tools is how easy it makes it to both collect and train on data in the same program. Here is a great example from the documentation that uses a bunch of tweets to train a classifier on whether a tweet is a 'win' or 'fail': from pattern.web import Twitter from pattern.en import tag from pattern.vector import KNN, count twitter, knn = Twitter(), KNN() for i in range(1, 3): for tweet in twitter.search('#win OR #fail', start=i, count=100): s = tweet.text.lower() p = '#win' in s and 'WIN' or 'FAIL' v = tag(s) v = [word for word, pos in v if pos == 'JJ'] # JJ = adjective v = count(v) # {'sweet': 1} if v: knn.train(v, type=p) print knn.classify('sweet potato burger') print knn.classify('stupid autocorrect') The tweets are first collected using twitter.search() via the hashtags '#win' and '#fail'. Then a k-nearest neighbor (KNN) is trained using adjectives extracted from the tweets. After enough training, you have a classifier. Not bad for only 15 lines of code. Caffe Caffe is a library for machine learning in vision applications. You might use it to create deep neural networks that recognize objects in images or even to recognize a visual style. Seamless integration with GPU training is offered, which is highly recommended for when you're training on images. Although this library seems to be mostly for academics and research, it should have plenty of uses for training models for production use as well. Other Notable Libraries And here is a list of quite a few other Python ML libraries out there. Some of them provide the same functionality as those above, and others have more narrow targets or are more meant to be used as learning tools.
https://stackabuse.com/the-best-machine-learning-libraries-in-python/
CC-MAIN-2019-43
refinedweb
1,564
59.53
pyspread 0.0.5 has been released. -- New features are: + X, Y, Z for relative cell relations are now pre-processed (easier to use) + Cells can be given a name with <name>=<expresion>. These names are located in the global namespace + Arrays and matrices within one cell can now easily be spread out to a cell range in the grid with the spread function. (spread(x,y,z,value) where x,y,z represents the top-left-front target cell and value is replaced by the origin cell reference) + Rows and columns can be inserted and deleted + Copy, Paste and Delete behaviour improved + Copy and Paste now work in cell editor when no selection is present + Basic tab-delimited text import added --
https://mail.python.org/pipermail/python-announce-list/2008-May/006618.html
CC-MAIN-2014-10
refinedweb
122
52.23
To get an idea, during Sonar analysis, your project is scanned by many tools to ensure that the source code conforms with the rules you’ve created in your quality profile. Whenever a rule is violated… well a violation is raised. With Sonar you can track these violations with violations drill down view or in the source code editor. There are hundreds of rules, categorized based on their importance. Ill try, in future posts, to cover as many as I can but for now let’s take a look at some common security rules / violations. There are two pairs of rules (all of them are ranked as critical in Sonar ) we are going to examine right now. 1. Array is Stored Directly ( PMD ) and Method returns internal array ( PMD ) These violations appear in the cases when an internal Array is stored or returned directly from a method. The following example illustrates a simple class that violates these rules. public class CalendarYear { private String[] months; public String[] getMonths() { return months; } public void setMonths(String[] months) { this.months = months; } } To eliminate them you have to clone the Array before storing / returning it as shown in the following class implementation, so noone can modify or get the original data of your class but only a copy of them. public class CalendarYear { private String[] months; public String[] getMonths() { return months.clone(); } public void setMonths(String[] months) { this.months = months.clone(); } } 2. Nonconstant string passed to execute method on an SQL statement (findbugs) and A prepared statement is generated from a nonconstant String (findbugs) Both rules are related to database access when using JDBC libraries. Generally there are two ways to execute an SQL Commants via JDBC connection : Statement and PreparedStatement. There is a lot of discussion about pros and cons but it’s out of the scope of this post. Let’s see how the first violation is raised based on the following source code snippet. Statement stmt = conn.createStatement(); String sqlCommand = 'Select * FROM customers WHERE name = '' + custName + '''; stmt.execute(sqlCommand); You’ve already noticed that the sqlcommand parameter passed to execute method is dynamically created during run-time which is not acceptable by this rule. Similar situations causes the second violation. String sqlCommand = 'insert into customers (id, name) values (?, ?)'; Statement stmt = conn.prepareStatement(sqlCommand); You can overcome this problems with three different ways. You can either use StringBuilder or String.format method to create the values of the string variables. If applicable you can define the SQL Commands as Constant in class declaration, but it’s only for the case where the SQL command is not required to be changed in runtime. Let’s re-write the first code snippet using StringBuilder Statement stmt = conn.createStatement(); stmt.execute(new StringBuilder('Select FROM customers WHERE name = ''). append(custName). append(''').toString()); and using String.format Statement stmt = conn.createStatement(); String sqlCommand = String.format('Select * from customers where name = '%s'', custName); stmt.execute(sqlCommand); For the second example you can just declare the sqlCommand as following private static final SQLCOMMAND = insert into customers (id, name) values (?, ?)'; There are more security rules such as the blocker Hardcoded constant database password but I assume that nobody is still hardcodes passwords in source code files… In following articles I’m going to show you how to adhere to performance and bad practice rules. Until then I’m waiting for your comments or suggestions. Happy coding and don’t forget to share! Reference: Fixing common Java security code violations in Sonar from our JCG partner Papapetrou P. Patroklos at the Only Software matters blog.
http://www.javacodegeeks.com/2012/09/fixing-common-java-security-code.html
CC-MAIN-2014-42
refinedweb
594
56.55
#include <sys/uio.h>()\(emit.) Permission to read from or write to another process is governed by a ptrace access mode PTRACE_MODE_ATTACH_REALCREDS check; see ptrace(2)., −1 is returned and errno is set appropriately. The memory described by local_iov is outside the caller's accessible address space. The memory described by remote_iov is outside the accessible address space of the process pid. The sum of the iov_len values of either local_iov or remote_iov overflows a ssize_t value. flags is not 0. liovcnt or riovcnt is too large. Could not allocate memory for internal copies of the iovec structures. The caller does not have permission to access the address space of the process pid. No process with ID pid exists. These system calls were added in Linux 3.2. Support is provided in glibc since version 2.15.; }
http://manpages.courier-mta.org/htmlman2/process_vm_readv.2.html
CC-MAIN-2019-18
refinedweb
137
68.97
#include <Store3.h> Delete the on disk representation of the object. This will cause GDataEventsI::OnDelete to be called after which this object will be freed from heap memory automatically. So Once you call this method assume the object pointed at is gone. Saves the object to disk. If this function fails the object is deleted, so if it returns false, stop using the ptr you have to it. Sets the stream, which is used during the next call to GDataI::Save, which also deletes the object when it's used. The caller loses ownership of the object passed into this function. Returns the type of object
http://www.memecode.com/lgi/docs/classGDataI.html
CC-MAIN-2014-41
refinedweb
107
75.3
Game Creation with XNA/Print version Table of contents Basics Game Creation / Game Design - Introduction - Types of Games - Story Writing and Character Development - Project Management - Marketing, Making money, Licensing Mathematics and Physics - Introduction - Vectors and Matrices - Collision Detection - Ballistics - Inverse Kinematics - Character Animation - Physics Engines Programming Audio and Sound 2D Game Development - Introduction - Texture - Sprites - Finding free Textures and Graphics - Menu and Help - Heads-Up-Display (HUD) 3D Game Development - Introduction - Primitive Objects - 3D Modelling Software - Finding free Models - Importing Models - Camera and Lighting - Shaders and Effects - Skybox - Landscape Modelling - 3D Engines Networking and Multiplayer Artificial Intelligence Kinect Other Appendices References License Preface To start writing games for Microsoft's XBox360, one usually has to read many books, web pages and tutorials. This class project tries to introduce the major subjects, get you started and if needed point you into the right direction for finding additional material.. Other Wikibooks There are also other wikibooks on subjects related that are quite useful: - Creating a Simple 3D Game with XNA - Video Game Design - Blender 3D: Noob to Pro - C Sharp Programming - Introduction to Software Engineering - Game Creation with the Unity Game Engine Other Class Projects Inspiration from successful class projects can be drawn here, which by themselves are also quite interesting and maybe helpful for this project: Basics Introduction. Setup For this book we will use Visual Studio 2008 and the XNA Framework 3.1. Although there are newer versions available, for many reasons we will stay with this older version. Contents - 1 Table of contents - 1.1 Basics - 1.2 Game Creation / Game Design - 1.3 Mathematics and Physics - 1.4 Programming - 1.5 Audio and Sound - 1.6 2D Game Development - 1.7 3D Game Development - 1.8 Networking and Multiplayer - 1.9 Artificial Intelligence - 1.10 Kinect - 1.11 Other - 1.12 Appendices - 1.13 References - 1.14 License - 1.15 Preface - 2 Basics - 2.1 Introduction - 2.2 Setup - 2.3 C-Sharp - 2.4 Game Loop - 2.5 Input Devices - 3 Game Creation / Game Design - 4 Story Writing and Character Development - 4.1 Character Development - 4.2 Story Writing / Story Telling - 4.3 Author - 4.4 Links - 4.5 Books - 4.6 Project Managment - 5 Introduction - 6 Xbox Games + Marketplace - 7 AppHub - 8 Xbox Marketing Strategies - 9 Weblinks - 10 References - 11 Mathematics and Physics - 12 Collision Detection - 13 Ballistics - 13.1 Basic Physics - 13.2 Projectile Movement - 13.3 Impact - 13.4 References - 13.5 Inverse Kinematics - 13.6 Character Animation - 13.7 Introduction - 13.8 Keyframed Animation - 13.9 Interpolation methods - 13.10 Skeletal Animation - 13.11 Summary - 13.12 Authors - 13.13 Physics Engines - 14 Programming - 14.1 Introduction - 14.2 Fields of Applications - 14.3 Features - 14.4 Version history - 14.5 Authors - 14.6 References - 14.7 External links - 15 Version Control Systems - 16 Versioning Software - 16.1 Subversion - 16.1.1 Introduction - 16.1.2 Client / Server Concept of SVN - 16.1.3 Setting up a SVN Server in Windows - 16.1.4 Using a SVN client - 16.2 Git - 16.3 Hosting - 16.4 References - 16.5 External links - 16.6 References - 17 Reusable Components - 17.1 Overview - 17.2 Examples - 17.3 Creating a reusable component - 17.4 Where to find more samples? - 17.5 Links - 17.6 Authors - 17.7 Frameworks - 17.8 LTrees - 17.9 Nuclex Framework - 17.10 References - 18 Audio and Sound - 18.1 Introduction - 18.2 Sound in XNA - 18.3 XACT - 18.4 Authors - 18.5 Creation - 18.6 Links and sources - 19 Synthesizer - 20 2D Game Development - 20.1 Introduction - 20.2 Texture - 20.3 Texture Coordinates/ UVW Coordinates - 20.4 How to build textures in Photoshop - 20.5 Textures in XNA - 20.6 Resources: - 20.7 Sprites - 20.8 Creating Sprites - 20.9 Using of Sprites in XNA Games - 20.10 Finding free Textures and Graphics - 20.11 Menu and Help - 20.12 Heads-Up-Display - 20.12.1 Introduction - 20.12.1.1 Application - 20.12.1.2 Examples - 20.12.1.3 Less is more - 20.12.2 Text in HUD - 20.12.3 Images in HUD - 20.12.4 Components - 20.12.4.1 Text - 20.12.4.2 Meter - 20.12.4.3 Radar - 20.12.4.4 Bar - 20.12.5 Useful links - 20.13 References - 21 3D Game Development - 22 Finding free Models - 22.1 3D Models - 22.2 Authors - 22.3 Importing Models - 22.4 Introduction - 22.5 Cinema4D - 22.6 Maya - 22.7 3ds Max - 22.8 Blender - 22.9 Sketchup - 22.10 Summary - 23 Camera - 24 Lighting - 24.1 Introduction - 24.2 Normals - 24.3 BasicEffect - 24.4 Author - 24.5 Shaders and Effects - 24.5.1 Development of shaders - 24.5.2 Types of shaders and their function - 24.5.3 Programming with BasicEffect Class in XNA - 24.5.4 Programming your own HLSL Shaders in XNA - 24.5.4.1 Shading Languages - 24.5.4.2 The High Level Shading Language (HLSL) and its use in XNA - 24.5.4.3 Structure of a HLSL Effect-File (*.fx) - 24.5.4.4 First try: A simple ambient shader - 24.5.4.5 Diffuse shading - 24.5.4.6 Texture Shader - 24.5.4.7 Advanced Shading with Specular Lighting and Reflections - 24.5.4.8 Additional Parameters - 24.5.4.9 Postprocessing with shaders - 24.5.4.10 Create tansparency Shader - 24.5.4.11 Other kinds of shaders - 24.5.5 Using FXComposer to create shaders for XNA - 24.5.6 Particle Effects - 24.5.7 Links - 24.5.8 Author - 25 Skybox - 25.1 Creating a simple Skybox - 25.1.1 Creating Skybox Images with Terragen 2 - 25.1.1.1 Adding Terrain - 25.1.1.2 Texturing using Shaders - 25.1.1.3 Camera Setup - 25.1.1.4 Render/Quality Settings - 25.2 XNA integration - 26 Skydome - 27 Links - 28 Authors - 29 Landscape Modelling - 30 Implementation in XNA - 31 Links - 32 Literature - 33 Authors - 34 Networking and Multiplayer - 34.1 Introduction - 34.2 Split-Screen - 34.3 Network and Peer-to-Peer - 34.3.1 Basics - 34.3.2 Techniques - 34.3.3 Weblinks - 34.3.4 References - 34.4 Network Engines - 34.4.1 Lidgren Network Engine - 34.4.2 Raknet - 34.4.3 Windows Communication Foundation - 34.4.4 Authors - 35 Artificial Intelligence - 35.1 History - 35.2 Difficulties - 35.3 References - 35.4 Authors - 35.5 History - 35.6 Techniques - 35.6.1 Chasing and Evading - 35.6.2 Pattern Movement - 35.6.3 Flocking - 35.6.4 Path finding - 35.6.5 Further Techniques - 35.7 Examples - 35.8 Difficulties - 35.9 References - 35.10 Authors - 35.11 General Information - 35.12 Available AI Engines for XNA - 35.13 Other AI Engines - 35.14 Your Own AI Engine [6] - 35.15 "Temporary Guidelines" - 36 Kinect - 37 Other - 37.1 Introduction - 37.2 Another Software is SAYA-Engine 0.3, you can use for 3D Games in XNA - 38 Appendices - 39 License - 40 GNU Free Documentation License - 40.1 0. PREAMBLE - 40.2 1. APPLICABILITY AND DEFINITIONS - 40.3 2. VERBATIM COPYING - 40.4 3. COPYING IN QUANTITY - 40.5 4. MODIFICATIONS - 40.6 5. COMBINING DOCUMENTS - 40.7 6. COLLECTIONS OF DOCUMENTS - 40.8 7. AGGREGATION WITH INDEPENDENT WORKS - 40.9 8. TRANSLATION - 40.10 9. TERMINATION - 40.11 10. FUTURE REVISIONS OF THIS LICENSE - 40.12 11. RELICENSING - 41 How to use this License for your documents Preparation You should first make sure that you have a newer version of Windows, such as XP, Vista or 7, with the appropriate service packs installed. In general, it is a good idea to use the US version of the operating systems. In addition, since at least DirectX 9 compatibility is needed, you may not be able to use a virtual machine (such as Parallels, VMWare or Virtual Box) for doing XNA programming. Make sure that no older or newer version of Visual Studio is installed on your computer. Especially with older version there are always issues. It is good advise not to have several versions of Visual Studio running on one computer. Install Visual C# 2008 Express Edition First download the C# 2008 Express Edition from Microsoft. You can also use the Visual Studio Express version. Installation is straightforward, simply follow the wizard. After installation, make sure you run Visual Studio at least once before proceeding to the next step. Install the DirectX Runtime Download and install the 9.0c Redistributable for Software Developers. This step should not be necessary on newer Windows version. First, try to get away without it, if in a later part you get some funny error message related to DirectX, then execute this step. Install XNA Game Studio 3.1 After having run Visual Studio at least once, you can proceed with the installation of the XNA Game Studio. First, download XNA Game Studio 3.1. Execute the installer and follow the instructions. When asked, allow communications with XBox and with network games. Test your Installation To see if our installation was successful, let's create a first project. - start Visual C# 2008 Express Edition - select File->New Project under 'Visual C#->XNA Game Studio 3.1' you should see a 'Platformer Starter Kit (3.1)', click OK to create the project - to compile the code use either 'Ctrl-B', 'F6' or use 'Build Solution' from the Build menu - to run the game use 'Ctrl-F5', enjoy - take a look at the code, among other things, notice that a 'Solution' can have several 'Projects' Next Steps (optional) We will only develop games for the PC, if you want to develop games for the XBox also, you need to become a member of XBox LIVE and purchase a subscription (in case your university has a MSDN-AA subscription, membership is included). Advice You need to be attentive of which XNA version you have to install. compatible versions: Authors Sarah and Rplano C-Sharp When coding for the XBox with the XNA framework, we will be using C-Sharp (C#) as programming language. C-Sharp and Java are quite similar, so if you know one, basically you know the other. A good introduction to C-Sharp is the Wikibook C_Sharp_Programming. C# has some features that are not available in Java, however, if you know C++ some may look familiar to you: - properties - enumerations - boxing and unboxing - operator overloading - user-defined conversion (casting) - structs - read-only fields The biggest difference between C-Sharp and Java probably are the delegates. They are used for events, callbacks and for threading. Simply put, delegates are function pointers. Properties This is an easy way to provide getter and setter methods for variables. It has no equivalent in Java, except if you consider the automatic feature of Eclipse to add these methods. Simply consider the following example, notice the use of the value keyword. Enumerations In Java you can use interfaces to store constants. In C# the enumeration type is used for this. Notice that it may only contain integral data types. Boxing and Unboxing This corresponds to Java’s wrapper types and also is available now in Java. Interesting to notice is that the original and boxed are not the same. Also notice that unboxed stuff lives on the stack, whereas the boxed stuff lives in the heap. Operator Overloading This is a feature that you may know from C++, or you might consider the overloading of the ’+’ operator for the Java String class. In C# you can overload the following operators: - unary: +, -, !, +, ~, ++, --, true, false - binary: +, -, *, /, %, &, |, ^, <<, >>, ==, !=, <, >, <=, >= For instance for vector and matrix data types it makes sense to overload the '+', '-' and the '*' operators. User-Defined Conversion Java has built-in casting, so does C#. In addition, C# allows for implicit and explicit casting, which means you define the casting behavior. Usually this makes sense between cousins in a class hierarchy. However, there is a restriction: conversions already defined by the class hierarchy cannot be overridden. Structs Structs basically allow you to define objects that behave like primitive data types. Different from objects, which are stored on the heap, structs are actually stored on the stack. Structs are very similar to classes, they can have fields, methods, constructors, properties, events, operators, conversions and indexers. They can also implement interfaces. However, there are some differences: - structs may not inherit from classes or other structs - they have no destructor methods - structs are passed by-value not by-reference Read-Only Fields When we were discussing the keyword const the difference to Java’s final was that you had to give a value to it at variable declaration time. A way around this is the readonly keyword. However it still has the restriction, that a readonly variable has to be initialized inside the constructor. Delegates Usually, in Java when you pass something to a method, it is a variable or an object. Now in C# it is also possible to pass methods. This is what delegates are all about. Note that delegates are also classes. One good way of understanding delegates is by thinking of a delegate as something that gives a name to a method signature. In addition to normal delegates there are also multicast delegates. If a delegate has return type void, it can also become a multicast delegate. So if a delegate is the call to one method, then a multicast delegate is the call to several methods, one after the other. Callbacks Callback methods are used quite often when programming C or C++ and they are extremely useful. The idea is instead of waiting on another thread to finish, we just give that thread a callback method, that it can call once its done. This is very important when there are tasks that would take a long time, but we want the user in the meanwhile to do other things. To accomplish this, C# uses delegates. Inheritance Object-oriented concepts in C# are very similar to Java’s. There is a few minor syntax related differences. Only with regard to method overwriting in an inheritance chain, C# provide more flexibility than Java. It allows for a very fine-grained control over which polymorphic method actually will be called. For this it uses the keywords 'virtual', 'new', and 'override'. In the base class you need to declare the method that you want to override as virtual. Now in the derived class you have the choice between declaring the function 'virtual', 'new', or 'override'. Game Loop! Input Devices Introduction Kinect is a revolutionary video camera for the Xbox that recognizes your movement in front of the television. This can be used to control games just with your body. Developers can use the Kinect framework to integrate this into their game. Game Creation / Game Design Introduction Here we first consider what types of games there are, basics behind story writing and character development. Also project management, marketing, making money, and licensing are issues briefly touched upon. More Details Lore ipsum ... Types of Games. Authors Story Writing and Character Development The title should fit your story. It should create an interest to play the game. It should partially reveal what the game is about but not say to much to keep the thrill. Prologue Thonka Links Character Development And Storytelling For Games by Lee Sheldon(Premier Press 2004) Die Heldenreise im Film by Joachim Hammann (Zweitauseneins) Project Managment BlaBla about project management and how important it is. Should include basics of project management, including milestones, risk analysis, etc. Especially, also tools like MS Project, Zoho, Google Groups or similar should be compared and described how to use them. Authors to be continued... thonka - also interested: juliusse Introduction After finishing to develop your Xbox game, your aim will be to make as many people as possible to buy and enjoy your game to get at least the money back which you have invested into the game and at best some reward. Microsoft itself offers a platform for downloading games which can be used to distribute games - it contains two sections, where independent developers can submit their creations. This Book gives information about the whole platform, the special independent developers sections, describe the ways how to publish a game successfully and provides some informations how Microsoft generally promotes the Xbox to attract more users. Xbox Games + Marketplace General The Xbox Marketplace is a platform, where users can purchase games, download videos, game demos, Indie Games (will be treated in a separate chapter) and some additional content like mappacks or themes for the XBox 360 Dashboard. It was launched in November 2005 for Xbox and 3 years later, in November 2008, for Windows OS. Since 11th August 2009 it's possible to download Xbox 360 Games. The content will be saved on the Xbox 360's hard drive or an additional memory unit. Payment The Xbox Marketplace has it's own currency: "Microsoft Points". So users can purchase content without a credit card and credit card transaction fees can be avoided for Microsoft.[1] Microsoft Points are offered in packages of different quantities, from 100 up to 5000 while 80 points are worth US$1 [2] and can be purchased with credit card or Microsoft Point Cards in retail locations and since May 2011 by PayPal in supported regions. Some points of criticism are that users have to buy usually more points then they actually need and that they obscure the true costs of the content: . [..] So, even if you are buying only one song, you have to allow Microsoft, one of the world’s richest companies, to hold on to at least $4.01 of your money until you buy another." [3] "Microsoft is obscuring the true cost of this content. A song on Zune typically costs 79 Microsoft Points, which, yes, is about 99 cents. But it seems to be less because it's just 79 Points." [4] These statements are from Zune reviews, a platform to stream and download music and movies, also with Xbox 360, similar to iTunes. Microsoft Points are the currency of Zune, too and the points can be transfered between Xbox Live Marketplace and Zune accounts. Xbox Live Arcade General Xbox Live Arcade was launched on 3rd November 2004 for the original Xbox. It's a section of the Xbox Marketplace which accepts games from a wide variety of sources, including indie developers, medium-sized companies and large established publishers who develop simple pick-up-and-play games for casual gamers, for example "Solitaire" or "Bejeweled". [5] It starts with 27 arcade games on the beginning, now there are about 400 games available. In November 2005, Xbox Live Arcade was relaunched on the Xbox 360. It now has an fully integrated Dashboard, every arcade title has a leaderboard and 200 Achievement points. Publishing an Arcade Game Publishing an Arcade Game can cost a few hundreds of dollars and takes about 4-6 month of time to develop an test for a small team. They have to work closely with the Xbox Live Arcade team on everything from game design and testing to ratings, localization and certification. If everything is finished, the Xbox Live Arcade team puts the game onto Xbox Live. The whole process can be broken down in some steps: [6] - Contact - Write an email to the Arcade team with the concept, if they are interested they will send some forms to fill in. - Submission - Submit the game concept formally with as much information as possible about design, documents, screenshots and prototypes to be discussed in the Arcade portfolio review. - Create - After a positive review the developing can start. Tools especially for Arcade game developing are available (for e.g. Achievements and Leaderboards). An Arcade team producer get assigned to work with the developer for working on design, Gamescore and Achievments and a schedule with milestones for showing process to the Arcade team. - Full test - Final test with debugging and verification, then the regular Xbox 360 certification to be signed. - Publishing - The game is available at the Arcade Game Marketplace now. Xbox Live Indie Games General Xbox Live Indie Games is a category in the Xbox Marketplace for games from independent developers with Microsoft XNA. The difference to the Xbox Live Arcade Games is, that Indie Games are just tested by the community, has much lower costs of production and they are often very cheap. Currently there are were submitted about 1900 Indie Games since the release at 19th November 2008.[7] Publish an Indie Game Before starting to develop an Indie Game, some restrictions should be noted:[8] - The binary distribution package must be no larger than 150 MB and should be compiled as single binary package. - The games are priced at 200, 400 and 800 Microsoft Points, games that are larger then 50 MB must be priced at least 400 Microsoft Points. - Each game needs an eight minute trial period to offer a testing time for users. After the trial time they can decide whether they want to buy this game or not. - Xbox Live Indie Games have not the same features as the Xbox Live Arcade Games. There are no Achievements or leaderboards available, but they include multiplayer support, game invitation, game informations, Xbox Live Avatars and Party Chat. - AppHub membership is required The publishing itself is also a process, but much less complex then for Xbox Live Arcade Games:[9][10] - Create - Develop the game in C# using the XNA Game Studio framework, to allow the developers to debug and test their game internally before release. - Submission - Uploading the package at the App Hub website, add some metadata, specify costs and design the Marketplace offer. - Playtest - Other developers of the App Hub community can test the game for one week to give some feedback. - Peer Review - Developers check the game for unacceptable content, instability or other things which could block the publishing. Multile reviews are needed to pass the peer review successfully. If a game was declined, it can be resubmitted if the feedback has been used. - Release - If the peer review was successful, the game is available in the Marketplace Indie games section. The developer now gets 70% of the profit, Mircosoft 30% (in US$!). AppHub AppHub is a specific website and community for Xbox Live Indie Games (and Windows Phone) developers. AppHub offers free tools like XNA Game Studio and DirectX Software Development Kit, provides community forums where users can ask questions, give advice, or just discuss the finer points of programming. Code samples provides developers with a jump-start to implementing new features, and the Education Catalog is packed with articles, tutorials, and utilities to help beginners and experts alike. An App Hub annual subscription for $99 USD provides you with access to the Xbox LIVE Marketplace, where you can sell or give away your creation to a global audience. For students the membership is free if you register at MSDNAA. They also provide a developer dashboard so developers can manage all aspects of how the game appears in marketplace, monitor downloads, and track how much money they've earned. So the AppHub membership is required to publish an Indie Game. Per year, members can submit up to 10 Indie Games, peer review new Indie Games before they get released and get offered premium deal from partners. Xbox Marketing Strategies 53 million Xbox Consoles have been sold world wide, the Xbox Live community has more than 30 million members and it's getting harder for Microsoft to attract new customer. So they try to gain user from a new target audience and develop some new strategies to get the Xbox into as many homes as possible. Microsoft uses a lot of viral marketing and tries to let users to interact as much as possible in their own Xbox Live community. Xbox Party The usual Xbox gamer is male, so there are a lot of women who can be won as new customers. Inspired by "Tupperware Parties", Microsoft offers the possibility to get an Xbox pack to throw a home party to present the Xbox. Hosts got an Xbox party pack of freebies that included microwaveable popcorn, Xbox trivia game "Scene It? Box Office Smash," an Xbox universal media remote control, a three-month subscription to Xbox Live, and 1600 Microsoft Points. The aim is to spread the Xbox and get into a new target audience, everyone wants to have the console all friends are on.[11] Special offers Another strategy is to reach even the last ones of the main target audience who haven't an Xbox yet. A main reason are the costs of an Xbox, a special offer for college students now offers an Xbox 360 to all U.S. college students who buy a Windows 7 PC. By targeting college kids, Microsoft is going after the sexiest demographic. College students ages 18 to 24 spend more than 200 billion dollars a year on consumables. The average student has about $600 a month in disposable income from part-time work, work-study or scholarships. They also typically don’t have mortgages or car payment. Because of this, they are able to spend their money less conservatively than an adult who has those expenses on top of paying back college loans and possibly providing for their families. [12] To promote the marketplace and connect the users of Windows Phones and Xbox closer to each other, Microsoft offers a free Xbox 360 game to developers of Windows Phone Apps, the best App also wins a Windows 7 Phone. It's just available for the first 100 Apps and calles Yalla App-a-thron comeptition.[13] Promote Indie Game Indie Games are developed usually by independent developers with low costs. The best strategy to advertise for an Indie Game is spreading it as much as possible. Users can rate games in the Marketplace, games with a good rating get downloaded more often. If someone plays an Indie Game, friend in the Xbox Live are able to see that and maybe the game gets spread more and more into the community. Websites like IndieGames.com constantly present popular Indie Games, the aim of every developer should be to get as much attention as possible and to trust into viral marketing. Weblinks - Xbox Marketplace - App Hub - Wikipedia: Xbox Live Marketplace - Wikipedia: Microsoft Points - Wikipedia: Xbox Live Arcade - Wikipedia: Xbox Live Indie Games References - ↑ - ↑ - ↑ - ↑ - ↑ - ↑ - ↑ - ↑ - ↑ - ↑ - ↑ - ↑ - ↑ Mathematics and Physics Introduction Lore ipsum ... Vectors and Matrices We need to recall some basic facts about vector and matrix algebra, especially when trying to develop 3D games. A nice introduction with XNA examples can be found in the book by Cawood and McGee. [1] Right Triangle Matrices References - ↑ S. Cawood and P. McGee (2009). Microsoft XNA Game Studio Creator’s Guide. McGraw-Hill. - ↑ Wikipedia:Sine - ↑ Wikipedia:Cosine Collision Detection Collision detection is one of the basic components in a 3D game. It is important for a realistic appearance of the game, which needs fast and rugged collision detection algorithms. If you do not use some sort of collision detection in your game you are not able to check if there is a wall in front of your player or if your player is going to walk into another object. Bounding Spheres First we need to answer the question "What is a bounding sphere?" The bounding sphere means a ball which has nearly the same center point as the object which is enclosed by the ball. A bounding sphere is defined by its center point and its radius. In collision detection the bounding spheres are often used for ball-shaped objects like cliffs, asteroids or space ships. Let's take a look at what happens when two spheres are touching. The image shows , the radius of each sphere now also defines the distance its center to the opposite sphere's skin. The interspace between the centers would be equal to radius1 + radius2. If the distance would be greater, the two spheres would not touch but if it would be less, the spheres would intersect. A feasible way to determine if a collision has occurred between two objects with bounding spheres you can simply find the distance between their centres and see if this is less than the sum of their bounding sphere radius. Another way to use bounding spheres is to use the balance point of the object as the center point of the bounding sphere. Thereby you use the midpoint of all vertices as the centre of the bounding sphere. This algorithm gives you a more exact midpoint than the first way. XNA Bounding Spheres Microsofts XNA offers a model for you to use by developing your own game called "BoundingSphere". XNA provides this for you so that there is no need to calculate it. Models in XNA are made up of 1 or more meshes. When doing collisions you will want to have one sphere that borders the whole model. That means at model load time you will want to loop through all the meshes in your model and expand a main model sphere. foreach (ModelMesh mesh in m_model.Meshes) { m_boundingSphere=BoundingSphere.CreateMerged(base.m_boundingSphere, mesh.BoundingSphere); ... To see if two spheres have collided Xna provides us to use: bool hasCollided=sphere.Intersects(otherSphere); Bounding Rectangles or Bounding Box In collision detection handling with rectangles you want to see whether two rectangular areas are in any way touching or overlapping each other. Therefor we need to use the bounding box. A bounding box is simply a box that encloses all the geometry of a 3D object. We can easily calculate one from a set of vertex by simply looping through all the vertices finding the smallest and biggest x, y and z values. To create a bounding box around our model in model space you need to calculate the midpoint an the four corner point of the rectangle we want to enclose. Then you need to build a matrix and rotate the four point about the midpoint with the given rotation value. After that we need to go through all the vertices in the model keeping a track of the minimum and maximum x, y and z positions. This gives us two corners of the box from which all the other corners can be calculated. XNA Bounding Box Because each model is made from a number of mesh we need to calculate minimum and maximum values from the vertex positions for each mesh. The"ModelMesh" object in XNA is split into parts which provides access to the buffer which is keeping the data of the vertex (VertexBuffer) from which we can get a copy of the vertices using the GetData call. public BoundingBox CalculateBoundingBox() { // Create variables to keep min and max xyz values for the model); } Terrain Collision Collision detection with a terrain and an object is different than the collision between objects. First of all you have to detect the coordinates of your current player (object). The height map of your terrain gives you a "gap value" which identifies the distance between two sequenced vertices. When dividing your coordinate position through those "gap values" you can detect the vertices at your position. You can get from your heightmapbuffer the 4 vertices squares where you are. Using these datas and your position in this square, you can calculate the best interspace to the terrain so that there is no collision with it. Collision Performance Sometimes collision detection slows down a game. It is the most time-consuming component in an application. Therefor there are data structures as quadtree and octtree. Quadtree (2D) A quadtree is a tree structure using a principle called ‘spatial locality’ to speed up the process of finding all possible collisions. Objects can only hit things close to them. To advance the performance you should avoid the testing again objects which are far away. The easiest way to check for collision is to divide the area which is going to be checked into a consistent grid and declare each object with all intersecting grid cells. The quadtree tries to overcome this weakness by recursively splitting the collision space into smaller subreg possible resolution. A search is made by starting at the object’s node and climb up to the root node. Octtree (3D) Octtrees work the same way as quadtree. It is used for collision detection in 3D areas. References Bounding Volumes and Collisions Bounding Sphere Collision Detection Author sarah Ballistics. Inverse Kinematics) This method is a refinement of the DLS method and needs fewer iterations. Cyclic Coordinate Descent Nexus' Child References - Character Animation Keyframe animation is an animation technique, that originially was used in classical cartoons. A Keyframe defines the start- and endpoint of an animation. they are filled with so called interframes or inbetweens. History, Tradional Keyframe Animation An Object will move from one corner to an other. The first keyframe shows the object in the top left corner and the second keyframe shows it in the bottom right corner. Everything in between is interpolated. - Example usage - - - Interpolation methods The individual segments are pass with constant speed. Discrete With discrete interpolation, the animation function jumps from one value to the next without interpolation. Spline Interpolation Keyframe Animation in XNA Author ARei Skeletal Animation don't need any complex algorithms to animate the models. Without this technique it is virtually impossible to animate the mesh in combination with the bones. Rigging: - Bones: 59 up to 79 in 4.0 - Polygons: depends on the hardware Typical programs are: - Motion Builder - 3ds Max - Maya Animations in XNA Author FixSpix Summary What we learnt in this chapter In this chapter we learned, how to animate our character in two different ways. First the keyframe-animation and than the skeleton-animation. These two techniques are the most important in xna. But which one is better? Better in this context is the wrong word, lets replace the word "better" with the words "better in which situation". Its simple... use the skeleton-animation in 3D and the keyframe-animation in the 2D area. Author fixspix Authors A.Rei and FixSpix Physics Engines: Author Programming Introduction Lore ipsum ... to be edited by iSteffi. Fields of Applications Features Visual Studio supports the developer with helpfull features which are useful in every development step. The Code Editor Visual Studio comes along with its own debbuger. The debugger supports by securing that the application operates in a logical way and as you want it to operate. It makes it possible to stop on different code positions to check the building. Expandability The developer using Visual Studio has the chance to expand the functions of the standard Visual Studio. Browser and Explorer - Windows versions on which it runs⁴ - Cobra_w References Microsoft Visual Studio on Wikipedia External links Visual Studio Website Visual Studio Developers Center on MSDN Version Control Systems Overview A version control system (also called revision control or source control system) is a software used to track changes in documents and binary files. It is typically used in software development to manage source code files. For every change, a unique ID, a timestamp and the user who changed the file is saved. Thus, changes between two different versions can be easily compared and also who changed the file and when. Some systems also provide means to comment a specific version (to mark what has been changed) or give it a unique name (such as "Beta 1", or "Release Candidate"). Since every change is saved, one can roll back or change to any version that has been saved. This also provides protection against malicious or accidental changes and serves as a backup in case of data loss. There are three types of versioning control: Local, centralized and distributed systems. Local Systems Local systems require only a one computer. They are mostly suited for single developers, who want to have control over smaller projects they work on. Probably everyone has already used a local system, if only unintentional. Office programs like Microsoft Office or OpenOffice keep a backup of the currently open files in case of crashes or memory corruption. You may have noticed that for example Microsoft Word offers to recover a previous version of the file in case the computer crashed while the file was open. To accomplish that, the program saves a backup of the currently open file every couple of minutes, usually hidden from the user and regardless whether he also saved his document on purpose. Another example is the shadow copy service of modern Microsoft Windows versions. It keeps copies of system files that can be restored in case a file has been corrupted or damaged by a faulty update. Centralized Systems Centralized systems use a client-server architecture to keep track of changes. This kind of system is usually used to track multiple files or whole programming projects. A server stores an "official" copy of all files, folders and changes on its hard disk. This is also called a repository. A client that wants to participate in the development process first needs to acquire the files stored on a server. This procedure (the initial as well as any further pulls from the server) is called "checkout", in which the whole content of the repository is copied to the local machine. The client now may do changes to any file he wants to, for example adding some new procedures to a project, or improving an algorithm. After all changes are done, he needs to communicate the changes to the server. The upload of the changed files to the server is called a "commit". The server keeps track of any changes the client made to the repository and adds a new "revision" to the server. Other users that also work on the project need to update their local repositories to the newly committed version on the server. If changes to a file overlap, a "conflict" occurs. The user then has the opportunity to view the differences and may choose to merge them, depending on the versioning software used. It is possible to check out any previous version that has been committed to the server. Distributed Systems Distributed versioning systems don't use a central repository. Instead, every users has it's own local repository, and changes are communicated through patches to other users. However, there may be a common repository where everyone publishes their changes (in most open source projects there is usually an upstream repository, but it is not mandatory). In comparison to centralized systems, which forces synchronization of all changes between all users, distributed systems focus on the sharing of changes. This has some advantages, but may not be suited for every kind of project. For example, every developer has a local versioning control, that can be used for drafts which aren't important enough to synchronize them to a central server. Version Numbering The more complex the project becomes, the more different versions will float around the repositories. If the developer or the team works towards a specific release (for example fixing some bugs) it is a good idea to give each release a unique version number. This helps the user to distinguish between different releases so he can see that he uses the most recent version of an application. A widely used scheme to number versions is the usage of three digits. The first digit indicates a major version. It should only be changed if large changes occur or a lot of new functions are added. The second digit indicates a minor version. It is increments if some (larger) feature is added or a lot of bugs were fixed. The third digit displays a small change to the code, maybe a critical bugfix that has been overlooked in the previous build and needs to be fixed quickly. Of course, one can use a totally different scheme for numbering versions, e.g. using only two digits or using the designated date of the release. Vocabulary Most versioning software uses the same terminology as other systems, so here is a quick list of commonly used words in software versioning[1]: - Branch - A branch is a fork or a split from the currently used code base. This is useful if experimental features are included, or if a specific part of the code gets a major overhaul. - Creating a local copy of any version in the repository. - Commit - Submitting changed code to the repository. - Conflict - A conflict can occur if different developers commit changes to the same file, and the system is unable to merge the changes without risking to break something. A conflict must be either resolved (manually), or one of the conflicting changes has to be discarded in favor of another. - Merge - A merge is an operation where one or more changes are applied to a file. This can for example be the inclusion of a branch into the main code line, or just a small commit to the repository. Ideally, the system can merge the files automatically without any problems, but in some cases a conflict (see above) may occur. - Repository - Contains the most recent data of the project. All changes are submitted into the repository, so that every developer can access the latest version. - Trunk - The name of the development line, that contains the latest, bleeding-edge code of the project. - Update - Receiving changed code from the repository, so that the local version is on par with the version in the repository. Versioning Software Popular version control systems include SVN (Subversion), Git, CVS, Mercurial and a lot more. In this part we will just look at the most widely used (SVN and Git) and explain how to use them with Visual Studio to organize and control your XNA project. A detailed list and a comparison can be found here: Comparison of revision control software Subversion Introduction SVN stands for Subversion and is developed by the Apache Foundation. It is a centralized software versioning and revision control system, which means that it has a central repository (project archive) that is hosted by a server and clients can access it. When users change a file locally and commit it to the repository, only the changes that were made are transferred and not the whole file. That makes the system very efficient. Also a subversion repository's size is proportional to the number of changes that were made and not the number of revisions. That keeps the repository size to a minimum. The file system behind subversion uses two values to address a specific version of a specific file in the file system: the path and the revision. Every revision in a subversion file system has its own root that is used to access contents of that revision. The latest revision is called “HEAD” in SVN. Checkins in a SVN file system are kept atomic by using transactions. That means the client either commits everything or nothing at all. This helps to avoid broken builds that were caused by check-in errors or faulty transactions. So a transaction can be committed and become the latest revision or it can be aborted anytime. Subversion is seen as a further development of CVS, which is another but much older versioning system that is no longer actively developed. It improves some of the issues of CVS such as moving files and directories or renaming them without loosing the version history. Also branching and tagging is faster in SVN, as it is just implemented as a copy operation in the repository. Client / Server Concept of SVN The concept of a Subversion system is that a repository is hosted on a server and accessed by different SVN Clients through the SVN Server. Each client can checkout a working copy, work on it and submit the changes to the central repository (commit). All the other clients can then update their working copy so it is always synchronized with the newest version in the central repository. Setting up a SVN Server in Windows Installing SVN Server First download the Subversion Windows MIS Installer from the official website: The current version is called: Setup-Subversion-1.6.6.msi Then install Subversion on Windows. To check that the Subversion was successfully installed and configured open a new command window in Windows (by clicking Start → Execute → Then enter “cmd” and press OK). In the command window type svn help and you should see some help information if everything is working correctly. Create a SVN Repository with SVN Server and TortoiseSVN Now we are going to create a Subversion Repository. To do this we use another tool called TortoiseSVN, which is a popular program to access and work with SVN repositories in Windows . It is a Subversion client that is implemented as a Microsoft Windows shell extension and can be easily used within Windows Explorer. So first download TortoiseSVN here: And then install. After installation new menu points should have been added to the right-click context menu in Windows Explorer that allow the use of SVN commands directly in Windows. Then we need to create a new folder where our future repository should be stored by the server. In this example we create a folder: D:\repository Then right-click on this new folder and choose TortoiseSVN –> Create repository here… and TortoiseSVN will create the default structure of an empty SVN repository inside this folder. Now to add some content to the repository we will at first create a so called standard layout in a temporary folder and then import this folder into our new repository. So create another folder named D:\structure. Add three subfolders into this folder and call them: trunk, tags and branches. The trunk directory is the main directory of a project and will contain all versioned data. Now to import the structure folder, right-click on it and choose TortoiseSVN –> Import… . In the opening window insert the following path as “URL of repository”: The import message should contain a comment for the version that is being imported into the repository. Write something like “First import” and then click OK. A new window should open and log all the three folders that where imported into the repository. That is it, you can delete the temporary folder called structure because the data is now in the repository. Setup Subversion Server Furthermore we the security of the new repository should be adjusted which is especially important when it is used in a network or the internet. This means setting a security level for anonymous (everybody) and authenciated users (users that have a login and password for the repository) and configuring the user accounts with their passwords. To do this open the file D:\repository\conf\svnserve.conf in a text editor . All config parameters are commented by default, so if you want to activate one you have to uncomment it by removing the # at the beginning of the line. The important part is in line 12 and 13: anon-access = read auth-access = write The setting for the access are read, write and none. In the above case everybody can checkout the current version from the repository but only authenticated users with an account can submit changes. This is the way most open source projects operate so let's keep this setting for the moment. If you set one or both parameters to none, nobody can read or write from the repository. Now we just have to add a few authenticated users to test the system. To do this uncomment line 20 in the conf file that says: password-db = passwd This means the database with the login names and passwords can be found in a file called passwd in the same directory as the svnserve.conf. So save the svnserve.conf and open the default file passwd. A new user is defined in this file by adding a new line with the following scheme: username = userpassword So just create a test account and save the file. Use SVN Server to host a repository Now it is time to use our repository, so let's host it with SVN. Open a window command line window and then execute the following: svnserve -d -r "D:\repository" This should start and enable the SVN Server. Every time you want to use the Server it needs to be started this way, but this is just or test environment where we use the server and client on the same machine. Usually the SVN server is located on a separate server on the internet. Using a SVN client Using TortoiseSVN client to checkout a working copy We are not going to work directly on the repository, because it belongs to the server and the principle behind SVN is that everybody who works on the projects checks out his own copy and works with it locally (a working copy). Usually the SVN server is a network resource that is used to checkout a copy of the project and submit the changes (commit) that were made locally. So let's checkout a working copy from our own local SVN server and work with it. We will use TortoiseSVN for that again. Create a new folder in D:\workcopy (or any other path). Right-click on the new folder and choose SVN Checkout in the context menu. For the URL of the repository fill in: That means we make a checkout from a SVN server that happens to be set up locally (that's why we can use localhost). The folder trunk contain the latest version. Leave the rest of the settings the way they are (HEAD revision turned on) and click OK. If you configured in your security setting that one needs to be an authenticated user to perform a checkout you then have to enter your login and password. Otherwise if you enabled reading for anonymous users you will not be asked. A status window will tell you about the successful checkout and the revision number. The checkout is now completed and the content of the repository is now in the work copy directory. But it is empty because there are no files in our repository yet. Just one hidden directory named .svn that contains some internal SVN version history and should not be deleted. Now we will add a simple file to the working copy and commit it to the repository with TortoiseSVN. Later we will do this directly in Visual Studio with an entire project. In Windows Explorer add a text file in the workcopy checkout and then right-click it and choose: TortoiseSVN –> Add… The icon of the new file should now change from a little question mark to a blue plus symbol (if it does not, refresh with F5). The file is now marked as something to add in the future but it is not committed yet. This is done by doing a right-click on the workcopy folder and choosing: SVN –> Commit… Enter a comment (this comment should contain a short summary of the changes so it becomes more obvious what has been changed in which version of the file) and click OK. You should be asked for the password and login again at this point, but you can also save it so TortoiseSVN will not ask again. It is important that you configure your SVN server so that committing is only possible for authenticated users so it is easier to keep track of who committed changes and prevent unregistered people from making unwanted changes. After this step a status window should tell you about the successful commit. Using SVN within Visual Studio with AnkhSVN We already know the SVN Client TortoiseSVN which uses the Windows context menu to integrate SVN into Windows Explorer, but it would be even better if we could use SVN directly in Visual Studio. There are two projects offering this kind of functionality: AnkhSVN and VisualSVN. While VisualSVN is a commercial product that costs 50$ to obtain a personal license, AnkhSVN is open source and free. That is why we will just have a look at AnkhSVN in this article. AnkhSVN supports Microsoft Visual Studio 2005, 2008 and 2010. It provides an interface to perform all the important SVN operations directly within the Visual Studio Development Environment. AnkhSVN can be downloaded here: Install AnkhSVN and we are ready to go. The simplest way of using the new repository is to create a new project with Visual Studio and place it inside our workcopy directory. Visual Studio should have automatically recognized that it was created inside an SVN working copy, so the SVN features and the correct address for the repository are already set. At this point we have created a new project in an repository so all the new files have to be committed at first. To do this AnkhSVN offers different ways inside the development environment: - You can now open a window to view and commit changes (View → Pending changes). There you can see a list of all files that need to be committed. Just click on Commit and enter a comment as in TortoiseSVN and everything should work. - Another way to commit the project to the repository is by right-clicking on the solution name in the solution explorer and click “Commit solution changes”. In the solution explorer you can also see similar icons to the ones used in TortoiseSVN to show the synchronization status of each file. - To update changes from the repository, just click update in the Pending changes view. Again there is another way by right-clicking on the solution name in the solution explorer and choosing “Update Solution to latest version” in the context menu. - To checkout an existing visual studio project from a repository click Open → Subversion Project... Then enter the SVN server adress and find the project file in the repository. - Other features of SVN such as merging a branch, switching a branch, locking files and more are available through the context menu in the solution explorer as well. You don't need to use AnkhSVN to work with Visual Studio projects inside a SVN repository. You can also use the SVN command line tool or TortoiseSVN. The only thing you should be aware of is which files you commit as Visual Studio is creating build and debug files locally on the machine in the solution directory which should not be committed but build freshly on each individuals machine. You should commit the *.sln file of a Visual C# solution, but not he *.suo file (both in the main folder of the solution). You should also commit all the other files except the bin and the obj folder. By using right-click in Windows Explorer and choosing TortoiseSVN → Add to ignore list you can set these folders permanently on an ignore list so that they will not be committed to the repository. If you use AnkhSVN within Visual Studio you don't have to worry about this as it will automatically just add add and commit the necessary files. Git (Introduce principles, talk about client and server software, and how to integrate with Visual Studio. Show how to use it, also for beginners.) Git is a distributed version control system. It was developed by the creator of Linux, Linus Torvalds, in 2005. The emphasis of Git lies on speed and scalability with large projects. The size of the project (and thus the size of the repository) has only a minimal impact on the performance of patches[2]. Introduction Infrastructure of Git Basically, git consists of three major parts that are important when using it. Since git is a distributed system, one has a local repository which is exactly what it sounds like. This is where all changes are recorded. All your changes are first committed to your local repository, and must then explicitly pushed to a remote repository. The files with your code lie in a working directory. Between your working directory and the local repository is a staging area that gathers all changes, before they are committed to the local repository. It’s like a loading bay, where packets are stored before they are loaded into an airplane. Terminology Git uses a slightly different terminology than described in the vocabulary above. Changes are added to the staging area. A commit describes the process of adding files to the local repository from the staging area, while a push sends all changes to the remote repository. Fetching means to get all changes from the remote repository to the local repository. A pull directly copies the remote repository to the local repository. A checkout reverts changes in your local repository and restores the state of the files either from the staging area, or the local repository. The diagram to the right illustrates the data flow of git. Usage Git on Windows There are two possibilities to use Git, either via command-line or a GUI. Former relies solely on text-based commands and works on all operating systems. Alternatively one can use graphical user interface to manage your sources. While command-line input has its advantages (such as being independent from the operating system), it effectively forces the user to learn the commands (for creating repositories, committing, updating and so on) which can slow down the development process at first. Using a GUI in our case (creating a game with XNA) is beneficial, because we can have tight integration with Visual Studio and can manage the project directly from the development environment. There are a number of graphical tools for Git under Windows. Since TortoiseSVN is popular with SVN-users, its Git-pendant TortoiseGit may be a choice, but it is currently not really on-par with the SVN version. Thus, I recommend using the Git Extensions. It features direct integration with Visual Studio and in combination with the Git Source Control Provider, we can have small icons displaying the status of a file in the project (such as conflicts, committing status…). Install Git Extensions Installation of the Git Extensions is easy, just download the latest version including MsysGit (essentially a native port of Git to Windows) and KDiff3 (for comparing and merging files) and start the installer. Be sure to select “Install MSysGit” (required) and “Install KDiff” (recommended) and check if support for your Visual Studio version (2008 in our case) is selected. After you have started Git Extensions, a checklist might pop up, remembering you to set some parameters. If the path to Git hasn’t been detected, you must point it to its installation folder. Additionally you need to specify a username, E-Mail address and the diff- and mergetools. If everything is ok, the checklist should show every point in green. Hosting If you have your own server you can easily set up a SVN Server as described above and host your own repository. However if you work on an open source project of a smaller scale, it is advisable to just use one of the available free open source hosters . There is quite a number of free open source hosters that help to host and distribute open source projects. Most of them supply an SVN version control system and sometimes other systems such as Git or Mercurial. So these hosters supply not only version control system which is very useful to work together on a project with a team, but they also help to host a project for public distribution via download. Another advantage is that it is also easier to find more fellow developers for your project via this channel, because it becomes more visible for other open source developers. An extensive list of open source hosters with a detailed comparison can be found here. The most popular are Google Code, SourceForge and GitHub. Hosting at Google Code Project Hosting at Google Code is easy and you don't need to apply and wait to get accepted like at SourceForge. There are just two requirements: - The project has to be open source. - You need to be in a country where Google is able to conduct business (which is almost the whole world). It is restricted to open source, because the goal of Google Code is to help open source developers with no funding that cannot afford hosting. It is recommended that the project is explicitly declared under one of the available open source licenses. So Google Code is the right choice for smaller free time projects that require hosting for efficient team work and distribution. Every project on Google Code has its own Subversion and Mercurial repository. Mercurial is another revision control system that is based on a distributed system and also cross-platform. Besides the revision control system with the repository and code hosting, Google Code also offers useful extras such as a bug tracking system, a wiki for the project that can be used for documentation and integration with mailing lists at Google Groups. All this is accessible through a simple web interface. For more information read the official Google Code FAQ: To get started for you project, you need a Google Account and follow the steps on this page: Hosting at SourceForge.net SourceForge is the world's largest open source software hosting web site. It was established in 1999 and it hosts more than 230,000 projects so far and has over 2 million registered users. The goal is similar to the goal of Google Code: Provide free services to help people to build and distribute open source software. It acts as a centralized location for open source software developers by providing users with several version control systems: SVN, CVS, Git, Mercurial and Baazar. Other features include project wikis, a bug tracking system, a MySQL database and a SourceForge sub-domain. SourceForge also includes an internal ranking system that makes very active projects more visible to other developers, which is helpful to get more people join your project. To get hosted at SourceForge you first need to apply to them and accept their terms of use (which involves granting SourceForge a perpetual license). Then the SourceForge team will decided if your project is accepted as a SourceForge project. The two important criteria are that your project is producing software, documentation or an aggregate (like a linux distro) of a software and that our project is under one of the open source licenses. If it is not open source it will get rejected. Generally it is a bit harder to host a very small scale private project that just started at SourceForge. Google Code is the better option because it requires no acceptance. To get started first register an account at SourceForge.net. Hosting at Github Another possibility to host your project is Github. Creating an account and repository is free, as long as your project is open source and publicly available to everyone. You will have about 300 MByte of storage (there are no "hard" limits), so watch out if you push large textures or audio files to the repository. If you need restricted access, you need to pay for it. There are several paid plans available, depending on what you need. After you have signed up, you need to create a new repository. Give it a name and optionally a description and homepage URL. Now you need to configure Git Extensions to clone the repository to your computer, which is an awfully extensive task. Follow the "Set up SSH Keys" procedure (the last step is optional, it just checks whether everything is working). Make sure you remember the passphrase you have entered. Now you need to create a private key file. Start puttygen.exe, select Conversions -> Import. Navigate to the id_rsa file (the one without extension) and select it. Click "Save private key" and store it somewhere, but check that its extension is *.ppk. Now start Git Extensions, select Clone Repository. Now you need to fill out the fields: - Repository to clone: The SSH address from the source-tab at github. Should be something like "git@github.com:username/projectname.git" - Destination: The folder where the repository is stored. (e.g. D:\Repositories) - Subdirectory to create: The name of the subdirectory where the you files go into (e.g. "XNA Project", the resulting path is D:\Repositories\XNA Project) Click "Load SSH key" and point it to the *.ppk file you have create before. If you are finished, click "Clone". The repository is now being copied to your computer. After it has finished, you can start putting your Visual Studio solution into the repository folder and work with it. Via the Commit button in Git Extensions you can commit your files to your local repository and push it to github. Remember that you might need to add the files to the staging area first. If you want to get the newest files from the remote repository, click the Pull button. The other people working with you on the project need to have a github account as well. You can add them as Collaborators from the admin panel of you project. They will have full read and write access. If you need further help with any of the described procedures here, check the Github help system. It's quite extensive and has described almost everything with helpful screenshots. References Description on offical SourceForge Website Authors SVN - Leonhard Palm Git/Versioning Software generally - Lennart Brüggemann - ↑ Revision control#Common Vocabulary Wikipedia. Received 18 May 2011. - ↑ DVCS Round-Up: One System to Rule Them All?--Part 2. Linuxfoundation.org. Received 18. May 2011. External links - git beginner FAQ at Stack Overflow - Comparison of git and SVN - Short introduction to SVN - Free book about Subversion - Article about AnkhSVN in Visual Studio - A good resource concerning AnkhSVN is the official documentation References Reusable Components Overview Game State Management) Score, Life, Health Bar ... Radar 3D Radar HUD is an another example of the HUD that shows how to integrate a 3D Radar into the 3D game using 2D Heads Up Display. Creating a reusable component? - App Hub - content catalog - App Hub Forums - XNA Game Studio - Education Roadmap (Samples, Starter Kits, Tutorials) - CodePlex - Open Source Project Hosting Links Some of the resources listed below contain complete projects that can be downloaded and used in your games. However, there are also some tutorials showing the process of creating a particular component. User Interface Elements Game Menu - Game State Management - Network Game State Management - Tutorial: Create buttons menu in xna, quickly and easily Heads Up Display - 3D Radar Heads Up Display - Tutorial: Creating a Legend of Zelda-style HUD with C# and XNA Framework 3.0 - Tutorial: Not so Healthy... How to create a health bar Authors Maria (wiki login: jasna) Frameworks? - intuitive and simple design - works cross-platform (XBox 360 and Windows) - special console UI controls - support for different keyboard layouts - unified scaling - renderer-agnostic design - skinning in default renderer (skin elements using XML files) - completely test coverage Implementation Component to create fast and easy a GUI in a game. It is not GUI Manager for complex settings but all aspects of a typical game GUI are covered. It automatically change sizes according the Lennart Brüggemann, mglaeser References - ↑ LTrees Demo Application, Change Set 22316. ltrees.codeplex.com, received on 28. May 2011 - ↑ - ↑ - ↑ Audio and Sound Introduction Good sound is a crucial part in a successful game. For this you need to learn about XACT and about ways to creation sound and audio. Also finding free sounds is an important topic. Sound is a wave form that travels through all types of terrestrial matter (solids, liquids and gases). Humans can hear sound as a result of these waves moving the ear drum, a membrane that, with the help of the middle ear, translates sound in to electrical signals. These signals are sent along nerves to the brain, where they are "heard". We most commonly hear sound waves that have traveled through the air. For example what we call thunder, is the shock wave of a lightning bolt; that is, when lightning strikes, it displaces the air around it sending sound waves in all directions. We can also hear sound in water and through solids. Because of their higher density, sound actually travels farther in these mediums than through the air. Sound, as we normally think of it, usually originates from some sort of movement or vibrating body. The frequency of a sound wave, measured in Hertz (Hz), determines the pitch, or how high or low a sound is. It is the distance between peaks in a sound wave. Longer, low frequency wave forms (e.g. bass sounds) travel farther and can travel through different forms of matter more easily than high frequency sound waves. Whales use both high frequency sound waves, including ultrasound and low frequency sound, including infrasound. The loudest and lowest sounds they make, travel the farthest, up to hundreds of miles. The amplitude or loudness of a sound wave, is measured in decibels (dB) which is a logarithmic scale. A jet engine is frequently said to be around 140 dB, while a blue whale call can be up to 188 dB. Due to the nature of the dB scale, these sounds are millions of times louder than a whisper. Even very "simple" sounding tones, like that of a flute, are not perfect sine wave forms. Hardware and software based sound generators are able to create sine waves and other wave forms such as triangle (saw) or square waves. In general each perceived, or fundamental tone may have a series of overtones and harmonics.. Sound in XNA Microsoft XNA Game Studio 3.0 Unleashed, 2009 by Chad Carter (ISBN-13: 9780672330223) -: -: -: -: -: Authors - Christoph Guttandin - Ronny Gerasch Creation Notes: decibel, frequency, oscillators, DFT, FFT(dissect a tone in sine waves), ASDR-Envelopes, MIDI, Well temperament, overtones, timbre, pitch, amplitude, phase, 3D sound, ear anatomy, sound tutorials, free software, sequencer, noise & tones, Creating a sound is easy and almost anything we do creates sound. In musical contexts sound is created by acoustic or electric instruments or analog or digital hardware. To use sounds in a game they must first be recorded and digitized, either in the recording process itself or afterwards. It is increasingly difficult to find places on earth that are absent of man made sound, so it is easy to understand that games trying to imitate reality should have sound in almost every sequence, even if only in the background. Filmmakers record background noise repeatedly over the course of a shoot to increase the authenticity of a film. There are several basic steps in capturing sound: recording, manipulation/ effecting and playing/ reproduction. XNA Game Studio 4 added classes for handling MP3s and capturing and playing back sound from a headset, so even a user's voice can be processed in the same way as a normal recording. Recording In general, sound is recorded in analog or digital form. Because of its low start up cost and easy, precise editing, digital recording is the more popular form of recording. Digital audio recording is the act of recording a sound by taking discrete samples of its wave form and turning them into digital information that can be stored or processed. Digital recording is typically done on a computer, but can also be done with a stand alone recorder with a hard drive, or a handheld device with flash memory. The sampling rate is measured in Hertz, and is the number of times a second a sound is sampled. The bit depth , measured in bits, is how much information is sampled, each time a sample is taken. Higher bit depths offer a more accurate approximation of a wave form. A "CD quality" audio recording is 16 bits, at a 44.1 kHz sampling rate. Generally, the highest quality digital recordings are 24 bit at 192khz. Historically, due to space limitations, games were limited to 8 bit recordings. These "classic" game sound effects and music are easily distinguished from their more modern counterparts. It is comparatively easy to record digitally, for several reasons. Digital recording, in its most basic form, requires only a computer. With the use of plugins, a computer can generate most of the sound a user might need. More elaborate setups might include an audio interface, for recording live instruments or midi signals. Live (microphone or instrument input) and computer generated sound can be seamlessly mixed in audio software. Editing is nonlinear and is also simple. A user can cut, copy and paste pieces of a recording and arrange them as desired. These functions can also be performed across projects and platforms. Compression Until recently, MP3 was by far the most popular form compressing an audio file. MP3s are satisfactory for a game if they are primary compressions (i.e. the first time a full quality audio file has been compressed) above a 160 kbps bit rate. Any bit rate below that begins to sound "lossy." As of version 4, XNA Game Studio has WAV and MP3 importer classes, meaning a game's sound quality is basically up to the creator. Analog audio recording is the act of recording a sound wave in its entirety, as an electronic signal, typically onto magnetic tape. Before an analog recording can be put on CD or used in a game, it must be digitized. The signal can be recorded with less noise if this conversion is done during recording rather than as a separate step. This form of recording is typically ruled out by modern musicians, due to the expense and the time it requires. The need for an engineer, mixing board, tape machine, tape reels and sound room, contribute to the cost. Editing is more laborious because it is linear. That is, an engineer cannot simply copy one good part of a recording to multiple parts of a song. Editing means physically cutting the tape, or rerecording part by part. Microphones use a similar principle the the human eardrum to receive sound. Inside a microphone, a membrane or set of ribbons is displaced by a sound wave and triggers an electrical signal, also a wave form. That is, a microphone translates the sound wave (most often vibrating air) into an electrical wave form, using magnets to generate the electrical signal. There are two general kinds of microphone. Dynamic microphones, which are passive, needing no external power to send electrical signals. Condenser microphones need an external power source called phantom power to function. This is commonly 48 volts and is sent to the microphone through its cable from a mixer or microphone amplifier. MIDI allows separate external synthesizers and other audio equipment to communicate with each other and was an essential part of any studio until USB began replacing its hardware in the early 2000's. Acoustic instruments are the predecessors to electric instruments and need no amplification to be heard. They are recorded by using a microphone to pick up their sound. Electric instruments (e.g. guitars and bass guitars) use the vibration of strings over magnetic coils to generate an electrical signal. To be heard, these signals must be amplified and sent through loudspeakers, which vibrate the air. When struck without amplification, the strings also make sound waves but they are not strong enough to be heard more than a few meters away from the instrument being played. The overtones and harmonics created by stringed instruments, especially by a piano, are extremely difficult to emulate using digital technology. An audio interface (AI), or sound card, converts the analog signals it receives into digital information a computer can process. These analog signals are usually generated by microphones, electric instruments or synthesizers. Computers can generate digital sound signals and do not need to be sent through an AI in order to be processed. In order for the signals being processed by the computer (analog or digital) to be heard, they need to be sent back out through an AI that converts the digital signals back into analog signals and then through loudspeakers or headphones. Recording software or sequencer processes the signals that are generated by a computer or converted using an AI and can produce signals using plugins. These plugins can also emulate analog effects or instruments. The sound options available to a game creator, have increased with recording software performance. Historically, creators were limited to very small sound file size. Modern game stations have more processing power and random access memory and can handle much larger, higher quality sound files. It is commonplace for bands to license songs to video game makers for game soundtracks. Traditionally, sound effects were recorded in much the same way as music; in a studio with someone performing the sound (e.g. breaking glass or footsteps) in front a microphone. In recent years, with the availability of innumerable sound sample libraries, game makers, like filmmakers, use mostly prerecorded samples for sound effects. Sound effects are extremely important to a players experience of a game, especially in realistic games where sounds are required to be as authentic as possible. Reproduction Sound reproduction uses much the same process as recording, but in reverse. A tape or record is played or digital file read and converted back into sound waves. This is usually done with speakers or headphones. Accurate sound reproduction is vital to the experience of a game. Speakers and headphones are the rough equivalent of microphones but are used for sound output, instead of sound input. The electrical signals being played back are sent through an amplifier, which strengthens the signal, through a cable to speakers, where a magnet is used to set the speaker's membrane in motion. This membrane vibrates the air, sending sound waves into the space in in front of and behind the membrane. Speakers are usually contained in some sort of housing, which needs to be tuned for accurate sound reproduction. Housings for headphone speakers come in three general types: over-ear, around-ear, and in-ear. These types have two configurations. They can be open, which projects sound outward, as well as into the ear, or closed which blocks outside noise and keeps sound from escaping. Audio Effects Audio effects are used to change existing sounds which are recorded or generated by software or by synthesizers and are usually user configurable. Traditionally they were encased in boxes, or pedals, that could be activated with the foot of a musician during a musical performance or in larger rack mountable formats for use in a recording studio. Software plugins are able to emulate most formerly hardware based effects. - Filter The filter is a commonly used effect. Its function is to cut off frequencies above or below a defined frequency, known as the cutoff. The resulting frequency can be amplified and is known as resonance. There a different types of filters and there are many different approaches to build these with many individual characteristics. We only differ between their cutoff types: - Lowpass filter Allow lower frequencies through to the output stage, cutting higher frequencies. - Highpass filter Allow higher frequencies through to the output stage, cutting lower frequencies. - Bandpass filter - Notch filter - Equalizer Boosts or cuts certain frequency bands in a signal. - Delay Repeats an incoming signal to the output stage making the output sound like and echo of the original input. - Reverb - Flanger - Phaser - Chorus - Unisono - Distortion Manipulates or deforms an incoming signal. - Waveshaping Synthesizer Synthesizers use electronic circuits to generate electric signals. They can be analog, digital or a combination of both. - Subtractive synthesis Most analog and digital synthesizers use this common approach of subtractive synthesis. The essence of these synthesizers are one ore more oscillators with a rich filled frequency spectrum of overtones. These sounds can be filtered by low-pass, band-pass, high-pass or notch filter. - Additive synthesis Instead of filtering overtones like the subtractive synthesis does, we are adding overtones to the base note. - FM synthesis Also called frequency modulation synthesis is an approach which has its origin in telecommunications engineering. The main idea is to create overtones by manipulating a carrier wave's frequency by an other modulating wave. So the carrier wave's frequency gets higher, where the modulation wave's position is positive and gets lower, where the modulation wave'postion is negative. - PM synthesis Phase modulation synthesis is very similar in its acoustic results to frequency modulation. Instead of manipulating the frequency of the carrier wave, its phase gets manipulated by a modulation wave. - Wavetable synthesis A wavetable is mostly a bunch of samples and an oscillator picks a small window of these samples and repeats this part of information. This window can be moved while it's playing. - Granular synthesis The granular synthesis is also based on an existing sample wave file like the wavetable synthesis, but this wave sample is cut it many small pieces also called grains which are between 1 and 50 milliseconds. Mood in games (with examples) - Action game - In action games there are only sounds with a simple background music. These has a catchy melody. That means that you have to avoid big score leaps and that the backgroundmusic has to be singable. To get an exciting mood you have to take fast tempo and take . The key has to be in major. So that the melody sounds happy. - In addition there has to be a soundnotification, when you get a point or removed a line and so on. - E.g. Tetris - The melody of the backgroundmusic is very catchy simple and singable. There are no big score leaps. - Soundsnotification: - -Removing a line: - Here could be a space sound. Something like that - -turning a shape - Here could be a short sound. This sound could be a little tick. - Shooter game - Adventure game - Role playing game - Strategy games - Simulation game Links and sources Synthesizer Introduction If you want to create a game and you think of what your game should sound like you most probably have a pretty clear idea of the atmosphere and the sounds you want to achieve. There are three ways i can think of to get your desired sounds: - search the web for free sounds that suit your needs. - take any kind of recorder (e.g. your mobile phone, mp3-player with recording function, a microphone, ... ), go out and record whatever you think sounds cool and then pimp it up with a recording software. - design your own sounds using a synthesizer. The third and last approach is the one i thought to be the most exciting and here i am now searching the web for a simple synthesizer i can start my little experiment with. My goal (and therefore the goal of this article) is to get an understanding of how i have to manipulate which parts of the synthesizer to get what kind of sound-effects. Preparation I found a nice book about synthesizer programming/sound design[1] which uses native instruments reaktor5 so i decided to go along with that and use their basic synthesizer called soundschool_synth which is available for download here. Unfortunately this is a demo version which runs only for half an houre and you can not save your snapshots but it is designed to demonstrate the basic concepts of sound synthesis and therefore is exactly what i need. Lets start the demo version of Reaktor5, go to file and open ensemble and choose the SoundSchoolAnalog.ens. What you see should look somewhat like this: How does the sound get through the synthesizer? Every synthesizer consists of three to four basic elements to shape a sound: First of all a sound has to be generated. Responsible for that is the Oscillator. You can choose between some basic wave-shapes like the sine-wave, sawtooth, or rectangle. Try them out and hear the differences. Since in our synthesizer we have two oscillators, the generated sound-waves have to be mixed. For that purpose every synthesizer with more than only one oscillator needs a Mixer. The resulting signal is a waveform-combination which can already include a beat and /or an interval. At this point the generation of sound is completed and we come to the elements that do its modulation. After passing through the mixer the next station of our sound-wave is the Filter. Here parts of the frequencies get cut off (filtered) which adds a different timbre to the sound. Try out the different filter-characteristics and play with the cutoff-knob and you'll hear how the timbre of the sound changes. The third thing we want to be able to change is the way sounds fade in and/or out. This happens in the Amplifier. In this synthesizer, just like in most other ones, the amp is not directly visible but it is controlled by the Amp envelope in which you find 4 knobs: A = attack, D = decay, S = sustain, R = release. Changing their values you can directly hear (and see in the graphic below) what happens to the progression of the sound. All the other components basically have the purpose of changing, regulating and modifying those four elements. So lets follow the way of the sound and try to get a deeper understanding of what really happens and which design opportunities we have in each of the different modules of the synthesizer. Oscillator In general there are 6 different waveforms: sine, triangle, sawtooth, rectangle/square, pulse and noise. In our first oscillator we have four different wave-shapes and three controllers to modify them. The first controller is the symm-knob which changes the symmetry of the wave. Try it out!! Did you realize that if you choose the pulse-wave and leave symm on 0 (off) you get a simple rectangle-wave, but by increasing symm you can modify the waves width and therefore turn it into a pulse-wave?! And if you choose the triangle- or sine-wave increasing the symmetry bends it clockwise and turns it into a sawtooth! The next knob is the interval-knob which simply transposes the sound in steps of semitones. The third knob regulates the frequency-modulation: if you turn it up osc1 does not only generate a sound but its amplitude also controls the frequency of osc2. This means that the frequency of osc2 gets higher, where the wave of osc1 is positive and gets lower, where the wave is negative. This feature adds a really important character to a sound: vibrato! Lets try it out with a little experiment: - For osc1 choose the pulse-wave and in the mixer turn osc1 on 0 (off). We don't want to hear the wave, we only want to use it as a modulator and since the pulse-wave switches rapidly from positive to negative it the best wave-form to demonstrate FM. - For osc2 choose the sine-wave and in the mixer turn it on 1 (on). Now turn the FM-knob slowly up. Already now you should hear a vibration in the sound but it will get even more striking! - Now turn the interval of osc1 on -60 and the interval of osc2 on 60. What you should see in the scope should be a wave that switches from if you still didn't understand what is happening there turn osc1 on 1 just to hear the sound we are using for the manipulation: it is a periodic, very short sound that seems just like a beating. Now it should be all clear: When the wave of the beating sound is positive the frequency of our sine and therefore its sound is high and when it is negative the frequency is low and therefore we hear a deep sound. The second oscillator offers the same amount of different parameters which differ just slightly from the first one. Instead of bending a wave like the symm-controller of osc1 does, the puls-sym-controller just adjusts the pulse-width; and instead of the FM controller we have a knob for detuning. Detuning makes only sense if we use both oscillators as sound-generators so that we can detune them against each other. Lets do another small experiment to see which effect we can reach with detuning. - 1. We choose the square/pulse sound-wave for both of the oscillators and in the mixer turn osc1 on 1 (on) and osc2 on 0 (off) - 2. While playing a note on the keyboard, slowly turn osc2 on as well. If you stop at about 0.25 you should be able to nicely see the effect in the scope. - 3. Try out what happens if you turn the detune on. It looks like one wave is faster than the other and as you can hear the tone already seems to gain some color. - 4. Then turn the detune off again and try out the interval. Basically the interval- and the detune-knob do the same: they change the frequency of the wave but whereas the detuning results in just a very slight - change, turning the interval on 12 (or 24) already makes the tone one octave (respectively two) higher. Play around with both, the interval and the detuning and even try out what happens if you combine other waveforms!! What you just experienced is actually the phenomena of beat: it emerges when two oscillators with slightly different frequencies interfere with each other. The sound gets fetter and seems more animated. Sync Sync stands for synchronization and is a tool which, similar to the FM, gives the first oscillator a modulating role: every time its signal reaches its starting point it forces the second oscillator to start over as well. Choose a pulse-wave for osc1 and a sawtooth-wave for osc2. Now increase the interval of osc2 (set it on a value between 1 and 12) and check the sync-box. In the mixer turn osc1 on 0 and osc2 on 1 and you will see how the sawtooth-wave gets interrupted and reset every time the pulse-wave crosses the x-axis. LFO LFO stands for Low-Frequency-Oscillator. The way it works is basically very similar to the FM. The LFOscillator generates a wave, usually with a frequency below 20Hz, which then is used to modify certain components of the synthesizer, such as the inputs of any other, audible oscillator (pitch and symmetry), the filter or the amplifier. Obviously the difference to FM is that you can use this wave to modify any components that are modifiable in a synthesizer. Its rate defines the velocity (in our synthesizer between 0.1 and 10 Hz) and its amount (guess what!? ..) the amount of the modulation. In our synthesizer-model the first three units of the LFO (rate, waveform and symm) describe the characteristics of the genertated wave and the units on the right describe how much and what the wave modulates. Mixer The first two knobs of the mixer are self-explaining: they regulate the amount of signal taken from both of the oscillators. The third controller is responsible for the Ring-modulation. ..This sounds complicated but it is actually really easy: it is basically the multiplication of the two waves (signal of osc1 multiplied by the signal of osc2). Put the mixer for the two oscillators on 0, turn the RingMod on and then try out the different combinations of waves! Filter The filter of our synthesizer consists of a drop-down-menu, from where we can choose the type of filter we want to use, and four controllers. The most important controller is the Cutoff-knob!! It sets the frequency form which the filter starts to operate. This means that if you choose a LowPass-filter only the parts of the signal with a higher frequency than the cutoff-value get filtered and the lower ones pass through unchanged; if you choose a HighPass-filter the signals which are higher than the cutoff-value pass through and the lower ones get filtered and the BandPass-filter filters both, the higher and the lower signals and just lets a band around the cutoff-frequency pass unchanged. At this point one thing we should throw a little glance at is the the slope of a filter. In our synthesizer the filters not only differ in their range but also in their slope. The slope is measured in decibel per octave and tells us how fast the filter starts to pull in. A filter with a slope of 6 dB/oct is also called a 1-pole-filter, with a slope of 12 dB/oct a 2-pole-filter and so on... That's what the number behind the names of our filters mean!! So if you just switch between Lowpass1 and Lowpass4 you will realize that the higher the number of the pole and therefore the slope is the clearer we can hear the filter- effect! The Resonance-controller is also a very important one: it boosts the frequency around the cutoff-value!! If you turn it on completely the filter starts self-oscillating. This is because the frequencies around the cutoff-value get lifted up that much that they result in a sine-wave and all the overtones get cut off! The best way to hear and see this phenomena is by choosing the noise wave, set the filter on LowPass4 and the Resonance on 1. Because we chose the LowPass-filter all frequencies lower than the cutoff-value get filtered and therefore with a high cutoff-value nothing happens. But try turning the cutoff down!! You will see that slowly the random noise signal turns into a sine wave! The Env-value simply describes how much the filter is controlled by the filterEnvelope, whose purpose is to control the chronological sequence of the filter-effect. Again choose the noise wave, put resonance on 1 and the cutoff-frequency on 80. If you now change the ADSR-values of the envelope and put the env-controller on a negative value you will realize that the result is as if you would turn the cutoff from a low frequency up to 80, if you put the env-controller on a positive number the result is as if you would turn the cutoff from a high frequency down to 80. You see that using envelopes has the same effect as playing with controllers and the filterEnvelope is the one that controlles the progression of the timbre. K-track stands for Keyboard-tracking and is responsible for how much the cutoff-frequency follows the note-pitch. Choose the pulse-wave with a lowPass4-filter, set cutoff on 80 and resonance on 0.5. now play a very low note and afterwards a very high note while the k-track is set on 0 (turned off). We can see in the scope that the high note got filtered that much that it almost doesn't have any overtones anymore and turned into a sine-wave while the low note has its own characteristic sound and shape. Most of the times we don't want this to happen but instead we want that the filter filters relative to the frequencies we play. If now we turn the k-track on 1 that is exactly what happens!! Amplifier Like mentioned before the Amplifier is not really visible in the synthesizer but representative for it is the AmpEnvelope. This unit functions just like the envelope of the filter, where you can modify the ADSR-values, only that instead if controlling the progression of the filtration it actually regulates the progression of the finally audible sound. This is one of the most essential tools for sound-design because it defines if a sound is for example short and crisp or long and stretched. As you can figure the AmpEnvelope for the sounds of a car-racing game should look a lot different than the AmpEnvelope for the sounds of a horse-racing game and the sound of the wind has a different progression then the sound of a gunshot!! Don't we love patterns?? So now that we know about all the different components and what they do, instead of the 'trial-and- error'-approach of just playing around with the knobs, hoping to accidentally get a nice sound out of that machine we should get ourselves a pattern (to use for orientation, obviously not to stolidly stick to it) to achieve our first reasonable results. Here are the steps we should follow: Now we need to find a freeware synthesizer (similar to this one so we can use our pattern!!) and start actually DOING something!! References Author jonnyBlu Finding free Sounds There are many sources of free sounds on the net. This chapter will show you where you can find which sounds and music, and which licences are the right licences for you. Important is also the help you may need as with respect to what kind of mood do you want to create. Or if you want to create some random sound or use musac (music that sucks). Here are a few good sites with many audio samples: This site is good because, you can just search a keyword and listen free to any song. Authors to be edited by GG. 2D Game Development Introduction The simplest games are 2D games. Here you will learn about textures and sprites, how to find free textures and graphics on the internet, how to create menus and help screens for your games and Heads-Up-Display (HUD). More Details Lore ipsum ... Texture Textures come in many formats, some well known such as bmp, gif, jpg or png, some less known like dds, dib oder hdr formats. You need to know about UV coordinates and how they get mapped. Also topics such as texture tiling, transparent textures, and textures are accessed and used in the shader should be discussed. Introduction In the context of 3D modeling a texture map is a bitmap that is applied to a models surface. In combination with shaders it is possible to display nearly every possible face and attribute of nearly any material. The process of texturing is comparable to applying patterned paper to a box. Multitexturing is the use of more than one texture at a time on one model. Texture Coordinates/ UVW Coordinates Every vertex has got a xyz-position and additionally a texture coordinate in the uvw-space (also called uvw-coordinate). The uvw-coordinates are used to how to project a texture to a polygon. In case of a 2d- bitmaptexture like they are normally used in computer games there are just the u and v coordinates needed. In case of mathematical textures (3d noise e.g.) normally the uwv coordinates are needed. - The uv coordinate (0,0) is the bitmaps left bottom corner - The uv coordinate (1,1) is the bitmaps right top corner - If uv coordinates <0 or >1: tiling of a texture One Vertex could have more than one texturecoordinate: So there is more than one mapping channel used for displaying overlapping textures to represent more complicated structures. Tiling Tiling is the repetition and the arrangement of the repetition of a texture next to each other, free of overlaps. If the uv coordinate is <0, the texture will be scaled down and repeated. If the uv coordinate is >1, the texture will be scaled up. Games In games there is often just one texture for the whole 3d-model, so there is just one texturecoordinate for one vertex, therefore there is just one mapping channel. How to build textures in Photoshop Why? Photoshop is in this context generally used for the creation and editing of textures for 3d-models. Frequently photographs are used to convey a realistic impression. Example: Lizard's skin -> Dragon texture. How? Transparent Textures and Color Blending Color blending mixes two colors together to produce a third color. The first color is called the source color which is the new color being added. The second color is called the destination color which is the color that already exists (in a render target,. (...) How to create? Alpha Blending - Die transparenten Objekte sind zu sortieren nach ihrem z-Wert im View-Space oder ClipSpace - z-Buffer-Schreiben auf off stellen aber z-Buffer-Lesen auf on - Bei Zeichnen der vorsortierten transparenten Objekte wähle dann die Reihenfolge: BackToFront Seamless Textures Mostly textures have to be tile able. Therefore no edges should be visible if the image is repeated. A great, very useful helper is the Photshop filter->sonstige Filter-> Verschiebungseffekt. It is very useful to create edge free patterns. example how to create seamless textures (in Photoshop CS 4) 1) Get the picture border in the middle. Use the Filter • Sonstige Filter • Verschiebungseffekt. The value should be the half length of the edge. Do not forget the option "Durch verschobenen Teil ersetzen"!! Now you have to retouch the resulting edges. Typical tools for retouching Copy and Paste of certain bitmap sections and mask-using Stamp and Brush 2) You have to do this a second time, because there are edges at the sides of the picture. Mark the mid-points of the sides and use the filter "Verschiebungseffekt" a second time. Move the picture by a third or a quarter of the edge length. Now the marks and edges are somewhere in the pictures center. Here you have to do the last retouching. Height information/Bump maps It is a little complicated to get height information from a picture, also not every photo is suitable to get its height-information and to get a bump map. Here you find a tutorial how to do it: unter 2) Relief-Information aus dem Bild gewinnen Galileodesign Textures in XNA The following nice tutorial how to do it you can find here : Tutorials texture = Content.Load<Texture2D> ("riemerstexture"); This line binds the asset we just loaded in our project to the texture variable! Now we have to define 3 vertices and to store them in an array. We will need to be able to store a 3d Position and a texture coordinate. The vertex format is VertexPositionTexture. We have to declare this variable at the top. VertexPositionTexture[] vertices; Now we define the 3 vertices of our triangle in our SetUpVertices method we create: private void SetUpVertices() { vertices = new VertexPositionTexture; texturedVertexDeclaration = new VertexDeclaration(device, VertexPositionTexture.VertexElements); } For every vertex we define it is position in 3D space in a clockwise way. Next we define which UV-Coordinate in our texture corresponds with the vertex. Remember: the (0,0)texture coordinate us at the top let point of our texture image, the (1,0) at the top right and the (1,1) at the bottom right. Don’t forget to call the SetUpVertices method from your LoadContent method: SetUpVertices (); Now our vertice is set up and our texture image load, now we draw the triangle: In the Draw method add this code after our call to the Clear method: Matrix worldMatrix = Matrix.Identity; effect.CurrentTechnique = effect.Techniques["TexturedNoShading "]; effect.Parameters["xWorld"].SetValue(worldMatrix); effect.Parameters["xView"].SetValue(viewMatrix); effect.Parameters["xProjection"].SetValue(projectionMatrix); effect.Parameters["xTexture"].SetValue(texture); effect.Begin(); foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Begin(); device.VertexDeclaration = texturedVertexDeclaration; device.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 1); pass.End(); } effect.End(); We need to instruct our graphics card to sample the color of every pixel from the texture image. This is exactly what the TexturedNoShading technique of my effect file does, so we set it as active technique. As we didn’t specify any normals for our vectors, we cannot expect the effect to do any meaningful shading calculations. As explained in Series 1, we need to set the World matrix to identity so the triangles will be rendered where we defined them, and View and Projection matrices so the graphics card can map the 3D positions to 2D screen coordinates. Finally, we pass our texture to the technique. Then we actually draw our triangle from our vertices array, as done before in the first series. Running this should already give you a textured triangle, displaying half of the texture image! To display the whole image, we simply have to expand our SetUpVertices method by adding the second triangle: private void SetUpVertices() { vertices = new VertexPositionTexture; vertices[3].Position = new Vector3(10.1f, -9.9f, 0f); vertices[3].TextureCoordinate.X = 1; vertices[3].TextureCoordinate.Y = 1; vertices[4].Position = new Vector3(-9.9f, 10.1f, 0f); vertices[4].TextureCoordinate.X = 0; vertices[4].TextureCoordinate.Y = 0; vertices[5].Position = new Vector3(10.1f, 10.1f, 0f); vertices[5].TextureCoordinate.X = 1; vertices[5].TextureCoordinate.Y = 0; } We simply added another set of 3 vertices for a second triangle, to complete the texture image. Don’t forget to adjust your Draw method so you render 2 triangles instead of only 1: device.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 2, VertexPositionTexture.VertexDeclaration); Now run this code, and you should see the whole texture image, displayed by 2 triangles! Resource: Resources: Sprites What are Sprites? Sprites are two dimensional image.The best known sprite is the mouse pointer. Sprites are not only used in 2D games but sprites are also used in 3D games for example, for splash screens, menus, explosions and fire.These graphics based on the followed coordinate system. Creating Sprites Important to creating a Sprite you should know that the file can be bmp, png or jpg. Most suitable are painting programms for creating Sprites such as Adobe Photoshop. For animations sprite sheets are necessary. Individual animation steps must be arranged in tabular form in the file. Using of Sprites in XNA Games Add Sprites add the image to the project right click on the content file - "add" - new element-->> bitmap -->> you can draw in visual studio your own bitmap graphic - existing element-->> ..select a graphic on your own data structure Let's create a few Texture2D objects to store our images. Add the following two lines of code as instance variables to our game's main class: Texture2D landscape; Texture2D star; load the images into our texture objects. In the LoadContent() method, add the following lines of code: landscape = Content.Load<Texture2D>("landscape1"); // name of your images star = Content.Load<Texture2D>("star"); Using SpriteBatch SpriteBatch is the most important class of 2D drawing. The class contains methods for drawing sprite onto the screen. SpriteBatch have many usefull methods you can find all about these class by msdna libary. The standard template of Visual Studio already has added a SpriteBatch object. the instance variables in the main: SpriteBatch spriteBatch; a reference to this SpriteBatch class in the LoadContent() method: protected override void LoadContent() { // Create a new SpriteBatch spriteBatch = new SpriteBatch(GraphicsDevice); } method Draw()-->important drawing with SpriteBatch[1] - SpriteBatch.Draw (Texture2D, Rectangle, Color); - SpriteBatch.Draw (Texture2D, Vector, Color); more about SpriteBatch.Draw protected override void Draw(GameTime gameTime) { graphics.GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); spriteBatch.Draw(landscape, new Rectangle(0, 0, 800, 500), Color.White); spriteBatch.Draw(star, new Vector2(350, 380), Color.White);//normal spriteBatch.End(); base.Draw(gameTime); } Make Sprites smaller /bigger /semitransparent and/or rotate SpriteBatch.Draw must be overloaded to reduce or enlarge or rotate or make transparent Sprites.[2] In the method spriteBatch.Draw() we can give to a color value not only "Color.White" but also RGB and even an alpha value. API:[3] SpriteBatch.Draw Methode (Texture2D, Vector2, Nullable<Rectangle>, Color, Single, Vector2, Single, SpriteEffects, Single) public void Draw ( - Texture2D texture, - Vector2 position, - Nullable<Rectangle> sourceRectangle, - Color color,======>//this value can have an alpha value for transparent - float rotation,====>//this value is the radius at which the graphic is rotate - Vector2 origin,===>//this value is the point at which the graphic is rotate - float scale,======>//this value is important to reduce or enlarge sprites - SpriteEffects effects, - float layerDepth ) more about the parameters find here spriteBatch.Draw(star,new Vector2(350,380),Color.White);//normal spriteBatch.Draw(star,new Vector2(500,(380+(star.Height/2))),null,Color.White,0.0f,new Vector2(0,0), 0.5f,SpriteEffects.None,0.0f);//small spriteBatch.Draw(star,new Vector2(200,(380-(star.Height/2))),null,Color.White,0.0f,new Vector2(0,0), 1.5f,SpriteEffects.None,0.0f);//bigger spriteBatch.Draw(star,new Vector2(650,380),null,Color.White,1.5f,new Vector2(star.Width/2,star.Height/2), 1.0f,SpriteEffects.None,0.0f);//rotate spriteBatch.Draw(star,new Vector2(50,380),new Color(255,255,255,100));//semitransparent Animated Sprites First, make a sprite sheet in which a motion sequence is shown for example go, jump, bend, run .. Next add a new class named AnimateSprite and add the follow variables. public Texture2D Texture; // texture private float totalElapsed; // elapsed time private int rows; // number of rows private int columns; // number of columns private int width; // width of a graphic private int height; // height of a graphic private float animationSpeed; // pictures per second private int currentRow; // current row private int currentColumn; // current culmn The class consists of three methods: LoadGraphic (loading of the texture and set the variable), Update (for updating or moving animation) and Draw (to draw the sprite). LoadGraphic In this method, the entire variable and the texture are assigned. public void LoadGraphic( Texture2D texture, int rows, int columns, int width, int height, int animationSpeed ) { this.Texture = texture; this.rows = rows; this.columns = columns; this.width = width; this.height = height; this.animationSpeed = (float)1 / animationSpeed; totalElapsed = 0; currentRow = 0; currentColumn = 0; } Update Here, the animation is updated. public void Update(float elapsed) { totalElapsed += elapsed; if (totalElapsed > animationSpeed) { totalElapsed -= animationSpeed; currentColumn += 1; if (currentColumn >= columns) { currentRow += 1; currentColumn = 0; if (currentRow >= rows) { currentRow = 0; } } } } Draw Here is the current frame is drawn. public void Draw(SpriteBatch spriteBatch, Vector2 position, Color color) { spriteBatch.Draw( Texture, new Rectangle((int)position.X, (int)position.Y, width, height), new Rectangle( currentColumn * width, currentRow * height, width, height), color ); } } Using in Game add Code to class Game1 main: AnimateSprite starAnimate; LoadContent: starAnimate = new AnimateSprite(); starAnimate.LoadGraphic(Content.Load<Texture2D>(@"spriteSheet"), 3, 4, 132, 97, 4); Update: starAnimate.Update((float)gameTime.ElapsedGameTime.TotalSeconds); Draw: starAnimate.Draw(spriteBatch, new Vector2(350, 380), Color.White); Drawing Textfonts add the Font to the project right click on the content file - "add" - "new element.." - SpriteFont This file is an XML file, in which font, font size, font effects (bold, italics, underline), letter spacing and characters to use are given. From these data, XNA created the bitmap font. To use German characters have to set the end value to 255.[7] the instance variables in the main: SpriteFont font; in the LoadContent() method: font = Content.Load<SpriteFont>("SpriteFont1"); //name of the Sprite(Look Content) in the Draw() method: spriteBatch.DrawString(font, "walking Star!", new Vector2(50, 100), Color.White); Authors SuSchu -- Susan Schulze Usefull Websites References - ↑ - ↑ - ↑ - ↑ - ↑ - ↑ - ↑ Finding free Textures and Graphics Where do I find textures and graphics on the internet? And how do I find the kind of graphics I need? Also, important to consider: Under what licence are these graphics? What are the constraints for my software, such that I can use them? Where do I find 'for-sale' graphics, or where can I hire a designer to create custom graphics for my game? Authors Ich würde gerne dieses Thema bearbeiten : Rayincarnation Menu and Help Ich würde gerne dieses Thema bearbeiten : Rayincarnation, thonka Heads-Up-Display A Heads-Up-Display (short HUD) is any transparent display that presents information without requiring users to look away from their usual viewpoints. The origin of the name stems from the modern aircraft pilots being able to view information with heads "up" and looking forward, instead of angled down looking at lower instruments. Although they were initially developed for military aviation, HUDs are now used in commercial aircraft, automobiles, and even in todays game design. There the HUD relays information to the player as part of a game's user interface. This article will feature examples for HUD elements and XNA templates for some of these basic components. Since good sprites are really important for creating a great looking HUD, designing these with professional image processing applications, such as Gimp or Photoshop is vital. Developing the skills will not be part of this article. Introduction Application There are many different types of information that can be displayed using a HUD. Below is an outline of the most important stats displayed on video game HUDs Health & lives Health is of extreme importance. Hence this is one of the most important HUD Stats on display. This contains information about the player's character or about NPC's, such as allies and enemies. RTS games (e.g. Starcraft) usually display the health level of all units that are visible on screen. In many action oriented games (first- or third-person shooters) the screen flashes briefly, when the player is attacked, and shows arrows indicating the direction the threat came from. Weapons & items Most action games (first- and third-person shooters in particular) show information about the weapons currently used, ammunition left, other weapons, objects or items that are available. Menus Menus for different game related aspects (e.g. start game, exit game or change settings). Time This contains timer counting up or down to display information about certain events (e.g. end of round), records such as lap times or the length of time a player can last in survival based game. HUDS can be used to display in-game time (time, day, year within the game) or even show real time. Context-sensitive Information This contains information that are only shown when necessary or important (e.g. tutorial messages, one/off abilities, subtitles or action events). Game progression This contains information about the player's current game progress (e.g. stats on a gamer's progress within one particular task or quest, accumulated experience points or a gamer's current level). It also includes information about the player's current task. Mini-maps, Compass, Quest-Arrow Games are all about reaching objectives, so HUDs must clearly state them, either in the form of a compass or quest arrow. A small map of the area that can act like a radar, showing the terrain, allies and/or enemies, locations like safe houses and shops or streets. Speedometer Used in most games that feature drivable vehicles. Usually shown only when driving one of these. Cursor & Crosshair The crosshair indicates the direction the player is pointing or aiming to. Examples Less is more In order to increase realism information normally displayed using a HUD can be instead disguised as part of the scenery or part of the vehicle the player is using. For example, when the player is driving a car that can sustain a certain number of hits, a smoke trail or fire might appear from the car to indicate that the car is seriously damaged and will break down soon. Wounds and bloodstains may sometimes appear on injured characters who may also limp or breathe heavily to indicate that they are injured. In some cases, no HUD is displayed at all. Leaving the player to interpret the auditory and visual cues in the game world creates a more intense athmosphere. Text in HUD Every font installed on your computer can be used to display text in your HUD. Therefore the font has to be added as an "Existing file" to the project in Visual Studio. Afterwards a .spritefont (XML) file can be found in the content folder of your project. There all parameters, such as style, size or kerning, can be easily configured. SpriteFont spriteFont = contentManager.Load<SpriteFont>("Path//Fontname"); Displaying fonts spriteBatch.DrawString(spriteFont, textLabel + ": " + textValue, position, textColor); (Semi-)Transparency Color myTransparentColor = new Color(0, 0, 0, 127); Background Rectangle rectangle = new Rectangle(); rectangle.Width = spriteFont.MeasureString(text).X + 10; rectangle.Height = spriteFont.MeasureString(text).Y + 10; Texture2D texture = new Texture2D(graphicsDevice, 1, 1); texture.SetData(new Color[] {color}); spriteBatch.Draw(texture, rectangle, color); Images in HUD Since there is no concept of drawing on canvas elements, images or sprites are an important element for creating HUDs. XNA supports many different image formats, such as .jpeg or .png (including transparency). contentManager.Load<Texture2D>("Path//Filename") or you could try this one : contentManager.Load<Texture2D>(@"Path/Filename") With this aproach we use the default "content" folder and the "doubled" ("//") slash is not necessary. Displaying images spriteBatch.Draw(image, position, null, color, 0 , new Vector2(backgroundImage.Width/2, backgroundImage.Height/2), scale, SpriteEffects.None, 0); Components The following components are templates that are ready to use. They can be easily customized to fit the individual requirements. Text Information This component displays a text field. It can be used to display a big variety of information, such as time, scores or objectives. In order to increase readability a semi transparent background is displayed behind the text. Class variables private SpriteBatch spriteBatch; private SpriteFont spriteFont; private GraphicsDevice graphicsDevice; private Vector3 position; private String textLabel; private String textValue; private Color textColor; private bool enabled; Constructor /// <summary> /// Creates a new TextComponent for the HUD. /// </summary> /// <param name="textLabel">Label text that is displayed before ":".</param> /// <param name="position">Component position on the screen.</param> /// <param name="spriteBatch">SpriteBatch that is required to draw the sprite.</param> /// <param name="spriteFont">Font that will be used to display the text.</param> /// <param name="graphicsDevice">Graphicsdevice that is required to create the semi transparent background texture.</param> public TextComponent(String textLabel, Vector2 position, SpriteBatch spriteBatch, SpriteFont spriteFont, GraphicsDevice graphicsDevice) { this.textLabel = textLabel.ToUpper(); this.position = position; this.spriteBatch = spriteBatch; this.spriteFont = spriteFont; this.graphicsDevice = graphicsDevice; } Enable /// <summary> /// Sets whether the component should be drawn. /// </summary> /// <param name="enabled">enable the component</param> public void Enable(bool enabled) { this.enabled = enabled; } Update /// <summary> /// Updates the text that is displayed after ":". /// </summary> /// <param name="textValue">Text to be displayed.</param> /// <param name="textColor">Text color.</param> public void Update(String textValue, Color textColor) { this.textValue = textValue.ToUpper(); this.textColor = textColor; } Draw /// <summary> /// Draws the TextComponent with the values set before. /// </summary> public void Draw() { if (enabled) { Color myTransparentColor = new Color(0, 0, 0, 127); Vector2 stringDimensions = spriteFont.MeasureString(textLabel + ": " + textValue); float width = stringDimensions.X; float height = stringDimensions.Y; Rectangle backgroundRectangle = new Rectangle(); backgroundRectangle.Width = (int)width + 10; backgroundRectangle.Height = (int)height + 10; backgroundRectangle.X = (int)position.X - 5; backgroundRectangle.Y = (int)position.Y - 5; Texture2D dummyTexture = new Texture2D(graphicsDevice, 1, 1); dummyTexture.SetData(new Color[] { myTransparentColor }); spriteBatch.Draw(dummyTexture, backgroundRectangle, myTransparentColor); spriteBatch.DrawString(spriteFont, textLabel + ": " + textValue, position, textColor); } } Meter Information This component displays a round instrument. It can be used to display a big variety of information, such as speed, rounds, fuel, height/depth, angle or temperature. The background image is displayed at the passed position. The needle image is rotated accordingly to the ratio between maximum and current value. The rotation angle is interpolated to create a smooth, life like impression. Class variables private SpriteBatch spriteBatch; private const float MAX_METER_ANGLE = 230; private bool enabled = false; private float scale; private float lastAngle; private Vector2 meterPosition; private Vector2 meterOrigin; private Texture2D backgroundImage; private Texture2D needleImage; public float currentAngle = 0; Constructor /// <summary> /// Creates a new TextComponent for the HUD. /// </summary> /// <param name="position">Component position on the screen.</param> /// <param name="backgroundImage">Image for the background of the meter.</param> /// <param name="needleImage">Image for the neede of the meter.</param> /// <param name="spriteBatch">SpriteBatch that is required to draw the sprite.</param> /// <param name="scale">Factor to scale the graphics.</param> public MeterComponent(Vector2 position, Texture2D backgroundImage, Texture2D needleImage, SpriteBatch spriteBatch, float scale) { this.spriteBatch = spriteBatch; this.backgroundImage = backgroundImage; this.needleImage = needleImage; this.scale = scale; this.lastAngle = 0; meterPosition = new Vector2(position.X + backgroundImage.Width / 2, position.Y + backgroundImage.Height / 2); meterOrigin = new Vector2(52, 18); } Enable /// <summary> /// Sets whether the component should be drawn. /// </summary> /// <param name="enabled">enable the component</param> public void Enable(bool enabled) { this.enabled = enabled; } Update /// <summary> /// Updates the current value of that should be displayed. /// </summary> /// <param name="currentValue">Value that to be displayed.</param> /// <param name="maximumValue">Maximum value that can be displayed by the meter.</param> public void Update(float currentValue, float maximumValue) { currentAngle = MathHelper.SmoothStep(lastAngle, (currentValue / maximumValue) * MAX_METER_ANGLE, 0.2f); lastAngle = currentAngle; } Draw /// <summary> /// Draws the MeterComponent with the values set before. /// </summary> public void Draw() { if (enabled) { spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.SaveState); spriteBatch.Draw(backgroundImage, meterPosition, null, Color.White, 0, new Vector2(backgroundImage.Width / 2, backgroundImage.Height / 2), scale, SpriteEffects.None, 0); //Draw(backgroundImage, position, Color.White); spriteBatch.Draw(needleImage, meterPosition, null, Color.White, MathHelper.ToRadians(currentAngle), meterOrigin, scale, SpriteEffects.None, 0); spriteBatch.End(); } } Radar Information This component displays a radar map. It can be used to display a big variety of information, such as objective or enemies. The background image is displayed at the passed position. Dots representing objects in the map are displayed accordingly to an array of positions. Class variables private SpriteBatch spriteBatch; GraphicsDevice graphicsDevice; private bool enabled = false; private float scale; private int dimension; private Vector2 position; private Texture2D backgroundImage; public float currentAngle = 0; private Vector3[] objectPositions; private Vector3 myPosition; private int highlight; Constructor /// <summary> /// Creates a new RadarComponent for the HUD. /// </summary> /// <param name="position">Component position on the screen.</param> /// <param name="backgroundImage">Image for the background of the radar.</param> /// <param name="spriteBatch">SpriteBatch that is required to draw the sprite.</param> /// <param name="scale">Factor to scale the graphics.</param> /// <param name="dimension">Dimension of the world.</param> /// <param name="graphicsDevice">Graphicsdevice that is required to create the textures for the objects.</param> public RadarComponent(Vector2 position, Texture2D backgroundImage, SpriteBatch spriteBatch, float scale, int dimension, GraphicsDevice graphicsDevice) { this.position = position; this.backgroundImage = backgroundImage; this.spriteBatch = spriteBatch; this.graphicsDevice = graphicsDevice; this.scale = scale; this.dimension = dimension; } Enable /// <summary> /// Sets whether the component should be drawn. /// </summary> /// <param name="enabled">enable the component</param> public void Enable(bool enabled) { this.enabled = enabled; } Update /// <summary> /// Updates the positions of the objects to be drawn and the angle for the rotation of the radar. /// </summary> /// <param name="objectPositions">Position of all objects to be drawn.</param> /// <param name="highlight">Index of the object to be highlighted. Object with a smaller or a /// greater index will be displayed in a smaller size and a different color.</param> /// <param name="currentAngle">Angle for the rotation of the radar.</param> /// <param name="myPosition">Position of the player.</param> public void update(Vector3[] objectPositions, int highlight, float currentAngle, Vector3 myPosition) { this.objectPositions = objectPositions; this.highlight = highlight; this.currentAngle = currentAngle; this.myPosition = myPosition; } Draw /// <summary> /// Draws the RadarComponent with the values set before. /// </summary> public void Draw() { if (enabled) { spriteBatch.Draw(backgroundImage, position, null, Color.White,0 , new Vector2(backgroundImage.Width / 2, backgroundImage.Height / 2), scale, SpriteEffects.None, 0); for(int i = 0; i< objectPositions.Length; i++) { Color myTransparentColor = new Color(255, 0, 0); if (highlight == i) { myTransparentColor = new Color(255, 255, 0); } else if(highlight > i) { myTransparentColor = new Color(0, 255, 0); } Vector3 temp = objectPositions[i]; temp.X = temp.X / dimension * backgroundImage.Width / 2 * scale; temp.Z = temp.Z / dimension * backgroundImage.Height / 2 * scale; temp = Vector3.Transform(temp, Matrix.CreateRotationY(MathHelper.ToRadians(currentAngle))); Rectangle backgroundRectangle = new Rectangle(); backgroundRectangle.Width = 2; backgroundRectangle.Height = 2; backgroundRectangle.X = (int) (position.X + temp.X); backgroundRectangle.Y = (int) (position.Y + temp.Z); Texture2D dummyTexture = new Texture2D(graphicsDevice, 1, 1); dummyTexture.SetData(new Color[] { myTransparentColor }); spriteBatch.Draw(dummyTexture, backgroundRectangle, myTransparentColor); } myPosition.X = myPosition.X / dimension * backgroundImage.Width / 2 * scale; myPosition.Z = myPosition.Z / dimension * backgroundImage.Height / 2 * scale; myPosition = Vector3.Transform(myPosition, Matrix.CreateRotationY(MathHelper.ToRadians(currentAngle))); Rectangle backgroundRectangle2 = new Rectangle(); backgroundRectangle2.Width = 5; backgroundRectangle2.Height = 5; backgroundRectangle2.X = (int)(position.X + myPosition.X); backgroundRectangle2.Y = (int)(position.Y + myPosition.Z); Texture2D dummyTexture2 = new Texture2D(graphicsDevice, 1, 1); dummyTexture2.SetData(new Color[] { Color.Pink }); spriteBatch.Draw(dummyTexture2, backgroundRectangle2, Color.Pink); } } Bar Information This component displays a bar. I can be used to display any kind of information that is related to percentages (e.g. fuel, health or time left to reach an objective). The current percent value is represented by the length of the colore bar. Accordingly to the displayed value, the color changes from green over yellow to red. Class variables private SpriteBatch spriteBatch; private GraphicsDevice graphicsDevice; private Vector2 position; private Vector2 dimension; private float valueMax; private float valueCurrent; private bool enabled; Constructor /// <summary> /// Creates a new Bar Component for the HUD. /// </summary> /// <param name="position">Component position on the screen.</param> /// <param name="dimension">Component dimensions.</param> /// <param name="valueMax">Maximum value to be displayed.</param> /// <param name="spriteBatch">SpriteBatch that is required to draw the sprite.</param> /// <param name="graphicsDevice">Graphicsdevice that is required to create the semi transparent background texture.</param> public BarComponent(Vector2 position, Vector2 dimension, float valueMax, SpriteBatch spriteBatch, GraphicsDevice graphicsDevice) { this.position = position; this.dimension = dimension; this.valueMax = valueMax; this.spriteBatch = spriteBatch; this.graphicsDevice = graphicsDevice; this.enabled = true; } Enable /// <summary> /// Sets whether the component should be drawn. /// </summary> /// <param name="enabled">enable the component</param> public void enable(bool enabled) { this.enabled = enabled; } Update /// <summary> /// Updates the text that is displayed after ":". /// </summary> /// <param name="valueCurrent">Text to be displayed.</param> public void update(float valueCurrent) { this.valueCurrent = valueCurrent; } Draw /// <summary> /// Draws the BarComponent with the values set before. /// </summary> public void Draw() { if (enabled) { float percent = valueCurrent / valueMax; Color backgroundColor = new Color(0, 0, 0, 128); Color barColor = new Color(0, 255, 0, 200); if (percent < 0.50) barColor = new Color(255, 255, 0, 200); if (percent < 0.20) barColor = new Color(255, 0, 0, 200); Rectangle backgroundRectangle = new Rectangle(); backgroundRectangle.Width = (int)dimension.X; backgroundRectangle.Height = (int)dimension.Y; backgroundRectangle.X = (int)position.X; backgroundRectangle.Y = (int)position.Y; Texture2D dummyTexture = new Texture2D(graphicsDevice, 1, 1); dummyTexture.SetData(new Color[] { backgroundColor }); spriteBatch.Draw(dummyTexture, backgroundRectangle, backgroundColor); backgroundRectangle.Width = (int)(dimension.X*0.9); backgroundRectangle.Height = (int)(dimension.Y*0.5); backgroundRectangle.X = (int)position.X + (int)(dimension.X * 0.05); backgroundRectangle.Y = (int)position.Y + (int)(dimension.Y*0.25); spriteBatch.Draw(dummyTexture, backgroundRectangle, backgroundColor); backgroundRectangle.Width = (int)(dimension.X * 0.9 * percent); backgroundRectangle.Height = (int)(dimension.Y * 0.5); backgroundRectangle.X = (int)position.X + (int)(dimension.X * 0.05); backgroundRectangle.Y = (int)position.Y + (int)(dimension.Y * 0.25); dummyTexture = new Texture2D(graphicsDevice, 1, 1); dummyTexture.SetData(new Color[] { barColor }); spriteBatch.Draw(dummyTexture, backgroundRectangle, barColor); } } Useful links UI game design HUD design in Photoshop Resources References - Beginning XNA 3.0 Game Programming: From Novice to Professional; Alexandre Santos Lobão, Bruno Evangelista, José Antonio Leal de Farias, Riemer Grootjans, 2009 - Microsoft® XNA Game Studio 3.0 UNLEASHED; Chad Carter; 2009 - Microsoft® XNA Game Studio Creator's Guide: An Introduction to XNA Game Programming; Stephen Cawood, Pat McGee, 2007 Authors Christian Höpfner 3D Game Development Introduction Many games require 3D. This used to be very complicated, but has gotten significantly easier with the XNA framework. Still, you need to learn about many new concepts. We first introduce primitive objects, such as vertices and index buffers. Essential for creating 3D models is 3D Modelling Software and also finding free models. Importing models into XNA is also not trivial. Related to 3D are concepts related to camera and lighting, as well as shaders and effects. Also topics such as skybox and landscape modelling are covered here. Lastly, we introduce some 3D engines. More Details Lorem ipsum ... Primitive Objects Points, lines, and triangles are the primitive objects of the graphics card. Everything else is made up of one of these. Hence, it is a good idea to start with understanding these, before continuing to delve into more advanced topics. Authors none 3D Modelling Software Manissel681 Finding free Models You don't have to create 3D models from scratch. Most objects you may need have already been created, you only need to find them. For Sketchup and Blender, for instance, there are many available models. So here we show you how to find 3D models, what to worry about, especially with respect to licencing. 3D Models Yobi3D - 3D model search engine - See search results in 3D 3dtotal for example: - vehicles - monsters - weapons - architecture - low poly models - declaration of file extension and file size - different views of the 3D models artist-3d for example: - vehicles - architecture - weapons - characters - Ranking - thumbnail view - Choice between a list with thumbnails or only thumbnails 3dmodelfree for example: - interior - outdoor - good structure NASA - only NASA models 3dcar-gallery - only vehicles archive3d for example: - interior - character and related - vehicles - animals - outdoor - good variety gfxfree for example: - vehicles - architecture - character/animals - different views of the 3D models scifi3d - SciFi models for example: - Star Wars - Star Trek - Blade Runner 3Ds Max Maya Cinema 4D SKETCHUP BLENDER Websiteranking 60 excellent free 3D model websites Authors sfittje Importing Models Author FixSpix Cinema4D Simple .fbx file export Actually for XNA it is insignificant whether the file is a .fbx or a .x file. It is only important for the modeler concerning the software he is using. --> Introduction References Author sfittje Maya whith each other. Is it possible to export .x files from Maya? How to Export (.x)? If you use the cvXporter, here are a few steps to use this tool. Click Here is an example how to manage the problem if your plug-in doesnt work! Click Please do only these steps if the fbx and .x importer doesnt work. I will talk more about the fbx importer later. How to Export (.fbx)? The fbx format is the simplest way to export a file which can be used in XNA. Maya doesnt? --> Introduction References Author FixSpix 3ds Max wiht the animations and textures of your model. Exporting in adequate formats How to export? - whith the complete hierarchy of these. The DirectX SDK Viewer is a nice tool to check your .x file. There you have the possibility to see the normals, the textures... on your model from the .x file. DirectX SDK How to import? -->Introduction References Author FixSpix Blender? No blender cannot export to .X without a plugin. Exporting in adequate formats How to export(.x)? - File--> - Export--> - DirectX(.x) The result is a nice .x file from your model. How to export(.fbx)? Here we are, again the only solution is a plugin. What else? Blender supports the script language Python, here is a nice script for the export to xna. How to import? -->Introduction References Author FixSpix Sketchup Simple .fbx file export --> Introduction References Author sfittje Summary What we learned in this chapter? The diffrence Now you have to measure and to decide which is the best approach for you. But please be aware of importing only .fbx or .x files into your program! Help and solutions Here you can find help for the topic "importing models": - On page 282 - On page 261 References Camera Introduction Coordinate Systems The math helper functions MathHelper.ToDegree(radians) and MathHelper.ToRadians(degrees) can help you by the conversion. Matrices and Spaces If you want to visualize your 3D content for the user on a 2D Screen, you need to get a camera to work. You do this by using the above mentioned View and Projection Matrix which transforms the data for your needs. The View Matrixostion, camTarget, camUpVector); The Projection Matrix(fieldOfView, Introduction(); Author Manissel681 Shaders and Effects There are pixel shaders and vertex shaders. You first need to understand the difference, how they work and what they can do for you. Then you need to learn about the shader language HLSL, its syntax and how to use it. Especially how to call it from the program. Finally, you will also learn about the program called FXComposer, which shows you how to load effects, what their HLSL code is, how to modify it, and how to export and use the finished shaders in your game. Development of shaders In the past computer generated graphics were generated by a so called fixed-function pipeline (FFP) in the video hardware. This pipeline offered only a reduced set of operations in a certain order. This proved to be not flexible enough for the growing complexity of graphical applications like games. That is why a new graphics pipeline was introduced to replace this hard-coded approach. The new model still has some fixed compentents, but it introduced so called shaders. Shaders do the main work in rendering a scene on the screen and can be easily exchanged, programmed and adapted to the programmer's needs. This approach offers full creativity but also more responsibility to the graphics programmer. There are two kinds of shaders: the vertex shader and the pixel shader (in OpenGL called fragment shader). And with DirectX 10 and OpenGL 3.2 a third kind of shader was introduced: the Geometry shader that offers even further possibilities by creating additional, new vertices based on the existing ones. Shaders describe and calculate the properties of either vertices or pixels. The vertex shader deals with vertices and their properties: their position on the screen, each vertice's texture coordinates, its color and so on. The pixel shader deals with the result of the vertex shader (rasterized fragments) and describes the properties of a pixel: its color, its depth compared to other pixels on the screen (z-depth) and its alpha value. Types of shaders and their function Nowadays there are three types of shaders that are executed in a specific order to render the final image. The scheme shows the roles and the order of each shader in the process of sending data from XNA to the GPU and finally rendering an image. This process is called the GPU workflow: Vertex Shader Vertex shaders are special functions that are used to manipulate the vertex data by using mathematical operations. To do this the vertex shader takes vertex data from XNA as input. That data contains the position of the vertex in the three dimensional world, its color (if it has a color), its normal vector and its texture coordinates. Using the vertex shader this data can be manipulated, but only the values are changed, not the way the data is stored. The most basic function of every vertex shader is transforming the position of each vertex from the three dimensional position in the virtual space to the two dimensional position on the screen. This is done by matrix multiplication with the view, world and projection matrix. The vertex shader also calculates the depth of the vertex on the two dimensional screen (z-buffer depth), so that the original three dimensional information about the depth of objects is not lost and vertices that are closer to the viewer are displayed in front of vertices that are behind other vertices. The vertex shader can manipulate all the input properties such as position, color, normal vectors and texture coordinates, but it cannot create new vertices. But vertex shaders can be used to change the way the object is seen. Fog, motion blur and heat wave effects can all be simulated with vertex shaders. Geometry Shader The next step in the pipeline is the new but only optional geometry shader. The geometry shader can add new vertices to a mesh based on the vertices that were already sent to the GPU. One way to use this is called geometry tesselation which is the process of adding more triangles to an existing surface based on certain procedures to make it more detailed and better looking. Using a geometry shader instead of an high-poly model can save a lot of CPU time, because not all of the vertices that are supposed to be later displayed on the screen have to be processed by the CPU and sent to the GPU. In some cases the polygon count can be reduced to half or a quarter. If no geometry shader is used the output of the vertex shader goes straight to the rasterizer. If a geometry shader is used, the output also goes to the rasterizer after adding the new vertices. Pixel / Fragment Shader The rasterizer takes the processed vertices and turns them into fragments (pixel-sized parts of a polygon). Whether a point, line, or polygon primitive, this stage produces fragments to "fill in" the polygons and interpolate all the colors and texture coordinates so that the appropriate value is assigned to each fragment. After that the pixel shader (DirectX uses the term "pixel shader," while OpenGL uses the term "fragment shader") is called for each of these fragements. The Pixel shader calculates the color of an individual pixels and is used for diffuse shading (scene lightning), bump mapping, normal mapping, specular lighting and simulating reflections. Pixel shaders are generally used to provide surfaces with effects they have in real life. The result of the pixel shader is a pixel with a certain color that is passed to the Output Merger and finally drawn onto the screen. So the big difference between vertex and pixel shaders is that vertex shaders are used to change the attributes of the geometry (the vertices) and transform it to the 2D screen. The pixel shaders in contrast are used to change the appearance of the resulting pixels with the goal to create surface effects. Programming with BasicEffect Class in XNA Basic Class XNA is very useful and effective if you want to make a simple effect and lighting for your model. It works like fixed function pipeline(FFP) which offered a limited and unflexible operation. To use BasicEffect class we need first to declare an instance of the BasicEffect at the top of the game class. BasicEffect basicEffect; This instance should be initiliazed inside Initiliaze() methode because we want to initiliaze it when the program starts. If we do this in another place that could be lead into performance problem. basicEffect = new BasicEffect(graphics.GraphicsDevice, null); Next, we implement some method in the game class to draw a model with BasicEffect class. With the BasicEffect class, we don't have to create EffectParameter object for each variable. Instead, we can just assign these value into BasicEffect' properties. private void DrawWithBasicEffect (Model model, Matrix world, Matrix view, Matrix proj){ basicEffect.World = world; basicEffect.View = view; basicEffect.Projection = proj; basicEffect.LightingEnabled = true; basicEffect.DiffuseColor = new Vector3(1.0f, 1.0f, 1.0f); basicEffect.SpecularColor = new Vector3(0.2f, 0.2f, 0.2f); basicEffect.SpecularPower = 5.0f; basicEffect.AmbientLightColor = new Vector3(0.5f, 0.5f, 0.5f); basicEffect.DirectionalLight0.Enabled = true; basicEffect.DirectionalLight0.DiffuseColor = Vector3.One; basicEffect.DirectionalLight0.Direction = Vector3.Normalize(new Vector3(1.0f, 1.0f, -1.0f)); basicEffect.DirectionalLight0.SpecularColor = Vector3.One; basicEffect.DirectionalLight1.Enabled = true; basicEffect.DirectionalLight1.DiffuseColor = new Vector3(0.5f, 0.5f, 0.5f); basicEffect.DirectionalLight1.Direction = Vector3.Normalize(new Vector3(-1.0f, -1.0f, 1.0f)); basicEffect.DirectionalLight1.SpecularColor = new Vector3(0.5f, 0.5f, 0.5f); } After all necesarry properties have been assigned. Now our model should be drawn with BasicEffect class. Since in a model could be have more than one mesh, we use foreach-loop to iterate each mesh of the model private void DrawWithBasicEffect (Model model, Matrix world, Matrix view, Matrix proj){ .... foreach (ModelMesh meshes in model.Meshes) { foreach (ModelMeshPart parts in meshes.MeshParts) parts.Effect = basicEffect; meshes.Draw(); } } To view our model in XNA, we just call the our methode inside Draw() methode. protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Black); DrawWithBasicEffect(myModel, world, view, proj); base.Draw(gameTime); } Draw texture with BasicEffect Class To draw a texture with BasicEffect class we must enable the alpha property. After that we can assign the texture into the model. basicEffect.TextureEnabled = true; basicEffect.Texture = myTexture; Create transparency with BasicEffect class First we assign the transparency value into basicEffect properties basicEffect.Alpha = 0.5f; then we must tell the GraphicsDevice to enable transparency with this code inside Draw() methode protected void Draw(){ ..... GraphicsDevice.RenderState.AlphaBlendEnable = true; GraphicsDevice.RenderState.SourceBlend = Blend.SourceAlpha; GraphicsDevice.RenderState.DestinationBlend = Blend.InverseSourceAlpha; DrawWithBasicEffect(model,world,view,projection) GraphicsDevice.RenderState.AlphaBlendEnable = false; ..... } Programming your own HLSL Shaders in XNA Shading Languages Shaders are programmable and to do that several variations of a C like high-level programming languages have been developed. The High Level Shading Language (HLSL) was developed by Microsoft for the Microsoft Direct3D API. It uses C syntax and we will use it with the XNA Framework. Other shading languages are GLSL ( OpenGL Shading Language) that is offered since OpenGL 2.0 and Cg ( C for Graphics) another high-level shading language that was developed by Nvidia in collaboration with Microsoft, which is very similar to HLSL. Cg is supported by FX Composer which is discussed later in this article. The High Level Shading Language (HLSL) and its use in XNA Shaders in XNA are written in HLSL and stored in so called effect files with the file extension .fx. It is best to keep all shaders in one separate folder. So create a new folder "Shaders" in the content node of the Solution Explorer in Visual C#. To create a new Effect fx-file, simply right-click on the new "Shaders" folder and select Add → New Item. In the New Item dialog select "Effect File" and give the file a suitable name. The new effect file will already contain some basic shader code that should work, but in this chapter we will write the shader from scratch, so the already generated code can be deleted. Structure of a HLSL Effect-File (*.fx) As already mentioned, HLSL uses C syntax and can be programmed by declaring variables, structs and writing functions. A Shader in HLSL usually consist of four different parts: Variable declarations Variable declarations that contain parameters and fixed constants. These variables can be set from the XNA application that is using the shader. Example: float4 AmbienceColor = float4(0.5f, 0.5f, 0.5f, 1.0f); With this statement a new global variable is declared and assigned. HLSL offers the standard c data types like float, string and struct but also other shader specific data types for Vectors, Matrices, Sampler, Textures and so on. The official Reference: MSDN In the example we declared a 4 dimensional vector that is used to define a color. Colors are represented by 4 values that represent the 4 channels (Red, Green, Blue, Alpha) and have a range from 0.0 to 1.0. Variables can have arbitrary names. Data structures Data structures that will be used by the shaders to input and output data. Usually these are two structures: one for the input that goes into the vertex shader and one for the output of the vertex shader. The output of the vertex shader is then used as the input of the pixel shader. Usually there is no structure needed for the output of the pixel shader, because that is already the end result. If you include a Geometry Shader you need additional structures, but we will just look at the most basic example consisting of a vertex and pixel shader. Structures can have arbitrary names. Example: struct VertexShaderInput { float4 Position : POSITION0; }; This data structure has one variable of the type 4 dimensional vector in it called Position (or any other name). POSITION0 after the variable name is a so called semantic. All the variables in the input and output structs must be identified by semantics. A list can be found in the official HLSL Reference: MSDN Shader functions Implementation of the shader functions and logic behind them. Usually that is one function for the vertex shader and one for the pixel shader. Example: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { return AmbienceColor; } Functions are like in C: They can have parameters and return values. In this case we have a function called PixelShaderFunction (name can be arbitrary) which takes a VertexShaderOutput object as input and returns a value of the semantic COLOR0 and type float4 (4 dimensional vector representing the 4 color channels) Techniques A technique is like the main() method of a shader and tells the graphic card when to use what shader function. Techniques can have multiple passes that use different shader functions, so the resulting image on the screen can be composed with multiple passes. Example: technique Ambient { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_1_1 PixelShaderFunction(); } } This example technique has the name Ambient and just one pass. In this pass the vertex and pixel shader functions are assigned and the shader version (in this case 1.1) is specified. First try: A simple ambient shader The simplest shader is a so called ambient shader that just assigns a fixed color to every pixel of an object so only its outline is seen. Let's implement an ambient shader as a first try. We start with an empty .fx-File that can have an arbitrary filename. The vertex shader needs the three scene matrices to calculate the two dimensional position of a certain vertex on the screen based on the three dimensional coordinates. So we need to define three matrices inside the fx-file as variables: float4x4 WorldMatrix; float4x4 ViewMatrix; float4x4 ProjectionMatrix; float4 AmbienceColor = float4(0.5f, 0.5f, 0.5f, 1.0f); A variable of the type float4x4 is a 4 dimensional matrix. The other variable is a 4 dimensional vector to determine the ambient light color (in this case a gray tone). The color values for the Ambient color are float values that represent the RGBA channels, where the minimum value is 0 and the maximum value is 1. Next we need the input and output structures for the vertex shader: struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; }; Because it is a very simple shader the only data they contain at the moment is the position of the vertex in the virtual 3D space (VertexShaderInput) and the transformed position of the vertex on the two dimensional screen (VertexShaderOutput). POSITION0 is the semantic type of both positions. Now we need to add the shader calculation itself. This is done in two functions. At first the vertex shader function: VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4 worldPosition = mul(input.Position, World); float4 viewPosition = mul(worldPosition, View); output.Position = mul(viewPosition, Projection); return output; } This is the most basic vertex shader function and every vertex shader should look similar. The position that is saved in input is transformed by multiplying it with three scene matrices and then returning it as the result. The input is of the type VertexShaderInput and the output is of the type VertexShaderOutput. The matrix multiplication function that is used (mul) is part of the HLSL language. Now all we need is to give the pixel shader the position that was calculated by the vertex shader and color it with the ambient color (based on the ambient intensity). The pixel shader is implemented in another function that returns the final pixel color with the data type float4 and the semantic type COLOR0: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { return AmbienceColor; } So it should become clear why in the end result every pixel of the object will have the same color: because we don't have any lightning yet in the shader and all the three dimensional information gets lost. To make our shader complete we need a so called technique, which is like the main() method of a shader and the function that is called by XNA when using the shader to render an object: technique Ambient { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_1_1 PixelShaderFunction(); } } A technique has a name (in this case Ambient) which can be called directly from XNA. A technique can also have multiple passes, but in this simple case we just need one pass. In one pass it is exactly defined which function of our shader file is the vertex shader and which function is the pixel shader. We do not use a geometry shader here, because in contrast to the vertex and pixel shader it is just optional. Furthermore it is determined which shader version should be used, because the shader models are continually developed and new features are added. Possible versions are: 1.0 to 1.3, 1.4,2.0, 2.0a, 2.0b, 3.0, 4.0. For the simple ambient lighting we just need version 1.1, but for reflections and other more advanced effects pixel shader version 2.0 is needed. The complete shader code: float4x4 WorldMatrix; float4x4 ViewMatrix; float4x4 ProjectionMatrix; float4 AmbienceColor = float4(0.5f, 0.5f, 0.5f, 1.0f); struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4 worldPosition = mul(input.Position, WorldMatrix); float4 viewPosition = mul(worldPosition, ViewMatrix); output.Position = mul(viewPosition, ProjectionMatrix); return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { return AmbienceColor; } technique Ambient { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_1_1 PixelShaderFunction(); } } Now the shader file is completed and can be saved, we just need to get our XNA application to use it for rendering objects. At first a new global variable of the type Effect has to be defined. Each Effect object is used to reference a shader which is inside a fx-file. Effect myEffect; In the method that is used to load the content from the content folder (like models, textures and so on) the shader file needs to be loaded as well (in this case it is the file Ambient.fx in the folder Shaders): myEffect = Content.Load<Effect>("Shaders/Ambient"); Now the Effect is ready to use. To draw a model with our own shader we need to implement a method for that purpose: private void DrawModelWithEffect(Model model, Matrix world, Matrix view, Matrix projection) { foreach (ModelMesh mesh in model.Meshes) { foreach (ModelMeshPart part in mesh.MeshParts) { part.Effect = myEffect; myEffect.Parameters["World"].SetValue(world * mesh.ParentBone.Transform); myEffect.Parameters["View"].SetValue(view); myEffect.Parameters["Projection"].SetValue(projection); } mesh.Draw(); } } The method takes the model and the three matrices that are used to describe a scene as parameters. It loops through the meshes in the model and then trough the mesh parts in the mesh. For each part it assigns our new myEffect object to a property that is called "Effect" as well. But before the shader is ready to use, we need to supply it with the required parameters. By using the Parameters collection of the myEffect-object we can access the variables that were defined earlier in the Shader file and give them a value. We assign the three main matrices to the equivalent variable in the shader by using the SetValue() method. After that the mesh is ready to be drawn with the Draw() methode of the class ModelMesh. So the new method DrawModelWithEffect() can now be called for every model of the type Model to draw it on the screen using our custom shader! The result can be seen in the picture. As you can see, every pixel of the model has the same color because we have not used any lightning, textures or effects yet. It is also possible to change fixed variables of the shader directly in XNA by using the Parameters collection and the SetValue() method. For example to change the ambient color in the shader in the XNA application the following statement is needed: myEffect.Parameters["AmbienceColor"].SetValue(Color.White.ToVector4()); Diffuse shading Diffuse shading renders an object in the light that is coming from a light emitter and reflects off the object's surface in all directions (it diffuses). It is what gives most objects their shading, so that they have brightly lit parts and darker parts creating a three dimensional effect that was lost in the simple ambient shader. Now we will modify the previous ambient shader to support diffuse shading as well. There are two ways to implement diffuse shading, one way uses the vertex shader the other uses the pixel shader. We will look at the vertex shader variant. We need to add three new variables to the previous ambient shader file: float4x4 WorldInverseTransposeMatrix; float3 DiffuseLightDirection = float3(-1.0f, 0.0f, 0.0f); float4 DiffuseColor = float4(1.0f, 1.0f, 1.0f, 1.0f); The variable WorldInverseTransposeMatrix is another matrix that is needed for the calculation. It is the transpose of the inverse of the world matrix. With the ambient lighting only we did not have to care about the normal vectors of the vertices, but with the diffuse lighting this matrix becomes necessary to transform the normals of a vertex to do lighting calculations. The other two variables are used to define the direction where the diffuse light comes from (first value is X, second value Y and third Z in the 3D space) and the color of the diffuse light that bounces off the surface of the rendered objects. In this case we use simply white color and the light emits in the direction of the x-axis in virtual space. The structures for VertexShaderInput and VertexShaderOutput need some small modification as well. We have to add the following variable to the struct VertexShaderInput to get the normal vector of the current vertex in the vertex shader input: float4 NormalVector : NORMAL0; And we add a variable for the color to the struct VertexShaderOutput, because we will calculate the diffuse shading in the vertex shader, which will result in a color that needs to be passed to the pixel shader: float4 VertexColor : COLOR0; To do the diffuse lighting in the vertex shader we have to add some code to the VertexShaderFunction: float4 normal = normalize(mul(input.NormalVector, WorldInverseTransposeMatrix)); float lightIntensity = dot(normal, DiffuseLightDirection); output.VertexColor = saturate(DiffuseColor * lightIntensity); With this code we transform the normal of a vertex so that it is then relative to where the object is in the world (first new line). In the second line the angle between the surface normal vector and the light that shines on it is calculated. The HLSL language offers a function dot() that calculates the dot product of two vectors, which can be used to measure the angle between two vectors. In this case the angle is equal to the intensity of the light on the surface of the vertex. At last the color of the current vertex is calculated by multiplying the diffuse color with the intensity. This color is stored in the VertexColor property of the VertexShaderOutput struct, which is later passed to the pixel shader. At last we have to change the value that is returned by PixelShaderFunction: return saturate(input.VertexColor + AmbienceColor); It simply takes the color we already calculated in the vertex shader and adds the ambient component to it. The function saturate is offered by HLSL to make sure that a color is within the range between 0 and 1. You might want to make the AmbienceColor component a bit darker so its influence on the final color is not so big. This can also be done by defining an intensity variable that regulates the intensity of a color. But we will keep things short and simple now and discuss that later. The complete shader code: { return saturate(input.VertexColor + AmbienceColor); } technique Diffuse { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_1_1 PixelShaderFunction(); } } That is it for the shader file. To use the new shader in XNA we have to make one addition to the XNA application that uses the shader to render objects: We have to set the WorldInverseTransposeMatrix variable of the shader in XNA. So right in the DrawModelWithEffect method in the part where the other parameters of the object myEffect are set by using SetValue() we have to set the WorldInverseTransposeMatrix. But before setting it, it needs to be calculated. For that we invert and then transpose the world matrix of our application (Which is multiplied with the objects transformation first, so everything is at the right place). Matrix worldInverseTransposeMatrix = Matrix.Transpose(Matrix.Invert(mesh.ParentBone.Transform * world)); myEffect.Parameters["WorldInverseTransposeMatrix"].SetValue(worldInverseTransposeMatrix); That is all that needs to be changed in the XNA code. Now you should have nice diffuse lighting. You can see the result in the pictures. Remember this shader is already using diffuse and ambient lighting, that is why the dark parts of the model are just gray and not black. If we modify the pixel shader to just return the vertex color without adding the ambient light, the scene looks different (second picture): return saturate(input.VertexColor); The dark parts of the model where there is no light are now completely black because they no longer have an ambient component added to them. Texture Shader Applying and rendering textures on an object based on texture coordinates is also done with shaders. To adapt the previous diffuse shader to work with textures we have to add the following variable: texture ModelTexture; sampler2D TextureSampler = sampler_state { Texture = (ModelTexture); MagFilter = Linear; MinFilter = Linear; AddressU = Clamp; AddressV = Clamp; }; ModelTexture is of the HLSL data type texture and stores the texture that should be rendered on the model. Another variable of the type sampler2D is associated to the texture. A sampler tells the graphic card how to extract the color for one pixel from the texture file. The sampler contains five properties: - Texture: Which texture file to use. - MagFilter + MinFilter: Which filter should be used to scale the texture. Some filters are faster than others, other filters look better. Possible values are: Linear, None, Point, Anisotropic - AddressU + AddressV: Determine what to do when the U or V coordinate is not in the normal range (between 0 and 1). Possible values: Clamp, Border Color, Wrap, Mirror. We use the Linear filter which is fast and Clamp, which just uses the value 0 if the U/V value is lesser than 0 and the value 1 if the U/V Value is greater than 1. Next we add texture coordinates to the output and input structs of the vertex shader so this kind of information can be collected by the vertex shader and forwarded to the pixel shader. Add to struct VertexShaderInput: float2 TextureCoordinate : TEXCOORD0; And add to struct VertexShaderOutput: float2 TextureCoordinate : TEXCOORD0; Both are of the type float2 (a two-dimensional vector) because we just need to store two components: U and V. Both variables also have the semantic type TEXCOORD0. The step of applying the color of the texture to the object happens in the pixel shader, but not in the vertex shader. So in the VertexShaderFunction we just take the textureCoordinate from the input and put it into the output: output.TextureCoordinate = input.TextureCoordinate; In the PixelShaderFunction we then do the following: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float4 VertexTextureColor = tex2D(TextureSampler, input.TextureCoordinate); VertexTextureColor.a = 1; return saturate(VertexTextureColor * input.VertexColor + AmbienceColor); } The function now calculates the color of the pixel based on the texture. Additionally the alpha value for the color is set separately in the second line, because the TextureSampler does not get the alpha value from the texture. Finally in the return statement the texture color of the vertex is multiplied by the diffuse color (which adds diffuse shading to the texture color) and the ambient color is added as usual. We also need to make a change in the technique function this time. The new PixelShaderFunction is now to sophisticated for pixel shader version 1.1, so it needs to be set to version 2.0: PixelShader = compile ps_2_0 PixelShaderFunction(); The complete shader code for the texture shader:); // For Texture output.TextureCoordinate = input.TextureCoordinate; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // For Texture float4 VertexTextureColor = tex2D(TextureSampler, input.TextureCoordinate); VertexTextureColor.a = 1; return saturate(VertexTextureColor * input.VertexColor + AmbienceColor); } technique Texture { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } Changes in XNA: In the XNA Code we have to add a new texture by declaring a Texture2D object: Texture2D planeTexture; Load the texture by loading a previously added image of the content node (in this case a file called "planetextur.png" that is located in the folder "Images" of the content node of the solution explorer) : planeTexture = Content.Load<Texture2D>("Images/planetextur"); And finally assign the new texture to the shader variable ModelTexture in our usual draw method: myEffect.Parameters["ModelTexture"].SetValue(planeTexture); The object should then have a texture, diffuse shading and ambient shading as you can see in the sample image. Advanced Shading with Specular Lighting and Reflections Now let's create a new more sophisticated effect that looks really nice and real and can be used to simulate shiny surfaces like metal. We will combine a texture shader with a specular shader and a reflection shader. The reflection shader will reflect a predefined environment The specular lighting adds shiny spots on the surface of a model to simulate smoothness. They have the color of the light that is shining on the surface. The difference of specular lighting to the shaders we have used before is that it is not only influenced by the direction the light comes from, but also the direction from which the viewer is looking at the object. So as the camera moves in the scene, the specular lighting is moving around on the surface. The same goes for the reflection shader, based on the position of a viewer the reflection on an objects surface is changing. Calculating reflections like in the real world would mean to calculate single rays of light bouncing off surfaces (a technique called ray tracing). This requires way to much calculation power which is why we use a simpler approach in real time computer graphics like XNA. The technique we use is called environment mapping and maps the image of an environment onto an object's surface. This environment mapping is moved when the viewers position is changing so the illusion of a reflection is created. This has some limitations, for example the object only reflects a predefined environment image and not the real scene. Therefore the player and all other moving models will not be reflected. This has some limitations, but they are not very noticeable in a real time application. The environment map could be the same as the skybox of a scene. More about the skybox in another article: Game Creation with XNA/3D Development/Skybox. If the environment map is the same as the skybox it will fit to the scene and look accurate, however you can use whatever environment mapping looks good on the model in the scene. The basis for the following changes is the previously developed texture shader. For specular lighting the following variables need to be added: float ShininessFactor = 10.0f; float4 SpecularColor = float4(1.0f, 1.0f, 1.0f, 1.0f); float3 ViewVector = float3(1.0f, 0.0f, 0.0f); The ShininessFactor defines how shiny the surface is. A low value stands for a surface with broad surfaces highlights and should be used for less shiny surfaces. A high value stands for shinier surfaces like metal with small but very intense surface highlights. A mirror would have an infinite value in theory. The SpecularColor specifies the color of the specular light. In this case we use white light. The ViewVector is a variable that will be calculated and set from the XNA applicaton at run time. It tells the shader which direction the viewer is looking at. For the reflection shader we need to add the environment texture and a sampler as variables: Texture EnvironmentTexture; samplerCUBE EnvironmentSampler = sampler_state { texture = <EnvironmentTexture>; magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = Mirror; AddressV = Mirror; }; The EnvironmentTexture is the environment image that will be mapped as a reflection on our object. This time a cube sampler is used which is a little bit different from the previously used 2D sampler. It assumes that the supplied texture is created to be rendered on a cube. No changes need to be made in the VertexShaderInput struct, but two new variables need to be added to the struct VertexShaderOutput: float3 NormalVector : TEXCOORD1; float3 ReflectionVector : TEXCOORD2; NormalVector is just the normal vector of a single vertex that comes directly from the input. The reflection vector is calculated in the vertex shader and used in the pixel shader to assign the right part from the environment map to the surface. Both are of the semantic type TEXCOORD. There is already one variable of thetype TEXCOORD0 (TextureCoordinate) so we count further to 1 and 2. In the VertexShaderFunction we have to add the following commands: // For Specular Lighting output.NormalVector = normal; // For Reflection float4 VertexPosition = mul(input.Position, WorldMatrix); float3 ViewDirection = ViewVector - VertexPosition; output.ReflectionVector = reflect(-normalize(ViewDirection), normalize(normal)); At first the previously calculated normal vector of the current vertex is written to the output, because it is later needed for specular shading in the pixel shader. For the reflection the vertex position in the world is calculated along with the direction the viewer looks on the vertex. Then the reflection vector is calculated using the HLSL function reflect() that uses normalized values of the previously calculated normal and ViewDirection vector. To the PixelShaderFunction we add the following calculations for the specular value:); So to calculate the specular highlight the diffuse light direction, the normal, the view vector and the shininess is needed. The end result is another vector that contains the specular component. This specular component is added along with the reflection to the return statement at the end of the PixelShaderFunction: return saturate(VertexTextureColor * texCUBE(EnvironmentSampler, normalize(input.ReflectionVector)) + specular * 2); In this case we got rid of the diffuse and ambient component because it is not necessary for this demonstration and looks even better without it in this case. Without the diffuse lighting component, it looks like the light comes from everywhere and reflects on shiny metal. So in the return statement the texture color is used along with the reflection and the specular highlight (multiplied by 2 to make it more intense). The finished shader code:; }; // For Specular Lighting float ShininessFactor = 10.0f; float4 SpecularColor = float4(1.0f, 1.0f, 1.0f, 1.0f); float3 ViewVector = float3(1.0f, 0.0f, 0.0f); // For Reflection Lighting Texture EnvironmentTexture; samplerCUBE EnvironmentSampler = sampler_state { texture = <EnvironmentTexture>; magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = Mirror; AddressV = Mirror; };; // For Specular Shading float3 NormalVector : TEXCOORD1; // For Reflection float3 ReflectionVector : TEXCOORD2; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4 worldPosition = mul(input.Position, WorldMatrix); float4 viewPosition = mul(worldPosition, ViewMatrix); output.Position = mul(viewPosition, ProjectionMatrix); // For Diffuse Lighting float4 normal = normalize(mul(input.NormalVector, WorldInverseTransposeMatrix)); float lightIntensity = dot(normal, DiffuseLightDirection); output.VertexColor = saturate(DiffuseColor * lightIntensity); // For Texture output.TextureCoordinate = input.TextureCoordinate; // For Specular Lighting output.NormalVector = normal; // For Reflection float4 VertexPosition = mul(input.Position, WorldMatrix); float3 ViewDirection = ViewVector - VertexPosition; output.ReflectionVector = reflect(-normalize(ViewDirection), normalize(normal)); return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // For Texture float4 VertexTextureColor = tex2D(TextureSampler, input.TextureCoordinate); VertexTextureColor.a = 1; // For Specular Lighting); return saturate(VertexTextureColor * texCUBE(EnvironmentSampler, normalize(input.ReflectionVector)) + specular * 2); } technique Reflection { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } To use the new shader in XNA we need to set 2 additional shader variables from XNA in the draw method: myEffect.Parameters["ViewVector"].SetValue(viewDirectionVector); myEffect.Parameters["EnvironmentTexture"].SetValue(environmentTexture); But at first the object environmentTexture should be declared and loaded first (as usual): TextureCube environmentTexture; environmentTexture = Content.Load<TextureCube>("Images/Skybox"); In contrast to the model texture, this texture is not of the type Texture2D but the type TextureCube because in our case we use a skybox texture as environment map. A skybox texture consists not only of one image like a regular texture, but six different images that are mapped on each side of a cube. The images have to fit together in the right angle and be seamless. You can find some skybox textures here: RB Whitaker Skybox Textures Secondly the viewDirectionVector we use to set the ViewVector variable in the reflection shader should be declared in the class as a field: Vector3 viewDirectionVector = new Vector3(0, 0, 0); It can be calculated this way: viewDirectionVector = cameraPositionVector – cameraTargetVector; Whereby cameraPositionVector is a 3D vector containing the current position of the camera and cameraTargetVector is another vector with the coordinates of the camera target. If for example the camera is just looking at the point 0,0,0 in the virtual space, the calculation would be even shorter: viewDirectionVector = cameraPositionVector; //or viewDirectionVector = new Vector3(eyePositionX, eyePositionY, eyePositionZ); With all these changes in the XNA game the reflection should look like in the picture. But the appearance largely depends on the environment map used. Additional Parameters Another good idea would be to introduce parameters for the intensity of a shader. For example instead of simply returning the ambient color in the return statement of the pixel shader function in the diffusion shader above: return saturate(input.VertexColor + AmbienceColor); One could return: return saturate(input.VertexColor + AmbienceColor * AmbienceIntensity); Whereby AmbienceIntensity is a float between 0.0 and 1.0. This way the intensity of the color can be easily adjusted. This can be done with every component we have calculated so far (ambient, diffusion, textur color, specular intensity, reflection component). Postprocessing with shaders Until now we have worked with 3D shaders but 2D shaders are also possible. A 2D image can be modified and processed by a picture editing software such as Photoshop to adapt its contrast, colors and apply filters. The same can be achieved with 2D shaders that are applied on the entire output image that is the result of rendering the scene. Examples for the kinds of effects that can be achieved: - Simple color modifications like making the scene black and white, inverting the color channels, giving the scene a sepia look and so on. - Adapting the colors to create a warm or cold mood in the scene. - Blur the screen with a blur filter to create special effects. - Bloom Effect: A popular effect that produces fringes of light around very bright objects in an image simulating an effect known from photography. So to start we create a new shader file in Visual Studio (call it Postprocessing .fx) and insert the following code for post-processing texture ScreenTexture; sampler TextureSampler = sampler_state { Texture = <ScreenTexture>; }; float4 PixelShaderFunction(float2 TextureCoordinate : TEXCOORD0) : COLOR0 { float4 pixelColor = tex2D(TextureSampler, TextureCoordinate); pixelColor.g = 0; pixelColor.b = 0; return pixelColor; } technique Grayscale { pass Pass1 { PixelShader = compile ps_2_0 PixelShaderFunction(); } } As you can see for the post-processing we only need a pixel shader. The post-processing is handled by supplying the rendered image of the scene as a texture which is then used by a pixel shader as input information, processed and returned. The function has only one input parameter (the texture coordinate) and returns a color vector of the semantic type COLOR0. In this example we just read the color of the pixel at the current texture coordinate (which is the screen coordinate) and set the green and blue channel to 0 so that only the red channel is left. Then we return the color value. Now using this 2D shader in XNA is a bit more tricky. At first we need the following objects in the Game class: GraphicsDeviceManager graphics; SpriteBatch spriteBatch; RenderTarget2D renderTarget; Effect postProcessingEffect; It is very likely that the GraphicsDeviceManager and SpriteBatch object is already created in an existing project. However the RenderTarget2D and Effect object have to be declared. Check that the GraphicsDeviceManager object is initialized in the constructor: graphics = new GraphicsDeviceManager(this); And the SpriteBatch object is initialized in the LoadContent() method. The new shader file we just created should be loaded in this method as well: spriteBatch = new SpriteBatch(GraphicsDevice); postProcessingEffect = Content.Load<Effect>("Shaders/Postprocessing"); Finally make sure that the RenderTarget2D object is initialized in the method Initialize(): renderTarget = new RenderTarget2D( GraphicsDevice, GraphicsDevice.PresentationParameters.BackBufferWidth, GraphicsDevice.PresentationParameters.BackBufferHeight, 1, GraphicsDevice.PresentationParameters.BackBufferFormat ); Now we need a method to draw the current scene to a texture (in form of a render target) instead of the screen: protected Texture2D DrawSceneToTexture(RenderTarget2D currentRenderTarget) { // Set the render target GraphicsDevice.SetRenderTarget(0, currentRenderTarget); // Draw the scene GraphicsDevice.Clear(Color.Black); drawModelWithTexture(model, world, view, projection); // Drop the render target GraphicsDevice.SetRenderTarget(0, null); // Return the texture in the render target return currentRenderTarget.GetTexture(); } Inside this method we use the draw function that is using our 3D shader (in this case: drawModelWithTexture()). So we still use all the 3D shaders to render the scene first, but instead of displaying this result directly, we render it to a texture and do some post-processing with it in the Draw() method. After that the processed texture is displayed on the screen. So extend the Draw() method with this: protected override void Draw(GameTime gameTime) { Texture2D texture = DrawSceneToTexture(renderTarget); GraphicsDevice.Clear(Color.Black); spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.SaveState); postProcessingEffect.Begin(); postProcessingEffect.CurrentTechnique.Passes[0].Begin(); spriteBatch.Draw(texture, new Rectangle(0, 0, 1024, 768), Color.White); postProcessingEffect.CurrentTechnique.Passes[0].End(); postProcessingEffect.End(); spriteBatch.End(); base.Draw(gameTime); } At first the normal scene is rendered to a texture named texture. Then a sprite batch is started along with the postProcessing effect that contains our new post-processing shader. The texture is then rendered on the sprite batch with the postProcessing Effect applied to it. The effect should look like in the picture. Another simple effect that can be achieved with a post-processing shader is converting the color image to a gray scale image and then reducing it to 4 colors, which creates a cartoon-like effect. To achieve this, the PixelShaderFunction inside our shader file should look like this: float4 PixelShaderFunction(float2 TextureCoordinate : TEXCOORD0) : COLOR0 { float4 pixelColor = tex2D(TextureSampler, TextureCoordinate); float average = (pixelColor.r + pixelColor.g + pixelColor.b) / 3; if (average > 0.95){ average = 1.0; } else if (average > 0.5){ average = 0.7; } else if (average > 0.2){ average = 0.35; } else{ average = 0.1; } pixelColor.r = average; pixelColor.g = average; pixelColor.b = average; return pixelColor; } A gray scale image is generated by calculating the average of the red, green and blue channel and using this one value for all three channels. After that the average value is additionally reduced to one of 4 different values. At last the red, green and blue channel of the output is set to the reduced value. The image is grayscale because the red, green and blue channel all have the same value. Create tansparency Shader Create a tranparency shader is easy. We can start with diffuse shader example from above. First we need some variable called alpha to determine the transparency. The value should be between 1 for opaque and 0 for complete transparent. To implement the transparency shader we just need some modification in PixelShaderFunction. After all lighting calculation have been done, we must assign the alpha value into result color properties. float alpha = 0.5f; float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float4 color = saturate(input.VertexColor + AmbienceColor); color.a = alpha; return color; } to enable alpha blending we must add some code in technique technique Tranparency { pass p0 { AlphaBlendEnable = TRUE; DestBlend = INVSRCALPHA; SrcBlend = SRCALPHA; VertexShader = compile vs_2_0 std_VS(); PixelShader = compile ps_2_0 std_PS(); } } The complete transparency shader { float4 color = saturate(input.VertexColor + AmbienceColor); color.a = alpha; return color; } technique Diffuse { pass Pass1 { AlphaBlendEnable = TRUE; DestBlend = INVSRCALPHA; SrcBlend = SRCALPHA; VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_1_1 PixelShaderFunction(); } } Other kinds of shaders A few other popular shaders with a short description. Bump Map Shader Bump Mapping is used to simulate bumps on otherwise even polygon surfaces to make a surface look more realistic and give it some structure, additionally to the texture. Bump Mapping is achieved by loading another texture that contains the bump information and perturbing the surface normals with this information. The original normal of a surface is changed by an offset value that comes from the bump map. Bump maps are grayscale images. Normal Map Shader Bump Mapping is nowadays replaced by normal mapping. Normal mapping is also used to create bumpiness and structures on otherwise even polygon surfaces. But normal mapping handles drastic variations in normals better than bump mapping. Normal Mapping is a similar idea to bump mapping: another texture is loaded and used to change the normals. But instead of just changing the normals with an offset a normal map uses a multichannel (RGB) map to completely replace the existing normals. R, G and B values of each pixel in the normal map correspond to the X,Y,Z coordinates of the normal vector of a vertex. The further development of normal mapping is called parallax mapping. Cel Shader (Toon Shader) A Cel Shader is used to render a 3D scene in a cartoon-like look so that it appears to be drawn by hand. Cel Shading can be implemented in XNA with a multi-pass shader that builds the result image in several passes. Toon Shader Example To create toon shader we can start from diffuse shader. The basic idea behind toon shader is that the light intensity will be divided into several levels. In this example we create the intensity into 5 levels. To represents the lightness level we need some array variable called toonthresholds and to determine the boundary between levels we use array toonBrightnessLevels. float ToonThresholds[4] = {0.95,0.5, 0.2, 0.03 }; float ToonBrightnessLevels[5] = { 1.0, 0.8, 0.6, 0.35, 0.01 }; now in PixelShader we implement the classification of light intensity and assign into it an appropriate color.; } The complete toon shader float4x4 World : World < string UIWidget="None"; >; float4x4 View : View < string UIWidget="None"; >; float4x4 Projection : Projection < string UIWidget="None"; >; texture colorTexture : DIFFUSE < string UIName = "Diffuse Texture"; string ResourceType = "2D"; >; float3 DiffuseLightDirection = float3(1, 0, 0); float4 DiffuseLightColor = float4(1, 1, 1, 1); float DiffuseIntensity = 1.0; float ToonThresholds[4] = {0.95,0.5, 0.2, 0.03 }; float ToonBrightnessLevels[5] = { 1.0, 0.8, 0.6, 0.35, 0.01 }; sampler2D colorSampler = sampler_state { Texture = <colorTexture>; FILTER = MIN_MAG_MIP_LINEAR; AddressU = Wrap; AddressV = Wrap; }; struct VertexShaderInput { float4 position : POSITION0; float3 normal :NORMAL0; float2 uv : TEXCOORD0; }; struct VertexShaderOutput { float4 position : POSITION0; float3 normal : TEXCOORD1; float2 uv : TEXCOORD0; }; VertexShaderOutput std_VS(VertexShaderInput input) { VertexShaderOutput output; float4 worldPosition = mul(input.position, World); float4 viewPosition = mul(worldPosition, View); output.position = mul(viewPosition, Projection); output.normal = normalize(mul(input.normal, World)); output.uv = input.uv; return output; }; } technique Toon { pass p0 { VertexShader = compile vs_2_0 std_VS(); PixelShader = compile ps_2_0 std_PS(); } } Using FXComposer to create shaders for XNA FX Composer is a integrated development environmet for shader authoring. Using FX Composer to create our own shader is very helpful. With Fx Composer we can see the result soon and it is very efficient to make some experiment with the shader. Using FX Composer shader library into XNA In this example I use FX Composer version 2.5. using FX Composer library into your own XNA is very easy task. Let just start it with example. Open your FX Composer and create some new Project. In Material click right mouse and choose „Add Material From File“ and choose metal.fx. All you need is copy all the codes from metal.fx and create new effect in your XNA project and replace all the content with the codes from metal fx. You can also copy the file metal.fx into put it into your XNA project. From this, all we need is some modification in XNA class based on variables in metal.fx in metal.fx you can see this code : // transform object vertices to world-space: float4x4 gWorldXf : World < string UIWidget="None"; >; // transform object normals, tangents, & binormals to world-space: float4x4 gWorldITXf : WorldInverseTranspose < string UIWidget="None"; >; // transform object vertices to view space and project them in perspective: float4x4 gWvpXf : WorldViewProjection < string UIWidget="None"; >; // provide tranform from "view" or "eye" coords back to world-space: float4x4 gViewIXf : ViewInverse < string UIWidget="None"; >; In our XNA Class we must change the ParameterEffect name. Matrix InverseWorldMatrix = Matrix.Invert(world); Matrix ViewInverse = Matrix.Invert(view); effect.Parameters["gWorldXf"].SetValue(world); effect.Parameters["gWorldITXf"].SetValue(InverseWorldMatrix); effect.Parameters["gWvpXf"].SetValue(world*view*proj); effect.Parameters["gViewIXf"].SetValue(ViewInverse); we must also change the technique name in XNA class. Because XNA use directX9 we choose the “technique Simple” effect.CurrentTechnique = effect.Techniques["Simple"]; Now you can run the code with metal effect. the complete function: private void DrawWithMetalEffect(Model model, Matrix world, Matrix view, Matrix proj){ Matrix InverseWorldMatrix = Matrix.Invert(world); Matrix ViewInverse = Matrix.Invert(view); effect.CurrentTechnique = effect.Techniques["Simple"]; effect.Parameters["gWorldXf"].SetValue(world); effect.Parameters["gWorldITXf"].SetValue(InverseWorldMatrix); effect.Parameters["gWvpXf"].SetValue(world*view*proj); effect.Parameters["gViewIXf"].SetValue(ViewInverse); foreach (ModelMesh meshes in model.Meshes) { foreach (ModelMeshPart parts in meshes.MeshParts) parts.Effect = basicEffect; meshes.Draw(); } } Particle Effects to create particle effect in XNA we use a point sprite. A point sprite is a resizable textured vertex that always faces the camera. There are several reasons why we use pointsprite for rendering particles - a point sprite only use one vertex. It could reduce a significant number of vertex for a thousand particles. - there is no need to store or set map UV coordinates.it is done automatically. - Point sprites always face camera. So we don't need to bother with angle and view. creating a point sprite shader is a very easy, we just need some implementation in pixelshader to define the texture coordinate float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float2 uv; uv = input.uv.xy; return tex2D(Sampler, uv); } and in a vertexshader we only needs to return a POSITION0 for the vertex . float4 VertexShader(float4 pos : POSITION0) : POSITION0 { return mul(pos, WVPMatrix); } to enable the point sprite and set the properties of point sprite we do that in technique. technique Technique1 { pass Pass1 { sampler[0] = (Sampler); PointSpriteEnable = true; PointSize = 16.0f; AlphaBlendEnable = true; SrcBlend = SrcAlpha; DestBlend = One; ZWriteEnable = false; VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_1_1 PixelShaderFunction(); } } the complete point sprite shader float4x4 World; float4x4 View; float4x4 Projection; float4x4 WVPMatrix; texture spriteTexture; sampler Sampler = sampler_state { Texture = <spriteTexture>; magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 uv :TEXCOORD0; }; float4 VertexShaderFunction(float4 pos : POSITION0) : POSITION0 { return mul(pos, WVPMatrix); } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float2 uv; uv = input.uv.xy; return tex2D(Sampler, uv); } technique Technique1 { pass Pass1 { sampler[0] = (Sampler); PointSpriteEnable = true; PointSize = 32.0f; AlphaBlendEnable = true; SrcBlend = SrcAlpha; DestBlend = One; ZWriteEnable = false; VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_1_1 PixelShaderFunction(); } } now lets move to our game1.cs file. First we need to declare and load the Effect and the texture. And to store the position vertex we use an array of VertexPositionColor elements. The position of vertex should be initialized with random number. Effect pointSpriteEffect; VertexPositionColor[] positionColor; VertexDeclaration vertexType; Texture2D textureSprite; Random rand; const int NUM = 50; ....; } } next step we create DrawPointsprite method to draw the particle.(); } and we call the DrawPointSprite() methode in Draw() methode protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Black); DrawPointsprite(); base.Draw(gameTime); } to make the position dynamic we make some implementation in Update() methode. protected override void Update(GameTime gameTime) { positionColor[rand.Next(0, NUM)].Position = new Vector3(rand.Next(400) / 10f, rand.Next(400) / 10f, rand.Next(400) / 10f); positionColor[rand.Next(0, NUM)].Color = Color.White; base.Update(gameTime); } this is very simple pointsprite shader. You can make more sophiscated point sprite with dynamic size and color. the complete game1.cs namespace MyPointSprite { public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; Matrix view, projection; Effect pointSpriteEffect; VertexPositionColor[] positionColor; VertexDeclaration vertexType; Texture2D textureSprite; Random rand; const int NUM = 50; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { view =Matrix.CreateLookAt (Vector3.One * 40, Vector3.Zero, Vector3.Up); projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 4.0f / 3.0f, 1.0f, 10000f); base.Initialize(); }; } } protected override void Update(GameTime gameTime) { positionColor[rand.Next(0, NUM)].Position = new Vector3(rand.Next(400) / 10f, rand.Next(400) / 10f, rand.Next(400) / 10f); positionColor[rand.Next(0, NUM)].Color = Color.Chocolate; base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Black); DrawPointsprite(); base.Draw(gameTime); }(); } } } Links Introduction to HLSL and some more advanced examples Last accessed: 9th June 2011 Another HLSL introduction Last accessed: 9th June 2011 Very good and detailed tutorial on how to use Shaders in XNA Last accessed: 15th January 2012 Official HLSL Reference by Microsoft Last accessed: 9th June 2011 Author - Leonhard Palm: Basics, GPU Pipeline, Pixel and Vertex Shader, HLSL, XNA Examples - DR 212: BasicEffect Class, Transparency Shader, Toon Shader, FX Composer, Particle Effects Skybox You use heightfields and procedurals to generate Terrain in Terragen 2. Using Heightfields - Now we will add better colors and textures using Shaders. Modifying the mountain ground color - Creating a skybox cube map Skydome Same concept as a skybox but instead of a cube, a sphere is used. Can be used to simulate atmospherics and sun movement (dawn, dusk). See Tutorials on how to create these. Links Terragen Skybox Skybox Tutorials Skyboxes Ready and free to use skyboxes (public domain): Skydome Skydome Tutorials Other References Authors arie Landscape Modelling Introduction How do we implement and model a landscape which is based on XNA Framework into our game? This WIKI-entry will deal exactly with this problem. By example it will be shown how to create a landscape using a HeightMap. Furthermore we will create a texture, drag it onto our landscape and write loads of source code. Finally there will be some tips regarding topics related to Landscape Modeling. A HeightMap (Wikipedia: HeightMap) is nothing else than a greymap. So to say a 2D texture which points out the heights and depths of our landscape. Every pixel of the greymap is between 0 and 255 indicating our elevation. To create such a map use a program like Terragen. Terragen is a program used to create photorealistic landscape-images pretty quick. However it also is a perfect tool to create a HeightMap. Terragen is available in 2 versions (date: 05.06.2011) one version which has to be paid for - Terragen 2 and a free version Terragen Classic. For our needs the free version is perfectly ok. Creating HeightMap Enough of the introduction – let’s get started. After downloading and installing Terragen Classic we can see the following menu: On the left hand side we can see the buttons provided by Terragen. First step is to click on „Landscape“ and a new window will open up. Here we click on “Size” to adjust the size of our HeightMap – 257x257 or 513x513. Tip: If you already have a skybox implemented, use the size of your skybox image. Next we click on “View/Sculpt” to model our HeightMap. You will see a black picture with a white arrow in it – that’s your camera perspective. You can adjust the perspective as you like by moving the arrow to the desired position. To start painting your terrain you need to click on “Basic Sculpting Tool” (1) located at the top left corner of your window. Now you can start to draw your landscape. Something like this should be the result: If you are not satisfied with your result you can always click on “Modify” within your landscape window and adjust certain settings like maximum height of your mountains. Another useful function is “Clear/Flatten” which resets your HeightMap so you can start all over again. When you are done painting your HeightMap, click on the button “3D Preview”. This is what it should look like (depending on what you have drawn): To save your HeightMap click on „Export“ in the landscape menu and choose „Raw 8 bits“ as Export Method (1). Click on “Select File and Save…” name your HeightMap and save it to your Hard Drive. We are nearly done with our HeightMap, which is now in .raw format. Finally we need to convert this format into something else by using a program like Photoshop or the free tool “XnView” (). Change your .raw format to .jpg, .bmp or .png because the “default Content Pipeline” from XNA can handle these formats as “Texture2D”. Creating Texture What would our landscape be without texture? Therefore, let’s use Terragen to create one. To do so open the “Rendering Controls” within your Terragen menu. First thing to do is adjust the size using „Image Size“ (1) depending on whatever size you made your HeightMap (512x512 or 256x256). In the Rendering Control Window, at the bottom right corner, position your camera so you can actually see you floor (2). To directly face the floor use the value -90 for pitch (3). This makes you directly look at your floor. Furthermore set the “Detail” – slider (4) to maximum in order to get the highest quality when rendering. Click on “Render Preview” (5) to get a preview of your texture. Alternatively you can open your “3D Preview” again, but your texture will not be shown rendered. Any black spots on your texture will probably be shadows cast on your terrain. Click on the button „Lightning Conditions“ in the Terragen Menu and uncheck „Terrain Casts Shadows“ and „Clouds Casts Shadows“(1) to make them disappear. Now you are done and can click on „Render Image“ (6) in your “Rendering Control”. Terragen now renders your texture which should look something like this: You can also change the colour of your texture. To do so click on the “Landscape” button in your Terragen menu. Choose “Surface Map” (1) and click on “Edit” (2). The “Surface Layer” window will open up. Now click „Colour…“(3) to choose your colour. When you are satisfied with your texture save it to your Hard Drive. Play around with the settings, render it and check the changes. If you choose the colour to white, this is what your texture should look like: Now we are done with the basics and finally reached our first goal – our own HeightMap and texture: Implementation in XNA From now on we start working on implementing the HeightMap and the texture into XNA code. But to actually see something we need to start by programming a camera. Creating Camera Class We create a new Project in Visual Studio 2008 and add a new class named „Camera“. We start of by assigning some class variables. A matrix viewMatrix for the camera view and a projectionMatrix for the projection. The projectionMatrix converts the 3D camera view into a 2D image. To position our landscape later on, we will need another matrix terrainMatrix. Furthermore it would be nice if we could move or rotate our camera over our landscape. Therefore we declare Vector3 variables for position, alignment, movement and rotation of our camera. // matrix for camera view and projection Matrix viewMatrix; Matrix projectionMatrix; // world matrix for our landscape public Matrix terrainMatrix; // actual camera position, direction, movement, rotation Vector3 position; Vector3 direction; Vector3 movement; Vector3 rotation; The camera constructor gets parameters to initialize all these variables.); } Now if you ask yourself what exactly the methods CreateLookAt(), CreatePerspective(), CreateTranslation() are doing, check the class library of XNA Framework -> XNA Framework Class Library Reference. All methods are clearly described there. Keep the XNA Framework class library in mind to check all the methods unclear to you, because not all methods used in the source code will be explained in detail. To exercise this at least once we use the method CreateTranslation(). Go to Matrix.CreatePerspective Method (Single, Single, Single, Single) and you will find a detailed description of all the parameters used by the method as well as their return values. Back to our camera class. Next step is to create an Update() method which will get a number as parameter. In this method we define the movement and rotation of our camera and calculate our new camera position at the end. We do that because when we create a camera in our Game1.cs later on, we can move our camera by using keyboard inputs. Every keyboard input sends a number which will be processed by the camera’s Update() method.; } Finally our camera gets a Draw() method. In this method we pass our landscape to ensure it gets displayed later on. public void Draw(Terrain terrain) { terrain.basicEffect.Begin(); SetEffects(terrain.basicEffect); foreach (EffectPass pass in terrain.basicEffect.CurrentTechnique.Passes) { pass.Begin(); terrain.Draw(); pass.End(); } terrain.basicEffect.End(); } Before we can start to write our Terrain.cs class we need to implement the method SetEffects() which is used by the Draw() method. BasicEffect is a class in XNA Framework which provides rendering effects to display objects. public void SetEffects(BasicEffect basicEffect) { basicEffect.View = viewMatrix; basicEffect.Projection = projectionMatrix; basicEffect.World = terrainMatrix; } Now our Camera.cs class is ready and to actually see something we now start to write our Terrain.cs class. Overview Camera.cs class This is how the complete Camera.cs class should look like. { class Camera { // matrix for camera view and projection Matrix viewMatrix; Matrix projectionMatrix; // world matrix for our landscape public Matrix terrainMatrix; // actual camera position, direction, movement, rotation Vector3 position; Vector3 direction; Vector3 movement; Vector3 rotation;; } public void SetEffects(BasicEffect basicEffect) { basicEffect.View = viewMatrix; basicEffect.Projection = projectionMatrix; basicEffect.World = terrainMatrix; } public void Draw(Terrain terrain) { terrain.basicEffect.Begin(); SetEffects(terrain.basicEffect); foreach (EffectPass pass in terrain.basicEffect.CurrentTechnique.Passes) { pass.Begin(); terrain.Draw(); pass.End(); } terrain.basicEffect.End(); } } } Creating Landscape Class Create a new class and rename it Terrain.cs. Again we start by defining class variables we will need. We will need Texture2D variables for our HeightMap and our texture image as well as variables to work with the textures, especially arrays. GraphicsDevice graphicsDevice; // heightMap Texture2D heightMap; Texture2D heightMapTexture; VertexPositionTexture[] vertices; int width; int height; public BasicEffect basicEffect; int[] indices; // array to read heightMap data float[,] heightMapData; In the constructor of our Terrain.cs we call the GraphicsDevice unit in order to be able to access it in our class. public Terrain(GraphicsDevice graphicsDevice) { this.graphicsDevice = graphicsDevice; } Now we create a method which will get our textures (this will happen from the Game1.cs class and will be explained later) and calls other methods so we get closer to our landscape. So let’s write the missing methods.(); } We start by implementing the SetHeight() method which will get the greyscale from each pixel of the texture, indicating its actual height, and writes them into the heightMapData[] array. The complete method:; } } } To get the intensity of each greyscale it is suffice to get the value of a single colour, either red, green or blue – which one you choose is up to you. To not get to much difference in altitude you can divide your colourvalue by a value. Hence this line: heightMapData[x, y] = greyValues[x + y * width].G / 3.1f; It also works the other way around. When you multipliy with a value you will get a higher difference in altitude. The next two methods deal with the creation of indices and vertices. SetVertice() creates the area of our landscape using triangles. An area consist of two triangles. A triangle can be described by 3 numbers which are called indices. These indices of a triangle are assigned to vertices. If you need a refreshment in that matter go check Riemer’s XNA Tutorials -> Recycling vertices using inidices. In our method some strange mathematical stuff is used to calculate correct indices. Play around a bit and check out what happens when you change certain values. public void SetIndices() { // amount of triangles index = new int[6 * (width - 1) * (height - 1)]; int number = 0; // collect data for corners for (int y = 0; y < height - 1; y++) for (int x = 0; x < width - 1; x++) { // create double triangles index[number] = x + (y + 1) * width; // up left index[number + 1] = x + y * width + 1; // down right index[number + 2] = x + y * width; // down left index[number + 3] = x + (y + 1) * width; // up left index[number + 4] = x + (y + 1) * width + 1; // up right index[number + 5] = x + y * width + 1; // down right number += 6; } } The SetVertices() method calcualtes the 2D-position for each vertex the texture should be applied. The heights and depths will be assigned using the data from the heightMapData[] array.); } } Now we implement a SetEffects() method in which we use a new shader object of type BasicEffet (Wikipedia: Shader). Its texture properties get assigned to our terrain texture and its display gets activated. public void SetEffects() { basicEffect = new BasicEffect(graphicsDevice, null); basicEffect.Texture = heightMapTexture; basicEffect.TextureEnabled = true; } To actually draw the landscape our terrain.cs class gets an own Draw() method. From here we call the method DrawUserIndexedPrimitives()(from GraphicsDevice class from XNA) which is extremely powerful and contains a pretty long list of parameters. First the type of object that is to be drawn. A collection of triangles is meant when using TriangleList. Followed by our array containing the vertices. The next parameters take the starting point and the ammount of our vertices. Next is the array with our indices and at the end the number of the first triangle and the ammount of triangles. public void Draw() { graphicsDevice.DrawUserIndexedPrimitives<VertexPositionTexture>(PrimitiveType.TriangleList, vertices, 0, vertices.Length, indices, 0, indices.Length / 3); } Last but not least we need to adjust our Game1.cs in which we now call our camera and our terrain to reach or goal to see our landscape. Overview Terrain.cs class Prior to that an overview of the complete Terrain.cs class: { public class Terrain { GraphicsDevice graphicsDevice; // heightMap Texture2D heightMap; Texture2D heightMapTexture; VertexPositionTexture[] vertices; int width; int height; public BasicEffect basicEffect; int[] indices; // array to read heightMap data float[,] heightMapData; public Terrain(GraphicsDevice graphicsDevice) { this.graphicsDevice = graphicsDevice; }; } } } public void SetIndices() { // amount of triangles indices = new int[6 * (width - 1) * (height - 1)]; int number = 0; // collect data for corners for (int y = 0; y < height - 1; y++) for (int x = 0; x < width - 1; x++) { // create double triangles indices[number] = x + (y + 1) * width; // up left indices[number + 1] = x + y * width + 1; // down right indices[number + 2] = x + y * width; // down left indices[number + 3] = x + (y + 1) * width; // up left indices[number + 4] = x + (y + 1) * width + 1; // up right indices[number + 5] = x + y * width + 1; // down right number += 6; } }); } } public void SetEffects() { basicEffect = new BasicEffect(graphicsDevice, null); basicEffect.Texture = heightMapTexture; basicEffect.TextureEnabled = true; } public void Draw() { graphicsDevice.DrawUserIndexedPrimitives<VertexPositionTexture>(PrimitiveType.TriangleList, vertices, 0, vertices.Length, indices, 0, indices.Length / 3); } } } Adjusting Game1.cs class Before we start we import our HeightMap as well as our texture image into VisualStudio2008. Right click on content in your project-explorer. Choose "Add" –> "existing Element…" in the menu popping up. Choose your images and import them. You now should see your HeightMap and your texture image listed under Content. Now create your camera and your terrain as class variables. //-------------CAMERA------------------ Camera camera; //-------------TERRAIN----------------- Terrain landscape; To let VisualStudio2008 know where to find your images, add the following line to the constructor: Content.RootDirectory = "Content"; Next initialize your camera and your terrain using the Initialize() method. // initialize camera start position camera = new Camera(new Vector3(-100, 0, 0), Vector3.Zero, new Vector3(2, 2, 2), new Vector3(0, -100, 256)); // initialize terrain landscape = new Terrain(GraphicsDevice); If you later dont see anything you might need to adjust your Vector3 vectors which are passed into the camera class. The following line from the LoadContent() method is used to load the HeightMap and texture image into your terrain class: //load heightMap and heightMapTexture to create landscape landscape.SetHeightMapData(Content.Load<Texture2D>("heightMap"), Content.Load<Texture2D>("heightMapTexture")); Because we programmed our camera class forward-looking and want to move our camera over our terrain, we simply need to define the keys for movement in our Update() method. // move camera position with keyboard KeyboardState key = Keyboard.GetState(); if (key.IsKeyDown(Keys.A)) { camera.Update(1); } if (key.IsKeyDown(Keys.D)) { camera.Update(2); } if (key.IsKeyDown(Keys.W)) { camera.Update(3); } if (key.IsKeyDown(Keys.S)) { camera.Update(4); } if (key.IsKeyDown(Keys.F)) { camera.Update(5); } if (key.IsKeyDown(Keys.R)) { camera.Update(6); } if (key.IsKeyDown(Keys.Q)) { camera.Update(7); } if (key.IsKeyDown(Keys.E)) { camera.Update(8); } if (key.IsKeyDown(Keys.G)) { camera.Update(9); } if (key.IsKeyDown(Keys.T)) { camera.Update(10); } Last but not least we need to tell the camera’s Draw() method to draw our landscape. // to get landscape viewable camera.Draw(landscape); Overview Game1.cs class { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; //-------------CAMERA------------------ Camera camera; //-------------TERRAIN----------------- Terrain landscape; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { // initialize camera start position camera = new Camera(new Vector3(-100, 0, 0), Vector3.Zero, new Vector3(2, 2, 2), new Vector3(0, -100, 256)); // initialize terrain landscape = new Terrain(GraphicsDevice); base.Initialize(); } protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); //load heightMap and heightMapTexture to create landscape landscape.SetHeightMapData(Content.Load<Texture2D>("heightMap"), Content.Load<Texture2D>("heightMapTexture")); } protected override void Update(GameTime gameTime) { // move camera position with keyboard KeyboardState key = Keyboard.GetState(); if (key.IsKeyDown(Keys.A)) { camera.Update(1); } if (key.IsKeyDown(Keys.D)) { camera.Update(2); } if (key.IsKeyDown(Keys.W)) { camera.Update(3); } if (key.IsKeyDown(
https://en.wikibooks.org/wiki/Game_Creation_with_XNA/Print_version
CC-MAIN-2017-09
refinedweb
33,413
56.15
I have ths code to attach a MovieClip to my main movie: em_mc.addEventListener(MouseEvent.CLICK, fem3, false, 0, true); function fem3(e:MouseEvent):void{ var mc:section1=new section1(); mc.y= 306; addChild(mc); var myTween:Tween = new Tween(mc, "x", Elastic.easeInOut, -500, 481, 1, true); }; The movieclip attaches with an animation. When I close (or exit) the attachd MovieClip I do not get an animation for exit. This is the code for exiting the attached Movieclip: home_mcb.addEventListener(MouseEvent.CLICK, exitinteraction); function exitinteraction(event:MouseEvent):void{ this.parent.removeChild(this); } How Could I add a similar animation for exiting before removing the MovieClip from the main movie? Currently the MovieClip just dissapears Any help..thanks What you should do is look into the Tween class so that you can understand what the parameters in it are indicating. Then you should be able to create a tween that does the opposite. If you need to have the object removed still, then you will need to add an event listener to the Tween for it MOTION_FINISH event... you can use that events handler function to do the this.parent.removeChild(this); that you have now. Thank You Ned!! Totally forgot about MOTION_FINISH. I solved the isuue with this on the attached movieclip: import fl.transitions.Tween; import fl.transitions.easing.*; import fl.transitions.TweenEvent; function onFinish (e:TweenEvent):void { this.parent.removeChild(this); } function exitinteraction(event:MouseEvent):void{ var myTween:Tween = new Tween(this, "x", Strong.easeOut, 481, -500, 2, true); myTween.addEventListener(TweenEvent.MOTION_FINISH, onFinish); } You're welcome
http://forums.adobe.com/message/4568415
CC-MAIN-2014-15
refinedweb
260
53.17
2 Set Up Oracle Visual Builder Studio to Manage Your Development Cycle Like other Oracle Cloud services, you must create an instance of Oracle Visual Builder Studio (VB Studio) before you can start using it. You can create only one instance in an Oracle Cloud account. Before You Begin Before you set up VB Studio, ensure that you are assigned the correct roles in Oracle Identity Cloud Service (IDCS). Create the VB Studio Instance You can create one only one VB Studio instance in an Oracle Cloud account, so make sure you don't already have one before you get started. - In a web browser, go to view the list of supported browsers, see. - On the Sign-In page, in Account, enter your account name and click Next. - On the Oracle Cloud Account sign-in page, enter your Oracle Cloud account credentials and click Sign In.The Oracle Cloud Console, also called the Oracle Cloud Infrastructure console or OCI console, opens. - In the upper-left corner, click Navigation Menu . - Under More Oracle Cloud Services, select Platform Services, and then select Visual Builder Studio. - In the Instances tab, click Create Instance. - On the Create New Instance page, in Instance, enter a unique name. In Description, enter a description.The name helps you to identify the service instance. - In Notification Email, enter your email address where you'd like to receive email notification when the instance is ready. - In Region, select your home region.You'll find your region on the OCI console's header. - Click Next. - On the Service Details page, verify the entered details and click Next. - On the Confirmation page, click Create.Expand the Instance Create and Delete History section to track the status. The VB Studio Organization page opens. Click the OCI Credentials link or the OCI Account tab to configure OCI connections before you create. From the Organization page, you can manage all projects, OCI connections, virtual machines, and the properties of the organization. To open a project, click its name. You can't open a project if you're not a member. Configure VB Studio to Connect to OCI To run builds in a VB Studio instance, you need to connect to the Oracle Cloud Infrastructure (OCI) or Oracle Cloud Infrastructure Classic (OCI Classic) account's Compute Virtual Machines (VMs) and Object Storage buckets. If you're an OCI user, set up connections to OCI Compute and OCI Object Storage. VB Studio runs its builds on OCI Compute VMs, and stores build and Maven artifacts in OCI Object Storage buckets. If you're an OCI Classic user, set up connections to OCI Compute Classic and OCI Object Storage Classic. VB Studio runs its builds on OCI Compute Classic VMs, and stores build and Maven artifacts in OCI Object Storage Classic containers. Connect to Your OCI Account Before you configure VB Studio to connect to your OCI account, set up the OCI account to host and manage necessary resources, such as VMs for your builds and storage buckets for your project data. You can use the root compartment and the tenancy user that was created when the OCI account was created, but it's recommended to create a dedicated compartment to host VB Studio resources. This allows you to organize VB Studio resources better because they aren't mixed with the other resources of your tenancy. You can also restrict users and control read-write access to the compartment without affecting other resources. To learn more about compartments, see Understanding Compartments. To set up the OCI account, create resources as described in Set Up the OCI Account. After creating the resources, get their details as described in Get the Required OCI Input Values because you'll need them to Set Up the OCI Connection in VB Studio. If you don't have authorization to create and manage OCI resources, ask some one who can create the resources and share their details. Set Up the OCI Account - On the OCI console, in the upper-left corner, click Navigation Menu . - Under Governance and Administration, select Identity, and then select Compartments. - On the Compartments page, create a compartment to host VB Studio resources. To learn more about compartments, see Working with Compartments. - To create the compartment in the tenancy (root compartment), click Create Compartment. - In the Create Compartment dialog box, fill in the fields and click Create Compartment. Here's an example: - Create a user to access the VB Studio compartment. To learn more about OCI users, see Working with Users. - In the left navigation menu, under Governance and Administration, click Identity, and then click Users. - Click Create User. - In the Create User dialog box, fill in the fields and click Create. Here's an example: - On your computer, generate a private-public key pair in the PEM format.To find out how to generate a private-public key pair in the PEM format, see How to Generate an API Signing Key. Here's an example of private-public key files on a Windows computer: - Upload the public key to the user's details page. To learn more about uploading keys, see How to Upload the Public Key. - Open the public key file in a text editor and copy its contents. - In the left navigation menu of the OCI console, under Governance and Administration, click Identity, and then click Users. - Click the user's name created in Step 4. - In the User Details page, click Add Public Key. Here's an example: - In the Add Public Key dialog box, paste the contents of the public key file, then click Add. - On the Groups page, create a group for the user who can access the VB Studio compartment and add the user to the group. To learn more about groups, see Working with Groups. - In the left navigation menu, under Governance and Administration, go to Identity, and click Groups. - Click Create Group. - In the Create Group dialog box, fill in the fields and click Submit. Here's an example: - On the Groups page, click the group's name. - On the Group Details page, click Add User to Group. - In the Add User to Group dialog box, select the user created in Step 4 and click Add. Here's an example: - In the root compartment, not the VB Studio compartment, create a policy to allow the group created in step 6 to access the VB Studio compartment. To learn more about policies, see Working with Policies. - In the navigation menu, under Governance and Administration, click Identity, and then click Policies. - On the left side of the Policies page, from the Compartment list, select the root compartment. - Click Create Policy. - In Name and Description, enter a unique name and a description. - In Policy Statements, add these statements. allow group <group-name> to manage all-resources in compartment <compartment-name> This grants all permissions to the VB Studio group users to manage all resources within the VB Studio compartment. allow group <group-name> to read all-resources in tenancy This grants read permissions to the VB Studio group so that its users can read—but not use, create or modify—all resources inside and outside the VB Studio compartment. The group users can't use, create, or modify the resources. Here's an example: - Click Create. Get the Required OCI Input Values Every Oracle Cloud Infrastructure resource has an Oracle-assigned unique ID called an Oracle Cloud Identifier (OCID). To connect to OCI, you need the account's tenancy OCID, home region, the OCID of the compartment that hosts VB Studio resources, and the OCID and the fingerprint of the user who can access the VB Studio compartment. To connect to OCI Object Storage, you need the Storage namespace. You can get these values from the OCI Console pages. This table describes how to get the OCI input values required for the connection. Set Up the OCI Connection in VB Studio To connect to OCI, get the VB Studio compartment's details, user details, and the required OCID values. Then, create an OCI connection from VB Studio. If you're not the OCI administrator, get the details from the OCI administrator. - In the navigation menu, click Organization . - Click the OCI Account tab. - Click Connect. - In Account Type, select OCI. - In Tenancy OCID, enter the tenancy's OCID copied from the Tenancy Details page. - In User OCID, enter the OCID of the user who can access the VB Studio compartment. - In Home Region, select the home region of the OCI account. - In Private Key, enter the user's private key who can access the VB Studio compartment.The private key file was generated and saved on your computer when you created the private-public key pair in the PEM format. See Step 5 in Set Up the OCI Account. Make sure that the private key you enter contains the -----BEGIN RSA PRIVATE KEY-----and -----END RSA PRIVATE KEY-----markers. - In Passphrase, enter the passphrase used to encrypt the private key. If no passphrase was used, leave the field empty. - When you enter a private key and a passphrase, the Fingerprint field is automatically populated. Ensure that the automatically populated fingerprint value matches the fingerprint value of your private-public key pair. If it doesn't, update it to enter the correct value. - In Compartment OCID, enter the compartment's OCID copied from the Compartments page. - In Storage Namespace, enter the storage namespace copied from the Tenancy Details page. - To agree to terms and conditions, select the terms and conditions check box. - To validate the connection details, click Validate. - After validating the connection details, click Save. Connect to Your OCI Classic Account To connect to OCI Classic, you need the credentials of a user with the Compute.Compute_Operations and Storage.Storage_Administrator identity domain roles along with the service ID and Authorization URL of OCI Object Storage Classic. The Compute.Compute_Operations role enables you to create, update, and delete VMs on OCI Compute Classic. The Storage.Storage_Administrator role enables you to store artifacts on OCI Object Storage Classic. Before you create the OCI Compute Classic connection, you must check the Compute_Operations Role: Terms of Use and get the Service ID and the authorization URL of OCI Object Storage Classic: Get OCI Object Storage Classic Input Values - Open the Oracle Cloud Dashboard. - In the Storage Classic tile, click Action and select View Details. If the Storage Classic tile isn’t visible, click Customize Dashboard. Under Infrastructure, find Storage Classic, click Show, and then close the Customize Dashboard window. - On the Service Details page, in the Additional Information section of the Overview tab, note the values of the Auth V1 Endpoint URL and the last part of the REST Endpoint URL. If you’re using an Oracle Cloud traditional account, the fields shown in this graphic might differ from the fields on your Service Details page. Description of the illustration storage_cloud_console.png Create an OCI Classic Connection from VB Studio After you have the required values, create an OCI Classic connection from VB Studio. - In the navigation menu, click Organization . - Click the OCI Account tab. - To create a connection, click Connect. To edit the connection details, click Edit. - In Account Type, select OCI Classic. - In the OCI Object Storage Classic section, fill in the required details. - In Service ID, enter the value copied from the last part of the REST Endpoint URL field of the Service Details page.For example, if the value of the REST Endpoint URL is, then enter Storage-demo12345678. - In Username and Password, enter the credentials of the user assigned the Storage.Storage_Administrator identity domain role. - In Authorization URL, enter the URL copied from the Auth V1 Endpoint field of the Service Details page.Example:. - Click Validate. - In the OCI Compute Classic section, fill in the required details. - In Username and Password, enter the credentials of the user who’s assigned the Compute.Compute_Operations identity domain role. - To agree to terms and conditions, select the terms and conditions check box. - Click Validate. - Click Save. Compute_Operations Role: Terms of Use Here are some special legal terms and guidance that apply to the usage of the Compute_Operations role for VB Studio. In addition to these VB Studio terms, you should follow security best practices in maintaining the security of the username and password. You must create a dedicated username and password for use by VB Studio. When creating a username, avoid including personal names or personal information (like birthdays). Your password should always be complex and impossible to guess. You understand that a user with the Compute_Operationsrole can view, create, update and delete OCI Compute Classic resources such as VM instances, storage volumes, security rules, and security IP lists. Your failure to maintain security best practices to secure the username and password of the user with the Compute_Operationsrole may create a high risk for you and your organization. You should assign the Compute_Operationsrole privileges only to the username created for VB Studio. Notwithstanding VB Studio terms, you acknowledge that Oracle isn’t responsible or liable for any action you take in accessing or creating access to the VB Studio or OCI Compute Classic. Set Up the Build System VB Studio builds run on OCI Virtual Machines (VMs), also called as Build VMs. The Build VMs run in a public subnet of an OCI Virtual Cloud Network (VCN). If you're new to OCI Networking, see Overview of Networking to learn about OCI networking concepts such as VCN and subnets. Before your organization's members create jobs and run builds, follow these steps to set up your build system: - If you haven't, configure VB Studio to connect to your OCI or OCI Classic account. - (Optional) Configure your VCN to run Build VMs. - Create Build VM templates. - Add Build VMs. What are Build VMs and Build VM Templates? Build VMs are OCI Compute VMs dedicated to run builds of jobs your organization’s members define in VB Studio projects. Build VM Templates define the operating system and the software packages installed on Build VMs. Build VMs run in a Virtual Cloud Network (VCN). To set up the build system, you should first create Build VM templates and then add Build VMs. Only you and other organization administrators can create Build VM templates and Build VMs. Your organization's members use these Build VMs and Build VM templates to run build jobs. When you create a Build VM template, VB Studio adds Java, and some required software packages to it. These default software packages are called Required Build VM Components. If your organization’s members need more software packages in the VM template, you can add the packages from the VB Studio Software Catalog. To learn more about the default and other available software packages, see Software for Build VM Templates. If your organization's members need software packages that aren't available in the above templates, create a Build VM template and configure it to add the required software packages. After creating Build VM templates, add some VMs from your OCI Compute account’s quota to run builds. To add a Build VM, you specify: - Build VM template When a Build VM starts, VB Studio installs the operating system and the software you've defined in the template on the VM. -. - VM's shape A shape is a template that determines the number of CPUs, amount of memory, and other resources allocated to a newly created instance. To learn more about shapes, see VM Shapes. -. When your organization's members create jobs, they simply associate the appropriate Build VM template with each job. The Build VM starts automatically when a build runs on it and stops after its wait time period. You can manually start and stop it too. When a build runs: - The build executor checks the job's Build VM template and then looks for the VM that's allocated to the template: - If a VM is available, the build executor immediately runs the build on the VM. - If all VMs are busy running builds of other jobs using the same Build VM template, the build executor waits until a VM becomes available and then runs the current job's build on it. - If a VM doesn't exist, the build executor reports an error. - If the build is the VM's first build since the VM's creation or its sleep timeout period, the build executor installs the software defined in the Build VM template before it runs the build. This takes time. - After installing the software, the build executor runs the job’s commands and actions. - After the build is complete, the executor copies any generated artifacts to the OCI Object Storage bucket or the OCI Object Storage Classic container. - The Build VM waits for some time for any queued builds. If no builds run during the wait time period, the Build VM uninstalls its software and stops. Build VMs in Virtual Cloud Network A Build VM runs in a Virtual Cloud Network (VCN). You can run Build VMs in the VB Studio's default VCN, or in your VCN. If you don't have a VCN or want to use the default option without any additional configuration, use the VB Studio default VCN. If you want Build VMs to access services that are running in your VCN, then you should run Build VMs in your VCN. Use VB Studio's Default VCN If you don't have a VCN to run Build VMs, use VB Studio's default VCN. It is automatically created for you when the first Build VM that uses it starts. VB Studio's default VCN, called vbs-executor-vcn, resides in the compartment you created and configured in Connect to Your OCI Account. When a Build VM that uses the default VCN starts, VB Studio checks its OCI compartment for the default VCN. If it doesn't exist, VB Studio creates a VCN called vbs-executor-vcn with CIDR block 10.0.0.0/16. If the VCN exists, VB Studio uses it to run your Build VMs. When VB Studio creates the default VCN, it also creates these components and adds them to the VCN: - An Internet Gateway - A Route Table that uses the Internet Gateway as the routing rule - A Security Rule with these rules: - Ingress: Allow TCP traffic on destination port 22 (SSH), 9003 (Executor agent debug), 9005 (VM agent debug), 9082 (Executor agent), and 9085 (VM agent) from source 0.0.0.0/0and any source port. - Egress: To any destination from any protocol - Three subnets, one for each availability domain. Their CIDR is set to 10.0.0.0/24, 10.0.1.0/24, and 10.0.2.0/24. As soon as the default VCN is available, you have full control over it and can modify it. You can add private subnets for your private services, add more public subnets or delete the existing subnets, modify security lists, and add or remove other components. Note: - If you modify the default VCN, make sure that at least one public subnet is available in the VCN. If there are no public subnets, Build VMs in the default VCN won't run and your builds will fail. - The default VCN is created once and continues to stay until it is deleted manually. - If your organization's members configure jobs that access services in the private or public subnets of the VCN, ask them to configure their jobs to access the services using private IPs or Fully Qualified Domain Name (FQDN). Run Build VMs in Your VCN If you run Oracle Cloud services in your VCN, you should configure your VCN to run VB Studio Build VMs so your services and Build VMs are in the same VCN. This lets Build VMs access your Oracle Cloud services easily without any complex networking configuration. Before you configure your VCN, make a note of these: - In your VCN, create a public subnet or configure an existing public subnet to allow inbound access from and outbound access to VB Studio. - When you're creating or configuring a public subnet, make sure it is regional. - Instead of modifying an existing security list's security rules, you should create a new security list for the public subnet. For the public subnet, create a security list and add ingress rules from source CIDR 0.0.0.0/0for VB Studio ports 22(SSH), 9082(Executor Agent), and 9085(VM Agent). This is required to allow VB Studio access the Build VMs in your VCN. - If your VCN isn't in the same compartment that VB Studio is in, make sure that the user whose OCID you've specified in Set Up the OCI Connection in VB Studio is assigned the use virtual-network-familypolicy for the VCN's compartment. This is required for networking permissions and builds to run in your VCN. This statement assigns the policy to the user's group: allow group <group-name> to use virtual-network-family in compartment <vcn-compartment-name> Here's an example of the use virtual-network-familypolicy added to the policies you created in Set Up the OCI Account. - Make sure that your VCN has a route table with a rule that allows Internet access. - To allow Build VMs access your private subnet's services and resources, configure the private subnet's security rules to allow incoming traffic from the public subnet used by Build VMs. - After adding Build VMs to your VCN, ask your organization's members to configure their build jobs to use the private IP addresses or the Fully Qualified Domain Name (FQDN) of services that are running in the VCN. Tell them not to use public IP addresses, because when Build VMs are in the same VCN as the service, public IP addresses will route the traffic outside the VCN, causing builds to fail. Build VMs run in a VCN's public subnet. This table describes what you need to do if you have a VCN. Create and Configure a Public Subnet in Your VCN Before you can run Build VMs in your VCN, you must first create a public subnet in your VCN with security rules that allow inbound access from and outbound access to VB Studio. - On the OCI console, in the upper-left corner, click Navigation Menu . - Under Core Infrastructure, select Networking, and then select Virtual Cloud Networks. - Under List Scope, select the compartment. - From the VCNs list, click the VCN's name. - Under Resources, click Security Lists, and then click Create Security List. - In Name, enter a name for the security list. - In Create in Compartment, ensure that the correct compartment is selected. - In Allow Rules for Ingress, click + Additional Ingress Rule and follow these steps: - In Allow Rules for Egress, click + Additional Egress Rule and follow these steps: - In Source Type, select CIDR. - In Source CIDR, enter 0.0.0.0/0. - In IP Protocol, select All Protocols. - (Optional) In Description, add a description. - Click Create Security List.After creating the security list, click its name to verify the ingress and egress rules you added. Here's an example of ingress rules: Here's an example of the egress rule: - Under Resources, select Subnets and follow these steps to create a public subnet:If you want to edit an existing public subnet, jump to the next step. - Click Create Subnet. - In Name, enter the subnet's name. - In Subnet Type, make sure that Regional is selected. - In CIDR Block, enter the subnet's CIDR block.Don't set it to 172.17.0.0/16as it's the default subnet allocated to Docker. - In Route Table, select the VCN's route table. - In Subnet Access, make sure that Public Subnet is selected. - In DHCP Options, select the VCN's DHCP options. - In Security List, select the security list you created in Step 6. - Fill in the other fields. Here's an example: - Click Create Subnet. - If you want to edit an existing subnet, follow these steps: Allow Build VMs to Access Your Private VCN Resources After adding a public subnet in your VCN, if you want Build VMs to access the resources and services (such as Java Cloud Service or a VM-based Database) running in the VCN's private subnet, configure the private subnet's security rules to allow incoming traffic from the public subnet used by Build VMs. For example, to allow Build VMs access Java Cloud Service running in a private subnet, configure the subnet's security list to add the Build VMs CIDR ranges to the Ingress rule associated with the JCS Admin port. - On the OCI console, in the upper-left corner, click Navigation Menu . - Under Core Infrastructure, select Networking, and then select Virtual Cloud Networks. - On the Virtual Cloud Networks page, click the VCN. - Under Resources, click Security Lists, and then click the private subnet's security list. - Click Add Ingress Rules.If you want to modify an existing rule, click the Actions icon (three dots), and then select Edit. - In Source Type, select CIDR. - In Source CIDR, enter the Build VMs public subnet's CIDR range. - In Destination Port Range, enter the service's port number. - (Optional) In Description, add a description.Here's an example of Java Cloud Service port 7002 with a source CIDR of 10.0.4.0/24: - Click Add Ingress Rules. - If required, repeat steps from 6 to 11 for each service's port. Create and Manage Build VM Templates You can create and manage Build VM templates from the VM Templates page in Organization Administration. Add and Manage Build VMs When you add a Build VM, you allocate a VM on the linked OCI Compute or OCI Compute Classic to run builds of jobs. To add Build VMs, you'll need these details: - Build VM template's name that defines the software to be installed on Build VMs. Make sure that the template has the correct software. - OCI region where you have authorization to add Build VMs. To learn more about regions and availability domains, see Regions and Availability Domains. - VM's shape. To learn more about shapes, see VM Shapes. - VM's VCN, if you want to run it in another VCN than the VB Studio's default VCN. To find more about VCNs and subnets, see VCNs and Subnets. Each build runs in one build executor, or one VM. You can build up to 99 builds in parallel using the same Build VM template. Tip: To minimize build execution delays, set the number of VMs of a specific Build VM template to the number of jobs that you expect to run in parallel using that template. If the VM quota is available, that number of Build VMs will be added to the Virtual Machines tab. You should also make sure that all VMs of a specific Build VM template run in the same VCN. If you add VMs in different VCNs, your builds might behave unpredictably. You can always return to the Virtual Machines tab to add or remove VMs, based on your actual usage. Note that the more VMs you have running at a specific time, the higher the cost. To minimize the higher cost, use the Sleep Timeout setting on the Virtual Machines page to automatically shut down inactive VMs. Software for Build VM Templates VB Studio offers various software packages in the Software Catalog of Build VM templates. Some software packages are available by default in each VM template. Default Software Packages These software packages are available by default in each Build VM template. You can't edit or remove these software packages from a VM template. Software Packages in the Software Catalog Here's a list of the software available in the VB Studio's software catalog. If multiple versions are available, you can add only one version to the VM template. Add Users to IDCS To add users to VB Studio and its projects, make sure they are added to IDCS and assigned appropriate VB Studio roles. If you want to federate with your existing identity provider, see Federating with Identity Providers. To add users manually to IDCS, follow these steps: - Open the Oracle Cloud Console page. - In the upper-left corner, click Navigation Menu . - Under Governance and Administration, select Identity, and then select Federation. - On the Federation page, click the identity service provider's link. - On the Identity Provider Details page, click Create IDCS User. - In the Create IDCS User dialog box, enter the new user's details and click Create. - To send the password reset instructions and URL to the new user, click Email Password Instructions. - Click close. - On the Identity Provider Details page, click the user's IDCS Username link. - On the User Details page, click Manage Service Roles. - On the Manage Service Roles page, search for the service with Developer Cloud Service description, click the Actions icon (three dots) and select Manage Instance Access. - On the Manage Access page, in the Instance Role column, select the role you want to grant to the user. Assign the DEVELOPER_ADMINISTRATOR role to users who can administer VB Studio. Assign the DEVELOPER_USER role to other non-admin users. A user must be assigned one of these two roles to access VB Studio. - Click Save Instance Settings. - On the Manage Service Roles page, click Apply Role Settings. For more details about adding users to IDCS and assigning them roles, see Managing Oracle Identity Cloud Service Users in the Console and Managing Instance Roles in the Console. To learn more about VB Studio IDCS roles, see IDCS Roles.: Manage Your Development Cycle After setting up VB Studio and adding users to IDCS, you should learn and then guide your organization's members about the development cycle in VB Studio. To find out more about VB Studio and its projects, see Get Started in Managing Your Development Process with Visual Builder Studio. Learn about how to create a project, administer a project, add users to it, create and manage issues and Agile boards, review source code with merge requests, create and manage build jobs and pipelines, and use other features of VB Studio._1<< - On the User Preferences page, click the General tab. - Select the Show News Banner on Organization and Project Home check box.
https://docs.oracle.com/en/cloud/paas/visual-builder/visualbuilder-administration/service-setup.html
CC-MAIN-2020-40
refinedweb
5,048
63.9
I use my own software, Epigrass, to run my geo-referenced dynamic populational models. Epigrass is great for running complex models but so far, it didn't help much when it came to representing the results in a nice way. I then decided to give Epigrass a major overhaul (which I hope to release soon) to include support to shapefiles (.shp). Shapefile is a very common map format supported by every GIS software I know. Thankfully, there is a great library for handling this type of files (and others too) which has bindings for Python. Its called OGR and is distributed as part of another library called GDAL (apt-get install python-gdal). The following code is taken straight from Epigrass-devel CVS tree (module epigdal.py) on sourceforge, so feel free to explore the rest of the code if you fell like. So, the OGR does the loading of vector maps (and any data associated with it) and make them available for manipulation in Python. import ogrThe code above takes care of extracting the first layer from "mymap.shp". A layer is a set of geometrical objects (points, lines or polygons), called Features. A Shapefile may contain many layers, if you want to find out about them, you can write something like this: map = ogr.Open('mymap.shp') layer = map.GetLayer(0) nlay = map.GetLayerCount() layer_list = [map.GetLayer(i) for i in xrange(nlay)] Or you might want to get the layer names: layer_namelist = [map.GetLayer(i).GetName() for i in xrange(nlay)] Fortunately my map had a single layer, so I just proceeded to get a hold of the features in the layers. In my case, I was only interested in Polygons, to calculate their centroids (geometric center). At this point, I will have to refer the reader to a link to the code since Blogger won't allow you to edit decently formatted code. So for the feature extraction code, look at lines 71-93 of this module. In that code, I iterate over the layer's features, and get the centroids from their geometries and save both the centroids and the geometry objects in dictionaries using the variable 'geocode' from the map's own database as key. Note the I do this only for type 3 geometries (polygons). My next step is then to generate another map layer with centroid data included. This time it will be a layer of points instead of polygons. See how I do that in lines 96-138. Now we have covered how to read a layer an how to create a layer. We have the necessary skills to move on to the main topic of this post, which is creating a layer in Google Maps/Earth from a layer derived from a shapefile, plus whatever data we may want to associate with it. Google use its own XML schema to represent GIS layers. It is called KML. I am not going to explain KML in detail here, try this tutorial or the Google documentation. I am going to create the KML directly using minidom from Python's standard library. Also, I am going to encapsulate the code into a class to better organize it. Look at class KMLGenerator which starts at line 259. To use this class you just call the method addNodes (passing a layer object taken from a shapefile as shown above, after instantiating the class, and then call writeToFile to write your KML file. As I create the polygons, I color them according to the values of one of the data fields of the layer. I use matplotlib cm class and rbg2hex function, tho choose the color and convert it to hex format. to finish it off, the required screenshot: Notice the polygons colored according to disease prevalence, and the comment which shows up in the pop-up balloon. I hope you enjoyed reading this post. Please post a comment if you have any further quastions or comments. 3 comments: Awesome! I've been looking for a way to do this. That looks realy helpful, its exactly what I am trying to do. Thanks. I'm going to have to get a friend with some comp sci experience to help me though since I am a bit of a newbie. Cheers. ~Michael Interesting post.
http://pyinsci.blogspot.in/2007/06/generate-google-mapsearth-layers-from.html
CC-MAIN-2018-09
refinedweb
717
72.56
One way of looking at extending the Nintex Workflow Platform is to focus on the smallest unit of processing: the function. This series of posts will look at how you can approach your function as a portable unit in the Nintex Workflow Platform. I'm going to look at defining the boundaries of your function, what you need to host your function, how the Nintex Workflow Platform can help delivery your function to workflow designers in your enterprise, and how you can set up the interfaces to your function to deliver your solution to targets inside and outside of your network. Table of Contents - Smallest Meaningful Process - Rise of the Lowly Console Application - Functions with RESTful Endpoints are Microservices - Where to Put Your Portable Function In the Nintex Workflow Platform - Function to Microservice with the Nintex Platform Smallest Meaningful Process You can think of your function has a self-contained thing in the Nintex Workflow Platform. As a self-contained thing it is portable. All that you need to execute your function is a runtime. The function can execute inside of the Nintex Workflow Platform within a workflow, or it can run outside of the Nintex Workflow Platform as a web service that can be incorporated back into your workflows using the Web request action. Inside the Nintex Workflow Platform If you're working with a SharePoint instance that doesn't have an internet connection (for security reasons or maybe it is beyond the reach of the Internet) you can place your function into the SharePoint runtime. You have two options for this: inline functions or custom workflow actions. Inline functions In any text input that supports Inserting reference fields, an inline function can be entered that will resolve at workflow runtime. You can create your function, add your function the DLLs that contain your function namespace to your SharePoint environment, and then register the function using the Nintex Workflow Administration tool. Workflow actions Workflow actions appear in the toolbox in the workflow designer. A builder of workflows can drag and drop the action onto the control flow of the workflow, double-click the action to configure the action, and then access the action when running the workflow. You can add your function to your a custom workflow action project using the Nintex Workflow SDK and give builders of workflows access to the processing of your function within their workflows using the workflow designer. Outside Nintex Workflow Platform You can host your function external to the SharePoint runtime. For instance you can provision your function as an endpoint to a web service running in the cloud, and then you can access the endpoint using the the Web request action. This has the benefit of locating your code in a single accessible location. If you maintain the endpoint contract, you can update your code, or even swap out the entire code base of your function without disturbing the delivery of your function. From your service, users can access the processing of your function from each of Nintex workflow products, such as Nintex Workflow for Office 365 and Nintex Workflow Cloud, using the Web Request action. Wrapping Your Function and Supporting Complexity Another factor in considering in the portability of your function is that you have two main ways of adding and then wrapping any complexity supporting your function: - using user defined actions to contain a workflow supporting your function - adding a RESTful interface to your a workflow supporting your function With Nintex Workflow 2013/2016, you can place a container around your custom action and its supporting actions as a user defined action that will show up in the Workflow Designer like any other action. You can add a custom action containing your function to a workflow that may add a Beacon action for monitoring the use of your function using Nintex Hawkeye, a function that writes to a log, and logic to handle errors or unexpected events.To interact with the user defined action, a user just needs to know the input parameter and the expected return. With Nintex Workflow 2013 Enterprise Edition or Nintex Workflow Cloud, you can add a RESTful endpoint that can accept a JSON payload contain workflow variables. The workflow's complexity supporting your function can be concealed behind the REST interface. To interact with the external-start enabled workflow, a user just needs to know the endpoint, the input payload, and the expected return. Let's turn back to the console application as the place to develop your a process that you would like to share with users of your custom inline function, actions, and services. Rise of the Lowly Console Application A function in its most primitive form is a fragment of code that takes an input, performs some processing on this input, and then produces a return. Nearly every book on the syntax of a computer programming language begins with the "Hello Word" example. Often in the introduction on how the language implements functions, the primer includes a variety of the hello world function where you pass input into the function, and then produce the output, of "Hello <parameter>." For example in JavaScript this might look like: var hello = function (input) { var output = "Hello " + input; alert(output); } Function Console Application A console application can be run on your local machine. In our example, we are looking at a console application that converts JSON to XML, or XML to JSON. The console application takes in a local file, and save a local file. The meat of the functioning is contained in its own class. In this sense the console application focuses on the delivery of a single bit of processing. It may have supporting methods, but is basically a stateless, static 'chunk' of processing. Input goes in, is processed, and the product of the processing returned. In creating your function, encapsulate in a class. This is not full on object orientation, but in this case you are not creating something as thoughtful or heavy weight as an object model, but rather creating a tiny fragment of processing in response likely to some need. The mode here is a more retrograde, back to the days of BASIC's GOTO and essentially procedural. You are keeping it simple. Complexity will come in this approach in at higher-levels where it may be more easily managed. Set Up - Visual Studio Steps - Create a console application. - Add the Newtonsoft JSON.NET library to the project references. - Create a class with a static method that accepts a value and produces a return. - Instantiate the class in Program.cs in the Main() method. - Pass the input into your object, and then handle the return. Code (C#) Sample for a Function in a Console Application Program.cs using System; using System.IO; namespace JsonConverter { class Program { /// <summary> /// A console application that converst from XML to JSON, or form JSON to XML. /// </summary> static void Main() { string inputtext = File.ReadAllText(@"/input.txt"); // add pathname to input file var conversion = new JsonConverterAction(); string convertedtext = conversion.ConvertIt(inputtext); Console.Write(convertedtext); File.WriteAllText(@"/output.txt", convertedtext); //add pathname to output file } } } JsonConvertAction.cs using System; using System.IO; using System.Xml; using System.Xml.Linq; using Newtonsoft.Json; /// <summary> /// Converts text form JSON to XML or from XML to JSON. The converter detects the first character in the string and then converst to XML or JSON. /// <parameter>Input string (in either JSON or XML format.</parameter> /// </summary> namespace JsonConverter { class JsonConverterAction { public string ConvertIt(string inputText) { var convertedtext = inputText.Trim(); var firstChar = inputText[0].ToString(); switch (firstChar) { case "1": case "<": convertedtext = XmlToJson(convertedtext); break; case "2": case "{": convertedtext = JsonToXml(convertedtext); break; case "3": case "[": convertedtext = JsonToXml(convertedtext); break; default: convertedtext = JsonToXml(convertedtext); break; } return convertedtext; } //Converts a JSON string to XML. public string XmlToJson(string inputXMLasString) { try { var outputJsonAsString = ""; var xmldoc = new XmlDocument(); xmldoc.LoadXml(inputXMLasString); outputJsonAsString = JsonConvert.SerializeXmlNode(xmldoc); return outputJsonAsString; } catch (Exception ex) { var expectmessage = ex.ToString(); var returnmessage = "JSON not valid. " + expectmessage; return returnmessage; } } //Converts an XML string to JSON. public string JsonToXml(string inputJSONasString) { try { var outputXmlAsString = ""; XNode node = JsonConvert.DeserializeXNode(inputJSONasString, "Root"); var stringWriter = new StringWriter(); var xmlTextWriter = new XmlTextWriter(stringWriter); node.WriteTo(xmlTextWriter); outputXmlAsString = stringWriter.ToString(); return outputXmlAsString; } catch (Exception ex) { var expectmessage = ex.ToString(); var returnmessage = "JSON not valid. " + expectmessage; return returnmessage; } } } } Division of Concern When developing an application, programmers often resort to the console application. We craft our function to operate in a predictable way and make sure it does what it is supposed to do: when you feed it a set of inputs, the function produces a range of usable outputs. We may optimize performance at this level, as well, or refactor the function as we refine our understanding of the problem and how the function delivers a solution. We want the function to produces what we expect it to produce. Developing a complex program is typically an aggregation of problems solved at this foundational level. More abstract constructs such as abstraction, encapsulation, and even object orientation work to extend and reuse solutions created at this primary level. Beginning with a focus on a function that will work well as a portable function in the Nintex Workflow Platform, that is a static function that takes in a single input and produces a single output, helps you to maintain a focus on the division of concern. This function does this. Once this concern has been addressed in code, you can move to higher-level logic using Nintex Workflow. You can then add supporting logic such as handling state, errors, and monitoring tasks with other actions in the Nintex toolbox. You can wrap this logic in a User Defined Action, or provide an interface through External Start. Or you can place your function in the cloud add your supporting logic in Nintex Workflow Cloud, and make it accessible via an External Start Event. When you couple this with cloud products such as Microsoft Azure's Functions (aka serverless functions), you have a tool set that allows you to focus your problem solving skills on first creating a function that produces useful and expected outputs, and then to re-purpose this function to accomplish more complex tasks. And then when you begin to create a number of functions that have RESTful endpoints, you are beginning to create something that resembles a microservices platform. Functions with RESTful Endpoints are Microservices A microservice is an element of a microservices architecture. A single service is a block in this complex pattern. At a basic level, a Nintex Workflow with a RESTful endpoint wrapping a function is an independent process accessible through a language-agnostic API. You can fire your function using Postman, JavaScript, Python, even a commandline utility such as cURL. As long as the interface remains the same, you can update the underpinnings or even refactor the entire workflow. The single service can serve as a modular block in a complex structure that emerges organically and rapidly from solving stakeholder problems and reusing your existing solutions. A discussion of microservices typically talks about how to transition from a monolithic system that already exists. A process of building upward from function, to restful endpoints, to defined services may be thought of as a system of organic growth. Where to Put Your Portable Function In the Nintex Workflow Platform Function to Microservice with the Nintex Platform This series of posts will look at: - Adding your function to Nintex Workflow 2013 as an inline function - Adding our function to Nintex Workflow 2013 as a custom workflow action - Wrapping your action in a user-defined action in Nintex Workflow 2013 - Adding a RESTful endpoint to a Function in Nintex Workflow 2013 - Adding Your Function to the Cloud as a Serverless Function in Azure - Adding an External Start Event to your function Nintex Workflow Cloud
https://community.nintex.com/community/dev-talk/blog/2017/02/18/function-to-microservice-with-the-nintex-platform
CC-MAIN-2019-04
refinedweb
1,970
51.48
Welcome to another Flask web development tutorial, in this tutorial we're going to be discussing how to utilize Flask-Mail for emailing from within your app. To start, we need to grab Flask-Mail: sudo pip install Flask-Mail. Next, from within your __init__.py, add the following import to the top: from flask_mail import Mail, Message Next, along with the app definition, we add the following: app = Flask(__name__) app.config.update( DEBUG=True, #EMAIL SETTINGS MAIL_SERVER='smtp.gmail.com', MAIL_PORT=465, MAIL_USE_SSL=True, MAIL_USERNAME = 'your@gmail.com', MAIL_PASSWORD = 'yourpassword' ) mail = Mail(app) If you are not using gmail, you will need to use a different mailserver. Now let's make a quick mail sending function in our __init__.py file: @app.route('/send-mail/') def send_mail(): try: msg = Message("Send Mail Tutorial!", sender="yoursendingemail@gmail.com", recipients=["recievingemail@email.com"]) msg.body = "Yo!\nHave you heard the good word of Python???" mail.send(msg) return 'Mail sent!' except Exception, e: return(str(e)) Next, visit this URL to send the email. Using a gmail may require you to enable insecure apps in your account. You will get an email from Google if that is the case when you actually run the application, and you will also get a return on the page that looks like: (534, '5.7.14 Please log in via your web browser and\n5.7.14 then try again.\n5.7.14 Learn more at\n5.7.14 s84sm12251698qki.14 - gsmtp'). If you get this with Gmail, go to "My Account," then "Sign-in and security," then the "connected apps & sites." On this page, you are looking for the "Allow less secure apps:" and then turn that on to allow the emails to go through. Sometimes (usually), this STILL wont be enough. You may send one email successfully, but then the others will error again, and the error will contain: (534, '5.7.9 Please log in with your web browser and then try again. Learn more at\n5.7.9 f189sm12187803qhe.1 - gsmtp'), where what matters it the URL:. Head to here to turn off captcha. That should keep you all set, but just keep reading the errors if you keep getting them. Sometimes, you still wont get anywhere. Generally, at least from my findings, this happens when you use a server that has no reputation as a site. If you've done all of the steps for Google, and you still get errors, it might be your server. If you don't have an associated website, this might be the reason. I assume this is to fight people who are trying to spam, but I really do not know. With this, we can send simple emails. We can actually take this further, sending more than simple text-based emails. As an example, I will share my "forgot password" emailing snippet: msg = Message("Forgot Password - PythonProgramming.net", sender="pythonprogrammingnet@gmail.com", recipients=[email_addr]) msg.body = 'Hello '+username+',\nYou or someone else has requested that a new password be generated for your account. If you made this request, then please follow this link:'+link msg.html = render_template('/mails/reset-password.html', username=username, link=link) mail.send(msg) Here, you can see there is a text-based version, with some variables and new lines. You can also see an additional attribute, msg.html. Here, we can actually use a template. My template here is: <p>Hello {{username}},</p> <p>You or someone else has requested that a new password be generated for your account. If you made this request, then please click this link: <a href={{link}}><strong>reset password</strong></a>. If you did not make this request, then you can simply ignore this email.</p> <p>Best,</p> <p>Harrison</p> <p>Harrison@pythonprogramming.net</p> You can get far more fancy with your HTML, but this is a nice basic example. Should someone be using an email that wont support the HTML, the plain-text body can be shown. Learn more about Flask-Mail here. Otherwise, in the next tutorial, we're going to be talking about how to return files with Flask.
https://pythonprogramming.net/flask-email-tutorial/
CC-MAIN-2022-40
refinedweb
693
67.86
BPEL M3 information What's in M3 ? The biggest new item in M3 is the BPEL 2.0 validator. Now your BPEL files will get checked for errors. In addition, we have moved very closely to BPEL 2.0 compliance in several ways outlined below. Read more to see what is in M3. M3 has been built and further milestones will continue to exist on top of WTP2.0 This eliminates an issue for developers to use the BPEL EMF model in their downstream plug ins and various nasty issues around model code regeneration. There had been several improvements to the editor. You can now hide the palette or place it in the palette view just like any other standard GEF editor does. A duplicate command had been added to the edit action so that you can quickly select and duplicate entire sections of diagrams for fast diagram creation. Selection and multi-selection speed has been improved. Some undo /redo problems were fixed too. You can now validate a BPEL project. There are currently 2 hooks into validation. There is a builder which is attached to the BPEL project and there is a WST extension point implementation for BPEL validation. - Validation in the editor - The problems view - Validation settings - Validation in the menu The if BPEL 2.0 activity has been added and the corresponding switch activity removed. The compensateScope BPEL 2.0 activity has been added. A integrated WSIL browser has been added to help you discover web services available for integration using BPEL. You should be able to plugin in your own WSIL documents for viewing as well. - WSIL preferences in Eclipse Preferences - WSIL browser in import dialog. You can copy/cut/paste from one BPEL editor instance to another. You can copy entire sections of BPEL code. Multiple selections also work. You can also copy variables, partner links, etc and paste them in the right location. Everything that you copy also appears in source form in the system clipboard. The reverse is also true. You can paste BPEL source code from the clipboard to the visual editor "where it makes sense". Now if you see any BPEL code anywhere, you can clip it, and paste into the editor "where it makes sense". Try pasting this little nuggest "some place". <sequence name="main" xmlns: <receive name="receiveInput" operation="initiate" partnerLink="client" portType="ns:Traffic2" variable="input"/> <assign name="Assign" validate="no"> <copy> <from part="payload" variable="input"> <query><![CDATA[/tns:input]]></query> </from> <to part="hwynums" variable="trafficRequest"/> </copy> </assign> <invoke inputVariable="trafficRequest" name="Invoke" operation="getTraffic" outputVariable="trafficResponse" partnerLink="traffic" portType="ns0:CATrafficPortType"/> <assign name="Assign1" validate="no"> <copy> <from part="return" variable="trafficResponse"/> <to part="return" variable="trafficResponse"/> </copy> </assign> <invoke inputVariable="output" name="callbackClient" operation="onResult" partnerLink="client" portType="ns:Traffic2Callback"/> </sequence> Namespaces,namespaces, namespaces. To migrate closer to 2.0 compliance we have moved to 2.0 proposed namespaces for BPEL, PartnerLinks, VariableProperties. Also, the templates are using the new namespaces as well. UI Support for documentation elements had been added. UI Support for variable initialization had been added. Variable initialization is 2.0 BPEL feature and takes on the form of a virtual copy rule. We have spent some time cleaning out bugs and are hoping to get the count closer to 0 in the future. - Bugs fixed so far. - Bugs still needing fixing. While not an official part of the release, the folks from the Univ. of London had implemented their own Active BPEL runtime for the editor. More on this here:
http://www.eclipse.org/bpel/users/m3.php
CC-MAIN-2015-27
refinedweb
590
60.01
Important: Please read the Qt Code of Conduct - QT module compile problem. I am using ubuntu 18.04 for development, but serialbus module do not support this platform. So , i guess we can compile the qt/serialbus by myself. I use qmke & make one by one step. But error meets below: Makefile:729: recipe for target '.obj/qcanbusdevice.o' failed In file included from /home/jason/Downloads/qtserialbus/src/serialbus/qcanbusdevice.cpp:38:0: /home/jason/Downloads/qtserialbus/src/serialbus/qcanbusdevice_p.h:43:10: fatal error: private/qobject_p.h: No such file or directory #include <private/qobject_p.h> ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. /home/jason/Downloads/qtserialbus/src/serialbus/qcanbus.cpp:46:10: fatal error: private/qfactoryloader_p.h: No such file or directory #include <private/qfactoryloader_p.h> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. make[2]: *** [.obj/qcanbusdevice.o] Error 1 ... ... Nothing tips with the module source on serialbus Anyone who can help , appreciate! - jsulm Qt Champions 2019 last edited by @Pangolin said in QT module compile problem.: but serialbus module do not support this platform That would be new to me. Why do you think it is not supported? I think the OP means that there's no serialbus module in Ubuntu's Package archive if he installs Qt from apt. But the module is available if installing by official installer. - jsulm Qt Champions 2019 last edited by jsulm @Bonnie @Pangolin My Ubuntu 18.04 has libqt5serialport5 and libqt5serialport5-dev Or is this serialbus something different? @jsulm I'm not sure but I think serialport module should be different from serialbus? Since you'll get both dlls in Windows. @jsulm said in QT module compile problem.: @Bonnie @Pangolin My Ubuntu 18.04 has libqt5serialport5 and libqt5serialport5-dev Or is this serialbus something different? Yes it is a different module: @Pangolin to compile a Qt module you need to compile base Qt as well - at the very least the Qt base repo. @jsulm Good to see you sir . As i know qt serialbus module only support the platform higher than 18.04 not include 18.04. Many thanks to you. @sierdzio So appreciate to you sir. I m always think why we can not build qt module separately and more wondering that no compile tips for qt module source at all ! @Pangolin said in QT module compile problem.: I m always think why we can not build qt module separately It's all because of configurestep, which is done by qtbaserepo - it sets all build settings for all other modules. Some modules can be build separately. And I think that starting with Qt 6, all modules will be buildable separately. @sierdzio Thank you so much sir. I would like to meet QT6 and if that done will bring us so many flexible features. It will be released in December. But first few releases will be quite, well, incomplete. A lot of modules are missing in 6.0 and will come back around 6.1 or 6.2 (so in December next year).
https://forum.qt.io/topic/119948/qt-module-compile-problem
CC-MAIN-2020-45
refinedweb
494
53.37
Details Description The Schema section of v42 of says: "The openejb-jar.xml deployment plan is defined by the openejb-jar-2.1.xsd schema located in the <geronimo_home>/schema/ subdirectory of the main Geronimo installation directory. The openejb-jar-2.1.xsd schema is shown here: 1. All versions of openejb-jar in this text should be 2.2, not 2.1 2. The namespace URL should be. gopenejb is an error. Activity - All - Work Log - History - Activity - Transitions I put a fix in trunk in rev 691702. David Blevins suggested putting the schema somewhere in plugins/openejb, so I put it in plugins/openejb/openejb/src/main/resources/openejb-jar-2.2.xsd. can you also apply to branches/2.1 (2.1.4-SNAPSHOT) and branches/2.1.3? added to braches/2.1 with rev 691746, and branches/2.1.3 with rev 691747. I committed some updates so that the openejb-jar-2.2.xsd is installed via plugin infrastructure instead of copied during a build. Also, added Apache license header to the schema file. Committed the changes to trunk (Revision: 693554), branches/2.1 (Revision: 693567) and branches/2.1.3 (Revision: 693572). Ted, is there anything else that needs to be done for this bug or can it be resolved? Thanks Jarek. I like your solution better than my original one. I second your motion to close this issue. Those opposed, reopen the issue! With v45 of that wiki page, I have made the appropriate changes. I also changed a gpkgen-2.1 to pkgen-2.1
https://issues.apache.org/jira/browse/GERONIMO-4276
CC-MAIN-2017-34
refinedweb
264
72.22
Code runs as intended on the first time through but if I go to run it again it crashes automation desk. If I run it from within Python there are no errors and I can run it repeatedly. "AD." are AutomationDesk variables that do not work outside of automationdesk. for testing I comment that part out and just use DEBUG instead of AD.DEBUG. Automation desk is a software by dSpace. from Tkinter import * import Tkinter,tkFileDialog,tkMessageBox from datetime import datetime #print time now = datetime.now() print "Test Start time is: "+'%s:%s:%s' % (now.hour, now.minute,now.second) #hide the main window root = Tk() root.withdraw() #Debugger option debugYN = tkMessageBox.askyesno("Debug", "Would you like to debug?") if debugYN == True: _AD_.DEBUG = 1 print "Debugging enabled" else: _AD_.DEBUG = 0 #File name selection file = tkFileDialog.askopenfilename() if file != None and debugYN == True: print file _AD_.DFCxlsPath = file if _AD_.DEBUG == 1: now = datetime.now() print "Select XLS & Debug Completed at "+'%s:%s:%s' % (now.hour, now.minute,now.second) root = None #root.destroy() del file del debugYN #remove now here because no matter what we print the start time del now I ran into the same issue when trying to use Tkinter in AutomationDesk, so I contacted them for support. Here is their official response: "we do not recommend using Tkinter within an AutomationDesk Exec block. We instead recommend using the internal 'Dialogs' library which you can access from the Library Browser in AutomationDesk. Furthermore Tkinter is not thread-safe. Please have a look into the following documentation on your PC: C:\Program Files (x86)\Common Files\dSPACE\HelpDesk 2014-A\Print\AutomationDeskGuide.pdf > Troubleshooting > Using Tkinter The cause is a thread problem in the interaction between Tkinter and Python 2.7. There are other reports of this problem on the Internet. e.g.:" Unfortunately, the Dialogs library is not very powerful and I've had difficulty finding good documentation for it.
https://codedump.io/share/OSRwlHn2SO47/1/python-tkinter-crashes-automation-desk-on-second-run
CC-MAIN-2019-09
refinedweb
325
60.72
scale.py: 16 points A scale is a sequence of notes, defined by the intervals between them. For example, the major scale is defined by the 7 intervals (and hence 8 notes) (2,2,1,2,2,2,1), that is, there are 2 half tones between the first and second notes, between the second and third notes, but a single half tone between the third and fourth notes, and so on. The “C Major Scale” is the sequence of notes starting at C and continuing to D, E, F, G, A, B, C as determined by the major scale intervals. The D major scale is the major scale starting at D and continuing to E, F#, G, A, B, C#, and D. The A major scale is the sequence of notes create a WAV file with a scale specified by command-line arguments. In particular, we’d like to be able to run: python scale.py -3 M to create an A major scale, python scale.py 0 N to create a C minor scale, or python scale.py 4 B to create a blues scale in E. In particular, you will pass as command-line arguments the tonic note (in its half-tone offset from middle C) and which scale to play (as a character: M for major, N for minor, and B for blues). How can your program make use of the arguments you add after the program name? If you add: import sys at the beginning of your program, you’ll get access to the variable sys.argv, which is a list of the arguments passed to Python from the shell. The first of these is always the name of the program itself. But if you were to run: python scale.py 4 B and that program included the statement print(sys.argv), we’d get as output: ['scale.py', '4', 'B'] Given this (and possibly judicious use of the int() function), you should be able to get all the input you need from command-line arguments. To make use of your SoundWave class, include: import soundwave at the top of your file. The bulk of this file should be a main() function which will first gracefully handle invalid input from the user by catching exceptions, reporting the error, and quitting the program. Closing a Program Previously, when you’ve wanted to quit a program due to faulty user input, you might have wrapped your code in an if statement or used an empty return within your main() function. Now that we know about the sys module, we can instead use the command exit as follows, which will quit the program entirely. sys.exit(-1) You should then also declare a dictionary intervals, similar to that below: intervals = {'M':[2,2,1,2,2,2,1], 'N':[2,1,2,2,1,2,2], 'B':[3,2,1,1,3,2]} Each value list corresponds to the number of half tones between successive notes in the Major, Minor, and Blues scales respectively. This should make building scales cleaner. Next, you should create a new SoundWave object at the half tone chosen by the user in the command-line arguments. Then, use a for loop to create another SoundWave for the next note in the scale (using the intervals corresponding to the type of scale the user desires) and call extend() to add this new note to the first one you created. By the end of the for loop, you should have the entire scale in a single SoundWave object. Finally, call the save method on that SoundWave object and pass in “scale.wav” as the name of the WAV file to be created. After running your scale.py program, you should be able to click on the scale.wav file and play it to hear the scale you created (similar to the other parts of the lab). If you encounter a 0 second long scale, remind yourself about the duration parameter to the constructor for SoundWave.
https://www.cs.oberlin.edu/~cs150/lab-8/part-5/
CC-MAIN-2022-21
refinedweb
670
69.31
Board index » VC.Net All times are UTC(); #include <c.h> bool C::VirtualFunction() { return false; #include <c.h> #include <stdio.h> void main() { C* c = new C(); if (c->VirtualFunction()) printf("true\n"); else printf("false\n"); Yes, it is a known bug. I've reported this bug couple of months ago and the response below shows the work around (I've tested it and it works). I've also been told that this problem will be fixed in the next release but I've tested on Everett and the problem is still there (when will this be fixed????). >This is a known bug. >Workaroud is to set the EAX register to 255 or less before the return >'false' from unmanaged to managed code (one way to do this): >#pragma unmanaged >int ForceEAX() >{ > //return 256; //A::F() returns incorrect true > return 255; //A::F() returns correct false >} >bool A::F() >{ > ForceEAX(); > return false; >} >The lower byte of EAX is handled properly, it's only >Hope this helps. >Thank you, >Bobby Mattappally >Microsoft VC++/C# Team >This posting is provided "AS IS" with no warranties, and confers no rights. >-----Original Message----- >Let's say my main is compiled as managed (/clr). From it, >I call a function in another file compiled as not managed >(without /clr). When the function I call is virtual AND >its return type is bool, even if the function returns >false, the calling one always receives true ! (); >}; >==== c.cpp ===== >// Compile without /clr >#include <c.h> >bool C::VirtualFunction() >{ >return false; >} >==== test.cpp ===== >// Compile with /clr >// will print "true" !!! (instead of "false") >#include <c.h> >#include <stdio.h> >void main() >{ >C* c = new C(); >if (c->VirtualFunction()) > printf("true\n"); >else > printf("false\n"); >} >. > I've also been told that this problem will be fixed in the next release > but I've tested on Everett and the problem is still there (when will this > be fixed????). I regret that this hasn't been addressed sooner. -- Brandon Bray Visual C++ Compiler This posting is provided AS IS with no warranties, and confers no rights. 1. Fatal Error C1010 in Mixing Managed C++ and Unmanaged C++ Code 2. System.ExecutionEngineException - mixing managed/unmanaged code 3. Mixing Managed and Unmanaged code 4. Overhead of mixing managed/unmanaged code 5. Debugging mixed managed/unmanaged c++ code 6. Possible vc++ bug: using c++ references to managed objects in unmanaged code 7. bug for transition of bools between managed and unmanaged code 8. mixing unmanaged and managed C++ 9. Try to get DirectX started in the Mixed Managed and Unmanaged C++ Programming 10. VC7++ - Detection of memory leaks, mixed DLL (unmanaged/managed) 11. Mixing Managed and Unmanaged C++ 12. Referencing data from unmanaged code to managed code in C++ Wrapper class
http://computer-programming-forum.com/7-vcdotnet/0ce4a312ad9f0db3.htm
CC-MAIN-2019-18
refinedweb
458
67.45
Compareto and equals method Difference between CompareTo and equals method. siva - Oct 21st, 2017 equals() method is derived from Object class which is a super class in java indirectly and this method is used to verify the content of the two objects if same return true otherwise return false. In ... Bharat - Oct 1st, 2012 CompareTo method which is present in the String class is used to check the two strings. It checks each character with the other String and if found equals then returns 0. Else negative value or positi...... Focus Use of Focus() method in asp.net Sandhya.Kishan - Jul 5th, 2012 The focus() method sets focus to the current window.When web page is loaded you can use a BODY onload event and javascript client code to set focus method(). Example The above example assumes a web .... Driver manager What is driver manager? Sandhya.Kishan - Jun 19th, 2012 The Driver Manager is a library that manages communication between applications and drivers.The Driver Manager is used solves a number of problems related to determining which driver to load based on a data source name, loading and unloading drivers, and calling functions in drivers. dev patel - Jun 16th, 2012 In jdbc- object which can connect java application to a jdbc driver that is called driver manager. 19th, 2012 1.output cache extensibility 2.session state compression 3.routing in asp.net 4.increased URL character strength 5.new syntax for Html Encode 6.View State mode for individual controls MVC Design pattern what is the difference between MVC1 and MVC2 in j2EE? Read Best Answer Editorial / Best AnswerSandhya.Kishan - Member Since Mar-2012 | May 14th, 2012 1.MVC1 consists of Web browser accessing Web-tier JSP pages. The JSP pages access Web-tier JavaBeans that represent the application model, and the next view to display is determined by hyperlinks selected in the source document or by request parameters. 2.MVC1 is page-centric design, meaning any JSP page can either present in the JSP or may be called directly from the JSP page. 3.MVC1 Combines both presentation logic with the Business logic. 4.MVC1 we can have multiple controller servlets. 1.MVC2 introduces a controller servlet between the browser and the JSP pages. The controller centralizes the logic for dispatching requests to the next view based on the request URL, input parameters, and application state. 2.MVC2 removes the page-centric property by separating Presentation, Control logic and Application state. 3.MVC2 can have only one controller servlet . Sandhya.Kishan - May 14th, 2012 1.MVC1 consists of Web browser accessing Web-tier JSP pages. The JSP pages access Web-tier JavaBeans that represent the application model, and the next view to display is determined by hyperlinks sele... Testing Steps - From which phase the testing should be started ? As well as is there having any global standard testing phase which should be sequential. Like Unit testing - Module Testing ....so on. Sandhya.Kishan - Jun 12th, 2012 Testing starts at the requirement phase of the SDLC and continuous till the last phase of the SDLC. Steps involved in testing 1.Static testing includes review of documents required for the software d... Time Issues Sandhya.Kishan - May 26th, 2012 Answer is c)6 11AM - 5PM = 6 hrs rain increased 1.25 every 2 hrs 3. 3*1.25 = 3.75 total rain = 2.25 + 3.75 = 6 Write a test case for Fibonacci series? Sandhya.Kishan - Jun 25th, 2012 Test case for fibonacci series can be 1.When an zero is entered it should return a zero. 2.When a negative integer is entered it should not accept the value and should return an error msg. 3.When a p.... BaselineTesting What is Baseline testing? is it same for web and other type of testing? Sandhya.Kishan - Mar 17th, 2012 Baseline testing are testing standards to be used at the starting point of comparison within the organization.It is a test which is taken before any activity or treatment have occurred. Requirement specification validation is a baseline testing. Why type of testings are available for VisualStudio 2010 ? Sandhya.Kishan - Mar 17th, 2012 Some types of testing available are: 1.ordered testing 2.unit testing 3.manual testing 4.load testing 5.coded UI testing. What is the default wait time in Silk test? Sandhya.Kishan - Jun 5th, 2012 The default wait time in silk test is 10 seconds. What is VLAN ? what is vlan in vio server in aix ? What is its main purpose ? Sandhya.Kishan - Jul 11th, 2012 VLAN stands for virtual LAN,it is a broadcast domain created by switches.With VLAN’s, a switch can create the broadcast domain.The purpose of VLANS is to improve network performance by separating large broadcast domains into smaller ones. inverse of matrix program to find inverse of nth order square matrix?(c++) Sandhya.Kishan - May 29th, 2012 Void trans(float num[25][25],float fac[25][25],float r) { int i,j; float b[25][25],inv[25][25],d; for(i=0;i Write a program to identify a duplicate value in vector ? Sandhya.Kishan - Mar 20th, 2012 "c void rmdup(int *array, int length) { int *current, *end = array + length - 1; for (current = array + 1; array < end; array++, current = array + 1) { while (current < ... How would you test in cloud ? Sandhya.Kishan - Jun 12th, 2012 By understanding a platform providers elasticity model/dynamic configuration method we can test in cloud. What do you mean by package access modifier? Sandhya.Kishan - Mar 10th, 2012 Access modifier are used to implement encapsulation feature of oops.There are 3 access specifiers namely Private: The current class will have access to the field or method. Protected - The current cl... Linked list in java How to implement reverse linked List using recursion? Sandhya.Kishan - Apr 11th, 2012 The program is List* recur_rlist(List* head) { List* result; if(!(head && head->next)) return head; result = recur_rlist(head->next); head->next->next = head; head->next = NULL; return result; } void printList(List* head) { while(head != NULL) { std::cout How can you read a SOL file using JAVA script? Sandhya.Kishan - Mar 7th, 2012 The methods IloCplex.readSolution and IloCplex.writeSolution is used to read a sol file in java script....... Iterative Algorithm design an iterative algorithm to traverse a binary tree represented in two dimensional matrix Sandhya.Kishan - Jul 16th, 2012 A binary tree can be traversed using only one dimensional array. InOrder_TreeTraversal() { prev = null; current = root; next = null; while( current != null ) { if(prev == current.parent) { prev = cu... Sigbus error ? what is sigbus error Sandhya.Kishan - Mar 10th, 2012 When a bus error occurs a signal is sent to the processor called as sigbus signal.The constant for sigbus is defined in header file signal.h.Sigbus error is thrown when there is improper memory handling. What do you mean Inscope and Outscope Kumar - Jul 14th, 2014 InScope - What are all testings we are going to conduct for the App. (Like - Functional testing, Regression Testing, Load Testing...etc.) Outof Scope - What are all the testings we are NOT going to c... Sandhya.Kishan - Mar 19th, 2012 We can define scope by defining deliverable, functionality and data and a;so by defining technical structure. In-scope are things the project generates internally e.g. Project Charter, Business Requir... Raise application error can we raise_application_error in exception block?? if we use what will happen? Sandhya.Kishan - Jun 18th, 2012 Whenever a message is displayed using RAISE_APPLICATION_ERROR, all previous transactions which are not committed within the PL/SQL Block are rolled back automatically . RAISE_APPLICATION_ERROR is use... Error Handling What is an error handling framework? Sandhya.Kishan - Mar 12th, 2012 Error handling framework indicates serious problems that a reasonable application should not try to catch. Most such errors are abnormal conditions.. code switching and code mixing what is the difference between code switching and code mixing? Sandhya.Kishan - Apr 27th, 2012 Concurrent use of more than one language in the same sentence of a conversation is known as code switching as Code mixing refers to mixing of two or more languages in a speech.It occur within a multilingual setting where speakers share more than one language. To print unique numbers eliminating duplicates from given array write a java code to print only unique numbers by eliminating duplicate numbers from the array? (using collection framework) Sandhya.Kishan - May 26th, 2012 import javax.swing.JOptionPane; public static void main(String[] args) { int[] array = new int[10]; for (int i=0; i. Command routing in MDI What is command routing in MDI Sandhya.Kishan - Mar 20th, 2012 Command routing is passing commands to its targeted objects.When a command is routed, it goes to the main frame. From the main frame, it is routed to the child frame of the active view; it is then rou... What is the difference between an image and a map Sandhya.Kishan - Apr 11th, 2012 Image:An image is an exact replica of the contents of a storage device stored on a second storage device. Map:A file showing the structure of a program after it has been compiled. The map file lists ... how to capture webtable values Read Best Answer Editorial / Best AnswerSandhya.Kishan - Member Since Mar-2012 | Apr 11th, 2012 By using the function getroproperty("field name") we can capture webtable values. Sandhya.Kishan - Apr 11th, 2012 By using the function getroproperty("field name") we can capture webtable values. Artificial intelligence how do humans recognize a word? Sandhya.Kishan - Apr 5th, 2012 We basically process the shape of each individual letter in a word at the same time, and therefore determine the word itself. We then derive the semantics of the word using "back up" files in the brain. galla_srinivas - Feb 22nd, 2012 By Identifing it by knowing language or knowledge how do you establish a connection between two ear files Sandhya.Kishan - Jun 22nd, 2012 The Connection Pool Manager is used to establish a connection between two ear files. what is difference between query calculation and layout calculation Read Best Answer Editorial / Best AnswerSandhya.Kishan - Member Since Mar-2012 |. Sandhya.Kishan -. Define Raster and Vector Data. Define raster and vector data. Explain what is the difference between raster and vector data? Read Best Answer Editorial / Best AnswerSandhya.Kishan - Member Since Mar-2012 | Jul 4th, 2012 Sandhya.Kishan - Jul 4th, 2012 Raster data is a set of horizontal lines composed of individual pixels, used to form an image on a CRT or other screen.Raster data makes use of matrix of square areas to define where features are loca... How to send sms from java application ? Sandhya.Kishan - Mar 7th, 2012 Look up SMS gateway. TO send a text, your really just sending any EMAIL with the SMS gateway. Its very easy. for instance, versions is: yournumber@vtext.com .So just have the user input the phone number and their carrier and then send email out using JavaMail. what are 4 member function for each object in c++. Each C++ object possesses the 4 member fns, what are those 4 member functions.please tell me what i s the answer for this question. Sandhya.Kishan - Jun 13th, 2012 Each C++ object has constructor,default constructor,copy constructor and destructor as the member functions. ABC - Aug 24th, 2011 Following are four default functions available for each object 1) constructor 2) destructor 3) copy constructor 4) assignment operator How to Retrieve the Hidden Field Value how to retrieve the value of hidden filed in one page from another Sandhya.Kishan - Apr 11th, 2012 In .aspx, you can access the hidden fields when the page is submitted by using - string customerId = Request.Form["txtCustomerId"]; What is the difference between MLOAD and TPUMP ? Sandhya.Kishan - Apr 17th, 2012 1.TPump allows us to load data into tables with referential integrity which MultiLoad doesn't allow. 2.TPpump does not support MULTI-SET tables,but multiple tables can be loaded in the same MultiLoad... how to write program to print descending order Bhushan Pote - Apr 6th, 2015 Bhushan - Apr 6th, 2015... Sizeofthe Variable how to find the size of the (datatype) variable in java?in c we use sizeof() operator for for finding size of data type Sandhya.Kishan - Mar 27th, 2012 There is no any particular function in java to find the size of the variable,because java removes the need for an application to know about how much of space needs to be reserved for a primitive value, an object or an array with a given number of elements. What services does the internet layer provide? Sandhya.Kishan - Apr 19th, 2012 The internet layer packs data into data packets known as IP datagrams, which contain source and destination address information that is used to forward the datagrams between hosts and across networks.... Is it possible to debug the RSA encrypting algorithm? If yes, how it is possible? Ritesh Kumar - Nov 24th, 2015 Yes we can Debug the RSA Algorithm only by Brute Force Attack. So it takes minimum 5 years (Current records) time for a Super Computer to debug the entire combination of the probable passwords to debu... Sandhya.Kishan - Jul 7th, 2012 The RSA algorithm as it makes use of unique prime number which is not the same each time when being generated.Hence we cannot debug the algorithm. What is DHCP Relay Agent? Suthakar - Dec 4th, 2012 We use DHCP Relays when DHCP client and server don't reside on the same (V)LAN, as is the case in this scenario. The job of the DHCP relay is to accept the client broadcast and forward it to the server on another subnet. Sandhya.Kishan - Mar 14th, 2012 It is a Bootstrap Protocol that relays DHCP(Dynamic Host Configuration Protocol) messages between clients and servers for DHCP on different IP Network.using DHCP in a single segment network is easy. I... what is heartbeat in clustering? Sandhya.Kishan - Mar 14th, 2012 Heartbeat cluster is a program that runs specialized scripts automatically whenever a system is initialized or rebooted.This cluster allows clients to know about the presence (or disappearance!) of pe... how round robin algorithm works ? Sandhya.Kishan - Mar 10th, 2012 In round robin algorithm time slices are assigned to each process in equal portions and in circular order, handling all processes without priority. Round-robin scheduling is simple, easy to implement,... what are the parameters in http.conf file ? Sandhya.Kishan - Jul 11th, 2012 Some parameters are 1.mod_rewrite 2.WLLogFile 3.DebugConfigInfo 4.StatPath 5.CookieName 6.MaxPostSize 7.FileCaching what is the output of kill -3 pid ? kiran78 - Jul 20th, 2012 Kill -3 pid find the thread dump jvm process Pranaw - May 6th, 2012 Kill -3 pid is used to create thread dump for the process id. This is basically used for troubleshooting and to understand what went wrong with the above process. Suppose some node of weblogic is not ... What is enumerated data type ? Sandhya.Kishan - Jun 6th, 2012 Enumerated data type are the variables which can only assume values which have been previously declared. These values can be compared and assigned, but which do not have any particular concrete repres... How will you test the font of any style ? Eg: Verdana, Arial etc pavan.7014 - May 30th, 2012 Thanks for the Answer Sandhya , Actually I have faced a question how shall we test the Font without using any tool.?? And one more query that , how can we test by previewing the font ? Please help me out on the same.... Sandhya.Kishan - Mar 13th, 2012 The Font Control Panel allows you to configure font settings, organize fonts and preview font styles. Preview function is used to test fonts of any style. Minimum number of comparisons required What is the minimum number of comparisons required to find the second smallest element in a 1000 element array? Sandhya.Kishan - May 26th, 2012 999 comparisons are required to find the second smallest element in an array. Report Defects to the Developer In how many way we can report defects to the deveploer? Sandhya.Kishan - Apr 20th, 2012 We can report defects to the developer either in formal way or through informal way. Communicating the details of the failure with the developers in person, in email or over the phone is an informal ... Verification Plan How do you plan for verification in your project? Sandhya.Kishan - Mar 19th, 2012 The plan for verification of a project can include steps like 1.Develop verification plan. 2.Trace between specifications and test cases. 3.Develop Verification Procedures. 4.Perform verification. 5.Document verification results. C Program Execution Stages Briefly explain the stages in execution of C program? How are printf and scanf statements statements being moved into final executable code? Sandhya.Kishan - Mar 8th, 2012 The stages of execution are: * Making and Editing * Saving * Compiling * Linking * Running Object Repository Extensions When do we use .mtr and .tsr extensions in QTP? State the difference with suitable example? Sandhya.Kishan - Jun 25th, 2012 We use filename.mtr as an extension for Per test object rep files. We use filename.tsr as an extension for Shared Object rep files.... Distance Between x and z Intercept Determine the distance between x and z intercept of the plane whose eqn is 2x+9y-3z=18 Sandhya.Kishan - May 26th, 2012 Sqrt of ((d/a)^2+(d/c)^2) d=18 a=2 b=9 c=-3 (d/a)^2=81 (d/c)^2=36 Sqrt(81+36)=58.5 C Program Exectuion Stages Briefly explain the stages in execution of C program ?How are printf and scanf statements statements being moved into final executable code? Sandhya.Kishan - Mar 28th, 2012 There are seven stages of execution 1. Forming the goal 2. Forming the intention 3. Specifying an action 4. Executing the action 5. Perceiving the state of the world 6. Interpreting the state of the world 7. Evaluating the outcome CLR and Base Class Libraries Define clr and base class libraries. Sandhya.Kishan - Jul 17th, 2012 A base class library is a standard library to all common intermediate languages.With the help of common intermediate language the base class library can encapsulate a large number of common functions,... tsf meir ... touseef - Mar 26th, 2012 Clr = common language runtime works like the heart works fr any being........ Java Deadlock How to avoid deadlock in Java? Sandhya.Kishan - Aug 1st, 2012 A deadlock occurs when one thread has the control for A and tries to get the control for B while another thread has the control for B and tries to get the control for A. Each will wait forever for the... Indexes Searching Capabilities How do indexes increase the searching capabilities? Sandhya.Kishan - Jul 25th, 2012 By using the concept of serial scanning the indexes can increase the searching capabilities. Compress String How to compress a String (algorithem)? Sandhya.Kishan - Aug 1st, 2012 "java import java.io.ByteArrayOutputStream; java.io.IOException; import java.util.zip.GZIPOutputStream; import java.util.zip.*; public class zipUtil{ public static String compress... Open Files Simultaneously How will you increase the allowable number of simultaneously open files? Sandhya.Kishan - Mar 15th, 2012 Instant File Opener allows to create a list of multiple files, programs, folders, and URLs to be opened at the same time by opening a single special file or by logging into Windows. Files are opened ... Invoke Another Program How will you invoke another program from within a C program? Sandhya.Kishan - Mar 8th, 2012 We can invoke another program by using function like system() call like system(test.exe). Function Call How will you call a function, given its name as a string? Sandhya.Kishan - Mar 15th, 2012 We cannot call a function whose name is a string, we have to construct a table of two-field structures, where the first field is the function name as a string, and the second field is just the functi... URL Recording Mode What is the use of URL Recording Mode ? m tulasi ram - Oct 17th, 2018 URL mode is used to record the non browser applications and when we want to measure each and every component load time and when the application is generating more java flies. sri - Oct 29th, 2014 URL is used for recording the HTML and NON-HTML PAGES but HTML mode recording it is not possible why bcoz where the application functionalities getting downloaded(buffering) it is possible in only URL mode of recording Read Input at Run Time What are the different ways to read input from keyboard at run time? Sandhya.Kishan - Aug 1st, 2012 By using scanner class we can input data from the keyboard.By declaring the Scanner classs input as System.in, it pulls data from the keyboard (default system input). Subject Marks There are 5 Sub with equal high marks. Mark scored by a boy is 3:4:5:6:7 (Not sure). If his total aggregate if 3/5 of the total of the highest score, in how many subjects has he got more than 50%? Tanvi - Nov 17th, 2012 It is cleanly mentioned in question that he has scored 3 subjects Sandhya.Kishan - May 28th, 2012 In three subjects he will get more than 50%. Find the Speed An Engine length 1000 m moving at 10 m/s. A bird is flying from engine to end with x sec and coming back at 2x sec. Take total time of bird traveling as 187.5s. Find the to and fro speed of the bird. What is pre-emptive data structure ? Sandhya.Kishan - Apr 27th, 2012 There are primitive data types but not primitive data structures. Primitive data types are predefined types of data, which are supported by the programming language. For example, integer, character, and string are all primitive data types. Stacks Task What kind of useful task does stacks support? Read Best Answer Editorial / Best AnswerSandhya.Kishan - Member Since Mar-2012 | Apr 17th, 2012 Stack supports four major computing areas,they are 1.expression evaluation 2.subroutine return address storage 3.dynamically allocated local variable storage and 4.subroutine parameter passing. Sandhya.Kishan - Apr 17th, 2012 Stack supports four major computing areas,they are 1.expression evaluation 2.subroutine return address storage 3.dynamically allocated local variable storage and 4.subroutine parameter passing. Inherit Private/Protected Class Can a private/protected class be inherited? Explain Sandhya.Kishan - Aug 1st, 2012 Yes, but they are not accessible. Although they are not visible or accessible via the class interface, they are inherited. Masked Code What is the number of masked code ee@? Bharath Yadlapalli - May 2nd, 2012 When kill -3 command is executed, it will quit from executing the process and additionally it will dump core for that process mentioned with pid. Sandhya.Kishan - Apr 9th, 2012 022 is the number of mask code ee@. DocType and DOM What is DocType? What is DOM? Augustin - Jul 7th, 2012 DOM - API for HTML. It represents a web page as a tree. In other words, DOM shows how to access HTML page. DOCTYPE is used 1) for validation, "validator.w3.org" 2) specifies the version of HTML. ... Sandhya.Kishan - Apr 16th, 2012 The DocType declaration helps a document to identify its root element and document type definition by reference to an external file, through direct declaration.It helps in specifying certain attribute... DataType Byte Values What are the byte values of datatypes? Sandhya.Kishan - Aug 1st, 2012 The default byte value of data types in zero. XML and SGML What is the relationship between XML and SGML? Does XML replace SGML or is it a subset of SGML? Sandhya.Kishan - Mar 21st, 2012 SGML is the basis XML and HTML and provides a way to define markup languages and sets the standard for their form.SGML passes structure and format rules to markup languages. XML is a subset of SGML.It is a meta language and is used to define other markup languages. Compute Average of Two Scores Describe an algorithm to compute the average of two scores obtained by each of the 100 students HAkizimfura Yves - May 27th, 2014 Write an algorithm that will display the sum of 5 integers by using two variables only NB: do not use loops! Shikhar Singhal - Jun 10th, 2013 Algorithm- Let score1[100] and score2[100] be the arrays storing respective marks of the 100 students let float avg[100] store the average of the respective students. for n=0 to 99 avg[n]= (f... Race Around Condition What is race around condition? Read Best Answer Editorial / Best AnswerSandhya.Kishan - Member Since Mar-2012 |. Sandhya.Kishan -. Data Migration How will you migrate the data from one system domain to another system domain? What testing procedures will follow? Sandhya.Kishan - Jun 25th, 2012 Domain migration happens when servers are upgraded and the data (including any authentication and authorization information) must be moved to a new system, when an administrator changes from one ISP t... Compile... ActiveX Component What project option causes the necessary files to be generated when the project is compiled? Sandhya.Kishan - Jul 12th, 2012 Gcc -c proc.adb is an option to generate the necessary files during compilation. Print Using string copy and concate Commands How will you print TATA alone from TATA POWER using string copy and concate commands in C? Sandhya.Kishan - Jun 15th, 2012 Include #include #include int main() { char myString[] = "TATA POWER"; char output[10]; strcpy(output,myString); output[4] = ; printf("OUTPUT :%s ", output); printf("ORGINAL STRING :%s", myString); getch(); } Masked Code What is the number of the masked code ee@? Sthitaprajna kar - Mar 20th, 2017 022 is the masked code. Sandhya.Kishan - Jun 15th, 2012 022 is the number of the masked code ee@ Read the heights in inches and wieght in pounds Read the heights in inches and wieght in pounds of an individual and compute and print their BMI=((weight/height)/height)*703 Sandhya.Kishan - Apr 17th, 2012 #include #include void main { int h,w,bmi; clrscr(); printf{"Enter your height in inches:"}; scanf{"%d",&h}; printf{"Enter your weight in pounds:"} scanf{"%d",&w}; bmi=((w/h)/h)*703; printf{"BMI=%d",&bmi}; getch(); } State of the Art in QA What is the "State of the Art in QA"?) Smart Client What is Smart Client? Sandhya.Kishan - Apr 12th, 2012 Smart client is an application which can simultaneously hold the advantages of the thin client such as auto-update,zero install and advantages of thick client such as high productivity and high performance. sutharsanan - Sep 27th, 2011 Smart client can be worked as thick client or thin client. Microsoft XML What is Microsoft XML? Sandhya.Kishan - Mar 17th, 2012 It is a service which enables the developers to create interoperable XML applications on all platforms of XML 1.0. Java Copy Command Write the Java version of MS DOS Copy Command Sandhya.Kishan - Jul 13th, 2012 The command FileUtils.copyFile(fOrig, fDest); is similar to ms ds copy command. Names of Constraints Oracle stores information regarding the names of all the constraints on which table? A)USER_CONSTRAINTS B)DUAL C)USER D)None of these Naresh kumar - Oct 3rd, 2012 Constraints divided into 3 types those are: 1. domain integrity constraints:- not null,check 2. entity integrity constraints:- unique,Primary key 3. referential integrity constraints:- foreign key In... sukrampal - Jul 12th, 2012 User-constraints and all_constraints Functional Difference What is the functional difference between wave trap, lighning arrestor, surge absorber. Sandhya.Kishan - Mar 17th, 2012 The function of Wave trap is to trap the communication signals of higher frequency sent from remote substation and diverting them to teleprotection panel in the control room substation. The function ... Function that Counts Number of Primes Write a function that counts the number of primes in the range [1-N]. Write the test cases for this function. Sandhya.Kishan - Jun 25th, 2012 Static int getNumberOfPrime(int N) { int count = 0; for (int i=2; i Advantages of ADO over Data Control Name two advantages of ADO over data control. Sandhya.Kishan - Apr 6th, 2012 Some advantages of ADO over data control are 1.ADO is faster with most databases compared to data control. 2.ADO separates Datahandling and Database Structure manipulation,hence its easier to protect... Grid Control What is Grid Control? For what purpose it is used? Sandhya.Kishan - Apr 6th, 2012 DataGrid control is a control in vb which helps in displaying the entire table of a record-set of a database. The control also allows users to view and edit the data. Average Temperature The average temperature of Monday to Wednesday was 37C and of Tuesday to Thursday was 34C. If the temperature on Thursday was 4/5 th of that of Monday, the temperature on Thursday was? vishwanatham - Dec 27th, 2012 37 - 3 = 34 ( (mon + tue + wed) / 3 ) - 3 = (tue+wed+thu)/3 ( mon + tue + wed -9) / 3 = (tue + wed+ thu) /3 ( mon + tue + wed - 9) = (tue + wed + thu ) mon - 9 = thu (since thu = (4/5) mon ) (5 * thu)/4 - 9 = Thu Thu = 36 Sandhya.Kishan - May 12th, 2012 The average temperature on Thursday will be 36 degrees. DDL Operation Trigger Which types of trigger can be fired on DDL operation? A. Instead of triggerB. DML triggerC. System triggerD. DDL trigger Sandhya.Kishan - Apr 5th, 2012 The trigger which can be fired on DDL operator is DDL trigger. Explain how Sequence Diagram differ from Component Diagram? swati - Jun 19th, 2014 1.Component diagrams are used to illustrate complex systems,they are building blocks so a component can eventually encompass a large portion of a system, but sequence diagrams are not intended for sho... Sandhya.Kishan - Apr 11th, 2012 1. A component diagram represents how the components are wired together to form a software system where as a sequence diagram is an interaction diagram which represents how the processes operate with ... Design a Framework What are the criterias that are considered to design a framework in QTP? Sandhya.Kishan - Apr 2nd, 2012 Some criterias in designing a framework are 1.Based on the requirements the framework should be kept simple, because Complexities can only destruct the whole purpose of framework. 2.As the project p... Neutral Grounding Resistor What is use of Grounding the Neutral of the star connecting transformer through resistor (NGR)? Sandhya.Kishan - May 17th, 2012 All electrical systems should have a link to ground.otherwise there will be severe ground insulation stress on transients. A neutral grounding transformer links the power system neutral to ground. A resistors used for earthing the star point of a transomer and protect the transformer. Flat Hit Numbers If the number of hits become flat, then the issue is with,a)App Serverb)Web serverc)Db serverd)Authorization server Shiv - Jul 2nd, 2012 Its an issue with connection of Webserver. jai hanuman - May 30th, 2012 Problem related to webserver to tune the weblogic connections Garbage Collection Algorithm What algorithm is used in garbage collection? Sandhya.Kishan - Aug 1st, 2012 The algorithms used by garbage collectors are 1.Naïve mark-and-sweep 2.Tri-color marking... Post Order Binary Tree Traverse Design a conventional iterative algorithm to traverse a binary tree represented in two dimensional array in postorder. Sandhya.Kishan - Mar 10th, 2012 In Postorder traversal sequence we first look for the left node then the right node and then the root. Algorithm:Code - void postOrder(tNode n) - { - if(n==null) - return; - postOrder(n.left); - postOrder(n.right); - visit(n); - } Algorithm Characteristics List out the characteristics of an algorithm Sandhya.Kishan - Mar 7th, 2012 1.It should be simple. 2.Generally written in simple language. 3.It involves finite number of steps. 4.should be executed in short period of time. 5.Output of algorithm should be unique. Importance of Algorithm What is the importance of algorithms in the field of computer science? Sandhya.Kishan - Mar 7th, 2012 Algorithms are blue prints of a program which gives all the details and functionality involved in finding the solution to a problem.It is important as we can build a program on any platform with the help of an algorithm. Pure Virtual Functions How can you make a class as interface, if you cannot add any Pure Virtual Function? Sandhya.Kishan - Jun 13th, 2012 By putting a “virtual destructor inside an interface” makes a class an interface. sangeeta - Feb 21st, 2012 Add pure virtual destructor in that class Two-Dimensional Arrays A Two-dimensional array X (7,9) is stored linearly column-wise in a computer's memory. Each element requires 8 bytes for storage of the value. If the first byte address of X (1,1) is 3000, what would be the last byte address of X (2,3)? Read Best Answer Editorial / Best AnswerSandhya.Kishan - Member Since Mar-2012 | Apr 17th, 2012 use the formulae X(i,j)=Base+w[n(i-1)+(j-1)] where m=7 ,n =9 ,i=2 ,j=3 hence 3000+8*[9(2-1)+(3-1)] =3000+8*(9+2) =3000+8*11=3088 Sandhya.Kishan - Apr 17th, 2012 Use the formulae X(i,j)=Base+w[n(i-1)+(j-1)] where m=7 ,n =9 ,i=2 ,j=3 hence 3000+8*[9(2-1)+(3-1)] =3000+8*(9+2) =3000+8*11=3088 Bytecode to Sourcecode How to convert bytecode to sourcecode? Sandhya.Kishan - Aug 1st, 2012 A Java Decompiler (JD) can convert back the Bytecode (the .class file) into the source code (the .java file). Error Trapping Functions Functions for error trapping are contained in which section of a PL/SQL block? Sandhya.Kishan - Apr 5th, 2012 The Exception section of the PL/SQL block contains the functions for error handling. Internet and Telephone Network Topology Which topology is mostly used as the internet & telephone network? Rahul - Mar 18th, 2013 In Internet WE mostly use Star Topology, But Mesh topology Is Secure,and In telephone System mostly , used Star Topology. Sandhya.Kishan - Apr 19th, 2012 Internet does not follow a standard topology,networks may combine topologies and connect multiple smaller networks, in effect turning several smaller networks into one larger one. A ring topology can be used for telephone networks. EAI Internal and External IO What is Internal IO and External IO? Sandhya.Kishan - Apr 12th, 2012 The internal io are created through EAI Siebel Wizard.These object have their base type as siebel business objects. The internal io are used in EAI Siebel Adapter BS through query methods. External i... Change jar File Icon How to change jar file icon. Sandhya.Kishan - Aug 1st, 2012 The jar file doesnt have an icon, its a system-wide setting that applies to ALL jar files.) Requirements Elicitation process Explain the various steps to conduct Requirements Elicitation process Sandhya.Kishan - Jun 25th, 2012 Stepe involved in elicitation requirement are 1.Identify the real problem, opportunity or challenge 2.Identify the current measure which show that the problem is real 3.Identify the goal measure to s... Intersection table What is an intersection table and why is it important? Sandhya.Kishan - Jul 6th, 2012 An intersection table implements a many-to-many relationship between two business components. scud021 - Jul 19th, 2008 A table added to the database to break down a many-to-many relationship to form two one-to-many relationships Define Delay time - Load runner Sandhya.Kishan - Jun 12th, 2012 Delay time is the time the elapses between request and response. VB.NET testing What's involved in end to end VB.NET testing? Sandhya.Kishan - Jun 18th, 2012 A software once completed goes though rigorous testing before its actual integration.It also goes through different types of software testing and also different types of integration. The different ty... File Compression how can we compress any text file using c. can anybody provide me sample code Sandhya.Kishan - Mar 8th, 2012 The function comp() can be used for compression.The compression logic for comp() should provide the fact that ASCII only uses the bottom (least significant) seven bits of an 8-bit byte. The compressio... Requirement Gathering Name three activities involved in requirement gathering Sandhya.Kishan - Apr 5th, 2012 Three activities involved in requirement gathering are 1.Eliciting requirement 2.Analyzing requirement 3.Recording requirement Integrity Rules List the rules used to enforce table level integrity. Sandhya.Kishan - Apr 5th, 2012 There are 3 rules to enforce table level integrity. 1.Foreign key value can be modified only if we want to match the corresponding primary key value. 2.We cannot delete records either from parent or c...
http://www.geekinterview.com/user_answers/663204
CC-MAIN-2020-16
refinedweb
6,085
58.18
New features in Python 3.6¶ Python 3.6 was released in December 2016. As of mypy 0.500 most language features new in Python 3.6 are supported, with the exception of asynchronous generators and comprehensions. Syntax for variable annotations (PEP 526)¶ Python 3.6 feature: variables (in global, class or local scope) can now have type annotations using either of the two forms: foo: Optional[int] bar: List[str] = [] Mypy fully supports this syntax, interpreting them as equivalent to foo = None # type: Optional[int] bar = [] # type: List[str])¶ Python 3.6 feature: coroutines defined with async def (PEP 492) can now also be generators, i.e. contain yield expressions. Mypy does not yet support this. Asynchronous comprehensions (PEP 530)¶ Python 3.6 feature: coroutines defined with async def (PEP 492) can now also contain list, set and dict comprehensions that use async for syntax. Mypy does not yet support this. New named tuple syntax¶ Python 3.6 supports an alternative syntax for named tuples. See Named tuples.
https://mypy.readthedocs.io/en/latest/python36.html
CC-MAIN-2017-13
refinedweb
169
70.6
If you haven’t read the previous parts of our Practical guide to web data QA, here are the first part, second part, third part and fourth part of the series. During a broad crawl, you might be extracting data from thousands or tens of thousands of websites with different layouts. When you scrape this many websites using a single spider, analyzing and validating the extracted data can be challenging. One important question to answer is what criteria should be used to determine the overall quality of the dataset. In the following article, which is the final part of our QA series, we’re going to work with 20,000 different sites and go through all steps with detailed explanations about the process. In this example, we are interested in finding specific keywords on the sites - if the keyword is found on a page then it should be flagged for further analysis. A large subset of heterogeneous sites should be processed to collect information for a specific keyword. We are going to crawl recursively starting from the domain and check all found links on the landing page. Several types of limits can be introduced here, like: The goal is to get as many pages as possible and verify the existence of keywords in them. The first step is inspired by two popular topics in programming: Simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. The idea is to identify good representations of 1-5% of the dataset and work with it. So for 20,000 sites, we can start with 200 sites that represent best the dataset. A random selection might work as well when it’s not possible to pick by other methods. At this point, these should be clear: Let's assume that our sites are named like this: We are going to use a semi-automated approach to verify the first n sites and the results extracted from the spider: In this step, a summary of the first n spiders plus the semi-automated checks can be represented in tabular form: In columns with the yellow header, you can see the extraction from the spiders, while in the blue columns it’s the result from the semi-automation. If you want to learn more about semi-automation then you can read the previous blog post from our QA series. Pivot tables are handy to group results and show meaningful data for conclusions about coverage. Example for status: The evaluation criteria for the first subset is the number of: This is the high-level evaluation of the first run. In the next step, we will check how to identify potential problems. The remaining parts of the data validations: The combination of the previous two results will be the criteria to decide if this first run is successful or not. If there are any problems, a new full run of the subset might be required (after fixes and corrections) or only on the sites which have problems. Unexpected problems are common for broad crawls and one of the ways to deal with them is by using deductive reasoning and analysis. Let say that for site_1 we know: Then we can ask several questions until we found an answer: The diagram below will help whether to report a problem or mark the site as successfully crawled: We divided the big problem into smaller tasks and defined an algorithm for successful spiders. Now we can work with the whole dataset by using tools like Pandas, Google Sheets, and our intuition. First, we recommend checking the initial source of URLs for: This should be done to verify what is the expected number of processed sites - which should be something like this: final = initial URLs - duplications - wrong URLs - empty Checks can be done in several ways. For example, the check for duplicated data can be done in Google Sheets/Excel by using pivot tables but to do quality check we need to have in mind that data should be cleaned because: Will not be shown as duplicated unless we remove the extra parts like: This can be done by using: Pandas can be used as well: df['website'] = df['website'].str.replace('http://', '') .str.replace('https://', '') .str.replace('www.', '') # remove last / and lowercase all records def remove_backslash(s): return s[:-1] if s.endswith('/') else s df_init['website'] = df_init['website'].apply(remove_backslash) df_init['website'] = df_init['website'].str.lower() It’s always a good idea to compare apples with apples! Once data for the execution is collected n in the form of: We can divide the jobs into several groups: Each of those groups will be analyzed for different validations. We can start with the jobs without items and investigate if: Next, let’s focus on the errors. In this case, the best is to collect logs for all jobs and process them to find top errors and after that analyze top errors one by one. Below we can find the top errors for this execution: Each error should be analyzed and reported. How to use Pandas to analyze the logs. First, we need to clean them by: df['message'] = df['message'].str.split(']' , expand=True)[1] df['message_no_digits'] = df['message'].str.replace('d+', '') df['message_short'] = df['message_no_digits'] .str.slice(0, 50) So finally we will get from to: Next we can extract URLs from the logs: df['url'] = df['message'].str.extract(pat = '(https:.*)') For the group with the reached limit jobs, we need to find out why this is the case. Possible reasons: All jobs in this group should have an explanation and be categorized in one of the groups above. Let’s do the URL inspection. In this case, we can use a small and simple library like And get results like: Or explore URLs by Pandas and get a summary as: Or do site per site analysis, counting the first level after domain: site_1 ----------------- news 3563 services 25 site_2 ----------------- contacts 74 services 25 Below you can find part of the code used for this analysis: url_summary.get_summary(result.url Pandas can help too: df['website'].str.split('/', expand=True)[3] .str.split('?', expand=True)[0].value_counts().head(-1) The code above will extract everything from the 3rd / to next? and will return count for each group. Example: Will extract - test. The output will be: test 1 foo 1 This can be done per site with slight modifications(if the frames is a list of dataframes for each website): for df in frames: print(df.iloc[0]['website']) print('-' * 80) print(df['website'].str.split('/', expand=True)[3] .str.split('?', expand=True)[0].value_counts().head(-1)) If data is in a single dataframe then you can use domain information if it’s exists or create it by: df['domain'] = df['website'].str.split('/', expand=True)[0] then you can split the DataFrame into several by: frames = [pd.DataFrame(y) for x, y in df.groupby('domain', as_index=False)] Or process the information by groups. The final expected result is: site1 ----------------- test 64 foo 13 site2 ----------------- bar 3472 foo 15 tests 1 Based on the analysis done in the previous step we can start with our evaluation. The evaluation is based on the groups above after the applied fixes and correction and re-execution of the run. The success of the run will depend on different factors and the importance of data requirements. Pivot tables and diagrams will help: To justify the progress between the different runs and the coverage in the final one. Pandas can be used to draw plots for hundreds and thousands of rows. Drawing can be one per group or for problematic sites and factors. Below is shown the code for plotting two plots next to each other: import matplotlib.pyplot as plt sites = df.url.unique() for n, i in enumerate(range(0, len(sites), 2)): fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(14, 6)) plt.xticks(rotation='vertical') ax1, ax2 = axes temp_df = df.copy() temp_df = df[df['url'] == sites[i]] sites_df = temp_df[['errors', 'items', 'key']].sort_values(by='key', ascending=True)[ ['errors', 'items']].fillna(0) sites_df.plot(ax=ax1, title='Plot for site ' + str(sites[i])) fig.autofmt_xdate() temp_df = df.copy() temp_df = df[df['url'] == sites[i + 1]] sites_df = temp_df[['errors', 'items', 'key']].sort_values(by='key', ascending=True)[ ['errors', 'items']].fillna(0) sites_df.plot(ax=ax2, title='Plot for site ' + str(sites[i + 1])) fig.autofmt_xdate() tempDf = df.copy() tempDf = df[df['url'] == sites[i + 2]] country_df = tempDf[['errors', 'vcard', 'items', 'key']].sort_values(by='key', ascending=True)[ ['errors', 'vcard', 'items']].fillna(0) country_df.plot(ax=ax3, title='Plot for site ' + str(sites[i + 2])) fig.autofmt_xdate() except IndexError: print('out of index') The final report will have two parts. Detailed table for all jobs from the final run: And tab with pivot tables: Reporting should include the most important information about the problems and should be easy to be generated. Creating a report like the one above, when all the steps are clear, takes a few hours and a big part of the time is spent on data extraction. In this article, we demonstrated how to evaluate data coming from a large number of different websites. This can be a good starting point to validate heterogeneous data and how to report the results in understandable ways for a wider audience.
https://www.zyte.com/blog/a-practical-guide-to-web-data-qa-part-v-broad-crawls/
CC-MAIN-2021-10
refinedweb
1,562
62.07
Watch and Compile hogan.js templates Hedgehog is a node.js utility script that will watch a directory with raw hogan.js template files. It will listen for changes and compile the raw mustache templates into compiled vanilla js files. The templates will be available in a global T namespace (this is configurable), relative to the filepath of the raw template file. For instance: Let's say we create a template and save it as ./templates/user/profile.mustache: <h1>{{ name }}</h1> Now all you need to do is include the compiled templates along with the HoganTemplate (~700 bytes) lib. npm install hedgehog to install it in your current working directory, or: npm install hedgehog -g to install it globally. Tested on node 0.6.x You can run the hedgehog as standalone utility or with your existing node app: var Hedgehog = require'hedgehog';var h = ; By default hedgehog will look in a ./templates directory. By default hedgehog will compile templates into a ./templates/compiled directory You can configure hedgehog by passing an options object. For example: 'input_path': 'path/to/raw/templates'; By default compiled templates will be accessible through the window.T object in the browser, you can set this to whatever you prefer. A path relative from where the script is called, that points to your raw .mustache templates. A path relative from where the script is called, that specifies where the the vanilla .js files should be compiled into. Hogan.js compiles mustache templates, but you can use another file extension if you like. For a Rails project, I'd typically use Jammit to concatenate and minify the template files on deployment. For a Express.js project I've tried connect-assets with great success. It's an asset pipeline for node.js/connect.js inspired by Rails 3.1
https://www.npmjs.com/package/hedgehog
CC-MAIN-2015-32
refinedweb
303
68.26
30 June 2011 By clicking Submit, you accept the Adobe Terms of Use. The steps in the tutorial are designed to get you up to speed on working with the Project panel, whether you're an experienced developer or brand new to Flash. Previous experience programming with ActionScript is required if you wish to extend the supplied templates beyond the scope of this article. Intermediate The best way to avoid web development projects that become disorganized, cluttered, and downright confusing is to take advantage of the project management tools available in your development environment. The Project panel in Adobe Flash Professional can help you manage your projects and publish multiple files within a single project. It's a great way to visualize your Flash projects and move quickly between files and folders as you work. By using the Project panel and following best practices for asset organization, you'll be able to quickly launch and maintain projects—saving production time and making it easier to deliver the final product. The Project panel helps you do the following: The Project panel has been updated in Flash Professional CS5.5 to include better development workflows using shared assets and a standard project format, as well as to optimize integration with Adobe Flash Builder 4.5. Follow this tutorial to create a slideshow sample project. You'll learn how to use the Project panel as you set up your project using shared assets and then publish the slideshow to three different platforms to deploy for web, desktop, and mobile (see Figure 1). Use the provided slideshow asset files to build the completed project. You'll learn the project development workflow and how to leverage the new author-time shared assets feature. Along the way, you'll become familiar with the structure of the FLA file as you build the project, so that you can edit the artwork assets. You can use the provided sample files to build and review a fully functional slideshow application, analyze the project structure and (if desired) update the template using your own image files to customize it. Before you get started, you may find it helpful to read the section titled Create projects in the Flash Professional Help documentation and watch Improved project workflows on Adobe TV. To get a detailed overview of the improved Project panel, check out Tareq AlJaber's article, Sharing projects between Flash Professional and Flash Builder, which covers the details of working with Flash Professional CS5.5 and Flash Builder 4.5. Before you begin building the sample project, let's review the provided assets in the sample files folder to get an overview of the workflow you'll use to create and manage Flash projects. The supplied slideshow assets create a simple image viewer including a display area with transitions, a play button, a caption bar, back and forward buttons, and a full-screen button (see Figure 2). The display area is the heart of the slideshow powered by the PhotoGallery component. The PhotoGallery is designed as a dynamic widget that loads a list of images from an external source. An XML file supplies the names and captions for the images, which allows you to change the list without updating the SWF file (see Figure 3). The primary objective in this tutorial is to create variations of the supplied slideshow elements using a Flash project and the author-time shared assets feature. Building a Flash project is easy to do. You usually start by creating a project folder and a strategy for structuring files and folders within it. From there, you gather the artwork and assets you'll need before moving to Flash. Here's an overview of the steps you'll follow in this tutorial: The following sections describe each of these steps and walk you through the workflow. The provided sample files contain the slideshow assets as well as some sample images to work with. Use these files to follow the tutorial and jump right into building a Flash project. Follow these steps to set up the sample files: Tip: If you plan on working with a lot of different projects for many different clients, it can be useful to create a "work" folder where you can store multiple project folders. Doing so makes your projects easy to find and you can use the Flash Player Global Security Settings panel to set the folder as a trusted location, which helps to avoid security errors during local development. The easiest way to create a project is to create a quick project from a FLA file open in Flash. The FLA's folder becomes the project folder and the FLA becomes the default file selected for publishing. Follow these steps to create your Flash project: That's it! You've created the Flash project. Take a moment to explore the project (see Figure 5). Notice that Flash automatically created the AuthortimeSharedAssets.fla file. You'll use this FLA to store shared assets in the next steps. The project includes two folders along with the FLA files: assets and src. The assets folder contains the images and XML file which describes them. The src folder contains ActionScript script files which provide functionality to the slide show assets. For the purposes of this tutorial, you don't need to edit any of the code in the src folder, but you can browse and open the files using the Project panel if you're curious and want to learn how the code controls the behavior of the slideshow. The default configuration of the Project panel doesn't display image file types in the view. Follow these steps to add the JPEG file type filter: At this point, you can see all the project asset files listed in the Project panel. One of the improvements of the Flash CS5.5 Project panel is the ability to use author-time shared symbols across FLA files in the project. This opens up new work flows for defining common assets which update across files any time a change is made. In the next steps you'll convert the supplied files into shared assets which can be used to create slide shows of different sizes. Follow these steps to convert the slide show assets to shared symbols: At this point, you can use the shared assets to define any number of file variations. Another new feature in Flash CS5.5 is the improved workflow for publishing Flash content to mobile and desktop formats. To understand how this works, you'll set up a slideshow to run in an AIR for Android FLA file. Follow these steps to create the FLA: Notice that the new FLA is added to the Project panel and opened in the Flash workspace. Choosing the AIR for Android option automatically configures the Stage size (480 x 800) and the player settings for the file. Follow these steps to add the author-time shared assets: At this point, both the SlideShow.fla file and the SlideShowMobile.fla file are linked to the shared content in the AuthortimeSharedAssets.fla file. Author-time shared assets simplify the process of editing content that repeats across files. For example, if you create a series of banners that reuse common graphic elements, you can easily edit those common elements from a single file. When you update any file, all the other files linked to that shared symbol update automatically. Follow these steps to change the colors of the controls across files: Tip: If you check the other FLA files and don't see the shared assets updates, you probably didn't save the edited file when you made the changes. Saving the file after edits is the operation that updates the linked files. The Android file is intended for playback on a mobile device which uses touch and gesture events for user interaction instead of mouse events. That means you'll need to rework the ActionScript code in the sample files to accommodate the characteristics of the device. To do this, you'll create a new ActionScript file and assign it as the document class of the FLA. You have two options at this point: You could use the Project panel to create an ActionScript file which you can edit in Flash Professional, or you can open the Flash project in Flash Builder and use the Flash Builder text editor to create and edit the file. The following sections describe both approaches. Follow these steps to create an ActionScript file in Flash Professional: You'll add the ActionScript to the file in a moment, but first take a moment to review how you would create the file if you want to work with the project in Flash Builder. Follow these steps to open the project in Flash Builder for editing: Note: Flash Professional CS5.5 and Flash Builder 4.5 share the same project structure, enabling you to combine workflows more smoothly than before. This means that you can open a Flash Professional project in Flash Builder, and vice versa, interchangeably and seamlessly. Whichever route and editor you prefer to use, the next step involves updating the file with the ActionScript code below and assigning it as the document class of the FLA file. Follow these steps to complete the file: package { import flash.display.MovieClip; import flash.display.Sprite; import flash.display.StageDisplayState; import flash.events.Event; import flash.events.TransformGestureEvent; import flash.events.MouseEvent; import flash.ui.Multitouch; import flash.ui.MultitouchInputMode; import src.gallery.Photo; import src.gallery.PhotoGallery; public class SlideShowMobile extends MovieClip { //******************** // Properties: public var slideShowDataURL:String = "assets/images.xml"; public var slideShowDelay:uint = 6000; //******************** // Initialization: public function SlideShowMobile() { // Configure the Multitouch object for gesture input Multitouch.inputMode = MultitouchInputMode.GESTURE; // Configure slide show slideShow_mc.slideShowDelay = slideShowDelay; slideShow_mc.scaleMode = Photo.SCALEMODE_ZOOM; slideShow_mc.transition = slideShow_mc.transitions["iris"]; slideShow_mc.addEventListener(Event.COMPLETE, onDataLoaded); slideShow_mc.addEventListener(Event.CHANGE, onImageChanged); slideShow_mc.addEventListener(TransformGestureEvent.GESTURE_SWIPE, onSwipe); slideShow_mc.maxWidth = 480; slideShow_mc.maxHeight = 759; slideShow_mc.loadXML(slideShowDataURL); // Configure buttons playPause_btn.initialize(clickHandler, true, true); prev_btn.initialize(clickHandler, false); next_btn.initialize(clickHandler, false); fullScreen_btn.initialize(clickHandler, false); } //******************** // Events: protected function clickHandler(event:MouseEvent):void { // Route button clicks switch( event.target ) { case playPause_btn: if( slideShow_mc.isPlaying ) { timer_mc.stopTimer(); }else{ timer_mc.resetTimer(); } slideShow_mc.playSlideShow(!slideShow_mc.isPlaying); break; case prev_btn: slideShow_mc.back(); break; case next_btn: slideShow_mc.forward(); break; case fullScreen_btn: toggleFullScreen(); break; } } protected function onDataLoaded(event:Event):void { slideShow_mc.playSlideShow(true); } protected function onImageChanged(event:Event):void { var str:String = slideShow_mc.selectedPhoto.id+" of "+ slideShow_mc.selectedPhoto.total+": "+ slideShow_mc.selectedPhoto.caption; captions_mc.maxChars = 44; captions_mc.setText(str); if( slideShow_mc.isPlaying ){ timer_mc.startTimer(slideShowDelay); } } protected function onSwipe( event:TransformGestureEvent ):void { if( event.offsetX == 1 ){ // Swiped towards right slideShow_mc.forward(); } else if( event.offsetX == -1 ){ // Swiped towards left slideShow_mc.back(); } } //******************** // Methods: function toggleFullScreen():void { if( stage.displayState == StageDisplayState.NORMAL ){ stage.displayState = StageDisplayState.FULL_SCREEN; }else{ stage.displayState = StageDisplayState.NORMAL; } } } } SlideShowMobile.as is configured slightly differently than the supplied SlideShow.as file. It includes additional code that responds to gesture swipes on a device. This allows you to swipe left to see the previous picture or swipe right to see the next picture in the slideshow. At this point, your project is ready to publish. You can publish one or more FLA files from the project at a time using the Test Project button on the Project panel. In the next steps you'll publish the original SlideShow.fla file for the web, convert its settings to publish it to the desktop, and set up the Android file to publish to mobile format. To publish the SlideShow.fla to web format, open the SlideShow.fla file in Flash Professional and click the Test Project button in the Project panel. The SWF plays and the related HTML file is automatically added to the project. Notice that the web published files include the SlideShow.html file, the SlideShow.swf file, and the files in the assets folder. To display the slideshow on the web, upload all of these files to a server. Follow these steps to publish the SlideShow.fla to a desktop format: Notice that the published files are encapsulated in the SlideShow.air file. This is an installer file which will install the slideshow as a desktop application for the user. Follow these steps to publish the SlideShowMobile.fla file to a mobile format: Notice that the published files are encapsulated in the SlideShowMobile.apk. This is an installer file that you can upload to the Android market. Another exciting feature in Flash Professional CS5.5 is the ability to preview and debug Android applications on an Android device plugged into your computer via USB. Follow these steps to preview the SlideShowMobile app on an Android device: To get more instructions on publishing for mobile, check out this Adobe TV video by Paul Trani demonstrating the Android publishing process: Publishing an AIR for Android app. Packaging your application for Google Android, Apple iOS, or BlackBerry Tablet OS devices involves acquiring signing certificates and provisioning application packages for the various platforms. For more specifics, read the following tutorials (written for ActionScript and Flex developers but applicable to Flash Profesional developers as well): Practice using author-time shared assets and working with the Project panel to port a single application to multiple platforms. As a next step, try working with the iOS mobile platforms and the extended ActionScript APIs for touch and gesture devices. See the following resources for more information on using the Project panel in Flash Professional CS5.5 and author-time shared assets: Also check out the following links for details on working with AIR and mobile apps in Flash Professional: This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Permissions beyond the scope of this license, pertaining to the examples of code included within this work are available at Adobe.
http://www.adobe.com/devnet/flash/articles/flash_project_panel.html
CC-MAIN-2014-10
refinedweb
2,305
56.35
React Properties | Last time, I wrote on React Components. This time, we will talk about something every component has - properties. I'll be making use of the component we created in that article to explain properties. So you might want to read it especially if you are not so familiar with React Components. A component props is an object that holds information about that component. They are similar to attributes in HTML. In fact, they are passed to the components same way attributes are passed to HTML. Here's the code for the component we made in my previous article. import React from "react"; import ReactDOM from "react-dom"; class Greeting extends React.Component { render() { return ( <div className = "box"> <h2> Hello Human Friend!!!</h2> <p> We are so glad to have you here. </p> </div> ); } } ReactDOM.render(<Greeting />, document.getElementById("app")); Adding Properties to Components So let's say we want to pass a value to the name property of Greeting component we created. We will do this. <Greeting name = "Sarah" /> Quite simple right? A component can take as many properties as you want it to. <Greeting name = "Sarah" message = "I will like you to be my friend" blah = "blah blah blah" /> Take note of what's necessary to pass a property. You need a name which in the above example are name message and blah. The name of the property can be anything you choose to call it. The second thing is the information you want that property to store. These are "Sarah", "I will like you to be my friend" and "blah blah blah" in the example above. If we want to pass information that's not a string, say a function, an object, an array or a variable, it has to be wrapped in curly braces. <Greeting message = {greetings} myInfo = {{firstName : "Sarah", lastName : Chima }}/> How can we access and display this properties we just passed to the component? Accessing Properties We use this.propsto access properties passed to a component. The this.props object contains all properties that a component has. Let's go back to our example of the Greeting component. Let's say we want to the user to be able to pass the name of the person to say hello to and the message to be passed. This is how we will do that. class Greeting extends React.Component { render() { return ( <div className = "box"> <h2> Hello {this.props.name}!!!</h2> <p> {this.props.message} </p> </div> ); } } What we have just done is to tell the component to render whatever information that is passed to the name property and message property. So if we do this ReactDOM.render(<Greeting name = "Sarah" message = "I want to be your friend" /> , document.getElementById("app")); This is what we are actually rendering ... <div className = "box"> <h2>Hello Sarah!!!</h2> <p>I want to be your friend</p> </div> ... Handling an Event There are times when we want to pass functions to a property. For instance, you want a function to handle an event like onClick or onKeyDown, how can this be done? Let's proceed with our example to explain this. Let's add a button to our Greeting that says "hello" when clicked. First of all, we will add a method to the Greeting component that handles the onClick event. This method will be outside render(). class Greeting extends React.Component { handleClick () { alert ("Hello Human"); } render() { ... Then we will pass the method to the onClick property of the button we add. ... render () { return ( <div className = "box"> <h2>Hello {this.props.name}!</h2> <p>{this.props.message}</p> <button onClick = {this.handleClick}>Click Me</button> </div> ); } Default Properties You can also pass default properties to a component. For instance, what if a person uses the Greeting Component without passing any property to it? We don't want our new acquaintance to think we are unfriendly. In this case, we can set default props for the component. We do this by adding a default value property for our component. This defaultProps must be equal to an object. Greetings.defaultProps = {name : "Human Friend", message : "You are welcome to our world"}; This will be added after the class declaration. Accessing the ChildrenComponents can also have children. For instance, the `Greeting` component can be written as <Greeting>Hello</Greeting> This is still valid. To access the children of this component, which in this case is the string Hello, we use this.props.children. With this we can access anything that's between the two tags. NOTE One last thing to note that Properties are read only. A component should not modify its own props. props store information that can only be modified by another component and never by itself. Here's a Codepen that demonstrates all we've done so far. Try to play with it until you are sure you understand what we've done. Got any question? Any addition? Feel free to leave a comment. Thank you for reading :)
https://sarahchima.com/blog/react-props/
CC-MAIN-2019-13
refinedweb
828
68.36
HDR lighting + bloom effect #1 Members - Reputation: 122 Posted 26 April 2009 - 05:43 AM #2 Members - Reputation: 852 Posted 26 April 2009 - 06:04 AM #3 Members - Reputation: 852 Posted 26 April 2009 - 06:07 AM What you say there is a good starting point. You will have to use render targets with a higher resolution then 8-bit per channel and then there are lots of small little challenges attached to each of the points you mention in 1 - 8.. The level of detail you will go into will decide on how good your HDR pipeline is at the end. Gamma correction will be also a major topic to look into. #4 Moderators - Reputation: 17487 Posted 26 April 2009 - 06:50 AM Secondly, I agree with wolf that the SDK samples are a good place to start. In fact there's actually a managed port of the HDR Pipeline sample that happens to be the first when when you search on google for "HDR sample MDX". You might also want to check out this, it's a good overview. #5 Members - Reputation: 554 Posted 26 April 2009 - 07:42 AM Hi, you've got some of the concepts mixed up. From your description, it sounds like you're trying to do three things: 1. Render HDR 2. Perform automatic luminance adaption during tone-map pass 3. Add a bloom effect Here's a brief explaination on how you do each step, and why: HDR rendering: this is the source of a lot confusion, mainly because it's one of those buzzwords that gets thrown around alot. Here's what it means in practical terms: When you draw stuff "regularly", you usually do that to a color buffer where each channel is eight bits (for example RGBA8). This is fine for representing colours, but when you're rendering 3D-stuff you really need more precision, because your geometry will be lit and shaded in various ways, which can cause pixels to have very high or very low brightness. The way to fix this is simply to render to a buffer that has higher precision in each color channel. That's it. Instead of just using 8 bits, use more. One format that is easy to use and has good enough precision is half-float format, in D3D lingo D3DFMT_A16R16G16B16F. Because of limitations of the GPU, you usually can't set the backbuffer to a high-precision format. So instead, you create a texture with this format, render to it, and then copy the result to the backbuffer so it can be shown on your screen. So, all you have to do is to create a texture with this new format (for example), and bind it as the render target instead of the default back buffer. Let's call this texture the HDR render texture. When you have created it and set it as render target, just draw as usual. When you're done rendering, copy the pixels in the HDR texture to the old back buffer to show it. The copy is usually done by drawing a full screen quad textured with the HDR render texture over the back buffer. When you've done this: voilá! Your very first HDR rendering is done :) If you've done this correctly, the first thing you will notice is that there has been no improvement at all to your regular rendering ;) This is because we haven't done any of the cool stuff that higher precision enables us to do. Some of the most common things people do are bloom, exposure, vignetting and luminance adaption (exposure, vignetting and luminance adaption are usually called tone-mapping when used together). Here's what the are, and how you do them. Exposure: there's a great article written by hugo elias that explains it much better than I could do here: In practice, that article boils down to a single line of code at the end of your shader: float4 exposed = 1.0 - pow( 2.71, -( vignette * unexposed * exposure ) ); where 'unexposed' is the "raw" pixel value from your HDR texture, 'vignette' is explained below, and 'exposure' is the constant K in hugo elias article. In my code it's simply declared as: const float exposure = 2.0; ...because 2.0 makes my scene look nice. You may have to use a different value that look good for you, if you decide to implement exposure. If you want it a bit more robust, You can make this happen automatically, it is described in 'luminance adaption' below. Also, know that there are several ways of performing the exposure, with different formulas, which result in different images. The Hugo Elias one is an easy way to get started though. Vignetting: Because a lens in a camera has a curved shape, it lets in less amount of light at the edges, so many photos or films (especially on cheap cameras) have noticably darkened edges. See example here:. This effect is called vignetting. It is simulated with two lines of code: float2 vtc = float2( iTc0 - 0.5 ); float vignette = pow( 1 - ( dot( vtc, vtc ) * 1.0 ), 2.0 ); ...where iTc0 are the texture coordinates of the full screen quad, and ranging from 0..1. The result is a factor that is 1.0 in the center of the screen and becomes less as it moves away from the center. Luminance adaption: this is part of the exposure, but can be done separately. In Hugo's code, the constant K (and in my code the variable 'exposure') the exposure is fixed, meaning that you have to tweak it manually for a scene to look good. If a level varies a lot in brightness (for example you are standing in a dark room and then walking outside to a sunny day), no value of K will work very well for both scenes (the sunny outside may be 10,000 times (or more) brighter than the dark inside). Instead, you need to measure how bright the scene is so you can adjust K accordingly. The easiest way to do this is to take the average of all pixels in the HDR texture. One way to do this that is fast is to make mip-maps of the HDR texture, all the way down to a 1-pixel texture. This final one-pixel texture will then contain the average of all the pixels above, which is the same as the average scene luminance. Use this value as K when doing the exposure. You simply do this by using the 1-pixel texture as input to the exposure, instead of the hardcoded K (or 'exposure' as it's called in my code example). You will need to tweak it to look good and adapt the way you want, but when it's done your renderer can handle all kinds of brightnesses, which is very cool :) Finally, there's the blooming: i'm sure you already know what this is, simply making bright parts of the scene glow a bit.. Whoa, long post :) While this probably seems pretty complicated, depending on what look you want for your game and the type of game you are creating you can decide on implementing all of this or just some of it. The modern FPS games and racing games implement most of this above, but if you just want to make a simple space shooter with some nice glowing effects, all you have to implement is the "render to high-precision-texture"-part, and the bloom part. For starters, you should probably just try that, and then add the other effects as you get more comfortable. So, to answer your questions: 1. It depends, see above :) 2. You render color data to a texture by creating a texture with usage D3DUSAGE_RENDERTARGET and a high precision pixel format, and then setting it with device->SetRenderTarget( 0, m_hdr_rt_tex ); 3. You resize by creating more mip levels for your texture, and rendering to them. Don't forget to set the ViewPort to match the mip level size. 4. One simple way of getting the luminance is simply averaging the color channels together. This can be done with a dot product, like so: float lum = dot( color.rgb, float3( 0.333, 0.333, 0.333 ) ); Some people like to weigh the different channels differently with more on green and a lot less in blue, but in practice nobody ever notices the difference unless you point it out ;) Feel free to experiment :) 5. You blur a texture by averaging several nearby samples together. 6. It depends, but when adding bloom you usually just add the color values. 7. Described above. 8 (3rd? :)). Yes, you can have everything in one .fx-file, create each one as a technique (downsample_technique, bloom_combine_technique etc). Best of luck! Simon #6 Members - Reputation: 116 Posted 26 April 2009 - 08:26 AM #7 Members - Reputation: 122 Posted 26 April 2009 - 08:36 AM K. So, to answer to ur'....answers :).... i'm working on a rts game, and now i'm doing the map editor. the reason that i'm using mdx is because i don't know how to add butons, panels, and so on in native c++. (the game application is written in native c++. i've done the start screen and now i have to do the level editor). I've looked at the samples that come up with the SDK, but i simply don't understand them. I started studying HLSL 2 days ago (i follow the tutorial). thx for the 'long answer' :). it was the kind of answer i was expecting to get. but, i don't know how to do some of the things u described there.... 1st, u said that i have to copy the HDR-texture to the back buffer. isn't there a simpler way of doing this? smth like device.backbuffer = hdr-texture? from what u wrote there, i have to create 4 vertices and render them with the hdr-texture right? 2nd: i don't know how to scale the texture (create mipmaps) 3rd: i didn't understood how to blur the texture :( i think that's all :) [Edited by - cyberlorddan on April 26, 2009 3:36:37 PM] #8 Moderators - Reputation: 17487 Posted 26 April 2009 - 11:27 AM Quote: Well I really don't think you want to create a version of your renderer in native DX, and a version of your renderer in MDX. That's a disaster waiting to happen. What you CAN do is generate managed wrappers of your native C++ classes using C++/CLI. This will allow you to write your editor in C#, and use the same native rendering back-end. However I'll warn you that although it starts out somewhat simple, maintaining your wrappers can turn into a very non-trivial task. If you're not working on a bigger team, it's much more ideal to just have everything written in managed code. Another option is to use a C++ toolkit like Qt or WxWidgets for doing the UI. Those are generally much easier to work with than the native Windows API. #9 Members - Reputation: 116 Posted 26 April 2009 - 04:29 PM 2 and 3 can be answered if you would take the time to look at some code from the sdk or other relivant source, for examples on how to do both take a look at this article; there is also a more in depth and complex description of the hdr process based on the DirectX10 API here on the wiki. #10 Members - Reputation: 122 Posted 27 April 2009 - 05:13 AM 1 more question..... U said that MDX is a dead project... that means that it won't be updated anymore? or only the documentation for it won't be updated? i'm asking this because i saw that tesselation was implemented in directx 11 only, and i saw somewhere (i can't remember where) while i was doing modifications to my mdx engine, a struct or enumerator or smth (it showed up in the intellisense scroll list) that had the name Tesselation.... :| it might be a stupid question, but i really don't know very much of mdx :( #11 Moderators - Reputation: 17487 Posted 27 April 2009 - 05:30 AM Quote: Yes, it won't be updated in any way. No D3D10, no D3D10.1, no D3D11. Not even bug fixes. Like I said it's not even included in the SDK anymore. The tessellation stuff you're seeing in the documentation was never actually supported in D3D9 hardware, and is completely different from the programmable tessellation available in D3D11. #12 Members - Reputation: 122 Posted 27 April 2009 - 05:42 AM Quote: i said that i know everything i should (well i was wrong). i ran into another problem :( first of all, the format of the texture that i should set to render the scene to. i have a problem..... if i set it to A16B16G16R16, then semi-transparent objects are rendered wrong (instead of blending with the content that's below them, they are blending with the color i set in device.clear() method. second, i have no depth buffer :| . when rendering to the texture (surface) instead of the screen, the depth buffer doesn't work. it renders everything in the order i tell them to render (meaning that objects that are in the back are shown in front of others) any solutions? :( btw, i should have mentioned that by setting the texture format to rgb8 the transparency problem is solved, but that is not a hdr format right? #13 Members - Reputation: 116 Posted 27 April 2009 - 08:20 AM #14 Members - Reputation: 122 Posted 27 April 2009 - 08:35 AM Quote: yes... but when i try to execute it it gives an error that says that the device can not pe initialized properly... :( i've benn thinking that this problem might be caused by the incompatibility of my graphics card (it's kinda old... geforce 6200 . i'll change it on 10th may with an 9800 one - my birthday :D ). could this also be because i'm using mdx instead of native directx? #15 Members - Reputation: 122 Posted 27 April 2009 - 09:48 AM let me explain first what this code should do (or what's left from it). first i initialize all the stuff i have to. the function render() is called on every frame. the problems: objects are rendered in front of others (the depth buffer doesnt work). the transparency is a mess.... you can look at this image to see what results i get: i also explained some things in the image namespace LevelEditor { public partial class Main : Form { int widthP; int heightP; Device motorGrafic; VertexBuffer vb = null; //fsdfsdfsfdsfsdfsdfs IndexBuffer ib = null; Matrix projection; Matrix camera; short[] indices = { 0, 1, 2, 2, 1, 3 }; CustomVertex.PositionNormalTextured[] terrainTriangle; TerrainPointByPoint[] terrainDetailPBP; //terrainpbp is class that holds the terrain data such as height, texture, pathing and so on.... Texture rocksTexture; Texture dirtFinalTexture; Texture originalRenderedScene; Surface originalRenderSurface; Surface bbS; Surface depthStencilS; CustomVertex.TransformedTextured[] screenVertices = new CustomVertex.TransformedTextured[6]; Effect effect; float curWindHei; float curWindWid; public Main() { InitializeComponent(); //widthP and heightP represent the terrain size. they are initialized here (i removed the code because it was big and wasnt relevant initializeMainEngine(); initializeBuffers(); initializeMeshes(); initializeTextures(); vd = new VertexDeclaration(motorGrafic, velements); originalRenderedScene = new Texture(motorGrafic, this.Width, this.Height, 1, Usage.RenderTarget, Format.A16B16G16R16, Pool.Default); originalRenderSurface = originalRenderedScene.GetSurfaceLevel(0); } void initializeBuffers() { vb = new VertexBuffer(typeof(CustomVertex.PositionNormalTextured), 4, motorGrafic, Usage.Dynamic | Usage.WriteOnly, CustomVertex.PositionNormalTextured.Format, Pool.Default); //fsdfsdfsfdsfsdfsdfs vb.Created += new EventHandler(this.OnVertexBufferCreate); //fsdfsdfsfdsfsdfsdfs OnVertexBufferCreate(vb, null); ib = new IndexBuffer(typeof(short), indices.Length, motorGrafic, Usage.WriteOnly, Pool.Default); //fsdfsdfsfdsfsdfsdfs ib.Created += new EventHandler(this.OnIndexBufferCreate); //fsdfsdfsfdsfsdfsdfs } void initializeMainEngine() { PresentParameters paramPrez = new PresentParameters(); paramPrez.SwapEffect = SwapEffect.Discard; paramPrez.Windowed = true; paramPrez.MultiSample = MultiSampleType.FourSamples; paramPrez.AutoDepthStencilFormat = DepthFormat.D16; paramPrez.EnableAutoDepthStencil = true; paramPrez.BackBufferFormat = Format.X8R8G8B8; motorGrafic = new Device(0, DeviceType.Hardware, this.splitContainer1.Panel2, CreateFlags.SoftwareVertexProcessing, paramPrez); effect = Effect.FromFile(motorGrafic, "defaultEffect.fx", null, ShaderFlags.None, null); } void initializeTextures() { //removed code } void initializeMeshes() { //removed code } void initializeTerrain() { //removed code } void OnIndexBufferCreate(object sender, EventArgs e) { //removed code } void OnVertexBufferCreate(object sender, EventArgs e) { VertexBuffer buffer = (VertexBuffer)sender; //fsdfsdfsfdsfsdfsdfs originalRenderedScene = new Texture(motorGrafic, this.splitContainer1.Panel2.Width, this.splitContainer1.Panel2.Height, 1, Usage.RenderTarget, Format.A16B16G16R16F, Pool.Default); originalRenderSurface = originalRenderedScene.GetSurfaceLevel(0); //some code removed here } void generateTerrainData(int whichOne) { //code removed here } Point GetMouseCoordonates() { //code removed here } void render() { projection = Matrix.PerspectiveFovLH((float)Math.PI / 4, curWindWid / curWindHei, 0.1f, 50.0f); camera = Matrix.LookAtLH(currentCameraPosition, currentCameraTarget, currentCameraUp); bbS = motorGrafic.GetBackBuffer(0, 0, BackBufferType.Mono); motorGrafic.SetRenderTarget(0, originalRenderSurface); motorGrafic.Indices = ib; motorGrafic.VertexDeclaration = vd; motorGrafic.RenderState.SourceBlend = Blend.SourceAlpha; motorGrafic.RenderState.DestinationBlend = Blend.InvSourceAlpha; motorGrafic.SetStreamSource(0, vb, 0); motorGrafic.BeginScene(); motorGrafic.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.CornflowerBlue.ToArgb(), 1, 0); motorGrafic.RenderState.AlphaBlendEnable = true; motorGrafic.RenderState.ZBufferEnable = true; effect.SetValue("xColoredTexture", rocksTexture); effect.Technique = "Simplest"; effect.Begin(0); effect.BeginPass(0); effect.SetValue("xViewProjection", Matrix.Translation(trackBar1.Value, trackBar2.Value, trackBar3.Value) * camera * projection); effect.SetValue("xRot", Matrix.Translation(trackBar1.Value, trackBar2.Value, trackBar3.Value)); motorGrafic.SetTexture(0, rocksTexture); motorGrafic.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 4, 0, 2); //////////i use this to render the position of my light ////i render here my scene } }; effect.EndPass(); effect.End(); motorGrafic.EndScene(); motorGrafic.RenderState.Lighting = false; motorGrafic.SetRenderTarget(0, bbS); motorGrafic.SetTexture(0, originalRenderedScene); motorGrafic.Clear(ClearFlags.Target, Color.Red, 1, 0); motorGrafic.BeginScene(); motorGrafic.VertexFormat = CustomVertex.TransformedTextured.Format; motorGrafic.RenderState.CullMode = Cull.None; motorGrafic.DrawUserPrimitives(PrimitiveType.TriangleList, 2, screenVertices); motorGrafic.EndScene(); motorGrafic.Present(); } void setCameraPosition(float xCamPozS, float yCamPozS, float zCamPozS) { ////removed code } void setCameraTarget(float xCamTarS, float yCamTarS, float zCamTarS) { //removed code } } } ...so how do i fix this? #16 Members - Reputation: 116 Posted 27 April 2009 - 09:45 PM originalRenderedScene = new Texture(motorGrafic, this.Width, this.Height, 1, Usage.RenderTarget, Format.A16B16G16R16, Pool.Default); originalRenderSurface = originalRenderedScene.GetSurfaceLevel(0); Only the latest DirectX 10 compatible graphics cards (NVIDIA G8x) supports alpha blending, filtering and multi-sampling on a 16:16:16:16 render target. Graphics cards that support the 10:10:10:2 render target format support alpha blending and multi-sampling of this format (ATI R5 series). Some DirectX 9 graphics cards that support the 16:16:16:16 format support alpha blending and filtering (NVIDIA G7x), others alpha blending and multi-sampling but not filtering (ATI R5 series). All alternative color spaces than the following (HSV, CIE Yxy, L16uv, RGBE) do not support alpha blending. Therefore all blending operations still have to happen in RGB space. An implementation of a high-dynamic range renderer that renders into 8:8:8:8 render targets might be done by differing between opaque and transparent objects. The opaque objects are stored in a buffer that uses the CIE Yxy color model or the L16uv color model to distribute precision over all four channels of this render target. Transparent objects that would utilize alpha blending operations would be stored in another 8:8:8:8 render target in RGB space. Therefore only opaque objects would receive a better color precision. To provide to transparent and opaque objects the same color precision a Multiple-Render-Target consisting of two 8:8:8:8 render targets might be used. For each color channel bits 1-8 would be stored in the first render target and bits 4 - 12 would be stored in the second render target (RGB12AA render target format). This way there is a 4 bit overlap that should be good enough for alpha blending. #17 Members - Reputation: 122 Posted 28 April 2009 - 05:11 AM Quote: thx for the answer. so i'll have to wait until i get a new nvidia graphics card... i could use this time to move the code to c++. anyway, i still can't get the depth buffer to work properly. this isn't a hardware problem, because the HDRFormats sample shows the teapot as it should. any solution for the depth buffer problem? #18 Members - Reputation: 116 Posted 28 April 2009 - 08:16 AM I don't know what's wrong with the depth buffer, but I don't think you should have to use a floating point buffer to hold the information, 256 deltas per channel should be plenty. #19 Members - Reputation: 122 Posted 28 April 2009 - 08:28 PM Quote: I 'solved' the problem with the depth buffer. I had multisampling turned on. When i turned it off, the depth buffer worked as it should. but now a new, probably stupid, question arises: how do i turn multisampling on without 'damaging' the depth buffer? #20 Members - Reputation: 554 Posted 28 April 2009 - 11:52 PM /Simon
http://www.gamedev.net/topic/532960-hdr-lighting--bloom-effect/
CC-MAIN-2016-30
refinedweb
3,458
55.54
Hi, I am the dev lead for this area. I have tried to script out a database and it works for me. Given that it takes you so long but does eventually finish, it most likely has something to do with the structure of your database. Is there anything interesting about your schema that we should know about? Is it possible to give us the generated UDM (which should not have any data) so we can recreate your database and then try to script it back out? Thanks For what it worth. Here is simple piece of C# code that you can use to generate a create statement for your database. You wouldnt need SSMS for that and performance should be better: using Microsoft.AnalysisServices; using System.Xml; Server server = new Server(); server.Connect("localhost"); Database db = server.Databases("MyDatabase"); XmlTextWriter xmlwrite = new XmlTextWriter("MyDBScript.xmla", System.Text.Encoding.UTF8); xmlwrite.Formatting = Formatting.Indented; xmlwrite.Indentation = 2; Scripter.WriteAlter(xmlwrite, db, true, true); xmlwrite.Close(); HTH Edward Melomed.
https://connect.microsoft.com/SQLServer/feedback/details/418476/scripting-a-ssas-database-locks-up-in-sql-server-2008-management-studio
CC-MAIN-2016-07
refinedweb
168
61.22
Hi Oleg, Thanks a lot for your reply. I see now where my attempt went wrong and why it couldn't work in the first place, the instances will indeed overlap. I'm not completely satisfied with your solution though, but seeing how you did it has lead me to the solution I want. Details below. :-) ]). While I appreciate the ingenuity of the solution, unfortunately I cannot use it. First of all I don't want to require my users to write double brackets everywhere, it makes the code a lot uglier IMO. Another problem is that in my real library (as opposed to the simplified example I gave here) I allow the embedding of lists, which means that the [[x]] is not safe from overlap as it is in your example. But I still see the general pattern here, the point is just to get something that won't clash with other instances. I could define data X a = X a instance (TypeCast a XML) => Embed (X a) XML where embed (X a) = typeCast a and write test1 = p (X $ p (X $ p "foo")) Not quite so pretty, even worse than with the [[ ]] syntax. However, I have an ace up my sleeve, that allows me to get exactly what I want using your trick. Let's start the .lhs file first: > {-# OPTIONS_GHC -fglasgow-exts #-} > {-# OPTIONS_GHC -fallow-overlapping-instances #-} > {-# OPTIONS_GHC -fallow-undecidable-instances #-} > module HSP where > > import Control.Monad.State > import Control.Monad.Writer > import TypeCast -- putting your six lines in a different module Now, the thing I haven't told you in my simplified version is that all the XML generation I have in mind takes place in monadic code. In other words, all instances of Build will be monadic. My whole point of wanting more than one instance is that I want to use one monad, with an XML representation, in server-side code and another in client-side code, as worked on by Joel Björnson. Since everything is monadic, I can define what it means to be an XML-generating monad in terms of a monad transformer: > newtype XMLGen m a = XMLGen (m a) > deriving (Monad, Functor, MonadIO) and define the Build and Embed classes as > class Build m xml child | m -> xml child where > build :: String -> [child] -> XMLGen m xml > > class Embed a child where > embed :: a -> child Now for the server-side stuff: > data XML = CDATA String | Element String [XML] > deriving Show > > newtype HSPState = HSPState Int -- just to have something > type HSP' = StateT HSPState IO > type HSP = XMLGen HSP' Note that by including XMLGen we define HSP to be an XML-generation monad. Now we can declare our instances. First we can generate XML values in the HSP monad (we use HSP [XML] as the child type to enable embedding of lists): > instance GenXML HSP' XML (HSP [XML]) where > genElement s chs = do > xmls <- fmap concat $ sequence chs > return (Element s xmls) Second we do the TypeCast trick, with XMLGen as the marker type: > instance TypeCast (m x) (HSP' XML) => > Embed (XMLGen m x) (HSP [XML]) where > embed (XMLGen x) = XMLGen $ fmap return $ typeCast x And now we can safely declare other instances that will not clash with the above because of XMLGen, e.g.: > instance Embed String (HSP [XML]) where > embed s = return [CDATA s] > > instance Embed a (HSP [XML]) => Embed [a] (HSP [XML]) where > embed = fmap concat . mapM embed -- (why is there no concatMapM??) This last instance is why I cannot use lists as disambiguation, and also why I need overlapping instances. Now for some testing functions: > p c = build "p" [embed c] > test0 :: HSP XML > test0 = p "foo" > test1 :: HSP XML > test1 = p (p "foo") > test2 :: HSP XML > test2 = p [p "foo", p "bar"] All of these now work just fine. We could end here, but just to show that it works we do the same stuff all over again for the clientside stuff (mostly dummy code, the clientside stuff doesn't work like this at all, this is just for show): > data ElementNode = ElementNode String [ElementNode] | TextNode String > deriving Show > > type HJScript' = WriterT [String] (State Int) > type HJScript = XMLGen HJScript' > > instance Build HJScript' ElementNode (HJScript ElementNode) where > build s chs = do > xs <- sequence chs > return $ ElementNode s xs > > instance TypeCast (m x) (HJScript' ElementNode) => > Embed (XMLGen m x) (HJScript ElementNode) where > embed (XMLGen x) = XMLGen $ typeCast x > > instance Embed String (HJScript ElementNode) where > embed s = return $ TextNode s Testing the new stuff, using the same p as above: > test3 :: HJScript ElementNode > test3 = p "foo" > > test4 :: HJScript ElementNode > test4 = p (p "foo") And these also work just as expected! :-) Thanks a lot for teaching my the zen of TypeCast, it works like a charm once you learn to use it properly. Really cool stuff! :-) /Niklas
http://www.haskell.org/pipermail/haskell-cafe/2006-August/017392.html
crawl-002
refinedweb
793
57.13
[ ] Robert Muir resolved LUCENE-2642. --------------------------------- Resolution: Fixed OK i merged back all of Uwe's improvements. Thanks for the help Uwe. I think now in future issues we can clean up and improve this test case a lot. I felt discouraged from doing so with the previous duplication... > > {code} > public class TestQueryParser extends LocalizedTestCase { > {code} > it is now > {code} > @RunWith(LuceneTestCase.LocalizedTestCaseRunner.class) > public class TestQueryParser extends LuceneTestCase { > {code} > *. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For additional commands, e-mail: dev-help@lucene.apache.org
http://mail-archives.apache.org/mod_mbox/lucene-dev/201009.mbox/%3C32771381.161071284408646992.JavaMail.jira@thor%3E
CC-MAIN-2015-35
refinedweb
109
54.39
What's the point? -. Asynchronous programming in Dart is characterized by the Future and Stream classes. A Future represents a computation that doesn’t complete immediately.. Receiving stream eventsReceiving stream events: Future<int> sumStream(Stream<int> stream) async { var sum = 0; await for (var value in stream) { sum += value; } return sum; }. The following example tests the previous code by generating a simple stream of integers using an async* function: Error eventsError events. You can catch the error using try-catch. The following example throws an error when the loop iterator equals 4: Working with streamsWorking with streams The Stream class contains a number of helper methods that can do common operations on a stream for you, similar to the methods on an Iterable. For example, you can find the last positive integer in a stream using lastWhere() from the Stream API. Future<int> lastPositive(Stream<int> stream) => stream.lastWhere((x) => x >= 0); Two kinds of streamsTwo kinds of streams There are two kinds of streams. Single subscription streamsSingle subscription streamsBroadcast streams. Methods that process a streamMethods that process a stream The following methods on Stream<T> process the stream and return a result: Future<T> get first; Future<bool> get isEmpty; Future<T> get last; Future<int> get length; Future<T> get single; Future<bool> any(bool Function(T element) test); Future<bool> contains(Object needle); Future<E> drain<E>([E futureValue]); Future<T> elementAt(int index); Future<bool> every(bool Function(T element) test); Future<T> firstWhere(bool Function(T element) test, {T Function() orElse}); Future<S> fold<S>(S initialValue, S Function(S previous, T element) combine); Future forEach(void Function(T element) action); Future<String> join([String separator = ""]); Future<T> lastWhere(bool Function(T element) test, {T Function() orElse}); Future pipe(StreamConsumer<T> streamConsumer); Future<T> reduce(T Function(T previous, T element) combine); Future<T> singleWhere(bool Function(T element) test, {T Function() orElse}); Future<List<T>> toList(); Future<Set<T>> toSet(); All of these functions, except drain() and pipe(), correspond to a similar function on Iterable. Each one can be written easily by using an async function with an await for loop (or just using one of the other methods). For example, some implementations could be: Future<bool> contains(Object needle) async { await for (var event in this) { if (event == needle) return true; } return false; } Future forEach(void Function(T element) action) async { await for (var event in this) { action(event); } } Future<List<T>> toList() async { final result = <T>[]; await this.forEach(result.add); return result; } Future<String> join([String separator = ""]) async => (await this.toList()).join(separator); (The actual implementations are slightly more complex, but mainly for historical reasons.) Methods that modify a streamMethods that modify a stream The following methods on Stream return a new stream based on the original stream. Each one waits until someone listens on the new stream before listening on the original. Stream<R> cast<R>(); Stream<S> expand<S>(Iterable<S> Function(T element) convert); Stream<S> map<S>(S Function(T event) convert); Stream<R> retype<R>(); Stream<T> skip(int count); Stream<T> skipWhile(bool Function(T element) test); Stream<T> take(int count); Stream<T> takeWhile(bool Function(T element) test); Stream<T> where(bool Function(T event) test); The preceding methods correspond to similar methods on Iterable which transform an iterable into another iterable. All of these can be written easily using an async function with an await for loop. Stream<E> asyncExpand<E>(Stream<E> Function(T event) convert); Stream<E> asyncMap<E>(FutureOr<E> Function(T event) convert); Stream<T> distinct([bool Function(T previous, T next) equals]); The asyncExpand() and asyncMap() functions are similar to expand() and map(), but allow their function argument to be an asynchronous function. The distinct() function doesn’t exist on Iterable, but it could have. Stream<T> handleError(Function onError, {bool test(error)}); Stream<T> timeout(Duration timeLimit, {void Function(EventSink<T> sink) onTimeout}); Stream<S> transform<S>(StreamTransformer<T, S> streamTransformer); The final three functions are more special. They involve error handling which an await for loop can’t do—the first error reaching the loops will end the loop and its subscription on the stream. There is no recovering from that. You can use handleError() to remove errors from a stream before using it in an await for loop. The transform() functionThe transform() function The transform() function is not just for error handling; it is a more generalized “map” for streams. A normal map requires one value for each incoming event. However, especially for I/O streams, it might take several incoming events to produce an output event. A StreamTransformer can work with that. For example, decoders like Utf8Decoder are transformers. A transformer requires only one function, bind(), which can be easily implemented by an async function. Stream<S> mapLogErrors<S, T>( Stream<T> stream, S Function(T event) convert, ) async* { var streamWithoutErrors = stream.handleError((e) => log(e)); await for (var event in streamWithoutErrors) { yield convert(event); } } Reading and decoding a fileReading and decoding a file The following code reads a file and runs two transforms over the stream. It first converts the data from UTF8 and then runs it through a LineSplitter. All lines are printed, except any that begin with a hashtag, #. import 'dart:convert'; import 'dart:io'; Future<void> main(List<String> args) async { var file = File(args[0]); var lines = file .openRead() .transform(utf8.decoder) .transform(LineSplitter()); await for (var line in lines) { if (!line.startsWith('#')) print(line); } } The listen() methodThe listen() method The final method on Stream is listen(). This is a “low-level” method—all other stream functions are defined in terms of listen(). StreamSubscription<T> listen(void Function(T event) onData, {Function onError, void Function() onDone, bool cancelOnError}); To create a new Stream type, you can just extend the Stream class and implement the listen() method—all other methods on Stream call listen() in order to work. The listen() method allows you to start listening on a stream. Until you do so, the stream is an inert object describing what events you want to see. When you listen, a StreamSubscription object is returned which represents the active stream producing events. This is similar to how an Iterable is just a collection of objects, but the iterator is the one doing the actual iteration. The stream subscription allows you to pause the subscription, resume it after a pause, and cancel it completely. You can set callbacks to be called for each data event or error event, and when the stream is closed. Other resourcesOther resources Read the following documentation for more details on using streams and asynchronous programming in Dart. - Single-Subscription vs Broadcast Streams, an article about the two different types of streams - Creating Streams in Dart, an article about creating your own streams - Futures and Error Handling, an article that explains how to handle errors using the Future API - Asynchrony support, a section in the language tour - Stream API reference
https://www.dartlang.org/tutorials/language/streams
CC-MAIN-2019-18
refinedweb
1,163
50.57
View the project on GitHub Since I'm using Elm in my project, I needed to figure out how to use Pusher in Elm, because there is no Pusher client for it. Fortunately, Elm offers a very clean and nice way to interact with Javascript. I was positively surprised by how easy it actually was. Using Pusher in Elm Let's start with the code. Here's how I did it. First, I declared two ports in an Elm module. port module Pusher exposing (..) port messages : (Pony -> msg) -> Sub msg port connect : String -> Cmd msg messageswill be invoked from Javascript every time the Pusher subscription receives a message. Nevermind the type Pony, that's going to change when I implement updating the game state in the backend. connectwill be invoked from Elm to create the Pusher connection. Javascript will listen to that message and open the connection. Let's have a look at the Javascript code required to make this work. import Elm from 'app/Main.elm' import Pusher from 'pusher-js' const mount_node = document.querySelector('#root') const app = Elm.Main.embed(mount_node) let pusher app.ports.connect.subscribe(() => { pusher = new Pusher(PUSHER_KEY, { encrypted: true, }) pusher.subscribe('ponies') .bind('pony-data', app.ports.messages.send) }) All functions prefixed with port in Elm are exposed in Javascript in the object app.ports. Therefore, I can call app.ports.connect.subscribe to run a callback in Javascript when the Pusher.connect command is executed in Elm. In a similar fashion, I can call app.ports.messages.send to invoke the function messages in Elm. To glue everything in Elm, I need to subscribe to the function messages, and return the command connect in the update function when I want to connect to pusher. Here's how I did it in my Main.elm. import Pusher type Msg = OnPusherMessage Pony | OnFetchGame (WebData GameResponse) update : Msg -> Model -> (Model, Cmd Msg) update msg model = case msg of OnPusherMessage pony -> ({ model | name = pony.name }, Cmd.none) OnFetchGame response -> ( updateModel model response, Pusher.connect "connect" ) subscriptions : Model -> Sub Msg subscriptions model = Pusher.messages OnPusherMessage There we go. Adding Pusher.messages in the subscriptions will trigger a OnPusherMessage with the received message every time app.ports.message.send is invoked in Javascript. Similarly, returning Pusher.connect in update will send an event to app.ports.connect when the client has received the game state, so that updates can start flowing. Next Steps The next step is to make the client send clicks to the server and have the server update the game state and send it back to all connected clients. Hopefully I'll have something functional enough to be deployed in a couple of days. Discussion (1) Thank you so much for sharing this!
https://dev.to/avalander/pixel-wars-using-pusher-in-elm-4ifb
CC-MAIN-2021-25
refinedweb
457
60.21
QuickTime for Java: A Developer's Notebook/Audio Media From WikiContent Current revision This is the first of three chapters dealing with specific media types. Video will be covered in Chapter 8, and several other kinds of media—including things you might not have thought of as media, such as text and time codes—will be covered in Chapter 9. It's possible that you've never thought of QuickTime as being the engine for audio-only applications—the ubiquity of QuickTime's .mov file format probably makes it more readily recognized as a video standard. But QuickTime's support for audio has been critical to many applications. For example, the fact that QuickTime was already ported to Windows made bringing iTunes and its music store over to Windows a lot easier. In fact, iTunes is probably responsible for getting QuickTime onto a lot more Windows machines than it would have reached otherwise. So, I'll begin with a few labs that are particularly applicable to the MP3s and AACs collected by iTunes users. Reading Information from MP3 Files If you've ever listened to an MP3 music file—and at this point, who hasn't—you've surely appreciated the fact that useful information like artist, song title, album title, etc., is stored inside the file. Not only does this make it convenient to organize your music, but also, when you move a song from one device to another, this metadata travels with it. The most widely accepted standard for doing this is the ID3 standard, which puts this metadata into parts of the file that are not interpreted as containing audio data—MP3s arrange data in frames , and ID3 puts metadata between these frames. ID3 tags typically are found at the beginning of a file, which makes them stream-friendly, although some files tagged with earlier versions of the standard have the metadata at the end of the file. Note Visit to learn more about ID3. When QuickTime imports an MP3 file, it reads ID3 tags and makes them available to your program through the movie's user data, allowing you to display the tags to the user, or use them in any other way you see fit. How do I do that? Once you open an MP3 as a movie, you need to get at the user data, which contains the imported ID3 tags. Fortunately, it's wrapped as an object called UserData : UserData userData = movie.getUserData( ); The user data is something of a grab bag of data that you can read from and write to freely. Items are keyed by FOUR_CHAR_CODE s, and the contents aren't required to adhere to any particular standard or format (after all, you're free to write whatever you like in user data). For example, QuickTime Player writes a "WLOC" entry that stores the window location last used for the movie. Apple has a standard set of keys that you can use to retrieve the data parsed from an MP3's ID3 tags. Because these are text values, you use UserData 's getTextAsString( ) method to pull them out. getTextAsString( ) takes three arguments: the type you're requesting; an index to indicate whether you want the first, second, etc., instance of that type; and a region tag that's irrelevant in the ID3 case. Example 7-1 shows a basic exercise of this technique, getting the UserData object and asking for album, artist, creation date, and song title information. Note Run this example from the downloadable book code with ant run-ch07-id3tagreader. Example 7-1. Retrieving ID3 metadata package com.oreilly.qtjnotebook.ch07; import quicktime.*; import quicktime.std.*; import quicktime.std.movies.*; import quicktime.std.movies.media.*; import quicktime.io.*; import java.util.*; import com.oreilly.qtjnotebook.ch01.QTSessionCheck; public class ID3Tag ID3TagReader( ); System.exit(0); } public ID3Tag static void dumpTagsFromUserData (UserData userData) { // try for each entry in TAG_MAP for (Iterator it = TAG_MAP.entrySet( ).iterator( ); it.hasNext( ); ) { Map.Entry entry = (Map.Entry) it.next( ); Integer key = (Integer) entry.getKey( ); int tag = key.intValue( ); String tagName = (String) entry.getValue(); try { String value = userData.getTextAsString (tag, 1, IOConstants.langUnspecified); System.out.println (tagName + ": " + value); } catch (QTException qte) { } // no such tag } } } When run, this dumps the found tags to standard out, as seen in the following console output: cadamson% ant run-ch07-id3tagreader Buildfile: build.xml run-ch07-id3tagreader: [java] Album: Arthur Or The Decline And Fall Of The British Empire [java] Full Name: Victoria [java] Artist: The Kinks What just happened? The application sets up some static values for keys it is interested in and maps them to human-readable names. For example, the FOUR_CHAR_CODE "@alb" is mapped to "Album." The program prompts the user to select an MP3 file and imports it as a movie, from which it gets a UserData object. In dumpTagsFromUserData( ), it calls getTextAsString() to attempt to get a value for each known tag. If successful, it writes the key and value to the console. If a given tag is absent from the user data, QuickTime throws an exception, which this program quietly ignores. QuickTime has an important and disappointing limitation: it does not import tags written in non-Western scripts. For example, here's the output when I run the application against an MP3 whose "artist" tag is in Japanese kana (characters): cadamson% ant run-ch07-id3tagreader Buildfile: build.xml run-ch07-id3tagreader: [java] Album: COWBOY BEBOP O.S.T.1 [java] Created: 1998 [java] Full Name: SPACE LION Because the artist ( , or "Yoko Kanno" in romaji) is written in non-Western characters, QuickTime doesn't attempt to import it, and thus there's no artist item to retrieve from the user data. What about... ...other tags? A big list of metadata tags are defined in the native API's Movies.h file. Unfortunately, these aren't in the StdQTConstants classes, or anywhere else in QTJ, so you have to define your own constants for them. Table 7-1 is the list of supported values. Table 7-1. Audio metadata tag constants Also, instead of requesting specific keys from the user data, can I just tour what's in there? Yes, you can use UserData.getNextType() to discover the types of items in the user data. This method takes an int of the last discovered type (use 0 on the first call), and returns the next type after that one. When it returns 0, there are no more types to discover. Given a type, you can get its data with getTextAsString() , but because you can't know that a discovered piece of user data necessarily represents textual data, it might be safer to call getData( ) , which returns a QTHandle , from which you can get a byte array with getBytes( ) . Reading Information from iTunes AAC Files If you read the last lab and thought about how ID3 metadata is imported into a QuickTime movie's UserData, you might well expect that the same thing would be true of AAC files created by iTunes: .m4a files for songs "ripped" by the user and .m4p files sold by the iTunes Music Store. In fact, because these files use an MPEG-4 file format that is itself based on QuickTime, you might think that using the same user data scheme would be a slam dunk. But...you'd be wrong. These AAC files do put the metadata in the user data, but they do so in a way that resists straightforward retrieval via QuickTime. Fortunately, it's not too hard to get the values out with some parsing. Note Buckle up, this one is rough. How do I do that? For once, theory needs to come before code—you need to see the format to understand how to parse it. Here's a /usr/bin/hexdump of an iTunes Music Store AAC file from my collection, Toto Dies.m4p: 0000b010 00 3d 5f 3c 00 3d 7d 5e 00 3d 9a fb 00 03 18 da |.=_<.=}^.=......| 0000b020 75 64 74 61 00 03 18 d2 6d 65 74 61 00 00 00 00 |udta....meta....| 0000b030 00 00 00 22 68 64 6c 72 00 00 00 00 00 00 00 00 |..."hdlr........| 0000b040 6d 64 69 72 61 70 70 6c 00 00 00 00 00 00 00 00 |mdirappl........| 0000b050 00 00 00 03 11 9b 69 6c 73 74 00 00 00 21 a9 6e |......ilst...!.n| 0000b060 61 6d 00 00 00 19 64 61 74 61 00 00 00 01 00 00 |am....data......| 0000b070 00 00 54 6f 74 6f 20 44 69 65 73 00 00 00 24 a9 |..Toto Dies...$.| 0000b080 41 52 54 00 00 00 1c 64 61 74 61 00 00 00 01 00 |ART....data.....| 0000b090 00 00 00 4e 65 6c 6c 69 65 20 4d 63 4b 61 79 00 |...Nellie McKay.| 0000b0a0 00 00 24 a9 77 72 74 00 00 00 1c 64 61 74 61 00 |..$.wrt....data.| 0000b0b0 00 00 01 00 00 00 00 4e 65 6c 6c 69 65 20 4d 63 |.......Nellie Mc| 0000b0c0 4b 61 79 00 03 0e 76 63 6f 76 72 00 03 0e 6e 64 |Kay...vcovr...nd| 0000b0d0 61 74 61 00 00 00 0d 00 00 00 00 ff d8 ff e0 00 |ata.............| 0000b0e0 10 4a 46 49 46 00 01 01 01 02 f9 02 f9 00 00 ff |.JFIF...........| Granted, this is not easy to read, but I'll bet you can pick out the artist (Nellie McKay) and the song title ("Toto Dies"), so you know this is the relevant section of the file. In fact, you also might notice the string "udta"...sounds a little like "user data," doesn't it? At work here is the QuickTime file format and its concept of atoms , which are tree-structured pieces of data used to describe a movie, its contents, and its metadata. Without going too deeply into the details—there's a whole book on the format—each atom consists of 4 bytes of size, a 4-byte type, and then data. Atoms contain either data or other atoms, but not both. The 4 bytes before "udta", 0x000318da, indicate the size of all the user data. The first child is an atom called "meta". Because its size is 0x000318d2, just 8 less than the size of "udta", the "meta" atom is clearly the only child of "udta". Unfortunately, because this is user data, the contents don't have to adhere to any published standard, and they don't. The first thing after "meta" should be the 4-byte size of its first child atom, but the value is 0x00000000—an illegal "no size" value—so, a normal QuickTime parser would ignore the contents of "meta". Funny thing is, although these contents aren't real QuickTime atoms, they're awfully close. Start with the stuff that's obviously the metadata and work backward: "Toto Dies" is preceded by an 8-byte pad (0x00000001 and 0x00000000), and before that is "data" and a 4-byte number. That number, 0x00000019, is the size of itself, plus "data", plus the 8-byte pad, plus the string "Toto Dies." And just before that, you'll find the string "©nam", preceded by a 4-byte size. Better yet, "©nam" is one of the constants defined in Movies.h for metadata tagging. Note See the previous lab for a list of QuickTime's metadata tags. Dig further and you'll find that there's a run of these tag-name/data structures, each of which has the structure discovered earlier: - Full size - 4 bytes - Type - 4 bytes - Contents size - 4 bytes - "data" - 4 bytes - Unknown - 8 bytes - Value - Variable number of bytes (size is implicit from earlier size data) The run of metadata blocks exists within a single pseudo-atom parent called "ilst". So, this analysis provides a strategy for getting iTunes AAC metadata: - Get the user data. - Look for a user data item called "meta" and get it as a byte array. - Inside this array, find "ilst". - Start reading 8-byte blocks as possible size/type combinations. If the type is known as a metadata type, skip past the 24 bytes of junk (the 8-byte pad, the "data", etc.) and read the String. The sample program in Example 7-2 implements this strategy. Note Run this example with ant run-ch07-aactagreader. Example 7-2. Retrieving iTunes AAC metadata package com.oreilly.qtjnotebook.ch07; import quicktime.*; import quicktime.std.*; import quicktime.std.movies.*; import quicktime.std.movies.media.*; import quicktime.io.*; import quicktime.util.*; import java.util.*; import java.math.BigInteger; import com.oreilly.qtjnotebook.ch01.QTSessionCheck; public class AACTag AACTagReader( ); System.exit(0); } public AACTag void dumpTagsFromUserData (UserData userData) throws QTException { int metaFCC = QTUtils.toOSType("meta"); QTHandle metaHandle = userData.getData (metaFCC, 1); System.out.println ("Found meta"); byte[ ] metaBytes = metaHandle.getBytes( ); // locate the "ilst" pseudo-atom, ignoring first 4 bytes int ilstFCC = QTUtils.toOSType("ilst"); PseudoAtomPointer ilst = findPseudoAtom (metaBytes, 4, ilstFCC); // iterate over the pseudo-atoms inside the "ilst" // building lists of tags and values from which we'll // create arrays for the DefaultTableModel constructor int off = ilst.offset + 8; ArrayList foundTags = new ArrayList (TAG_NAMES.length); ArrayList foundValues = new ArrayList (TAG_NAMES.length); while (off < metaBytes.length) { PseudoAtomPointer atom = findPseudoAtom (metaBytes, off, -1); String tagName = (String) TAG_MAP.get (new Integer(atom.type)); if (tagName != null) { // if we match a type, read everything after byte 24 // which skips size, type, size, 'data', 8 junk bytes byte[ ] valueBytes = new byte [atom.atomSize - 24]; System.arraycopy (metaBytes, atom.offset+24, valueBytes, 0, valueBytes.length); String value = new String (valueBytes); System.out.println (tagName + ": " + value); } // if tagName != null off = atom.offset + atom.atomSize; } } /** find the given type in the byte array, starting at the start position. Returns the offset within the byte array that begins this pseudo-atom. a helper method to populateFromMetaAtom( ). @param bytes byte array to search @param start offset to start at @param type type to search for. if -1, returns first atom with a plausible size */ private PseudoAtomPointer findPseudoAtom (byte[ ] bytes, int start, int type) { // read size, then type // if size is bogus, forget it, increment offset, and try again int off = start; boolean found = false; while ((! found) && (off < bytes.length-8)) { // read 32 bits of atom size // use BigInteger to convert bytes to long // (instead of signed int) byte sizeBytes[ ] = new byte[4]; System.arraycopy (bytes, off, sizeBytes, 0, 4); BigInteger atomSizeBI = new BigInteger (sizeBytes); long atomSize = atomSizeBI.longValue( ); // don't bother if the size would take us beyond end of // array, or is impossibly small if ((atomSize > 7) && (off + atomSize <= bytes.length)) { byte[ ] typeBytes = new byte[4]; System.arraycopy (bytes, off+4, typeBytes, 0, 4); int aType = QTUtils.toOSType (new String (typeBytes)); if ((type = = aType) || (type = = -1)) return new PseudoAtomPointer (off, (int) atomSize, aType); else off += atomSize; } else { System.out.println ("bogus atom size " + atomSize); // well, how did this happen? increment off and try again off++; } } // while return null; } /** Inner class to represent atom-like structures inside the meta atom, designed to work with the byte array of the meta atom (i.e., just wraps pointers to the beginning of the atom and its computed size and type) */ class PseudoAtomPointer { int offset; int atomSize; int type; public PseudoAtomPointer (int o, int s, int t) { offset=o; atomSize=s; type=t; } } } When run with Toto Dies.m4p, the output to the console looks like this: cadamson% ant run-ch07-aactagreader Buildfile: build.xml run-ch07-aactagreader: [java] Found meta [java] Full Name: Toto Dies [java] Artist: Nellie McKay [java] Album: Get Away from Me [java] Created: 2004-02-10T08:00:00Z Note The "album" and "created" data didn't appear in the earlier hexdump because in the file they occur after the cover art data, which is several kilobytes long. What just happened? The program gets the UserData, gets its "meta" atom as a byte array, and looks for the "ilst" pseudo-atom. If it finds one, it skips ahead 8 bytes (over "ilst" and its size) and goes into a loop of discovering and parsing potential pseudo-atoms. To parse, you look at the first 4 bytes and consider whether it's a plausible size—in other words, whether it's big enough to contain data, but small enough to not run past the end of the byte array. If so, interpret the next 4 bytes as a FOUR_CHAR_CODE type and check against the list of known metadata types. If it matches one of the known types, you've got a valid piece of metadata, which this program simply writes to standard out. What about... ...combining this with the MP3 approach of the previous lab so that there's just one codebase? A good strategy for that would be to get the UserData and look for a "meta" atom. If you get one, assume you have iTunes AAC and do the previous parsing. If not, assume you have an MP3, and start asking for the various metadata types with UserData.getTextAsString( ), as in the previous lab. Providing Basic Audio Controls Most audio applications provide some basic audio controls to allow the user to customize the sound output to suit his environment. The MovieController provides a volume control, but you can do better than that: you can control balance, bass, and treble with simple method calls. How do I do that? The AudioMediaHandler class provides the methods setBalance( ) and setSoundBassAndTreble( ), so it's just a matter of getting the handler object. The key is to remember that: - Movies have tracks. - Tracks have exactly one Media each. - Each Media has a MediaHandler. Iterate over the movie's tracks to get each track's media and handler. To figure out whether a given track is audio, you can use a simple instanceof to see if the handler is an AudioMediaHandler. setBalance( ) takes a float, which ranges from -1.0 (all the way to the left) to 1.0 (all the way to the right), with 0 representing equal balance. setSoundBassAndTreble( ) is interesting because it's officially undocumented. As it turns out, you pass in ints for bass and treble, where 0 is normal, -256 is minimum bass or treble, and 256 is maximum. Note Well, the native version is undocumented. For once, the Javadocs have the useful info. Example 7-3 provides a simple GUI to exercise these methods. Note Run this example with ant run-ch07-basicaudiocontrolsplayer. Example 7-3. Providing balance, bass, and treble controls package com.oreilly.qtjnotebook.ch07; import quicktime.*; import quicktime.std.*; import quicktime.std.movies.*; import quicktime.std.movies.media.*; import quicktime.app.view.*; import quicktime.io.*; import java.awt.*; import javax.swing.*; import javax.swing.event.*; import com.oreilly.qtjnotebook.ch01.QTSessionCheck; public class BasicAudioControlsPlayer extends Frame implements ChangeListener { JSlider balanceSlider, trebleSlider, bassSlider; AudioMediaHandler audioMediaHandler; public static void main (String[ ] args) { try { QTSessionCheck.check( ); Frame f= new BasicAudioControlsPlayer( ); f.pack( ); f.setVisible(true); } catch (QTException qte) { qte.printStackTrace( ); } } public BasicAudioControls); // build balance, treble, bass controls in a panel Panel controls = new Panel(new GridLayout (3,2)); controls.add (new JLabel ("Balance")); balanceSlider = new JSlider (-1000, 1000, 0); balanceSlider.addChangeListener (this); controls.add (balanceSlider); controls.add (new JLabel ("Treble")); trebleSlider = new JSlider (-256, 256, 0); trebleSlider.addChangeListener (this); controls.add (trebleSlider); controls.add (new JLabel ("Bass")); bassSlider = new JSlider (-256, 256, 0); bassSlider.addChangeListener (this); controls.add (bassSlider); add (controls, BorderLayout.SOUTH); } public void stateChanged (ChangeEvent ev) { Object source = ev.getSource( ); try { if (source = = balanceSlider) { // balance float newBal = (float) (balanceSlider.getValue( ) / 1000f); audioMediaHandler.setBalance (newBal); } else { // bass & treble audioMediaHandler.setSoundBassAndTreble ( bassSlider.getValue( ), trebleSlider.getValue( )); } } catch (QTException qte) { qte.printStackTrace( ); } } } When run, the program asks the user to select a file to play, and then shows a GUI, as seen in Figure 7-1. What just happened? The key to this example is the use of Swing JSlider s, which can be configured with appropriate bounds for the features they represent. For example, the bass and treble sliders run in a -256 to 256 range, with 0 as a default: trebleSlider = new JSlider (-256, 256, 0); The balance slider needs to pass a float between -1 and 1, but JSliders work with ints, so it uses a range of -1000 to 1000, which is scaled to an appropriate float before calling setBalance( ): balanceSlider = new JSlider (-1000, 1000, 0); All the sliders share a ChangeListener implementation that reads the new value from the affected JSlider and make a corresponding call to the AudioMediaHandler. Providing a Level Meter Many audio applications also provide a graphical " level meter," which is an on-screen display of the loudness or softness of certain frequencies within the audio. In QuickTime Player, this is shown as a set of bars on the right side of the control bar, as seen in Figure 7-2. The intensity of lower frequencies, like bass, is shown in the leftmost columns, while higher frequencies are to the right. How do I do that? AudioMediaHandler provides two key methods: setSoundEqualizerBands() to set up monitoring and getSoundLevelMeterLevels() to actually get the data. setSoundEqualizerBands( ) indicates which frequencies you want to monitor for your graphics display. These are passed in the form of a MediaEqSpectrumBands object, which is built up by constructing it with the number of bands you intend to monitor, then repeatedly calling setFrequency() to indicate which frequency a given band will monitor. Note Unfortunately, most of the level-metering methods are officially undocumented. As the audio plays, you can repeatedly call getSoundLevelMeterLevels( ), which returns an array of ints representing the measured levels. Example 7-4 creates a basic audio level meter in an AWT Canvas. Note Run this example with ant run-ch07-levelmeterplayer. Example 7-4. Providing an audio level meter package com.oreilly.qtjnotebook.ch07; import quicktime.*; import quicktime.std.*; import quicktime.std.movies.*; import quicktime.std.movies.media.*; import quicktime.app.view.*; import quicktime.io.*; import java.awt.*; import java.awt.event.*; import javax.swing.*; import com.oreilly.qtjnotebook.ch01.QTSessionCheck; public class LevelMeterPlayer extends Frame { // bands used by apple sndequalizer example; equivalent to qt player's // int[ ] EQ_LEVELS = { 200, 400, 800, 1600, 3200, 6400, 12800, 21000 }; static final Dimension meterMinSize = new Dimension (300, 150); LevelMeter meter; AudioMediaHandler audioMediaHandler; public static void main (String[ ] args) { try { QTSessionCheck.check( ); Frame f= new LevelMeterPlayer( ); f.pack( ); f.setVisible(true); } catch (QTException qte) { qte.printStackTrace( ); } } public LevelMeter); // add level meter to GUI meter = new LevelMeter( ); add (meter, BorderLayout.SOUTH); // set up repainting timer Timer t = new Timer (50, new ActionListener( ) { public void actionPerformed (ActionEvent ae) { meter.repaint( ); } }); t.start( ); } class LevelMeter extends Canvas { public Dimension getPreferredSize( ) { return meterMinSize; } public Dimension getMinimumSize( ) { return meterMinSize; } public LevelMeter( ) throws QTException { MediaEQSpectrumBands bands = new MediaEQSpectrumBands (EQ_LEVELS.length); for (int i=0; i<EQ_LEVELS.length; i++) { bands.setFrequency (i, EQ_LEVELS[i]); audioMediaHandler.setSoundEqualizerBands (bands); audioMediaHandler.setSoundLevelMeteringEnabled (true); } } public void paint (Graphics g) { int gHeight = this.getHeight( ); int gWidth = this.getWidth( ); // draw baseline g.drawLine (0, gHeight, gWidth, gHeight); try { if (audioMediaHandler != null) { int[ ] levels = audioMediaHandler.getSoundEqualizerBandLevels( EQ_LEVELS.length); int maxHeight = gHeight - 1; int barWidth = gWidth / levels.length; int segInterval = gHeight / 20; for (int i=0; i<levels.length; i++) { // calculate height of each set of boxes, // proportional to level float levPct = ((float)levels[i]) / 255.0f; // math is a little weird here; y axis has 0 at top, // but we have 0 at bottom of this graph int barHeight = (int) (levPct * maxHeight); // draw the bar as set of 0-20 rectangles int barCount = 0; for (int j=maxHeight; j > (maxHeight - barHeight); j-=segInterval) { switch (barCount) { case 20: case 19: case 18: g.setColor (Color.red); break; case 17: case 16: case 15: g.setColor (Color.yellow); break; default: g.setColor (Color.green); } g.fillRect (i * barWidth, j - segInterval, barWidth - 1, segInterval - 1); barCount++; } } } } catch (QTException qte) { qte.printStackTrace( ); } } } } When run, this example provides the graphics-level display as shown in Figure 7-3. What just happened? This example sets up levels that, according to a demo in the native API, correspond to the same frequency bands metered by QuickTime Player: int[ ] EQ_LEVELS = { 200, 400, 800, 1600, 3200, 6400, 12800, 21000 }; When the user opens a movie, the program finds the AudioMediaHandler of the first audio track and calls setSoundEqualizerBands() with these bands. Then it creates an instance of the LevelMeter inner class, along with a Swing Timer to repaint the level meter every 50 milliseconds. When the repaint calls the meter's paint() method, it divides its width by the number of bands to figure out how wide each bar should be. The height takes a little more work: the returned levels are in the range 0 to 255, so the program calculates a "level percent" float by dividing by 255, then multiplying this by the height of the component. With the height and width of each frequency band, the component can draw a set of boxes, up to that height, to represent the band's level. What about... ...the values passed in for frequencies and the number that can be passed in? Unfortunately, with no documentation for this feature, there's only trial-and-error to fall back on. One thing I've found is that you can have only 10 bands—you can pass in as many frequencies as you want, and you'll get that many back in the int array returned by getSoundLevelMeterLevels( ), but only the first 10 will have nonzero values. Building an Audio Track from Raw Samples As I've said many times before: movies have tracks, tracks have media, media have samples. But what are these samples? In the case of sound, they indicate how much voltage should be applied to a speaker at an instant of time. By itself, a sample is meaningless, but as a speaker is repeatedly excited and relaxed, it creates waves of sound that move through the air and can be picked up by the ear. So, why would you want to do this? One plausible scenario is that you have code that generates this uncompressed pulse code modulation (PCM) data, like a decoder for some format that QuickTime doesn't support. By writing the raw samples to an empty movie, you can expose it to QuickTime and then play it, export it to QT-supported formats, and use other QuickTime-related functions. How do I do that? SoundMedia inherits an addSample( ) method from the Media class. This can be used to pack samples into a Media, which in turn can be added to a Track, which then can be added to a Movie. But what values do you provide to create an audible sound? The example shown in Example 7-5 creates a square wave at a constant frequency. A square wave is one in which the voltage is either fully on or completely off. To create a 1000-hertz (Hz) tone, you write samples to alternate between full voltage and zero voltage, 1,000 times per second. Figure 7-4 shows a graph of sample values for the square wave. Note Run this example with ant run-ch07-audiosamplebuilder. Example 7-5. Building audio media by adding samples package com.oreilly.qtjnotebook.ch07; import quicktime.*; import quicktime.std.*; import quicktime.std.movies.*; import quicktime.std.movies.media.*; import quicktime.io.*; import quicktime.util.*; import com.oreilly.qtjnotebook.ch01.QTSessionCheck; public class AudioSampleBuilder { static final int SAMPLING = 44100; static final byte[ ] ONE_SECOND_SAMPLE = new byte [SAMPLING * 2]; static final int FREQUENCY = 262; public static void main (String[ ] args) { try { QTSessionCheck.check( ); QTFile movFile = new QTFile (new java.io.File("buildaudio audio track int timeScale = SAMPLING; // 44100 units per second Track soundTrack = movie.addTrack (0, 0, 1); System.out.println ("Added empty Track"); // create media for this track Media soundMedia = new SoundMedia (soundTrack, timeScale); System.out.println ("Created Media"); // add samples soundMedia.beginEdits( ); // see native docs for other format consts int format = QTUtils.toOSType ("NONE"); SoundDescription soundDesc = new SoundDescription(format); System.out.println ("Created SoundDescription"); soundDesc.setNumberOfChannels(1); soundDesc.setSampleSize(16); soundDesc.setSampleRate(SAMPLING); for (int i=0; i<5; i++) { // build the one-second sample QTHandle mediaHandle = buildOneSecondSample (i); soundMedia.addSample(mediaHandle, // QTHandleRef data, 0, // int dataOffset, mediaHandle.getSize( ), // int dataSize, 1, // int durationPerSample, soundDesc, // SampleDescription sampleDesc, SAMPLING, // int numberOfSamples, 0 // int sampleFlags) ); } // finish editing and insert media into track soundMedia.endEdits( ); System.out.println ("Ended edits"); soundTrack.insertMedia (0, // trackStart 0, // mediaTime soundMedia.getDuration( ), // mediaDuration 1); // mediaRate System.out.println ("inserted media"); // save up System.out.println ("Saving..."); OpenMovieFile omf = OpenMovieFile.asWrite (movFile); movie.addResource (omf, StdQTConstants.movieInDataForkResID, movFile.getName( )); System.out.println ("Done"); System.exit(0); } catch (QTException qte) { qte.printStackTrace( ); } } // main /** Fill ONE_SECOND_SAMPLE with two-byte samples, according to some scheme (like square wave, sine wave, etc.) then wrap with QTHandle */ public static QTHandle buildOneSecondSample (int inTime) throws QTException { // convert inTime to sample count (i.e., how many samples // past 0 we are) int wavelengthInSamples = SAMPLING / FREQUENCY; int halfWavelength = wavelengthInSamples / 2; int sample = inTime * SAMPLING; for (int i=0; i<SAMPLING*2; i+=2) { int offset = sample % wavelengthInSamples; // square wave - bytes are either 7fff or 0000 if (offset < halfWavelength) { ONE_SECOND_SAMPLE[i] = (byte) 0x7f; ONE_SECOND_SAMPLE[i+1] = (byte) 0xff; } else { ONE_SECOND_SAMPLE[i] = (byte) 0x00; ONE_SECOND_SAMPLE[i+1] = (byte) 0x00; } sample ++; } return new QTHandle (ONE_SECOND_SAMPLE); } } Note Run this example with ant-ch07-audiosamplebuilder. When run, this creates a five-second, audio-only movie file called buildaudio.mov. Open it in QuickTime Player or an equivalent (like the level meter player from the previous lab) to listen to the file. Note Square waves are not easy on the ears. Turn down your speakers or headphones before you play this file. What just happened? Two constants at the beginning define important values. SAMPLING is the number of samples to be played every second. This example uses 44,100, which is the same as on a compact disc. Tip An important consideration for choosing a sampling frequency is the Nyquist-Shannon Sampling Theorem , which states that you need to sample at a rate double the highest frequency you want to capture. So, a sampling rate of 44,100 will properly reproduce frequencies less than 22,050 Hz. Given that human hearing typically ranges from 20 to 20,000 Hz, this effectively covers any humanly audible sound. The FREQUENCY constant is the frequency of the sound wave to be produced. This example uses 262, which is approximately middle C on a piano. Note To be more precise, middle C is approximately 261.625565 Hz. To start writing samples, you need a SoundMedia object and a place to put your data. The example does this by: - Creating a new Movie with createMovieFile( ) . Using this approach—instead of the no-arg Movie constructor—has the benefit of indicating where the samples are to be stored. - Adding a new track to the movie, with no size, and a volume of 1 (full volume). - Creating a new SoundMedia object. This constructor takes the track the media is associated with and a time scale for the media. In this case, 44,100 is a good choice because then each sample will correspond to one unit of the media's time scale. You could use higher values, but not lower ones, because a sample can't be expressed as less than one unit of the time scale. - Calling beginEdits() on the media to indicate that the program will be making changes to the media. Most of the rest of the code in the example has to do with setting up the call to addSample() , which is somewhat tricky. The method takes seven arguments: - A QTHandleRef that points to the data to be added - An offset into the handle - The size of the data to be inserted - The durationPerSample—how much time the sample represents, in the media's time scale - A SampleDescription to describe the data in the handle - The number of samples being added with this call - Behavior flags The first thing to do is to create a SampleDescription that can be reused on every call to addSample( ). To do this, create a SoundDescription object. The constructor takes a "format" FOUR_CHAR_CODE, which for uncompressed data is "NONE". Tip Other valid formats are defined in "QuickTime API Reference: S0und Formats" on Apple's developer site. Next, you customize the SampleDescription object with some setter methods to indicate the number of channels, the size of each sample in bits, and the sampling frequency. For this example, I used one channel and 16 bits per sample. This means that when the byte array with the data is parsed, QuickTime will take the data 2 bytes at a time and assume it to be a 16-bit value. If there were two channels, there would be 4 bytes per sample: two 2-byte samples, one for each speaker. You might expect that you'd then simply loop through, adding one sample at a time to the Media and creating one second of audio every 44,100 times through the loop. Although this is legal, the resulting file won't actually play. The problem is that QuickTime wants you to put audio data in larger and more manageable chunks. To quote from the native AddMediaSample docs: You should set the value of this parameter so that the resulting sample size represents a reasonable compromise between total data retrieval time and the overhead associated with input and output. [ . . . ] For a sound media, choose a number of samples that corresponds to between 0.5 and 1.0 seconds of sound. In general, you should not create groups of sound samples that are less than 2 KB in size or greater than 15 KB. So, in this example, I've created a byte array to represent one second of samples, which is filled in a method called buildOneSecondSample( ). This method figures out where the waveform is at each sample time and writes either 0x7fff or 0x0000 to each 2-byte pair. Because the "NONE" format assumes signed shorts, 0x7fff is the maximum, not 0xffff. With the byte array filled, you can wrap it with a QTHandle, and you're ready to call addSample( ) . The call looks like this: soundMedia.addSample(mediaHandle, // QTHandleRef data, 0, // int dataOffset, mediaHandle.getSize( ), // int dataSize, 1, // int durationPerSample, soundDesc, // SampleDescription sampleDesc, SAMPLING, // int numberOfSamples, 0 // int sampleFlags) ); Once you're done adding samples, it's cleanup time. You use endEdits() to tell the Media you're done editing, then actually put the media into the track with Track.insertMedia() , which tells the track what parts of the media object to use and where it goes relative to the track's time scale. Finally, the movie is written to disk with the curiously named Movie.addResource( ) . What about... ...some other kind of wave because hearing that square wave is really unpleasant? A sine wave offers a nicer alternative, because it is much more like a naturally occurring sound. Figure 7-5 shows what its waveform looks like. The following alternate implementation of buildOneSecondSample( ) produces a sine wave—I didn't want to put it in the preceding example, which is already complicated enough without having to use trigonometry, like this does: public static QTHandle buildOneSecondSample (int inTime) throws QTException { // convert inTime to sample count (i.e., how many samples // past 0 we are) int wavelengthInSamples = SAMPLING / FREQUENCY; int sample = inTime * SAMPLING; double twoPi = 2 * Math.PI; double radiansPerSample = twoPi / wavelengthInSamples; // each sample should be one n/th of twoPi for (int i=0; i<SAMPLING*2; i+=2) { int offset = sample % wavelengthInSamples; // sine wave double angle = offset * radiansPerSample; double sine = Math.sin (angle); // sines are -1<x<1. we want from 0 to 0x7fff double heightD = (sine + 1) * (0x7fff / 2); // cast to int and fix endianness if on little (x86, etc.) short height = (short) heightD; // pack this into array as two bytes ONE_SECOND_SAMPLE [i] = (byte) ((height & 0xff00) >> 8); ONE_SECOND_SAMPLE [i+1] = (byte) (height & 0xff); sample ++; } return new QTHandle (ONE_SECOND_SAMPLE); } This implementation calculates the width of a wavelength in samples, then divides that into equal segments of a 2 radius for its calls to Math.sin( ) . The returned values are then translated so that instead of running from -1.0 to 1.0, they run from 0 to 0x7fff. It's also worth noting that the middle C sine wave is pretty hard to hear over basic computer speakers. You might have better results with a frequency of 440, which is the A above middle C.
http://commons.oreilly.com/wiki/index.php?title=QuickTime_for_Java:_A_Developer's_Notebook/Audio_Media&diff=prev&oldid=26177
CC-MAIN-2013-20
refinedweb
6,059
55.54
an EJB project This section is organized in the following parts: Setting up the project environment - Right click Under Project Explorer and Select New->EJB Project. - Name the project as StatefulBean. Select Next. - Mark the fields as suggested in the screenshot and Select Next. - Uncheck Generate Deployment Descriptor. This is beacuse we are using annotations is our applications and so deployment descriptors are redundant entity. Select Next. - On next screen select all default values and Select Finish. This creates a skeleton for the EJB project. Next steps are adding the bean class, bean interface and setter/getter methods. - Right click on ejbModule in StatefulBean project and select New->class. - Name the class as PersonalInfo and package as ejb.stateful. Select Finish. Adding Java classes and interfaces - Add the following code to PersonalInfo.java.PersonalInfo.java - Similarly create a class BillingInfo.javaand add the following code.BillingInfo.java PersonalInfo.javaand BillingInfo.javaare classes for setting and getting the user information. - Now we will add the Business interface or bean interface. Right click on the package ejb.stateful and Select New->Interface. - Name the interface as AccountCreator and Select Finish. - Add the following code to AccountCreator interface.Note: Once you enter this code you might see errors like @EJB can be resolved. Currently there are some limitations with the geronimo eclipse plugin which will resolved soon. We will soon suggest you how to get rid of those errors.AccountCreator.java - Next step is to add the implementation to the interface. Right click on ejb.stateful interface and select New->class. - Name the bean class as AccountCreatorBean and Select Finish. - Add the following code to AccountCreatorBean.Once you have added the code you will see lot of errors but this can be resolved easily and is shown in next step. The errors in the code is due to missing classes from our server runtime.AccountCreatorBean.java Making additional configurations Resolve the errors as follows. - Right click on StatefulBean project and select Properties. - On the next screen select Java Build Path->Libraries->Add External Jars. - Browse to <GERONIMO_HOME>/repository/org/apache/geronimo/specs/geronimo-ejb_3.0_spec/1.0.1and select geronimo-ejb_3.0_spec-1.0.1.jar. Select Open. - Similarly browse to <GERONIMO_HOME>/repository/org/apache/geronimo/specs/geronimo-annotation_1.0_spec/1.1.1 and add geronimo-annotation_1.0_spec-1.1.1.jar. - Once done you can see both the jars enlisted. Select OK. Let us walk through the EJB bean class code. - @Stateful public class AccountCreatorBean implements AccountCreator- @ Stateful annotation declares the Bean class as Stateful class. - @Resource(name="jdbc/userds") DataSource datasource;- This is a resource injection into the bean class wherin a datasource is injected using the @Resource annotation. We will shortly see how to create a datasource in geronimo. - public AccountCreatorBean*-This is a constructor for the bean class and it will be used to create a bean instance whenever a request is received from new client connection. - @PostConstruct, @PostActivate public void openConnection()- @PostConstruct and @PostActivate are annotations which are basically called lifecycle callback annotation. The lifecycle for these annotation is as follows - New bean instance is created using the default constructor. - Resources are injected - Now the PostConstruct method is called which in our case is to open a database connection. - PostActivate is called on the bean instances which have been passivated and required to be reactivated. It goes on the same cycle as being followed by PostConstruct. - @PreDestroy @PrePassivate public void closeConnection()- Again @PreDestroy and @PrePassivate are Lifecycle callback annotation. The lifecycle of these annotation is as follows - Bean instances in the pool are used and business methods are invoked. - Once the client is idle for a period of time container passivates the bean instance. The closeConnection function is called just before container passivates the bean. - If the client does not invoke a passivated bean for a period of time it is destroyed. - public void addPersonalInfo(PersonalInfo personalinfo) and public void addBillingInfo(BillingInfo billinginfo)- These two functions are invoked to store client data across various calls. - @Remove public void createAccount()- There are two ways in which a bean is destroyed and hence this where a client session ends in stateful bean. One is when a bean has been passivated and is not reinvoked by client hence the bean instance is destroyed. Another way is to use @Remove annotation. Once the client confirms and submits all the required information the data is populated into the database and that is where the session ends. Creating a database using the administrative console - Start the server and lunch the administrative console using the URL. - Enter default username and password. - In the welcome page, Under Embedded DB, Select DB Manager. - On the next page create a database named userinfo.sqlscript. Select Run Sql. userinfo.sql - To verify the table creation succeeded. Select Application as shown in the figure. - Next screen suggests the table has been successfully created. To view the contents of the table select VIEW CONTENTS. - The table is currently empty as shown in the figure. Creating a datasource using the. Creating a Web based application client - Right click under Project Explorer and Select New->Dynamic Web Project. - Name the project as StatefulClient and Select Next. - Keep the default settings as shown in the figure. Select Next. - On the next screen keep default values. Select Next. - Default values on this screen too. Select Finish. - Right click on the StatefulClient project and Select New->Servlet. - Name the package as ejb.stateful and Servlet as Controller. - Keep the default values and Select Next. - Keep the default values and Select Finish. - Once the servlet is created it shows error. This is due to servlet api missing from the runtime. This can be easily resolved. Right click on StatefulClient project and Select Properties. - On the next screen select Java build path and select Libraries. - Select Add External jars. - Browse to your <GERONIMO_HOME>\repository\org\apache\geronimo\specs\geronimo-servlet_2.5_spec\1.1.2and select geronimo-servlet_2.5_spec-1.1.2.jar and Select Open. - Select OK on the next screen this will remove all the errors. - Add the following code to Controller.javaservlet.Controller.java This servlet contains code referring to bean interface class and PersonalInfo and BilllingInfo class. We need to add these projects to the build path so that the classes can be compiled. - Right click on StatefulClient project and Select Properties->Java Build Path->Projects. Select Add. - Check StatefulBean and Select OK. - Once done the project will be visible in the build path. Select OK. - Next step is to add jsp pages to our client project. Right click on WebContent under StatefulClient project and Select New->jsp. - Name the jsp as PersonalInfo.jspand Select Next. - On the next screen select Finish. - Add the following code to PersonalInfo.jsp.PersonalInfo.jsp - Similarly add another jsp with the name BillingInfo.jspand add the following code. BillingInfo.jsp Let's walk through the servlet and jsp code. First through Controller servlet code. - if ( (request.getRequestURI()).equals("/StatefulClient/Controller"))- This code act as a controller on the jsp which is making a request. This is possible only when the jsp's make a call to th servlet with different names. How this can be done will be illustrated in the next section. - HttpSession hs= request.getSession(true); hs.setAttribute("handle", ac); This part of the code saves the remote interface handle and later in the second call same handle is used to make another call to bean methods. - RequestDispatcher rd=request.getRequestDispatcher("BillingInfo.jsp")- This code section and the next line forwards the control to next jsp that is BillingInfo.jsp. - Rest of the servlet deals with calling the setter methods and later sets the object so as to persist the data between different calls. - Next walkthrough the jsp code. - PersonalInfo.jsp has <form action="Controller"> whereas BillingInfo.jsp has <form action="Controller1"> as the action element but both internally calling the same servlet. This can be easily done by modifying web.xml this will be shown in the next section. Modifying openejb-jar.xml, web.xml and geronimo-web.xml - In StatefulBean project select META-INF/openejb-jar.xml and replace the existing code with following codeThe above deployment plan is different from the above one in the following wayopenejb-jar.xml - The namespace generated by geronimo eclipse plugin are not to AG 2.1 level. This is due to some limitation which will be fixed soon. - Since the ejb bean class refers to jdbc/userds datasource a <dependency> element has to be added in EJB deployment plan. - In StatefulClient project select WEB-INF/web.xml and replace the existing code with the followingweb.xml - In StatefulClient project select WEB-INF/geronimo-web.xml and add a dependency element for the StatefulBean EJB project. THe final web deployment plan will look as followsgeronimo-web.xml. Deploy and Run - Under project explorer right click on StatefulBean project and select Export->EJB jar file. - Browse to a destination and Select Finish. - Similarly export StatefulClient project. - Launch the administrative console with. Under application select Deploy New. - Browse to the StatefulBean project and select Install. - Similarly deploy StatefulClient project. - Launch the application using the link. Fill up the form and select SubmitQuery. - Once you submitt the current page, next page will be displayed wherein you need to enter your Billing Information. Once done select SubmitQuery. - Later you can verify the database which is populated with the user data.
https://cwiki.apache.org/confluence/display/GMOxDOC30/Stateful+Session+Bean
CC-MAIN-2015-48
refinedweb
1,567
52.56
My doubt is - i'm doing a script with the objective to open the door if the character that enters in the trigger are holding the "object3", and i do it without problem, but now i see this problem - If the character exit and enter again in the trigger the "open" animation starts again and it's feels so weird see the door teleports from the ceiling to the floor (i'm using a scifi-door that i found in asset store, link below). I know if i do the OnTriggerExit() the door can close when the character exits the trigger, but i asked to my friend to test with me something and i got this - if other player with the "object3" enters in the trigger at the same time with another player the door do the same thing for the 2 players, in other words, the door open 2 times, and if i enter in the trigger when the door is closing the door teleports to the ceiling too. I don't know how to solve this, someone can help me? I'm using this door for all the tests - public class Door2 : MonoBehaviour { public GameObject object, door; public Animation anim; void Start(){ anim = door.GetComponent<Animation>(); } void OnTriggerEnter ( Collider obj ){ door = GameObject.FindWithTag ("Door"); //if the character enter with the 3rd object and the door are closed, this code runs if ((object.GetComponent<Renderer> ().sharedMaterial.IsKeywordEnabled ("Object3")) && (!anim.isPlaying) && (door.GetComponent<Animation>().name != "open")) { door = GameObject.FindWithTag ("Door"); door.GetComponent<Animation> ().Play ("open"); door.GetComponent<Animation> ().wrapMode = WrapMode.Once; //here i try to do something like - if the character enters again in the trigger and the door are open, the door don't do the "open" animation again if ((object.GetComponent<Renderer> ().sharedMaterial.IsKeywordEnabled("Object3")) && (anim.isPlaying) && (door.GetComponent<Animation>().name == "open")) { return; } return; } } Answer by BigJigglyNutSack · Jan 17, 2018 at 04:17 PM Use a bool to detect if it's open or closed. Surround your code in this if(!isOpen) { //code to open your door isOpen = true; } Make sure you declare the "isOpen" bool. Also, using the Find method is something you should avoid if 502 People are following this question. OnTriggerExit2D doesn't work when one of gameObject setActive false. 0 Answers OnTriggerEnter and OnTriggerExit behaving unexpected 1 Answer math.Lerp not correct? 1 Answer How to move OnTriggerEnter to Update 1 Answer CapsuleCollider check if grounded 0 Answers
https://answers.unity.com/questions/1455977/c-how-can-i-solve-this-door-problem-in-script-can.html
CC-MAIN-2019-09
refinedweb
404
50.77
SYNOPSIS #include <nng/nng.h> int nng_recv(nng_socket s, void *data, size_t *sizep, int flags); DESCRIPTION The nng_recv() receives a message. The flags is a bit mask that may contain any of the following values: NNG_FLAG_NONBLOCK The function returns immediately, even if no message is available. Without this flag, the function will wait until a message is received by the socket s, or any configured timer expires. NNG_FLAG_ALLOC If this flag is present, then a “zero-copy” mode is used. In this case the caller must set the value of data to the location of another pointer (of type void *), and the sizep pointer must be set to a location to receive the size of the message body. The function will then allocate a message buffer (as if by nng_alloc()), fill it with the message body, and store it at the address referenced by data, and update the size referenced by sizep. The caller is responsible for disposing of the received buffer either by the nng_free()function or passing the message (also with the NNG_FLAG_ALLOCflag) in a call to nng_send(). If the special flag NNG_FLAG_ALLOC (see above) is not specified, then the caller must set data to a buffer to receive the message body content, and must store the size of that buffer at the location pointed to by sizep. When the function returns, if it is successful, the size at sizep will be updated with the actual message body length copied into data. RETURN VALUES This function returns 0 on success, and non-zero otherwise.
https://nng.nanomsg.org/man/v1.2.2/nng_recv.3.html
CC-MAIN-2020-10
refinedweb
256
65.35
Hi. My test uses remote modules like this: import { randomItem } from ""; import { htmlReport } from ""; It works fine locally, but fails on CI with the following errors: time="2022-04-25T12:44:36Z" level=error msg="Module specifier \"\" xxx.xxx.xxx.xxx:443: i/o timeout\"\n\tat go.k6.io/k6/js.(*InitContext).Require-fm (native)\n\tat\n" hint="script exception" I know, my CI node is behind the proxy, so an expected solution would be to set appropriate env variables (http(s)_proxy). I’ve tried them all (including uppercased ones), but k6 keeps failing with above error. I’am sure that my proxy configuration should work, because when I add proxy-related env-variables, wget -qO- starts working (but nor k6, unfortunately). What options do I have to resolve this issue? How to teach k6 to use proxy for resolving http(s) modules?
https://community.k6.io/t/using-proxy-for-getting-remote-http-s-modules/3533
CC-MAIN-2022-40
refinedweb
146
54.93
Problem You want to persist your objects to a MS SQL Server database instead of the default access database Solution using System; using DevExpress.Xpo; using DevExpress.Xpo.DB; class Program { static void Main(string[] args) { string conn = MSSqlConnectionProvider.GetConnectionString(".", "XPOCookbook"); XpoDefault.DataLayer = XpoDefault.GetDataLayer(conn, AutoCreateOption.DatabaseAndSchema); } } Discussion To connect to a SQL Server database, instead of the default Access database, the first thing that we have to do is to create the required connection string by calling the GetConnectionString() method on the MSSqlConnectionProvider class and passing in the server name and database name. Having thus obtained the connection string, we can then make our database the default one used by XPO by setting the XpoDefault DataLayer property to the result of calling the GetDataLayer() method, on the XpoDefault class, passing in the connection string and setting the AutoCreateOption. This is nothing more than the first 3 items of your KB A2944. I suggest to put a link somewhere to A2944 (and others already present, well written, regulary updated, KB articles) to increase visibility of these basic informations instead of repeating the same things in another place. I don't want to blame, use this as a positive suggestion... Hope to see some more focused info in the next recipe. Always thanks for your work. Hi Luca, thanks for your feedback. You are correct this information is available elsewhere, however, if you read the comments in the post (community.devexpress.com/.../the-expressapp-application-framework-project-mojave-and-show-stoppers.aspx) you will see that some customers have trouble in finding some information and so part of the reason for doing this cookbook is to surface this information and provide another mechanism for customers to find it. Another reason for providing this information is that you have to start somewhere and the begining seems a logical place. If I started where you wanted me to start I dare say I'd get comments asking why I didn't start at the begining. Not all the recipes are going to be of interest to all customers Hi Gary, it would be nice if there would be a cookbook for 2 different connections to 2 different databases (for example on the same SQL Server), and how you have to work with sessions. Btw its a very good idea show the mnemonics of the templates if you use coderush Thanks for your work Gary it would be nice for you to include details of using XPO in ASP.NET with membership management, state management and session management in your cookbooks. Are you hoping to have a Membership provider for ASP.NET? It would be nice to see, how we can quickly make one - which is really effective as the one introduced in ASP.NET 2.0. I would suggest that you provide (vsi) templates. Thanks Going well so far Gary, I agree with comments regarding finding information being a bigger issue than the information not being there. Any intention of "binding" the cookbook into a suplimental help file for downloading? It's nice having information at your finger tips but sometimes (when on the move) there are those of us with no internet connection. Not wanting to start an "old argument" here but why are examples so often in C# not VB? I keep hearing that VB programmers are "inferior" to C# ones - so why is it we appear to be deemed better at converting C# to VB than C# programmers are at converting the other way around? @Richard, @Malisa, more advanced repices will be coming later. @Steve, hmm, there is no plans to bind this into a help file at the moment, mainly because I hadn't thought of it. :-) Would anyone else be interested in such a thing if it were possible? that would rock as an extra section in the help file ;)
http://community.devexpress.com/blogs/garyshort/archive/2008/08/25/xpo-cookbook-2-change-the-default-database.aspx
crawl-002
refinedweb
643
59.23
In many ways, I guess clang is reasonable and roughly as good as GCC. But one thing I've begun to find fairly annoying is the over-done inlining of functions with -O2 or above. In a fairly simplistic recursive-descent parser I've written, I ended up adding the noinline attribute 5 times to functions to prevent the size of the program from bumping up another 4 KB, usually after simplifying the code a little which brought a function under some threshold causing it to be inlined. The first instance of this was with the warning function in the parser. (A macro used in the parser becoming slightly simpler caused the program size to increase, and investigating the mystery revealed that the warning function in the parser had now become inlined because its size had slightly decreased.) noinline Now I solve that by compiling the main parser file with -Os, but whenever -O2 or -O3 are used for a file, noinline attributes can be needed once in a while to prevent clang from doing something silly. It doesn't matter either way with GCC, which is more conservative about inlining of functions. Usually nothing of significance happens with GCC when the noinline attribute was added to keep clang from being too gung-ho on inlining. -Os -O2 -O3 What prompted this post is when I tonight had to add noinline to prevent a slow-down in array-filling code in a file compiled with -O3. But that's part 3 of 3 of recent changes. (Current version of ramp.c has all the changes.) First, after speeding up the program when built with GCC by compiling some files with -O3, I found that it became slower instead with clang because of one file, that one. This was solved by changing the fill-functions to handle pointers and indexes more simply and moving the use a value-multiplying array down into each of them. Then -O3 made for a speed-up with clang, too, my current test for the program running roughly 10% faster (the value-filling stuff matters a lot) on an old laptop with i386 OpenBSD and its standard clang 10.0.1, compared to before using -O3. Second, another much smaller change added a NULL-check near a call to one fill-with-value function which NULL-checks a pointer copied from the same pointer. The presence of both NULL checks caused clang to not only inline that function, but also in turn the loss of roughly half of the 10% speed-up. Third, adding a noinline to the function which was called restored the performance to the same as before the extra NULL check had been added. (In cases like this, it's great when the compiler optimizes the loops inside loop-containing functions as aggressively as possible, but not when it inlines those functions themselves. I don't know exactly what ended up happening before the noinline was added, but perhaps it hurt efficient cache use in some way.) (Edit: That slow-down does not happen on the amd64 system I have with clang. It may be something other than too much inlining which however happens along with that on the i386 system.) Well, that's my experience so far and with the versions of the compilers used with modern *BSD (and the modern GCC on Linux installations I've used) the past few years. (Before that, too little knowledge and experience with clang to say much of anything.) I think they must have set the default inline threshold too liberally, compared to what GCC does, years and major versions ago. joelkp If all your compiler is doing in the name of "optimization" is inlining a few functions behind your back, then you don't have any problems. Consider, instead, this case of aggressive compiler optimization which bit me some time back. The compiler in question is GCC (version doesn't matter--as you'll see presently). There was this network-packet processing code I'd written--standard stuff: read an ethernet packet from the card, do some munging on it, pass it up to the main program. I made some trivial change, and then the program started crashing on some inputs. After a day or two of hair-pulling, I discovered that my code was OK (arguably--you'll see why further on), and that this was a case of code improvement by the compiler aka. optimization. Here's a test code which demonstrates what was going on: $ cat -n t.c 1 #include <stdio.h> 2 #include <string.h> 3 #include <stdlib.h> 4 5 int 6 main(int argc, char* argv[]) 7 { 8 enum { N = 32 }; 9 char buf[N]; 10 char *s = NULL; /* Initialize */ 11 size_t n = 0; /* " */ 12 13 if (argc > 1) { 14 s = argv[1]; 15 if ((n = strlen(s)) >= N) { 16 fprintf(stderr, "%s: arg too long\n", *argv); 17 exit(EXIT_FAILURE); 18 } 19 } 20 21 memmove(buf, s, n); 22 buf[n] = 0; 23 24 if (s == NULL) 25 printf("`s' is null--compiler OK.\n"); 26 else /* GCC says s is not NULL when it is! */ 27 printf("compiler says `s' is not NULL when it is!\n"); 28 29 return 0; 30 } $ Notice that if no argument is passed to the program, s is not touched. Let's run it: s $ gcc -O -o t t.c $ ./t compiler says `s' is not NULL when it is! $ Huh? I mean: WTF!? Try w/o optimizatons: $ gcc -o t t.c $ ./t `s' is null--compiler OK. $ That worked. But, what's going on here? Compiling with -fsanitize=undefined gives a clue (in the old days, before ubiquitous static analysis, sanitizers and such-like, we used to look at the assembly output): -fsanitize=undefined $ gcc -O -fsanitize=undefined -o t t.c $ ./t t.c:21:2: runtime error: null pointer passed as argument 2, which is declared to never be null `s' is null--compiler OK. $ Notice how program behaviour has changed. Line 21 is the memmove. Check declaration of memmove: memmove $ fgrep -A1 memmove /usr/include/string.h extern void *memmove (void *__dest, const void *__src, size_t __n) __THROW __nonnull ((1, 2)); $ Ahh! memmove is annotated. It says: args. 1 and 2 to the function can be assumed to be nonnull. So, what gcc has done is look at that annotation and gone: Oh! I see you've called memmove, so I can therefore assume that s is not-NULL--even if n is 0 and memmove hasn't done anything--and optimize the other case out completely (and throw away your initialization statement too). nonnull n 0 This is the kind of (infamous) GCC "optimization" that led to a root exploit in the Linux kernel a few years ago. What about NetBSD, where memmove (and other similar functions) are not annotated (at least, I couldn't find it)? We'll, GCC still optimizes away my useful code. What gets my goat is that other people have also stumbled on this GCC issue. And the GCC folks won't fix it because: well... standards, annotations, droit du compiler, whatever... Even the latest GCC, 10.2.0, does this. clang, BTW, does the correct thing here. Try it. clang The purpose of in-lining functions is to take advantage of the processor's optimization mechanisms. The less branch instructions, the more efficient predictive execution and instruction caching. Today's computers have plenty of memory and run applications that are much more computing-intensive than in the past, so processors and compilers are designed for speed. Nowadays, only a few embedded devices are strongly memory constrained, so we can say the need to optimize for size has disappeared at 99.99%. Very clearly, trying and save 4 kilobytes is crucial on a Z80 but makes no sense on an x86 or amd64 machine. This means that if you begin defeating the compiler's algorithms by adding noinline here and there, expect unpredictable consequences. Don't forget that optimizations built into processors and compilers are based on statistics, so they are only efficient on "large" amounts of code executed "intensively", whereas a developer can only analyze a few lines at a time. Lastly, if you're in a use case where you really need to control the code the compiler generates, it may be a better idea to code that part of your application in assembly language instead of C or C++. It would avoid you the unpleasant surprise to see the object code change when using a different compiler, or a newer version of the same one. 20-100 You're plainly right that there's no practical need to save 4KB here and there on x86 and amd64. How people relate to more and less bloat in programs is in very large part a matter of aesthetics rather than anything practical. My first post conflated two separate problems, by the way. The first is what I've found a minor annoyance for a few years: clang inlining some functions uselessly when gcc doesn't with noticeable size bloat as a result. It turns out that there's an actual single threshold parameter for how llvm behaves where the default allows a fairly large inlining "cost", and statistically it probably makes clang better in benchmarks. Personally, I just think that value is a bit too high for -O2 (but it goes hand in hand with the general trend of the "inline" keyword mattering less and less). Others want it cranked even higher because it makes their programs faster. The second problem, which I falsely believed an instance of the first at first, is some kind of x86-specific issue, as I noted in the edited-in note. The function which was inlined is not actually a valid example of the first problem, the "inlining cost" is much smaller, the size bloat likewise, and it turns out gcc also inlined it. It just triggered a hasty reaction in me, because it was yet another clang-specific inlining thing happening where this time performance instead of size took a hit when I set out to optimize performance. I think using noinline in particular will generally have an effect similar to moving a function to a separate compilation unit (but a bit less severe). For functions where the performance of the call simply doesn't matter, like something which prints errors, it should be harmless for how the compiler optimizes the rest of the code. Do I care enough to actually go the route of using assembly for this project for complete control? No, not at present. I'm currently not nearly that deep in understanding of how to make things perform. Actually, currently, I appreciate how well modern compilers made that array filling code work when -O3 combined with the use of the restrict keyword on 64-bit systems. Filling float arrays became so much faster that attempts to make the program faster by avoiding it when only a single value is needed paled in comparison. rvp That's spooky in how something expected to be an ordinary function suddenly changes the meaning of code in which it's referred to. It seems the reason it happens regardless of annotations in the library used is that the gcc built-in version also has the annotation. I just tested this: Before commenting out #include <string.h> on my Linux system: #include <string.h> $ cc -O t.c $ ./a.out compiler says `s' is not NULL when it is! $ cc -O -fno-builtin t.c $ ./a.out compiler says `s' is not NULL when it is! After commenting out string.h: $ cc -O t.c t.c: In function 'main': t.c:15:12: warning: implicit declaration of function 'strlen' [-Wimplicit-function-declaration] 15 | if ((n = strlen(s)) >= N) { | ^~~~~~ t.c:15:12: warning: incompatible implicit declaration of built-in function 'strlen' t.c:4:1: note: include '<string.h>' or provide a declaration of 'strlen' 3 | #include <stdlib.h> +++ |+#include <string.h> 4 | t.c:21:2: warning: implicit declaration of function 'memmove' [-Wimplicit-function-declaration] 21 | memmove(buf, s, n); | ^~~~~~~ t.c:21:2: warning: incompatible implicit declaration of built-in function 'memmove' t.c:21:2: note: include '<string.h>' or provide a declaration of 'memmove' $ ./a.out compiler says `s' is not NULL when it is! $ cc -O -fno-builtin t.c t.c: In function 'main': t.c:15:12: warning: implicit declaration of function 'strlen' [-Wimplicit-function-declaration] 15 | if ((n = strlen(s)) >= N) { | ^~~~~~ t.c:4:1: note: 'strlen' is defined in header '<string.h>'; did you forget to '#include <string.h>'? 3 | #include <stdlib.h> +++ |+#include <string.h> 4 | t.c:21:2: warning: implicit declaration of function 'memmove' [-Wimplicit-function-declaration] 21 | memmove(buf, s, n); | ^~~~~~~ t.c:21:2: note: 'memmove' is defined in header '<string.h>'; did you forget to #include <string.h>'? $ ./a.out `s' is null--compiler OK. So it appears you can get rid of it by both not using a library declaration with the annotation and building with -fno-builtin. -fno-builtin joelkp Of course, it would be ugly to take that approach (replacing library declarations and disabling built-ins for a specific compiler) in portable code. I don't mean to say it would be a nice fix. For now, clang may avoid that kind of thing, but I searched a bit and it looks like this could change in future clang versions. Current documentation: "Note that the nonnull attribute indicates that passing null to a non-null parameter is undefined behavior, which the optimizer may take advantage of to, e.g., remove null checks." So they note they may do what gcc already does. joelkp I don't mean to say it would be a nice fix. The way I fixed my code was to add an extra check for a 0-sized buffer after processing. joelkp "Note that the nonnull attribute indicates that passing null to a non-null parameter is undefined behavior, which the optimizer may take advantage of to, e.g., remove null checks." So they note they may do what gcc already does. That's exactly the rationale the GCC folks have used to defend their "optimization": OK, so, I have to be a C language-lawyer and now have to go to night-school to understand your compiler-specific extensions as well? Yeesh! joelkp That's spooky in how something expected to be an ordinary function suddenly changes the meaning of code in which it's referred to. That's just it, mate. You try to write as portable and standards-safe a code as possible, then along comes some other bit of code which relies on extensions, and it throws a spanner in the whole works. rvp I figured out how this works; there's related more general features in both compilers ("assume", "unreachable", etc.). The optimizer is basically given a logical formula when the attribute is used. It is then allowed to remove any code specific to when the opposite is true. To ensure nothing can be removed, make sure the optimizer ends up with a tautology, always true, after looking at the surrounding code, so that the opposite is always false. Going back to the example you posted, where s can be NULL but buf can not, the solution is to ensure memmove is guarded with a NULL-check for s: buf if (s != NULL) memmove(buf, s, n); Then the problem goes away. Logically, the optimizer ends up dealing with: "if s is not null, then s is not null", a tautology. The opposite goes from "s is null" to simply "false". If such a check is forgotten in any place where it's needed, the feature shifts compiler semantics towards: "I see you left out that null check... Muhahahaha!" (More recent gcc versions than I now use seem to move in the direction of being more helpful, by actually providing warnings: -Wnonnull.) -Wnonnull.) joelkp Going back to the example you posted, where s can be NULL but buf can not, the solution is to ensure memmove is guarded with a NULL-check for s: if (s != NULL) memmove(buf, s, n); joelkp Going back to the example you posted, where s can be NULL but buf can not, the solution is to ensure memmove is guarded with a NULL-check for s: You know what the problem with that is, right? Sometimes such checks themselves are optimized away. This is the kernel exploit I mentioned earlier: Fun with NULL pointers, part 1 joelkp.) Right. But, this adds redundancy to the code, is wasteful, and is ugly to boot. Consider a simple implementation of memcpy: memcpy void mcpy(char* s, const char* t, size_t n) { while (n-- > 0) *t++ = *s++; } The code will work for all valid arguments. And as long as n holds the correct size of the buffers (even 0), both s and t can be NULL. Now, I'm being asked to code it in this way just to work around the compiler: t void mcpy(char* s, const char* t, size_t n) { if (s == NULL || t == NULL || n == 0) return; while (n-- > 0) *t++ = *s++; } I don't mind adding necessary checks, but, this kind of a check is just a waste. The code works fine without it. My original code was the same. If the buffer I was working on ended up with a size of 0, the control flow would just pass through without doing anything. But, then the compiler changed the code path underneath me.. rvp You know what the problem with that is, right? Sometimes such checks themselves are optimized away. This is the kernel exploit I mentioned earlier: Fun with NULL pointers, part 1 As it relates to "nonnull", that article makes clear that dereferencing a pointer is basically treated in the same way as passing it as a nonnull argument. If you consider a chain of nonnull uses of a pointer, you can think of each place where a check may crucially be present or absent as a node in a head-and-tail list moved down. Node after node, if the check is there, the head will be kept; but otherwise the tail is removed along with the head. In the function in the article, checking for a pointer became "headless" when a pointer-dereferencing was added before the first check, and that's why the check was removed. (Anyway, here's what seems like the solution if you want to ensure gcc never deletes NULL checks: -fno-delete-null-pointer-checks. Adding that to compilation options is also another fix for the original example code you posted. That option was mentioned in the follow-up article to the article you linked to, as something the Linux project ended up using, given the peculiarities of kernels (where NULL can actually be a valid address, unlike normal code).) -fno-delete-null-pointer-checks rvp Now, I'm being asked to code it in this way just to work around the compiler: Actually, it should kind of be the opposite. The onus is all on the caller to do the checking, exactly like the onus is on the pointer user to ensure it's not NULL before dereferencing it, but without warning messages, people are likely to unknowingly keep using code with what's now formally bugs in it. If/when warnings are actually produced - gcc used to lack warnings for these nonnull-issues due to some kind of technical debt for a very long time, and versions without warnings will be in wide use for years to come - people should be alerted to having to change their code and add checks to avoid ominous undefined behavior. rvp. That's because the annotation also exists in gcc's built-in versions of the functions. Which makes sense if the built-ins are in sync with the standard libc, but not as much if not. I think that may be feasible to change as part of tweaking the default gcc build which comes with NetBSD, changing such details to more closely match NetBSD libc. That's far beyond my detail-knowledge, but the people wrestling with gcc versions and updates in NetBSD may have the know-how (if they would turn out motivated to work on such a change). joelkp That's because the annotation also exists in gcc's built-in versions of the functions. There is no annotation for gcc's builtins, the compiler just inlines the appropriate code (so no annotation is needed)(the annotation, I think, are picked up from .../gcc/builtins.def). Using -fno-builtin does the trick on NetBSD (unlike Linux as in your example), but, then everything is undone if you add -D_FORTIFY_SOURCE=N because then another set of builtins are used. This what I dislike about this mess: playing hunt the correct option against the compiler. .../gcc/builtins.def -D_FORTIFY_SOURCE=N BTW, look at the assembly for -O. It's instructive. Compare it to -O -fno-builtin on NetBSD. For the first case, gcc completely elides one of the code paths--as you would expect it to. -O -O -fno-builtin
https://www.unitedbsd.com/d/364-clang-and-over-done-inlining-of-functions
CC-MAIN-2022-21
refinedweb
3,553
70.53
Why is your preferred programming language your go-to? Ali Spittel on November 18, 2018 I got asked on Twitter why I love Python so much, and I thought I would do a quick writeup, then open this up to a discussion on why your preferr... [Read Full] As might be clear from my entries to your challenges, I’m a Rubyist. I play around with other things, and explore different things, but Ruby captured my heart and I haven’t looked back. I’m super excited for Ruby 3. I've been a Rubyist for over a decade but, like you, I always explored new things, and now I settled mostly on Elixir as my go-to language for server-side web/apps. I still use Ruby and Python for other quick/focused scripts or tasks. I also appreciate Go, even if it's my last go-to, just when I need speed or portability (eg: deploy a binary in production) Ruby is my second go-to language right after Kotlin (I do primarily mobile apps) :) Great choice, you can't go wrong with Ruby. Kotlin definitely seems to get a lot of things right Last time I worked with Ruby (3-4 years ago), I heard about the Ruby 3x3 initiative. Any resource you can point me to for latest progress/news on Ruby v3? Go as it's a great all-rounder. Complied to native binary makes it simple to use for processing io and building tools. Fast enough and safe enough to build microsevices in. Memory managed to I don't worry about malloc and free any more :) I have also experimented with it and enjoyed it quite a lot, seems to be like pretty good replacement for C/C++. Am a go programmer too, at first I love C/C++. Go is awesome and it’s really fast 💨 I mostly use PHP and JS as those were the two languages I learned first. I can see how PHP and JS can be abused to write bad code, but so can any language. I recently used Go for a project where I needed plenty of threads and that was awesome, I love how simple Go is. PHP and JS are easy to use and as far as I know the two fastest scripting language (though JS cheats as it is all JIT now). JS has the benefit of being useful everywhere now and a must know for frontend web dev. I have considered learning python, but I keep thinking there is nothing I can do in python that I can't do in PHP/JS and PHP/JS both run significantly faster in most cases than vanilla python (excluding pypy and cython). JS because it is JIT compiled, and PHP because of heavy caching plus a ton of built in C extensions which are of course very fast. Main problem with PHP is that it was not so good in the past (before PHP 7 and PHP 5). It was slow and it had some weird functionalities. But with release of PHP 7 (and PHP 8), PSR standards and frameworks like Laravel and Symphony, it has improved a lot. It's very fast and with JIT coming in PHP 8 will be even faster. Other big problem is that is is very easy for beginners. And beginners obviously don't write so good code. This is also reason why many people think that (all) code in PHP is bad. Additionally: PHP is a beast of a web language. With the upcoming 7.3 release being 200% faster than the 5.6 release performance is only getting better. It is one of the fastest scripting languages, and it will become even faster when JIT is implemented (like an order of magnitude faster). That won't make a difference in web apps because computing is never the bottleneck (it's usually I/O bound : loading 1000s of classes from your favorite framework and querying an API or SQL server that's not even on the same machine comes at a cost), but it could make PHP usable for domains in which nobody would consider using PHP/Python/Ruby today, like heavy scientific computations, image processing, 3d rendering or IA. Preloading (coming in 7.4) on the other hand could help quite a bit on the I/O side. I started learning programming with C++ (go-to language for the first 3-4 years in uni), then there was Java, PHP, a bit of Python, a bit of Ruby on Rails... Somewhere in between I had to do a project with animations, and I used canvas and Javascript. Javascript was SO weird! I remember banging my head on the table because of "functions are objects and you can pass them around" while things like closures simply made me want to run away. Nothing made much sense. But then it became familiar. Like in romance movies, hate transformed into love. I like the flexibility. I like how it looks. Everything seems a bit easier when I code it in JS. I'm going to be the weirdo here: I don't have a go-to programming language. I like python. It's nice for simplicity, and I'm ok with the whitespace thing. I never get to apply this skill professionally though. I like javascript. It's everywhere. It's got some warts, but it's pretty simple to hammer stuff out. I like java. Tools like Spring Boot where almost anything you want to do is there already, and you can build pretty big things with a couple of config classes, a few interfaces, and some annotations. Same here, mostly any of these three or c#, when it comes to programming. The only reason I use JS more than the others is that it's really simple to just open the console in Chrome and start typing... I have two preferable languages depending on the situation: Java and TypeScript. Java is the language I use professionally, I am a web developer who loves Spring Boot and what it has to offer. The combination of Java maturity and Spring Boot ease of use made me love the Java language, before I used to see C# as the go-to language, but after I knew Java well, I fell in love. It is great to see that decades of well organized community driven development is at your back. You can find any solution for anything in Java and most of the time the solution is very elegant as well. People may say that it is verbose, but I think it has the vocabulary necessary to transmit what needs to be transmitted to the developer. It has the best libraries and the best exception handling as well, it is easy to find where is the problem, something that I didn't find in any language, unless, maybe, C#. TypeScript is Javascript for the statically typed language fans. You are almost forced to know Javascript nowadays, but Javascript really bothered me with its, in my opinion, unsafe way of dealing with types, I have come from Java, it is something I don't tolerate. Therefore, TypeScript has been incredible for me, it reminds me a lot Java (and Kotlin), which makes me feel more comfortable and I can use the fast development and prototype of JavaScript and NodeJS. So, I tend to use Java for work related things, web development and serious projects and TypeScript for quick projects, prototyping, etc. Honorable mentions: the growing desire to come back to C# and learn it well, I think it is important and it is probably not that far from Java. I think it is good to have both Java and C# under your belt, but I didn't find the time to do so until now. Python is a language that I never liked and probably never will, it is just not for me, I need curly braces in my life... hahaha I've felt this urge at times as well as I started with C#, but am a Python/Julia guy now. I will say as well that Java and C# are, in my opinion, sister languages in a lot of ways. If you know one, you are likely able to read the other and know what is going on for the most part. They were also designed with similar problems in mind, for similar use cases, and inspired by similar languages. They were just developed at different companies. I ultimately believe that Java is more widely used due to its portability, whereas C# is used by pretty much any business that runs the Microsoft stack. Sorry, let me rephrase that: pretty much any business :P. However, in the days where the JVM runs more than just Java and C# is now cross-platform (weird), they may be actually becoming more similar than they've ever been. I'm actually hugely interested to see the future of these two languages. Yes, that's true, C# and Java are very similar in many ways, except for a few different ways with doing things and writing things. It is somewhat the difference between dialects of a same language, I think. The different is mostly how things are done, for example dependency injection, database interaction, etc. In these areas they are very different, but it is a matter of framework, not language itself. Here where I live I see that there are a 50/50 ratio between Java and C# and it would be great for my career to know both. This weekend I decided to have a "C# Weekend", I'm rewriting a application I did for fun and practice in Java and Spring to C# and APS.NET Core. Probably, I will write an article about my impressions regarding this rewriting. :) I would read the crap out of that article. I left the C# world behind when .NET 4.5 was new and C# 5 was the latest version of the language so, suffice to say, I'm well removed from the C family nowadays. But I would love to see something that was written in Java, not only re-written into modern C#, but in the .NET Core repackaging of The .NET Framework. I've read a bit about .NET Core, but am not really sure if it is Microsoft doing its usual thing of acquiring a company (in this case Xamarin) then giving their founders a big middle finger by ripping their product apart, taking what they like, and throwing the rest away, telling Mono to go shove it, or an actual attempt to encourage a cross-platform, open sourced world. I like Microsoft's new direction. I really like how Satya Nadella, when first given his position, was expected to do a bunch of stuff, and in many cases had it demanded of him by the board, and instead kind of just said "That's nice. But Azure is my baby, I'm a cloud guy at heart. So guess what? We're going after AWS's cloud service. Oh, and we're going to do it by showing Google their not the only open playground of the big 5. Cheers fellas, I've got a company to run." But, I'm also relatively sure that at least part of that was, more or less, a PR stunt to help all of us skeptics believe that a "maverick" had taken MS by the ears and is leading a bright new revolution in tech and don't really trust it as far as I can grow a grand piano full of molten lead. It would be interesting to see the comparison either way though :D i write python for my job, but if i had my druthers i'd be writing haskell all the time. it's concise and elegant, and things like type classes, algebraic data types, and higher-kinded types (not to mention the concept of kinds in general) are things i miss when writing other languages Props for the colloquialism “druthers”. 👍 I would recommend you Elm language if you like Haskell. Pure functional language suitable for beginners. I actually looked into Elm--it's got a lot to recommend it, but I'm not super keen on the way the language and community is managed. If you like Elm, you should give PureScript a try--it's heavily Haskell-influenced and compiles to JavaScript, plus it has (imo) a better way of dealing with interoperation, not to mention fun stuff like row polymorphism. I know PureScript, I have seen a lot of talks about it, but it's a bit mathematical voodoo to me. Note, that I do not work as a frontend developer, but I do mobile apps in Kotlin, so everything else is just more or less hobby to me. I love haskell too. It's not my go to though. I would choose java or python (the languages I know best) or maybe C (I don't know very much C, but I sometimes have to use it. I like C more than java). My languages team: Ruby, everything is a object and care programmers happy. Go favor simple. Nodejs is popular. Rust has some new idea. Elixir' author often say beautiful code. Clojure, everything in (). I'm a Python guy through and through. I started my career with C#, which was a bit of a difficult first language to be honest. Back then, I didn't think about what was going on under the hood as I could barely even remember what to type. But I truly hated how much code it took for everything Now, I've not written a single line of C# code in 5 years so there are probably a ton of mistakes above. But if you've come from a C language to a high level scripting language like Python, Julia, Go, or Ruby and you can look me in the eye and tell me that the above C# example didn't make your sphincter tighten a little bit, then you are a dirty dirty liar 😋😋😋. What I love most about Python is that it teaches you to think in code. The syntax and ease of use allowed you to translate ideas in your head into code so easily its insane. A lot of folks feel that this kind of easy syntax makes programmers weak and squishy, but I disagree. I would consider a strong programmer somebody who can think through a problem while doing something else, figure out a solution, then simply sit down and type it out and watch it build and work correctly rather than trying to remember what that curly brace is supposed to go. Our job is to build solutions and solve problems efficiently, effectively and quickly. Does expertise in using a sword give you bragging rights? Yes, of course. Will a beginner with a gun kill you in open space from 30 feet away? Absolutely and bragging rights be damned. But different languages have different strengths and use cases. One size never fits all and I have a lot of trouble not trying to solve a problem with python that would be better suited to Go or Rust or Haskell. Either way, I love using Python because I can code thoughts like writing notes in a notebook. If I want some functionality, I like that Python makes it easy to translate that thought into code and test it quickly rather than fighting the compiler. I think dynamically/weakly typed languages are a bomb waiting to explode. I think statically typed is the way to go. But I do use python a lot and it was my first language. A lot of folks feel that way and I can't say that defining the type of data that goes into a variable or data set is not a huge advantage in a lot of ways. But I also think that dynamic typing has its place as well. In my opinion, having used both, I think the typing system of a language is less important than knowing how to use the typing system of your language of choice. Good code is good code and bad code is bad code. Although, I will totally admit that static typing does make it easer to write certain types of code well, it has its own set of issues. But I will also be the first to admit that Python, and languages like it, are far from perfect despite their popularity. Still my favorite language though! (Although, I have been looking at Rust's performance and memory/thread safety lately, and am thinking of starting to move some of my more speed-centric projects over to it. Don't tell Python yet thought cuz she doesn't know and I want her to be ready to move on with her life before I bring in a step-mom for her in the shape of Rust. It can be so tough for kids to see their dad get re-married just before they leave for college...) Perl because it gives me power to do anything simply Started learning to code in C#, quickly switched to JavaScript. Now my go-to is TypeScript. I love it because it is very versatile. TS/JS runs pretty much everywhere: Web, Mobile, Desktop, IOT, front-end, back-end. JS might not excel at everything but I honestly think it doesn't suck at much. You can build some pretty cool games with it, do machine learning, web apps (of course), desktop apps, data science stuff... It feels like the only limit is your imagination. I like the fact that it's a scripting language and you don't have to worry about memory management. I like the event system and how it deals with asynchronous code. Even though this may not be where it shines the most, I like the fact that you can write programs that make use of multi-threading and concurrency (even if everyone thinks you can't do that in JS). I like TypeScript because it adds a layer of type safety on top of the above. I usually find TS code prettier, cleaner and more often self-documented than plain JS code. It also makes writing OO style code easier while it still allows writing in a functional style if you prefer. Ruby is my gem. It was originally designed to make programming fun, and every time I use it, I enjoy myself. I try other languages, and see lots of potential for Elixir while having plenty of respect and appreciation for Python, but at the end of the day, I am a Ruby developer and I couldn't be happier. I see what you did there I'm split between C and ARM Assembly. C is beautiful to me because it's in touch with the hardware of the system, plus it's very portable between systems and is human-readable enough that you can do general purpose tasks as well as hardware programming. I love Assembly Language because I can follow through my program in the wires and components of computers. I like to be able to debug from a physical perspective. ARM asm is also a whole lot easier than any of the CISC asm languages (I'll learn x86 one day) and has more functionality in terms of microcontrollers and single board computers that I like to use. I like the history of asm and it feels closest to the early programmers of the 50s and it gives you a proper feel of how difficult most programs must have been back then. C's cool. I don't know it very well, but I'm forced to use it (gladly). I’m sure this is just a misconception, but I love javascript because of how easy it is to get into and how versatile it is. When I needed to make a list of options for a select and was given a list of languages and their ‘language code’, I opened up a node instance and turned those two lists into an object and the. Build a template and generated the list of html options in the terminal. Rather typing out everything I saved myself 20~ minutes, and I learned something about the fs package in node. I think that’s something special. I love Python because of its simplicity, awesomeness, popularity, and ecosystem. Here is "Zen of Python" (PEP20) that says what tries to be I like Erlang because it's concurrency, syntax (it's a little bit ugly, I know) and functional programming. The Erlang's syntax is so good for the functional programming in my opinion but many people hate its syntax, I still don't know why.. There is Elixir, that makes Erlang better and simpler. It could be lovely for Rubyists, but It's still not familiar for me. OCaml! It has beautiful syntax, features, performance. It by default compiles into OCaml Bytecode/Native, and it can be compiled into JS using Bucklescript, which is great. There is ReasonML, like Erlangs Elixir. It supports React programming, which makes you able to FRP (Functional React Programming). I really want to it's be popular, because OCaml has failed to be popular, but if Reason will be popular OCaml will be popular too and the ecosystem will be richer. Because Reason compiles into OCaml AST then JS using Bucklescript. It's good for JS programmers. I'm thinking for Kotlin too. It looks good! Perl is my go-to, and has been for over 20 years. This was back when Perl and CGI was the standard for dynamic web, and I thought "use Perl, or write it in C++ with the strings library?", and after a wave of nausea, never looked back. My general move is to try to turn something that's available in one form and turn it into something else, and Perl is very good at that, and when it isn't, I can shell it out and play with the result. I especially love CPAN, which I hold as a best-in-show for language repositories. I can install and upgrade old modules with every confidence that everything that worked before I started will work when I am done, which is not true of every dynamic language that starts with P. I know that things I want to do are affected by things I know that I can do with Perl, and I know it has slipped a lot in popularity in the last 20 years. But I know that many of my idle questions, from "Can I brute-force solve this logic problem in my son's math homework?" to "Can I re-implement a spirograph in SVG?" to "Can I put my FitBit step count in my Bash prompt?" are solvable with Perl. Perl. It wasn't the first language I learned (that was Basic, followed by Pascal, SQL, Prolog...), but I found its way of expressing things very similar to the way I thought about things. Prolog is weird. How did you like it? Well, we used it at university in "Applied Logic" and it made sense there. Years later, I was surprised how Erlang's syntax was similar to Prolog's one. My go to language was JavaScript because I started with that while I was attending Ironhack bootcamp. In my head, I thought only in JavaScript for my backend and front end possibilities. Then I started working for a company right after bootcamp that worked mainly with a LAMP stack ( Laravel , PHP) ... then I was hooked lol. Nowadays, I code everything in PHP and Laravel. It’s so elegant I've been using mostly C# for about 12 years. For about 10 years before that, it was classic VB (VB2 to VB6) and before that it was about 8 years of MASM, C and C++ with a sprinkling of QBASIC. The thing I like about C# is how it links both these earlier phases of my career and takes it further. If I want to just bang/try something out or show somebody an idea or concept, I go Python every time. The syntax just gets out of the way and I don't have to be as picky about all the details. If I want to be sure it works I go to Haskell. Basically, if it compiles it works, and I've brought the techniques I learned from Haskell into other typed languages to be able to reap some of the benefits there as well. If it needs to be fast, C/C++ or Fortran. C# currently, for which I love. This was mainly for my job and because I love the dotnet stack (even more so now Core is around). But I would like to transfer to a more data centric role around Machine Learning (if possible), so looking into F# and Python. You can still use C# for machine learning intensive tasks e.g. ML.NET My first language, C/C++. There's tremendous freedom and variety. It's got all the good as well as the bad. You take your pick. You can do templates or OOP or vanilla C. There's no interpreter to get in the way of what you want. And there's an endless richness to it - which can equally be considered a drawback. Since I have a physics background, I'm also a fan of Fortran. Did you know there's a 2008 version? It still comes out as the fastest** language and can teach you low-level details. It also changes little details with C, e.g. parameter passing details, start by one indexing, column major order, no pointers etc. So it can serve as a nice compare and contrast with C. Of course, no one can live without a scripting language. And Python is far and away the favorite. Though I'm curious to try Julia lately. ** Of course, you can program C/C++ to be just as fast. But if you're careless, then Fortran ties your hands more. I mean ... Ali ... you already wrote better about the reasons for Python than I could. :-) Java is the one I was trained in in school, so it'll always have a soft spot, and I like the "belt and suspenders" feel of its type safety and syntax sometimes ... especially when I think I might mess up. But for scratchpad stuff ... it's really nice to be able to write fast, and then later actually read what I wrote. As that old comic goes ... Python lets you import essay. ;-) Oh, that and it was runnable on Windows, thanks to WinPython, without admin rights! It was a HUGE step up from bashing my head against the wall (pun slightly intended) with shell scripting for PC file manipulation. I have a few preferences depending on the task at hand: bashfor anything to do with file and system operations pythonfor quick scripts and prototyping projects (generally < 500 lines). It is great for easy setup, lot of libraries and what you can accomplish in 500 lines java(and these days kotlin) for more serious projects where I feel static typing and JVM ecosystem will benefit Haskell is awesome, but hard. It hurts my brain, but I love it. Good luck It used to be python because I could spin up a console and test stuff easily. Then it became java since the IDE would generate everything for me. Now it's elixir since code is not complex enough to need IDEs, the console is available even in production and functional programming rocks. As my first language, Python used to be my go-to mostly because it let me build things out quickly without many barriers. Nowadays, I spend my days (and nights) focusing on web development so I'm writing a lot more JavaScript. It's not necessarily my favorite language but I've gotten comfortable with it because I use it so often now. I think the added context of working with a browser and DOM gives it a really fascinating ecosystem, so it's always going to keep me attached to some extent. I will say that I've been learning Go on my own time and it's slowly becoming my go-to if we're talking about language design. I generally prefer simplicity and readability in a language and I think Go is great at that compared to other languages, despite its known shortcomings. I'm hoping it won't be long before I actually start using it for personal projects and other endeavors. I love Ruby expressivity and cleanliness and the Rails and Dry ecosystem. I love Elixir/Erlang performance and concurrency, their immutability and functional approach to problems. I love Kotlin expressivity and performance, and the Spring ecosystem. To each their own, because if you got more than a hammer you can deal with more than a nail :) I wonder how these two could come together glued with “and” :) Because they really complement each other very nicely. You have the data access layer (models) and presentation layer (views and controllers) handled by Rails, and the business (transactions and operations), validation (validation schemas) and orchestration logic (autoinject and container) handled by Dry. I don't call models directly in controllers, I have service objects that transparently handles data transformation between the application and the outside world (being the front-end or the database), validating input data and integrating with external APIs or applying business rules Kotlin because it's general purpose and is so elegant. Python is also ok, but i like it only for scripting (I do not like OOP in Python because IMO it's not primarily designed for it). Ruby is IMHO much more elegant, flexible and naturally object-oriented. In terms of usability nothing beats JS though. Well I spend most of my professional time in JavaScript, and I've come to appreciate the language quite a lot. I also do a little Python, and that provides a nice break from JavaScript. But what I really like is Ruby. That's what I'll reach for if it doesn't have to be either of the former two. It's a great language, great community, easy to get started, and difficult to master. What makes me 'go-to' a language tends to be a question of whether I know how to solve a problem in a language already, and do I have time to do it in a language I don't know how to solve the problem in already. I'll usually try to churn a solution out in Bash using other programs ( curl, sed, jqand stuff) and some pipes. Then if that runs out of steam I'll probably use Golang as I hate it's standard library HTTP client the least, or NodeJS because of experience. If you gave me a code challenge I'd do it in Common Lisp. Lisp or Python (for most things); Perl, bash and C (for work related stuffs), though I am not particularly "good" with any particular language I muddle through --depending on the task, end_Goal or other restrictions, I tend to take the path of least resistance. Being more task oriented these days (meaning, old) --tending to use whatever is most convenient or interesting based on the environment and time-frame allotted. Though I have been working with "Go Lang" more, which seems to fit many use-cases and be fairly handy for building applications quickly and/or prototyping. I have also considered switching to clojure, outright, more than once. There are several languages that I like, so mostly my go-to definitions depends of what are the needs for the project. I know some C, C++, Rust, Python, Java (Ugh) and a little bit of Julia. I did a lot of coding with C, Rust, Java and Python, so these 4 are the languages I'm most comfortable with, but that doesn't mean I like all 4. I hate Java, I wasted a awful lot of time fixing bugs on C cause the compiler let me do dumb things. I LOVE Rust for the safety, for the fast and the zero-cost abstraction. I'm liking Julia a lot cause it's very easy to learn and write for data science and is very fast too, I mostly use python for that, but there have been times that it was too slow for the size of the data I was processing and Julia saved my ass. Fast execution -> Rust Safety -> Rust Fast writing -> Python or Julia I don't have anything go to for web stuff since I never done a web development before. I'm also taking suggestion of the modern languages that is used today for web development. I also experimenting with OcaML and Clojure, and I kinda liking them a lot... C++ tends to be my go-to language for technical interviews as it was what I was trained in academically and what I studied as I prepped to enter the job market. For writing full scale applications C# has become my go-to simply because I use it everyday and it's most familiar to me on a "bigger than a single algorithm" scale at this point. On a side note, I'm taking some online courses in Python and loving its simplicity so far! My go to is Java, as it wasn't so much my first language (C# -> C++ -> Java). But it was my first fluent language. Now it is burned into my brain to the point I started writing it by hand on paper because why not. Honestly, your article explains why I like Python as well. Perl was my first love -- my "baby duck" language (first language I learned, so followed it forever). But Python is my go-to now and all I want to use really. It does all the things I need it to and feels right while using it. PHP, but just because I got started working with WordPress and it's the language I'm most familiar in. Several times recently I've had Ideas for side projects or whatever that I could build with React or anything else but 9 times out of 10 it's come down to "I can build this now in PHP, or I could spend probably 3 times as long to learn to do it in something else". I find that I don't care too much about the syntax that I'm writing -- more important for me is that I know the details of the language like the back of my hand. For instance, I prefer JavaScript since I know in detail how prototypes work, how the garbage collector operates, which methods are available on an array, etc. This means I can just sit down and write code without having to constantly stop to look things up. If I had that same depth of knowledge in, say, Python or Go, then I'd feel comfortable switching languages. I have written an article about this... Why I use JS. Móricz Gergő But really, JS is the language that I know best, and that doesn’t require a file template (unlike C# for example). Like how Python was your first language you learned, Ruby was my first language I learned (at least since the days I was playing with Perl in high school) and it has been my favorite since. A lot of how you described Python, I think applies to Ruby as well. It is simply enjoyable to write and a pleasure to read. I have taught coding students Ruby and love how accessible it is to people entering the field. When I attended my first RubyConf and got the chance to meet Yukihiro Matsumoto ("Matz") for the first time, I realized that only someone as kind and thoughtful could create a language that is so intuitive and kind to its users. Absolute, first stop, go-to? Bash. I try and get what I need out of the standard set of Unix tools, piped together with some scrappy Bash scripting. This is usually if I'm trying to write something for just me or my colleagues - as it's harder to be sure of the environment your 'program' will be running in. If the Bash starts to get a little wild (i.e. to the point where I can't understand it easily), I'll fall back on either NodeJS or Go. Go most likely in the case where I don't want to get my head around any async issues, plus I like the standard library, but both are good. That gets me pretty far - probably up to and including a web service. But if it's a coding challenge I'll probably use Common Lisp or Scheme. Because it's fun! I don't have a particular go-to as I don't consider myself a bonafide developer. Nor am I a dedicated designer. I'm somewhere in the purgatory that lives between the two. I mostly work in HTML/CSS and some JS due to my job. Most client work I use PHP, but mostly because I can use includes for my HTML and its the only other language I used other than HTML/CSS. So I guess my goto language(s) are PHP/HTML/CSS ¯_(ツ)_/¯ Python was my first language too, and it was my favourite until learning Kotlin a few months ago. Kotlin is amazing with its extension functions and lambdas, and list operations are much nicer than those in Python. I find the dynamic typing of Python makes large projects a little more difficult to manage. C# all the way. I'm mainly a web developer so started out with PHP and front-end tech in Uni but gradually found that the front-end discipline is very easy so moved on to C#. It's tooling and readability is what I love about it. Even if I intensively worked with languages such as C, PHP or Javascript, I always come back to Java, especially on new projects. The biggest reason is that I like its constraints, static-strong typing rules, and toolset/maturity. Well, my first language was Haskell, my preferred languages are Kotlin and C++ (Kotlin wins by a small margin), and my go-to ones are either C, C++ or Java, simply because they're the ones I am most experienced in. I've used other languages - Objective-C, Swift, C#, Ruby, Python, Javascript - and although I don't like them as much, I find that knowing about them ends up influencing my problem solving skills and critical thinking even when using other languages. My go-to programming language is definitely PHP 7 with Symfony Framework. Its the language I have more experience with and it fits my programming style well. I can build stuff with it very fast and with good design thanks to OOP and all the amazing Symfony features like the very well done Dependency Injection container. Despite the general hate, for me PHP is one of most balanced languages in terms of simplicity, feature set and performance. Good OOP without Java memory hog ;) For the cases PHP is not a good fit, like applications that needs lots of concurrency, smaller services or system tools, Golang is my tool of choice. I think PHP and Golang, complement each other really well, and I believe I can build almost anything with these two. I need to get some experience with Frontend frameworks to complement my personal tech stack and I will go with VueJS. rust, easily. safe, fast, helps prevent you from writing bad code My day-to-day has been JavaScript for the past two years, because of work, and that has really pushed it to be my go to. I can really focus on specific functionality by utilizing the npm ecosystem to flesh peripheral components/functionality. I also love how flexible JavaScript is, but I do miss the strength ofa more classic/traditional compiled language like C# or Java. JavaScript (ES6) is my favorite. Node.js can be used in so many ways. Add something like React Native to your stack and wow, you can do server, web, desktop, and mobile. Besides all of that, JavaScript is the only language I can just "hack" away at and consistently get somewhere. There are so many features that are available but not necessary that you can do it any way you want very quickly if you know JS well enough. I think a lot of people think this level of choice is bad, but if you're a good ES6 dev I think the code diversity you become familiar with makes you all around more adaptable even in other languages. Python is definitely my goto language, I love its flexibility and how easy it is to make a simple script, or a big project. The only thing I haven't found out how to do in Python, is a good looking GUI. Ah, yes. For writing something quickly, probably Python. For making it run quickly, Go. For the Web, JavaScript, until something better comes along. I also find that every project has at least one Bash script somewhere. My goto is It's probably because I been burnt a fair bit of enterprise workspace. It may not always be the best tool for the best job. But it always can get it done. It also saved me many times over - unable to get approval to install Redis / Memcache without admin?. I will toss over and execute a less performant, but working jar file instead. Similar stories on bash scripts written to execute certain actions in the background. And that familiarity, reliability, and speed of which I can work on them, make them my goto. It also makes them one of my most hated languages of all times. Along with every quirk and design decision in the language, I wish was removed and never made. So I guess all is fair from there. No preference here; I am more of a 'right tool for the right job' kind of person. PowerShell is cross-platform. It runs on all the OSes. I've been in a love-hate relationship with Python. The good times include all of the things you've mentioned. The bad ones were when it's really hard to see the forest from the trees because of its imperative nature. For the most part, Go is the most productive imperative language for me. It's simpler than Python and I find the braces to be easier for my typing than whitespaces. Plus types! For learning languages, Ocaml and Scheme. Scheme is undoubtedly the easiest language to understand hard programming concept. My go to is Python (specifically Python 2.7, which is problematic I know). The why is an odd mix of "it's a language I love" and "I know how to do so much in it that at this point I'm kind of held hostage by it". I've been recently trying to make all the necessary changes to my work code bases so that we can start using Python 3, as well as spending all my free time learning Rust. Python may remain my go to but hopefully in time it'll be mostly glue for stuff I write in rust. I'm pretty much a pure python programmer at the moment, so it would have to be that, but I'm going to expand my languages known to include things like Java/C#/C++ and bash. I feel like I should learn something that's useful for front end like javascript, but I don't see myself even going for a job where that's needed so it's low on the list. My go-to language is probably Rust, although I also use JS a lot. What I like about Rust is that it gives you lots of safety guarantees and it's incredibly fast. Also, writing Rust programs requires you to think differently about how you structure your code, and I've found that to be really helpful in improving the general way in which I approach new problems. I started learning programming with Python, I didn't really knew what programming was but I gave it a try. Then I moved to Java and started building simple games on the terminal. From there I tried C++. But after a while I got introduced to the world of Web Development, and I was amazed! I learned Javascript and got into node.js. Now, I'm looking forward to learn Go and probably get into machine learning. Although my go-to language remains Python, it's simple and lets you build things really quickly! Python is not my favorite language, but it is the language I'm using at work. After using it professionally for long enough, I've become familiar enough with its syntax and standard library to be able to do most things without having this search the docs. This alone is enough to make Python my go-to language. Not any feature of the language itself - just the fact that I'm more proficient with it than with any other language. Python is an all-rounder. You can do anything from basic scripting to big data to running websites. I've used Vbs, Shell, PowerShell for different things over time and python can do everything they can do and may be more. The libraries are so vast and diverse, letting lot of possibilities a reality. I've used Django and it's simple and fantastic. My first programming language was Java, which was my go-to language throughout my undergrad studies. I then learned JavaScript as I started getting deep into web development. Getting into JavaScript was easy for me since I already knew the fundamentals and syntax of Java, and it slowly became my new go-to language. I really love its flexibility, how dynamic it is and the big variety of frameworks it has. If I'm trying to think of an implementation for a problem, I always find myself thinking in JavaScript. I go with Html. No i'm just kidding. I love c++ and to be honest I would try to do anything there. Right now I'm learning socket programming with it. But professionally I am a MEAN Stack developer and do a lot with typescript. I have experience in the programming over 12 years, and I like a lot of languages, but my the best tool today is Swift (Apple is the creator), for native development for iOS, macOS, watchOS, tvOS and applications like for server-side or utils. I've dabbled with a lot of languages over the years but I always end up going to either C or Python. I love C for raw programming as well as using it in networking. And Python I could happily marry. The ability for these two to work in correlation together is incredibly easy to work with and use. Typescript has the flexibility of JavaScript with the editor support of a typed language. It’s just plain fun incentives you get comfy with it. Wish I could settle on a go-to language. I'm going to say Ruby, then node (elm), then python. Only reason is syntax. I need a good editor to make sure I didn't miss an alignment with python. I don’t really have a single go-to. It depends on what the job is. If Python can do it more easily, great, I’ll do that. For any heavier work, though, C#. Quick, dirty, and easy. C#per from the bottom of my heart ❤️ Definitely Python :-) My goto language is c# even though it's the 4th language I learned. It increases productivity and it's just a great all rounder. Fixing to learn python and add it to my toolbelt VB.NET is my preferred go-to programming language. My feeling towards it is the same as yours towards Python :) Golang! I'm pretty new at the language but I love the simplicty. I did several mini-tools (like getting a Spotify-API) with it. It's a breeze to throw the binary on a server and forget about it. For me my go to is certainly python, but depending on the task I've found a lot of love for typescript and kotlin Ruby. Really it just comes down to the fact that Ruby is fun. I love experimenting with new languages, but very few languages give me the sense of joy I feel when I see good Ruby code. Ruby when I need to write something quickly. PHP when I need it to go fast, especially when dealing with files. I love Ruby, but Python has been my dirty secret for almost 10 years now. I also like to understand stuff by writing lisp code Python , then Ruby, then JavaScript I generally use PHP for projects but if I have to do a little script or anything like that i use Python. Javascript Elixir! Lerned it at my first programming job and I now use it all the time. For know I've use Python and Lua. I love Lua because it's easy to understand and use in lot of engine! First time i code with PHP. I don't have any preferred language, each language is useful for some situations. I'm a Pythonista. From everything you said, you should check out Golang. It follows basically everything you said why you love Python. 💪C++ 😘 Java 😉 JS 🤭python👌 Applescript and emacs lisp... I seem to be an outlier here :) Why should there be one? I would do math in R, some meta in Ruby, billing system in Java, FE in TypeScript or Elm, low-level in Rust, CRUD and concurrency in Elixir, etc.
https://dev.to/aspittel/why-is-your-preferred-programming-language-your-go-to-345c/comments
CC-MAIN-2019-30
refinedweb
8,371
70.84
When you are learning how to program in Java, one of the best ways to see proper code in action is to follow along with examples. These examples can be plugged into your own text editor or Java IDE and you can experiment with them as you learn about various Java components. If you are brand new to Java programming, make sure you check out Java Fundamentals before attempting the examples in this tutorial. Your very first program was probably a console-based application. In other words, you used the Windows Commands Prompt to execute your program in text format. These simple applications have no Graphical User Interface (GUI) and although you can do some cool things in the console, it won’t be long before you want to start creating graphical applications like the professionals. Fortunately, Java offers the Swing API. This class library allows you to create powerful GUI components very easily and does not require the use of a separate IDE (although it does make it much easier). In this tutorial, you will learn how to make the famous “Hello World” program using a GUI instead of the boring console view. What is Swing? Swing is part of the Java Foundation Classes (JFC for short). This is a group of features specifically designed for building GUIs and adding rich graphic functionality and interactivity to your Java applications. The main components of the JFC include: Swing GUI Components – This includes everything from buttons to split panes to tables. Many of these components are capable of sorting, printing, and drag-and-drop functionality as well as a few other supported features. Pluggable Look and Feel Support – Swing applications can easily take on the appearance of other popular programs. For instance, by default you can set your swing applications to have the look and feel of Java or Windows. The Java platform also supports GTK+ look and feel; making hundreds of other “look and feels” available to your Swing-based programs. Accessibility API – These components enable assistive technologies such as screen readers and braille displays. Java 2-D API – This class library enables you to incorporate high-quality 2-D graphics, text, and images into your applications and applets. Many popular Java games are written using the Java 2-D API exclusively. You can learn more about creating simple Java games in Learn Java from Scratch. There are a few other components of JFC as well, but the Swing GUI Components are what you need to know about for this tutorial. In total, the Swing API has 18 public packages. For most applications, however, you only need to import javax.swing.*. You can learn more about the available Swing packages in the Java Swing Programming course. Creating the Hello World GUI This tutorial assumes that you already have a functional version of the Java SDK on your computer. If you do not, you can learn more about setting up a Java environment on your PC in the Introduction to Java Training Course. Below you will find the code to create your very own Hello World GUI based Java Swing application. Open up your favorite text editor and add this code exactly as you see it to create this program: import javax.swing.JFrame; import javax.swing.JLabel; //import statements); } } Make sure to save the program in the text editor as HelloWorldFrame.java. Now that you have created the program in a text editor, you have to compile and run the program using the command prompt as follows: javac HelloWorldFrame.java and then to run it… java HelloWorldFrame.java A small window should now pop up displaying “Hello World.” Congratulations! You have just successfully completed your first Java application using a GUI. As you can see, the JFrame is the key component used in this example. The JFrame is responsible for creating the small application window that appears on your screen. A JFrame is a window with a title, border, and an optional menu bar as well as user-specified components within the JFrame. It can be moved and resized by the user or you can set default sizes within the application. For instance, by default a JFrame is displayed in the upper left corner of the screen. You can adjust where the JFrame loads by default by using the setLocation(x,y) method within the JFrame class. This method places the upper left corner of the JFrame at the location you specify. The other important element in this example is the JLabel. JLabels are used in the Swing GUI when a user interface component that displays a message or an image is needed. Notice that you can also include an image within a JLabel; a feature you will rely on heavily as you become more acquainted with using Swing. If you want to add an image to this example, simply modify the JLabel and reference the image within the JLabel constructor like this: import javax.swing.JFrame; import javax.swing.JLabel; //import statements public class HelloWorldFrame extends JFrame { public static void main(String args[]) { new HelloWorldFrame(); ImageIcon icon = createImageIcon (“images/example.jpg”); } HelloWorldFrame() { JLabel jlbHelloWorld = new JLabel(“Hello World”, icon); add(jlbHelloWorld); this.setSize(100, 100); // pack(); setVisible(true); } } As you can see, the bold areas in the example above are code that has been added to create the image in the JLabel. The createImageIcon method is where you put the source of your image and you simply need to refer to the image object (called “icon” in this example) to include the image in your application. Obviously, this is only scratching the surface of the power of Swing. Many robust GUIs have been built using Swing alone and it provides many interesting features that can be appreciated by beginners and experienced programmers alike. It’s always helpful to learn how to code Java before relying heavily on an IDE such as Eclipse. That said, most Swing components are drag-and-drop capable in IDEs; saving you a lot of development time. If you want to learn more about using Eclipse to create your programs, check out Java Programming Using Eclipse. If you are ready to take your Java programming to the next level, Swing is the way to do it. Your applications will be much more fun and interesting when you are creating GUIs that you can actually interact with.
https://blog.udemy.com/java-programming-examples/
CC-MAIN-2017-34
refinedweb
1,061
61.97
Up to [DragonFly] / src / sys / kern Request diff between arbitrary revisions Keyword substitution: kv Default branch: MAIN Welcome devctl(4) and devd(8). Obtained-from: FreeBSD The devinfo(3) library provides userspace access to the internal device hierarchy. The devinfo(8) utility can be used to view that information. Ported by Sascha Wildner. Obtained-from: FreeBSD Update acpi_battery(4) related code to the latest one from FreeBSD HEAD. Obtained-from: FreeBSD acpi_cpu(4) update. It's now possible to use higher (lower power usage) C states than C1 in modern (multicore) CPUs. Obtained-from: FreeBSD Handle (unit == -1) case in the device_find_child() function as already used by agp(4) and coretemp(4) code. Fixes unload/load cycle of the coretemp(4), agp(4) has more problems. Obtained-from: FreeBSD Don't let DS_BUSY buses block attachment of other devices. DS_BUSY implies that the device has been in state DS_ATTACHED before, so we need include DS_BUSY buses in the search as well. Joint-work-with: matthias@ Add bus_alloc_resources() and bus_release_resources() functions to allow to simplify the code and to make it easier to port drivers (initially the agp(4)) from FreeBSD. Obtained-from: FreeBSD Reorder initialization sequence of devclasses. This fixes a panic triggered by the use of the devclass of a driver during the attachment of this driver. Reported-by: Wesley Hearn <wesley.hearn@gmail.com> Reviewed-by: dillon printf -> kprintf in sys/ and add some defines where necessary (files which are used in userland, too). Rename sprintf -> ksprintf Rename snprintf -> knsprintf Make allowances for source files that are compiled for both userland and the kernel. Rename kvprintf -> kvcprintf (call-back version) Rename vprintf -> kvprintf Rename vsprintf -> kvsprintf Rename vsnprintf -> kvsnprintf 2 Rename malloc->kmalloc, free->kfree, and realloc->krealloc. Pass 1. Cleanup some of the newbus infrastructure. * Change the device_identify API to return success/failure, like most of the other newbus methods. This may be used for conflict resolution in the future. * Clearly document the device_identify method and formalize its use by adding discrimination between initial bus probes and bus rescans. Do not re-execute static identification code that has already been run every time a new driver is added at run-time. * Clearly document the do-ISA-last hack. * Provide generic routines for the most common device_identify operations (psueo or synthesized devices that operate under other devices, such as lpt operating under ppbus, which are not 'scanned' by the parent bus). * Remove the hacks that install and initialize the nexus device. Instead, use the existing DRIVER_MODULE infrastructure to install nexus under root_bus. * Document the boot-time initialization path so it doesn't take the next guy 8 hours to figure out what code is actually being run when. Remove redundant verbosity. The description of the parent just costs space in the dmesg buffer, it doesn't add any value since it was printed before. NEWBUS infrastructure for interrupt enablement and disablement. This allows a device to indicate to the interrupt dispatch architecture that it has enabled or disabled the device interrupt at the source. The dispatch will then decline to call the handler. This is necessary because it is possible for the interrupt handler to be called from the interrupt thread AFTER the device has disabled the hard interrupt. There are two cases: FIRST CASE: * hard interrupt occurs * interrupt thread is scheduled but cannot preempt * device disables interrupt * interrupt thread then runs handler while device believes interrupt to be disabled. SECOND CASE: * multiple devices share the same interrupt (#1 and #2) * device #1 interrupts and schedules the thread * the handler for ALL devices is run, even if device #2 disabled its hard interrupt. Clean up and simplify the interrupt vector code. Always install a MUX function. The MUX function will check the handler enablement/disablement state. GCC supports two pseudo variables to get the function name, __FUNCTION__ and __func__. The latter is C99, prefer that.. Generate more useful -v information on the console during device attach. The complete device chain is output prior to each attach. The normal device_print_child() is now moved from before the attach to after the attach so the correct resource information gets reported, especially the correct IRQ. Add PDEBUG call for device_shutdown. gcc-3.4 cleanups. Add missing break statements, deal with goto labels, and adjust the use of __FUNCTION__ (string concat is no longer supported). Plug a memory leak when the kernel initialiazes config_devtab resources in resource_new_name(). PR: kern/33344 (FreeBSD GNATS) Patch-by: David Xu <davidxu at freebsd.org> Remove newline from panic(9) message, it is redundant. KObj extension stage IIIb/III Merge inheritance support from FreeBSD: * Add a simpler form of 'inheritance' for devclasses. Each devclass can have a parent devclass. Searches for drivers continue up the chain of devclasses until either a matching driver is found or a devclass is reached which has no parent. This can allow, for instance, pci drivers to match cardbus devices (assuming that cardbus declares pci as its parent devclass).. Add convient functions for the bus interface: child_present, child_pnpinfo_str, child_location_str. From FreeBSD. Make subr_bus.c more consistent with regard to style(9) and itself. - adjust the 4 space indentation in the oldest parts - use return(value) form - move returns into the default case of certain switch statements - make some pointer checks explicit against NULL - reorder device_probe_and_attach to simplify the if's - remove unnecessary return; at the end of functions. Add device_is_attached to allow a driver to check wether a given device was or was not succesfully attached to the device tree. Adjust infrastructure for NEWCARD Sync TAILQ_FOREACH work from 5.x. The closer we can get this file to 5.x the better. Use M_ZERO instead of manually bzero()ing memory allocated with malloc(). Correct several bugs. If we fail to add a device be sure to delete its kobj. Remove a double kobj_init() call in make_device(). Replace a manual free() with a kobj_delete() call, and delete a kobj before reinitializing it. Fix misplacement of code. Due to additional DF code everything got shifted one position, including the main condition, which was wrong.) Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections. import from FreeBSD RELENG_4 1.54.2.9
http://www.dragonflybsd.org/cvsweb/src/sys/kern/subr_bus.c?f=h
CC-MAIN-2014-42
refinedweb
1,044
57.37
Opened 2 years ago Closed 21 months ago Last modified 21 months ago #17845 closed New feature (invalid) Provide a way for model formsets to know how many records will be created Description This is a feature request - please provide an easy way for model formsets to know how many records will be created. As an example, I'm trying to write a clean() method for an inlineformset that checks that the database will always contain at least one model row when saved. The information to do this doesn't seem to be available. Attachments (0) Change History (2) comment:1 Changed 21 months ago by aaugustin - Needs documentation unset - Needs tests unset - Patch needs improvement unset - Resolution set to invalid - Status changed from new to closed comment:2 Changed 21 months ago by apollo13 One (untested) possibility is: def clean(self): super(BlaForm, self).clean() c = 0 for f in self.extra_forms: if f.has_changed() and not self._should_delete_form(f): c += 1 After that c should tell you how many objects will get created, see save_new_objects and save_existing_objects for more info. Unfortunately, this isn't something you can know in clean(). The sequence is as follows: That's why it isn't possible to know how many records will be created until step 2. However, you could check len(form.new_objects) after saving.
https://code.djangoproject.com/ticket/17845
CC-MAIN-2014-10
refinedweb
224
62.38